| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
221882095
|
pes2o/s2orc
|
v3-fos-license
|
Diagnosis of SARS-CoV-2 Infection with LamPORE, a High-Throughput Platform Combining Loop-Mediated Isothermal Amplification and Nanopore Sequencing
ABSTRACT LamPORE is a novel diagnostic platform for the detection of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) RNA combining loop-mediated isothermal amplification with nanopore sequencing, which could potentially be used to analyze thousands of samples per day on a single instrument. We evaluated the performance of LamPORE against reverse transcriptase PCR (RT-PCR) using RNA extracted from spiked respiratory samples and stored nose and throat swabs collected at two UK hospitals. The limit of detection of LamPORE was 10 genome copies/μl of extracted RNA, which is above the limit achievable by RT-PCR, but was not associated with a significant reduction of sensitivity in clinical samples. Positive clinical specimens came mostly from patients with acute symptomatic infection, and among them, LamPORE had a diagnostic sensitivity of 99.1% (226/228; 95% confidence interval [CI], 96.9% to 99.9%). Among negative clinical specimens, including 153 with other respiratory pathogens detected, LamPORE had a diagnostic specificity of 99.6% (278/279; 98.0% to 100.0%). Overall, 1.4% (7/514; 0.5% to 2.9%) of samples produced an indeterminate result on first testing, and repeat LamPORE testing on the same RNA extract had a reproducibility of 96.8% (478/494; 94.8% to 98.1%). LamPORE has a similar performance as RT-PCR for the diagnosis of SARS-CoV-2 infection in symptomatic patients and offers a promising approach to high-throughput testing.
public health emergency of international concern by the WHO. Further expansion of testing to include screening of asymptomatic individuals, which may be needed to prevent SARS-CoV-2 circulation, would require a significant further increase in testing capacity (1,2).
In the United Kingdom, clinical laboratories have struggled to expand conventional RT-PCR workflows to meet the demand for SARS-CoV-2 testing, and many have explored alternative methods that would be more scalable or allow near-patient use (3,4). At the Oxford University Hospitals NHS Foundation Trust (OUH) and Sheffield Teaching Hospitals NHS Foundation Trust (STH), we evaluated LamPORE, a novel diagnostic platform for SARS-CoV-2 developed by Oxford Nanopore Technologies (ONT) that combines loop-mediated isothermal amplification (LAMP) with nanopore sequencing (5). During sample preparation, a unique combination of DNA barcodes is incorporated into the LAMP products from each specimen so that they can be pooled into a single sequencing run. In the current protocol, up to 92 samples can be analyzed on 1 flow cell, potentially allowing thousands of samples to be analyzed per day on a single instrument running multiple flow cells in parallel. The workflow involves a 40-minute amplification, followed by library preparation, and a 60-minute sequencing run, generating results in a comparable time to RT-PCR when starting with extracted RNA.
As well as molecular barcoding, using sequencing to detect the outcome of the LAMP reaction offers other advantages compared with simpler LAMP assays that detect the presence of DNA synthesis by measurement of pH, turbidity, or fluorescent dyes. Sequenced reads from a specific target will contain sequences not present in the primers, avoiding false positives caused by nonspecific amplification (although the amplicons are not large enough to usefully genotype the virus) (6). Conversely, reads confidently assigned to SARS-CoV-2 targets may indicate a true positive even if present at relatively low levels, potentially improving the low sensitivity seen in several LAMP assays compared with RT-PCR (7,8). In addition, the detection of LAMP products by sequencing allows the possibility of multiplexing the assay with other pathogens. LamPORE uses ONT flow cells compatible with several sequencing instruments, including the portable MinION device and high-throughput GridION and PromethION platforms, and thus, it could potentially be used both for mobile and centralized testing.
In this evaluation, we compare the performance of LamPORE with RT-PCR on extracted RNA from respiratory specimens. Initially, we use spiked samples to determine the analytical limit of detection of the assay. We then use stored clinical samples to determine the assay's diagnostic sensitivity, specificity, and reproducibility.
MATERIALS AND METHODS
The evaluation was conducted across three sites, namely, OUH, STH, and the Public Health England National Infection Service at Porton Down (PHE Porton Down).
LamPORE. LamPORE is a CE-marked diagnostic assay developed by ONT and described in detail in James et al. (5). The assay was performed identically at each site using a GridION instrument with operators unaware of reference PCR results. It takes a 20-μl RNA input into a single multiplex reaction targeting the following three regions of the SARS-CoV-2 genome using previously published primers (9): the ORF1a, envelope, and nucleocapsid genes, plus human β-actin mRNA as a control of sampling adequacy and assay performance. LamPORE sample preparation uses a 96-well plate format, with each sample having 1 of 8 LAMP forward inner primer (FIP) barcodes and 1 of 12 transposase (rapid) barcodes added before pooling. In these experiments, a single LAMP barcode (FIP7) was not used, as it had previously been associated with lower β-actin read counts and was awaiting replacement (unpublished data). As a result, plates contained 80 samples, plus 2 no-template controls and 2 positive controls consisting of synthetic SARS-CoV-2 RNA (Twist Bioscience). To assess for potential sample-to-sample contamination, positive and negative clinical samples were intermixed, with positions altered between replicates.
We used the LamPORE protocol dated 1 July 2020 (version 1, revision 4). Briefly, this protocol consists of adding sample RNA to LAMP master mix and primers and then incubating the mixture at 65°C to 80°C in a thermocycler for 40 minutes, during which time amplification occurs and the LAMP primer barcodes are incorporated into concatemers containing the target sequence. Following these steps, samples from the same column are pooled and a second set of barcodes is incorporated using a rapid transposase-based method. All samples are then pooled into a single sequencing library, with no need for normalization, as DNA concentrations are similar in all positive samples following LAMP, regardless of the initial viral load. The pooled library has a magnetic bead cleanup, is then added to a MinION flow cell, and is sequenced for 60 minutes, after which a report is generated automatically by the instrument within seconds for each barcode set. Unlike RT-PCR, LamPORE does not provide the equivalent of a cycle threshold (C_T) value reflecting the initial viral load, as measurement occurs only after amplification is complete. The number of reads assigned to each target is used to generate a report as follows: (i) invalid, <50 classified reads in total detected from SARS-CoV-2 and β-actin targets; (ii) positive, ≥50 SARS-CoV-2 reads detected (adding read counts across all three SARS-CoV-2 targets); (iii) inconclusive, not invalid and ≥20 and <50 SARS-CoV-2 reads detected; and (iv) negative, not invalid and <20 SARS-CoV-2 reads detected.
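The reporting rules above amount to a simple threshold classification on read counts. Below is a minimal Python sketch of that logic; the function and variable names are hypothetical, and the per-target read counts would come from the instrument's per-barcode report.

```python
def classify_lampore(sars_cov_2_reads: int, actin_reads: int) -> str:
    """Classify a LamPORE result from per-barcode read counts.

    sars_cov_2_reads: reads summed across the ORF1a, E, and N targets.
    actin_reads: reads assigned to the human beta-actin control.
    Thresholds follow the reporting rules described in the text above.
    """
    if sars_cov_2_reads + actin_reads < 50:
        return "invalid"        # too few classified reads overall
    if sars_cov_2_reads >= 50:
        return "positive"
    if sars_cov_2_reads >= 20:
        return "inconclusive"   # 20-49 SARS-CoV-2 reads
    return "negative"           # <20 SARS-CoV-2 reads, adequate control signal


# Example: 3 viral reads but a strong beta-actin signal -> negative
print(classify_lampore(sars_cov_2_reads=3, actin_reads=4200))
```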
Spiked samples-PHE Porton Down. Spiked samples were prepared and analyzed at PHE Porton Down to establish the limit of detection of LamPORE. Aliquots of pooled volunteer saliva, confirmed SARS-CoV-2 negative by RT-PCR, were used for spiking experiments. They were spiked with cultured SARS-CoV-2 (Victoria/01/2020, passaged twice in Vero/hSLAM cells) at 1,000 SARS-CoV-2 genome copies/ml of sample and serially diluted with the remaining material to create a dilution series of positive samples.
RNA was extracted from 360 μl of the spiked sample using the QIAamp viral RNA minikit (Qiagen), with RNA eluted in 36 μl. Reference RT-PCR was conducted with the CDC N1 assay with a 5-μl RNA input (10). Quantification was determined by comparison to a standard curve of a plasmid 2019-nCoV_N positive control (Integrated DNA Technologies). Further details are in the supplemental material.
Clinical specimens-OUH and STH. Testing of stored clinical samples was performed at OUH and STH. All samples were nose and/or throat swabs collected into viral transport media during routine clinical care and stored at -80°C.
(i) Sample selection. (a) SARS-CoV-2-positive samples. At OUH, sequentially available positive specimens collected from March to April 2020 were chosen without reference to the RT-PCR cycle threshold (C_T) value. During this time, a uniplex RdRp RT-PCR assay was in use, based on the assay described by Corman et al. (11). At STH, a stratified random sample of specimens collected from April to May 2020 was selected based on their initial SARS-CoV-2 E gene C_T value, using an in-house assay based on the Corman et al. protocol (11,12), with 50% chosen to have C_T values of <30 and 50% to have ≥30. At both sites, testing was largely restricted to hospitalized patients and symptomatic staff during the collection period.
(b) SARS-CoV-2-negative samples. At OUH, negative samples were selected from stored prepandemic respiratory samples. They had initially been tested with either GeneXpert Flu/RSV (Cepheid) or the BioFire FilmArray respiratory panel 2.0 (bioMérieux) and were purposefully chosen to include samples with a range of other respiratory pathogens. Over 90% of samples were collected between October and December 2019, but those samples containing non-SARS-CoV-2 seasonal coronaviruses were used up until a collection date of 10 March 2020 to increase the number available. At STH, negative samples were selected from among those submitted for SARS-CoV-2 testing.
(ii) RNA extraction. For samples originating from OUH, RNA extraction was conducted with the QIAsymphony SP instrument and the DSP virus/pathogen kit (Qiagen) (13). A total of 200 μl of viral transport medium was extracted, and RNA was eluted in 60 μl. For samples originating from STH, RNA extraction was performed using the MagNA Pure 96 instrument with the MagNA Pure 96 DNA and viral NA (nucleic acid) small volume kit (Roche). A total of 200 μl of viral transport medium was extracted, and RNA was eluted in 100 μl. Aliquots of RNA were stored at -80°C prior to analysis.
(iii) Reference RT-PCR. Reference RT-PCR was undertaken contemporaneously with LamPORE on aliquots of the same RNA extract, with operators unaware of LamPORE results. For samples originating from OUH, the reference RT-PCR was the RealStar SARS-CoV-2 RT-PCR assay (Altona Diagnostics) using a 10-μl RNA input. For samples originating from STH, an in-house RT-PCR assay based on Corman et al. methods was used with a 6-μl RNA input (11,12). Further details are in the supplemental material.
(iv) Replicates. To assess the reproducibility of the assay, LamPORE replicates were performed on aliquots of the same RNA extract. To ensure comparable RT-PCR and LamPORE results between OUH and STH, a subset of samples was exchanged between sites, with LamPORE and reference RT-PCR repeated.
Statistical analysis. R version 3.5.0 was used for analysis, with exact binomial confidence intervals calculated for proportions. Initial LamPORE replicates were used to derive estimates of sensitivity and specificity, with second replicates used to estimate LamPORE reproducibility. Results are reported in line with the Standards for Reporting Diagnostic Accuracy studies (a STARD checklist is in the supplemental material).
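The proportions quoted in the Results use exact binomial confidence intervals. The analysis was done in R, but the same Clopper-Pearson interval can be computed directly from the beta distribution, as in this small Python sketch (the function name is ours, for illustration only).

```python
from scipy.stats import beta

def exact_binomial_ci(k: int, n: int, conf: float = 0.95):
    """Clopper-Pearson (exact) confidence interval for a proportion k/n."""
    alpha = 1.0 - conf
    lower = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    upper = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lower, upper

# Sensitivity reported below: 226/228 -> approximately (0.969, 0.999)
print(exact_binomial_ci(226, 228))
```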
Ethics. The process for collection of the donated saliva was approved by the PHE Research Ethics and Governance Group. The protocol for the use of stored clinical samples at OUH and STH was reviewed by the Institutional Review Board of OUH and the University of Oxford, and it was determined that the activity constituted service evaluation and service development. As such, it did not need research ethics review.
RESULTS
Limit of detection. Using samples spiked with cultured SARS-CoV-2, LamPORE had a limit of detection of 1,000 SARS-CoV-2 genome copies/ml of sample and detected 15/15 samples (Table 1). With the RNA extraction protocol used, and assuming 100% extraction efficiency, this limit of detection would correspond to a concentration of 10 genome copies/μl of extracted RNA (or 200 copies per 20-μl reaction). Although LamPORE did not consistently detect spiked samples at concentrations below this value, it was positive in 8/18 (44%) samples at a concentration of 100 copies/ml of sample, corresponding to 1 genome copy/μl of extracted RNA (or 20 copies per 20-μl reaction). By comparison, RT-PCR using the CDC N1 assay was also positive in 15/15 samples at 1,000 SARS-CoV-2 genome copies/ml of sample and in 14/18 (78%) of samples at 100 copies/ml of sample, although the difference with LamPORE was not statistically significant (P = 0.09 by Fisher's exact test).
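The conversion from copies per ml of sample to copies per μl of extracted RNA follows directly from the extraction and reaction volumes quoted above; a short sketch of the arithmetic, assuming 100% extraction efficiency:

```python
# Limit-of-detection arithmetic for the spiking experiments (values from the text).
copies_per_ml_sample = 1_000      # detected in 15/15 spiked samples
input_volume_ul = 360             # sample volume extracted (microlitres)
elution_volume_ul = 36            # elution volume -> 10-fold concentration
reaction_input_ul = 20            # RNA added to each LAMP reaction

concentration_factor = input_volume_ul / elution_volume_ul      # 10x
copies_per_ul_rna = copies_per_ml_sample / 1_000 * concentration_factor
copies_per_reaction = copies_per_ul_rna * reaction_input_ul

print(copies_per_ul_rna)     # 10 genome copies per microlitre of extracted RNA
print(copies_per_reaction)   # 200 copies per 20-microlitre reaction
```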
Diagnostic performance. Diagnostic performance of LamPORE was assessed using 514 stored nose and throat swabs, 400 from OUH and 114 from STH (details in Table S1 in the supplemental material). Requesting location was available for 135/150 (90%) SARS-CoV-2-positive samples from OUH but not for other samples. Among these samples, 41 (30%) were from outpatient locations (including occupational health), 24 (18%) were from community hospitals, 45 (33%) were from emergency departments or acute admission wards, and 25 (19%) were from other inpatient locations. Sixty cross-site replicates demonstrated good correlation between RT-PCR C_T values for E gene targets at OUH and STH despite different assays being used, so this was used as the reference C_T (see Fig. S1 in the supplemental material). Samples were analyzed on a total of 13 LamPORE runs performed on separate days.
Among 229 RT-PCR-positive samples tested by LamPORE, 226 were reported positive and 2 were reported negative, giving an overall diagnostic sensitivity of 99.1% (226/228; 95% CI, 96.9% to 99.9%) (Table 2). All valid samples at C_T values of 34.9 or lower were positive by LamPORE (Table 3). Considering performance at lower viral loads, 7/9 samples with a C_T value of ≥35 were positive and 22/22 of those with C_T values between 30 and 34.9 were positive. Both false-negative samples by LamPORE had C_T values of ≥38, and 1 of them was positive by LamPORE on repeat testing (see Table S2 in the supplemental material). The one RT-PCR-positive sample that was invalid on initial LamPORE testing was correctly positive when repeated.
Of 285 RT-PCR-negative samples, 278 were negative and 1 was positive by LamPORE, giving an overall diagnostic specificity of 99.6% (278/279; 98.0% to 100.0%) (Table 2). The false positive was a prepandemic respiratory sample that was also positive for adenovirus and which had 2,419 SARS-CoV-2 reads detected. However, this sample was negative on repeat LamPORE testing (Fig. 1). Six RT-PCR-negative samples gave indeterminate results (three invalid, three inconclusive), of which four were correctly negative on repeat testing, one remained invalid, and one was not retested. Overall, among both RT-PCR-positive and -negative samples, 1.4% (7/514; 0.5% to 2.9%) produced an indeterminate result on first testing.
Another respiratory pathogen was detected by multiplex RT-PCR in 153 negative samples, including 43 with rhinovirus, 38 with respiratory syncytial virus (RSV), 33 with influenza, and 24 with seasonal coronaviruses (9 HKU1, 7 NL63, 7 OC43, and 1 229E). Overall, there was no evidence that the presence of any other respiratory pathogen was associated with false-positive results or greater numbers of reads assigned to SARS-CoV-2 targets (Fig. 1).
As well as the categorical result produced by the LamPORE reporting algorithm, RT-PCR results were compared with the number of reads assigned by LamPORE to SARS-CoV-2 targets (Fig. 2). This comparison showed that the prespecified cutoff of ≥50 reads for a positive result discriminated well between RT-PCR-positive and -negative samples. Repeat LamPORE testing on the same RNA extract had a reproducibility of 96.8% (478/494; 94.8% to 98.1%) (Tables S3 and S4 in the supplemental material). In four samples (0.8%) with discrepant LamPORE results, the same sample switched between negative and positive. In the other 12 discrepant samples, the LamPORE replicates included one indeterminate result. All 90 cross-site LamPORE replicates performed between Oxford and Sheffield were concordant (60 RT-PCR/LamPORE positive and 30 RT-PCR/LamPORE negative). Combining results from replicates to assess the rate of possible sample-to-sample contamination, 1/576 (0.2%; 0.0% to 1.0%) negative samples or no-template controls were positive by LamPORE.
DISCUSSION
LamPORE has been identified by the UK government as a possible high-throughput platform that could alleviate shortages in SARS-CoV-2 testing capacity (14). In a manuscript released by its developers, LamPORE correctly detected SARS-CoV-2 in 79 of 80 clinical specimens (98.8%; 95% CI, 93.2% to 100.0%), although no SARS-CoV-2-negative specimens were available for testing. Instead, the assay was tested on 85 nonrespiratory human RNA extracts, and 4 were incorrectly reported as positive (specificity, 95.2%; 88.3% to 98.7%), which the authors attributed to probable sample contamination (5). In this evaluation, we found that LamPORE had a high diagnostic sensitivity (99.1%) and specificity (99.6%) in our clinical sample set. Combined with a high reproducibility (96.8%) both within and across sites, these results support its practical use for high-throughput testing in a low-prevalence population. Although the assay we evaluated targeted SARS-CoV-2 alone, the LamPORE platform could be adapted to detect other pathogens and is amenable to multiplexing in order to target multiple pathogens in the same assay.
The limit of detection of LamPORE, at 10 genome copies/μl of extracted RNA, was somewhat higher than the 2 copies/μl achievable in previous evaluations of high-performance RT-PCR (15), but this value did not correspond to a significant loss of diagnostic sensitivity in the clinical samples. In our spiking experiments, RNA was extracted from 360 μl of transport medium and eluted in 36 μl, a 10-fold concentration. This degree of concentration is higher than that of most commonly used extraction protocols; for example, those used at OUH and STH produced 3-fold and 2-fold concentrations, respectively. Therefore, the limit of detection, measured in genome copies/ml of sample, using LamPORE with a high-concentration extraction would be similar to that of PCR as commonly used with a low-concentration extraction. Automated, commercially available extraction methods can produce a 20-fold RNA concentration, which could further improve the limit of detection, although higher degrees of concentration could lead to assay inhibition, so this would need further evaluation.
Although no clinical metadata were available about the individuals whose samples were used in this evaluation, they would have mainly been derived from patients with acute symptomatic infection, often requiring admission to hospital, as testing was mainly limited to this group during the first wave of infection. The distribution of C_T values may be higher in a population with more mild or asymptomatic infections and would be markedly higher among those who remain RT-PCR positive weeks after recovering from acute infection (16). Our data suggest that LamPORE is most likely to miss weakly positive samples with C_T values above 35 and thus could have had lower diagnostic sensitivity if tested in such groups. However, this may not be a significant practical disadvantage, as although weak positives have some value for contact tracing, they are likely to come from individuals with low infectious potential (17,18).
Our evaluation has several limitations. It was conducted after the first wave of COVID-19 in the United Kingdom, when there were few incident cases, so we were unable to prospectively collect samples and instead relied on frozen transport media, which could differ from fresh material. Sample collection occurred at a time when there was little genetic variation in SARS-CoV-2, and we did not attempt to assess the possible effect of future sequence variation causing failure in any of the three gene targets. Positives were defined by a positive RT-PCR at the time of initial sample collection and by repeat positive RT-PCR simultaneously with LamPORE, but although RT-PCR is used as a reference test for SARS-CoV-2, there are many reports of its suboptimal sensitivity in clinical infection (19).
This early evaluation of LamPORE compared its performance against RT-PCR using extracted RNA, as this is the standard material used for the detection of SARS-CoV-2. However, the requirement for viral inactivation and RNA extraction and the additional need for LamPORE library preparation could lead to bottlenecks that would mitigate the potential benefit of LamPORE for high-throughput or mobile testing. LAMP reactions are reported to be more robust than RT-PCR to inhibitors present in clinical samples and so may have superior performance with extraction-free protocols (20,21). The use of such extraction-free protocols could greatly streamline the workflow, but further evaluation is required. We also did not evaluate how the throughput and turnaround time of LamPORE would compare with RT-PCR during routine use in a clinical laboratory or centralized testing center. The benchtop GridION instrument can accommodate five flow cells simultaneously and so could analyze over 3,000 samples in a 12-hour day at two-thirds occupancy, and the PromethION instrument has a theoretical capacity more than 10-fold higher, but using LamPORE to test tens or hundreds of thousands of samples per day would be dependent on a streamlined workflow, including automated sample handling, integration with laboratory information management systems, and careful safeguards to minimize the risk of contamination.
In conclusion, we show that LamPORE on extracted RNA offers a promising method of high-throughput SARS-CoV-2 testing. However, further evaluation in mild or asymptomatic infection is needed, and large-scale use requires the development of streamlined workflows, possibly by including simpler sample preparation to avoid the need for conventional RNA extraction.
SUPPLEMENTAL MATERIAL
Supplemental material is available online only. SUPPLEMENTAL FILE 1, PDF file, 0.9 MB. SUPPLEMENTAL FILE 2, CSV file, 0.05 MB.
ACKNOWLEDGMENTS
We are grateful to all the clinical microbiology and virology staff at OUH and STH who helped to process the specimens used in this evaluation and to Kevin Bewley, PHE Porton Down, for providing the cultured virus.
Materials for the evaluation were supplied by Oxford Nanopore Technologies, but all experiments and analyses were conducted independently by the investigators.
D.W.E. declares lecture fees from Gilead, outside the submitted work. All other authors declare no competing interests.
|
2020-09-25T13:05:05.800Z
|
2020-09-25T00:00:00.000
|
{
"year": 2021,
"sha1": "75255088a52f30e5f06d2380f8b9272073f21780",
"oa_license": "CCBY",
"oa_url": "https://ora.ox.ac.uk/objects/uuid:391050a7-6737-409e-a6ca-d44dc0e56da8/download_file?file_format=pdf&safe_filename=JCM.03271-20.pdf&type_of_work=Journal+article",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "2a7d011aaa2ee6fcce4f6ac3f1f6bd7cd065b76b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
53348602
|
pes2o/s2orc
|
v3-fos-license
|
RUMOURS AND REALITIES OF MARRIAGE PRACTICES IN CONTEMPORARY SAMIN SOCIETY
Since the mid-19th century the Samin people have made a contribution to resisting Dutch colonial rule in rural Java through their non-violent movement and passive resistance (lijdelijk verzet). History also notes that they have a unique culture and system of values which reflect their own local wisdom. However, many negative rumours have become widespread regarding this community. This article explores the marriage practices in Samin society and investigates how this society gives meaning to these marriage practices. It also examines whether the practices of 'virginity tests' and 'stray marriages' exist in contemporary Samin society.
people can conclude that they are living together out of wedlock. Furthermore, these marriages usually are not reported officially to the government, so that most of them are illegal (not certified). As a consequence, the resulting children do not have birth certificates. The rumour about the 'virginity test' has become widespread, as evidenced by jokes like: 'If you want to "get married under a tree" (illegally, without any consequences) and want to get "free" sex, just go to the Samin; you can leave the woman whenever you want, even just one day after having sex.' A number of studies have been conducted on Samin society from different perspectives. For example, there are those that examine the Samin movement in terms of the causes and origins of the movement, the nature of their religion, the patron-client relationships around their movement, the uniqueness of their language, and how they survived the Dutch colonial period by creating a special architectural design in their dwelling system. However, little information regarding marriage practices in Samin society is available. The study of marriage practices in this community is therefore crucial because it concerns the status and treatment of women.
Marriage can be defined as 'a life-long union of a man and a woman for the purpose of establishing a family'. The goals of marriage are to provide opportunities for sexual intimacy, companionship, family continuity, establishment of parenthood, legitimate reproduction, emotional fulfilment, and widening of inter-personal relations. Through marriage, humans can build a family and strengthen their kinship. In his work, Mullah discusses marriage practices in Samin society. He classifies marriage practices in Samin society as endogamous.
Endogamy refers to a system where group members find spouses from within the group. He argues that endogamous marriage has been ... Stray marriage, according to Shiraishi, is a ritual in which a man has to ask a woman, not his intended wife, to marry him without any coercion or tricks. After sleeping together, only once, and after leaving the woman, he can return to his wife. However, even though he described the 'stray marriage' ...
Another study related to the marriage practices in Samin society is that of Murnfangati. She mentions that their marriages are usually performed by parents and are not reported officially to the government, so that most of the resulting children do not have birth certificates. However, her study is too superficial and lacks focus, endeavouring as it does to cover too many aspects of Samin society.
To understand the marriage system in the Samin community we need to understand their belief system. In relation to this, ... This means that they do not discriminate against people on the basis of physical appearance or status.
Marriage in the Samin community is performed without government involvement. They refuse kawin coro negoro (marriage by government rules). In the Samin Klopoduwur community today, which can be considered more moderate than other Samin communities such as Bombong, Balong and Tanduran, after their marriage a newly married couple report their marriage to the government to get a marriage certificate. In the past, Samin people in Klopoduwur did not report their marriages. As a consequence, their offspring did not have birth certificates or identity cards. They argued that if they registered their marriage, the government would record them as followers of one of the six state religions. This would mean they would be betraying Agama Adam as their belief system. Adat in the Samin community divides the marriage process into several stages. These stages have been examined by Rosyid. However, Rosyid's work is descriptive and does not analyse the meaning of marriage practices in Samin society. According to Rosyid, the first stage is nyuwuk (asking about a woman's marital status). In this stage, a man who is ready to marry asks about the status of a woman he wants to marry. Nyuwuk can be done by the man or by his parents. If her status is married, then the man will withdraw; if the woman is still single, the woman's parents will ask whether she consents or not to the proposal. If she agrees with the proposal, the man can come to the woman's house to continue to the next stage, that is ngawulo (serving at the woman's house). However, in nyuwuk the woman has a right to accept or to refuse the marriage proposal. The man has to accept whether she agrees or not. This shows that in Samin marriage tradition, a woman has the freedom to choose her husband without any coercion from others. This differs from the Javanese adat, where most marriages are arranged by the parents of the couple. For some people in Samin communities, nyuwuk is then followed by nyuwito (the groom's family applies for the marriage). In this stage, they will bring gambir, sirih and gedang setangkep as part of the proposal. The gambir and sirih are believed to keep evil spirits away, while gedang symbolises a relationship between the bridegroom and bride that will continue happily until the end of their lives. At this stage, the parents of the groom entrust their son to his future parents-in-law. In many cases the Samin view the nyuwito tradition as merely a problem of selera (taste). The groom's parents usually do not pay a bride price; this is because Samin people believe that this system has more disadvantages than advantages. For example, if the groom is from a poor family ... status as human beings, so that ... is disturbing to the Samin. During ngawulo the groom has to stay in the parents-in-law's house and work for them as a peasant in sawah or tegal (wet or dry rice fields), or as a shepherd of cattle. In this stage, he does not receive any payment, but all of his needs will be provided for by his parents-in-law. In this stage he is regarded as a family member and is allowed to sleep with his fiancée. The length of the ngawulo stage is relative and depends on the situation. For bridegrooms who are already mature, it may not take a long time; it might be a week or two. But for those who are still young, it could reach up to two years. In this context, the marriage will be considered 'lawful' if the bride's parents have approved their relations.
After ngawulo, the next stage is kondo (reporting to the woman's parents) that they have done sikep rabi (sexual intercourse) as proof that they really podo senenge (love each other). Samin people believe that marriage has to be based on love, and this has to be proven by having sexual intercourse during the ngawulo stage.
The last stage in the Samin marriage process is diseksekno (witness ceremony). In this stage, the groom will testify that he has done sikep rabi with the bride. The bride's parents will witness that the bridegroom has passed the process well. It is thus shown that they tumbuh katresnan (really love each other) and are rukun (harmonious). Based on adat, the task of the bride's mother is to guide the ceremony (to make them rukun). The task of the bride's father is to give agreement to their relationship. The period for diseksekno is apik (the faster the better). The last argument against the idea of 'virginity tests' in Samin society is based on their belief that 'Bojo siji kanggo selawase' (one wife forever). This means that when a man has sex with a woman, he is effectively making a pegat mati oath. He and his wife seek to keep their oath and try to niteni (always be introspective and remember the oath). That is why Samin Klopoduwur prefer to be called wong peniten (people who are niteni). After having sexual intercourse, a couple will not divorce until one of them dies. The ideas of 'virginity tests' and polyandry in the Samin community thus seem to be myths.
'STRAY MARRIAGES'
Another rumour about the Samin community is that they encourage a practice of 'stray marriage'. According to Dangir's Testimony, if Samin men commit adultery, they have to purify their body through a 'stray marriage'. In a 'stray marriage' a man should look for a woman (apart from his own wife) to 'marry'. This ritual should be done without coercion or deception. As a result of their beliefs, women can play roles in both the domestic and public spheres, and the Samin believe that women can go to heaven but not to hell, so it is highly unlikely that they would support a practice as degrading to women as this. One of their teachings is: '... patiently; if insulted, ...; do not ask for money or food from anyone; but if anyone asks for food or money from you, give it freely.' In Agama Adam they believe that sin cannot be absolved; if a sin could be absolved then people would sin more. Samin people also believe that someone who sins will suffer either in this world or the next. In this world, for example, they might face bad experiences like disasters, accidents, failed harvests, social isolation or gossiping by others. The consequence of sin after salin sandangan (death) is more horrifying, as they will be reincarnated as animals. Related to this, Saminists say: ... On the basis of my fieldwork, I believe 'virginity tests' and 'stray marriages' are just myths. In contrast, the Samin community through their marriage system show a world full of respect for women, a world which is egalitarian. This flows from Agama Adam beliefs. To maintain this system, social sanctions are created for those who have broken the rules. These rules seek to create a society which values women, a society which is harmonious, tolerant, and egalitarian. In this context, the myths of 'virginity tests' and 'stray marriages' do not seem at all credible.
The Samin Klopoduwur marriage practice system seems to show a high level of respect for women. The rumours about the 'virginity tests' and the 'stray marriages' have made Samin people vulnerable and marginalised. Those who were mostly insulted by the rumours were women. To reveal the reasons behind the rumours, it is necessary to look at and to understand the social and political context surrounding the Samin. Understanding the history of Samin society will help in answering why the rumours exist.
... custom in Samin society, he was interested in the social and political context surrounding the Samin movement, and his data was based only on an old document, 'Dangir's testimony', which comprises the notes that the patih (vice regent) of the Regency of Pati made after he interrogated Dangir, a Samin villager of Genengmulyo who was arrested on November 26, 1928.
(customary law), which requires endogamy. Samin people can marry outside the Samin community only if those people become Samin. This is meant to maintain the Samin community. Samin believe that marriage is sacred as it is a way to achieve magnanimity and to produce good descendants. (Picture 1. Proud to be Samin: a Samin ..., Blora.) Marriage is based on love without coercion by others. They have a belief that 'Kabeh lanang ganteng, kabeh wedok ayu' (all men are handsome, all women beautiful).
) rarely happens. This differs from the Javanese adat, which has many ceremonies and demands high costs. In nyuwuk or nyuwito, there is no asok tukon or mas kawin ...
Samin people do not involve the government. This is because government authority is not seen as superior to parents' authority (government is made up of human beings who are not superior; they are just as human). That is why a marriage is considered 'legal' even though it is witnessed only by parents. By contrast, they report to the Sesepuh Samin (a Samin elder), who attends the ceremony. This differs from the ordinary Javanese custom in which marriages are performed by a naib or penghulu (a local Islamic religious representative). Diseksekno is marked by adang akeh (cooking a large amount of rice). In diseksekno, the groom has to say the Sadat (the Samin profession of faith). The Sadat contains a statement that the groom acknowledges the bride as batur urip (companion for life) and promises to stay together and nukulke wiji sejati (give descendants). Here is the Sadat that is said by the groom: ... 'Hopefully your marriage will produce good descendants.'
Ratu Adil or Heru Cokro (Just King). The literature review has shown that many scholars have been interested in studying the ...
people also call them Samin-saminan or nyamin, which means pretending to be Samin people).
women on centre stage; (3) even though there are studies related to the marriage practices in Samin society, the analysis of the practices in these studies is superficial and lacks focus, not seeking an interpretation of the concepts ... The paper poses the following questions: ... SAMIN KLOPODUWUR. A study of the Geger Samin (Samin movement) in rural Java during the Dutch colonial era cannot be separated from the life of Soerantiko Samin, the founding father of this movement. Soerantiko Samin was born as Raden Kohar. Raden Kohar changed his name to Soerantiko in order to socialise with ordinary people. Many sources mention that Ploso Kedhiren, Blora, was believed to be the hometown of Soerantiko, though most people in Ploso Kedhiren do not believe this to be the case. People in Ploso remember him as a simple and honest man who had a good attitude and was well regarded; because of this, he was well known and popular in his neighbourhood when he resisted the Dutch, creating a new image of him as a hero. At the time, the next generation spread Saminism from around the 1910s, including his sons and followers in Kajen, Pati, and Engkrek (a cantrik, or follower, of Soerantiko), who spread Saminism in Klopoduwur, Blora. Engkrek spread Saminism in the area because he was regarded as a wong sakti (a man who had supernatural and magical power). However, after Surosentiko died, Saminism was split into five groups: Samin Lugu, Samin Sangkak, Samin ..., Samin Gogol and ... Samin Lugu comprises the followers of Soerantiko Samin who stressed ...; others do not really know about the Samin beliefs but nevertheless claim themselves to be Samin people (Singosamin, Minto and Sahmo). With his cantrik, Engkrek babat alas, i.e., he cleared an area for settlement at Karang Pace. This area was given to his cantrik, and Karang Pace became the headquarters for the Samin people in Klopoduwur. Through marriages with the local women, Engkrek and his cantrik became increasingly popular in this village. After discovering that Engkrek was a sakti person, almost all village members became his followers and he became well-known to people in other villages. Many people, mainly coming from Pati, Kudus and Jepara, came to the area to study Saminism. In his efforts to seek followers, Engkrek often gave kasekten such as ... THE STAGES OF MARRIAGE. The marriage practice in the Samin community differs from that in the common Javanese tradition. Marriage is carried out only between the Samin members, based on adat. The other reason why the 'virginity test' is not recognised in the Samin community is
that marriage has to be based on love. 'Stray marriage' does not seem to exist in the Samin Klopoduwur community. This is due to the belief that a woman's status in the community is higher than that of a man. This is based on the reality that women can act as mothers, and therefore have significant duties because of such roles as giving birth, breast feeding, and caring for and teaching children. Therefore, the mother is the symbol of ... This is illustrated by the ...
|
2018-10-15T16:19:52.374Z
|
2010-06-02T00:00:00.000
|
{
"year": 2010,
"sha1": "375bba2cb01c88ae57405f9ba24e1c38888f2cbb",
"oa_license": "CCBYSA",
"oa_url": "https://journal.ugm.ac.id/jurnal-humaniora/article/download/990/822",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "375bba2cb01c88ae57405f9ba24e1c38888f2cbb",
"s2fieldsofstudy": [
"History",
"Sociology"
],
"extfieldsofstudy": [
"Sociology"
]
}
|
2776172
|
pes2o/s2orc
|
v3-fos-license
|
Modern Monetary Circuit Theory, Stability of Interconnected Banking Network, and Balance Sheet Optimization for Individual Banks
A modern version of Monetary Circuit Theory with a particular emphasis on stochastic underpinning mechanisms is developed. It is explained how money is created by the banking system as a whole and by individual banks. The role of central banks as system stabilizers and liquidity providers is elucidated. It is shown how in the process of money creation banks become naturally interconnected. A novel Extended Structural Default Model describing the stability of the Interconnected Banking Network is proposed. The purpose of banks' capital and liquidity is explained. A multi-period constrained optimization problem for a bank's balance sheet is formulated and solved in a simple case. Both theoretical and practical aspects are covered.
Introduction
Since times immemorial, the meaning of money has preoccupied industrialists, traders, statesmen, economists, mathematicians, philosophers, artists, and laymen alike.
The great British economist John Maynard Keynes puts it succinctly as follows: For the importance of money essentially flows from it being a link between the present and the future. These words are echoed by Mickey Bergman, the character played by Danny DeVito in the movie Heist, who says: Everybody needs money. That's why they call it money.
Money has been the subject of innumerable expositions, see, e.g., Law (1705), Jevons (1875), Knapp (1905), Schlesinger (1914), von Mises (1924), Friedman (1969), Schumpeter (1970), Friedman and Schwartz (1982), Kocherlakota (1998), Realfonzo (1998), Mehrling (2000), Davidson (2002), Ingham (2004), Graeber (2011), McLeay et al. (2014), among many others. Recently, these discussions have been invigorated by the introduction of Bitcoin (Nakamoto 2009). An astute reader will recognize, however, that apart from intriguing technical innovations, Bitcoin does not differ that much from the fabled tally sticks, which were used in the Middle Ages, see, e.g., Baxter (1989). It is universally accepted that money has several important functions, such as a store of value, a means of payment, and a unit of account. However, it is extraordinarily difficult to understand the role played by money and to follow its flow in the economy. One needs to account properly for nonfinancial and financial stocks (various cumulative amounts) and flows (changes in these amounts). Here is how Michal Kalecki, the great Polish economist, summarizes the issue with his usual flair and penchant for hyperbole: Economics is the science of confusing stocks with flows.
In our opinion, the functioning of the economy and the role of money is best described by the Monetary Circuit Theory (MCT), which provides the framework for specifying how money lubricates and facilitates production and consumption cycles in society. Although the theory itself is quite established, it fails to include some salient features of the real economy, which came to the fore during the latest financial crisis. The aim of the current paper is to develop a modern continuous time version of this venerable theory, which is capable of dealing with the equality between production and consumption plus investment, the stochastic nature of consumption, which drives other economic variables, defaults of the borrowers, the finite capacity of the banking system for lending, etc. This paper provides a novel description of the behaviour and stability of the interlinked banking system, as well as of the role played by individual banks in facilitating the functioning of the real economy. The latter aspect is particularly important because currently there is a certain lack of appreciation on the part of the conventional economic paradigm of the special role of banks. For example, banks are excluded from widely used dynamic stochastic general equilibrium models, which are presently influential in contemporary macroeconomics (Sbordone et al. 2010).
It is a simple statement of fact that reasonable people can disagree about the way money is created. Currently, there are three prevailing theories describing the process of money creation. Credit creation theory of banking has been dominant in the 19th and early 20th centuries. It is discussed in a number of books and papers, such as Macleod (1855-6), Mitchell-Innes (1914), Hahn (1920), Wicksell (1922), and Werner (2005). More recently Werner (2014) has empirically illustrated how a bank can individually create money "out of nothing" 2 . In our opinion, this theory correctly reflects mechanics of linking credit and money creation; unfortunately, it has gradually lost its ground and was overtaken by the fractional reserve theory of banking, see for example, Marshall (1888), Keynes (1930), Samuelson & Nordhaus (1995), and numerous other sources. Finally, the financial intermediation theory of banking is the current champion, three representative descriptions of this theory are given by Keynes (1936), Tobin (1969), and Bernanke & Blinder (1989), among many others. In our opinion, this theory puts insufficient emphasis on the unique and special role of the banking sector in the process of money creation.
In the present paper we analyze the process of money creation and its intrinsic connection to credit in the modern economy. In particular, we address the following important questions: (a) Why do we need banks and what is their role in society? (b) Can a financial system operate without banks? (c) How do banks become interconnected as a part of their regular lending activities? (d) What makes banks different from non-financial institutions? In addition, we consider a number of issues pertinent to individual banks, such as (e) How much capital do banks need? (f) How are liquidity and capital related? (g) How can a bank's balance sheet be optimized? (h) What would an ideal bank look like? (i) What are the similarities and differences between insurance companies and banks viewed as dividend-producing machines? In order to answer these crucial questions we develop a new Modern Monetary Circuit (MMC) theory, which treats the banking system on three levels: (a) the system as a whole; (b) an interconnected set of individual banks; (c) individual banks. We try to be as parsimonious as possible without sacrificing an accurate description of the modern economy, with a particular emphasis on credit channels of money creation in the supply-demand context and their stochastic nature.
The paper is organized as follows. Initially, in Sections 2 and 3 we develop the building blocks, which are further aggregated in Section 4 into the consistent continuous time MMC theory. In Section 2, we introduce stochasticity into conventional Lotka-Volterra-Goodwin equations and incorporate natural restrictions on the relevant economic variables. Further, in Section 3 we analyze the conventional Keen equations and modify them by incorporating stochastic effects and natural boundaries. Building upon the results of Sections 2 and 3, we develop in Section 4 a consistent MMC theory and illustrate it for a simple economic triangle that includes consumers (workers and rentiers), producers and banks. Section 5 details the underlying process of money creation and annihilation by the banking system and discusses the role of the central bank as the liquidity provider for individual banks. In Section 6 we develop the framework to study the banking system, which becomes interlinked in the process of money creation and propose an extended structural default model for the in-terconnected banking network. This model is further explained in Appendix A for the simple case of two interlinked banks with mutual obligations. In Section 7 the behaviour of individual banks operating as a part of the whole banking system is analyzed with an emphasis on the role of banks' capital and liquidity. The balance sheet optimization problem for an individual bank is formulated and solved in a simplified case.
Background
The Lotka-Volterra system of first-order non-linear differential equations qualitatively describes the predator-prey dynamics observed in biology (Lotka 1925, Volterra 1931). Goodwin was the first to apply these equations to the theory of economic growth and business cycles (Goodwin 1967).
Framework
Assume, for simplicity, that in the stylized economy a single good is produced. Then the productivity of labor θ_w is measured in production units per worker per unit of time, the available workforce N_w is measured in the number of workers, while the employment rate λ_w is measured in fractions of one. Thus, the total number of units produced by firms per unit of time, Υ_f, is given by Υ_f = θ_w λ_w N_w, where both productivity and the labor pool grow deterministically (Eqs (2), (3); see the sketch after this paragraph). If so desired, these relations can be made much more complicated, for example, we can add stochasticity, more realistic population dynamics, etc. Production expressed in monetary terms is given by Y_f = P Υ_f, where P is the price of one unit of goods. Similarly to Eqs (2), (3) we assume that the price is deterministic. Workers' and firms' shares of production are denoted by s_w and s_f = 1 − s_w, respectively. The unemployment rate λ_u is defined in the usual way, λ_u = 1 − λ_w. Goodwin's idea was to describe the joint dynamics of the pair (s_w, λ_w).
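The display equations for the deterministic growth of productivity, workforce, and price are lost in extraction; the following is a plausible reconstruction consistent with the surrounding text, in which the growth rates g_θ, g_N and the inflation rate i are assumed symbols rather than ones taken from the source.

```latex
\frac{d\theta_w}{\theta_w} = g_\theta\, dt, \qquad
\frac{dN_w}{N_w} = g_N\, dt, \qquad
\frac{dP}{P} = i\, dt .
```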
Existing Theory
The non-stochastic Lotka-Volterra-Goodwin equations (LVGEs; see Lotka 1925, Volterra 1931, Goodwin 1967) describe the relation between the workers' portion of the output and the relative employment rate.
The log-change of s_w is governed by the Phillips law, where φ(λ) is the so-called Phillips curve (Phillips 1958, Flaschel 2010, Blanchflower and Oswald 1994). The log-change of λ_w is calculated in three easy steps. First, the so-called Cassel-Harrod-Domar relation (see Cassel 1924, Harrod 1939, Domar 1946) links output to capital, where K_f is the monetary value of the firm's non-financial assets and ν_f is the constant production rate, which is the inverse of the capital-to-output ratio. It is clear that ν_f, which can be thought of as a rate, is measured in units of inverse time, [1/T], while the capital-to-output ratio 1/ν_f is measured in units of time, [T]. Second, Say's law (Say 1803) states that all the firms' profits are re-invested into business, so that the dynamics of K_f is governed by a deterministic equation with ξ_A being the amortization rate. Finally, the relative change in the employment rate λ_w is derived by combining Eqs. (2)-(5) and (9). Thus, the coupled system of equations for (s_w, λ_w) takes the form of Eqs (12), which schematically describe the class struggle; they are formally identical to the famous predator-prey Lotka-Volterra equations in biology, with the intensive variables s_w, λ_w playing the role of predator and prey, respectively (a standard statement of the system is sketched after this paragraph). Two essential drawbacks of the LVGEs are that they neglect the stochastic nature of economic processes and do not preserve the natural constraints (s_w, λ_w) ∈ (0, 1) × (0, 1). Besides, they are too restrictive in describing the discretionary nature of firms' investment decisions. The conservation law Ψ corresponding to Eqs (12) has the form Ψ(s_w, λ_w) = −ln(s_w^c λ_w^a) + d s_w + b λ_w, and has a fixed point at (s_w, λ_w) = (c/d, a/b), where Ψ achieves its minimum. Solutions of the LVGEs without regularization are shown in Figure 1. Both phase diagrams in the (s_w, λ_w)-space and time evolution graphs show that for the chosen set of parameters λ_w > 1 for some parts of the cycle.
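The displayed system referred to as Eqs (12) is also lost in extraction. A standard Lotka-Volterra statement consistent with the conservation law Ψ and the parameters a, b, c, d used in the text is sketched below; it is a reconstruction, not necessarily the source's exact equations.

```latex
\frac{ds_w}{s_w} = \bigl(-a + b\,\lambda_w\bigr)\, dt, \qquad
\frac{d\lambda_w}{\lambda_w} = \bigl(c - d\,s_w\bigr)\, dt .
```

A direct computation shows that dΨ/dt = 0 along trajectories of this system and that Ψ attains its minimum at the fixed point (s_w, λ_w) = (c/d, a/b), in agreement with the conservation law quoted above.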
Modified Theory
In order to satisfy the natural boundaries in the stochastic framework, we propose a new version of the LVGEs, in which ω > 0 is a regularization parameter and σ_s √(s_w s_f), σ_λ √(λ_w λ_u) are Jacobi normal volatilities. This choice of volatilities ensures that (s_w, λ_w) stays within the unit square. The deterministic conservation law Ψ for Eqs (15) is similar to Eq. (13). However, it is easy to see that the corresponding contour lines stay within the unit square, (s_w, λ_w) ∈ (0, 1) × (0, 1). The fixed point, where Ψ achieves its minimum, is shifted slightly by the regularization. Effects of regularization, and effects of stochasticity combined with regularization, are shown in Figures 2 and 3, respectively. It is clear that, by construction, Eqs (15) reflect the naturally occurring stochasticity of the corresponding economic processes, while preserving the natural bounds for s_w and λ_w. The idea of regularizing the Goodwin equations was originally proposed by Desai et al. (2006). Our choice of the regularization function is different from theirs but is particularly convenient for further development and advantageous because of its parsimony. At the same time, while stochastic LVEs are rather popular in the biological context, see, e.g., Cai and Lin (2004), stochastic aspects of the LVGEs remain relatively unexplored; see, however, Kodera and Vosvrda (2007), and, more recently, Huu and Costa-Lima (2014).
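To make the role of the Jacobi-type volatilities concrete, here is a minimal Euler-Maruyama sketch of Lotka-Volterra-style dynamics with volatilities σ√(x(1−x)). The drift is the unregularized symbolic form sketched earlier, the parameter values are illustrative rather than taken from the paper, and the ω-regularization term is omitted because its exact form is not reproduced here.

```python
import numpy as np

def simulate(a=0.05, b=0.1, c=0.05, d=0.1, sig_s=0.02, sig_l=0.02,
             s0=0.6, l0=0.9, T=200.0, dt=0.01, seed=0):
    """Euler-Maruyama simulation of stochastic Lotka-Volterra-Goodwin-type
    dynamics with Jacobi normal volatilities sigma*sqrt(x*(1-x)), which keep
    (s_w, lambda_w) inside the unit square."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    s, lam = np.empty(n), np.empty(n)
    s[0], lam[0] = s0, l0
    for k in range(n - 1):
        dW_s, dW_l = rng.normal(0.0, np.sqrt(dt), size=2)
        ds = s[k] * (-a + b * lam[k]) * dt + sig_s * np.sqrt(s[k] * (1 - s[k])) * dW_s
        dl = lam[k] * (c - d * s[k]) * dt + sig_l * np.sqrt(lam[k] * (1 - lam[k])) * dW_l
        # Clip to the open unit interval to guard against discretization overshoot.
        s[k + 1] = min(max(s[k] + ds, 1e-6), 1 - 1e-6)
        lam[k + 1] = min(max(lam[k] + dl, 1e-6), 1 - 1e-6)
    return s, lam

s_path, lam_path = simulate()
print(s_path[-3:], lam_path[-3:])
```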
Background
LVGEs and their simple modifications generate phase portraits which are either closed or almost closed, as presented in Figures 1, 2, and 3. Accordingly, they cannot describe unstable economic behaviour. However, historical experience suggests that capitalist economies are periodically prone to crises, as is elucidated by the famous Financial Instability Hypothesis of Minsky (Minsky 1977, 1986). His theory bridges macroeconomics and finance and, if it does not fully develop, then at least clarifies the role of banks and, more generally, debt in modern society. Although Minsky's own attempts to formulate the theory in a quantitative rather than qualitative form were unsuccessful, this was partially accomplished by Steven Keen (Keen 1995). Keen extended the Goodwin model by abandoning its key assumption that investment is equal to profit. Instead, he assumed that, when the profit rate is high, firms invest more than their retained earnings by borrowing from banks, and vice versa.
Below we briefly discuss the Keen equations and show how to modify them in order to remove some of their intrinsic deficiencies.
Keen Equations
The Keen equations (KEs) (Keen 1995) describe the relation between the workers' portion of the output s_w, the employment rate λ_w, and the firms' debt D_f relative to their non-financial assets K_f, Γ_f = D_f/K_f. All these quantities are non-dimensional. KEs can be used to provide a quantitative description of Minsky's Financial Instability Hypothesis (Minsky 1977).
Keen expanded the Goodwin framework by abandoning one of its key simplifications, namely, the assumption that investment equals profit. Instead, he allowed investments to be financed by banks. This important extension enables the description of ever-increasing firms' leverage until the point when their debt servicing becomes infeasible and an economic crisis occurs. Subsequently, Keen (2013, 2014) augmented his original equations in order to account for flows of funds among firms, banks, and households. However, KEs and their extensions do not take into account the possibility of default by borrowers, and do not reflect the fact that the banking system's lending ability is restricted by its capital capacity. Even more importantly, extended KEs do not explicitly guarantee that production equals consumption plus investment. In addition, as with LVGEs, KEs do not reflect the stochasticity of the underlying economic behaviour and violate natural boundaries. Accordingly, a detailed description of the crisis in the Keen framework is not possible.
Symbolically, KEs can be written as a coupled system in (s_w, λ_w, Γ_f), where a, b, c, d are suitable parameters, and f(·) is an increasing function of its argument, which represents net profits. Keen and subsequent authors recommend a particular choice of f. Solutions of KEs without regularization are shown in Figure 4. On the one hand, these figures exhibit the desired feature of rapid growth of firms' leverage. On the other hand, they produce an unrealistic employment rate λ_w > 1.
Modified Theory
A simple modification along the lines outlined earlier makes KEs more credible. Here ω is a regularization parameter, and σ_s √(s_w s_f), σ_λ √(λ_w λ_u) are Jacobi normal volatilities.
Effects of regularization, and effects of stochasticity combined with regularization, for KEs are presented in Figures 5 and 6, respectively. These figures demonstrate the same rapid growth of firms' leverage as in Figure 4 while ensuring that λ_w < 1; however, without taking into account the possibility of defaults they are not detailed enough to describe the approach of a crisis or the moment of the crisis itself.
Here and above we looked at the classical LVGEs and KEs and modified them to better reflect the underlying economics. We use these equations as an important building block for the stochastic MMC theory.
Inspiration
Inspired by the above developments, we build a continuous-time stochastic model of the monetary circuit, which has the attractive features of the established models but at the same time explicitly respects the fact that production equals consumption plus investment, incorporates the possibility of default by borrowers, satisfies all the relevant economic constraints, and can be easily extended to integrate the government and central bank, as well as other important aspects, into its framework. For the first time, defaults by borrowers are explicitly incorporated into the model framework.
For the sake of brevity, we shall focus on a reduced monetary circuit consisting of firms, banks, workers, and rentiers, while the extended version will be reported elsewhere.
Stocks and Flows
To describe the monetary circuit in detail, we need to consider five sectors: households (workers and rentiers) H; firms (capitalists) F; private banks (bankers) PB; government G; and the central bank CB; all these sectors are presented in Figure 7 below. However, the simplest viable economic graph with just three sectors, namely households H, firms F, and private banks PB, can produce a nontrivial monetary circuit, which is analyzed below. Banks naturally play a central role in the monetary circuit by simultaneously creating assets and liabilities. However, this crucial function is performed under constraints on banks' capital and liquidity. The emphasis on capital and liquidity in the general context of monetary circuits is an important and novel feature, which differentiates our approach from the existing ones. Further details, including the role of the central bank as a system regulator, will be reported elsewhere.
Notation
We use subscripts w, r, f, b to denote quantities related to workers, rentiers, firms, and banks, respectively. We denote rentiers' and firms' deposits (banks' liabilities) by D_r, D_f, and their loans (banks' assets) by L_r, L_f. Firms' physical, non-financial assets are denoted by K_f and banks' capital by K_b; all these quantities are expressed in monetary units, [M]. Thus, financial and non-financial stocks are denoted by D_r, L_r, D_f, L_f, K_f, K_b. By its very nature, bank capital K_b is a balancing variable between the bank's assets (L_r + L_f) and liabilities (D_r + D_f), K_b = L_r + L_f − D_r − D_f. According to banking regulations, bank assets are limited by the capital constraint L_r + L_f ≤ K_b/ν_b, where ν_b is a non-dimensional capital adequacy ratio, which defines the overall leverage in the financial system. When dealing with the banking system as a whole, which, in essence, can be viewed as a gigantic single bank, we do not need to include the central bank, since a liquidity squeeze cannot occur by definition. It goes without saying that when we deal with a set of individual banks, the introduction of the central bank is an absolute necessity. This extended case will be presented elsewhere.
There are several important rates which determine monetary flows in our simplified economy, namely the deposit rate r_D, the loan rate r_L, the maximum production rate at full employment ν_f, the physical assets amortization rate ξ_A, and the default rate ξ_∆; all these rates are expressed in terms of inverse time units, [1/T].
Contractual net interest cash flows for rentiers and firms, ni_r and ni_f, are measured in terms of monetary units per time, [M/T]. Profits for firms and banks are denoted as Π_f and Π_b, respectively, with both quantities being expressed in monetary units per time, [M/T]. For future discussion, in addition to the overall profits, we introduce distributed, Π_f^d and Π_b^d, and undistributed, Π_f^u and Π_b^u, portions of the profits. It is also necessary to introduce various fractions, some of which we are already familiar with, such as the workers' share of production s_w, the firms' share of production s_f = 1 − s_w, the employment rate λ_w, and the unemployment rate λ_u = 1 − λ_w, and some of which are new, such as the capacity utilization u_f, the rentiers' share of firms' profits δ_rf, the firms' share of the firms' profits δ_ff = 1 − δ_rf, the rentiers' share of banks' profits δ_rb, and the banks' share of the banks' profits δ_bb = 1 − δ_rb; all these quantities are non-dimensional, [1], and sandwiched between 0 and 1. It is clear that Π_f = Π_f^d + Π_f^u and Π_b = Π_b^d + Π_b^u.
Key Observations
(a) Production is equal to consumption plus investment; all quantities in Eq. (24) are expressed in terms of [M/T].
(b) On the one hand, the workers' participation in the system is essentially non-financial and amounts to a straightforward exchange of labor for goods. Thus, as was pointed out by Kalecki, workers consume what they earn.
(c) On the other hand, rentiers can choose their level of consumption, C_r, at their discretion, thereby introducing the notion of stochasticity into the picture. We explicitly model the stochastic nature of their consumption by assuming that it is governed by a mean-reverting SDE, in which we use the fact that the total stock Σ_r of financial and non-financial assets belonging to rentiers (as a class) boils down to firms' non-financial assets. Eqs (26) assume that rentiers' consumption reverts to the mean C̄_r, which is a linear combination of the profits received by rentiers, ni_r + Π_f^d + Π_b^d, and the theoretical productivity of their capital, ν_f K_f.
(d) We apply the celebrated Hooke's law and assume that firms invest in proportion to their overall production. We view this law as a first-order linearization of any hyperelastic relation which exists in practice. Thus, firms' production depends on rentiers' consumption. Here we assume that firms reinvest in production out of a share of their profits, 0 < υ_f < 1, and represent Y_f accordingly.

(e) Thus, the level of investment and capacity utilization follow from the above.

(f) Firms' overall, distributed, and undistributed profits are defined accordingly. Thus, firms' profits are directly proportional to rentiers' consumption. As usual, Kalecki put it best by observing that capitalists earn what they spend! The dimensionless profit rate is denoted π_f. The proportionality coefficient υ_f introduced in Eq. (30) depends on the profit rate, capacity utilization, financial leverage, etc., and is expressed explicitly via a function Φ(·) which maps the real axis onto the unit interval; the constants υ_0, υ_1, υ_2 are positive, and the constant υ_3 is negative.

(g) Banks' overall, distributed, and undistributed profits represent the difference between the interest received on outstanding loans and that paid on bank deposits, reduced by defaults on loans.

(h) Rentiers' cash flows are denoted CF_r. If CF_r > 0, then rentiers' deposits, D_r, increase; otherwise, their loans, L_r, increase. This takes into account the possibility of rentiers' default.
(i) Firms' cash flows are denoted CF_f. If CF_f > 0, then firms' deposits, D_f, increase; otherwise, their loans, L_f, increase. This takes into account the possibility of firms' default.
(j) Firms' physical asset growth depends on their investments and the rate of depreciation.

(k) Banks' capital growth is determined by their net interest income and the rate of default.

(l) Physical and financial capacity constraints (at full employment) can be written in parallel form. We emphasize this direct parallel between the financial and non-financial worlds, with the capital ratio playing the role of a physical capacity constraint.
(m) We use the above observations to derive a modified version of the LVGEs (15). While the first equation, describing the dynamics of s_w, remains unchanged, the second equation, for λ_w, is modified.

(n) By using Eqs (4) and (31), we can express the level of prices, P, as a function of rentiers' consumption, C_r, employment, λ_w, and other important economic variables.
Main Equations
In this section we summarize the main dynamic MMC equations and the corresponding constraints. The coefficient υ_f introduced in Eq. (30) can be found either via the Newton-Raphson method or via fixed-point iteration; the first iteration is generally sufficient as an approximation. The physical and financial capacity constraints complete the system. In summary, we propose the closed system of stochastic scale-invariant MMC equations (55), (56). By construction, these equations preserve the equality between production and consumption plus investment. In addition, it turns out that the modified LVGEs play only an auxiliary role and are not necessary for understanding the monetary circuit at the most basic level. This intriguing property is due to the assumption that investments are driven solely by profits. If capacity utilization is incorporated into the picture, then the MMC equations and LVGEs become interlinked.
A representative solution of the MMC equations is shown in Figure 8.
Money Creation and Annihilation in Pictures
In modern society, where large quantities of money have to be deposited in banks, banks play a unique role as record keepers. Depositors become, in effect, unsecured junior creditors of banks. If a bank were to default, it would generally cause partial destruction of deposits. To avoid such a disturbing eventuality, banks are required to keep sufficient capital cushions, as well as ample liquidity.
In addition, deposits are insured up to a certain threshold. Without diving into the nuances of different takes on the nature of banking, we mention several books and papers written over the last century which reflect upon various pertinent issues, such as Schumpeter (1912). It is very useful to have a simplified pictorial representation of the inner workings of the banking system. We start with the simple case of a single bank or, equivalently, the banking system as a whole. We assume that the bank in question does not operate at full capacity, so that condition (22) is satisfied. If a new borrower, who is deemed to be creditworthy, approaches the bank and asks for a reasonably sized loan, then the bank issues the loan by simultaneously creating on its books a deposit (the borrower's asset) and a matching liability for the borrower (the bank's asset). Figuratively speaking, the bank has created money "out of thin air". Of course, when the loan is repaid, the process runs in reverse and the money is "destroyed". Assuming that the interest charged on loans is greater than the interest paid on deposits, as a result of the round-trip process the bank's capital increases. The whole process, which is relatively simple, is illustrated in Figure 9. Werner executed this process step by step and described his experiences in a recent paper (Werner 2014). It is worth noting that, in the case of a single bank, lending activity is limited only by the bank's capital capacity, and liquidity is not important.
We now consider the more complicated case of two (or, possibly, more) banks. In this case, it is necessary to incorporate liquidity into the picture. To this end, we must also include a central bank in the financial ecosystem. We assume that banks keep part of their assets in cash, which represents a liability of the central bank. The money creation process comprises three stages: (a) a creditworthy borrower asks the first bank for a loan, which the bank grants out of its cash reserves, thus reducing its liquidity below the desired level; (b) the borrower then deposits the money with the second bank, which converts this deposit into cash, thereby increasing its liquidity above its desired level; (c) the first bank approaches the second bank in order to borrow its excess cash. If the second bank deems the first bank creditworthy, it will lend its excess cash, in consequence creating a link between itself and the first bank. Alternatively, if the second bank refuses to lend its excess cash to the first bank, the first bank has to borrow funds from the central bank, using its performing assets as collateral. Thus, the central bank lubricates the wheels of commerce by providing liquidity to creditworthy borrowers. Its willingness to lend money to commercial banks determines, in turn, their willingness to lend to firms and households. When the borrower repays its loan, the process runs in reverse.
The money creation process, initiated when Bank I lends 2 monetary units to a new borrower, results in the following changes in the two banks' balance sheets:

              External   Interbank          External      Interbank
              Assets     Assets      Cash   Liabilities   Liabilities   Equity
Step I
   Bank I        19          6         3         20            3           5
   Bank II       24          9         4         25            7           5
Step II
   Bank I        21          6         1         20            3           5
   Bank II       24          9         6         27            7           5
Step III
   Bank I        21          6         3         20            5           5
   Bank II       24         11         4         27            7           5
                                                                               (60)

This process is illustrated in Figure 10. We leave it to the reader to analyze the money annihilation process. In summary, in contrast to a non-banking firm, whose balance sheet can be adequately described by a simple relationship among assets, A, liabilities, L, and equity, E, as is shown in Figure 11a, the balance sheet of a typical commercial bank must, in addition to external assets and liabilities, incorporate more details, such as interbank assets and liabilities, as well as cash, which simultaneously represents banks' assets and the central bank's liabilities, see Figure 11b. In Section 4 we quantitatively described a supply- and demand-driven economic system. In this system money is treated on a par with other goods, and the dynamics of the demand for loans and of lending activity is understood in the supply-demand equilibrium framework. An increasing demand for loans from firms and households leads banks to lend more. Having said that, we should emphasize that the ability of banks to generate new loans is not infinite. In exact parallel with physical goods, whose overall production at full employment is physically limited by ν_f K_f, the process of money (loan) creation is limited by the capital capacity of the banking system, K_b/ν_b. Once we have embedded the flow of money in the supply-demand framework, we can extend the model to several interconnected banks that issue loans in the economy. These banks compete with each other for business while, at the same time, helping each other to balance their cash holdings, thus creating interbank linkages. These linkages pose risks because of the potential propagation of defaults in the system. Our main goal in the next section is to develop a parsimonious model which, nevertheless, is rich enough to produce an adequate quantitative description of the banking ecosystem. We look for a model with as few adjustable parameters as possible rather than one over-fitted with a plethora of adjustable calibration parameters.
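The bookkeeping behind the three steps in the table can be reproduced with a few lines of code; the sketch below uses the same illustrative figures and checks that each balance sheet remains balanced after every step.

```python
# Sketch of the three-step money-creation bookkeeping shown in the table above,
# using the same illustrative monetary units.

def balanced(b):
    assets = b["external_assets"] + b["interbank_assets"] + b["cash"]
    liabilities = b["external_liabilities"] + b["interbank_liabilities"] + b["equity"]
    return assets == liabilities

bank1 = dict(external_assets=19, interbank_assets=6, cash=3,
             external_liabilities=20, interbank_liabilities=3, equity=5)
bank2 = dict(external_assets=24, interbank_assets=9, cash=4,
             external_liabilities=25, interbank_liabilities=7, equity=5)

loan = 2

# Step II: Bank I lends `loan` units to a new borrower out of its cash ...
bank1["external_assets"] += loan
bank1["cash"] -= loan
# ... and the borrower deposits the proceeds with Bank II.
bank2["external_liabilities"] += loan
bank2["cash"] += loan

# Step III: Bank II lends its excess cash back to Bank I on the interbank market.
bank2["interbank_assets"] += loan
bank2["cash"] -= loan
bank1["cash"] += loan
bank1["interbank_liabilities"] += loan

assert balanced(bank1) and balanced(bank2)
print(bank1)
print(bank2)
```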
Interlinked Banking System
Consider N banks with external as well as mutual assets and liabilities, where the interbank assets and liabilities are defined accordingly; an individual bank's capital is then given by the difference between its total assets and its total liabilities. We can represent the banks' assets and liabilities by using asset and liability matrices. Thus, by its very nature, the banking system becomes inherently linked. Various aspects of this interconnectivity are discussed by Rochet and Tirole (1996), among others. In the following subsection we specify dynamics for assets and liabilities which is consistent with the possibility of defaults by borrowers.
Dynamics of Assets and Liabilities. Default Boundaries
In the simplest possible case, the dynamics of assets and liabilities is governed by a system of SDEs in which µ is the growth rate, not necessarily risk-neutral, W_i are correlated Brownian motions, and σ_i are the corresponding log-normal volatilities.

In a more general case, the corresponding dynamics can contain jumps, as discussed in Lipton and Sepp (2009), or Itkin and Lipton (2015a, 2015b). Following Lipton and Sepp (2009), we assume that the dynamics of firms' assets is driven by Poisson processes N_i independent of W_i, where λ_i are the intensities of jump arrivals, J_i are random jump amplitudes with prescribed probability densities, and κ_i are the corresponding jump compensators. Since we are interested in studying the consequences of default, it is enough to assume that the J_i are negative exponential jumps with parameters ϑ_i > 0. The diffusion processes W_i are correlated in the usual way. The jump processes N_i are correlated in the spirit of Marshall-Olkin (1967). We denote by Π(N) the set of all subsets of N names except for the empty subset {∅}, and by π a typical subset. With every π we associate a Poisson process N_π(t) with intensity λ_π(t), and each N_i(t) is projected onto the N_π(t). Thus, for each bank we assume that there are both systemic and idiosyncratic sources of jumps. In practice, it is sufficient to consider N + 1 subsets of Π(N), namely the subset containing all names and the subsets containing only one name at a time. For all other subsets we put λ_π = 0. If extra risk factors are needed, one can include additional subsets representing particular industry sectors or countries.
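As an illustration of this jump structure, the following sketch simulates correlated jump-diffusion asset paths for a few banks, with one systemic Poisson driver hitting all names and one idiosyncratic driver per name, in the Marshall-Olkin spirit described above. The exact dynamics, compensator, and parameter values are assumptions chosen for illustration rather than the paper's calibrated specification.

```python
import numpy as np

rng = np.random.default_rng(1)

N, T, dt = 3, 5.0, 1.0 / 252
steps = int(T / dt)
mu = 0.03
sigma = np.array([0.15, 0.20, 0.25])
rho = 0.4
chol = np.linalg.cholesky(rho * np.ones((N, N)) + (1 - rho) * np.eye(N))

lam_sys = 0.05                               # intensity of the common (systemic) jump driver
lam_idio = np.array([0.10, 0.15, 0.20])      # idiosyncratic jump intensities
theta = np.array([4.0, 5.0, 6.0])            # exponential jump parameters, mean jump size 1/theta
lam_tot = lam_sys + lam_idio
kappa = -1.0 / (theta + 1.0)                 # compensator E[exp(-J)] - 1 for J ~ Exp(theta)

logA = np.empty((steps + 1, N))
logA[0] = np.log(100.0)
for k in range(steps):
    dW = chol @ rng.normal(0.0, np.sqrt(dt), N)             # correlated diffusion shocks
    hit = (rng.random() < lam_sys * dt) | (rng.random(N) < lam_idio * dt)
    J = rng.exponential(1.0 / theta) * hit                  # downward jumps where a driver fires
    drift = (mu - 0.5 * sigma**2 - lam_tot * kappa) * dt
    logA[k + 1] = logA[k] + drift + sigma * dW - J

print("terminal asset values:", np.round(np.exp(logA[-1]), 2))
```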
The simplest way of introducing default is to follow Merton's idea (Merton 1974) and to consider the process of final settlement at time t = T; see, e.g., Webber and Willison (2011). However, given the highly regulated nature of the banking business, it is hard to justify such a set-up. Accordingly, we prefer to model the problem in the spirit of Black and Cox (1976) and introduce continuous default boundaries, Λ_i, for 0 ≤ t ≤ T, where R_i, 0 ≤ R_i ≤ 1, is the recovery rate. We can think of Λ_i as a function of external and mutual liabilities. If the k-th bank defaults at an intermediate time t′, then the capital of the remaining banks is depleted. We change the indexation of the surviving banks by applying a relabelling function and also introduce its inverse ψ_k. The corresponding asset and liability matrices A^(k), L^(k) and default boundaries are then redefined for the surviving banks, so that ∆Λ_i^(k) > 0 and the default boundaries (naturally) move to the right.
Terminal Settlement Conditions
In order to formulate the terminal condition for the Kolmogorov equation, we need to describe the settlement process at t = T in the spirit of Eisenberg and Noe (2001). Let A (T ) be the vector of the terminal external asset values.
Since at time T a full settlement is expected, we assume that a particular bank will pay a fraction ω_i of its total liabilities to its creditors (both external and interbank). If its assets are sufficient to satisfy its obligations, then ω_i = 1; otherwise 0 < ω_i < 1. Thus, the settlement can be described by a system of equations stating that ω is a fixed point of a mapping Φ(ω). Eisenberg and Noe have shown that Φ(ω) is a non-expanding mapping in the standard Euclidean metric and formulated conditions under which there is just one fixed point. We assume that these conditions are satisfied, so that for each A(T) there is a unique ω(A(T)) such that the settlement is possible. There are no defaults provided that ω = 1; otherwise some banks default. Let I be a state indicator (0, 1) vector of length N, and denote by D_I the corresponding domain. For the marginal survival probability of the i-th bank we sum over I(i), the set of indicator vectors with I_i = 1. Thus far, we have introduced the stochastic dynamics of assets and liabilities for a set of interconnected banks. These dynamics explicitly allow for defaults of individual banks. Our framework reuses heavy machinery originally developed in the context of credit derivatives. In spite of being mathematically intense, such an approach is necessary to quantitatively describe the financial sector as a manufacturer of credit.
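A minimal numerical sketch of this settlement is the usual fixed-point iteration of the clearing map: starting from full payment, each bank's payment fraction is recomputed from its terminal external assets plus the interbank payments it actually receives. The small liability matrix and asset vector below are illustrative assumptions.

```python
import numpy as np

# Eisenberg-Noe style settlement at t = T: iterate the clearing map until each bank pays
# the fraction omega_i of its total liabilities that its assets allow.

L = np.array([[0.0, 4.0, 2.0],     # L[i, j]: nominal interbank liability of bank i to bank j
              [3.0, 0.0, 5.0],
              [1.0, 2.0, 0.0]])
ext_liab = np.array([10.0, 8.0, 12.0])        # external liabilities
A_T = np.array([12.0, 9.0, 16.0])             # terminal external asset values

p_bar = L.sum(axis=1) + ext_liab              # total nominal liabilities of each bank

def clearing_fraction(A_T, L, p_bar, tol=1e-12, max_iter=1000):
    omega = np.ones_like(p_bar)               # start from full payment
    for _ in range(max_iter):
        inflow = L.T @ omega                  # interbank payments actually received
        new = np.minimum(1.0, (A_T + inflow) / p_bar)
        if np.max(np.abs(new - omega)) < tol:
            return new
        omega = new
    return omega

omega = clearing_fraction(A_T, L, p_bar)
print("payment fractions:", np.round(omega, 4))
print("defaulting banks:", np.where(omega < 1.0)[0].tolist())
```

Starting the iteration from full payment and applying the non-expanding map repeatedly converges to the clearing vector; banks with a final fraction below one are exactly those that default at settlement.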
General Solution via Green's Function
This section is rather challenging mathematically and can easily be skipped on first reading.

Our goal is to express general quantities of interest, such as marginal survival probabilities of individual banks and their joint survival probability, in terms of the Green's function of the N-dimensional correlated jump-diffusion process in the positive orthant.
As usual, it is more convenient to introduce normalized non-dimensional variables. The scaled default boundaries and the survival domain D(1, ..., 1) are defined accordingly, so that we need to perform all our calculations in the positive cone R^N_+. The dynamics of X = (X_1, ..., X_N) is governed by the corresponding system of equations; below we omit bars for the sake of brevity and rewrite Eq. (90) in a compact form. The corresponding Kolmogorov backward operator involves the coefficients ς_i = σ_i ϑ_i/Σ. We can formulate a typical pricing equation in the positive cone R^N_+, where X, X_{0,k}, X_{∞,k}, Y_k are N- and (N − 1)-dimensional vectors, respectively, and χ(t, X), φ_{0,k}(t, y), φ_{∞,k}(t, y), ψ(X) are known, contract-specific functions. For instance, the joint survival probability Q(t, X) corresponds to the terminal payoff equal to the indicator of D(1, ..., 1) (Eq. (99)). The corresponding adjoint operator is defined in the usual way. We solve Eqs (95)-(97) by introducing the Green's function G(t, X), or, more explicitly, G(t, X; 0, X′), such that G(0, X) = δ(X − X′).
Some relatively simple algebra combined with Green's theorem shows that, in order to solve the backward pricing problem with non-homogeneous right-hand side and boundary conditions, it is sufficient to solve the forward propagation problem for the Green's function with homogeneous right-hand side and boundary conditions. In particular, this yields expressions for the joint survival probability and, similarly, for the marginal survival probability of the first bank, say.
Banks' Balance Sheet Optimization
This section is aimed at increasing the granularity of our model. Recall that we first considered a simple economy as a whole, assumed that it is driven by stochastic demand for goods and money, and described the corresponding monetary circuit. In this framework, physical goods and money are treated in a uniform fashion. Next, we moved on to a more granular level and described a system of interlinked banks that create money by accommodating external changes in the demand for it. Now, we have reached the most granular level of our theory and consider an individual bank. We emphasize that the MMC theory described in this paper is a top-down theory. However, once major consistent patterns from the overall economy are traced to the level of an individual bank, the consequences for bank profitability and risk management are hard to overestimate.
Numerous papers and monographs deal with various aspects of the bank balance sheet optimization problem; here we mention just a few. Kusy and Ziemba (1986) develop a multi-period stochastic linear programming model for solving a small bank asset and liability management (ALM) problem. dos Reis and Martins (2001) develop an optimization model and use it to choose the optimal categories of assets and liabilities to form the balance sheet of a profitable and sound bank. In a series of papers, Petersen and coauthors analyze bank management via stochastic optimal control and suggest an optimal portfolio choice and rate of bank capital inflow that keep the loan level close to an actuarially determined reference process. To complement the existing literature, we develop a framework for optimizing an enterprise business portfolio by mathematically analyzing financial and risk metrics across various economic scenarios, with the overall objective of maximizing risk-adjusted return while staying within various constraints. Regulations impose multiple capital requirements and constraints on the banking industry (such as B3S and B3A capital ratios, leverage ratios, liquidity coverage ratios, etc.).
The economic objective of balance sheet optimization for an individual bank is to choose the levels of loans, deposits, investments, debt, and capital in such a way as to satisfy Basel III rules and, at the same time, maximize cash flows attributable to shareholders. Balance sheet optimization boils down to solving a very involved Hamilton-Jacobi-Bellman (HJB) problem. The optimization problem can be formulated in two ways: (a) optimize cash flows without using a risk-preference utility function, or, equivalently, being indifferent to the probability of loss versus profit; (b) introduce a utility function into the optimization problem and solve it in the spirit of Merton's optimal consumption problem. Although, as a rule, balance sheet optimization has to be done numerically, occasionally, depending on the chosen utility function, a semi-analytical solution can be obtained.
Notations and Main Variables
Let us introduce the key notation. By necessity, we have to reuse some of the symbols used earlier; we hope this will not confuse the reader. The bank's assets, in increasing order of liquidity, are: X_k^p, outstanding loans with maturity T_k and quality p; I, investments in stocks and bonds; and C, cash.

We assume that T_1 < ... < T_k < ... < T_K and p = 1, ..., P. The quality of loans is determined by various factors, such as the rating of the borrower, collateralization, etc.

The bank's liabilities, in increasing order of stickiness, are: D, deposits; Y_l^q, outstanding debts with maturity T_l and quality q; and E, equity (or capital).

Assets and liabilities have the following properties: (a) loans and debts are characterized by their repayment/loss rates λ_k^p and µ_l^q and interest rates ν_k^p and ξ_l^q; (b) similarly, for deposits we have rates α and β, respectively; (c) finally, for investments the corresponding growth rates are stochastic and have the form r − ζ + σχ(t), where r is the expected growth rate, ζ is the dividend rate, σ is the volatility of returns on investments, and χ(t) = dW(t)/dt is white noise, or the "derivative" of the standard Brownian motion, so that dI = (r − ζ) I dt + σ I dW.
The balance sheet balancing equation has the form Σ_{k,p} X_k^p + I + C = D + Σ_{l,q} Y_l^q + E. Below we omit sub- and superscripts for brevity and rewrite the equation of balance as X + I + C = D + Y + E. There are several controls and levers for determining the general direction of the bank: (a) the rates φ(t) at which new loans are issued; (b) the rates ψ(t) at which new borrowings are obtained; (c) the rate ω(t) at which new investments are made; (d) the rate π(t) at which new deposits are acquired; and (e) the rate δ(t) at which money is returned to shareholders in the form of dividends or share buy-backs. If δ(t) < 0, then new stock is issued. Of course, dividends should not be paid when new shares are issued.
The evolution of the bank's assets and liabilities is governed by corresponding systems of equations. Here, for convenience, instead of φ(t) and ψ(t) we use Φ(t) and Ψ(t), defined accordingly. On the bank's asset side, outstanding loans decay deterministically in proportion to their repayment rate and increase due to new loans issued less amortized old loans repaid. Existing investments grow stochastically as in Eq. (114) and are complemented by new investments. Changes in cash balances are influenced by several factors. On the one hand, prepaid loans, interest charged on outstanding loans, dividends on investments, new deposits, and new borrowings contribute positively to cash balances. On the other hand, new investments, interest paid on deposits and borrowings, withdrawn deposits and losses on lending, as well as money returned to the shareholders as dividends and/or share buy-backs, lead to a reduction in the bank's cash position.

On the bank's liability side, deposits decay deterministically in proportion to their withdrawal rate and increase due to new deposits coming in. The bank's outstanding debts decay deterministically at their repayment rate and increase due to new borrowings less amortized old debts repaid. As with changes in cash on the asset side, changes in capital (equity) on the liability side are positively affected by the interest received on outstanding loans and the stochastic returns on investments (including dividends), and negatively affected by the interest paid on deposits and borrowings and the dividends paid to the shareholders.
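The verbal description above translates into a simple system of flow equations. The sketch below is a minimal Euler discretization with a single aggregated loan and debt bucket, constant control rates, and equity recovered from the balance identity; it is an illustration of the flow structure only, not the paper's exact equations (loss terms, for instance, are omitted), and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# controls (per year): new loans, new investments, new deposits, new borrowings, dividends
phi, omega, pi_, psi, delta_div = 8.0, 2.0, 6.0, 1.0, 0.8

# rates: loan repayment/interest, deposit withdrawal/interest, debt repayment/interest
lam, nu_ = 0.20, 0.05
alpha, beta = 0.15, 0.01
mu_, xi = 0.10, 0.03
r, zeta, sigma = 0.06, 0.02, 0.15      # investment growth, dividend yield, volatility

dt, years = 1.0 / 252, 10
X, I, C, D, Y = 60.0, 20.0, 10.0, 70.0, 10.0   # loans, investments, cash, deposits, debt

for _ in range(int(years / dt)):
    dW = rng.normal(0.0, np.sqrt(dt))
    cash_in = lam * X + nu_ * X + zeta * I + pi_ + psi
    cash_out = omega + phi + (alpha + beta) * D + (mu_ + xi) * Y + delta_div
    X += (phi - lam * X) * dt
    I += (r - zeta) * I * dt + sigma * I * dW + omega * dt
    C += (cash_in - cash_out) * dt
    D += (pi_ - alpha * D) * dt
    Y += (psi - mu_ * Y) * dt

E = X + I + C - D - Y                   # equity as the balancing item
print(f"X={X:.1f} I={I:.1f} C={C:.1f} D={D:.1f} Y={Y:.1f} E={E:.1f}")
```

Because the individual flows cancel in pairs, computing equity from the balance identity is equivalent to accruing interest received on loans plus investment returns, less interest paid on deposits and debt and dividends, as described in the text.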
Optimization Problem
The cash flow CF(T) attributable to the common equity up to and including some terminal time T is determined by the discounted expected value of the change in equity plus the discounted value of money returned to shareholders over the given time period. By using Eqs (118), CF(T) can be calculated explicitly; here R is the discount rate, and J(t) is the expected value of the investments I(t) with dividends reinvested, which satisfies a deterministic governing equation. Accordingly, in order to optimize the balance sheet at the most basic level, we need to maximize CF(T), viewed as a functional depending on φ(t), ω(t), π(t), ψ(t), and δ(t). However, this optimization problem is subject to various regulatory constraints, such as capital, liquidity, leverage, etc., some of which are explicitly described below. Clearly, the problem has numerous degrees of freedom, which can be reduced somewhat by assuming, for example, that φ(t), ω(t), π(t), ψ(t), δ(t) are time-independent.
Capital Constraints
Regulatory capital calculations are fairly complicated. They are based on systematizing and aggregating the bank portfolio's assets into risk groups and assigning risk weights to each group. Therefore, for determining Risk Weighted Assets (RWAs), it is necessary to classify loans and investments as Held To Maturity (HTM), Available For Sale (AFS), or belonging to the Trading Book (TB). We start with HTM and AFS bonds. We can use either the standard model (SM) or an internal ratings-based model (IRBM). The SM represents RWAs as a weighted sum of exposures with regulator-prescribed weights rwa_SM = (rwa^p_{SM,k}). Alternatively, the IRBM provides an expression for the RWAs with weights rwa_IRBM = (rwa^p_{IRBM,k}) given by relatively complex formulas, which are omitted for brevity. In both cases, the corresponding regulatory capital K^(1) is a prescribed fraction of the RWAs. Additional amounts of capital K^(2), K^(3), K^(4) are required to cover counterparty, operational, and market risks, respectively, so that the total amount of capital the bank needs to hold is K = K^(1) + K^(2) + K^(3) + K^(4). It is clear that for a bank to be a going concern, the inequality E ≥ K has to be satisfied.
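A minimal sketch of the standard-model capital check might look as follows; the risk weights, capital ratio, and add-ons for counterparty, operational, and market risk are illustrative assumptions rather than actual regulatory values.

```python
import numpy as np

# Standard-model style capital check: RWAs as a weighted sum of exposures, required capital
# as a fixed ratio of RWAs plus add-ons. All figures are illustrative assumptions.

exposures = np.array([40.0, 25.0, 15.0])        # loan/investment exposures by risk bucket
risk_weights = np.array([0.35, 0.75, 1.00])     # prescribed weights (illustrative)
capital_ratio = 0.08                            # required capital as a share of RWAs

rwa = float(risk_weights @ exposures)
K1 = capital_ratio * rwa                        # capital for credit risk
K2, K3, K4 = 0.5, 0.8, 0.6                      # counterparty, operational, market risk add-ons
K_required = K1 + K2 + K3 + K4

equity = 8.0
print(f"RWA = {rwa:.1f}, required capital = {K_required:.2f}, going concern: {equity >= K_required}")
```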
Liquidity Constraints
We formulate liquidity constraints in terms of the following quantities: (a) Required Stable Funding (RSF), with weights rsf_X = (rsf^p_k) and rsf_I; (b) Available Stable Funding (ASF), with weights asf_D and asf_Y; (c) stylized 30-day cash outflows (CO); and (d) stylized 30-day cash inflows (CI). Here the weights rsf_X, rsf_I, asf_D, asf_Y, co_D, co_Y, ci_X, ci_I are prescribed by the regulators.
In order to comply with Basel III requirements, it is necessary to have CI > CO, (137) or, equivalently, ci · X + ci_I I + C − co_D D − co · Y > 0.
In words, Eqs (138) and (139) indicate that having large amounts of equity E and cash C is beneficial for the bank's liquidity position (but not for its earnings!).
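The liquidity checks can be sketched in the same spirit as the capital check above; all weights and balance-sheet figures below are illustrative assumptions rather than the prescribed regulatory values.

```python
import numpy as np

# Two liquidity checks: stable funding (ASF vs. RSF) and the stylized 30-day
# inflow/outflow condition CI > CO described above. Figures are illustrative assumptions.

X = np.array([40.0, 25.0])          # loan buckets
I, C = 20.0, 10.0                   # investments, cash
D, Y = 70.0, 15.0                   # deposits, debt

rsf_X, rsf_I = np.array([0.85, 0.65]), 0.50     # required stable funding weights
asf_D, asf_Y = 0.90, 0.50                       # available stable funding weights
co_D, co_Y = 0.10, 0.40                         # stylized 30-day outflow weights
ci_X, ci_I = np.array([0.05, 0.05]), 0.10       # stylized 30-day inflow weights

RSF = float(rsf_X @ X) + rsf_I * I
ASF = asf_D * D + asf_Y * Y
CO = co_D * D + co_Y * Y
CI = float(ci_X @ X) + ci_I * I + C

print(f"stable funding ok: {ASF >= RSF} (ASF={ASF:.1f}, RSF={RSF:.1f})")
print(f"30-day liquidity ok: {CI > CO} (CI={CI:.1f}, CO={CO:.1f})")
```

As the text notes, cash enters the inflow side directly, so a larger cash position (and, via the funding weights, a larger equity cushion) makes both checks easier to pass at the cost of lower earnings.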
Mathematical Formulation: General Optimization Problem
A general optimization problem can be formulated in terms of the independent variables X, I, C, D, Y defined in the multi-dimensional domain given by the corresponding constraints. There are adjacent domains where complementary variational inequalities are satisfied. The corresponding HJB equation can then be written down; in the limit T → ∞ the problem simplifies, but still remains very complex.
Mathematical Formulation: Simplified Optimization Problem
Instead of dealing with several independent variables, X, ..., Y, we concentrate on the equity portion of the capital structure, E, which follows an effective evolution equation in which µ is the accumulation rate, d is the dividend rate, which we wish to optimize, σ is the volatility of earnings, W is a Brownian motion, N_{1,2} are two independent Poisson processes with frequencies λ_{1,2}, and J_{1,2} are exponentially distributed jumps, J_i ∼ δ_i exp(−δ_i j). The choice of jump-diffusion dynamics with two independent Poisson drivers reflects the fact that the growth of the bank's equity is determined by retained profits, which are governed by an arithmetic Brownian motion, and negatively affected by two types of jumps, namely more frequent (but slightly less dangerous, due to potential actions of the central bank) liquidity jumps represented by N_2, and less frequent (but much more dangerous) solvency jumps represented by N_1. Accordingly, λ_2 > λ_1 and δ_1 < δ_2. Below we assume that the dividend rate is potentially unlimited, so that a lump sum can be paid instantaneously. A similar problem with just one source of jumps has been considered in the context of an insurance company interested in maximizing its dividend pay-outs (see, e.g., Taksar 2000 and Belhaj 2010 and references therein). The bank defaults when E crosses zero. We shall see shortly that it is optimal for the bank not to pay any dividends until E reaches a certain optimal level E*, and, when this level is reached, to pay all the excess equity in dividends at once. With all the specifics in mind, the dividend optimization problem (140) can be formulated mathematically as Eq. (143) supplemented with terminal and boundary conditions (144)-(145), which is equivalent to solving the variational inequality (146) augmented with conditions (144), (145), rewritten in generic notation as Eqs (147)-(148). The solution V(t, E) of this variational inequality cannot be computed analytically and has to be determined numerically. To this end, we use the method proposed by Lipton (2003) and replace the variational inequality in question by one formulated in terms of τ = T − t. The corresponding problem is solved in a relatively straightforward way by computing the integral terms I_i and performing the operation max{., .} explicitly, while calculating V in the usual Crank-Nicolson manner. The corresponding solution is shown in Figure 12. For the T → ∞ limit, the time-independent maximization problem takes a stationary form in which E* is not known in advance and has to be determined as part of the calculation.
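Before turning to the analytical treatment, the barrier strategy itself is easy to illustrate by Monte Carlo: equity follows an arithmetic Brownian motion with two exponential jump components, no dividends are paid below a barrier E*, all excess above E* is paid out as a lump sum, and the bank stops at default. The parameter values and the barrier level in the sketch below are assumptions (in the full treatment E* is determined from the optimality conditions).

```python
import numpy as np

rng = np.random.default_rng(3)

mu, sigma = 0.25, 0.40
lam1, delta1 = 0.05, 1.0        # rare but large solvency jumps (mean size 1/delta1)
lam2, delta2 = 0.30, 4.0        # more frequent but smaller liquidity jumps (mean size 1/delta2)
R = 0.05                        # discount rate
E_star = 2.0                    # payout barrier (assumed here; optimized in the full treatment)

T, dt, n_paths = 10.0, 1.0 / 252, 5_000
steps = int(T / dt)

def discounted_dividends(E0):
    E = np.full(n_paths, E0)
    alive = np.ones(n_paths, dtype=bool)
    value = np.zeros(n_paths)
    for k in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        J1 = rng.exponential(1.0 / delta1, n_paths) * (rng.random(n_paths) < lam1 * dt)
        J2 = rng.exponential(1.0 / delta2, n_paths) * (rng.random(n_paths) < lam2 * dt)
        E = np.where(alive, E + mu * dt + sigma * dW - J1 - J2, E)
        alive &= E > 0.0                                  # default once equity crosses zero
        payout = np.where(alive & (E > E_star), E - E_star, 0.0)
        value += np.exp(-R * k * dt) * payout             # discounted lump-sum dividends
        E -= payout
    return value.mean()

print("estimated shareholder value at E0 = 1.0:", round(discounted_dividends(1.0), 3))
```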
It turns out that the time-independent problem can be solved analytically. Since we are dealing with a Lévy process, the problem can be written in terms of Ψ(ξ), the symbol of the pseudo-differential operator L. Denote by ξ_j, j = 1, ..., 4, the roots of the corresponding (polynomial) equation. The function Ψ(ξ) for a representative set of parameters is exhibited in Figure 13, which clearly shows that all roots of Eq. (156) are real.
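Assuming the Lévy symbol takes the standard form for an arithmetic jump-diffusion with two independent exponential downward jumps (an assumption made here purely for illustration; the paper's exact expression is not reproduced), multiplying Ψ(ξ) − R by (δ_1 + ξ)(δ_2 + ξ) yields a quartic whose four roots can be found numerically:

```python
import numpy as np

# Assumed symbol of an arithmetic jump-diffusion with two exponential downward jumps:
# Psi(xi) = 0.5 sigma^2 xi^2 + mu xi + lam1 (delta1/(delta1+xi) - 1) + lam2 (delta2/(delta2+xi) - 1)

mu, sigma = 0.25, 0.40
lam1, delta1 = 0.05, 1.0
lam2, delta2 = 0.30, 4.0
R = 0.05

# Multiplying (Psi(xi) - R) by (delta1 + xi)(delta2 + xi) gives a degree-4 polynomial in xi.
base = np.poly1d([0.5 * sigma**2, mu, -(lam1 + lam2 + R)])
pole1 = np.poly1d([1.0, delta1])
pole2 = np.poly1d([1.0, delta2])
quartic = base * pole1 * pole2 + lam1 * delta1 * pole2 + lam2 * delta2 * pole1

roots = np.sort(quartic.r)                 # for parameters of this kind all four roots are real
print("roots of Psi(xi) = R:", roots)
```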
Then a linear combination of the exponentials exp(ξ_j E) solves the pricing problem and the boundary conditions (153), provided that conditions (158) hold. Eqs (158) should be thought of as a system of five equations for five unknowns, namely (C_1, C_2, C_3, C_4) and E*. The corresponding profile V(E) is presented in Figure 14. This graph shows that on the interval [0, E*) we have V_E > 1. Accordingly, the coefficient (1 − V_E) in front of d in Eq. (143) is negative, so that the optimal d has to be zero. To put it differently, it is optimal for the bank not to pay any dividends until E reaches the optimal level E*. On the interval (E*, ∞) we have V_E = 1, so that d is not determined. However, this is not particularly important, since when E exceeds the optimal level E* it is optimal to pay all the excess equity in dividends. This situation occurs because we allow for an infinite dividend rate, and hence lump-sum payments. When d is bounded, the corresponding optimization problem is somewhat different, but can still be solved along similar lines.
Comparison of Figures 14(a) and 14(b) shows that V (E) is an excellent approximation for V (T, E) for longer maturities T .
Conclusions
In this paper we proposed a simple and consistent theory that enables one to examine the banking system at three levels of granularity: as a whole; as an interconnected collection of banks with mutual liabilities; and, finally, as an individual bank. We demonstrated that the banking system plays a pivotal role in the monetary circuit context and is necessary for the success of the economy. Even in a relatively simple context we gained some nontrivial insights into money creation by banks and its consequences, including naturally occurring interbank linkages, as well as the role of the multiple constraints banks operate under.
A consistent quantitative description of the monetary circuit in continuous time became possible after the introduction of stochastic consumption by rentiers into the model, which enabled us to reconcile the equations with economic reality. We built a quantitative description of the monetary circuit that can be calibrated to real macroeconomic data and solved mathematically. The developed framework can be further expanded by adding various sectors of the economy. It is clear that more advanced models will naturally provide deeper actionable insights, which can be used for a variety of purposes, such as setting monetary policy, positioning banks for responsible growth, and macro investing.
At the top level, we considered the banking system as a whole, thereby disguising the structure of the banking sector and precluding the investigation of defaults within it. In the aftermath of the crisis of 2007-2009, it is hard to overestimate the importance of a quantitative approach that enables the description of a possible chain of events in the interconnected banking system. Hence, we expanded our analysis to the intermediate level and demonstrated how the asset-liability balancing act creates nontrivial linkages between various banks. We used techniques developed for credit default pricing to show that these linkages can cause unexpected instabilities in the overall system. Our model can be expanded in several directions, for instance by incorporating interbank derivatives, such as swaps, into the picture. It can provide insights into snowball effects associated with multiple simultaneous (or almost simultaneous) defaults in the banking system.
Finally, viewed at the bottom level, banks, like all other corporations, have a fiduciary obligation to responsibly maximize their profitability. Given the specifics of the banking business, such maximization of profitability is intrinsically linked to balance sheet optimization, which is used to choose an optimal mix of assets and liabilities. We formulated the constrained optimization problem in the most general case, as well as a reduced version in the specific case of the equity part of the capital structure. Although simplified, the reduced problem still includes such salient elements of the equity dynamics as liquidity and solvency jumps. We then proposed a scheme to efficiently solve the corresponding constrained optimization problem.
We hope that our theory of MMC will stimulate further research along the lines suggested in the paper and, in particular, will help to predict future economic crises, which arise naturally within the proposed framework.
Appendix A
To make our calculations in Section 6 more concrete, let us consider the case of just two banks with mutual obligations without netting, N = 2. Additional details can be found in Itkin and Lipton (2015b).
For 0 < t < T the default boundaries have the form described above, where ī = 3 − i. In the (A_1, A_2) quadrant we have four domains, where δ_{i,j} is the Kronecker delta. It is clear that in D(1, 1) both banks survive, in D(1, 0) the first bank survives and the second defaults, in D(0, 1) the second bank survives and the first defaults, and in D(0, 0) both banks default. The corresponding domains are shown in Figure 15(a). In log coordinates the domain D_i has a curvilinear boundary which depends on the value of A_i. The corresponding domains are shown in Figure 15(b). Payoffs for different options are as follows: the joint survival probability, the marginal survival probabilities, the CDSs on the first and second banks, whose payoffs involve coefficients κ_i determined from the detailed balance equations, and, finally, the FTD. For brevity, we consider just the calculation of the joint and marginal survival probabilities. The joint survival probability Q(t, X_1, X_2) solves the terminal boundary value problem Q_t(t, X_1, X_2) + LQ(t, X_1, X_2) = 0, Q(T, X_1, X_2) = 1_{X∈D(1,1)}, with appropriate boundary conditions. The corresponding marginal survival probability for the first bank, say, Q_1(t, X_1, X_2), which is a function of both X_1 and X_2, solves an analogous terminal boundary value problem. Here q_1(t, X_1) is the 1D survival probability, which solves its own terminal boundary value problem. The corresponding 2D Green's function has a known form (see, e.g., Lipton 2001, Lipton and Savescu 2014), where C = ((1, ρ), (ρ, 1)) is the correlation matrix. Substitution of these formulas into Eqs (112), (113) yields semi-analytical expressions for Q and Q_1. The corresponding expression for Q_2 is similar.
Differential regulation of oxidative stress, microbiota-derived, and energy metabolites in the mouse brain during sleep

Sleep has evolved as a universal core function to allow for restorative biological processes. Detailed knowledge of metabolic changes necessary for the sleep state in the brain is missing. Herein, we have performed an in-depth metabolic analysis of four mouse brain regions and uncovered region-specific circadian variations. Metabolites linked to oxidative stress were altered during sleep including acylcarnitines, hydroxylated fatty acids, phenolic compounds, and thiol-containing metabolites. These findings provide molecular evidence of a significant metabolic shift of the brain energy metabolism. Specific alterations were observed for brain metabolites that have previously not been associated with a circadian function including the microbiome-derived metabolite ergothioneine that suggests a regulatory function. The pseudopeptide β-citryl-glutamate has been linked to brain development and we have now discovered a previously unknown regioisomer. These metabolites altered by the circadian rhythm represent the foundation for hypothesis-driven studies of the underlying metabolic processes and their function.
Sleep has evolved as a universal core function to allow for restorative biological processes. Detailed knowledge of metabolic changes necessary for the sleep state in the brain is missing. Herein, we have performed an in-depth metabolic analysis of four mouse brain regions and uncovered region-specific circadian variations. Metabolites linked to oxidative stress were altered during sleep including acylcarnitines, hydroxylated fatty acids, phenolic compounds, and thiol-containing metabolites. These findings provide molecular evidence of a significant metabolic shift of the brain energy metabolism. Specific alterations were observed for brain metabolites that have previously not been associated with a circadian function including the microbiome-derived metabolite ergothioneine that suggests a regulatory function. The pseudopeptide β-citryl-glutamate has been linked to brain development and we have now discovered a previously unknown regioisomer. These metabolites altered by the circadian rhythm represent the foundation for hypothesis-driven studies of the underlying metabolic processes and their function.
Introduction
The impact of sleep on brain homeostasis and healthy neuronal activity is an increasingly investigated field in neuroscience. While sleep is a biological state that can increase vulnerability to external threats, for example through limited defense against predation, it is highly conserved among species. Despite this evolutionary disadvantage, sleep has an essential role in brain physiology. 1 While this role is not yet fully understood, sleep has been associated with restorative functions in the brain and memory consolidation. Several sleep deprivation studies have further demonstrated the detrimental effects of reduced sleep on cognitive performance. 2 Furthermore, the recent glymphatic hypothesis has shown the importance of sleep for the clearance of potentially neurotoxic compounds from the central nervous system (CNS). [3][4][5] It is hypothesized that some molecules accumulate during the awake phase. In addition, it has been demonstrated that instead of an energetic "shut down", the brain performs a shift in metabolic processes during sleep. 6 Specifically, the relationship between glucose consumption and oxidative metabolism in sleep is significantly different compared to wakefulness. 6,7 While many studies have focused on the profoundly altered neuronal activity during sleep, comprehensive metabolic profiles of the sleeping and conscious brain are limited. Such studies are crucial to provide insights into the metabolic function and underlying mechanisms during sleep, such as changes in energy consumption or oxidative processes.
Metabolomics is a powerful tool to reveal molecular processes that are prominent during the sleep state. The brain is constituted of highly inhomogeneous regions that are anatomically and functionally distinct. Comparative analysis of multiple brain regions can provide a comprehensive understanding of complex brain functions, and only a limited number of such studies have been reported. 8,9 Our method of choice for investigating metabolites was mass spectrometry-based untargeted metabolomics, which has been extensively applied in elucidating metabolic processes in different (patho)physiological states. 10 In sleep research, metabolomics studies have highlighted small molecules such as tricarboxylic acid (TCA) cycle intermediates, methionine metabolism, other amino acids, and fatty acids as significantly altered depending on the arousal state. 11 These studies mainly focused on circulating plasma metabolites as well as sleep deprivation, and thus not on normal sleep. 11,12 However, systemic concentration changes in plasma are not representative of the CNS status, as the brain entry and clearance of metabolites are regulated by a highly restrictive formation of the brain capillary endothelia, the blood-brain barrier (BBB). 13,14 In the present study, global metabolomics using ultra-performance liquid chromatography coupled with tandem mass spectrometry (UPLC-MS/MS) was applied to uncover sleep-induced metabolic changes in the four major mouse brain regions: cortex, hippocampus, midbrain, and cerebellum. Our findings demonstrate the regional effect on multiple metabolite classes altered during sleep, including acylcarnitines, amino acids and their modifications, and nucleotides, among other metabolic intermediates. These findings were further corroborated by global metabolomics analysis of plasma and liver tissue samples from the same animals to identify systemic changes. The comprehensive results from this study build the basis for future mechanistic studies of sleep, metabolite clearance, and the mapping of metabolic processes in the brain.
Chemicals
Solvents and reagents were purchased from Sigma-Aldrich or Fisher Scientific and were used without further purification. All synthesized compounds were ≥95% pure as determined by NMR. NMR spectra were recorded on an Agilent 400 MHz spectrometer (¹H NMR: 399.97 MHz and ¹³C NMR: 100.58 MHz). Chemical shifts are reported in parts per million (ppm) on the δ scale relative to an internal standard. Multiplicities are abbreviated as follows: s = singlet, d = doublet, t = triplet, q = quartet, and m = multiplet. Authentic standards were also purchased from Sigma-Aldrich or Fisher Scientific. The in-house built metabolite library was obtained from MetaSci. Mass spectrometry grade solvents were used for UPLC-ESI-MS analysis.
Animal experiments
Adult male C57BL/6 mice (Janvier Labs) were housed in standard laboratory conditions with a 12-hour dark-light cycle and ad libitum access to water and food. All experiments were performed according to ethical approval from the Malmö-Lund Ethical Committee on Animal Research (Dnr 5.8.18-08269/2019) and conducted according to the CODEX guidelines by the Swedish Research Council, Directive 2010/63/EU of the European Parliament on the protection of animals used for scientific purposes, and Regulation (EU) 2019/1010 on the alignment of reporting obligations. This study complies with the ARRIVE guidelines. 15 Animals were sacrificed by cervical dislocation at either 10 AM or 10 PM and subsequently decapitated, at an age of 8 weeks. Trunk blood was collected and plasma samples were prepared by centrifugation. Brains were rapidly extracted (<2 minutes) after dislocation and dissected to isolate the cortex, hippocampus, midbrain, and cerebellum. Brain regions were then placed in Eppendorf tubes and snap frozen in liquid nitrogen at −80 °C. Liver tissue was extracted from the same animals and snap frozen in liquid nitrogen at −80 °C.
Tissue processing and sample preparation
The brain regions were weighed and transferred into bead-containing vials. Methanol:water (80:20) was added to every sample at approximately 4 μL/mg brain tissue. As internal standard (I.S.), a mixture of ¹³C isotopically labeled tyrosine (5 mg/mL), phenylalanine (10 mg/mL), and valine (30 mg/mL) was used. The volume of the I.S. mixture added to every sample was adjusted according to the corresponding sample weight, with a minimum of 10 μL for the lowest-weight sample. Homogenization was performed in a Lysing Matrix D instrument (MP Biomedicals) on dry ice, with a cycle of 20 s shaking (4 m/s) and 30 s pause, performed three times. Samples were collected, precipitated on ice for 1 h, and centrifuged at 13,400 rpm for 5 min. The supernatant was collected, dried under vacuum on a Speedvac, and subsequently stored at −20 °C for a maximum of three days prior to analysis. Samples were re-suspended in water:acetonitrile (95:5) prior to UPLC-MS/MS analysis, at a volume normalized to the sample weight. Quality control (QC) samples were prepared from 5 μL aliquots of all samples. The same process was followed for the preparation of the liver samples, which were analyzed via UPLC-MS in a separate sequence.
For the preparation of plasma samples, the same I.S. mixture was added to each plasma aliquot (50 μL). QC samples were prepared from 5 μL aliquots of all samples after thawing on ice. Sample preparation was performed on ice. LC-MS grade methanol (1:4 ratio of sample:methanol) was added and the mixture was vortexed before being cooled to −20 °C for 1 hour. The samples were then centrifuged (5 min, 18,620 × g, 4 °C) and the supernatant was isolated and lyophilized on a Speedvac. The samples were stored at −20 °C until the UPLC-MS analysis. The samples were reconstituted in 50 μL water:acetonitrile (95:5) and centrifuged again (5 min, 18,620 × g, 4 °C). The supernatant was transferred to LC-MS vials.
UPLC mass spectrometry
The UPLC-MS/MS analysis was performed in a SYNAPT G2-S High-Definition Mass Spectrometer (HDMS) using an electrospray ionization (ESI) source with an AQCUITY UPLC I-class system and equipped with a Waters ACQUITY UPLC V R HSS T3 column (1.8 mm, 100 Â 2.1 mm). Water with 0.1% formic acid was used as mobile phase A and methanol with 0.1% formic acid was used as mobile phase B. The column temperature was kept at 40 C, and the autosampler at 6 C. The flow rate was set to 0.2 mL/min. The gradient used was as follows: 0-8.5 min, 0-100% B; 8.5-10 min, 100% B; 10-11 min,100-0% B; 10-15 min, 0% B.
The system was controlled using the MassLynx software package v 4.1 from Waters. High-resolution mass spectra were acquired in positive and negative ionization mode, at a mass range of m/z 50-1500. Data acquisition was performed in MSE mode. The samples were injected to the UPLC-MS system in a randomized order with QC samples injected in the beginning and end of the sample list in both ionization modes, as well as after every eight samples (7 QCs in each ionization mode in total).
Identification of metabolites
Significant features and molecules of interest were primarily annotated using databases (www.hmdb.ca, https://metlin.scripps.edu/) based on their m/z values, given the high mass accuracy provided by the mass analyzer.

Subsequently, an in-house built standard library or purchased standards, measured on the same UPLC-MS/MS system, were used for the assignment of retention times (rt). Finally, tandem MS experiments were performed on brain tissue samples in positive or negative ionization mode with CID of 10-30 eV, depending on the analyte, and the product ion spectra were compared to those of the corresponding standards.
Data analysis
The chromatograms and mass spectra were processed using the XCMS R package for peak alignment and retention time correction, 16,17 in both positive and negative ionization mode. From the corresponding feature lists obtained from the software, features with intensities > 20,000 ion counts, rt > 1 min, and %CV of the QCs < 30 were selected for further statistical analysis, as these are considerably higher than noise. The final data included 17,297 features in positive and 7,578 features in negative ionization mode, measured in four different brain regions (CBL, CTX, HC, and MDB) of both groups (sleep-wake, N = 6 per group). The intensities of the included internal standards and the QC samples were plotted against the UPLC-MS/MS sample injection order to evaluate the stability and performance of the experimental set over time.

An overview of the data was provided by principal component analysis (PCA), prior to which the data were autoscaled using the metabolomics platform www.metaboanalyst.ca. The normality of the test statistics and P values was evaluated using the same platform, and the data were distributed normally. For hypothesis testing, a two-tailed t-test was applied to metabolites extracted from every region (CBL, CTX, HC, MDB) to detect consciousness-state (sleep-wake) differences. The same approach was followed for the plasma and liver sample analysis. Owing to the large number of imported features, the significance of the results was cross-validated with a two-way ANOVA (factors: brain region, sleep/wake) with adjustment for multiple comparisons (Supplementary Table 8).
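For illustration, the filtering and testing criteria described above can be sketched on a generic feature table as follows; the synthetic data, column names, and wiring of the thresholds are assumptions, and the actual analysis was carried out with XCMS and MetaboAnalyst as stated.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n_features = 200
samples = [f"sleep_{i}" for i in range(6)] + [f"wake_{i}" for i in range(6)]
qcs = [f"QC_{i}" for i in range(7)]

# Synthetic feature table: rows are features, columns are samples and QC injections
table = pd.DataFrame(rng.lognormal(mean=11, sigma=0.5, size=(n_features, len(samples) + len(qcs))),
                     columns=samples + qcs)
table["rt"] = rng.uniform(0.2, 14.0, n_features)       # retention time in minutes

# Filtering: mean intensity > 20,000 counts, rt > 1 min, %CV of the QC injections < 30
mean_int = table[samples].mean(axis=1)
qc_cv = 100 * table[qcs].std(axis=1) / table[qcs].mean(axis=1)
kept = table[(mean_int > 20_000) & (table["rt"] > 1.0) & (qc_cv < 30)]

# Autoscaling (per feature: mean-centre, divide by standard deviation), as done prior to PCA
X = kept[samples]
scaled = X.sub(X.mean(axis=1), axis=0).div(X.std(axis=1), axis=0)

# Two-tailed t-test per feature between the sleep and wake groups
sleep_cols = [c for c in samples if c.startswith("sleep")]
wake_cols = [c for c in samples if c.startswith("wake")]
t, p = stats.ttest_ind(kept[sleep_cols], kept[wake_cols], axis=1)
print(f"{len(kept)} features kept; {(p < 0.05).sum()} nominally significant at p < 0.05")
```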
Synthesis of β-citryl-glutamate (1)

Synthesis of 1,5-dimethyl citrate (3): 1,5-dimethyl citrate was prepared following a literature procedure. 18 Citric acid (10.0 g, 52.1 mmol) was dissolved in methanol (100 mL), followed by slow addition of 98% sulfuric acid (1 mL). The mixture was refluxed for 1 h and then allowed to cool to room temperature before cold water (50 mL) was added under stirring. The solution was neutralized with calcium carbonate. The suspension was filtered and the filtrate evaporated to dryness in vacuo. This yielded colorless crystals, which were redissolved in cold water (100 mL) and filtered to remove traces of insoluble salts, and the filtrate was acidified to pH 4.5 with a 5 M hydrochloric acid solution. The resulting white precipitate was filtered off to yield the desired product (3.50 g, 15.9 mmol, 31%).
Regional brain metabolomics
Brain region-specific metabolomics was conducted to identify regional differences in metabolism at two distinct states of the circadian rhythm. For our study, mice were sacrificed 3 hours into the light and dark phase, corresponding to their subjective night (referred to as the sleep state) and day (referred to as the awake state), respectively (N = 6 in each group; Figure 1(a)). Afterwards, the brain regions collected for the analysis were the cortex (CTX), hippocampus (HC), midbrain (MDB), and cerebellum (CBL) (Figure 1(b)). Plasma and liver samples were also collected for control analysis to identify systemic metabolite changes. Each brain region was weighed and then homogenized using ceramic beads. Quality control (QC) samples were prepared after homogenization (Supplementary Figure 1). Metabolites from each biological replicate of every examined brain region were extracted separately following standard procedures and spiking of an isotopically labelled internal standard. 19,20 The samples were analyzed by UPLC-MS/MS with a randomized sample sequence in negative and positive ionization mode. The volume during reconstitution was adjusted to the weight of each brain region and the samples were analyzed by UPLC-MS/MS (Supplementary Table 1).
The UPLC-MS data for the brain region samples were processed with R using the XCMS metabolomics framework. 17 To obtain a global overview of the data, the mass spectrometric features were considered that fit the following general criteria: retention time > 1 min and average intensity > 20,000 ion counts. Unsupervised principal component analysis (PCA) of the combined features from positive and negative ionization mode yielded separation of each distinct brain region as well as separation of the sleep and wake state for each brain region. The QC samples were centered as expected for high quality metabolomics data (Figure 1(c)). The highest variation was observed for the cortex compared with the hippocampus (PC1), while an anatomical differentiation among cerebrum (CTX and HC), midbrain (MDB) and cerebellum (CBL) was displayed (PC2). Furthermore, MDB and CBL clustered with high proximity, which could be a result from their anatomical interconnection (Supplementary Figure 2). Unsupervised analysis revealed strong separation of the sleep-wake samples based on the metabolic differences between the subjective day and night phase ( Figure 1(d)). The most significantly altered features for each brain region identified that cerebellum was associated with the most sleep-specific metabolic alterations, while the opposite was observed for the midbrain (Figure 1(e)).
As a proof-of-concept analysis, we investigated selected brain molecules that have previously been linked to brain functions. The region-specific responses associated with the sleep-wake cycle are highlighted for the explored brain regions of both groups and demonstrate similarities and differences of these selected metabolites in each brain region (Supplementary Figure 3). While most of these metabolites were identified with varying levels in different brain regions, they displayed the same levels in the sleep and awake sample set. We also observed statistically significant differences in single brain regions (Figure 1(f); Table 1). Aspartate, which is an excitatory amino acid, was elevated during the wake phase in CTX, while N-acetylmethionine was significantly depleted in MDB during sleep. Melanin is a downstream metabolic product of the dopamine pathway and was significantly elevated in the wakeful state in CBL and reduced in MDB. 21 Furthermore, the endogenous sleep-inducing molecule adenosine and its cyclic monophosphate derivative (cAMP) were also detected at higher levels during sleep in CTX and CBL (Figure 1(g)). Based on this finding, we investigated other analogues of adenosine including adenosine monophosphate (AMP), adenosine diphosphate (ADP) as well as guanosine and guanosine monophosphate, which did not show any variation across the sleep-wake cycle (Figure 1(g) and Supplementary Figure 3). Other down-stream products of the purinergic metabolism (e.g. xanthine and uric acid) were also unaltered. Purinergic signaling, including the release of adenosine triphosphate (ATP), is implicated in sleep and arousal regulation. [22][23][24] However, rapid degradation usually impedes the detection of ATP via LC-MS approaches. 25 Complete discrimination of the two sample sets was identified based on the top twenty metabolites (Supplementary Figures 3-9, Supplementary Tables 2-3). The reproducibility of this regional analysis of common metabolites lays the foundation to characterize unknown metabolic alterations specific for the sleep-wake cycle.
Cerebellar differences of the gut microbiome-derived metabolite ergothioneine
The microbiome-derived metabolite ergothioneine, 26 which is specifically produced by the gut microbiome, was found to be significantly increased during wakefulness in the cerebellum (Figure 2(a)). Ergothioneine also exhibited higher levels in the plasma of the wake-state mice compared to the sleep-state mice, which is expected as it is transported from the gut to the brain via the blood. 27-30 However, no significant differences between the two groups were detected for ergothioneine in the liver, suggesting that its enrichment in the brain is due to specific uptake. The chemical structure of ergothioneine was validated against an authentic standard through co-injection experiments and by comparison of the MS/MS spectra in CBL (Figure 2(b)). Ergothioneine contains a thiol moiety in equilibrium with the corresponding thione form, to which antioxidant properties have been attributed (Figure 2(c)). 31 The trimethylammonium moiety is shared with carnitine and other microbiome-derived metabolites. 32 Due to the metabolic importance of sulfur-containing metabolites, e.g. glutathione and S-adenosylmethionine (SAM), which are associated with diverse metabolic reactions including oxidation and methyl group transfer, a regional correlation analysis was performed (Pearson's coefficient r) (Figure 2).

Cerebral cortex elevation of an unknown isomer of β-citryl-glutamate (1) during sleep

One metabolite that was significantly increased during sleep was specific for CTX, where it was also present at the highest concentration (Figure 3(a)). The structure was initially assigned to the pseudopeptide β-citryl-glutamate (1) via MS/MS fragmentation comparison with databases. As a second isomer was identified that was reversely altered in CTX and showed an identical MS/MS fragmentation pattern, we chemically synthesized β-citryl-glutamate (1), as it is not commercially available (Figure 3(b), Supplementary Figures 11-12). The straightforward synthesis yielded this compound in high purity, and it was used for UPLC-MS/MS co-injection experiments. To our surprise, the major peak at 4.65 min, which is present in all tissue samples, was identical to the synthetic β-citryl-glutamate (1) (Figure 3(c) and (d)). Regioisomer A at 4.30 min has an identical MS/MS spectrum, and its structure remains to be elucidated (Supplementary Figure 13). No isomer of β-citryl-glutamate has been reported before, and the presence of this isomer solely in the cortex, with increased concentrations during sleep, suggests a regulatory function. β-Citryl-glutamate (1) is structurally similar to the most abundant dipeptide of the CNS, N-acetyl-aspartyl-glutamate (NAAG) (Figure 3). NAAG displayed a similar brain distribution to 1, in contrast to isomer A, while it was not significantly affected by the sleep/awake state. Furthermore, the amino acid derivative N-acetylaspartate displayed CTX-specific elevation during sleep (Figure 3, Supplementary Table 5). It is highly abundant in the brain and can serve as a precursor of NAAG. 33 These results suggest a distinct functional role of isomer A in the cortex.
Sleep-specific alterations of phenylalanine and tyrosine metabolism in the cerebral cortex
Phenyllactic acid and hydroxyphenyllactic acid are downstream products of phenylalanine and tyrosine metabolism. Both compounds were significantly elevated during sleep in the cerebral cortex, while phenylalanine and tyrosine themselves were unaltered (Figure 4(a), Supplementary Figures 15 and 16, Supplementary Table 6). Importantly, no significant differences were observed in the plasma samples. A similar trend was observed in CTX for homovanillic acid, the final metabolite of dopamine metabolism. All three carboxylic acid-containing metabolites were distributed similarly, with the highest levels in CTX. No correlation was observed for tyrosine, while the regional distribution was significantly similar for phenyllactic acid and hydroxyphenyllactic acid, as well as for phenylalanine and hydroxyphenyllactic acid (Figure 4(b)). The observed differences between the two arousal states were selective for the cortex for this pathway and have not been described previously.
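A minimal sketch of such a regional correlation follows, assuming hypothetical mean levels per region; SciPy's pearsonr stands in for whatever statistics software the authors used.

```python
# Sketch: Pearson correlation (r) between the regional distributions of two
# metabolites, using mean levels per brain region (values are hypothetical).
from scipy.stats import pearsonr

regions = ["CTX", "HC", "MDB", "CBL"]
phenyllactic = [5.2, 2.1, 1.4, 2.9]          # hypothetical mean intensities
hydroxyphenyllactic = [4.8, 2.4, 1.1, 2.5]   # hypothetical mean intensities

r, p = pearsonr(phenyllactic, hydroxyphenyllactic)
print(f"Pearson r = {r:.2f}, p = {p:.3f} across {len(regions)} regions")
```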
Carnitine shuttle during sleep
Another metabolite class that was significantly altered between the sleep and wake states was the acylated L-carnitine analogues. Medium-chain (e.g. 2-hydroxyhexanoylcarnitine, adipoyl/methylglutarylcarnitine, decenoylcarnitine and 3-hydroxydecanoylcarnitine) and long-chain acylcarnitines (e.g. hydroxydodecanoylcarnitine, tetradecadiencarnitine and 3-hydroxytetradecadiencarnitine) were detected at increased levels during sleep (Figure 5(a), Supplementary Figures 17 and 20, Supplementary Table 7). Interestingly, different acylated carnitine conjugates were significantly altered in particular brain regions, demonstrating brain region-dependent needs for specific fatty acids, as these compounds are part of the carnitine shuttle and fatty acid oxidation, key pathways in sleep-wake metabolism. While most of the acylcarnitines were upregulated during sleep, succinylcarnitine was significantly downregulated during sleep.
The highly abundant metabolite L-carnitine is involved in the mitochondrial β-oxidation of fatty acids (Figure 5(b)). Initially, the conjugation of fatty acids and L-carnitine occurs at the outer mitochondrial membrane and is catalyzed by the enzyme carnitine palmitoyltransferase 1 (CPT1). Subsequently, acylcarnitines cross to the inner mitochondrial membrane, where they are converted back to acyl-CoA and L-carnitine via CPT2. The released fatty acids are broken down through β-oxidation, and the catabolic products enter the TCA cycle.
Among these carnitine conjugates, we also detected several hydroxylated fatty acid-carnitine conjugates that were significantly increased during sleep. The hydroxylation site at position 2 or 3 was determined via MS/MS fragmentation through identification of specific product ions. 34 We also investigated the corresponding free hydroxylated fatty acids. One representative example is hydroxyoctadecanoic acid, which was significantly elevated during sleep in CTX and CBL (Figure 5(c), Supplementary Figure 21); here we considered the main signal of the specific extracted ion chromatogram. Importantly, 2-hydroxylated fatty acids are known moieties of cerebrosides and sulfatides, i.e. hexosylated ceramides, and are substantial components of myelin, the substance surrounding nerve cell axons (Figure 5(d)). These results demonstrate different fatty acid requirements in each brain region.
Systemic and hepatic sleep metabolomics
Plasma and liver samples were also collected from the same animals as controls to identify systemic metabolite changes (Figure 6, Supplementary Tables 9-10). No significant systemic alterations were detected for the altered brain metabolites reported above (Figure 6(a)). Detailed metabolomics analysis of hepatic tissue samples (Figure 6(b)), however, revealed a stronger effect of sleep on this highly metabolic organ compared to plasma. The liver is a highly circadian-regulated tissue and exhibited significant sleep-specific alterations in several metabolite classes, e.g. creatine, serotonin, choline and its metabolites α-glycerophosphocholine (α-GPC) and betaine, glutamate, and gluconate (Figure 6(c)). 35,36 These metabolites were not found to be altered in the four brain regions. Previously reported circadian-regulated metabolites such as SAM and numerous acylcarnitines were also subject to sleep-specific level changes (Figure 6(d)). 35 Interestingly, the liver levels of SAM were significantly higher during the sleep state, while the opposite effect was detected in CTX and CBL.
Together with creatine, a number of arginine metabolites displayed sleep-induced alterations that were not detected in the brain (Figure 6(e), Supplementary Figure 22). Arginine metabolism is highly localized in hepatic tissue and has also been reported as a "clock-regulated" metabolic pathway. 35,37 It should be mentioned that β-citryl-glutamate and its isomer A were detected neither in plasma nor in liver tissue. [Figure 6 caption, in part: (d) mass spectrometric intensities of significantly altered metabolites in liver tissue in the sleep and wake states (N = 6); (e) significantly altered metabolites of the arginine metabolic pathway; error bars: standard deviation (SD); two-tailed unpaired t-test: *P < 0.05, **P < 0.01, ***P < 0.001.]
Discussion
In the present study, metabolic differences between arousal states (sleep vs. wake) were explored in four regions of the mouse brain: cortex, hippocampus, midbrain, and cerebellum. This regional analysis provides, for the first time, comprehensive insights into the distinct metabolic changes during sleep, in contrast to previous whole-brain analyses. The examined regions were selected to cover different neuroanatomical and functional features of the CNS. Cortical areas are involved in executive and sensorimotor functions and are often a major focus of sleep investigations. The hippocampus constitutes a key region for memory processing and cognition, with memory consolidation being an important process during sleep. The midbrain includes monoaminergic and cholinergic nuclei that project neurotransmitters and orchestrate neural stimulation for movement. A region considerably neglected in sleep and circadian research is the cerebellum, which plays a crucial role in movement regulation. 38 Thus, each brain region has different metabolic needs, and examining these regions simultaneously enables the investigation of a wide range of metabolic processes. We identified that all examined brain regions were metabolically altered during sleep. Our analysis demonstrates highly significant alterations in all brain regions, depending on the compound class, with the majority of changes in the cortex and the cerebellum. In these regions, small signaling molecules, such as aspartate and metabolites of the widely studied purinergic pathway, exhibited significant alterations. 22 In contrast, the lowest extent of sleep-specific metabolic alterations was observed for the midbrain, which in our dissection included the circadian rhythm-related hypothalamic nuclei, i.e. the suprachiasmatic nucleus of the hypothalamus, potentially due to the lack of receptors for endogenous signals. 39 We investigated the hypothalamus as part of the midbrain because separate dissection was not feasible in mice due to their small size. Importantly, our systemic investigation of plasma and liver samples demonstrated that most of the metabolites described in this study showed localized differences specific to the brain. Furthermore, the investigation of hepatic tissue identified altered members of the arginine pathway, as previously reported.
The gut microbiome-derived metabolite ergothioneine was significantly reduced in the cerebellum during sleep. Ergothioneine is produced in the gut by bacteria of the genus Lactobacillus, which have also been reported to be under circadian regulation. 26,40 This metabolite is actively transported into neurons via the organic cation/carnitine transporter 1 (OCTN1), which is highly expressed in the cerebellum, where the metabolite has a stimulating effect. 28,41 The significantly reduced levels in the cerebellum during sleep could be due to reduced neuronal cerebellar activity or to sleep-induced alterations of the gut-brain axis, which suggests a functional role. The significantly increased levels during wakefulness could relate to its antioxidant and neuroprotective properties in quenching reactive oxygen species (ROS) derived from higher neuronal activity. 26 This is of particular importance, as specific brain functions are known for similar gut-brain axis metabolites that are linked to carnitine-mediated fatty acid oxidation. 32 The absence of significant circadian regulation of ergothioneine in the liver further supports a potential function of this molecule in the cerebellum, highlighting the importance of the gut-brain axis.
Increased cortical levels of β-citryl-glutamate have been described in the newborn cortex, which led to the discovery of the molecule in the brain. 42 Since its discovery, this metabolite and its function in the brain have not been investigated intensively, having merely been described as potentially implicated in depression. 43 Interestingly, the significantly higher levels of β-citryl-glutamate in the brain after birth were related to metabolic shifts from glucose utilization to oxidative processes. The discovery of a previously unknown regioisomer of β-citryl-glutamate (isomer A), which is specifically upregulated in the cortex during sleep, suggests additional functions and importance of these modified dipeptides in the brain and its development. We have additionally revealed, for the first time, differing quantities in different brain regions.
We also detected increased levels of phenyllactic acid, hydroxyphenyllactic acid and homovanillic acid specifically in the cortex during sleep. These metabolites are end products of phenylalanine and tyrosine metabolism. Phenyllactic acid and hydroxyphenyllactic acid have been described as having antioxidant properties, decreasing ROS production in both mitochondria and neutrophils. 44 In addition, the phenolic moiety in hydroxyphenyllactic and homovanillic acid has been associated with neurological disorders such as schizophrenia and autism. 45 Homovanillic acid is the final product of dopamine metabolism, and elevated levels of this metabolite may correlate with increased dopamine turnover. 46 The formation of homovanillic acid is mediated by the enzyme monoamine oxidase, a marker of higher oxidative processes. 47 A shift towards fatty acid metabolism, exemplified by elevated acylcarnitines and β-oxidation, represents an additional important finding strengthening the evidence for an oxidative energetic shift during sleep. It has previously been reported that sleep is associated with a significant decline in cerebral glucose metabolism. 6,7,48,49 Our results now reveal increased levels of hydroxylated long-chain fatty acids as carnitine conjugates during sleep. This metabolic shift can be attributed to the higher energy yield of fatty acid oxidation compared to carbohydrate metabolism. β-Oxidation is unfavorable for the brain during wakefulness, with its higher energy demands, due to the generation of large amounts of neurotoxic ROS. 50 This process compensates for heat and water loss in tissue during sleep, which leads to an electrolyte imbalance. 6 Regional differences in the altered acylcarnitines can be linked to cellular variations in fatty acid oxidation, with astrocytes as the predominant sites of this reaction. Overall astrocyte densities differ substantially per brain region, with the cortex, hippocampus and cerebellum exhibiting approximate densities of 29,500, 14,500 and 1,500 cells/mm2, respectively. 51 Therefore, differences in cell density among the examined regions can reflect distinct metabolic requirements with respect to fatty acids of different oxidation states and chain lengths.
Although acylcarnitines have been strongly associated with the β-oxidation of fatty acids, their roles and functions in the brain extend beyond this catabolic reaction. 52 Acylcarnitines have also been established as biomarkers of mitochondrial function and as important factors in ketosis. Moreover, this compound class is involved in neuroprotection and in the enhancement of cholinergic function and the synthesis of acetylcholine. 53 Long-chain acylcarnitines, such as palmitoylcarnitine, can participate in the synthesis of complex lipids related to neural membranes and signal transduction. In addition, this increase in lipid content can be related to higher myelination during sleep, a process required for the maintenance of neural connections. Genome-wide profiling of oligodendrocytes after sleep demonstrated increased expression of genes involved in the promotion of oligodendrocyte precursor cell proliferation, lipid synthesis and myelination. 54 This is further supported by the increased levels of hydroxylated acylcarnitines in our study. These 2-hydroxylated fatty acids are major components of the sphingolipids cerebrosides and sulfatides, which form the myelin sheath of neural axons. 55 In conclusion, our results provide, for the first time, detailed metabolic profiles of four major brain regions during the sleep and wake states of the mouse brain. The discovery of diverse compound classes linked to oxidative stress, changes in energy metabolism and gut-brain axis-derived metabolites during the subjective night provides new insights into metabolic regulation in the brain. The discovery of an unknown neuropeptide lays the foundation for future mechanistic and metabolic studies to understand brain metabolism and metabolic processes across the circadian rhythm. Our findings highlight the strong impact of the circadian rhythm on brain metabolism and demonstrate that brain region-specific neurochemical alterations exist that cannot be detected through the investigation of systemic or hepatic metabolic changes. As the majority of the metabolites described belong to metabolic pathways that are highly conserved among species, these findings are of high relevance for human brain metabolism as well.
The Association Between Preoperative Insulin-Like Growth Factor 1 Levels and the Total Body Weight Loss in Women Post Laparoscopic Sleeve Gastrectomy
Background Despite the well-described optimal initial clinical response of sleeve gastrectomy (SG) in the treatment of obesity, some patients do not achieve an optimal initial clinical response. Insulin-like growth factor-1 (IGF-1) has recently been shown to be associated with post-bariatric surgery weight loss. This study aimed to assess IGF-1 levels in female patients with obesity, their change after surgery, and their association with the metabolic profile and weight loss after surgery. Patients and methods This was a prospective study conducted on adult female patients who were recruited for SG. The patients underwent clinical and laboratory investigations that included IGF-1 measurement. At the 1-year follow-up, the same clinical and laboratory measures were repeated. Results This study included 100 female patients. At the 1-year follow-up, there was a statistically significant reduction in body mass index (BMI) (p < 0.001), HbA1c levels (p < 0.001), and triglycerides (p < 0.001), as well as a statistically significant increase in HDL (p < 0.001) and IGF-1 (p < 0.001). Multiple regression analysis revealed that, among the patients' baseline characteristics, the significant predictors of the percentage of total weight loss (%TWL) were the patients' BMI (p < 0.001) and IGF-1 levels (p < 0.001). The ROC curve showed that an IGF-1 cutoff value of ≤ 23 ng/ml detected suboptimal initial clinical response, with a sensitivity of 95.35% and a specificity of 100%. Conclusion This study underscores the significant impact of SG on weight loss and metabolic improvements in female patients. Baseline IGF-1 levels emerged as a crucial predictor of optimal initial clinical response. Graphical Abstract
Introduction
Obesity is associated with several health disorders, resulting in a reduction in the quality of life and overall life expectancy [1,2]. Achieving weight loss reduces the morbidity and mortality of patients with obesity [3]. Surgical treatment of obesity remains a reliable solution for patients with severe obesity to reduce their weight and improve the associated medical disorders [4].
This has resulted in a continuously growing number of bariatric surgeries performed annually all over the world [5]. Being a simple technique with promising safety and efficacy [6], sleeve gastrectomy (SG) is currently the most commonly performed bariatric surgery [5].
Despite the well-described optimal initial clinical response of SG in the treatment of obesity, some patients do not achieve an optimal initial clinical response after surgery and require revisional bariatric surgery [7]. The outcome of SG has been associated with various factors, including the patients' age, sex, body mass index (BMI), and obesity complications [8-14]. Among these factors, insulin-like growth factor-1 (IGF-1) has recently been linked to post-bariatric surgery weight loss [13]. IGF-1 is an anabolic hormone that enhances energy expenditure [15,16]. In addition, there is an intricate association between IGF-1 and growth hormone (GH), with the latter inducing the hepatic synthesis and secretion of the former [17]; circulating IGF-1 therefore reflects GH secretion status [18] and mediates its lipolytic effects [19,20]. It could therefore be proposed to be associated with weight loss following surgery.

Key points

• The study highlights the substantial impact of SG on weight loss and metabolic improvements among female patients with obesity.
• Baseline IGF-1 levels emerge as a crucial predictor of optimal initial clinical response, emphasizing the potential importance of this factor in assessing surgical outcomes.
• The study establishes an IGF-1 cutoff value of ≤ 23 ng/ml to detect suboptimal initial clinical response with high sensitivity (95.35%) and specificity (100%), providing a practical diagnostic tool.
However, data regarding the IGF-1 levels in patients with obesity and their change after surgery show conflicting results. Also, there remains scarce evidence regarding its potential effect on post-LSG weight loss.
The present study aimed to assess the IGF-1 levels in female patients with obesity, their change after surgery, and the association between these levels, the patients' metabolic profile, and weight loss after surgery.
Patients and Methods
This was a prospective clinical study conducted on consecutive patients scheduled for LSG at our hospital during the period from December 2021 to December 2022, after obtaining Research Ethics Committee approval. The study followed the Helsinki Declaration.
After a multidisciplinary evaluation, patients' eligibility for bariatric surgery was determined using criteria inspired by international societies concerned with obesity surgery [21-23]. Adult female patients who were recruited for LSG based on their own choice, after a thorough discussion with the surgeon, who presented the available options, were included in the study. Patients with previous bariatric surgery and those who did not complete the follow-up visits were excluded from the study. The included patients provided informed written consent before being enrolled in the study.
The included patients underwent clinical assessment, including anthropometric measurements, and laboratory investigations that included the measurement of fasting serum glucose, glycosylated hemoglobin (HbA1c), lipid profile, and IGF-1. IGF-1 was assessed with the Human Insulin-like Growth Factor 1 (IGF-1) ELISA Kit (SunLong Biotech Co., Ltd.) using the Statfax Chromate 4300 ELISA plate reader (USA). According to the manufacturer, normal IGF-1 levels range from 1.6 to 70 ng/mL.
The diagnostic criteria for diabetes are based on the measurement of blood glucose levels. According to the American Heart Association, a fasting level of 126 mg/dL or higher after an overnight fast of at least 8 h indicates diabetes mellitus [24]. The diagnostic criteria for hypertension are based on the measurement of blood pressure. According to the American Heart Association, hypertension is diagnosed when the systolic blood pressure (SBP) is 130 mm Hg or higher or the diastolic blood pressure (DBP) is 80 mm Hg or higher on two or more readings taken on two or more occasions [25]. The diagnostic criteria for dyslipidemia are based on established guidelines: a diagnosis of dyslipidemia can be made with a total cholesterol level of 240 mg/dL or higher, a low-density lipoprotein (LDL) cholesterol level of 160 mg/dL or higher, or a high-density lipoprotein (HDL) cholesterol level of less than 40 mg/dL in men or less than 50 mg/dL in women [26].
Laparoscopic sleeve gastrectomy was performed in a standardized fashion [27]. In summary, the patients underwent routine preoperative preparation, and the operation was conducted under general anesthesia. The required incisions were made and trocars were inserted. After induction of pneumoperitoneum, the sleeved stomach pouch was created over a 36-Fr bougie. The stomach was resected starting from about 3-4 cm proximal to the pyloric canal up to the angle of His. Hemostasis was ensured throughout the surgery, and leak testing was performed. The patients received routine postoperative care and a schedule of postoperative visits.
At the 1-year follow-up, the same clinical and laboratory measures taken preoperatively were repeated. The total weight loss percentage (%TWL) was calculated [28]. A %TWL of less than 20% at the 1-year follow-up was considered a suboptimal initial clinical response [29]. Resolution of the associated medical disorders was assessed based on the American Society for Metabolic and Bariatric Surgery (ASMBS) recommendations. Remission of diabetes was defined as HbA1c less than 6% and fasting serum glucose less than 100 mg/dL in the absence of antidiabetic medications. Remission of hypertension was defined as being normotensive (SBP < 120 mmHg and DBP < 80 mmHg). Remission of dyslipidemia was defined as LDL levels lower than 100 mg/dL, triglyceride levels lower than 150 mg/dL, and HDL levels higher than 40 mg/dL [30].
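A minimal sketch of the %TWL calculation and the suboptimal-response threshold is shown below, assuming the conventional definition %TWL = (baseline weight − follow-up weight) / baseline weight × 100; the example weights are hypothetical.

```python
# Sketch: percentage of total weight loss (%TWL) at 1 year and classification
# of suboptimal initial clinical response (%TWL < 20%).
def percent_twl(baseline_kg: float, followup_kg: float) -> float:
    return (baseline_kg - followup_kg) / baseline_kg * 100.0

def suboptimal_response(baseline_kg: float, followup_kg: float) -> bool:
    return percent_twl(baseline_kg, followup_kg) < 20.0

print(percent_twl(136.0, 95.0))          # ~30.1 %TWL -> optimal response
print(suboptimal_response(136.0, 120.0)) # ~11.8 %TWL -> True (suboptimal)
```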
Study Outcomes
The primary outcomes of the current work were the potential association of IGF-1 with the patients' weight status and the predictors of weight loss after surgery. The secondary outcomes were the short-term outcomes of LSG.
Statistical Analysis
The patients' data were analyzed using version 28 of the statistical software (SPSS, IBM Corp., Armonk, NY, USA).
The patients' data were expressed as numbers and percentages if categorical, or as means and standard deviations if numerical. An independent t-test and a paired t-test were used for comparison of numerical data, as appropriate. McNemar's test was used for paired comparison of categorical data. Pearson correlation analysis was used to test the correlation between numerical variables. To investigate the dynamic changes in IGF-1, the ΔIGF-1 (change from baseline) was calculated, and its correlation with both %TWL and the change in body mass index (ΔBMI) was explored. Multiple regression analysis was performed to assess the predictors of the 1-year postoperative %TWL. The ROC curve was used to obtain the optimum IGF-1 cutoff value for the prediction of suboptimal initial clinical response. The level of significance was set at a p-value of ≤ 0.05.
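A minimal sketch of the ROC step follows, assuming hypothetical IGF-1 values and outcome labels; scikit-learn's roc_curve plus a Youden-index rule stand in for whatever ROC software was actually used, so the printed AUC and cutoff are illustrative only (the study reports AUC = 0.975 and a cutoff of ≤ 23 ng/ml).

```python
# Sketch: ROC analysis for baseline IGF-1 as a predictor of suboptimal
# initial clinical response.  The arrays below are hypothetical; lower IGF-1
# is treated as "more positive" for the suboptimal class.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

suboptimal = np.array([1, 1, 1, 0, 0, 0, 0, 0])          # 1 = %TWL < 20%
igf1 = np.array([15.0, 20.0, 22.0, 30.0, 41.0, 55.0, 38.0, 47.0])

# Negate IGF-1 so that higher scores indicate the positive (suboptimal) class.
fpr, tpr, thr = roc_curve(suboptimal, -igf1)
auc = roc_auc_score(suboptimal, -igf1)

youden = tpr - fpr                 # Youden index J = sensitivity + specificity - 1
best = np.argmax(youden)
print(f"AUC = {auc:.3f}, optimal cutoff: IGF-1 <= {-thr[best]:.1f} ng/ml")
```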
Results
One hundred female patients who completed the follow-up period were included in this study. Their mean age was 36.98 ± 10.05 years. The mean baseline weight was 136.12 ± 22.86 kg, the mean baseline BMI was 49.68 ± 8.13 kg/m², and the mean baseline waist circumference (WC) was 111.64 ± 22.47 cm (Table 1). The patients' obesity complications and baseline laboratory findings are presented in Table 1.
Multiple regression analysis revealed that, among the patients' baseline characteristics, the significant predictors of %TWL were the patients' BMI (p < 0.001) and IGF-1 levels (p < 0.001). The effect of these parameters on %TWL was independent of the other patient parameters included in the analysis (Table 4).
Testing the correlation of %TWL with the 1-year follow-up characteristics revealed that only the IGF-1 levels showed a statistically significant positive association (p < 0.001).
ROC curve analysis of the discriminant ability of baseline IGF-1 in determining optimal 1-year weight loss showed that an IGF-1 cutoff value of ≤ 23 ng/ml had excellent discriminant power to detect suboptimal initial clinical response, with an AUC of 0.975, a sensitivity of 95.35%, and a specificity of 100% (Fig. 1).
Discussion
Sleeve gastrectomy is a well-established bariatric procedure for patients with obesity who cannot achieve sustained weight loss with lifestyle modification or medical treatment [31-33]. The effect of SG is not only through restriction of stomach volume; other endocrinological pathways have been proposed as main interplay factors in the process of weight loss and the improvement of obesity complications [34]. The role of IGF-1 as an important metabolic regulator that reflects the levels of GH and partially mediates its growth effects has recently emerged [13,35-38].
In this study, we investigated the effect of SG on the metabolic profile of one hundred Egyptian adult female patients with obesity and the potential role of IGF-1. We selected female patients to provide a homogeneous study population, allowing us to minimize the impact of gender-related variations on IGF-1 levels. While IGF-1 is a crucial growth factor that plays diverse roles in both males and females, the existing literature suggests that its circulating levels can vary between the sexes. Sex hormones, such as estrogen and testosterone, are known to influence IGF-1 production and regulation. Females tend to have different hormonal profiles than males, and these hormonal differences can contribute to variations in IGF-1 levels [39-41].
By focusing exclusively on female patients, we aimed to create a more homogeneous study population, reducing the confounding effects of gender-related hormonal fluctuations on IGF-1 measurements. This approach enhances the internal validity of our findings and allows a more focused investigation into the specific relationship between IGF-1 levels and weight loss outcomes in female patients undergoing sleeve gastrectomy.
The promising outcomes of LSG on weight loss and metabolic disorders are now well evidenced [31-33]. Our study emphasized this in terms of significant improvement in the lipid profile and glycemic control, with high rates of resolution of associated medical disorders, in addition to significant weight loss. This is supported by several previous studies that demonstrated meaningful weight loss and resolution of obesity-associated complications after SG [31-33, 42, 43].
The SG-associated improvement of metabolic parameters could be explained by the effect of reduced gastric volume on the amount of ingested food and by the subsequent weight loss, which has been shown to affect the patients' biochemical and metabolic parameters, including glucose levels and lipid profile [44]. However, other hormonal changes may also contribute to the metabolic effect after LSG [36,37,45].
Among the possible explanations, IGF-1 might have an eminent role. Both insulin and IGF-1 have hypoglycemic and anabolic effects achieved by binding to the IGF-1 receptor and/or the insulin receptor [46], giving IGF-1 its own metabolic actions in the regulation of insulin sensitivity, lipolysis, and proteolysis as part of the IGF-1/insulin system [47]. Additionally, through IGF-1-mediated GH effects, lipolysis is stimulated and free fatty acids are released from fatty tissue, most prominently visceral fat. Moreover, hepatic triglyceride storage is maintained, and fatty acid uptake by skeletal muscles is stimulated [48]. It has been described that "fine tuning" of the IGF-1 signaling cascade is critical for proper adipogenesis [47].
IGF-1 has shown variable levels in patients with obesity, as previously described [13,36,49,50]. In this study, despite being within the normal range, the preoperative IGF-1 levels were negatively correlated with the baseline WC, fasting glucose, and HbA1c levels. This elucidates the impact of abdominal obesity on IGF-1 levels and the subsequent disruption of glycemic control. The IGF-1 levels were also negatively correlated with the patients' age. This correlation has been described previously [51,52] and is explained by the reduction of GH levels in older individuals, GH secretion being nearly negligible in individuals aged above 60 years [52], with a subsequent decline in the hepatic production of IGF-1, which is regulated by GH.
In the present work, there was an evident elevation in IGF-1 levels at the post-surgery follow-up. This is consistent with previous studies that reported a significant elevation in IGF-1 levels after bariatric surgery [13,15,35,38,53]. Interestingly, Mittempergher et al. [54] reported that IGF-1 levels did not exhibit significant elevation after bypass procedures, despite being significantly increased after SG. This was explained by the bypass procedure-associated deficiency in nutrients, including protein, which is needed for the improvement in IGF-1 levels after surgery [15]. However, Mittempergher et al. [54] found no significant association between baseline IGF-1 and postoperative weight loss. (Fig. 1 caption: ROC curve of the diagnostic performance of IGF-1 for the prediction of postoperative suboptimal initial clinical response.)
Variable predictors of post-SG weight loss have been described among studies [8-14]. Scarce and conflicting evidence is available concerning the impact of IGF-1 levels on weight loss after SG [13,54]. In this study, there was a significant association between ΔIGF-1 and %TWL, pointing to a potential role of IGF-1 dynamics in influencing overall weight reduction. Moreover, after adjusting for the other confounders, BMI and IGF-1 were the baseline predictors of achieving an optimal initial clinical response. This result implies a significant role of IGF-1 in the weight-loss process.
In line with our study, Ohira et al. [13] found that IGF-1 levels were a predictor of post-bariatric weight loss. In the same context, Savastano et al. [55] found that the percentage of excess weight loss after surgery was significantly lower in patients with subnormal levels of IGF-1. The enhancing role of IGF-1 in weight loss after surgery could be explained by its anabolic role, which increases muscle mass, enhances energy expenditure, stimulates lipolysis, and regulates insulin sensitivity [15,56]. It is worth noting that the current study showed a positive correlation of GLP-1 levels with the WC, reflecting the GH status. This finding could align with the described finding that GH causes a significant reduction in the amount of subcutaneous and visceral fat when used for the treatment of abdominal obesity [57].
Similar to our findings, previous studies have observed a clear positive correlation between baseline BMI values and post-bariatric surgery weight loss [58-60].
While our results demonstrated a significant correlation between IGF-1 levels and various health parameters, including weight loss, it is crucial to consider the multifaceted nature of this relationship. It is imperative to recognize that correlation does not imply causation. The role of IGF-1 in metabolic regulation is complex and influenced by a myriad of factors, including nutritional status, hormonal balance, and physical activity levels [61]. Therefore, while our data suggest that IGF-1 levels may serve as a valuable biomarker in the context of weight loss, we cannot conclusively determine whether these levels are a direct causal factor, a mere indicator, or a consequence of weight loss and associated metabolic changes.
Furthermore, it is important to consider that patients with better weight loss outcomes often engage in more physical activity and have healthier dietary habits, which in turn could positively affect IGF-1 levels. This interplay highlights the importance of a holistic approach to understanding and interpreting the relationship between IGF-1 levels and weight loss. We acknowledge the need for further research to unravel the exact nature of this relationship. Future studies should aim to dissect the causal pathways and investigate how modifications in lifestyle factors could mediate the effects of IGF-1 on weight loss.
Investigating a cohort consisting exclusively of females, despite its potential limitation in generalizability, was a deliberate choice aimed at examining a more homogeneous population. This approach allows a focused analysis, minimizing the influence of gender-related variables that could confound the results. Another limitation of our study is the short-term follow-up period. Larger-scale studies with long-term follow-up are warranted to validate the clinical utility of baseline IGF-1 levels as a reliable marker of optimal initial clinical response. Furthermore, exploring the effectiveness of GLP-1 agonists in individuals with IGF-1 levels below a defined threshold presents an avenue for enhancing surgical outcomes.
Conclusion
This study underscores the significant impact of LSG on weight loss and metabolic improvements in female patients. Notably, baseline IGF-1 levels emerged as a crucial predictor of optimal initial clinical response, emphasizing their potential as a valuable marker in guiding clinical decisions and predicting surgical outcomes in this cohort. However, it is crucial to acknowledge that our results should be interpreted with caution due to the inherent limitations of our study. The association between IGF-1 levels and weight loss outcomes, while compelling, warrants further investigation to fully understand its nature and implications.
Table 2. Correlation between the baseline IGF-1 levels and the other baseline parameters.
Table 3. Comparison between the study patients according to the 1-year optimal initial clinical response.
Time-Multiplexed Measurements of Nonclassical Light at Telecom Wavelengths
We report the experimental reconstruction of the statistical properties of an ultrafast pulsed type-II parametric down conversion source in a periodically poled KTP waveguide at telecom wavelengths, with almost perfect photon-number correlations. We used a photon-number-resolving time-multiplexed detector based on a fiber-optical setup and a pair of avalanche photodiodes. By resorting to a germane data-pattern tomography, we assess the properties of the nonclassical light states with unprecedented precision.
Introduction.-Nonclassical states of light constitute an invaluable resource for deploying quantum-enhanced technologies as diverse as cryptography, computing, and metrology, to cite only some of the many relevant examples. Certifying signatures of nonclassicality generally requires inferring either the photon-number distribution or a quasiprobability distribution indirectly from a set of measurements. Even though the latter approach is well established [1] (it involves homodyne detection followed by an appropriate reconstruction scheme), photon counting seems a more natural choice in this discrete-variable scenario, in which photons are used as flying qubits.
Several strategies have been proposed thus far for photon-number-resolving (PNR) detectors. Single-photon avalanche diodes (SPADs) have become the prevailing option for PNR applications. Si-based SPADs constitute a relatively mature technology with several efficient devices commercially available, but they are only suitable for use at visible and near-infrared wavelengths. For experiments at the technologically important telecom wavelengths, the main contending technology is InGaAs SPADs, which are plagued by high dark-count rates and long dead times, thereby making gating essential.
A proposal to employ time-multiplexed detection (TMD) based on SPADs has recently been put forward [11-13]. These TMDs also work for pulsed light, and the photon-number distribution of a quantum state can be retrieved by inverting the measured statistics. Experimental applications demonstrating reliable loss calibration and the suitability of TMDs for detecting multimode statistics and nonclassicality have already been accomplished [14-19].
The effective implementation of these advanced schemes relies on a complete and accurate knowledge of the detector, an issue that has lately started to attract a good deal of attention [20-26]. The idea behind this is to employ the outcome statistics in response to a complete set of certified input states.
However, as shown in Ref. [27], if the measurement itself is of no interest, the costly detector calibration can be bypassed by using a direct fitting of data in terms of detector responses to input probes. Thus, state estimation is done without any prior knowledge of the measurement, avoiding unnecessary wasting of resources on evaluating the parameters of the setup [28,29].
In this Letter, we present a thorough application of this novel data-pattern tomography to TMDs. In this way, we provide a full account of the nonclassical properties of quantum states.
Experimental setup.-The states in our experiment are generated by type-II parametric down conversion (PDC) inside a periodically poled KTP waveguide. The PDC source produces decorrelated signal and idler states with a purity for heralded states above 80% and high coupling efficiency into single-mode fibers. The setup is the same as the one described in detail in Ref. [30] and sketched in Fig. 1.
Twin beams created in PDC are an archetypal example of highly correlated quantum states. Sub- and super-Poissonian photon statistics [31], antibunching [32], and quantum-correlated quadrature amplitudes [33] have been demonstrated.
Our TMD is also schematized in Fig. 1. Two incoming pulses are split into 16 temporal bins and impinge onto SPADs. Counting the clicks allows us to estimate photon numbers and photon-number correlations between the two input ports. Since we work at telecom wavelengths, we use InGaAs SPADs (Id Quantique id201 at a repetition rate of 1 MHz with a gate width of about 2.5 ns). As briefly mentioned before, InGaAs SPADs are the simplest and most cost-efficient detectors available at telecom wavelengths. However, they have some disadvantages: the detection efficiencies are below 25% and afterpulsing is present with a few percent probability [34]. Consequently, the conventional TMD model [11], which only takes into account the probabilistic splitting and overall losses, appears to be inadequate. A more sophisticated technique is required to recover photon statistics from the measured click frequencies; this is where data-pattern tomography comes into play.
The state is specified by the two-mode photon-number distribution P_mn, where the first (second) index refers to the signal (idler) mode. We also denote by p_αβ the probability of simultaneous signal (α) and idler (β) detection. Detections are thus described by 8-digit binary numbers, where a 0/1 value means no click/click in the corresponding time bin. For example, β = 00000011 denotes a simultaneous detection in the first two idler time bins. This gives 2^8 = 256 distinct single-mode events and 2^16 = 65,536 two-mode events to reckon with.
We adopt a linear model of the TMD detection,

p_αβ = Σ_{m,n < d} C_{αβ,mn} P_mn,   (1)

where d is the cutoff dimension required to accommodate the relevant parts of the signal and the idler, and the measurement matrix C provides a complete description of the TMD, including losses, detector efficiencies and afterpulsing effects. In a real experiment, we acquire the relative frequencies f_αβ after N random samples drawn from the multinomial distribution parametrized by p_αβ. Due to afterpulsing, it is not possible to factorize the detection matrix into signal and idler parts.
We also consider single-mode and heralded events; the former (latter) are simply marginal (conditional) probabilities of p_αβ. For these single-mode events, we look at the total number of clicks (either in the signal or the idler), without paying attention to the particular ordering of time bins. For example, for the signal-mode reconstruction, this reduction is readily done by summing the marginals f_α = Σ_β f_αβ of the data and patterns over the 8!/[(8 − k)! k!] different permutations of α with the same number k of nonzero binary digits.
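A minimal sketch of this reduction follows: summing the relative frequencies of all 8-bin patterns with the same number of clicks. The dictionary of pattern frequencies is hypothetical.

```python
# Sketch: reduce 8-bin TMD click patterns to click-number statistics by summing
# the frequencies of all patterns with the same number of clicks (Hamming weight).
# `f_alpha` maps an 8-character binary pattern, e.g. "00000011", to its relative
# frequency (hypothetical values shown).
from collections import defaultdict

f_alpha = {"00000000": 0.62, "00000001": 0.10, "00000010": 0.09,
           "00000011": 0.02, "00010001": 0.01}   # ... remaining patterns omitted

f_clicks = defaultdict(float)
for pattern, freq in f_alpha.items():
    k = pattern.count("1")          # number of clicks in this pattern
    f_clicks[k] += freq

print(dict(f_clicks))               # e.g. {0: 0.62, 1: 0.19, 2: 0.03}
```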
Fitting data patterns.-From the measured data f_αβ we have to determine the state P_mn. Standard detector tomography would proceed in two steps: first, a detector estimation, where the measurement matrix C_{αβ,mn} ≥ 0 is inferred from a set of calibration states; afterwards, the state P_mn ≥ 0 is reconstructed from the previously obtained detector matrix. However, this is not completely satisfactory: the details of the TMD are not of interest and, besides, the detector estimation is exceedingly costly, scaling as d^4, which makes the method impractical even for moderate values of the cutoff d.
The alternative data-pattern approach we adopt here expresses P_mn as a mixture of M linearly independent (generally nonorthogonal) two-mode coherent probes {P^(ξ)_mn},

P_mn = Σ_ξ x_ξ P^(ξ)_mn.   (2)

The responses f^(ξ)_αβ of the TMD to these coherent probes are called patterns. Then, by linearity, the data (i.e., the TMD response f_αβ to an unknown state P_mn) can be modeled in terms of patterns as

f_αβ ≈ Σ_ξ x_ξ f^(ξ)_αβ.   (3)

Hence, once the patterns and data are measured, the coefficients x_ξ can be inferred from Eq. (3) and the state reconstructed according to Eq. (2). To this end, a suitable convex measure of the distance between the left- and right-hand sides of Eq. (3) has to be minimized, subject to the physical constraints P_mn ≥ 0 and Σ_mn P_mn = 1: this is a quadratic program that can be efficiently solved [35]. Notice that in data-pattern tomography, the number of parameters M − 1 is independent of the probe cutoff dimension d. Also, if needed, a partial tomography of the unknown state can be performed by using only a small part of the patterns or any linear function of them (such as the value of the Wigner function at the origin) for the data fitting in Eq. (3).
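A minimal sketch of this fit as a constrained least-squares problem follows, using synthetic pattern and probe matrices; the sizes, the SLSQP solver, and the random placeholder data are illustrative choices, not the solver or dimensions used in the experiment.

```python
# Sketch: data-pattern fit as a constrained least-squares (quadratic) program.
# F (n_events x M) holds the measured patterns f^(xi), f the measured data,
# P_probe (d*d x M) the known photon statistics of the coherent probes.
# Constraints: the reconstructed state P = P_probe @ x must be >= 0 and sum to 1.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_events, M, dim = 256, 40, 36             # hypothetical sizes (dim = d*d, d = 6)
F = rng.random((n_events, M)); F /= F.sum(axis=0)
f = F @ rng.dirichlet(np.ones(M))          # synthetic "data" for illustration
P_probe = rng.random((dim, M)); P_probe /= P_probe.sum(axis=0)

objective = lambda x: np.sum((F @ x - f) ** 2)
constraints = [
    {"type": "eq", "fun": lambda x: (P_probe @ x).sum() - 1.0},   # sum_mn P_mn = 1
    {"type": "ineq", "fun": lambda x: P_probe @ x},               # P_mn >= 0
]
res = minimize(objective, x0=np.full(M, 1.0 / M), method="SLSQP",
               constraints=constraints)
P_reconstructed = (P_probe @ res.x).reshape(6, 6)   # two-mode P_mn with cutoff d = 6
print(res.success, P_reconstructed.sum())
```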
To create the probe states we use pulsed coherent light attenuated to the single-photon level. The power of the reference beam is changed by two motorized half-wave plates followed by polarizing beam splitters. We calibrate all the neutral density filters separately and measure the fiber-coupling losses. From these values and the measured reference power, we calculate the power inside the fibers of the TMD. Due to the high degree of attenuation (of the order of 10^−9), small calibration errors (of the order of a few percent) cannot be avoided. However, this affects the total losses, but not the shape of the photon statistics.
Results.-We take into consideration a fixed number of patterns with amplitudes below a given threshold α_max ≈ 2. This threshold is important because of the afterpulsing, which seems to be more pronounced for stronger states. The reconstruction is repeated 100 times with randomly chosen probe subsets of size M < 235 and averaged over those repetitions. In this way the redundancy in the data is propagated into the final estimate.
The variation within the set of reconstructions is used to estimate the associated errors, much in the spirit of the nonparametric bootstrap [36]. In the experiment, N_ξ = 4.2 × 10^6 events were registered for each coherent probe and PDC state. For low-intensity PDC states, the data were averaged over five repeated data acquisitions, making a total of N_PDC = 21 × 10^6 events. With these numbers, the statistical noise is insignificant (except, perhaps, for heralded detections) and the reconstruction accuracy is governed by systematic errors and afterpulsing effects.
To check the performance of different parameter sets, we first performed a cross-validation [37] to verify whether the estimated state is consistent with the observed data sample. To this end, we checked the quality of the reconstruction with random sets of coherent states discarded from the probes, but with the same amplitude threshold. We resorted to different measures of error; for all of them we conclude that the accuracy is insensitive to the dimension d, while the errors grow for stronger probes. Typical fidelities around 99% are attained, which amounts to errors of a few percent for the reconstructed elements of P_mn and outperforms standard detector tomography.
More probes give somewhat better results, but small sets of probes can be surprisingly good. This is due to the small variation across those patterns characterized by a small number of principal components in the singular value decomposition.
To compare with theory, we assume the PDC photon-number distribution [Eq. (4)], together with a finite detection efficiency η, which is taken into account by a Bernoulli distribution [Eq. (5)]. From the zero-detection probabilities of coherent probes with known amplitudes, the quantum efficiency of the detectors was estimated to be 0.22 ± 0.01 and the coupling efficiency to be 75%. This, in turn, enables us to calculate the mean photon numbers of the generated PDC states. Three PDC states, denoted PDC1, PDC2 and PDC3, were generated, with mean photon numbers n1 = 0.11, n2 = 0.76, and n3 = 1.34, respectively. These numbers were used to predict the two-mode statistics through Eqs. (4) and (5).
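The explicit forms of Eqs. (4) and (5) are not reproduced above. A minimal Python sketch follows, assuming the standard lossless twin-beam form P_mn = δ_mn n̄^n / (1 + n̄)^(n+1) and independent binomial losses; these specific forms, the truncation dimension, and the multiplicative combination of detector and coupling efficiencies are assumptions for illustration.

```python
# Sketch: ideal twin-beam statistics degraded by independent Bernoulli (binomial)
# losses with efficiency eta in each arm (assumed forms for Eqs. (4) and (5)).
import numpy as np
from scipy.stats import binom

def twin_beam(nbar: float, d: int) -> np.ndarray:
    """Diagonal P_mn = delta_mn * nbar^n / (1 + nbar)^(n + 1), truncated at d."""
    n = np.arange(d)
    return np.diag(nbar ** n / (1.0 + nbar) ** (n + 1))

def apply_loss(P: np.ndarray, eta_s: float, eta_i: float) -> np.ndarray:
    d = P.shape[0]
    # B[k, m] = probability of detecting m photons out of k at efficiency eta
    B_s = np.array([[binom.pmf(m, k, eta_s) for m in range(d)] for k in range(d)])
    B_i = np.array([[binom.pmf(m, k, eta_i) for m in range(d)] for k in range(d)])
    return B_s.T @ P @ B_i          # P'_mn = sum_kl B_s[k,m] P_kl B_i[l,n]

eta = 0.22 * 0.75                    # detector x coupling efficiency (assumed product)
P_lossy = apply_loss(twin_beam(0.76, d=12), eta, eta)
print(P_lossy[:3, :3].round(4))
```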
In Fig. 2 we plot typical results of the two-mode TMD measurements for PDC2. Strong signal-idler correlations are observed and the agreement with theory is pretty good. Similar results are found for other intensities.
In Fig. 3 we show the reconstructions of the signal states for two different pump intensities. Best fits to Bose-Einstein distributions are almost indistinguishable from the experimental results.
Heralded states are created by having the idler state conditioned on single or double detection in the signal of the PDC output. By double detection we mean here a click at detector A accompanied by a simultaneous click at detector B. Double detections at any single detector are discarded to avoid doubles caused by afterpulsing.
Heralded single- and especially two-photon states are difficult to reconstruct, since we are picking out quite a small subset of all the detection events. Besides, afterpulsing creates artificial signal-idler correlations, whose strength depends on the distance of the signal detection from the first idler time bin. All in all, this leads to larger reconstruction errors compared to single-mode states.
Reconstructed single- and two-photon heralded idler states from two different PDC states are shown in Fig. 4. To obtain theoretical predictions, we again assume an inefficient coupling (0.75) of the PDC state and calculate the post-measurement idler state P_i from the pre-measurement state P as

P_i = Tr_s(Ê P Ê†) / Tr_s,i(Ê P Ê†),

where Ê†Ê is the POVM element describing the single/double detection in the signal mode and Tr_s (Tr_s,i) denotes the trace over the signal mode (over both modes). All states and POVM elements are diagonal here. Best estimates of the Wigner function at the origin for the single-photon heralded states are W(0) = −0.72 ± 0.06 (PDC1) and W(0) = −0.30 ± 0.09 (PDC2). This agrees with the calculated values W(0) = −0.77 (PDC1) and W(0) = −0.29 (PDC2), respectively, and confirms the nonclassicality of these states. With more intense PDC inputs, a single detection in the signal tends to leave a mixture of Fock states in the idler. This explains why the nonclassicality of heralded states decreases with increasing pump intensity.
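A minimal sketch of this heralding step follows: conditioning a two-mode photon-number distribution on a diagonal signal POVM element and evaluating the Wigner function at the origin as a photon-number parity sum. The parity normalization (a pure single photon giving W(0) = −1), the idealized one-photon POVM element, and the placeholder distribution are all assumptions, not the paper's actual POVM or data.

```python
# Sketch: heralded idler statistics from a two-mode distribution P_mn, conditioned
# on a diagonal signal POVM element E_m, and the Wigner function at the origin
# evaluated as photon-number parity, W(0) = sum_n (-1)^n P_i(n) (assumed convention).
import numpy as np

def heralded_idler(P: np.ndarray, E_signal: np.ndarray) -> np.ndarray:
    joint = E_signal[:, None] * P          # weight each signal photon number by E_m
    idler = joint.sum(axis=0)              # trace over the signal mode
    return idler / idler.sum()             # normalize the conditional distribution

def wigner_origin(p_n: np.ndarray) -> float:
    return float(np.sum((-1.0) ** np.arange(p_n.size) * p_n))

d = 12
P = np.random.default_rng(2).dirichlet(np.ones(d * d)).reshape(d, d)  # placeholder P_mn
E_single = np.zeros(d)
E_single[1] = 1.0                          # idealized "exactly one signal photon" element
p_idler = heralded_idler(P, E_single)
print(wigner_origin(p_idler))
```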
Finally, we simulated heralded states as post-measurement states based on the results of the full two-mode tomography. To this end, we performed 100 two-mode reconstructions for each measured PDC state. The idler post-measurement state was calculated based on a hypothetical single or double signal detection. The statistics of the resulting ensemble of heralded states are shown in Fig. 5, where we compare these statistics with the theoretical predictions.
These predictions based on the full two-mode reconstructions are less accurate than the single-mode heralded ones; the latter approach is more direct. In heralded detections, what helps is that the dimension of the search space is reduced and the dominating vacuum or even single-photon terms are eliminated, which improves the accuracy. In addition, in the data-pattern approach we use heralded coherent probes, i.e., we apply the same data selection as for the PDC data. In this way, one somehow eliminates the artificial correlations created by the afterpulsing. Nevertheless, it is nice to see that the accord between single- and two-mode measurements is actually pretty good. One can also notice that the two-mode predictions improve with increasing intensity, as one could expect. More intense PDC states have larger higher-order P_mn components, which are easier to extract. Concluding remarks.-In summary, we have exploited a PDC source of quantum states at telecom wavelengths with remarkable properties in terms of brightness, purity and symmetry. To put forward the nonclassical features of the generated states, we have employed TMDs together with the method of data-pattern tomography. The experimental calibration shown here goes beyond any quantum detector tomography previously demonstrated. Our approach is easily adapted to a variety of measurement devices, and the experimental implementation presented here shows its viability for complex detectors.
Design, synthesis and anti-diabetic activity of some novel xanthone derivatives targeting α-glucosidase
Twenty-eight xanthone derivatives were designed and docked into the N-terminal catalytic domain of maltase-glucoamylase (ntMGAM), with miglitol as the standard drug. Most of the molecules showed excellent docking scores and docking interactions within the binding cavity compared with the standard molecule. The five best-scoring ligands were synthesized and characterized by a number of analytical and spectroscopic techniques. The molecules were screened for in vivo anti-diabetic activity in a streptozotocin-induced diabetic model in Wistar rats. Compound P4 showed the most prominent inhibition among the series. The synthesized compounds showed a significant (p<0.01) effect in lowering blood glucose levels compared to miglitol as a standard α-glucosidase inhibitor.
Introduction
Glucosidases catalyze the hydrolysis of carbohydrates to free glucose in the blood in the final step of carbohydrate metabolism. They hydrolyze α- and β-glycosidic linkages of carbohydrates and are accordingly classified as α-glucosidases and β-glucosidases (Heightman et al., 1999). Among them, α-glucosidase (EC 3.2.1.20) draws considerable interest from the pharmaceutical research community because its activity raises the postprandial blood glucose level (Park et al., 2008). Inhibition of the enzyme is therefore a useful chemotherapy for controlling diabetes and obesity. Due to its catalytic role, it is also targeted in the treatment of other carbohydrate-mediated diseases, including cancer (Humphries et al., 1986), viral infections (Mehta et al., 1998; Karpas et al., 1988), and hepatitis (Zitzmann et al., 1999).
The search for alternative inhibitors therefore continues. Several lines of research on the xanthone nucleus are ongoing, and recent studies indicate that mangiferin, a xanthone C-glycoside, serves as a potent α-glucosidase inhibitor (Liu et al., 2006).
Xanthones are secondary metabolites found in plants, fungi and lichens and have been at the center of research interest for the past two decades because of their diverse pharmacological profile (Cardona et al., 1990). Xanthones are reported to have biological activities such as antitumor, anti-oxidant, anti-inflammatory, anti-allergy, anti-bacterial, antifungal, antiviral (Diderot et al., 2006), antimycobacterial (Pickert et al., 1998), antidepressant (Galt et al., 1989), anti-diabetic (Liu et al., 2006) and monoamine oxidase inhibitory activities.
Design of new molecules
A xanthone nucleus with different substitutions shows a diverse pharmacological profile. Previously established α-glucosidase inhibitors such as acarbose and miglitol contain polyhydroxy groups in their structures, and the new molecules were designed with this in mind. For this study, xanthones with several hydroxy and alkoxy groups at different positions were designed and drawn using ChemAxon, a freeware developed by Advanced Chemistry Development, Inc., and the 2D chemical structures were converted to 3D structures (Nainwal et al., 2014).
Molecular property
All the drawn structures were screened for in silico biological data prediction using the Molinspiration online filter and Drulito software. These tools screened the designed molecules according to Lipinski's rule of five, predicting molecular properties such as molecular weight, total polar surface area (TPSA), LogP, hydrogen bond donors (HBD), hydrogen bond acceptors (HBA) and the number of rotatable bonds.
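As an illustration of this kind of screening, the short sketch below computes the same rule-of-five descriptors in Python with RDKit. It is a hedged example rather than the workflow actually used in the study (which relied on Molinspiration and Drulito): RDKit availability is assumed, and the SMILES string is an illustrative hydroxyxanthone, not one of the reported compounds P2-P22.

from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def lipinski_profile(smiles):
    # Compute the rule-of-five related descriptors for one molecule.
    mol = Chem.MolFromSmiles(smiles)
    return {
        "MW": Descriptors.MolWt(mol),              # molecular weight, cut-off 500
        "LogP": Descriptors.MolLogP(mol),          # lipophilicity, cut-off 5
        "HBD": Lipinski.NumHDonors(mol),           # hydrogen bond donors, cut-off 5
        "HBA": Lipinski.NumHAcceptors(mol),        # hydrogen bond acceptors, cut-off 10
        "TPSA": Descriptors.TPSA(mol),             # total polar surface area
        "nRB": Descriptors.NumRotatableBonds(mol), # rotatable bonds
    }

def passes_rule_of_five(profile):
    violations = sum([profile["MW"] > 500, profile["LogP"] > 5,
                      profile["HBD"] > 5, profile["HBA"] > 10])
    return violations <= 1   # one violation is commonly tolerated

# Illustrative 1,3-dihydroxyxanthone, not one of the compounds from this study.
profile = lipinski_profile("O=C1c2ccccc2Oc3cc(O)cc(O)c31")
print(profile, passes_rule_of_five(profile))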
Drug likeness and bioavailability
Virtual screening of the compounds was performed before the molecular docking simulation studies. All the compounds were screened for predicted molecular physicochemical properties such as absorption, distribution, metabolism, excretion and toxicity (ADME/Tox). This aids the screening of the compounds for potential drug-like properties. The intensity and kinetics of drug exposure in various tissues are greatly influenced by ADME/Tox, which in turn reflects the efficiency and pharmacological activity of the compound as a drug. The mutagenicity, tumorigenicity, irritating and reproductive effects, as well as the drug likeness and drug scores of the compounds, were predicted using the OSIRIS property explorer (http://www.organic-chemistry.org/prog/peo/).
Protein retrieval and preparation
The α-glucosidase crystal structure was obtained from the RCSB (Research Collaboratory for Structural Bioinformatics) Protein Data Bank (PDB, http://www.pdb.org). The PDB ID of the selected protein was 3L4W (Sim et al., 2010), downloaded in PDB text file format. All heteroatoms, including the co-crystallized ligand, were removed with Molegro Molecular Viewer to make the protein suitable for further docking studies. The prepared protein comprises a single chain containing 863 amino acid residues, 13,493 atoms and 13,693 bonds. Nonpolar hydrogens were added for better interaction, and Kollman united atom charges were applied to the protein. Appropriate ionization and tautomeric states of amino acid residues such as Arg, His, Asp, Glu and Ser were managed by adding H-atoms to the protein at pH 7.0.
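A minimal sketch of this clean-up step is given below, using Biopython to strip waters, the co-crystallized ligand and other heteroatoms from the downloaded 3L4W structure. This is an assumed, generic workflow, not the exact one used in the study (which used Molegro Molecular Viewer); the file name 3l4w.pdb is hypothetical, and protonation and Kollman charge assignment are left to AutoDockTools.

from Bio.PDB import PDBParser, PDBIO, Select

class ProteinOnly(Select):
    def accept_residue(self, residue):
        # Standard residues carry a blank hetero-flag; HETATM records
        # (co-crystallized ligand, waters, ions) are rejected.
        return residue.id[0] == " "

structure = PDBParser(QUIET=True).get_structure("ntMGAM", "3l4w.pdb")
io = PDBIO()
io.set_structure(structure)
io.save("3l4w_protein_only.pdb", select=ProteinOnly())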
Molecular docking study
Molecular docking is generally used to detect protein-ligand orientation and interaction. The AutoDock Tools package version 2.4 was used to create the docking input files. The grid region enclosed the active site for binding and was selected on the basis of the amino acid residues forming the binding site of miglitol, the standard drug in the PDB structure 3L4W, which was considered the best active region for favorable interaction. The grid box was set at 80 × 80 × 80 Å along the x, y and z axes and covered 12 active-site residues (TYR299, ASP327, ILE364, TRP406, TRP441, ASP443, MET444, ARG526, TRP539, ASP542, PHE575, HIS600). The Lamarckian genetic algorithm (LGA) with local search was used to search ligand conformers. During the docking process, a maximum of 10 conformers was considered for each compound. This procedure was applied to each designed compound, and on completion the conformer with the lowest binding energy was chosen.
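One common way to obtain the grid centre for such a box is to take the geometric centre of the co-crystallized ligand atoms. The sketch below, which assumes Biopython and NumPy and uses "MIG" as a guess for the miglitol residue name in 3L4W, only illustrates the idea; the actual grid parameters in the study were set through AutoDock Tools.

import numpy as np
from Bio.PDB import PDBParser

structure = PDBParser(QUIET=True).get_structure("ntMGAM", "3l4w.pdb")
ligand_coords = np.array([
    atom.coord
    for residue in structure.get_residues()
    if residue.get_resname() == "MIG"   # assumed residue name for miglitol
    for atom in residue
])
grid_center = ligand_coords.mean(axis=0)   # geometric centre of the ligand
print("gridcenter %.3f %.3f %.3f" % tuple(grid_center))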
Conformational similarity was assessed by visualizing the binding site, and the binding energy (kcal/mol), the docked amino acid residues forming hydrogen bonds and other parameters such as the intermolecular energy (kcal/mol) and inhibition constant (μM) were analyzed with the AutoDock tools. Ten best poses were generated for each ligand and scored using the AutoDock 4.2 scoring function (Morris et al., 1998). All ligands were ranked on the basis of their docked energy. The residues of the target protein interacting with the ligands were analyzed using AutoDock tools, PyMOL (Konc et al., 2011) and LigPlot (Madeswaran et al., 2011).
Structural investigation
All chemicals were used without further purification. Melting points of the intermediates and synthesized compounds were determined in open capillaries on a Veego-MPI melting point apparatus. The progress of the reactions was monitored on silica gel-G TLC plates using various solvent combinations; spots were detected with iodine vapors and observed under UV light. UV-visible spectra of the synthesized compounds were recorded on a UV-visible spectrophotometer (Shimadzu UV-1800). Infrared spectra were recorded on an FT-IR Perkin-Elmer spectrometer. The 1H and 13C NMR spectra were recorded at 400 MHz and 100 MHz, respectively, on a Bruker Avance-II 400 NMR spectrometer using DMSO-d6 as solvent with tetramethylsilane (TMS) as an internal standard. Mass spectra were obtained on a Waters Q-TOF Micromass LC mass spectrometer (Silverstein and Webster, 1963).
General procedure
Salicylic acid derivatives and polyhydroxy phenols were used to synthesize hydroxyxanthones (intermediates), which were further alkylated with various alkyl bromides in acetone (Scheme 1). For the synthesis of the hydroxyxanthones, Eaton's reagent (Eaton and Carlson, 1973) was poured into a mixture of the salicylic acid derivative (60 mmol) and polyhydroxy phenol (60 mmol) and stirred at 70°C for 30 min. The mixture was cooled and stirred with cold water, keeping the temperature at 0-4°C for 2.5 hours. The resulting solid was collected by filtration, washed with water until pH 6 and dried at 60°C (Varache-Lembege et al., 2008). Potassium carbonate (2.5 mmol) was added to the intermediate (2 mmol) and alkyl bromide (3 mmol) in acetone (55-60 mL). The mixture was refluxed under stirring for 2-4 hours, cooled, filtered and concentrated. After recrystallization, the product was collected as a yellow solid (Liu et al., 2006).
Evaluation of anti-diabetic activity
The synthesized compounds were evaluated for in vivo anti-diabetic activity in diabetic rats using streptozotocin as the diabetes-inducing agent (Abeeleh et al., 2009).
Animals
Adult male Wistar rats (150-200 g) were used to study the anti-diabetic activity. Animals were housed under standard laboratory conditions (temperature 22 ± 2°C and humidity 45 ± 5%, with a 12 hours day:12 hours night cycle). All animals received a standard laboratory diet and water ad libitum.
Acute toxicity studies
Acute oral toxicity was studied according to OECD-423 guidelines (acute toxic class method). Adult female Wistar rats (n = 5; 120-200 g) were selected by random sampling for the acute toxicity study. The animals were fasted overnight with water ad libitum. The synthesized drugs were administered orally at 5 mg/kg body weight and the animals were observed for 14 days. If mortality was observed in two out of three animals, the administered dose was assigned as the toxic dose. If mortality was observed in one animal, the same dose was repeated to confirm the toxic dose. If no mortality was observed, the procedure was repeated at higher doses of 50, 100 and 1,500 mg/kg body weight. Animals were observed individually after dosing for the first 30 min, at 1 hour and 2 hours, and daily thereafter for 14 days. Any signs of toxicity, such as gross changes in skin and fur, eyes and mucous membranes, the circulatory, respiratory, autonomic and central nervous systems, or behavior pattern, were recorded (Kumudhavalli and Jaykar, 2012).
Induction of diabetes
Type 2 diabetes mellitus was induced in overnight-fasted experimental rats by injecting freshly prepared streptozotocin (50 mg/kg; i.p.) in cold citrate buffer (0.1 M, pH 4.5) (Ramachandan et al., 2013). Diabetes was confirmed by measuring the blood glucose level 72 hours after the streptozotocin injection. Throughout the study, blood was collected from the tail vein and blood glucose was measured with a glucometer. Animals were kept under laboratory conditions for 7 days to stabilize the diabetes, and animals showing blood glucose >250 mg/dL were taken for the activity assessment of the synthesized drugs.
Study design and grouping of animals
Animals were randomly divided into 13 groups (n = 5) for the whole study. The 1st group served as normal control and received 0.3% carboxymethyl cellulose (CMC). The 2nd group remained diabetic control with no drug administered throughout the study period. The standard drug (miglitol) was administered to the 3rd group at a dose of 25 mg/kg (p.o.) body weight. The other groups received the synthesized drugs at two dose levels, 100 mg/kg and 250 mg/kg body weight. All drugs were suspended in freshly prepared CMC (0.3% w/v) before administration. The treatments were administered orally to the animals daily for 14 days. Blood samples were collected from the tail vein for determination of blood glucose levels on days 0, 5, 10 and 15 (Selvan et al., 2008). Blood glucose levels were measured with a one-touch glucometer (AccuSure) throughout the two weeks of treatment.
Body weight measurement
Animals were weighed on days 0, 5, 10 and 15 after treatment to detect any change in their body weights.
Blood sample collection
On day 15, blood samples were collected by retro-orbital puncture under mild ether anesthesia into microfuge tubes containing 4% w/v sodium citrate for plasma separation, after which the animals were sacrificed by cervical dislocation. The samples were centrifuged at 2,500 rpm for 15 min, and the plasma was carefully transferred to individual microfuge tubes and stored at -20°C. The plasma was used for the analysis of total cholesterol (TC) and triglyceride (TG).
Biochemical estimation
Serum markers such as serum glutamic oxaloacetic transaminase (SGOT), serum glutamic pyruvic transaminase (SGPT) and alkaline phosphatase (ALP) were measured (Cole et al., 2005; Silverstein and Webster, 1963). Lipid profile parameters, triglyceride and total cholesterol, were also measured using test kits.
Statistical analysis
All data are expressed as mean ± SEM. Statistical analysis was carried out by one-way ANOVA (analysis of variance) followed by Dunnett's test, with the levels of significance set at p<0.01 and p<0.05.
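The fragment below illustrates this analysis pipeline in Python with hypothetical blood glucose values (mg/dL); it is not the study data. scipy.stats.dunnett requires SciPy 1.11 or later, and the original analysis may well have been performed in other statistical software.

import numpy as np
from scipy import stats

# Hypothetical day-15 blood glucose values (mg/dL) for three groups of n = 5.
diabetic_control = np.array([265, 280, 271, 290, 275])
miglitol = np.array([150, 162, 158, 171, 155])
compound_p4 = np.array([140, 151, 149, 160, 147])

f_stat, p_anova = stats.f_oneway(diabetic_control, miglitol, compound_p4)
dunnett_res = stats.dunnett(miglitol, compound_p4, control=diabetic_control)

print("ANOVA: F = %.2f, p = %.4f" % (f_stat, p_anova))
for name, p in zip(["miglitol", "compound P4"], dunnett_res.pvalue):
    print("%s vs diabetic control: p = %.4f" % (name, p))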
Drug likeness studies
The compounds were derived from the xanthone nucleus with various substituents (Table I). Pharmacokinetic properties and toxicity of all the derived compounds were studied. Molecular properties of the designed ligands such as logP, molecular weight, HBA (hydrogen bond acceptors), HBD (hydrogen bond donors), nRB (number of rotatable bonds) and TPSA (total polar surface area) were calculated with the Molinspiration property calculator and are reported in Table II. Most of the compounds showed good predicted bioavailability and pharmacokinetics, hardly violating the Lipinski rules. The ligands were further assessed for mutagenicity, tumorigenicity, irritating and reproductive effects, as well as drug likeness and drug scores.
Docking study
Prior to the docking simulations, the authors applied a re-docking procedure to validate the reproducibility of the docking protocol. The co-crystallized miglitol was docked back into the binding pocket of human maltase-glucoamylase, and the structural superimposition of the docked pose onto the crystal ligand was performed with AutoDock 4.2. The docked pose of miglitol obtained with AutoDock and the co-crystal conformation gave an RMSD of 1.33 Å (Figure 1). An RMSD of <2 Å was considered a success, whereas an RMSD between 2 and 3 Å was considered partially successful. All interactions and interacting residues in the docked pose were identical to those of the co-crystal ligand. The low RMSD value, together with similar interactions and binding poses involving the same residues, validates the docking protocol as optimal. The designed molecules were then docked to the binding site of the standard compound.
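The redocking criterion can be made concrete with a small RMSD calculation such as the sketch below. The coordinate files are hypothetical placeholders, the two poses are assumed to share the same atom ordering, and symmetry-corrected RMSD tools would normally be preferred in practice.

import numpy as np

def rmsd(pose_a, pose_b):
    # Root-mean-square deviation between two (N, 3) coordinate arrays
    # that share the same atom ordering.
    diff = pose_a - pose_b
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

# Placeholder files; in the study these coordinates would come from the
# crystallographic miglitol and the top-ranked AutoDock pose.
crystal_pose = np.loadtxt("miglitol_crystal.xyz")
docked_pose = np.loadtxt("miglitol_docked.xyz")
value = rmsd(crystal_pose, docked_pose)
print("redocking RMSD = %.2f A (%s)" % (value, "success" if value < 2.0 else "check pose"))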
The docking results predicted that miglitol had lower binding affinity than most of the derived compounds.
The standard ligand had a binding energy of -8.71 kcal/mol. The hydrogen-bond-forming amino acid residues of the target protein were Asp327, Trp406, Asp443 and Met444; in addition to the hydrogen bonds, the ligand formed nine hydrophobic contacts with the protein (Figure 2). Ligand P22 was predicted to have a better binding energy than the standard drug, -12.99 kcal/mol, and formed seven hydrophobic contacts with the target protein in the same binding pattern as the standard (Figure 3). Ligand P2 had a binding energy of -10.2 kcal/mol; two hydrogen bonds were identified with residues Trp406 and Arg526, along with twelve hydrophobic interactions with the protein. Ligand P5 had a binding energy of -9.99 kcal/mol and fitted the binding pocket of the standard molecule, forming ten hydrophobic interactions. Ligand P10 inhibited the protein with a binding energy of -9.93 kcal/mol; two hydrogen bonds were formed with residues Tyr299 and Arg526 on the protein helix, and ten hydrophobic interacting residues were found. Ligand P4 had a binding energy of -9.89 kcal/mol and inhibited the target protein through eight hydrophobic contacts.
Chemistry
The five best-scoring compounds (lowest binding energy: P2, P4, P5, P10 and P22) were selected for synthesis. The synthesis of the selected novel alkoxyxanthones was achieved in a two-step reaction (Scheme 1). In the first step, a salicylic acid derivative (1) and a polyhydroxy phenol (2) were reacted in the presence of acid to yield a hydroxyxanthone (3). The hydroxyxanthone (3) was then reacted with an alkyl bromide in the presence of potassium carbonate in acetone under reflux at 55°C for 4 hours to yield the targeted alkoxyxanthone (4).
Anti-diabetic activity evaluation
In vivo acute toxicity studies
The synthesized compounds showed no serious toxicity up to a dose level of 1,500 mg/kg body weight in experimental rats.
Blood glucose level
Before starting the treatments, the blood glucose levels of all animals were within the normal range. After 72 hours of streptozotocin treatment, blood glucose rose to more than 240 mg/dL. There was a significant decrease (p<0.01) in the blood glucose level of the animals after treatment with the drugs on days 5, 10 and 15 (Table III).
Body weight
Statistical analysis revealed a significant (p<0.01) difference in body weight among the groups on days 5, 10 and 15 when compared with the normal control group, but no significant changes in body weight between the groups when compared with the diabetic control group. The effects of the drugs on body weight and the changes after two weeks of treatment are given in Table IV.
Effect on SGOT, SGPT, ALP, TC and TG level
Activities of the enzymes SGOT, SGPT and ALP were measured in the plasma of the experimental animals. TC and TG content was also measured in the plasma collected after 14 days of treatment using biochemical kits, and the results are presented in Table V.
Elevated levels of the biomarker enzymes SGOT, SGPT and ALP were observed in the diabetic group, indicating hepatic damage. Treatment with the synthesized compounds reduced the elevated marker enzymes and restored them close to normal values, indicating recovery of insulin secretion. The positive effect was statistically significant (p<0.01).
Discussion
Type 2 diabetes mellitus is a major health problem. In this manuscript we present an approach to control blood glucose levels in individuals with type 2 diabetes by targeting maltase-glucoamylase and intestinal glucosidases with novel xanthone α-glucosidase inhibitors. The targeted intestinal glucosidase was the N-terminal catalytic domain of maltase-glucoamylase (ntMGAM), which is responsible for the hydrolysis of terminal starch products into glucose (Sim et al., 2010); hence, to slow glucose release, we targeted ntMGAM. Acarbose and miglitol, previously found to be potent α-glucosidase inhibitors, contain polyhydroxy groups in their structures (Asano et al., 2003), and the novel xanthone molecules were designed on this basis. A computational docking study was performed in the binding pocket of miglitol in the crystal structure of maltase-glucoamylase. We re-docked the co-crystal ligand (miglitol) to verify its interactions before docking the novel xanthone molecules, and having observed good correlation in the docking interactions, we synthesized the most promising compounds. The in vivo acute toxicity study showed that the synthesized compounds have no serious toxicity up to a dose level of 1,500 mg/kg body weight in experimental rats. The molecules were screened for in vivo anti-diabetic activity in a streptozotocin-induced diabetic animal model. After 72 hours of STZ treatment, the blood glucose level rose above 240 mg/dL. One-way ANOVA revealed a significant decrease (p<0.01) in the blood glucose level of the animals after treatment with the drugs on days 5, 10 and 15.
The designed ligands were assessed for predicted bioavailability and other molecular properties to check their potential as future drug candidates, and they hardly violated these filters. When docked into the same binding pocket, involving the same amino acid residues of the target protein, most of the derived ligands were predicted to have better binding energy than miglitol. Among the 28 designed compounds, compound P22 showed the best binding energy (-12.99 kcal/mol), whereas compound P26 showed the weakest (-7.17 kcal/mol). Based on these in silico results we proceeded to the laboratory experiments. The top five compounds were synthesized on laboratory scale, purified and crystallized, and characterized by UV, FTIR, NMR and mass analysis. The analytical and spectral data were found to be consistent with the structures of the synthesized compounds.
The compounds were further screened for in vivo anti-diabetic activity using a streptozotocin-induced diabetic model in Wistar rats. The effective dose was selected after the oral acute toxicity study. All study groups were subjected to a 15-day anti-diabetic treatment protocol, and blood glucose levels were measured on days 0, 5, 10 and 15 using blood collected from the tail vein. The study showed a significant reduction in blood glucose. It was also noted that total cholesterol and triglyceride levels were increased in the diabetic control group and changed notably after 15 days of treatment with the synthesized compounds: on treatment, total cholesterol and triglyceride levels decreased significantly (p<0.01) compared with the diabetic control group.
Conclusion
Xanthone derivatives proved to act as lead molecules for the development of potential α-glucosidase inhibitors. These compounds showed good agreement between the docking results, synthetic data and in vivo anti-diabetic activity.
Figure 1 :
Figure 1: The re-docking pose of the miglitol crystal ligand. The crystal ligand is marked in red.
Figure 3 :
Figure 2: The docked pose of standard miglitol
Table III Effect on blood glucose level of synthesized compounds
All values are given as mean ± SEM; n = 5; diabetic control vs all groups (a: p<0.01, b: p<0.05), normal control vs all groups (c: p<0.01, d: p<0.05)
Table V Effect on SGOT, SGPT, ALP, TC and TG levels
Diabetic control vs all groups (a: p<0.01, b: p<0.05), normal control vs all groups (c: p<0.01, d: p<0.05)
|
2019-03-31T13:43:12.286Z
|
2016-03-16T00:00:00.000
|
{
"year": 2016,
"sha1": "947d767111190294773fd648a232b3105a621195",
"oa_license": "CCBY",
"oa_url": "https://www.banglajol.info/index.php/BJP/article/download/25851/18049",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b1e88469256882c80bf4720194cfe2bb914ff2fd",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry"
]
}
|
25518636
|
pes2o/s2orc
|
v3-fos-license
|
Theory of traditional Chinese medicine and therapeutic method of diseases
Traditional Chinese medicine, including herbal medicine and acupuncture, as one of the most important parts of complementary and alternative medicine (CAM), plays a key role in the formation of integrative medicine. Why do modern drugs targeting the specificity of diseases not produce their theoretical effects in clinical observation? Why does traditional Chinese medicine targeting the Zheng (syndrome) not produce its theoretical effects in the clinic? There are good reasons to combine Western medicine with Chinese herbal medicine so as to form integrative medicine. During this integration, how to clarify the impact of CAM theory on Western medicine has become an urgent topic. This paper focuses on exploring the impact of the theory of traditional Chinese medicine on the therapy of diseases in Western medicine.
INTRODUCTION
More than one third of patients in the United States use complementary and alternative medicine (CAM) [1], and more and more scientists are interested in integrative medicine research in the USA. Recent research has shown that integrative medicine (also complementary and alternative medicine) can contribute to primary health care [2,3]. Traditional Chinese medicine (TCM), including herbal medicine and acupuncture, as one of the most important parts of CAM, should play a key role in the formation of integrative medicine. During this integration, how to clarify the impact of CAM theory on Western medicine has become an urgent topic.
TCM was formed two thousand years ago and developed over the following centuries. TCM describes the human body through system discrimination and a cybernetic approach. TCM can be characterized as holistic, with emphasis on the integrity of the human body and the close relationship between humans and their social and natural environment. TCM focuses on health maintenance and, in the treatment of disease, emphasizes enhancing the body's resistance to disease. To improve health, TCM applies multiple natural therapeutic methods.
Zheng (syndrome) is the basic unit and key term in TCM theory. Zheng is an outcome after analyzing all symptoms and signs. All therapeutic methods in TCM come from the differentiation of Zheng. The methods have been used for thousands of years, which proves that TCM therapeutic approach is effective. From this point of view, Zheng should play an important role in determining the effect. Combined with modern medicine, Zheng should have an impact on disease pathogenesis that directly influences the therapeutic effect.
HISTORICAL BACKGROUND
At the time TCM was formed, medicine and biology were not yet modernized, but Chinese philosophy, astronomy and literature were well developed. People had also accumulated a great amount of experience in dealing with disorders by natural methods, such as acupuncture, Qigong (mind control) and taking plants. Scholars in China began to summarize these phenomena and to raise them to a theory based on the philosophical and social knowledge of the time. That theory is the original TCM. Thus, TCM handles human physiology and pathology following old Chinese philosophical thinking. Over the following centuries, the accumulation of experience and the addition of related knowledge (such as clinical observation data and limited anatomical experience) developed TCM further. The terminology of TCM partially originates from Chinese philosophy, and other terms in TCM, even when identical to those in modern medicine, have completely different meanings. To understand the physiology of TCM, one should therefore have some knowledge of Chinese philosophy.
PHYSIOLOGY AND PATHOLOGY IN TCM
During the formation and development of TCM, two ideological ideas fully penetrated the whole process. The first is the homeostasis idea, which focuses on the integrity of the human body and emphasizes the close relationship between the human body and its social and natural environment (integrity between human and cosmos). The second is the dynamic balance idea, which places emphasis on movement within this integrity. Physiologically, TCM describes the human body through system discrimination and a cybernetic approach. In the system discrimination approach, the intrinsic activities of the human body can be clarified by analyzing audio-visual information. The human body, a complicated system, is identified as different closely related systems that form a network (integrity). External information should reflect something intrinsic because of the integrity between the human body and its social and natural environment. For example, the heart as a center, together with blood, vessels, mind, tongue and small intestine, constitutes the heart system in TCM. Information from any part of the system can demonstrate the system's activity, even if the structure of that part is unclear. In the cybernetic approach, TCM takes the human body as a self-controlled system network. The network is connected by the meridians that exist throughout the body, and the flow of blood and vital energy also contributes to the connection. The five elements theory in TCM, named wood, fire, earth, metal and water, divides the human body into five systems. Each system has its own specific features, which can be inferred by analyzing those natural materials. The movement and interchange among the five elements are used to explain the body's physiology.
Since TCM has its own unique physiology for understanding the human body, it also has a special understanding of the body's disorders. Pathologically, TCM focuses on the pathogenicity of social and natural factors. These factors have a close relationship with humans and constitute the integrity. They are mostly indirect and non-specific factors, if we consider bacteria or viruses to be direct and specific ones. TCM does not primarily seek the specific pathogen or pathological changes in a specific organ; rather, it seeks the disturbances among the self-controlled systems by analyzing all symptoms and signs. In the heart system, any disturbance in any part of the system is useful for clarifying the pathology. At the same time, comparison of disturbances occurring in different periods is also important in pathological analysis. TCM emphasizes the dynamic changes in any part and any connection of the self-controlled system.
THERAPEUTIC MECHANISM IN TCM
Physiology in TCM is characterized by self-controlled system discrimination, and its pathology by dynamic changes in the system (whether direct or indirect, specific or non-specific). The therapeutic mechanism in TCM focuses on enhancing the body's resistance to disease and on prevention by improving the inter-connections among the self-controlled systems. To achieve this, TCM uses different therapeutic methods, such as mind-spiritual methods (Qigong, Taiji boxing) and natural methods (acupuncture, moxibustion, herbal medicine). These methods are characterized by fewer side effects since they are natural. TCM evaluates therapeutic results by comparing symptoms before and after treatment. The treatment is based on the differentiation of symptoms to clarify what is wrong in the self-controlled system. TCM seeks the therapeutic mechanism from the integrity, which includes the human being itself as an integrity and the integrity between humans and their social and natural environment. The therapeutic mechanism is achieved by activating systems, improving system connections and enhancing human resistance. The mechanism in TCM is not like that of modern medicine, which seeks the mechanism at the cellular or molecular level (such as killing bacteria and viruses, or antagonistic methods). If someone lives well (without symptoms), she is healthy in TCM, even if she has some signs at the cellular or molecular level, such as high blood pressure.
KEY TERM IN TCM THERAPEUTIC APPROACH: DIFFERENTIATION OF ZHENG
Zheng (syndrome), a basic unit in TCM, decides the therapeutic methods. Zheng is the outcome after a careful analysis of all symptoms and signs (tongue appearance and pulse feeling included). Zheng outcome might change since the symptoms and signs might change. There are many Zhengs in TCM, either simple Zheng or combined ones.
Zheng, as the key term and basic unit in TCM therapeutic theory, develops along with progress in disease theory. Decades ago, Zheng did not include any signs from modern diagnostic instruments; nowadays, Zheng is, to some extent, combined with or referenced to the disease diagnosis during the therapeutic process.
The process of how to get the outcome is called differentiation of Zheng, which is based on the physiology and pathology of TCM.
IMPACT OF ZHENG ON DISEASE TREATMENT
The key units of a disease usually comprise etiology, pathology and disease location. Modern medicine tries to identify the specificity of the cause, pathology and location, and as a result its therapeutic approach targets that specificity. New drugs in modern medicine are developed from strictly designed scientific pharmacological tests that target the specificity. Pharmacological tests often show better effects than those seen in the clinic.
In differentiation of Zheng, the clinical effect should be good if the theory of Zheng differentiation and the physiology of TCM are followed. Unfortunately, the effect in practice, even when the differentiation of Zheng is followed completely, is not as good as the theoretical one. There must be reasons explaining the difference between theoretical and clinical effects in TCM practice.
In summary, there are two questions about the therapeutic problem in medical science. One is why there is a difference between the pharmacological and clinical effects in modern medicine. The other is why there is a difference between the theoretical and clinical effects in traditional Chinese medicine.
These questions suggest that there are shortcomings in the therapeutic approaches of both modern medicine and traditional Chinese medicine.
Any disease (morbidity) has two aspects of appearance. One is the so-called specificity related to the realities of the morbidity, such as the pathological change. The other is the non-specificity, which refers to the reactions caused by interactions between personal physique and the environment, such as heterogeneous manifestations. Modern medicine aims to explore the specificity of morbidity, whereas traditional Chinese medicine mainly aims to explore the reality of the morbidity by examining the external appearance (that is, the differentiation of Zheng). It is believed that the non-specificity can sometimes influence or change the process of morbidity, and that targeting the specificity alone is not enough to stop the progress of morbidity [4].
Disease mainly refers to the specificity of cause and pathology, with less emphasis on the non-specificity. Non-specificity includes all symptoms and signs not directly induced by the specific cause and pathology. Usually, the specificity decides the process of the disease. Drugs in modern medicine target the specific cause and pathology and usually give a good effect, even if it is not as good as the pharmacological effect. Since a specific cause and pathology cannot be found for all diseases, the effect of modern drugs depends on whether the cause and pathology are clear or not. In reality, modern drugs are good at curing diseases with a clarified cause and pathology, and not good at curing diseases with multiple factors in the pathogenesis, which have become more common in medical science.
However, whenever the non-specificity influences the specificity, drugs targeting the specificity do not give a good effect. That is the main reason why modern drugs are sometimes not effective in some cases in the treatment of a disease with a clarified cause and pathology.
Zheng mainly refers to the non-specificity, and to the part of the specificity that can be obtained from symptoms and signs by asking, watching and feeling, since no modern diagnostic instruments are used. Chinese herbal medicine, based on the Zheng taken as the outcome of the differentiation of symptoms and signs, targets the non-specificity and part of the specificity. The effect of herbal medicine is not so good in curing a disease with specific signs that can only be obtained by modern diagnostic instruments, since Zheng does not cover those signs. However, the effect of herbal medicine is better in treating cases where the non-specificity decides the process of the disease. Thus, the reason why there is a difference between the theoretical effect based on Zheng differentiation and the clinical effect is that Zheng differentiation cannot exactly differentiate the specificity of a disease.
COMBINING ZHENG WITH DISEASE: NEW STRATEGY IN THERAPEUTIC APPROACH
Following TCM Zheng theory, different diseases may be treated by the same therapeutic approach if they show the same Zheng; one herbal preparation can thus be used to treat different diseases, a common phenomenon in TCM. Similarly, the same disease may be treated by different therapeutic approaches if it shows different Zhengs, and it is common in TCM for one kind of disease to be treated with different therapies. As mentioned above, Zheng is the outcome of the differentiation of symptoms and primary signs obtained by watching (tongue watching) and feeling (pulse feeling), and it is definitely not very precise. The following example illustrates the shortcoming of Zheng information. Gastritis and stomach cancer can show similar symptoms and primary signs, suggesting that they could be differentiated as the same Zheng in TCM and could be treated by the same TCM approach. The effect, no doubt, will be different, since stomach cancer is difficult to cure with herbal medicine. Thus, the differentiation of Zheng alone does not give a good effect when the specificity, which is the decisive factor in the evaluation of effects, is not clarified.
It was reported that the effects of two herbal preparations targeting coronary heart disease with different Zhengs were at least partially dependent on the Zhengs. For coronary heart disease cases with Qi deficiency, the Zheng could be alleviated by herbal medicine reinforcing Qi at an effective rate of 89%, whereas the same cases could be alleviated by herbal medicine targeting coronary heart disease and nourishing Yin at an effective rate of 60%. For coronary heart disease cases with Yin deficiency, the Zheng could be alleviated by herbal medicine nourishing Yin at an effective rate of 87%, whereas the same cases could be alleviated by herbal medicine reinforcing Qi at an effective rate of 65%. Thus, the differentiation of Zheng plays an important role in the therapeutic process and affects the therapeutic result for a specific disease.
According to disease theory, there should be a specific therapy targeting the specific cause, pathology and location; once the specificity is clarified, the disease should be cured. In reality, there may not be a good effect in alleviating some diseases or symptoms even when the specificity is clarified, because the non-specificity influences the specificity. Thus, targeting the specificity of a disease may give little or no effect when the non-specificity is decisive in the evaluation of the effect. The example of blood-pressure-lowering drugs helps to explain this. For patients with hypertension there are good drugs for decreasing blood pressure, yet there are always some cases that show no effect after taking the drugs. Part of the reason is that, in some cases of hypertension, the non-specific appearance plays a key role in influencing the effect of the drugs. At this point, new anti-hypertension drugs need to be developed for the cases in which the non-specificity is the decisive factor.
Combining the differentiation of Zheng with diagnosis of disease, which is combining herbal medicine mainly targeting non-specificity with modern drugs targeting the specificity, would achieve the best therapeutic effect.
Many clinical studies have shown that combining modern drugs with herbal medicine markedly increases the effect. For example, the effective rate in treating coronary heart disease with modern drugs (routine therapy) was 45.5%, whereas when combined with herbal medicine it was up to 87.3% [5]. The important task is to explore how to combine the two therapies.
More double-blinded clinical trials need to be conducted, both for modern drugs and herbal medicine. All specific and non-specific information needs to be collected for further analysis.
Any new drug, even one targeting the exact specific pathology, does not act in all cases of a disease, since the non-specificity may affect the process of pathogenesis. Likewise, any herbal medicine originating from an exact differentiation of Zheng does not act in all cases with that Zheng, since the lack of sufficient specificity may miss the decisive factor in the treatment.
After the information about new drug classification is obtained, the best effect could be achieved by either combination of drugs targeting the specificity with herbal medicine targeting the non-specificity, or by complex new drug development focusing both on specificity and non-specificity.
TCM focuses on the integrity of the human body and its close relationship with its social and natural environments. It recognizes human physiology by analyzing external information through system discrimination and a cybernetic approach, and regards any disorder as caused by a disturbance in some part of the self-controlled system within the integrity. In therapeutics, TCM targets the non-specificity and part of the specificity by natural means.
The reason modern drugs cannot achieve the effect seen in pharmacological studies, or the same effect in the same disease, is that the Zheng in TCM contributes to the progress of the disease. It is important to clarify in which situations drugs targeting the disease specificity will be effective and how to make such drugs more effective.
|
2018-04-03T01:55:52.084Z
|
2004-07-01T00:00:00.000
|
{
"year": 2004,
"sha1": "d512304f62dde4146079226fdd7770cffa026acc",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3748/wjg.v10.i13.1854",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c2525e97caa50309c188f157a270a3f70deb858c",
"s2fieldsofstudy": [
"Medicine",
"Philosophy"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
250496692
|
pes2o/s2orc
|
v3-fos-license
|
Safety of heterologous primary and booster schedules with ChAdOx1-S and BNT162b2 or mRNA-1273 vaccines: nationwide cohort study
Abstract Objective To assess the risk of adverse events associated with heterologous primary (two dose) and booster (three dose) vaccine schedules for covid-19 with Oxford-AstraZeneca’s ChAdOx1-S priming followed by mRNA vaccines (Pfizer-BioNTech’s BNT162b2 or Moderna’s mRNA-1273) as compared with homologous mRNA vaccine schedules for covid-19. Design Nationwide cohort study. Setting Denmark, 1 January 2021 to 26 March 2022. Participants Adults aged 18-65 years who received a heterologous vaccine schedule of priming with ChAdOx1-S and one or two mRNA booster doses (with either the BNT162b2 or mRNA-1273 vaccine) were compared with adults who received a homologous BNT162b2 or mRNA-1273 vaccine schedule (ie, two dose v two dose, and three dose v three dose schedule). Main outcome measures The incidence of hospital contacts for a range of adverse cardiovascular and haemostatic events within 28 days after the second or third vaccine dose, comparing heterologous versus homologous vaccine schedules. Secondary outcomes included additional prioritised adverse events of special interest. Poisson regression was used to estimate incidence rate ratios with adjustment for selected covariates. Results Individuals who had had a heterologous primary vaccine (n=137 495) or a homologous vaccine (n=2 688 142) were identified, in addition to those who had had a heterologous booster (n=129 770) or a homologous booster (n=2 197 213). Adjusted incidence rate ratios of adverse cardiovascular and haemostatic events within 28 days for the heterologous primary and booster vaccine schedules in comparison with the homologous mRNA vaccine schedules were 1.22 (95% confidence interval 0.79 to 1.91) and 1.00 (0.58 to 1.72) for ischaemic cardiac events, 0.74 (0.40 to 1.34) and 0.72 (0.37 to 1.42) for cerebrovascular events, 1.12 (0.13 to 9.58) and 4.74 (0.94 to 24.01) for arterial thromboembolisms, 0.79 (0.45 to 1.38) and 1.09 (0.60 to 1.98) for venous thromboembolisms, 0.84 (0.18 to 3.96) and 1.04 (0.60 to 4.55) for myocarditis or pericarditis, 0.97 (0.45 to 2.10) and 0.89 (0.21 to 3.77) for thrombocytopenia and coagulative disorders, and 1.39 (1.01 to 1.91) and 1.02 (0.70 to 1.47) for other bleeding events, respectively. No associations with any of the outcomes were found when restricting to serious adverse events defined as stay in hospital for more than 24 h. Conclusion Heterologous primary and booster covid-19 vaccine schedules of ChAdOx1-S priming and mRNA booster doses as both second and third doses were not associated with increased risk of serious adverse events compared with homologous mRNA vaccine schedules. These results are reassuring but given the rarity of some of the adverse events, associations cannot be excluded.
Supplementary Material
For "Safety of heterologous primary and booster schedules with ChAdOx1-S and BNT162b2 or mRNA-1273 vaccines: a nationwide cohort study" Table S1. Definitions and ICD-10 codes for the main and secondary outcomes and the comorbidity covariate
Figure S2. Associated risk of cardiovascular or hemostatic adverse events with the individual heterologous primary vaccine schedules for covid-19 compared to respective homologous mRNA vaccine schedules counterpart
Incidence rate ratios (IRRs) for the outcomes within 28 days were adjusted for calendar period, sex, birth year (proxy for age), region of residency, birth country, vaccine priority group, hospital contact in the last 6 months, and comorbidities. Cell counts less than three (but not zero) are not reported. If a subgroup analysis yielded cell counts less than three (but not zero), the number of cases are reported as less than (<) the sum of the subgroup cell counts, ie, the number of cases reported in the main analysis. Other bleeding events includes a composite of bleeding-related diagnoses other than intracranial hemorrhages. BNT denotes BNT162b2, ChAd ChAdOx1-S, CI confidence interval, m1273 mRNA-1273, and NE not estimated.
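For readers unfamiliar with how such adjusted incidence rate ratios are estimated, the sketch below shows a generic Poisson regression with a log person-time offset in Python/statsmodels. The data frame is entirely hypothetical and the covariate set is drastically simplified compared with the registry-based adjustment described in these legends; it illustrates the modelling idea, not the study's actual analysis.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical aggregated data: event counts, person-time and a schedule
# indicator (1 = heterologous ChAdOx1-S/mRNA, 0 = homologous mRNA).
df = pd.DataFrame({
    "events": [4, 2, 30, 25],
    "person_days": [5.0e5, 4.8e5, 9.0e6, 8.7e6],
    "heterologous": [1, 1, 0, 0],
    "age_group": ["18-45", "46-65", "18-45", "46-65"],
})

model = smf.glm(
    "events ~ heterologous + C(age_group)",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["person_days"]),   # log person-time offset
).fit()

irr = np.exp(model.params["heterologous"])
ci_low, ci_high = np.exp(model.conf_int().loc["heterologous"])
print("adjusted IRR = %.2f (95%% CI %.2f to %.2f)" % (irr, ci_low, ci_high))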
Figure S3. Associated risk of cardiovascular or hemostatic adverse events with the individual heterologous booster vaccine schedules for covid-19 compared to respective homologous mRNA vaccine schedules counterpart
Incidence rate ratios (IRRs) for the outcomes within 28 days were adjusted for calendar period, sex, birth year (proxy for age), region of residency, birth country, vaccine priority group, hospital contact in the last 6 months, and comorbidities. Cell counts less than three (but not zero) are not reported. If a subgroup analysis yielded cell counts less than three (but not zero), the number of cases are reported as less than (<) the sum of the subgroup cell counts, ie, the number of cases reported in the main analysis. Other bleeding events includes a composite of bleeding-related diagnoses other than intracranial hemorrhages. BNT denotes BNT162b2, ChAd ChAdOx1-S, CI confidence interval, m1273 mRNA-1273, and NE not estimated.
Figure S4. Association between heterologous primary vaccine schedules and cardiovascular or hemostatic adverse events by sex
Incidence rate ratios (IRRs) for the outcomes within 28 days were adjusted for calendar period, sex, birth year (proxy for age), region of residency, birth country, vaccine priority group, hospital contact in the last 6 months, and comorbidities. Cell counts less than three (but not zero) are not reported. If a subgroup analysis yielded cell counts less than three (but not zero), the number of cases are reported as less than (<) the sum of the subgroup cell counts, i.e., the number of cases reported in the main analysis. Other bleeding events includes a composite of bleeding-related diagnoses other than intracranial hemorrhages. CI denotes confidence interval and NE not estimated.
Figure S5. Association between heterologous booster vaccine schedules and cardiovascular or hemostatic adverse events by sex
Incidence rate ratios (IRRs) for the outcomes within 28 days were adjusted for calendar period, sex, birth year (proxy for age), region of residency, birth country, vaccine priority group, hospital contact in the last 6 months, and comorbidities. Cell counts less than three (but not zero) are not reported. If a subgroup analysis yielded cell counts less than three (but not zero), the number of cases are reported as less than (<) the sum of the subgroup cell counts, i.e., the number of cases reported in the main analysis. Other bleeding events includes a composite of bleeding-related diagnoses other than intracranial hemorrhages. CI denotes confidence interval and NE not estimated.
Figure S6. Association between heterologous primary vaccine schedules and cardiovascular or hemostatic adverse events by birth year.
Individuals were subgrouped according to whether born in year 1975 or later or before year 1975. Birth year of 1975 corresponds to turning 46 years of age in year 2021. Incidence rate ratios (IRRs) for the outcomes within 28 days were adjusted for calendar period, sex, birth year (proxy for age), region of residency, birth country, vaccine priority group, hospital contact in the last 6 months, and comorbidities. Cell counts less than three (but not zero) are not reported. If a subgroup analysis yielded cell counts less than three (but not zero), the number of cases are reported as less than (<) the sum of the subgroup cell counts, i.e., the number of cases reported in the main analysis. Other bleeding events includes a composite of bleeding-related diagnoses other than intracranial hemorrhages. CI denotes confidence interval and NE not estimated.
Figure S7. Association between heterologous booster vaccine schedules and cardiovascular or hemostatic adverse events by birth year
Individuals were subgrouped according to whether born in year 1975 or later or before year 1975. Birth year of 1975 corresponds to turning 46 years of age in year 2021. Incidence rate ratios (IRRs) for the outcomes within 28 days were adjusted for calendar period, sex, birth year (proxy for age), region of residency, birth country, vaccine priority group, hospital contact in the last 6 months, and comorbidities. Cell counts less than three (but not zero) are not reported. If a subgroup analysis yielded cell counts less than three (but not zero), the number of cases are reported as less than (<) the sum of the subgroup cell counts, i.e., the number of cases reported in the main analysis. Other bleeding events includes a composite of bleeding-related diagnoses other than intracranial hemorrhages. CI denotes confidence interval and NE not estimated.
Figure S8. Sensitivity analyses of the associated risk with heterologous primary vaccine schedules by use of different follow-up definitions
The figure shows the results of the sensitivity analyses assessing a shorter follow-up of two weeks and extending the follow-up to 180 days after the day of the respective second dose (ie, index date). For the latter, the outcomes of Guillain-Barré syndrome and narcolepsy were studied post hoc. Incidence rate ratios (IRRs) were adjusted for calendar period, sex, birth year (proxy for age), region of residency, birth country, vaccine priority group, hospital contact in the last 6 months, and comorbidities. Other bleeding events includes a composite of bleeding-related diagnoses other than intracranial hemorrhages. CI denotes confidence interval.
Figure S9. Sensitivity analyses of the associated risk with heterologous booster vaccine schedules by use of different follow-up definitions
The figure shows the results of the sensitivity analyses assessing a shorter follow-up of two weeks and extending the follow-up to 180 days after the day of the respective third dose (ie, index date). For the latter, the outcomes of Guillain-Barré syndrome and narcolepsy were studied post hoc. Incidence rate ratios (IRRs) were adjusted for calendar period, sex, birth year (proxy for age), region of residency, birth country, vaccine priority group, hospital contact in the last 6 months, and comorbidities. Other bleeding events includes a composite of bleeding-related diagnoses other than intracranial hemorrhages. CI denotes confidence interval and NE not estimated.
|
2022-07-14T13:12:14.434Z
|
2022-07-13T00:00:00.000
|
{
"year": 2022,
"sha1": "a0ebf1eb79ae23de70a3bf20f66a9dc778e439ea",
"oa_license": "CCBYNC",
"oa_url": "https://www.bmj.com/content/bmj/378/bmj-2022-070483.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "e0571295bd73562fd8b19e844836541ca3690add",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
258931077
|
pes2o/s2orc
|
v3-fos-license
|
Physiological Activity of Trace Element Germanium including Anticancer Properties
Germanium is an essential microelement, and its deficiency can result in numerous diseases, particularly oncogenic conditions. Consequently, water-soluble germanium compounds, including inorganic and coordination compounds, have attracted significant attention due to their biological activity. The review analyzes the primary research from the last decade related to the anticancer activity of germanium compounds. Furthermore, the review clarifies their actual toxicity, identifies errors and misconceptions that have contributed to the discrediting of their biological activity, and briefly suggests a putative mechanism of germanium-mediated protection from oxidative stress. Finally, the review provides clarifications on the discovery history of water-soluble organic germanium compounds, which was distorted and suppressed for a long time.
Introduction
At present, germanium is widely recognized as a vital trace element, which is particularly essential for the normal functioning of the immune system and plays a significant role in cancer prevention [1][2][3][4][5][6][7]. Germanium is ubiquitously present in mammalian organs and tissues, with the highest concentration in the thymus. Germanium normalizes many physiological functions, particularly blood characteristics including pH, glucose, minerals, cholesterol, uric acid, hemoglobin and leukocytes [8,9]. Conversely, germanium deficiency can result in numerous diseases, primarily oncogenic conditions [10]. Research has revealed that cancer patients exhibit anomalously low concentrations of germanium in their blood serum [7,11,12]. Additionally, germanium levels in cancerous tissues are significantly lower than those in adjacent healthy tissues [13].
Germanium is primarily introduced into the body through the consumption of vegetable-based foods, with an average daily human intake of only 0.4-1.5 mg [14,15]. Research on the determination of this element in plant raw materials unexpectedly revealed an elevated content in plants and mushrooms traditionally used in ethnomedicine, particularly in China [7,16-18]. Germanium compounds in natural sources have long been considered therapeutic agents with anticancer, antitumor, antiviral and anti-inflammatory effects [19]. The highest germanium concentrations are found in ginseng, saprophytic mushrooms, particularly the lacquered polypore (Ganoderma lucidum) and chaga, as well as in garlic, aloe and echinacea [20-25]. Among these, ginseng and Ganoderma lucidum are widely used in complex therapies of oncological diseases [26-30]. Germanium compounds have been shown to normalize oxygen respiration (i.e., oxidative phosphorylation) in cells, which can retard the growth of tumors [26,31-33]; restoring cellular oxygen respiration is key to treating Warburg-like cancers [33]. The stimulating effect of germanium on oxidizing enzymes such as aldehyde reductase [34] has also been established. Hence, germanium-containing drugs have long attracted the attention of researchers and medical practitioners.
This review specifically focuses on research conducted within the past decade, during which inorganic and coordination compounds of germanium have been incorporated into medical practices alongside water-soluble organic germanium compounds [3,45,46]. Moreover, the toxicity of germanium compounds has been the subject of much controversy and confusion, and the discovery history of stable water-soluble germanium compounds has been significantly distorted. Therefore, the initial focus of this review is to elucidate the tangle of errors, inaccuracies, and myths associated with germanium. At the end of this review, the authors propose a putative mechanism for germanium-mediated cancer treatment and prevention based on the unique chemical properties of germanium.
Historical Digression and Toxicity of Germanium Compounds
The chemical element number 32 was predicted by D.I. Mendeleev in 1871 and was discovered in 1886 by C. Winkler, who named it after his homeland, Germany (Figure 1). Germanium has had a tumultuous history since its discovery. Initially, it remained an inaccessible chemical element that did not garner much scientific attention. It was not until 1948, when the first semiconductor transistors and diodes were created using germanium, that it gained significance in the field of microelectronics. However, its use as a semiconductor was soon superseded by silicon and it was again forgotten. In the 1970s, the biological activity of the newly discovered stable water-soluble organic germanium compounds [36] attracted the attention of scientists, among which bis(carboxyethylgermanium) sesquioxide (Ge-132) was the most famous. However, in the late 1980s, interest in such compounds declined sharply as a result of an ongoing discussion about the allegedly anomalously high toxicity of organic germanium compounds (similar to organic mercury compounds). The decline was triggered by a typo in an article published in 1987 in an inaccessible journal, which listed erroneous toxicity values for Ge-132 [6,32,33,47]. This mistake was not immediately noticed and led to erroneous criticism in subsequent publications issued in highly influential scientific journals. A correction was published in 1988; however, until recently, many authors quoted only secondary sources that cited the erroneous data on the high toxicity of organic germanium compounds. The situation was further aggravated by a barbaric experiment conducted in Japan to determine the lethal dose of Ge-132 for humans, which involved the consumption of an astronomical dose of 328 g of germanium, far beyond anything used in medical practice [32,48-50].
The results of this experiment showed that the toxicity of Ge-132 was due to the formation and precipitation of solid germanium dioxide (GeO2) in the renal pelvis [48-50]. Therapeutic doses of organic germanium derivatives are thousands of times lower than this lethal dose. The situation was further exacerbated by cases of germanium poisoning in individuals suffering from severe diseases who took Ge-132 for long periods, at doses hugely exceeding the recommended daily values and without medical supervision. These individuals consumed Ge-132 in total quantities of 15 to 300 g over periods of up to three years or more (see review [50]).
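As a rough sanity check of the statement that therapeutic doses are thousands of times lower than the dose consumed in that experiment, the 328 g can be related to the 50-100 mg recommended daily dose cited below for germanium-containing supplements (treating that daily dose as representative is an assumption made here purely for illustration):

$$ \frac{328\ \mathrm{g}}{100\ \mathrm{mg/day}} \approx 3.3 \times 10^{3}\ \text{daily doses}, \qquad \frac{328\ \mathrm{g}}{50\ \mathrm{mg/day}} \approx 6.6 \times 10^{3}\ \text{daily doses}. $$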
It is evident that, in the instances mentioned, high doses of Ge-132 produced toxic effects due to its hydrolysis in the body to form solid GeO2 [15]. However, it is now known that such poisoning, even with extremely high doses of germanium, can be successfully treated with combined blood-purification therapy [51]. These tragic events led to various controversial political decisions concerning organic germanium. Specifically, Ge-132 was banned in several countries, despite having been widely allowed as a dietary supplement as early as the 1980s. This resulted in the long-term neglect of research on the biological activity of Ge-132, particularly its anticancer properties. Ultimately, this denial of the biological role of germanium was based on erroneous toxicity data published in influential journals. The combination of typographical errors and reliance on secondary sources of information led to the neglect of the potential clinical use of compounds of this unique microelement. These events have also delayed the study of the biological activity of germanium compounds, as noted in reviews [6,47]. To date, many influential journals continue to reject work related to the physiological activity of germanium compounds. It is now time to rectify this situation and restore justice by rehabilitating germanium and its biochemical role.
The low toxicity of Ge-132 has now been firmly established [40,52-54]. In fact, the toxicity of organic germanium compounds [55-60] is lower than that of table salt and of inorganic germanium dioxide, for which the oral LD50 in mice is 5400 mg/kg [55]. For the best-known organic germanium sesquioxide, Ge-132, the oral LD50 is >6300 mg/kg in mice and >10,000 mg/kg in rats, while the intravenous LD50 in rats is >1000 mg/kg [58]. Germatranol, another common germanium derivative, is also of low toxicity: its oral LD50 in mice is 8400 mg/kg and its intravenous LD50 is 300 mg/kg [57]. Thus, both inorganic and organic compounds of germanium are safe at the doses in which they are usually used. It should be noted that all major chemical databases, such as PubChem, currently list correct toxicity values for these compounds.
Inorganic derivatives of germanium have also been involved in a number of incidents. Dietary supplements and elixirs containing both cheap inorganic GeO2 and Ge(IV) coordination complexes (particularly germanium citrate and citrate-lactate) have been widely sold in Japan since the early 1970s. They were advertised primarily for cancer treatment [51], and the recommended daily dose of 50-100 mg was completely safe. However, a number of poisonings by such germanium compounds have been described in persons who took these elixirs for a long time. In all cases, the daily dose of germanium was arbitrarily exceeded by tens or even hundreds of times (up to 5 g of GeO2 per day) over long periods (18-24 months or more) [48,49,61,62]. As a result, the total dose of germanium in these people was between 100 and 500 g. The more common symptoms of inorganic germanium poisoning include weight loss, fatigue, gastrointestinal disorders, anemia, muscle weakness and, in all cases, kidney failure [48-50,61,62]; several fatal cases were also described (see also review [50]). Because of such cases, these elixirs were banned in many countries [60]. Nevertheless, in each of the above-mentioned cases of germanium poisoning, it is necessary to assess not only the harm from the poisoning but also the possible benefits. Patients in the last stages of cancer took these preparations (both as Ge-132 and as GeO2 and other derivatives) in such huge doses independently, at their own risk. Even at such toxic doses, oncological patients who would usually live no more than 3-6 months after diagnosis lived 1.5-3 years or more while taking germanium [50,63]. Moreover, during this time they lived a full life, in contrast to what is typically seen with classical chemotherapy.
Most of these poisoning cases occurred more than 25 years ago, but they worsened the already poor reputation of germanium compounds. In natural compounds, germanium forms very weak chemical bonds with organic molecules, primarily through oxygen atoms. At present, there are no methods to isolate, separate and purify such substances, so natural germanium compounds and/or complexes have not yet been isolated and characterized. Attention has therefore turned to water-soluble synthetic germanium derivatives, whose solubility makes them bioavailable and enables their use at safe doses.
The development of water-soluble organic derivatives of germanium (i.e., compounds containing at least one Ge-C bond) is inextricably connected with the N.D. Zelinsky Institute of Organic Chemistry of the Russian Academy of Sciences (ZIOC RAS) and its scientists. Although germanium sesquioxides were known long ago, they were insoluble in water. The first water-soluble derivatives were discovered in 1965 by Prof. S.P. Kolesnikov [64-66], at that time a graduate student in the laboratory of Prof. O.M. Nefedov [67]. These water-soluble compounds were produced by the hydrolysis of HGeCl3 adducts with cyclohexanone or methyl methacrylate. Later, in 1967, Prof. V.F. Mironov, a former employee of the same laboratory, similarly synthesized another stable water-soluble germanium sesquioxide, bis(carboxyethylgermanium) sesquioxide (Ge-132, CEGS), which is now the best known [68,69].
(O1.5GeCH2CH2COOH)n

In the 1960s, the synthesis of such compounds seemed simple only on paper; in reality, it required highly qualified chemists and specialized equipment, which were available in only a few laboratories in the USSR and the USA. There is, however, a frequent misconception in the literature that K. Asai, a well-known popularizer and author of several books about germanium, was the first to synthesize this compound. In fact, in 1967, at an international scientific conference, K. Asai learned about the discovery of water-soluble germanium compounds from Soviet scientists, who later gave him samples for testing. K. Asai was the first to foresee the pharmaceutical potential of Ge-132 [24]. The history of Ge-132 is now well known (see, e.g., [6,7,24,70,71]). It was Ge-132 that led to the active study of the biological activity of germanium compounds and to their application in medical practice, especially in complex cancer therapy [7,19,31,36,72]. There are clinically documented cases of the successful use of these compounds in cancer treatment; for example, complete remission of lung cancer was achieved when taking Ge-132 [73]. The spectrum of biological activity of Ge-132 turned out to be very broad, with antitumor activity being the most pronounced [40,52-54].
Microbiological methods are another direction for the synthesis of organic germanium compounds. Thus, the yeast fermentation method produces Bio-Germanium, a medicine that acts as an effective immunostimulant, increasing the cytotoxicity of NK cells and activating immunoglobulin, B-cells and the tumor necrosis factor [19]. However, such drugs will remain outside the scope of this review.
The surge in the number of publications (Figure 2) addressing the biological activity of germanium compounds up to the beginning of this century was accompanied by a number of questionable publications containing erroneous toxicity values and by reports of ultra-high-dose poisoning, among other issues. In the last two decades, similar peaks in publication activity were observed following the identification of novel categories of germanium compounds or the disclosure of new types of activity. The average number of publications has risen nearly fourfold over a span of 50 years. Thus, the discovery of novel classes of stable water-soluble germanium compounds is of significant importance.
Germanium Sesquioxides

The most-studied organic germanium compound is bis(carboxyethylgermanium) sesquioxide (Ge-132). Its synthesis is carried out by the addition of trichlorogermane (HGeCl3) to acrylic acid to produce 3-(trichlorogermyl)propanoic acid, followed by its hydrolysis. In this reaction, the trichlorogermyl group Cl3Ge regiospecifically adds to the terminal carbon atom of the vinyl group of acrylic acid (Figure 3) [64,68,69]. Since the first synthesis of this compound was reported 55 years ago [68], the process of producing the starting trichlorogermane has evolved from a technically complex synthesis from elemental germanium [74] to a simple and convenient method using germanium dioxide (GeO2), HCl and H3PO2 [75]. As a result, Ge-132 and other germanium sesquioxides are now readily available.

Structural studies of bis(carboxyethylgermanium) sesquioxide have shown that, in solid form, it can exist in several polymeric forms (repagermanium RGe, propagermanium PGe and the linear polymer GeSP) (Figure 4) [88]. The structure of the polymer affects the rapidity and completeness of its dissolution in water and, as a consequence, its biological activity and dosage. When dissolved in water, it turns into a hydrated form, 3-(trihydroxygermyl)propanoic acid (THGPA). PGe possesses the best water solubility.
As a result, there has recently been increased interest in Ge-132 in its most soluble form, PGe. For example, it is currently used in Japan to treat viral hepatitis B [88]. Another direction in the study of the biological activity of Ge-132 is associated with the direct use of its hydrated form, THGPA. Thus, THGPA has been shown to inhibit melanoma cell proliferation through phagocytosis [98]. Furthermore, it was revealed to have analgesic [99] and anti-inflammatory effects [100].

THGPA contains three hydroxy groups in its molecule, which can react with OH groups of vital molecules. Such interactions may explain a number of the physiological effects of Ge-132. To assess the possible mechanisms of this physiological activity, the interaction of THGPA with biologically active compounds such as adrenaline and ATP, which contain vicinal diol functional groups, has been studied in detail. The interaction with these diols explains the numerous physiological functions of Ge-132 at low toxicity [52,100]. It was later found that, in solution, THGPA can form complexes with nucleotides or nucleosides containing cis-diol fragments [101]. The ability of THGPA to form complexes with nucleotides depended on the number of phosphate groups present at the ribose residue. Interestingly, THGPA inhibits the enzymatic activity of adenosine deaminase (ADA) when adenosine is used as a substrate [101].

Given the presence of several reaction centers in the Ge-132 molecule, chemical modification has been explored to increase its biological activity and broaden its scope of application. Several Ge-132 derivatives have been synthesized, including those substituted on the carboxylic group, 3-alkyl-substituted derivatives, and those with substituents on the germanium atom.

It was previously shown that the introduction of aromatic and heteroaromatic substituents (quinoline, anthraquinone and naphthalene) as an ester group in Ge-132 increased antitumor activity compared to Ge-132 itself [24,36]. At the same time, the introduction of an alkyl substituent at position 2 of the propionic acid fragment (R1 = Alk) significantly reduced antitumor activity (Figure 5) [24,36].

(O1.5GeCH2CHR1COOR2)n, R1 = H, Me, Alk; R2 = Alk, Ar, Het

Later, esters with naphthalene and phenanthrene fragments, as well as N-arylamides with anthraquinone and dibenzofuran fragments, were synthesized (Figure 6) [102,103]. The resulting compounds had stronger cytotoxic activity than Ge-132, while the derivatives of methacrylic acid (R1 = Me) were less active than the analogous derivatives of acrylic acid (R1 = H) [102,103]. These studies demonstrate possible means of modifying Ge-132 to enhance its biological activity.

In parallel with the derivatives of Ge-132, a germanium sesquioxide with resveratrol was synthesized (Figure 7) [104]. The antioxidant activity of the resulting compound was higher than that of Ge-132 and resveratrol separately, i.e., a synergistic effect was observed.
Germatranes, Germocanes

Germatranes (1) are another interesting class of biologically active germanium compounds; they are cyclic molecules stabilized by a hypervalent germanium atom (Figure 8) [105-110]. Several compounds of this class were identified as having high biological activity, including a peculiar hybrid of Ge-132 and germatrane, 3-germatranyl-substituted propionic acid (2), and its derivatives, which showed strong activity against various tumors [111-113]. Based on caffeic acid, 3-germatranyl-3-(4-hydroxy-3-methoxyphenyl)propionic acid (3) was synthesized; it showed strong activity against the cervical tumor U14 (in vitro and in vivo), inhibiting the cervical cancer cell line U14 with an IC50 of 48.57 mg/L (117.32 µM), while the degree of tumor growth inhibition in the animal experiment was 64% [114]. The 2-aminoethoxy-substituted germatrane (1, R = OCH2CH2NH2) inhibits the activity of mononuclear alkaline phospholipase A2 and may serve for the development of new antisclerotic drugs to prevent lipid metabolism disorders [115]. In addition, this compound has a beneficial effect on the bioenergetic characteristics of mitochondria, increasing the efficiency of oxidative phosphorylation and the rate of oxidation of NAD-dependent substrates by mitochondria [116-118]. Germatranol (1, R = OH) reveals a similar activity; it also acts as an antioxidant and reduces the content of reactive oxygen species (ROS) in plant cells [119]. Germatranol contains a hydroxy group, which (like the hydrated form of Ge-132) can interact with functional groups in vital molecules. Thus, germatranol hydrate interacts with simple amino acids (glycine, L-alanine, β-alanine, and L-valine), giving the corresponding aminocarboxygermanates [120].

In addition to germatranes, their bicyclic analogues, germocanes (quasigermatranes, 4), and monocyclic analogues, hypogermatranes (5), have been synthesized, and their biological activity was found to be similar to that of germatranes (Figure 9) [108,121-126]. The hypogermatranes 6 [127] and 7 [128] obtained in this way are molecules in which the ligands are coordinated to the germanium atom (Figure 10). These compounds exhibit antimicrobial activity against various strains of fungi and bacteria. Their pesticidal activity against Corcyra cephalonica has also been established.

Hypogermatranes 8, in which the ligands are coordinated to Ge(IV) via the azomethine nitrogen atom and the thiol sulfur or enol oxygen atom, are also known (Figure 11) [129,130]. These compounds have strong fungicidal and bactericidal properties. Furthermore, they are antioxidants and DNA splitters, and compounds 8b showed strong antifertility activity [130].

Finally, the first stable water-soluble germylene (a compound of divalent germanium) 9 with a dipyrromethane ligand was described and its biological activity studied (Figure 12) [131]. Compound 9 has been shown to have an antiproliferative effect comparable to that of cisplatin. These results form the basis for further biological research using germylenes, which are highly active compounds of low-valence germanium.
Other Germanium Compounds

Among compounds of other classes, germanium has been introduced into molecules with known physiological activity, and the resulting compounds showed a synergistic effect. One such compound is ascorbic acid, into which germanium was introduced as a substituent. Thus, an amide of trimethylgermylpropionic acid 10 was synthesized (Figure 13); it possesses high antioxidant properties and has been proposed for the treatment of atopic dermatitis [132,133]. Similarly, a stable lipophilic ascorbic acid derivative 11 with high antioxidant activity was obtained (Figure 13) [92].

Complex 12 also showed high anticancer activity. It has a significant inhibitory effect on the proliferation and growth of the human cancer cell lines MCF-7, HepG2 and Colo205, with high selectivity between cancerous and normal cells [135,136]. The inhibitory effect on the proliferation of these cell lines is thought to occur through the induction of apoptosis via the ROS-dependent mitochondrial pathway [135,136].

Germanium was also introduced into dihydroartemisinin (DHA) as an analogue of Ge-132 (the product of HGeCl3 addition to crotonic acid) (Figure 15) [137]. The resulting DHA-Ge complex 13 displays a synergistic effect of DHA and Ge-132, i.e., it effectively inhibits the proliferation of HepG2 cells and can induce their apoptosis. Complex 13 is regarded as a promising antitumor agent [137].

Steroids are another class of physiologically active compounds into which germanium was introduced, as a substituent at position 16 [138-140]. The predicted biological activity of these and a number of other similar compounds was calculated by QSAR [141]. Antitumor, antiseborrheic and dermatological activities are the most characteristic predicted biological properties of these steroids.

Apart from the modification of natural compounds, the GeR3 moiety has been introduced into various heterocyclic derivatives. Thus, a number of germyl-substituted hetarylbenzimidazoles (14) were synthesized and showed high cytotoxicity against the cell lines MG-22A, HT-1080 and NIH 3T3 (Figure 16) [142]. A similar series of germyl-substituted pyran-3-carbonitriles (15) also showed high cytotoxicity and inhibition of matrix metalloproteinase (Figure 16) [143]. The introduction of a germyl substituent at heterocyclic position 5 (in furan or thiophene) was shown to contribute to the emergence of cytotoxicity.
Inorganic and Coordination Germanium Compounds

Inorganic and coordination germanium compounds are now well established in medical practice (see reviews [3,144,145] and the monograph [46]). The structure of such compounds is discussed in detail in the review [146]. Problems with the use of GeO2 in medical practice in the 1980s were related to its low solubility, which required a substantial increase in the dose. It was recently shown to be possible to synthesize highly soluble forms of GeO2 [147], which opens up new avenues for its use, including in medicine. Among the coordination germanium compounds, the most studied are germanium(IV) citrate and germanium(IV) citrate-lactate, which, like GeO2, are of low toxicity but exhibit nephrotoxicity at high doses [6,47,58]. These compounds activate the immune system and are recommended for the treatment of a wide range of diseases, primarily oncological [3,43,46,144,145,148].

Complexes of germanium(IV) with an acetylacetonate ligand, [Ge(acac)3]+, with different anions (16) are also known (Figure 17) [149]. These complexes exhibit high activity against different cancer cell lines, with high selectivity for cancer cells compared to normal epithelial cells; furthermore, they induce significant apoptosis [149]. A number of Ge(IV) complexes with natural polyphenols were synthesized and shown to be promising pharmacologically active substances for cancer treatment. The quercetin-germanium complex (17) (Figure 18) showed high cytotoxicity against four tumor cell lines (PC-3, HeLa, EC9706 and SPC-A-1) [150,151]. Among the other polyphenolic compounds used in the synthesis of complexes with Ge(IV) are the natural coumarin daphnetin (18) and the glucosylxanthone mangiferin (19) (Figure 19) [152]. The resulting Ge(IV) complexes exhibit high antioxidant activity and demonstrate a strong intercalating ability towards calf thymus DNA; in addition, these two complexes have a strong antiproliferative effect on HepG2 cancer cells [152]. Last but not least, a germanium(IV) complex with hesperidin, a flavanone glycoside, was synthesized, although its structure was not established [153]. This complex showed high activity against hepatocellular carcinoma in rats.
A Possible Mechanism of Anticancer Action of Germanium Compounds

A century ago, the Nobel Prize winner Otto Warburg observed that tumors produce excess lactate in the presence of oxygen. He proposed that the origin of cancer lies in the replacement of oxidative phosphorylation by glucose fermentation, which he interpreted as mitochondrial dysfunction [154-158]. This phenomenon was called aerobic glycolysis, or the "Warburg effect". Later, the concept of mitochondrial oxidative stress was developed [159-163]. Mitochondrial oxidative stress leads to the overproduction of ROS, which, at the cellular level, causes aerobic glycolysis, DNA damage, autophagy/mitophagy, and protection against apoptosis [163]. During oxidative stress, the most reactive and damaging ROS is the hydroxyl radical (HO•), which is produced from hydrogen peroxide by the Fenton reaction [164]. To protect against or prevent oxidative stress, antioxidants must be applied. Antioxidants react stoichiometrically with ROS; they are required in large amounts to suppress oxidative stress and can have side effects [165-168].

Germanium compounds were found to be effective against oxidative stress [43,71,96]. Old publications describe unique properties of germanium derivatives, which led us to suggest a putative mechanism of oxidative stress suppression/prevention. In 1930, R. Schwarz and H. Giese studied the reaction of alkali germanates with hydrogen peroxide and obtained peroxyhydrates [169]. Later, in 1935, R. Schwarz and F. Heinrich proved that these peroxyhydrates are coordination germanium compounds (not peroxides), with H2O and H2O2 as ligands [170]: K2Ge2O5·2H2O2·2H2O, Na2Ge2O5·2H2O2·2H2O, Na2GeO3·2H2O2·2H2O. Such complexes do not oxidize iodides and evolve oxygen. By this means, germanium derivatives catalytically decompose hydrogen peroxide, and trace quantities of germanium can keep hydrogen peroxide at low levels, thus dramatically reducing the formation of HO•, the most damaging ROS, by the Fenton reaction (Figure 20). Therefore, germanium derivatives can dramatically reduce hydrogen peroxide levels in cells, suppressing or preventing oxidative stress. This explains the important role of germanium in the restoration of oxygen respiration in Warburg-like cancers.
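For reference, the Fenton reaction mentioned above is commonly written in its textbook form as

$$ \mathrm{Fe^{2+} + H_2O_2 \rightarrow Fe^{3+} + OH^{-} + HO^{\bullet}}, $$

so any process that keeps the steady-state concentration of H2O2 low, such as the catalytic decomposition described here, directly limits the generation of HO•. This equation is given only as general chemical background; the cited works [164,169,170] should be consulted for the specific systems and conditions they studied.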
Conclusions

Germanium is a vital ultra-microelement that participates in the fundamental biochemical reactions of a living cell, determining the broadest range of biological activity of its compounds. Germanium normalizes the immune system, which is essential for cancer prevention. Germanium's ability to restore cellular oxygen respiration is particularly attractive and can serve as the basis for the treatment of Warburg-like cancers. In addition to organic compounds, other classes of germanium compounds, particularly the well-known coordination compounds, have become the subject of studies of physiological activity in the last decade.

Based on present knowledge, it is anticipated that the exploration of biologically active germanium compounds will progress in two main directions: firstly, through comprehensive investigations of established compounds, primarily Ge-132, aiming to obtain a more thorough understanding of their properties; secondly, through the synthesis of novel derivatives of known compounds to enhance their biological activity and broaden their range of effects. Furthermore, research in germanium chemistry holds the potential to unveil new categories of water-soluble germanium compounds and their associated properties. Of particular relevance is the study of the mechanism of action of germanium compounds in living cells. It has been observed that germanium is integral to the active centers of certain enzymes and is involved in oxidative reactions, primarily with hydrogen peroxide, without generating detrimental reactive oxygen species, including free radicals. Consequently, germanium compounds facilitate the restoration of oxygen respiration (i.e., oxidative phosphorylation) in cancer cells, thereby impeding or even halting the growth of Warburg-like tumors. Understanding this mechanism in depth will enable the purposeful synthesis of novel germanium compounds with targeted biological activity, yielding more significant and directed therapeutic outcomes.
Despite being neglected by a number of influential journals (see Section 2), research on the biological activity of germanium compounds continues. Reliance on secondary sources containing erroneous data on the toxicity of organic germanium compounds is the real reason why their biological activity has been neglected to date. The publication of the review [171] has sparked further discussion of germanium, its role in living nature, and the associated errors and misperceptions in the scientific literature [172,173].
Data Availability Statement:
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Metamorphosis of Topical Semisolid Products—Understanding the Role of Rheological Properties in Drug Permeation under the “in Use” Condition
When developing topical semisolid products, it is crucial to consider the metamorphosis of the formulation under the "in use" condition. Numerous critical quality characteristics, including rheological properties, thermodynamic activity, particle size, globule size, and the rate/extent of drug release/permeation, can be altered during this process. This study aimed to use lidocaine as a model drug to establish a connection between the evaporation and change of rheological properties and the permeation of active pharmaceutical ingredients (APIs) in topical semisolid products under the "in use" condition. The evaporation rate of the lidocaine cream formulation was calculated by measuring the weight loss and heat flow of the sample using DSC/TGA. Changes in rheological properties due to metamorphosis were assessed and predicted using the Carreau–Yasuda model. The impact of solvent evaporation on a drug's permeability was studied by in vitro permeation testing (IVPT) using occluded and unoccluded cells. Overall, it was found that the viscosity and elastic modulus of the prepared lidocaine cream gradually increased with the time of evaporation as a result of the aggregation of carbopol micelles and the crystallization of API after application. Compared to occluded cells, the permeability of lidocaine for formulation F1 (2.5% lidocaine) in unoccluded cells decreased by 32.4%. This was believed to be the result of increasing viscosity and crystallization of lidocaine instead of depletion of API from the applied dose, which was confirmed by formulation F2 with a higher content of API (5% lidocaine) showing a similar pattern, i.e., a 49.7% reduction of permeability after 4 h of study. To the best of our knowledge, this is the first study to simultaneously demonstrate the rheological change of a topical semisolid formulation during volatile solvent evaporation, resulting in a concurrent decrease in the permeability of API, which provides mathematical modelers with the necessary background to build complex models that incorporate evaporation, viscosity, and drug permeation in the simulation one at a time.
Introduction
Generally, most topical products are produced in semisolid dosage forms, such as creams, ointments, gels, lotions, and emulsions [1], and most of them target the skin or subcutaneous tissue [2]. These products must increase the permeation of drug molecules and preserve the rate and extent of penetration in the skin layers to produce appropriate therapeutic effects [1][2][3]. Different factors, including the physicochemical properties of the active pharmaceutical ingredient (API) and the interaction between the formulation and the skin, influence this permeation; for example, enhanced permeation has been reported for formulations containing volatile vehicles compared to formulations containing non-volatile vehicles, which was attributed to the supersaturation of fluocinolone in the formulation resulting from the evaporation of isopropanol [20]. Chia-Ming et al. [21] performed two sets of skin permeation experiments in vitro to evaluate topical minoxidil delivery and the role of thermodynamic activity. In this study, minoxidil at different concentrations (0.5%, 1%, 2%, 3%, 4%, and 5%) in a hydroalcoholic vehicle (fixed composition of propylene glycol/water/ethanol, 20.0:63.2:16.8) was prepared, and the results showed a reduction in the flux of minoxidil at the higher concentrations (3%, 4%, and 5%) in formulations composed of propylene glycol, ethanol, and water. The decrease in flux was due to the crystallization of minoxidil after the evaporation of the volatile vehicles. To predict the “in use” penetration profile of metronidazole semisolid products under clinical conditions, Arora et al. [23] developed a physiologically-based pharmacokinetic model of metronidazole using in vitro permeation testing (IVPT) data, which successfully captured the metamorphosis of metronidazole gel and cream after application. However, despite its relevance for the development of topical semisolid products, there have been only limited attempts to correlate the changes in the rheological properties of a formulation during metamorphosis with the permeation of the API.
To our knowledge, the metamorphosis of topical pharmaceutical products, especially under “in use” conditions where a product is dosed as a finitely thin layer in a dermatologically relevant environment, is understudied. This study aimed to use lidocaine as a model drug to demonstrate the connection between evaporation, the simultaneously changing rheological properties, and the permeation of APIs in topical semisolid products under the “in use” condition. Lidocaine, as a hydrophobic drug, suffers from poor solubility in topical anesthetic products. With the evaporation of solvents, it tends to crystallize rapidly because of its altered solubility. Therefore, the evaporation rate of the prepared eutectic oil (O)/water (W) creams of lidocaine was evaluated using a thermal analysis system, the rheological properties were monitored by a rheometer, and the permeability was studied with comparative IVPT by controlling the evaporation condition of the formulation. Determining the metamorphotic events helps to understand their influence on the in vitro permeation profile and to mitigate the potential failure modes of topical semisolid products. Along with these physicochemical characterizations, the revealed underlying mechanism provides mathematical modelers with the necessary background to build complex models that incorporate evaporation, viscosity, and drug permeation in the simulation at the same time.
Preparation of Formulations
Due to its poor solubility and temperature sensitivity, the optimized lidocaine:IPA/water eutectic mixture was utilized to prepare the cream formulation, where the lidocaine was melted at room temperature to form the oil phase, based on previous literature [24]. In the preparation of the lidocaine O/W cream, the materials listed in Table 1 were weighed and then homogenized at 20,000 RPM for 10 min at 25 °C using an ULTRA-TURRAX ® T18 (IKA, UK) before being stored in air-tight plastic containers. Sodium lauryl sulfate and carbopol 980 were used as the surfactant and thickening agent, respectively. As shown in Table 2, a series of carbopol fluids were prepared with concentrations ranging from 0.1% to 1.0% (w/w). The pH of the carbopol fluids was adjusted to 7 by neutralizing carbopol 980 with a 5.39 M sodium hydroxide solution in a weight ratio of 2.3:1. The prepared fluids were degassed for 5 min in an ultrasound water bath before being stored in airtight containers.
Determination of Weight Loss by Evaporation
The evaporation rate of lidocaine cream samples was assessed using a thermal analysis system equipped with thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC) (TGA/DSC3+, Mettler Toledo, Columbus, OH, USA). Five milligrams of cream sample was weighed into alumina crucibles for measurement. Samples were held isothermally at the experimental temperature of 32 °C for 240 min. The weight of the sample and heat flow were then measured, and weight percentage change versus time and heat flow versus time were plotted. The evaporation rate was then determined and plotted versus time by calculating the actual weight loss per unit time using the equation published elsewhere [25].
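Since the evaporation rate is simply the weight loss per unit time, it can be recovered numerically from the exported TGA trace. The following minimal Python sketch shows one way to do this; the time grid and the toy weight curve are placeholders standing in for the instrument export, not data from this study.

import numpy as np

# Placeholder TGA export: time in seconds and sample weight in mg (toy decay
# toward ~40% non-volatile residue, mimicking the isothermal hold at 32 °C).
time_s = np.linspace(0, 14_400, 241)
weight_mg = 5.0 * (0.40 + 0.60 * np.exp(-time_s / 4000.0))

# Percentage weight remaining, as plotted in the TGA curve
weight_pct = 100.0 * weight_mg / weight_mg[0]

# Evaporation rate = actual weight loss per unit time (mg/s), i.e. the negative
# time derivative of the weight signal
evap_rate_mg_per_s = -np.gradient(weight_mg, time_s)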
Measurement and Modeling of 'in Use' Apparent Viscosity
The apparent viscosity of lidocaine cream and carbopol gel samples was measured as a function of shear rate from 0.001 to 10,000 s−1 using a controlled shear rate sweep test with an MCR 302e rheometer (Anton Paar, North Ryde, NSW, Australia). A sufficient volume of formulation was carefully loaded onto the stage to form a homogeneous, thin layer of sample before the parallel plate geometry of 40 mm diameter moved to a measuring gap of 0.5 mm. The protruded sample was then trimmed before an alloy hood was placed over the stage to mitigate the wall-slipping effect at high velocity. The temperature of the sample was equilibrated at 32 ± 0.5 °C for 2 min using a P-PTD200 measuring cell to simulate the surface temperature of the skin before measurements. Specifically, to understand the change of viscosity under the “in use” condition, a fresh thin layer of lidocaine cream was left open on the stage each time at this temperature to dry, and the evaporation state of the sample was controlled by the drying time, i.e., 0, 15, 30, 45, 60, 75, 90, 105, and 120 min. The apparent viscosity was thereby obtained at various evaporation states. The polymer concentration in a formulation changes as the amount of volatile solvent decreases with evaporation, which further leads to changes in viscosity [25]. Therefore, the viscosity of carbopol fluids at various concentrations was studied in this work to understand the influence of a continuously changing carbopol concentration on rheological properties, so that this kind of impact can be estimated in an evaporating formulation. The studies were performed in a temperature- and humidity-controlled facility where ambient temperature and humidity were closely monitored, because the metamorphosis can be impacted by environmental factors. All measurements were taken in triplicate using fresh samples. The retrieved data were fitted to the Carreau–Yasuda model (Equation (1)):

η_a = η_∞ + (η_0 − η_∞)·[1 + (λ·γ̇)^a]^((n−1)/a)    (1)

where γ̇ is the shear rate (s−1), η_a is the apparent viscosity (Pa·s), η_∞ is the infinite shear viscosity (Pa·s), η_0 is the zero-shear viscosity (Pa·s), λ is the time constant (s), a is the transition control factor, and n is the power index. Additionally, the water activity of CBP01–CBP10 was measured using an Aqualab Pawkit water activity meter (METER Group Inc., Pullman, WA, USA). In brief, 3 mL of sample was placed homogeneously in the measuring chamber and measured in triplicate after calibration with the provided standard solutions.
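As an illustration of how Carreau–Yasuda parameters such as those in Table 3 can be obtained, the sketch below fits Equation (1) to a viscosity sweep with SciPy. The shear-rate grid, the synthetic “measured” viscosities, and the initial guesses are hypothetical; in practice the rheometer export for each drying time would be used instead.

import numpy as np
from scipy.optimize import curve_fit

def carreau_yasuda(shear_rate, eta0, eta_inf, lam, a, n):
    # Equation (1): apparent viscosity as a function of shear rate
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * shear_rate) ** a) ** ((n - 1.0) / a)

# Placeholder sweep covering 0.001-10,000 1/s; real data would replace this
shear_rate = np.logspace(-3, 4, 60)
viscosity = carreau_yasuda(shear_rate, 3300.0, 0.01, 80.0, 2.0, 0.10)

# Fit in log-viscosity space so the low- and high-shear plateaus are weighted evenly
def log_model(shear_rate, *params):
    return np.log10(carreau_yasuda(shear_rate, *params))

p0 = [3000.0, 0.01, 50.0, 2.0, 0.2]  # initial guesses for eta0, eta_inf, lambda, a, n
params, _ = curve_fit(log_model, shear_rate, np.log10(viscosity), p0=p0, maxfev=20000)
eta0, eta_inf, lam, a, n = params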
Determination of Viscoelastic Properties
The viscoelastic properties of lidocaine cream samples, including storage modulus, loss modulus, and complex viscosity, were measured with the conditions and sample treatment procedures described in the preceding section. To identify the linear viscoelastic region (LVR) of the samples, dynamic strain sweep tests were performed by increasing the % strain logarithmically from 0.01% to 1000% at a frequency of 6.28 rad/s and a temperature of 32 ± 0.5 °C. The measurements were taken at 10 points per decade in log mode. After the determination of the LVR, a constant deformation of 1% strain was selected for frequency sweep tests over a range of 0.1–100 rad s−1 at 32 ± 0.5 °C to interrogate the oscillatory rheogram of lidocaine cream samples before and after 2 h of evaporation. The measurements were taken at 5 points per decade in log mode. All measurements were taken in triplicate using fresh samples.
In Vitro Permeation Test (IVPT)
Full-thickness skin was immediately defatted after excision from the abdominal area of 25- to 48-year-old female patients undergoing plastic surgery, with approval by the Metro South and the University of Queensland Human Research Ethics Committee (2018/HE001721). The epidermis was heat-separated using pre-established procedures before being stored at −40 °C until use [26].
For the permeation study, heat-separated epidermis membranes were sandwiched between the donor and receptor chambers of Franz diffusion cells set in a circulated water bath maintained at 37 ± 0.5 °C. The leakage test and skin impedance test were performed using a standard digital multimeter (FINEST 500) at 20 kΩ to exclude any impaired epidermis membranes prior to the study. After the equilibration of the diffusion cells and receptor medium for 30 min, 110 mg of the prepared lidocaine cream samples was dosed onto the membrane with an exposure area of 1.13 cm2. PBS at pH 7.4 with 0.5% (w/w) Volpo™ N20 and 0.05% (w/w) sodium azide was selected as the receptor medium, which was continuously stirred by a magnetic stir minibar placed in the receptor chamber at 600 RPM to maintain sink conditions. The evaporation condition of the formulation was controlled by occlusion of the donor compartment. Evaporation was mitigated in occluded cells by covering the donor chamber with Parafilm, while unoccluded cells were left open to the environment. The receptor medium of 3.2 mL was fully collected from the receptor chamber and replaced with fresh medium at 0.5, 1, 1.5, 2, 3, 4, 5, 6, 7, and 8 h. The study was performed in triplicate.
After the study, 50 µL of the collected receptor medium was spiked into 50 µL of internal standard solution (5 µg/mL prilocaine) and vortexed for HPLC analysis using a Shimadzu Prominence system with a SIL20-AHT autosampler, an SPD20A detector set to 210 nm, and a PSC18-100 Å column with the temperature maintained at 35 °C. A 20 µL volume of this prepared sample was eluted under isocratic flow with a mobile phase consisting of acetonitrile (0.23 mL/min) and 0.5 M sodium phosphate buffer at pH 5.8 (0.73 mL/min). The retention times of prilocaine and lidocaine were 3.5 and 4.5 min, respectively. The calibration curves were created using lidocaine standards with concentrations ranging from 0.097 to 200 µg/mL.
In vitro permeation profiles were generated by plotting the cumulative amount (Q, µg/cm2) and flux (J, µg/cm2/h) of lidocaine permeated versus time (h) [27]. The steady-state flux (Jss, µg/cm2/h) across the exposure area of the epidermis (A, cm2) was estimated from the apparent steady-state slope of the linear region in the plot of the cumulative amount versus time using Equation (2):

Q = Jss·(t − t_lag)    (2)

where t_lag (h) represents the lag time to reach the steady state of permeation [4]. The permeability coefficient was calculated with Fick's law equation (Equation (3)):

Kp = Jss/Cv    (3)

where Kp (cm/h) is the permeability coefficient and Cv (µg/mL) is the drug concentration in the donor.
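A small worked example of Equations (2) and (3): with the cumulative-amount profile in hand, Jss is the slope of the linear region, t_lag its intercept with the time axis, and Kp follows by dividing by the donor concentration. The numbers below are illustrative only, not results from this study.

import numpy as np

# Hypothetical sampling times (h) within the linear region and the corresponding
# cumulative amounts of lidocaine permeated per unit area (ug/cm^2)
t_h = np.array([1.0, 1.5, 2.0, 3.0, 4.0])
Q = np.array([5.0, 12.0, 19.0, 33.0, 47.0])

# Equation (2): Q = Jss * (t - t_lag)  ->  fit a straight line to the linear region
Jss, intercept = np.polyfit(t_h, Q, 1)   # Jss in ug/cm^2/h
t_lag = -intercept / Jss                 # lag time in h (x-intercept)

# Equation (3): Kp = Jss / Cv, with Cv the donor concentration (assumed ~2.5% w/v lidocaine)
Cv = 25_000.0                            # ug/mL (illustrative)
Kp = Jss / Cv                            # cm/h, since 1 mL = 1 cm^3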
Statistical Analysis
The experimental data, including rheological properties and in vitro permeation parameters, was plotted and statistically analyzed using GraphPad Prism version 9.3.1 (GraphPad Software Inc., La Jolla, CA, USA) and Origin 2022b (OriginLab Corporation, Northampton, MA, USA). Data was expressed as the mean ± standard error where feasible. A one-way analysis of variance (ANOVA) was carried out to test differences at the 95% (p < 0.05) significance level between treatments.
Evaporation Profile of O/W Lidocaine Cream
The DSC (heat flow variation) and TGA (% weight variation) curves, recorded with a sample of O/W lidocaine cream F1 subjected to the isothermal cycle performed at 32 °C from 0 to 4 h, are shown in Figure 1a. During the isothermal cycle, five segments were identified. According to the composition of the formulation and the evaporation profile exhibited in Figure 1b, the initial decline of heat flow in segment I (0–332 s) denoted the endothermic evaporation of IPA, while the subsequent exothermic process indicated the seeding and crystallization of lidocaine along with the evaporation of volatile solvents from the O/W system. After the steady increase of heat flow in segment II (332–6577 s), the surge of heat flow in segment III (6578–8526 s) suggested that drastic crystallization occurred, accompanied by a sharp decrease of the evaporation rate, until a dissolution–crystallization balance was reached at the end of segment IV (8527–10,763 s). The mass balance was achieved at the beginning of segment V (10,763–14,400 s), indicating that non-volatile components accounted for 40.05% of the total weight of the sample.
Shear Flow Properties of O/W Lidocaine Cream and Modeling of "in Use" Apparent Viscosity
The apparent viscosity values of O/W lidocaine cream F1 at different evaporative statuses at 32 °C are plotted in Figure 2a as a function of shear rate. Before the steep decrease from 0.004 s−1, the apparent viscosity remained flat between the first three measuring points (0.002 to 0.004 s−1), indicating a possible plateau at low shear rates near zero shear stress [28]. With further growth of the shear rate, shear-thinning behavior was observed for all samples until the infinite-shear plateau (1 to 10,000 s−1) was reached, representing the disentanglement of the carbopol polymer and the aggregation of droplets under higher shear [29]. As expected, the apparent viscosity of the cream samples rose with increasing evaporation time.

The experimental apparent viscosity data of lidocaine cream F1 were fitted to the Carreau–Yasuda model and generated 10 curves with R2 = 0.99 in Figure 2b, representing the flow behavior of the cream at different metamorphotic statuses under “in use” settings. Zero-shear viscosity (η0), infinite-shear viscosity (η∞), and the other parameters obtained by modeling are listed in Table 3. A solid alignment between experimental data and model outputs was achieved, as seen in the modest standard errors. The increase in the zero-shear viscosity from 3300 to 4334 Pa·s as a function of evaporation time indicated a high resistance to flow in the cream samples at a low shear rate. For all samples, the predicted values of infinite-shear viscosity were close to 0, suggesting the full disentanglement of the microstructure under an extremely high shear rate [30][31][32][33]. The power indexes, n, which were smaller than 1, were consistent with the shear-thinning behavior of the samples. Table 3. Carreau–Yasuda model parameters for O/W lidocaine cream F1 based on apparent viscosity measurement. η0: zero-shear viscosity (Pa·s); η∞: infinite shear viscosity (Pa·s); λ: time constant; a: transition control factor; n: power index.

Considering that carbopol is a non-volatile component in the formulation, its concentration increased with the increasing time of evaporation after application, and it was therefore hypothesized that this caused the increase in viscosity. Carbopol was also used as a thickening agent to suspend droplets in the cream. Therefore, its contribution to the increase in viscosity was investigated by predicting the zero-shear viscosity at 32 °C using the preceding modeling method. A logarithmic plot of zero-shear viscosity versus carbopol concentration in the formulations is shown in Figure 4. Similarly, three regimes were identified, including a dilute regime with a slope of 8.34 at the low concentration range from 0 to 0.2%, a semi-dilute regime with a slope of 1.07 at the medium concentration range from 0.2 to 0.4%, and a condensed regime with a slope of 0.16 at the high concentration range from 0.4 to 1%. The increase in the viscosity of the carbopol fluids was consistent with the increasing time of evaporation. However, the water activity of all formulations remained stable at around 1, suggesting that most of the water is unbound in the polymeric system.
Viscoelastic Properties and Impacts of Metamorphosis on Oscillatory Rheogram
The dynamic modulus and shear stress of the lidocaine cream F1 sample at 32 °C under controlled shear strain with an oscillatory amplitude sweep are plotted in Figure 5. As shown in this figure, a plateau of the elastic modulus (G′; Figure 5a) and viscous modulus (G″; Figure 5b), i.e., the LVR, was found before the decrease of G′ and increase of G″. G″ exceeded G′ after the flow point at 63.7% strain, suggesting that elastic components, such as aggregated micelles of carbopol, had a predominant contribution to the microstructure of the formulation at low shear strain before the viscous fraction of the sample dominated the flow behavior at high shear strain, which can indicate the sol/gel transition [34]. The LVR (from 0.01 to 2.08%) was determined by the log derivative of the shear stress (τ) plotted in Figure 5b, where a 0.1 offset was considered the critical limit of the LVR. The yield point at the exit of the LVR, where a substantial deformation of the carbopol polymeric network occurred, was determined at 2.08% shear strain with 7.58 Pa of shear stress.

To understand the impacts of metamorphosis on the viscoelastic properties of the formulation under “in use” conditions, a frequency sweep was performed using 1% strain at 32 °C. The rheograms of O/W lidocaine cream F1 samples within the LVR at 0 and 120 min are depicted in Figure 6. The trace of G′ was constantly above G″, and both moduli remained stable despite the increasing frequency, while the complex viscosity (η*) gradually decreased, indicating a gel-like profile of the samples. However, an enlarged gap between G′ and G″ could be observed after 2 h of evaporation due to the parallel increase of G′ over the frequency range from 0.1 to 100 rad/s, suggesting that a denser 3D matrix of carbopol was generated as a result of metamorphosis [35].
In Vitro Skin Permeation Profiles of Lidocaine
To investigate the skin permeation profile of lidocaine under the "in use" condition and its relationship to the metamorphosis of formulation, an IVPT study with cream formulations listed in Table 1 was performed. The flux and cumulative amount are shown in Figure 7. A good linear region of the cumulative amount was achieved for all formulations under both unoccluded and occluded conditions, and a higher penetration amount of lidocaine was found in occluded cells, suggesting the permeation enhancement caused by occlusion. In unoccluded cells, the steady state of penetration for both formulations ended after 4 h of the experiment since their flux started to decrease, which could be attributed to the increase in viscosity. Another possible reason is the crystallization of lidocaine with the evaporation of volatile solvents. As APIs in crystalline form cannot permeate through the skin barrier, the bioavailability of lidocaine in the formulation is thereby reduced. The flux of lidocaine in occluded cells remained stable until the end of the study, suggesting the drug in both formulations was not depleted. Compared to F1, a higher flux and cumulative amount of F2 were found, which was considered to be caused by the higher concentration and thermodynamic activity of lidocaine in the formulation.
The skin permeation parameters were calculated using Equations (2) and (3) and listed in Table 4. Lidocaine in unoccluded diffusion cells dosed with both formulations exhibited a lower Jss and lower Kp, suggesting a retardant effect on skin penetration due to evaporation. The increase in Kp with the increase in lidocaine concentration in the donor is consistent with the penetration enhancement illustrated in Figure 7. The steady-state flux was reached earlier in unoccluded diffusion cells, as a reduced lag time (t_lag) was observed.
Discussion
In the case of complex preparations, such as topical semisolid products, the metamorphosis of the formulation, such as evaporation and crystallization, can significantly impact the bioavailability of drug products. The evaporation rate of different marketed topical products with various dosage forms, such as solutions, lotions, gels, creams, and ointments, can differ based on the concentrations of volatile excipients (e.g., water, ethanol, and propylene glycol) used in the structure of these products [36][37][38][39]. For example, a gel, compared to an ointment, evaporates more rapidly owing to the higher content of volatiles, such as water and alcohol, in the gel structure [37]. Additionally, it has been demonstrated that maximizing the saturation percentage of APIs (i.e., thermodynamic activity) has a crucial role in optimizing the skin delivery of topical formulations [1]. Drying of topical products, such as gels containing volatile vehicles (e.g., water and ethanol), after topical application causes a thermodynamically unstable supersaturated system and, subsequently, its crystallization. This results in a decrease in the skin permeation of the product [5]. Depending on the dosage form, a range of CQAs, such as particle size, globule size, rheological properties, and thermodynamic activity, can be altered when topical products undergo metamorphosis, leading to specific failure modes [13]. Therefore, in this study, the role of rheological properties, such as viscosity and viscoelastic behavior, in drug permeation under the “in use” condition was investigated. In this work, experimental conditions were not synchronized to allow for evaporation measurements up to a 4 h duration in DSC/TGA, which provided a sensitive microbalance and control over environmental conditions, together with narrow-gap rheological measurements, to study the evaporation of the formulation and the metamorphotic impacts on drug permeation under true in-use conditions of a finite dose (10 mg/cm2), according to OECD guidelines. As illustrated in Figure 8, the API–excipient mixture is trapped in the network of crosslinked polymer chains in the hydrophilic gel phase, and the polymeric content rises due to the simultaneous evaporation and absorption of drug vehicles into the skin, resulting in a condensed carbopol gel network that has a retardant effect on the permeability of the API.

Figure 8. A graphical illustration of the interplay between the applied formulation and skin. The viscosity of topical semisolid products increased with the evaporation of volatile solvents, resulting in a more compact product microstructure at the skin–formulation interface. Thus, the permeation rate of APIs from the formulation was significantly reduced, indicating the product's lower permeability and overall therapeutic efficacy.

TGA and DSC results provide a unique opportunity to decipher the dynamic metamorphosis of the lidocaine cream from primary to tertiary formulation, where a series of metamorphotic events, such as evaporation and crystallization, occur and eventually lead to a change in rheological behavior and drug permeability. The evaporation rate determined by isothermal TGA, which showed five prominent segments at skin temperature (Figure 1), was linked to the simultaneously measured heat flow, suggesting that the evaporation kinetics of the cream were likely related to the crystallization of lidocaine in the colloidal matrix of carbopol. In one study [25], a similar five-segment evaporation of water was observed in colloidal unimolecular polymer systems, where the alteration of surface tension and viscosity was attributed to the change in evaporation rate during the isothermal process [25]. For further differentiation of the evaporation rates of water and other cosolvents, a recently developed method that directly measures the loss of water using a customized evaporimeter and compares it to the weight loss of the other volatiles has been disclosed in the literature [40].
The measured apparent viscosity data of the lidocaine cream under the “in use” condition were aligned with the Carreau–Yasuda model, thus giving great potential to predict the zero-shear viscosity of the cream as a function of evaporation time. The three regimes with different slopes reported in Figure 4 reflect the rearrangement of the microstructure in the complex mixture system during metamorphosis. The growth of the zero-shear viscosity with time is likely to indicate that a higher activation energy, which is strongly dependent on the interaction of polymer chains, is required for the movement of molecules, which, in turn, can possibly result in a lower diffusivity [41,42]. In line with the flow behavior data of topical formulations with varying concentrations of excipients from Li et al., the prediction of the zero-shear viscosity of carbopol fluids, which is shown in Figure 5, further supports this, as three similar regimes can easily be identified [34]. The dynamic rheological properties were assessed by amplitude sweep and frequency sweep to evaluate the viscoelasticity of the lidocaine cream formulation. The elastic modulus and viscous modulus plotted in the oscillatory rheogram (Figure 5) and the derived parameters, including the flow point and yield point, were consistent with previous rheological characterization of commercially available creams [43]. As Figure 5a shows, G″ exceeded G′ after the flow point at 63.7% strain, which could indicate the sol/gel transition [34]. The rheogram of the cream plotted in Figure 6 exhibited a gel-like profile since the trace of G′ was constantly above G″, indicating that the sample formed a continuous network structure, resulting in a strong gel [34]. This implied that elastic components, such as anionic carbopol clusters, are the predominant structural entities in situ after the formulation is applied to the skin [44]. The increased elastic modulus of the samples after 2 h of evaporation also suggested the condensing of carbopol in the formulation as a result of metamorphosis.
By integrating the predicted zero-shear viscosity, the significant influence of metamorphosis on the permeation profile of the lidocaine cream formulations, i.e., the retardant effect on skin penetration, was successfully captured in the IVPT study, where the evaporation condition was manipulated by occlusion of the diffusion cells. Cross et al. [45] studied the penetration of oxybenzone emulsions containing the thickening agent carbomer 940 (from 0% to 0.5%) under both infinite dose (static) and finite dose (in-use) conditions across human skin [45]. The results of both the current study and that of Cross et al. verified that the drug flux was inversely proportional to the viscosity of the formulation under the “in use” condition. Additionally, the loss of volatile cosolvents due to metamorphosis, especially IPA in this case, evolved the formulation mixture into a thermodynamically unstable system, resulting in the spontaneous crystallization of lidocaine [46,47]. Hence, the bioavailability was reduced. Mostly, the evaporation rate of volatile solvents is faster than the permeation rate of APIs and excipients, as these molecules need to pass through the hydrophobic extracellular spaces between keratinocytes, filled with lipid lamellae, in the stratum corneum to get into the skin. Similarly, the literature also shows that the total mass loss due to evaporation is higher than the measured transepidermal water loss on skin to which the formulation has been applied [40]. Real-time quantitative analysis of this dynamic process was reported by Belsey et al. using stimulated Raman scattering (SRS) microscopy, mapping the topography of ibuprofen-d3 crystals on the surface and in multiple layers of the skin to reveal the metamorphosis of topical formulations [48].
In this work, we witnessed the metamorphosis of a topical semisolid formulation caused by evaporation and acknowledged its implications for the rate and extent of percutaneous drug permeation, an effect that is less apparent in past infinite-dose IVPT studies. Although metamorphosis has been taken into consideration for the QTPP by the evolving pharmaceutical industry, this issue is addressed here for the “in use” condition. Overall, the relationship observed between evaporation and changes in viscosity and drug permeability can provide valuable insights into the metamorphosis of topical semisolid products under the “in use” condition, which can lead to specific failure modes of the product, such as crystallization. The stepwise workflow proposed in this work, including TGA, rheological characterization, and IVPT studies, could be adapted for other topical semisolid formulations to understand the impacts of metamorphosis on a case-by-case basis for different APIs and physicochemical properties of formulations. By further integrating physiologically based pharmacokinetic modeling, failure modes can be predicted and mitigated for the development of similar or different Q3 products.
Conclusions
As far as we are aware, this is the first work to present the rheological change of a topical semisolid formulation occurring simultaneously with the evaporation of volatile solvents and leading to a concurrent reduction of the API's skin permeability. It was found that with the evaporation of volatile components from the lidocaine cream, the permeability of lidocaine decreased while the viscosity and elastic modulus of the cream increased as a result of the aggregation of carbopol micelles and the crystallization of lidocaine after application. Along with the physical and structural characterization of topical semisolid products, the potential failure modes for similar or different Q3 products will be predictable through a further understanding of the metamorphotic events and changes in CQAs. This work and the revealed underlying mechanism provide mathematical modelers with the necessary background to build complex models that incorporate evaporation, viscosity, and drug permeation in the simulation at the same time.
EXAMINING CUMULATIVE INEQUALITY IN THE ASSOCIATION BETWEEN CHILDHOOD SES AND BMI FROM MIDLIFE TO OLD AGE
Abstract Socioeconomic status (SES) is among the strongest determinants of body mass index (BMI). For older populations, selection bias is a large barrier to assessing cumulative disadvantages. We investigated the extent to which childhood SES affects BMI from midlife to old age and gender differences in the association. Data come from Midlife in the U.S. We used latent growth models to estimate BMI trajectory over a period of 20 years and examined results under different missing data patterns. Compared to individuals from higher childhood SES, those from lower childhood SES have higher BMI in midlife and experience a faster increase in BMI between midlife and old age. The observed associations remain significant even after controlling for midlife SES. After addressing nonrandom selection, the gap in BMI between high and low childhood SES widens from midlife to old age for women. The findings provide new evidence of cumulative inequality among older adults.
Socioeconomic disadvantage in early life predicts life-course trajectories of body weight.
Individuals who were disadvantaged in early life tend to have higher body mass index (BMI) and greater likelihood of being overweight or obese in adolescence and young adulthood (H. Lee, Harris, & Gordon-Larsen, 2009), and these associations extend to midlife (Giskes et al., 2008;Pudrovska, Logan, & Richman, 2014). Importantly, these adverse effects are stronger and more consistent among women than men, in both early adulthood (Gustafsson, Persson, & Hammarstrom, 2012;Khlat, Jusot, & Ville, 2009) and midlife (Giskes et al., 2008;Pudrovska, Reither, Logan, & Sherman-Wilkins, 2014). For example, studies on SES have found strong negative effects, particularly for women, of early-life SES on adult BMI; although adult SES is among the most widely studied life-course factors leading to adult BMI, researchers have shown that the effects of such early-life disadvantage are independent of the effects of adult SES (Senese, Almeida, Fath, Smith, & Loucks, 2009).
Despite extensive life-course studies on BMI, important questions remain: do BMI inequalities established in early life widen or diminish in later life? Do the adverse impacts of early disadvantage on body weight continue to be more pronounced for women than men? And what is the role of midlife SES in the associations? Using three waves (1995/96-2013/14) from the Midlife in the U.S. Study (MIDUS), the aim of the current study is to investigate these questions. Given the importance of body weight for later-life survival (Zajacova & Ailshire, 2013), responding to these inquiries may provide important policy-relevant guidelines and gender-specific interventions. However, assessing the accumulation of inequality for older populations is quite challenging due to non-random drop-out across surveys (Banks, Muriel, & Smith, 2011; Ferraro, Shippee, & Schafer, 2009; O'Rand & Hamil-Luker, 2005), which can potentially lead to erroneous conclusions regarding the relationship between SES and BMI. Our study builds on prior studies by comparing the results from multiple missing data mechanisms to further examine whether the link between childhood SES and BMI becomes stronger when nonrandom selection is taken into account.
Childhood SES and adult BMI
Although the accumulation of body fat results from complex combinations of biological, behavioral, social, and environmental factors (Wyatt, Winters, & Dubbert, 2006), socioeconomic status (SES) is among the strongest determinants of BMI. A large body of studies based on life-course perspectives has found that low childhood SES is associated with increased BMI among adults (Senese et al., 2009). Research based on European data has indicated that the effects of childhood SES on midlife BMI are independent of socioeconomic position in adulthood (Hardy, Wadsworth, & Kuh, 2000; Giskes et al., 2008). Findings in the U.S. are consistent; for example, using MIDUS, Chapman et al. (2009) found that parental occupational prestige is inversely related to adult BMI and that the association remains significant after accounting for respondents' own SES, particularly for middle-aged women. Similarly, using the Wisconsin Longitudinal Study (WLS), Pudrovska, Logan, et al. (2014) found that parental SES is inversely associated with body weight at age 65 even after controlling for midlife SES. Recent research that has used the Health and Retirement Study (HRS) has augmented the typical measures of adult SES (e.g., by including neighborhood socioeconomic characteristics) and found that the effects of parental SES on BMI still remain significant (Pavela, 2017). Overall, extant evidence supports the critical period model. Thus, we expect that early-life SES will be inversely and significantly associated with later-life BMI even after controlling for midlife SES (Hypothesis [H]1).
Childhood SES and BMI trajectories in later life
There are two competing explanations for how and why the association between childhood SES and BMI varies over the life course. First, cumulative advantage/disadvantage theory suggests that BMI disparities between low vs. high SES will widen throughout the life course because disadvantage in early life might lead to subsequent disadvantages (Dannefer, 2003), which ultimately promote the accumulation of body fat with age. In contrast, the leveling hypothesis proposes that such BMI differentials at earlier ages become muted with increasing age through selective mortality and biological frailty among older populations (Dupre, 2007). That is, disadvantaged individuals who are in poor health are likely to be removed from the observed population through premature death, with those who remain becoming more homogenous in terms of their health status. Regarding such an apparent disappearance of inequalities in later life, cumulative inequality theory suggests that non-random selection may play an important role (Ferraro et al., 2009).
In testing cumulative disadvantage theory with longitudinal studies of aging, a noteworthy concern is attrition from mortality or being lost to follow-up. For example, in MIDUS, approximately half of respondents were lost to follow-up or died between 1995/6 and 2013/14. If the probability of attrition is systematically related to outcomes of interest, the missing-at-random assumption is no longer valid (Little & Rubin, 2014). Such non-random selection leads to several issues; for example, the study sample will not be representative of the population of interest and the estimated associations between covariates and the outcome may be biased (Banks et al., 2011). Given that individuals who are less healthy and of lower SES are less likely to complete surveys, life-course scholars have been concerned that non-random selection may affect assessments of inequality in later life (O'Rand & Hamil-Luker, 2005; Willson, Shuey, & Elder, 2007). In testing cumulative inequality theory, Ferraro et al. (2009) have highlighted the importance of methods that take into account potential selection bias.
Extant studies which used middle-aged populations have found supporting evidence for cumulative disadvantage theory, particularly for women. For instance, using individuals aged 40-60 from the longitudinal Dutch GLOBE study, Giskes et al. (2008) found that women from low SES families show higher BMI at baseline and greater weight gain over a 13-year period than those from high SES families. Similarly, using data from the WLS, Pudrovska et al. (2014) reported that for women, low early-life SES is related to a BMI increase between age 54 and 64.
However, we have little knowledge of the extent to which childhood SES affects BMI trajectories beyond midlife. Based on cumulative disadvantage theory, we expect that BMI will continue to grow steeper from midlife to old age for those from low SES families compared to those from high SES families (H2). Further, guided by cumulative inequality theory (Ferraro et al., 2009), we expect that the association between SES and changes in later-life BMI may appear stronger when non-random selection is taken into account (H3).
Gender differences
Findings from both clinical and population-based studies have indicated that the effects of childhood SES are more consistent among women than men throughout adulthood (Giskes et al., 2008; Gustafsson et al., 2012; Pudrovska, Logan, et al., 2014; Walsemann, Ailshire, Bell, & Frongillo, 2012). This gendered pattern might be partially attributed to biological differences because women tend to expend less energy than men and accumulate more abdominal fat (Lovejoy & Sainsbury, 2009). Cumulative inequality theory, however, suggests that gender differences in the accumulation of inequality may produce differential vulnerability to early-life disadvantage (Ferraro et al., 2009). Early-life environments penalize women more than men, thereby reinforcing relationships between SES and body weight (Pudrovska, Reither, et al., 2014). That is, socioeconomic disadvantage has a greater impact on BMI for girls than boys; girls who are overweight during adolescence are likely to have low educational attainment and in turn have high BMI in midlife. Moreover, some studies have reported that low SES in adulthood is more closely linked with higher BMI among women than men (Drewnowski, 2009; Khlat et al., 2009; Pudrovska et al., 2014). Accordingly, we expect that the adverse effects of childhood SES on later-life BMI will be more pronounced for women than men (H4). Additionally, we expect that the mediating role of midlife SES in the association between childhood SES and later-life BMI will be stronger for women than men (H5).
Sample
Data for this study come from the MIDUS study, a national survey designed to assess the role of social, psychological, and behavioral factors in understanding differences in mental and physical health (n = 7,108; 52% women). MIDUS began in 1995/1996 (Wave [W]1) with noninstitutionalized, English-speaking adults aged 25-74 in the 48 contiguous states (Brim, Ryff, & Kessler, 2004). MIDUS consists of a two-stage survey: a telephone interview and a self-administered questionnaire (SAQ), with follow-up waves, the most recent of which was completed in 2013/14 (W3). The mortality data currently available to researchers were obtained from multiple sources (e.g., National Death Index reports, mortality closeout interviews, longitudinal sample maintenance), providing information on date of death up to October 31, 2015. Over the course of the survey, 1,140 respondents from the baseline SAQ (18% of the 6,325 respondents) were known to have died.
Although MIDUS was designed to assess the health and wellbeing of middle-aged individuals over time, it includes a wide age range of respondents (aged 25-74). After sensitivity analysis of age cutoffs, we limited the analytic sample to those respondents who were 40-54 years old at baseline (in 1995/1996), which includes 1,140 men and 1,205 women (37% of SAQ respondents at W1). This sampling restriction allows us to: 1) minimize confounding of age and cohort patterns in BMI (for details, see Figure S1 in supplementary materials), 2) track BMI from midlife to early old age (40s to early 70s), and 3) compare our findings with those from prior studies which focused on similar age groups (e.g., Giskes et al., 2008).

Midlife SES was measured with a set of indicators: (b) household income ($0-$300,000 or more), (c) wage/salary income ($0-$100,000 or more), (d) current or previous occupation (1 = never employed or manual labor, 2 = service/sales/administrative, 3 = management/business/financial, 4 = professional), (e) current financial situation (0 = worst possible through 10 = best possible), (f) control over financial situation (0 = worst possible through 10 = best possible), (g) availability of money to meet basic needs (1 = more than enough through 3 = not enough, reverse coded), and (h) level of difficulty paying bills (1 = very difficult through 4 = not at all difficult).
BMI. At W1, respondents were asked to recall their weight at age 21, and at all three waves, respondents reported their current height and weight, providing measures of BMI (i.e., weight in kilograms divided by the square of height in meters). Prior work has indicated a strong correlation between self-reported weight and measurements by research staff, yet some studies have reported that respondents at the tails of the weight distribution tend to slightly self-normalize their weight (Bowman & DeLucia, 1992). To confirm the reliability of the self-reported measures of weight and height, we compared data from self-reports to those from the MIDUS biomarker study. We found that self-reported weight is slightly underreported while self-reported height is overreported. Although BMI is not always accurate, particularly for muscular individuals (Huxley, Mendis, Zheleznyakov, Reddy, & Chan, 2010), it is the most frequently used measure of body fat.
We controlled for age, race/ethnicity and gender (gender-stratified model) at baseline.
Body weight (e.g., obesity) is a highly heritable trait (Willyard, 2014). Some studies have indicated that weight gain during parenthood is likely to persist and accumulate, even after children become independent (C. Lee & Ryff, 2016). Thus, we included both number of children and retrospective reports of body weight at age 21 as biodemographic confounders.
Latent Growth Model
To examine the relationship between childhood SES and BMI, we applied a latent growth modeling approach (see e.g., Bollen & Curran, 2006). The growth model estimates the effect of childhood SES on BMI measured at W1 (intercept) and on the rate of change in BMI between W1 and W3 (slope). The outcome model consists of two levels: time and individual levels. The first level models the relationship between time (different waves) and BMI, expressed as follows:

BMI_it = α_i + λ_t β_i + ε_it

where BMI_it is the BMI for case i at time t, λ_t is the time score for wave t, and ε_it is a time-specific error. There are two latent factors that vary across individuals: the intercept (α_i) and the slope (β_i). We used an approach that models the rate of change without assuming a linear or quadratic shape (see Bollen & Curran, 2006 for more information). In our sample, BMI increased between W1 (aged 40-54) and W3 (aged 60-74), with the rate of change slowing down after W2 (aged 50-64) for both genders.
The second level models the relationship between these latent factors (intercept and slope) and childhood SES after accounting for individual-level confounders (age, race/ethnicity, body weight at age 21, and number of children at W1):

α_i = μ_α + γ_α SES_i + γ_α' x_i + ζ_αi  and  β_i = μ_β + γ_β SES_i + γ_β' x_i + ζ_βi

where x_i represents the individual covariates and ζ_αi and ζ_βi are individual errors for the intercept and slope, respectively. The coefficients γ_α and γ_β represent changes in the intercept and slope associated with a one-unit increase in childhood SES.
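To make the two-level structure concrete, the sketch below simulates data from the growth model written above. All coefficients, time scores, and error variances are hypothetical; the point is only to show how childhood SES enters the intercept and slope equations, not to reproduce the MIDUS estimates.

import numpy as np

rng = np.random.default_rng(2024)
n = 1_000

ses = rng.normal(0.0, 1.0, n)            # standardized childhood SES (hypothetical)
lam = np.array([0.0, 1.0, 1.6])          # time scores for W1-W3 (not assumed linear)

# Level 2: individual intercepts and slopes depend on childhood SES plus random error
alpha = 27.0 - 0.5 * ses + rng.normal(0.0, 2.0, n)   # baseline BMI at W1
beta = 1.5 - 0.2 * ses + rng.normal(0.0, 0.5, n)     # change in BMI per unit time score

# Level 1: observed BMI at each wave = intercept + slope * time score + occasion error
bmi = alpha[:, None] + beta[:, None] * lam[None, :] + rng.normal(0.0, 1.0, (n, 3))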
The analytic model has two stages. First, we estimated the effect of childhood SES on the baseline BMI (intercept) and the change in BMI over time (slope) (Model 1). We then added midlife SES into Model 1 to test whether the effect of childhood SES on the intercept and slope remained significant even after adjusting for midlife SES (Model 2). We tested gender differences in the effects of childhood SES on the growth trajectory of BMI using gender interaction effects in the pooled sample of women and men. The significance of indirect effects (the mediating effects of midlife SES) was tested using the multiplication of regression coefficients approach (Baron & Kenny, 1986), and gender differences in the indirect pathway were examined using gender interaction terms on the indirect effects in the pooled sample from both genders.
Missing Data Patterns and Mechanisms
In our analytic sample, 58% of respondents (1,364 out of 2,345) remained in the study throughout all three waves while 42% of respondents had died or were lost to follow-up (LFU) following W1 or W2. The profiles of these groups' missing patterns differ substantially in terms of their SES, BMI, and health-related conditions, as well as demographic characteristics (see Table S1 in supplementary materials). Compared to individuals who participated in the entire study, those who dropped out (died or LFU) following W1 or W2 showed lower childhood and midlife SES, worse health, as well as higher BMI (particularly for women). Among those who dropped out following W1 or W2, those who died were older and had higher BMI than those who were LFU. This indicates potential problems of selective attrition when we limit our sample to those who participated in all three waves.
To reach robust conclusions, we compared the results from the three different approaches to evaluate the extent to which our estimates change under different missing data mechanisms.
We first estimated the effect of childhood SES on BMI using listwise deletion (also called complete case analysis); that is, we only included respondents who had no missing scores on BMI (n = 1,038). Listwise deletion is among the most common methods for handling missing data.
This approach provides valid results only if the amount of missing data is small and if data are missing completely at random (MCAR), which seems implausible given the missing data patterns shown in Table S1. Second, we included all respondents at baseline (n = 2,345) and estimated the effect of childhood SES on BMI using full information maximum likelihood (FIML). This approach accommodates missing data by estimating each parameter using all available data in the sample (Geiser, 2012). FIML estimates are known to be unbiased if attrition is consistent with data being missing at random (MAR) (Enders & Bandalos, 2001).
MAR assumes that, after controlling for observed variables, such as age, SES, health-related indicators, and demographic characteristics, the chance of missing data on the outcome (i.e., BMI) does not depend on the value of the outcome. While the MAR assumption is plausible, there might be important variables that were not observed. Finally, we used a pattern mixture model in which respondents are classified into different groups based on their missing data patterns and estimates are obtained by averaging across the different missing patterns (see e.g., Glynn, Laird, & Rubin, 1986; Hedeker & Gibbons, 2006). This approach assumes that attrition was consistent with a missing not at random (MNAR) mechanism, that is, that the chance of missing data on BMI is related to BMI itself. For example, those who have high BMI values may tend to drop out or die before a study ends. We cannot exclude this scenario since our data show systematic missingness in BMI due to mortality, especially for women.
Given that the missing data patterns differ substantially by gender, we analyzed gender-stratified models. All control variables have 1-2% of data missing on average. We handled missing data for these confounders by using FIML, assuming that values were MAR. Descriptive statistics were calculated using Stata 15.0 (StataCorp, 2018), and latent growth models were analyzed in Mplus 8.0 (Muthén & Muthén, 2017).
Descriptive Statistics
For both genders, the mean sample BMI at baseline was above the overweight threshold (25 kg/m²), with a greater BMI for men than women (27.5 vs. 26.7 kg/m², p < .01). While there was no gender difference in childhood SES, women had lower midlife SES than men (p < .001).
Compared to men, women were more likely to participate in all waves of the survey (61% vs. 56%, p < .05). Such gender differences in participation were partially attributed to greater mortality risk for men than women during the survey period.
MAR-based Effects of Life-Course SES on Trajectory of BMI
To address problems related to missing data, we next fitted a latent growth model assuming MAR (Table 3). Consistent with the findings under the MCAR mechanism, the MAR-based results showed a significant and negative effect of childhood SES on both the baseline level of BMI and the rate of change in BMI.
MNAR-based Overall Effects of Life-Course SES on Trajectory of BMI
For the pattern mixture (MNAR) procedure, we implemented three approaches: completed cases, neighboring cases, and weighted cases. For completed cases, we replaced the inestimable parameters in both Groups 4 and 5 (attended W1 and were LFU or died) with their counterparts from Group 1 (those who completed all three waves). This approach assumes that dropout cases and completed cases follow a similar trajectory. We found that the results were consistent with the estimates under the MAR assumption. There was a significant and negative effect of childhood SES on baseline BMI and the rate of change in BMI for both genders (left column in Table 4).
For neighboring cases, we replaced the inestimable parameters in Group 4 (attended W1 and were LFU) with their counterparts from Group 2 (attended W1 and W2 and were LFU), and we replaced the inestimable parameters in Group 5 (attended W1 and died) with their counterparts from Group 3 (attended W1 and W2 and died). This approach assumes that the growth trajectory for those who died will be similar, and the trajectory for those who were LFU will be similar, regardless of when they dropped out (W2 or W3). We found that the effect of childhood SES on the level of BMI was significant and negative. However, after replacing neighboring cases, the effect differed from the MAR-based result. More specifically, the effect sizes for women regarding the effect of childhood SES on the slope of BMI are slightly larger when replacing neighboring cases than under the MAR assumption.

Lastly, for weighted cases, we replaced the inestimable parameters in Groups 4 and 5 with the weighted average of the parameters in Groups 2 and 3. This assumes that the growth trajectory for those who dropped out following W1 will be similar to that of either those who died or those who were LFU following W2. We found that the results from using weighted cases were almost identical to those from using neighboring cases. Among these three approaches, replacing the neighboring or weighted cases represents a more plausible scenario than using completed cases, given the difference in the profiles of those who completed all waves of the study and those who died or were LFU, as shown in Table S1.
Overall, findings from all three approaches (MCAR, MAR, and MNAR-based approaches) support the cumulative inequality theory (Hypothesis 3), particularly for women.
We found that, for women only, the results under the MCAR mechanism underestimated the slope of BMI compared to the MAR- and MNAR-based results. MAR-based results underestimated the slope of BMI relative to MNAR-based results, more so for women than men. Overall, the results imply that after addressing the confounding effects of selective attrition, the effect of cumulative inequality appears stronger for women.
To present the results in an intuitive way, we plotted predicted BMI trajectories (Figure 1). Among women, those from low childhood SES families (1 SD below the average) are predicted to have a BMI score of 27.6 at W1 (aged 40-54); those who have high childhood SES (1 SD above the average) are predicted to have a BMI score of 26.1. Similarly, among men, those from low SES families show higher BMI than those from high SES families (28.0 vs. 27.1). The gap between high vs. low childhood SES was 1.5 for women and 0.9 for men at age 40-54. The gap, however, widens as age increases, particularly for women, so that by W3 (aged 60-74), the difference in BMI between high and low childhood SES was 2.5 for women, but only 1.3 for men.
DISCUSSION
Early-life socioeconomic position and gender have been consistently shown to be strongly associated with BMI over the life course. However, few studies have examined how these factors shape BMI disparities from midlife to old age. This lack of research may be partially attributed to the large barriers posed by high attrition and selective survival in evaluating the accumulation of inequality among older populations. Using a national sample of U.S. middle-aged adults, the purpose of this study was to examine the extent to which early-life SES produces inequalities in midlife BMI that widen or diminish in later life, whether these associations differ by gender, and the role of midlife SES in the associations. To address issues related to non-random selection, we examined results under multiple missing data mechanisms and applied analytic techniques that take into account selection bias. Our study yielded several main findings.
Based on the critical period model (Ben-Shlomo et al., 2014), we expected that socioeconomic circumstances in early life would impact individuals' body weight in later life.
Our findings show that older adults from low SES families had higher BMI than those from high SES families and the association remained significant even after controlling for midlife SES, indicating an independent and robust effect of early-life conditions. Our findings are in line with evidence from WLS, HRS, and European studies suggesting that parental SES has an independent association with BMI among middle-aged adults, even after taking into account their own SES (Giskes et al., 2008; Pavela, 2017; Pudrovska, Reither, et al., 2014). Motivated by cumulative disadvantage/advantage theory (Dannefer, 2003), we further expected that such BMI inequalities would widen as individuals age. Our findings showed that the gap in BMI between individuals from low and high SES families increased in later life for both men and women.
Overall, our findings are consistent with two studies that investigated the association between childhood SES and changes in BMI (Giskes et al., 2008; Pudrovska et al., 2014).
However, these studies relied on changes in BMI across two points in time, which might be inadequate for assessing the underlying growth. In addition, Giskes et al. (2008) did not explicitly address issues related to selection bias despite high attrition rates, which might have contributed to an attenuation of early-life SES gradients in midlife BMI. Given that our sample has high attrition and non-random selection, we explicitly compared the results from three analytic approaches assuming different missing data mechanisms. The findings, indeed, indicated that BMI disparities widened from midlife to old age, particularly for women. That the observed pattern was even more pronounced when we considered selection bias lends support to cumulative inequality theory (Ferraro et al., 2009).
The gender difference in the impact of childhood SES is noteworthy. Socioeconomic disadvantage in early life is significantly and inversely associated with body weight in midlife and rapid weight gain between midlife and old age, particularly for women. Our estimates showed that the BMI difference between those from high vs. low SES backgrounds was 0.9 for men and 1.5 for women at baseline but increased to 1.3 for men and 2.5 for women about 20 years later. More intuitively, for the average man (5 ft. 9 in. tall), the BMI difference of 1.5 amounts to a roughly 10-pound difference between those from high vs. low SES backgrounds.
For the average woman (5 ft. 4 in. tall), the BMI difference of 2.5 amounts to a roughly 15-pound difference between those from high vs. low SES backgrounds. Given that women tend to be about 5 inches shorter than men, each pound may have stronger health-compromising effects for women than men. It is important to note in Figure 1 that, among those from low SES backgrounds, the average female had a lower BMI at W1 than the average male but showed a higher BMI at W3. This finding echoes those from prior studies of younger populations (Gustafsson et al., 2012; Hardy et al., 2000; Walsemann et al., 2012) and also provides new evidence that cumulative BMI inequality continues even in old age, particularly for women from socioeconomically disadvantaged families.
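As a rough check of these weight equivalents (our own arithmetic, using weight = BMI × height²): for a 1.63 m (5 ft. 4 in.) woman, a BMI difference of 2.5 corresponds to 2.5 × 1.63² ≈ 6.6 kg ≈ 14.6 lb, and for a 1.75 m (5 ft. 9 in.) man, a BMI difference of 1.5 corresponds to 1.5 × 1.75² ≈ 4.6 kg ≈ 10.1 lb, consistent with the figures above.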
Consistent with prior work (e.g., Giskes et al., 2008; Pudrovska, Reither, et al., 2014), we found that midlife SES partially explains the association between childhood SES and BMI in later life, yet the role of midlife SES differs by gender. Specifically, the effect of midlife SES on BMI in midlife was significantly larger for women than men, which is consistent with prior findings (Khalt et al., 2009; Drewnoski, 2009; Salonen et al., 2009; Pudrovska et al., 2014).
Furthermore, the mediating role of midlife SES in the association between childhood SES and midlife BMI (the intercept) was larger for women. This suggests that economic hardship in midlife may have an even more detrimental impact on women than men (in addition to childhood disadvantage). For women, therefore, improving financial conditions in midlife may help reduce the BMI disparities rooted in early-life SES. Yet, the finding should be interpreted cautiously.
Given that overweight/obesity is more strongly associated with employment discrimination for women (Shinall, 2015), we cannot rule out the possibility that the finding may result from reverse causation.
The limitations of our study should be noted. First, although the data covers a follow-up period of about 20 years, MIDUS only has three data points, with a wide age range at baseline.
While it is impossible to disentangle age and cohort effects within MIDUS, future research could better estimate the growth curve model by using data with more data points and a smaller age range. Second, our study relied on retrospective reports of childhood SES and BMI at age 21, which potentially produces some recall bias; yet, prior studies support the validity of recall of childhood SES (Krieger, Okamoto, & Selby, 1998) and a strong correlation between recalled past weight and previously measured weight (Perry et al., 1995). Third, unmeasured factors in this study may potentially affect our estimates, a common limitation in observational research.

Note. MCAR = missing completely at random. Controls: age, race/ethnicity, body weight at age 21, number of children. a denotes a significant gender difference in the effects of childhood SES (p < .10). *p < .05, **p < .01, ***p < .001.
Note. MAR = missing at random. Controls: age, race/ethnicity, body weight at age 21, number of children. a denotes a significant gender difference in the effects of childhood SES (p < .10). b denotes a significant gender difference in the effects of childhood SES (p < .05). *p < .05, **p < .01, ***p < .001.
Deep Learning-Based Real-Time Multiple-Person Action Recognition System
Action recognition has gained great attention in automatic video analysis, greatly reducing the cost of human resources for smart surveillance. Most methods, however, focus on the detection of only one action event for a single person in a well-segmented video, rather than the recognition of multiple actions performed by more than one person at the same time for an untrimmed video. In this paper, we propose a deep learning-based multiple-person action recognition system for use in various real-time smart surveillance applications. By capturing a video stream of the scene, the proposed system can detect and track multiple people appearing in the scene and subsequently recognize their actions. Thanks to high resolution of the video frames, we establish a zoom-in function to obtain more satisfactory action recognition results when people in the scene become too far from the camera. To further improve the accuracy, recognition results from inflated 3D ConvNet (I3D) with multiple sliding windows are processed by a nonmaximum suppression (NMS) approach to obtain a more robust decision. Experimental results show that the proposed method can perform multiple-person action recognition in real time suitable for applications such as long-term care environments.
Introduction
Due to the rise of artificial intelligence in the field of video analysis, human action recognition has gained great popularity in smart surveillance, which focuses on the automatic detection of suspicious behavior and activities. As a result, the system can launch an alert in advance to prevent accidents in public places [1] such as airports [2], stations [3], etc. To serve the needs of the upcoming aging society, smart surveillance can also provide great advantages to long-term care centers, where action recognition can help center staff notice dangers or accidents when taking care of large numbers of patients. Therefore, it is important to develop an action recognition system for real-time smart surveillance.
Recently, data-driven action recognition has become a popular research topic due to the rapid development of deep learning [4][5][6][7][8]. Based on the types of input data, the existing literature on action recognition can be divided into two categories: skeleton-based and image-based methods. The former includes 3D skeletons [9,10] generated by Microsoft Kinect, and 2D skeletons generated by OpenPose [11] and AlphaPose [12]. The latter includes approaches using single-frame images [13], multiframe video [14,15], and optical flow [16]. Note that the size of skeleton data is much smaller than that of image data. For example, the sizes of the 3D skeleton data and image data in the NTU RGB+D dataset are 5.8 and 136 GB, respectively; that is, the skeleton data are 23 times smaller than the image data. Thus, the training process using skeleton data is faster than that using image data. However, the image data might contain vital information [17], including age, gender, clothing, expression, background, illumination, etc., that skeleton data lack. Moreover, in order to identify each individual appearing in the scene, the face image is also needed in the proposed method. Thus, image-based methods render more information about the scene for wider applications, such as smart identification and data interpretation, which is of practical value and worthy of further investigation.
Convolutional neural networks (CNNs) are a powerful tool in the field of image-based action recognition. According to the dimension of the convolution unit used in the network, previous approaches can be separated into 2D and 3D convolution approaches. Although 2D convolution provides appealing performance in body gesture recognition, it cannot effectively handle the recognition of continuous action streams due to the lack of temporal information. On the other hand, 3D convolution contains an additional time dimension that can remedy this deficiency. Thus, 3D convolution has been widely used in recent data-driven action recognition architectures [18], such as C3D (convolutional 3D) [19], I3D (inflated 3D ConvNet) [20], and 3D-fused two-stream [21]. In widely cited foundational research, Ji [14] proposed taking several frames from an input video and simultaneously feeding them into a neural network. Features extracted from the video contain information in both the time and space dimensions via 3D convolution, which can be utilized to generate action recognition results. Although [14] provided good recognition performance on the TRECVID 2008 [22] and KTH [23] datasets, the accuracy of 3D convolution relies on a huge number of network parameters, leading to low efficiency in the recognition process. This inevitably causes difficulty in reaching real-time action recognition. To solve this problem, a recent approach [21] developed a two-stream architecture, taking both RGB and optical flow images as input data, that improves I3D by adopting the concept of Inception v1 [24] in GoogLeNet. Inception v1 contains 2D convolution using a 1 × 1 filter in the network to reduce the number of training parameters, and hence improves the efficiency of recognition. As a result, the I3D approaches outperform traditional approaches on well-known action recognition datasets, such as UCF101 [25] and Kinetics [26].
Although I3D-based action recognition achieves the desired performance, the original version of I3D cannot recognize actions of multiple people appearing in a scene at the same time. The I3D-based recognition system also encounters problems when locating the start and end time of each action in the input video for subsequent recognition [27], because there is likely a mismatch between the start and end times of the actions and the segmented time interval.
In order to solve these problems, we propose a real-time multiple-person action recognition system with the following major contributions. (1) We extend the use of I3D for real-time multiple-person action recognition. (2) The system is capable of tracking multiple people in real time to provide action recognition with improved accuracy. (3) An automatic zoom-in function is established to enhance the quality of input data for recognizing people located far from the camera. (4) The mismatch problem mentioned earlier can be addressed by adopting a sliding window method. (5) To further improve the accuracy, recognition results from I3D with multiple sliding windows are processed by a nonmaximum suppression (NMS) approach to obtain a more robust decision. Experimental results show that the proposed method is able to achieve multiple-person action recognition in real time.
The rest of the paper is organized as follows: Section 2 introduces the proposed real-time multiple-person action recognition system, Section 3 presents the experimental results, and the conclusion is given in Section 4.
Real-Time Multiple-Person Action Recognition System
Figure 1 shows the complete flow chart of the proposed multiple-person action recognition system. First of all, we use YOLOv3 [28] to locate multiple people appearing in the scene. The Deep SORT [29] algorithm is used to track the people and provide each of them with an identity (ID) number. To identify the person's name for display, FaceNet [30] is then used for face recognition to check whether or not the ID exists. For people far from the camera, we also establish a "zoom in" function to improve the recognition performance. Video frames in sliding windows are preprocessed and resized before being fed into I3D for action recognition. Finally, this system utilizes a one-dimensional NMS to postprocess the outputs from I3D to improve the accuracy and robustness of action recognition.
YOLO (You Only Look Once)
Object detection can be separated into two main categories: two-stage and one-stage methods. The former, such as RCNN [31], fast RCNN [32], faster RCNN [33], and mask RCNN [34], first detects the locations of different objects in the image and then recognizes the objects. The latter, such as YOLO [35] and SSD [36], combines both tasks in a single neural network. YOLOv3 [28] is a neural-network-based object detection algorithm implemented in the Darknet framework, which can obtain the class and a corresponding bounding box for every object in images and videos. Compared with previous methods, YOLOv3 has the advantages of fast recognition speed, high accuracy, and the capability of detecting multiple objects at the same time [37]. Thus, this paper uses YOLOv3 at the first stage of action recognition for two reasons. The first is that the system has to locate each of the people appearing in the scene in real time. The other is that the information of the bounding box corresponding to each person in the scene is critical for preprocessing in the proposed system. In this step, we convert the input video from the camera into frames. The frames are then resized from 1440 × 1080 to 640 × 480 for use by YOLOv3 to obtain a detection result represented by the coordinates of the bounding boxes. Note that the frame-resizing process reflects a trade-off between the speed and accuracy of object detection.
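As an illustration of this detection step, the following sketch uses the OpenCV DNN module with standard Darknet YOLOv3 configuration and weight files; the file names, input size, and confidence threshold are our own assumptions rather than settings reported in the paper.

```python
import cv2
import numpy as np

# Hypothetical paths; the paper does not specify its exact YOLOv3 configuration.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
layer_names = net.getUnconnectedOutLayersNames()

def detect_people(frame_1440x1080, conf_thresh=0.5):
    """Resize the frame to 640x480 and return person bounding boxes (x, y, w, h)."""
    frame = cv2.resize(frame_1440x1080, (640, 480))
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(layer_names)

    boxes = []
    h, w = frame.shape[:2]
    for output in outputs:
        for det in output:
            scores = det[5:]
            class_id = int(np.argmax(scores))
            conf = scores[class_id]
            if class_id == 0 and conf > conf_thresh:   # class 0 = "person" in COCO
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append((int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)))
    return frame, boxes
```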
Deep SORT
Simple online and real-time tracking (SORT) is a real-time multiple object tracking method proposed by Bewley [38]. It combines a Kalman filter and the Hungarian algorithm, predicting the object location in the next frame from that in the current frame by measuring the speed of detected objects over time. It is a tracking-by-detection method that performs tracking based on the result of object detection per frame. As an updated version of SORT, simple online and real-time tracking with a deep association metric (Deep SORT), proposed by Wojke [29], contains an additional convolutional neural network for extracting additional features, which is pretrained on a large-scale video dataset, the motion analysis and reidentification set (MARS). Hence, Deep SORT is capable of reducing SORT tracking error by about 45% [29]. In the proposed system, the goal of using Deep SORT is to perform multiple-person tracking, where an ID number corresponding to each individual is created in a database based on the detection results of YOLOv3. This means that each person appearing in the scene will be associated with an individual bounding box and an ID number.
FaceNet
FaceNet is a face recognition approach developed by Google [30], which achieved an excellent recognition performance of 99.4% accuracy on the LFW (labeled faces in the wild) database. It adopts the Inception-ResNet network architecture and applies a pretrained model based on the MS-Celeb-1M dataset. Note that this dataset contains 8 million face images of about 1 million identities. During the pretraining process, it first generates 512-dimensional feature vectors via an L2 normalization and embedding process. Then, the feature vectors are mapped into a feature space to calculate the Euclidean distance between features. Finally, a triplet loss algorithm is applied in the training process. In the processing of FaceNet, the similarity of face matching depends on calculating the Euclidean distance between the features of the input face image and the stored face images in the database. As soon as the similarity of each matching pair is given, an SVM classifier is applied to make the final matching decision.
The face recognition process of the proposed method is shown in Figure 2. At the beginning, we prestore matched pairs of individual names and their corresponding features generated by FaceNet in the database. When the bounding box image and corresponding ID number of each individual are obtained from Deep SORT, we check whether the ID number exists in the database to avoid redundant executions of FaceNet. Considering the efficiency required for real-time application, we distinguish individuals who do not require a face-matching process via FaceNet from those who do. If the ID number exists in the database, the name related to the ID number is directly obtained from the database. Otherwise, we check whether the face of the individual can be detected by the Haar-based cascade classifier in OpenCV, based on the bounding box image of the individual. If the face of the individual cannot be detected, FaceNet will not be executed, and "Unknown" is displayed on top of the bounding box in the video frames. Otherwise, the cropped face image is fed into FaceNet for face matching by comparing it with the stored features in the database. When the feature of the cropped face image is similar to a stored feature in the database, we update the table with the ID number and the corresponding name retrieved from the database for display on top of the bounding box in the video frames.
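A minimal sketch of this caching-and-matching logic is shown below; get_facenet_embedding and svm_classifier are hypothetical placeholders for the FaceNet model and the pretrained SVM, and are not part of the paper's released code.

```python
import cv2

# Haar cascade shipped with OpenCV, used only to decide whether a face is visible.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
id_to_name = {}  # cache: Deep SORT track ID -> recognized name

def resolve_name(track_id, person_crop, get_facenet_embedding, svm_classifier):
    """Return the display name for a tracked person, running FaceNet only when needed."""
    if track_id in id_to_name:               # ID already resolved earlier
        return id_to_name[track_id]

    gray = cv2.cvtColor(person_crop, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:                       # no visible face -> skip FaceNet
        return "Unknown"

    x, y, w, h = faces[0]
    embedding = get_facenet_embedding(person_crop[y:y + h, x:x + w])
    name = svm_classifier.predict([embedding])[0]
    id_to_name[track_id] = name               # update the ID/name table
    return name
```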
Automatic "Zoom-in"
At this stage, every image for further processing has been resized to 640 × 480 pixels, which is much smaller than the original image of 1440 × 1080 pixels. However, if an individual is located far from the camera, the bounding box becomes too small for accurate action recognition. Fortunately, we are able to utilize the high resolution (1440 × 1080 pixels) of the original video to zoom in on the bounding box of individuals located at a longer distance. Because the position of the camera is fixed, we can roughly estimate the depth of the individual according to the height of the bounding box in the image: if the individual is located far from the camera, the height of the bounding box will be much smaller. Here, we set a threshold on the height of the bounding box to determine when to automatically launch the zoom-in function. Once the zoom-in function is activated, we locate the center of the bounding box in the original 1440 × 1080 image frame and crop a new image of 640 × 480 centered at the bounding box, as shown in Figure 3. Note that the cropped region will not exceed the boundary of the original image.
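The cropping step can be sketched as follows, assuming the 1440 × 1080 → 640 × 480 resize described above; the height threshold that triggers the zoom-in is omitted because the paper does not report its value.

```python
def zoom_in_crop(frame_hd, box_640x480, crop_w=640, crop_h=480):
    """Crop a 640x480 window from the 1440x1080 frame, centered on the detected box.

    frame_hd:     original high-resolution frame (1080 x 1440 x 3)
    box_640x480:  (x, y, w, h) bounding box in the resized 640x480 frame
    """
    sx, sy = 1440 / 640.0, 1080 / 480.0        # scale factors back to full resolution
    x, y, w, h = box_640x480
    cx, cy = int((x + w / 2) * sx), int((y + h / 2) * sy)

    # Clamp the crop window so it never exceeds the original image boundary.
    x0 = min(max(cx - crop_w // 2, 0), 1440 - crop_w)
    y0 = min(max(cy - crop_h // 2, 0), 1080 - crop_h)
    return frame_hd[y0:y0 + crop_h, x0:x0 + crop_w]
```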
Blurring Background
In this process, we aim at utilizing image processing to enhance the performance of action recognition by I3D. For each individual, we blur the entire 640 × 480 image except the corresponding bounding box with a Gaussian kernel, because the image region within the bounding box of each individual contains more important information for recognition purposes than the other regions of the image. In Figure 4, we can see from the top that three people have been detected. Take the individual marked as "person-0" for example. The far left image in the second row of Figure 4 shows that only the region within the bounding box of person-0 remains intact after the blurring process. The same process applies to the other two individuals as well. Then, all of the preprocessed images related to each individual are further reduced into smaller images of 224 × 224 for collection into an individual dataset used by the respective sliding windows. The design of this process is to retain the important information of the images and reduce interference from the background region to improve recognition accuracy.
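A minimal sketch of this preprocessing step is shown below; the Gaussian kernel size is our own choice for illustration, since the paper does not report the exact kernel used.

```python
import cv2

def blur_background(frame_640x480, box, kernel=(31, 31)):
    """Blur everything outside the person's bounding box, then resize for I3D."""
    x, y, w, h = box
    person_roi = frame_640x480[y:y + h, x:x + w].copy()   # keep the person sharp
    blurred = cv2.GaussianBlur(frame_640x480, kernel, 0)  # blur the whole frame
    blurred[y:y + h, x:x + w] = person_roi                 # paste the sharp region back
    return cv2.resize(blurred, (224, 224))                 # input size expected by I3D
```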
Sliding Windows
As mentioned earlier, most work on human action detection assumes presegmented video clips, leaving only the recognition part of the task to be solved. However, information about the start and end time of an observed action is important for providing satisfactory recognition of continuous action streams. Here, we apply the sliding window [39] method to divide the input video into a sequence of overlapped short video segments, as shown in Figure 5. In fact, we sample the video with blurred background every five frames to construct a sequence of frames for processing by a sliding window of 16 frames as time elapses. The 16 frames in the sliding window, presumably indicating the start and end time of the action, for each person detected are then fed into I3D to recognize the action performed by the person. Specifically, each video segment F consisting of 16 frames in a sliding window can be constructed by

F = { f_{c-75}, f_{c-70}, ..., f_{c-5}, f_c },

where f_c is the frame captured at the current time. Hence, each video segment, representing roughly 80 frames from the camera, will be fed into I3D for action recognition in sequence. In this paper, five consecutive sliding windows, each consisting of 16 frames, and their corresponding recognition classes by I3D are grouped as an input set for processing by NMS, as shown in Figure 5.
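The segment construction can be sketched as follows, assuming frames are sampled every five frames into a 16-frame buffer as described above.

```python
from collections import deque

SAMPLE_EVERY = 5      # keep every 5th preprocessed frame, as described above
WINDOW_SIZE = 16      # number of frames fed into I3D at a time

sampled_frames = deque(maxlen=WINDOW_SIZE)

def update_window(frame_224, frame_index):
    """Collect every 5th frame and return a 16-frame segment once the window is full."""
    if frame_index % SAMPLE_EVERY == 0:
        sampled_frames.append(frame_224)
    if len(sampled_frames) == WINDOW_SIZE:
        return list(sampled_frames)   # segment spanning roughly 80 camera frames
    return None
```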
Inflated 3D ConvNet (I3D)
I3D, a neural network architecture based on 3D convolution proposed by Carreira [20], as shown in Figure 6, is adopted for action recognition in the proposed system, taking only RGB images as the input data. Note that the optical flow input of the original approach is discarded in the proposed design considering the recognition speed. Moreover, I3D contains several Inception modules that hold several convolution units with a 1 × 1 × 1 filter, as shown in Figure 7. This design allows the dimension of the input data to be adjusted for various sizes by changing the number of those convolution units. In the proposed method, the input data to I3D are the video segments with a window size of 16 frames from the previous stage. Therefore, each of the video segments is used to produce a recognition class and a corresponding confidence score via I3D based on the input: 16 frames × 224 image height (pixels) × 224 image width (pixels) × 3 channels.
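For illustration, a segment can be classified as sketched below; i3d_model stands for the trained RGB-only I3D network (e.g., a Keras model), and the pixel normalization to [-1, 1] is a common I3D convention assumed here rather than a detail reported in the paper.

```python
import numpy as np

def classify_segment(i3d_model, segment_frames, class_names):
    """Run a 16-frame segment through I3D and return (class, confidence)."""
    clip = np.stack(segment_frames).astype(np.float32)      # (16, 224, 224, 3)
    clip = clip / 127.5 - 1.0                                 # scale pixels to [-1, 1]
    probs = i3d_model.predict(clip[np.newaxis, ...])[0]       # (num_classes,)
    best = int(np.argmax(probs))
    return class_names[best], float(probs[best])
```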
Nonmaximum Suppression (NMS)
Traditional two-dimensional nonmaximum suppression is commonly used for obtaining a final decision of multiple object detection. Basically, NMS generates the best bounding box related to the target object by iteratively eliminating the redundant candidate bounding boxes in the image. To accomplish this task, NMS requires the coordinates of the bounding box related to the individual and the corresponding confidence score. During the iteration, the bounding box having the highest confidence score is selected to calculate the intersection over union (IoU) score with each of the other bounding boxes. If the IoU score is greater than a threshold, the corresponding confidence score of the bounding box will be reset to zero, which means that bounding box will be eliminated. This process repeats until all bounding boxes are handled. As a result, bounding boxes having the highest confidence are selected after the iteration.
As an attempt to improve the robustness of the recognition results, the start and end times of five consecutive sliding windows and their corresponding recognition classes are used to derive the final recognition results via NMS. Figure 8 shows how a one-dimensional nonmaximum suppression is used to derive the final recognition results, where the threshold of the IoU score is set as 0.4. From top to bottom, we can see that each group of five consecutive sliding windows, with their corresponding recognition classes and confidence scores, is used to determine the action class and its best start and end time via NMS. The final recognition result is a winner-take-all decision, resulting in the final estimate of the action class held by the video segment with the highest confidence score. According to Figure 8, not until the last (fifth) sliding window of frames provides an output does the proposed method make a final estimate of action recognition. Although there is a delay of around 2.5 s between the actual recognition results and the ground truth, the NMS process can effectively suppress the inconsistency of the action recognition results.
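A minimal sketch of the one-dimensional NMS over temporal windows is given below; the (start, end, confidence, class) representation is our own choice for illustration, combined with the IoU threshold of 0.4 mentioned above.

```python
def nms_1d(windows, iou_thresh=0.4):
    """One-dimensional NMS over a list of (start, end, confidence, action_class) windows."""
    def iou(a, b):
        inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
        union = (a[1] - a[0]) + (b[1] - b[0]) - inter
        return inter / union if union > 0 else 0.0

    windows = sorted(windows, key=lambda w: w[2], reverse=True)
    kept = []
    while windows:
        best = windows.pop(0)               # highest-confidence window wins
        kept.append(best)
        windows = [w for w in windows if iou(best, w) < iou_thresh]
    return kept  # kept[0] holds the winner-take-all action class and its interval
```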
Computational Platforms
To evaluate the proposed action recognition system, we train the I3D model on the Taiwan Computing Cloud (TWCC) platform in the National Center for High-performance Computing (NCHC) and verify the proposed method on an Intel(R) Core(TM) i7-8700 @ 3.2 GHz and an NVIDIA GeForce GTX 1080Ti graphics card under Windows 10. The computational time for training and testing our I3D model is about 5 h on TWCC. The experiments are conducted under Python 3.5, utilizing the TensorFlow backend with the Keras library and the NVIDIA CUDA 9.0 library for parallel computation.
Datasets
In this paper, training is conducted on the NTU RGB+D dataset [40], which has 60 action classes. Each sample scene is captured by three Microsoft Kinect V2 cameras placed at different positions, and the available data for each scene include RGB videos, depth map sequences, 3D skeleton data, and infrared (IR) videos. The resolution of the RGB videos is 1920 × 1080 pixels, whereas the corresponding depth maps and IR videos are all 512 × 424 pixels. The skeleton data contain the 3D coordinates of 25 body joints for each frame. In order to apply the proposed action recognition system to a practical scenario, a long-term care environment is chosen for demonstration purposes in this paper. We manually select 12 action classes that are likely to be adopted in this environment, as shown in Table 1. In the training set, most of the classes contain 800 videos, except the class "background", which has 1187 videos. In the testing set, there are 1841 videos.
Experimental Results
In order to investigate the performance of the proposed method for practical use in a long-term care environment, there are five major objectives for evaluation:
• Can the system simultaneously recognize the actions performed by multiple people in real time?
• Performance comparison of action recognition with and without the "zoom-in" function;
• Differences of action recognition using black background and blurred background;
• Differences of I3D-based action recognition with and without NMS;
• Accuracy of the overall system.
In the first objective, after training with the dataset for 1000 epochs, two screenshots selected from an action recognition result video are shown in Figure 9, where actions performed by three people in a simplified scenario are recognized at the same time. As individuals behave in the scene over time, for example, eating a meal and standing up for ID 214, having a headache for ID 215, and falling down and having a stomachache for ID 216, the system can recognize the actions corresponding to what the individuals are performing. In addition, the proposed system can also display the ID and name obtained by Deep SORT and FaceNet, respectively. There is no hard limit of the number of people appearing in the scene for action recognition, as long as the individuals can be detected by YOLO in the image. However, if the bounding box image of an individual is too small, the system might encounter a recognition error. Table 2 shows the execution speed of the proposed system for various numbers of people, including facial recognition and action recognition.
In the first objective, after training with the dataset for 1000 epochs, two screenshots selected from an action recognition result video are shown in Figure 9, where actions performed by three people in a simplified scenario are recognized at the same time. As individuals behave in the scene over time, for example, eating a meal and standing up for ID 214, having a headache for ID 215, and falling down and having a stomachache for ID 216, the system can recognize the actions corresponding to what the individuals are performing. In addition, the proposed system can also display the ID and name obtained by Deep SORT and FaceNet, respectively. There is no hard limit of the number of people appearing in the scene for action recognition, as long as the individuals can be detected by YOLO in the image. However, if the bounding box image of an individual is too small, the system might encounter a recognition error. Table 2 shows the execution speed of the proposed system for various numbers of people, including facial recognition and action recognition. Second, we design an experiment to measure the action recognition results with and without using the zoom-in function. Figure 10 shows the individual facing the camera slowly moves backward while doing actions. It can be seen that as time elapses, the individual will move farther and farther away from the camera, resulting in poor recognition performance because the size of the bounding box is too small to make accurate action recognition. On the contrary, the bounding box becomes sufficiently large with the room-in function as shown in the bottom-right image of Figure 10. Figures 11 and 12 show the recognition results with and without using the zoom-in function, Second, we design an experiment to measure the action recognition results with and without using the zoom-in function. Figure 10 shows the individual facing the camera slowly moves backward while doing actions. It can be seen that as time elapses, the individual will move farther and farther away from the camera, resulting in poor recognition performance because the size of the bounding box is too small to make accurate action recognition. On the contrary, the bounding box becomes sufficiently large with the room-in function as shown in the bottom-right image of Figure 10. Figures 11 and 12 show the recognition results with and without using the zoom-in function, where the horizontal axis indicates the frame number of the video clip and the vertical axis reveals the action class recognized with the zoom-in function (green) and without the zoom-in function (red), respectively, in comparison to the manually labeled ground truth (blue). Table 3 shows the recognition accuracy for the same individual standing at different distances. We can see that the average accuracy of the recognition results with and without the zoom-in method is 90.79% and 69.74%, respectively, as shown in the far left column in Table 3. As clearly shown in these figures, we can see that there is no difference of the recognition results with or without the zoom-in function for the individual located near the camera. However, when the individual moves far from the camera, the zoom-in approach greatly enhances the recognition accuracy from 32.14% to 89.29%.
Table 3. Recognition accuracy with and without the zoom-in method of the scenario in Figure 10.

Third, to investigate the difference between image frames with blurred background and black background for action recognition, we use the same video input to analyze the accuracy of action recognition with different background modifications, as shown in Figure 13. Figures 14 and 15 show the recognition results using blurred and black backgrounds, respectively. It is clear that the recognition result is better using the blurred background (green line) than the black background (red line), in comparison to the manually labeled ground truth (blue): the recognition accuracy using images with blurred background is 82.9%, whereas that with black background is 46.9%. We can see that retaining appropriate background information contributes to a better recognition performance.
Fourth, to verify the improvement of introducing NMS, we analyze the accuracy of the action recognition system before and after using NMS. In this experiment, we fed a video of 1800 frames consisting of 12 actions into the proposed system for action recognition, as shown in Figure 16, where the orange line and the blue line represent the recognition results with and without using NMS. It is clear that the inconsistency of recognition results is greatly suppressed by the NMS method. Note that the average recognition accuracy increases from 76.3% to 82.9% with the use of NMS.
Fourth, to verify the improvement of introducing NMS, we analyze the accuracy of the action recognition system before and after using NMS. In this experiment, we fed a video of 1800 frames consisting of 12 actions into the proposed system for action recognition, as shown in Figure 16, where the orange line and the blue line represent the recognition results with and without using NMS. The inconsistency of recognition results is greatly suppressed by the NMS method. Note that the average recognition accuracy increases from 76.3% to 82.9% with the use of NMS.
Finally, to show the performance of the proposed approach, Figure 17 shows the recognition accuracy of the overall system, where the orange and blue lines indicate the action recognition results obtained by the proposed method and the manually labeled ground truth, respectively. We can see that, except for a few occasions, the recognition results are consistent with the ground truth, at an accuracy of approximately 82.9%. Because this work specifically chose 12 action classes to perform multiple-person action recognition for long-term care centers, there is no universal benchmark serving the need for a meaningful comparison. While conducting the experiments, we found that there are some difficult scenarios for action recognition by the proposed system. If two different actions involve similar movement, they may easily incur recognition errors. For example, an individual leaning forward could be classified as either the 'Stomachache' or the 'Cough' action. Also, if occlusion occurs when capturing the video, the action recognition system is likely to make an incorrect decision. Interested readers can refer to the following link: https://youtu.be/t6HpYCjlTLA to watch a demo video showing real-time multiple-person action recognition using the proposed approach.
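The exact NMS formulation used for the temporal smoothing is not spelled out in the text; the following Python sketch shows one simple stand-in that aggregates per-window class scores (e.g., I3D softmax outputs from the sliding window) onto individual frames, which suppresses isolated label flips. The window size, stride, and array shapes are illustrative.

```python
import numpy as np

def smooth_window_predictions(window_scores, window_size=16, stride=4,
                              num_frames=None):
    """Aggregate per-window class scores back onto frames and pick the
    best-supported class per frame.
    window_scores: array of shape (num_windows, num_classes); window i covers
    frames [i*stride, i*stride + window_size)."""
    num_windows, num_classes = window_scores.shape
    if num_frames is None:
        num_frames = (num_windows - 1) * stride + window_size
    acc = np.zeros((num_frames, num_classes))
    cnt = np.zeros(num_frames)
    for i in range(num_windows):
        start = i * stride
        end = min(start + window_size, num_frames)
        acc[start:end] += window_scores[i]
        cnt[start:end] += 1
    frame_scores = acc / np.maximum(cnt, 1)[:, None]
    return frame_scores.argmax(axis=1)  # one smoothed class label per frame
```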
Conclusions
This paper presented a real-time multiple-person action recognition system suitable for smart surveillance applications to identify individuals and recognize their actions in the scene. For people far from the camera, a "zoom in" function automatically activates to make use of the high-resolution video frame for better recognition performance. In addition, we leverage the I3D architecture for action recognition in real time by using a sliding window design and NMS. For demonstration purposes, the proposed approach is applied to long-term care environments, where the system is capable of detecting (abnormal) actions that might be of concern for accident prevention. Note that there is a delay of around 2.5 s between the recognition results and the occurrence of the observed action. However, this delay does not significantly impact the smart surveillance application for long-term care centers, because the human response time after an alarm is generally longer than the delay time. Ideally, a smaller delay time is more appealing; it is our future work to decrease the delay time, via optimizing the filter size of the sliding window, to less than 1 s. Thanks to its architecture, the proposed method can be used in the future in various application scenarios to provide smart surveillance, for example, the detection of unusual behavior in a factory environment.
Impact of signal-to-noise ratio and contrast definition on the sensitivity assessment and benchmarking of fluorescence molecular imaging systems
Abstract. Significance: Standardization of fluorescence molecular imaging (FMI) is critical for ensuring quality control in guiding surgical procedures. To accurately evaluate system performance, two metrics, the signal-to-noise ratio (SNR) and contrast, are widely employed. However, there is currently no consensus on how these metrics can be computed. Aim: We aim to examine the impact of SNR and contrast definitions on the performance assessment of FMI systems. Approach: We quantified the SNR and contrast of six near-infrared FMI systems by imaging a multi-parametric phantom. Based on approaches commonly used in the literature, we quantified seven SNRs and four contrast values considering different background regions and/or formulas. Then, we calculated benchmarking (BM) scores and respective rank values for each system. Results: We show that the performance assessment of an FMI system changes depending on the background locations and the applied quantification method. For a single system, the different metrics can vary up to ∼35 dB (SNR), ∼8.65 a.u. (contrast), and ∼0.67 a.u. (BM score). Conclusions: The definition of precise guidelines for FMI performance assessment is imperative to ensure successful clinical translation of the technology. Such guidelines can also enable quality control for the already clinically approved indocyanine green-based fluorescence image-guided surgery.
Introduction
Fluorescence molecular imaging (FMI) has made great advances in clinical translation over the last few years. 1 Driven by these advances, technologies at the forefront of the field are evolving rapidly, particularly in the areas of device design, fluorescent agents, image processing algorithms, and performance assessment metrics. 2 Consequently, the number of imaging devices and their applications is increasing.
Moreover, following the first-in-human application of FMI in 2011 by van Dam et al., 3 numerous clinical studies have been completed or are currently ongoing. A major outcome of all this activity is the recent approvals by the US Food and Drug Administration (FDA) of ∼20 fluorescence-guided clinical imaging systems 4 as well as 3 tracers for surgical guidance: (1) 5-aminolevulinic acid (5-ALA/Gleolan®; Photonamic GmbH and Co. KG, Pinneberg, Germany) for use as an intra-operative optical imaging agent in patients with suspected high-grade gliomas, 5 (2) hexaminolevulinate (HAL, available as Hexvix, Photocure ASA, Oslo, Norway, and Cysview, Photocure Inc., Princeton, New Jersey, United States) for use in non-muscle-invasive bladder cancer, 6 and (3) pafolacianine (Cytalux, On Target Laboratories LLC, West Lafayette, Indiana, United States) for intraoperative imaging of folate receptor-positive ovarian and lung cancers. 7,8 All this activity has highlighted the need for better and user-independent standardization procedures that would allow for system characterization, performance monitoring, data referencing, and comparison, even among markedly different systems. This is also very relevant to fluorescence image-guided surgery (FIGS), given the FDA clearance of multiple FIGS devices for imaging with indocyanine green (ICG) and other contrast agents. 9,13-19 Thus far, methods and reference targets for system evaluation and comparison have been developed on an individual basis, but a universal cross-platform metric for image fidelity evaluation has yet to be developed. 16 Currently, the sensitivity of FMI systems is assessed mostly using the signal-to-noise ratio (SNR) and/or contrast metrics. 2,12,17,20-24 For example, Chen et al. 22 and Hoogstins et al. 21 reported that background estimation significantly affected quantification results for bulk-stained tissue fluorescence imaging and intraoperative/ex vivo fluorescence imaging, respectively, using metrics including SNR, signal-to-background ratio (SBR), and contrast-to-noise ratio (CNR). Widen et al. 23 demonstrated the impact of region of interest (ROI) sizes on overall signal and mean fluorescence intensity by analyzing fluorescent probes in animal experiments. Dijkhuis et al. 24 also demonstrated the effect of manually selected ROIs in fluorescence data analysis and proposed semi-automatic methods for the objective assessment of fluorescent signals in resected tissue. In view of the theoretical effect described above, Azargoshasb et al. 25 quantified how the fluorescent SBR influences the robotic surgical performance of participants (n = 16) during an exercise with a custom grid phantom. On the other hand, Palma-Chavez et al. 26 reported 15 different SNRs and five contrast formulas that are currently used in the field of optoacoustics, indicating that the lack of consensus is not limited to FMI applications. The plethora of background definitions, as well as the different quantification formulas used across multiple studies, emphasize the importance of reaching a wide consensus for performance assessment and quality control of FMI systems.
Indeed, despite the fact that SNR and contrast are the most commonly used metrics for the sensitivity assessment of various systems, 1,26 there are only a few studies comparing different FIGS systems, most of which are optimized for ICG imaging. 9,19,27 In addition, the formulas used to calculate SNR and contrast and the methods for evaluating background ROIs vary across different studies. In a recent study, LaRochelle et al. 28 demonstrated the influence of background definition on SBR, SNR, CNR, and the contrast-to-variability ratio through measurements on anthropomorphic three-dimensional (3D)-printed phantoms. However, to the best of our knowledge, there is no study quantifying the effect of the combined variation (ROIs and metric formulas) on performance assessment. An in-depth testing and evaluation of current strategies is crucial to raise community awareness of existing limitations, to spur effective development of the technology, and to set the performance limits that are required for regulatory approvals.
Building on the assumption that the SNR and contrast metrics depend on the selection of background ROIs and quantification formulas, herein, for the first time, we systematically investigate and showcase this dependence with regard to the sensitivity assessment of markedly different FMI systems.
Specifically, using six near-infrared FMI systems, we captured fluorescence images of a composite rigid phantom previously developed by our group. 11,18,19 We then assessed the sensitivity 19 of those systems using six previously published formulas for SNR and contrast 17,29-33 and two background locations. Moreover, based on these metrics, we quantified the corresponding benchmarking (BM) scores, 19 and the systems were ranked based on these scores.
Recently, we called attention to the need for a commonly accepted phantom to promote good imaging practices during the development of FMI systems or their use in clinics. 1 We now pinpoint additional needs to consistently define ROIs and use common quantification formulas for SNR and contrast. Answering these needs will enable consistency, allow data comparison and referencing, and advance the quality and performance of FMI systems. These improvements will promote wide acceptance and usage of FMI as a tool for interventional and endoscopic procedures.
FMI Systems
For this study, we used six fluorescence imaging systems distributed in different labs in the United States and Europe. The main specifications of each system, as well as the adopted phantom imaging protocols, are summarized in Table 1, while the corresponding system schematics are presented in Fig. 1. All measurements were conducted in darkness to eliminate the influence of ambient light on the results.
Mob is a mobile phone-based near-infrared fluorescence (NIRF) imaging system described previously, 34 where its spectral sensitivity was documented. It involves a 1-W 785-nm laser diode, an 800-nm short-pass excitation filter (84-729, Edmund Optics, Barrington, New Jersey, United States), and a long-pass emission filter with a cutoff wavelength at 825 nm (86-078, Edmund Optics) for detection. The phone camera is based on an 8-bit complementary metal oxide semiconductor (CMOS) sensor with an f/2.4 aperture lens (Eigen Imaging, Inc., San Diego, California, United States) and a near-infrared blocking filter, which was removed for this study. NIRF I is a custom benchtop NIRF imaging system 35 with a light-emitting diode (M780L3, Thorlabs, Inc., Newton, New Jersey, United States) centered at 780 nm with a power of 200 mW. The same optical filters used with the Mob system were also used in the NIRF I imaging system. A 16-bit charge-coupled device camera (Alta U2000, Apogee Imaging Systems, Roseville, California, United States) coupled with a zoom lens (7-mm focal length, f/3.9, Tamron, Commack, New York, United States) was used for the detection of the emitted fluorescence.
NIRF II is an updated version of the NIRF I imaging system. Its main improvement is the replacement of the imaging sensor with the more sensitive Kodak KAI-2020M (Image Sensor Solutions, Eastman Kodak Company, Rochester, New York, United States), while fluorescence was induced by a laser diode at 785 nm with 1-W power, instead of the light-emitting diode present in the NIRF I system.
Solaris is an open-air, commercially available fluorescence imaging system by PerkinElmer (Waltham, Massachusetts, United States). The Solaris system is designed for research applications, including preclinical studies for advanced molecular-guided surgery, and drug efficacy and safety measurements.
RawFl is a custom-built setup 36 with a filtered 760-nm laser diode (LDX Optronics, Maryville, Tennessee, United States) light source, a 16-bit scientific complementary metal oxide semiconductor (sCMOS) camera (pco.edge 5.5, PCO AG, Kelheim, Germany) as a detector, and polarizers (PPL05C; Moxtek, Orem, Utah, United States) for minimizing the contribution from specular reflections at the surface of the sample.
Standardization Phantom
The composite phantom shown in Fig. 2(a) 19 was used to quantify the SNR and contrast from images acquired by the six systems. The application of the phantom as a fluorescence standard for performance assessment, quality control, and comparison of markedly different systems through a single image has been described in detail in previous studies. 11,18,19 In the current study, however, the SNR and contrast were evaluated only on the "sensitivity versus depth" region of the phantom [see Fig. 2(a)]. This region includes (1) the transparent polyurethane (WC-783 A/B, BJB Enterprises, Tustin, California, United States) matrix base, with 0.00875 mg/g alcohol-soluble nigrosin (Sigma Aldrich, St. Louis, Missouri, United States) and 1.5 mg/g TiO2 nanoparticles (titanium(IV) oxide; Sigma Aldrich) for mimicking absorption and scattering, and (2) nine equally sized circular wells, made of the same polyurethane base with 20 μg/g bovine hemin (≥90% pure; Sigma Aldrich) and 0.66 mg/g TiO2 for absorption and scattering and 10-nM organic quantum dots (Qdot 800 ITK, Thermofisher Scientific, Waltham, Massachusetts, United States) for fluorescence. As shown in Fig. 2(a), the nine wells were embedded into the phantom matrix at distances of 0.2, 0.4, 0.6, 0.8, 1.0, 1.33, 1.66, 2.0, and 3.0 mm, respectively, from the phantom's top surface.
Data Processing
The sensitivity versus depth phantom region [Fig. 2(a)] was extracted from the fluorescence images acquired by each system, and the SNR and contrast metrics were quantified by adopting the formulas in Table 2.
First, all images of the phantom wells from the sensitivity versus depth region were converted into binary images using the MATLAB function "imbinarize," with the default option of thresholding using the Otsu method (MathWorks, Natick, Massachusetts, United States), and the location and radius of each well were obtained using the "imfindcircles" function. The extracted wells were then adjusted to match the size and location of the phantom wells based on the phantom design template, which ensured that all wells preserved the same size within an image, regardless of the per-well fluorescence intensity distribution. Using this information, one mask was created to extract the average fluorescence intensity and standard deviation values from each well. A second mask, consisting of (i) the annuli between each well and concentric to the wells' circles with a 40% larger radius (termed ROI b1) and (ii) a well-sized circular area in the non-fluorescent region of the phantom (termed ROI b2), was also created to quantify the average intensity and
corresponding standard deviation values from the background ROIs (Fig. 2). The ROI b1 is adjacent to the wells that produce fluorescence signal, where fluorescence leakage to the neighboring phantom areas influences the ROI's intensity values. This is frequently adopted as a strategy for background definition in multiple studies. 9,28 The second ROI, b2, is located far from the fluorescent wells and thus is not affected by fluorescence leakage. This is another frequently adopted definition of background, especially for studies where autofluorescence or diffusion is strong in the proximity of the target. 13 To investigate the impact of the chosen ROIs and quantification formulas (Table 2) on the BM of FMI systems, we calculated BM scores for each system as derived from the sensitivity versus depth phantom region using the method previously described. 19 Briefly, the BM scores were defined as

BM = sMAPE / N, (1)

where sMAPE is the symmetric mean absolute percentage error of the SNR and contrast metrics that have been quantified for the various formulas of Table 2 and for the two background regions shown in Fig. 2(a). The sMAPE is calculated as

sMAPE = (1/n) Σ_{i=1}^{n} |X_i − Y_i| / [(|X_i| + |Y_i|)/2], (2)

where n is the number of phantom wells included in the metrics' evaluation (n = 9), X_i is the value of the metric result (i.e., SNR or contrast), and Y_i is the reference value. For the BM score quantification, we considered normal signal distributions, according to which a measurement is assumed to present 95% confidence if the signal is twofold the noise level. This results in reference values of 6 dB for SNR, 0.33 for Michelson contrast, and 1 for Weber contrast. 19 Since the scope of this work is to assess how the SNR and contrast change depending on the application of different formulas and/or ROIs, all data processing was implemented on single images of the phantom acquired by the six systems. The repeatability and error analysis of the quantification of those two metrics have been recently reported by our group elsewhere. 39

Results

Employing the six FMI systems described in Table 1, we imaged the composite phantom of Fig. 2(a) and isolated the sensitivity versus depth region from the acquired images, as shown in Fig. 2(b). As expected, the markedly different systems yield subjectively different images from the same area of the same phantom, which highlights the importance and need for FMI standardization to ensure a consistently high degree of performance and facilitate clinical translation.
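The well detection and mask construction described above used MATLAB's imbinarize and imfindcircles; the sketch below shows a simplified analogue in Python with scikit-image (the template-based size adjustment is omitted, and the annulus factor and b2 location are passed in as illustrative parameters).

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def well_and_background_masks(image, b2_center, annulus_factor=1.4):
    """Locate the fluorescent wells by Otsu thresholding and connected
    components, then build, for each well, a disk mask plus the two background
    ROIs: b1, the annulus between the well radius and `annulus_factor` times
    it, and b2, a well-sized disk centred on the non-fluorescent location
    `b2_center` (row, col)."""
    binary = image > threshold_otsu(image)
    rows, cols = np.ogrid[:image.shape[0], :image.shape[1]]
    masks = []
    for region in regionprops(label(binary)):
        r0, c0 = region.centroid
        radius = np.sqrt(region.area / np.pi)
        dist = np.sqrt((rows - r0) ** 2 + (cols - c0) ** 2)
        dist_b2 = np.sqrt((rows - b2_center[0]) ** 2 + (cols - b2_center[1]) ** 2)
        masks.append({
            "well": dist <= radius,
            "b1": (dist > radius) & (dist <= annulus_factor * radius),
            "b2": dist_b2 <= radius,
        })
    return masks

# Mean and standard deviation per ROI, as used for the SNR/contrast metrics:
# stats = [{k: (image[m].mean(), image[m].std()) for k, m in roi.items()}
#          for roi in well_and_background_masks(image, b2_center=(50, 50))]
```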
The ROIs used for the quantification of SNR and contrast are shown in the top-right inset of Fig. 2(a). The arrows point out the two areas (b1 and b2) used for background calculation. These locations were chosen based on different studies assessing the performance of FMI systems 13,28 and according to the phantom constituents and geometry.
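For illustration, the following minimal Python sketch (array and function names are ours, not from the original code) computes one representative SNR and contrast definition per Table 2 for a well/background ROI pair, the sMAPE of Eq. (2) against the reference values quoted above, and a BM score; the normalization N of Eq. (1) is not fully specified in the extracted text and is taken here as the number of combined metrics.

```python
import numpy as np

def snr_db(signal_px, background_px):
    """One SNR definition from Table 2, SNR = (S - N) / sigma_N, in dB.
    S: mean signal pixel intensity, N: mean background intensity,
    sigma_N: background standard deviation."""
    s, n = signal_px.mean(), background_px.mean()
    return 20.0 * np.log10((s - n) / background_px.std())

def michelson_contrast(signal_px, background_px):
    """C_M = (I_max - I_min) / (I_max + I_min), with I_max the maximum signal
    pixel intensity and I_min the minimum background pixel intensity."""
    i_max, i_min = signal_px.max(), background_px.min()
    return (i_max - i_min) / (i_max + i_min)

def weber_contrast(signal_px, background_px):
    """One common form of the Weber contrast: (S - N) / N."""
    s, n = signal_px.mean(), background_px.mean()
    return (s - n) / n

def smape(values, reference):
    """Eq. (2): symmetric mean absolute percentage error over the n = 9 wells."""
    x = np.asarray(values, dtype=float)
    y = np.full_like(x, float(reference))
    return np.mean(np.abs(x - y) / ((np.abs(x) + np.abs(y)) / 2.0))

def benchmarking_score(snr_values, contrast_values, snr_ref=6.0, contrast_ref=0.33):
    """BM = sMAPE / N (Eq. 1), with N assumed here to be the number of combined
    metrics. Reference values correspond to a signal twice the noise level:
    20*log10(2) ~ 6 dB, Michelson (2 - 1)/(2 + 1) ~ 0.33, Weber (2 - 1)/1 = 1."""
    metrics = [smape(snr_values, snr_ref), smape(contrast_values, contrast_ref)]
    return float(np.mean(metrics))
```

Per-well pixel sets for the signal, b1, and b2 ROIs would come from the masks built in the previous step; the dB convention (20·log10) is chosen to match the 6-dB reference quoted for a signal twice the noise level.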
Figures 3 and 4 illustrate the calculated SNR and contrast metrics, respectively, as functions of depth. The results obtained using SNR_1 and SNR_2 show the same trend for all systems, but not equivalent values [Fig. 3(b)]. Moreover, the results obtained using SNR_3 and SNR_4 not only differ from SNR_1 and SNR_2 but are also influenced by the chosen background area. For instance, when comparing Mob and NIRF I for SNR_2^b1, it is evident from Fig. 3(b) that the NIRF I system has a greater SNR than the Mob system. However, a comparison of the NIRF I system using SNR_3^b1 with the Mob system using SNR_2^b1 yields the opposite conclusion. Figure 4 demonstrates the results of the contrast metrics with respect to the applied formula (i.e., C_M, Michelson contrast, and C_W, Weber contrast) and the considered background ROI. The trends for both the C_M and C_W metrics are similar for each system when the same background values are considered (i.e., b1 or b2 for both formulas). Conversely, when comparing the trends observed in C_M under the two background values, the background influence on the quantification of the contrast metrics becomes evident [Fig. 4(b)]. For example, the Mob system has a higher contrast than the RawFl system when the Michelson contrast is applied under the b1 background for both systems. This is not true, however, when the Michelson contrast is used under b1 for the Mob system and b2 for the RawFl system. In that case, the RawFl system has a higher contrast than the Mob one [see Fig. 4(b)]. The influence of the applied formula and background ROI shown in Fig. 3 for SNR and Fig. 4 for contrast becomes even stronger when both metrics are combined to assess the performance of different FMI systems. Figure 5 depicts the ranking of the six systems used in this study based on the corresponding BM scores, which were calculated from the SNR and contrast metrics. Figure 5(a) illustrates the effect of combining the different formulas and background locations on the quantification of the BM scores per system. Moreover, Figs. 5(b)-5(e) demonstrate exemplary BM scores for each system as selected from the four marked squares in Fig. 5(a). The four squares were selected after a visual inspection of the map in Fig. 5(a) to showcase the variability in the quantified BM scores. As can be seen, the BM scores not only have different values, but their trend is also different per combination of formulas and background ROIs. This trend becomes clear in Fig. 5(f), where the systems' ranking (i.e., 1, worst, through 6, best) is shown for the various BM scores of Figs. 5(b)-5(e). For example, the hybrid system's rank is superior to Solaris' rank if their BM scores result from the combination of SNR_2^b1 and C_M^b1.
Discussion and Conclusion
In the current work, through the comparison of six near-infrared FMI systems, we showed that the assessment of system performance and standardization via SNR and contrast is highly dependent on the definition of the background ROI and the formulas used. This proves the need for careful attention to test a method's clinical relevance, as well as consistency in defining metrics for an objective, quantitative assessment of FMI system performance. We used fluorescence data from the sensitivity versus depth areas of a multiparametric phantom to quantify SNR and contrast by means of different formulas obtained from the literature (Table 2). It was demonstrated that the resultant SNR values can be affected by both the selected background location and the formulas applied (Fig. 3). In the case of contrast values, the resultant trends appear similar for both the Michelson and Weber formulas (C_M and C_W in Table 2), but the employed background ROI is still observed to impact the trends. Indeed, as we show in Fig. 4, the contrast (C_W) for the hybrid system changes by a factor of 8.3 depending on the background, while for the Mob system by 2.9. The dependence on the applied formula and/or background becomes more evident for SNR, where the Mob system shows a variation by a factor of 19.6 in the SNR estimation. This indicates a pressing need for common quantification formulas for SNR and contrast and a consistent ROI definition for both signal and background. All measurements in this study were conducted in darkness to minimize ambient illumination that would further complicate the quantification of SNR and contrast. However, illumination is another critical factor that must be accounted for when darkness is not possible. One way to address this challenge is by acquiring a "dark" image with the excitation sources turned off and subsequently subtracting that image from the fluorescence image. This step should be performed before quantifying and reporting any performance assessment and quality control metrics. Meeting these requirements is crucial to achieving reliable results and standardization guidelines for FMI. 2,14 Having this internal consistency during the development of FMI systems will lead to the establishment of an international consensus across the field and will contribute to the widespread acceptance and use of FMI.
Our goal, however, was not only to assess the performance of each system under different SNR and contrast definitions but also to show how these definitions affect the comparison of markedly different systems. The results of our contrast and SNR calculations were translated into BM scores and then to rank values. This analysis revealed the dependence of the ranking on the definition of the background ROIs or the adopted formulas (Fig. 5). For example, the rank value for the Solaris system was lower than the corresponding values for the NIRF II and the hybrid systems if the performance assessment was based on the Michelson contrast and the SNR_2 formula with background defined as b1. However, the Solaris system ranks higher than the NIRF II and the hybrid systems when SNR is evaluated as SNR_1 and contrast through the Weber formula with b2. This inconsistency in the determinants of the metrics for system evaluation can affect the development and comparison of systems and, ultimately, the design and efficacy of clinical or pre-clinical studies. In a recent report, the American Association of Physicists in Medicine (AAPM) proposed SNR_4 and C_M as metrics for the performance assessment of fluorescence imaging systems. 17 Moreover, the background region proposed in these guidelines for the estimation of SNR corresponds to a region with the same optical properties as the interrogated wells, but without fluorescent dye. This corresponds to the ROI b2 in our study, since the wells are gradually covered with the phantom matrix material, which has no fluorescent dye. On the other hand, in the AAPM study, the contrast is associated with the resolution of a system and not the signal contrast as employed herein. Thus, although there is agreement in the SNR definition (SNR_4^b1) between the AAPM and our study, we additionally employed the contrast as a means of sensitivity assessment. Nevertheless, these recommendations represent a promising initial step toward establishing a widely accepted protocol for standardizing FMI systems, thereby addressing the inconsistencies demonstrated herein.
Similar limitations in the quantification of SNR and contrast have also been reported during the use of FMI systems in pre-clinical and clinical applications. For example, LaRochelle et al. 28 discussed the variability of the methods used for reporting quantitative sensitivity metrics using 3D anthropomorphic phantoms with incorporated NIR fluorescent tumor parts. On the other hand, Hoogstins et al. 21 used data from both animal and human studies with multiple fluorescence tracers to show that background noise and background selection have a significant influence on the quantification of SBR and the contrast-to-background ratio. Similarly, Azargoshasb et al. 25 showed that SBR quantification can impact the surgical discrimination of fluorescence signals, highlighting the importance of the applied quantification approach in intraoperative decision-making. Herein, we present, to the best of our knowledge, the first study that showcases not only how the adopted formulas and the background used affect the performance assessment of an FMI system but also how the lack of consensus on quantification methods for SNR and contrast can result in misleading interpretations of system comparison measurements.
Moreover, for the quantification of the BM scores, we assumed normal signal distributions, according to which a measurement represents 95% confidence when its value is twice the magnitude of the noise level. 19 Thus, the reference threshold values applied here are user-independent in comparison to another value commonly used in fluorescence imaging, the Rose criterion. 17 The Rose criterion method also sets a limit of detection for fluorescence imaging, for which the CNR values must be greater than 3 to 5. 40 However, the range of a particular threshold value varies from study to study 35,41,42 and depends on several parameters such as object shape, edge sharpness, viewing distance, and observer experience. Besides the parameters affecting the threshold value, Rose's studies were intended for electronic imaging systems (i.e., photography, television, and optical and visual systems) 43 and were focused on human perception of signal detectability. 44 However, threshold values that are constrained by aspects of the human visual system might no longer be relevant with the advent of artificial intelligence (AI) imaging and signal processing. 46,47 The criterion adopted herein follows a more simplistic statistical approach that evaluates system performance without depending on human perception and thus is more relevant for assessing the detection limits of FMI systems.
The findings of this study are also relevant to existing ICG-based FIGS systems. Similar to FMI, most FIGS system sensitivity assessment and quality control approaches are still based on the quantification of SNR and contrast metrics. However, the quantification methods for these metrics still represent a major limiting factor for cross-platform system comparisons and affect the design and/or repeatability of preclinical or clinical trials. Moreover, consistency in quantification and reporting of the various performance assessment metrics is especially important for FIGS systems, as no established quality control protocols currently exist despite the wide clinical use of such systems. The quantitative assessment of system performance presented herein advances the current standardization strategies, which is critical for the further development of this technology and for establishing the performance limits that are a prerequisite for regulatory approvals.
Finally, similar challenges in the quantification of SNR and contrast are present in other optical technologies that are currently under investigation. For example, Palma-Chavez et al. 26 showcased variability in SNR and contrast quantification methods within the field of optoacoustics. Fluorescence lifetime imaging is another emerging and very promising technology that also lacks consensus in the quantification of SNR, despite its frequent use in assessing the reliability of lifetime measurements. Under appropriate modifications, our study can also be adapted for such technologies, thereby contributing to the development of performance assessment and quality control protocols for imaging methods beyond FMI and FIGS.
Fig. 2 Sensitivity versus depth phantom region. (a) An illustration of the composite phantom used in this study, with the sensitivity versus depth wells highlighted and enlarged. Arrows denote the two areas (b1 and b2) used as background regions. The depth of the phantom wells (bottom left, Dx, where x = a, b, c, ...) indicates the distance from the top surface of the phantom to each fluorescent inclusion. The concentrations of the different constituents are the same for all inclusions. Qdots, quantum dots for fluorescence; Hemin, bovine hemin; TiO2, nanoparticles (see Sec. 2.1). (b) Fluorescence images normalized to their corresponding maxima as acquired by the six systems employed in the study (see Table 1 for the description of each system).
Table 2. Formulas for calculating SNR and contrast. Among the listed definitions: an SNR based on the number of photons on the detector relative to the detector noise σ; SNR = μ_{S−N}/σ_S, where μ_{S−N} is the mean signal after background subtraction and σ_S is the standard deviation of the signal; SNR = (S − N)/σ_N, where S is the mean foreground signal pixel intensity, N is the mean background noise pixel intensity, and σ_N is the background standard deviation; and Michelson contrast C_M = (I_max − I_min)/(I_max + I_min), where I_max and I_min are the maximum pixel intensity and the minimum background pixel intensity.
Fig. 3 Dependence of SNR on the two background locations shown in Fig. 2(a) and/or the quantification formulas of Table 2 for different FMI systems. (a) SNR values for all systems at each depth. SNR_1 shows the same behavior for each system as a function of depth. SNR_2 shows a similar trend to SNR_1 for all systems, regardless of the background employed. SNR_3 and SNR_4 show different trends compared with SNR_1 and SNR_2, depending on the background. (b) SNR values of the phantom well with depth = 1 mm for all systems. The values correspond to the dashed area highlighted in panel (a).
Fig. 4 Dependence of contrast on the two background locations shown in Fig. 2(a) and/or the quantification formulas of Table 2 for different FMI systems. (a) The contrast metric results for all systems at each depth. C_M and C_W show similar trends when either b1 or b2 is employed for both calculations. (b) Contrast results for the phantom well with depth = 1 mm for all systems. The values correspond to the dashed area highlighted in panel (a).
Fig. 5 BM scores calculated according to Gorpas et al. 19 for each system. (a) Map of the BM scores quantified using the different SNR and contrast (C) formulas (see Table 2) and two different backgrounds [see Fig. 2(a)]. The squares marked with numbers 1, 2, 3, and 4 correspond to the representative graphs of BM scores in panel (b) for square 1, SNR_2^b1 and C_M^b1; (c) for square 2, SNR_1; (d) for square 3, SNR_3^b1; and (e) for square 4, SNR_4^b2. (f) The rank of each system as a result of the BM scores for all squares of panel (a).
Table 1. FMI systems used in this study and the corresponding imaging protocols.
a At the phantom surface.
Joint involvement in patients affected by systemic lupus erythematosus: application of the swollen to tender joint count ratio
Joint involvement is a common manifestation in systemic lupus erythematosus (SLE). According to the SLE disease activity index 2000 (SLEDAI-2K), joint involvement is present in the case of ≥2 joints with pain and signs of inflammation. However, this definition could fail to catch all the various features of joint involvement. Alternatively, the Swollen to Tender joint Ratio (STR) could be used. This new index, which was originally proposed for rheumatoid arthritis (RA) patients, is based on the count of 28 swollen and tender joints. Our study was therefore aimed at assessing joint involvement in an SLE cohort using the STR. SLE patients with joint symptoms (≥1 tender joint) were enrolled over a period of one month. Disease activity was assessed by SLEDAI-2K. We performed the swollen and tender joint count (0-28) and calculated the STR. Depending on the STR, SLE patients were grouped into three categories of disease activity: low (STR <0.5), moderate (0.5 ≤ STR ≤ 1.0), high (STR >1.0). We also calculated the disease activity score based on a 28-joint count and the erythrocyte sedimentation rate (DAS28-ESR). We enrolled 100 SLE patients [F/M 95/5, mean±standard deviation (SD) age 46.3±10.6 years, mean±SD disease duration 147.1±103.8 months]. The median of tender and swollen joints was 4 (IQR 7) and 1 (IQR 2.5), respectively. The median STR value was 0.03 (IQR 0.6). According to the STR, disease activity was low in 70 patients, moderate in 23 and high in 7. A significant correlation was identified between STR values and DAS28 (r=0.33, p=0.001). The present study suggests a correlation between STR and DAS28, allowing an easier and faster assessment of joint involvement with the former index.
INTRODUCTION
Joint involvement is a common manifestation in patients affected by systemic lupus erythematosus (SLE) (1, 2). The SLE disease activity index 2000 (SLEDAI-2K) is the most frequently used index in clinical practice for SLE patients (3). However, the SLEDAI-2K indicates the presence of joint involvement only in the case of ≥2 joints with pain and signs of inflammation, such as tenderness, swelling or effusion (3). This stringent definition may, therefore, fail to identify all the different forms of joint involvement. Kristensen et al. have recently proposed a new index for rheumatoid arthritis (RA) resulting from a ratio of swollen to tender joints based on a 28-joint count, named the swollen to tender joint count ratio (STR) (4). This index seems reliable and easy to calculate in the clinical routine, can identify different degrees of disease activity, and does not require the erythrocyte sedimentation rate (ESR) or the patient's assessment, which could be influenced by other factors in SLE patients.
Therefore in this study we aimed to assess joint involvement in a large SLE cohort using the STR clinical marker.
MATERIALS AND METHODS
During one month, we enrolled 100 consecutive SLE patients with active joint complaints (≥1 tender joint). All patients were referred to the Lupus Clinic, Rheumatology Unit, La Sapienza University of Rome.
The study protocol was in compliance with the principles of good clinical practice and the Declaration of Helsinki. All patients gave their informed consent to participate in the study. The ethics committee of Sapienza University of Rome approved the study protocol. The diagnosis of SLE was made on the basis of the 1997 American College of Rheumatology (ACR) revised criteria (5). Clinical and laboratory data, as well as demographics and past medical history with date of diagnosis, comorbidities, and previous and concomitant treatments, were recorded in a standardized electronic form. All patients underwent a complete assessment, including a global health assessment by a visual analogue scale (GH; 0-100 mm). Peripheral blood samples were collected from all patients to evaluate the autoantibody profile and complement serum levels. Specifically, anti-dsDNA antibodies were assessed by indirect immunofluorescence on Crithidia luciliae in accordance with the manufacturer's instructions (Orgentec Diagnostika, Mainz, Germany). Serum levels of complement C3 and C4 (mg/dL) were examined by radial immunodiffusion. ESR was determined with standard methods (mm/h, Westergren). Disease activity was assessed by using the SLEDAI-2K at the time of the visit (3).
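For illustration, the indices used in this study can be computed as follows (the function names are ours; the DAS28-ESR coefficients are those of the standard published formula, which is not restated in the text):

```python
import math

def str_ratio(swollen_28, tender_28):
    """Swollen to tender joint count ratio (STR) from 28-joint counts.
    Patients in this study had at least one tender joint."""
    return swollen_28 / tender_28 if tender_28 > 0 else float("nan")

def str_activity(value):
    """Disease activity categories used in this study:
    low (STR < 0.5), moderate (0.5 <= STR <= 1.0), high (STR > 1.0)."""
    if value < 0.5:
        return "low"
    return "moderate" if value <= 1.0 else "high"

def das28_esr(tender_28, swollen_28, esr_mm_h, gh_vas_mm):
    """Standard DAS28-ESR: 0.56*sqrt(TJC28) + 0.28*sqrt(SJC28)
    + 0.70*ln(ESR) + 0.014*GH, with GH on a 0-100 mm visual analogue scale."""
    return (0.56 * math.sqrt(tender_28) + 0.28 * math.sqrt(swollen_28)
            + 0.70 * math.log(esr_mm_h) + 0.014 * gh_vas_mm)
```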
Statistical analysis
The software packages we used were MedCalc 16.0 (MedCalc Software, Mariakerke, Belgium) and the Statistical Package for Social Sciences (SPSS 13.0, Chicago, IL, USA). Data were reported as means and standard deviations (SD) or medians with an interquartile range (IQR), depending on the data distribution (tested with the Kolmogorov-Smirnov test). Histograms were used to visualize the distribution of swollen and tender joints, STR and STR categories. Pearson's and Spearman's tests were used to perform correlation analyses when appropriate. Wilcoxon's matched pairs test and paired t-test were performed accordingly. Univariate comparisons between nominal variables were calculated using the chi-square test or Fisher's exact test when appropriate. Two-tailed p values were reported; p values ≤0.05 were considered statistically significant.
RESULTS
We enrolled 100 SLE patients (M/F 5/95, mean±SD age 46.3±10.6 years, mean±SD disease duration 147.1±103.8 months). Table I reports the main features of the enrolled patients. Firstly, we counted the swollen and tender joints and calculated a median of 1 (IQR 2.5) and 4 (IQR 7), respectively. Subsequently, we calculated the STR values, which gave a median of 0.03 (IQR 0.6). According to the STR values, disease activity was low in 70 patients, moderate in 23 and high in 7. We calculated the DAS28, which gave a median of 4.1 (IQR 1.96). We also observed a positive correlation between STR and DAS28 values (p=0.001, r=0.33; Figure 1) and between STR and ESR (p=0.01; r=0.25). We grouped SLE patients according to the disease activity identified by the STR values [low (STR <0.5), moderate (0.5 ≤ STR ≤ 1.0), high (STR >1.0)] and performed a comparison among the features of the three groups. Table II reports only the comparisons with statistically significant results. We evaluated 34 patients with joint involvement identified by SLEDAI-2K. STR was low in 29.4% of them, moderate in 50.0% and severe in 20.6% (Figure 2A). The remaining 66 patients without joint involvement defined by SLEDAI-2K had a low STR in 91% of cases and a moderate STR in 9% of cases (Figure 2B).
DISCUSSION
SLE is a chronic autoimmune disease characterized by heterogeneous clinical features (8-11). The therapeutic strategy in SLE patients should include the control of disease activity and the prevention of chronic damage (12, 13).
Joint involvement can affect up to 90% of SLE patients (1, 2). It is associated with different degrees of severity, ranging from mild arthralgia to erosive disease (2). Data from the literature suggest a prevalent polyarticular involvement and inflammatory signs even in asymptomatic joints (14, 15). Moreover, the assessment of disease activity and treatment response is crucial in the management of SLE patients with prevalent joint involvement. Several global indexes have been proposed and validated to assess disease activity in SLE patients. The revised SLEDAI-2K is the most frequently used in clinical practice as well as in observational studies, due to its simplicity and feasibility (3). Among the SLEDAI items, joint involvement is defined as the presence of at least 2 joints with pain and signs of inflammation (i.e., tenderness, swelling or effusion), with a corresponding score of 4 (3). This value is not random, in that it highlights the importance of joint involvement, because the disease is considered to be active with a SLEDAI-2K value ≥4. Nonetheless, it is also clear that the definition in the SLEDAI-2K cannot capture all the potential features of joint involvement in SLE patients and therefore cannot fully reflect the evolution of joint involvement during follow-up. We found a surprisingly high number (9%) of patients without joint involvement according to SLEDAI-2K who, however, scored a moderate STR. Therefore, also in the light of some results from previous studies (4), this small yet significant sub-population could potentially start a treatment for joint involvement, even though this symptom is not captured by the SLEDAI-2K. Moreover, the degrees of activity identified by the STR seem to be associated with different disease manifestations.
In particular, patients with a high STR more frequently showed neuropsychiatric and renal involvement compared to those with moderate and low activity. Positivity for anti-dsDNA and anti-SSA antibodies was also significantly more frequent in SLE patients with high STR disease activity.
It should be considered that the median STR score is low in this cohort and the majority of patients showed a low activity score, indicating a greater number of tender joints compared with swollen ones. We cannot exclude a possible role of concomitant fibromyalgia, which was not assessed in this analysis.
In conclusion, our study suggests the possibility of using the STR in the assessment of joint involvement in SLE patients. One of the strengths of this index is that it can be easily applied in clinical practice, thus allowing a quick assessment. At the same time, its sensitivity demonstrated in previous studies on RA patients (4) suggests the need for longitudinal studies in larger populations of SLE patients.
Figure 1 - Correlation between swollen to tender joint count ratio (STR) and disease activity score 28-joint count (DAS28) values and ESR.
Figure 2 - Distribution of the three subsets of disease activity according to swollen to tender joint count ratio values in patients with (A) and without (B) joint involvement defined by the systemic lupus erythematosus disease activity index 2000.
Table I - Historical, clinical, laboratory and therapeutic features of the 100 systemic lupus erythematosus patients.
Table II - Demographic, clinical and laboratory features of the enrolled systemic lupus erythematosus patients, grouped according to the disease activity identified by swollen to tender joint count ratio values.
Integrating structure-based machine learning and co-evolution to investigate specificity in plant sesquiterpene synthases
Sesquiterpene synthases (STSs) catalyze the formation of a large class of plant volatiles called sesquiterpenes. While thousands of putative STS sequences from diverse plant species are available, only a small number of them have been functionally characterized. Sequence identity-based screening for desired enzymes, often used in biotechnological applications, is difficult to apply here as STS sequence similarity is strongly affected by species. This calls for more sophisticated computational methods for functionality prediction. We investigate the specificity of precursor cation formation in these elusive enzymes. By inspecting multi-product STSs, we demonstrate that STSs have a strong selectivity towards one precursor cation. We use a machine learning approach combining sequence and structure information to accurately predict precursor cation specificity for STSs across all plant species. We combine this with a co-evolutionary analysis on the wealth of uncharacterized putative STS sequences, to pinpoint residues and distant functional contacts influencing cation formation and reaction pathway selection. These structural factors can be used to predict and engineer enzymes with specific functions, as we demonstrate by predicting and characterizing two novel STSs from Citrus bergamia.
Introduction
One of the largest and most structurally diverse family of plant-derived natural products is the isoprenoid or terpenoid family, with over 60,000 members comprising mono-, sesqui-, di-, tri-, and sesterterpenes, along with steroids and carotenoids [1]. These phytochemicals serve plants in defence against pathogens or herbivores and as attractants of pollinators [2]. They are also of high economic value to humankind due to their widespread use in pharmaceutical agents, insecticides, preservatives, fragrances, and flavoring [3]. The immense diversity of the terpenoid family derives from the polymerization and rearrangement of a varying number of simple 5-carbon isoprenoid units. Monoterpenes are 10-carbon (C10) compounds built up of two such units, sesquiterpenes are composed of three and hence are C15 compounds, diterpenes (C20) are composed of four, and so on. Sesquiterpenes are especially interesting due to their high diversity. Their formation is catalyzed from the C15 substrate, farnesyl pyrophosphate (FPP), by sesquiterpene synthases (STSs), a class of enzymes found in plants, fungi and bacteria [4].
Recently, we published a database of over 250 experimentally characterized STSs from over one hundred plant species, collectively responsible for the formation of over a hundred different sesquiterpenes [5]. These compounds all derive from the same substrate, FPP, through a branching tree of reactions such as cyclizations, hydride shifts, methyl shifts, rearrangements, re- and de-protonations to give rise to the immense existing variety in sesquiterpene structures. Apart from the functionally characterized STSs in the database, there are thousands of putative STSs in sequenced plant genomes and transcriptomes whose product specificity is unknown. In addition, many STSs in our database are multi-product enzymes, further complicating the matter of product specificity prediction. As a first contribution, we show that multi-product STSs usually catalyze products specific to a single pathway, indicating selectivity towards one precursor cation. Finding residue positions related to this cation choice across all STSs can reveal important aspects of the underlying mechanisms. However, our previous sequence-based analysis showed that these enzymes are very diverse and sequence similarity is heavily influenced by phylogeny [5]. While an approach using hidden Markov models derived from sequences is available to predict what kind of terpene synthase (mono-, di-, tri-, sesqui-, etc.) a particular enzyme may be [6], this kind of sequence-based grouping was not seen within STSs making products derived from a particular cation or cyclization [5]. As a result, previous studies directed at identifying determinants of catalytic specificity in STSs mainly used mutational approaches between and within a few enzymes from the same or closely related species [7][8][9][10]. While such approaches have been successful in finding residues influencing product specificity, their small scale in light of the large diversity of STSs makes it likely that they miss aspects shared across all plant STSs. However, terpene synthases across plants, animals, fungi, and bacteria all share a common structural fold [11]. Protein structures typically evolve at a slower pace than sequences, which means they can contain a wealth of information not easily retrieved from the corresponding sequences.
Here, we combine homology modelling to incorporate STS structural information and machine learning to tease out contributions of different residues to cation specificity. We show that structure-based prediction performs well across all plant species, including on STS enzymes that were published recently and were not used for the construction of the predictor. Such structure- or model-based machine learning has been explored before in other enzyme families and prediction tasks [12][13][14][15], and is challenging. One major challenge is the immense number of features produced, as each protein has many hundreds of residues, each of which has its own set of structural features. This poses a problem in cases like the current one, where labeled, experimentally characterized data is sparse. Here we used a novel hierarchical classification approach where many classifiers are first trained on each feature across all residues, after which the most predictive residues are selected. The final classifier is only trained on the feature values of these predictive residues. Thus, we are able to prune noisy and irrelevant features in order to pinpoint residue positions correlating with cation specificity. These selected residues are likely intrinsically linked to the catalytic mechanism of an STS and contribute to the enzymatic formation of the precursor cation. Many of these residues are also not found when relying on sequence-derived features alone, emphasizing the importance of structure in understanding catalytic activity.
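The exact features and model types are not specified in this passage; the following scikit-learn sketch (our own illustrative choice of logistic regression for the per-residue stage and a random forest for the final stage) outlines the two-stage idea of scoring residues individually and then training the final classifier only on the most predictive ones.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def select_predictive_residues(X, y, top_k=20, cv=5):
    """Two-stage, per-residue screening.
    X: array of shape (n_enzymes, n_residues, n_features_per_residue) holding
       sequence/structure features per aligned residue position.
    y: precursor cation labels per enzyme.
    Stage 1 scores each residue with its own small classifier; stage 2 trains
    the final classifier on the concatenated features of the top_k residues."""
    n_enzymes, n_residues, _ = X.shape
    scores = np.empty(n_residues)
    for r in range(n_residues):
        clf = LogisticRegression(max_iter=1000)
        scores[r] = cross_val_score(clf, X[:, r, :], y, cv=cv).mean()
    top = np.argsort(scores)[::-1][:top_k]
    final = RandomForestClassifier(n_estimators=200, random_state=0)
    final.fit(X[:, top, :].reshape(n_enzymes, -1), y)
    return top, final
```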
In addition, while the current characterized sequence space may be small, there are many thousands of uncharacterized putative terpene synthases whose sequences can provide valuable information about evolution and conservation, especially in regions where reliable structural information is not available. A correlated mutations analysis on all putative terpene synthases indicates co-evolving residue partners for our set of cation-specific residues which are implicated in shared functional activity (such as intermediate binding or coordination), favoring their co-evolution. Examining these residues and pairs in the context of each other and co-crystallized substrate analogs reveals important aspects of the STS reaction mechanism.
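The correlated-mutations method is not detailed in this passage; as a minimal stand-in, mutual information between alignment columns can flag co-varying positions (published analyses typically add corrections such as the average product correction or use direct coupling analysis instead). The sketch below assumes a gap character of '-' and equal-length aligned sequences.

```python
import numpy as np
from collections import Counter

def column_mutual_information(msa, i, j):
    """Mutual information between columns i and j of a multiple sequence
    alignment (a list of equal-length, aligned strings), used here as a simple
    proxy for co-evolving residue positions."""
    pairs = [(s[i], s[j]) for s in msa if s[i] != '-' and s[j] != '-']
    if not pairs:
        return 0.0
    n = len(pairs)
    p_ab = Counter(pairs)
    p_a = Counter(a for a, _ in pairs)
    p_b = Counter(b for _, b in pairs)
    mi = 0.0
    for (a, b), c in p_ab.items():
        pab = c / n
        mi += pab * np.log(pab / ((p_a[a] / n) * (p_b[b] / n)))
    return mi
```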
Apart from the independent test set of recently characterized enzymes, we also present a use-case of our predictor for STS specificity screening by predicting and characterizing bisabolyl cation synthases from Citrus bergamia, which further demonstrated the accuracy of the predictor. As the number of experimentally characterized STSs grows, this accuracy will further increase, potentially allowing for more fine-grained product specificity prediction.
The three-pronged approach presented here combines a modest amount of labelled sequence data, a very small amount of experimental structure data, and large amounts of unlabeled sequence data using homology modelling, interpretable machine learning, and co-evolutionary analysis to predict and investigate the underlying mechanisms of cation specificity in STSs. This approach can also be useful for exploring specificity in other enzyme families with characteristics similar to the STSs.
Sesquiterpene synthases follow a single branch of the reaction tree
The reaction cascade of an STS can take two directions. As is depicted in Fig 1, all reactions are initiated by a metal-mediated removal of the diphosphate anion in the (E,E)-FPP substrate, leading to the formation of a transoid (2E,6E)-farnesyl cation (farnesyl cation). The farnesyl cation may then isomerize to form a cisoid (2Z,6E)-farnesyl cation (nerolidyl cation). These two cations may be quenched by water or undergo a proton loss to form acyclic products (acyclic-F and acyclic-N). However, both farnesyl and nerolidyl cations can undergo cyclization at the C10-C11 bond, while the nerolidyl cation can also cyclize at the C6-C7 bond. The resulting cyclic cations can undergo further hydride shifts, methyl shifts, cyclizations, rearrangements, re- and de-protonations to form the final products of the enzyme [16]. Thus, the farnesyl and nerolidyl cations form the roots of a branching tree of hundreds of diverse intermediates and end products.
Many STSs are multi-product enzymes, with two of the more extreme examples being δ-selinene and γ-humulene synthases from Abies grandis, which produce 52 and 34 sesquiterpenes, respectively. In order to determine whether cation specificity is maintained across minor products, we looked at the reaction pathways of the sesquiterpenes produced by the multi-product enzymes in our previously assembled database [5]. In their review [17], Vattekkatte et al. looked into multi-product mono-, sesqui-, and triterpene synthases with respect to factors affecting their promiscuity, such as substrate isomers, metal cofactors and pH. However, they did not specifically address the similarity of an enzyme's minor products to the major product. The collation of characterized STSs in our database provides us with 96 multi-product STSs across a wide variety of species, to better analyze and address this question.

Fig 1. The reaction mechanism of sesquiterpene production starts with farnesyl diphosphate ((E,E)-FPP). Loss of the diphosphate moiety (OPP) leads to farnesyl cation formation. The farnesyl cation can subsequently be converted to the nerolidyl cation. Acyclic sesquiterpenes (acyclic-F and acyclic-N) are formed from these two cations by proton loss or reaction with water molecules. Possible cyclizations for both cations are indicated in the figure. The subsequently formed cyclic cations undergo modifications and rearrangements to form cyclic sesquiterpenes. Some of these sesquiterpenes (g-A and bcg) themselves act as neutral intermediates which can be re-protonated and undergo further reactions to form more products. Products are also formed from specific charged intermediates such as a 1,2- or 1,3-hydride shift of the 10,1-cyclized farnesyl cation (1,2H, 1,3H) and the cadalane skeleton (cadalanes), which can be formed via either of the two precursor cations or via acid-induced rearrangement of germacrene D. The 7,1-cyclization of the nerolidyl cation, shown in gray, is not found in plant-derived sesquiterpenes. g-A = germacrene A, g-D = germacrene D, bcg = bicyclogermacrene. (https://doi.org/10.1371/journal.pcbi.1008197.g001)
For each sesquiterpene, the route taken in the reaction tree, up to the depth shown in Fig 1 was determined as explained in Materials and Methods. Out of the 96 enzymes with more than one product, 79 (82%) had products from the same branch of the tree, three were 10,1-farnesyl synthases with products from different sub-branches, seven had products from the same cation but a different initial cyclization, and twelve synthases had products from different cations, including the aforementioned multi-product Abies grandis γ-humulene synthase. Of these twelve multi-cation STSs, however, eight had an acyclic farnesyl product in addition to nerolidyl-derived compounds. The ease of formation of acyclic farnesyl products (a single step from the farnesyl cation) indicates that they can be formed even by a nerolidyl synthase as the farnesyl cation is the precursor of the nerolidyl cation. Thus there are only four examples of true multi-cation STSs (<5% of the experimentally characterized multi-product enzymes).
This analysis indicates that STSs are, in the vast majority of cases, optimized for the production of sesquiterpenes from a single, well-defined reaction route, by careful control of intermediates right from the commencement of the reaction, at the precursor cation formation step. This insight can be helpful in STS engineering: changing the reaction specificity of an existing STS to products in the same reaction pathway may be easier to accomplish, with fewer mutations, than the introduction of a new reaction pathway. For instance, the 412 active mutants made by O'Maille et al., exploring the mutation space of tobacco 5-epi-aristolochene synthase and Hyoscyamus muticus vetispiradiene synthase, in many cases resulted in an increased production of germacrene A along with the original product 5-epi-aristolochene, which is derived from germacrene A [18]. Given that even multi-product STSs make sesquiterpenes from the same cation, understanding and predicting this cation specificity can greatly narrow down the possible products of a given enzyme.
Structure-based cation prediction helps overcome species bias
STS enzymes all have similar tertiary structures consisting entirely of α-helices and short connecting loops and turns. Each structure is typically organized into two domains, with the C-terminal domain containing the active site. The conserved nature of STS enzyme structures across the plant kingdom indicates that applying machine learning on attributes derived from these structures may explain more about cation and product specificity in STSs than sequence-derived attributes, which are more phylogeny-specific. However, due to the lack of available crystal structures for all the characterized enzymes, we turn to homology modelling to make up the deficit. Six crystal structures of STS enzymes were used for multi-template homology modelling of the C-terminal domains of 247 characterized plant STSs. Table 1 describes these six structures: three are farnesyl synthases, two are nerolidyl synthases, and one is a cadalane-type synthase. S1 Appendix provides more detail on the modelling results, by comparing multi-template models to those created using the single closest template, and by comparing models of the six experimental structures to the structures themselves. Models of the full enzyme sequences were also made but found to be sub-optimal due to the lack of a defined sequence alignment in regions surrounding the C-terminal domain. These results indicate that the final C-terminal domain models are accurate and capture the characteristics of the true structures in this region.
In order to assess the effect of using features derived from modeled structures compared to purely sequence-based approaches, we compared results across three classifiers. One is a simple rule-based classifier, Clf-id, that assigns a test sequence the same class as its closest training sequence based on sequence identity. While this approach is a good baseline and often used in biotechnological applications, machine learning-based models have two advantages over this simple model. Firstly, they are capable of incorporating more complex features, such as the sequence and structure features described in Materials and Methods, as well as recognizing more complex patterns in these features, allowing for more accurate predictions that generalize across proteins. Secondly, trained machine learning models can be inspected to understand the patterns used for prediction [19,20]. In this case, this can help gain insight into the contributions of different residues to cation specificity. Therefore, the other two classifiers use the hierarchical machine learning framework described in Materials and Methods with only sequence features (Clf-seq) and with sequence and structure features (Clf-str), respectively. Our classification frameworks make use of gradient boosting trees due to their good out-of-the-box performance and capability of handling missing feature values caused by deletions in some enzymes.
The dataset consists of 176 farnesyl cation-specific STSs and 72 nerolidyl cation-specific STSs. The remaining 25 STSs are not used for training as they either form products from both cations or only cadalane-type compounds. The cadalane skeleton (Fig 1) can be formed by either of the two precursor cations [21] or, in the acidic conditions of in vitro assays, from rearrangements of germacrene D [22]. These two alternatives make it difficult to judge whether a cadalane STS goes through the farnesyl or the nerolidyl pathway. Table 2 shows the performance of these three classifiers using increasingly difficult validation schemes: a random five-fold cross-validation (Random Split), a leave-10-genera-out based scheme (Genus Split), and, finally, training on 177 dicot STSs (124 farnesyl, 53 nerolidyl) with 48 monocot and coniferous STSs (29 farnesyl, 19 nerolidyl) in the test set (Clade Split). Due to the imbalanced nature of the dataset, we use a variety of different metrics to measure performance. These are further described in the Materials and Methods. While Clf-str outperforms the sequence-based approaches by a small margin in the random cross-validation results, the improvement is much more striking in the phylogenetic validation schemes. As STS sequence similarity is biased more towards phylogeny than functional activity, Clf-id and Clf-seq make more errors when testing on species far away from those in the training set. Since Clf-str uses structure-derived information, it is less affected by this bias. This indicates that the structure-based classification framework is more suited to be applied across all plant species, including under-explored species, without losing out on predictive performance. S1 Fig shows the predicted nerolidyl percentages for each enzyme with Clf-str (using the probabilities returned by the genus-based split for each enzyme in the dataset). A clear separation is seen between farnesyl and nerolidyl cation-specific enzymes. However, because of the much lower number of nerolidyl cation-specific enzymes in our dataset, the nerolidyl predicted probabilities for nerolidyl cation-specific enzymes (average 53% ± 30%) are generally lower than the farnesyl predicted probabilities of farnesyl cation-specific enzymes (average 88% ± 19%, calculated as 100 - nerolidyl predicted probability percentage). As a consequence of its superior performance, the structure-based classifier likely finds features and residues that are important for cation specificity across all plant species, something we can look into to understand generic STS cation determinants.
Thirty cation-specific residues were selected from Clf-str, as described in Materials and Methods. Fig 2 visualizes the characterized STS enzymes with respect to the feature values of the cation-specific residues, colored by cation and cyclization specificity. Though imperfect, a separation of farnesyl and nerolidyl cation-specific STSs can be seen. Most cadalane STSs lie on the farnesyl side, with only two being predicted as nerolidyl cation-specific STSs in the Genus Split results. This can indicate that many cadalane synthases in fact make their products through a germacrene D intermediate, or, if the measurements were conducted in vitro, that acidic assay conditions led to spontaneous product rearrangements; thus, the interpretation of Fig 2 in terms of STSs producing only cadalane products is unclear. While nerolidol synthases (N-acyclic in Figs 1 and 2) cluster separately from the rest, farnesene and farnesol synthases (F-acyclic in Figs 1 and 2) are found all across the reduced space. Due to the ease of formation of these acyclic farnesyl products, it is possible that ancestral versions of these enzymes did indeed produce nerolidyl-derived compounds but this capability was later lost.
A further test of Clf-str was performed on 42 STS enzymes characterized from August 2017-January 2020, not included in the first release of the characterized STS database [5], 31 of which come from species not present in the current set. This new set consists of 24 farnesyl cation-specific STSs, 16 nerolidyl cation-specific STSs, three STSs producing only cadalane compounds, and one STS which produces both farnesol and nerolidol. Clf-str correctly predicted all the nerolidyl cation-specific STSs and all but two of the farnesyl cation-specific STSs. Both the cadalane and the acyclic STSs were predicted as farnesyl cation-specific STSs. These enzymes are listed in S1 Table and have been added to the second version of the characterized STS database, found at bioinformatics.nl/sesquiterpene/synthasedb.
Residues in five structural regions contribute to cation specificity
The cation-specific residues according to our structure-based predictor are indicated in Fig 3A on the tobacco epi-aristolochene synthase (TEAS) structure. They are roughly found in five different structural regions, labeled A-E. Also shown are the residues in the three known terpene synthase motifs, namely RxR, DDxxD, and NSE/DTE, as well as the magnesium ions and substrate analog. Fig 3B shows the sequence composition of these thirty residues across farnesyl and nerolidyl cation-specific STSs. While the sequence logos (Fig 3B) show significant differences in some predictive positions, others have very similar amino acid distributions across the two cations, indicating that their differences lie solely in some combination of their structural features, likely due to their structural interaction with neighboring residues; such positions would not be recovered from sequence information alone.

Fig 2. Characterized STSs visualized using the feature values of the cation-specific residues followed by dimensionality reduction using UMAP [23], which positions STSs with similar feature values closer to each other. Squares represent farnesyl cation-specific STSs and diamonds represent nerolidyl cation-specific STSs. Each STS is also colored by its cyclization specificity. Enzymes catalyzing products from different precursor cations are marked as triangles.

To obtain more information about these thirty residues we turned to the wealth of uncharacterized putative terpene synthase enzymes in sequenced plant genomes and transcriptomes. The products of these putative enzymes are unknown, so they cannot be used to train a classifier; however the sequences themselves still carry valuable information about conservation and divergence. We used co-evolutionary analysis to inspect these sequences in the context of the cation-specific residues. Co-evolutionary analysis is a statistical technique applied on protein sequence alignments based on the underlying biological theory of residue co-evolution [24]. This theory postulates that if there is a mutation in one residue involved in an interaction, then proteins in which its interaction partner is mutated as well, in a way that maintains their interaction, are preferentially selected by evolution. While this technique is most often used to find potentially interacting residues within a protein in protein families with scant structural information, an alternative scenario of co-evolution can play out in the case of functionally related residues [25]. For instance, two residues which contact a substrate or an intermediate, while not interacting directly, may still co-evolve to maintain their shared interactions with the substrate.
We used 8344 putative terpene synthase N- and C-terminal domains obtained from sequenced plant genomes and transcriptomes to perform a co-evolutionary analysis, as described in Materials and Methods. When looking at the top 1500 predicted contacts (S3A Fig), 328 have residues at least 7 positions apart in the sequence, indicating long-range interactions across different structural regions. Only 78 (24%) of these are not capable of physical interaction (>11 Å apart) in all of the six STS crystal structures. Ten of these predicted pairs, shown in Fig 4, have at least one residue among the thirty cation-specific residues. Below, we discuss specific examples of these residues and pairs in the context of the five regions predicted to be involved in cation specificity.
Residues in region A (colored dark green in Figs 3 and 4) lie in the A-C loop, close to the conserved RxR motif, with one residue forming the second Arg in the motif itself. This motif has been implicated in the complexation of the diphosphate moiety, preventing nucleophilic attacks on any of the intermediate carbocations [26]. As this is one of the first steps to occur in order for the resulting charged intermediate to undergo cyclization and further reactions, it can play a crucial role in determining how the newly formed cation is positioned, thereby determining whether a farnesyl cation is formed or a nerolidyl cation. In previous work we showed that many nerolidol (N-acyclic) synthases have a mutation in this motif, from RxR to RxQ (as can be seen in the sequence logo; Fig 3B, position 266), indicating that changes in and around this motif can indeed affect the products formed.
The six residues in region B (colored red in Figs 3 and 4) all lie right in the center of the active site cavity, in helix D (G276, T293, S298 in TEAS), around the kink region in helix G2 (T402, Y404, L407) and in helix H2 (C440), enveloping the descending substrate from all sides. The residues in this region are very close to both the substrate analog co-crystallized with TEAS as well as the analog co-crystallized with Abies grandis α-bisabolene synthase, as depicted in Fig 4C. This proximity has led to a more thorough exploration of these residues in the context of product specificity than in other regions of the structure. For instance, Yoshikuni et al., 2006 explored plasticity residues in the active site of the promiscuous Abies grandis γ-humulene synthase [8]. Among the many mutants they made, those that converted the major product from the farnesyl-derived γ-humulene to nerolidyl-derived products such as β-bisabolene, α-longipinene, longifolene, and sibirene contained mutations in the residues corresponding to T402, Y404 and C440 in TEAS, three cation-specific residues according to our predictor. Two of these residues (Y404 and C440) have also been explored by Salmon et al. [27] when mutating the acyclic β-farnesene synthase from Artemisia annua to a cyclic nerolidyl cation-derived enzyme.
Similarly, Li et al., 2013 demonstrated that a single mutation in the kink in the G2 helix can change the product specificity of an Artemisia annua STS from α-bisabolol, a nerolidyl-derived sesquiterpene, to the farnesyl-derived γ-humulene [28]. T402 from this kink has co-evolved with S298 in the parallel helix D. As depicted in Fig 4B (column 1), while these two positions are very often both Serine in farnesyl cation-specific STSs, in nerolidyl cation-specific STSs the commonly occurring pairs are Thr-Ile or Tyr-Ser. The dipole of T402 has been implicated along with T401 in directing the cationic end of the farnesyl chain into the active site, preparing it for a C10 attack [26]. Isoleucine, which is not often found to be a catalytic residue due to its inert nature, cannot perform this task in nerolidyl cation-specific STSs. Another contact is between the cation-specific residue C440 and Y376 (numbered 2 in Fig 4B). A mutational analysis on a multi-product maize STS by Kollner et al. demonstrated the importance of Y376 in the formation of bicyclic products such as sesquithujene and bergamotene, derived from the nerolidyl cation [29]. The residue positioned three residues downstream of Y376 was identified by Kollner et al. in 2009 to be involved in controlling the ratio of α-bergamotene to the acyclic β-farnesene in maize STS orthologs [30]. Therefore, the combined effects of positions 376 and 440 are likely required for the formation of the nerolidyl cation followed by a second cyclization to bicyclic nerolidyl sesquiterpenes. An alignment of TEAS with the examples discussed here is depicted in S4 Fig. These examples demonstrate that residues found important by our structure-based predictor are indeed involved in catalytic and functional activity. They also establish the power of an integrative machine learning approach to pinpoint residue positions important across a variety of species, a combination of what one would find from each of the individual studies referenced above. A Fisher's exact test for the significance of the number of residues found both by our predictor and in literature returned a p-value of 9.8e−07.

Fig 4. A. Tobacco epi-aristolochene synthase (TEAS) secondary structure with distal cation-specific co-evolutionary contacts (green arcs), motif residues (purple), and cation-specific residues (colored by region). Helix naming as in Starks et al. [26]. B. Sequence-pair conservation of four cation-specific contacts discussed in the text, across farnesyl and nerolidyl cation-specific STSs, and all putative terpene synthases. The height of a pair of letters represents the frequency of the pair appearing in those two positions, with 'X' representing gaps. C. Diagrams indicating the proximity of the residues labeled B in Fig 3B, as well as the residues that they co-evolve with, to substrate analogs trifluorofarnesyl diphosphate (FFF) co-crystallized with TEAS (left) and farnesyl thiodiphosphate (FPS) co-crystallized with Abies grandis α-bisabolene synthase (AgBIS) (right). Carbon atoms are numbered (white boxes) as in the FFF substrate analog moiety in PDB ID 5EAU. The closest distance (in Å) between each residue's β-carbon and a substrate atom is labeled in gray. Two co-evolving contacts (labeled 1 and 2 in A) are colored in green. (https://doi.org/10.1371/journal.pcbi.1008197.g004)
The 12 residues in region C (colored orange in Figs 3 and 4) encompass the entire E-F loop and parts of the G2-H1 loop at the very bottom of the active site cavity. An interesting residue here is H360, the last residue in the E-F loop. Sequence conservation shows that this position is very often deleted in nerolidyl cation-specific synthases, while farnesyl cation-specific synthases usually have bulky residues such as Tyrosine and Histidine (Fig 3B, position 360). Two of its co-evolving partners (numbered 3 and 4 in Fig 4B), one from the parallel helix G2 and one from the 4-5 loop in the N-terminal region, are also primarily deleted in nerolidyl cation-specific STSs but present in farnesyl cation-specific STSs, albeit usually as Glycine in helix G2. While the connection with the N-terminal domain is surprising, the parallel residue in the C-terminal domain, when present, may physically interact at some point during the reaction or in other plant STSs, not captured in the six crystal structures currently available [31]. A deletion can break this interaction, which in turn can have an effect on the positioning of helix G2 in the active site and thereby the positioning of the cation-specific residues that lie within it. These subtle alterations in cavity shape may in turn affect which kinds of intermediates fit comfortably inside the cavity.
Two consecutive high-scoring residues (region D, colored blue in Figs 3 and 4) lie in the H3-α1 loop, close to the catalytic NSE/DTE motif. This motif is involved in coordinating Mg2+ ions along with the DDxxD motif on the opposite side [32]. This region lies at the entrance of the active site cavity and is in an optimum position to contact the substrate as it enters the cavity. In addition, the inability to crystallize this region in three of the six crystal structures indicates that this loop is very flexible [33].
Residues in region E (colored light green in Figs 3 and 4) lie in helix I, near the end of the C-terminal domain and close to helix 7 and helix 8 in the N-terminal domain.
Overall, these results show that cation-specific residues in regions labelled A, B, and D lie within areas known to participate directly in the catalytic reaction. These residues were predicted by our machine learning approach without using any knowledge of their functional properties. Some of these residues have been mutated before and were shown to be important for cation specificity. This indicates that the other residues are also likely to perform similarly crucial roles, perhaps also in STSs that have not been used so far in mutagenesis experiments. Residues labeled C and E lie quite far from the active site and could be involved in subtle alterations of the cavity shape or in stabilising contacts with the N-terminal domain. Though this domain is known to be important for plant STS reactions, its exact function has not been fully explored. However, just as O'Maille et al. showed that residues distant from the active site can still be functionally crucial [18], these distal residues are likely to have multifaceted and interdependent roles in cation specificity that only such large-scale computational approaches can recognize. Further experiments and mutational studies in these regions are required to confirm and elaborate their involvement in the STS reaction mechanism. Meanwhile, the structure-based predictor, as well as the cation-specific sequence and contact conservation information described above, can be used to screen through the many thousands of uncharacterized putative STSs with a particular cation specificity in mind, as demonstrated in the next section.
Bisabolyl cation synthases from Citrus bergamia 'Femminello'
One potential application of the cation-specificity predictor presented here is to screen for enzymes with a desired specificity. We demonstrate this application to find STSs catalyzing the formation of products derived from the bisabolyl cation from 23 terpene synthase-like sequences extracted from the transcriptome of Citrus bergamia 'Femminello' (described in Materials and Methods). Using the hidden Markov model approach detailed in [6], 11 sequences out of these 23 were predicted to be STSs (as opposed to mono- or diterpene synthases). We used the cation specificity predictor on these 11 and sorted them by decreasing predicted nerolidyl cation specificity, selecting enzymes with a predicted probability percentage above 10%, based on the predicted percentages of the characterized database (S1 Fig). Two enzymes clustered close to the nerolidol cluster in Fig 2 and were thus excluded, resulting in four enzymes with >10% predicted nerolidyl cation specificity. Three of these could be experimentally characterized and were submitted to GenBank with identifiers MT636927, MT636928 and MW384854, respectively. MT636927 and MT636928 produced bisabolyl cation-derived products. MT636927 has 55% predicted nerolidyl specificity and produced trans-α-bergamotene, β-bisabolene, and α-bisabolol. MT636928 has 11% predicted nerolidyl specificity and produced zingiberene. MW384854 has 26% predicted nerolidyl specificity but produced the farnesyl cation-derived caryophyllene. The chromatograms and the fragmentation patterns of the identified peaks and the reference compounds can be found in S5 Fig and S2 Appendix.
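As an illustration of how such a screen might be scripted, the minimal sketch below assumes a trained Clf-str-style classifier (`clf`, with class 1 corresponding to the nerolidyl cation) and a feature matrix `X_candidates` built for the candidate sequences in the same way as for the training data; the variable names and the 10% cut-off applied programmatically here are illustrative.

```python
import numpy as np

# Hypothetical inputs: `clf` is the trained structure-based classifier, `X_candidates` holds the
# feature values of the 30 selected residues for the putative STSs, `candidate_ids` their names.
nerolidyl_prob = clf.predict_proba(X_candidates)[:, 1] * 100   # predicted nerolidyl percentage

order = np.argsort(nerolidyl_prob)[::-1]                       # sort by decreasing specificity
shortlist = [(candidate_ids[i], nerolidyl_prob[i])
             for i in order if nerolidyl_prob[i] > 10.0]       # keep candidates above 10%
for name, p in shortlist:
    print(f"{name}: predicted nerolidyl specificity {p:.0f}%")
```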
Sequence identity based screening, on the other hand, predicts all 11 enzymes as farnesyl cation specific showing that based on only sequence identity, we cannot prioritize candidate genes for production of bisabolyl cation-derived products. Thus, the cation specificity predictor can be used for effective screening of STSs with desired intermediate specificity, saving time, labour and costs required for extensive experimental characterization. Considering that the bisabolyl cation is one of the least represented intermediates in our dataset, expanding the number of experimentally characterized enzymes used for training can further increase the accuracy of our results, and even allow for more fine-grained product specificity prediction.
Conclusion
The availability of growing numbers of characterized and putative sesquiterpene synthases opens doors for the application of computational analyses in order to obtain insights about this large and amazingly diverse family of enzymes. While STSs collectively produce many hundreds of compounds, these are all rearrangements of two precursor carbocations deriving from a single substrate. We show that multiproduct STS enzymes catalyze the formation of products deriving from the same cation, indicating that cation specificity is determined early in the reaction. A combination of structure-based supervised machine learning and unsupervised co-evolution gives us a set of structural regions implicated in cation specificity determination as well as possible functional relationships between residues in these regions and other parts of the STS structure. The predictor itself can be used for cation-specificity screening, while the residues and corresponding linkages discussed here can be used to design mutational studies with a higher likelihood of maintaining catalytic activity while changing cation specificity. Such an integrative approach can also be applied to other diverse enzyme families in order to uncover large-scale interdependent relationships between catalytic residues influencing product specificity. As the number of characterized STSs from across the plant kingdom increases, more specific predictors can be designed, in order to screen STSs at the cyclization or even product level.
Reaction pathway determination
The reaction pathway for each sesquiterpene in the database was determined using the scheme detailed in IUBMB's Enzyme Nomenclature Supplement 24 (2018) [34] up to the depth specified in Fig 1. For example, the sesquiterpene viridiflorene would be labeled F112 as it derives from bicyclogermacrene which itself is labeled F11. Sesquiterpenes derived from the cadalane skeleton, namely cadinanes, cubebenes, copaenes, amorphenes, sativenes, muurolenes, ylangenes, and their alcoholic variants, are marked as cadalanes as they can form from multiple reaction pathways.
Two sesquiterpenes share a reaction path if the pathway annotation of one is a non-strict prefix of the other's. For example, sesquiterpenes labeled F1, F11, and F113 belong on the same reaction path while those labelled F111, F112, and F12 do not. If multiple cadalane-type compounds are produced by one enzyme, they are assumed to come from the same path. These rules are used to calculate the number of multi-product enzymes with products following the same reaction path.
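A minimal Python sketch of this prefix rule (the label strings follow the annotation scheme above; the function names are ours, and cadalane-type products would need the special handling described in the text):

```python
def share_reaction_path(a, b):
    """Two pathway labels share a reaction path if one is a (non-strict) prefix of the other,
    e.g. 'F1', 'F11' and 'F113' lie on one path, whereas 'F112' and 'F12' do not."""
    return a.startswith(b) or b.startswith(a)

def single_path(products):
    """True if all products of a multi-product enzyme lie on a single reaction path."""
    return all(share_reaction_path(a, b)
               for i, a in enumerate(products) for b in products[i + 1:])

# Example: viridiflorene (F112) together with bicyclogermacrene (F11) and germacrene A (F1)
assert single_path(['F1', 'F11', 'F112'])
assert not single_path(['F112', 'F12'])
```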
STSs were labelled as farnesyl or nerolidyl according to the group that their products belong to. STSs making cadalane products along with additional non-cadalane products are labeled with the cation of these other products. Multi-product STSs producing compounds from different cations, as well as cadalane STSs without any non-cadalane product are considered separately and are not used for training.
Sequence extraction and alignment
N-terminal and C-terminal domain sequences were extracted from all spermatophyte plant STSs from the database using HMMER [35] and the Pfam [36] domains PF01397 and PF03936 respectively. All N-terminal and C-terminal sequence alignments were made using Clustal Omega [37], using the corresponding Pfam domain HMM to guide the alignment. A combined N-and C-terminal domain HMM was built by aligning each half of the common seed sequences from both respective Pfam domains, stacking the resulting alignments together, and using the hmmbuild tool in HMMER [35]. This HMM is referred to as Terpene_synth_N_C.
Homology modelling
For each STS, 500 multi-template homology models were created of the C-terminal domain region using MODELLER [38], with six STS structures from the PDB [39] as templates, as listed in Table 1. These were aligned to each sequence using the C-terminal PF03936 Pfam domain [36] as a guide, using Clustal Omega [37]. The top three models were selected based on their N-DOPE score for feature extraction.
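For orientation, a multi-template MODELLER run of this kind could be set up as sketched below. The alignment file name, target identifier and template codes are placeholders (the real templates are the six PDB entries of Table 1), and the paper's own modelling scripts may differ in detail.

```python
from modeller import environ
from modeller.automodel import automodel, assess

env = environ()
templates = ('tmpl1', 'tmpl2', 'tmpl3', 'tmpl4', 'tmpl5', 'tmpl6')  # placeholders for the six
                                                                    # template codes from Table 1
a = automodel(env,
              alnfile='target_templates.ali',   # placeholder PIR alignment, target vs. templates
              knowns=templates,
              sequence='target_cterm',          # placeholder target identifier
              assess_methods=(assess.DOPE,))
a.starting_model = 1
a.ending_model = 500                            # 500 models per enzyme, as in the text
a.make()
# The best models (the paper keeps the three best by normalized DOPE score) are then
# passed on to feature extraction.
```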
For comparison, 500 models were also made using a single template for each enzyme; the template chosen was the one having the maximum sequence identity to the enzyme being modelled. Similarly, models were made for each of the six template structures using the other five structures as templates. Models of full STS sequences (including the N-terminal domain) were also made using a similar multi-template approach with the custom Terpene_synth_N_C HMM to guide the alignment to the templates. Results for these three additional approaches are presented in S1 Appendix.
Feature extraction
Sequence and structure features were extracted from each STS as described below and aligned according to the C-terminal domain alignment. Gaps in the alignment were represented as NaNs for continuous features and as a separate category for categorical features.
Sequence features. For each STS sequence, PSIBLAST [40] was run on the non-redundant protein database (nr) [41] and used to calculate a position-specific scoring matrix (PSSM) and a position-specific frequency matrix (PSFM). The information content of each column in the PSSM was also calculated. SCRATCH [42] was used to predict the secondary structure and surface accessibility of each residue. Finally, the raw amino acid sequence was also used as a feature source. Categorical features were one-hot encoded.
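As a rough guide, profile features of this kind can be generated with BLAST+ and parsed along the following lines. The database path and number of iterations are assumptions, and the exact column layout of the ASCII PSSM (20 log-odds columns, 20 frequency columns, then the information content) may vary slightly between BLAST+ versions.

```python
import subprocess
import numpy as np

def run_psiblast(fasta, out_pssm, db='nr', iterations=3):
    # Assumed invocation: PSI-BLAST against nr, writing the ASCII position-specific scoring matrix.
    subprocess.run(['psiblast', '-query', fasta, '-db', db,
                    '-num_iterations', str(iterations),
                    '-out_ascii_pssm', out_pssm], check=True)

def parse_ascii_pssm(path):
    """Return per-position PSSM scores, frequencies (PSFM) and information content."""
    rows = []
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            # data rows start with the residue index and carry 20+20+2 numeric columns
            if len(parts) >= 44 and parts[0].isdigit():
                rows.append([float(x) for x in parts[2:44]])
    arr = np.array(rows)
    return arr[:, :20], arr[:, 20:40] / 100.0, arr[:, 40]   # scores, frequencies, information
```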
Structure features. Structural features were extracted for each of the top three homology models for each STS. All atom-level features were converted into α-carbon, β-carbon, and mean residue features. For Gly, the α-carbon was used for the β-carbon features as well. ProDy [43] was used to calculate the 50-mode Gaussian Network Model (GNM) and Anisotropic Network Model (ANM) atom fluctuations using the calcGNM/calcANM functions followed by the calcSqFlucts function. APBS [44] was used to calculate the Coulomb and Born electrostatics of a modelled structure. PDB2PQR [44] was first used to generate a PQR file from each PDB file, followed by running the born command with an epsilon (solvent dielectric constant) of 80 and the coulomb command with the -e option. DSSP features are calculated using ProDy [43] to give hydrogen bond energies, surface accessibility, dihedral angles (α), bend angles (κ), ϕ, and ψ backbone torsion angles, and tco angles (cosine angle between the C = O of residue i and the C = O of residue i − 1). Residue depths were extracted using BioPython [45] from the PDB files of the top three models.
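The elastic-network part of this step might look as follows with ProDy (the model file name is a placeholder and the contact cutoffs are ProDy defaults); the APBS, DSSP and residue-depth features would be extracted analogously from the same model files.

```python
from prody import parsePDB, calcGNM, calcANM, calcSqFlucts

atoms = parsePDB('model_top1.pdb')                           # one of the top three homology models
gnm, gnm_sel = calcGNM(atoms, selstr='calpha', n_modes=50)   # 50-mode Gaussian Network Model
anm, anm_sel = calcANM(atoms, selstr='calpha', n_modes=50)   # 50-mode Anisotropic Network Model
gnm_flucts = calcSqFlucts(gnm)                               # per-residue squared fluctuations
anm_flucts = calcSqFlucts(anm)
```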
Classification framework
A classification framework using Gradient boosting trees (as depicted in S6 Fig) was built for different sets of features. The framework is trained in three steps: 1. A separate gradient boosting tree is trained for each kind of feature for all residues.
XGBoost [46] was used with default parameter settings for these intermediate classifiers (100 trees, learning rate = 0.1, gamma = 0, subsample = 1, colsample_bytree = 1, colsample_bylevel = 1). These simple settings are sufficient as these classifiers are only used to find predictive residues, as described in the next step.
2. The sum of normalized weights for each residue across all the trained feature models from Step 1 is used as a scoring measure to select the top thirty residues.
3. A final gradient boosting forest with much stricter parameter settings (2000 trees, learning rate = 0.005, gamma = 0.01, subsample = 0.7, colsample_bytree = 0.1, colsample_bylevel = 0.1) is trained using XGBoost [46] on all the feature values of the top residues picked in Step 2. These parameter settings are chosen to make a more conservative classifier that avoids overfitting in three ways: reduced model complexity by regularization (using the gamma parameter), robustness to noise by random selection in each intermediate tree of both data points (the subsample parameter) and features (the colsample parameters), and a slow learning rate combined with a large number of trees to increase the power of the ensemble.
For testing, the features of the selected thirty residue positions in the test enzymes are fed into the trained classifier.
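A compact sketch of this two-stage procedure with the XGBoost scikit-learn API is given below. It simplifies the real feature layout (one column per alignment position per feature type, ignoring the one-hot expansion of categorical features); the hyperparameters are those quoted above, and XGBoost's native handling of NaN values covers the gap positions.

```python
import numpy as np
from xgboost import XGBClassifier

def select_top_positions(feature_blocks, y, n_top=30):
    """feature_blocks: dict of feature name -> (n_enzymes, n_positions) array, columns aligned
    to the C-terminal domain alignment; y: 0 = farnesyl, 1 = nerolidyl."""
    n_pos = next(iter(feature_blocks.values())).shape[1]
    scores = np.zeros(n_pos)
    for X in feature_blocks.values():
        clf = XGBClassifier(n_estimators=100, learning_rate=0.1)   # step 1: per-feature model
        clf.fit(X, y)
        scores += clf.feature_importances_                         # step 2: sum normalized weights
    return np.argsort(scores)[::-1][:n_top]

def train_final_classifier(feature_blocks, y, top_positions):
    X = np.hstack([X[:, top_positions] for X in feature_blocks.values()])
    clf = XGBClassifier(n_estimators=2000, learning_rate=0.005, gamma=0.01,   # step 3
                        subsample=0.7, colsample_bytree=0.1, colsample_bylevel=0.1)
    clf.fit(X, y)
    return clf
```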
Clf-seq and Clf-str are two classifiers built using this framework utilizing only sequence features and both sequence and structure features, respectively. Clf-id is a simple rule-based classifier that does not use this framework and instead returns the class of the closest training set sequence based on sequence identity.
Validation and testing
Three validation schemes are used to test a classifier.
1. Random Split: a random five-fold cross-validation.
2. Genus Split: a scheme in which cases from 65 genera are used for training and the rest for testing, repeated 10 times with different sets.
3. Clade Split: All dicot STSs are used for training and monocot and conifer STSs for testing.
Three different metrics are used to measure the performance of each classifier, using the definitions of TP and TN as the number of nerolidyl cation-specific synthases and number of farnesyl cation-specific synthases predicted correctly at a certain threshold of predicted probability, and FP and FN as the number of nerolidyl cation-specific synthases and number of farnesyl cation-specific synthases predicted incorrectly at a certain threshold. All metrics are calculated using the scikit-learn Python library [47]. 42 newly characterized synthases from literature (listed in S1 Table) are used as the final independent test set.
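For concreteness, a genus-aware split of this kind can be generated with scikit-learn as sketched below; `X`, `y` and `genera` are placeholder arrays, and since the three metrics are not named in this excerpt, the imbalance-aware scores shown (balanced accuracy and Matthews correlation) are only illustrative. In the full framework, the residue-selection step would be repeated inside each training fold.

```python
from xgboost import XGBClassifier
from sklearn.model_selection import GroupShuffleSplit
from sklearn.metrics import balanced_accuracy_score, matthews_corrcoef

# X: feature matrix, y: cation labels (0 = farnesyl, 1 = nerolidyl), genera: genus per enzyme
splitter = GroupShuffleSplit(n_splits=10, train_size=65, random_state=0)  # 65 training genera
for train_idx, test_idx in splitter.split(X, y, groups=genera):
    clf = XGBClassifier(n_estimators=2000, learning_rate=0.005, gamma=0.01,
                        subsample=0.7, colsample_bytree=0.1, colsample_bylevel=0.1)
    clf.fit(X[train_idx], y[train_idx])
    y_pred = clf.predict(X[test_idx])
    print(balanced_accuracy_score(y[test_idx], y_pred),
          matthews_corrcoef(y[test_idx], y_pred))
```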
Selecting cation-specific residues
The normalized weights across all feature classifiers were summed across all the folds of the Genus Split and the resulting thirty highest scoring positions represent the set of cation-specific residues. The sequence and structural features of these residues were used to visualize the set of characterized STSs. This was done by applying UMAP [23] to reduce the dimensionality to 2.
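The visualization step could be reproduced along these lines; mean imputation of the gap-induced NaN values is our assumption, since UMAP itself does not accept missing values.

```python
import umap                                 # umap-learn package
from sklearn.impute import SimpleImputer

# X_top: feature values of the 30 cation-specific positions for all characterized STSs
X_filled = SimpleImputer(strategy='mean').fit_transform(X_top)
embedding = umap.UMAP(n_components=2, random_state=42).fit_transform(X_filled)
# embedding[:, 0] and embedding[:, 1] give the 2-D coordinates plotted in Fig 2
```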
Co-evolution analysis on plant terpene synthase-like proteins
An HMM search was performed using HMMER [35] and the custom Terpene_synth_N_C HMM across all plant UniProt proteins [48] and all plant transcriptome sequences from the OneKP transcriptome dataset [49]. Only sequences with a length within one standard deviation of the mean sequence length of the characterized STSs from the database [5] were retained. The resulting set of uncharacterized sequences was aligned with Clustal Omega [37] using the same HMM and 10 guide-tree/HMM iterations (clustalo option iter=10). Alignment positions not present in any of the six structures in Table 1 were discarded.
CCMPred [50] was used to perform co-evolution analysis on this alignment. The top 1500 predicted contacts were selected based on their confidence scores (S3A Fig). Contacts containing one residue from the cation-specific positions, at least 11 Å apart in any of the six structures in Table 1 and seven residues apart in sequence were retained.
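A sketch of the contact-filtering step is given below; CCMpred writes a plain L x L matrix of coupling scores, the file name is a placeholder, and the final distance filter against the six crystal structures is only indicated in a comment.

```python
import numpy as np

scores = np.loadtxt('terpene_synthases.mat')      # L x L coupling-score matrix from CCMpred
L = scores.shape[0]
iu, ju = np.triu_indices(L, k=1)
top = np.argsort(scores[iu, ju])[::-1][:1500]     # top 1500 predicted contacts by score
pairs = list(zip(iu[top], ju[top]))

cation_specific = set(top_positions)              # the 30 positions selected by the predictor
candidates = [(i, j) for i, j in pairs
              if abs(i - j) >= 7                  # at least 7 positions apart in sequence
              and (i in cation_specific or j in cation_specific)]
# A further filter based on beta-carbon distances in the six crystal structures (Table 1)
# would be applied here, as described in the text.
```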
Visualization of cation-specific residues and contacts
Cation-specific residues and contacts were visualized in multiple ways.
• Sequence and Co-evolution Conservation Logos-The positions of predictive residues in farnesyl and nerolidyl cation-specific STSs were used to generate two sequence conservation logos based on the percentage of appearance of each amino acid at each position. The sequence conservation of four co-evolving residue pairs was also visualized across farnesyl and nerolidyl cation-specific STSs and the set of putative terpene synthases. These figures were made with matplotlib [52].
• Co-evolutionary Links-The cation-specific residues and contacts as well as terpene synthase motif residues were visualized on the secondary structure of the N-terminal and C-terminal domain portions of the tobacco epi-aristolochene synthase (TEAS) structure found by the two respective Pfam domains (PF01397 and PF03936), using matplotlib [52]. Helices are labeled as described by Starks et al. [26].
• Substrate Analog Proximity-Substrate analogs trifluorofarnesyl diphosphate (FFF) and farnesyl thiodiphosphate (FPS) were extracted from tobacco epi-aristolochene synthase PDB ID: 5EAU, and Abies grandis α-bisabolene synthase PDB ID: 3SAE respectively. Their positions in both structures were obtained by superposing the two structures to each other using the align command in Pymol [51]. Distances between a subset of the cation-specific residues and the atoms of the substrate analogs were visualized using matplotlib [52]. The atoms in both analogs are numbered according to the numbering of FFF.
Citrus bergamia 'Femminello' STSs
The cation specificity predictor was employed to select four STSs among the putative terpene synthases from C. bergamia with the highest predicted nerolidyl cation specificity. The sequences were codon optimised, synthesised and expressed in Rhodobacter sphaeroides, as described earlier in [53]. The analysis of the products from the engineered strains was performed on an Agilent 7890B GC coupled to an Agilent 5977B MS. The column used was an HP-5MS (30 m × 250 µm × 0.25 µm). The resulting chromatograms and the fragmentation patterns of the identified peaks and the reference compounds can be found in S5 Fig and S2 Appendix.
New atmospheric profiles for H.E.S.S. data analysis
Particles arriving at Earth allow us to scan and improve our understanding of the Universe, both in its composition and its dynamics. For ground-based experiments in astroparticle physics, the atmosphere plays a key role, and its understanding, monitoring and modelling are essential for a realistic description in data analysis and simulations. In this work we present the development of a novel experimental atmospheric model, the GNNA120 model, describing the atmosphere above the H.E.S.S. gamma-ray observatory in Namibia. This new description is the most realistic to date and introduces the possibility to study variations at different time scales. This enables us to provide the first-ever study of seasonal effects based on actual measured atmospheric profiles for the H.E.S.S. observatory and opens up the window to further improvements of the H.E.S.S. Monte Carlo simulations.
Introduction
Much of the radiation propagating in the Cosmos and incident on the Earth is thermal radiation generated in hot objects such as stars. However, it is well known that certain particle populations in the Universe cannot result from thermal processes but must instead be produced by collective mechanisms. The best-known such population is cosmic rays.
Even though the existence of cosmic rays has been known since the first decades of the 20th century, their acceleration sites have not yet been identified. In fact, deflections of charged particles in galactic and/or extragalactic magnetic fields are so large that over most of the energy range the pointing information is completely lost; only at the very highest energies, in the domain of the AUGER experiment, where the higher energy reduces the deflections, can the directional information possibly be exploited. In this context, neutral particles such as neutrinos and photons play a key role in the study of high-energy cosmic rays and their sources.
The detection of gamma rays can be performed on board satellites, e.g. Fermi-LAT or AMS, or at ground level, e.g. VERITAS, MAGIC and H.E.S.S., depending on the energy range. In ground-based experiments the atmosphere acts as an active target: it is the target medium for the cosmic particle, but also the emitter of Cherenkov photons and the transport medium for those photons. Since the atmosphere is a continuously changing system, its constant monitoring and correct parametrisation play a crucial role. Varying atmospheric conditions in terms of state variables may alter the development and detection of extensive air showers. The main goal of the study presented here is to revise the H.E.S.S. atmospheric parametrisation and quantify the systematic errors arising from an incorrect description.
The paper is structured as follows: In §2, the characterisation of the atmosphere above the H.E.S.S. site is presented. Following this, the new experimental GNNA120 atmospheric model is introduced and explained. In §3, we present and compare the former and new atmospheric profiles and other atmospheric features. The results are discussed and summarized in §4.
2 Characterising the atmosphere above H.E.S.S.
In the H.E.S.S. experiment, gamma rays are detected indirectly via the Cherenkov light emitted by the charged particles of the induced air shower. Thus, a good understanding of the atmosphere, which acts as the calorimeter of the system, is crucial for the Imaging Atmospheric Cherenkov Technique (IACT).
2.1 Previous studies of the atmosphere above H.E.S.S.
In 1999, motivated by the advent of a new generation of gamma-ray observatories, atmospheric density profiles, as well as the dependence of several light absorption and scattering processes on geographic position and time, were investigated by Konrad Bernlöhr [1]. The most important aspects treated in that work are:
• Vertical structure of the atmosphere. Being the most influential parameter, it was seen that different density profiles lead to differences in Cherenkov light density of up to 60%. Concretely, seasonal variations at mid-latitude sites were found to be of the order of 15-20%, as can be seen in Figure 1, where several average lateral distributions of Cherenkov light photons have been simulated with CORSIKA 5.71 [2], taking into account absorption of Cherenkov light.
• Atmospheric extinction of Cherenkov light. It includes absorption and scattering of the electromagnetic radiation by dust and gas between the emitter and the observer. For those phenomena, the main concern is that extrapolation to high altitudes must be avoided, and monitoring procedures must be applied, even though a measurement of the aerosol vertical structure is rather difficult. However, Lidar-based methods are available to measure the extinction profile with a 10% accuracy.
Note that within the H.E.S.S. data analysis and simulation tools, the Cherenkov transparency coefficient [3] takes into account the presence of elevated aerosol concentrations and of large-scale clouds. The parameters used as input for its calculation are the energy threshold of the telescope, which is directly proportional to the transparency of the atmosphere, and other inversely proportional quantities related to the telescope data-taking efficiency, such as the telescope-wise muon efficiency, average pixel gains and number of active telescopes. This approach has been proven to be an effective and practical method of distinguishing good-quality datasets without the need to measure the aerosol distributions [3], which is a rather difficult task.
• Refraction. Wavelength independence of the refractive index was assumed, since n(λ)-1 changes by only 5% over the wavelength range 300-600 nm, which is the range typically covered by photomultipliers. The influence on gamma-ray observations can therefore be considered negligible [1].
The vertical atmospheric profile used within KASKADE [4], one of the two main simulation frameworks used within the H.E.S.S. collaboration, was derived from balloon soundings at Windhoek airport in the 1990s, averaged over a whole year. A verification of this parametrisation is the main aim of the work presented here.
Experimental GNNA120 atmospheric model
Worldwide, the atmosphere is continuously monitored by a fleet of satellites, networks of weather stations, balloon soundings, etc. Here we exploited different databases belonging to NOAA's and NASA's monitoring and modelling centres, respectively. A full atmospheric profile model, named the experimental GDAS-NRLMSISE for Namibian Atmosphere up to 120 km (GNNA120) model, is introduced. For this new description of the atmosphere above H.E.S.S., two different sources of data have been chosen.
• Low altitudes. The Global Data Assimilation System [5] (GDAS) is a system used by the National Center for Environmental Prediction (NCEP) Global Forecast System (GFS) to place observations into a gridded model space for the purpose of starting weather forecasts with observed data. The full GDAS data are publicly available via the NOAA National Operational Model Archive and Distribution System (NOMADS) [6], a web-services-based project providing both real-time and retrospective access to climate and weather model data.
GDAS adds various types of observations to a gridded, 3-D, global model: surface observations, balloon data, wind profiler data, aircraft reports, buoy observations, radar observations, and satellite observations. Based on simulations and weather predictions tied to measurements, model data are available with a time resolution of 3 hours. The provided data contain information for 23 different levels, up to an altitude of about 25 kilometers above the surface. The levels are related to the pressure at the layer and span a range from 1000 hPa to 20 hPa. The GDAS grid spans the whole Earth with a grid size of 1°.
• High altitudes. The NRLMSISE-00 Model 2001 [7] describes the neutral temperature and densities of the Earth's atmosphere up to thermospheric heights. It includes temporal and spatial variations at various scales based on an analytical model.
The profiles that can be extracted from GDAS only contain data up to around 25 km. Thus, above the altitudes covered by the GDAS database, the data are taken from the NRLMSISE-00 model up to 120 kilometers, in layers of 5 kilometers. It has been verified that both models agree very well in the overlapping region, so no artefacts are induced by this combination.
GNNA120 atmospheric profiles
Temperature and density profiles for the combined data set extracted from GDAS and NRLMSISE are shown in Figure 2. One can appreciate the expected changes of the profiles with increasing altitude, from the troposphere up to the thermosphere, passing through the stratosphere and mesosphere. The steep changes in temperature correspond to the pauses between these layers. In order to check the impact of the humidity on the density profiles, the comparison of the averaged density profiles when taking humidity effects into account is plotted in Figure 3 for the different seasons and the whole year. For computing the humidity, a very accurate formula for determining the saturation vapour pressure by Herman Wobus [8] has been used. The relative changes have a maximum value of 0.7%, which leads us to the conclusion that this effect is rather small compared to other systematics.
Note that the humidity is an important parameter for fluorescence techniques and that, in the case of condensed water, it is currently taken into account in the transparency coefficient. It can also be noted that in future experiments such as CTA it would be measurable by Raman LIDARs and could be included in aerosol simulations.
Comparison KASKADE-to-GNNA120 atmospheric profiles
In order to compare the former and the new GNNA120 atmospheric profiles, relative residuals have been computed and are shown in Figure 4. The biggest changes (up to 20%) are found in the upper regions of the atmosphere. However, the region of interest (ROI) important for the development of TeV gamma-ray induced air showers ranges from an altitude of 2 km to 26 km. There, the largest differences with respect to the old KASKADE profile, up to 4%, are found for the seasonal GNNA120 profiles, namely for spring and autumn, around altitudes of 15-20 kilometers. Comparing KASKADE to the yearly GNNA120 profile, the changes are less significant and do not exceed 1%.
Slant depth
In principle, one can assume that the large changes of up to 20% seen in Figure 4 for the density in the upper layers of the atmosphere will not change the development of the shower in simulations, since the density goes rapidly to zero at these altitudes. To test that assertion, we calculate the cumulative slant depth, since it gives an estimate of how the position of the first interaction in the atmosphere, and the subsequent air-shower evolution, would change. The yearly-seasonal comparison is shown in Figure 5, where the biggest changes, up to 3 g/cm², are visible for autumn and spring. We note that the size of the effect is small compared to the intrinsic shower-to-shower fluctuations.
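To make the quantity explicit: the vertical atmospheric depth at altitude h is the integral of the density above h, and the slant depth follows by dividing by the cosine of the zenith angle in the flat-atmosphere approximation. A minimal numerical sketch (the profile arrays are placeholders):

```python
import numpy as np

def vertical_depth(altitude_cm, density_g_cm3):
    """Cumulative vertical depth X(h) = integral from h to the profile top of rho dz, in g/cm^2.
    Altitudes must be increasing; densities are in g/cm^3."""
    depth = np.zeros_like(density_g_cm3)
    for i in range(len(altitude_cm) - 2, -1, -1):            # integrate downwards, trapezoidal rule
        dz = altitude_cm[i + 1] - altitude_cm[i]
        depth[i] = depth[i + 1] + 0.5 * (density_g_cm3[i] + density_g_cm3[i + 1]) * dz
    return depth

def slant_depth(vertical_depth_g_cm2, zenith_angle_rad):
    """Flat-atmosphere approximation, adequate for small zenith angles."""
    return vertical_depth_g_cm2 / np.cos(zenith_angle_rad)
```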
Summary
The impact of the parametrisation of different atmospheric variables relevant for the atmospheric Cherenkov technique in the atmosphere above the H.E.S.S. site has been studied. By means of data coming from GDAS and NRLMSISE, the new GNNA120 model has been built.
The atmospheric profile used in the H.E.S.S. analysis, namely a yearly average at four different altitudes with interpolated values for the density up to 120 km, has been compared to the new GNNA120 model. In the ROI, the yearly comparison to the KASKADE profile shows differences of about 1%, meaning that the implementation of the yearly averaged atmosphere in the H.E.S.S. analysis is fully acceptable. This implies that, overall, we found surprisingly small differences with respect to the 20-year-old implementation in KASKADE, and our study can therefore be considered a validation of the profiles used in the H.E.S.S. analyses so far.
Note that the GNNA120 model including seasons, developed during this work, has been included in the H.E.S.S. analysis software. The results show differences of up to 20% at high altitudes and of up to 3% for the seasonal profiles in the range important for the gamma-ray induced air shower development. This emphasizes the importance of including seasonal changes and encourages us to further study effects at even shorter timescales.
The possibility of including humidity effects in the density profiles has been studied. The effect never exceeds 1% in any of the studied cases. Despite this, humidity is a parameter that presents large spatial and temporal fluctuations. Thus, we decided not to take it into account in the average profiles.
Some changes with respect to the yearly slant depth profile have been found for autumn and spring. The scale of these (<4 g/cm²) is modest and should not significantly influence the air shower simulations.
After checking the impact of the seasonal changes, one wants to go one step further by taking into account small-scale effects. The presented study opens up the window to further implementations of the parameters that describe the chaotic nature of the calorimeter of our experiment, the atmosphere. The next step for the GNNA120 model, and indeed the ultimate goal, would be to implement atmospheric profiles every 3 hours in a new method for Monte Carlo simulations: simulations reproducing the actual data-taking conditions in a time-dependent fashion, the so-called Run-Wise Simulations, currently under development within the H.E.S.S. Collaboration.
Figure 1: Average lateral distribution of Cherenkov light photons in the wavelength range 300-600 nm for vertical 100 GeV gamma-ray showers in CORSIKA 5.71 simulations with different atmospheric profiles (200 showers), from [1].
Figure 2: (a) GNNA120 yearly and seasonal averaged temperature profiles up to 120 km. (b) Yearly and seasonal averaged density profiles up to 120 km.
Figure 3: Percentage difference of the averaged density profile when taking humidity effects into account.
Figure 4: (a) Residuals of the computed GNNA120 model and the H.E.S.S. profiles used for simulations. (b) Residuals zoomed in on the ROI, important for the gamma-ray induced air shower development.
Figure 5: Differences between seasonal averaged and yearly averaged slant depth as a function of the altitude in the ROI. The altitude range was set to the ROI since, at high altitudes, it goes rapidly to zero.
Combining sample expansion and light sheet microscopy for the volumetric imaging of virus-infected cells with optical super-resolution
Expansion microscopy is a sample preparation technique that enables the optical imaging of biological specimens at super-resolution owing to their physical magnification, which is achieved through water-absorbing polymers. The technique uses readily available chemicals and does not require sophisticated equipment, thus offering super-resolution to laboratories that are not microscopy-specialised. Here we present a protocol combining sample expansion with light sheet microscopy to generate high-contrast, high-resolution 3D reconstructions of whole virus-infected cells. The results are superior to those achievable with comparable imaging modalities and reveal details of the infection cycle that are not discernible before expansion. An image resolution of approximately 95 nm could be achieved in samples labelled in 3 colours. We clearly resolve the concentration of viral nucleoprotein on the surface of vesicular structures within the cell and their positioning relative to cellular organelles. We provide detailed guidance and a video protocol for the optimal application of the method and demonstrate its potential to study virus-host cell interactions.
Introduction
Expansion microscopy is a technique that relies on the physical magnification of biological samples in order to visualise details that are spaced more closely than the diffraction limit of light (~300 nm). 1 2 The physical expansion is achieved by embedding fixed specimens in a polymer matrix that can absorb water, forming a so-called hydrogel. This approach generates volumetrically isotropic expansion and allows the bypassing of the diffraction barrier of light microscopy without any need for sophisticated instruments, enabling laboratories that possess standard fluorescence microscopes to image their samples at super-resolution. Despite the great potential of expansion microscopy, the physical nature of the expanded sample places some limitations on its use. Firstly, the hydrogels are challenging to image using conventional microscopes: such samples are bulky and mechanically unstable, making mounting and long-term imaging more challenging compared to conventional samples (for example, cells adherent on a glass slide). Secondly, when the hydrogel is placed in a dish and imaged through the bottom, the 'vertical' expansion of the sample hinders the imaging of the whole volume of the specimen using high numerical aperture (NA) objectives, which usually have short working distances. Moreover, the use of oil-immersion objectives generates a refractive index mismatch with the water-based hydrogels, which causes optical aberrations. Finally, the expansion process dilutes the fluorophore concentration, decreasing the fluorescence intensity by up to a hundredfold, which hinders the imaging of expanded samples with microscopes incapable of collecting a large number of photons or those requiring very high fluorescent signals.
Expanded samples have so far been imaged mainly using confocal microscopes. 2 However, a confocal microscope would not be optimal for the imaging of expanded samples: the weak fluorescence intensity of the expanded specimens is best recorded with a setup that is more photon-efficient than a confocal microscope. Light sheet microscopy is such a technique. Nonetheless, reports on the combination of light sheet microscopy with sample expansion have been limited so far. 3 4 5 6 7 In light sheet microscopy, the optical pathways of excitation and detection are geometrically decoupled such that the sample is illuminated with a thin sheet of laser light, and detection is performed along an axis orthogonal to the illumination. This separation of excitation and detection light paths maximises detection efficiency by minimising out-of-focus fluorescence. The axial confinement of the excitation results furthermore in dramatically reduced photobleaching. For detection, fast cameras with high quantum efficiencies can be used, and thus imaging speed can be increased by orders of magnitude compared to point-scanning techniques.
The principle of light sheet microscopy was developed more than a century ago by Siedentopf and Zsigmondy, then termed 'ultramicroscopy'. 8 The method was rediscovered by developmental biologists at the beginning of the 21st century 9 and its popularity has increased ever since. The high spatiotemporal resolution of light sheet microscopy was impressively demonstrated for high-speed imaging of embryonal development, 9 neural activity, 10 cardiac dynamics, 11 12 and physiologically representative subcellular imaging. 13 Despite all the advantages, a complication that regularly arises in light sheet microscopy concerns the mounting of the samples. The placement of samples on the microscope for imaging has to be compatible with the two-objective configuration of the method and solutions often need to be customised for the specific experimental configuration in use. Nowadays, commercial setups are available, and a lot of effort is still invested in the technical development of designs that are simpler to use and that improve the versatility and utility of the method. 14 15 In this work, we study the suitability of expansion microscopy and light sheet microscopy for the imaging of virus-infected samples using human A549 cells infected with live attenuated influenza vaccine (LAIV), a modified low-virulence variant of the influenza A virus that is the basis of flu vaccine formulations and sold under the names Fluenz (USA and Canada) and FluMist (Europe). 16 17 We show that expansion microscopy is particularly well suited to study host-cell vaccine interactions helping to understand underlying biomolecular mechanisms of the vaccine. Optical imaging at super-resolution is required in order to dissect the interplay of viral proteins and substructures with cell organelles, which often have dimensions comparable to, or smaller than, the diffraction limit of light (~300 nm). However, imaging entire infected cells in multiple colours to build a full 3D picture of the infected cell is challenging using alternative super-resolution techniques, like dSTORM 18 and STED, 19 due to their long acquisition times and proneness to photobleaching.
Here, we first describe the expansion of infected cells using a published expansion microscopy protocol. We then explain the imaging of the infected expanded cells on a light sheet microscope, and for comparison, also on widefield and confocal microscopes. By combining expansion microscopy with light sheet microscopy, we demonstrate how high-contrast 3D models of whole LAIV-infected cells can be easily reconstructed at super-resolution in three colour channels, highlighting the viral nucleoprotein located in cytosolic vesicles. These vesicles are essential for the transport of vRNA (viral RNA)-nucleoprotein complexes to the plasma membrane, where the mature virions form, and are also thought to mediate vRNA-vRNA interactions in the cytoplasm, which according to recent studies are crucial for virus assembly. 20 The vesicles were not resolved without expansion, but due to expansion could be characterised for their size and cellular distribution. Finally, we present a detailed video protocol on the mounting and imaging of expanded samples using a light sheet microscope. The aim is to make this technique available to the wider community so that the power of light sheet and expansion microscopy can be harnessed for addressing questions in virus research specifically, but also in cell biology more generally, for problems that require the study of detail within cellular volumes at sub-wavelength resolution.
Chemicals
Methanol-free formaldehyde was purchased from Thermo Fisher Scientific; the ampoules were used immediately after opening and any leftover formaldehyde discarded. All chemicals used for sample expansion (glutaraldehyde 50% in water, sodium acrylate, N,N'-methylenebisacrylamide, acrylamide, Proteinase K) were purchased from Sigma Aldrich and used as received.
Antibodies
Mouse anti-influenza A nucleoprotein (ab20343) and rabbit anti-beta tubulin (ab6046) primary antibodies were purchased from Abcam. Detection was via polyclonal goat secondary antibodies: an ATTO647N-conjugated anti-mouse antibody was purchased from Sigma Aldrich, while an AlexaFluor488 (AF488)-conjugated anti-rabbit antibody was purchased from Invitrogen.
Infection of A549 cells with LAIV
A549 cells were plated onto 13 mm round coverslips in 4-well plates at 60,000 cells per well, 16 hours before infection. The next day, cells were infected with LAIV at 10 PFU per cell. After one hour of incubation at 37°C and 5% CO2, the medium was exchanged with fresh new medium. Cells were fixed 9 hours post infection (hpi), permeabilised and labelled with antibodies according to procedures described below.
Immunostaining
Infected cells were fixed by incubation with 4% methanol-free formaldehyde and 0.1% glutaraldehyde in PBS for 15 minutes at room temperature, washed three times with PBS and then permeabilized by incubation with a 0.25% solution of Triton X-100 in PBS for 10 minutes. Unspecific binding was blocked by incubating with 10% goat serum in PBS for 30 minutes at room temperature. Without washing, the samples were incubated with the primary antibody, diluted 1:200 in PBS containing 2% BSA (bovine serum albumin) for 1 hour at room temperature. After three washes in PBS, the samples were incubated with the secondary antibody, diluted 1:400 in PBS containing 2% BSA, for an hour at room temperature in the dark. Samples were then washed 3 times with PBS. Samples that were not meant for expansion microscopy were counterstained with DAPI nuclear dye (10 µg/mL in PBS for 10 minutes at room temperature). Finally, the coverslips were mounted on glass slides using a Mowiol-based mounting medium. Alternatively, samples were expanded using the expansion microscopy protocol detailed below.
Expansion and imaging of samples
The expansion of samples was achieved following a published protocol. 21 Nuclear staining was performed after the first round of expansion, using DAPI, 10 µg/mL in water for 20 minutes. In order to image the expanded gels on the widefield and confocal microscopes, they were cut using a glass coverslip as a knife to fit in glass-bottom Petri dishes, which were pre-coated with poly-L-lysine (0.02% in water for 30 minutes). Alternatively, the gels were imaged with the light sheet microscope, by cutting a strip of gel with cells facing up, which was then glued on a 24x50 mm glass coverslip using cyanoacrylate-based super-glue (Henkel). The slide was left to cure for two minutes and then placed in an imaging chamber and filled with milli-Q water. This process is documented in detail in Supporting Video 1.
Data analysis
Vesicle sizes present in expanded samples that were imaged using light sheet microscopy were analysed using ImageJ. Data were plotted in GraphPad Prism (GraphPad Software, US).
Infection, staining and expansion of cells
A549 cells, from human alveolar carcinoma, are a common model for the study of LAIV infection and replication. 17 We incubated the cells with the LAIV particles at 10 PFU for one hour, then we exchanged the medium and let the infection cycle progress until fixation. The LAIV virions are not fluorescent, therefore, immunostaining of the viral proteins is necessary in order to study the infection progression using fluorescence microscopy. Here, we stained the LAIV nucleoprotein (NP), a structural protein that packs the viral RNA inside of the virus. 23 Additionally, we stained the cell microtubules and the cell nuclei, in order to study the interaction between the viral particles and cell organelles. A picture of the stained infected cells, not expanded and imaged on a confocal microscope, is shown in Figure 1. The resolution of the non-expanded images is enough to localise the viral nucleoprotein (NP) in the cell cytosol. However, the image resolution is too low to clarify the exact NP localization, form, and interaction with other cellular structures.
After immunostaining, the cells were expanded using a published expansion microscopy protocol. 21 Briefly, expansion microscopy works by synthesising a polymer matrix in situ, which cross-links the protein structures of the sample. The sample, now embedded within the gel, is then enzymatically digested by proteases in order to cleave the cells' rigid structures, such as the cytoskeleton. Without digestion, the gelled sample would not expand; however, the linking to the gel matrix guarantees that the cleaved proteins are not lost and that they keep their relative positions. Finally, the gel is placed in deionised water to expand. The expansion process spatially separates the fluorophores that are spaced more closely than the scale of the microscope resolution, hence increasing the level of detail in the final image. A schematic representation of the steps of the expansion microscopy protocol is shown in Supporting Figure 1.
Mounting expanded samples for microscopy
The expanded gels are unconventional imaging samples: since they mainly consist of water, they are very fragile and unsteady. In order to image the gelled samples on inverted confocal or widefield microscopes, we cut them to fit into glass-bottom Petri dishes with cells facing down (Figure 2A, left). One issue we encountered while imaging the gels in this configuration is their wobbling and drifting during the image acquisition. To minimise this issue, we pre-coated the glass bottom surfaces of the dishes with poly-L-lysine, which improved gel adherence. However, this is not always sufficient to keep the gels still, especially during long-term imaging. The use of cyanoacrylate-based glue has been suggested to keep the gels in place, 24 although here this would require the glue to be placed in direct contact with the cell-containing side of the sample, leading to potential deterioration of embedded biological structures. Moreover, the use of glue means that the glass bottom cannot be reused.
In order to image the expanded samples using a light sheet microscope, a small gel strip was cut and attached to a glass slide (24x50 mm) using super-glue, with cells facing up (Figure 2A, right). In this configuration, the glue was not in direct contact with the cells. The procedure was effective in preventing problems associated with gel wobbling or drift. Cutting the gel into a strip permits it to be placed in the small space between the two light sheet objectives; this configuration enables the imaging of the whole depth of the sample. Moreover, the light sheet microscope is equipped with water-dipping objectives, which eliminates optical aberrations due to refractive index mismatches with the (water-based) gels. On the other hand, the confocal and widefield microscopes that we used for this study were optimised for the imaging of fixed samples and were equipped with high NA oil objectives for the purpose of high light collection efficiency. They are however not ideal for imaging gelled samples, whose optical properties resemble more those of living samples than fixed ones. A step-by-step procedure for the mounting of expanded samples on a light sheet microscope is depicted in Figure 2B. A detailed video protocol of this procedure is presented in Supporting Video 1, where we show how to cut the gelled sample and glue it to the glass slide of the imaging chamber of the light sheet microscope.
Figure 1: Immunostaining was used to fluorescently label LAIV nucleoprotein (NP, magenta) and microtubules (green), whereas DAPI staining was used for labelling the nuclei (blue). Scale bar 25 µm. Size of images in bottom row 25x25 µm.
Comparison of imaging modalities
We compared the performance of light sheet microscopy of expanded samples with two commonly used conventional fluorescence microscopy techniques, widefield and confocal laser scanning microscopy (CLSM). The working principles of all three techniques are illustrated in Figure 3A. When imaging a sample using a widefield microscope, the whole fluorescent specimen is excited, which results in considerable out-of-focus light derived from fluorophores that lie outside of the focal plane, producing high background signals and low image contrast. A confocal laser scanning microscope mitigates this problem by using a point source for illumination and pinholes to filter out out-of-focus fluorescence. As a result, CLSM features better image contrast compared to widefield microscopy for thick samples. However, the requirement for scanning the point sequentially across the sample decreases the acquisition speed significantly. A light sheet microscope combines the advantages of a confocal laser scanning and a widefield microscope: here the sample is illuminated with a thin sheet of light that excites only those fluorophores that lie within the depth of field of the detection objective. Thus, signal is only generated from the illuminated fluorophores, and those in out-of-focus planes are not excited and therefore do not contribute to image blur. Speed is high due to the parallel detection of the widefield signal by all pixels of a camera.
Using all three microscopy techniques, we imaged A549 cells that were infected with live attenuated influenza vaccine (LAIV), fixed 9 hours post-infection, immunostained and expanded as described in Section 3.1. Example images featuring the cell nucleus, the viral nucleoprotein (NP) and microtubules are displayed in Figure 3B. As expected, we observed the lowest amount of bleaching with the light sheet microscope. The acquisition of a stack composed of 200 frames took roughly 30 seconds, using an exposure time of 50 ms; after acquiring one stack, we did not notice any evident sample bleaching. On the widefield microscope, photobleaching after an acquisition of a comparable volumetric image stack was also low. CLSM, however, resulted in substantial photobleaching due to the high laser power used (up to 80 mW) to boost the fluorescent signal. Furthermore, the speed of CLSM was also significantly reduced compared to the other methods because of its point scanning nature. Acquisition for a single frame in three colours took roughly five minutes by CLSM (using a line scanning frequency of 10 Hz and a 2048x2048 pixel field of view), while it took roughly 5 seconds on the widefield setup (200 ms exposure) and less than a second on the light sheet microscope (50 ms exposure). This long acquisition time, combined with the substantial photobleaching noticed, led us to rule out CLSM as a valid way of scanning our sample volumetrically.
We note that the microscope pinhole was opened to correspond to 3.7 Airy units in size (in contrast to the pre-set value 1.0 Airy units). This was done to boost the weak fluorescence signal, but came at the cost of decreased image contrast. In principle, similar (or better) contrast to light sheet microscopy can be achieved with CLSM via the use of small pinholes, but this is possible only for samples of high brightness. The reduced sensitivity of the confocal system reflects fundamental differences in the detection of the fluorescent signal.
To assess the image quality produced by the different imaging modalities, we applied a Fourier spectral power analysis similar to the method proposed by Demmerle et al. 25 Representative images of LAIV NP were selected and Fourier transformed, then radially averaged to produce a spectral power plot ( Figure 4A). This shows that the light sheet images have better spectral power at all spatial frequencies, denoting superior contrast. Furthermore, the spatial frequency at which each curve crosses the noise floor, denoting the absolute resolution limit, is the same for both the confocal and light sheet case. This demonstrates that the effective resolution of both types of image is approximately 400 nm.
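As an illustration of the analysis just described, the following is a minimal numpy sketch of a radially averaged Fourier power spectrum; the function name, binning and log scaling are our own choices, and the published method of Demmerle et al. may differ in detail.

```python
import numpy as np

def radial_power_spectrum(image, n_bins=100):
    """Radially averaged Fourier spectral power of a 2D image.

    Returns (frequency_bin_centres, mean_log_power); higher power at
    high spatial frequencies indicates better contrast/resolution.
    """
    img = image.astype(float)
    img -= img.mean()                                   # remove the DC offset
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2

    # radial coordinate of every pixel in frequency space
    cy, cx = np.array(power.shape) // 2
    y, x = np.indices(power.shape)
    r = np.hypot(y - cy, x - cx)

    bins = np.linspace(0, r.max(), n_bins + 1)
    which = np.digitize(r.ravel(), bins) - 1
    profile = np.array([
        power.ravel()[which == i].mean() if np.any(which == i) else np.nan
        for i in range(n_bins)
    ])
    centres = 0.5 * (bins[:-1] + bins[1:])
    return centres, np.log10(profile)
```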
This effective resolution limit is close to that predicted by the Abbe resolution criterion (λ / 2 NA) for the light sheet case in which the NA is 0.90 (375 nm), but is far from the theoretical resolution limit of 255 nm for the oil immersion lens used in the confocal imaging. We hypothesised that this was due to the aberrations induced by imaging at depth in a watery sample. Using a model for index-mismatched point spread functions 26 and a vectorial diffraction simulation, 27 we calculated theoretical point spread functions (PSF) for the oil immersion lens at different depths within the watery sample ( Figure 4B). This showed that the PSF was already highly aberrated only 10 µm into the sample, corresponding to roughly half the thickness of several types of mammalian cells.
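For reference, the Abbe numbers quoted above can be reproduced with a one-line calculation. The emission wavelength (~670 nm, typical of a far-red dye such as ATTO647N) and the oil-lens NA of 1.3 assumed below are our assumptions, not values stated in the text.

```python
wavelength_nm = 670  # assumed far-red emission wavelength
for name, na in [("water-dipping (light sheet)", 0.90),
                 ("oil immersion (confocal)", 1.30)]:
    print(f"{name}: Abbe limit = {wavelength_nm / (2 * na):.0f} nm")
# ~372 nm and ~258 nm, close to the 375 nm and 255 nm quoted above.
```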
These simulated PSFs also showed significant 'flare' at depth, in which much of the intensity within the PSF is displaced away from the focal plane. This explains why the spectral power analysis shows the widefield image to have the worst image quality and resolution (~500 nm): in contrast to the confocal case, there is no mechanism with which to exclude this out-of-focus light, leading to significant out-of-focus blur contamination.
In contrast to widefield and confocal microscopy stacks, the light sheet images require post-imaging processing, specifically a procedure called deskewing. This is necessary to correct for the angle at which the stage scans the sample through the focal plane of the detection lens (48° with respect to the detection axis) and to reconstruct the images in a conventional geometry. The deskewing can be performed with an affine transformation where each image slice is computationally shifted (deskewed) to its proper position in the three-dimensional image volume. This procedure was performed using a customized ImageJ macro based on the TransformJ ImageJ plugin. 28 29
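A minimal sketch of the deskewing idea, assuming a stage-scanned stack with axes (z, y, x); the trigonometric factor and padding behaviour depend on the instrument's exact scan geometry, and the actual processing here was done with an ImageJ macro based on TransformJ rather than this Python code.

```python
import numpy as np
from scipy.ndimage import shift

def deskew_stack(stack, scan_step_um, pixel_um, angle_deg=48.0):
    """Shift each z-slice laterally so that a stage-scanned stack is
    re-assembled in a conventional (orthogonal) geometry.

    stack        : 3D array, axes (z, y, x)
    scan_step_um : stage step between slices, in micrometres
    pixel_um     : lateral pixel size, in micrometres
    angle_deg    : scan angle with respect to the detection axis
    """
    # lateral displacement of each successive slice, in pixels
    # (sin of the angle to the detection axis; convention may differ per setup)
    shear_px = scan_step_um * np.sin(np.deg2rad(angle_deg)) / pixel_um
    out = np.empty_like(stack)
    for i, plane in enumerate(stack):
        # shift only along x; real implementations pad the output width
        out[i] = shift(plane, (0, i * shear_px), order=1, mode="constant")
    return out
```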
Expansion microscopy highlights the vesicular structure of NP-containing compartments
By combining expansion and light sheet microscopy we could generate high-contrast 3D reconstructions of whole infected cells. In Figure 5A, we show maximum intensity projections at different magnifications obtained from a light sheet image z-stack after deskewing. Using the z-stacks acquired combining expansion microscopy and light sheet microscopy we could render 3D models of whole LAIV-infected A549 cells with high contrast, as shown in Figure 5A (fourth row) and Supporting Video 2. The resolution increase brought about by the sample expansion allowed us to demonstrate that the LAIV nucleoprotein localises at or in the membrane of small vesicular structures in the cell cytosol ( Figure 5, third row). It is assumed that viral ribonucleoprotein complexes (vRNP) of influenza A virus, which are formed by nucleoprotein, viral RNA and other viral proteins, start to assemble in the cell cytoplasm at the membranes of recycling endosomes and are transported by those vesicles to sites of virion formation at the plasma membrane. 30 31 32 33 Taking the calculated expansion factor of 4.2 ( Figure 5B) into account, we find that the vesicles possess a diameter of up to ~500 nm (see size distribution in Figure 5C). The smallest vesicle diameter that we could resolve is ~150 nm. In contrast, before expanding the sample we were not able to resolve the vesicular structure of the compartments occupied by the viral nucleoprotein ( Figure 1). Interestingly, we find that the larger vesicles preferably occupy a space next to the nucleus where typically the Golgi apparatus is positioned, whereas the smaller vesicles are closer to the cell periphery. The space occupied by the larger vesicles is almost devoid of microtubules, and the vesicles do not seem to directly interact with the microtubules. This is interesting and in line with observations for influenza A virus, which suggest that the mode of transport for vRNP complexes can be microtubule-independent. 31 Moreover, there was no identifiable DNA compaction inside the nucleus, which is typical of other viruses such as herpes. 34 In the future, we aim to use this technique for studying the interplay between the viral proteins and the cell compartments in order to dissect the whole replication cycle of LAIV.
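A minimal sketch of how measured vesicle diameters can be converted back to biological scale using the expansion factor; the numerical values below are placeholders, not measurements from this study.

```python
import numpy as np

EXPANSION_FACTOR = 4.2   # gel expansion factor determined for this sample (Figure 5B)

# diameters measured in the expanded, deskewed light sheet images, in micrometres
measured_um = np.array([1.2, 1.7, 2.1, 0.8, 1.5])   # placeholder values only

true_nm = measured_um / EXPANSION_FACTOR * 1000.0
print(true_nm)   # vesicle diameters at biological scale, in nanometres
```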
Conclusions
In this work, we imaged LAIV-infected A549 cells using a combination of expansion microscopy and light sheet microscopy, as well as confocal and widefield microscopy. The 3D rendering of whole infected cells was troublesome using data acquired by confocal microscopy, given the long acquisition time and intense photobleaching noticed in the gelled sample portion. The widefield microscope did not possess enough sectioning resolution and was characterised by low image contrast, thus, it could not produce cell reconstructions with a sufficient level of detail for our purpose. Using the light sheet microscope, instead, we were able to scan the whole specimen and deliver 3D renderings of whole infected cells with the highest level of image quality of the three techniques. We concluded that light sheet microscopy in combination with sample expansion is most suitable for detailed investigations of the interplay between the viral proteins and the cell organelles in whole cells.
In terms of sample mounting and imaging, light sheet microscopy poses challenges, owing to the gel nature of the sample. We include instructions on the procedure in a step-by-step video protocol and find that, on mastering the method, the workflow for imaging with light sheet microscopy was faster than that for either widefield or confocal microscopy. While the acquisition of a stack composed of 200 frames took less than a minute using the light sheet microscope (50 ms exposure time), on the confocal system it took around 5 minutes for the acquisition of a single frame. We note that increasing the scanning speed or reducing the field of view would decrease the time of confocal imaging significantly; nonetheless, we are confident that a confocal system could not outperform a light sheet microscope as regards acquisition speed.
Expansion microscopy is a technique that has not yet been widely explored in virus research and has so far only been applied in two very recent studies on herpes simplex virus 1. 7 35 However, we show that it provides unprecedented detail on the interaction and localisation of virus particles with subcellular organelles. We observe the nucleoprotein (NP) in the membrane of cytoplasmic vesicles which are up to 500 nm in size and larger in the perinuclear region compared to the cell periphery. From previous studies on influenza A it is known that cellular Rab11a-containing endosomes colocalize with vRNPs. 31 Using our methodology, we can now characterise these vesicles in size and cellular distribution. This is important because these vesicles are essential for the transport of vRNPs to the plasma membrane where the mature virions form and are thought to mediate vRNA-vRNA interactions which according to recent studies are crucial for influenza virus assembly. 36 The advantages of combining expansion and light sheet microscopy were here demonstrated in a study of LAIV, but the method is, of course, applicable for studies of host cell biology in general. The continuous development of new protocols will allow the investigation of distinct events in the viral life cycle like entry, assembly and egress with high resolution since viral proteins, host cell proteins and even viral RNA can potentially be visualised at the same time.
|
2020-04-17T13:19:48.103Z
|
2020-04-11T00:00:00.000
|
{
"year": 2020,
"sha1": "aeab7e2213d68cbddb5c6e1facd5f1400026db6f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/boe.399404",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "aeab7e2213d68cbddb5c6e1facd5f1400026db6f",
"s2fieldsofstudy": [
"Engineering",
"Biology",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Biology"
]
}
|
221507234
|
pes2o/s2orc
|
v3-fos-license
|
Credit Card Fraud Detection using Machine Learning Algorithms
Nowadays, techniques such as phishing are used to commit internet banking fraud, i.e. transferring or removing money from an account holder's account without their permission. Credit card frauds occur in large numbers, and banks and the companies providing services to banks face serious problems. In this project we try to build a model that predicts fraudulent and non-fraudulent transactions as accurately as possible using machine learning algorithms and neural networks. The objective of the project is to predict fraudulent and legitimate transactions with respect to the time and amount of the transaction, using classification machine learning algorithms together with statistics, calculus (differentiation, the chain rule, etc.) and linear algebra in building the machine learning models for prediction and for understanding the dataset. We achieved an accuracy of 94.84% using logistic regression, 91.62% using naive Bayes and 92.88% using a decision tree; stepping into deep learning, we used an ANN and achieved a better accuracy than all the other algorithms, of 98.69%.
under the ROCi curve. Finally, it reveals the utility of our approach on both UCI databases and a sample collection in order to find out credit card fraud in daily life. Krishna Modi et.al [2] discusses different approaches for the identification and comparison of fraudulent transactions. You may use either of these or a variation of these approaches to spot fraudulent transactions. Artificial Neural Network: Capacity to know and may handle complicated data. Neural network Less access to learn. Hidden Markov model is a Markov statistical model. The framework that is modelled is believed to be a Markov chain of hidden states. An HMM is a double embedded, progressively evaluated technique for appropriation of likelihood. The next method discussed is decision tree which is a graphic illustration of potential choices based on different circumstances. Root node is the ending node of decision tree, splits them to different divisions, links the divisions to different nodes and it continues. Decision tree ends up in leaf node. Each node in the Decision tree depicts a check, branches linked with it represent their potential outcomes and a leaf node has a class name. Decision tree typically isolates the complicated issue into straightforward ones in this pragmatic method of splitting and determining. In this article, author outlines numerous approaches for the identification and analysis of fraudulent transactions. You may use either of these or a variation of these approaches to spot fraudulent transactions. You may incorporate new functions, and use various sampling methods.
Figure: Proposed algorithm
In future research, we must introduce a new architecture that incorporates Apache spark and isolation forest to identify fraudulent transactions in real time. Anomaly detection system is developed and is capable of pre-processing, training and forecasting real-time data transactions. They adapted pattern, using woodland isolation as follows: iForest algorithm is occupied as new transactions arrive to build an evaluation score which will show whether or not the activity is deceitful. Anomaly score below 0 is considered normal, score 1 is considered anomaly. Abhimanyu Roy et.al [4] talks about Recurrent Neural Networks: they are a kind of Artificial Neural Networks, fit to sequential information handling. Artificial neural systems don't give the fundamental versatility to demonstrate wide consecutive information. Along with linkages between layers, recurrent neural systems permit the production of relationship between neurons co-situated in a similar layer, bringing about the formation of cycles in the design of the system. Cycles permit the neurons in the model to trade loads at explicit time ventures during progressive estimations of a given information sources. This considers the actuation capacity to consider the condition of the neuron at a past stage in time. This encourages the actuation system to consider the neuron status in time at a first point. In this manner, the state can be utilized to pass a few components of the relating time stages to future time stages. Actuation work, dropout rate and disappointment work are basic parameters that impact the effectiveness of the RNNs. Examination in this paper shows a huge temporal segment. The LSTMi and GRUi model extraordinarily beat the ANN, which implies that a record's request for exchanges gives important data to observe among fraudulent and non-fraudulent exchanges. This shows a more extensive system could be better centered around preparing registering devices. Quality expanded in the tests as system size rose. Our examination kept consistent the quantity of neurons per mystery plate. Extra knowledge into the effect of system size on model yield can likewise show fluctuation in the quantity of neurons in each line. Kumar et.al [5] says that Random Forest is regularly referred to as Random Decision Forest. They are utilized for gathering, relapse and different errands done by a few choice trees being made. This Random Forest Algorithm depends upon administered learning and the important benefit of this calculation is, both order and relapse can be utilized. Random Forest Algorithm offers you more precision comparative with all other current plans and this strategy is all the more broadly utilized. The utilization of the random forest algorithm in the distinguish of fraudulent transaction will give you an exactness of around 90 to 95 percent. i .
Figure: Dataset after SMOTE tool is used
Random Forest Algorithm and Neural Networks were used in Proposed framework in this paper to identify and regress data collection. Next, the data collection for the credit card should be obtained and evaluated on the generated dataset. Then cleaning of the dataset is needed after data set review Typically, there will be several duplicate values in every dataset, and null values will be present, so cleaning method is needed to delete both those duplicate and null values. Instead split the dataset into two groups for evaluating and testing the dataset as Trained Dataset and Testing dataset. Once the data collection has been separated, the Random Forest Method is implemented where this method provides the best precision regarding the credit card frauds. The dataset will be divided into four groups by implementing the Random Forest Algorithm which will be generated in the form of an uncertainty matrix. The device output review will be performed based on the above description. The consistency of credit card fraud purchases will be achieved through this study and will essentially be described in the context of graphical representation. Yashvi Jain et.al [6] talks about various methods in this paper, one of them is: Fuzzy Logic that is used at times when the values are uninterrupted i.e., they are continuing. It is used for logics having multiple values. There are unequivocal classes of rules depending on that, exchanges are delegated an authentic or extortion one. Proficiently there are three components namely DeFuzzification , Rule Based , Fuzzification. Fuzzification is to divide an entering transaction in the classes of inflated, small or average based on the economic amount associated with it. Rule based is responsible for customizing and creating the laws based on the consumer's behavior. The payment on behalf of the customer is allowed to occur if it follows the law ,otherwise it doesn't. In Defuzzification, if an affair does not match with the above mentioned laws then that particular payment shall not be allowed. It will be stopped at once and then cross verified with that particular customer of the transaction if it should be given the permission to be carried with or be stopped at once. Shailesh S. Dhok et.al [7] tries to convey that due to easy movement in the e-commerce field, the use of credit cards has been commonly increasing with a rise in popularity too. Because of it's efficiency in making online payments or while regularly shopping, it is considered widely. As credit card becomes the most efficient mode of payment for both online as well as regular purchase. It offer's one a lot of comfort and ease to buy anything anywhere and make payment right there.In summary, with this rapid advancement danger of fraud anomaly by using credit card has also been rising. In the current scenario of credit card fraud framework , fake deal will be detected after transaction is done. It is not easy to find fraudulent and it's corresponding damage done to the customer which is on the part of the issuing authority to be provided. This paper talks about succession of activities in charge card interchange using a Hidden Markov Model (HMM) and show how it may be very well be utilized for tracking the cheats. Firstly,Hidden Markov Model is trained with the normal characteristics of the consumer's cardholder. This model aims at rejecting any incoming credit card transaction if it doesn't meet the higher expected probability and is hence considered fraudulent. 
The Hidden Markov Model gives a high rate of fraud detection together with a low rate of false alarms. This paper discusses an implementation of HMM-based credit card fraud recognition and the various steps involved in it, and demonstrates their representation as an underlying stochastic process with filtering. The authors additionally use ranges of transaction amount as the observation symbols, while the types of item purchased are viewed as the hidden states of the HMM. They also propose a technique for building the profile of cardholders, as well as for deciding the values of the observation symbols and the initial estimates of the model's parameters, and explain how the HMM can identify whether an incoming transaction is fraudulent. The framework is also scalable for handling large volumes of transactions.
Raghavendra Patida et.al [8] details out about the benefits of using credit card unknown of it's position, customers are able to make purchases anywhere as they had done before "over the desk". The main issue here is that the card or it's owner and required at the time of purchase scale.. It is therefore highly unlikely for the trader to crosscheck if the client is genuine or fake. Plastic card deception is become a difficult problem all throughout the world. Offices,, banks mislay tremendous amounts yearly due to deceit and deceitsters continuously and lookout for new ways to commit illegal actions. Be that as it may, this fraud will generally be specific to examples. Therefore this paper focuses on detecting this hoax through the neural network along with the genetic algorithm. Artificial neural network when prepared appropriately will function as a complex living creature's mind. It is not possible for artificial neural network to mimic exactly the way a brain can work, at which cerebrum work, yet neural system and mind, rely on dealings with the neurons, which is the little utilitarian unit in mind just the same as that of ANN. Algorithm of hereditary are utilized in settling on the choice regarding the system's topology, no. of concealed layers, no. of hubs that is going to be utilized in the structure of neural network systems for the issue of fraud detection. It also uses supervised learning feed forward back propagation algorithm. his paper focusses on various procedure that is being utilized to execute Mastercard extortion on how Mastercard misrepresentation affects the budgetary establishment just as dealer and client and extortion recognition methods utilized by VISA and MasterCard including most recent neural system strategies utilized in various territories because of its ground-breaking capacities of learning and anticipating. It encompasses combining of Neural Network along with genetic Algorithm that comes from the reality that if a person is very talented and is trained properly then chances of individual success is very high.
Anuruddha et al. [9] illustrate four types of fraud occurring in real-world purchases. Each fraud pattern is addressed by a set of machine learning algorithms, and the best method is selected by means of an evaluation. This evaluation serves as a thorough guide for picking a suitable algorithm for each type of fraud and reports the assessment with an appropriate performance measure. Another element is the use of predictive analytics through the trained machine learning models, together with an API to check whether a particular payment is genuine or not, while also handling the skewed distribution of the data. The paper proposes a detection framework that identifies four different kinds of fraudulent transaction patterns using the best-suited algorithms, taking into account the related problems identified by earlier researchers in this area. Use of the prediction analysis and API module is recommended since it notifies the end user through the GUI of the probability that a transaction is fraudulent. This part of the system allows the fraud investigation team to decide on their response and move to the next stage when a suspicious transaction is detected. The models chosen proved to be 83%, 91%, 72% and 74% accurate, respectively.
Ljiljana Brkic et al. [10] state that there has recently been a rise in the use of machine learning and data-mining techniques for detecting fraudulent transactions. In spite of this, a number of challenges remain, such as the scarcity of openly available data sets, high class imbalance, and changing fraudulent behaviour. In this paper, they describe the use of three ML algorithms, namely support vector machine, random forest and logistic regression, and compare their respective performance on real data containing credit card transactions. To overcome the disproportionate class sizes, SMOTE sampling is described. The performance of these methods is assessed using precision and recall. The paper outlines four major issues in the credit card fraud detection (CCFD) field and surveys the state of the art. Using openly accessible data sets available for CCFD, the performance of the three named algorithms is measured. The experiments use two basic approaches, (i) static and (ii) incremental, and the algorithms are evaluated using the ROC curve and average precision. Based on the results presented, SVM is found to be the poorest in performance in both the static and incremental setups, while LR is found to be better than SVM in the incremental setup.
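A minimal sketch of the setup described above (SMOTE oversampling followed by a static train/test comparison of logistic regression, random forest and a linear SVM); the file path, label column and hyperparameters are placeholders and illustrative choices, not those of the cited work.

```python
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.metrics import average_precision_score, roc_auc_score

df = pd.read_csv("creditcard.csv")              # placeholder path
X, y = df.drop(columns=["Class"]), df["Class"]  # 'Class' = fraud label (placeholder name)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Oversample only the training split, never the test split
X_tr, y_tr = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

for name, clf in [("LR",  LogisticRegression(max_iter=1000)),
                  ("RF",  RandomForestClassifier(n_estimators=200)),
                  ("SVM", LinearSVC(max_iter=5000))]:
    clf.fit(X_tr, y_tr)
    score = clf.decision_function(X_te) if hasattr(clf, "decision_function") \
            else clf.predict_proba(X_te)[:, 1]
    print(name, "ROC AUC:", roc_auc_score(y_te, score),
          "avg precision:", average_precision_score(y_te, score))
```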
Nadisha Abdulla et.al [11] says that if we want to identify frauds from the mixture of genuine as well as fraudulent ones,We need efficient techniques to detect fraud and recognize them accurately. These should not be based on simple pattern matching methods. One of the approaches is using a mixed method that includes phases of pre-handling. In this method, unknown exchanges will be evacuated, hereditary calculation is demonstrated for highlight choice and bolster vector shall be trained for feature coercion and support vector machine used for diversification. The requested prototype was tested on UCSD-FICO information mining challenge dataset of the year 2009 (imbalanced and unknown). This is the dataset that was used in rivalry and was sorted by the main supplier of investigation and choice administration innovation and the University of California, San Diego UCSD called, the FICO. His written paper depicts a straightforward extortion identification system which can successfully identify misrepresentation with incredible precision. The issues looked by the current strategies are the absence of freely accessible datasets and to overcome this issue a novel methodology is utilized for recognizing fakes on imbalanced dataset of UCSD dataset as a cross breed approach including hereditary calculation and bolster vector machines. So as to assess the suggested framework, UCSD-FICO's Data mining challenge informational index was utilized. The above mentioned model is now to be analysed using anonymized set of data and check whether it handles the issue of imbalanced class. This model involves different steps like clustering, genetic algorithm, pre-processing, and finally support vector machine classification. These stages are effectively actualized to the set of data and made a decent model for identifying extortion. SVM proves great exactness in the above mentioned strategy by grouping the information to be tested separately.
Sameena Naaz et al. [12] explain that credit card systems are vulnerable to fraud. The practices adopted by fraudsters bring enormous losses to financial companies and consumers, and fraudsters consistently attempt to discover new techniques and tricks to commit these unlawful activities. E-commerce trading is particularly challenged in the current era since it is more prone to this type of deceit; hence financial companies and banks need to look for ways to make sure that all types of transactions are secure on their side. Researchers are still working on this aspect to make e-commerce trading reliable. A comparison of the Isolation Forest and Local Outlier Factor algorithms, implemented in Python, and its experimental results are reported in this paper. After the data set was examined, it showed an accuracy of 76 percent for Isolation Forest and 97 percent for Local Outlier Factor. An investigation of fraud on a freely and easily accessible dataset, utilising ML algorithms such as the Local Outlier Factor algorithm and Isolation Forest, is introduced in the paper. The proposed system is implemented in Python.
Wen-Fang yu et.al [13] highlights as the internet based trading was increased in China, the amount of frauds associated with the corresponding trade also showed a significant increase. The most effective method to improve the recognition and counteraction of credit card extortion turns into the focal point of hazard control of embankment. This particular paper provides us with a credit card extortion discovery framework utilizing distance sum model as indicated by the different frequency and mis conventionality of misrepresentation in card exchange, by appealing outlier mining into the credit card fraud detection. From Final results. We see that the above mentioned method is handy ,practical ,easy and has a higher accuracy level in detecting the fraud. Malini,N et.al [14] is saying that credit card transaction is a popular mode of payment that is accepted for offline and online transactions. This mode of payment is both simple and popular. It has it's advantages in making installments and different exchanges. With the advancement in technology, there is a rise in credit card deception. Likewise, it can also be said that the misrepresentation of monetary values worldwide is definitely extending corresponding to the improvement. The loss is incurred because of such fraudulent acts are over hundred millions of dollars according to yearly records. Such Farudulent acts are carried out in such elegant manner that they are very similar to the genuine transactions. Therefore, using less complex methods and simple pattern related techniques are not useful. Banks are in need for efficient fraud observation methods to minimize the chaos. Several techniques have been in use for detecting fraudulent credit transactions. Some of these could be are Sequence alignment, Machine learning, Fuzzy logic, Genetic programming, etc. KNN algorithm and outlier detection methods can be implemented along with these techniques to achieve best solutions in such problems. These approaches have proved to be fruitful in minimizing the rate of false alarms and increasing the rate of fraud detection. These ways or either of them can be implemented in banks for fraud detection and also to avert any fraud deal.
Behrouz Far et al. [15] state that the goal of data analytics is to reveal hidden patterns and then use those patterns in a variety of situations to support decisions. The escalation in credit card fraud that accompanies the advance of modern technology makes credit cards an easy target, and this type of fraud has led to highly imbalanced publicly available datasets. In the paper, multiple supervised machine learning algorithms are applied to real-world datasets in order to detect fraudulent credit card transactions, and the best-suited classifier is selected. The key factors that tend towards high accuracy in the detection of credit card fraud are identified. Furthermore, the paper compares and discusses the performance of the different supervised machine learning algorithms existing in the literature against the best-performing classifier.
III. APPLICATION
Supervised and unsupervised learning are the most commonly used forms of machine learning, although other forms are also available. Supervised learning algorithms are trained with labelled examples, i.e. inputs for which the expected output is known. For example, data points from a piece of equipment may be marked as "F" (failed) or "R" (runs). The learning algorithm is given a series of inputs together with the corresponding outputs, and it learns by comparing its current prediction with the correct output in order to recognise errors; it then amends the model accordingly. Supervised learning uses approaches such as classification, regression, correlation and gradient boosting to estimate the label values of additional unlabelled data. Supervised learning is widely used in systems where past evidence predicts likely future events. For example, it may predict whether a credit card charge is likely to be fraudulent, or whether an insurance client is likely to lodge a claim. Unsupervised learning is used on data that has no historical labels. The machine is not told the "correct response"; the algorithm has to find out what is there. The objective is to explore the data and to locate some structure within it. Unsupervised learning works well with transactional data. For instance, it can recognise groups of consumers with similar characteristics which can then be used in marketing strategies, or it can identify the key characteristics separating groups of consumers from each other. Common strategies involve self-organising maps, nearest-neighbour mapping, k-means clustering and singular value decomposition. These algorithms are often used to segment text topics, recommend items and identify data outliers.
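As a toy contrast between the two paradigms just described, the following sketch trains a supervised classifier on labelled synthetic "transactions" and, separately, clusters the same data without labels; everything here is illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# synthetic transactions with ~2% "fraud" class
X, y = make_classification(n_samples=5000, n_features=10,
                           weights=[0.98, 0.02], random_state=0)

# Supervised: the label y ("fraud"/"genuine") is available during training
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised: no labels; the algorithm only groups similar transactions
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))
```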
A few algorithms used to identify fraudulent transactions: A. Convolutional Neural Network (CNN):
The mapping from the input to a hidden layer consists of a set of feature maps; each feature map represents one feature. The process of compressing a group of neurons into a feature map is called convolution, and subsampling reduces the number of feature map parameters. The layers are made of neurons with learnable weights and biases: each neuron receives several inputs, computes a dot product and optionally applies a non-linearity to it. The whole network still expresses a single differentiable score function, from the raw image pixels at one end to class scores at the other, and it has a loss function on the last layer, so all the tips and tricks learned for ordinary neural networks still apply. ConvNet architectures explicitly assume that the inputs are images, which allows certain properties to be encoded into the design; this makes the forward pass more efficient and greatly reduces the number of parameters in the network. In the simplest case, a ConvNet architecture is a stack of layers that transform the image volume into an output value. There are a few distinct categories of layers, and each layer takes a 3D input volume and converts it into a 3D output volume through a differentiable function. The overall setup of the model is split into two parts: the training part and the transaction identification part. The training part is further divided into two components: the feature sequencing layer and the CNN itself. The feature sequencing layer is used to refine the ordering of the transaction features. The historical data is first cleaned, then fed into the sequencing layer of the framework; the effect is checked by training the CNN, and the ordering of the feature sequence is adjusted based on the result. The ideal ordering can be found over the update period through a set number of reorderings of the features. When new transactions are fed into the model, the input features are arranged in the learned order, and the trained model then makes the judgement.
B. Fuzzy Logic:
Fuzzy logic is used in situations where we have no crisp truth values, i.e. the values are continuous; it is a multi-valued logic. There is a set of rules by which a transaction is marked as legitimate or fraudulent. Fuzzification, rule evaluation and defuzzification are the three stages. Fuzzification labels an incoming transaction as large, small or medium, depending on the associated monetary value. The rule base contains rules written around the consumer's behaviour; in defuzzification, a transaction that does not conform with the predefined set of rules is not allowed to proceed. It is halted automatically and then cross-checked with the customer before permission to proceed is granted or the transaction is terminated. Fuzzy reasoning keeps 0 and 1 as the extreme cases of truth but also includes the degrees of truth between them, so that, for example, the result of a comparison of two objects may not be "tall" or "short" but "0.38 tall". Fuzzy logic resembles the way our minds work: data are gathered and partial truths are formed, which are aggregated further into higher-level truths. When certain thresholds are exceeded, further outcomes are triggered. This kind of mechanism is used in neural networks, expert systems and other artificial intelligence applications. Fuzzy reasoning is important for achieving human-like AI capabilities, often referred to as artificial general intelligence: the representation of ordinary human cognitive abilities in software so that the AI system can find an answer when confronted with an unfamiliar problem. Five inputs are used: credit card transaction period, number, place, distance, and frequency. A minimal code sketch of the fuzzification and rule steps follows the list below.
• Output: the linguistic classification of the credit card transaction.
• Incoming data contains imprecise variables.
• A membership function is associated with every fuzzy variable.
• The membership function for every fuzzy variable is determined.
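A minimal pure-Python sketch of the fuzzification and rule-evaluation steps described above, assuming simple triangular and shoulder membership functions; the thresholds and the single rule are illustrative assumptions, not taken from any of the cited systems.

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b on the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def shoulder(x, a, b):
    """Right shoulder: 0 below a, rises linearly, 1 above b."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def fuzzify_amount(amount):
    """Fuzzification: degrees of membership of a transaction amount in the
    'small', 'medium' and 'large' classes (thresholds are illustrative)."""
    return {
        "small":  1.0 - shoulder(amount, 0, 300),
        "medium": triangular(amount, 100, 500, 1500),
        "large":  shoulder(amount, 1000, 3000),
    }

def transaction_suspicion(amount, usual_max_amount):
    """One illustrative rule: a 'large' amount well above the customer's
    usual maximum is suspicious; the output is the raw rule strength."""
    m = fuzzify_amount(amount)
    exceeds = shoulder(amount, usual_max_amount, 2 * usual_max_amount)
    return min(m["large"], exceeds)      # fuzzy AND via minimum

print(transaction_suspicion(amount=2500, usual_max_amount=800))
```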
C. Artificial Neural Network:
An artificial neural network can deal with complex information and has the capacity to learn by itself; a rule-based strategy, in contrast, is straightforward and easy to implement. A neural network is made of units (neurons), which are arranged in successive layers, and there are several layers in an artificial neural network. The first layer is the input layer: the units residing in this layer receive information from the outside world. The last layer is the output layer: the units in this layer produce the result of the task. The hidden layers are the middle layers between the input and output layers; a hidden layer transforms the data into something that the output layer can use.
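A minimal sketch of such a feed-forward network for the fraud/non-fraud task, written with Keras; the layer sizes, training settings and stand-in data are our own illustrative choices and are not the architecture used to obtain the 98.69% accuracy quoted in the abstract.

```python
import numpy as np
from tensorflow import keras

def build_fraud_ann(n_features):
    """Input layer -> two hidden layers -> single sigmoid output unit."""
    model = keras.Sequential([
        keras.layers.Input(shape=(n_features,)),
        keras.layers.Dense(32, activation="relu"),    # hidden layer 1
        keras.layers.Dense(16, activation="relu"),    # hidden layer 2
        keras.layers.Dense(1, activation="sigmoid"),  # output: P(fraud)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# illustrative use with random stand-in data (30 features, ~2% positives)
X = np.random.rand(1000, 30).astype("float32")
y = (np.random.rand(1000) < 0.02).astype("float32")
model = build_fraud_ann(n_features=30)
model.fit(X, y, epochs=5, batch_size=64, verbose=0)
```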
D. Hidden Markov model :
This is a statistical Markov model in which the system being modelled is assumed to be a Markov chain with hidden states. An HMM is a doubly embedded probability distribution process with a hierarchy of levels. The model rejects any incoming credit card transaction that does not meet a sufficiently high expected likelihood, which is consequently regarded as fraudulent. The Hidden Markov Model achieves high fraud coverage with a low false alarm rate. It works well for fraud identification, modelling the various steps involved in credit card transaction processing and their representation as the underlying stochastic process of an HMM.
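A hedged sketch of how an HMM could be fitted to a cardholder's sequence of transaction amounts using the hmmlearn package; the number of states, the stand-in history and the likelihood-drop decision rule are illustrative assumptions, not the parameters of the cited work.

```python
import numpy as np
from hmmlearn import hmm

# historic transaction amounts of one cardholder (stand-in values, in currency units)
history = np.array([12.0, 45.0, 30.0, 8.0, 60.0, 25.0, 40.0,
                    15.0, 55.0, 22.0, 35.0, 18.0]).reshape(-1, 1)

# fit a 3-state Gaussian HMM on the cardholder's normal behaviour
model = hmm.GaussianHMM(n_components=3, n_iter=100, random_state=0)
model.fit(history)

baseline = model.score(history)            # log-likelihood of normal behaviour

def is_suspicious(new_amount, threshold=5.0):
    """Flag the incoming transaction if appending it drops the per-observation
    log-likelihood by more than `threshold` (illustrative decision rule)."""
    seq = np.vstack([history, [[new_amount]]])
    drop = baseline / len(history) - model.score(seq) / len(seq)
    return drop > threshold

print(is_suspicious(2500.0))
```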
E. Decision tree:
A decision tree is a visual representation of the alternative choices, depending on various conditions, that lead to a decision. A decision tree begins with a root node, splits into several branches, and certain branches are connected to further nodes, and so on; the tree ends in leaf nodes. Every internal node in the decision tree represents a test, the branches connected to it represent the possible outcomes, and each leaf node carries a class label. A decision tree typically breaks a complex problem into simpler ones through this practical strategy of splitting and deciding.
F. Random Forest
Random forest is sometimes termed random decision forest and is used for building a number of decision trees for classification, regression and other tasks. The random forest algorithm is based on supervised learning, and its main advantage is that it can be used for both classification and regression. The random forest method offers greater accuracy relative to most other current schemes, and the technique is widely used. A random forest is made up of many individual decision trees that act as an ensemble. Each individual tree in the random forest produces a class prediction, and the class with the most votes becomes the prediction for the whole ensemble. Random forest predictive performance can compete with the best supervised learning algorithms. It delivers reliable estimates of the test error without incurring the cost of the repeated model training associated with cross-validation. It reduces the problem of overfitting in decision trees, and thereby reduces the variance, which in turn improves performance.
G. Logistic Regression
Logistic regression is one of the methods used to estimate a binary outcome (1/0, yes/no, true/false) from a set of independent variables. Dummy variables are used to represent binary or categorical attributes. When the outcome variable is categorical, the log of the odds is used as the dependent variable, and the model computes the probability of an event occurring by fitting the data to a logistic function. Logistic regression works best when irrelevant attributes, and attributes that are highly correlated with one another, are removed. It is extremely simple to implement and efficient to train.
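A minimal sketch of fitting a logistic function to obtain a fraud probability is shown below; the synthetic features and labels are illustrative assumptions.

```python
# Logistic regression: probability of fraud from a logistic (sigmoid) function.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 3))                 # e.g. scaled amount, distance, frequency
y = (2 * X[:, 0] - X[:, 1] > 0).astype(int)    # toy fraud label with linear log-odds

Xs = StandardScaler().fit_transform(X)
clf = LogisticRegression().fit(Xs, y)

print("coefficients:", clf.coef_.round(2))
print("P(fraud) for one transaction:", clf.predict_proba(Xs[:1])[0, 1].round(3))
```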
H. Gradient Boosting Model:
With each tree learning from and improving on the previous one, gradient boosting machines (GBMs) build an ensemble of shallow, weak trees fitted sequentially. They often provide predictive accuracy that is hard to beat, and they are versatile: different loss functions can be optimised and several hyperparameters are available for tuning, which makes the fitting procedure very flexible. Little pre-processing of the data is needed, and some implementations handle missing data automatically. After the data processing, results on the training set are recorded for each algorithm, and adjustments are made to improve prediction accuracy. Each method is finally applied to the test dataset and the results are written down; the accuracy of each algorithm is measured so that the best algorithm for the sample can be chosen.
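The sketch below shows sequential boosting of shallow trees with a small learning rate; the synthetic data and hyperparameter values are illustrative assumptions.

```python
# Gradient boosting: shallow trees fitted sequentially, each correcting the last.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(3000, 5))
y = (X[:, 0] * X[:, 1] > 0.8).astype(int)       # toy non-linear fraud label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=4)

gbm = GradientBoostingClassifier(
    n_estimators=300,      # number of sequential trees
    learning_rate=0.05,    # contribution of each tree
    max_depth=3,           # keep the individual trees shallow/weak
    random_state=4,
)
gbm.fit(X_train, y_train)
print("held-out accuracy:", round(gbm.score(X_test, y_test), 3))
```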
Figure: Proposed Model
The first step is to read the dataset. Once it is read, the data is pre-processed so that null values and duplicate records are removed. The cleaned data is then split into training and test sets. Next, any of the algorithms described earlier is selected and applied to the training data. The output is predicted on the test dataset and analysed for accuracy and performance, from which the final prediction quality is reported. A sketch of this pipeline is given below.
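The following end-to-end sketch mirrors the proposed model: read, clean, split, train, evaluate. The file name "creditcard_transactions.csv" and the "Class" label column are hypothetical placeholders, not the dataset used in this paper, and the random forest stands in for any of the algorithms above.

```python
# End-to-end sketch of the proposed model: read -> clean -> split -> train -> evaluate.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("creditcard_transactions.csv")   # hypothetical dataset file
df = df.dropna().drop_duplicates()                 # remove null and duplicate rows

X = df.drop(columns=["Class"])                     # "Class": 1 = fraud, 0 = legitimate
y = df["Class"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)  # any algorithm above
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
print(classification_report(y_test, y_pred, digits=3))
```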
V. CONCLUSION
It goes without question that credit card fraud is an act of gross dishonesty. From a legal standpoint, it can be claimed that all suspicious transactions will be identified by the banks and credit card firms. However, the amateur fraudster is unlikely to operate on the scale of the skilled fraudster, so the cost of detecting them may be comparatively small for the bank.
A great deal of work is being done to detect illegal activity, and a variety of methods have been proposed to address it. We have brought together various approaches for the identification and analysis of fraudulent transactions; any of these techniques, or a variation of them, may be used to identify fraudulent transactions. Nowadays, credit card abuse is seen everywhere, and creating an efficient and easy-to-operate credit card risk control program is an important task for banks if they are to raise the level of risk management for merchants in an automated and effective way. One aim of this analysis is to establish the user model that best recognises cases of fraud; there are several ways in which credit card fraud can be identified.
Tackling Drug Resistance in EGFR Exon 20 Insertion Mutant Lung Cancer
Abstract Insertion mutations in exon 20 (Ex20ins) of the epidermal growth factor receptor (EGFR) gene are the largest class of EGFR mutations in non-small cell lung cancer (NSCLC) for which there are currently no approved targeted therapies. NSCLC patients with these mutations do not respond to clinically approved EGFR tyrosine kinase inhibitors (TKIs) and have poor outcomes. A number of early phase clinical trials are currently underway to evaluate the efficacy of a new generation of TKIs that are capable of binding to and blocking Ex20ins. Although these agents have shown some clinical activity, patient responses have been restricted by dose-limiting toxicity or rapid acquisition of resistance after a short response. Here we review the current understanding of the mechanisms of resistance to these compounds, which include on-target EGFR secondary mutations, compensatory bypass pathway activation and acquisition of an EMT phenotype. Taking lessons from conventional EGFR inhibitor therapy in NSCLC, we also consider other potential sources of resistance including the presence of drug-tolerant persister cells. We will discuss therapeutic strategies which have the potential to overcome different forms of drug resistance. We conclude by evaluating recent technological developments in drug discovery such as PROTACs as a means to better tackle TKI resistance in NSCLC harbouring Ex20ins mutations.
Introduction
Lung cancer accounts for around 13% of all cancer diagnoses and is one of the leading causes of cancer mortality. 1 Non-small cell lung cancer (NSCLC) comprises the vast majority of lung cancer cases (~85%) 2 and activating mutations in the epidermal growth factor receptor (EGFR) gene are the second most prevalent oncogenic driver present in ~15-20% of NSCLC patients. 3,4 There are a wide array of different EGFR mutations including the two most frequent mutations, L858R and Exon 19 deletions (Ex19del) which are often referred to as classical or sensitizing EGFR mutations. The third most common class of EGFR mutations are exon 20 insertions (Ex20ins) which account for ~4-10% of all EGFR mutations in NSCLC. [5][6][7] EGFR Ex20ins are a class of mutations which are heterogeneous both in terms of size and location within the EGFR gene. They can be grouped together as insertions or duplications of 1-7 amino acids found between the α-C helix and following loop (762-774 amino acid sequence) of EGFR. [5][6][7][8] The most frequently identified EGFR Ex20ins variants are V769_D770ins and D770_N771ins, which together account for half of all NSCLC cases that harbour Ex20ins. 8 Activating Ex20ins have also been observed in the human epidermal growth factor receptor 2 (HER2) gene, another member of the EGFR family of receptor tyrosine kinases (RTK). Although HER2 mutations are present in only ~2% of NSCLC patients, Ex20ins are the most common HER2 mutation in lung cancer and occur between the α-C helix and following loop (767-783 amino acid) of the protein in a similar fashion to EGFR. 9 Beyond NSCLC, EGFR Ex20ins have recently been described in 68% of sinonasal squamous cell carcinomas, 10 a rare form of head and neck cancer, and both EGFR Ex20ins and HER2 Ex20ins were found in 18% and 3.6% of urothelial cancers, respectively. 11 These data suggest that development of targeted therapies against Ex20ins may have therapeutic implications for other cancer types.
In lung cancer tumors with EGFR or HER2 mutations, blockade of EGFR or HER2 activity with targeted inhibitors can trigger rapid apoptosis in a manner consistent with the "oncogene addiction" model, in which cells are dependent on persistent kinase signalling for survival. 12 As a kinase which is readily druggable with selective small molecule inhibitors, EGFR presents an attractive therapeutic target, and the success of EGFR inhibitors in NSCLC has paved the way for realising the potential of targeted therapy in oncology. However, EGFR Ex20ins represent a clinical unmet need as they are associated with de novo resistance to clinically approved EGFR inhibitors, including the competitive, reversible first-generation tyrosine kinase inhibitors (TKIs) (erlotinib and gefitinib) and the irreversible second-generation (afatinib) and thirdgeneration inhibitors (osimertinib). 6,9,13 One of the main challenges of targeting EGFR Ex20ins is that unlike classical EGFR mutations, Ex20ins mutations can activate EGFR without diminishing ATP affinity versus the wildtype kinase, 14 a feature which negates the advantage of ATP-competitive inhibitors to selectively target mutant over wild-type EGFR. Moreover, 3D modelling suggests that EGFR Ex20ins possess a rigid C-helix conformation that creates a compact drug binding site, further blocking drug accessibility. 9 Together, these features create an extremely narrow therapeutic window that prohibits clinically approved EGFR inhibitors from reaching therapeutic doses that can selectively target EGFR Ex20Ins mutants over wild-type EGFR without significant toxicity in patients. There is however one exception, the insertion mutant A763_Y764FQEA has a high affinity for firstgeneration EGFR inhibitors and there are multiple case studies that report responses to erlotinib in patients with this specific mutation. 13,15 Beyond this exception, EGFR inhibitors are not currently used to treat EGFR Ex20Ins NSCLC patients. Instead, although the survival benefit is minimal, the current standard of care for the majority of EGFR Ex20ins patients remains cytotoxic chemotherapy comprising a platinum based agent such as cisplatin or carboplatin combined with a taxane or pemetrexed. [16][17][18] EGFR inhibitors with the capacity to bind to and inactivate the compact ATP-binding site of Ex20ins (Ex20ins TKI; Table 1) include the covalent, irreversible EGFR inhibitors poziotinib (formerly HM781-36B), mobocertinib (TAK-788), and TAS6417 (CLN-081). 9,19,20 Therapeutics which target the Ex20ins receptor but do not block the ATPbinding site include the EGFR and the hepatocyte growth factor receptor (HGFR or MET) dual targeting antibody amivantamab and the heat shock protein 90 (Hsp90) inhibitor luminespib (NVP-AUY922) ( Figure 1). 21,22 Pre-clinical studies and several ongoing clinical trials are currently evaluating these experimental therapeutics in NSCLC patients with EGFR and HER2 Ex20ins mutations. [23][24][25] However, the limited clinical efficacy of these drugs reported to date highlights the challenges associated with Ex20ins mutant selectivity and drug resistance. In this review, we will outline the currently known resistance mechanisms identified for investigational agents that target Ex20ins and also describe candidate mechanisms based on the extensive clinical experience with first and third generation EGFR inhibitors in the context of classical EGFR mutations.
Clinical Trial Evaluation of Ex20ins Targeting Agents
Due to the lack of efficacy of approved EGFR inhibitors in EGFR Ex20ins NSCLC, targeted therapy is not normally considered and the standard of care for this subset of patients is chemotherapy. 26 Retrospective analysis of 165 EGFR Ex20ins NSCLC patients found a significantly longer median progression-free survival (PFS) for patients treated with platinum-based chemotherapy (6.4 months) compared with all approved EGFR inhibitors (2.9 months). 18
Preliminary data from an initial phase II trial of poziotinib demonstrated a promising 58% objective response rate (ORR) after 8 weeks of treatment. 25 There was no restriction in the number or type of prior systemic therapies received, and responses were observed in 8 of 13 patients (62%) previously treated with an alternative TKI, indicating the potential role of poziotinib in heavily pretreated patients. However, poziotinib is also a potent inhibitor of wild-type EGFR, and this trial reported that 60% of patients experienced grade 3 or greater adverse events, most commonly rash and diarrhoea. Furthermore, 45% of patients required a dose reduction from the starting dose of 16 mg poziotinib daily to 12 mg daily, and 17.5% of patients required a dose reduction to 8 mg daily.
Poziotinib has also been investigated in the phase II ZENITH20 trial (NCT03318939), an open-label, multicohort, multi-centre study which includes a cohort of pretreated patients with a proven EGFR or HER2 Ex20ins mutation. 27 This cohort of 115 patients had a median of 2 prior lines of therapy, and treatment with poziotinib 16 mg once daily conferred an ORR of 14.8% falling below the pre-specified primary endpoint of an ORR of 17%. 65% of patients had an observed reduction in tumor size with a disease control rate of 68.7% and a median PFS of 4.2 months. Toxicity remained a concern with 63% of patients reporting grade 3-4 treatment related adverse events. As a result, 68% of patients required dose reductions to subtherapeutic doses; 16% requiring a one-step reduction to 14 mg, 30% requiring a 2-step reduction to 12 mg, 22% requiring a dose of 10 mg or less and 10% of the study population permanently discontinued treatment. Adverse events were consistent with those previously reported for irreversible second-generation EGFR TKIs, most commonly diarrhoea and skin rash. This large multi-centre trial is ongoing with a cohort of treatment-naïve Ex20ins NSCLC patients and a split daily dosing regimen of poziotinib to determine if this regimen reduces incidence of adverse events and the requirement for dose reductions.
Mobocertinib is another covalent, irreversible inhibitor that selectively targets EGFR and HER2 Ex20ins. 28 It is being assessed in the ongoing phase I/II EXCLAIM trial (NCT02716116) to determine the safety of administering mobocertinib as a single agent or in combination with pemetrexed or carboplatin. Preliminary results have been presented, with 28 patients with locally advanced or metastatic NSCLC harbouring EGFR or HER2 Ex20ins treated with 160 mg mobocertinib once daily included for analysis. 28 Of these 28 patients, 26 were evaluable for treatment response with 14 having a partial response,
giving an ORR of 53.8%, and 23 (88.5%) achieved disease control. In terms of toxicity, 21.7% of patients required a dose reduction due to treatment related adverse events with 10.9% discontinuing treatment as a result. The most common side effects included diarrhoea, rash and nausea. Based on this data, the FDA granted mobocertinib breakthrough designation status, however it remains to be determined whether toxicity will be present as an issue moving forwards into its phase III trial, EXCLAIM-2, which is now enrolling treatment-naïve NSCLC patients with EGFR Ex20ins (NCT04129502). 29 TAS6417 is a covalent, irreversible EGFR inhibitor specifically designed to target the ATP binding site of the EGFR Ex20ins kinase domain. 30 Promising pre-clinical work suggests that TAS6417 has a wide therapeutic window to target EGFR Ex20ins mutants over wild-type EGFR in cell line models. 20 Clinical data for TAS6417 has yet to be reported, however a phase 1/2a clinical trial (NCT04036682) is ongoing to establish the maximum tolerated dose for NSCLC patients with EGFR Ex20ins.
An EGFR and MET-targeted bispecific antibody, amivantamab, has shown promising efficacy against EGFR Ex20ins NSCLC in engineered mouse models with a reduction in tumor volume, as well as a reduction in total and phospho EGFR and MET and the inhibition of the downstream signaling pathways protein kinase B (AKT) and extracellular signal-regulated kinase (ERK). Amivantamab has also shown superior efficacy to poziotinib in tackling EGFR Ex20ins NSCLC with lower skin toxicity and loss of body weight in mice. 31 Promising clinical activity has been observed in the phase 1 first-in-human study CHRYSALIS (NCT02609776) involving 50 NSCLC patients with 13 distinct EGFR Ex20ins mutations, of which 39 patients were evaluable for response. After a median follow-up of 4 months, ORR for amivantamab was 36% across the 39 patients, with 8.3 months median PFS. 21 The safety profile for amivantamab was manageable, with 36% of patients experiencing grade >3 adverse events, of which 6% were treatment related. Based on this data, the FDA granted breakthrough designation status for amivantamab in March 2020 to accelerate the clinical investigation of this antibody in EGFR Ex20ins NSCLC. A phase 3 clinical trial, the PAPILLON study, is currently underway to investigate the potential of a combination of amivantamab with carboplatin-pemetrexed chemotherapy compared to chemotherapy alone in NSCLC patients with EGFR Ex20ins (NCT04538664). 32
Figure 1 Therapeutic approaches to target EGFR Ex20ins NSCLC in clinical trials. Several approaches with distinct mechanisms are being assessed in clinical trials to target EGFR Ex20ins NSCLC, which are refractory to current clinically approved EGFR inhibitors. Small molecule tyrosine kinase inhibitors with the capacity to target the EGFR Ex20ins (Ex20ins TKI) can inhibit kinase catalytic activity. The bispecific EGFR-MET antibody amivantamab binds to both receptor tyrosine kinases which can result in receptor internalisation and downmodulation of oncogene expression on the cell surface. The Hsp90 inhibitor luminespib can inhibit the Hsp90 chaperone system which is co-opted by mutant EGFR Ex20ins to prevent ubiquitin-mediated protein degradation.
The Hsp90 inhibitor luminespib is generally well-tolerated, though reversible low-grade ocular toxicity is common. 22 In a phase II study (NCT01124864) involving patients with advanced NSCLC with several molecularly defined subtypes, luminespib showed an ORR of ~17% among EGFR-mutant NSCLC. Notably, one patient with an Ex20ins mutation responded to luminespib. 33 Based on further pre-clinical evidence that Hsp90 inhibition is effective in models that harbour EGFR Ex20ins, 34 a phase II clinical trial for luminespib in NSCLC patients that specifically harbour EGFR Ex20ins (NCT01854034) found a 17% ORR in 29 patients and a median PFS of 2.9 months. 22 The study met its primary endpoint for ORR, indicating that Hsp90 inhibitors could potentially be used as a therapeutic strategy in patients with EGFR Ex20ins. It should be noted, however, that the high degree of luminespib-related toxicities reported in clinical trials reflects the general challenge of using Hsp90 inhibitors in patients and may ultimately be a limiting factor for further clinical development.
The clinical data to date highlight the challenges of targeting EGFR Ex20ins without significant toxicity due to wildtype EGFR inhibition. The ORR for these new agents remains low compared to approved EGFR inhibitors in the context of NSCLC bearing L858R and Ex19del (ORR >60%). [35][36][37] The design of EGFR inhibitors with a greater therapeutic index may result in higher response rates and better drug tolerability. However, an outstanding question is whether intrinsic and acquired drug resistance will be a major limiting factor to the clinical efficacy of these agents that target EGFR Ex20ins. The short median PFS of 2.9 months for luminespib, 4.2 months reported for poziotinib 27 and 7.3 months for mobocertinib 23 contrasts with 10.1 months for osimertinib in L858R and Ex19del NSCLC in the second-line setting. 38 While insufficient dosing due to toxicity may contribute to shortterm responses in patients, early data from the use of poziotinib in EGFR Ex20ins patients suggests rapid acquisition of drug resistance, and the specific mechanisms of resistance have some overlap with those observed to arise in classical mutant EGFR NSCLC treated with approved EGFR inhibitors. 39 Therefore, it is important to consider and anticipate the potential routes of drug resistance in order to achieve durable responses in patients with Ex20ins mutations.
Known Mechanisms of Ex20ins TKI Resistance
Despite having only recently been evaluated in clinical trials, clinical mechanisms of resistance have already been reported for some of the aforementioned Ex20ins TKIs. 39 Here we will outline the currently known resistance mechanisms to poziotinib and other Ex20ins TKIs focusing on on-target mechanisms and compensatory bypass mechanisms of resistance described in the literature ( Figure 2). An overview of genomic alterations, mutations, amplifications and copy number losses which are present at baseline or occur at relapse post-TKI treatment in Ex20ins patients are summarized in Table 2. [39][40][41] We also outline potential therapeutics and druggable targets, which could be utilised to overcome TKI resistance.
On-Target Mechanisms of Resistance
A well-established mechanism of resistance to clinically approved EGFR inhibitors is the acquisition of on-target secondary mutations in EGFR, including the T790M gatekeeper and C797S point mutation. T790M is located in the ATP binding pocket and confers resistance to competitive first generation inhibitors by sterically hindering drug binding and increasing the affinity of mutant EGFR for ATP, thus decreasing the affinity and binding of reversible TKIs. 42 This mutation can be effectively overcome with the irreversible inhibitor osimertinib, which covalently binds to the C797 residue of EGFR and shows greater selectivity for EGFR T790M mutations over wild-type EGFR. 43,44 However 7% of NSCLC patients with classical EGFR mutations that are treated with osimertinib as a first line therapy develop the C797S mutation, the second most common mechanism of acquired resistance observed after MET amplification (15%). 45 The C797S mutation renders osimertinib ineffective by preventing the formation of the key covalent bond between this irreversible inhibitor and the thiol group (-SH) of cysteine in the EGFR 797 residue. [46][47][48] This second critical point mutation is a common mechanism of resistance to irreversible inhibitors which prevents permanent inactivation of the kinase.
On-target secondary mutations in EGFR have also been observed to confer resistance to Ex20ins targeting TKIs. A recent study from Elamin et al has shown that resistance to poziotinib can occur through the acquisition of T790M in pre-clinical models and EGFR Ex20ins NSCLC patients. 39 The study found that co-expression of an Ex20ins (S768supSVD) and T790M in engineered Ba/ F3 cells caused resistance to poziotinib. This study also analysed blood samples and biopsies collected at baseline and upon disease progression from 50 NSCLC Ex20ins patients enrolled in a poziotinib phase II clinical trial (NCT03066206). Patient samples were analysed using next generation sequencing. Of the 20 patients who went on to have disease progression, a number of on-target secondary mutations were observed. These mutations included T790M (n=2), V774A (n=1) and D770A (n=1). 39 To determine whether poziotinib binds to EGFR via the C797 residue, Robichaux et al generated Ba/F3 cells engineered with the EGFR C797S point mutation. 9 At the time of this study, the C797S mutation had only been observed in response to osimertinib in patients with classical EGFR mutations and T790M. The addition of C797S to classical EGFR mutants co-expressing T790M was found to confer poziotinib resistance (the half maximal inhibitory concentration IC 50 >10 μM), indicating poziotinib does bind to the C797 residue. 9 Mobocertinib has also been shown to inhibit EGFR and HER2 via covalent modification of EGFR C797 and its equivalent residue C805 in HER2, indicating point mutations in these amino acids may confer resistance to multiple Ex20ins TKIs. 49 Koga et al demonstrated that the C805S secondary resistance mutation can confer poziotinib resistance in a Ba/F3 model with HER2 Ex20ins mutations (A775_G776insYVMA and G776delinsVC). Using N-ethyl-N-nitrosourea (ENU), a mutagen known to cause
random mutations, poziotinib resistant clones were generated by growing ENU mutagenized Ba/F3 HER2 Ex20ins cells in the presence of poziotinib. All clones were sequenced to identify on-target secondary HER2 mutations and notably, C805S accounted for resistance in 31% of the resistant clones and was the only on-target secondary mutation identified. The acquisition of the C805S mutation was found to confer a 100-fold increased resistance to poziotinib. The authors demonstrated that luminespib was able to reduce the viability of HER2 Ex20ins mutant expressing cells regardless of the presence of the C805S on-target mutation. 50 The mechanism through which the Hsp90 inhibitor overcomes this resistance mechanism was not explored by Koga et al.
Compensatory Bypass Pathways
Co-occurring mutations and gene amplifications in alternative oncogenic drivers are also putative resistance mechanisms in cancers with EGFR activating mutations. Elamin et al identified that co-occurring mutations in the Kirsten rat sarcoma 2 viral oncogene homolog (KRAS) and the Erb-B2 receptor tyrosine kinase 4 (ErbB4) were observed in genetically engineered mouse models (GEMM) harbouring tumors expressing EGFR Ex20ins (D770insNPG), following treatment with poziotinib. 39 Additionally, activation of the mitogen-activated protein kinase (MAPK), the mitogen-activated protein kinase kinase (MEK), AKT and ERK was elevated in GEMM tumors which progressed on poziotinib treatment compared to sensitive tumors, suggesting that acquired resistance to poziotinib is associated with the reactivation of MAPK and phosphatidylinositol 3-kinase (PI3K) pathways. 39 The same study utilized Ex20ins NSCLC patient biopsies from a poziotinib phase II clinical trial which were taken prior to treatment and upon disease progression. One of the 20 patients with Ex20ins NSCLC had an E545K mutation in phosphatidylinositol-4,5-bisphosphate 3-kinase catalytic subunit alpha (PIK3CA) following progression on poziotinib. Mitogen-activated protein kinase 2 (MAPK2) S94L mutation (1 patient), MET amplification (1 patient) and cyclin dependent kinase 6 (CDK6) amplification (2 patients) were also identified. 39 Notably, MET and CDK6 amplifications have been previously described as mechanisms of acquired resistance to approved EGFR TKIs and are potential druggable targets to overcome resistance to TKIs that target Ex20ins. 53,54
Putative Mechanisms of Ex20ins TKI Resistance
Ex20ins TKIs are still undergoing the initial stages of clinical development and our understanding of their resistance mechanisms is limited. However, based on recent studies it appears that these compounds may share overlapping acquired resistance mechanisms to first-and thirdgeneration EGFR inhibitors, including the acquisition of the point mutations T790M and C797S described above. These findings, together with the extensive knowledge gleaned from over a decade of clinical use of approved EGFR TKIs in NSCLC bearing classical mutations allows us to make informed predictions about additional potential acquired resistance mechanisms to TKIs that target Ex20ins. This is key to anticipating and forecasting effective therapeutic strategies to overcome drug resistance in this patient group. Here we will discuss two mechanisms that may play a role in the acquisition of resistance in the context of EGFR Ex20ins based on pre-clinical data from cellular models of classical EGFR mutations treated with first and third generation EGFR inhibitors, namely epithelial to mesenchymal transition (EMT) and drug tolerance. The cell line models and experimental design employed in these studies, and their EGFR mutational status, genomic alterations associated with resistance and drug dosing regimens are summarized in Table 3.
Epithelial to Mesenchymal Transition
EMT has been shown to confer resistance to clinically approved EGFR inhibitors in classical mutant EGFR NSCLC which lack EGFR on-target mutations or compensatory bypass mechanisms (Table 3). 55 The acquisition of an EMT phenotype in response to gefitinib treatment has been observed both in vitro and in patients with a decrease in expression of the epithelial marker E-cadherin. 56 EMT protects against EGFR-mediated TKI cell death through increased expression of the mesenchymal transcription factor zinc finger E-box binding homeobox 1 (ZEB1) which in turn inhibits the expression of the Bcl-2-like protein 11 (BIM). BIM is a pro-apoptotic protein required for EGFR TKI-induced apoptosis, therefore lower levels of BIM in cells that undergo EMT protect against EGFR TKI induced cell death. 57 ZEB1 has also been linked to increased expression of the fibroblast growth factor receptor 1 (FGFR1) which is
associated with resistance to EGFR inhibitors and often concomitant with EMT. 58 Activation of an autocrine fibroblast growth factor 2 (FGF2)-FGFR1 growth loop drives resistance to EGFR TKIs through FGFR1-mediated activation of PI3K/AKT and MEK/ERK pathways. 59 Moreover, FGFR1 inhibition has been shown to restore sensitivity to EGFR TKIs in acquired resistant cell models with an EMT phenotype. 59,60 EMT has also been indirectly identified as a potential mechanism by which NSCLC cells can become resistant to poziotinib. NSCLC cell lines with classical
EGFR mutations (HCC4006 (Ex19del) and HCC827 (Ex19del)) which underwent EMT in response to escalating concentrations of erlotinib were also resistant to poziotinib. 9 Further preclinical and translational studies are required to determine if EMT is a bona fide mechanism of resistance in EGFR Ex20ins patients.
Drug Tolerance
It is now well established that the emergence of minimal residual disease can be attributed to a subpopulation of drug tolerant persister (DTP) cells. 61,62 DTP cells are defined as the small subpopulation of cells that remain viable in the presence of anti-cancer treatments, despite not harboring classic genetic mutations commonly associated with drug resistance. They undergo a drug tolerant reversible state which has been observed in numerous cancer models in vitro and in vivo in response to drug pressure, suggesting a general phenomenon. [63][64][65][66][67][68] Despite no evidence of on-target resistance mutations, drug sensitivity can be >100-fold less in DTP cells when compared to the bulk tumor cells. Although the properties of DTP cells have not been fully characterized, it has been demonstrated that these cells harbor specific epigenetic modifications and a reversible drug tolerant slow-growing phenotype. 62,69 Experiments in several cell line models suggest that the ability of these DTP cells to maintain viability following drug exposure to both targeted therapy and chemotherapy involves a transient chromatin state dependent on insulin-like growth factor 1 receptor (IGF-1R) signaling, histone demethylase KDM5A and KDM6B activity and decreased histone acetylation. 62,69 This reversible DTP state could also account for the re-sensitization of patient tumors to TKIs after the interruption of treatment for an extended period of time (drug holiday). For example, some NSCLC patients with classical EGFR mutations who respond well to treatment with gefitinib and later experience therapy failure, showed a second response to the same EGFR TKI after a drug holiday. 70,71 DTP cells in NSCLC have been studied in vitro using the PC9 cell line (Ex19del mutation). Hata et al showed that acquired resistance to gefitinib can occur as a result of either pre-existing EGFR T790M containing cellular subpopulations or from initially T790M-negative DTP cells. 72 These DTP cells provide a reservoir of cells that can then acquire de novo T790M or other resistance-associated mutations after prolonged exposure to gefitinib. The cells also showed diminished apoptosis after exposure to osimertinib, indicating they may be less responsive to third-generation EGFR inhibitors. 72 A second study explored the evolution of PC9 DTP cells derived DTP cells from a single clonal population after prolonged exposure to erlotinib. 73 Different DTP cells derived from the same clonal population were found to acquire a diverse set of resistance mechanisms, including those most commonly observed in NSCLC patients in the clinic such as EGFR T790M mutation and MET amplification. These data suggest that different genetic and epigenetic drug resistance mechanisms can arise independently within the same initial cell population passing through the persister bottleneck, thereby complicating strategies to overcome resistance. 73 Given that DTP cells have been observed in response to clinically approved EGFR inhibitors, it is tempting to speculate that a similar phenomenon may be seen in EGFR Ex20ins tumors. Upon treatment with Ex20ins-targeted TKIs, a small subpopulation of clones may enter a resistant slow-growing state facilitating escape from drug pressure. Multiple de novo resistance mechanisms can then arise in these DTP clones which will allow them to revert to a fast-growing state, eventually becoming the dominant population in a relapsed tumor. 
73 Understanding the biological mechanisms driving the evolution of DTP cells will undoubtedly help in the design of more effective upfront therapeutic strategies for EGFR Ex20ins patients.
Future Perspectives
Given the dose limiting toxicities in the current generation of EGFR Ex20ins TKIs, there is an urgent need for new compounds with a wider therapeutic index which are both effective and safe for use in Ex20ins patients. Furthermore, it is also essential to identify innovative approaches to overcome key resistance mechanisms anticipated with the current generation of Ex20ins TKIs. In this section we describe new methods to discover nextgeneration compounds which may be more effective in the treatment of Ex20ins patients including proteolysis targeting chimeras (PROTACs) and the mammalian membrane two-hybrid drug screen (MaMTH-DS) methodology. We also outline recent advances in monoclonal antibodies (mAb) combinations targeting on-target EGFR resistance mutants and explore new therapeutic opportunities in overcoming DTP tumor cells in patients.
PROTACs
PROTACs are valuable tools for the discovery of EGFR Ex20ins targeting agents. PROTACs consist of
a bifunctional molecule containing a target protein binding ligand and an E3 ligase ligand which are bridged by a crosslinker. After the formation of a ternary complex composed of the protein target, PROTAC and E3 ligase, the ubiquitin proteasome system is recruited to degrade the protein of interest (in this case a transmembrane RTK). After degradation, the bifunctional PROTAC molecule is released and can enter the next degradation cycle, allowing a sustained reduction in receptor signaling and providing potential for PROTAC activity at lower concentrations than comparable TKIs. [74][75][76] Promising PROTACs have been reported for mutant EGFR in various cellular models. 77,78 Burslem et al described the development of a PROTAC for RTKs based on the reversible EGFR/HER2 inhibitor lapatinib by using a ligand that binds to the E3 ligase, VHL (von Hippel-Lindau). 77 Interestingly, this lapatinib-based compound was also shown to be capable of degrading EGFR Ex20ins protein (ASV duplication) in engineered HeLa cells. 77 By virtue of the ability of PROTACs to degrade EGFR rather than just inhibit its kinase activity, the authors showed that PROTACs offered several advantages over conventional TKIs. This included marked improvement in potency in preclinical models as well as sustained inactivation of downstream effector signaling compared to kinase inhibition by TKIs. These effects minimize compensatory pathway activation and could circumvent kinome rewiring which is a frequently observed resistance mechanism in response to TKIs (Figure 3). 77 However, phase I clinical trials of PROTACs have yet to report on the safety profile of these compounds. PROTACs have the potential to cause adverse clinical effects due to prolonged on-target and off-target protein degradation. [79][80][81][82] For example, proteins that are part of the same complex or in close proximity with the target protein can be degraded even if not directly bound to a PROTAC. 83 In addition, disruption of cellular proteostasis can occur, through either competition with endogenous E3 binding substrates or accumulation of ubiquitinated proteins which can saturate the proteolysis machinery. 84 Finally, some proteins are refractory to PROTAC-mediated degradation, which may limit the suitability of this therapeutic strategy for targeting certain oncogenes. 85,86
MaMTH-DS
MaMTH-DS is a split-ubiquitin-based technology which has recently been used to identify new EGFR targeting agents. It involves a high-throughput screening methodology that is based on targeting functional RTK protein-protein interactions. 87 Unlike conventional in vitro kinase methods, this drug discovery platform utilizes full-length integral membrane proteins in their natural membrane context in live mammalian cells. In this assay, cells are transfected to stably express a bait RTK which is fused to the C-terminus of ubiquitin and an artificial transcription factor. In addition, the Src homology 2 domain-containing adaptor protein 1 (Shc1) is fused to the N-terminus of ubiquitin and expressed as the prey due to its ability to interact with a wide variety of phosphorylated RTKs. Upon activation of the bait RTK, proteolytic cleavage and release of the transcription factor leads to the activation of a luciferase reporter system. This methodology provides a useful strategy to identify inhibitors that block RTK phosphorylation, resulting in a reduction in the luciferase readout. As proof of principle, Saraon et al used this platform to screen an EGFR inhibitor-resistant Exon19del/T790M/C797S triple mutant NSCLC model against a library of 2960 small molecules. 88 They identified 4 new compounds that inhibit this triple mutant which is resistant to irreversible EGFR inhibitors including poziotinib. Importantly, two of these compounds, AZD7762 and EMI1, would not have been identified using in vitro kinase assays. For instance, the specificity of the checkpoint kinase (Chk) inhibitor AZD7762 for mutant EGFR depends on additional factors only present in the live-cell format, while the mechanism of action of the small molecule EMI1 is reliant on direct inhibition of microtubule polymerization, which indirectly affects mutant but not wild-type EGFR signaling and trafficking. This work demonstrates the utility and potential of MaMTH-DS as a screening platform that could be used to identify new candidate drugs for Ex20ins and associated on-target resistance mutations.
Therapeutic Monoclonal Antibodies
MAbs represent an important component in the arsenal of targeted cancer therapy for NSCLC treatment. MAbs that bind to the extracellular domain of EGFR are not affected by the acquisition of common on-target resistance mechanisms (eg T790M or C797S) that are found in the intracellular domain of the receptor. Cetuximab is a mAb that binds to the extracellular domain of EGFR, preventing ligand binding and blocking receptor activation. 89 Experimental strategies able to overcome EGFR T790M or C797S resistant mutants have exploited the combinatorial use of MAbs, such as cetuximab, trastuzumab (anti-HER2 mAb) and mAb33 (anti-HER3 mAb). 90 In particular, it has been shown that a triple combination of mAbs (3xmAbs) that simultaneously target EGFR, HER2 and HER3 inhibited tumor growth with low toxicity in a xenograft NSCLC model with classical EGFR mutations in combination with T790M. 91 In tumors which had acquired T790M, the 3xmAbs combination was shown to inhibit tumor growth in a similar fashion to osimertinib, but through a mechanism of cell senescence rather than apoptosis. This mAb combination overcame resistance to osimertinib in tumors that either expressed C797S or upregulated HER2 and HER3 as compensatory bypass mechanisms (Figure 3). 91 In another study from the same group, the combination of the TKI osimertinib and mAbs cetuximab and trastuzumab had a long-lasting effect in preventing onset of resistance to osimertinib by suppressing signaling from compensatory RTKs, such as HER2, HER3, MET and AXL. 92 These findings suggest that the combinatorial mAbs strategy may offer a feasible pharmacological option for treating Ex20ins lung cancer patients that develop both on-target and bypass resistance mechanisms to TKIs such as poziotinib. Limited clinical evidence for the efficacy of afatinib in combination with cetuximab has been reported in patients with EGFR Ex20ins, 93 however the 3xmAbs combination has yet to be assessed in this context.
Therapeutic Targeting of DTP Cells
In order to fully tackle the challenge of drug resistance and tumor relapse, it will be necessary to identify ways to effectively overcome DTP cells and residual disease following EGFR TKI treatment. The DTP cell state is reliant upon specific signaling pathways and epigenetic alterations, which present a therapeutic opportunity for drugs that can target these dependencies. A study from Rusan et al showed that the DTP cellular state is transcriptionally addicted to specific genes and pathways in a variety of cancer models. 94 In the PC9 cell line, the authors found that DTP cells arising from erlotinib treatment could be targeted by combining erlotinib with THZ1, which is a CDK7/12 inhibitor that blocks the transcriptional response in DTP cells (Figure 4). A genome-wide CRISPR/Cas9 screen performed in PC9 treated with erlotinib in combination with THZ1 demonstrated that suppression of genes associated with transcriptional complexes (such as EP300 or CREBBP) enhanced the THZ1/erlotinib therapeutic synergy. In addition, a new drug tolerant pathway associated with the dysregulation of UFMylation protein response and endoplasmic reticulum (ER) stress was characterized using this approach. 95 Components of the post-translational UFMylation pathway have only recently been characterized. They play an
important role in cell survival as regulators of ER homeostasis and are linked to several types of cancer including lung cancer. 96,97 Suppressing expression of genes involved in the UFMylation pathway protects DTP cells against THZ1 and erlotinib combination treatment by promoting a protective unfolded protein response (UPR) associated with the stimulator of interferon response CGAMP interactor 1 (STING) upregulation. This triggers pro-tumorigenic inflammatory signaling and dependency on the apoptotic repressor B-cell lymphoma-extra large (Bcl-xL). 94 The dysregulation of the UFMylation pathway and ER stress response is a key TKI drug tolerance pathway that activates survival signaling which could be therapeutically exploited, however further work is required to identify whether similar DTP cellular pathways are present in Ex20ins tumors.
Conclusion
The current generation of TKIs capable of targeting Ex20ins has shown preclinical promise in the treatment of this rare group of NSCLC patients. However, early clinical data find that this strategy suffers from a poor therapeutic index and inevitable primary and acquired drug resistance. Recent pre-clinical and clinical studies indicate that resistance may be acquired through the acquisition of EGFR on-target mutations or the activation of compensatory bypass pathways. 39 In some cases, resistance mechanisms that mirror what has been observed with the common classical EGFR activating mutants in response to clinically approved EGFR inhibitors are applicable to this current generation of Ex20ins TKIs. But there is still a large gap in our knowledge of the myriad ways in which these tumors evolve when subjected to drug selection. In addressing this class of mutations, there is clearly a twin challenge of not only identifying a new generation of drugs with a better therapeutic index but also developing an in-depth understanding of the spectrum of biological mechanisms of drug resistance. The advent of new drug discovery tools, such as MaMTH-DS and PROTAC technology, should facilitate the rapid identification of new therapeutics that might ultimately be useful as first-line or salvage therapy, while a better understanding of mechanisms of resistance arising from residual DTP cells may hold the key to achieving durable responses in these patients.
Figure 4 (partial caption): THZ1 treatment in combination with erlotinib suppresses the expression of UFMylation pathway components, which can trigger a protective unfolded protein response associated with tolerable levels of ER stress and cell survival. 94
Resection and primary anastomosis with or without modified blow-hole colostomy for sigmoid volvulus
AIM: To evaluate the efficacy of resection and primary anastomosis (RPA) and RPA with modified blow-hole colostomy for sigmoid volvulus. METHODS: From March 2000 to September 2007, 77 patients with acute sigmoid volvulus were treated. A total of 47 patients underwent RPA or RPA with modified blow-hole colostomy. Twenty-five patients received RPA (Group A), and the remaining 22 patients had RPA with modified blow-hole colostomy (Group B). The clinical course and postoperative complications of the two groups were compared. RESULTS: The mean hospital stay, wound infection and mortality did not differ significantly between the groups. Superficial wound infection rate was higher in group A (32% vs 9.1%). Anastomotic leakage was observed only in group A, with a rate of 6.3%. The difference was numerically impressive but was statistically not significant. CONCLUSION: RPA with modified blow-hole colostomy provides satisfactory results. It is easy to perform and may become a method of choice in patients with sigmoid volvulus. Further studies are required to further establish its role in the treatment of sigmoid volvulus.
INTRODUCTION
The epidemiology and clinical presentation of sigmoid volvulus are well established. Although the clinical manifestations of acute volvulus are often clear-cut, a diagnostic dilemma is not uncommon. Sigmoid volvulus is the third most common cause of large-bowel obstruction in the western world, after cancer and diverticular disease [1] . It accounts for 4% of all cases of large-bowel obstruction in the United States and United Kingdom [1,2] . Sigmoid volvulus is relatively more common in Eastern Europe, India and Africa, accounting for 50% of all cases of intestinal obstruction [1,2] .
The precise etiology of sigmoid volvulus remains speculative, and several etiological factors have been suggested including chronic constipation, high fiber diet, bowel habit, high altitude, and enemas containing ginger, pepper and herbal extracts [3] . Patients with sigmoid volvulus present with abdominal distention, pain, nausea, vomiting, and obstipation, while peritoneal signs are noted infrequently [4] . The colon is distended often to enormous proportions, particularly when the patient is symptomatic. Plain abdominal X-rays are often diagnostic of volvulus. Air-fluid levels are present, and a "bird's beak" deformity is often seen at the site of the torsion [5] . Sigmoid volvulus has been described since ancient times, but its treatment continues to evolve [6,7] . Several therapeutic approaches, such as resection, non-operative reduction with the help of a colonoscope, sigmoidopexy and mesosigmoidoplasty, have been employed [8] .
Resection and primary anastomosis (RPA) has emerged as the treatment of choice for sigmoid volvulus over the past two centuries [6,7] . Particularly in elderly and hemodynamically unstable patients, anastomotic leakage may occur due to co-morbid risk factors with this approach. In such situations, a blow-hole colostomy may play a protective role in avoiding anastomotic leakage.
The aim of the present study was to compare the results of RPA with or without modified blow-hole colostomy in an unprepared bowel in patients with acute sigmoid volvulus.
MATERIALS AND METHODS
From March 2000 to September 2007, 77 patients with acute sigmoid volvulus were treated in the department of general surgery, school of medicine, Inonu University. Colonoscopic derotation was attempted in 27 patients, and was successful in 19 patients. In 10 of the 19 nonoperatively reduced patients, semi-elective one-stage resection was performed; the remaining 9 patients refused surgery after non-operative decompression. Hartmann's operation was performed in 11 patients who were not in stable condition. After excluding these 30 cases, the remaining 47 patients who underwent RPA with or without a modified blow-hole colostomy were included in this study.
The diagnosis of sigmoid volvulus was made on the basis of clinical features and plain abdominal radiographs. Laparotomy was performed in all patients after active fluid resuscitation and correction of electrolyte imbalance was obtained. Ceftriaxone 1000 mg and metronidazole 500 mg were administered intravenously at the time of induction of anesthesia, and were continued every 12 h after the operation for 5 d in patients with viable bowel and for 7 d in those with gangrenous bowel.
None of the patients included in the study were treated with preoperative decompression techniques. At laparotomy, the distended bowel was decompressed by using a rectal tube, and any residual feces was milked digitally into the segment of the bowel to be resected. Although the bowel was unprepared, on-table lavage was not performed in any patient. Informed consent was obtained from each patient prior to the surgery. In 25 patients, RPA (Group A) was performed and in the remaining 22 patients, a modified blow-hole colostomy was performed with RPA (Group B). All the anastomoses were inverting and two-layered.
The clinical course and postoperative complications were documented. Wound infection was defined as spontaneous discharge of pus from the wound or a wound that requires drainage. Anastomotic leak was defined as the presence of fecal fistula or the presence of feces in the drain.
Surgical technique of modified blow-hole colostomy
In group B, a proximal stoma was performed to protect the anastomoses. A 3 cm longitudinal incision was made through the tenia libera, and into the transverse colon. An abdominal wall aperture intended for the colostomy was made in the right upper quadrant using a rectus splitting incision. The collapse of the colon allowed it to reach the incision, and facilitated performing a skinlevel colostomy. In the blow-hole colostomy technique which has been previously reported, the omentum and seromuscular layers of the colon were attached to the peritoneum and the rectus fascia with interrupted or continuous sutures [9] . In the present study, the cut edges of the colon were sutured to the skin with 3/0 vicryl without fascial or peritoneal sutures. As a result, we call it a "modified blow-hole colostomy technique" (Figure 1). In these patients, oral intake was started on postoperative day 1. On postoperative day 10, the anastomotic integrity was checked using water-soluble radiological studies, and if intact, colostomy closure was performed.
Statistical analysis
Statistical analyses were performed using SPSS for Windows version 11.0. Continuous variables were reported as mean ± SD and categorical variables as percentages. Normality of continuous variables in the groups was assessed with the Shapiro-Wilk test; the variables showed a normal distribution (P > 0.05). Therefore, the unpaired t-test was used to compare continuous variables (age and length of hospital stay) between the two groups, and Fisher's exact and Pearson χ² tests were used to compare categorical variables between the groups. P < 0.05 was considered significant.
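For readers who wish to reproduce this kind of analysis outside SPSS, the sketch below shows equivalent tests in Python with SciPy. The group values and the 2x2 table are placeholders, not the study data.

```python
# Equivalent statistical tests in Python/SciPy (placeholder data, not the study values).
from scipy import stats

# Continuous variable (e.g. hospital stay, days) in the two groups.
group_a = [9, 11, 10, 12, 8, 10, 13, 9]
group_b = [10, 9, 11, 10, 12, 9, 8, 11]

# Normality check (Shapiro-Wilk), then unpaired t-test if P > 0.05 in both groups.
w_a, p_a = stats.shapiro(group_a)
w_b, p_b = stats.shapiro(group_b)
print("Shapiro-Wilk P values:", round(p_a, 3), round(p_b, 3))
print(stats.ttest_ind(group_a, group_b, equal_var=True))

# Categorical variable (e.g. wound infection yes/no) as a 2x2 contingency table.
table = [[8, 17],   # group A: infected, not infected
         [2, 20]]   # group B: infected, not infected
print(stats.fisher_exact(table))
print(stats.chi2_contingency(table))
```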
RESULTS
Forty-seven consecutive patients (7 women and 40 men), who had undergone RPA (25 patients) or RPA with modified blow-hole colostomy (22 patients) were evaluated. There was no significant difference between the two groups in the mean age or the sex ratio. The operative procedures and postoperative outcomes are shown in Table 1. Nine (36%) patients in group A and 5 (22%) in group B had gangrenous bowel. The mean hospital stay, wound infection and mortality did not differ between the groups (Table 2). However, superficial wound infection was almost four times more common in the group A (32% vs 9.1%), and nearly two times more common in patients with a viable colon. All the infected wounds healed with conservative measures. Three patients (12%) developed anastomotic leak in group A; two had viable colon and one had gangrenous sigmoid colon. At re-laparotomy, Hartmann's procedure was performed in patients with anastomotic dehiscence. No anastomotic leak was observed in group B. The time to resumption of oral intake was postoperative day 4 in group A, and day 1 in group B, due to the presence of a protective stoma. All patients in group B, had stoma closure performed on postoperative day 10, after radiological studies were carried out. Development of leak or wound infection secondary to stoma closure was not observed in any patient.
The mortality rates were identical in the two groups. One patient died of myocardial infarction and one of sepsis resulting from anastomotic dehiscence in group A on the 1st and 4th postoperative day respectively. In group B, one patient died on the 6th postoperative day due to pulmonary embolism, and one patient died secondary to multi-system organ failure on the 16th postoperative day.
DISCUSSION
The management of sigmoid volvulus involves relief of obstruction and prevention of recurrence. Several operative procedures have been used in the emergency management of sigmoid volvulus. However, permanent cure involves resection of the sigmoid colon, with or without anastomosis [7,13,14] . Less extensive procedures are not always successful and are contraindicated if gangrene or compound volvulus is present [15] . Colonoscopic detrortion and laparotomy with detorsion and colopexy are associated with appreciable morbidity [16] . A recent study reported on the use of laparoscopic rectosigmoidectomy following colonoscopic decompression in nine patients [17] . Although further studies with larger number of patients are required, this technique appears to be a good option, but can only be applied in decompressed patients.
Traditional surgical teaching has dictated that a primary anastomosis should not be undertaken in an unprepared, obstructed bowel [18][19][20] . A number of studies on sigmoid volvulus explored the feasibility of one-stage resection using on-table lavage [13,[21][22][23][24] . The advantages of this approach include single stage procedure, no need for a colostomy, possible lower morbidity and mortality, and shorter hospital stay. The disadvantages include prolonged operative time, need for several liters of irrigation solution, and risk of contamination. However, clinical and experimental evidence supports the view that a clean bowel has an important advantage in surgery of the left colon and rectum, which are parts of the bowel containing solid feces, with a high bacterial count [19,25] . Therefore, an emergency RPA of an unprepared left colon is a controversial subject. Traditionally, obstruction of the left colon is managed by a multi-stage defunctioning colostomy and resection. However, there is a growing acceptance of one-stage primary resection and anastomosis with the use of on-table antegrade irrigation [24,[26][27][28] . However, several studies have suggested that on-table lavage may not be necessary for a safe emergency RPA of unprepared left colon [26][27][28][29] . Most experts agree that temporary proximal fecal diversion or decompression can reduce the risk of sepsis resulting from anastomotic leakage [30,31] . All patients in the two study groups, had RPA after intraoperative decompression without any on-table lavage.
Symptomatic anastomotic leak is the most important postoperative complication following emergency colorectal resection with intestinal anastomosis. De et al [26] reported 197 patients who had a single stage primary anastomosis without colonic lavage for leftsided colonic obstruction due to acute sigmoid volvulus; only 2 (1.01%) patients developed symptomatic anastomotic leak. In a similar prospective study by Raveenthiran [29] , 57 consecutive patients with acute sigmoid volvulus had emergency RPA without ontable lavage or caecostomy. The anastomotic leak was seen in 10% patients, with higher leak rate in patients with gangrenous colon. Factors such as acute anemia, shock and peri-operative whole blood transfusion are believed to be associated with major anastomotic leaks in patients with gangrenous colon. Due to these comorbid risk factors, particularly in patients presenting with gangrenous sigmoid volvulus, the addition of a modified blow-hole colostomy appears to be a promising procedure in order to avoid anastomotic leak. Early diet consumption is another advantage of this procedure and may improve the postoperative course. In the present study, anastomotic leakage occurred in 3 (6.3%) patients in group A, compared to none in group B. The difference was numerically impressive but not statistically significant. Comparison of hospital stay, mortality and wound infection did not reveal any significant difference between the two groups. The lower rate of wound infection in group B is a controversial subject. In our opinion, this is probably due to the beneficial effect of early dietary consumption. Previous studies have clearly shown the advantages of early enteral nutrition in surgical patients in reducing septic complications and the overall morbidity compared to parenteral nutrition [32,33] .
In conclusion, RPA with modified blow-hole colostomy provides satisfactory results in patients with sigmoid volvulus. This procedure is safe and effective in preventing anastomotic leaks, and may become the treatment of choice in patients with sigmoid volvulus. Further studies are required to definitively establish its role in sigmoid volvulus.
Background
Although the diagnosis of sigmoid volvulus is not difficult, the emergent surgical approach to sigmoid volvulus is a subject of much debate. There is a growing acceptance of one-stage primary resection and anastomosis of sigmoid colon. However, with this approach, anastomotic leakage may occur, particularly in elderly and hemodynamically unstable patients. Blow-hole colostomy can play a protective role to avoid anastomotic leakage. In the present study, we compared the results of resection and primary anastomosis (RPA), with or without the use of a modified blow-hole colostomy in unprepared bowel in patients with acute sigmoid volvulus.
Research frontiers
Anastomotic leakage is the most important postoperative complication following emergency colorectal resection with intestinal anastomosis. Temporary proximal fecal diversion or decompression can reduce the risk of sepsis from anastomotic leakage. The present study showed that RPA with a modified blow-hole colostomy in the unprepared bowel of patients with sigmoid volvulus is effective in preventing anastomotic leakage.
Innovations and breakthroughs
This is a new technique for use in patients with sigmoid volvulus. We believe that this technique will decrease the rate of complications such as anastomotic leakage, especially in high risk patients.
Applications
The present study has shown that RPA with modified blow-hole colostomy for sigmoid volvulus is a safe procedure, which enables successful decompression and avoids high mortality rates, particularly in elderly and hemodynamically unstable patients. Future studies will be required to verify the effectiveness and safety of this technique.
Potent Cytotoxicity of Four Cameroonian Plant Extracts on Different Cancer Cell Lines
In this study, the potential cytotoxicity of four plant extracts originating from Cameroon, Xylopia aethiopica (XA), Imperata cylindrica (IC), Echinops giganteus (EG) and Dorstenia psilurus (DP), was examined in vitro. We tested the anti-proliferative activity of the methanolic extracts of these plants using the MTT assay on seven different human cancer cell lines: HeLa, MDA-MB-231, A549, HepG2, U-87, SK-OV-3 and HL60. Induction of cell death was assessed by cell cycle analysis, and apoptosis was determined by Annexin V-FITC binding and caspase 3/7 activity. Changes in mitochondrial membrane potential (MMP) and cell migration were also assessed. Genetic toxicity was evaluated using the alkaline comet assay. The studied extracts inhibited the proliferation of all tested cancer cell lines in a concentration-dependent manner over time. All of these extracts mainly induced apoptosis of HeLa cells through the accumulation of hypodiploid cells in the sub-G0/G1 phase and increased caspase 3/7 activity; they also caused marked MMP disturbance and exerted a marked inhibitory effect on cell migration. Assessment of probable genetic toxicity by these extracts revealed no or minimal incidence of genetic toxicity. Therefore, the studied plant extracts exhibit potent anticancer activity based on marked induction of tumor-cell death.
Introduction
Cancer is the most serious disease worldwide and is expected to increase due to the adoption of behaviors and lifestyle factors known to cause cancer [1]. Toxicity and tumor resistance are currently the main limitations in using chemotherapeutic agents in cancer treatment, so the discovery of new and safe treatment options is considered a big challenge [2]. Natural products represent a great source for screening new and safe anti-cancer products [3]. Plant extracts are rich in bioactive molecules, owing to the variety of their chemical constituents such as flavonoids, polyphenols and alkaloids, which play a highly significant role in the drug discovery and development process [4].

On the other hand, after 48 h of treatment of the non-tumor HEK293 cells with 50 µg/mL of each crude methanol extract, there was only a negligible effect on cell growth: cell viability was 80.90%, 73.88%, 71.89% and 70.26% after treatment with the XA, IC, EG and DP extracts, respectively. Only at concentrations above 100 µg/mL could a decrease of more than 50% in cell viability be seen (Figure 2). Upon treatment of cells with 100 µg/mL of EG and DP, cell viability decreased to 49.16% and 49.44%, respectively, while at the higher concentration of 200 µg/mL of the XA and IC extracts, cell viability decreased to 48.05% and 47.30%, respectively. A concentration below 30 µg/mL could be considered safe for healthy cells, with cell viability above 90%.

These findings were in agreement with a recent study performed to assess the in vivo toxicological effect of the IC crude extract [30]. That study revealed the safety of the crude methanol extract of IC at low doses in animal models, while prolonged use at high doses was discouraged.
Cytotoxicity of the Plant Extracts
Screening of the cytotoxic effect of the four studied plant extracts was performed using the MTT viability assay. Seven cancer cell lines were incubated with the four extracts for 24, 48 and 72 h and the status of cell growth was observed (Figure 3). The results showed that all of the aforementioned extracts exerted concentration-dependent cytotoxic effects against the treated cell lines. DP showed the most potent antitumor activity against cervical cancer (HeLa) cells after 72 h among the four studied extracts, with an average half-maximal inhibitory concentration (IC50) of 17.93 µg/mL.

According to the US National Cancer Institute (NCI) plant screening program, a crude plant extract is generally considered to have acceptable in vitro cytotoxic activity if the IC50 value is less than 20 µg/mL after 48 and 72 h of treatment of cancer cell lines [31]. Based on these results, the four plant extracts showed potent cytotoxic activity against HeLa cells after 72 h, with IC50 values of 17.93, 27.61, 30.10 and 32.29 µg/mL for DP, XA, EG and IC, respectively. Thus, HeLa cells were selected for the assessment of inhibition mechanisms.

In comparison with previous studies on a variety of Cameroonian plant extracts [25-29], these findings confirm the potential anticancer activity of these four extracts in these seven cancer cell lines, with a concentration-dependent effect over time. However, the most potent cytotoxic activity was against HeLa, HepG2 and MDA-MB-231 cells.
Live/Dead Staining

A live/dead assay was performed to confirm the cytotoxic activity of the four plant extracts against the HeLa cell line. Cells were treated with 30 µg/mL of each crude extract for 24, 48 and 72 h and then dual-stained with the fluorescent dyes calcein-AM and propidium iodide. The results showed a significant decrease in viable cells after treatment with the extracts (Figure 4A), confirming the cytotoxic activity of these extracts in a time-dependent manner. The percentage of viable cells in untreated cultures was 97.10%, and it decreased markedly after treatment with the extracts (Figure 4B-D). These results suggest that the four studied plant extracts are potential cytotoxic agents for the studied cell lines.
Cell Cycle Analysis

To investigate the mechanism of growth inhibition, the effect of the four studied crude extracts on the cell cycle distribution of HeLa cells was analyzed after 48 h of treatment with the IC50 concentrations obtained from the MTT assay, followed by fixation and PI staining. Cell cycle arrest was analyzed using flow cytometry. The results in Figure 5 show an obvious alteration of the distribution of the different phases. The cell population increasingly accumulated in the sub-G0/G1 phase upon treatment with the studied extracts. Compared with untreated cells, which showed a percentage of 6.02 ± 1.94% in this phase, XA and IC showed a considerable increase, with percentages of 18.03 ± 1.94% and 14.91 ± 1.16%, respectively, while EG and DP significantly increased the accumulation of hypodiploid cells in the sub-G0/G1 phase, with percentages of 38.88 ± 2.84% and 44.51 ± 1.73%, respectively. There was no significant change in cell arrest in any other phase. The results suggest that HeLa cells underwent apoptosis upon treatment with the studied plant extracts.

Plant extracts exert their cytotoxic effects through common mechanisms, including cell cycle arrest and cell death by apoptosis. The potential of anticancer agents is evaluated by their ability to initiate cell cycle arrest in cancer cells [32]. Apoptotic cells resulting from DNA fragmentation display a broad hypodiploid sub-G0/G1 peak, which is easily detected given sufficient loss of cellular DNA and can be discriminated by flow cytometry [33]. Consistent with these findings, the four studied extracts have the potential to cause sufficient DNA loss to induce apoptosis, as the accumulation of hypodiploid cells in the sub-G0/G1 phase is an indication of apoptotic cell death. Several intracellular cascades, such as the activation of caspases and the disruption of the MMP, would provide confirmation of apoptosis.
Annexin V-FITC/PI for Apoptosis Detection

In order to verify that the effect of the studied extracts on the growth inhibition of HeLa cells was related to apoptosis, analysis of apoptotic and necrotic cells was performed using Annexin V. After 48 h of treatment with the IC50 of the four extracts, double labelling was performed with PI, which stains necrotic cells with red fluorescence, and Annexin V-FITC, which produces cytoplasmic green labelling of apoptotic cells. Fluorescence microscopy images (Figure 6A) showed that viable cells were negative for both PI and Annexin V, whereas XA and IC showed considerable green and red labelling for apoptotic and necrotic cells, respectively. A significant population of apoptotic cells was seen upon treatment with both EG and DP, indicating that cells mainly underwent apoptosis with those extracts.

In the flow cytometry scatter plots (Q1 = early apoptosis, Q2 = late apoptosis, Q3 = necrosis, Q4 = live), representing one of three sets of independent experiments, cells undergoing apoptosis shift from the viable quadrant (Q4) to the early apoptosis quadrant (Q1) and eventually end up in the late apoptosis quadrant (Q2), whereas cells undergoing necrosis shift from the viable quadrant (Q4) to the necrosis quadrant (Q3). Untreated cells showed 92.8 ± 1.06% viable cells, 4.06 ± 0.73% dead cells, 2.11 ± 0.30% late apoptosis and 1.04 ± 0.09% early apoptosis. XA and IC increased the late apoptotic population to 18 ± 2.08% and 7.05 ± 2.6%, respectively, while EG and DP showed significant increases in late apoptosis to 31.50 ± 1.26% and 35.6 ± 1.42%, respectively. A considerable increase was also seen in necrotic cells, with percentages of 15.30 ± 2.35%, 22.40 ± 3.82%, 16.90 ± 3.36% and 25.50 ± 4.54% upon treatment with XA, IC, EG and DP, respectively. Lastly, early apoptotic cells, represented by Q1, displayed only a slight increase in cell distribution after treatment with XA, IC, EG and DP, to 2.51 ± 0.34%, 5.99 ± 1.36%, 7.05 ± 1.23% and 4.11 ± 0.89%, respectively.

Phosphatidylserine (PS) on the outer layer of the plasma membrane acts as a recognition site for phagocytes during the early stage of apoptosis [34]. Annexin V, a calcium-dependent protein, can bind to the PS exposed on the external layer of the membrane [35]. In this study, the percentage of cells undergoing late apoptosis increased significantly, confirming that apoptosis was one of the major modes of cell death induced by the four studied plant extracts, especially the EG and DP extracts.
Effect on the Activity of Caspase 3/7

To investigate whether the apoptotic effect induced by the four studied plant extracts depends on caspase activation, caspase 3/7 activity was examined in HeLa cells treated with the IC50 concentrations of the four studied extracts for 24 h (Figure 7). DP had a significant effect on the activation of caspase 3/7, up to 7.17 ± 0.72-fold compared with untreated cells. EG also efficiently enhanced caspase 3/7 activity, by 5.15 ± 1.14-fold, while cells treated with XA and IC showed increases in caspase 3/7 activity of 3.60 ± 0.58- and 4.75 ± 0.92-fold, compared with untreated cells, which showed only a 1.12 ± 0.19-fold increase in activity.

Caspases, a family of cysteine proteases that cleave cellular proteins, play essential roles in apoptosis, necrosis and inflammation [36]. Examination of caspase 3/7 activities in HeLa cells treated with the four studied plant extracts showed a marked increase in caspase 3/7 activity, confirming the role of these extracts in apoptotic cell death, as observed previously in the cell cycle analysis and Annexin V experiments.
Effect on Mitochondrial Membrane Potential (MMP)

As disruption of the MMP is one of the sequential events of the apoptotic pathway, the effect of the four studied plant extracts on this alteration after 24 h of treatment of HeLa cells was compared with the positive control FCCP (Figure 8A), with untreated cells representing the negative control. After the cells were treated with the IC50 of XA and EG, the MMP dropped to 19.70% and 20.84%, respectively. DP and IC disrupted the MMP to 23.02% and 26.90%, while FCCP, the positive control for MMP breakdown, reduced the MMP to 30.20% compared with the control cells, which represent 100%. The results were analyzed in Figure 8B to assess the fold change in MMP after treatment with the different plant extracts. All plant extracts produced a significant alteration of the MMP: XA decreased the MMP by 5.09 ± 0.21-fold, followed by EG, DP and IC with 4.80 ± 0.12-, 4.37 ± 0.43- and 3.77 ± 0.63-fold decreases, respectively. Comparatively, FCCP altered the MMP with a 3.35-fold decrease. Further analysis using fluorescence microscopy (Figure 8C) was performed to visualize the breakdown of MMP after treatment of HeLa cells with the IC50 of the four studied plant extracts, followed by staining with 500 nM of tetramethylrhodamine ethyl ester (TMRE). The results show significant depletion of MMP after treatment with the four plant extracts, compared with the positive control (FCCP) and the untreated cells; the remaining living cells clearly showed a depolarization of MMP, which induced the apoptotic pathway and cell death.

Mitochondria play an essential role in the physiological metabolism of cells and in the energy supply for cell survival [37]. Loss of mitochondrial membrane potential is one of the processes integrated into the apoptotic pathway. The results of this study revealed that the four studied plant extracts caused significant disruption of the MMP compared with untreated cells. These findings confirm that cells undergo apoptosis after treatment with these extracts through MMP alteration, which is a key step in the intrinsic apoptotic pathway.
Wound Healing Assay

The inhibitory effect of the four studied plant extracts on progression and migration was assessed in HeLa cells over 24 and 36 h, with untreated cells and cells treated with vehicle (0.1% DMSO) as negative controls. Treatment of cells with the IC50 of the studied plant extracts significantly blocked the progression and wound healing of the scratch area compared with the untreated cells and the solvent (0.1% DMSO) control (Figure 9A). Analysis of the percentage of wound closure showed that it decreased at 24 h from 96% and 81% for untreated cells and DMSO, respectively, to 38%, 36%, 27% and 11.5% for IC, XA, EG and DP. After 36 h, the scratch area of the untreated cells was totally covered by cells (100% closure) and that of cells treated with DMSO was 95% covered. The most pronounced and sustained effect on the scratch area up to 36 h was observed in cells treated with DP, where the percentage of wound closure was only 16.5%, followed by 39.5% for cells treated with EG, whereas the coverage of the scratch area increased to 64.5% and 66% for cells treated with IC and XA, respectively (Figure 9B). From these findings it is evident that the four studied plant extracts exhibited a potential inhibitory effect on metastatic progression.

Cell migration in vitro, or cell metastasis in vivo, is one of the main features of malignant tumors and leads to increased mortality in cancer [38]. In this study, the wound-healing assay showed that the four studied plant extracts significantly inhibited cell migration in HeLa cells treated with the IC50 values. DP and EG showed a more potent and sustained inhibition of wound closure up to 36 h. These findings suggest that the extracts could act as potent anticancer agents that inhibit cancer metastasis.
Single Cell Gel Electrophoresis Assay

To assess possible toxicity induced by the studied plant extracts, especially DNA damage and genotoxicity, the comet assay was performed. The principle of the comet assay is that DNA strands, which occur as a negatively charged supercoiled structure in the nucleus, can be fragmented by exposure to toxins or drug treatments [39].

According to Figure 10, no direct DNA strand breakage was caused by IC50 levels of three of the studied plant extracts (IC, EG and DP) compared with the untreated cells and vehicle control (0.1% DMSO), while the IC50 of the XA extract may have caused low-level DNA damage. Several parameters, including the percentage of DNA in the comet tail (% tail-DNA), tail length (TL) and tail moment (TM), have been used to monitor DNA strand breakage with the comet assay. In this study, the percentage of tail-DNA was used to quantify DNA strand breakage in HeLa cells after 24 h of treatment with the plant extracts. The data in Table 1 confirmed these results: IC, EG and DP exhibited % tail-DNA values below 10%, which represents undamaged nuclei, while XA showed a % tail-DNA value of 19.04 ± 2.10%, indicating low-damaged nuclei [40]. From these results, it can be assumed that these plant extracts can be used in in vivo studies with no or minimal incidence of genetic toxicity.

Medium, Dulbecco's modified Eagle's minimum essential medium (DMEM), fetal bovine serum (FBS) were purchased from Capricon Scientific (Ebsdorfergrund, Germany). The Caspase-Glo 3/7 Assay kit was procured from Promega (Walldorf, Germany), and carbonyl cyanide 4-(trifluoromethoxy) phenylhydrazone (FCCP) was from Cayman Chemical (Ann Arbor, Michigan, USA).
Plant Material and Extraction
Extraction of the four studied plants, Xylopia aethiopica (XA), Imperata cylindrica (IC), Echinops giganteus (EG) and Dorstenia psilurus (DP) (collected in Cameroon), was performed at our partner laboratory at the University of Dschang, Cameroon. Plant materials were purchased from the Dschang local market, West Region of Cameroon, in August 2018. They were identified at the Cameroonian National Herbarium, where voucher specimens were deposited under the following reference numbers (Table 2, Figure 11). The extraction was done by maceration of 100 g of plant material in 500 mL methanol for 48 h; the methanolic extracts were then concentrated by rotary evaporation under reduced pressure to obtain the crude extracts. The extracts were conserved at 4 °C until further use.
SRB Assay
To initially assess the possible cytotoxic effect of the studied plant extracts, the colorimetric sulphorhodamine-B (SRB) assay was used for measurement of cell proliferation [41]. Briefly, 4 × 10⁴ HeLa (cervical cancer) and HEK293 (non-tumor) cells were added to each well of a 96-well plate and incubated overnight to allow for cell attachment. The cells were then treated with serial dilutions of the four plant extracts XA, IC, EG and DP (200 to 1 µg/mL), and 1% Triton-X was used as a positive control. Untreated cells receiving the same volume of medium served as a control, while the concentration of the vehicle control (DMSO) was kept at or below 0.1%. After 48 h of exposure, the cells were fixed with ice-cold 10% TCA at 4 °C for 1 h, washed four times under slow-running tap water, stained with 0.057% (w/v) SRB in 1% acetic acid, washed and air-dried. Bound dye was solubilized with 200 µL of 10 mM Tris base solution (pH 10.5). The plates were read at 540 nm absorbance on a Fluostar microplate reader (BMG Labtech, Ortenberg, Germany); the determination of the 50% inhibitory concentration (IC50) was based on dose-response curves between extract concentration and percent growth inhibition using GraphPad Prism 5 software. Values are expressed as mean ± SD, with all experiments independently performed in triplicate.
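The IC50 estimation described above was carried out in GraphPad Prism; purely as an illustration of the underlying idea, a minimal Python sketch of fitting a four-parameter logistic dose-response curve to percent-viability data and reading off the half-maximal concentration is given below. The concentration and viability values in the example are hypothetical, not data from this study.

```python
# Minimal sketch: estimate IC50 by fitting a four-parameter logistic (Hill)
# curve to percent-viability data, analogous to the GraphPad dose-response
# analysis described above. Data values are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response model (decreasing with dose)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical extract concentrations (µg/mL) and % viability values
conc = np.array([1, 3.125, 6.25, 12.5, 25, 50, 100, 200], dtype=float)
viability = np.array([98, 95, 88, 72, 55, 38, 22, 10], dtype=float)

# Fit with loose initial guesses for bottom, top, IC50 and Hill slope
p0 = [0.0, 100.0, 30.0, 1.0]
params, _ = curve_fit(four_pl, conc, viability, p0=p0, maxfev=10000)
bottom, top, ic50, hill = params
print(f"Estimated IC50 ≈ {ic50:.1f} µg/mL (Hill slope {hill:.2f})")
```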
Cell Viability Assay
Cell viability was measured by a standard MTT assay [42]. The plant extracts were dissolved in DMSO and then diluted with cell culture medium to serial dilutions (200 to 1 µg/mL), while the concentration of the vehicle control (DMSO) was kept at or below 0.1%; 1% Triton-X was used as a positive control, and untreated cells receiving the same volume of medium served as a control. For the adherent cell lines, cells were cultured in 96-well plates (4 × 10⁴ cells per well) and incubated at 37 °C for 24, 48 and 72 h. A 20 µL MTT (5 mg mL⁻¹) solution was then added to each well and incubated for 4 h, after which 150 µL of DMSO was added to each well to dissolve the formazan crystals. Absorbance was read on a microplate reader at 490 nm. For the leukemia HL60 line, a suspension of 4 × 10⁴ cells/mL was seeded in 96-well plates and serial dilutions of the extracts in medium were added immediately afterwards. After incubation for 24, 48 and 72 h, the 96-well plates were centrifuged at 1000 × g and 4 °C for 5 min in a microplate-compatible centrifuge, the medium was carefully aspirated, and the cells were treated with MTT solution as described above. Viability was determined by comparing the absorbance of treated and untreated cells. IC50 values were obtained from the concentrations of the plant extracts that induced 50% inhibition of cell growth, using GraphPad Prism 5 software. Each test was replicated three times independently.
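For illustration, the conversion of raw MTT absorbance readings into the percent-viability values used for the dose-response curves might look like the following sketch; the blank-correction step and the example absorbance values are assumptions rather than details taken from the paper.

```python
# Minimal sketch: convert MTT absorbance (490 nm) readings into percent
# viability relative to the untreated control. Values are illustrative.
import numpy as np

def percent_viability(a_treated, a_untreated, a_blank=0.0):
    """Blank-correct absorbances and express treated wells as % of the control mean."""
    treated = np.asarray(a_treated, dtype=float) - a_blank
    control = np.mean(a_untreated) - a_blank
    return 100.0 * treated / control

# Triplicate absorbance readings for one extract concentration (hypothetical)
a_treated = [0.42, 0.45, 0.40]
a_untreated = [0.95, 1.01, 0.98]
print(percent_viability(a_treated, a_untreated, a_blank=0.05).round(1))
```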
Live Dead Staining
Assessment of the cell viability of HeLa cells upon treatment with the four studied extracts was conducted using the LIVE/DEAD® Viability/Cytotoxicity Kit. A total of 2.0 × 10⁵ cells/well were plated on the surface of a sterile glass coverslip placed in a twelve-well plate and incubated overnight before treatment with 30 µg/mL of the crude extracts. Cells were washed with 1× PBS solution before dual-fluorescence staining with calcein-AM (2.0 µM) and propidium iodide (20 µg/mL). Cells were washed twice after 15 min with 1× PBS solution, and samples were then visualized using a fluorescence microscope (CKX-53 Olympus, Tokyo, Japan). Viable cells stained with green fluorescence and dead cells were labelled with red fluorescence, corresponding to calcein-AM and propidium iodide, respectively. Four random fields of view for each sample were captured and analyzed using ImageJ software for three independent experiments. The percentage of viable cells was calculated as follows: % viable cells = (number of calcein-AM-positive (green) cells / total number of counted cells) × 100.
Cell Cycle Analysis
A total of 3 × 10⁵ cells were seeded into each well of a 12-well plate and incubated for 24 h with the IC50 concentrations of the four studied plant extracts obtained from the MTT assay, with untreated cells as a control. Following incubation, cells were trypsinized, washed with PBS and resuspended in 100 µL of ice-cold PBS. Cells were transferred to a 2 mL sample tube and shaken at 800 rpm while 900 µL of ice-cold 70% ethanol was added drop-wise, then incubated at −20 °C for 2 h. Cells were centrifuged for 5 min at 8000 rpm and 4 °C, the cell pellets were resuspended in 1 mL ice-cold PBS, the centrifugation was repeated, and the pellets were re-suspended in 1 mL of staining solution (PBS, 0.01% Tween 80, 20 µg/mL PI, 1 µL/mL RNase [100 mg/mL]) and incubated at 37 °C for 30 min [43]. After washing and centrifugation, the cell suspension was vortexed and filtered using a polystyrene round-bottom tube equipped with a cell strainer cap. The cell cycle distribution was then analysed on the PE-A channel of a flow cytometer (BD Biosciences LSR II FACS, San Francisco, CA, USA), and 10,000 events per sample were acquired. The percentages of cells in the different cell cycle phases were analyzed in triplicate with ModFit LT software (version 5.0).
Annexin V-FITC/PI for Apoptosis Detection
Further analysis for apoptosis was performed using a FITC Annexin V Apoptosis Detection Kit according to the manufacturer's instructions. Briefly, a total of 5.0 × 10 5 of HeLa cells were cultured before being treated with the IC 50 values, obtained from the MTT assay of the studied crude extracts, for 48 h. For the qualitative analysis, cells were seeded on the surface of a sterile glass coverslip placed in a twelve-well plate and incubated overnight before treatment. Treatment-free cells were grown as negative controls. Then, cells were washed twice with 1× binding buffer and further incubated for 15 min with 200 µL of the binding buffer containing Annexin V-FITC and PI in the dark. Visualization of samples was carried out using a fluorescence microscope (CKX-53 Olympus, Tokyo, Japan). Apoptotic cells stained with green fluorescence and dead cells labelled with red fluorescence correlating to annexin V-FITC and propidium iodide, respectively. Four random fields of view for each sample were captured.
For the quantitative analysis, the cells were harvested with trypsin and centrifuged at 300 × g for 10 min after treatment. Cells were then washed twice with 1 × PBS buffer and further incubated with 100 µL of the binding buffer containing annexin V-FITC and PI in the dark for 15 min. After that the samples were mixed with 400.0 µL binding buffer before being analyzed using a flow cytometer (BD Biosciences LSR II FACS). Results for three independent experiments expressed in a scatter plot as four different quadrants representing viable cells, necrosis, early and late apoptosis.
Caspase-Glo 3/7 Activity
The influence of extracts on caspase 3/7 activity in HeLa cells was observed using Caspase-Glo 3/7 Assay kit (Promega). Following the manufacturer protocol, cells cultured in RPMI were seeded in 96-well plates overnight, then treated with the IC 50 of the crude plant extracts obtained from the MTT assay and untreated cells as a control. After 24 h treatment, 100 µL of caspase reagent were added to each well, mixed and incubated for 1 h at room temperature. The luminescence was measured using a Fluostar Optima microplate reader (BMG Labtech, Ortenberg, Germany), then caspase activity was expressed as percentage of the untreated control within five replicates reading.
Analysis of Mitochondrial Membrane Potential MMP (∆Ψ m)
The mitochondrial membrane potential assay was performed using tetramethylrhodamine ethyl ester (TMRE) to label active mitochondria. Briefly, HeLa cells were seeded in 96-well plates (4 × 10⁴ cells per well); after overnight incubation, cells were treated with the IC50 of the four plant extracts obtained from the MTT assay, with untreated cells as a negative control. After 24 h, 50 µM of carbonyl cyanide 4-(trifluoromethoxy) phenylhydrazone (FCCP) was added to one sample replicate for 20 min as a positive control; cells were then treated with 500 nM of TMRE and incubated for 20 min at 37 °C. Cells were washed twice with 0.2% (w/v) bovine serum albumin (BSA) in PBS, and fluorescence was detected using a Fluostar Optima microplate reader (λEx/λEm = 549/575 nm). For further evaluation of MMP disruption, the MMP analysis was visualized under a fluorescence microscope (CKX-53 Olympus). All experiments were performed in independent triplicates.
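The fold decreases in MMP reported in the Results correspond to the ratio of the untreated-control TMRE signal (taken as 100%) to the residual signal of treated cells; a minimal sketch of that calculation, using the percentages quoted in the Results, is shown below. The small differences from the reported fold values presumably reflect replicate-level averaging.

```python
# Minimal sketch: fold decrease in MMP as the ratio of untreated-control
# TMRE fluorescence (set to 100%) to the residual fluorescence of treated
# cells, using the percentages reported in the Results section.
residual_mmp_percent = {"XA": 19.70, "EG": 20.84, "DP": 23.02, "IC": 26.90, "FCCP": 30.20}

for treatment, residual in residual_mmp_percent.items():
    fold_decrease = 100.0 / residual
    print(f"{treatment}: {fold_decrease:.2f}-fold decrease in MMP")
```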
Wound Healing Assay
Inhibition of cell migration and metastasis was evaluated by the wound healing assay [44]. HeLa cells were seeded in 24-well plates. After 24 h, cells were treated for 2 h in serum-free medium with the IC50 concentrations of the different plant extracts obtained from the MTT assay or with 0.1% DMSO (solvent control). A scratch was made with a 200 µL pipette tip. Cells were then washed twice with ice-cold PBS (pH 7.4) and fresh medium was added. Wound closure was observed immediately (0 h) and at later time intervals (24 h and 36 h) using an inverted microscope (CKX53, Olympus). Cell migration and percentage wound healing were calculated using Sketch and Calc™ together with GIMP 2.10.10 software by measuring the distance between the wound edges. The experiment was independently performed in triplicate.
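The percentage of wound closure reported in the Results is derived from the scratch-area (or edge-distance) measurements described above; a minimal sketch of that calculation, assuming the open wound area has already been measured at 0 h and at a later time point, is shown below. The example area values are hypothetical, chosen only to approximate the reported untreated and DP-treated percentages.

```python
# Minimal sketch: percent wound closure from scratch-area measurements.
# area_t0 is the open wound area immediately after scratching (0 h);
# area_t is the remaining open area at a later time point.
def percent_wound_closure(area_t0: float, area_t: float) -> float:
    """Fraction of the original scratch area that has been covered by cells."""
    return 100.0 * (area_t0 - area_t) / area_t0

# Hypothetical areas (mm^2) at 24 h
print(percent_wound_closure(1.20, 0.05))   # untreated well, ~96% closure
print(percent_wound_closure(1.20, 1.06))   # DP-treated well, ~11.7% closure
```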
Single Cell Gel Electrophoresis
The assessment of the DNA damage and genotoxicity induced by the studied plant extracts was performed by the single-cell gel electrophoresis (Alkaline Comet Assay). All the procedures were performed in dark [45]. Briefly, 1 × 10 5 HeLa cells per well were seeded into a six-well plate and were allowed to adhere overnight. Then cells were treated with IC 50 of the different plant extracts obtained from the MTT assay and 0.1% DMSO (solvent control) for 24 h. Then the cells were trypsinized and centrifuged for 5 min at 1000 rpm to get the cell pellet. The obtained cell suspension was washed twice using sterile PBS (pH 7.4) and cell density was adjusted accordingly. After that, 80,000 cells (25 µL) of the treated and untreated cell suspension was mixed with 75 µL of 1% of pre-warm low melting agarose (LMA) (Carl Roth GmbH). The mixture was applied on the super frost glass slide previously pre-coated with 1% standard normal melting agarose (NMA) and was immediately covered with coverslips. The glass slides were then placed on an ice block for 10 min until solidified and the coverslips were gently removed. The cell membrane lysis was done by submerging the slides overnight into the staining jar containing cold lysis solution (300 mM NaOH, 1.2 M NaCl, 2% DMSO and 1% Triton X-100) [39]. The slides were then transferred to the electrophoresis tank containing alkaline electrophoresis buffer (300 mM NaOH and 1 mM EDTA) and were left in the buffer for 30 min to allow the unwinding of DNA. Electrophoresis was performed for 30 min at 250 mA current and 25 V, resulting in the DNA unwinding and exposing the alkali labile sites. After the electrophoresis, the slides were neutralized by washing the slides with double distilled water. The cell fixation was then done by submerging the slides into the 70% ethanol for 20 min. After fixation, cells were stained with SYBR ® safe DNA staining dye (1:10,000 in PBS) for 20 min. Finally, the slides were washed with double distilled water to remove any unbound stains. The comet analysis was done under a fluorescence microscope (CKX-53 Olympus). Fifty individual comets were scored for each sample and analyzed using Comet Assay IV ® software.
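Comet scoring was performed with Comet Assay IV; purely to illustrate the quantity being scored, the sketch below computes the percentage of DNA in the comet tail from integrated head and tail fluorescence intensities and applies damage categories consistent with the Results (<10% tail-DNA for undamaged nuclei). The 10-25% band used here for "low damage" and the intensity values are assumptions, not parameters taken from the software or from reference [40].

```python
# Minimal sketch: % tail DNA for a single comet, computed from integrated
# head and tail fluorescence intensities. Category boundaries beyond the
# <10% "undamaged" threshold are illustrative assumptions.
def percent_tail_dna(head_intensity: float, tail_intensity: float) -> float:
    """Percentage of total comet fluorescence located in the tail."""
    return 100.0 * tail_intensity / (head_intensity + tail_intensity)

def damage_category(pct_tail: float) -> str:
    if pct_tail < 10:
        return "undamaged"
    if pct_tail < 25:
        return "low damage"
    return "damaged"

pct = percent_tail_dna(head_intensity=8500.0, tail_intensity=2000.0)
print(f"{pct:.1f}% tail DNA -> {damage_category(pct)}")   # ~19.0% -> low damage
```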
Statistical Analysis
Non-linear curve fitting functions were applied on normalized dose-response cell viability data obtained from SRB and MTT assays, then IC 50 values were calculated using GraphPad Prism 5. All the experiments were performed in triplicate unless otherwise stated and results are expressed as mean ± SD. One-way ANOVA was performed on data obtained of bar graphs using GraphPad Prism 5. Significance levels of p < 0.05 were considered for the rejection of the null hypothesis.
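The one-way ANOVA on the bar-graph data was run in GraphPad Prism 5; for readers who prefer code, an equivalent analysis in Python (scipy) on hypothetical caspase-activity fold changes might look as follows. The replicate values are illustrative only.

```python
# Minimal sketch: one-way ANOVA across treatment groups, analogous to the
# GraphPad Prism 5 analysis described above. Replicate values are illustrative.
from scipy import stats

untreated = [1.0, 1.2, 1.1]
xa = [3.1, 3.9, 3.8]
ic = [4.2, 5.4, 4.6]
eg = [4.3, 6.1, 5.1]
dp = [6.5, 7.8, 7.2]

f_stat, p_value = stats.f_oneway(untreated, xa, ic, eg, dp)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
# A p-value below 0.05 would lead to rejection of the null hypothesis of equal group means.
```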
Conclusions
In this study, the cytotoxic activity of extracts from four Cameroonian plants (XA, IC, EG and DP) was examined. These extracts could represent potential antitumor agents against the examined cancer cells, acting in a concentration-dependent manner. Cell cycle analysis showed the accumulation of hypodiploid cells in the sub-G0/G1 phase, which is considered a marker of apoptotic cell death. All of these extracts also induced apoptosis, with an increase in caspase 3/7 activity and significant MMP disruption. The extracts are a promising option for the inhibition of cancer cell metastasis. Further studies are warranted to identify the active constituents responsible for the anticancer properties and to assess the dose-response relationship in vitro and in vivo.
The Link between Cognitive Measures and ADLs and IADL Functioning in Mild Alzheimer's: What Has Gender Got to Do with It?
Objectives. To investigate the link between neurocognitive measures and various aspects of daily living (ADL and IADL) in women and men with mild Alzheimer's disease (AD). Methods. Participants were 202 AD patients (91 male, 111 female) with CDR global scores of ≤1. ADL and IADL ratings were obtained from caregivers. Cognitive domains were assessed with neuropsychological testing. Results. Memory and executive functioning were related to IADL scores. Executive functioning was linked to total ADL. Comparisons stratified by gender found that attention predicted total ADL score in both men and women. Attention predicted bathing and eating ability in women only. Language predicted IADL functions in men (food preparation) and women (driving). Conclusions. Associations between ADLs/IADLs and memory, learning, executive functioning, and language suggest that even in patients with mild AD, basic ADLs require complex cognitive processes. Gender differences were found in the domain of learning and memory.
Introduction
Basic and instrumental activities of daily living are tasks required to function on a daily basis and are often impacted by disease processes that reduce cognitive ability, such as Alzheimer's disease (AD). Basic activities of daily living (ADLs) include core tasks of everyday life such as eating, dressing, grooming, and bathing, while instrumental activities of daily living (IADLs) include more complicated, higher-level tasks such as preparing meals, managing finances, shopping, doing housework, and using the telephone. Driving and medication management are other IADLs that are significantly disturbed in patients with AD [1]. While both ADLs and IADLs are impacted by AD, IADLs are the first to decline, and the level of functional impairment is the core clinical distinction between AD and milder conditions such as mild cognitive impairment (MCI). Classifications of MCI subtypes suggest that amnestic MCI and MCI with multiple-domain impairments are more predictive of later conversion to dementia and are associated with greater functional impairment than nonamnestic and single-domain MCI subtypes [2].
Patients with AD experience a gradual loss of the ability to live independently due to impairments in cognitive and memory functioning [1,3], and, as the disease progresses, the ability to carry out these essential activities eventually disappears [4]. Functional impairments in AD place the greatest burden on both caregivers and the economy [5]. Although there is not a large body of research on the relationship between neuropsychological measures and functional activities, several studies suggest that neuropsychological test performance is predictive of complex ADLs and IADLs in elderly neuropsychiatric patients and those with AD [1,3,6,7]. Evans found that performance on neuropsychological evaluations predicted functional capacity beyond negative symptoms in elderly patients with schizophrenia; however, these authors were unable to identify specific cognitive domains that impacted functional impairment. To date, most studies have not utilized comprehensive and conceptually sound measures to identify specific cognitive domains that predict particular areas of daily functioning [1]. The limited existing literature suggests a correlation between objective neuropsychological assessment and informant-reported level of functioning. This relative dearth of literature may be attributable to the difficulty of accurately measuring everyday functioning [1]. Trained observer ratings of functional level may be the gold standard but are very time consuming and impractical in outpatient settings. Loewenstein et al. [8] found that family members' reports of functional impairment are "extremely accurate" when compared with objective functional performance and are a useful mechanism to assess functioning.
The present study evaluated the link between specific neurocognitive measures and informant report of ADLs and IADLs in patients diagnosed with mild AD. As recommended by Beck et al., we address challenges in the current literature by utilizing a comprehensive neuropsychological battery to predict daily functioning in this clinical population.
To our knowledge, no research has addressed the possibility of gender-related differences in the cognitive mechanisms required for select areas of daily functioning [9]. For instance, women inherently expend more effort than men in the area of dressing and grooming, which implies increased cognitive effort. The majority of studies have dealt with possible gender differences by covarying for gender, which tends to obscure any meaningful relationships that may be gender specific. Our study directly evaluates gender differences in the ability of specific neuropsychological tests and cognitive domains to predict functioning.

Collateral information on ADL and IADL ratings was obtained from immediate caregivers, who were predominantly family members (spouse and/or children), using the Lawton-Brody rating scales. The methodology of the TARC project has been described in detail elsewhere [10]. Briefly, the TARC project is a longitudinal multisite study of a cohort of AD patients and normal controls in which each participant undergoes an annual evaluation that includes a medical examination, interview, neuropsychological testing, and blood draw. AD patients met consensus-based diagnosis for probable AD based on NINCDS-ADRDA criteria [11]. Male participants were 56 to 92 years of age (M = 74.36, SD = 8.21), and female participants were 54 to 92 (M = 76.95, SD = 7.74). The characteristics of the participants are presented in Table 1. The majority of participants were Caucasian (98%); Black or African American participants (1.5%) formed the next largest group. The TARC project received Institutional Review Board approval, and all participants and/or caregivers provided written informed consent.
Assessment.
The TARC neuropsychological core battery consists of the following instruments: Wechsler Digit Span, Logical Memory, and Visual Reproduction, Trail Making Test A & B, Clock Drawing Test (CDT), Boston Naming Test, the Geriatric Depression Scale (GDS-30), and the Clinical Dementia Rating scale (CDR). Verbal memory was assessed with the Wechsler Logical Memory I (LM I) and Logical Memory II (LM II); visual memory was assessed with the Wechsler Visual Reproduction I (VRI) and II (VRII); attention was evaluated by performance on Trails A and Total Digit Span; linguistic capacity was assessed with the Boston Naming Test (BNT) and verbal fluency (FAS, Category Naming (COWAT)); measures of executive functioning in this battery included the CDT and Trails B. The cognitive evaluation was administered in a controlled setting according to standardized instructions. In order to equate scores from the digit span and story memory scales, all raw scores were converted to scale scores based on previously published normative data [12]. For the Boston Naming Test, the current group recently conducted an independent study that demonstrated the psychometric properties of an estimated 60-item BNT score that can be calculated from 30-item versions [13]. Adjusted scale scores were utilized as dependent variables in analyses.
Data Analysis.
Descriptive statistics and one-way ANOVA comparisons of the male and female samples (presented in Table 1) were conducted using SPSS version 17.0. Stepwise regression modeling was used to evaluate the link between each test of cognitive function and ADLs and IADLs. Independent variables were caregiver ratings on the Physical Self-Maintenance Scale for ADLs and the Personal Self-Maintenance Scale for IADLs [14]. Each item has five descriptors ranging from total independence to total dependence or total loss of functional control, and items are scored 0-4. The ADLs assessed were toileting, feeding, dressing, grooming, ambulation, and bathing. The IADLs assessed were telephone use, shopping, food preparation, housekeeping, transportation, laundry, and management of medications and finances. ApoE4 status (presence or absence) was also analyzed. The significance level was set at 0.05.
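The stepwise regressions were run in SPSS; as a rough illustration of forward-selection logic of the same general kind (not the exact SPSS procedure), a Python sketch using statsmodels is given below. The variable names, file name, and the p < 0.05 entry criterion are assumptions for the purpose of the example.

```python
# Minimal sketch of forward stepwise selection: repeatedly add the cognitive
# predictor whose addition gives the smallest p-value below alpha when an
# ADL/IADL score is regressed on the selected set. Illustrative only.
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(df: pd.DataFrame, outcome: str, candidates: list, alpha: float = 0.05):
    selected = []
    remaining = list(candidates)
    while remaining:
        pvals = {}
        for var in remaining:
            X = sm.add_constant(df[selected + [var]])
            model = sm.OLS(df[outcome], X).fit()
            pvals[var] = model.pvalues[var]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical usage: predict total ADL score from neuropsychological tests
# df = pd.read_csv("tarc_scores.csv")  # columns: ADL_total, LMI, VRI, TrailsA, TrailsB, CDT, BNT
# print(forward_stepwise(df, "ADL_total", ["LMI", "VRI", "TrailsA", "TrailsB", "CDT", "BNT"]))
```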
Predictors of IADL Functions for Total Sample.
Logical Memory I (LMI) and performance on the CDT were significant predictors of IADLs (see Tables 2 and 3). ApoE4 status was excluded from stepwise regression modeling; it did not impact the level of ADL functioning in women or men with mild AD and, as a result, was not included in gender analyses.
Predictors of ADL Functions of Males.
VRI, Trails A, BNT, Trails B, and CDT were all significant predictors of the level of ADL functioning in men with AD (see Table 5). Total ADL score (t = 8.86, P < .0001; R² = .35) was significantly predicted by VRI and Trails A. Performance on VRI predicted bathing ability (t = 6.26, P < .0001; R² = .41) and, when combined with Trails B, also predicted grooming capacity (t = 5.66, P < .0001; R² = .51). The ability to eat/feed independently was predicted by BNT and CDT (t = 16.65, P < .0001; R² = .56).
Discussion
Previous research has shown that cognitive functioning, as assessed by neuropsychological tests, is the strongest predictor of functional impairment [6,7]. Specific cognitive domains of executive functioning, praxis/visuospatial skills, and memory have been found to be useful for predicting ADL and IADL in assisted-living elders [6]. Measures of executive functions have been shown to predict IADLs in community dwelling elders [15]. Our findings are consistent with previous research and demonstrate a significant relationship between performance of daily living activities and neurocognitive performance. Unlike other studies, we found that attention is an important predictor of ADLs in AD patients. These differences may be related to differences in setting. Prior studies have been conducted in assisted care facilities where caregiver assistance may be sufficient to overcome inattention. However, individuals with mild dementia seen as outpatients in our study were likely responsible for basic ADLs, and, thereby, attention was necessary to facilitate functioning. Memory and learning (LMI, LMII, VR I, VRII), executive functioning (CDT, Trails B), and language (BNT, COWAT) were significant predictors of ADLs and IADLs. Among the measures administered, CDT, LMI, and Trails A were predictive of both ADL and IADL functioning in analysis of the total sample. Whereas prior reports suggested that cognitive abilities are most predictive of complex tasks of everyday functioning [15,16], our results suggest that cognitive test performance also predicts basic ADLs (e.g., bathing, grooming, dressing, and feeding). This suggests that even in patients with mild AD, basic ADLs likely also require complex cognitive processes.
Another intriguing finding is that the presence of the ApoE4 genotype was not predictive of the level of IADL and ADL functioning in the current sample. Presence of a particular APOE genotype has been associated with greater disability in prior research with patients with MCI [17]. The current data suggest that the presence of APOE4 was not significantly associated with level of functioning in patients who have converted to AD status. This is the first known study to directly examine gender differences. We anticipated gender differences because there are (a) differences in task performance and (b) differences in strategies used to perform ADLs and IADLs. For instance, it has been documented that women tend to use landmarks when driving and giving directions, whereas men are more likely to use street names [9,18]. In our research we administered several measures within each cognitive domain to facilitate understanding not only of which domain differs for males and females but also of which specific measure best predicts functioning. Figure 1 demonstrates gender differences in ADLs and IADLs with regard to the specific cognitive domains assessed. Measures of attention predicted overall ADL scores in both men and women. However, attention predicted bathing and eating ability in women but not men. Language also predicted IADL functions in men (food preparation) and women (driving). Executive function predicted both ADLs and IADLs in women and men. Gender differences remained in the domain of learning and memory, suggesting that men rely on this process for both ADLs and IADLs whereas it is only predictive of IADL functions in women.
A notable gender difference is that cognitive functioning is generally a better predictor of ADL and IADL functioning for women than for men. While the administered assessment battery predicted practically all daily tasks for women, it only predicted a few specific ones for men. This may be due in part to the likelihood that men, especially of the generation in our sample, are less likely to be involved in cooking, shopping, housekeeping, and laundry and hence show little variability. For men, the IADL of medicine management was the only area predicted by performance on several different cognitive measures. Among ADLs, only bathing, grooming, and feeding capacity were significantly predicted by VRI, Trails A and Trails B, BNT, and the CDT. However, in women, the CDT, verbal learning and memory (LMI and LMII), and language were good predictors of the capacity to perform almost all ADLs and IADLs. These findings suggest that men tended to depend on visual learning and visual memory and women on verbal learning and verbal memory. LMI and LMII were predictive of functioning for women, whereas VRI and VRII were predictive for men. One could speculate that women tend to problem-solve verbally using "self-talk" whereas men tend to conceptualize visually.
The CDT appears to be a good measure in predicting functioning for women, but not for men. The CDT is typically seen as a measure of executive functioning and of frontal lobe processes. The clinical utility of the Clock Drawing Test has been documented [18] for diagnosing patients with dementia [19], but its relationship to specific functional activities has not been reported.
The generalizability of our findings suffers from the relatively small sample size and the nature of subject recruitment. The current study is one of the first to examine gender differences, and efforts to replicate these findings are warranted given several sample limitations, including the lack of racial diversity and the differences in educational levels and age between men and women in this sample. Though it is unlikely that these factors negate the current results, it would be best to stratify according to education and age in future studies. Additional studies with larger, more representative samples are needed to further assess the impact of gender on predicting functioning. Although late-life depression could impact cognitive functioning, gender differences in depression were not significant (Table 1) in our sample, and, therefore, depression score was not accounted for in the analyses. However, the effects of depression can be substantial and warrant future investigation.
Although the current research has its limitations, this study has several advantages over earlier studies in terms of understanding patients with AD. First, the sample was limited to individuals with mild AD, which helps control for the effect of disease severity on functional activities. Second, it examines gender differences that have not been examined in prior research. Third, in addition to evaluating the predictive value of specific domains, it also evaluated specific measures within those domains in both gender-specific and mixed-gender analyses. The findings of this study underscore the importance of gender and the gender-specific relations of neurocognitive measures to everyday activities.
Conclusion
Acknowledging gender differences is important as it may facilitate more accurate interpretation of neurocognitive tasks and their relationship to particular daily living activities. These findings also have clinical value for making informed decisions and recommendations about capacity in patients with AD. There is general consensus that executive functioning is an important predictor of the capacity to perform complex tasks (IADLs) [15]. While this may be accurate for women, the current findings suggest that it may be an irrelevant predictor for men. Understanding how to most accurately predict level of function will also enable patients to maintain daily functions longer, reducing caregiver fatigue as well as social and economic burden.
Algebraic aspects of hypergeometric differential equations
We review some classical and modern aspects of hypergeometric differential equations, including A-hypergeometric systems of Gel'fand, Graev, Kapranov and Zelevinsky. Some recent advances in this theory, such as Euler-Koszul homology, rank jump phenomena, irregularity questions and Hodge theoretic aspects, are discussed in more detail. We also give some applications of the theory of hypergeometric systems to toric mirror symmetry.
Introduction
Notational conventions. We use Italic letters M for rings, variables and modules; calligraphic letters D for sheaves; Roman letters FL for functors; Gothic letters for prime ideals p and points x of spaces.
Lattice elements a are in Roman bold; coordinate sets t and other sets of functions or operators ∂ in Italic bold. ♦
Hypergeometric functions
The study of hypergeometric functions started more than two centuries ago and formed an important part of the work of Euler and Gauß. A power series is hypergeometric if the quotient a_{i+1}/a_i of consecutive coefficients is a rational function in i. Traditional convention dictates that the exponential function is regarded as the standard hypergeometric function (with a_{i+1}/a_i constant); this "explains" the choice of a_i/i! over a_i as series coefficient. Further examples include Bessel, Airy, trigonometric and (higher) logarithmic as well as all other special functions, and the hypergeometric functions that express roots of algebraic equations.
The continuing interest in hypergeometric functions stems to some extent from the fact that they are often solutions to very appealing linear differential equations taken from physics. For example, the Bessel functions J_{±r}(x) of the first kind arise as solutions to a linear second order equation that shows up in heat and electromagnetic propagation in a cylinder, vibrations of circular membranes, and more generally when solving the Helmholtz or Laplace equation. Indeed, such connections to physics through differential equations prompted the first studies of (specific) hypergeometric functions. However, hypergeometric functions also appear in many other parts of mathematics: as we will see soon, each time an action of an algebraic torus on a space is observed, one can expect to find some differential equation of hypergeometric type connected to this situation. The abundance of toric varieties in geometry explains why there are so many different interesting hypergeometric functions. We discuss in Sect. 5 below one prominent case where hypergeometric differential equations prove to be useful: the so-called mirror symmetry phenomenon for certain smooth toric varieties. Other recent applications that are beyond the scope of this article include the holonomic gradient method in algebraic statistics (Hibi et al. 2017) or Feynman integral computations in quantum field theory (Nasrollahpoursamami 2016; Klausen 2019; de la Cruz 2019; Feng et al. 2020).
As it turns out, it is exactly the type of differential equation satisfied by a function that determines whether the function should be considered as hypergeometric, since these force the right kind of recursions on the series. The most successful approach to generalizing hypergeometric differential equations to several variables was initiated by Gel'fand, Graev, Kapranov and Zelevinsky in the 1980s, and some of the features of this theory form the topic of this article. We start with some motivating examples.

Example 1.1 (The error function, part I) The error function erf(z) = (2/√π) ∫_0^z e^{-t²} dt cannot be evaluated in closed form, but it can be developed into a convergent Taylor series

erf(z) = (2z/√π) Σ_{i≥0} a_i (-z²)^i / i!,   (1)

where a_i = 1/(2i + 1), so that it is hypergeometric. ♦

The univariate hypergeometric functions are classified by the rational function a_{i+1}/a_i. More precisely, suppose that a_{i+1}/a_i = P(i)/Q(i) where P, Q ∈ C[i] are monic with P = ∏_{j=1}^{p} (i + α_j) and Q = ∏_{j=1}^{q} (i + β_j). Then the univariate hypergeometric function associated to P, Q is

pFq(α_1, . . . , α_p; β_1, . . . , β_q; z) = Σ_{i=0}^{∞} a_i z^i / i!.   (2)
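The definition above translates directly into a finite truncation that can be evaluated numerically. The following sketch is not part of the original text; it assumes the mpmath library and simply builds partial sums from the coefficient recursion a_{i+1}/a_i = P(i)/Q(i), comparing with mpmath's built-in evaluator.

```python
# Minimal sketch: build the series sum a_i z^i / i! term by term from the coefficient
# ratio a_{i+1}/a_i = P(i)/Q(i), with P(i) = prod(i + alpha_j), Q(i) = prod(i + beta_j),
# and compare with mpmath's built-in pFq.
from mpmath import mp, hyper

mp.dps = 30

def pFq_series(alphas, betas, z, terms=60):
    total, a, fact, zi = mp.mpf(0), mp.mpf(1), mp.mpf(1), mp.mpf(1)
    for i in range(terms):
        total += a * zi / fact
        P = mp.mpf(1)
        for al in alphas:
            P *= (i + al)
        Q = mp.mpf(1)
        for be in betas:
            Q *= (i + be)
        a *= P / Q          # a_{i+1} = a_i * P(i)/Q(i)
        zi *= z             # z^{i+1}
        fact *= (i + 1)     # (i+1)!
    return total

z = mp.mpf("0.3")
print(pFq_series([0.5], [1.5], z))   # truncated 1F1(1/2; 3/2; z)
print(hyper([0.5], [1.5], z))        # mpmath reference value
```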
Example 1.2 (The error function, part II) It follows from (1) that erf(z) is, up to the factor 2z/√π, equal to 1F1(1/2; 3/2; -z²), where

1F1(1/2; 3/2; z) = 1 + z/3 + z²/10 + z³/42 + z⁴/216 + z⁵/1320 + · · ·

is the Kummer confluent function which encodes all intrinsic analytic and combinatorial properties of erf(z) and, with θ_z = z d/dz, satisfies a hypergeometric differential equation in θ_z (the case p = q = 1, α_1 = 1/2, β_1 = 3/2 of the general equation (6) below). The particular shape of this equation will be used in the next section for a conversion process from univariate hypergeometric functions to A-hypergeometric ones. ♦

In the following example we document how hypergeometric functions arise naturally from differential forms with parameters. The computation was apparently already known to Kummer; compare (Brieskorn and Knörrer 1986) for details. In modern terms, it represents the birth of the notion of a variation of Hodge structures.

Example 1.3 (Elliptic curves) The equation v² = u(u - 1)(u - z) defines for each z ∈ C \ {0, 1} a smooth curve E_z over C. Its projective closure E̅_z ⊆ P²_C meets the line at infinity in a single point and is smooth as long as z ∉ {0, 1, ∞}. The natural projection from E_z to C via "forgetting v" is generically 2 : 1 and branches at 0, 1, z; the induced map E̅_z → P¹_C also branches at infinity. The differential 1-form ω_z := du/v is everywhere holomorphic and nowhere zero on E_z; the existence of this "form of the first kind" in Riemann's language makes the elliptic curve E_z a Calabi-Yau manifold in modern terms. The "form of the second kind" ω'_z := ω_z/(u - z) has a unique pole, at u = z, at which it is residue-free. Considering v = v(u, z) as dependent variable and writing ω_z, ω'_z in terms of u and z, one notes that ∂/∂z (ω_z) = (1/2) ω'_z, and a companion formula expresses ∂/∂z (ω'_z) in terms of ω_z and ω'_z up to an exact form (compare especially (Brieskorn and Knörrer 1986, Page 685)), the differential on the right being taken in u, v with z constant (and noting that on E one has d(u(u - 1)(u - z)) = 2v dv). Let λ ∈ H_1(E_z; Z) ≅ Z ⊕ Z and set I_1(λ) = ∫_λ ω_z and I_2(λ) = ∫_λ ω'_z, multivalued functions of z defined via elliptic integrals. The differential equations for ω_z, ω'_z imply (compare (Brieskorn and Knörrer 1986, Lemma 12)) that I_1(λ) and I_2(λ) satisfy a linear differential equation in z with singularities at 0, 1 and ∞. It is the special case 1 = 2a = 2b = c of the general Gauß hypergeometric differential equation, with solution space basis given by Gauß hypergeometric functions F_1(z), F_2(z) which have singularities at 0, ∞ and 1, ∞ respectively. Suppose λ_z, λ'_z are the standard basis (the minimal geodesics) for the first homology group of the torus E_z. Then two elementary (but non-trivial) computations reveal: (1) analytic continuation of the solution space basis F = (F_1, F_2)^T around the points z = 0 and z = 1 corresponds to multiplication of F by M_0 = (1, 0; -2, 1) and M_1 = (1, 2; 0, 1) respectively; (2) the family of the curves E_z over P¹_C \ {0, 1, ∞} (with homogeneous coordinates (z_0 : z_1)) is a bundle with fiber E_{z_1/z_0} that admits an Ehresmann connection. In particular, the cohomology classes of the fibers allow parallel transport. The induced vector bundle with fiber H_1(E_z; Z) = Z λ_z + Z λ'_z admits a monodromy action, lifting the loops around the points z = 0 and z = 1. Analysis of the geometry of π shows that this monodromy is given again by the actions of M_0 and M_1 respectively.
On the complement of the points 0, 1, ∞ this D_z-module is a vector bundle with a flat connection. The fibers of this vector bundle are the cohomology groups H¹(E_{z_1/z_0}; C). This vector bundle is actually a variation of pure Hodge structures of weight 1 where the (1, 0)-part is generated by the differential form ω_z, the variation of this (1, 0)-subbundle being described by (4).
It follows that, up to scalars, I_1(λ_z) = F_1(z), I_2(λ_z) = F_2(z). In particular, the ratio τ(z) = I_1(λ_z)/I_2(λ_z) is the modulus of the elliptic curve in the sense that the fiber over z is isomorphic to the quotient of C by Z + √-1 τ · Z. We will take up the discussion of Hodge structures associated to more general univariate hypergeometric operators (see Eq. (7) below) later in Sect. 4 (see page 33). ♦
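Returning briefly to Example 1.2, the stated relation between erf and the Kummer confluent function is easy to verify numerically. A minimal sketch (assuming mpmath; not part of the original text):

```python
# Numerical sanity check of the identity erf(z) = (2z/sqrt(pi)) * 1F1(1/2; 3/2; -z^2)
# from Example 1.2, using mpmath.
from mpmath import mp, erf, hyp1f1, sqrt, pi

mp.dps = 30
for z in (mp.mpf("0.1"), mp.mpf("1"), mp.mpf("2.5")):
    lhs = erf(z)
    rhs = 2 * z / sqrt(pi) * hyp1f1(mp.mpf(1) / 2, mp.mpf(3) / 2, -z**2)
    print(z, lhs, rhs, abs(lhs - rhs))
```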
From univariate to GKZ and back
In the 1980s, the Russian school around I.M. Gel'fand found a universal way of encoding univariate hypergeometric functions by way of certain systems of PDEs that arise from an integer matrix A and a complex parameter vector β. We start with the general definition and then explain how univariate hypergeometric functions arise as solutions of these D-modules.
Notation 1.4 In the first three sections of this article,
A = (a_1, . . . , a_n) ∈ Z^{d×n} denotes an integer matrix with d rows and n columns. In the last two sections, A will still be integer, but at least sometimes of size (d + 1) × (n + 1). ♦ For convenience, we place the following constraints on the matrix A; they make concise statements possible, or at least easier to make.
Convention 1.5 (Standard assumptions on A) With A as above, A spans a semigroup
NA := Σ_{j=1}^{n} N a_j ⊆ ZA inside Z^d. Throughout we assume that • the group ZA generated by A agrees with Z^d (A is full); • the semigroup NA contains no units besides 0 (A is pointed). We note that pointedness of A is equivalent to the existence of a group homomorphism from Z^d to Z that is positive on every a_j. ♦ We now give the definition of the main character of our story.
Definition 1.6 (A-hypergeometric system, Gel'fand et al. (1987)) Fix A ∈ Z^{d×n} as in Convention 1.5 and choose β ∈ C^d. Let D_A = C[x]⟨∂⟩ be the n-th Weyl algebra over C. Here x = x_1, . . . , x_n, ∂ = ∂_1, . . . , ∂_n, and ∂_j is identified with the partial differentiation operator ∂/∂x_j. We also let R_A = C[∂] denote the polynomial subring.
Letting θ_j stand for x_j ∂_j, the Euler operator E_i is E_i = Σ_{j=1}^{n} a_{i,j} θ_j. For each u ∈ Z^n in the kernel of A its box operator is □_u = ∂^{u_+} - ∂^{u_-}, where (u_+)_j = max{0, u_j} and (u_-)_j = max{0, -u_j}. The toric ideal I_A is the R_A-ideal generated by all □_u with u ∈ ker A. Finally, the hypergeometric ideal and module to A, β are H_A(β) := D_A · (I_A + ⟨E_1 - β_1, . . . , E_d - β_d⟩) and M_A(β) := D_A/H_A(β). ♦ Before we embark on a general discussion of these modules we wish to distinguish two special subclasses that will play a lead role.
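Before turning to these subclasses, here is a small computational illustration of Definition 1.6 (a sketch, not part of the original text): sympy is used only to find a lattice basis of ker A, and the Euler and box operators are printed as strings (with d1, d2, ... standing for ∂_1, ∂_2, ...). Note that a lattice basis of ker A produces only some box operators; by definition I_A is generated by □_u for all u ∈ ker A.

```python
# Sketch: Euler operators and box operators attached to a matrix A (Definition 1.6).
# This is bookkeeping only, not an actual D-module computation.
from functools import reduce
from math import gcd
from sympy import Matrix

def integer_kernel(A):
    """Primitive integer vectors spanning ker(A) over Q (enough for small examples)."""
    basis = []
    for v in A.nullspace():
        den = reduce(lambda a, b: a * b // gcd(a, b), [x.q for x in v], 1)
        w = [int(x * den) for x in v]
        g = reduce(gcd, [abs(entry) for entry in w])
        basis.append([entry // g for entry in w])
    return basis

def box_operator(u):
    plus  = "*".join("d%d^%d" % (j + 1,  u[j]) for j in range(len(u)) if u[j] > 0) or "1"
    minus = "*".join("d%d^%d" % (j + 1, -u[j]) for j in range(len(u)) if u[j] < 0) or "1"
    return plus + " - " + minus

A = Matrix([[1, 1, 1, 1],
            [0, 1, 3, 4]])          # the "0134" matrix that reappears in Example 2.13
d, n = A.shape
for i in range(d):
    print("E_%d =" % (i + 1), " + ".join(
        "%s*x%d*d%d" % (A[i, j], j + 1, j + 1) for j in range(n) if A[i, j] != 0))
for u in integer_kernel(A):
    print("u =", u, "  box_u =", box_operator(u))
```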
Definition 1.7
The matrix A is homogeneous if the following equivalent properties are satisfied: • there is a group homomorphism from Z^d to Z that sends every a_j to 1 ∈ Z; • the vector (1, 1, . . . , 1) is in the row span of A; • the ideal I_A is standard graded and thus defines a projective variety inside projective (n - 1)-space.
♦ Definition 1.8 The semigroup NA is saturated if NA agrees with the intersection of ZA with the cone R_{≥0}A spanned by the columns of A viewed as elements of R^d. ♦

In a series of articles, including Gel'fand et al. (1987, 1989, 1990), the basic theory of these linear PDEs was developed by the Gel'fand school. The initial motivation came from Aomoto type integrals depending on a complex parameter vector β ∈ C^d. It is not hard to verify that a hypergeometric function defined by such an integral is annihilated by both the Euler operators and the box operators (Gel'fand et al. 1990; Adolphson 1994), but it took a decade to arrive at the general formulation given here. It turns out that every univariate hypergeometric function arises as a solution of an A-hypergeometric system; we sketch next the steps to construct the proper A, β. The general hypergeometric univariate differential equation (6) is again a two-term equation in θ_z, equating a polynomial in θ_z applied to f with z times another polynomial in θ_z applied to f. It is elementary, but not always trivial, to bring a differential equation derived from a series expansion of a hypergeometric function into this shape; it may require changes of variables in z. Note that pFq(α; β; z) is a solution to the special form

[θ_z ∏_{j=1}^{q} (θ_z + β_j - 1) - z ∏_{j=1}^{p} (θ_z + α_j)] • f = 0,

as one can see from applying the two operators to the power series (2).
Let v and c be the vectors with entries v_j and c_j respectively. For 2F1 (equal to the function F_1 in Example 1.3), v = (1, 1, -1, -1) while for the Kummer confluent function 1F1, v = (1, 1, -1). Now, in order to manufacture A and β from Eq. (6), choose an integral matrix A such that Z · v = ker A and set β = A · c. Then the solutions of H_A(β) (in other words, the functions annihilated by every operator in this left ideal) "contain the solutions to (6)" in the following sense.

Example 1.9 (The GKZ-system to the Kummer confluent function) Consider the A-hypergeometric system given by the Euler operators E_1 - β_1, E_2 - β_2 and the box operator ∂_1∂_2 - ∂_3, for a matrix A (such as the one in Example 3.1 below) whose Z-kernel is spanned by v = (1, 1, -1), since v = (1, 1, -1) is the Z-kernel of A.
For the Gauß hypergeometric function, v = (1, 1, -1, -1) and the matrix A can be chosen accordingly, so that Z · v = ker A. Its Euler operators annihilate each solution, so every monomial x^u in the power series expansion of every solution to the A-hypergeometric system must satisfy the three conditions encoded in A · u = β. For a monomial x^u, we call A · u ∈ ZA the A-degree of x^u. Then, every solution u(x_1, x_2, x_3, x_4) can be written as a univariate function g in the degree-zero coordinate x^v, multiplied by a monomial of A-degree β. As in the previous example, one can use the fact that □_v kills u to show that g satisfies the Gauß hypergeometric differential equation. ♦ Of course, the kernel of A being Z · v means that A ∈ Z^{(n-1)×n} and I_A = (□_v) is principal. On the other hand, the A-hypergeometric paradigm also encodes multivariate hypergeometric series of higher rank (namely n - d) when d < n - 1. The solutions to H_A(β) use n variables and satisfy d homogeneities, so that effectively they are functions in n - d independent quantities. Some aspects of the translation between the two setups are discussed in the literature. The advantage of the A-hypergeometric point of view is that it allows hypergeometric functions to be studied with methods coming from algebraic geometry, commutative algebra, and the theory of torus actions. We describe in the following sections some of the advances and some of the new problems that have been created through these new techniques.
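As a concrete, hedged illustration of the recipe just described, the sketch below picks one possible integral matrix A with Z-kernel spanned by v = (1, 1, -1, -1) — an assumption for illustration, not necessarily the choice made in the text — verifies the kernel condition with sympy, and computes β = A · c symbolically.

```python
# Sketch: one possible choice of A for v = (1, 1, -1, -1); verify Z*v = ker A and
# compute beta = A*c. The specific matrix is an illustrative assumption only.
from sympy import Matrix, symbols

v = Matrix([1, 1, -1, -1])
A = Matrix([[1, 0, 0, 1],
            [0, 1, 0, 1],
            [0, 0, 1, -1]])
print(A * v)       # zero vector: Z*v is contained in ker A
print(A.rank())    # rank 3, so ker_Q A is one-dimensional; v is primitive, so ker_Z A = Z*v
c1, c2, c3, c4 = symbols("c1 c2 c3 c4")
c = Matrix([c1, c2, c3, c4])
print(A * c)       # beta = A*c, expressed in the entries of c
```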
Solutions
While we do not focus very much on solutions of A-hypergeometric systems in this survey, it is only fair to indicate to some extent the development of the understanding of their solution space over time. We also refer the reader to Remark 3.14 below, where we list and discuss some more references, after having explained issues like irregularity and slopes of hypergeometric systems.
Classically, functions were considered as hypergeometric if they could be developed into a hypergeometric series. They typically arose from specific differential equations and the hypergeometricity was a consequence of the recurrence relations that came out of the differential equation. While introducing A-hypergeometric systems, Gel'fand and his collaborators Graev, Kapranov and Zelevinsky developed a similar paradigm for the multi-variable homogeneous case, see Definition 1.7. With setup as in Sect. 2, so A · γ = β and L_A the kernel of A, the series

Φ_γ(x) = Σ_{u ∈ L_A} ∏_{j=1}^{n} x_j^{γ_j + u_j} / Γ(γ_j + u_j + 1)

formally is a solution of H_A(β). Assuming a certain amount of genericity for γ (such as non-resonance, see Definition 2.7), the article (Gel'fand et al. 1989) also finds that the regions of convergence of these series contain a suitable nonempty open cone. The series approach to solving differential equations of hypergeometric type was then taken further by Sturmfels, Saito and Takayama in their book Saito et al. (2000) through the technique of Gröbner bases. As part of this mechanism, triangulations arise. The connection between certain special solution series on one side and triangulations on the other appears already in Gel'fand et al. (1989). In the homogeneous normal case (see Definitions 1.7 and 1.8) it can be used to count the number of solutions as the simplicial volume of the convex hull of the origin and the columns of A; Saito et al. (2000) provides various generalizations.
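The Γ-series above can be evaluated numerically once a lattice basis of L_A and a starting exponent γ are fixed. The sketch below is an illustration under stated assumptions, not code from the text: it uses the Kummer-type matrix A = (1 0 1; 0 1 1), chooses γ so that the series is supported on one half of the lattice (and therefore converges), and checks the torus homogeneity Φ_γ(t · x) = t^β Φ_γ(x), which holds term by term.

```python
# Sketch: truncated Gamma-series for A = [[1,0,1],[0,1,1]], lattice spanned by
# u = (1, 1, -1). With gamma = (0, 0, 0.7) the terms with k < 0 vanish because
# 1/Gamma of a non-positive integer is zero, so the truncated sum is an honest
# convergent series. We verify the torus homogeneity phi(t.x) = t^beta * phi(x).
from mpmath import mp, rgamma, power, mpf

mp.dps = 25
gamma_vec = [mpf(0), mpf(0), mpf("0.7")]
beta = [gamma_vec[0] + gamma_vec[2], gamma_vec[1] + gamma_vec[2]]   # beta = A*gamma

def gamma_series(x, K=60):
    total = mpf(0)
    for k in range(K):                  # u = k*(1, 1, -1), k >= 0
        expo = [gamma_vec[0] + k, gamma_vec[1] + k, gamma_vec[2] - k]
        term = mpf(1)
        for xj, ej in zip(x, expo):
            term *= power(xj, ej) * rgamma(ej + 1)   # x^e / Gamma(e + 1)
        total += term
    return total

x  = [mpf("0.4"), mpf("0.5"), mpf("1.3")]
t  = [mpf("1.7"), mpf("0.6")]
tx = [x[0] * t[0], x[1] * t[1], x[2] * t[0] * t[1]]  # columns of A: (1,0), (0,1), (1,1)
print(gamma_series(tx))
print(power(t[0], beta[0]) * power(t[1], beta[1]) * gamma_series(x))   # should agree
```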
The first functions that were identified as hypergeometric were the integrals ∫ t^a (1 - t)^b (1 - zt)^c dt of Euler for the Gauß hypergeometric function. In Gel'fand et al. (1990), the authors consider integrals of products of complex powers of Laurent polynomials P_i(t), taken over suitable cycles σ and viewed as functions in the coefficients of the polynomials P_i. Here, σ is a k-cycle; in the Euler integrals σ is a curve. Gel'fand, Kapranov and Zelevinsky show that the above integrals are A-hypergeometric and under suitable conditions span the solution space. This approach generalizes Aomoto's integrals on complements of generic hyperplane arrangements (Aomoto 1977), a source of inspiration in the search for the right definition of A-hypergeometric systems. There has always been a strong trend towards the study of "special" hypergeometric systems, namely those for which the solution space is spanned by special classes of functions. This starts with Gauß' observation (Gauß 1973, page 125, Formel I.-V.) that some parameter choices in the Gauß hypergeometric differential equation yield algebraic solutions. Kummer (1836), Riemann, and Gauß (Gauß 1973, page 207) developed tools to search for other such instances. Then Schwarz constructed his famous list (Schwarz 1873) of the Euler-Gauß hypergeometric differential equations whose solution space is spanned by algebraic functions. The case of all pF_{p-1} was dealt with much later by Beukers and Heckman (1989) as part of their study of the monodromy. For irreducible such equations with real parameters α_1, . . . , α_p, β_1, . . . , β_{p-1}, set β_p = 1. Their exponentials on the unit circle are interlaced provided that the images of the α_i and β_j are encountered alternatingly on a trip around the unit circle. Then Beukers and Heckman (1989) shows that interlacing is equivalent to the solution space of the differential equation being spanned by algebraic functions. Other cases were characterized in Sasaki (1977), Cohen and Wolfart (1992) and Kato (1997, 2000). For saturated irreducible homogeneous A-hypergeometric systems M_A(β) with rational β, Beukers discovered the following fact about the number of algebraic solutions. Let C_{A,β} = (β + ZA) ∩ (R_{≥0}A) and consider it as a module over the semigroup NA. Let σ_A(β) be the number of generators of C_{A,β} over NA. Then, Beukers shows in Beukers (2010) that σ_A(β) never exceeds the volume of A, and equality σ_A(kβ) = vol(A) for all 1 ≤ k ≤ D coprime to the least common denominator D of β_1, . . . , β_d happens precisely when the solution space is spanned by algebraic functions. We remark that irreducibility is linked to non-resonance (compare Definition 2.7) by Beukers (2011), Saito (2011) and Schulze and Walther (2012).
The story for inhomogeneous (i.e., confluent) systems is more complicated, both theoretically and algorithmically. Since the solutions do not need to lie in the Nilsson ring, a systematic search in the sense of Saito et al. (2000) using Gröbner bases is not possible. Nonetheless, in Esterov and Takeuchi (2015) an idea of Adolphson (Adolphson 1994) is completed that casts solutions of non-resonant A-hypergeometric systems as integrals over rapid decay cycles. Here, γ is a continuous family of real d-dimensional topological cycles in the torus, on which the integrand decays rapidly at infinity in the sense of Hien (2009). This was also already studied in the context of integrals from hyperplane arrangements by Kimura et al. (1992).
Torus action and Euler-Koszul complex
In this section, we start exploring algebraic properties of the system H A (β) by introducing a homological tool from Matusevich et al. (2005) that has proved to be very successful: the Euler-Koszul complex. It has been used to study the number of solutions, their monodromy, and several other aspects. We refer to the start of Sect. 1.2 for basic notations and assumptions regarding A.
Torus action and A-grading
Given a D_A-module Q, its Fourier-Laplace transform Q̂ is equal to Q as a C-vector space and carries a D̂_A := C[ξ]⟨∂_ξ⟩-structure in which, up to a sign, ∂_j acts as multiplication by ξ_j and x_j acts as differentiation with respect to ξ_j. See (19) for a functorial description, and compare Sect. 4.4 for a related construct, the Fourier-Sato transform. The polynomial ring R_A is naturally identified with the coordinate ring C[ξ] of the Fourier-Laplace dual space Ĉ^n of C^n. The matrix A defines an algebraic action of the torus T with coordinates t = t_1, . . . , t_d on Ĉ^n by (η, ξ) ↦ η · ξ := (η^{a_1} ξ_1, . . . , η^{a_n} ξ_n).
This action induces a grading on R_A in which ∂_j (equivalently ξ_j) has degree a_j; we refer to this as the A-grading. There is a natural extension to D_A if one sets deg(x_j) = -a_j that makes every Euler operator A-graded of degree zero. The coordinate ring of the orbit closure through (1, . . . , 1) is the toric ring S_A := C[NA] ≅ R_A/I_A.

Remark 2.1 The semigroup ring S_A is normal (and hence Cohen-Macaulay by Hochster's Theorem 1 in Hochster (1972)) if and only if NA is saturated in the sense of Definition 1.8. ♦

We shall identify subsets of columns of A with subsets of column indices or submatrices. For such a subset τ ⊂ A one obtains in particular the torus orbit O^τ_A and its closure; the faces of the cone R_{≥0}A, the corresponding torus orbits, and the associated subsets of columns are then in one-to-one correspondence.
Toric category and Euler-Koszul technology
The following set of constructions and results is taken from Matusevich et al. (2005).
Note that E_i - β_i ∈ D_A can be viewed as a left D-linear endomorphism on A-graded D_A-modules M, sending a ZA-homogeneous y ∈ M to the product of y with E_i - β_i shifted by the A-degree of y, and that these morphisms commute with one another.
Definition 2.2 (Degrees and Euler-Koszul complex) Let N be an A-graded R_A-module and pick β ∈ C^d. Let tdeg_A(N) be the true A-degrees of N, given as the set of points A · u in ZA for which the graded component N_u is nonzero; the quasi-degrees qdeg_A(N) form the Zariski closure of tdeg_A(N) in C^d. The Euler-Koszul complex K_•(E - β; N) is the Koszul complex of the commuting endomorphisms E_1 - β_1, . . . , E_d - β_d on the left D_A-module D_A ⊗_{R_A} N; its homology modules are denoted H_{A,i}(N; β). A complex of this sort was used already by the Gel'fand school when S_A is a Cohen-Macaulay ring, and in Adolphson (1994, 1999) a modified version of the complex is discussed. ♦ The properties of the Euler-Koszul complex are most pleasant when N is in the category of toric modules. These are A-graded R_A-modules that have a finite composition series whose successive quotients are ZA-shifted quotients of S_A.
Remark 2.4
There is a generalization in Schulze and Walther (2009) to quasi-toric (i.e., certain non-Noetherian A-graded) modules that is useful for the interplay of Euler-Koszul complexes on local cohomology modules or on localizations such as C[ZA].
A different generalization (toral modules) is given and used in Dickenstein et al. (2010). ♦ By Matusevich et al. (2005), short exact sequences 0 −→ N −→ N −→ N −→ 0 of toric modules give rise to long exact sequences of Euler-Koszul homology modules that are all holonomic (see Definition 2.12). Moreover, vanishing of H A,0 (N ; β) implies vanishing of all H A,i (N ; β) and this vanishing is equivalent to −β not being in the quasi-degrees of N .
Remark 2.5 While Euler-Koszul complexes were initially defined for the study of the size of the solution space of A-hypergeometric systems (Matusevich et al. 2005), they have turned out to be remarkably successful when investigating other issues such as irregularity (see Sect. 3; Schulze and Walther (2008)), reducibility of the monodromy (Walther 2007;Fernández-Fernández 2019), comparisons with direct image functors (see the next subsection as well as (Schulze and Walther 2009;Steiner 2019a, b)), more general classes of binomial D-modules (Dickenstein et al. 2010;, the study of Horn hypergeometric systems (Dickenstein et al. 2010;, resonance (Schulze and Walther 2012), or Hodge theoretic aspects (see sections 4 and 5 as well as Reichelt 2014; Reichelt and Sevenheck 2015, 2017Reichelt and Walther 2018). ♦
Fourier-Laplace transformed GKZ-systems
We noted in Sect. 2.1 that the torus T acts on the Fourier-Laplace dual space C n . The orbit closure through (1, . . . , 1) is an affine toric variety X A := Spec(S A ). We identify its dense open orbit O A with the torus T. This gives rise to inclusions where j A is an open embedding and i A is a closed embedding. We set We denote the Fourier-Laplace transform of M A (β) by M A (β), and the corresponding quasi-coherent sheaves by M A (β) and M A (β) respectively. Using the definition of the Fourier-Laplace transform (12) one easily sees that M A (β) has support on the toric variety X A . In Schulze and Walther (2009) The relevant definition is the following one. The resonant parameters contain NA, but the strongly resonant ones usually do not. For example, if the semigroup NA is saturated, then NA ∩ sRes(A) = ∅. In particular, 0 is not an element of sRes (A) in this case, a fact that will become useful later. ♦ Example 2.9 Consider the matrix A = −1 0 1 2 1 1 1 1 the sets tdeg A (S A ) and sRes (A) and the cone R ≥0 A are sketched below. Since d = 2, fullness of A implies that we have qdeg A (S A ) = C 2 (Fig. 1). ♦ Theorem 2.10 Let A ∈ Z d×n be as above, then the following statements are equivalent (Gel'fand et al. 1987) where it was shown that β non-resonant gives the desired isomorphism. The precise computation in Theorem 2.10 comes from Schulze and Walther (2009). These results were refined and extended to the strongly resonant case in Steiner (2019a, b) where Steiner uses a combination of direct and proper direct image functors. ♦
Holonomicity, Rank, and Singular Locus
Suppose M = D_A/I is some left D_A-module, and M = D_{C^n}/I the associated sheaf of D_{C^n}-modules. Then its analytification M^an = D^an_{C^n}/D^an_{C^n} I is obtained by replacing D_{C^n} by the sheaf D^an_{C^n} of analytic linear differential operators on C^n, where now I ⊆ D_{C^n} ⊆ D^an_{C^n} generates a left ideal of analytic linear differential operators. Choose x ∈ C^n and denote stalks by subscripts. Consider the solution functor Sol_x(-) := Hom_{D^an_{C^n,x}}(-, O^an_{C^n,x}) from germs of left D^an_{A,x}-modules to vector spaces. If M^an = D^an_{C^n}/D^an_{C^n} I then η ∈ Sol_x(M^an) corresponds to the analytic solution η(1 + D^an_{C^n} I) near x. The dimension of the vector space of solutions to M at x is the rank of M at x. When we mean the rank at a generic point x we speak of just the rank of M.
Typically, Sol x (M an ) is infinitely generated. But for the select class of holonomic modules it is always finite.
Definition 2.12 Any principal D A -module (resp. D an C n -module) M (resp. M ) with generator m has a natural order filtration F ord • by R A -modules (resp. O C n -modules) where F ord k (M) (or, on the stalk, F ord k (M x )) is generated by the cosets of ∂ u with |u| ≤ k. The notion readily extends to any module with chosen set of generators and behaves well under analytification.
If M = D an C n is the sheaf of differential operators itself, the associated graded object is on the stalk isomorphic to the regular ring O x [ y] where y = y 1 , . . . , y n is the set of symbols to ∂ 1 , . . . , ∂ n . For any M (resp. M ), the associated graded object gr F (−) becomes a module over gr F (D A ) (resp. gr F (D an C n )). The module is holonomic if the associated graded module has Krull dimension n. ♦ It was shown in Gel'fand et al. (1987,1989) that many, and then in Adolphson (1994) that in fact all A-hypergeometric systems are holonomic; an elementary proof is given in . Holonomicity was then extended in Matusevich et al. (2005) and Schulze and Walther (2009) to all Euler-Koszul homology modules derived from quasi-toric input.
By Sato et al. (1973) and Gabber (1981), the characteristic variety is always involutive and has all components of dimension n or larger. This implies that holonomic modules have finite length and satisfy a Krull-Remak-Schmidt theorem (have welldefined sets of simple composition factors with multiplicity taken into account). Moreover, the quantity agrees with the rank of M in a generic point x ∈ C n by the Cauchy-Kovalevskaya-Kashiwara Theorem (Saito et al. 2000, p. 37).
For many important A-hypergeometric systems, a search for explicit natural power series solutions leads to rank many independent solutions, compare (Gel'fand et al. 1987; Saito et al. 2000). It was claimed in Gel'fand et al. (1989) that the rank of M_A(β) always equals vol(A), where vol(A) is the (simplicial) volume of A, a purely combinatorial quantity given by the quotient of the measure of the convex hull of the origin and the columns of A, divided by the measure of the standard simplex. Adolphson (Adolphson 1994) pointed at a possible flaw in the argument, and Sturmfels and Takayama (1998) eventually provided a counter-example that is worth looking at.
Example 2.13 (The 0134-curve, Sturmfels and Takayama 1998) Let A be the matrix with rows (1, 1, 1, 1) and (0, 1, 3, 4).
The volume of A is 4, equal to the volume of the interval (0, 4) inside R. (Since the interval is 1-dimensional, usual volume-length-and simplicial volume agree). The toric ideal I A is homogeneous here, defining the pinched rational normal space curve. In Saito et al. (2000) it is shown that series solution methods based on weight vectors and the computation of certain initial ideals of H A (β) always lead to volume many independent series solutions, as long as A is homogeneous. This generalized the naïve series written out in Gel'fand et al. (1987Gel'fand et al. ( , 1989 to the case where logarithmic terms can appear in the series solutions. For almost all β, the rank of M A (β) in a generic point is 4, spanned by functions where the dots indicate a (usually infinite) series of terms ordered by the weight vector (0, 1, 2, 0). (The particular weight is immaterial, but it needs to be sufficiently generic; this one is so for this example). If one now deforms β into (1, 2) then the four independent solutions above degenerate into a linearly dependent set of rank three. On the other hand, the functions are new, not-deforming (in β) solutions to M A ((1, 2)). It follows that the "rank jumps at β = (1, 2)", from 4 to 5 = 4 − 1 + 2. ♦ Shortly after the discovery of rank jumps, the case of homogeneous monomial curves was completely discussed in Cattani et al. (1999): the "holes" of NA (the finitely many elements of (R ≥0 A ∩ ZA) NA) are exactly the rank-jumping parameters, and each rank jump is by 1. It was then shown in Matusevich et al. (2005) that as β varies, the rank of M A (β) is upper-semicontinuous, so that it can only go up under specialization (formation of a limit) of β. In fact, (Matusevich et al. 2005, Cor. 9.3) shows that the exceptional set E A of points where rank exceeds volume is Zariski closed and equals a certain subspace arrangement. To understand the origins of E A one must view the local cohomology modules H i ∂ (S A ) with i < d as quasi-toric modules; their elements are then witnesses to the failure of S A to be Cohen-Macaulay, while the union of their quasi-degrees forms the exceptional arrangement. The fact, also observed in Matusevich et al. (2005), that this arrangement has codimension at least two explains why finding rank-jumps at all turned out to be very hard and involved extensive computer experiments in Sturmfels and Takayama (1998).
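The description of rank-jumping parameters as "holes" of NA can be made concrete for this example. The sketch below (illustrative only, not from the text) enumerates the semigroup generated by the columns (1,0), (1,1), (1,3), (1,4) level by level and lists the lattice points of the cone that are missed; for the 0134-curve the only hole, and hence the only rank-jumping parameter, is (1, 2).

```python
# Sketch: enumerate the semigroup NA for A = [[1,1,1,1],[0,1,3,4]] up to a fixed
# level and list the "holes" (points of cone(A) intersected with ZA not in NA);
# per the discussion above, these are exactly the rank-jumping parameters here.
cols = [(1, 0), (1, 1), (1, 3), (1, 4)]
max_level = 6                      # first coordinate = homogeneity level

semigroup = {(0, 0)}
for _ in range(max_level):         # after k rounds: all sums of at most k generators
    semigroup |= {(p[0] + c[0], p[1] + c[1]) for p in semigroup for c in cols}

holes = []
for k in range(1, max_level + 1):
    for m in range(0, 4 * k + 1):  # cone condition at level k: 0 <= m <= 4k
        if (k, m) not in semigroup:
            holes.append((k, m))
print(holes)                       # expect: [(1, 2)]
```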
Example 2.14 (Continuation of Example 2.13) In Example 2.13, d = 2 and so E A can be at most a finite set of isolated points. The local cohomology H 0 ∂ (S A ) is zero and Each degree component in S A and its monomial localizations are 1-dimensional C-spaces; we use this to depict these localizations in theČech complex by dots as follows ( Fig. 2): In this picture, the blue area indicates the directions in which the semigroup in question extends, black dots are the elements of A and the red dot indicates a "missing" element in the semigroup. Taking cohomology "dot-by-dot" one identifies the local It is remarkable that the components of the H 1 m (S A )-cocycle are precisely the "new" solutions that appear at β = (1, 2) that do not deform to other β. While this is not always literally true, a weaker form is typical and an explanation of this phenomenon involving Laurent polynomials is given in Berkesch et al. (2018) and Berkesch-Zamaere et al. (2016), especially for d = 2. Compare also Remark 3.14. ♦ In Berkesch (2011) it is proved that there is a purely combinatorial recipe (involving the relative positioning of β to the degrees of NA) that determines the rank of M A (β). The procedure to arrive at the exact rank is very involved.
Remark 2.15
The only known closed rank formula is for non-jumping parameters, where the rank is just the volume. 2 The best known general bound is exponential (Saito et al. 2000), in the sense that the rank of M A (β) is bounded above by 2 2d vol (A). It was shown in Matusevich and Walther (2007) (2013) to the existence of a ∈ R greater than 1 and families of matrices A (d) of size d × n d and with parameters β (d) such that the rank of M A (d) (β (d) ) exceeds a d vol (A). It would be interesting to know how far the bound from Saito et al. (2000) is from the the worst examples that exist. ♦ There is an open subset of C n on which the solutions for M A (β) form a vector bundle of rank rk(M A (β)). The complement (the singular locus of the module) of this set is algebraic, cut out by the A-discriminant, a product of individual discriminants to polynomial systems, one for each face of the cone over A. For a very detailed discussion on this, see the books (Gel'fand et al. 1994) and Saito et al. (2000). If one moves from general to special x, rank can go down due to singularities in the solutions. In contrast to rank in generic points, rank at special x is not known to be upper-semicontinuous. For the case of A as in Example 2.13, this is worked out in Walther (2018), which discusses the more general question of stratifying C n by the restriction diagrams, which encode the behavior of the D-module theoretic (derived) pull-back to x ∈ C n ; the elementary pull-back just counts rank at x.
Better behaved systems and contiguity
For each β = a_j + β′ there is a natural contiguity morphism of degree a_j, induced by right multiplication with ∂_j on S_A through the Euler-Koszul functor. The existence of these morphisms is a consequence of the fact that right multiplication by ∂_j is A-graded of degree a_j and interacts with the Euler operators only through a shift of β; this is a special case of Eq. (14) when y = ∂_j. Since elements in I_A act as zero on S_A, any composition of contiguity morphisms of fixed total degree γ ∈ NA acts in the same way as a single morphism of that degree. Contiguity morphisms have turned out to be a very useful tool in the study of A-hypergeometric systems since for k ≫ 0, c_{β+ka_j, β+(k+1)a_j} and c_{β-(k+1)a_j, β-ka_j} are isomorphisms (and one can determine explicit bounds in terms of A, β for k being sufficiently big). Contiguity maps have been used in Saito (2001) to identify combinatorially the isomorphism classes of A-hypergeometric systems, in Walther (2007) to study irreducibility and holonomic duality of M_A(β) as a D_A-module, and in Reichelt (2014), Reichelt and Sevenheck (2020) for investigating the Hodge module structure on certain M_A(β). For a study of Gauß hypergeometric functions via contiguity operators see (Beukers 2007).
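On solutions, contiguity in a single variable specializes to the classical contiguous relations of Gauß; a simple instance is the derivative relation for 2F1, which can be checked numerically. The sketch below is only an illustration (the parameter values are arbitrary assumptions) and is not taken from the text.

```python
# Numerical check (a sketch) of the classical derivative/contiguity relation
# d/dz 2F1(a, b; c; z) = (a*b/c) * 2F1(a+1, b+1; c+1; z).
from mpmath import mp, hyp2f1, diff, mpf

mp.dps = 25
a, b, c = mpf("0.3"), mpf("1.2"), mpf("2.7")
z = mpf("0.35")
lhs = diff(lambda w: hyp2f1(a, b, c, w), z)            # numerical derivative in z
rhs = a * b / c * hyp2f1(a + 1, b + 1, c + 1, z)
print(lhs, rhs)
```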
On the level of solutions, a map in the reverse direction is induced that literally takes the derivative by x j . For certain applications in mirror symmetry it is desirable to know that every contiguity operator induces an isomorphism on (the solutions of) M A (β). In case one has a generic β, this is automatic. But in practical situations it is more likely that β is integer, or at least resonant. In the present context, resonance encapsulates the lack of genericity of a parameter β to admit contiguity isomorphisms (in both directions). Resonance and contiguity operators were refined and used in Adolphson (1994), Saito (2001), Okuyama (2006, Cattani et al. (2011), Schulze and Walther (2012) and Beukers (2011Beukers ( , 2016 to study reducibility and general structure of M A (β). Now consider the quasi-toric module F A equal to the ring C[ZA]. It arises as the localization of S A at all ∂ j , or alternatively at one monomial whose degree is in the interior of R ≥0 A. By definition, multiplication by ∂ j on F A is an isomorphism, and therefore the same applies to the generalized A-hypergeometric system that arises as the Euler-Koszul homology H A,0 (F A ; β), for every β. Since F A is a maximal Cohen-Macaulay S A -module, there is no other Euler-Koszul homology (Matusevich et al. 2005;Schulze and Walther 2009).
This module H A,0 (F A ; β) was studied in Paul Horja (2006, 2013) and termed better behaved GKZ-system. A variant of these systems, considered in Mochizuki (2015a), can be described as the Euler-Koszul homology H A,0 In Sect. 4 below we will discuss Hodge theoretic ramifications of the main result of Mochizuki (2015a).
Irregularity
In this section we discuss regularity issues of hypergeometric D-modules; this is a multi-variate form of essential singularities. We start with discussing more general filtrations than the one by order. A combinatorial object can be derived from this process that governs the convergence behavior of solutions to A-hypergeometric systems near coordinate hyperplanes. Via results of Laurent and Mebkhout we discuss a generalized classical Fuchs criterion this gives information on the irregular solutions.
The Fuchs criterion and regularity
A univariate function f (t), analytic on a small open disk around t = 0 but singular at t = 0, can behave in two essentially different ways: the growth of f (t) as t → 0 could be bounded by a polynomial, or not. In the former case, f has a pole, in the latter an essential singularity. If f arises as solution to a differential equation we say 0 is a regular singular point of the equation in the first, and an irregular singular point in the second case.
For linear differential equations P • f (z) = 0 in the local parameter z, Fuchs gave the following practical procedure for determining regularity of the origin. If O 0 := C{z} is the ring of convergent power series near z = 0, write P as a linear combination m being the order of P, and p k = ∞ i=n k c k,i z i ∈ O 0 with c k,n k = 0 indicating the lowest order term of p k (z). Writing ∂ z for differentiation by z, for a monomial z r ∂ s z we use the two weights Then plot for each k the weights of c k,n k ∂ k z in the (F, V )-plane (Fig. 3): The shaded region (the Fuchs polygon of the operator) is the lower left convex hull of the (finitely many) points so obtained. It is, by definition, stable under shifts Two cases arise, indicated in the picture: (1) The Fuchs polygon has one vertex, in the upper right corner (left).
(2) There are two or more corners. This is tantamount to the boundary of the shaded region having one or more finite boundary segments with slopes different from 0 and −∞ (right).
Fuchs' criterion (see Gray 1984;Ince 1944 for a detailed account) states that P has a regular singularity at the origin if and only if the Fuchs polygon of P has no slopes. Regular differential equations are much better behaved than irregular ones, both theoretically and practically. On the theoretic side, they form an ingredient of the Riemann-Hilbert correspondence that links regular holonomic D-modules to perverse sheaves, which for irreducible modules restricts to a bijection with intersection cohomology complexes; on the practical side regular differential equations are amenable to the Frobenius method since their solutions come from the Nilsson ring (Kashiwara 1984;Mebkhout 1980Mebkhout , 1984Saito et al. 2000).
In higher dimensions, the concept of regularity is more difficult. One way of defining it proceeds via pullbacks: the D-module M on the analytic space C n is regular if and only if the pullback of M along any analytic morphism ι : * −→ C n , where * is a punctured disk, leads to a module with regular singularities at the origin on * . The problem is that there are many such morphisms to be tested. Laurent (1987) and later with Mebkhout (1999) found a way to translate regularity in more than one variable into a condition that resembles the Fuchs criterion. For that, we need to discuss filtrations and initial ideals on D-modules in more detail.
Initial ideals and triangulations
A general technique to understand (non-commutative) algebraic structures is the reduction to a simpler (commutative) situation by applying a grading with respect to a filtration. For D-modules, the filtration by the order of differential operators leads to the characteristic variety which carries various bits of information on the D-module. The process of grading is rather cumbersome but can be performed algorithmically in various situations using Gröbner basis methods. The simplest case is that of a generic weight vector because the resulting graded ideal will be monomial; this invites the use of techniques developed in Saito et al. (2000) and .
So, let L = (L_1, . . . , L_n) ∈ Q^n be a generic weight vector on R_A; genericity is needed to assure that gr_L(I_A) is a monomial ideal. (In R^n there are weights L that are generic for all ideals of R_A simultaneously. There is no rational weight with this property, but for a finite number of ideals a Zariski open set of the rational weight space consists of generic weights.)

Example 3.1 For the matrix A = (1 0 1; 0 1 1), with columns indicated with solid bullets, the accompanying picture sketches the possible initial ideals that arise from the weights in the family L^t = (1, 1, t), t > 0. Note that a_1 = a_1/L^t_1 and a_2 = a_2/L^t_2 for all t. Plotted with hollow bullets are the points a_3/L^t_3 for the indicated choices of t.
♦ Definition 3.2 Associated to the generic weight L and the R A -ideal I is an initial simplicial complex L I that arises as follows. A collection τ of indices contained in [n] forms a face of L I if and only if there is no monomial in gr L (I ) whose support is precisely τ . Put another way, L I is the simplicial complex whose Stanley-Reisner ideal is the radical of gr L (I ). If For example, suppose I A is the principal ideal generated by ∂ 1 ∂ 2 ∂ 3 − ∂ 4 ∂ 2 5 . Then I A admits two distinct monomial initial ideals whose corresponding simplicial complexes are (Fig. 4).
The generic weight L also induces a triangulation of [n] as follows. Consider the points = {(a j , L j ) ∈ R d × R} 1≤ j≤n . The faces of the triangulation are those faces of the cone R ≥0 of that are visible from the point (0, −∞); these are exactly those faces whose outer normal vectors have negative last component. A triangulation of [n] is regular (or coherent) if it arises this way for some L. This property is strongly tied to A, and not all triangulations of A have to be regular (Fig. 5).
Fig. 5 A non-regular triangulation of a triangle
The collection of regular triangulations of A turns out to be in (the obvious) bijection with the initial complexes of A. There is a third combinatorial object associated to L and A, namely the collection S(gr_L(I_A)) of standard pairs of gr_L(I_A), introduced in Sturmfels et al. (1995). A standard pair (∂^b, σ) of a monomial ideal I consists of a monomial ∂^b in the variables outside σ and a subset σ of [n] such that no monomial of the form ∂^b ∂^u with supp(u) ⊆ σ lies in I, and (∂^b, σ) is maximal with these properties; for I = ⟨∂_4 ∂_5²⟩ the standard pairs are (1, {1, 2, 3, 4}), (∂_5, {1, 2, 3, 4}), and (1, {1, 2, 3, 5}). The standard pairs yield immediately a decomposition into irreducible ideals by intersecting, for each standard pair, the ideal generated by suitable powers of the excluded variables; for I as above we obtain I = (∂_5) ∩ (∂_5²) ∩ (∂_4). The standard pairs hence contain all information needed to recover I and its triangulations. In particular, the facets of the initial complex are precisely the subsets σ that are listed in the standard pairs.

Example 3.3 We consider Example 3.1 from this new angle. We fix the weights L_1 = L_2 = 1 and vary the weight t = L_3. For L_3 < 2, gr_L I_A = ⟨∂_1∂_2⟩ and the facets of the initial complex are {1, 3} and {2, 3}. We could interpret this as the complex of faces, not containing 0, of the convex hull of 0 and the columns of A. Similarly we obtain the facet {1, 2} for L_3 > 2, which can be read as a convex hull as before, but with a_3 not in the picture. For L_3 = 2, gr_L I_A = I_A is prime and the initial complex should now equal {1, 2, 3}: we would like to view a_3 as "collinear with a_1, a_2" in this case. This is the topic of the next section; the following is a teaser: in order to view the three cases from a unifying angle, note that scaling a weight component L_i by λ and "scaling the degree a_i of ∂_i" by 1/λ have the same effect on the initial terms (and also on the face complex). One is thus led to replace a_3 by a_3/L_3; then the resulting convex hull yields the correct face complex in all three cases at once.
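The jump in the initial ideal at t = 2 that drives Example 3.3 is easy to reproduce mechanically: one only has to compare weighted degrees of the two terms of the binomial ∂_1∂_2 - ∂_3. A small sketch (illustrative only, using the max-weight convention of the text):

```python
# Sketch: initial form of the binomial d1*d2 - d3 (the toric ideal of Example 3.1)
# under the weight family L_t = (1, 1, t); the initial form jumps at t = 2.
def initial_form(terms, L):
    """terms: list of (coefficient, exponent-tuple); keep terms of maximal L-weight."""
    weights = [sum(l * e for l, e in zip(L, expo)) for _, expo in terms]
    top = max(weights)
    return [t for t, w in zip(terms, weights) if w == top]

binomial = [(1, (1, 1, 0)), (-1, (0, 0, 1))]        # d1*d2 - d3
for t in (1, 2, 3):
    print(t, initial_form(binomial, (1, 1, t)))
# t=1: only d1*d2 survives; t=2: both terms survive; t=3: only d3 survives
```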
Slopes and the (A, L)-umbrella
In case of a D A -module M = D A /J , J an ideal in D A , we will want to grade with respect to a filtration on D A defined by (and identified with) a weight vector L ∈ Q d × Q d for the variables x 1 , . . . , x n , ∂ 1 , . . . , ∂ n . We denote the L-leading term of P ∈ D A by σ L (P) and call it the L-symbol.
Convention 3.4
We assume that there is a positive real constant c such that This hypothesis has the effect that is a (commutative) polynomial ring whose spectrum is naturally identified with the total space of the cotangent bundle T Smith (2001).
We record the special case Our plan is to connect this construction to analytic information as follows. Suppose X ⊆ X = C n,an is an analytic subspace with a smooth point x ∈ X . Then in suitable local coordinates at x one can write X as the zero set of the first n − dim X coordinates on X . In the stalk at x consider the grading of the D-module M by the filtrations induced by the weights L p/q := pF + qV where as always F is the order filtration and V is the V -filtration along X (compare Sect. 3.1): (There is an obvious identification of graded objects for L p/q and L p /q when p/q = p /q ).
Definition 3.5 With notation as just introduced, This definition is taken from Laurent (1987). By Laurent and Mebkhout (1999), Laurent's algebraic slopes constructed from filtrations agree with Mebkhout's transcendental slopes given as jumps of the Gevrey filtration on the irregularity sheaf and hence provide a measure of growth for the solutions of M. The central question in this section is to study the behavior of ChV L (M A (β)) under changes of L and β.
We illustrate the link of slopes of M A (β) with Fuchs' criterion in an example.
Example 3.6 It is clear from the series expansion (2) that the Kummer confluent series 1F1(a; b; z) is analytic at every finite z for all a, b. On the other hand, it follows from the integral definition of the error function that at z = ∞ there is an essential singularity (and algebraic changes of coordinates do not eradicate essential singularities). If we denote -1/z by u, then the differential operator θ_z(θ_z + 1/2) - z(θ_z - 1/2) turns into u θ_u(θ_u - 1/2) - (θ_u + 1/2) for the resulting inverse Kummer confluent series. The Fuchs polygons are shown in Fig. 6. So, the Kummer series has (of course) regular "singularities" at the origin, while the inverse Kummer series has a slope of -1. This reflects the fact that, up to multiplication by a function bounded by a polynomial, the Kummer series at 0 behaves like exp(z^0), while the inverse Kummer series behaves like exp(z^{-1}): the Kummer series grows (up to polynomially bounded factors) near ∞ like exp(z).
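The growth statement at the end of this paragraph can be observed numerically: by the standard asymptotics 1F1(a; b; z) ~ (Γ(b)/Γ(a)) e^z z^{a-b} for z → +∞, the quantity z e^{-z} 1F1(1/2; 3/2; z) tends to the constant 1/2. A quick sketch (illustrative only, using mpmath):

```python
# Numerical illustration: the Kummer series 1F1(1/2; 3/2; z) grows like exp(z) up to
# polynomially bounded factors; z * exp(-z) * 1F1(1/2; 3/2; z) approaches 1/2.
from mpmath import mp, hyp1f1, exp, mpf

mp.dps = 30
for z in (mpf(10), mpf(50), mpf(200)):
    print(z, z * exp(-z) * hyp1f1(mpf(1) / 2, mpf(3) / 2, z))
```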
For the translation to the A-hypergeometric setting we can use in both cases the matrix A of Example 3.1 with v = (1, 1, -1). The toric ideal is then generated by the box operator □_v = ∂_1∂_2 - ∂_3. We know from Example 3.1 that for the family L^t = (1, 1, t) there is a jump at t = 2 in the L^t-graded ideal of I_A since at that moment □_v becomes L-homogeneous. It turns out that the L^t-characteristic variety of H_A(β) for any β also changes at t = 2, so that M_A(β) has a slope of 2 along the hyperplane x_3 = 0.
The correspondence between these numbers is encapsulated by the equation 1 where s F is the slope of the Fuchs polygon (and indicates exponential growth behavior with exponent s F ), and s L is the slope at which Laurent's filtrations jump. ♦ We now discuss "regular triangulations to non-monomial graded toric ideals" coming from non-generic weight vectors in greater generality, the details being taken from Schulze and Walther (2008). For the transition, suppose J is generated by elements inside R A ⊆ D A . Then one can restrict the weight to L ∂ on R A and compute gr L ∂ (J ∩ R A ) in the commutative situation of Sect. 3.2. Note that then gr L (J ) = gr L (D A ) · gr L ∂ (J ∩ R A ). Specifically, we write Let L = (L 1 , . . . , L n ) ∈ Q n be any weight vector on R A . As L may have zero components, possible division (as suggested in Example 3.3) by L i = 0 forces us into Kummer (right) work in a projective space: In P d Q , any two distinct points a, b ∈ P d Q are joined by two line segments. If the hyperplane H in P d Q contains neither a nor b, one may define the convex hull of a, b as the line segment not intersecting H . Similarly one can define the convex hull conv H (S) of a subset S ⊆ P d Q disjoint from H as the convex hull of S in the affine So, the faces of L A × {1} ⊆ A d+1 Q are in bijection with those of the cone spanned by it from the origin in A d+1 Q that have outer normal vector "pointing down", and this is the same cone as the one spanned by the appropriate collection inside {(a j , L(a j )} n 1 . ♦ Just like L A in the monomial case, L A corresponds to minimal prime ideals of gr L (I A ). More precisely the following holds.
In particular, the (A, L)-umbrella encodes the geometry of S L A .
L-characteristic varieties
Equipped with the knowledge from the previous section, we can return to the question of describing For a weight L ∈ Q n × Q n , the L-symbols σ L (E i ) span the tangent spaces of every torus orbit and hence impose the conormal condition to O τ A for all τ ∈ L A (compare Gel'fand et al. 1989;Schulze and Walther 2008). The inclusion appears already in Gel'fand et al. (1989) and Adolphson (1994) and shows that ChV L (M A (β)) must be contained in the union of the closures of all these conormals. One might hope that (16) is always an equality; this would simplify the problem of describing ChV L (M A (β)). The right hand side is the fake initial ideal and equality holds if I L A is Cohen-Macaulay (Saito et al. 2000, Thm. 4.3.8). Unfortunately, this inclusion can be strict in general as the following example shows.
♦ Notwithstanding this example, the following is true.
Theorem 3.12 The L-characteristic variety of the A-hypergeometric system is where for τ ∈ L A , we denote by ϒ τ A ⊆ T * C n the conormal to the orbit O τ A ⊆ C n , and where we use the identification T * C n ∼ = T * C n .
By Theorem 3.12 the two ideals in (16) differ along minimal components only by their multiplicities. Taking into account this information turns the L-characteristic ). This number is bounded from below by the intersection multiplicity μ L,τ A between the Euler variety Var(gr L (E 1 , . . . , E d )) ⊆ C n and the component of gr L (I A ) along ϒ τ A . Moreover, μ L,τ A,0 (β) agrees with this lower bound for a Zariski-open set of parameters β, but may exceed it for special values of β; see Schulze and Walther (2008).
Using this notation, with volume functions normalized such that they return unity on the standard simplex, In particular, this formula proves that the slopes of the D-module M A (β) are determined entirely by combinatorics of A L , since this is true for their L-characteristic varieties. (For the empty face τ , if NA is saturated, this simplifies to the formula already in Gel'fand et al. (1989) that rank is then equal to the volume of A).
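The classical special case just mentioned can be made explicit. The following display is a reminder supplied here, stated under the volume normalization used above, and is not quoted verbatim from the source:

$$\operatorname{rank} M_A(\beta) \;=\; \operatorname{vol}\bigl(\operatorname{conv}(A)\bigr)\quad\text{for every }\beta,\ \text{provided }A\text{ is homogeneous and }\mathbb{N}A\text{ is saturated};$$

for arbitrary A the same equality holds for generic β, and at special parameters the rank can only increase.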
Remark 3.13
If an A-hypergeometric system is homogeneous, it can have no slopes since it is regular holonomic (Hotta 1998). On the other hand, an inhomogeneous H A (β) has at least one slope along the subspace cut out by the variables corresponding to any of the faces of the umbrella of A that do not touch the boundary of the umbrella, as moving it will eventually change the shape of the umbrella (compare Schulze and Walther 2008). By Laurent's results, regularity of M A (β) is hence equivalent to homogeneity and independent of β. ♦ Remark 3.14 A natural question is whether one can find a stratification of the parameter space such that rank is constant on each stratum and whether one can give a family of parametric solutions that deform analytically to rank-many solutions on the chosen stratum. This is indeed so; the details are worked out in Berkesch et al. (2014, 2018) and Berkesch-Zamaere et al. (2016). For confluent systems, when the Nilsson ring does not contain all solutions, the approach of Gevrey series can be used. Early focus was on the irregularity sheaves of Mebkhout introduced in Mebkhout (1990). In a series of papers, Fernández-Fernández (2010) and Castro-Jiménez (2011a, b, 2012) study the theory and construction of solutions. Another point of interest is asymptotics. In Castro-Jiménez and Granger (2015), it is worked out how this plays out in the d = 1 case (A is a single row matrix): Gevrey series solutions along the singular locus of the system appear as asymptotes of holomorphic solutions along suitable paths of integration. A similar result for modified systems is proved in .
A related problem is that of determining the monodromy of A-hypergeometric systems. This turns out to be an extraordinarily difficult problem, and only limited information is available at this point. We mention the work of Ando et al. (2015) that determines the monodromy at infinity for confluent (inhomogeneous) systems, building on Takeuchi (2010) for the homogeneous case. Hien's rapid decay cycles (Hien 2009) make an entry here via Esterov and Takeuchi (2015), replacing the classical integral representations of Gel'fand et al. ♦
Hodge theory of GKZ-systems
In this section we show that certain GKZ-systems carry a mixed Hodge module structure in the sense of Saito (1990) and investigate some consequences of this fact. Since the definition of mixed Hodge modules (MHM) is rather involved, we give here a simplified version which is enough for our purpose. Assuming the reader to be at least somewhat acquainted with the Riemann-Hilbert correspondence, we start with a brief outline of the cornerstones of the theory of mixed Hodge modules. We then give (certain) A-hypergeometric systems an interpretation as Gauß-Manin systems and use it to define an MHM structure on these A-hypergeometric systems. We then discuss two induced filtrations on these GKZ-systems.
Section setup, and basics on mixed Hodge modules
An algebraic mixed Hodge module on a smooth algebraic variety X is an algebraic, regular holonomic D X -module M together with an increasing filtration by coherent O X -modules F There is a direct image functor as well, but its definition is more technical because the chain rule cannot be reversed in general. Again, one proceeds by defining a naïve version (neither left nor right exact) as in Hotta et al. (2008, Sect. 1.3), from which a derived functor f + can be defined; this functor is denoted f in Hotta et al. (2008, p. 40). Conjugation by the duality functor leads to the exceptional direct image functor f † , which is denoted f ! in Hotta et al. (2008, Sect. 3.2). ♦ Due to the groundbreaking work of Saito (1988, 1990), for each morphism f : X −→ Y there are lifts of the functors f + , f † , f + , f † to the category of mixed Hodge modules which we denote by The proof of the existence of these functors on MHM requires various rather deep results from Hodge theory (such as the existence of a Hodge structure on the cohomology of a degenerating VHS on a curve which was established by Zucker using L 2 -cohomology), the theory of filtered D-modules, compatibility properties of V- and F-filtrations (also known as strict specializability), as well as a tricky formalism of induced modules.
Our starting point is Sect. 2.3, where we have seen that if β / ∈ sRes(A) then (A). Now in order for its (inverse) Fourier-Laplace transform to be a mixed Hodge module, the GKZ-system M A (β) should of course in particular be regular holonomic. By Remark 3.13 and Definition 1.7, this property is equivalent to I A being homogeneous. In other words, for the GKZ-system to have any hope of being an MHM module we must require that the vector (1, 1, . . . , 1) is in the row span of A. Fortuitously, this requirement on A also provides the solution to the translation of MHM structures from M A (β) to M A (β). Indeed, while the (inverse) Fourier-Laplace transform does not in general preserve mixed Hodge modules, we shall employ a Radon transform (which only makes sense in the homogeneous case) in order to construct a mixed Hodge module structure on the GKZ-system via M A (β).
In order to simplify the statement of some formulas in the remainder of the article, we make now the following convention on A.
Convention 4.2 From now on, A is in Z (d+1)×(n+1) and we assume that A is homogeneous, full, pointed, and generates a saturated semigroup. ♦
Since a GKZ-system derived from a pair (A, β) is unchanged under an invertible Z-linear transformation of the rows we can moreover assume that the matrix A has the following shape where B ∈ Z d×n is full but is not necessarily pointed or homogeneous. Notice also that if NA is saturated, then so is NB; however, the converse implication is not true in general.
Geometric interpretation of GKZ-systems
The aim of this section is to express certain GKZ systems as objects which are built from consecutive applications of (possibly proper) direct image and (possibly exceptional) inverse image functors applied to a structure sheaf. From the discussion above it follows then that these GKZ systems carry a mixed Hodge module structure. In order to achieve this we have to introduce various integral transformations and their relations.
Define a pairing and a free rank one O C n+1 ×C n+1 -module which acquires a D C n+1 ×C n+1 -module structure via the product rule. We denote by p 1 and p 2 the projections from C n+1 × C n+1 to the first and second factor respectively. The sheafified version of the Fourier-Laplace transform is given by and one has FL • FL = − id. Although defined at the level of derived categories, FL is an exact functor, and an instructive exercise shows that on the level of global sections it is given by formula (12). Theorem 2.10 now implies that, whenever β / ∈ sRes(A), we have Here, the final identification holds due to the homogeneity of I A even though FL 2 is not the identity.
The second type of transformation we will need is the Radon transformation of D-modules introduced by Brylinski (1986); some variations were later discussed by D'Agnolo and Eastwood (2003). Let be the complement of the universal hypersurface defined by the vanishing of the pairing −, − . For the sake of readability, we denote P( C n+1 ) from now on simply by P n . Consider the following commutative diagram. The Radon transformation is the functor RT : Let π : C n+1 \{0} −→ P n be the canonical projection and denote by π V : V −→ P n the total space of the tautological bundle O P n (−1). Recall that V can be identified with the blow-up of the point {0} of C n+1 and P n with the exceptional divisor E. We denote by π V,E : E −→ {0} −→ C n+1 the restriction of the blow-up map π V : V −→ C n+1 . The following proposition relates the Fourier-Laplace and Radon transformations.
In particular, if N is a mixed Hodge module, then the above isomorphisms allow us to equip the right hand sides with induced MHM structures.
To simplify the presentation, we will focus now (and this until Definition 4.6 below) primarily on the case β = 0. For β = 0 a twisted variant of the Radon transformation is needed: see Reichelt and Sevenheck (2020) for details. We start with the following commutative diagram is the projection to the last d variables and where In particular, is as in (15) earlier (with the caveat that now A is as in Convention 4.2). We then observe that
and with Proposition 4.3, the isomorphisms
endow the GKZ-system M A (0) with the structure of a mixed Hodge module. We now consider a part of the long exact sequence of the adjunction triangle (20) applied to (g B ) + O T . In order to identify the individual terms we introduce a family of Laurent polynomials defined on (C * ) d × C n = T × C n using the columns b 1 , . . . , b n of the matrix B from (17). We define As a consequence, the lower exact sequence underlies a sequence of mixed Hodge modules.
Hodge-filtration on GKZ-systems
Although the isomorphism (23) equips the GKZ system M A (0) with the structure of a mixed Hodge module, it is far from clear what the Hodge and weight filtrations look like. The first step in this direction was carried out by Stienstra (1998), relying heavily on work of Batyrev (1993), who computed the Hodge and weight filtration on the smooth part of the GKZ system. Denote := conv(a 0 , . . . , a n ) the convex hull of the points a 0 , . . . , a n , and note that this is the decone of the Apolyhedron from Definition 3.7. Let τ ⊆ be a face of , let x ∈ C n , and set The Laurent polynomial F A,x := F A A,x is called non-degenerate (see, e.g., (Batyrev 1993, Definition 3.3)) if for every face τ of the equations have no common solutions in T. Then, for 0 ≤ i ≤ d, define the differential operators which are elements of the Weyl algebra D C[t ± ] on t 0 , . . . , t d localized at t 0 · · · t d . One checks that these operate on the semigroup ring S A ⊆ C[t ±1 0 , . . . , t ±1 d ], P i (S A ) ⊆ S A , so they are differential operators on the affine toric variety X A = Spec(S A ).
Before we can state Stienstra's result mentioned in the introduction to this section, we need some more terminology. Let be the ascending sequence of homogeneous ideals in S A where I (k) is generated by all elements t a with a ∈ NA that are not contained in any codimension k face of R ≥0 A. Define a decreasing sequence of C-vector spaces in S A . Stienstra proved the following result. Theorem 4.5 (Batyrev 1993; Stienstra 1998; Reichelt and Sevenheck 2020) Let x ∈ C n+1 be such that the Laurent polynomial F A,x is non-degenerate and consider the canonical inclusion i x : {x} → C n+1 . Then, with ϕ denoting the family from (24),
Under this isomorphism, the Hodge filtration is given by
If the matrix B ∈ Z d×n is homogeneous, then the weight filtration on H d (T, ϕ −1 (x); C) is given by where the semigroup ring S B , the ideals I (k) and the differential operators P i are now derived from B.
Equation (26) is shown in Stienstra (1998) for homogeneous A; the general case is treated in Reichelt and Sevenheck (2020 A (β) (in the sense of mixed Hodge modules), this extends the first part of the above Theorem 4.5. Since we will formulate the result for certain parameter vectors β different from 0, we first need to introduce the following definition.
Example 4.7 For the matrix
the following picture shows the sets sRes(A) (see Definition 2.6 above) and A A . ♦ We can now state a result, taken from (Reichelt and Sevenheck 2020, Theorem 5.35) which describes the Hodge filtration on the GKZ-systems in a rather precise way.
Theorem 4.8 Let A ∈ Z (d+1)×(n+1) be as in Convention 4.2, β ∈ A A and β 0 ∈ (−1, 0]. Then the Hodge filtration on M A (β) is given by the shifted order filtration, so that we have the following equality of filtered D C n+1 -modules It has been shown in Reichelt and Sevenheck (2020, Theorem 5.43) that the first part of the above Theorem 4.5, and so Formula (26) is a rather direct consequence of the comparison between the Hodge and the order filtration on M A (0).
Remark 4.9
As already noted in Sect. 2 above, a variant of Borisov-Horja's better behaved GKZ-systems has been considered in Mochizuki (2015a). If we suppose that A is normal (as we do throughout this section), then the definition in Mochizuki (2015a) coincides with the one for ordinary GKZ-systems as given in 1.6 above. However, the matrix A is not supposed to be homogeneous in Mochizuki (2015a). The module M A (β) will have irregular singularities then, as discussed in Sect. 3 above. One may ask what kind of Hodge theoretic information can be derived from M A (β) in this case. This is similar to the statements on the ordinary versus irregular Hodge filtration on univariate hypergeometric systems that we will discuss below. In Mochizuki (2015a, Prop. 1.4), Mochizuki proves the following statement, which can be considered as an irregular variant of Theorem 4.8 above. Let B ∈ Z d×n be such that ZB = Z d . Suppose for the simplicity of the exposition that NB = R ≥0 B ∩ Z d . Consider the non-commutative "Rees ring" and the corresponding sheaf R C×C n . Let H z A (0) be the left R C×C n -ideal generated by Then the left R C×C n -module R C×C n /H z A (0) underlies a mixed twistor module on C n , a notion that in many respects is the correct replacement of a mixed Hodge module in the irregular setup. In particular, any mixed Hodge module can be considered as a special mixed twistor module, and therefore the case β = 0 of Theorem 4.8 can be deduced from Mochizuki's result. Using a filtered variant of the Fourier-Laplace transformation (compare the discussion in Sect. 5 below), one can also obtain the latter from Theorem 4.8, as has been demonstrated in Domínguez et al. (2019, Corollary 4.8). ♦ As another application of Theorem 4.8, we will describe some results about the Hodge structure of univariate hypergeometric equations (see the discussion in Sect. 1.2 above). Consider again the operator (compare with Eq. 7, where m = q + 1, m′ = p and where λ 1 = 0, λ i = 1 − β i+1 , μ j = −α j ) for some real numbers λ i , μ j . The corresponding cyclic module is irreducible if and only if for all i, j we have λ i − μ j ∉ Z. The modules H (λ; μ) are the most basic examples of rigid D-modules (see Katz 1990; Arinkin 2010). A first consequence of this property is that if H (λ; μ) is irreducible, then it is isomorphic to some H (λ′; μ′) whenever μ − μ′ and λ − λ′ are integer vectors. We can thus assume that 0 ≤ λ 1 ≤ · · · ≤ λ m < 1, 0 ≤ μ 1 ≤ · · · ≤ μ m′ < 1 and that λ i ≠ μ j for all i, j. It is obvious that H (λ; μ) is regular exactly when m = m′ and in that case it has the three singular points {0, 1, ∞}. On the other hand, if m ≠ m′ then Sing(H (λ; μ)) = {0, ∞}.
In the regular case, that is, if m = m′, the rigidity property can be stated at the level of the local system L on P 1 \{0, 1, ∞} of solutions of P: it simply says that the local monodromies around the singular points determine the (global) monodromy representation defined by L . From there it follows by Simpson (1990, Cor. 8.1) and also Deligne (1984, Prop. 1.13) that L underlies a complex variation of Hodge structures. Then the following formula for its Hodge numbers has been shown in Fedorov (2018, Thm. 1). The Picard-Fuchs equation of the family of elliptic curves in Example 1.3 corresponds, as we computed there, to the hypergeometric differential equation given by the module H (0, 0; 1/2, 1/2). Applying Fedorov's formula yields dim(gr F 0 L ) = dim(gr F 1 L ) = 1, confirming our computation in Example 1.3. Notice also that in this case the local system L underlies a real (and even rational) variation of Hodge structures, which is consistent with Fedorov (2018, Theorem 2).
If m ≠ m′ (and, up to a change of the coordinate z → 1/z, we can assume that m > m′), then H (λ; μ) is irregular and can no longer support a variation of Hodge structures. In Sabbah (2018), a category of irregular Hodge modules is developed, which can roughly be seen as lying between the category of mixed Hodge modules and the category of mixed twistor modules. A possibly irregular D X -module M on a complex manifold X underlying an irregular Hodge module comes equipped with an irregular Hodge filtration, an increasing filtration F irr α M by coherent O X -modules indexed by the real numbers (in contrast to the regular case); we write F irr <α M := ∪ β<α F irr β M . However, the indexing set is determined by a finite set I ⊆ [0, 1) having the property that In Sabbah and Yu (2019), the following formula for the irregular Hodge numbers has been found (see also , where the Hodge filtration itself is determined in some cases, using Theorem 4.8 from above): For m = m′, this gives back formula (31) up to the fact that in the regular case the local system L in Fedorov (2018) is the one of the solutions of H (λ; μ), whereas formula (32) gives (for m = m′) Hodge numbers of a filtration defined on the dual local system of flat sections.
Weight filtration on GKZ systems
In the remainder of this section, we discuss results concerning the weight filtration on GKZ-systems. Recall that we equipped the GKZ-system M A (0) in Sect. 4.2 with a mixed Hodge module structure by rewriting it as a certain Radon transform of a direct image of a structure sheaf (cf. (23)). In this subsection we endow the GKZ systems with an a priori different mixed Hodge module structure. If the matrix A is chosen to be homogeneous then the GKZ-system M A (0) is a monodromic D-module. In this case the Fourier-Laplace transformation can be replaced by the Fourier-Sato transformation (or monodromic Fourier-Laplace transformation) (cf. Brylinski 1986, Théorème 7.24) which happens to be a functor of mixed Hodge modules. Denote by θ the standard C * -action on C n+1 . We refer to the push-forward θ * (z∂ z ) as the Euler vector field E, where z is a coordinate on C * . A regular holonomic D-module M is called monodromic if the Euler field E acts finitely on the global sections of M .
Consider the diagram where p 1 is the projection to the first factor, i 0 is the canonical inclusion and the map ω is given by The Fourier-Sato transformation (or monodromic Fourier transformation) is defined by where φ z is the vanishing cycle functor along z = 0.
It was shown in (Reichelt and Walther, Proposition 4.12) that the Fourier-Sato transformation respects the weight filtration of monodromic D-modules which are localized along {0} ∈ C n+1 (up to a shift). Hence, a weight filtration on the GKZ-system is induced by the following isomorphisms: Since the Fourier-Sato transform is an equivalence of categories, it is enough to compute the weight filtration on M A (0) = (h A ) + O T , which will be done below.
Recall that the graded parts Gr W k M of a mixed Hodge module are pure Hodge modules and as such are semi-simple, splitting as direct sums of intersection complexes (which are simple D-modules). Because the number of simple objects (counted with multiplicity) is independent of the chosen (weight) filtration, this also gives us the simple objects occurring in the weight filtration induced by the Radon transform (but possibly in another order). However, we conjecture that the Fourier-Sato transformation and the Radon transformation are actually isomorphic on the level of mixed Hodge modules.
Conjecture 4.10
For N ∈ MHM(P n ): We will now proceed to state the result on the weight filtration of Let τ ⊆ γ ⊆ σ be faces of a cone σ ⊂ R d+1 . The quotient face of γ by τ is defined as: where τ R is the linear span of the cone τ . Define The cone γ is the dual of γ in its own span, hence independent of σ . For cones τ ⊆ γ denote by X γ /τ the spectrum of the semigroup ring induced by the cone γ /τ in its natural lattice. Set Y γ /τ := X (γ /τ ) .
In the following, we denote the cone R ≥0 A by σ . The Fourier-Laplace transformed GKZ system M A (0) is isomorphic to (h A ) + O T and has support on the affine toric variety X A = X σ . For a face τ of σ write d τ for its dimension. We have seen in Sect. that the d τ -dimensional T-orbits O τ A in X σ are in one-to-one correspondence with the faces τ of σ . The closure of an orbit O τ A is X τ . It turns out that the varieties X τ are exactly those which occur as support varieties of the summands in the semisimple decompositions of the graded parts gr W M A (0).
In order to simplify the notation, we use the symbol IC Y (L ) for the intersection cohomology D-module on some smooth variety X with support on the closed subset Y ⊆ X , and where L is a local system on a Zariski open subset of Y .
Application to toric mirror symmetry
The aim of this final section is to discuss some results concerning the so-called mirror symmetry phenomenon, which links enumerative geometry of projective algebraic, and more generally symplectic varieties (called A-model) to complex geometry, in particular, Hodge theory of their so-called B-models. The B-model is usually given by a family of algebraic varieties which may have singularities and which need not be projective (which forces one to consider compactifications, see below). Often these families on the B-side are referred to as Landau-Ginzburg models.
The first example of mirror symmetry was given by Candelas et al. (1991) who predicted a virtual number of rational curves on a quintic threefold (later referred to as the genus 0 Gromov-Witten invariants) by period computations for the mirror partner (the B-model). These predictions were verified and also generalized to numerically effective smooth complete intersections in toric varieties by Givental ( , 1998). His celebrated mirror theorem shows that the J -function, a generating function for the genus 0 GW-invariants of such varieties, is computable in terms of a cohomology-valued hypergeometric function. Givental also conjectured that the components of this function are given as oscillating integrals. This was much later proved in Iritani (2009) (even treating the case where the toric variety in question is an orbifold); some details of the construction described below are parallel to his paper. However, an algebraic construction of the correct Hodge theoretic B-model was still missing. Our purpose in this section is to give an overview of techniques and results (mainly referring to Reichelt and Sevenheck (2015, 2017) as well as to Mochizuki (2015a)), where the machinery of GKZ-systems as discussed in the previous sections is used to obtain a purely algebraic Hodge theoretic (and D-module based) mirror correspondence for certain smooth toric varieties resp. subvarieties of them.
Gromov-Witten invariants and Dubrovin connection
Let X be a toric smooth projective variety. For the purpose of this exposition, we assume further that X is Fano, so the anticanonical class [−K X ] is ample. A good part of the results discussed below also applies if one considers weak Fano manifolds, meaning that [−K X ] is a numerically effective (nef) class. There are however a few technical modifications needed in the nef case, which is why we refrain from discussing it here. Developing the mirror symmetry picture described below in the absence of any positivity assumption on X remains a subject of active current research (see, e.g., Iritani 2008;Gross et al. 2017;Iritani 2017).
Let β ∈ H 2 (X , Z) and choose γ 1 , γ 2 , γ 3 ∈ H * (X , Q). The genus zero, three point Gromov-Witten invariants intuitively count the number of stable maps f from rational curves C with (in this case) three marked points, satisfying f * ([C]) = β and f (C) ∩ PD(γ i ) ≠ ∅ for i = 1, 2, 3. (Here and elsewhere, PD(−) denotes the Poincaré dual). Technically, they are obtained as follows: pull back the (three) arguments of I 0,3,β to the moduli space of such maps (along the three induced evaluation maps to X ), take their cup product and evaluate it by integration over a certain virtual fundamental class on the moduli space. Constructing this latter class is a major issue in Gromov-Witten theory (see, e.g. Fulton and Pandharipande 1997; Behrend and Fantechi 1997).
We choose a homogeneous basis T 0 , T 1 , . . . , T r , T r +1 , . . . , T s of H * (X ; Z) such that T 0 ∈ H 0 (X ; Z), the classes T 1 , . . . , T r ∈ H 2 (X ; Z) lie in the nef cone of X and T r +1 , . . . , T s ∈ H >2 (X ; Z). Let g i j := (T i , T j ) be the Poincaré pairing between the elements T i and T j and denote by (g^{i j}) the inverse of the matrix (g i j ). With δ ∈ H 2 (X ; C), the three point Gromov-Witten invariants can be used as structure constants for a family of multiplications on H * (X ; C). This product structure is the small quantum product of X and is parameterized by the cosets of δ in the complexified Kähler moduli space. A priori it is far from clear that the sum in (33) is convergent. However, the Gromov-Witten invariants satisfy (among others) the following properties:
Effectivity:
I 0,3,β = 0 if β does not lie in the Mori cone. Degree: Point Mapping: Here we recall that the Mori cone is the cone in H 2 (X ; R) of effective classes of curves. It is dual to the cone of nef divisors in H 2 (X ; R). The effectivity axiom together with our assumption that X be Fano (so that the class c 1 (X ) is ample) shows that I 0,3,β is zero unless c 1 (X )(β) ≥ 0. The degree axiom now tells us that for fixed T i , T j , T k there are only finitely many β in the Mori cone such that I 0,3,β (T i , T j , T k ) is non-zero. Hence the product defined in (33) is finite and therefore defined on the whole space K.
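For concreteness, the product formula (33) has, in standard conventions, the following shape; this display is a sketch supplied here and is not quoted from the text, so sign and normalization conventions may differ slightly:

$$\gamma_1 \star_\delta \gamma_2 \;=\; \sum_{\beta}\;\sum_{k=0}^{s}\; q^{\beta}\, I_{0,3,\beta}(\gamma_1,\gamma_2,T_k)\,\sum_{j} g^{kj}T_j, \qquad q^{\beta}:=\exp(\delta(\beta)),$$

where $(g^{kj})$ is the inverse of the matrix of Poincaré pairings $g_{ij}=(T_i,T_j)$ and the β-sum runs over the effective classes.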
Then, under the exponential map from H 2 (X ; C) to K, q = {q i } i=1,...,r become coordinates on K corresponding to t = {t i } i=1,...,r on H 2 (X ; C) and induce an explicit isomorphism K ≅ (C * ) r . Since T 1 , . . . , T r lie in the nef cone, the cone generated by the dual basis (η j ) j=1,...,r contains the Mori cone and therefore all monomials q β 1 1 . . . q β r r have non-negative exponents. Hence the quantum product extends to the partial compactification. The point mapping property of the Gromov-Witten invariants shows that the small quantum product degenerates to the ordinary cup product at q = 0.
Example 5.1 Consider the first Hirzebruch surface F 1 which is induced by the following fan (left); on the right is shown the space H 2 (F 1 ; R) using the coordinate system given by the classes of D 1 and D 2 . (See the start of Sect. 5.2 for information on how to view H 2 (X ; Z)).
We choose the homogeneous basis
The small quantum cohomology product of F 1 is determined by The small quantum cohomology ring of F 1 is therefore given by Restricting this ring to q 1 = q 2 = 0 gives C[T_1, T_2]/(T_1^2, T_2^2 − T_1 T_2, T_1 T_2^2), which is isomorphic to the cohomology ring (cf. Fulton 1993, Section 5.2). We are going to give a reformulation of the quantum cohomology algebra in terms of certain differential systems. The intrinsic reason for the appearance of differential equations in this context is best understood when studying the big quantum product instead of the small one as we have done above. It basically means to have a product on H * (X ; C) which is parameterized by any class δ ∈ H * (X ; C) instead of a class in H 2 (X ; C) (more precisely, instead of a representative of a coset in K). One can show that the structure constants of the big quantum product can be obtained as third derivatives of a generating function, referred to as the Gromov-Witten potential. This fact reveals an intrinsic integrability property of the (big) quantum product. Moreover, the associativity then boils down to a famous third order non-linear partial differential equation satisfied by the GW-potential, abbreviated as the WDVV-equation (after Witten, Dijkgraaf, Verlinde, Verlinde; see, e.g. Manin 1999). It turns out that, using the next definition, this equation can be rewritten as a flatness property of a system of linear differential equations, that is, a vector bundle with a connection. Definition 5.2 The small Dubrovin connection (H A , ∇ A ) of X is a flat meromorphic connection ∇ A on a trivial, holomorphic vector bundle H A over P 1 × K with fiber H * (X ; C). The connection is given by where we denote by z the coordinate centered at 0 ∈ C ⊆ P 1 . ♦ Notice however that this convention from quantum cohomology literature leads to some slight clash of notation. Namely, the variable z from above (a coordinate on P 1 ) is different from the variable z used for univariate hypergeometric equations in Sect. 1 as well as in Formula (30). In order to be consistent with the literature, we stick to these conventions and hope that it does not lead to confusion.
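Since the defining formula for ∇ A did not survive here, we recall the shape it takes in one common convention; this is an assumption about normalizations supplied for orientation, not a quotation from the text. In the directions of the Kähler moduli one has

$$\nabla^{A}_{q_i\partial_{q_i}} \;=\; q_i\partial_{q_i} \;+\; \frac{1}{z}\,T_i\star_\delta\,,\qquad i=1,\dots,r,$$

together with an operator in the z-direction built from quantum multiplication by the first Chern class and a grading operator.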
It is an easy but instructive exercise to check that the flatness of the connection ∇ A implies the associativity and commutativity of the small quantum product. Example 5.3 The small Dubrovin connection of the first Hirzebruch surface is given by
Landau-Ginzburg models
Let Σ X be the fan of the toric smooth projective Fano variety X defined on the d-dimensional vector space N ⊗ Z R (N ≅ Z d being a lattice), with Σ X (1) the set of one-dimensional cones whose primitive elements in N form the columns of the matrix B ∈ Z d×n . Denote by M = Hom Z (N , Z) the dual of N which is identified with the group of torus-invariant principal divisors and by Div T (X ) the group of torus-invariant Weil divisors. There is the following (split) exact sequence Applying (−) ⊗ Z C * one obtains the (split) exact sequence of algebraic tori, where b is the monomial map encoded by the transpose of B, K is as in Sect. 5.1, and T as in (22). Recall that the standard basis e 1 , . . . , e d of M gives coordinates t = (t 1 , . . . , t d ) on T.
The canonical basis of torus-invariant divisors D 1 , . . . , D n for Div T (X ) corresponding to the one-dimensional cones induces an isomorphism Div T (X ) ⊗ Z C * (C * ) n . Let W : Div T (X ) ⊗ Z C * = (C * ) n −→ C be the function given by summing the coordinates.
Definition 5.4
The Landau-Ginzburg model associated to the smooth, toric, Fano variety X is the map ♦ If we view K as an abstract algebraic torus, defining the morphism (W , c) requires only the matrix B (that is, the generators of X (1)), but not the full data of the fan X . We shall later wish to (partially) compactify K, as we have done before (see Formula (34)). For this, we need to equip K with the coordinate system {q i } i=1,...,r , corresponding to the basis {T i } i=1,...,r on H 2 (X ; C). The compactification is designed to contain the point q 1 = · · · = q r = 0, since there the quantum product collapses to the cup product. This will be the case if the basis {T i } i=1,...,r of H 2 (X ; R) consists of nef classes (this choice has already been made above at the beginning of Sect. 5.1). Hence, fixing such a good coordinate system {q i } i=1,...,r on K depends on the geometry of the toric variety X and not just on the ray generators given by the matrix B (see Reichelt and Sevenheck 2015, Section 3.1 for a more detailed discussion).
Since (37) splits, we can find a section of the map Div T (X ) −→ H 2 (X , Z) which then induces a section Again, s, seen as a monomial map from (C * ) r to (C * ) n , will depend on the fan structure of X via the choice of coordinates on K. From now on, we will always fix such coordinates and consider K as the concrete r -dimensional torus (C * ) r . The isomorphism gives a different presentation of the Landau-Ginzburg model, namely as a family of Laurent polynomials where S = (s 1 , . . . , s n ) ∈ Z r ×n and B = (b 1 , . . . , b n ) ∈ Z d×n represent the maps s and b respectively.
Example 5.5 We continue Example 5.1. The exact sequence (37) is given by where we have chosen T 1 = [D 1 ], T 2 = [D 2 ] as a basis of H 2 (X ; Z), as we did in Example 5.1. The Landau-Ginzburg model is given on the level of coordinate functions by The corresponding family of Laurent polynomials is where we have chosen the section s : K −→ Div T (X ) ⊗ Z C * as the one induced from the map It was conjectured by Givental (see, e.g. Givental 1998) that oscillating integrals over Lefschetz thimbles with respect to the Landau-Ginzburg model give flat sections of the Dubrovin connection. An algebraic replacement of these oscillating integrals is given by localized and partially Fourier-Laplace transformed Gauß-Manin systems of the Landau-Ginzburg model.
We briefly explain this version of the ordinary Fourier-Laplace transformation functor (see Formula (19) above). In the following, O C t ×C τ ×Y · exp(−tτ ) denotes a free rank 1 module with twisted differential given by the product rule.
Definition 5.6 Given a smooth variety Y and a holonomic D C×Y -module N , the localized, partial Fourier-Laplace transform of N is the sheaf where p 1 : are the canonical open embeddings with the understanding that z = 1/τ . ♦ The name "localized" comes from the fact that by using the direct image ( j z ) + , the action of z is invertible on the resulting module (and so is the action of τ ).
The localized, partially Fourier-Laplace transformed Gauß-Manin system of the Landau-Ginzburg model ψ is then defined as It is an exercise (using the definition of the direct image functor, see, e.g. Hotta (2008, Sections 1.3, 1.5)) to show that the module of global sections G ψ of G ψ has the following presentation in terms of relative differential forms where d is the differential on the complex •+d T×K/K . Following an idea from singularity theory (see Brieskorn 1970;Saito 1989;Sabbah 2006), one defines the Fourier-Laplace transformed Brieskorn lattice by We will see below, using GKZ-systems, that G ψ 0 is O C×K -free. In order to connect G ψ to a GKZ-system we observe that the family of Laurent polynomials ψ is a pullback of a larger family where s : K → Div T (X ) ⊗ Z C * ∼ = (C * ) n is as in (38) and the middle map is the identification induced from the standard basis on M.
In Theorem 4.4 we have connected the Gauß-Manin system of ϕ to a GKZ system via the 4-term sequence, where A is the homogenization of the matrix B constructed from the ray generators of the fan of X . Since the outer two terms are free O C n+1 -modules, they are in the kernel of the localized partial Fourier-Laplace transform. Indeed, on the level of global sections, FL loc Y is the composition of the localization at ∂ t with the ordinary Fourier-Laplace transformation FL Y , and C[t] = D t /D t · ∂ t naturally localizes to zero. Thus, the localized partial Fourier-Laplace transform being the composition of two exact functors, the previous display implies (0)).
The module of global sections of FL loc C n (M A (0)) is the cyclic left module D C×C n [z ± ]/I over the ring One can prove by base change that the Fourier-Laplace transformed Brieskorn lattice G ϕ 0 is the inverse image of G ψ 0 under the map ι in (42). We therefore arrive at the following result where, for u ∈ ker(B), we read it as an element of H 2 (X ; C) via the dual of the sequence (37): Parallel to R C×C n from (28), we define and denote by R log C×K the associated sheaf on C × K. Then the following statements on some cyclic R log C×K -modules are proved in Reichelt and Sevenheck (2015) using methods from toric geometry, including the notions of primitive collections and relations (see, e.g., Cox and von Renesse 2009;Cox et al. 2011).
Proposition 5.9 Let J log ⊆ R log be the left ideal generated by E and u from Proposition 5.8. Then In order to construct an object which matches the small Dubrovin connection coming from the Gromov-Witten invariants of X we have to go one step further. Recall that the small Dubrovin connection (35) is a family of vector bundles on P 1 , parameterized by K, equipped with a certain connection operator. As of yet, starting from the Landau-Ginzburg model ψ from (39) of X , we have constructed a vector bundle R log C×K /J log on C × K with a differential structure, and it is easily verified that the behavior along the poles ({0} × K) ∪ (C × (K\K)) of the connection operators on both bundles are of the same type. If we want to compare R log C×K /J log to the small Dubrovin connection, it thus remains to extend this bundle (together with its connection operator) over the divisor {∞} × K to all of P 1 × K. This is of course always possible if no other condition is imposed. However, if we want to reconstruct the Dubrovin connection, this extension needs to satisfy two strong conditions simultaneously: the resulting object must be a family of trivial P 1 -bundles and the connection must have a logarithmic pole at infinity. Fulfilling both requirements is not always possible, and goes under the name (Riemann-Hilbert-)Birkhoff problem; for a modern account see (Sabbah 1998, Chapter IV). However, under the current circumstances, a solution to the Birkhoff problem can be found locally near the boundary K\K, as the following result shows.
We remark that in Reichelt and Sevenheck (2015, Proposition 4.10) a similar result for the more general case of weak Fano toric manifolds is given, albeit with a weaker conclusion: the extension H B there only exists on an analytic open subset of K (see the remark after Reichelt and Sevenheck 2015, Proposition 3.10).
The logarithmic extension is equal to C[z, q_1, q_2]⟨z^2 ∂_z, zq_1 ∂_{q_1}, zq_2 ∂_{q_2}⟩/J^log, where J^log is generated by the same operators as J.
Reduced quantum D-modules and intersection cohomology
In this section, we are going to discuss a mirror statement that concerns weak Fano smooth complete intersections inside smooth projective toric, possibly non-Fano, vari-eties. From the point of view of physics, this is an even more important class of examples than the one considered previously since it includes Calabi-Yau manifolds that are subvarieties of toric manifolds, although they are not toric themselves. The most prominent example, namely, the quintic in P 4 (where the first enumerative predictions using the mirror symmetry principle were made, see (Candelas et al. 1991)) is of this type. We will discuss a non-affine version of the Landau-Ginzburg models introduced above. The mirror statement that we aim for will relate (part of) the quantum cohomology of the complete intersection subvariety to the lowest weight filtration step of a GKZ-system. It follows from the results in Sect. 4.3 that the lowest weight filtration step is a single intersection cohomology D-module which arises as the image under a natural morphism from the holonomic dual of the GKZ system to the GKZ system itself. In the cases we discuss here this holonomic dual is isomorphic to a GKZ system with the same matrix A but different parameter vector β. Hence the intersection cohomology D-module can be described as the image of a morphism between two GKZ-systems by a contiguity morphism. Our main reference in this section is Reichelt and Sevenheck (2017). We start with setting the notation.
Notation 5.13 As before, X will be a smooth projective toric variety of Picard rank r attached to the fan X of dimension d, whose primitive rays form the columns of the matrix B. In contrast to the previous case we do in this subsection not make any positivity assumption on X here. Let O(L 1 ), . . . , O(L c ) be globally generated line bundles; since X is toric, this amounts to asking that each L i be nef-their classes should lie in the nef cone in H 2 (X , R). We shall assume also that − K X − L 1 − · · · − L c is nef.
If D 1 , . . . , D n are the torus invariant divisors on X we can write for suitable non-negative integers d i j . Set and consider a generic global section γ ∈ (X , E ). Our assumptions imply that is a smooth complete intersection subvariety for which −K Y is nef; we call this property weak Fano. ♦ In this paragraph we briefly review a variant of the above quantum product that is designed to encode enumerative information about stable maps to Y . The first point is that one can generalize the definition of Gromov-Witten invariants (5.1) to the twisted (three-point) GW-invariants; these are also maps from H * (X , Q) ⊗3 → Q, but Chern classes of certain tautological bundles (on the moduli space of stable maps) derived from E come into play. We denote by I 0,3,β (γ 1 , γ 2 , γ 3 ) ∈ Q the value of such a three point twisted GW-invariant for given cohomology classes γ 1 , γ 2 , γ 3 ∈ H * (X , Q) (see, e.g. Reichelt and Sevenheck 2017, Section 4.1) for a more detailed discussion, including an explanation for the process γ 3 γ 3 ). Then one defines in complete analogy to Formula (33) the twisted (small) quantum product by where, as before, q are coordinates on K and q β := exp(δ(β)) for β ∈ H 2 (X ; C). We now follow the definition of the small Dubrovin connection, Eq. (35), and define the twisted quantum D-module, denoted by QDM(X , E ), as the vector bundle on P 1 × K with fiber H * (X ; C) together with the connection given by its total space. Then V is a (non-compact) toric variety, whose fan is given as follows: The set of rays of V are the columns of the matrix where B is the d × n-matrix constructed from the primitive rays in X and where d ji are as in (44). Then the fan V consists of all cones R ≥0 b i 1 + · · · + R ≥0 b i k + R ≥0 b j 1 + · · · + R ≥0 b j such that R ≥0 b i 1 + · · · + R ≥0 b i k ∈ X and j 1 , . . . , j ∈ {n + 1, . . . , n + c}. Notice that we have H 2 (V; Z) ∼ = H 2 (X , Z) ∼ = Z r and that Div T (V) ∼ = Z n+c . Similarly to the discussion in Sect. 5.2 we then consider a family of Laurent polynomials associated to these toric data.
Definition 5.14 (Reichelt and Sevenheck 2017, Definition 6.3.) Let (X , E ) be as in Notation 5.13 and consider the complexified Kähler moduli space K ∼ = H 2 (X ; Z) ⊗ Z C * ∼ = H 2 (V; Z) ⊗ Z C * of both X and V. Write T V := (C * ) d+c for the (d + c)dimensional torus. Then the affine Landau-Ginzburg model of (X , E ) is the morphism ψ = (F, pr 2 ) : where is a Zariski open subset on which the Laurent polynomials ψ(−, q) satisfy a nondegeneracy condition (see Reichelt and Sevenheck 2017, Section 3.2) and where (s 1 , . . . , s n+c ) ∈ Z r ×(n+c) is a section of the projection Div T (V) H 2 (X , Z). ♦ One can establish a mirror symmetry theorem for the twisted quantum D-module which involves the affine Landau-Ginzburg model, very much in the same spirit (without looking at logarithmic extensions over the boundary K\K though, and also neglecting the extension to families of bundles over P 1 ) as Theorem 5.11 above (see Reichelt and Sevenheck 2017, Theorem 6.13, 6.16) and also Mochizuki 2015a). However, in order to reconstruct the reduced quantum D-module QDM(X , E ), we are forced to look at a compactification of the morphism ψ. In order to define it, consider the map g B : T V = (C * ) d+c → P n+c (see Formula (22) above). Then define to be the closure in P n+c × C × K • of the graph F ⊆ T V × C × K of the function F : T V × K • → C defined in (47). Notice that Z • is a partial compactification of T V × K • , that is, quasi-projective but in general not smooth.
Definition 5.15
Let (X , E ) be as above. Then we call the restriction the non-affine Landau-Ginzburg model of (X , E ). ♦ Clearly, is a projective morphism, and hence should be considered as a partial compactification of the affine Landau-Ginzburg model ψ.
In a rather similar way to the case of Landau-Ginzburg models of projective toric varieties, we obtain the following description of the relevant Gauß-Manin cohomologies by GKZ-type systems. As a matter of notation, consider the matrix A ∈ Z (1+d+c)×(1+n+c) obtained by homogenizing the matrix B defined in Eq. (46), that is With these definitions, we have the contiguity morphism (see Sect. 2.5) induced by ∂ n+1 · · · ∂ n+c , due to the special shape of the matrix A . Notice that here we use the coordinates (x 0 , x 1 , . . . , x n+c ) on C × C n+c and ∂ 0 , ∂ 1 , . . . , ∂ n+c for the corresponding partials.
We can now formulate the following statement about the non-affine Landau-Ginzburg.
Theorem 5.16 (Reichelt and Sevenheck 2017, Lemma 6.4 and Proposition 6.7) There is an isomorphism of D C×K • -modules where we denote (with a slight abuse of notation) by ι : C × K • → C × C n+c the embedding already used above (see Eq. 42). Moreover, there is an isomorphism of D C×K • -modules Notice that by definition, the intersection cohomology module IC(C T V ×K • ) to the constant sheaf on T V ×K • becomes a D P n+c ×C×K • -module via Kashiwara equivalence (using the locally closed embedding T V × K • ∼ = F → F → P n+c × C × K • ); this is the reason for using the direct image by pr from Definition 5.15. Since it has support on the subvariety Z • , the corresponding perverse sheaf under the Riemann-Hilbert correspondence is the (zeroth perverse cohomology of the) direct image under the morphism applied to the intersection complex of Z • . Finally, we want to state a mirror statement close in spirit to Theorem 5.11 which concerns the reduced quantum D-module. For this, we first need an extension of the localized partial Fourier-Laplace transformation functor FL loc Y as defined in Formula (40) to a functor acting on the category of filtered D-modules. Without giving the actual details (see, e.g. (Sabbah and Jeng-Daw 2015, Appendix A) or (Reichelt and Sevenheck 2020, Definition 6.2)), let us just state that starting from a filtered D Y -module (M , F • ), this version of the Fourier-Laplace transformation yields an R-module, where again R is the sheaf of Rees rings, as discussed in Sect. 4.3 (see Formula (28)). We denote this R-module by FL loc C×Y (M , F • ). Moreover, in order to properly state the mirror theorem for nef complete intersections, we have to take into account the so-called mirror map, which was not present in Theorem 5.11 since we restricted our attention to the Fano case there. For a sufficiently small ε ∈ R + , write * ε := {t ∈ (C * ) r | 0 < |t| < ε} ⊆ K • . Then the mirror map is a morphism Mir : * ε −→ H 0 (X ; C) × U that has been defined in Givental (1998) and Coates and Givental (2007). Here, U ⊆ K is the set on which the twisted quantum product * tw is defined (converges). With these preparations, our final mirror theorem can be stated as follows.
Theorem 5.17 (Reichelt and Sevenheck 2017, Conjecture 6.15, Reichelt and Sevenheck 2020, Theorem 6.5, Theorem 6.6) We have an isomorphism of R C× * ε -modules This result depends in an essential way on the computation of the Hodge filtration on GKZ-systems, that is, on Theorem 4.8, since the expression of the Hodge filtration as the shifted order filtration on the modules M A (β) for various parameters β allows us to describe explicitly the left hand side of (50). Notice that, by the very definition of the Dubrovin connection, the restriction of the (reduced) quantum D-module to C × * ε has the structure of an R C× * ε -module. A consequence of Theorem 5.17 is the following Hodge theoretic property of the reduced quantum D-module.
Corollary 5.18 (Reichelt and Sevenheck 2020, Theorem 6.6) Suppose X , E , Y are as in Notation 5.13. Then the reduced quantum D-module QDM(X , E ) underlies a smooth pure polarizable twistor D-module on K • (in the sense of Mochizuki (2015b)); that is, a (pure) non-commutative Hodge structure in the sense of Sevenheck (2007, 2010) and Katzarkov et al. (2008).
The operator Q (2,3) is confluent, univariate and hypergeometric (compare Sect. 1.2) with a regular singularity at q = 0 and an irregular singularity at q = ∞. Notice that if instead we consider a (2, 4)-complete intersection Y ⊂ P 5 , then Y is a Calabi-Yau manifold, and we have where Q (2,4) = 8q · (2q∂_q + 1)(4q∂_q + 1)(4q∂_q + 2)(4q∂_q + 3) − (q∂_q)^4 is a homogeneous, hence regular (non-confluent), hypergeometric operator, with singularities at q = 0, 2^{-10}, ∞. In this case, the Hodge theoretic result Corollary 5.18 simply states that D C * q /D C * q · Q (2,4) underlies a pure polarized variation of Hodge structures; this is consistent with Simpson (1990, Corollary 8.1) and Deligne (1984, Prop. 1.13) (see the discussion on page 33 above). ♦ Finally, let us remark that unlike in the previous example(s), it is in general not easy to give a cyclic description of the intersection cohomology D-module FL loc K • H 0 pr + IC(C T V ×K • ). In other words, even though we know that it has a description as a (Fourier-Laplace transform of an) image of a contiguity morphism, it is not clear how to describe the kernel of this morphism and how to give a presentation of the image as a quotient of D (see also (Mann and Mignon 2017, Section 6) for some examples and conjectures).
Table of Symbols
Single letters (by alphabet):
Application of Several Classical Sorting Algorithms in Early Warning of Payment Risk of Basic Endowment Insurance Fund in China
: The financial situation of China's basic endowment insurance fund has begun to deteriorate, and this trend will accelerate as the population ages, so it is urgent to study early warning of the fund's payment risk. This paper compares the classification accuracies of the C4.5 algorithm, the Naive Bayesian algorithm and the BP neural network in the early warning of basic endowment insurance fund payment risk. It is found that the C4.5 algorithm has the best classification effect, with an accuracy of 71.43%; the BP neural network takes second place, with an accuracy of 61.90%; and the Naive Bayesian algorithm performs worst, with an accuracy of 52.38%.
INTRODUCTION
According to "China Labor Statistics Yearbook 2018", there are 6 provinces in China where the incomes of basic endowment insurance fund are less than its expenditures, namely Liaoning Province, Jilin Province, Heilongjiang Province, Shandong Province, Hubei Province and Qinghai Province. Among them, Heilongjiang Province has a particularly severe financial situation, with a total deficit of 48.62 billion yuan. These data show that the financial situation of the basic endowment insurance fund in China has gradually deteriorated. Some scholars' studies (Tan and Fan [1], Yu and Zhong [2], Feng and Liu [3], Ai and Zhang et al. [4], Liu [5]) also show that the finance of the basic endowment insurance fund is not sustainable or only has a very weak sustainability in the long term, which will directly affect the payment of pensions for retirees and further affect the credibility of the government. Therefore, it is urgent to provide early warning on the finance of the China's basic endowment insurance fund.
warning. Zhu [13] summarized the research status of risk assessment, risk warning and risk factors of the endowment insurance fund, reconstructed the early-warning indicator system, and conducted empirical study with the improved BP neural network.
On the basis of the above research, this paper first constructs the early warning indicators of fund payment risk from the perspective of the main factors affecting the incomes and expenditures of the basic endowment insurance system. Secondly, it focuses on the application of C4.5 decision tree, Naive Bayesian algorithm and BP neural network in the early warning of the financial status of the insurance fund. Thirdly, the warning effects of these three classification methods are compared, and the best warning classification method is selected. Finally, based on the best method, the financial risk level of the endowment insurance fund in the next 10 years is further predicted.
Early warning indicators construction and data
Early warning indicators
From the perspective of the factors that affect the contributions and expenditures of the endowment insurance fund, the early warning indicators of fund payment risk are constructed.
The main factors affecting the incomes of the fund are: the average annual salary of the employees, the number of employees, the growth rate of the wages of the employees, the contributions paid by the enterprise and the employees, etc., so the following indicators are refined: average annual salary of employees (C1), wage growth rate (C2), growth rate of on-the-job insureds (C3), unit contribution rate (C4) and individual contribution rate (C5). The main factors that affect the pension expenditures are: the level of pension benefits and their proportion to pre-retirement wages, the accumulation in employees' individual pension accounts, the situation of the retirees to whom pensions must be paid, the growth rate of the number of retired insureds, the life expectancy of the retirees and the growth of the pension expenditures as a whole, etc. Therefore, the following indicators can be extracted: bookkeeping rate (P1), the rate of support (P2), growth rate of retired insureds (P3), average life expectancy (P4) and growth rate of pension expenditures (P5). The specific meaning of these indicators is shown in Table 1.
Dependent variable (Y): the payment capacity of the fund is the accumulated balance of the basic endowment insurance fund divided by the pension expenditures of the current year, and multiplied by 12 to convert its unit from years to months. In other words, the payment capacity (Y) is the number of months that the fund can still pay pensions to retirees in the current year.
According to Ye and Li [12], when Y ≥ 12, the fund has sufficient capacity to pay, and the payment risk is recorded as class A; when 9 ≤ Y < 12, fund payment has a slight degree of risk, and the risk is recorded as class B; when 6 ≤ Y < 9, fund payment has a medium degree of risk, and the risk is recorded as class C; when Y < 6, fund payment has a serious degree of risk, and the risk is recorded as class D.
Table 1 (excerpt, income-side indicators): average annual salary of employees (C1): the average annual statistical wages of on-the-job employees; wage growth rate (C2): the growth rate of per capita disposable income of urban residents; growth rate of on-the-job insureds (C3): the number of on-the-job insureds in the current year minus the number in the previous year, divided by the number in the previous year; unit contribution rate (C4): the endowment insurance premium rate paid by the employer for on-the-job employees; individual contribution rate (C5): the insurance premium rate paid by the employees themselves.
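As a concrete illustration of the classification rule for Y given above, the following Python sketch maps the payment capacity to the four risk classes. It is illustrative only: the paper's computations were done in MATLAB, and the function names and the example figures are made up here.

```python
def payment_capacity(accumulated_balance, annual_pension_expenditure):
    """Payment capacity Y: months of pensions the accumulated balance can still cover."""
    return accumulated_balance / annual_pension_expenditure * 12

def risk_class(y):
    """Map the payment capacity Y to the risk classes defined above."""
    if y >= 12:
        return "A"  # sufficient capacity to pay
    elif y >= 9:
        return "B"  # slight degree of risk
    elif y >= 6:
        return "C"  # medium degree of risk
    return "D"      # serious degree of risk

# Hypothetical example: a balance of 30 units against 40 units of annual expenditure.
y = payment_capacity(30.0, 40.0)   # 9.0 months
print(y, risk_class(y))            # 9.0 B
```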
Data of indicators
Since there were few national data in the statistical yearbook, data from representative provinces in all regions of the country were also selected as samples (Table 2).
Classification effect comparison of several sorting algorithms
The C4.5 algorithm, the Naive Bayes algorithm and the BP neural network are three commonly used classification algorithms. The C4.5 algorithm is an extension and optimization of the ID3 algorithm that selects splitting attributes by the information gain rate (gain ratio). The Naive Bayesian algorithm, a widely used classical classification method, is built on Bayes' theorem together with the assumption that the features are conditionally independent. The BP neural network is widely used in modern scientific research; trained with the error back-propagation algorithm, it can realize an arbitrary nonlinear mapping from the input layer to the output layer and has strong fault-tolerant ability.
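The splitting criterion mentioned above can be sketched as follows in Python (illustrative only; the paper's implementation was written in MATLAB, and the toy labels in the usage example are made up):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(labels, attribute_values):
    """C4.5 splitting criterion: information gain divided by split information."""
    base, n = entropy(labels), len(labels)
    cond, split_info = 0.0, 0.0
    for v in set(attribute_values):
        subset = [y for y, a in zip(labels, attribute_values) if a == v]
        p = len(subset) / n
        cond += p * entropy(subset)
        split_info -= p * math.log2(p)
    return (base - cond) / split_info if split_info > 0 else 0.0

# Toy usage: risk classes against a discretized indicator.
labels = ["A", "A", "B", "B", "C"]
attr = ["high", "low", "low", "low", "high"]
print(round(gain_ratio(labels, attr), 3))
```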
Before applying the three classical classification methods to the early warning of the payment risk of basic endowment insurance, the 84 total samples were randomly divided into a training set and a verification set. Using random ordinal numbers generated in MATLAB, the samples were split at a ratio of 3:1, giving 63 training samples and 21 verification samples.
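A minimal sketch of such a 3:1 random split in Python (the paper used MATLAB; the seed and function name here are illustrative assumptions):

```python
import random

def split_samples(samples, train_size=63, seed=0):
    """Randomly split the sample list into training and verification sets (here 63 : 21)."""
    rng = random.Random(seed)
    indices = list(range(len(samples)))
    rng.shuffle(indices)
    train = [samples[i] for i in indices[:train_size]]
    verify = [samples[i] for i in indices[train_size:]]
    return train, verify

# With 84 samples this yields 63 training and 21 verification samples.
```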
Classification results of C4.5 algorithm
The C4.5 algorithm was programmed in MATLAB. First, the training samples were used to generate the decision tree, and then the verification samples were used to test the results. The confusion matrix of the results is shown in Figure 1, where classes A, B, C and D correspond to the numbers 1, 2, 3 and 4. Ten class-A samples were correctly predicted as class A, while two and one class-A samples were wrongly predicted as class C and class D, respectively. Five class-B samples were correctly predicted as class B, and one class-B sample each was wrongly predicted as class A and class C. The single class-C sample in the verification set was wrongly predicted as class B. The accuracy of the C4.5 algorithm is therefore 71.43% (15/21).
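A hedged sketch of this evaluation step, reusing the split from the previous sketch; scikit-learn's DecisionTreeClassifier with the entropy criterion is used as a stand-in for C4.5 (scikit-learn implements CART, so this approximates rather than reproduces the paper's MATLAB implementation):

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix, accuracy_score

# Information-gain-based tree as an approximation of C4.5.
tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
tree.fit(X_train, y_train)

y_pred = tree.predict(X_test)
cm = confusion_matrix(y_test, y_pred, labels=[1, 2, 3, 4])  # rows: true A-D, cols: predicted A-D
acc = accuracy_score(y_test, y_pred)                        # e.g. 15/21 = 71.43% in the paper
print(cm)
print(f"accuracy = {acc:.2%}")
```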
Classification results of Naive Bayesian algorithm
Based on the principle of the Naive Bayes algorithm, MATLAB was used for programming, and the confusion matrix of the predicted results is shown in Figure 2.
Fig-2: Confusion matrix of Naive Bayesian algorithm classification results
Six class-A samples were correctly predicted as class A, while two and one class-A samples were wrongly predicted as class B and class C, respectively. Three class-B samples were correctly predicted as class B, and one class-B sample each was wrongly predicted as class A and class D. One class-C sample was correctly predicted as class C, while one and three class-C samples were wrongly predicted as class A and class D, respectively. One class-D sample was correctly predicted as class D, and one was wrongly predicted as class A. The accuracy of the Naive Bayes algorithm is therefore 52.38% (11/21).
Classification results of BP neural network
After repeatedly adjusting the parameters of the BP neural network, the prediction accuracy of the model was compared for hidden-layer sizes of 10, 11, 12, 13, 14 and 15 nodes, with a learning rate of 0.1. With a hidden layer of 11 nodes, the BP neural network gave the best early-warning effect for the fund payment risk. The confusion matrix of the results is shown in Figure 3. Six class-A samples were correctly predicted as class A, while two were wrongly predicted as class B. Three class-B samples were correctly predicted as class B, while three were wrongly predicted as class A and one as class D. Three class-C samples were correctly predicted as class C, while one each was wrongly predicted as class B and class D. The single class-D sample was correctly predicted as class D. The accuracy of the BP neural network is therefore 61.90% (13/21).
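A rough Python equivalent of this tuning sweep (the paper used MATLAB; MLPClassifier and the variables from the earlier split are stand-ins, so the exact numbers will differ):

```python
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler().fit(X_train)
Xtr, Xte = scaler.transform(X_train), scaler.transform(X_test)

# Sweep the hidden-layer size from 10 to 15 nodes at a fixed learning rate of 0.1,
# mirroring the tuning described in the text.
for n_hidden in range(10, 16):
    mlp = MLPClassifier(hidden_layer_sizes=(n_hidden,),
                        learning_rate_init=0.1,
                        max_iter=2000,
                        random_state=0)
    mlp.fit(Xtr, y_train)
    print(n_hidden, f"{mlp.score(Xte, y_test):.2%}")
```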
In conclusion, among the three classical classification algorithms, the C4.5 algorithm performs best, with an accuracy of 71.43%; the BP neural network is second, with an accuracy of 61.90%; and the Naive Bayes algorithm performs worst, with an accuracy of 52.38%.
Predicting national financial status of the insurance fund
As shown above, the C4.5 algorithm has the best early-warning effect. Based on this method, the payment risk of the national basic endowment insurance fund over the next 10 years can be assessed. The national early-warning indicator data for 2019-2028 are shown in Table 3. Among them, the unit contribution rate for 2019-2028 was set at 16%, in accordance with the relevant provisions of the "Comprehensive Scheme for Reducing Social Insurance Contribution Rates". The individual contribution rate and the bookkeeping rate were assumed to remain unchanged during this period. The values of the other warning indicators were obtained by linear trend extrapolation. Substituting the indicator data in Table 3 into the decision tree generated by the C4.5 algorithm yields the payment risk degree of China's basic endowment insurance fund for each of these 10 years, as shown in Table 4.
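A minimal sketch of linear trend extrapolation for a single indicator (np.polyfit on a placeholder historical series, not the paper's data):

```python
import numpy as np

# Historical values of one early-warning indicator (placeholder numbers).
years = np.arange(2009, 2019)
values = np.array([3.1, 3.3, 3.4, 3.6, 3.9, 4.0, 4.2, 4.5, 4.6, 4.8])

# Fit a first-order (linear) trend and extrapolate to 2019-2028.
slope, intercept = np.polyfit(years, values, deg=1)
future_years = np.arange(2019, 2029)
forecast = slope * future_years + intercept
print(dict(zip(future_years.tolist(), np.round(forecast, 2).tolist())))
```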
RESEARCH CONCLUSIONS
At present, the financial situation of the basic pension fund in some provinces of China has started to deteriorate, and with the deepening of population aging this trend will become more and more obvious. It is therefore urgent to provide early warning of the payment risk of China's basic pension fund. This paper examined the performance of the C4.5 algorithm, the Naive Bayes algorithm and the BP neural network in early warning of the payment risk of the basic endowment insurance fund. The C4.5 algorithm had the best prediction effect, with an accuracy of 71.43%; the BP neural network took second place, with an accuracy of 61.90%; and the Naive Bayes algorithm performed worst, with an accuracy of 52.38%. Therefore, the C4.5 algorithm is more suitable for early warning of the payment risk of China's basic endowment insurance fund.
MONITORING OXIDATIVE LEVELS OF FRYING OILS USING FTIR SPECTROSCOPY AND MULTIVARIATE CALIBRATION
Objective: To develop a rapid, reliable technique based on Fourier transform infrared-attenuated total reflectance (FTIR-ATR) spectroscopy in combination with multivariate calibrations for prediction of frying oil quality, namely acid value (AV), iodine value (IV) and peroxide value (PV). Methods: FTIR spectra were directly obtained and subjected to optimization and spectral treatments, including selection of the wavenumber region and spectral derivatization. The condition selected was based on its capability to provide the highest coefficient of determination (R2) and the lowest root mean square error of calibration (RMSEC) and root mean square error of prediction (RMSEP) for the relationship between actual values of AV, IV and PV, as determined using standard titrimetric methods, and predicted values, as determined by FTIR spectroscopy aided with multivariate calibrations. Results: Using the optimized conditions, FTIR spectroscopy combined with multivariate calibrations could be successfully used for prediction of AV, IV and PV. Acid value (AV) could be determined using the first-derivative spectra at wavenumbers of 1524-658 cm-1, with R2 of 0.973 (calibration model) and 0.932 (prediction model) and low RMSEC and RMSEP values. Iodine value (IV) was best predicted using principal component regression (PCR) with normal FTIR spectra at the combined wavenumber regions of 3076-2783 and 1811-656 cm-1. PCR using normal spectra at the combined wavenumber regions of 3076-2783 and 1811-656 cm-1 was also selected for prediction of PV.
INTRODUCTION
The quality of frying oil (FO) closely affects the quality of fried foods [1]. During frying, oil is subjected to heating at high temperatures for prolonged periods in the presence of water and air, which leads to complex chemical reactions including thermal degradation, hydrolysis, polymerization and oxidation [2,3]. The compounds resulting from these reactions not only include undesirable components but also affect the flavor of the food [4]. As a consequence, quality control of FO using rapid and reliable methods should be developed. Several parameters have been used for evaluation of FO, including acid value (AV) or free fatty acids (FFA), peroxide value (PV), iodine value (IV) and total polar compounds (TPC) [5,6], as well as its authenticity [7,8].
The standard analytical methods used for monitoring the quality of fats and oils, as published by the Association of Official Analytical Chemists (AOAC) and the International Organization for Standardization (ISO), are based on wet chemical techniques such as titrimetry, which involve solvents and reagents, or on sophisticated instruments such as gas chromatography, which are expensive, require lengthy sample preparation and depend on skilled analysts [9,10]. Thus, simple spectroscopic methods could be developed as alternatives to overcome the drawbacks of wet chemical methods and the complexity of sophisticated instruments. The results of FTIR spectroscopy have been reported to correlate well with those of wet chemical methods [11,12].
Owing to its fingerprinting capability [13], FTIR spectroscopy in combination with chemometrics has been developed for analysis of acid value [14,15], peroxide value of FO [16], iodine value [17] and anisidine value [18]. However, these oil parameters were determined individually, and there are few reports on the simultaneous determination of acid value (AV), peroxide value (PV) and iodine value (IV). Therefore, this research aimed to determine AV, PV and IV simultaneously using FTIR spectroscopy in combination with the multivariate calibrations principal component regression (PCR) and partial least squares regression (PLSR).
Materials
Frying oils and used frying oils were obtained from Samarinda, East Kalimantan, Indonesia. The other reagents and solvents used were of analytical grade and obtained from E. Merck (Darmstadt, Germany).
Determination of acid value
Acid value (AV) of FO samples was determined by titration according to the official method of the American Oil Chemists' Society (AOCS). A 10.0 g portion of FO sample was accurately weighed and dissolved in 100 ml of an ethanol-ethyl ether mixture (1:1 v/v). This solution was then titrated with standardized ethanolic KOH solution using phenolphthalein as indicator until a pink-violet colour was observed. AV was expressed as the number of mg of KOH needed to neutralize the free fatty acids in 1 g of FO sample and was calculated from the titration data.
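The equation itself did not survive extraction; the standard AOCS titrimetric expression, which is presumably the one applied here, is

$$\mathrm{AV} = \frac{56.1 \times V \times N}{m},$$

where V is the volume (ml) of KOH solution consumed, N its normality, m the sample mass (g) and 56.1 the molar mass of KOH (g/mol).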
Determination of iodine value
Iodine value (IV) was determined according to the AOCS titration method. A 1.0 g portion of FO sample was mixed with 20 ml of a cyclohexane-acetic acid mixture (1:1 v/v). The solution was treated with 25 ml of Wijs solution (iodine monochloride, ICl) and kept in the dark for 1 h. The mixture was then treated with 20 ml of saturated KI solution and 150 ml of distilled water, shaken homogeneously, and titrated with 0.1 N sodium thiosulphate using 1 ml of 0.05% starch indicator until the colour became clear. A blank titration was carried out under the same conditions without the FO sample. IV was calculated from Vb, the volume (in ml) of thiosulphate used for the blank titration, and Vs, the volume (in ml) of thiosulphate used for the sample titration.
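The corresponding equation was likewise lost in extraction; the standard AOCS Wijs expression, presumably the one used here, is

$$\mathrm{IV} = \frac{(V_b - V_s) \times N \times 12.69}{m},$$

where N is the normality of the thiosulphate solution, m the sample mass (g) and 12.69 the factor converting milliequivalents of thiosulphate to grams of iodine per 100 g of oil.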
Determination of peroxide value
Determination of peroxide value (PV) was performed using the titrimetric method according to ISO 3960:2001, as in Liang et al. (2013). Approximately 5.0 g of FO sample was placed into an iodine flask and dissolved in 50 ml of a glacial acetic acid-isooctane mixture (3:2, v/v). The solution was treated with 0.5 ml of saturated KI solution, shaken vigorously for 0.5 min and kept in the dark for another 3 min. The solution was then treated with 30 ml of distilled water and titrated with 0.01 N sodium thiosulphate using 1 ml of 0.05% starch indicator; the titration was stopped when the blue colour of the solution just disappeared. A blank titration was carried out under the same conditions without the FO sample. PV (in meq/kg) was calculated from Vs, the volume (in ml) of thiosulphate used for the sample titration, Vb, the volume (in ml) of thiosulphate used for the blank titration, and Nthio, the normality of the thiosulphate solution.

Measurement of FTIR spectra

FTIR spectra of FO samples were scanned using an FTIR spectrophotometer (Nicolet 6700, Thermo Nicolet Corp., Madison, WI) equipped with a deuterated triglycine sulphate (DTGS) detector and a KBr beam splitter. The spectrophotometer was connected to a computer running the OMNIC operating system version 7.0 (Thermo Nicolet, Madison, WI, USA). The sampling technique used was an attenuated total reflectance kit (ATR, Smart ARK, Thermo Electron Corp.) with a ZnSe crystal. FTIR spectra were recorded over the mid-infrared region, 4000-650 cm-1, in absorbance mode to facilitate quantitative analysis based on the Lambert-Beer law.

Chemometrics analysis
The chemometric analyses were carried out using the TQ Analyst software supplied with the FTIR instrument. Prediction of AV, IV and PV was performed with the multivariate calibrations partial least squares regression (PLSR) and principal component regression (PCR) [19,20]. PLSR is an effective multivariate calibration technique that combines the advantages of multiple linear regression and PCR; it yields a relatively simple model with strong predictive capability, making it well suited to FTIR spectral data [21,22].
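A hedged sketch of this calibration step (the paper used the TQ Analyst software; here scikit-learn is used, and the spectra matrix and reference values are synthetic placeholders):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Placeholder data: rows are FTIR spectra (absorbance at selected wavenumbers),
# y holds the corresponding titrimetric reference values (e.g. acid value).
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 450))
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=60)

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "PLSR": PLSRegression(n_components=5),
    "PCR": make_pipeline(PCA(n_components=5), LinearRegression()),
}
for name, model in models.items():
    model.fit(X_cal, y_cal)
    pred_cal = np.ravel(model.predict(X_cal))
    pred_val = np.ravel(model.predict(X_val))
    rmsec = mean_squared_error(y_cal, pred_cal) ** 0.5   # root mean square error of calibration
    rmsep = mean_squared_error(y_val, pred_val) ** 0.5   # root mean square error of prediction
    print(name, f"R2(cal)={r2_score(y_cal, pred_cal):.3f}",
          f"R2(val)={r2_score(y_val, pred_val):.3f}",
          f"RMSEC={rmsec:.3f}", f"RMSEP={rmsep:.3f}")
```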
RESULTS AND DISCUSSION
In this study, the quality of frying oils was evaluated by determining several parameters, namely acid value (AV), peroxide value (PV) and iodine value (IV), simultaneously using FTIR spectroscopy in combination with the multivariate calibrations PLSR and PCR. Acid value (AV), used to evaluate the fatty acids released, can be regarded as a precursor measure of lipid oxidation products: the higher the AV, the lower the quality of the oil [23].
Iodine value (IV) measures the degree of unsaturation (double bonds) in oils, so a decrease in IV indicates a loss of double bonds and hence oxidation of the oil [24,25]. Peroxide value (PV) can be used as an indicator of oil oxidation in its initial stages [1]. Table 1 lists the AV, IV and PV of the frying oils studied. The standard methods for analysis of AV, IV and PV are titrimetric methods, which are laborious and involve chemical reagents; therefore, in this study, FTIR spectroscopy in combination with multivariate calibrations (PLSR and PCR) was developed for prediction of these values, with the main advantages of simplicity, ease of analysis and the possibility of simultaneous analysis.

Fig. 1 shows the FTIR spectra of frying oils scanned over the mid-infrared region, which exhibit the characteristic peaks and shoulders of triglycerides. Each peak and shoulder corresponds to a functional group responsible for IR absorption; assignments for each wavenumber can be found in [6,7,12]. The peak at 3007 cm-1 is due to the cis =C-H stretching vibration, and the peaks at 2953 and 2922 cm-1 correspond to the asymmetric stretching vibrations of -CH3 and -CH2-, respectively. The presence of carbonyl groups is confirmed by the peak at 1743 cm-1, and the peak at 1654 cm-1 is due to the C=C stretching vibration. The presence of -CH2 and -CH3 is also confirmed by bending vibrations at 1462 and 1377 cm-1, respectively, while the peaks at 1160, 1117 and 1098 cm-1 arise from C-O stretching of the ester groups.

Table 2 compiles the performance of FTIR spectroscopy and multivariate calibrations for prediction of acid value (AV). Several wavenumber regions and spectral treatments (normal, first derivative and second derivative) were evaluated to obtain the best condition for the prediction. Spectral derivatization is applied to resolve overlapping peaks, but it can reduce sensitivity compared with normal spectra. Owing to its capability to provide the highest coefficient of determination (R2) in both calibration and validation models and the lowest root mean square errors of calibration (RMSEC) and prediction (RMSEP), the first-derivative FTIR spectra with PCR at wavenumbers of 1524-658 cm-1 were selected for prediction of AV in frying oil. The R2 values obtained were 0.973 (calibration) and 0.932 (prediction or validation), with RMSEC and RMSEP of 0.121 and 0.253, respectively. The high R2 and low RMSEC and RMSEP values indicate good accuracy and precision of the model. Fig. 2 shows the correlation between actual AV values determined using the AOCS method and FTIR-predicted values, along with a residual analysis (the difference between actual and predicted values). The residuals fall both above and below zero, indicating that no systematic errors occurred during modelling.

Finally, IV was best predicted using PCR with normal FTIR spectra at the combined wavenumber regions of 3076-2783 and 1811-656 cm-1. PCR using normal spectra at the same combined wavenumber regions was also selected for prediction of PV. Fig. 3 and Fig. 4 show the correlations between the actual IV and PV values determined using the AOCS methods and the FTIR-predicted values. The high R2 values and low RMSEC and RMSEP values indicate that FTIR spectroscopy is an accurate and precise model for the prediction of IV and PV and could be used as an alternative to the standard titrimetric methods.

Fig. 3: The correlation between actual values of iodine value and FTIR predicted values
GRANT INFORMATION

CONCLUSION

The high R2 values and low RMSEC and RMSEP values indicate that FTIR spectroscopy in combination with PCR is an accurate and precise method for the prediction of AV, IV and PV, and could be used as an alternative to the standard titrimetric methods.
FAst Segmentation Through SURface Fairing (FASTSURF): A novel semi-automatic hippocampus segmentation method
Objective The objective is to present a proof-of-concept of a semi-automatic method to reduce hippocampus segmentation time on magnetic resonance images (MRI). Materials and methods FAst Segmentation Through SURface Fairing (FASTSURF) is based on a surface fairing technique which reconstructs the hippocampus from sparse delineations. To validate FASTSURF, simulations were performed in which sparse delineations extracted from full manual segmentations served as input. On three different datasets with different diagnostic groups, FASTSURF hippocampi were compared to the original segmentations using Jaccard overlap indices and percentage volume differences (PVD). In one data set for which back-to-back scans were available, unbiased estimates of overlap and PVD were obtained. Using longitudinal scans, we compared hippocampal atrophy rates measured by manual, FASTSURF and two automatic segmentations (FreeSurfer and FSL-FIRST). Results With only seven input contours, FASTSURF yielded mean Jaccard indices ranging from 72(±4.3)% to 83(±2.6)% and PVDs ranging from 0.02(±2.40)% to 3.2(±3.40)% across the three datasets. Slightly poorer results were obtained for the unbiased analysis, but the performance was still considerably better than both tested automatic methods with only five contours. Conclusions FASTSURF segmentations have high accuracy and require only a fraction of the delineation effort of fully manual segmentation. Atrophy rate quantification based on completely manual segmentation is well reproduced by FASTSURF. Therefore, FASTSURF is a promising tool to be implemented in clinical workflow, provided a future prospective validation confirms our findings.
Introduction
Hippocampus segmentation on structural magnetic resonance images (MRI) is used to monitor morphological hippocampal changes which occur in diseases like Alzheimer's disease (AD), depression, epilepsy, and schizophrenia [1][2][3][4]. Hippocampal volume change is therefore an important biomarker in the quantification of progressive neurodegenerative diseases such as AD or mild cognitive impairment (MCI) [5,6]. In the last few years, hippocampal delineation has also gained importance in radiotherapy during prophylactic cranial irradiation (PCI), which aims to avoid lung tumour spread to the brain while sparing the hippocampus to reduce neurotoxicity [7][8][9][10][11].
The hippocampus is a small archicortical brain structure which shows limited contrast on structural MRI scans because adjacent structures, such as the amygdala, caudate nucleus and the thalamus typically have similar intensity [12]. This makes hippocampus segmentation a difficult task, regardless of the degree of automation used. Manual segmentation requires extensive training and is labour intensive. Multiple methods have been developed to semiautomatically or fully automatically segment the hippocampus, most of which are discussed in a recent review study by Dill et al [13]. Automatic methods are usually based on deformable models, single-, multiple-or probabilistic-atlases, while semi-automatic methods also involve manual pre-or post-processing. According to Dill et al., the reasons why these methods are still not ready for routine clinical use include the sensitivity of automatic methods to the choice of (patient group dependent) atlases, the computational cost of multiple atlas registration, the lack of validation for different data sets, and the complexity of the required manual pre-and post-processing procedures [13].
For hippocampal volume measurements in clinical trials, manual delineation is usually the method of choice [32]. However, even manual segmentations are biased because the precise definition of the hippocampal region varies across laboratories, resulting in hippocampal volumes ranging from 2 to 5.3 cm3 in studies with different diagnostic groups and outlining protocols [33,34]. It is therefore of crucial importance that manual outlining protocols are standardized as much as possible. Different application areas have developed their own standards. Within neurology, an initiative has been taken to develop a harmonized hippocampal outlining protocol (HarP) by merging hippocampal boundary definitions from different outlining protocols [34][35][36]. Within radiotherapy, due to the integration of hippocampal avoidance treatment plans, another hippocampus outlining protocol has been developed by the Radiation Therapy Oncology Group (RTOG) [37]. These protocols differ in terms of the definitions of boundaries and the anatomical orientation of the images used for outlining. Manual segmentation protocols are mainly focussed on reproducibility and standardization, whereas delineation efficiency is largely ignored. Typically, it requires one to two hours to segment a complete hippocampus pair. With this study, we present a novel semiautomatic hippocampus segmentation method: FAst Segmentation Through SURface Fairing (FASTSURF). The method is based on mesh processing techniques, is computationally inexpensive and does not require a priori knowledge such as atlases or models. The underlying idea of FASTSURF is that the slice-to-slice changes of hippocampal cross-sections are generally small. Therefore, using certain smoothness constraints, the hippocampal shape can be reconstructed from a few manually delineated cross-sections. In this study, these few delineated cross-sections are simulated from full manual delineations. FASTSURF is then validated by comparing it to these fully manual segmentations, using different datasets from different diagnostic groups. Because the underlying principle is applicable to different outlining protocols, it is tested for the HarP and RTOG protocols and for a protocol from Jack et al. [38]. Finally, a comparison is made with automatically segmented hippocampi using FreeSurfer [12,15] and FSL-FIRST [14].

Data availability: Permission for MRI data and hippocampus delineations from dataset 1 will not be granted, because these are patient data from an ongoing phase III trial and the property of the National Cancer Institute - Antoni van Leeuwenhoek (NKI-AvL) hospital in Amsterdam, which cannot agree to release these data. Permission to use manual hippocampus segmentations from dataset 3 will also not be granted, because these are the property of the Radiology and Nuclear Medicine department, VU University Medical Center, which did not agree to release these data. Permission for these data may be granted by contacting Anne Verhagen (a.verhagen@vumc.nl) at the VU University Medical Centrum. All other relevant data are within the paper and its Supporting Information files.

Funding: Data collection and sharing for this project was funded by the Alzheimer's Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-).
Datasets and MRI acquisition
We used three different datasets to validate our method, one dataset with subjects from the Netherlands Cancer Institute-Antoni van Leeuwenhoek (NKI-AvL) hospital in Amsterdam, the Netherlands (Dataset 1, described below) and two different datasets from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (Datasets 2 and 3, described below). Datasets 2 and 3 used in the preparation of this article were obtained from the ADNI database (adni.loni.usc.edu). The ADNI was launched in 2003 as a public-private partnership, led by Principal Investigator Michael W. Weiner, MD.
2.1.1. Dataset 1. Dataset 1 is a subset of data from a multicentre phase III trial in which patients with small cell lung cancer (SCLC) receive either standard PCI treatment or PCI treatment with hippocampal avoidance (Clinical trials.gov identifier: NCT01780675). MRI data were anonymously accessed and collected at the NKI-AvL. The imaging protocol was the same as in the ADNI GO study. Sagittal 3D T1-weighted MRI were acquired with a magnetization prepared rapid acquisition gradient echo (MPRAGE) sequence using a 3T Philips Achieva with an eight channel head coil. For all MRIs, pixels in-plane were 1mm 2 with a slice thickness of 1.2mm. Data and hippocampus delineations of 12 patients who received PCI with hippocampal avoidance were collected.
2.1.2. Dataset 2. Dataset 2 was taken from the ADNI database with images and training labels of 135 subjects of different diagnostic groups, acquired with two different MRI scanner field strengths of 1.5T and 3T using various MRI scanner vendors (Philips, Siemens and GE). Sagittal 3D T1 weighted MPRAGE images were acquired for 44 healthy control (CTRL), 46 MCI and 45 AD subjects. In-plane pixel sizes ranged from 0.86mm to 1.25mm and slice thickness was 1.2mm. In [39] a detailed description of the imaging protocol is given.
2.1.3. Dataset 3.
The third dataset is the same ADNI dataset as was used in [31] and [40]. The dataset consists of 80 subjects: 20 CTRL, 40 MCI and 20 AD subjects. For each subject, four volumetric MRI scans were collected. Two MRI back-to-back (BTB) scans were acquired at baseline (BL-A and BL-B) and two MRI BTB scans one year later (M12-A and M12-B). The BTB scans were acquired in a single session with just a few seconds between acquisitions but processed independently. The BL scans were acquired between September 2005 and August 2007. Sagittal 3D T1-weighted MPRAGE images were acquired on 1.5T scanners from different vendors (Philips, Siemens and GE). The four scans for each subject were acquired with the same MRI scanner and protocol. In-plane pixel sizes ranged from 0.93mm to 1.2mm and slice thickness was 1.2mm. Images were not processed other than the default scanner corrections, and visual inspection of each scan ensured good quality. A more detailed description of the MRI acquisition can be found in [41].

Competing interests: HV has received research grants from Novartis, Teva, MerckSerono and Pfizer, and a speaker honorarium from Novartis, but these funders did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. All funds were paid directly to his institution. There are no patents, products in development or marketed products to declare. This does not alter our adherence to all the PLOS ONE policies on sharing data and materials.
Manual and automatic hippocampus segmentation
2.2.1. Manual hippocampus segmentation for dataset 1. The clinical Dataset 1 was delineated using the RTOG protocol for hippocampal sparing [37]. Using a rigid-body registration, MRIs were registered to treatment-planning CTs with 1mm slice thickness and in-plane pixel sizes varying between 0.6mm and 0.7mm. Hippocampi were delineated on these resliced axial MRI slices. The most inferior slice on which the hippocampus is delineated is defined to be the slice on which the temporal horn appears next to the lateral ventricle. Hippocampal grey matter is segmented from the anterior to the superior direction while avoiding the fimbria. The anterior boundary is defined by the temporal horn and the amygdala, and the medial boundary by the uncus. In the postero-cranial direction the medial boundary is formed by the lateral edge of the quadrigeminal cistern. On the last slices in the postero-cranial direction the hippocampus is located antero-medially to the atrium of the lateral ventricle, and hippocampus segmentation ends when the crux of the fornix emerges. The average number of slices on which the hippocampus was outlined is 21.1 (see Table 1).
Manual hippocampus segmentation for dataset 2.
Scans of dataset 2 were outlined using the EADC-ADNI Harmonized Protocol for Hippocampal Segmentation (HarP) described in [35] and segmentation files were obtained from the HarP project's website (http://www.hippocampal-protocol.net/). Briefly, MRIs were aligned along the anterior and posterior commissures of the brain (AC-PC line) by using a rigid body registration to the MNI ICBM152 template (International Consortium for Brain Mapping) with 1x1x1mm voxel dimensions and images were resampled with trilinear interpolation. The most posterior slice where the hippocampus is segmented is defined to be the slice on which a small ovoid grey matter mass is visible close to the lateral ventricle. The most anterior slice to outline the hippocampus is defined to be the slice on which the alveus can be seen below the amygdala. For detailed boundary descriptions and figures we refer the reader to the HarP literature [34][35][36].
Manual hippocampus segmentation for dataset 3.
Scans of (ADNI) Dataset 3 were segmented at the Image Analysis Center (IAC, VU University Medical Center (VUmc) Amsterdam) using a segmentation protocol from [38], previously described in [31,38,42]. For all subjects the BL MRI scans were reformatted in a plane perpendicular to the long axis of the left hippocampus resulting in a pseudo coronal orientation. Sinc interpolation was used, slice thickness was 2mm, and the original in-plane resolution was maintained. M12 scans were rigidly registered to BL scans, again using sinc interpolation. All hippocampi were segmented by a single well-trained expert of the IAC using in-house developed software (Show_Images 3.7.1.0). Following the IAC protocol, BL segmentations were shown alongside M12 scans when M12 scans were segmented. However, the technician was blinded to the diagnosis and BTB scans were given in random order. The hippocampal formation consists of the Ammon's horn, dentate gyrus, alveus and fimbria and the subiculum. When detecting the total length of the crux of the fornix the most posterior slice to outline the hippocampus can be seen. The inferior boundary is formed by the subiculum and the parahippocampal gyrus and the superior boundary by the CSF of the temporal horn and the alveus. The lateral border is defined by the CSF and the temporal horn and the alveus, while the medial border is defined by the CSF in the cisterna ambiens and the transverse fissure. The most anterior slice on which the hippocampus is outlined, is defined to be the slice on which the hippocampus appears alongside the amygdala and CSF appears on the medial side of the hippocampus.
FSL-FIRST hippocampus segmentation (only dataset 3).
FSL-FIRST is an automatic segmentation tool based on deformable models. Details are described in [43] and [14]. Briefly, with a set of manual segmented hippocampi from the Center for Morphometric Analysis (CMA), Massachusetts General Hospital (MGH) Boston, shape and appearance models were constructed. For this, a point distribution model was created using parameterized surface meshes created from the manual segmentations taking into account the intensity around the tissue border. To segment a new MRI, FSL-FIRST uses intensity values from the MRI and searches through linear combinations of shape variation modes to find the most probable shape. Before segmentation, FSL-FIRST performs a two-stage affine registration to MNI152 standard space at 1mm resolution. Then, by using FAST voxel-wise segmentation software [44] the hippocampus mesh is converted to a labelled image. We used FSL-FIRST v.5.0.4 with the script command run_first_all. The voxel-wise hippocampal labels produced by FSL-FIRST are in native MRI scan space.
FreeSurfer hippocampus segmentation (only dataset 3).
FreeSurfer automatic segmentation of subcortical structures involves multiple steps and is described in detail in [12]. First, MRI scans are transformed to a conformed 256x256x256 space with 1 mm3 voxels. FreeSurfer performs bias-field correction and intensity normalization, and strips the skull to transform an atlas to the brain. Voxels are assigned to subcortical structures using prior probabilistic intensity and tissue-class information.
To obtain FreeSurfers hippocampus segmentation FreeSurfer version 5.3 was used with the longitudinal stream for longitudinal data (Dataset 3) and cross-sectional stream for cross-sectional data. FreeSurfer's voxelwise hippocampal labels from the cross-sectional and longitudinal stream were converted back to the native MR image space using the procedure provided by FreeSurfer (mri_label2vol).
Like FSL-FIRST, FreeSurfer uses the CMA segmentation scheme for subcortical segmentation. The segmentation protocol can be found on their website (http://freesurfer.net/fswiki/ CMA). The substructures of this outlining protocol are similar to the substructures mentioned in the outlining protocol from [38] of dataset 3: dentate gyrus, cornu ammonis, subiculum, fimbria and alveus.
Surface reconstruction and volumetric analysis.
We converted all voxel-wise hippocampal labels to meshes using the marching cube algorithm. To reduce interpolation errors as much as possible, all volumes and overlap indices were computed from these meshes after applying the appropriate registration transformation as described previously in [40].
Using IBM SPSS Statistics for Windows v. 22 (IBM Corp., Armonk, NY), we performed a one-way repeated-measures ANOVA to determine volumetric differences in dataset 3 between the manual and automatic segmentation methods. A post hoc analysis was performed after Bonferroni correction.
FASTSURF
2.3.1. Theory. FASTSURF is based on sparse hippocampus contouring, with the missing contours computed automatically, under the constraint that contours of the most extreme slices of the hippocampus are available. We define a contour as a closed tracing of the hippocampus perimeter on a single slice. Delineated contours are connected by constructing a triangular mesh of which some nodes correspond to the delineated points and the remaining nodes move to intermediate positions determined by applying certain smoothness constraints. This technique is known as surface fairing [45]. A schematic representation of delineated and intermediate contours is given in Fig 1. The mesh so obtained can be considered as a graph in which every vertex is connected to a set of neighbours. Then, given the connectivity graph, the discrete Laplacian is defined as

$$\Delta v_n = \frac{1}{N_{\mathrm{Neighbours}}(v_n)} \sum_{v_m \in \mathrm{Neighbours}(v_n)} (v_m - v_n),$$

where the indices n and m refer to the mesh vertices and N_Neighbours(v_n) is the number of neighbours of vertex v_n. When all the edges are interpreted as springs with a fixed spring constant and a net force balance of zero is imposed on each vertex, both at known and unknown vertices, optimal vertex positions are obtained by setting

$$\Delta x = 0, \qquad \Delta y = 0, \qquad \Delta z = 0,$$

where x, y and z are vectors of the x-, y- and z-coordinates of all mesh vertices. Coordinates of the unknown intermediate vertices can be found by moving all known points to the right-hand side of these equations and solving the three sparse systems of equations, for which we used the iterative bi-conjugate gradient method [46]. Finding the intermediate vertices with these equations would lead to a surface of minimum area, or minimal surface, and no penalty is put on the increased curvature at the delineated points. When minimizing curvature instead of surface area, a thin-plate surface is obtained, requiring only a minor modification of the equations. Translating continuous curvature minimization functionals to a discrete triangle mesh [45] leads to the linear bi-Laplacian systems

$$\Delta^2 x = 0, \qquad \Delta^2 y = 0, \qquad \Delta^2 z = 0.$$

This approach has similarities to spline interpolation, in which continuity of a function and its derivatives is enforced at all edges and nodes and the interpolating triangles are curved. However, in our approach the triangles are flat and a numerical approximation of the minimum surface curvature is used, resulting in simpler and probably faster computations.
An example showing the difference between a Laplacian and bi-Laplacian solution is presented in Fig 2. In the remainder of this paper we use the term "FASTSURF segmentation" to denote sparse hippocampal outlines which were completed by solving the bi-Laplacian systems.
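A minimal Python sketch of this fairing step, using a direct sparse solve instead of the bi-conjugate gradient solver mentioned above; the mesh connectivity and coordinates are hypothetical toy inputs, not the authors' implementation:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def uniform_laplacian(edges, n_vertices):
    """Uniform graph Laplacian (umbrella operator): each undirected edge listed once."""
    rows, cols = zip(*edges)
    A = sp.coo_matrix((np.ones(len(edges)), (rows, cols)), shape=(n_vertices, n_vertices))
    A = (A + A.T).tocsr()                         # symmetric adjacency
    deg = np.asarray(A.sum(axis=1)).ravel()
    return sp.diags(1.0 / deg) @ A - sp.identity(n_vertices)

def fair_surface(edges, coords, known_mask):
    """Solve the bi-Laplacian system for the unknown vertex coordinates,
    keeping the delineated (known) vertices fixed."""
    n = coords.shape[0]
    L = uniform_laplacian(edges, n)
    B = (L @ L).tocsr()                           # bi-Laplacian (thin-plate smoothness)
    unknown = ~known_mask
    B_uu = B[unknown][:, unknown].tocsc()
    B_uk = B[unknown][:, known_mask]
    faired = coords.astype(float).copy()
    for d in range(coords.shape[1]):              # solve separately for x, y and z
        rhs = -B_uk @ coords[known_mask, d]
        faired[unknown, d] = spsolve(B_uu, rhs)
    return faired

# Toy usage: a 6-vertex path; the endpoints and vertex 3 are "delineated".
edges = [(i, i + 1) for i in range(5)]
coords = np.zeros((6, 3))
coords[0], coords[3], coords[5] = (0, 0, 0), (3, 1, 0), (5, 0, 0)
known = np.zeros(6, dtype=bool)
known[[0, 3, 5]] = True
print(fair_surface(edges, coords, known))
```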
Simulation of sparse delineation.
To demonstrate the proof of concept, we simulated sparse delineations to evaluate FASTSURF segmentation. Manually delineated hippocampus segmentations were converted to 3D meshes from which we extracted a number of contours at regular intervals. The contours were extracted in the same direction in which the hippocampus was segmented, i.e. for dataset 1 the contours were extracted in the axial direction and for datasets 2 and 3 in the (pseudo-)coronal direction. Then, we linearly interpolated a predefined number of points on each contour and replaced the original contour points with the interpolated ones, to obtain the same number of points equally distributed on each contour. Then, as a first approximation, contours were connected by straight lines, and the intermediate vertex positions were subsequently computed with the surface fairing procedure described above.
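One way the equal-spacing resampling of a delineated contour could be implemented (a sketch under the assumptions above, not the authors' code):

```python
import numpy as np

def resample_contour(points, n_points=100):
    """Resample a closed contour (K x 2 or K x 3 array of delineated points) to
    n_points equally spaced along its perimeter, using linear interpolation."""
    closed = np.vstack([points, points[:1]])                 # close the loop
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])            # arc length at each vertex
    targets = np.linspace(0.0, cum[-1], n_points, endpoint=False)
    return np.column_stack([
        np.interp(targets, cum, closed[:, d]) for d in range(points.shape[1])
    ])
```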
Comparison of FASTSURF segmentation to manual and automatic segmentation.
We used overlap indices and percentage volume difference measures to compare FASTSURF segmentation with completely manual hippocampus segmentation. The Jaccard index was computed directly from the surface meshes by adopting a fine regular grid enclosing the two surfaces and was approximated by

$$J \approx \frac{N_{A \cap B}}{N_{A \cup B}},$$

where N_{A∩B} and N_{A∪B} are the numbers of grid points inside the intersection and the union of both surfaces, respectively. The Jaccard index is directly related to the Dice overlap index (D = 2J/(J+1)). Hippocampus meshes from different MRI scans are generally in different spaces. Before computing overlap, we therefore performed a rigid-body co-registration of the BTB MRI scans with FSL-FLIRT [47,48] and applied the obtained registration parameters to the mesh points of the hippocampus meshes to bring them into the same space. The cross-sectional percentage volume difference (PVD) and the longitudinal percentage volume change were computed from the volumes V_A and V_B of the two objects being compared. For dataset 3 we obtained FSL-FIRST and FreeSurfer hippocampus segmentations and compared these segmentations to manual and FASTSURF segmentations. Using the longitudinal BTB scans' hippocampus segmentations of dataset 3, we computed atrophy rates as longitudinal percentage volume changes between the BL and M12 scans.
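A hedged sketch of these overlap and volume measures on a common voxel grid (the paper samples a fine grid enclosing the two surface meshes; here two binary masks on the same grid are assumed, and the PVD and volume-change formulas are plausible definitions rather than the paper's exact lost equations):

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Jaccard overlap of two boolean masks on the same grid: |A∩B| / |A∪B|."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union

def dice_from_jaccard(j):
    """The paper's conversion D = 2J / (J + 1)."""
    return 2 * j / (j + 1)

def pvd(vol_a, vol_b):
    """Cross-sectional percentage volume difference (one plausible signed definition)."""
    return 100.0 * (vol_a - vol_b) / (0.5 * (vol_a + vol_b))

def volume_change(vol_baseline, vol_followup):
    """Longitudinal percentage volume change relative to baseline (negative = atrophy);
    again a plausible definition, not necessarily the paper's exact equation."""
    return 100.0 * (vol_followup - vol_baseline) / vol_baseline

# Toy usage with two overlapping cubes on a 50^3 grid.
a = np.zeros((50, 50, 50), bool); a[10:30, 10:30, 10:30] = True
b = np.zeros((50, 50, 50), bool); b[12:32, 10:30, 10:30] = True
print(jaccard(a, b), dice_from_jaccard(jaccard(a, b)), pvd(a.sum(), b.sum()))
```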
When comparing FASTSURF segmentations to manually outlined hippocampus segmentations, results will be biased because the input contours of the simulated sparse delineation are taken from points very close to the fully outlined manual segmentations. Using the BTB scans of dataset 3, we overcome this bias by comparing independent manually outlined hippocampus segmentations from the A scans with FASTSURF segmentations from the B scans, and vice versa. Having A and B scans from both BL and M12, this comparison can be performed twice for each subject, which strengthens the statistical analysis. Using this comparison, we were also able to quantify the bias. Without the availability of real segmented sparse contours, we consider this comparison as an adequate unbiased test of our method's performance. In the remainder of this manuscript we call this "robustness analysis". The robustness analysis was performed for both manually and automatically segmented hippocampi. An unbiased atrophy analysis could not be performed with the manual segmentations of this dataset, because the hippocampi on the M12-A and M12-B scans were segmented alongside the corresponding scans and segmentations of the BL time point, i.e. BL-A and BL-B respectively, to determine longitudinal volume change. Therefore, the A and B scans cannot be fairly interchanged for this type of analysis. Agreement, robustness and atrophy comparisons are illustrated in Fig 3 with coloured 3D meshes representing manual and FASTSURF segmentations from different time-points.
All measurements were performed per group (CTRL, MCI and AD). Furthermore, we tested FASTSURF using different numbers of contours, with a minimum of four contours. We aimed to reduce the number of contours at least by half; thus, for dataset 1 the number of contours used for hippocampus reconstruction ranged from 4-10, for dataset 2 from 4-18 and for dataset 3 from 4-10. An example using FASTSURF with different numbers of contours is presented in Fig 4.

2.3.4. Parameter tuning. Parameter refinement and bug-testing for FASTSURF were performed on the hippocampal segmentations of 10 randomly chosen MCI subjects from Dataset 3, using both BTB scans. These 10 subjects' hippocampal segmentations were excluded from our final analysis. We extracted 10 contours from these subjects' segmentations and tested the effects of the number of intermediate contours and the number of points used in the triangulation step for each contour. Using the BTB scans' segmented hippocampi, we performed agreement and robustness analyses of FASTSURF segmentations against the manual hippocampal segmentations. Table 2 shows the results of optimizing the number of intermediate contours (using 50 points per contour) and Table 3 shows the results of optimizing the number of points on each contour. In both tables, means and standard deviations (STD) of the resulting Jaccard indices and PVDs are presented. Table 3 shows that Jaccard indices increase as a function of the number of points per contour, up to about 100 points. PVDs get slightly closer to zero with an increasing number of points per contour, but computational times also increase. Therefore, we chose to perform our final analysis with 100 points per contour and three intermediate contours.
Results
Hippocampal volumes for the specific groups are presented in Table 4, in which, for all datasets, left and right hippocampal volumes were grouped together, and for dataset 3 hippocampal volumes from all time-points were grouped together. Because of the violation of sphericity, the univariate repeated-measures ANOVA was Greenhouse-Geisser corrected. Mean hippocampal volumes showed a significant dependence on method (BL left p = 0.000459, BL right p = 1.4E-10, M12 left p = 0.000002, M12 right p = 6.3E-14). The post hoc analysis showed that manual BL left and right volumes did not significantly differ from FreeSurfer's hippocampal volumes (p = 0.341 and p = 0.070), but they were significantly different from FSL-FIRST volumes (p = 0.000139, ...).

Table 2. From 10 randomly chosen MCI subjects' BTB hippocampus segmentations, 10 contours were extracted to simulate delineations and the number of intermediate contours between subsequent delineation simulations was varied. Agreement and robustness were determined as described in the main text.

The volumes differ between datasets due to different operational procedures and protocols. For instance, hippocampi outlined on the resampled MRI of dataset 2 generally have more contours than hippocampi from the other datasets, and hippocampi from dataset 1 are outlined in the axial direction. Fig 5 illustrates these differences by presenting surface renderings of one example from each dataset for manual and FASTSURF hippocampi using seven contours.
Results for dataset 1
Hippocampi in dataset 1 were outlined using the RTOG protocol, and FASTSURF segmentations were generated using 4 to 10 contours. Jaccard indices and PVDs are plotted as boxplots in Fig 6; S1 Table lists the corresponding means and standard deviations. As expected, with an increasing number of contours the Jaccard indices increase and the PVDs approach zero. It should be noted, ignoring the bias in these results for now, that with only five contours a Jaccard index higher than 0.67 (equivalent to a Dice overlap of 0.8) is reached, which is considered good accuracy for a small structure such as the hippocampus [12,13]. PVDs for six or more contours are relatively consistent. Five to six contours would mean a theoretical time reduction to approximately one fourth of the original time needed, considering that the mean number of hippocampal contours for this dataset is ~21.
Results for dataset 2
For dataset 2 we performed a similar analysis separately for each patient group. Fig 7 shows overlap indices and PVDs of FASTSURF and manual segmentations per group as a function of the number of input contours. For enhanced visibility, we scaled the PVD boxplot, cutting off larger outliers for four to six contours, but all means and standard deviations can be found in the S2 Table. With eight or more contours, Jaccard indices above 0.67 and relatively low PVDs were obtained. In this dataset, which uses the HarP protocol for segmentation, the mean number of hippocampal contours is ~37, meaning that eight or nine contours would reduce the outlining time to approximately one fourth of the full outlining time, comparable to dataset 1. From the Jaccard indices of Fig 7 it can be seen that the MCI group has slightly lower Jaccard indices than the CTRL group and the AD group has slightly lower indices than the MCI group. Overlap indices tend to be lower for smaller volumes. To determine to what extent the decrease in Jaccard indices in Fig 7 is a volume effect, we plotted the volumes of the manual segmentations against the observed Jaccard indices in Fig 8. In the same plot, stacked histograms are shown to illustrate the frequencies of volumes in specific groups. From the scatter plot it can be observed that Jaccard indices increase with hippocampal volume and that all three patient groups behave identically, i.e. that the volume difference drives the difference in Jaccard index.
Results for dataset 3
For dataset 3 we obtained 280 hippocampus segmentations for 70 subjects with 4 MRIs at different time-points. Data of 10 MCI subjects were used for algorithm optimization and were therefore excluded from this analysis. We performed agreement (biased), robustness (unbiased) and atrophy (biased) analyses to assess FASTSURF's performance. Fig 9 shows the biased Jaccard indices and PVDs comparing manual segmentations of the BL scans with the corresponding FASTSURF segmentations for each diagnostic group. In both boxplots, left and right hippocampus segmentations were grouped together. In the right part of each panel, the results for the automatic methods are shown. One can observe that FASTSURF segmentations with only five contours agree better with manual segmentation than the fully automatic methods, and with six contours PVDs are consistently close to the zero line. In the unbiased robustness analysis, PVDs are likewise centred around zero for six contours and more, and FASTSURF with only five contours still performs better than the tested automatic methods. Also, Jaccard indices and PVDs for manual BTB hippocampus segmentations are presented, indicating the reproducibility of the manual observer. Manual hippocampus segmentation is often regarded as the "gold standard" [34,49]; thus manual outline reproducibility represents a desirable level of accuracy to be reached. In this study design, manual outline reproducibility is the maximum level of accuracy that can be reached with FASTSURF, because we extract contours from manual segmentations and FASTSURF segmentations follow the shape of these contours. Similar boxplots were obtained for the robustness analysis. With six or more contours, Jaccard indices and PVDs are relatively consistent; six contours would theoretically reduce segmentation time to approximately one third, considering that the mean number of outlined contours for this dataset is ~20.
In Fig 11 three scatter plots show the correlation of hippocampal atrophy rates as determined by manual segmentations and FASTSURF using 4, 7 and 10 contours for the A scans' hippocampi. Correlations (R 2 ) for other numbers of input contours are given in Table 5. The last three lines in Table 5 present analogous correlations comparing atrophy measurements based on manual and FSL-FIRST, manual and FreeSurfer, and finally manually determined atrophy using A and the B scans.
The correlation expectedly increased with an increasing number of contours. Atrophy rates derived from FASTSURF correlated consistently better with manually measured atrophy rates than atrophy rate measurements based on either automatic segmentation method. Even though this comparison is biased towards FASTSURF, the difference in R2 between automatic segmentation and FASTSURF is much larger than the estimated bias reported above. Similar results were obtained when using B-scans instead of A-scans.
Discussion
This study was performed to show the proof of concept of a novel semi-automatic hippocampus segmentation method (FASTSURF) which can substantially reduce segmentation time while maintaining high accuracy.
The novelty of FASTSURF is that it is entirely based on mesh processing procedures, i.e. image intensity, structural shape information or atlases are not needed. Therefore, we believe that FASTSURF is less prone to image noise or artefacts compared to intensity-based methods. Furthermore, the completion of a hippocampus given a sparse set of contours is computationally inexpensive, and hippocampi are reconstructed within a second. The hippocampus is a thin seahorse-shaped structure which has geometrically more variation in shape than other subcortical brain structures or other soft tissue structures in the body. Since FASTSURF does not require specific anatomical a priori knowledge other than smoothness, we expect that FASTSURF can also be used to outline different anatomical regions with similar or even better accuracy, depending on the shape of the structure. Using simulated input extracted from different datasets, we quantified the agreement with manual hippocampus segmentation by the Jaccard index and PVD measures. With FASTSURF we reached good accuracy, with a Jaccard index higher than 0.67 (equivalent to a Dice overlap of 0.8), by using only five contours for dataset 1 (μ = 0.75±0.035), seven contours for all groups in dataset 2 (μ CTRL = 0.76±0.025, μ MCI = 0.74±0.034, μ AD = 0.72±0.043) and five contours for all groups in dataset 3 (biased: μ CTRL = 0.78±0.030, μ MCI = 0.77±0.033, μ AD = 0.76±0.026; unbiased: μ CTRL = 0.73±0.033, μ MCI = 0.73±0.035, μ AD = 0.72±0.031). Furthermore, as can be seen from the Jaccard indices for dataset 3, the agreement with manual segmentation was considerably better than that of both tested automatic methods with only five contours, for both biased and unbiased comparisons. Mean PVDs with five contours still seem to be quite high, ranging from 2.40(±3.67)% to 8.20(±3.71)% across data sets, but they improve considerably with more contours. From Table 5, it is evident that atrophy measurement using FASTSURF agrees more closely with atrophy derived from manual outlines than atrophy determined by either automatic segmentation method. Visual inspection of Fig 11 and Table 5 suggests that using FASTSURF hippocampus segmentations with seven to ten input contours is sufficient, with R2 values ranging from 0.75-0.85. Therefore, if this type of outlining protocol were used, we recommend the use of seven contours as a practical compromise between accuracy and delineation time.
Most of our comparisons show very promising results in terms of accuracy of volume, Jaccard index and atrophy, but for part of the data sets they are biased. However, the unbiased robustness analysis performed with dataset 3 confirmed that FASTSURF segmentations agree better with manual segmentations than both automatic segmentation methods. Good and consistent overlap indices and PVDs were obtained by using six or more contours; our atrophy measurements suggest the need for seven or more contours. The robustness analysis indicates that slight variations of the contour outlines do not affect the performance of the reconstruction method and that the bias is small. Therefore, our results suggest that these conclusions are equally valid for the data sets segmented with other protocols, but this needs to be confirmed in future studies.
The HarP protocol is the most modern and broadly accepted protocol in neuroscience, used to perform standardized and reproducible manual hippocampal segmentations [35]. In this study, HarP simulated contours were reconstructed with FASTSURF and compared to the manual counterpart segmentations. The results show high and consistent accuracy with eight or more contours; eight contours would reduce segmentation time to approximately one fourth. This comparison is biased, but the results of dataset 3 indicate that the bias is relatively small. We suggest that HarP can be combined with FASTSURF with minimal loss of accuracy, but this needs to be validated in future studies. Therefore, we conclude that FASTSURF would be very useful for efficient and reproducible hippocampus outlining. In radiotherapy, after delineating the hippocampus, a 5mm margin is placed around the hippocampus to determine the region for dose sparing [10]. With FASTSURF we obtained high overlap results for the hippocampi of dataset 1 with only five contours, indicating that this method can possibly be used for delineation in hippocampal-sparing brain irradiation.
We emphasize that the completion of the hippocampus given a sparse delineation is computationally inexpensive and hippocampi are reconstructed within a second. Automatic segmentations, due to registration procedures of atlases, are usually computationally more expensive and it takes multiple minutes or hours to obtain a hippocampus segmentation. This leads to another advantage of FASTSURF because atlases, registration procedures, or parameter tweaking are not needed.
Compared to the literature, we obtained similar overlap and PVD results for both automatic methods in comparison to manual segmentation [12,14,[16][17][18][19][20][21][22][23][24][25][26]. Most of the literature mentions that automatic segmentation methods are comparable to manual hippocampus segmentation, i.e. they show similar hippocampal volume trends for diagnostic groups, but they still need to improve to become as good as the gold standard. Recent papers even suggested that FreeSurfer might be used clinically for specific applications [23,24]. We showed that with FASTSURF, segmentations are consistently closer to manual hippocampus segmentations than FreeSurfer and FSL-FIRST, without producing outliers. This suggests that FASTSURF is possibly closer to clinical implementation than automatic segmentations.
Comparison of FreeSurfer and FSL-FIRST with manual segmentations from dataset 3 might not be completely fair, because both automatic methods are trained with a different outlining protocol, from the Center for Morphometric Analysis (CMA). The ANOVA volume analysis also indicates an overall outlining protocol difference, with p-values lower than 0.005. However, with the post hoc ANOVA volume analysis we actually showed that BL left and right hippocampal volumes from FreeSurfer and manual segmentations were not significantly different (p = 0.341 and p = 0.070), but FSL-FIRST and FreeSurfer volumes were significantly different even though they were trained on the same outlining protocol (BL left: p = 0.070; BL right: p = 0.000009; M12 left: p = 0.030; M12 right: p = 0.000003). This indicates that at least on a volumetric level the outlining protocols are not very different. Extensive manual-automatic hippocampus segmentation analysis has been done previously; therefore we did not expand this outlining protocol investigation. Here, we merely demonstrate that FSL-FIRST and FreeSurfer hippocampus segmentations are less close to manual segmentations than FASTSURF segmentations, but for a completely unbiased comparison FreeSurfer and FSL-FIRST would have to be trained with the same outlining protocol.
Furthermore, it would be interesting to compare FASTSURF to other automatic segmentation methods, such as multi-atlas/template-based segmentation methods [50,51], patch-based segmentation methods [52], or modern deep learning based methods as they emerge. In terms of segmentation quality and speed, the patch-based method seems very promising. In future studies, multi-atlas/template-based segmentation methods could be trained and tested with the manual segmentations from dataset 2 or 3, and these methods could then be compared to FASTSURF segmentations. For now, the comparison to FSL-FIRST and FreeSurfer is the most relevant, because these are the most widely used and tested publicly available segmentation methods.
Regarding segmentation time reduction, we cannot predict exactly how much time an observer would save for hippocampal segmentation, because this is a simulation study. As a rough estimate, one can take the number of contours used for reconstruction, divide it by the mean total number of contours, and multiply it by an estimated time for a complete manual hippocampus segmentation. For example, if an expert rater takes ~2 h to segment the left and right hippocampus by outlining 36 slices, with our method the rater would only need ~30 min when outlining the hippocampus on 9 slices (see the sketch below). The optimal number of contours for accurate hippocampus reconstruction also depends on the desired level of accuracy. We think that with our method the number of contours can be reduced by at least half, if not by three quarters.
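A minimal sketch of this rough estimate is given below; the numbers are those of the worked example above (9 of ~36 slices outlined, ~2 h for a full manual segmentation), not measured timings.

```python
def estimated_sparse_time(n_contours_used, mean_total_contours, full_segmentation_time_h):
    """Rough estimate of manual outlining time when only a subset of contours is drawn."""
    return full_segmentation_time_h * n_contours_used / mean_total_contours

# Values from the worked example: 9 of ~36 slices outlined, ~2 h for a full segmentation.
print(estimated_sparse_time(9, 36, 2.0))  # ~0.5 h, i.e. roughly 30 minutes
```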
This study has two minor limitations. So far, only one contour on each slice is allowed to be outlined. This might not always be sufficient, because hippocampal atrophy can cause irregular hippocampal shapes leading to two or more contours per slice. Furthermore, if the hippocampus contains cavities that should be excluded from the hippocampal volume special precautions in the outlining software need to be implemented to account for such structures.
Another limitation of this study is that sparse segmentations were simulated from full manual segmentations. The present study was intended to demonstrate the proof of concept by providing initial validation. Future studies should produce true sparse delineations de novo, ideally including independent sparse delineations from multiple observers for a more complete validation. Furthermore, observers usually inspect neighbouring slices to outline the hippocampus. In theory, sparse segmentations could also be obtained by inspecting the neighbouring slices, which might slightly affect the delineation time.
FASTSURF is based on smooth interpolation and is therefore, in its present form, not suited to delineating structures with irregular shapes such as tumours. However, for smooth structures such as the amygdala, thalamus, putamen or caudate nucleus, FASTSURF might work as well as for the hippocampus. Furthermore, manually selecting and including additional contours at inflection and high-curvature points would most probably improve FASTSURF's accuracy for segmenting irregular shapes.
Conclusion
FASTSURF provides hippocampus outlines that are highly similar to fully manual segmentations and agree consistently better with manual segmentations than automatic segmentation methods (FSL-FIRST and FreeSurfer). Depending on its implementation and the associated workflow, FASTSURF can reduce expert observers' segmentation time by at least half. Because observers do not, in principle, need to be retrained and because the method is computationally inexpensive, it is expected to integrate easily into existing workflows. Future work should validate FASTSURF with sparse segmentations performed by expert raters, which could pave the way for clinical use of this method.
Supporting information
Modeling brain dynamics after tumor resection using The Virtual Brain
Brain tumor patients scheduled for tumor resection often face significant uncertainty, as the outcome of neurosurgery is difficult to predict at the individual patient level. Recently, computational modeling of brain activity using so-called brain network models has been introduced as a promising tool for this purpose. However, brain network models first have to be validated, before they can be used to predict brain dynamics. In prior work, we optimized individual brain network model parameters to maximize the fit with empirical brain activity. In this study, we extend this line of research by examining the stability of fitted parameters before and after tumor resection, and compare it with baseline parameter variability using data from healthy control subjects. Based on these findings, we perform the first “virtual neurosurgery” analyses to evaluate the potential of brain network modeling in predicting brain dynamics after tumor resection. We find that brain network model parameters are relatively stable over time in brain tumor patients who underwent tumor resection, compared with baseline variability in healthy control subjects. In addition, we identify several robust associations between individually optimized model parameters, structural network topology and cognitive performance from pre-to post-operative assessment. Concerning the virtual neurosurgery analyses, we obtain promising results in some patients, whereas the predictive accuracy of the currently applied model is poor in others. These findings reveal interesting avenues for future research, as well as important limitations that warrant further investigation.
Introduction
Many brain tumor patients undergoing neurosurgery face significant uncertainty regarding the outcome of surgery. Although average neurosurgical outcomes for patient cohorts can be predicted with a high degree of accuracy (Emblem et al., 2009, 2015; Senders et al., 2017), the heterogeneity of brain tumors complicates predictions on an individual patient level.
Following methodological advances, several studies have addressed this limitation by applying graph theoretical and machine learning approaches to infer neurosurgical outcome at the individual patient level (for a review see Senders et al., 2018). In particular, several studies have tried to find biomarkers that predict seizure freedom after epilepsy surgery (for example, Bonilha et al., 2013, 2015; He et al., 2017; Ji et al., 2015; Morgan et al., 2017; Munsell et al., 2015; Taylor et al., 2018; van Dellen et al., 2014). Others have evaluated machine learning strategies designed to predict survival in glioma (Emblem et al., 2009, 2015) and traumatic brain injury patients (Rughani et al., 2010). Furthermore, one study found that graph measures derived from the pre-surgical functional connectome of patients with temporal lobe epilepsy were able to predict post-surgical cognitive performance scores across different domains (Doucet et al., 2015).
Recently, brain network modeling has also been introduced as a promising tool to simulate neurosurgical outcome (Arsiwalla et al., 2015;Proix, Bartolomei, Guye, & Jirsa, 2017). Brain network modeling techniques implement dynamical models on individual structural brain connectivity networks to simulate subject-specific brain activity (Schirner, McIntosh, Jirsa, Deco, & Ritter, 2018). By virtually lesioning structural connectomes, brain network models may therefore be used as predictive tools to investigate the impact of diverse structural connectivity alterations on brain dynamics, including those purposefully induced by surgery.
For example, a study by Sinha and colleagues (Sinha et al., 2017) investigated surgical outcome in patients undergoing neurosurgery for refractory epilepsy. Specifically, they modeled seizure likelihood per region to identify a highly epileptogenic zone in each patient. According to model predictions, virtual resection of these regions with high seizure likelihood reduced the overall likelihood of seizures, which was confirmed by actual surgical outcomes in the majority of patients (81.3%). Moreover, in patients with poor predicted outcomes, alternative resection sites could be obtained from the model. Furthermore, it has been shown that large-scale brain network models can be used to predict the propagation zone of epileptic activity as determined by stereotactic EEG recordings and clinical expertise (Proix et al., 2017). Importantly, in a follow-up study, they were able to identify the most unstable pathways that support and allow the propagation of seizure activity (Olmi, Petkoski, Guye, Bartolomei, & Jirsa, 2019). Hence, results from this study suggest that selective removal of these unstable connections would be equally effective to render patients seizure free, compared with surgical resection of the entire epileptogenic zone.
The major advantage of brain network modeling is that it produces actual biophysically-oriented models of the brain that go beyond a simple black-box predictor of surgical outcome, potentially making it a useful tool to predict a rich variety of outcomes such as epilepsy status, cognitive performance, functional network integrity and survival. Brain network modeling may thus serve as an important complementary source of information to aid patients and physicians in the process of surgical and medical decision making, by providing estimates of successful and/or adverse outcomes. Furthermore, biologically inspired dynamical models may provide insights into the local dynamics underlying large-scale network topology in health and disease. As such, they may provide an entry point for understanding brain disorders as well as recovery processes after interventions at a causal mechanistic level.
In prior work (Aerts et al., 2018) we investigated brain dynamics before tumor resection in 25 brain tumor patients and 11 healthy control subjects using The Virtual Brain (TVB) (Sanz Leon et al., 2013); an open-source neuroinformatics platform that enables the construction, simulation and analysis of large-scale brain network models. In particular, we optimized model parameters of the Reduced Wong-Wang model (Deco et al., 2014) on an individual basis, after which we compared the fitted parameters between brain tumor patients and healthy control subjects. In addition, we assessed the relations between model parameters, structural network topology and cognitive performance. We found significantly improved prediction accuracy of individual functional connectivity when using individually optimized model parameters, indicating the importance of tuning the model parameters in a subject-specific manner. In addition, local model parameters differed between regions directly affected by a tumor, regions distant from a tumor, and regions in a healthy brain. Lastly, we identified several associations between model parameters, structural network topology and cognitive performance.
In this study, we extend this line of research by examining possible changes in optimized model parameters from pre-to post-operative assessment. To this end, we apply the same procedure as in the pre-operative case to the data acquired several months after each patient's surgery. To quantify a normal range of baseline variability over time, we also perform parameter optimization on data acquired from healthy control subjects at both time points. After examining the stability of fitted model parameters over time, we use this information to perform the first "virtual neurosurgery" analyses on glioma patients' pre-operative data, to evaluate the potential of brain network modeling to predict brain dynamics after tumor resection.
Participants
Patients for this study were recruited with the aim of longitudinal assessment. In particular, data were collected the day before each patient's tumor resection and again several months after surgery, on the day of patients' first clinical consultation at the hospital (mean: 7.9 months postoperative; range 5.2-10.7 months post-operative). Patients were included if they were diagnosed with a glioma or meningioma (Fisher, Schwartzbaum, Wrensch, & Wiemels, 2007). Both types of tumors are typically graded according to their malignancy, with grade I tumors being benign, and grade III (for meningioma) or IV (for glioma) being most malignant (Louis et al., 2007). Hereby, malignancy depends on the speed with which the disease evolves, the extent to which the tumor infiltrates healthy brain tissue, and chances of recurrence or progression to higher grades of malignancy.
Patients were recruited from Ghent University Hospital (Belgium) between May 2015 and October 2017. Patients were eligible if they (1) were at least 18 years old, (2) had a supratentorial meningioma (WHO grade I or II) or glioma (WHO grade II or III), (3) were able to complete neuropsychological testing, and (4) were medically approved to undergo MRI investigation. Primary caregivers of the patients were also asked to participate in the study to constitute a group of healthy control subjects that suffer from comparable emotional distress as the patients (Goebel, Von Harscher, & Mehdorn, 2011;Janda et al., 2007). All participants received detailed study information and gave written informed consent prior to study enrollment. This study was approved by the Ethics Committee of Ghent University Hospital.
MRI data acquisition and preprocessing
MRI sequence details and preprocessing procedures for the post-operative data are mostly identical to those that we used before to collect and preprocess the pre-operative data. All details are described in Aerts et al. (2018). In the following sections, we provide a summary of these procedures, as well as an overview of the minor modifications that were applied.
Preprocessing of T1-weighted anatomical MRI data
High-resolution anatomical images were processed using the default "recon-all" processing pipeline of FreeSurfer (http://surfer.nmr.mgh.harvard.edu), yielding a subject-specific parcellation of each participant's cortex into 68 regions (Desikan et al., 2006;Fischl et al., 2004). To account for lesion effects in the parcellation, some additional steps were performed depending on the specific tumor type. For meningioma tumors, that were completely resected during neurosurgery, few to no lesion effects were apparent in eight out of ten patients. For these patients, the default processing pipeline was applied. In the other two meningioma patients, residual edema and a resection cavity were observed, respectively. Therefore, we used a procedure similar to the one outlined in Solodkin et al. (2010). Specifically, we first produced an enantiomorphic filling of the lesioned area (Nachev, Coulthard, Jäger, Kennard, & Husain, 2008) using the BCBtoolkit (Foulon et al., 2018), after which the standard FreeSurfer processing pipeline was utilized. For the glioma patients, we used each patient's pre-operative parcellation scheme, after non-linear registration to their post-operative space (using FSL FNIRT; Andersson, Jenkinson, & Smith, 2007). All registration results were visually verified.
Functional MRI preprocessing
Resting-state fMRI data were preprocessed using FEAT (FMRI Expert Analysis Tool, version 6.00), part of FSL (FMRIB's Software Library; Jenkinson, Beckmann, Behrens, Woolrich, & Smith, 2012), comprising motion correction, slice-timing correction, non-brain removal, grand-mean intensity normalization and high-pass temporal filtering (100-second high-pass filter). Functional connectivity matrices were then constructed by mapping the FreeSurfer cortical parcellation schemes obtained in the previous step to each subject's functional MRI data, and calculating the Fisher's z-transformed Pearson correlation coefficient between all region-wise BOLD time series.
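For illustration only, this final step (independent of the FSL/FreeSurfer tooling itself) amounts to the following computation on region-averaged BOLD signals; the array shapes in the example are placeholders, not values from the study.

```python
import numpy as np

def functional_connectivity(bold):
    """bold: array of shape (n_regions, n_timepoints) with region-averaged BOLD signals.
    Returns the Fisher z-transformed Pearson correlation matrix."""
    r = np.corrcoef(bold)        # region-by-region Pearson correlations
    np.fill_diagonal(r, 0.0)     # avoid arctanh(1) = inf on the diagonal
    return np.arctanh(r)         # Fisher z-transform

# Example with simulated data: 68 Desikan-Killiany regions, 200 volumes.
fc = functional_connectivity(np.random.randn(68, 200))
```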
Diffusion MRI preprocessing
For preprocessing and construction of structural connectomes based on the diffusion MRI (dMRI) data, a processing pipeline was used combining FSL (FMRIB's Software Library; Jenkinson et al., 2012; version 5.0.9) and MRtrix3 (Tournier et al., 2019). Preprocessing steps included correction for various artifacts (noise (Veraart et al., 2016), Gibbs ringing (Kellner, Dhital, Kiselev, & Reisert, 2016), motion and eddy currents (Andersson & Sotiropoulos, 2016), susceptibility induced distortions (Andersson, Skare, & Ashburner, 2003) and bias field inhomogeneities (Zhang, Brady, & Smith, 2001)), registration of subjects' high-resolution anatomical images to diffusion space (Jenkinson, Bannister, Brady, & Smith, 2002;Jenkinson & Smith, 2001), and segmentation of the anatomical images into gray matter, white matter and cerebrospinal fluid (Zhang et al., 2001). Further, quantitative whole-brain probabilistic tractography was performed using MRtrix3 (Tournier et al., 2019), resulting in 7.5 million streamlines per subject (more details are available in Aerts et al., 2018). Structural connectivity (SC) matrices were then constructed by transforming each individual's FreeSurfer parcellation scheme to the diffusion MRI data and calculating the number of estimated streamlines between each pair of brain regions. Lastly, we thresholded the SC matrices and normalized structural connections with the same constant scalar as in the pre-operative analyses, to ensure all weights varied between 0 and 1 and were maximally comparable between pre-and postoperative assessment.
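A minimal sketch of the final matrix-construction step is given below; the threshold and scaling constant are placeholders, since their exact values are not stated in this summary. In the study, a single constant was reused across pre- and post-operative sessions to keep the weights comparable.

```python
import numpy as np

def normalize_sc(streamline_counts, threshold=5, scale=None):
    """streamline_counts: (68, 68) matrix of estimated streamline counts between regions.
    Applies an absolute threshold and rescales weights into [0, 1].
    Both the threshold and the scaling constant are placeholders here; the study
    reused one constant across sessions so pre- and post-operative matrices stay comparable."""
    sc = streamline_counts.astype(float).copy()
    sc[sc < threshold] = 0.0
    if scale is None:
        scale = sc.max()   # placeholder for the shared scaling constant
    return sc / scale
```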
Brain network modeling
Procedures for simulating large-scale brain dynamics and optimizing model parameters were also identical to those applied to the pre-operative data, as described in detail in Aerts et al. (2018). Briefly, local dynamics for each of the 68 cortical brain regions were simulated using Reduced Wong-Wang neural mass models (Deco et al., 2014), which faithfully approximate the mean dynamics of interacting populations of excitatory and inhibitory spiking neurons. Subsequently, neural mass models were coupled according to each subject's tractography-derived structural connectome to generate personalized virtual brain models (Deco et al., 2014;Ritter, Schirner, McIntosh, & Jirsa, 2013;Sanz Leon et al., 2013;Schirner et al., 2018).
To optimize the correspondence between empirical and simulated functional connectivity, subject-specific parameter space explorations were conducted in which the global scaling parameter (G) was varied (0.01 to 3 in steps of 0.015). This parameter rescales each subject's structural connectivity, which is given by relative values, to yield absolute interaction strengths. For each parameter set, resting-state blood-oxygen-level-dependent (BOLD) time series were generated. Subsequently, functional connectivity matrices were computed by calculating the Fisher's z-transformed Pearson correlation coefficient between all pairs of simulated BOLD time series. The parameter set that maximized the Pearson correlation between each individual's simulated and empirical functional connectivity matrix was then selected for further analyses.
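Conceptually, this parameter space exploration is a one-dimensional grid search over G. The sketch below illustrates only the selection logic; simulate_bold is a hypothetical stand-in for a TVB simulation of the Reduced Wong-Wang model and is not part of TVB's actual API. The same link-wise fit measure is also used later when evaluating the virtual-neurosurgery predictions.

```python
import numpy as np

def fc_from_bold(bold):
    """Fisher z-transformed FC matrix from (n_regions, n_timepoints) BOLD signals."""
    r = np.corrcoef(bold)
    np.fill_diagonal(r, 0.0)
    return np.arctanh(r)

def fc_fit(fc_sim, fc_emp):
    """Link-wise Pearson correlation over the upper triangle of two FC matrices."""
    iu = np.triu_indices_from(fc_emp, k=1)
    return np.corrcoef(fc_sim[iu], fc_emp[iu])[0, 1]

def explore_global_coupling(simulate_bold, sc, fc_emp):
    """Grid search over the global scaling parameter G.
    simulate_bold(sc, G) -> (n_regions, n_timepoints) BOLD array is a hypothetical
    stand-in for the TVB Reduced Wong-Wang simulation, not an actual TVB call."""
    grid = np.arange(0.01, 3.0 + 1e-9, 0.015)
    fits = [fc_fit(fc_from_bold(simulate_bold(sc, g)), fc_emp) for g in grid]
    best = int(np.argmax(fits))
    return grid[best], fits[best]
```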
In addition, inhibitory synaptic weights (Ji), which control the strength of connections from inhibitory to excitatory mass models within each large-scale region i, were automatically tuned in each iteration of the parameter space exploration, to clamp the average firing rate at 3 Hz for each excitatory mass model (Deco et al., 2014;Schirner et al., 2018). After simulations, the obtained local inhibitory connection strengths were corrected for their respective region size for further analyses, since the need for local inhibition to balance global excitation depends on the total connection strength a brain region has, which tightly correlates with region size (Aerts et al., 2018). Median Ji values (both corrected for region size as well as uncorrected) across the entire brain and across tumor and non-tumor regions in brain tumor patients were then computed per subject. Of note, delineation of tumor and non-tumor regions was based on the pre-operative data.
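The exact form of the region-size correction is not spelled out in this summary; one plausible implementation, shown purely as an assumption-laden sketch, residualizes the Ji values against region size before taking medians over tumor and non-tumor regions.

```python
import numpy as np

def region_size_corrected_ji(ji, region_size):
    """Correct inhibitory weights Ji for region size.
    Assumption: a simple linear residualization of Ji on region size; the study's
    exact correction procedure may differ."""
    X = np.column_stack([np.ones(len(region_size)), region_size])
    beta, *_ = np.linalg.lstsq(X, ji, rcond=None)
    return ji - X @ beta

def median_ji(ji_corrected, tumor_mask):
    """tumor_mask: boolean array marking regions overlapping the (pre-operative) tumor."""
    return {
        "whole_brain": np.median(ji_corrected),
        "tumor": np.median(ji_corrected[tumor_mask]),
        "non_tumor": np.median(ji_corrected[~tumor_mask]),
    }
```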
Graph analysis
Post-operative structural network topology was evaluated using the same graph metrics as those applied to the pre-operative structural connectomes (Aerts et al., 2018). Specifically, global efficiency, modularity and participation coefficient were computed using the Brain Connectivity Toolbox (see Rubinov & Sporns (2010) for more details and an in-depth discussion of graph metrics).
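As an illustrative sketch (not the exact pipeline used in the study), the same three metrics can be computed in Python with bctpy, the Python port of the Brain Connectivity Toolbox, assuming a weighted, undirected structural connectivity matrix:

```python
import numpy as np
import bct  # bctpy, Python port of the Brain Connectivity Toolbox

def structural_graph_metrics(sc):
    """sc: weighted, undirected structural connectivity matrix (e.g. 68 x 68)."""
    global_eff = bct.efficiency_wei(sc)             # global efficiency
    community, modularity = bct.modularity_und(sc)  # community partition and modularity Q
    participation = bct.participation_coef(sc, community)
    return global_eff, modularity, np.mean(participation)  # mean participation coefficient
```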
Neuropsychological testing
Cognitive performance of all participants was re-assessed after each patient's tumor resection using the Cambridge Neuropsychological Test Automated Battery (CANTAB®; Cambridge Cognition (2017); All rights reserved; http://www.cambridgecognition.com). The same cognitive tasks were administered as before surgery, again in random order to avoid sequence bias. In particular, the Rapid Visual Information Processing (RVP) task was used to assess sustained attention, the Spatial Span (SSP) task measured working memory capacity, the Reaction Time task (RTI) evaluated mental response speed, and the Stockings of Cambridge (SOC) task assessed planning accuracy.
Accounting for covariates of no immediate interest
Several factors can influence cognitive performance and graph metrics (see for example Bettus et al., 2010;Biswal et al., 2010;Harrison et al., 2008). Therefore, cognitive performance results were corrected for each participant's level of emotional distress, residual lesion size, age and sex. Likewise, graph metrics were corrected for each subject's level of emotional distress, residual lesion size, age, sex, handedness, motion during resting-state fMRI acquisition and intensity normalization factor used in dMRI preprocessing. In particular, on the day testing took place, emotional distress was measured using the State-Trait Anxiety Inventory (Spielberger, Gorsuch, Lushene, Vagg, & Jacobs, 1983;Van der Ploeg, 1982). Further, residual lesion volume was calculated as the number of 1 mm³ isotropic voxels in the mask that delineated residual lesion tissue, which was drawn manually on the anatomical T1-weighted MRI image. Lastly, handedness was measured using the Edinburgh Handedness Inventory (Oldfield, 1971).
We then constructed linear regression models for every outcome variable (sustained attention, working memory capacity, reaction time, and planning accuracy for cognitive performance; global efficiency, modularity and participation coefficient for graph theory metrics) as a function of these confounders. Residuals of these models were further transformed to z-scores for subsequent analyses using the pre-operative mean and standard deviation of the respective metric in the group of control subjects, for ease of interpretation.
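A minimal sketch of this residualization and z-scoring step is shown below; the confound matrix and the control-group reference values stand in for the variables listed above and carry no study data.

```python
import numpy as np

def residualize(y, confounds):
    """Regress an outcome (e.g. global efficiency) on confounds and return residuals.
    confounds: (n_subjects, n_confounds) array, e.g. distress, lesion size, age, sex."""
    X = np.column_stack([np.ones(len(y)), confounds])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def to_control_z(values, control_preop_values):
    """Z-score residuals using the pre-operative mean and SD of the control group."""
    mu = np.mean(control_preop_values)
    sd = np.std(control_preop_values, ddof=1)
    return (values - mu) / sd
```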
Statistical analyses
First, we compared post-operatively optimized model parameters, cognitive performance scores and graph metrics between glioma patients, meningioma patients and control participants. For these analyses, we used one-way analysis of variance (ANOVA) and Kruskal-Wallis rank sum tests, depending on whether or not the normality assumption held. Next, we computed difference scores between each participant's pre- and post-surgical model parameters, cognitive performance scores and network topology indices to evaluate whether changes over time were evident, using one-sample t-tests. In addition, group differences in the mean and variance of difference scores between pre- and post-operative assessment were examined using one-way ANOVA and Levene's test for equality of variances. Afterwards, post-surgical optimal model parameters were related to structural network topology and cognitive performance using linear regression. Likewise, difference scores between pre- and post-operatively optimized model parameters were compared with differences in cognitive performance scores and graph metrics over time. Statistical analyses were carried out with R version 3.5.3 (R Core Team, 2018).
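Although the analyses were run in R, equivalent tests are available in scipy.stats; the sketch below is purely illustrative and uses randomly generated placeholder scores, not study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical post-minus-pre difference scores per group (group sizes are placeholders).
diff_men, diff_gli, diff_con = rng.normal(size=11), rng.normal(size=7), rng.normal(size=11)

t, p_change = stats.ttest_1samp(np.concatenate([diff_men, diff_gli, diff_con]), popmean=0)
f, p_groups = stats.f_oneway(diff_men, diff_gli, diff_con)   # group differences in mean change
h, p_kw = stats.kruskal(diff_men, diff_gli, diff_con)        # non-parametric alternative
w, p_var = stats.levene(diff_men, diff_gli, diff_con)        # group differences in variance
```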
Virtual tumor resection
After examining the stability of fitted model parameters over time, this information was used to perform the first virtual neurosurgery analyses, to evaluate the potential of brain network modeling to predict brain dynamics after tumor resection. To this end, a procedure similar to the one described by Taylor and colleagues (Taylor et al., 2018) was adopted. In particular, each patient's actual surgery was mimicked by removing all streamlines from their pre-operative tractogram that intersect the resection mask that was retrospectively derived from the post-operative anatomical MRI data. Since standard tractography algorithms are currently unable to reliably reconstruct white matter streamlines within or in close proximity to tumorous tissue, a dedicated pipeline was developed for this second part of the study. This was of crucial importance to allow simulation of tumor resection procedures, since white matter tracts in the vicinity of the tumor have the highest probability of being removed during neurosurgery. These proof of concept analyses were performed on all glioma patients for which both pre-and post-operative data were available. Virtual neurosurgery was not performed on data from meningioma patients, as these tumors generally do not infiltrate healthy brain tissue and therefore are not represented within the tractogram or structural connectivity matrix. Figure 1 illustrates the procedure used to predict post-surgical brain dynamics after virtual tumor resection, using brain network modeling. Figure 1. Graphical overview of the procedure to evaluate the potential of brain network modeling after virtual neurosurgery to predict post-surgical brain dynamics. A: First, each patient's pre-operative structural connectome is reconstructed using dMRI whole-brain streamlines tractography, after which actual surgery is mimicked by removing all streamlines that intersect the resection mask that was retrospectively derived from the post-operative anatomical MRI data. Additionally, pre-operatively optimized model parameters, fitted to the subject's pre-operative functional connectivity, are supplied. B: Subsequently, large-scale brain dynamics are simulated on patients' virtually lesioned structural connectome. C: Resulting brain dynamics are transformed to a functional connectivity matrix, representing brain dynamics after virtual neurosurgery. D: Simulated brain dynamics are compared to patients' empirical FC, derived from their post-operative fMRI data that served as ground truth.
Based on the resulting white matter fiber orientation distributions (FODs), probabilistic streamlines tractography was performed, using an FOD amplitude threshold of 0.07 (Tournier, Calamante, & Connelly, 2010). Although white matter FODs could be estimated using SS3T-CSD within regions infiltrated by a tumor (Aerts et al., 2019), resulting white matter FOD amplitudes were substantially smaller in tumor regions compared to the rest of the brain. While this likely reflects the smaller portion of space taken up by axons (due to infiltrating tumor tissue) and/or damage to white matter tracts, it does pose a practical challenge to tractography algorithms, which rely on the aforementioned amplitude threshold to determine where and how far tractography may proceed. As detailed in Aerts et al. (2019), we overcame this by gradually reducing the FOD amplitude threshold close to and even more so within the tumor, based on its prior segmentation. Anatomical constraints were also imposed to the generation of streamlines, informed by a segmented tissue image (Smith, Tournier, Calamante, & Connelly, 2012). Since this particular segmentation strategy misclassified tumorous tissue mostly as gray matter, which is then enforced to be an endpoint of white matter streamlines during tractography, a modified segmented tissue image was provided. Specifically, the tumor mask was filled with undamaged tissue from homologous regions within the contralateral hemisphere using a non-linear registration approach (Foulon et al., 2018;Nachev et al., 2008), providing an approximation of the patient's brain anatomy as if the tumor were absent. This "restored" anatomical image was then segmented and used to generate 30 million streamlines connecting pairs of brain regions.
All reconstructed streamlines were filtered to 7.5 million tracts using SIFT (Smith, Tournier, Calamante, & Connelly, 2013) to obtain quantitative streamline counts. Structural connectivity matrices were then constructed by calculating the number of estimated streamlines between any two Desikan-Killiany cortical brain regions, and normalizing all connectivity weights with a (single) constant scalar across subjects to ensure all weights varied between 0 and 1.
Identifying optimal model parameters
Using the newly constructed pre-surgical structural connectome, we redid subject-specific parameter space explorations to identify each patient's optimal global coupling value that maximized the correspondence between pre-surgical empirical and simulated functional connectivity. To this end, we adopted the procedure as described in section 2.3 ("Brain network modeling").
Virtual lesioning of the structural connectome
To mimic each patient's actual neurosurgical procedure retrospectively, tumor resection cavity maps were drawn manually under the supervision of an expert neuroradiologist (E.A.) based on the patient's post-operative anatomical T1-weighted MRI data. These resection masks were then overlaid onto the patient's pre-surgical tractogram using non-linear registration, and all connections that intersected the resection cavity mask were removed, similar to the approach outlined in Taylor et al. (2018).
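A minimal sketch of this filtering step is shown below. It assumes that streamline coordinates have already been transformed into the voxel space of the binary resection mask (i.e., after the non-linear registration step); in practice the operation was applied to the full pre-surgical tractogram.

```python
import numpy as np

def remove_resected_streamlines(streamlines, resection_mask):
    """streamlines: list of (n_points, 3) arrays of voxel coordinates, already registered
    to the space of the binary resection mask (a 3D numpy array).
    Returns only the streamlines that never enter the resection cavity."""
    kept = []
    for sl in streamlines:
        idx = np.round(sl).astype(int)
        idx = np.clip(idx, 0, np.array(resection_mask.shape) - 1)  # guard against edge effects
        if not resection_mask[idx[:, 0], idx[:, 1], idx[:, 2]].any():
            kept.append(sl)
    return kept
```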
Simulating post-surgical brain dynamics
Using the patient's pre-surgically optimized model parameters and virtually lesioned structural connectome, we then simulated large-scale brain dynamics with the Reduced Wong-Wang model (Deco et al., 2014). Finally, each patient's predicted functional connectome was compared to their post-operative empirical functional connectome, which served as ground truth, by means of link-wise Pearson correlation.
Data and code accessibility
All data and code used for this study is freely available. The data is publicly available at the OpenNeuro website (https://openneuro.org) and on the European Network for Brain Imaging of Tumours (ENBIT) repository (https://www.enbit.ac.uk) under the names "BTC_preop" and "BTC_postop" for the pre-and post-operative data, respectively. The optimized TVB C code can be found at https://github.com/BrainModes/The-Hybrid-Virtual-Brain and all scripts for postprocessing can be found at https://github.com/haerts/The-Virtual-Brain-Tumor-free-Patient.
Stability of individual model parameters over time
In the first part of this study we examine if and how individually optimized model parameters change after tumor resection. In order to identify normal ranges of variability over time, we perform the same analyses in a group of healthy control subjects. Results are summarized in Table 2 and Figure 2. In particular, the subplots on the left of Figure 2 show the difference scores between pre-and postoperatively optimized model parameters in meningioma patients, glioma patients and control subjects. Differences over time in fitted parameters are shown as deviations from the horizontal line drawn around zero, with positive scores indicating increases in post-operative relative to preoperative measures, whereas negative scores correspond to decreases after surgery compared with pre-operative levels. The subplots on the right of Figure 2 depict pre-versus post-operatively optimized model parameters at the individual level, shape-and color-coded by group. Here, differences over time in individuals' fitted parameters are shown as deviations from the main diagonal, with scores above the diagonal indicating increases in post-operative relative to preoperative measures, whereas measures below the main diagonal correspond to decreases after surgery compared with pre-operative levels.
Specifically, Figure 2A and 2B depict the median local inhibitory connection strengths across tumor regions in meningioma and glioma patients, and across the entire brain in healthy controls (after correcting for region size). Changes over time in local inhibitory connection strengths are not statistically significant, nor do they differ significantly between groups. Nevertheless, local inhibitory connection strengths are much lower and more variable in tumor regions compared to those in healthy brains, before as well as after tumor resection.
Further, results reveal differences in median local inhibitory connection strengths (corrected for region size) across non-tumor regions in tumor patients relative to those in healthy brains ( Figure 2C and 2D). This is the case both before patients' tumor resection, as well as after surgery. Moreover, we observe an increase in participants' median local inhibitory connection strengths after surgery compared to their pre-surgical levels, although changes over time do not differ significantly between groups.
Of note, without correcting for region size, we also find significant differences in median local inhibitory connection strengths between tumor and healthy brain regions. However, without correction for region size, brain tumor patients show higher levels of feedback inhibition compared to controls, whereas the opposite trend was found when correcting for region size (see Supplementary Figure 1A and 1B). Changes over time in local inhibitory connection strengths are also not statistically significant, and do not differ significantly between groups. In contrast, no significant differences are apparent in median local inhibitory connection strengths between nontumor and healthy brain regions without correction for region size (Supplementary Figure 1C and 1D).
Finally, we find no statistically significant group differences nor effects over time in the global scaling parameter ( Figure 2E and 2F).
Stability of structural network topology and cognitive performance over time
Before relating optimized model parameters to cognitive performance scores and structural network topology metrics, we examine whether changes in these predictor variables can be observed over time, or whether post-operative group differences are apparent. Statistical results are summarized in Table 3, and Supplementary Figure 2 and 3 provide a visual overview of pre-and post-operative cognitive performance and graph metrics (z-scores), respectively.
With regard to cognitive performance, results from prior work (Aerts et al., 2018) showed no statistically significant group differences before surgery in any of the cognitive domains assessed. Likewise, we find no significant changes from pre-to post-operative assessment. Consequently, no group differences are apparent in cognitive performance after patients' tumor resection.
Concerning structural network topology, we find no significant changes over time. Furthermore, no statistically significant group differences are found in these post-operative graph metrics, despite increased levels of participation coefficient in glioma patients before surgery (Aerts et al., 2018).

Table 3. Statistical results of analyses on differences over time and between groups of cognitive performance and structural network topology scores.
Robust associations between modeling parameters, structural network topology and cognitive performance
In the next step, we investigate the relations between the individually optimized modeling parameters on the one hand, and structural network topology and cognitive performance on the other hand. Two out of four statistically significant associations that were found pre-operatively could be replicated.
In particular, a negative correlation between global efficiency of the structural network and the global scaling factor is replicated after patients' tumor resection, as shown in Figure 3A (t = -2.77, p = 0.0108, η² = 0.23; t = -2.16, p = 0.0420; η² = 0.17 after removal of 1 outlier). Likewise, individual differences between pre-and post-surgical optimization of the global scaling parameter are inversely related to the global efficiency of the structural network (t = -2.35, p = 0.0279, η² = 0.19). However, this association is no longer statistically significant after removal of one outlier (t = -0.95, p = 0.35, η² = 0.04).
Next, we find an additional association between pre-to post-surgical differences in the global scaling parameter and planning accuracy (t = -2.78, p = 0.0113, η² = 0.23; t = -2.12, p = 0.0469, η² = 0.15 after removal of 1 outlier; Figure 3B). Although changes in these two variables appear to covary, their association is however not statistically significant before or after surgery (Before surgery: t = 0.43, p = 0.67; After surgery: t = -0.66, p = 0.52). In contrast to the results obtained before patients' surgery, we observed no significant association between participants' post-operative median inhibitory synaptic weight parameter across healthy regions and the global efficiency of their structural network (t = -0.70, p = 0.49, η² = 0.02).
Furthermore, a significant association is replicated between patients' median feedback inhibition control parameter across tumor regions and their reaction time during post-operative cognitive assessment (t = -2.86, p = 0.0144, η² = 0.39; Figure 3C). The association between patients' median feedback inhibition control parameter across tumor regions and their sustained attention is no longer significant (t = -1.47, p = 0.17, η² = 0.10).
Lastly, we observed a statistically significant relation between differences over time in participants' median inhibitory synaptic weight across healthy brain regions and their working memory capacity (t = -2.72, p = 0.0127, η² = 0.24; Figure 3D).

Figure 3. Visual summary of statistically significant linear relationships between individually optimized model parameters, structural network topology and cognitive performance. G = global scaling parameter; Jtumor = median inhibitory synaptic weight across tumor regions in brain tumor patients (corrected for region size); post = calculated on post-operative data; post-pre = difference score between post- and pre-operative data. Gray line represents regression line with 95% confidence interval. Group membership is shape- and color-coded: MEN = meningioma patients; GLI = glioma patients; CON = healthy control participants.
Virtual neurosurgery proof of concept
In order to evaluate the capacity of the currently applied brain network models to predict patients' post-surgical brain dynamics, we simulate brain dynamics after virtual neurosurgery and compare the resulting simulated functional connectivity to patients' empirical post-operative functional connectivity that served as ground truth. As a reference of how well the model can perform for a given patient, we also compute the maximum similarity between each patient's pre-operative empirical and simulated functional connectome during parameter optimization, without virtual surgery.
Results of these proof of concept analyses are summarized in Figure 4. Compared to the structural connectome that is used as input (SC), simulating functional connectivity by means of brain network modeling (FCsim) usually improves the correspondence with empirically derived functional connectivity. This is also the case during parameter space explorations (PSE: FCsim > PSE: SC). Yet, important individual differences are evident in the extent to which computational modeling can enhance prediction accuracy beyond the structural connectome. As can be seen in Figure 4, substantial improvements in prediction accuracy due to simulating brain activity during parameter space exploration are observed in four out of seven patients, whereas only marginal gains are found in the remaining three patients. Importantly, this aspect, i.e., the degree to which the model can increase prediction accuracy beyond the underlying structure, appears to be of key importance in assessing the potential of this technique for virtual neurosurgery (VS). In particular, prediction of post-surgical brain dynamics improves only in three of the four glioma patients for whom computational modeling also yielded substantially improved prediction accuracy beyond the structural connectome during parameter space exploration. In the other four patients, correspondence with empirical functional connectivity decreases after simulating virtual neurosurgery compared to using only the virtually lesioned structural connectivity matrix.
Discussion
Results from our study reveal that model parameters describing brain dynamics are relatively stable over time in brain tumor patients who underwent tumor resection, relative to baseline variability levels observed in healthy control subjects. Furthermore, several robust associations between individually optimized model parameters, structural network topology and cognitive performance are identified from pre-to post-operative assessment. Based on these findings, we perform the first proof of concept analyses to evaluate the potential of the currently applied brain network models to predict individual brain dynamics after tumor resection, relying solely on pre-operatively available information. We obtain promising results for a subset of patients, and reveal several limitations and challenges that need to be addressed by future research.
Individual biophysical model parameters and their predictors remain stable over time
In contrast to our expectations, the amount of variability in individually optimized model parameters from pre-to post-operative assessment is comparable between brain tumor patients who underwent neurosurgery and healthy control subjects tested across a similar time interval. This means that there is some variability present over time in brain tumor patients' optimal model parameters, although there is no systematic trend towards increases or decreases in specific model parameters and the amount of variability does not exceed regular test-retest variability levels in healthy control subjects. Only inhibition control parameters across healthy regions are higher across all participants after surgery compared to pre-operative levels. Of note, we observe similar slight increases in feedback inhibition across tumor regions as well, although these differences do not reach statistical significance given the much larger variability of feedback inhibition values across tumor regions. These elevated levels in feedback inhibition could indicate that more inhibition was required to balance the marginally higher levels of global excitation (i.e., global coupling) after surgery.
In the next step, we evaluate structural network properties and measures of cognitive performance as possible predictors of the individual model parameters. In line with pre-operative results described in prior work (Aerts et al., 2018), initial descriptive analyses show remarkable similarity of cognitive performance and structural network topology across groups as well as over time. Only the increased participation coefficient that was found before surgery in glioma patients is no longer significant after tumor resection, pointing towards a normalization of glioma patients' structural network topology after surgery. While these findings seem counterintuitive, one other study has also reported comparable structural network topology in brain tumor patients relative to healthy controls (Yu et al., 2016). For cognitive performance, in contrast, the majority of previous studies have reported significant alterations in cognitive functioning as a result of a brain tumor or its subsequent treatment (Klein et al., 2002;Taphoorn & Klein, 2004;Taphoorn et al., 1994;Tucha, Smely, Preier, & Lange, 2000). Possibly, the power of our analyses is not sufficient to detect differences between subjects or changes over time, given the limited sample size. Alternatively, the cognitive tasks (and graph metrics) that we utilize are not sufficiently sensitive to capture these differences.
Despite non-significant group differences or changes over time in cognitive performance and structural network topology, we find several associations between these variables and the individually optimized model parameters. Moreover, we replicate two out of four significant associations that were identified pre-operatively. First, we replicate the inverse relation between global efficiency of the structural network and the global scaling factor after patients' tumor resection. This implies that higher global coupling values are required in subjects whose structural connectome is less efficiently organized, in order to achieve the same amount of functional connectivity between cortical areas. This appears to be a very robust association, that has also been reported in stroke patients (Falcon et al., 2015). Secondly, we again identify a positive relation between feedback inhibition across tumor regions in brain tumor patients and their reaction time during cognitive assessment. This implies that patients who have higher feedback inhibition take longer to respond on a reaction time assessment. However, similar to results in the pre-operative setting, the association between local inhibitory connection strength and reaction time is largely influenced by a few outlying observations. Hence, caution is advised in interpreting this finding and a larger sample size would be required to clarify this association.
Furthermore, we identify two additional significant associations after patients' tumor resection. Specifically, differences in the global scaling parameter from pre-to post-surgical assessment are inversely related to changes over time in planning accuracy. Likewise, changes over time in feedback inhibition across healthy regions are negatively associated with differences in working memory capacity from pre-to post-operative assessment. This means that those subjects whose global coupling or feedback inhibition across healthy regions decreased over time, showed improved performance on planning accuracy or working memory tasks, respectively. This finding provides an interesting direction for future research, for example by investigating whether cognitive training can impact model parameters.
Virtual neurosurgery results
Given that brain tumor patients' model parameters remain relatively stable from pre-to postoperative assessment, we investigate whether post-operative brain dynamics can be predicted using only pre-operatively available information. To this end, we virtually lesion the patient's preoperative structural connectome according to the resection mask derived after surgery, based on which we re-simulate brain dynamics using a brain network model with the patients' pre-surgically optimized global coupling value. Any differences over time in feedback inhibition are assumed not to pose any problems for these proof of concept analyses, as the regional feedback inhibition control parameters are tuned automatically, to clamp the average firing rate at 3 Hz for each excitatory mass model.
Our proof of concept analyses yield promising results for three glioma patients, while the predictive accuracy of the currently applied models is poor in the remaining four patients. Importantly, model performance during pre-operative parameter space exploration appears to be a key indicator for the potential of this technique for virtual neurosurgery. In particular, prediction of post-surgical brain dynamics seems to be feasible only in patients for which the model also substantially improves prediction accuracy beyond the structural connectome during parameter space exploration. Specifically, out of the four patients for which this was the case, prediction accuracy of postoperative brain dynamics is relatively successful in three of them. In one patient simulated postoperative brain dynamics do not show good correspondence with empirical post-operative brain dynamics; possibly the resection mask did not give an appropriate idea of the surgical intervention performed. In the other three patients for which computational modeling only results in marginal gains in prediction accuracy relative to the structural connectome, results show worse prediction accuracy after simulating post-operative brain dynamics compared to using only the virtually lesioned structural connectivity matrix. Nevertheless, for those patients, the virtually lesioned structural connectome serves as a good approximation of their post-operative functional connectivity, suggesting that their brain dynamics are more determined by the underlying structure in the months following tumor resection.
Limitations and future directions
In the interpretation of these study results, some important limitations have to be taken into consideration. First, our sample size is rather small, limiting the statistical power of the analyses. Additionally, substantial inter-subject variability is present in both patient groups, caused by (among other factors) heterogeneity in lesion etiology, size and location. As a result, subtle differences between groups or over time are difficult to detect. By making use of increasingly available open-access clinical datasets, future studies may benefit from using larger sample sizes.
Secondly, although feedback inhibition control parameters are controlled for region size, a substantial association between both remains. This may influence the results, since several tumors overlap with two very large regions (superiorfrontal left and right), whose inhibition values are much higher compared to those of other regions. Currently, model optimization processes are being improved in order to optimize regional feedback inhibition control parameters based on the empirical FC data rather than using the firing rate, which is a direct proxy of region size.
As a complementary approach, future research could use parcellation schemes with equally sized regions in order to avoid the confounding effect of region size.
Lastly, important individual differences are observed in the added benefit of simulating brain dynamics on top of the individual structural connectome. These results could reflect true differences in the appropriateness of the model between subjects. Alternatively, however, these differences may result from instabilities in the parameter optimization procedure by maximizing the link-wise Pearson correlation between empirical and simulated FC. Although this method is routinely employed in large-scale modeling studies, other methods that maximize the large-scale organization of both connectivity matrices might be more sensible and yield more robust results. For example, similarity could be sought at the modular level, maximizing the cross-modularity between simulated and empirical functional connectivity (Diez et al., 2015;Stramaglia et al., 2017).
Conclusion
In summary, our study is the first investigation of potential changes in model parameters describing brain dynamics after brain tumor resection using large-scale brain network modeling. Notwithstanding the methodological caveats described above, we provide preliminary evidence that optimized model parameters are relatively stable from pre-to post-operative assessment. Furthermore, several robust associations between individually optimized model parameters, structural network topology and cognitive performance are identified from pre-to post-operative assessment. Based on these findings, we perform the first proof of concept analyses to evaluate the potential of brain network modeling to predict brain dynamics after tumor resection. We obtain promising results in a subset of patients and reveal important limitations that need to be addressed by future research.
Olecranon Enthesophytes Growth Rates: A Case Study
Many older adults are beset by chronic ailments associated with aging. Activities undertaken as a young adult can play a significant role in one's condition later in life. Olecranon enthesophytes (bone spurs) of the elbow are one such chronic health issue that can be quite debilitating, and little is known about how long the condition takes to develop. This case study involves dating a posterior olecranon enthesophyte (bone spur) to estimate its growth rate and identify the behaviors that were, more likely than not, responsible for the condition. A growth rate of 0.11 mm per month was determined as a starting point for investigating the causes of enthesophyte development. With organized fitness becoming an ever larger part of everyday life, understanding the growth rates of enthesophytes can allow for earlier detection, as most enthesophytes are asymptomatic until they become major ailments. Policy makers, coaches, school athletic coordinators, the military and parents can then make informed decisions about participation in risk-associated behaviors. Health education that focuses on prevention in younger children and adults, as they become more athletically oriented and more susceptible to injuries, may help avoid later chronic ailments.
Introduction
Visit any gym today and you can see workouts that rival professional athletes' programs. These gym warriors are not training for the Olympics or aspiring to a "tryout" in the professional arena, but their workouts are just as strenuous. The damage that may be occurring to the elbows during these workouts could devastate this generation of fitness enthusiasts beyond anything we are seeing in older adults today [1]. The aging process is, at best, difficult. The joints of the body seem particularly susceptible to "wear and tear". Aging has long been associated with osteoarthritis and other joint ailments; however, while aging is associated with this deterioration, it is not always the cause. Current sports medicine research, as stated by Buckwalter [2], indicates that "participation in sports increases the risk of joint injuries that can lead to posttraumatic osteoarthritis, a clinical syndrome caused by trauma-initiated joint degeneration that results in permanent and often progressive joint pain and dysfunction" [2]. In addition to osteoarthritis, this initial trauma can also lead to enthesophytes or osteophytes (bone spurs) in and around the affected joint [3].
The elbow is a particularly vulnerable area. Injuries to the elbow can be very painful and can develop slowly from an acute injury into a chronic, debilitating condition. Ankles and knees are usually the first consideration in longitudinal studies, if for no other reason than that injuries to them limit mobility, and later damage has long been studied in these areas. An acute injury to these limbs is often accompanied by a visit to the emergency room, and we frequently remember those incidents. Osteoarthritis of the ankle without a history of trauma or infectious arthritis is rare [4]. Even the official position of the American Orthopaedic Foot & Ankle Society states that "the first and foremost cause of ankle arthritis is previous trauma. Approximately 80% of arthritis of the ankle occurs secondary to such conditions. In all, it is rare to see idiopathic (cause unknown) or primary osteoarthritis of the ankle" [5]. Substantial research is also being conducted on knee injuries and later medical issues. Dr. Mininder Kocher, of Harvard Medical School, in a soon-to-be-released paper, indicates that in excess of 50% of young athletes who have ACL injuries develop arthritis within 14 years of the injury [6]. In contrast, one does not have to be involved in athletics or physical training for an elbow injury to occur, because such an injury can be sustained simply through participation in everyday life.
Developing a Model
Predicting possible damage to areas of the body will become as important as predicting many internal maladies. Most elbow injuries have been associated with overhand throwing motions, tennis, golf or overuse that leads to an acute condition [7]. Even dart throwers have a syndrome: Dart Throwers' Elbow [8]. Olecranon enthesophytes (bone spurs) are a condition that appears long after the accompanying activity has ceased and is the body's way of responding to stress [9]. Enthesophytes are abnormal bony projections at the attachment of a tendon or ligament. They are not to be confused with osteophytes, which are abnormal bony projections in joint spaces. Enthesophytes and osteophytes are both bone responses to stress [10]. At times this is confusing in the literature, as the terms are sometimes used incorrectly; while the conditions are similar, they are not the same.
Injuries to the biceps and triceps tendons about the elbow are relatively frequent. Typically, they are traumatic events that occur as a result of a forceful eccentric contraction. Olecranon traction bone spurs are enthesophytes that develop at the distal triceps tendon, at its point of insertion into the olecranon process. They are thought to arise as a result of mechanical loading (i.e., repetitive traction stress) and have been found to grow by a combination of endochondral and intramembranous ossification. An olecranon traction enthesophyte may be a source of substantial elbow pain, alone or in combination with triceps tendinopathy and olecranon bursitis. Many patients with a bony spur at the olecranon process also exhibit olecranon bursitis [11], caused by the enthesophyte irritating the triceps tendon, much like a pebble in a shoe [12]. As long as the olecranon traction enthesophyte remains, the condition will continue. The pain associated with the condition will also persist, as the olecranon spur rubs the triceps tendon whenever the elbow is moved. Early recognition of these injuries and prompt intervention are the cornerstones of a successful outcome. George S. Athwal, MD, and Jay D. Keener, MD [13], came to similar conclusions, finding that "Bone spurs are often found on the tip of the elbow bone in patients who have had repeated instances of elbow bursitis".
The prognosis is not optimistic, as there are few reports of surgical treatment to address a painful enthesophyte at this site, and outcome data are sparse [14]. The purpose of this investigation is to propose a growth rate for olecranon traction enthesophytes through a case study, offering a standard growth-rate model for early detection and possible prevention.
Materials and Methods
The subject of this case investigation is a Caucasian 65-year-old male who had a physically stressful career (20 years in the U.S. military) as well as a less physically stressful occupation (20 years as a public school teacher and administrator). At 65, in 2017, he sought treatment for a swollen and painful left elbow. After examination by an orthopedic physician, he was diagnosed with "obvious olecranon bursitis". AP and lateral views of his elbow showed a very large enthesophyte off the posterior olecranon. He had limited motion and a limited ability to extend his arm. He also reported numbness in the ring and little fingers of his left hand. There was no reported specific activity within the past few years that might have been the cause, but the condition had progressively worsened over the course of the last 10 or 15 years.
Pathology of the Condition
Since there was no acknowledged recent injury, the current state of the enthesophyte (bone spur) must first be characterized. The location of the enthesophyte is an indication of the kind of activity associated with the body's need to "strengthen the bone" in this area. A posterior location indicates that the enthesophyte lies at the distal triceps tendon insertion on the olecranon process. When the triceps is contracted, the forearm extends and the elbow straightens; when the triceps is relaxed and the biceps flexed, the forearm retracts and the elbow bends. This indicates that either a continuous-motion activity occurred or an activity involving heavy loading of the triceps in the extended position produced damage at the insertion. To investigate when an incident may have occurred, the total length of the enthesophyte must be examined. The base of the enthesophyte takes the longest to develop, and for this reason the subject's spur appears to be more than a simple traction enthesophyte (Figure 1). The literature is relatively sparse on estimating enthesophyte growth rates. Most spurs are asymptomatic in the beginning. Even with an acute injury to the elbow, medical authorities would not be able to say that an enthesophyte would develop later in life. Most people simply do not know they have a bone spur. In one case, surgeons removed an olecranon traction enthesophyte from an individual who participated in a follow-up exam almost 7 years later. The spur had grown back to 10 mm [14]. This allows a rate of growth to be calculated at 0.128 mm per month (10 mm/78 months).
Figure 2. Cadaver Ulnar bone
Accurate measurement is difficult; however, applying a consistent methodology for calculating an enthesophyte's length allows for a more precise measurement [10]. Figure 2 demonstrates an accurate measurement of a cadaver ulnar bone with an enthesophyte [10]. Examining the subject's most recent X-rays (Figure 1), the length of the enthesophyte is measured to be approximately 35 mm. Using the previous study's enthesophyte growth rate [14] and calculating an approximate timeline, a search was conducted for injuries consistent with the development of the olecranon traction enthesophyte. The timeline works out to approximately 23 years. It is accepted that growth rates are subject to many variables. The age of the patient at the time of injury, diet, and the individual's internal response to growing extra bone are just a few unmeasurable variables.
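To make the timeline arithmetic explicit, a minimal sketch is given below; it simply combines the literature-derived regrowth rate from [14] with the 35 mm length measured on the subject's X-ray, and the variable names are illustrative only:

```python
# Estimate how long a 35 mm olecranon enthesophyte took to form, assuming
# a constant growth rate taken from the 10 mm / ~78 month regrowth case in [14].
literature_rate_mm_per_month = 10 / 78      # ~0.128 mm per month
measured_length_mm = 35                     # length measured on the subject's X-ray (Figure 1)

months_to_form = measured_length_mm / literature_rate_mm_per_month
years_to_form = months_to_form / 12

print(f"Estimated formation time: {months_to_form:.0f} months (~{years_to_form:.1f} years)")
# -> roughly 273 months, i.e. about 23 years before the 2017 X-ray
```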
It is important, in developing a growth model, to attempt to match current medical conditions with historical medical complaints. Backward mapping may not be as effective as a longitudinal study; however, it can provide a reasonable basis for a logically sound conclusion. In a review of the subject's medical history, one event date emerges: medical treatment was sought in September 1986 and resulted in a diagnosis of Ulnar Compression Syndrome. Ulnar nerve compression at the elbow is also called "cubital tunnel syndrome". Ulnar nerve compression is the second most common nerve entrapment of the upper extremity after carpal tunnel syndrome [15]. As the ulnar nerve runs the entire length of the arm, there are several places along the nerve that can become compressed or irritated. This compression or irritation is known as ulnar nerve entrapment. The ulnar nerve is particularly susceptible to compression at the elbow because it must traverse a tight space with very little soft tissue to protect it. The compression that occurs at the elbow is the single most common cause of ulnar nerve entrapment. The syndrome may be a result of any of the following [16]:
- an activity that causes a person to bend and straighten the elbow joint repeatedly
- leaning on the elbow for an extended period
- the ulnar nerve slipping out of place when the elbow is bent
- fluid buildup in the elbow
- a current or previous injury to the inside of the elbow
- bone spurs in the elbow
- arthritis in the elbow or wrist
- swelling in the elbow or wrist joint
This is consistent with the record entry for the subject. At approximately the same time, an additional record entry describes a treatment plan; however, it deviated from normal conservative treatment methods. Treatment for ulnar nerve entrapment depends on how severe the entrapment is. For less severe cases, a doctor will probably recommend nonsurgical treatment options first. These may include some combination of the following [16]:
- anti-inflammatory medications to reduce swelling
- elbow braces or splints to keep the joint straight at night
- exercises and physical therapy to help the nerve slide through the arm correctly
The treatment actually given to the subject was:
- take aspirin for 5 days
- ice for 15 minutes before bed
- take a jacuzzi
- limit use as much as practicable for 1 week
- see a neuro consultant
This does not appear to match the accepted treatment for Ulnar Compression Syndrome; however, it does match the accepted treatment for a triceps injury at the elbow. A triceps tendon injury is a problem with the tendon that connects the muscle at the back of your upper arm to the bony bump at the back of your elbow. Tendons are strong bands of tissue that attach muscle to bone. You use this tendon to straighten your arm after you bend it. Tendons can be injured suddenly or they may be slowly damaged over time. You can have tiny or partial tears in your tendon or a complete tear (called a rupture). Other tendon injuries may be called a strain, tendinosis, or tendinitis.
The activity that led the subject to seek treatment was reported as an extensive physical training regimen and "shoveling" over the weekend. The subject reports that a few days later he experienced "falling off" the pull-up bar because he lost his grip. This is also consistent with Ulnar Compression Syndrome, as a weakened grip is a symptom of the condition. While this was not recorded as a specific trauma to the elbow, injury to the ulnar nerve can be the acute injury that masks a more specific triceps tendon trauma. There is no evidence of a rupture; however, there certainly is evidence of a strain. The repetitive motion that preceded the clinical visit was compressed into a day or two. While this is repetitive motion, the time factor would make this more a trauma than an overuse situation. Figure 3 displays the ulnar nerve [17]. It is not difficult to understand how both the nerve and the tendon can be affected by one kind of trauma. This is not a unique situation: laborers who were required to use specific tools or perform actions repeatedly for extended periods were reported to be more at risk of ulnar entrapment and elbow trauma that later developed into enthesophytes. These two predisposing factors lead to the eventual scenario of a double-crush injury of the ulnar nerve [18].
Medical research clearly links Ulnar Compression Syndrome to elbow trauma. The trauma could be repetitive use or an acute action such as a tendon tear or strain. Both of these actions also cause olecranon posterior enthesophytes. Because bone spurs follow a slow growth process, this would not have been apparent when medical care was rendered to the subject. The research supports a growth rate consistent with the injury in 1986 and supports the pathology of the subject's current condition, yielding a new growth rate.
Discussion
Based on the behaviors of the subject at the time of the injury, the enthesophyte's length, and the medical and scientific evidence, it is reasonable to deduce that the activity that initiated the development of the olecranon enthesophyte was, more likely than not, the one that occurred when the subject was diagnosed with Ulnar Compression Syndrome in 1986 while in the military.
Updating the calculation for the olecranon enthesophyte (bone spur) growth rate, we can conclude a growth rate of 0.11 mm per month (length in mm divided by the number of months since injury). Again, as previously mentioned, there are many unconsidered variables that may be associated with enthesophyte growth.
Since enthesophytes (bone spurs) usually continue to grow slowly over time, the condition can be asymptomatic for many years. Even after 10 years, a typical olecranon enthesophyte would only be approximately 13 mm long. At this length, there would still likely be no reaction. As the spur continues to grow, treating the condition becomes more complicated. While there has been success in surgically removing enthesophytes, not enough research exists to conclude that the risk-versus-benefit trade-off is acceptable. Predicting the potential length of an enthesophyte can assist in early diagnosis.
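A minimal projection sketch using the 0.11 mm/month rate concluded above follows; it assumes a constant, linear growth rate, which is a simplification of the many variables already noted:

```python
# Project spur length over time with the 0.11 mm/month rate derived in this case study.
GROWTH_RATE_MM_PER_MONTH = 0.11

def projected_length_mm(years_since_injury: float) -> float:
    """Return the projected enthesophyte length after a given number of years."""
    return GROWTH_RATE_MM_PER_MONTH * 12 * years_since_injury

for years in (5, 10, 15, 20):
    print(f"{years:>2} years -> {projected_length_mm(years):.1f} mm")
# 10 years -> ~13 mm, matching the estimate in the text; typically still asymptomatic.
```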
The enthesophyte in this case has caused limited motion, pain and chronic bursitis. As long as the enthesophyte is present, olecranon bursitis, a frequent cause of pain at the posterior elbow, will also be present. The bursitis is associated with soft tissue swelling caused by the enthesophyte (bone spur) at the olecranon [19]. Had it been discovered earlier, a less risky treatment plan might have been available.
Conclusions
As we see from the investigation of this subject, an olecranon enthesophyte can take 30 years to develop into a debilitating condition. Using this case study, along with the subject reported by Hasham, Kalainov, Biswas, Soneru, and Cohen [14], a growth rate can be calculated. With a working growth-rate model, programs can be designed to periodically monitor potential enthesophyte growth. Additionally, other conditions that cause acute injury, such as Ulnar Compression Syndrome, can be an early warning signal for closer monitoring. To expand the accuracy of the growth-rate model, future studies could include cross-sectional descriptive or cohort studies examining the relationships between enthesophyte development and previous injuries in a larger, more diverse population.
There is an abundance of "medical and scientific" peer reviewed research that links Ulnar Compression conditions, Olecranon Traction Enthesophyte and Olecranon Bursitis together. There simply is not any research that disputes this connection. Understanding the elbow anatomy and the location of an Olecranon Traction Enthesophyte leads to the conclusion that in normal activities, damage can occur without an accompanying serious acute injury.
Individuals in high-risk occupations (athletes, construction workers, repetitive-motion laborers and the military) should receive information on enthesophyte development. In light of activity patterns in today's society, the list should be expanded to include anyone who includes full-extension exercises as part of their workout regimen. Bench presses, pushups and pullups are all activities that should be monitored, especially in younger children. Virtually any exercise that causes extreme loading of the triceps while the elbow is fully extended should be avoided or, at the very least, strictly supervised.
Prevention would seem to be a better option than trying to eradicate an enthesophyte that has had 30 years to integrate within the distal triceps tendon insertion. Policy makers, coaches, school athletic coordinators, the military and parents can make informed decisions about participation, focusing on prevention in our younger children and adults as they become more athletically oriented and susceptible to sustaining injuries that later become chronic ailments.
While medical care and surgical techniques are becoming more refined, with less and less recovery time, costs continue to climb, making health care less affordable for many. Prevention is a solution that can be much less invasive and much less costly. Knowing the possible causes of olecranon enthesophytes and the time it takes for the condition to become chronic can make for a safer and healthier population. Nothing in this article is meant to offer a diagnosis or treatment; it only offers a method of predicting possible conditions that result from a society trying to remain as fit as possible.
Mechanism and Utilization of Ogura Cytoplasmic Male Sterility in Cruciferae Crops
Hybrid production using lines with cytoplasmic male sterility (CMS) has become an important way to utilize heterosis in vegetables. Ogura CMS, with the advantages of complete pollen abortion, ease of transfer and a progeny sterility rate reaching 100%, is widely used in cruciferous crop breeding. The mapping, cloning, mechanism and application of Ogura CMS and fertility restorer genes in Brassica napus, Brassica rapa, Brassica oleracea and other cruciferous crops are reviewed herein, and the existing problems and future research directions in the application of Ogura CMS are discussed.
Introduction
In 1763, the German botanist Joseph Gottlieb Kolreuter first observed the phenomenon of male sterility (MS). Researchers defined MS as plant failure to produce dehiscent anthers, functional pollen or energetic male gametes [1,2]. To date, researchers have observed MS in 43 families, 162 genera and approximately 617 species, including Zea mays, Oryza sativa, Raphanus sativus, Gossypium hirsutum, Allium cepa, Sorghum bicolor and Glycine max [3][4][5][6]. According to different inheritance characteristics, MS can be divided into genic MS (GMS) and cytoplasmic MS (CMS). GMS is controlled by one or more nuclear genes, while CMS is usually controlled by mitochondrial genes and a few nuclear genes. CMS has been found in more than 150 higher plants [7,8]. Because female organ function is not affected by CMS and CMS lines can be leveraged to maximize cost-effectiveness, the CMS/fertility restorer (FR) system has become the most widely used MS system in cruciferous crops [9,10]. To date, numerous different CMS systems have been reported in cruciferous crops [11], including Ogura CMS (Ogu CMS), Nap CMS, Polima CMS (Pol CMS), Kosena CMS (Kos CMS), Tour CMS, Moricandia arvensis CMS, Nsa CMS, Nca CMS, Hau CMS and inap CMS (Figure 1). In addition, several CMS-related genes have been identified, including orf138 of CMS-Ogu in Raphanus sativus [12][13][14][15][16][17][18][19][20][21], orf263 of CMS-Tour in Brassica tournefortii [22], orf224 of CMS-Polima, orf222 of CMS-Nap in Brassica napus [23,24], orf220 of a new type of CMS line and orf288 of CMS-hau in Brassica juncea [25][26][27] (Figure 1). Among all types of CMS, Ogura CMS is the most widely applied in cruciferous crops due to its significant advantages of stable sterility and complete abortion.
Identification of the Ogura Sterility Gene orf138
The determination of the sterility gene of the CMS system is a critical step in mechanistic research. Through the transcription pattern profiling approach, glutathione S-transferase (GST) fusion protein experiments and mitochondrial genome sequencing, the orf138 gene was identified as the sterility gene of Ogura CMS. orf138 was first identified because the transcription patterns of three mitochondrial structural genes (atp6, atpA and coxI) changed in Ogura CMS radish, but these changes were not strictly correlated with Ogura CMS or Ogura nuclear restorer genes [12,13]. An Ogura-specific mitochondrial DNA fragment containing two ORFs, orf138 and orf158, was subsequently reported, which was consistent with the sterility phenotype in the 13 somatic hybrids of Ogura CMS B. napus obtained from protoplast fusions [14,15]. Sequence analysis and transcript analysis of this region showed that there were also transcripts of orf158 in fertile plants but no transcripts of the orf138 gene in fertile revertant plants. Therefore, it was speculated that the gene controlling Ogura CMS is the mitochondrial gene orf138 [16]. In addition, a polypeptide confirmed to be the product of the orf138 gene using antibodies against a glutathione S-transferase-ORF138 fusion protein was only found in sterile-plant mitochondria isolated from B. napus with Ogura CMS, which further confirmed that orf138 was the sterility gene in Ogura CMS [17,18]. The mitochondrial genome sequencing of the Ogura CMS line and maintainer line promoted the identification of the Ogura CMS gene orf138. The unique presence of the orf138 gene in Ogura CMS lines was confirmed by comparing the mitochondrial genomes of Cruciferae crops, and the key role of the MS-causing gene orf138 was identified. The Ogura CMS mitochondrial genomes were generally larger and showed higher recombination and rearrangement than the mitochondrial genomes of the maintainer line, leading to the loss or generation of some genes (Table 1). For example, the mitochondrial genome sequencing of Ogura CMS radish showed that the mitochondrial genome had four special regions compared with that of normal fertile plants and that the orf138 gene was located at the edge of the largest region (15,255 bp), which was occupied by the cox1 gene in normal fertile radish [18][19][20]. Meanwhile, Ogura CMS radish possesses an additional copy of the genes atp9 and trnfM.
Functional Verification of the Sterility Gene orf138
The function of orf138 was verified through transient gene overexpression in Arabidopsis thaliana and knockdown approaches. The approach of fusing sterility genes to mitochondrion-targeting peptides so that their products are delivered to the mitochondria has been successfully applied in the functional verification of CMS genes in G. max, O. sativa, Beta vulgaris, B. napus, B. juncea, Capsicum annuum and other crops [50,117,118]. Zhang et al. (2015) cloned the orf138 gene from the mitochondria of the Ogura CMS line in cabbage, constructed a mitochondrion-targeting expression vector driven by the CaMV35S promoter, transformed it into A. thaliana to induce MS, and thereby verified the sterility-related function of the orf138 gene [119]. Stable mitochondrial transformation is still lacking, but recent advances in gene editing have made it possible to knock out individual mitochondrial genes. The 3′ end of the orf138 gene has three consecutive 39-bp repeats. Apart from a 39-bp deletion and two single-nucleotide polymorphisms (SNPs), the orf125 and orf138 genes share 100% sequence identity, which means that these two genes are highly homologous. Kos CMS, caused by orf125, and Ogura CMS, caused by orf138, can both be fertility restored by the fertility restorer gene Rfo/Rfk [114]. Kazama et al. (2019) knocked out the mitochondrial gene orf125 with mitoTALENs in rapeseed, verifying the hypothesis that orf125 is the driver of Kosena-type CMS, which strongly suggests that the orf138 gene is the sterility gene of Ogura CMS [120].
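A small arithmetic sketch of the orf125/orf138 relationship described above, assuming (as the gene names conventionally indicate) that the two ORFs encode 138 and 125 codons and that the 39-bp difference is an in-frame deletion:

```python
# Relate the 39-bp in-frame deletion to the lengths of the two ORFs:
# removing 39 bp (13 codons) from a 138-codon ORF leaves a 125-codon ORF,
# consistent with the orf138/orf125 naming.
ORF138_CODONS = 138
DELETION_BP = 39

deleted_codons = DELETION_BP // 3                # 13 codons
orf125_codons = ORF138_CODONS - deleted_codons   # 125 codons

print(deleted_codons, orf125_codons)             # 13 125
```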
Ogura CMS Fertility Restorer Genes in Raphanus Sativus
In parallel with the discovery of sterile cytoplasms, FR genes were identified as the genetic means for the recovery of the lost harmony between the mitochondrial and nuclear genomes [121]. To date, all the Ogura CMS FR genes (Rfo, Rfk, RF1, RF2, RF3, Rfob, Rfoc, RsRf3-1/RsRf3-2, RsRf3-4~RsRf3-7, Rft and Rfs) have been reported in radish and cloned from European radish, Japanese radish and Chinese radish (Table 2). The Ogura CMS fertility restorer gene Rfo was first cloned using map-based cloning and comparative genomic analysis in radish [51,122]. A fertility restorer gene of Ogura CMS named Rfk was cloned and identified as the same gene as Rfo [123]. Three populations were used to locate RF1, RF2 and RF3 in the upper region of the Rs1 linkage group, the middle region of the Rs2 linkage group and the upper region of the Rs7 linkage group, respectively, and all three genes showed dominance and were mutually epistatic [124]. The Rfob and Rfoc genes can also restore the fertility of Ogura CMS radish [52]. Rfob is closely linked with the Rfo gene and has two SNPs compared with Rfo. Rfoc is generated by recombination between Rfo (PPR-B) and PPR-C. In addition, several new fertility restorer loci, namely RsRf3-1/RsRf3-2 and RsRf3-4~RsRf3-7, in the Chinese radish materials 2007H, 9802H and 9606H were also cloned and sequenced [53,125,126]. A new restorer gene of Ogura CMS, Rft, which was found in the Tomioka population of Japanese wild radish, was also a homolog of Rfo and had 19 SNPs compared with Rfo [50]. Yamagishi et al. (2021) confirmed that Ogura CMS had a new fertility restorer gene, Rfs, which could restore the fertility of individuals by processing orf138 mRNA and was different from Rft [54].
Ogura CMS Fertility Restorer Genes in Other Cruciferous Crops
With the wide application of Ogura CMS, fertility restorer genes have been successfully introduced into different cruciferous crops and integrated at different genomic locations. In B. napus, three main fertility restorer genes of Ogura CMS have been localized and cloned, namely Rfo, Rfob and Rft [91,118,129,130]. They were integrated into the C genome (C09, C03) or the A genome in different B. napus materials. The Rfo restorer gene in both the restorer line R2000 created by INRA and the restorer line 46H02 created by Pioneer Hi-Bred was located in the N19 linkage group, and a large number of gene-specific markers and linked markers were developed [129,141]. In CLR650, a new fertility-restored material of Ogura CMS B. napus, the restorer gene was located in the N19 linkage group between the SSR markers BnGMS35 and BoGMSl97 [137]. In the Ogura CMS restorer line N1717 of B. napus, the Rfo gene was confirmed to be located in the C genome by bacterial artificial chromosome-fluorescence in situ hybridization (BAC-FISH), and it was inferred to be located on chromosome C03 according to the size of the chromosome and the location of the centromere [129]. In a new version of the Ogura CMS restorer line, CLR6430, the restorer gene was detected by FISH using a C-genome BAC as a probe, and the presence of Rfo in the A genome of B. napus was verified [130]. In B. rapa, the Ogura CMS fertility restorer gene was integrated into an additional radish chromosome. Rfk1 of Kosena CMS is a homolog of the fertility restorer gene Rfo of Ogura CMS. It was transferred from B. napus to Ogura CMS B. rapa, and its location on the chromosome was identified by BAC-FISH technology [138]. In B. juncea, the Rfo gene was located in a ~108 kb radish chromosome fragment, which was positioned amidst a large C-genome translocation at the distal part of chromosome A09, and an intragenic KASPar marker (KASP-RFO-1814) for the marker-aided transfer of Rfo was developed [99,140].
Mechanisms of Ogura CMS in Major Cruciferous Crops
The factor that most directly causes MS in plants is abnormal pollen development [142]. Although the detailed process by which sterility genes affect floral organ and pollen development is still unclear, it is generally believed that mitochondrial dysfunction is mainly responsible for the disruption of pollen development [6,143]. Furthermore, it was shown that during the development of pollen grains, the mitochondrial content of tapetum cells was 40 times that of somatic cells, and the mitochondrial content of microspore cells was 20 times that of somatic cells. Therefore, anthers require more ATP and are extremely sensitive to changes in mitochondrial ATP synthesis [144]. The orf138 gene encodes a mitochondrial transmembrane protein, similar to many other CMS genes [145]. In sterile plants, the ORF138 protein accumulates largely on the mitochondrial membrane, which may interfere with the expression of key genes in the electron transport chain, such as ATP6, ATP8 and COXI, and inhibit anther synthesis ( Figure 2). In addition, the occurrence of CMS is mostly related to some chimeric ORFs generated by mitochondrial genome rearrangement [146]. Tanaka et al. (2012) found that the Ogura CMS cytoplasm may be generated by mitochondrial genome rearrangement, and this rearrangement is likely to destroy mitochondrial structures, thereby resulting in Ogura CMS [20]. The abnormal development of tapetal cells was confirmed as an important cause of abnormal pollen development. Normal microsporogenesis requires the appropriate timing of tapetum degeneration [147]. Studies revealed that pollen abortion in Ogura CMS occurred after the tetrad stage [147]. Lin et al. (2019) found that aberrant anther development occurs during the transition from microspore mother cells to tetrads, and defective microspore development and the early clearing of tapetal cytoplasm led to shrunken anthers with collapsed locules. Some studies indicated that the autolysis process, rather than the normal programmed cell death (PCD) process, led to the premature death of tapetal cells, which hindered the development of pollen in the microspore stage of large vacuoles and finally led to MS [55,148]. In addition, the abnormal proliferation of tapetal cells was confirmed as an immediate cause of Ogura CMS in cabbage [113]. Combining cDNA-AFLP with microarray analysis, only one of twenty-nine AT hook nuclear localized (AHL) family genes, BoMF2, was differentially expressed in the anthers of Ogura CMS cabbage and might regulate tapetum proliferation during anther development [149].
Flavonoids play an important role in plant pollen development. Many studies in recent years have shown that the chalcone synthase (CHS) gene is related to MS. Related studies have mainly focused on petunia, maize, rice and other plants [147,150]. In radish, the expression of CHS in the flavonoid biosynthesis pathway of Ogura CMS anthers was reduced, and the decrease in CHS expression led to a decrease in flavonoid content, which may ultimately have a negative impact on the production of pollen grains [55].
The possible toxicity of ORF138 remains unclear, and direct evidence for a cytotoxicity model is still lacking. The ORF138 protein was expressed in almost all tissues, including hypocotyls, leaves, roots and buds, but its expression levels varied among the tissues. In vegetative tissues, no morphological or respiratory defects were detected, confirming that ORF138 can cause complete pollen abortion without affecting vegetative development and female gametogenesis in Ogura CMS plants [151]. However, the expression of ORF138 in Escherichia coli can inhibit normal growth, indicating that ORF138 has a toxic effect on cells [152]. In addition, the expression of orf138 in the nucleus was realized by fusing and transforming the CMS gene, green fluorescent protein (GFP) and a mitochondrion-targeting peptide into A. thaliana and yeast. The orf138 gene changed the morphology of mitochondria in yeast and Arabidopsis transgenic lines but did not inhibit the respiration and growth of yeast and did not cause sterility in Arabidopsis [153].
Great progress in high-throughput sequencing, transcriptomics, proteomics and degradomics has provided a large amount of genetic data and unprecedented opportunities to identify differentially expressed genes (DEGs) that play important roles in CMS [6,[154][155][156][157][158]. By comparing the Ogura CMS sterile line and the corresponding maintainer line, a large number of DEGs were identified (Table 3). Substantial numbers of DEGs have been shown to be involved in energy metabolism, which underpins the idea that insufficient energy supply may lead to MS [113,[159][160][161][162][163][164][165][166]. Microsporocytes give rise to pollen via meiosis, and somatic cells, particularly tapetal cells, are essential for the normal development and release of pollen [167]. The failure to produce functional pollen in Ogura CMS lines is found to be accompanied by DEGs involved in microspore formation, anther development, pollen wall formation, exine formation (the formation and dissolution of the callose wall, fatty acid metabolism pathway and biosynthesis of sporopollenin precursors in the tapetum), the accumulation of the pollen coat and pectinesterase activity [159][160][161][163][164][165][166][167][168]. CMS always induces the expression of stress-related genes, especially HSP genes [160,162,164,168,169]. ORF138 may affect redox status in Ogura CMS plants, although only DEG evidence exists [159,160,164]. Caspase-like and metacaspase activity genes involved in cell apoptosis were also discovered among DEGs, suggesting that pollen abortion in Ogura CMS was related to PCD [164,165]. In addition, the plant hormone auxin plays a central role in plant growth and development [170]. Studies showed that the delayed expression of most auxin-related genes may have caused short filaments and reduced plant growth in Ogura CMS plants [160,171]. DEGs were also enriched in the mitochondrial retrograde signaling pathway [159]. Two novel miRNA/target cascades (novel-miR-335/H + -ATPase and novel-miR-448/SUC1) may participate in the communication between the mitochondria and nucleus [172]. Genes that play important roles in the plant defense response were significantly upregulated in Ogu CMS lines, suggesting their likely involvement in Ogu CMS pollen abortion [163,164,171]. Although these -omics technology methods have been extensively applied in DEG exploration in cruciferous crops, the molecular mechanisms underlying Ogura CMS remain elusive. Furthermore, most impacts of orf138/ORF138 were primarily based on speculations, lacking direct experimental evidence to support such conclusions.
Mechanisms of Ogura CMS fertility Restorer Genes in Major Cruciferous Crops
The PPR-B gene, but not the PPR-A and PPR-C genes, restored the fertility of the Ogura CMS materials. Ogura CMS is regulated by the orf138 gene, and fertility can be restored when the Rfo gene is present [51,122]. The Rfo gene is present as three highly similar genes: PPR-A, PPR-B and PPR-C. Three homologous copies are arranged in a series, encoding highly similar proteins [51, 57,122,123,175]. PPR-A and PPR-C do not have fertility restoration ability and have no effect on the synthesis of sterility proteins, while the PPR-B gene plays a role in the fertility restoration of Ogura CMS materials [57].
Several important functional sites and binding sites of the fertility restorer gene Rfo and the sterility gene orf138 have been identified. Most restorer genes encode pentatricopeptide repeat (PPR) proteins, and all identified Rf-PPR genes evolved from a unique subset of the PPR gene family called the Rf-like or RFL gene family [176,177]. PPR proteins are characterized by tandem repeats of 35-amino-acid motifs, most of which are considered sequence-specific RNA-binding proteins, and they regulate mitochondrial and chloroplast gene expression through editing, splicing and cleavage [178][179][180][181]. In the fertility restoration process of Ogura CMS, all PPR-B repeats are indispensable for complete fertility restoration [182]. An allele sequence analysis of the fertility restorer gene revealed four substituted amino acids (the 118th, 153rd, 170th and 171st amino acids) in the second and third repeats of the PPR, suggesting that the domains formed by these repeats in the Rfo-encoded protein ORF687 play an essential role in fertility restoration [123]. Yamagishi et al. (2021) and Imai et al. (2002) also considered that the 118th and 153rd amino acids in ORF687 play a crucial role in fertility restoration [54,183]. By comparing the transcript sequences of rfo/rfo and Rfo/Rfo homozygote plants, it was found that four amino acids (the 176th to 179th amino acids) of the Rfo gene were essential, and the deletion of these four amino acids in the central region of the Rfo-encoded protein reduced fertility restoration ability [182]. Regarding the interaction of a nuclear fertility restorer gene and a mitochondrial CMS gene, Uyttewaal et al. [57] proposed that ORF687 could associate with the 5' untranslated region (5' UTR) of orf138 mRNA. Yamagishi et al. (2021) found that direct ORF687 binding to the coding region of orf138, not the 5' UTR inferred previously, was essential for fertility restoration by Rfo [54]. It was also found that either a single nucleotide substitution (the 61st nucleotide) in orf138 or two amino acid substitutions (the 118th and 153rd amino acids) in ORF687 will prevent the Rfo gene from restoring the fertility of Ogura CMS materials carrying the orf138 gene [54]. Wang et al. (2021) demonstrated that PPR-B binds within the coding sequence of orf138 and found that the GTAAAGTTAGTGTAATA sequence of the orf138 transcript was the PPR-B binding site [184]. Interestingly, the 61st nucleotide lies within this binding site, which confirmed the validity and reliability of this conclusion.
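As a minimal illustration of the positional check described above, the sketch below locates a candidate binding-site motif within a coding sequence and asks whether a given nucleotide position (here the 61st, counted from the start of the ORF) falls inside the match. The sequence used is a made-up placeholder, not the real orf138 transcript:

```python
# Locate a binding-site motif in a coding sequence and test whether a
# given 1-based nucleotide position lies within the matched region.
def position_in_motif(cds: str, motif: str, position_1based: int) -> bool:
    start = cds.find(motif)                  # 0-based start of the motif, -1 if absent
    if start == -1:
        return False
    return start < position_1based <= start + len(motif)

MOTIF = "GTAAAGTTAGTGTAATA"                  # PPR-B binding site reported for orf138
fake_cds = "ATG" + "A" * 50 + MOTIF + "TGA"  # placeholder sequence for illustration only

print(position_in_motif(fake_cds, MOTIF, 61))   # True: position 61 falls inside the motif here
```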
The Ogura CMS restorer gene acts at the translational level and restores fertility by blocking the translation elongation of orf138 mRNA. The mechanisms of different restorer genes are diverse among crops. Fertility restoration in CMS plants is mainly regulated at the genomic level, post-transcriptional level, translational level, post-translational level and metabolic level [6,63]. The function of Ogura CMS restorer genes is generally considered to involve the processing and editing of sterility-related gene transcripts at the post-transcriptional or translational level, followed by the inhibition of ORF138 accumulation and the elimination of the negative effects of sterility proteins on pollen or tapetum development [6,184]. The ORF138 protein showed a significant decrease in the mitochondria of flowers and leaves after the fertility of Ogura CMS R. sativus was restored. However, the presence or absence of fertility restorer genes did not affect the size, abundance or RNA editing pattern of orf138 transcripts, which suggested that the Ogura CMS fertility restorer gene acts at the translational level on the orf138 gene [6,18,57]. Immunolocalization experiments showed that fertility restoration was related to the complete elimination of the ORF138 protein in the tapetum. Therefore, PPR-B restores fertility mainly by inhibiting the synthesis of the ORF138 protein in the anther tapetum. Previous research suggested that the precursor RNA of the orf138 gene in sterile plants was stable because of the formation of a 'neck-loop' structure during the splicing process, whereas it was degraded during splicing in fertile plants because of the formation of an unstable 3′ end [185,186]. It was therefore concluded that the restorer gene Rfo acts post-translationally on the stability of the Ogura CMS-associated protein ORF138 in the reproductive tissues of rapeseed cybrids. By using a restored rapeseed transgenic line containing four copies of PPR-B, Wang et al. (2021) confirmed that ORF138 disappeared from mitochondria in the presence of PPR-B and rejected the hypothesis that fertility restoration works through increased instability of ORF138. Using in organello synthesis, polysome sedimentation and Ribo-Seq analyses, Wang et al. (2021) demonstrated this specific translational inhibition of orf138 mRNA, showing that ORF687 acts as a ribosome blocker that specifically impedes translation elongation along the orf138 mRNA [184]. Based on the results of Wang et al. (2021) and other studies, a schematic diagram of the mechanisms of Ogura CMS fertility restorer genes is shown in Figure 3. In addition, another Ogura CMS fertility restorer gene (Rft), found in most Japanese wild radishes, can affect the expression of orf138 mRNA [184]. This finding showed that Rf genes with different molecular mechanisms evolved in radish and inhibit the expression of the sterility gene orf138 in Ogura CMS lines.
Application of Ogura CMS and Creation of Ogura CMS Fertility Restorer Lines
The Ogura cytoplasm has been introduced into different Brassica crops, including B. napus (AACC), B. oleracea (CC), B. rapa (AA) and B. juncea (AABB), by intergeneric hybridization, somatic hybridization and repeated backcrossing. In rapeseed, where the seed is the harvested product, the Rfo restorer gene is essential. In vegetables, because the leafy organ is the product delivered to consumers, fertility restoration is not strictly necessary. However, the wide usage of Ogura CMS has led to a new problem: all offspring not carrying the fertility restorer gene exhibit MS, which has inhibited germplasm innovation and breeding. Therefore, there is an urgent need to create fertility restorers to promote the innovation and reutilization of germplasm resources in all Ogura CMS Brassica vegetables. Corresponding restorer lines have been successfully created in crops such as B. napus (AACC), B. oleracea (CC) and B. juncea (AABB).
Brassica Napus
The Ogura CMS cytoplasm was successfully transferred to B. napus, and the whole process consisted of two stages. (1) The French scholar Bannerot introduced the nucleus of European oilseed rape into Japanese radish sterile cytoplasm and transferred the Ogura CMS cytoplasm into B. napus through continuous backcrossing [63]. However, due to cytonuclear discordance, which is a hallmark of many introgression events, many undesirable traits, such as young leaf chlorosis under low temperature and poor nectary development, affect normal growth and development [58,64]. (2) In 1983, Pelletier successfully transferred Ogura CMS to B. napus by protoplast fusion and solved existing problems in the previously created Ogura CMS B. napus. The male-sterile line exhibited normal green leaves at 12 °C, the normal development of nectaries and the stable inheritance of sterility [65]. Ogura CMS lines of B. napus were also introduced in China, and massive restorer line screening was performed [66][67][68]. Li et al. (1995, 2001) introduced the FR gene into B. napus by hybridization with the radish variety 'Makino' and finally gained the addition line Ad-6, which unfortunately could not be applied in hybrid seed production because of unstable fertility and poor agronomic traits. Chen et al. (2012) used grafting technology to overcome the reproductive obstacles of distant hybridization. Using an interspecific doubling hybrid of oilseed rape and radish (AACCRR), the restorer line CLR650 with high glucosinolate content was selected after multigeneration backcrossing and screening, but it still needs to be improved [69]. Wen et al. (2010, 2016) obtained the restorer line R2008 by multigeneration backcrossing, test-crossing and microspore culture and further improved it to obtain six restorer lines with better agronomic traits [68,70].
The creation and improvement of Ogura CMS B. napus restorer lines have undergone 30 years of development, and the resulting lines have already been applied in commercial production. Through a large number of screenings, scientists found that there are no natural fertility restorer lines of Ogura CMS in Brassica species, and the restorer genes in radish must be transferred to Brassica species by intergeneric hybridization. The creation of a fertility restorer line for Ogura CMS in B. napus can be divided into three stages.
(1) Fertility-restored interspecific materials were first obtained by hybridization and/or protoplast fusion between R. sativus with a restorer gene and B. napus [65,71]. However, due to the influence of large redundant radish fragments, B. napus fertility restorer lines have poor agronomic traits, low seed setting rates and even high glucosinolate content under the influence of close linkage between glucosinolate genes and the restorer gene Rfo [127]. (2) With the realization of marker-assisted selection (MAS) in many crops and the development of Rfo-linked markers, the improvement of restoration materials has been greatly accelerated [72]. MAS combined with backcross breeding was used to obtain low-glucosinolate-content restorer lines, but some redundant genes closely linked to Rfo were still retained in the created materials, resulting in poor agronomic traits [73]. (3) INRA finally provided the restorer line R2000 with both low glucosinolate and low erucic acid levels by combining gamma-ray irradiation with multigeneration backcross breeding [74]. Gamma-ray irradiation greatly increased the genome recombination rate, and the improved fertility restorer line had good agronomic traits, a normal seed setting rate and good Rfo transmission efficiency, which have been successfully applied to seed production in Ogura CMS B. napus. Pioneer Hi-Bred International Inc. screened a series of Brassica mutants with reduced radish fragments from tens of thousands of plants obtained by gamma-ray irradiation and finally bred a new Ogura CMS restorer line (SRF line) of B. napus with fewer radish fragments and a lower glucosinolate content than R2000 [75].
Brassica Oleracea
Ogura CMS was transferred into cabbage by distant hybridization between radish and cabbage combined with embryo rescue, and OguraR1 CMS cabbage was created [58]. However, because the obtained Ogura CMS cabbage has a radish cytoplasm, problems frequently occur, such as nuclear-cytoplasmic incompatibility, chlorosis at low temperature (15 °C), nectary dysplasia and poor seed set. OguraR2 CMS materials were successfully obtained by replacing the radish chloroplast in the OguraR1 CMS system with the cauliflower chloroplast by protoplast fusion [77]. Most likely because of the large proportion of the radish cytoplasm in protoplast fusion plants, the growth vigor of two sterile lines derived from OguraR2 CMS materials decreased, while the numbers of abnormal flowers and pods increased after 5-6 generations of transformation breeding [78]. The Asgrow seed company applied the asymmetric protoplast fusion method to reduce the proportion of radish mitochondria and obtained Ogura CMSR3 with normal sterility and pistil structure. With the Ogura CMSR3 material as the sterile source, several Ogura CMS lines of cabbage with stable sterility and normal flowering and seed set have been bred and widely used in the production and breeding of cabbage hybrids. A number of new varieties have been approved or identified, such as 'Zhonggan 22', 'Zhonggan 192', 'Zhonggan 96' and 'Zhonggan 101' in China [79].
The Rfo restorer gene was introduced from B. napus to B. oleracea by the bridge material Brassica alboglabra Bailey [80][81][82]. Using triploid and hexaploid methods, several Ogura CMS fertility-restored plants of B. alboglabra Bailey were successfully created. A series of fertility-restored cabbage plants in the BC5 generation were obtained by crossing the BC2 generation of B. alboglabra Bailey fertility-restored plants with cabbage materials, and these materials were successfully used for the fertility restoration of Ogura CMS clubroot resistance resources in B. oleracea [83]. The Ogura CMS Rfo restorer was also produced by transforming the modified Rfo B restorer gene into the Ogura CMS line 'CMS2016' of B. oleracea var. capitata, and 18 Ogura CMS restorers with different morphologies were developed [84]. The polyploid materials, introgression lines and interspecific hybrid materials created from the above distant hybridization approaches will also provide important materials for several essential research topics, such as polyploidization, distant hybridization, the influence of alien fragments, the innovation of gene function and the formation of important traits.
Brassica Rapa
Ogura CMS was successfully transferred into Chinese cabbage [91,92]. Williams and Heyn (1981) introduced the sterility gene from Japanese radish into Chinese cabbage and bred many male-sterile lines of Chinese cabbage [92]. However, there were different defects in these germplasm resources after the screening of Ogura sterile lines. Delourme et al. (1994) also reported the transformation breeding of the Ogura cytoplasm from B. napus to B. rapa [91]. In China, sterile resources were collected from the United States, Germany and other countries. Some Ogura CMS Chinese cabbage lines were introduced from abroad, and a number of sterile lines that basically overcame the defects of seedling yellowing, nectary degeneration, bud abortion and a low seed setting rate of the original Ogura sterile lines of B. rapa were developed and obtained after continuous multigeneration backcross breeding [93][94][95]. Hou et al. (2001) transferred Ogura CMS into Chinese cabbage by an asymmetric fusion technique [96]. Cui et al. (2004) used Ogura CMS B. napus as the female parent and inbred lines as the recurrent parent to conduct hybridization and continuous backcross breeding and finally obtained three CMS lines of B. rapa with stable sterility and nonchlorosis [97]. Zhao et al. (2007) introduced the sterility of RC97-1 into Chinese cabbage by using Ogura CMS B. napus RC97-1 as a sterile source and then bred a new Ogura CMS line, RC7, which overcame many shortcomings of the original Ogura CMS material through continuous backcrossing and strict economic trait selection [98]. Although Ogura CMS Chinese cabbage has been developed over a long period, it is generally believed that the lack of excellent Ogura CMS Chinese cabbage varieties with good economic traits and large-scale production is due to the problems of disease resistance, poor combination ability, low yield, negative cytoplasmic effects and the severe degradation of backcross generations despite some progress in addressing the problems of yellowing and nectary development. In addition, restorer lines of B. rapa have not been created or reported.
Brassica Juncea
CMS lines of B. juncea are obtained in three main ways: (1) spontaneous CMS; (2) CMS created by introducing the B. juncea nucleus into a heterologous cytoplasm, with the resulting CMS given a name that reflects the species, such as Moricandia CMS [100], Trachystoma CMS [101] and Siifolia CMS [102]; and (3) CMS, first widely used in B. napus, transferred into B. juncea through protoplast fusion, somatic hybridization and conventional backcrosses, such as Tournefortii CMS [103], Oxyrrhina CMS [104] and Ogura CMS [62]. Delourme et al. (1994) used protoplast fusion to transfer Ogura CMS from B. napus to B. juncea. Kirti et al. (1995) also reported the transfer of Ogura CMS to the B. juncea variety RLM198 by using continuous backcrossing combined with variety screening. However, the obtained sterile line exhibited serious defects, such as seedling yellowing, late flowering time, poor seed set and small or abnormally shaped pods. Therefore, Kirti et al. (1995) further fused the protoplast of the Ogura sterile line and normal cytoplasmic variety RLM198 and obtained an improved Ogura CMS line with great improvement in the above defects by continuous backcrossing. INRA also created the Ogura sterile line and restorer line of B. juncea, and Tian et al. (2014) introduced it in Canada [99]. The sterile line was proven to be stable, while the introduced Ogura CMS restorer line in B. juncea had a low seed setting rate and many unfavorable agronomic traits, preventing the restorer line from being used. Using MAS combined with backcross breeding, Tian et al. (2014) successfully developed a homozygous Ogura CMS restorer line with a high seed set and good agronomic traits in B. juncea, and the linkage redundancy of the improved restorer line was greatly reduced [99].
Other Cruciferous Crops
In Brassica campestris, an Ogura CMS line was obtained by crossing the Ogura CMS line of B. napus with B. campestris and backcrossing with the original male parent [92].
In B. campestris L. var. purpurea Bailey, a corresponding CMS line was obtained by using Ogura CMS material in B. napus after three consecutive years of backcrossing [105]. Since this variety is grown for its consumable vegetative organs rather than its seeds, there is no need for restorer lines.
In B. oleracea L. var. botrytis L., an Ogura CMS line with resistance to the herbicide atrazine was obtained by protoplast fusion technology [85]. Jiang [89] described a broccoli Ogura CMS sterile line; its flower organ structure was normal, and its seed setting ability was strong. Using this broccoli sterile line, the first-generation broccoli hybrid 'Luqing No. 1' was prepared and had the advantages of strong growth potential, early maturity, excellent comprehensive traits and significant yield benefits. Using Chinese kale 16Q2-11 containing the fertility restorer gene Rfo as an intermediate material, the Rfo gene was introduced into six Ogura CMS broccoli lines by hybridization and subsequent backcrossing [90].
How Do CMS Sterility Genes and Fertility Restorer Genes Come about?
Plant mitochondrial genomes are rich in repeated sequences, with estimates of up to 38% of the mitochondrial genome occupied by tandem repeats, short repeats and large repeats [189,190]. Repetitive sequences are the most common and important sequences in the mitochondrial intergenic region. They are related to the mutation and recombination of the mitochondrial genome and play an important role in the evolution of plant mitochondrial genomes. The comparison of the mitochondrial genomes of CMS lines and maintainer lines showed that there were almost no deleterious mutations in CMS line genomes except that of Beta vulgaris ssp. maritima with G-type CMS, which means that deleterious mutations are not the root cause of CMS [191,192]. Together with the reported mitochondrial genome sequences, comparative analysis showed that most CMS types are closely related to repeated sequences. Heng et al. (2014) found that repeats in the CMS line were typically twice the size of those in the maintainer line and that three large repeats were present downstream of the hau CMS-associated gene orf288, which implicates strong ties between repeat sequences and the formation of this chimera [193,194]. Comparative analysis between the mitogenomes of CMS and male-fertile lines of pepper (Capsicum annuum L.) showed that the CMS candidate genes orf507 and ψatp6-2 were proximal to the edges of highly rearranged CMS-specific DNA regions, whose evolution may be the result of nearby intermediate or large-sized repeats [195]. Tanaka et al. (2014) reported that the Ogura-type mitochondrial genome was highly rearranged compared with the normal-type genome by recombination through one large repeat and multiple short repeats. A high rearrangement rate is a common feature of CMS mitochondrial genomes. Recombination via repeat sequences is believed to be responsible for extensive structural changes [20]. The structure of plant mitochondrial genomes evolves rapidly, driven by rearrangements that result from high rates of recombination [196]. Thus, the mitochondrial genome rearrangement leading to CMS may be associated with DNA recombination mediated by repeated sequences; CMS probably arose in progenitor species and was then revealed when hybridization removed the corresponding nuclear restorer gene. Mutation and selection are two primary forces that drive evolutionary processes [197]. Any nuclear allele able to restore the function of the male gametophyte in plants with sterility genes will be favored and subsequently fixed in a population. As the frequency of the CMS gene increases in the population, selection will more strongly favor the restorer alleles, which should subsequently spread in the population.
What Are the Future Development Directions of Ogura CMS?
Ogura CMS is the most extensively studied and widely used CMS system in Brassicaceae crops [54]. Nevertheless, many open questions remain, and this situation has changed in recent years with the development of new techniques. According to the limitations and problems during the application of Ogura CMS in existing research, future directions are discussed and then summarized into two major aspects.
More direct evidence for the functional validation of the sterility gene orf138 is still needed. Although the overexpression of the orf138 gene in transgenic Arabidopsis and the knockdown of the orf125 gene, which is the homologous gene of orf138, have been achieved, functional genomics research needs to employ diverse experimental approaches to investigate gene functions [198]. Gene editing of orf125 was implemented by using transcription activator-like effector nucleases (TALENs) with mitochondrial localization signals (mitoTALENs). This approach has also been applied to validate the mitochondrial gene orf312 of Tadukan-type CMS and the mitochondrial gene orf79 of CMS-BT in rice [120,199]. CRISPR-based gene editing has revolutionized targeted gene editing in plants in recent years [200]. CRISPR-mediated genome editing was realized in the mitochondria of yeast and the chloroplasts of Chlamydomonas [201]. Kang et al. (2021) achieved DNA-free editing in chloroplasts and generated lettuce calli and plantlets resistant to streptomycin or spectinomycin [202]. The Northwest Institute of Plateau Biology, Chinese Academy of Sciences, also unveiled a method to develop male-sterile lines in plants using a mitochondrial gene editing system [203]. One of the most direct pieces of evidence would be to silence the orf138 gene by mitochondrial gene editing and directly realize fertility conversion. In addition, the transcription-specific targeting of mitochondrial gene expression is difficult, and it is also necessary to realize the mitochondrially targeted expression of the orf138 gene in Cruciferae crops beyond Arabidopsis.
A great deal remains to be determined about sterility and fertility restoration mechanisms. (1) The mechanism by which sterility genes affect pollen abortion but not female gametes and other tissues remains unclear. Research on CMS-S in maize has led to a breakthrough in solving this problem. Xiao et al. (2020) found that the nuclear-encoded DREB transcription factor ZmDREB1.7 was specifically expressed in anthers and promoted the expression of the CMS-S gene orf355 [204]. The accumulation of ORF355 activated mitochondrial retrograde signaling and in turn induced ZmDREB1.7 expression. ZmDREB1.7 and orf355 formed a positive-feedback transcriptional regulation loop in microspores of CMS-S maize, which eventually led to the accumulation of the ORF355 protein and abortion. Similar transcription factors may also exist in Ogura CMS and influence pollen abortion without affecting other tissues. Recent advances in single-cell sequencing technologies would be useful for addressing this issue. Zhang et al. (2021) isolated the anthers of CMS lines and restorer lines, analyzed their transcriptional expression profiles by single-cell RNA sequencing, and then expounded the mechanism of CMS-C sterility genes and nuclear restorer genes affecting meiosis and microspore development [205]. There are still many research gaps in the genetics, cytology and molecular biology fields related to male and female gametophyte development. Breakthroughs in future leading-edge technology are expected to provide insights into the regulatory mechanism of male and female gametophyte development at the single-cell level and into basic theoretical research on sterility and fertility restoration mechanisms. (2) Problems such as large differences in pollen amount and pollen viability and a low Rfo gene transmission rate still exist in fertility restorer lines [83]. Su et al. (2016) performed bulked segregant RNA-Seq and identified six potential associated genes in minor effect QTLs contributing to fertility instability, which inspired us [206]. Studies have shown that the introgression of exogenous fragments has an effect on genetic variation in chromosomes possessing chromatin fragments and may result in both whole-genome shock and local chromosomal shock [207,208]. The effect of exogenous fragments from R. sativus and B. napus will be an important direction for future research on the Ogura CMS/FR system.
|
2022-08-17T15:05:13.929Z
|
2022-08-01T00:00:00.000
|
{
"year": 2022,
"sha1": "816586e100b8b6b4d384e008590ff847397910ab",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/23/16/9099/pdf?version=1660398037",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c8ddc1a5104db3171cfeb33b67facdf92feff7f4",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
}
|
267851673
|
pes2o/s2orc
|
v3-fos-license
|
Exploring Interference Issues in the Case of n25 Band Implementation for 5G/LTE Direct-to-Device NTN Services
This paper delves into an interference analysis focusing on the forthcoming Starlink Generation 2 satellites, which are stated to operate within the 1990–1995 MHz frequency band. The aim is to assess the potential interference from this Starlink system to the satellite receivers of mobile satellite systems (MSSs), which are set to function within the 1980–2010 MHz range, and to the satellite receivers of NTN systems, which are planned to operate in the n256 band defined by the 3GPP specifications. Through simulation-based evaluations, both single-entry and aggregate interference levels from Starlink to MSSs and NTN systems are comprehensively explored. Several protection criteria were used to estimate the interference impact. The study is in line with the Recommendations of the International Telecommunication Union Radiocommunication Sector (ITU-R) and the common approaches used when performing compatibility studies between satellite systems. The findings of this study demonstrate the feasibility of utilizing the n25 band for NTN direct-to-device services.
Introduction
In the dynamic landscape of contemporary broadband communications, the pursuit of higher data rates aligns seamlessly with the quest for widespread connectivity. This becomes particularly crucial as more tourists explore remote natural areas, where traditional communication infrastructure is either absent or severely limited. The challenge of providing mobile services in such areas persists, and current satellite solutions, though capable, often demand costly and cumbersome user equipment, discouraging a substantial user base and relegating these systems to niche applications.

A promising solution to overcome this challenge involves delivering broadband satellite services directly to regular handsets, specifically unmodified smartphones. This approach has proven successful in transforming niche applications into mainstream phenomena, as exemplified by the integration of GPS functionality into widely adopted consumer gadgets like smartphones and tablets. Supporting LTE/NR satellite services with regular handsets could potentially provide a viable solution to access broadband services in remote areas.

However, implementing direct-to-device (D2D) satellite systems in terrestrial spectrum bands poses significant challenges, especially when dealing with unmodified smartphones. These challenges encompass interference issues; Doppler shifts, which affect signal frequency and phase, causing distortion and reception problems; delays that complicate the realization of Hybrid Automatic Repeat Request (HARQ) protocols, which are crucial for reducing the Bit Error Rate (BER) of the channel; and the inherent limitation of smartphones' relatively low transmit power, leading to constraints on link budget and throughput. Regulatory challenges also arise because these systems operate in terrestrial spectrum bands, falling outside traditional satellite service allocations.
This study delves into the multifaceted issue of interference in D2D satellite systems, exploring potential problems that may arise when implementing such systems. A notable challenge is the necessity to utilize terrestrial spectrum bands for satellite services, bands which were not originally intended for mobile satellite services. Some companies, like AST SpaceMobile and Lynk, have ventured into this domain, launching test satellites capable of providing broadband services in UHF terrestrial mobile bands. Although utilizing UHF bands offers advantages due to their excellent propagation characteristics and favorable link budgets, it comes with the constraint of requiring larger antennas, a challenge imposed by payload mass limitations [1].
To address the antenna size limitation, one potential solution is to operate in higher terrestrial spectrum bands. SpaceX and T-Mobile's collaboration plans to launch second-generation Starlink satellites serving regular handsets within the n25 band, specifically 1910-1915 MHz (Earth-to-space) and 1990-1995 MHz (space-to-Earth). However, this frequency band presents a challenge, as the space-to-Earth link overlaps with the mobile satellite service (MSS) S-band, using the 1980-2010 MHz (Earth-to-space) and 2170-2200 MHz (space-to-Earth) bands. This overlap introduces the potential for reverse space-to-space link interference, a complex issue due to radio waves' propagation in a vacuum with minimal attenuation. Additionally, the S-band, defined as the n256 band in 3GPP for NTN networks, is supported by future smartphones, creating an overlap with other potential D2D systems built on future NTN networks.

This paper presents an interference analysis conducted between second-generation Starlink satellites and MSS and NTN systems, focusing on the frequency band 1990-1995 MHz. It specifically considers typical MSSs, which are based on ITU filings from various MSS networks and designed to offer voice calls, IoT services, and data transfer services in the S-band. The study also explores the potential interference with NTN systems in the n256 band based on 3GPP specifications. The results will clarify whether the n25 band is suitable for providing direct-to-device services or if interference concerns must be addressed.
Literature Review
During the World Radiocommunication Conference 2023 (WRC-23), a new agenda item for the future conference in 2027 was opened. Under this agenda item, the ITU-R is to study possible allocations for MSS that utilize International Mobile Telecommunication (IMT) user equipment in the terrestrial cellular frequency bands within the 694-2700 MHz range. The IMT-based radio interfaces include UMTS, LTE, 5G, and future generations of cellular communications. This agenda item will essentially study the possibility of using unmodified handsets based on LTE, 5G, and future generations within the terrestrial bands of cellular operators. However, these studies have not yet begun and will run for the next four years, until WRC-27. At the national level, some countries have already begun to update their national frequency regulations to allow satellite systems to operate in terrestrial bands [2]; however, these updates do not eliminate all interference issues.
Thus, presently, there is a limited body of research focusing on interference issues in satellite systems deployed in the terrestrial spectrum, especially in the context of implementing direct-to-device (D2D) services. While a few studies have touched upon this subject, the depth of technical analysis remains somewhat limited. This study places emphasis on investigating interference scenarios between non-terrestrial network (NTN) satellite systems and other mobile satellite service (MSS) systems, particularly within the n255 and n256 frequency bands [1].
In the 3GPP TR 38.863 report, titled "Non-Terrestrial Networks (NTN) Related RF and Co-Existence Aspects", the primary focus revolves around frequency allocations, aligning with the ITU-R Radio Regulations (RR), and encompasses several compatibility studies [3]. One published article delves into the compatibility between satellite and terrestrial segments of 5G networks in adjacent frequency channels [4]. Another publication [5] outlines the simulation methodology for interactions between Non-Geostationary Satellite Orbit (NGSO) and Geostationary Satellite Orbit (GSO) systems. Additionally, there is work discussing the potential of hybrid satellite/terrestrial networks, which includes considerations for traffic sharing between terrestrial and satellite Radio Access Networks (RAN) [6].

This research, exploring the use of 3GPP technology for satellite communication, underscores the importance of conducting interference analyses, highlighting the risks that may arise when the same spectrum is utilized for both terrestrial and satellite systems [7]. Furthermore, Ref. [8] provides insights into potential approaches for dynamic spectrum sharing between terrestrial and non-terrestrial networks in the context of 5G services and beyond.

In summary, while numerous papers acknowledge the challenge of interference when employing the terrestrial spectrum for satellite connectivity, most have refrained from conducting in-depth technical analyses, primarily raising awareness of the issue. Consequently, there is a need for more comprehensive studies that can offer a deeper understanding of the extent of the problem, which is important for correct decision-making when deploying D2D services in terrestrial bands.
Simulation Parameters and Scenarios
In 2021, 3GPP unveiled Technical Report TR 38.821. This report provides a methodology and includes example parameters that enable the simulation of satellite-based non-terrestrial networks (NTN) offering 5G services. Its primary goal is to delineate the essential features and adaptations needed for the New Radio (NR) protocol to operate effectively within non-terrestrial networks, with a particular emphasis on satellite access [9].
Following the finalization of Release 17, several new frequency bands were introduced for the NTN segment of NR technologies. Notably, the n255 band encompasses a pair of frequency bands, 1626.5-1660.5 MHz (Earth-to-space) and 1525-1559 MHz (space-to-Earth), and the n256 band comprises a pair of frequency bands: 1980-2010 MHz (Earth-to-space) and 2170-2200 MHz (space-to-Earth).
In August 2022, SpaceX and T-Mobile unveiled a groundbreaking partnership, with plans to offer direct-to-device (D2D) services in the United States, even in the most remote and previously unreachable areas, where traditional cellular signals struggle to reach. The concept revolves around creating a novel network that harnesses the capabilities of Starlink satellites and T-Mobile's mid-band 5G spectrum. Subsequently, it was announced that Starlink's second-generation system would provide D2D services within the Frequency Division Duplex (FDD) pair of the n25 band, specifically within the 1910-1915 MHz (Earth-to-space) and 1990-1995 MHz (space-to-Earth) bands. Starlink has further extended its global reach by partnering with cellular providers, including Rogers in Canada, Optus in Australia, One NZ in New Zealand, KDDI in Japan, and Salt in Switzerland.
It is important to note that if both bands are utilized for D2D services, the NTN satellite receivers of different Mobile Network Operators (MNOs) will be susceptible to interference from Starlink satellite transmitters. Since space-to-space interference paths attenuate the interfering signals only weakly, such interference is difficult to mitigate, especially considering that the victim receiver satellite will have within its line of sight a very large number of satellites serving different countries, which makes this interference very difficult to manage. Figure 1 illustrates the interference scenario between a typical NTN system operating in the n256 band and Starlink Generation 2, which operates in the n25 band.
The simulation characteristics of the NTN satellite can be derived from the ITU-R Report M.2514 [10] and Technical Report 3GPP TR 38.821. The spacecraft is located in low-Earth orbit (LEO). These parameters can be applied to both the terrestrial and satellite spectrum of the S-band. The NTN system characteristics that were simulated in the study are presented in Table 1. The NTN satellite employs a multibeam antenna system for both reception and transmission. This advanced antenna comprises 19 individual beams, each designed to operate within distinct frequency ranges. This particular antenna configuration is comprehensively described in the 3GPP TR 38.821 document. Figure 2 illustrates the spatial layout of these beams on the NTN satellite's onboard antenna.
In our analysis, we have considered the utilization of a typical unmodified handset (smartphone) as a reference device. Table 2 provides an overview of the smartphone's typical characteristics. Interference from Starlink to MSSs is expected to be most common in regions where countries using MSS equipment are geographically close to those adopting Starlink's n25 band for direct-to-device connectivity. SpaceX has initially planned to employ the n25 band in collaboration with T-Mobile in North America. However, considering the global reach of the Starlink satellite system and the fact that the n25 band is supported by smartphones worldwide, it is likely that expansion into other regions will be on the horizon. This expansion will be especially significant given the band's global compatibility with smartphones across all regions. Therefore, in our study, we consider interference to MSS user equipment located in a desert area of the African continent, where satellite services are likely to be used since such remote areas do not have terrestrial network coverage. Figure 3 illustrates the interference scenario between the MSS system operating in the 1980-2010 MHz band and Starlink Generation 2, which operates in the n25 band. The characteristics of the MSS system were derived from the ITU-R filings. The MSS system characteristics that were simulated in the study are presented in Table 3.
Table 3. Simulation parameters of the MSS system.

For both Starlink and NTN, each beam of the multibeam antenna used the pattern based on Recommendation ITU-R S.1528 [11], which describes typical antenna patterns for NGSO satellites in the frequency ranges below 30 GHz; these antenna patterns are commonly used for compatibility studies in the ITU-R study groups and for frequency coordination between satellite systems [12]. While it is important to acknowledge that the antenna patterns described in the recommendation are essentially approximations, real-world tests and measurements have consistently demonstrated their validity and applicability. These patterns, despite being theoretical representations, have been proven to closely align with the actual performance of NGSO satellite communication systems in practice. In other words, they serve as reliable models that accurately represent the behavior of the antennas in real-life scenarios. The antenna pattern of the MSS is based on Recommendation ITU-R S.672, which is stated in the ITU filings of the system [13].
The antenna patterns are presented in Figure 4.
Methodology of Simulations
The study employed a hybrid approach, combining deterministic analysis and Monte Carlo simulations. The deterministic aspect involved calculating the orbital positions of the satellites based on Kepler's laws, while the Monte Carlo simulations were utilized to generate interfering transmitters from user equipment (UE). This process enabled us to estimate the cumulative interference caused by terrestrial UE on the NTN satellite receiver. For more precise results, the simulations used a step size of 1 s. Because of their complexity, the computations were performed in the corresponding Matlab toolkits; at the same time, several of the most important expressions are provided below to explain the mathematics behind these computations and how the results were obtained.
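As a rough illustration of this hybrid approach (not the authors' Matlab implementation), the Python sketch below propagates a satellite position deterministically from Kepler's third law for a circular LEO orbit and scatters UE positions by Monte Carlo; the altitude, inclination, number of UEs, and service-area bounds are illustrative placeholders, and a non-rotating spherical Earth is assumed.

```python
import numpy as np

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6371e3            # mean Earth radius, m

def sat_position(alt_m, t_s, inc_deg=53.0, raan_deg=0.0):
    """Deterministic position of a satellite on a circular orbit at time t_s
    (inertial frame; Earth's rotation is ignored for simplicity)."""
    r = R_EARTH + alt_m
    n = np.sqrt(MU_EARTH / r**3)   # mean motion from Kepler's third law, rad/s
    u = n * t_s                    # argument of latitude for a circular orbit
    inc, raan = np.radians(inc_deg), np.radians(raan_deg)
    pos_in_plane = r * np.array([np.cos(u), np.sin(u), 0.0])
    rot_inc = np.array([[1.0, 0.0, 0.0],
                        [0.0, np.cos(inc), -np.sin(inc)],
                        [0.0, np.sin(inc), np.cos(inc)]])
    rot_raan = np.array([[np.cos(raan), -np.sin(raan), 0.0],
                         [np.sin(raan), np.cos(raan), 0.0],
                         [0.0, 0.0, 1.0]])
    return rot_raan @ rot_inc @ pos_in_plane

def monte_carlo_ues(n_ue, lat_range=(-10.0, 10.0), lon_range=(10.0, 30.0), seed=0):
    """Monte Carlo step: scatter UEs uniformly over an illustrative service area
    and return their positions on the Earth's surface."""
    rng = np.random.default_rng(seed)
    lat = np.radians(rng.uniform(*lat_range, n_ue))
    lon = np.radians(rng.uniform(*lon_range, n_ue))
    return R_EARTH * np.column_stack((np.cos(lat) * np.cos(lon),
                                      np.cos(lat) * np.sin(lon),
                                      np.sin(lat)))

# slant ranges from every UE to the satellite at t = 100 s; these distances feed
# the free-space path-loss and interference expressions given below
sat = sat_position(alt_m=550e3, t_s=100.0)
ues = monte_carlo_ues(n_ue=1000)
slant_range_m = np.linalg.norm(ues - sat, axis=1)
print(f"mean slant range: {slant_range_m.mean() / 1e3:.0f} km")
```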
In the case of a space-to-space interference scenario, PL can be calculated using the free space propagation model, following Recommendation ITU-R P.525 [14].
The interference level from the i-th interfering station can be calculated using the following expression:

$$I_i = P_{\mathrm{interferer}} + G_{\mathrm{interferer}} + G_{\mathrm{victim}} - PL,$$

where $I_i$ represents the interference level caused by the i-th interfering station; $P_{\mathrm{interferer}}$ denotes the output power of the interfering station, expressed in dBW; $G_{\mathrm{interferer}}$ stands for the gain of the transmitting antenna on the interferer satellite, directed toward the victim receiver, measured in dBi; $G_{\mathrm{victim}}$ represents the gain of the receiving antenna at the victim receiver station, oriented toward the interfering station, also measured in dBi; and PL stands for the propagation loss between the interfering transmitter and the victim receiver, measured in dB.
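Assuming the dB-domain form written above, a minimal Python sketch of the single-entry interference calculation might look as follows; the free-space loss follows the standard ITU-R P.525 expression, and all numerical inputs are illustrative placeholders rather than values from the paper's tables.

```python
import math

C_LIGHT = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss (ITU-R P.525): 20*log10(4*pi*d*f/c), in dB."""
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / C_LIGHT)

def single_entry_interference_dbw(p_tx_dbw: float, g_tx_dbi: float,
                                  g_rx_dbi: float, pl_db: float) -> float:
    """I_i = P_interferer + G_interferer + G_victim - PL (all terms in dB units)."""
    return p_tx_dbw + g_tx_dbi + g_rx_dbi - pl_db

# illustrative space-to-space example: 1600 km separation at 1992.5 MHz
pl = fspl_db(distance_m=1.6e6, freq_hz=1992.5e6)
i_dbw = single_entry_interference_dbw(p_tx_dbw=10.0, g_tx_dbi=30.0,
                                      g_rx_dbi=25.0, pl_db=pl)
print(f"path loss = {pl:.1f} dB, single-entry interference = {i_dbw:.1f} dBW")
```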
To calculate the aggregate interference level from Starlink, the following expression can be used [14]:

$$I_{\mathrm{agg}} = 10\log_{10}\left(\sum_{i} 10^{I_i/10}\right).$$

The Carrier-to-Noise-and-Interference Ratio (C/(N + I)) for the transmission link between the satellite and UE can be derived from the Carrier-to-Noise Ratio (C/N) and the Carrier-to-Interference Ratio (C/I), as indicated by the following equation [13,14]:

$$C/(N+I) = -10\log_{10}\left(10^{-0.1\,C/N} + 10^{-0.1\,C/I}\right).$$

The formula for the C/N calculation is [15,16]

$$C/N = EIRP + G/T + 228.6 - PL - B,$$

where EIRP stands for the effective isotropic radiated power, expressed in dBW; G/T is the antenna-gain-to-noise-temperature, measured in dB/K; 228.6 dBW/K/Hz corresponds to the Boltzmann constant term ($-10\log_{10} k$); PL is the free-space path loss; and B is the channel bandwidth in dBHz.
The antenna-gain-to-noise-temperature G/T can be derived by the following equation [16]:

$$G/T = G_R - N_f - 10\log_{10}(T_0 + T_r),$$

where $G_R$ is the receiver antenna gain, $N_f$ is the noise figure, $T_0$ is the environment temperature, and $T_r$ is the receiver noise temperature. The path loss between the interfering Starlink satellites and the victim MSS/NTN satellites can be calculated using the traditional free-space expression [17]:

$$PL = 20\log_{10}\left(\frac{4\pi d f}{c}\right),$$

where f represents the frequency of the transmitter, d represents the distance between an interfering transmitter and a victim receiver, and c is the speed of light.
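The dB-domain relations above can be chained together directly; the sketch below is an illustrative Python rendering (the G/T form is the simplified one given above, and every numerical value in the example is a placeholder, not a parameter from the paper's tables).

```python
import math

def aggregate_interference_dbw(single_entries_dbw):
    """Power-sum the single-entry interference contributions in the linear domain."""
    return 10.0 * math.log10(sum(10.0 ** (i / 10.0) for i in single_entries_dbw))

def combine_cn_ci_db(cn_db_val: float, ci_db_val: float) -> float:
    """C/(N+I) = -10*log10(10^(-0.1*C/N) + 10^(-0.1*C/I))."""
    return -10.0 * math.log10(10.0 ** (-0.1 * cn_db_val) + 10.0 ** (-0.1 * ci_db_val))

def cn_db(eirp_dbw: float, gt_dbk: float, pl_db: float, bw_dbhz: float) -> float:
    """Link-budget C/N = EIRP + G/T + 228.6 - PL - B (228.6 = -10*log10(k))."""
    return eirp_dbw + gt_dbk + 228.6 - pl_db - bw_dbhz

def g_over_t_dbk(g_rx_dbi: float, nf_db: float, t0_k: float, tr_k: float) -> float:
    """Simplified G/T = G_R - N_f - 10*log10(T_0 + T_r)."""
    return g_rx_dbi - nf_db - 10.0 * math.log10(t0_k + tr_k)

# illustrative end-to-end use of the relations
gt = g_over_t_dbk(g_rx_dbi=30.0, nf_db=2.0, t0_k=290.0, tr_k=150.0)
bw_dbhz = 10.0 * math.log10(360e3)
carrier_to_noise = cn_db(eirp_dbw=-3.0, gt_dbk=gt, pl_db=162.0, bw_dbhz=bw_dbhz)
noise_dbw = -228.6 + 10.0 * math.log10(290.0 + 150.0) + bw_dbhz   # k*T_sys*B
i_agg = aggregate_interference_dbw([-150.0, -152.0, -158.0])
carrier_to_interf = carrier_to_noise + (noise_dbw - i_agg)        # C/I = C/N + (N - I)
cnir = combine_cn_ci_db(carrier_to_noise, carrier_to_interf)
print(f"G/T = {gt:.1f} dB/K, C/N = {carrier_to_noise:.1f} dB, C/(N+I) = {cnir:.1f} dB")
```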
Methodology of the Simulation of Interference to MSS
It should be noted that most MSS systems use a transparent payload. Users transmit within the frequency range of 1980-2010 MHz; subsequently, the signal is frequency-converted to the 6700-7075 MHz range before retransmission to the Earth station gateway, which actively tracks the MSS system. Since there is no onboard processing, it is necessary to calculate the composite C/(N + I) of the transparent payload to estimate the end-to-end performance of the link. The following expression can be used to calculate the end-to-end performance [15,16]:

$$C/(N+I)_{\mathrm{total}} = -10\log_{10}\left(10^{-0.1\,C/(N+I)_{\mathrm{up}}} + 10^{-0.1\,C/(N+I)_{\mathrm{down}}}\right),$$

where $C/(N+I)_{\mathrm{up}}$ is the signal-to-noise-plus-interference ratio of the Earth-to-space link, and $C/(N+I)_{\mathrm{down}}$ is the signal-to-noise-plus-interference ratio of the space-to-Earth link [18].
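A direct Python transcription of this composite-link combination could look like the following helper (the example values are placeholders):

```python
import math

def composite_cni_db(cni_up_db: float, cni_down_db: float) -> float:
    """End-to-end C/(N+I) of a transparent (bent-pipe) payload: the uplink and
    downlink contributions add in the linear domain."""
    return -10.0 * math.log10(10.0 ** (-0.1 * cni_up_db) +
                              10.0 ** (-0.1 * cni_down_db))

# e.g. a 9 dB uplink and a 14 dB downlink give a composite of roughly 7.8 dB
print(round(composite_cni_db(9.0, 14.0), 1))
```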
Figure 3 shows the simulation of the end-to-end performance of the MSS system while the interfering Starlink satellite is in proximity to it.
After C/N and C/(N + I) were calculated, it is possible to calculate Eb/No. Eb/No is directly related to the C/N value and can be expressed as follows [19]:

$$E_b/N_0 = C - N + 10\log_{10}\left(\frac{B}{R}\right),$$

where $E_b/N_0$ represents the ratio of energy per bit to spectral noise power density (dB); C is the carrier power level (dBW); N is the noise level in a reference bandwidth (dBW); I represents the interference level in a reference bandwidth (dBW), which replaces N with N + I under interference conditions; R is the data rate (kbit/s); and B is the reference bandwidth (kHz) [20].
The total noise level under interference conditions should be presented as the sum of the spectral density of the receiver's noise and the external noise, $N_\Sigma = N_0 + I_0$. The levels of $E_b/(N_0 + I_0)$ can be checked against the relevant curves to calculate the BER levels, depending on the modem implementation. In our study we have considered the most commonly used modulation scheme, QPSK. Since there are different modem implementations for MSS, no code rate was considered and raw BER levels were obtained. The threshold BER levels depend on the system and usually vary; however, the most common threshold BER level is 10^-6. Many MSS systems employ this threshold in their specifications, including the Globalstar satellite system. Figure 5 shows a simulation of interference from Starlink to the MSS system.
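To illustrate how these quantities feed the BER estimate, the sketch below converts a composite C/(N+I) into E_b/(N_0+I_0) and applies the standard uncoded, Gray-mapped QPSK bit-error formula; the data rate and reference bandwidth are placeholder values, and raw BER without channel coding is assumed, consistent with the text above.

```python
import math

def eb_n0_db(cni_db: float, ref_bw_hz: float, bit_rate_bps: float) -> float:
    """E_b/(N_0+I_0) = C/(N+I) + 10*log10(B/R), with B and R in consistent units."""
    return cni_db + 10.0 * math.log10(ref_bw_hz / bit_rate_bps)

def qpsk_raw_ber(eb_n0_db_val: float) -> float:
    """Raw (uncoded, Gray-mapped) QPSK bit error rate: 0.5*erfc(sqrt(Eb/N0))."""
    eb_n0_lin = 10.0 ** (eb_n0_db_val / 10.0)
    return 0.5 * math.erfc(math.sqrt(eb_n0_lin))

# illustrative: a 7.8 dB composite C/(N+I), 90 kHz reference bandwidth, 60 kbit/s
ebn0 = eb_n0_db(7.8, ref_bw_hz=90e3, bit_rate_bps=60e3)
print(f"Eb/(N0+I0) = {ebn0:.1f} dB, raw QPSK BER = {qpsk_raw_ber(ebn0):.2e}")
```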
Methodology of the Simulation of Interference to NTN
In the realm of satellite communications, the evaluation of throughput losses involves the examination of Modulation and Coding Schemes (MODCOD). These coding schemes are typically outlined in the corresponding specifications. However, when it comes to incorporating adaptive modulation and coding schemes into simulations, there exists no universally accepted methodology.
In contrast, 3GPP terrestrial specifications have made significant progress in this area. They have developed a methodology that effectively accommodates adaptive modulation and coding schemes. To achieve this, 3GPP has introduced link adaptation approximations [21], which enable the accurate estimation of modem throughput losses, taking into account the intricacies of adaptive modulation and coding schemes.
Given that NTN D2D systems are designed to use the same waveforms to ensure compatibility with smartphones, these link-level adaptations developed by 3GPP can be readily applied to the analysis of interference within the NTN system.
The subsequent equations serve to approximate the throughput over a channel, given a specific signal-to-interference-plus-noise ratio (SINR), measured in dB, when employing link adaptation:

$$\mathrm{Throughput(SINR)} = \begin{cases} 0, & \mathrm{SINR} < \mathrm{SINR_{MIN}} \\ \alpha \cdot S(\mathrm{SINR}), & \mathrm{SINR_{MIN}} \le \mathrm{SINR} \le \mathrm{SINR_{MAX}} \\ \alpha \cdot S(\mathrm{SINR_{MAX}}), & \mathrm{SINR} > \mathrm{SINR_{MAX}} \end{cases}$$

where S(SINR) is the Shannon bound, $S(\mathrm{SINR}) = \log_2(1 + 10^{\mathrm{SINR}/10})$ (bps/Hz); α is the attenuation factor, representing modem implementation losses; SINR_MIN is the minimum SINR of the code set, in dB; and SINR_MAX is the maximum SINR of the code set, in dB. In these equations, the Shannon bound represents the maximum theoretical throughput that can be achieved over an AWGN channel for a given SINR. The parameters α, SINR_MIN, and SINR_MAX can be chosen to represent different modem implementations and link conditions [22]. The parameters from Table 5 represent a typical case, which assumes the following: 1:1 antenna configuration; AWGN channel model; link adaptation; no HARQ. Based on the above equations, a bitrate mapping can be calculated for the uplink and downlink. Figure 6 presents the bitrate mappings for the downlink and uplink of the typical NTN system.

To determine the throughput loss for both the uplink and downlink in a considered network, the essential steps involve calculating the signal-to-noise ratio (SNR) of all the connections and assessing the interference (I) from interfering stations. Next, the interference level should be combined with the background noise level in the analyzed system to compute the signal-to-interference-plus-noise ratio (SINR). These SINR values can then be compared against the reference curves in Figure 6, and the throughput can be determined using the following formula:

$$\mathrm{Throughput\,[kbps]} = \frac{N_{\mathrm{RB\_per\_UE}}}{N_{\mathrm{total\_RBs}}} \cdot B \cdot S_{\mathrm{capacity}} \cdot 10^{3},$$

where Throughput [kbps] is the maximum throughput of the channel, expressed in kbps, $N_{\mathrm{RB\_per\_UE}}$ is the number of resource blocks (RBs) per user, $N_{\mathrm{total\_RBs}}$ is the total number of RBs, B stands for the channel bandwidth, expressed in MHz, and $S_{\mathrm{capacity}}$ is the spectral efficiency, which depends on the SINR and is expressed in bps/Hz.

Figure 7 illustrates a simulation depicting interference from a Starlink satellite to the NTN satellite receiver when the NTN satellite is serving a user during a close flyby of the Starlink satellite, for a single-entry interference case and for aggregate interference.
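For reference, the attenuated-Shannon-bound mapping and the throughput-loss computation described above can be sketched in a few lines of Python; the α and SINR limits below are placeholders standing in for the Table 5 values (which are not reproduced here), and the resource-block split is likewise illustrative.

```python
import math

def spectral_efficiency(sinr_db: float, alpha: float,
                        sinr_min_db: float, sinr_max_db: float) -> float:
    """Link-adaptation approximation of the attenuated Shannon bound (bps/Hz):
    zero below SINR_MIN, clipped above SINR_MAX."""
    if sinr_db < sinr_min_db:
        return 0.0
    sinr = min(sinr_db, sinr_max_db)
    return alpha * math.log2(1.0 + 10.0 ** (sinr / 10.0))

def throughput_kbps(sinr_db: float, n_rb_per_ue: int, n_total_rb: int,
                    bw_mhz: float, **limits) -> float:
    """Throughput[kbps] = (N_RB_per_UE / N_total_RBs) * B[MHz] * S_capacity[bps/Hz] * 1e3."""
    s_capacity = spectral_efficiency(sinr_db, **limits)
    return (n_rb_per_ue / n_total_rb) * bw_mhz * s_capacity * 1e3

# illustrative uplink: compare the throughput with and without external interference
limits = dict(alpha=0.6, sinr_min_db=-10.0, sinr_max_db=22.0)
baseline = throughput_kbps(5.0, n_rb_per_ue=4, n_total_rb=25, bw_mhz=5.0, **limits)
interfered = throughput_kbps(3.0, n_rb_per_ue=4, n_total_rb=25, bw_mhz=5.0, **limits)
loss_pct = 100.0 * (baseline - interfered) / baseline
print(f"throughput loss due to interference: {loss_pct:.1f}%")
```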
Simulation Results
The findings of our studies are presented through Cumulative Distribution Functions (CDFs) illustrating the reduction in carrier-to-noise levels and the resulting throughput loss in the NTN satellite uplink, as well as the BER rise of the MSS composite link. These metrics allow us to effectively estimate the real degradation of the MSS and NTN systems' operations when they are interfered with by the Starlink system in the 1990-1995 MHz band.
It is worth noting that, in Recommendation ITU-R S.2131, satellite communications systems typically consider a threshold C/N reduction of 1 dB, corresponding to a 10% reduction in spectral efficiency [23]; this protection criterion can be applied to MSS systems.
In accordance with 3GPP specifications for terrestrial LTE/NR segments, the threshold for acceptable throughput loss stands at 5% [22]. This protection criterion can be applied to the satellite NTN system offering D2D services.
Single-Entry Interference from Starlink to MSS
In this scenario, the interfering Starlink satellite was in close proximity to the MSS system, which was actively receiving a transmission from user equipment. This scenario lasted approximately 200 s, aligning with the MSS uplink access duration while considering the necessary carrier-to-noise values. The average distance between the victim MSS receiver and the Starlink transmitter was 1600 km. Figure 8 displays the CDFs of single-entry interference from the Starlink satellite operating in the 1990-1995 MHz frequency band to the typical MSS system.
The results indicate that in the case of single-entry interference, specifically in a scenario involving the proximity of a Starlink satellite to an MSS system, the end-to-end link performance of the MSS system can experience an Eb/No degradation of 1 to 3 dB. For the most part, the overall BER levels are within acceptable limits. However, it is noteworthy that there can be a significant surge in BER levels under certain conditions. Considering that our MSS link simulation assumed close to ideal conditions, it is reasonable to speculate that in practical scenarios with this type of interference, BER levels could exceed acceptable thresholds for a substantial portion of the time.
Aggregate Interference from Starlink to MSS
In this scenario, we considered the aggregate interference level originating from the Starlink constellation, which comprised 1694 interfering satellites. Starlink satellites were active only when located above landmass areas and remained inactive over oceanic regions. Figure 9 illustrates the results of the aggregate interference from the Starlink system operating in the 1990-1995 MHz frequency band to the typical MSS system.
The results indicate that in the case of aggregate interference, the MSS end-to-end link experiences a degradation exceeding 10 dB for a significant duration, far surpassing the acceptable threshold for a tolerable Eb/No degradation. This leads to a significant increase in BER levels. The elevated BER levels pose a serious threat of a complete outage of MSS uplink operations within the 1990-1995 MHz frequency range. As a result, once the second-generation Starlink system is deployed and serves millions of users, it will be practically impossible to operate MSS systems within the 1990-1995 MHz frequency band in the affected areas.
Single-Entry Interference from Starlink to NTN
Figure 10 displays the CDF of single-entry interference from a Starlink satellite to the NTN satellite receiver. Our results indicate that, in the case of single-entry interference, the throughput loss amounts to 2.5%. Generally, a 5% throughput loss is considered acceptable. However, this criterion is typically applied to terrestrial LTE/NR systems. Given the NTN satellite's limited uplink link budget for D2D services, even a 2.5% throughput loss can substantially impact the performance of the NTN system.

Aggregate Interference from Starlink to NTN
In this scenario, we considered the aggregate interference level originating from the Starlink constellation, which comprised 1694 interfering satellites. Starlink satellites were active only when located above landmass areas and remained inactive over oceanic regions. It is important to note that, since the NTN satellite utilizes nineteen spot beams, we assumed that only seven beams directly overlap with the Starlink interferers, while the other twelve beams receive out-of-band interference. Figure 11 illustrates the results of the aggregate interference from the Starlink system operating in the 1990-1995 MHz frequency band.

Our findings reveal a significant throughput loss in the case of aggregate interference, amounting to 40%. This throughput loss greatly exceeds the 5% threshold and would lead to a substantial degradation in the quality of service provided by future NTN systems operating in the n256 band. This would result in highly inefficient spectrum utilization in the uplink channel of the 1980-2010 MHz band.
Conclusions
The evolution of non-terrestrial network (NTN) systems for direct-to-device (D2D) services brings significant advantages, leveraging unmodified handsets for seamless integration. However, the competitive pursuit of frequency bands compatible with smartphones has led to potential interference issues. Specifically, the overlap of Starlink's 1990-1995 MHz frequency band with existing MSS systems and with future NTN systems in the n256 band poses challenges with space-to-space link interference. Our study demonstrates that this interference can have severe implications; specifically, our results indicate up to a 40% reduction in uplink throughput for future NTN systems and a significant degradation of the services of currently existing MSS systems.
It is essential to address this issue, given its significant impact on users and the potential depletion of valuable spectrum resources. Considering the inherent challenge of avoiding space-to-space interference, especially given the anticipated high number of active Starlink satellites, there are only two feasible solutions for interference mitigation. The first involves using non-overlapping bands within the n25 band. The second entails separating the MSS/NTN and Starlink service areas. This separation can be implemented by countries aiming to provide satellite services within their territories, requiring explicit agreements and coordination between different administrations to prevent mutual interference.
Figure 1. Interference scenario between the NTN network operating in the 1980-2010 MHz band and Starlink operating in the 1990-1995 MHz band.

Figure 2. Illustration of the allocation of the beams of the NTN satellite used in simulations.

Figure 3. Interference scenario between NTN users and terrestrial users.

Figure 4. Antenna pattern approximations of the NTN satellite and Starlink used in the simulations.

Figure 5. Simulation of the single-entry interference from a Starlink satellite to the MSS receiver: (a) view from Starlink; (b) view from MSS.
Figure 6. Bitrate mappings of the NR downlink and uplink.

Figure 7. Simulation of the interference from a Starlink satellite to the NTN satellite receiver: (a) single-entry interference; (b) aggregate interference.

Figure 8. CDF of single-entry interference from Starlink to the NTN satellite receiver: (a) carrier-to-interference reduction; (b) throughput loss of the NTN uplink.

Figure 9. CDF of aggregate interference from Starlink to the MSS satellite receiver: (a) carrier-to-interference reduction; (b) throughput loss of the NTN uplink.

Figure 10. CDF of single-entry interference from Starlink to the NTN satellite receiver: (a) carrier-to-interference reduction; (b) throughput loss of the NTN uplink.

Figure 11. CDF of aggregate interference from Starlink to the NTN satellite receiver: (a) carrier-to-interference reduction; (b) throughput loss of the NTN uplink.
Table 1. Simulation parameters of the NTN satellite.

Table 2. Handset characteristics that were used in the simulations.
Table 3. Simulation parameters of the MSS system.

For the Starlink satellite system, we gathered simulation data from the Federal Communications Commission (FCC) records and applications. Specifically, our simulation involved 1694 Starlink satellites. It is worth noting that in practical deployments, the number of satellites could be considerably larger, as Starlink has announced plans for more extensive constellations. The simulation of Starlink satellites closely adhered to the specifications provided to the FCC. Table 4 outlines the simulation parameters employed for modeling the Starlink system.
Table 4. Simulation parameters of the Starlink system.

Table 5. Link adaptation parameters of NR.
|
2024-02-25T06:17:15.331Z
|
2024-02-01T00:00:00.000
|
{
"year": 2024,
"sha1": "2119b25cfe88f854e338e404a37537a56cf66976",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "f2ca38fc93f04394fdf1efe30c5a70bfa1b5a2f5",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
247981121
|
pes2o/s2orc
|
v3-fos-license
|
Molecular Mechanisms and Therapeutic Strategies for Levodopa-Induced Dyskinesia in Parkinson’s Disease: A Perspective Through Preclinical and Clinical Evidence
Parkinson’s disease (PD) is the second leading neurodegenerative disease that is characterized by severe locomotor abnormalities. Levodopa (L-DOPA) treatment has been considered a mainstay for the management of PD; however, its prolonged treatment is often associated with abnormal involuntary movements and results in L-DOPA-induced dyskinesia (LID). Although LID is encountered after chronic administration of L-DOPA, the appearance of dyskinesia after weeks or months of the L-DOPA treatment has complicated our understanding of its pathogenesis. Pathophysiology of LID is mainly associated with alteration of direct and indirect pathways of the cortico-basal ganglia-thalamic loop, which regulates normal fine motor movements. Hypersensitivity of dopamine receptors has been involved in the development of LID; moreover, these symptoms are worsened by concurrent non-dopaminergic innervations including glutamatergic, serotonergic, and peptidergic neurotransmission. The present study is focused on discussing the recent updates in molecular mechanisms and therapeutic approaches for the effective management of LID in PD patients.
INTRODUCTION
Parkinson's disease (PD) is a life-threatening progressive neurodegenerative disease, which is characterized by severe locomotor impairments including bradykinesia, tremor, and rigidity (Kalia and Lang, 2015;Mishra et al., 2021;Angelopoulou et al., 2021;Ahmad et al., 2022;Kaur et al., 2022). These symptoms are the manifestation of substantia nigral dopaminergic neuron loss (Kalia and Lang, 2015). Other abnormalities associated with PD are cognitive defects, psychiatric abnormalities, and the neurodegenerative implications of levodopa-induced dyskinesia (LID) (Schapira et al., 2017). Levodopa (L-DOPA) is highly effective and is the medication used for the mitigation of PD, but its prolonged use gives rise to motor abnormalities including dyskinesia. Dyskinesia may be mild at first but may develop into a debilitating symptom and may affect the quality of life of patients. More than 50% of patients with PD develop LID after 5 years of continuous treatment with L-DOPA, which worsens the quality of life in these patients (Fabbrini et al., 2007;Wu et al., 2019;Maan et al., 2020;Mandal et al., 2020).
Various types of movement disorders have been observed in LID, which majorly include dystonia, myoclonus, chorea, or their combination. Generally, dyskinesia appears in the neck, jaw, tongue, face, waist, shoulder, trunk, and legs. Three types of dyskinesia have been identified to date: off-period dystonia, peak dose dyskinesia, and diphasic dyskinesia (Fabbrini and Guerra, 2021). Off-time dyskinesia is usually encountered during the night or early morning. Thus, administration of L-DOPA formulations having a longer half-life at night can be an effective way to treat off-time dyskinesia (Vijayakumar and Jankovic, 2016). Diphasic dyskinesia is treated by increasing the dose of the dopaminergic drug administered; contrarily, treatment of peak dose dyskinesia is generally performed by reducing the dose of the dopaminergic drug administered. However, in some cases, reduction of L-DOPA treatment leads to worsening of motor side effects (Fox et al., 2018). The management of LID remains a matter of grave concern, as the diagnosis of the type of dyskinesia is a challenging issue. Furthermore, different types of dyskinesia can respond differently to a particular treatment.
LID is currently thought to be related to pre-/postsynaptic changes resulting in dopaminergic imbalance. Dyskinesia is associated with a series of events that include potent stimulation of dopamine (DA) receptors, low protein levels and genetic mutations, and abnormalities in non-DA transmission (Bezard and Brotchie, 2001). Therefore, animal models are designed to explore the mechanisms involved in LID and to identify new pathological targets for drugs. Current mouse and primate disease models are created by inducing PD using MPTP or 6-OHDA toxins and then inducing LID by administering a daily dosage of L-DOPA (Bezard and Brotchie, 2001;Brotchie et al., 2007). In this article, we discuss the pathophysiology of LID, trends in its treatment, and the relevant preclinical and clinical evidence.
BASIC PATHOPHYSIOLOGY OF L-DOPA-INDUCED DYSKINESIA
The mechanism of LID is not clearly understood. However, continuing research has found that progressive damage to nigral dopaminergic neurons disrupts the connections between the motor cortex and the striatum and produces a functional disturbance in the basal ganglia, which can lead to the generation of abnormal involuntary movements, that is, LID (Bezard and Brotchie, 2001). The quantity and period of drug exposure play a crucial role in the development of dyskinesia.
Along with that, the severity of LID also depends on the extent of neurodegeneration. PD patients and MPTP-treated primates having a higher degree of dopaminergic neuron degeneration show dyskinetic symptoms after the administration of L-DOPA. In general, the rise and fall of the plasma L-DOPA level is the main reason for dyskinesia and motor defects (Thanvi et al., 2007).
As the disease advances, the same dosage of L-DOPA that is generally required for alleviation of PD symptoms causes dyskinesia. The main cause of this altered response pattern is not clear; however, the literature suggests that disturbance between pre- and postsynaptic nigrostriatal DA transmission leads to motor complications. A schematic diagram describing the major signaling abnormalities associated with LID in PD patients is illustrated in Figure 1.
Improper Dopamine Storage, Neurotransmission, and Postsynaptic Alteration
Exogenously administered L-DOPA, a dopamine precursor, is converted to DA and stored in the vesicles of presynaptic dopaminergic neurons, which release it upon receiving stimuli. This conversion and storage also take place inside serotonergic nerve terminals. In fact, most of the exogenously administered L-DOPA is taken up by serotonergic neurons and converted to DA. The dopamine transporter (DAT) transports this converted DA into the synaptic vesicles, where it is stored and released in a physiologically regulated manner (Corsi et al., 2021). Mainly, D2 autoreceptors and DAT buffer the release of DA into the synaptic cleft (Navailles and De Deurwaerdère et al., 2012). However, serotonergic neurons lack these mechanisms; thus, the physiological fine-tuning of DA release is absent in them. With disease progression, the number of dopaminergic terminals required to modulate the production and controlled release of DA decreases. Over time, dopaminergic neurons continue to degenerate and serotonergic neurons become more relevant, and eventually most of the striatal DA is released, as a false transmitter, from serotonergic nerve terminals (Corsi et al., 2021). At an advanced stage of PD, a vast portion of dopaminergic neurons is lost, and the serotonergic terminals mainly provide the DA conversion and release, while being devoid of the molecular mechanisms for DA release management and feedback control. As a result, the DA concentration cannot be regulated, which leads to abnormal swings in extracellular DA concentration following oral administration of L-DOPA (Mosharov et al., 2015;Fabbrini and Guerra, 2021). Fluctuations in the dopamine release patterns from the presynaptic neuron terminals give rise to abnormal responses, which are then carried forward by postsynaptic neurons. These abnormal responses in turn give rise to fluctuations in motor coordination.
An important pathophysiological factor for LID is the short half-life of orally administered L-DOPA, which produces a non-physiological, pulsatile dopaminergic stimulus in the striatal region of the brain. This condition makes the intermediate level of DA dependent on the pharmacokinetics of exogenous L-DOPA. Thus, the improvement in PD symptoms becomes dependent on the administration of the L-DOPA dose. All these abnormalities ultimately give rise to progressive loss of the dopaminergic nerves (Bastide et al., 2015;Leta et al., 2019;Chen et al., 2020). Gradually, degeneration of presynaptic nigral neurons results in loss of DA, contributing to peak dose dyskinesia. After a few years of treatment, a new kind of dyskinesia starts to occur, which is known as wearing-off dyskinesia. In this condition, worsening of PD symptoms takes place when the next dose of L-DOPA is due, and it improves after taking the new dose (Bezard and Brotchie, 2001;Bézard et al., 2003).
L-DOPA-induced aberrant dyskinetic activity could be due to deleterious plastic neurological changes at the postsynaptic level. Among the several different ideas supporting the role of postsynaptic variables in LID, one theory proposes that L-DOPA causes a stimulatory priming effect in Parkinson's disease sufferers (Nadjar et al., 2009). Chronic exposure to medications that function as direct or indirect stimulants of cerebral DAT can cause sensitization to their cognitive stimulating characteristics, resulting in priming (behavioral sensitization). As a result, priming might occur both in drug use and in LID (Calabresi et al., 2008). Apparently, there are two prerequisites for the induction and maintenance of LID that are thought to be a result of priming: high L-DOPA dosages and extensive DA denervation. When given at an appropriate dose, L-DOPA generates dyskinesia with a constant spectrum of severity in dyskinetic rats with severe striatal DA depletion (Calabresi et al., 2010).
Chronic drug therapy appears to cause unfavorable plastic alterations, which mostly occur in the postsynaptic region. After systemic administration of L-DOPA, this syndrome causes an excessively large rise in extracellular DA (Abercrombie et al., 1990), which is coupled with defective monoamine oxidase-mediated DA breakdown, further increasing the extracellular striatal DA concentration (Wachtel and Abercrombie, 1994). These huge changes in extracellular DA concentrations could then provoke abnormal postsynaptic responses. The increasing degradation of nigrostriatal dopaminergic neurons, which produces a reduction in DA holding space, is responsible for the difference between the severity of different disease phases. As a result, changes in L-DOPA fractions in the brain are solely dependent on the drug dosing period (Metman et al., 2000). Moreover, relative to stable responses, PET scans revealed fast oscillations in brain DA levels in individuals with PD and LID. Even though the drug-dosing cycle plays an important part in the vulnerability toward dyskinesias, other peripheral (gastric emptying and ingestion) and central (changes in precursor drug buildup in serotonergic neurons) elements are likely to play a part in the illness (Calabresi et al., 2010).

FIGURE 1 | Molecular neurobiology of levodopa-induced dyskinesia. DA production generally regulates voluntary movements. In PD patients, initial dopaminergic neurodegeneration in the substantia nigra pars compacta is asymptomatic. Loss of striatal-nigral neurons sensitizes the D1 receptor on the medium spiny neurons of the direct pathway. This results in initial motor symptoms of PD (dyskinesia). The treatment with L-DOPA improves the initial motor symptoms and promotes BDNF release from corticostriatal neurons. The expressed BDNF potentiates the expression of D3Rs through the activation of TrkB receptors in nigrostriatal medium spiny neurons. Enhanced expression of D3Rs suppresses internalization and abnormal trafficking of membrane-bound D1Rs, thus intensifying D1R sensitization and associated dyskinesia. Activation of D1R by DA (released through L-DOPA) results in the activation of the D1R/Gαolf/adenylyl cyclase 5 (AC5) machinery in nigrostriatal medium spiny neurons and causes cAMP-mediated hyperexpression of protein kinase A (PKA) and DARPP-32. The abnormal PKA/DARPP-32 expression results in hyper-phosphorylation of GluR1 and promotes the excitability of medium spiny neurons, contributing to the loss of long-term depression and depotentiation and the development of LID. On the other hand, activation of D1R triggers the Ras/Raf/MEK/ERK signaling pathway, which further potentiates the Ras/Raf machinery and regulates various transcription and translation processes regulating LID. PKA/DARPP-32 and/or ERK/Elk/MSK1 signaling migrates to the nucleus, leads to phosphorylation of CREB/histone H3, and augments the expression of immediate early genes (prodynorphin and ΔFosB), which are reported to contribute to the development of LID. Activated ERK further elevates mTORC1 expression-mediated mRNA translation and worsens LID.
One further area that has lately been examined, in mice persistently treated with L-DOPA following a unilateral 6-hydroxydopamine-induced lesion, is the anatomical location for the manifestation of LID (Buck et al., 2010). Dyskinesias were induced by locally injecting L-DOPA into the lesioned striatum; on the other hand, regional L-DOPA administration of a similar dose in the ipsilateral globus pallidus and substantia nigra pars reticulata did not elicit dyskinesias, implying that the striatum is a particular target region for the activation of previously existing dyskinesias (Calabresi et al., 2010).
Altered Dopaminergic Pathways and Receptors
Three signaling pathways, PKA/DARPP-32, ERK, and mTORC1, have been identified that are significantly involved in the pathophysiology of LID. All these pathways are interrelated and are activated by a common intracellular cascade triggered in nigrostriatal SPNs expressing D1 receptors. Chronic L-DOPA administration activates all of them, which alters striatal synaptic plasticity. DARPP-32 signaling was observed to regulate the ERK and mTORC1 pathways (Torkaman-Boutorabi et al., 2012).
Recent studies have demonstrated that D1 receptor activation leads to Shp-2 activation, which leads to activation of ERK1/2 and mTOR phosphorylation. This, in turn, induces LID expression. This effect can easily be reversed by using a D1 receptor antagonist, which downregulates the end products of the D1R/Shp-2 pathway, p-mTOR and p-ERK1/2 (Calabrese et al., 2020). Rapamycin was observed to selectively affect the mTORC1 pathway, without affecting DARPP-32 signaling, the ERK pathway, or AMPA and NMDA expression.
After the onset of LID, the capacity of D1 receptors to couple to PKA and ERK1/2 increases. However, it was observed that after 2 weeks of L-DOPA treatment, D1 receptor-associated activation of PKA and ERK1/2 became blocked, owing to oversensitivity of dSPNs (Wu et al., 2018). D1R-dependent ERK1/2 phosphorylation was observed to be modulated by mGluR5 through the mGluR5/PLC/PKC pathway, without influencing PKA activity (Jones-Tabah et al., 2020). D1 receptor activation engages the D1R/cAMP/PKA pathway, which upregulates the expression of the Gαolf protein. Gαolf is a major stimulatory G-protein that, upon activation, increases cAMP production (Fieblinger et al., 2014). It was observed that chronic administration of L-DOPA increases the expression of Gαolf in striatonigral SPNs and decreases its expression in striatopallidal SPNs (Alcacer et al., 2012). Knockout of casein kinase 2 (CK2) was also found to be linked with reduced Gαolf protein expression in striatonigral SPNs, which reduced the severity of LID. However, knockdown of CK2 in striatopallidal SPNs leads to increased severity of LID, which can be reversed by co-administration of caffeine and L-DOPA (Goto, 2017).
Upregulated Nurr1 transcription factor was found to be associated with an increased cortically evoked firing rate and increased spine density of SPNs. Leucine-rich repeat kinase 2 (LRRK2) was observed to influence D1/D2 receptor-activated pathways. The interaction between them is not well understood, but clarifying it could help researchers to control LID more effectively (Sellnow et al., 2020). An anchoring protein, PSD-95, dictates the distribution of dopaminergic receptors in the brain. Its increased expression was noticed in the DA-depleted brain, which lowers the diffusion of D1R in the brain (Divito et al., 2015). The expression of parkin protein also becomes impaired, which leads to abnormal involuntary movements (Porras et al., 2012). Ca2+/calmodulin-dependent protein kinase IIα (CaMKIIα) binds to the same domain where the Gαi domain is located on the D2 receptor, indicating that CaMKIIα modulates D2 receptor activity via the adenylyl cyclase signaling pathway. In LID, increased interaction between CaMKIIα and D2 receptors was observed, and a reduction in CaMKIIα concentration reduces LID, much like a D2 receptor agonist.
An increased expression of D3 receptors was reported in animals with severe LID, together with an associated increase in GABA release. D3 receptors can also modulate D1 receptor function by affecting the ERK pathway. Upon deletion of the D3 receptor, FosB levels and ERK and H3 activities decreased along with alleviation of LID symptoms, with no effect on the response to L-DOPA treatment (Albarrán-Bravo et al., 2019). A lack of D5 receptors is also associated with increased LID severity and a decreased response to L-DOPA (Payer et al., 2016).
Glutamatergic Hypothesis
In the past two decades, numerous non-dopaminergic systems, in addition to the dopaminergic system, have been implicated in the pathophysiology of LID. First, significantly increased glutamatergic neurotransmission was observed in the basal ganglia and the thalamo-cortical circuit. Among glutamate receptors, ionotropic and metabotropic glutamate receptors are mainly responsible for the excitatory effect. Excessive amounts of AMPA and NMDA receptors were observed in the striatum of patients and animals suffering from LID (Sgambato-Faure and Cenci, 2012; Guerra et al., 2019). Moreover, a histological study has shown that NMDA-related glutamate is elevated in the striatum of dyskinetic patients but not in non-dyskinetic patients, and abnormal glutamate release has been linked to the development of LID in PD patients (Ahmed et al., 2011). Confirmation of the link between elevated glutamate levels and LID also comes from pharmacological studies using NMDA receptor antagonist drugs, which have been shown to decrease dyskinesia in animals and humans (Jongsma et al., 2001; Morin and Di Paolo, 2014). Metabotropic glutamate receptors (mGluRs) are also involved in the pathophysiology of LID. mGluRs modulate glutamate intracellular signaling without affecting the fast excitatory action of glutamate on synaptic neurotransmission. mGluRs are G-protein-coupled receptors (GPCRs) normally divided into three groups, which differ in ligand binding profile and sequence homology (Sebastianutto and Cenci, 2018). Group 1 receptors (mGluR1, mGluR5) interact with phospholipase C beta and control intracellular calcium release through inositol triphosphate formation along with protein kinase C activation. Both group 2 (mGluR2, mGluR3) and group 3 receptors (mGluR4, mGluR6, mGluR7, and mGluR8) interact with the inhibitory G-protein and inhibit the formation of cAMP (Rondard and Pin, 2015; Sebastianutto and Cenci, 2018). Striatal PKA-related glutamatergic signaling and glutamatergic (AMPA) receptors play an important role in the expression and occurrence of LIDs. AMPA antagonists act directly on the glutamate receptors situated in M1 and have proven effective in alleviating LID. NMDA antagonists do not alleviate LID in rats when used as monotherapy, but when co-administered with AMPA antagonists they show LID-alleviating properties, and this combination also potentiates the AMPA antagonists (Lindenbach et al., 2016).
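As a quick reference for the receptor classification described above, the short Python sketch below organizes the three mGluR groups and their G-protein coupling into a lookup table. The grouping follows the classification cited in the text (Rondard and Pin, 2015; Sebastianutto and Cenci, 2018); the table is illustrative and not an exhaustive list of downstream effectors.

# Reference sketch only: the three mGluR groups described in the text, as a lookup table.
MGLUR_GROUPS = {
    "group 1": {
        "members": ["mGluR1", "mGluR5"],
        "coupling": "Gq -> phospholipase C-beta",
        "effect": "IP3-mediated Ca2+ release and PKC activation (net excitatory)",
    },
    "group 2": {
        "members": ["mGluR2", "mGluR3"],
        "coupling": "Gi/o",
        "effect": "inhibition of adenylyl cyclase, reduced cAMP",
    },
    "group 3": {
        "members": ["mGluR4", "mGluR6", "mGluR7", "mGluR8"],
        "coupling": "Gi/o",
        "effect": "inhibition of adenylyl cyclase, reduced cAMP",
    },
}

def group_of(receptor: str) -> str:
    """Return the mGluR group to which a given receptor subtype belongs."""
    for group, info in MGLUR_GROUPS.items():
        if receptor in info["members"]:
            return group
    raise KeyError(f"unknown receptor: {receptor}")

print(group_of("mGluR5"))  # -> "group 1", the subtype later discussed as the mavoglurant/dipraglurant target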
Serotonergic Hypothesis
Serotonergic neurons take up L-DOPA, convert it into DA, store it, and then release it; this is referred to as the presynaptic serotonergic mechanism of LID (Rylander et al., 2010; Sahin et al., 2014). As discussed earlier, serotonergic neurons play a critical role in this unregulated release of DA, and consistent with this, pharmacological lesioning of serotonergic raphe projections has been shown to block LID development in dyskinetic rats. In the advanced stages of PD, false dopaminergic neurotransmission is a serious pathological contributor to the manifestation of LID symptoms. This was further established in hemiparkinsonian rats, in which removal of serotonergic innervation in the striatal region resulted in near-complete suppression of the LID-like condition, and the same effect was observed in non-dyskinetic rats upon removal of serotonergic afferents. Furthermore, an association of serotonergic innervation and serotonergic autoreceptors with the pathophysiology of LID was confirmed in other animal models (Muñoz et al., 2008; Lindgren et al., 2010; Beaudoin Gobert et al., 2015). Conversely, worsening of LID symptoms was observed after increasing serotonergic innervation (through a trophic effect on serotonergic axons) in the striatum of parkinsonian rats, indicating that the striatum is the most important site of serotonergic involvement (Tronci et al., 2017). Many reports support the idea that irregular serotonergic activity is involved in the pathophysiology of LID, and the involvement of the serotonergic system is supported by data showing significant striatal serotonergic hyper-innervation in parkinsonian animals (Gagnon et al., 2016). However, the serotonergic contribution is not restricted to the striatal region; it extends to all areas receiving serotonergic innervation (Carta et al., 2018). Long-term L-DOPA administration was noticed to decrease basal serotonin release and metabolism, with an associated reduction of serotonin tissue content in the brain (Navailles et al., 2011). Thus, prolonged L-DOPA-induced reduction in 5-HT synaptic functionality may be implicated not only in the development of LIDs but also in the onset of non-motor consequences of L-DOPA medication, such as anxiety or depression, in which the serotonergic system plays a role (Navailles et al., 2011).
Radiological analysis has revealed an upregulation of serotonin transporter (SERT) expression in LID-affected patients (Sahin et al., 2014). Chronic L-DOPA treatment causes brain-derived neurotrophic factor (BDNF) overexpression, which is associated with axonal growth of serotonin neurons and further activates the cAMP/PKA and mTORC pathways (Gagnon et al., 2016). In the dopamine-depleted brains of PD-affected monkeys, the striata and other areas of dopaminergic denervation were found to be denser in SERT-positive axons (Inden et al., 2012). Various selective serotonin reuptake inhibitors (SSRIs), such as citalopram, paroxetine, and fluoxetine, decrease LID when co-administered with L-DOPA, whereas no such effect was observed with the DA agonist apomorphine. Thus, blockade of serotonin uptake is the presumed mechanism of the anti-dyskinetic action (Bishop et al., 2012; Miguelez et al., 2016).
5-HT1A and 5-HT1B receptors are also involved in the course of LID. They exist as autoreceptors on the axons and cell bodies of serotonergic neurons and as heteroreceptors on striatal as well as cortical neurons. As heteroreceptors, they control the release of GABA and glutamate in the cortical and striatal regions. As discussed earlier, glutamate release has serious implications in LID; activation of 5-HT1A heteroreceptors dampens corticostriatal glutamate release, which helps to limit LID symptoms (Carta et al., 2018). The role of the serotonergic system in LID has also been validated in animal models treated with 5-HT1A and 5-HT1B receptor agonists, which attenuated the dyskinetic symptoms produced by L-DOPA administration. For example, vilazodone, a serotonin reuptake inhibitor and partial 5-HT1A agonist, decreases the development of LID without interrupting the motor effects of L-DOPA in 6-hydroxydopamine-lesioned hemiparkinsonian rats (Meadows et al., 2018).
APPROACHES FOR MANAGEMENT OF LEVODOPA-INDUCED DYSKINESIA: PRECLINICAL STATUS
The management of LID is often complicated as its pathogenesis is poorly understood. Based on the preclinical evidence, surgical and pharmacological approaches for the management of LID have been elaborated.
Pharmacological Approaches
For many years, L-DOPA, a prodrug, has been very effective in the treatment of parkinsonian motor symptoms. However, chronic use of L-DOPA leads to several complications in PD patients, including response fluctuations and dyskinesia. As a result, managing how L-DOPA is used becomes a very important aspect of treatment. Several approaches inform the optimal use of L-DOPA for the treatment of PD. Pharmacological approaches include two types of therapy, that is, dopaminergic therapy and non-dopaminergic therapy (Hauser et al., 2006; Encarnacion and Hauser, 2008). Recent updates on therapeutic interventions for LID (Santini et al., 2007; Kong et al., 2015; Wang et al., 2015; Calabrese et al., 2020; Beck et al., 2021) have been summarized in Supplementary Table S1 and illustrated in Figure 2.
Improving Dopaminergic Therapy
LID is a motor complication arising from pulsatile DA stimulation in the brain. Theoretically, maintaining a steady state of DA stimulation should avoid precipitating dyskinesia. Thus, researchers have focused on developing therapies that provide continuous dopaminergic stimulation (CDS) to the brain. In this way, the antiparkinsonian effects remain unchanged but are achieved without encountering dyskinesias.
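To make the CDS rationale concrete, the following sketch is a deliberately simplified one-compartment pharmacokinetic toy model (plain Python) comparing the same total L-DOPA amount given as a few large boluses versus frequent small fractions. The half-life, dose sizes, schedules, and the 2-10 h analysis window are illustrative assumptions chosen for this example, not clinical parameters; the point is only that fractionated delivery flattens the peak-to-trough swing thought to underlie pulsatile receptor stimulation.

# Illustrative sketch only: one-compartment kinetics with first-order elimination.
import math

def plasma_profile(doses, half_life_h=1.5, duration_h=16.0, step_h=0.1):
    """Superpose exponential decay curves for a list of (time_h, dose) events."""
    ke = math.log(2) / half_life_h  # elimination rate constant (1/h)
    times, conc = [], []
    t = 0.0
    while t <= duration_h:
        c = sum(d * math.exp(-ke * (t - t0)) for t0, d in doses if t >= t0)
        times.append(t)
        conc.append(c)
        t += step_h
    return times, conc

# Same total amount (600 arbitrary units), two hypothetical schedules.
pulsatile = [(0, 200), (5, 200), (10, 200)]   # three large doses
fractioned = [(h, 50) for h in range(12)]     # twelve small hourly doses

for label, schedule in [("pulsatile", pulsatile), ("fractioned", fractioned)]:
    times, conc = plasma_profile(schedule)
    window = [c for t, c in zip(times, conc) if 2.0 <= t <= 10.0]
    print(f"{label:>10}: peak={max(window):6.1f}  trough={min(window):6.1f}  "
          f"peak-to-trough ratio={max(window) / min(window):.1f}")

Under these assumed numbers the fractionated schedule yields a peak-to-trough ratio several-fold smaller than the pulsatile one, which is the qualitative behavior CDS-oriented therapies aim to reproduce.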
Steady administration of L-DOPA reduces the risk of LID (Bibbiani et al., 2005). Many experimental studies showed that Parkinson's patients are more prone to dyskinesia with L-DOPA than with DA agonist monotherapy; the reason may be that DA agonists have more affinity for D2 receptors than D1 receptors (Sharma et al., 2015). Therefore, to manage dyskinesia in PD patients, the first step is to lessen the dose of L-DOPA during dopaminergic therapy, although in some cases this will worsen parkinsonism. Alternatively, repeated small doses of L-DOPA can be given (Mazzucchi et al., 2015). During the management of dyskinesia, the focus is always on preventing the progression of dyskinesia and reducing its severity. Instead of immediate-release preparations, controlled-release formulations of L-DOPA are helpful in delaying dyskinesia (Lee, 2001). Along with L-DOPA, dopaminomimetics with a longer half-life are administered to decrease pulsatile stimulation of DA receptors. In selected cases, COMT inhibitors are included; another adjuvant recently given with L-DOPA is amantadine.
FIGURE 2 | Targeting signaling upstream/downstream of Ras/Raf/MEK/ERK pathway modulation in LID. In experimental models of PD, L-DOPA administration causes abnormal activation of the Ras/Raf/ERK pathway and results in the emergence of dyskinetic behavior. Activation of D1R, TrkB, or NMDA receptors activates the Ras molecular switch through Ras-guanyl nucleotide-releasing factor 1 (Ras-GRF1), which facilitates the conversion of Ras-GDP to Ras-GTP. Ras-GTP then activates Raf protein kinase, which in turn leads to the phosphorylation of mitogen-activated protein kinase/ERK kinase (MEK) and ERK. Activation of ERK enhances expression of the transcription factor ΔFosB and ERK-dependent activation of mTORC1, which results in the inhibition of long-term depression in striatal neurons and the development of abnormal involuntary movements in PD patients. Attenuation of Ras-GRF1 in a knock-out mouse model results in the attenuation of LID. A similar reduction in LID resulted from inhibition of Ras (by simvastatin) and MEK (by SL327) expression. On the other hand, inhibition of ΔFosB by short hairpin RNA and blockade of mTORC1 by rapamycin have been shown to mitigate LID. Various NMDA receptor antagonists have also been reported to reduce Ras/Raf pathway activity, supporting their efficacy in LID.
Replacement With Other Dopaminergic Agents
As previously mentioned, prolonged-release formulations of DA agonists can be used in dyskinesia management because of their selectivity toward D2 receptors, whereas D1 receptors are mainly responsible for emerging dyskinesia. Cabergoline, pramipexole, and ropinirole decrease the development of dyskinesia compared with L-DOPA monotherapy when used as initial therapy in PD patients (Binde et al., 2020). DA agonists promote more continuous stimulation and restrict the postsynaptic changes that play the main role in L-DOPA-induced dyskinesia (Jenner, 2004). Pergolide, another DA receptor agonist, delays motor complications more than L-DOPA therapy (Bychkov et al., 2007). Bromocriptine monotherapy also lessens motor complications (Hussain et al., 2018).
Pramipexole, a D3-preferring agonist, counterbalances the D1 DA receptor supersensitivity associated with LID. Early use of pramipexole not only prevents dyskinesia but also helps to treat motor fluctuations. Ropinirole is another DA receptor agonist; studies have claimed that ropinirole significantly lowers dyskinesia compared with L-DOPA, and prolonged-release ropinirole can delay dyskinesia more effectively than an increased dose of L-DOPA. Rotigotine, a DA agonist with D1 receptor activity, can also be used as a replacement for L-DOPA to prevent dyskinesia, but more investigation is required (Hutny et al., 2021).
Adjuvant Therapy With L-DOPA
Catechol-O-methyltransferase (COMT) inhibitors increase the availability of L-DOPA in the brain. The progression of dyskinesia can be limited in PD patients by maintaining stable plasma concentrations of the drug. Entacapone and tolcapone are drugs that help to reduce fluctuations in L-DOPA concentration; the main disadvantage is that tolcapone can cause liver toxicity, which has motivated the development of safer drugs with enhanced activity (Deane et al., 2004). Opicapone, a third-generation COMT inhibitor, is a safer drug with enhanced activity. It increases the bioavailability of L-DOPA, as shown in many experimental studies, but dyskinesia, insomnia, and dizziness remain among its disadvantages (Hutny et al., 2021).
MAO-B inhibitors like safinamide, which also acts on the glutamatergic system, help to improve the bioavailability of L-DOPA and increase its availability in the synaptic clefts of spiny neurons (Grégoire et al., 2013). In another study, safinamide modulated glutamate release in the striatal region and showed a significant reduction of motor complications without inducing troublesome dyskinesia, supporting its clinical use as an add-on therapy in the management of LID (Gardoni et al., 2018). Safinamide also increases the bioavailability of L-DOPA when used as an add-on therapy, which makes this combination more effective in reducing motor complications (Schaeffer et al., 2014).
Some modifications of therapy are also used. The first relates to dose: the dose should be selected to give the most benefit. One experimental study showed that giving more than 600 mg of L-DOPA in the initial stages of treatment may increase the chances of dyskinesia (Lee, 2001). In such modifications, the dose of DA receptor agonists may be increased or the daily dose of L-DOPA decreased. The treatment of diphasic dyskinesia is more complex; it involves the addition of DA agonists or of drugs that prolong the action of L-DOPA, that is, COMT inhibitors or monoamine oxidase B inhibitors. The D3 receptor antagonist IRL790 can also be given to patients to maintain psychomotor stability (Schaeffer et al., 2014; Hutny et al., 2021).
Formulation-Based Approaches for Improved L-DOPA Therapy
A number of new formulations have been developed, including through drug repurposing, to manage dyskinesia induced by L-DOPA. The ideal agent should reduce L-DOPA-induced dyskinesia without affecting antiparkinsonian activity. Researchers have tried to elucidate the mechanisms involved in the occurrence and expression of LID symptoms. The pharmacokinetics of L-DOPA play an important role in the treatment of LID, as constant plasma levels can be achieved by improving drug delivery. XP21279 is an L-DOPA prodrug that is converted into L-DOPA by carboxylesterases; it produces a stable plasma concentration and can be used to diminish LID (Savola et al., 2003). CVT-301, an inhaled L-DOPA formulation used in PD patients, has also proven its ability to improve motor complications (Stocchi et al., 2018).
L-DOPA/benserazide poly(lactic-co-glycolic acid) microspheres (LBM), a novel therapeutic approach, have been shown to be effective as an antiparkinsonian therapy without inducing dyskinesias, in a dose-dependent manner. LBM did not activate the D1R/Shp-2/ERK1/2 pathway, which is activated in the 6-hydroxydopamine rat model of LID (Fiorentini et al., 2013; Wan et al., 2017). These results were consistent with previous findings that LBM reduces the frequency of LID. In various animal models, it was found that activation of PKA/DARPP-32 signaling increases tau protein phosphorylation, which leads to subsequent activation of SPNs and results in LID; dose-dependent inhibition of PKA/DARPP-32 signaling, tau protein phosphorylation, and SPN activation was also observed in LBM-treated animals. Thus, in LBM-treated animals, dyskinesia was less frequent. Chitosan-coated nanoliposomes (CCN) are another advanced way to improve DA delivery to the brain. In parkinsonian rats treated with CCN, the induction of LID was lower than in L-DOPA-treated animals, and CCN reduced the expression of ERK1/2, DARPP-32, and FosB/ΔFosB, which are upregulated in LID (Cao et al., 2016).
Non-Dopaminergic Approaches
Non-dopaminergic activity influences symptomatic development, neurodegenerative processes and, more importantly, the development of adverse effects of available therapies, including LID. Defects have been identified in several non-dopaminergic components of the basal ganglia circuitry. During the past few years, researchers have begun to identify improper signaling via glutamatergic, adrenergic, serotonergic (5-HT), cannabinoid, and opioid pathways in both the induction and expression of LID at the cellular level (Brotchie, 2005).
Adenosine A2A Receptor Antagonists
Adenosine receptors play a crucial role in the development of LID. Adenosine interacts closely with DA and regulates the excessive glutamate neurotransmission seen in PD as well as LID (Morin et al., 2016). Adenosine A2A receptor antagonists were observed to reduce excessive striatopallidal and subthalamic neuronal activity (Blandini and Armentero, 2012). In one study, MPTP-treated monkeys with dyskinesia showed upregulated expression of the adenosine A2A receptor compared with non-dyskinetic MPTP-treated monkeys (Morin et al., 2016). Hence, adenosine receptors have gained increased attention from researchers.
ST1535, an adenosine A2A receptor antagonist, was found to be effective as an antiparkinsonian agent and significantly reduces the required L-DOPA dose; using it as an add-on therapy along with L-DOPA may therefore significantly delay the development of LID (Rose et al., 2006). Caffeine also shows adenosine A2 receptor antagonism and was able to alleviate LID symptoms in 6-OHDA-treated mice, with the mechanism of action attributed to blockade of adenosine A1 or A2A receptors (Xiao et al., 2011). Istradefylline, another adenosine A2A receptor antagonist, was observed to be effective in improving motor complications associated with PD without provoking dyskinesia; it therefore has the potential to be used as a monotherapy in PD, which would eliminate the occurrence of LID, although more research is required to explore this potential (Uchida et al., 2015). In contrast, SCH 412348, an adenosine A2A receptor antagonist, did not alleviate LID when co-administered with L-DOPA in 6-OHDA-treated rats (Jones et al., 2013). Thus, combination therapy of L-DOPA and adenosine A2A receptor antagonists to treat LID may have limited potential, although more research is needed to confirm this.
Adrenergic Receptor Antagonists
The adrenergic system also plays a crucial role in the development of PD and LID. In one study, the noradrenergic system was shown to modulate LID severity; the same study also demonstrated that, in 6-OHDA-treated hemiparkinsonian rats, alpha- and beta-adrenergic receptor blockers can attenuate the development of LID (Barnum et al., 2012).
The alpha-2 adrenergic receptor blockers idazoxan, rauwolscine, and yohimbine were also observed to reduce LID in the MPTP-lesioned primate PD model. Idazoxan as monotherapy has no beneficial effect on PD or LID, but when used in combination with L-DOPA it not only alleviated LID but also increased the tmax more than twofold (Paredes-Rodriguez et al., 2020). Propranolol, a potent beta-adrenergic receptor blocker, was also observed to reduce LID in hemiparkinsonian rats; the LID-alleviating effects of propranolol were found to be mediated via mitigation of L-DOPA-induced extraphysiological efflux of DA (Bhide et al., 2015). Fipamezole (JP-1730), another potent alpha-2 adrenergic blocker, was observed to reduce LID in MPTP-lesioned primate PD models, and the combination of fipamezole and L-DOPA increases the duration of action of L-DOPA by 66% compared with L-DOPA alone (Savola et al., 2003). Thus, adrenergic receptor antagonists have great potential as adjuncts to classical L-DOPA therapy; they may help to prolong the duration of therapy and provide more continuous dopaminergic stimulation.
Glutamatergic Antagonists
There are two types of glutamatergic receptors: ionotropic glutamatergic receptors (iGluRs), which mediate fast excitatory neurotransmission, and metabotropic glutamatergic receptors (mGluRs), which mediate slow excitatory neurotransmission. iGluRs are classified into NMDA, AMPA, and kainate (KA) receptors (Morin and Di Paolo, 2014; Fabbrini and Guerra, 2021). Higher striatal expression of NMDA and AMPA receptors was previously reported in MPTP-treated dyskinetic monkeys (Morin and Di Paolo, 2014), and NMDA and AMPA receptor blockade was observed to attenuate the development of LID in 6-OHDA rats (Fabbrini and Guerra, 2021).
Mavoglurant is an antagonist of the glutamate mGluR5 receptor. Ionotropic and metabotropic glutamate receptor antagonists are being examined for their possible role in the alleviation of LID, since overactivity of glutamatergic pathways has been linked to the development of LID in animal models (Jimenez-Urbieta et al., 2015; Trenkwalder et al., 2016a; Negida et al., 2021). The striatum is rich in metabotropic mGlu5 receptors, which are functionally linked to NMDA receptors. mGlu5 receptor antagonists appeared beneficial in correcting aberrant movements in 6-hydroxydopamine (6-OHDA)-lesioned macaques in preclinical studies. Human trials of mavoglurant, a metabotropic mGlu5 receptor antagonist, have had mixed results, and the medication has been linked to several potentially troublesome side effects such as hallucinations and dizziness. In several animal models of PD, a positive allosteric modulator of mGluR4 (mGluR4 PAM; VU0364770) elicits significant antiparkinsonian effects, but it fails to affect the development and evolution of LID in unilaterally 6-OHDA-lesioned rats (Iderberg et al., 2015; Tison et al., 2016).
Traxoprodil is an NR2B subunit-specific NMDA receptor antagonist. Administration of traxoprodil produced a 30% reduction in LID severity, but there was no reduction in motor abnormalities or in certain cognitive difficulties associated with therapy (Kong et al., 2015). Memantine functions as a non-competitive NMDA receptor antagonist. In an experiment with 6-OHDA-lesioned rats, both memantine and amantadine greatly decreased LID, but the alleviating effect of memantine faded within a few days, indicating rapid tolerance to the anti-dyskinetic action of this medicine (Jankovic and Clarence-Smith, 2011). In contrast, riluzole, an NMDA receptor antagonist, did not elicit any anti-dyskinetic effect (Bara-Jimenez et al., 2006). CI-1041, another NMDA receptor antagonist, was observed to inhibit the development of LID in parkinsonian monkeys, with the observed mechanism being downregulation of the striatal mGlu5 receptor (Hadj et al., 2004).
Serotonergic Agonists
Chronic L-DOPA therapy and decreased DA levels in the brain lead to alterations in the brain's serotonergic system, and modification of the serotonergic system has been observed to exert an inhibitory effect on LID. However, careful administration is required, as an overdose of serotonergic agonists can lead to 5-HT syndrome, which presents with PD-like symptoms (Hernandez et al., 2019). Buspirone, a partial 5-HT1A receptor agonist, was found to prevent the development of LID in rats without altering STN electrical activity; the observed mechanism was regulation of GABA and glutamate release, along with burst activity, in the substantia nigra pars reticulata (SNr) (Vegas-Suárez et al., 2020).
8-Hydroxy-2-dipropylaminotetralin (8-OH-DPAT), a 5-HT1A receptor agonist, was reported to reduce LID development via downregulation of D1 and D2 agonism-induced cortical gamma overactivity; however, the associated frequent development of 5-HT syndrome limits its further application (Nahimi et al., 2012; Dupre et al., 2016). SSRIs were also evaluated for this purpose: citalopram, an SSRI, was observed to attenuate LID development without inducing 5-HT syndrome. Another 5-HT1A receptor agonist, BMY-14802, was also found to reduce LID in a dose-dependent manner; it reduces not only L-DOPA-induced dyskinesias but also DA agonist-induced dyskinesias. The chances of developing 5-HT syndrome after treatment were also very low, as its activity can be easily reversed by a 5-HT1A receptor antagonist (Bhide et al., 2013). 5-HT3 receptor antagonism is another possible checkpoint in the treatment of LID. Ondansetron, a well-known 5-HT3 receptor antagonist, was examined for this purpose: after 23 days of continuous co-administration with L-DOPA, a significant reduction in LID development was noticed in the ondansetron plus L-DOPA group compared with the L-DOPA-only group. These results suggest that 5-HT3 receptor antagonism has potential in controlling the development of LID, but more research is needed to explore its full potential (Aboulghasemi et al., 2018).
The serotonin precursor 5-hydroxytryptophan (5-HTP) has also been found to have anti-dyskinetic properties. The basic mechanism of its activity is the upregulation of cytoplasmic serotonin, which competes with DA in serotonergic neurons and decreases DA release from nerve terminals (Maffei, 2020). 5-HTP was reported to alleviate LID in 6-OHDA-treated PD mice; when co-administered with L-DOPA, it neither loses its LID-alleviating properties nor decreases the antiparkinsonian effect of L-DOPA (Tronci et al., 2013). Thus, it could serve as a potent adjunct to L-DOPA therapy that minimizes the development of LID.
In rat models of LID, levetiracetam has been found to decrease abnormal involuntary movements in a dose-dependent manner. While the mechanism of this improvement is unknown, it is thought to occur as a result of effects at many points in the LID cascade, including changes in the expression of specific transcription factors and phosphorylated kinases in the striatum (Du et al., 2015). In an MPTP non-human primate model of PD, levetiracetam was observed to enhance the effects of amantadine (Hill et al., 2004). Levetiracetam has also been reported to be effective in treating LID in several open-label clinical studies (Tousi and Subramanian, 2005).
Opioid Drugs
The idea that increased signaling by opioid peptides may promote the development of LID has piqued interest for more than a decade, because opioids affect neurotransmission in the basal ganglia and significant alterations in opioid signaling are commonly encountered in PD patients (Johansson et al., 2001). Thus, drugs modulating this system could prove an effective alternative to standard anti-dyskinetic therapy.
Combination treatment with L-DOPA and nalbuphine (a κ subtype agonist and µ subtype antagonist) was found to downregulate the expression of various LID markers, such as ΔFosB, prodynorphin, dynorphin A, Cdk5, and Thr34 phosphorylation of DARPP-32, to normalized levels. No adverse events were encountered during this study, and the sedative effect of nalbuphine was not induced during co-treatment with L-DOPA (Potts et al., 2015). To assess the potential of µ receptor antagonist monotherapy in alleviating LID, another study was performed in the 6-OHDA PD rat model; it found that selective µ receptor antagonism was not able to improve dyskinesias in rats (Bartlett et al., 2020b). Thus, the results obtained with the L-DOPA and nalbuphine combination may reflect the therapy's dualistic nature. However, µ receptor agonist monotherapy was able to reduce the incidence of LID in the MPTP parkinsonian rat model (Hutny et al., 2021).
Mu-delta opioid receptor agonism can also be beneficial in the treatment of LID. Lactomorphin, a mu-delta opioid receptor agonist, was found to exert antiparkinsonian activity along with a reduced incidence of dyskinesia in 6-OHDA-treated male SD rats, significantly reducing the AIM score compared with untreated animals (Flores et al., 2018). Cyprodime, ADC-02520849, and ADC-02265510, three opioid receptor ligands, were found to alleviate LID symptoms and reduce LID severity scores in MPTP-treated primate models of PD (Bezard et al., 2020). Thus, opioid drugs represent a potentially effective pharmacotherapeutic way to reduce the dyskinesias associated with L-DOPA treatment, although much research is still needed to evaluate these possibilities.
Nitric Oxide Modulators
Loss of dopaminergic neurons in the substantia nigra is the main characteristic feature of PD. However, along with DA depletion, nitric oxide (NO) neurotransmission has also been observed to be impaired in PD. DA, by acting directly on DA receptors situated on striatal nNOS interneurons, facilitates NO production. In PD, there is a significant decrease in the expression of nNOS-containing neurons and nNOS mRNAs. NO is known to play a crucial role in the control of motor functions (Pierucci et al., 2011; Lorenc-Koci et al., 2017). Thus, NO hypofunction may play a crucial role in the development of PD and LID.
In one study, it was observed that when the NO donor molsidomine was co-administered with L-DOPA, DA expression increased, which in turn decreased the incidence of LID (Lorenc-Koci et al., 2013). The anti-dyskinetic property of molsidomine was also demonstrated in genetically susceptible PD mice (Solís et al., 2015). Upregulation of NO-soluble guanylyl cyclase-cGMP signaling also plays a vital role in LID; methylene blue, a potent inhibitor of this pathway, can alleviate LID when co-administered with L-DOPA or when injected into the lateral ventricle of the brain as monotherapy (Pierucci et al., 2011; Bariotto-Dos-Santos et al., 2019). Thus, NO modulators have the potential to be included in the management of LID, but clinical evidence is still missing in the literature and more research is required to reveal their true potential.
Surgical Approaches
Neurosurgical approaches have proven to be among the most effective means of providing sustained relief from LID in PD patients, particularly approaches targeting the basal ganglia and cerebral cortex.
Deep Brain Stimulation
DBS is a highly effective and the most preferred procedure for LID patients with advanced PD, drug refractoriness, and motor complications associated with L-DOPA therapy (Fasano et al., 2012; Munhoz et al., 2014). Various randomized controlled trials (RCTs) have shown that DBS reduces PD symptoms and dyskinesias and reduces the need for dopaminergic stimulation in LID (Kopell et al., 2006; Nazzaro et al., 2013). There are two widely pursued targets in the human brain for DBS: the subthalamic nucleus (STN) and the globus pallidus internus (GPi). Many RCTs have shown STN DBS and GPi DBS to be more successful than medical treatment in alleviating LID symptoms (Deuschl et al., 2006; Weaver et al., 2009). A few studies have demonstrated that STN DBS is more effective at reducing the required medication dose in LID, and these studies have also confirmed that STN DBS does not lead to worse neuropsychiatric or cognitive outcomes than GPi DBS (Odekerken et al., 2016; Xie et al., 2016; Elgebaly et al., 2018). There is some conflicting evidence as to whether the beneficial mechanism of DBS in LID is due to direct stimulation, reduction of the L-DOPA dosage, or a blend of both; some studies suggest that the effects are target-dependent. For example, with GPi DBS, dyskinesia scores (UPDRS IV) improve without a reduction of the required L-DOPA dose (Volkmann et al., 2004; Follett et al., 2010; Moro et al., 2010; Odekerken et al., 2016; Bonenfant et al., 2017), whereas with STN DBS the mechanism appears to be mixed (Castrioto et al., 2011; Zibetti et al., 2011; Munhoz et al., 2014; Krishnan et al., 2016).
Classical DBS is mainly based on frame-based or frameless stereotactic methods with microelectrode recording (MER) for electrode placement confirmation. Portable imaging techniques (O-arm) have also been used in some studies, with or without MER, to improve the accuracy and safety of DBS target placement (Burchiel et al., 2013; Frizon et al., 2018). Another technique, frameless iMRI-guided DBS, has been shown to achieve significantly greater accuracy than classical DBS. It uses interventional magnetic resonance imaging (iMRI), with electrode placement guided by a skull-mounted aiming device, and is performed in anesthetized patients with the assistance of a real-time MRI scanner. Real-time MRI can account for brain shift after the skull is opened, in contrast to classical DBS, where stereotactic placement based on preoperative imaging is the only option (Nakajima et al., 2011). A second-generation iMRI-guided DBS system, with improved operator control and fully integrated software, has also been reported to provide even greater lead placement accuracy and favorable patient outcomes (Ostrem et al., 2016).
Recently, frame-based stereotactic DBS has also been used together with iMRI (Matias et al., 2018). The iMRI-guided DBS has certain advantages over classical DBS, such as improved electrode placement accuracy, a streamlined procedure, improved patient compliance (as it is performed under general anesthesia), faster lead placement times, and incorporation of MER for target confirmation (Ostrem et al., 2013). However, iMRI-guided DBS is a very costly procedure, with the added logistical challenges of maintaining a dedicated intraoperative MRI scanner or scheduling procedure time on a diagnostic scanner; the cost-to-benefit ratio for patients undergoing this procedure must therefore be evaluated carefully (Azmi et al., 2016).
Classical DBS requires calibration for every patient, as incorrect stimulation parameters can themselves lead to dyskinesia. To address this drawback, adaptive or closed-loop DBS has been developed. In adaptive DBS, a sensing electrode and a stimulating electrode are placed on the motor cortex and in the basal ganglia region, respectively; they allow monitoring of brain activity patterns associated with dyskinesia. Upon detection of these patterns, a feedback loop is activated, which decreases the stimulation delivered to the target region (Swann et al., 2018). An adaptive DBS study was successfully performed using a novel narrowband gamma oscillation in the motor cortex as a biomarker of dyskinesia to modulate the stimulation delivered to the target (Swann et al., 2018). This approach also consumed 38-45% less energy than classical DBS while maintaining the same therapeutic efficacy (Dastin-van Rijn et al., 2021). Recently, more responsive adaptive DBS systems have been developed using accelerometer-based technology, which detects dyskinesia through peripheral kinematic sensors (Rojas Cabrera et al., 2020).
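To illustrate the closed-loop principle described above, the minimal Python sketch below implements a biomarker-gated control policy: when a cortical gamma-band power estimate (the assumed dyskinesia biomarker) exceeds a threshold, the stimulation amplitude is ramped down toward a lower target, and it is ramped back up once the biomarker falls. The threshold, amplitudes, and ramp step are hypothetical placeholders for illustration only and are not parameters of any actual device or published protocol.

# Conceptual sketch only: biomarker-gated ("adaptive") stimulation amplitude control.
from dataclasses import dataclass

@dataclass
class AdaptiveDBSController:
    gamma_threshold: float = 1.5  # biomarker level taken to indicate dyskinesia (arbitrary units)
    amp_high_ma: float = 3.0      # amplitude delivered when no dyskinesia biomarker is present
    amp_low_ma: float = 1.5       # reduced amplitude while the biomarker is elevated
    step_ma: float = 0.5          # ramp step per update, to avoid abrupt amplitude jumps
    current_ma: float = 3.0

    def update(self, gamma_power: float) -> float:
        """One control-loop iteration: ramp the amplitude toward the target set by the biomarker."""
        target = self.amp_low_ma if gamma_power > self.gamma_threshold else self.amp_high_ma
        if self.current_ma < target:
            self.current_ma = min(self.current_ma + self.step_ma, target)
        elif self.current_ma > target:
            self.current_ma = max(self.current_ma - self.step_ma, target)
        return self.current_ma

# Example: the biomarker rises (dyskinesia detected) and then falls again.
controller = AdaptiveDBSController()
for power in [0.8, 0.9, 2.1, 2.4, 2.2, 1.0, 0.7]:
    print(f"gamma={power:.1f} -> stimulation={controller.update(power):.1f} mA")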
Surgical Ablation
Surgical ablation is another well-established procedure for LID. Ablation of the globus pallidus is preferred when several factors, such as limited resources, difficulties with follow-up over several years, and the risk of complications from electrode implantation, make DBS less practical (Sharma et al., 2020). Unilateral pallidotomy has been shown to result in long-lasting and sustained improvement of LID, tremor, rigidity, bradykinesia, gait, and balance, with both contralateral and ipsilateral improvements in LID reported (Vitek et al., 2003; Upadhyay et al., 2015). However, bilateral STN DBS has been reported to produce greater improvements in motor and bradykinesia symptoms than unilateral pallidotomy (Esselink et al., 2004). Focused ultrasound ablation is a newly developed, minimally invasive procedure that has been shown to be highly effective in the treatment of LID (Magara et al., 2014; Na et al., 2015; Schlesinger et al., 2015; Zaaroor et al., 2018). This magnetic resonance-guided focused ultrasound (MRgFUS) technique has been successfully applied to pallidothalamic tractotomy (Fine et al., 2000) and unilateral pallidotomy (Na et al., 2015), with better outcomes than conventional surgical ablation procedures. Focused ultrasound ablation improves UPDRS motor scores and contralateral dyskinesia scores in LID patients (Magara et al., 2014; Na et al., 2015; Schlesinger et al., 2015; Zaaroor et al., 2018). Thus, it has great potential in the management of LID.
Transcranial Magnetic Stimulation
TMS is a newly implemented non-pharmacological method to treat LID. It is a non-surgical procedure in which a specific area of cortex is stimulated by recurrent single-pulse electromagnetic induction from a coil coupled to a pulse generator. Cortical excitability increases after high-frequency stimulation and decreases after low-frequency stimulation; in the case of theta-burst stimulation (TBS), intermittent stimulation increases cortical excitability, whereas continuous stimulation decreases it (Pascual-Leone et al., 1994; Chen et al., 1997; Huang et al., 2005).
A study suggests that LID is mainly a manifestation of primary disinhibition of the motor cortex and secondary excessive outflow of the pallido-thalamo-cortical motor loop during peak dosing (Rascol et al., 1998). The overlap of these quantifiable cortical changes with LID symptoms provides strong evidence for the involvement of cortical dysfunction in the pathophysiology of LID (Rascol et al., 1998). Thus, applying stimulation to the cortex to counteract this dysfunction is a rational approach to LID treatment.
Some studies have demonstrated that repetitive TMS (rTMS) at low frequency (1 Hz) can produce a significant decrease in dyskinesia symptoms in patients, but the effect is not long-lasting, and recurrent rTMS treatment is needed to maintain the reduced dyskinesia state for a longer duration (Wagle-Shukla et al., 2007; Filipović et al., 2009). However, the long-term effects of rTMS remain debatable owing to study limitations such as insufficient statistical power, protocol variation, short follow-up periods, and the lack of standardized control procedures; for these reasons, the dopaminergic effects of rTMS have not yet been proven (Strafella et al., 2006; Elahi et al., 2009; Benninger et al., 2012).
Implanted Motor Cortex Stimulation
MCS is a neurosurgical procedure in which electrodes are implanted at the epidural or subdural level, allowing long-term, repetitive electromagnetic stimulation of the cortex. Epidural implantation is preferred over subdural implantation because the therapeutic stimulation is not dampened by CSF and serious complications are less frequent (Bezard et al., 1999; Saitoh et al., 2003; Manola et al., 2005; Delavallee et al., 2008). MCS works on the same principles as TMS. The mechanism of action of MCS is uncertain, but some studies on primate PD models have proposed that "electromagnetic stimulation dampens the abnormal oscillations taking place between the cortex and basal ganglia in PD, normalizes hyperactive structures in these areas, and reactivates hypoactive structures" (Drouot et al., 2004; Lefaucheur, 2009). Implanted MCS has shown promising results in reducing the L-DOPA dose, hence delaying the onset of LID (Canavero and Paolotti, 2000; Cilia et al., 2007). However, some studies have reported conflicting results, in which lower-frequency stimulation produced negative results whereas higher-frequency stimulation produced positive results (Lefaucheur, 2009). Despite these conflicting results, implanted MCS has great potential in the management of LID, and further research could strengthen its application with greater effectiveness.
Pharmacotherapeutic Approaches
Knowledge of the pathology of LID has increased vastly over the last decade. Many pharmacotherapeutic approaches have shown promising results in preclinical studies, but their clinical results have not been satisfactory. The present clinical status of LID management is summarized in Supplementary Table S2.
There is substantial evidence in the literature regarding the treatment of LID. The main goal of LID treatment is to reduce or prevent LID or delay its onset while maintaining the excellent antiparkinsonian effect of L-DOPA (good on time). To achieve this goal, researchers have mainly focused on reducing the L-DOPA dose, providing continuous dopaminergic stimulation, and identifying non-dopaminergic targets to treat LID.
Continuous use of lower doses of L-DOPA is the preferred clinical strategy in the management of LID (Scott et al., 2016). There is a clear correlation between the dose of L-DOPA used and the severity and likelihood of the dyskinesia that develops: PD patients receiving higher doses of L-DOPA are more likely to develop LID (Fahn, 1999; Verschuur et al., 2019). In addition, adjuvant therapy with DA agonists and MAO inhibitors has been evaluated in various seminal studies with the aim of reducing the L-DOPA dose and thus delaying the onset of LID (Holloway et al., 2005; Watts et al., 2010), but no long-term benefit has been observed. A new extended-release capsule of carbidopa and L-DOPA named IPX066 has also been developed. This capsule contains immediate- and sustained-release pellets and has proven superior to L-DOPA + carbidopa + entacapone in the treatment of PD. L-DOPA/carbidopa subcutaneous pumps are also used in PD patients; such pumps help reduce fluctuations in L-DOPA concentration (Giladi et al., 2015).
Due to the failure of these approaches, the concept of continuous dopaminergic stimulation (CDS) has emerged. This strategy makes use of longer-acting dopaminergic agents, which provide more constant and long-lasting stimulation of dopaminergic receptors. Rotigotine, apomorphine-like drugs, entacapone/L-DOPA, and L-DOPA-carbidopa intestinal gel (LCIG)-like formulation approaches have been used to provide CDS, but they have proven beneficial mainly in advanced PD rather than in LID itself. However, they are reported to reduce the occurrence of motor complications and dyskinesia; using them along with other preventive medication can therefore increase the quality of life of patients suffering from LID (Olanow et al., 2020).
DBS is the most effective technique by which CDS, and subsequent alleviation of LID, can be achieved. Stimulation of the subthalamic nucleus and internal globus pallidus has been shown to be beneficial in LID, as it induces striatal DA release, which provides more sustained CDS (Fox et al., 2018). However, despite the benefits, cost, lack of accessibility, invasiveness, and the side effect profile limit its use in patients (Buhmann et al., 2017).
MAO-B inhibition has also proven effective in the management of LID. Safinamide is a water-soluble, orally administered aminoamide with two mechanisms of action: MAO-B inhibition and glutamate release inhibition. In a two-year prospective double-blind placebo-controlled study in PD, safinamide treatment was found to reduce motor complications without worsening PD symptoms (Borgohain et al., 2014). When compared with placebo, long-term usage of safinamide did not increase the risk of dyskinesia, and 100 mg of safinamide per day reduced dyskinesia in patients who had more severe dyskinesia at baseline and was well accepted by patients (Borgohain et al., 2014).
Zonisamide works by increasing DA production, inhibiting MAO-B, inhibiting glutamate release, and inhibiting sodium and T-type calcium channels, among other actions. Because studies indicated an improvement in PD symptoms with a very low frequency of adverse reactions such as dyskinesia and hallucination, zonisamide has been approved in Japan for use as a supplementary therapy in PD patients. Zonisamide lowers off-time in PD patients with wearing-off, according to a double-blind RCT including 422 patients (Murata et al., 2015). Zonisamide is effective in the treatment of PD motor symptoms, although it is still considered experimental for other indications, such as the therapy of LIDs (Seppi et al., 2011).
Along with dopaminergic modification, scientists have also identified many non-dopaminergic targets and add-on therapies. Metabotropic glutamate receptor (mGluR5) modulation is one of the most promising of these. Dipraglurant and mavoglurant (AFQ056), two selective mGluR5 negative allosteric modulators, have been shown to significantly reduce dyskinesia without worsening PD symptoms (Stocchi et al., 2013; Tison et al., 2016). Among them, dipraglurant has shown a significant reduction in peak-dose dyskinesia, along with rapid absorption (Tmax = 1 h) (Tison et al., 2016). This could support the rational use of dipraglurant in the management of acute peak-dose dyskinesia, but its efficacy in such cases is yet to be evaluated. However, in another clinical study, mavoglurant (AFQ056) failed to show LID-alleviating effects (Negida et al., 2021). Thus, the use of mavoglurant in the management of LID is questionable.
Out of the several add-on therapies evaluated, amantadine, a non-selective NMDA receptor antagonist, is the most widely used and has proven to be the most effective add-on treatment in clinical studies (Fox et al., 2018; Hauser et al., 2021; He et al., 2021). Extended-release formulations of amantadine were observed to significantly increase on time without troublesome dyskinesia (Hauser et al., 2021). However, side effects such as confusion and hallucinations limit its use in humans (Hubsher et al., 2012). The FDA approved a long-acting amantadine formulation for LID treatment in 2017; it provides a more stable plasma concentration and bioavailability than the immediate-release forms, thus supporting the concept of CDS (Müller and Möhr, 2019). In one phase III clinical study, single daily dosing of amantadine significantly reduced LID in comparison with the placebo-treated group (Pahwa et al., 2015; Paik et al., 2018). In another study of an extended-release amantadine formulation, a reduced level of LID was achieved and maintained for a median treatment period of 1.9 years (Tanner et al., 2020), although its chronic clinical use over longer periods still needs to be evaluated. In a single double-blind placebo-controlled study involving 24 patients, 100 mg of amantadine twice daily resulted in a significant 24% decrease in total dyskinesia, and a significant reduction in the duration of dyskinesia was also observed (Snow et al., 2000). In a 3-month multicenter double-blind RCT comparing patients kept on amantadine with those switched to placebo, dyskinesias deteriorated within a week in patients who stopped amantadine, and apathy and tiredness were more commonly worsened in these individuals (Friedman et al., 2014). Although the anti-dyskinetic effects of amantadine were once thought to disappear after a few months, a clinical investigation of 32 PD patients with LID who received amantadine for more than a year showed sustained improvement in dyskinesia, whereas some studies suggested that the anti-dyskinetic effect was not maintained (Wolf et al., 2010). The findings revealed that amantadine remained effective in treating dyskinesia longer than a year after therapy began, with no adverse signs recorded in patients, while LID symptoms deteriorated in the group that received placebo. Furthermore, chronic amantadine treatment has been associated with the development of hallucinations, sedation, myoclonus, livedo reticularis, hair loss, and edema (Seppi et al., 2011; Vijverman and Fox, 2014), which argues for careful monitoring of patients.
Another promising non-dopaminergic approach to treat LID is 5-HT (5-HT1A and 5-HT1B) receptor agonism. 5-HT1A and 5-HT1B receptor agonism has gained research interest because serotonergic terminals contribute to L-DOPA-derived DA release, and activation of 5-HT1A and 5-HT1B autoreceptors dampens this unregulated release. Eltoprazine, a 5-HT1A and 5-HT1B receptor agonist, has been shown to alleviate dyskinesia without interfering with L-DOPA activity and was reported to be well tolerated by patients (Svenningsson et al., 2015); it may therefore be a possible add-on therapy to L-DOPA for controlling LID. In a randomized placebo-controlled double-blind pilot trial of nine PD patients treated with levetiracetam, on time without dyskinesia or with non-troublesome dyskinesia increased significantly (Zesiewicz et al., 2005). Another multicenter double-blind placebo-controlled crossover experiment in 38 individuals with LID found that levetiracetam at dosages of 500 and 1,000 mg per day resulted in a substantial reduction in dyskinesia (Stathis et al., 2011). However, a nine-patient open-label trial found that levetiracetam was not well tolerated in PD patients, with most patients experiencing sleepiness and increasing dyskinesia. In a phase 1/2a dose-finding trial, eltoprazine, an agonist of presynaptic 5-HT1A and 5-HT1B receptors, established its anti-dyskinetic efficacy at 5 or 7.5 mg without affecting normal motor responses to L-DOPA (Svenningsson et al., 2015). Pardoprunox is a partial agonist at D2 and D3 receptors, as well as a low-affinity agonist at D4, α-2 adrenergic, and 5-HT7 receptors. Because of its partial agonist action at DA receptors, this medication is thought to carry a lower risk of dyskinesia than other DA agonists. An RCT found a substantial increase in on time without bothersome dyskinesias when pardoprunox was titrated to as high as 42 mg per day (Hauser et al., 2009). A 12-week randomized placebo-controlled trial found that pardoprunox increased on time without causing troublesome dyskinesia, although the study had a high dropout rate owing to gastrointestinal side effects, somnolence, and insomnia, as well as fast titration to higher dosages (Rascol et al., 2012).
Dipraglurant was studied in PD patients with moderate or severe LID in a phase IIa, randomized, placebo-controlled, double-blind, parallel-group trial. The primary goal was to establish safety and tolerability, and secondary goals included improvement of abnormal involuntary movements. A total of 46 (88.5%) dipraglurant-treated patients and 18 (75%) placebo patients experienced adverse events, with no significant changes in safety monitoring measures. Treatment with dipraglurant was linked with substantial effects on the mAIMS, but the durability of this initial impact is unknown, as there was no significant between-group difference at the study's conclusion (Tison et al., 2013). The reduction of LID without worsening of parkinsonism observed with the mGluR5 negative allosteric modulator dipraglurant in one RCT suggests that this drug merits further research.
In Japan, istradefylline, an antagonist of the adenosine A2A receptor, has been approved as a supplement to L-DOPA therapy for PD patients (Mizuno and Kondo, 2013). Its impact on locomotor disturbances and LID is debatable. In a placebo-controlled, double-blinded RCT of 373 PD-affected volunteers with motor abnormalities, adding istradefylline to L-DOPA significantly reduced the off-time associated with L-DOPA therapy, although dyskinesia was the most commonly encountered adverse effect (Vorovenci and Antonini, 2015). Preladenant, an adenosine A2A receptor antagonist, was observed to improve the on time associated with L-DOPA therapy without changing the dyskinesia state in individuals with moderate to severe PD in a double-blinded, placebo-controlled RCT. Preladenant was also observed to increase the on time of L-DOPA without severe dyskinesia in two double-blinded, placebo-controlled, phase III studies, but the results were not significant when compared with the placebo-treated group. Caffeine is a commonly consumed psychostimulant that works by blocking adenosine A2A and A1 receptors, thereby inhibiting adenosine signaling (Fredholm et al., 1999). Caffeine causes a persistent and dose-dependent increase in DARPP-32 phosphorylation at Thr75, according to recent research; its impact is greatest at 7.5 mg/kg, a dose that also induces a sustained increase in motor activity. Caffeine's capacity to enhance DARPP-32 phosphorylation at Thr75 is most likely mediated by adenosine A2A receptor inhibition, as SCH 58261, a specific A2A receptor antagonist, similarly increases phospho-Thr75 levels (Lindskog et al., 2002).
Clozapine, in numerous trials, including a placebo-controlled, double-blinded study, was observed to decrease the severity of LID (Durif et al., 2004; Seppi et al., 2011). Clozapine's specific mechanism of action is unknown, but antagonistic binding to the striatal DA receptor type 2 (D2) and serotonin type 2A (5-HT2A) receptors has been postulated. However, worries regarding possible side effects limit the use of clozapine for dyskinesia treatment. Insomnia, sialorrhea, asthenia, and possible severe toxicities, such as agranulocytosis in around 0.7 percent of patients, seizures, and myocarditis, were among the most prevalent side effects observed. Because of concerns regarding possible neutropenia, the neutrophil count has to be monitored continuously for 12 months while on this drug (Seppi et al., 2011; Hack et al., 2014).
PET investigations in PD patients with LID show abnormalities in opioid receptors in the basal ganglia and elsewhere (Piccini et al., 1997). Furthermore, in rodent and primate models of LID, the expression and levels of opioid receptors are changed throughout the basal ganglia (Johansson et al., 2001). While there are numerous methodological variations across the trials, it appears that blocking opioid transmission in a fashion that is not selective for opioid receptor subtypes has no anti-dyskinetic effect in humans when used in conjunction with L-DOPA. This was shown in research in individuals with LID, where an intravenous infusion of the non-subtype-selective opioid antagonist naloxone, at a dosage previously shown to block opioid transmission effectively, was unable to decrease dyskinesia (Fox et al., 2004). Many other drugs which have shown promising results in reducing LID severity in animal models have failed to show activity in clinical studies; perampanel, topiramate, AQW051, rislenemdaz, vitamin D, and nitric oxide modulators (Lees et al., 2012; Kobylecki et al., 2014; Trenkwalder et al., 2016b; Herring et al., 2017; Habibi et al., 2018) are a few examples. Topiramate, an anti-epileptic drug, was even reported to increase the severity of dyskinesia in patients, and five patients were withdrawn from the study because of increased dyskinesia (Kobylecki et al., 2014). However, more research is still required to fully understand these variations.
PROBLEMS IN THE CONVERSION OF PRECLINICAL EFFICACY TO THE CLINICAL STAGE
The first challenge is the difference between the disease models used to evaluate efficacy in animals and the disease in humans. The animal models used for this purpose do not fully replicate the complex neurodegenerative patterns and variations that take place inside a human body. The gold-standard preclinical model, the MPTP-lesioned primate model of PD, is predominantly dopaminergic and nonprogressive, whereas in humans there is involvement of complex pathogenic pathways and progressive neurodegeneration. This can explain the lack of conversion of efficacious results from the preclinical to the clinical phase (Fox and Brotchie, 2019).
Also, finding equivalent doses between the preclinical and clinical phases is difficult. As an example, naftazone (CVXL0107), a glutamate inhibitor, was shown to reduce LID in the MPTP-lesioned primate model of PD, but in a multicenter, crossover, double-blinded randomized controlled trial (DBRCT), it failed to alleviate LID symptoms (Mattei et al., 1999; Brotchie et al., 2007; Corvol et al., 2019). The authors of this study interpreted the unsuccessful results as being due either to a mismatch between L-DOPA and naftazone or to inappropriate dosing. Famotidine, a selective H2 receptor antagonist, also failed to show clinical efficacy despite being effective in preclinical studies; the reason behind this failure is poor BBB permeability (Johnston et al., 2010; Mestre et al., 2014).
Another shortcoming of preclinical studies is that the animal models used cannot properly measure the tolerability in human subjects. Many drugs like topiramate and 5-HT1A antagonists have shown poor tolerability in human subjects, despite being efficacious in the preclinical studies (Goetz et al., 2017;Huot et al., 2017).
Measurement of LID severity in clinical trials is a difficult task. In preclinical studies, objective rating scales are used to measure LID severity, and in clinical studies similar rating scales (e.g., UDysRS part III) are applied. However, part of the clinical assessment depends upon the patient's perception of dyskinesia severity and provides no information about the level of disability. Along with that, lack of awareness of mild dyskinesia in PD is well documented (Hung et al., 2010). Thus, the results obtained from these scales are often inaccurate, unreliable, and subject to recall bias (Stone et al., 2002; Papapetropoulos, 2012). However, some advanced technologies have been evaluated for the measurement of LID and were found to produce more accurate and reliable results. Implementation of sensors to support these technologies in measuring severity and endpoints can also be an objective step that produces more accurate results (Lopane et al., 2015; Ossig et al., 2016; Delrobaei et al., 2017).
The placebo effect in the evaluation of novel therapies alleviating LID is well documented in various studies. As an example, in a trial involving sarizotan, a 5-HT1A agonist, the on time with dyskinesia was reduced by up to 1.5 h/day in the placebo-treated group, while the observed reduction was 2 h/day in the intervention group (Goetz et al., 2008). The placebo effect is profound in trials involving PD patients. The main underlying reason may be the reward effect associated with the DA pathology (Quattrone et al., 2018); that is, in blinded studies, taking the placebo triggers a release of DA in the subject's brain. These effects can lead to false-positive results and can mask the underlying effect of a novel therapy. Thus, understanding the placebo effect can guide the design of future clinical trials.
CONCLUSION
LID is a common and difficult-to-treat condition in PD patients. The clinical phenomenology of LID varies, but the three types are peak-dose dyskinesia, wearing-off or off-period dyskinesia, and diphasic dyskinesia. The exact mechanism of dyskinesia is not known, but both presynaptic and postsynaptic mechanisms are involved in the pathogenesis of LID, and the management of dyskinesia depends on the type of dyskinesia encountered. Dose reduction is effective for peak-dose dyskinesia, while the addition of long-acting formulations is effective for wearing-off dyskinesia; infusion and surgical procedures work for all kinds of dyskinesia. Along with that, various other drug targets have also been identified, such as the serotonergic and glutamatergic systems, and targeting them as an add-on therapy has proven beneficial in the management of LID. Chemical management combined with a neurosurgical approach is the future of dyskinesia management. Despite various advancements, the preclinical-to-clinical conversion rate is very low for newly discovered drugs. Also, measuring the severity of dyskinesia has been challenging. More focus should be given to developing a progressive neurodegenerative animal model, and a more unified LID measurement scale that does not depend on the patient's perception is needed. Neurosurgical stimulation for primary mitigation, along with drug treatment as maintenance therapy, should be a rational approach going into the future. However, this combination is yet to be tested in humans.
ACKNOWLEDGMENTS
The author AM would like to acknowledge the Department of Pharmaceuticals, Ministry of Chemical and Fertilizers, Govt. of India, and National Institute of Pharmaceutical Education and Research (NIPER), Guwahati, for the necessary infrastructure and facilities. The images were "Created with BioRender.com."
Prediction of Partition Coefficients of Environmental Toxins Using Computational Chemistry Methods
The partitioning of compounds between aqueous and other phases is important for predicting toxicity. Although thousands of octanol–water partition coefficients have been measured, these represent only a small fraction of the anthropogenic compounds present in the environment. The octanol phase is often taken to be a mimic of the inner parts of phospholipid membranes. However, the core of such membranes is typically more hydrophobic than octanol, and other partition coefficients with other compounds may give complementary information. Although a number of (cheap) empirical methods exist to compute octanol–water (log kOW) and hexadecane–water (log kHW) partition coefficients, it would be interesting to know whether physics-based models can predict these crucial values more accurately. Here, we have computed log kOW and log kHW for 133 compounds from seven different pollutant categories as well as a control group using the solvation model based on electronic density (SMD) protocol based on Hartree–Fock (HF) or density functional theory (DFT) and the COSMO-RS method. For comparison, XlogP3 (log kOW) values were retrieved from the PubChem database, and KowWin log kOW values were determined as well. For 24 of these compounds, log kOW was computed using potential of mean force (PMF) calculations based on classical molecular dynamics simulations. A comparison of the accuracy of the methods shows that COSMO-RS, KowWin, and XlogP3 all have a root-mean-square deviation (rmsd) from the experimental data of ≈0.4 log units, whereas the SMD protocol has an rmsd of 1.0 log units using HF and 0.9 using DFT. PMF calculations yield the poorest accuracy (rmsd = 1.1 log units). Thirty-six out of 133 calculations are for compounds without known log kOW, and for these, we provide what we consider a robust prediction, in the sense that there are few outliers, by averaging over the methods. The results supplied may be instrumental when developing new methods in computational ecotoxicity. The log kHW values are found to be strongly correlated to log kOW for most compounds.
INTRODUCTION
The chemical properties, biochemical interference, and biopersistence of environmental pollutants are critical factors for toxicology programs and strategies such as the REACH program and Tox21c. 1−3 It is necessary to label, classify, and predict toxicological properties of chemicals so as to develop evidence-based environmental health and safety (EHS) standards for new and emerging compounds. 4−6 As many new and emerging compounds and pollutants are known to pose serious environmental and health risks, 7 effective and inexpensive modes of the assessment of chemical and toxicological properties are required. In this study, we use a number of computational chemistry methods to estimate the octanol− water (log k OW ) and hexadecane−water (log k HW ) partition coefficients in order to get insight into membrane permeability. The modeling methods are applied to 133 compounds from the following categories: haloalkanes, haloaromatics, polycyclic aromatic hydrocarbons (PAHs), polycyclic biphenyls (PCBs), perfluorinated compounds (PFCs), parabens (PRBs), and phthalates (PHTs). All of these compounds belong to ubiquitous pollutant categories. 7 The log k OW value approximates a compound's potential to partition into membranes, which is indirectly related to toxicity, because for most modes of action compounds have to cross a cell membrane. log k OW values therefore represent a cornerstone in pharmaceutical as well as environmental chemistry and toxicology, and it is important to determine log k OW for pollutants. Indeed, a toxicity profile including log k OW has to be determined before a chemical is allowed to enter the market in Europe, the United States of America, and Japan. 8 However, for many emerging pollutants, the log k OW have not been resolved, 9 and questions on their toxicity as well as regulatory decisions are still pending. 10−13 In addition, there have been reports of disagreements in log k OW measurements. 8 Indeed, experimental numbers for physicochemical observables may need more scrutiny in general. 14 A number of recent experimental studies have focused on the compound classes studied here, for example, Quinn et al. studied partitioning of PCBs into different phases, 15 while Xiang and co-workers focused on log k OW of perfluorated carboxylic acids, 16 but for numerous compounds no reliable measurements are available. In general, the error in experimental measurements of log k OW varies from 0.1 to 1 log units. 17 There is a large body of work related to measurement or prediction of log k OW based on, for instance, quantitative structure−property relationships (QSPRs). 18,19 Although efforts into developing better experimental methods are ongoing, 20 more research is focused on computational predictions ranging from coarse-grained simulations 21 to analytical reference interaction site model theory, 22 to quantum chemistry, 23 to machine learning 24,25 and other empirical methods. 26,27 Interestingly, log k OW have been used to parameterize models for dissipative particle dynamics simulations as well. 28 Simultaneously, there are still new efforts to measure new physicochemical data including log k OW with improvement of QSPR methods being one of the reasons. 29 This is important as experimental databases are known to have errors in everything from names and structures to physicochemical properties. 
14,30,31 Environmental and toxicological sciences can, in principle, benefit from using the tools of computational chemistry to determine log k OW values and other properties of new chemicals to facilitate the differentiation of pollutants from innocuous chemicals. 23,24,32−39 In this study, we compare three computational approaches to calculate log k OW of 133 pollutant compounds, and, in addition, the log k HW are computed using two of the methods. The results are compared to experimental data as well as the XlogP prediction method 40,41 and the widely used KowWin (k OW -Windows), 42,43 which is part of the EPI Suite. 44
METHODS
One hundred and thirty-three compounds were selected from seven pollutant categories, haloalkanes, haloaromatics, PAHs, polychlorinated biphenyls (PCBs), PFCs, PRBs, PHTs, and a control category. All compounds and computed values are listed in Table S1. Initial 3D coordinates of the pollutants were taken, if available, from the PubChem database directly, 45 and if not available, the structures were built manually using Discovery Studio 4.5 Visualizer, followed by AM1 46 and PM6 47 optimizations in the gas phase using the Gaussian 09 48 software. It should be noted that a varying number of the compounds studied in this paper have been used for parameterizing the methods used here. Experimental log k OW and log k HW values were taken from a number of sources (Table S1). In some cases, the partition coefficients were determined from the difference between solvation free energies in water and octanol, alternatively water and hexadecane. 49 Some of the log k HW were taken from Hafkenscheid and Tomlinson who specify that the solvent is "aliphatic alkane". 50 All results can be visualized on the (http://virtualchemistry.org) website. 51 It should be noted that many papers in the literature refer to computed data as experimental or even present log k OW without any reference whatsoever.
2.1. SMD Calculations. Solvation model based on electronic density (SMD) calculations were performed with the Gaussian 09 48 or the Gaussian 16 52 software. Optimizations and frequency calculations were carried out for all the pollutants in the liquid phase with SMD solvent models (i.e., water, n-octanol, and n-hexadecane) 49 and in the gas phase separately at the HF/6-31+G(d,p) 53 level of theory. The LANL2DZdp-ECP basis set 54 was used for iodine atoms, as this has been shown to yield reasonable results in other studies. 55 The abbreviation Hartree−Fock (HF) will be used for these calculations in the remainder of this work. In earlier work, 56 a number of levels of theory were used, and it was found that both HF and density functional theory (DFT) predict numbers that are too low, which could be due to not only the basis set size applied but also the method. In order to distinguish these two possibilities, a further set of calculations was done using the BP86 functional 57,58 (denoted as BP86 in what follows). The solvation free energy of pollutant molecules was defined to be the difference in the free energy of the solute calculated in the liquid phase and in the gas phase. 56,59,60 The partition coefficients were computed from the differences in solvation free energy of each compound in water, 1-octanol, and n-hexadecane. This approximation ignores the fact that octanol is significantly hydrated, the solubility of water in octanol being 48.8 g/kg at room temperature. 61 The Minnesota solvation database 62 has been used to develop and tune the SMD method, and some compounds from the database are used here.
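As a concrete illustration of the conversion described above, the short sketch below turns solvation free energies into a partition coefficient; the numerical inputs are invented placeholders, not values computed in this work.

```python
import math

R_KCAL = 1.987204e-3   # gas constant, kcal/(mol K)
T = 298.15             # temperature, K

def log_k_partition(dg_solv_water, dg_solv_organic):
    """log k for water -> organic partitioning from solvation free energies (kcal/mol).

    A more negative solvation free energy in the organic phase than in water
    yields a positive log k, i.e. the compound prefers the organic phase.
    """
    return (dg_solv_water - dg_solv_organic) / (R_KCAL * T * math.log(10))

print(log_k_partition(dg_solv_water=-1.0, dg_solv_organic=-4.5))  # ~2.6 for this made-up case
```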
2.2. COSMO-RS Calculations. COSMO-RS, that is, the conductor-like screening model for realistic solvation, 63−65 is a quantum chemically based approach to predict thermodynamic equilibrium properties of molecules in liquids. It starts from polarization charge densities of solute and solvent molecules, which arise if the molecules are embedded in a virtual conductor. These can be efficiently calculated using DFT combined with the conductor-like screening model (COSMO) 66 which is available in most quantum chemical programs. The TURBO-MOLE program 67 with a Becke−Perdew functional 57,58 and a TZVPD basis set 68 was used for these calculations, together with the default COSMO parameters in TURBOMOLE. On the basis of the individual COSMO results of solutes and solvent molecules, the COSMO-RS method expresses the specific interactions of molecules in a liquid system, that is, electrostatic interactions and hydrogen bonding, pairwise, local interactions of surface segments quantified by the COSMO polarization charge densities σ of the interacting segments. By an efficient and accurate statistical thermodynamics calculation for the interacting surfaces, the chemical potentials and free energies of
the molecules in pure and mixed solvents are calculated. For the current project, standard COSMO-RS calculations have been performed with the COSMOtherm program with the BP_TZVPD_FINE_18 parameterization. 69 This means that the conformations and geometries used in the COSMO-RS calculations for the solutes and solvents were generated and handled as described in Klamt et al. 70 For compounds, for which multiple conformations are relevant, in each solvent the free energy is calculated from the logarithm of the conformational partition function, leading to a multiconformational treatment which would be cumbersome in methods like SMD. log k OW values of 16 of the compounds studied here were used in tuning the COSMO-RS code.
2.3. XlogP3 and KowWin. The XlogP3 algorithm first searches for the most similar compound in the database, and if there is no full hit, the differences in the structures are accounted for by an incremental method. 41 The values for the compounds studied here were downloaded from PubChem 71 (Table S1). The algorithm yielded a root-mean-square deviation (rmsd) of 0.41 log units for 8199 compounds in the original paper. 41 A significant fraction of the compounds studied here is in the training set for XlogP3.
KowWin log k OW were computed based on an empirical atom and fragment contribution method 42 by the widely used EPI Suite. 43,44 2.4. Potential of Mean Force Calculations. Rectangular boxes containing 313 molecules of 1-octanol and 2627 water molecules were built where the 1-octanol fraction was slightly solvated (see analysis in Results and Discussion). This box was equilibrated for 2 ns in order to obtain a stable biphasic system. Pollutant input files were generated as described above. The generalized Amber force field (GAFF 72 ) was used for 1-octanol and all pollutant compounds. Charges for the pollutants were computed from the electrostatic potential using the Merz− Kollman procedure 73,74 in Gaussian 16, 52 computed using DFT (B3LYP 57,75−77 ) combined with the aug-cc-pVTZ basis set. 78−80 The compounds are part of the Alexandria database, 31 and Gaussian log files are available for download at Zenodo. 81 The TIP3P water model 82 was used. The GROMACS 2016 software package 83,84 was used for all simulations. Long-range Coulomb interactions and Lennard-Jones (LJ) interactions were treated using the particle-mesh Ewald method (PME). 85,86 LJ-PME was used because it has been shown that the omission of long-range LJ interactions leads to incorrect surface tensions of liquids 87−89 and biological membranes 86 and, in addition, has an effect on protein aggregation at high protein concentrations in simulations. 90 Constraints were used on all chemical bonds to hydrogen atoms, applying the LINCS algorithm, 91 allowing a 1 fs integration time step. Temperature coupling in production simulations was applied using the v-rescale algorithm 92 with a time constant of 0.5 ps. The pressure was controlled using the Parrinello−Rahman algorithm 93 with a time constant of 10 ps, using the semi-isotropic scheme where the direction orthogonal to the interface is coupled separately from the other two directions. 2 ns simulations were performed "pulling" the pollutant through in the 1-octanol water box (see Movie M1) perpendicular to the water/1-octanol interface. We note that in the PMF method explicit water molecules do enter the octanol phase and contribute to the energy profile as discussed below. As the resulting numbers are in principle a property of the force field only, they will be denoted GAFF-ESP because electrostatic potential-derived charges were used.
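The PMF-based route to log k OW amounts to taking the free-energy difference between the two bulk regions of the profile. The sketch below is not the authors' analysis script; the file name, column layout, energy units and plateau windows are assumptions chosen for illustration (e.g. a profile written by gmx wham).

```python
import numpy as np

R_KJ = 8.314462618e-3  # gas constant, kJ/(mol K)
T = 298.15             # temperature, K

def log_kow_from_pmf(profile_file, water_window, octanol_window):
    """Estimate log kOW from a PMF profile (reaction coordinate in nm, energy in kJ/mol)."""
    z, g = np.loadtxt(profile_file, comments=("#", "@"), unpack=True)
    g_water = g[(z >= water_window[0]) & (z <= water_window[1])].mean()
    g_octanol = g[(z >= octanol_window[0]) & (z <= octanol_window[1])].mean()
    # A lower (more favourable) PMF plateau in octanol gives a positive log kOW.
    return (g_water - g_octanol) / (R_KJ * T * np.log(10))

# Hypothetical usage, with plateau windows chosen on either side of the interface:
# print(log_kow_from_pmf("profile.xvg", water_window=(0.5, 1.5), octanol_window=(4.5, 5.5)))
```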
RESULTS AND DISCUSSION
The prediction of log k OW values is summarized quantitatively in Table 1 for each of the classes of compounds. The lowest rmsd from experimental data for all compounds is obtained for XlogP3, COSMO-RS, and KowWin (all about 0.4 log unit), followed by the SMD method (HF: 1.0 and BP86: 0.9 log units) and the potential of mean force (PMF) calculations (1.1 log
unit). Both SMD methods systematically underestimate log k OW with a mean signed error (MSE) of ≈−0.6. The HF method is known to overpolarize compounds; 31,94 however, this should not be the case for BP86, and therefore, there may be other contributing factors. The GAFF-ESP calculations on the other hand overestimate log k OW possibly due to lack of explicit polarizability. The results are skewed slightly by outliers in some of the compound classes, which will be discussed in some detail below. The XlogP3 rmsd is low due to the fact that some of the compounds may be part of the database used for optimizing the algorithm. It should also be noted that the experimental error varies between 0.1 and 1.0 log unit, with larger compounds having larger uncertainty. 17 The calculation times vary between less than a second for the QSPR methods to minutes for COSMO-RS to days for the DFT and HF methods to half a year for each of the PMFs. Although long calculation times may preclude high-throughput usage of the methods, it is important to establish the relative accuracies of the methods.
3.1. Control Compounds. The control class consists of a number of small polar and apolar compounds including aromatic compounds. They were chosen to have a range of log k OW values, including negative ones and known experimental values. All methods except GAFF-ESP perform relatively well for this category with small MSE (Table 1).
3.2. Haloalkanes. The haloalkane group comprised a set of eight compounds (Table S1), for which log k OW predictions all yield high correlation coefficients and low rmsd (Table 1). Interestingly, SMD yields an almost perfect correlation coefficient r 2 of 0.98; however, all log k OW are overestimated by 0.42 log units, while, in contrast, all other groups are underestimated systematically (Table 1). This result suggests that there is room for improvement with the parameterization of the 1-octanol solvent in the SMD model. The compounds chloromethane, chloroethane, and pentachloroethane were also
evaluated using GAFF-ESP, and these are overestimated in all cases, in particular for 1,1,1,2,2-pentachloroethane. The reason for this discrepancy is likely associated with the force-field parameters, which are not specifically optimized for compounds such as haloalkanes. Addition of a virtual site to model halogen bonding, as present in other general force fields, 95 might help resolve these issues to some extent. 3.3. Haloaromatic Compounds. log k OW for 12 haloaromatic compounds were predicted using the SMD and COSMO-RS methods, and for three of these, 1-chloro-3-phenylbenzene and hexachlorobenzene predictions were done using GAFF-ESP as well. The results (Tables 1 and S1) show an rmsd between 0.4 and 0.5 for all methods. The rmsd for SMD-BP86 is slightly higher than the other methods because of one compound, namely, 1-chloro-3-phenylbenzene (Table S1).
3.4. Polycyclic Aromatic Hydrocarbons. It should be relatively easy to predict log k OW for PAHs given their planar structures, lack of substituents, and their limited number of geometrical conformations. Indeed, our calculations display a reasonable agreement with empirical data (Table 1) with r 2 > 0.85 in all cases except SMD-BP86. However, both SMD methods have a few severe outliers (−2 log units, Table S1). It may be that larger PAHs (>C 16 ), which have higher log k OW values, are more difficult to predict correctly using SMD as a result of their aromatic moment across the large planar structures. 32 1,2-Dihydroacenaphthylene and chrysene were predicted using GAFF-ESP, yielding moderate overestimations in both cases (Table S1), in line with the overall trend (Table 1).
3.5. Polychlorinated Biphenyls. PCBs are predicted quite accurately by all methods, in particular COSMO-RS. In the case of SMD-HF, there is a MSE of −0.64 log units, −0.44 for SMD-BP86. Given that PCBs have been present in the environment for a long time, 7 there is a large amount of data available and only two predictions are made here (Table 2).
3.6. Perfluorinated Compounds. PFCs have an antipromiscuous chemistry, which makes these compounds associate with neither water nor octanol. This might lead to problems in experimental assessments too. Our predictions for the few compounds are quite close to the experimental values; however, the values in Table 2 vary a lot between the methods used here, with large standard deviation for a number of compounds. Hidalgo and colleagues 96 recently reported log k OW values computed using SMD for medium-weight (up to 11 carbon atoms) linear PFCs and compare the results to empirical log k OW from the KowWin program. 42−44 They question some of the experimental data and also find systematic differences between the results obtained using purely empirical methods and the quantum chemistry-based SMD results. GAFF-ESP simulations were done for four PFCs from this set, namely, 1,1,1,2,2,2-hexafluoroethane, 1,1,1,2,2,3,3,3-octafluoropropane, 1,1,1,3,3,3-hexafluoropropan-2-ol, and 2,2,2-trifluoroacetic acid yielding overestimations of 1.2, 1.0, 1.4, and 1.1 log units from the experiment, respectively. The PMF method with the used GAFF-ESP force field therefore does not improve on the accuracy of the SMD methods or COSMO-RS.
3.7. Parabens. The log k OW for PRBs are predicted accurately using COSMO-RS and for XlogP3 and KowWin as well. For SMD, a large systematic deviation (MSE of −1.8 log units) was found using both HF and BP86 methods. To our knowledge however, only one study has reported multiple theoretically predicted log k OW values for PRBs. 97 In that work by Casoni and Sarbu, the most accurate method for calculation was found to be ACLogP, where methylparaben, ethylparaben, propylparaben, and butylparaben were predicted with a deviation of 0−0.22 log units compared with experimental results from a study by Kitagawa and Li. 98 From the study by Casoni and Sarbu, it can be concluded that the larger the PRB molecule becomes, the higher the deviation from the empirical results. The results for SMD based on either HF or BP86 point more to a constant offset, however. Five of the PRBs were also studied using PMFs, namely, methyl 4-hydroxybenzoate, ethyl 4-hydroxybenzoate, propyl 4-hydroxybenzoate, butyl 4-hydroxybenzoate, and heptyl-4-hydroxybenzoate, which gave a lower rmsd than SMD (1.2 log units) but opposite sign of the MSE.
3.8. Phthalates. PHTs turned out to be difficult to predict with an rmsd > 0.9 for both SMD methods (Table 1). For COSMO-RS, this category displays the largest (positive) MSE of all. The PHTs bear a ring moiety with two carbon chains attached and have therefore a large number of degrees of freedom, which may contribute entropically to the free energies of solvation and hence to the high rmsd. Although the PHT category contains many relatively large compounds, there is no clear correlation between molecular weight and error in the prediction.
3.9. log k OW Predictions. Table 2 displays computed log k OW for 36 compounds. The averages over the five numbers are proposed to be the predicted values because the rmsd for the average for the compounds where there are experimental data is slightly lower than any of the methods by itself. Inoue and coworkers reported predictions of log k OW for some large PFCs 100 using the KowWin program. 42−44 Their numbers are quite a bit higher than what is found here using SMD, but it seems that the log k OW of this category of compounds as well as PRBs and PHTs are underestimated systematically in SMD (Figure 1). For PAHs, the size-dependent underestimation is present for the SMD methods. Nevertheless, there is a good correspondence between the methods, and because of the physical approach used in SMD and COSMO-RS, it seems reasonable to assume
that the average numbers provided in Table 2 are good approximations.
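A minimal sketch of the consensus procedure behind Table 2 (averaging the available predictions and reporting the spread); the numbers below are placeholders rather than actual entries from the table.

```python
import numpy as np

# Hypothetical per-method predictions for one compound without experimental data.
predictions = {"SMD-HF": 3.1, "SMD-BP86": 3.3, "COSMO-RS": 3.9, "XlogP3": 3.8, "KowWin": 4.0}
values = np.array(list(predictions.values()))
print(f"consensus log kOW = {values.mean():.2f} +/- {values.std(ddof=1):.2f}")
```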
The PMF calculations described here allow water to enter the octanol phase and in this manner influence the log k OW through preferential solvation or, in principle, by binding alcohol groups in the octanol phase. An analysis of the amount of water in the octanol phase yields no difference depending on the solute: in all systems, approximately 6 ± 1 water molecule is found in the octanol phase.
3.10. log k HW . log k OW partition coefficients have been used in environmental analysis and for toxicity prediction for several decades, and various methods for calculating and determining log k OW empirically have been devised. However, some studies 101,102 suggest that relying solely on the solubility of a compound in octanol and water may not yield a complete picture of the potential toxicity of the compound. For this reason, we have also performed predictions of the hexadecane (C 16 H 34 ) water partition coefficient log k HW (Table S1). C 16 H 34 has a higher capacity to solvate heavy apolar compounds, such as large PAHs and hydrocarbons and potentially also PFCs. 50 All computed log k HW are given in Table S1 alongside a small number (12) of experimental data points obtained from the Minnesota solvation database 62 (based on experimental data from, e.g., Abraham 103 ) and from partition coefficients for water/aliphatic alkanes. 50 Compared to these 12 data points, the predictions are within 1 log unit for all compounds except for urea. Figure 2 shows the correlation between log k HW computed using COSMO-RS and both SMD methods. When neglecting one outlier, 2,2,3,3,4,4,5,5,6,6,7,7,7-tridecafluoroheptanoic acid, the correlations r 2 = 0.72 for HF and 0.76 for BP86, respectively, with rmsd between the two methods of 2.4 (HF) and 1.1 (BP86) log units. Because both COSMO-RS and SMD contain empirical elements, it is difficult to pinpoint what could be the underlying reason for the discrepancies, but BP86 is much closer to COSMO-RS than HF. It may be, obviously, that less effort has gone into fine-tuning the parameterization of implicit solvent models for hexadecane than for water and octanol. Nevertheless, COSMO-RS is known to perform well for a range of solvents, 104,105 while SMD also has been shown to outperform implicit solvent models based on empirical force fields. 56 Another issue could be that the basis set is not sufficiently large for PFCs, but evaluation of basis sets is beyond the scope of this paper. Figure 3 shows that the correlation plots for the PFCs, PRBs, and PHTs all have a slope close to one but an offset of −3 to 4 log units. For haloalkanes, haloaromatics, and PCBs, the difference between log k OW and log k HW is small as it is for the hydrophobic (log k OW > 0) control compounds. The truly hydrophilic control compounds are much more soluble in octanol than in hexadecane. For most PAH compounds log k HW is slightly larger than log k OW . These findings are in line with the wellknown result that more aliphatic compounds solvate more readily in lipid bilayers. 106 This suggests that there is not a lot of extra information to be had from log k HW calculations (or measurements) if the log k OW is known already.
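As a rough illustration of how the correlation between two sets of computed coefficients can be quantified, the snippet below computes the slope, r2, and rmsd between log k OW and log k HW predictions; the arrays are invented placeholders, not values from Table S1.

```python
import numpy as np
from scipy import stats

log_kow = np.array([2.1, 3.4, 4.0, 5.2, 1.8])   # placeholder log kOW predictions
log_khw = np.array([2.0, 3.6, 4.3, 5.5, 1.2])   # placeholder log kHW predictions

fit = stats.linregress(log_kow, log_khw)
rmsd = np.sqrt(np.mean((log_khw - log_kow) ** 2))
print(f"slope = {fit.slope:.2f}, r^2 = {fit.rvalue ** 2:.2f}, rmsd = {rmsd:.2f} log units")
```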
CONCLUSIONS
Comparisons of empirical methods for computing log k OW have been published previously, including molecular modeling studies 107,108 and more approximative models. 109 108 These authors obtained an rmsd of ≈1.6 log units, slightly larger than the value found here (1.1, Table 1). Although the number of compounds is too small to draw any conclusion and the compounds studied are different, it might be worthwhile to study whether the biphasic system used here improves the predictive power. Benfenati and co-workers compared KowWin to a number of other software packages and found this package to be one of the most accurate ones. 109 More recently, dos Reis and co-workers compared different prediction algorithms in a statistical analysis and found KowWin to be one of the most accurate ones. 111 In contrast, Geisler et al., in a comparison of log k OW prediction for small compounds, found KowWin to be quite a bit less accurate than COSMOtherm, 112 which is more in line with the results presented here. It is of interest that development of QSPR methods is ongoing, 114 in part fueled by the finding that databases used to derive older QSPR methods from needed to be curated. 30 In this study, we have derived log k OW values for a large set of compounds from different chemical classes using four computational methods. The quantum chemical SMD approach and the COSMO-RS methods were used to compute log k OW and log k HW for 133 compounds, while XlogP3 (log k OW ) values were downloaded for reference and KowWin values computed using the EPI suite. By taking the average over four to five values, we provide what we consider accurate log k OW predictions for 36 compounds for which no experimental data are available (Table 2). Because the number of available experimental data points remains limited despite decades of measurements, we hope these numbers may be of use in environmental toxicity applications. Of the three methods used for the predictions, the SMD method systematically underestimates log k OW , while COSMO-RS and XlogP3 overestimate log k OW slightly. COSMO-RS yields the most accurate predictions in the tests provided here.
For a number of difficult cases, molecular dynamics simulations were used to compute the PMF for transport through the octanol−water phase. The method has an rmsd from the experimental data of ≈1.1 log units. However, if the large MSE (≈1.0 log units) is subtracted from the results, the rmsd reduces to 0.8 log units. The finding that the PMF method systematically overestimates log k OW could be related to a deficiency in either of the solvent models, although the TIP3P
water model is known to reproduce solvation relatively well. 104,105,115,116 Considering the computational cost, PMFs are not competitive because the quality of the predictions is not better than the cheaper methods. Of the other methods, SMD is relatively expensive with CPU requirements varying from minutes to days because of nonlinear scaling of quantum chemical calculations with system size. DFT is more CPU-time efficient than HF. COSMO-RS is quite a bit more efficient than SMD, while KowWin is virtually instantaneous. Nevertheless, with the present quality of predictions, it may be wise to apply more than one method. It should also be added that our results should not be extrapolated to compounds with chemical moieties far outside the range of compounds here.
Figure 2. Comparison between log k HW computed using the COSMO-RS (X-axis) and the quantum chemical method SMD (Y-axis) for the HF and BP86 methods.
Figure 3. Comparison between log k OW and log k HW computed using the quantum chemical methods SMD-HF and SMD-BP86 as well as COSMO-RS for all compounds. The green lines correspond to log k OW = log k HW and are plotted to guide the eye.
Hybrid approaches for container traffic forecasting in the context of anomalous events: The case of the Yangtze River Delta region in the COVID-19 pandemic
The COVID-19 pandemic had a significant impact on container transportation. Accurate forecasting of container throughput is critical for policymakers and port authorities, especially in the context of the anomalous events of the COVID-19 pandemic. In this paper, we firstly proposed hybrid models for univariate time series forecasting to enhance prediction accuracy while eliminating the nonlinearity and multivariate limitations. Next, we compared the forecasting accuracy of different models with various training dataset extensions and forecasting horizons. Finally, we analysed the impact of the COVID-19 pandemic on container throughput forecasting and container transportation. An empirical analysis of container throughputs in the Yangtze River Delta region was performed for illustration and verification purposes. Error metrics analysis suggests that SARIMA-LSTM2 and SARIMA-SVR2 (configuration 2) have the best performance compared to other models and they can better predict the container traffic in the context of anomalous events such as the COVID-19 pandemic. The results also reveal that, with an increase in the training dataset extensions, the accuracy of the models is improved, particularly in comparison with standard statistical models (i.e. SARIMA model). An accurate prediction can help strategic management and policymakers to better respond to the negative impact of the COVID-19 pandemic.
Introduction
Container transportation has become one of the most essential activities in the world's economic and logistics chain (Onut et al., 2011;Balci et al., 2018) and container throughput has been widely recognized as the most important indicator of port activity (Jiang et al., 2022;Gao et al., 2016;Grifoll et al., 2018). For this reason, accurate forecasting of container throughput plays a crucial role, regardless of the port development strategies (Feng et al., 2019), infrastructure investments or maritime supply chain (Ha et al., 2019). Accurate forecasting can also help strategic management and policy development by allowing better real-time decision-making (Stavroulakis and Papadimitriou, 2017), especially in the context of anomalous events such as the COVID-19 pandemic. In addition, port authorities can use forecasting methods for route optimisation, resources assignment and terminal management (Grifoll, 2019;Grifoll et al., 2021;Tsai and Huang, 2017;Levine et al., 2009).
Anomalous events are generally characterised by their abruptness and unpredictability, such as the recent COVID-19 pandemic. Patients with COVID-19 were first detected in Wuhan, the capital city of Hubei Province of China, in December 2019. The outbreak of COVID-19 has posed unprecedented challenges to human beings and caused farreaching consequences for a highly globalised world economy (Narasimha et al., 2021;Zhao et al., 2022). As container transport is closely linked to the world's economic developments, consumer activity and supply chains, container shipping has been severely affected by the COVID-19 pandemic (Guerrero et al., 2022;UNCTAD, 2020). According to Koyuncu et al. (2021), there was a 15.8% drop in total container throughput in China due to the lockdown strategy and deferred deliveries. When compared to the same period in 2019, the total containers handled at Chinese ports declined by 10.1% in the first two months of 2020. However, inaccurate forecasting of container throughput may also lead to avoidable financial losses and management confusion Xie et al., 2019). In this sense, it is really necessary and beneficial for policymakers and port authorities to explore a new method to capture anomalous events and analyse the influence of the COVID-19 pandemic. Consequently, container throughput forecasting catches more attention and numerous forecasting methodologies have been proposed.
The Autoregressive Integrated Moving Average (ARIMA) model is the most extensive and useful approach for container throughput forecasting; it is convenient and efficient in computation and outperforms other models in some cases, especially in short-term forecasting (Geng et al., 2015). The ARIMA model is also successfully applied in many other fields of forecasting, such as economic, traffic and environmental problems Nepal et al., 2020). The ARIMAX model is based on the ARIMA model, where 'X' stands for "exogenous" external information, which can improve forecasting performance. The Seasonal Autoregressive Integrated Moving Average (SARIMA) model is based on ARIMA and brings the seasonal factor "S" into the ARIMA model, to exploit seasonal fluctuations in the time series (Ruiz-Aguilar et al., 2014); the same applies to SARIMAX.
An Artificial Neural Network (ANN) is a mathematical model that simulates neuronal activity and is an information processing system based on emulating the structure and function of the brain's neural networks. ANNs are excellent at extracting the nonlinear relationships and dynamic patterns widely used in forecasting tasks (Ruiz-Aguilar et al., 2014). Given these characteristics, it is no surprise that ANN achieves numerous successes in transportation forecasting (Gosasang et al., 2011). Hua and Faghri (1994) first applied ANN to traffic prediction and, since then, more and more ANN-based forecasting models have emerged to improve traffic forecasting performance. Typical examples include Back Propagation Neural Networks (BPNN) (Kunnapapdeelert and Thepmongkorn, 2020), Feed Forward Neural Networks (FFNN) (Do et al., 2019), Radial Basis Function (Zhu et al., 2014), and Recurrent Neural Networks (RNN) (Li et al., 2018). Meanwhile, ANN has been used to compare traditional prediction models, to demonstrate the promising performance of ANN for specific applications (Sayed and Razavi, 2000). In this regard, Karlaftis and Vlahogianni (2011) compared ANN with classical statistical methods, and the results show that ANN is more flexible and has higher accuracy than classical statistical models.
Usually, traditional RNN fails to capture the input sequence's long temporal dependence (Ma et al., 2015); ANN prediction models usually need more training samples, while container throughput datasets are limited. However, Long Short-Term Memory (LSTM) can overcome those problems (Geng et al., 2015). A Support Vector Machine (SVM) was proposed by Vapnik (2013). When SVM is used to solve a regression problem, it is called Support Vector Regression (SVR) and SVR has eliminated the limitation of ANN on the size of the dataset (Cao and Cai, 2007). SVR has several distinct benefits when it comes to solving small sample, nonlinear, and high-dimensional forecasting problems (Vapnik et al., 1997;Vapnik, 2013). Therefore, SVR has been widely applied in many fields, e.g. Huang and Hong (2009) used SVR to forecast the exchange rate, and Hong et al. (2011) applied SVR to forecast tourist arrivals.
According to the research findings in transportation prediction, the single model is incapable of capturing nonlinear behaviour (Karlaftis and Vlahogianni, 2011). Given these properties, hybrid forecasting techniques have received more attention and extensive research has shown that hybrid forecasting techniques outperform the single model in terms of forecasting accuracies (Zheng et al., 2006). Hybrid models are mainly divided into two categories. One category applies the optimisation algorithm to optimise the hyperparameters of another forecasting model, such as Ping and Fei (2013), which applies genetic algorithms (GA) to optimise the backpropagation neural network model (BPNN) for forecasting the container throughput in Guangdong Province. These results showed that GA-BPNN has better accuracy. Mak and Yang (2007) presented a modified version of the support vector machine (SVM) to forecast container throughput in Hong Kong, which shows an impressive performance in the area of time series analysis.
The other category combines two forecasting models, one used to forecast the linear component and another used to forecast the nonlinear component, such as the Gray-SARIMA dynamic model (Carmona-Benítez and Nieto, 2020), the ANN-SARIMA model (Ruiz-Aguilar et al., 2014) and the GA-SVR-SARIMA model (Hong et al., 2011). Usually, the traditional statistical models (e.g. SARIMA and ARIMA) are used to predict the linear component and the Machine Learning models (e.g. ANN, SVR and LSTM) are used to predict the nonlinear component.
However, the port container traffic time series are difficult to classify as purely linear parts or nonlinear parts and, generally speaking, these time series contain both a linear part and a nonlinear part due to the seasonality, randomness and complexity presented in the time series (Wang et al., 2012;Khashei and Bijari, 2011). Therefore, it is inadequate to apply SARIMA or Machine Learning models to fit the linear part and nonlinear part, respectively. Meanwhile, traditional hybrid models are best suited to multivariate forecasting, and the authors have not found research papers related to port container traffic univariate forecasting by hybrid models, despite the increasing interest in port container traffic. Also, anomalous events such as the COVID-19 pandemic usually occur suddenly and unpredictably with asymmetric information and can bring great harm to all walks of life (Jin et al., 2019). The time series containing anomalous events is described as an inherently nonlinear complex and chaotic dynamic system, which has an impact on the prediction accuracy (Faulkner and Russell, 1997).
Based on the above problem, the contributions of this paper are fourfold. Firstly, we proposed a hybrid model to enhance prediction accuracy and remove nonlinearity and the multivariate limitations. Secondly, we compared the prediction performance of different models for various training dataset extensions and forecasting horizons. Third, we explored the forecasting performance of different models in the context of the COVID-19 pandemic. Finally, we analysed the impact of the COVID-19 pandemic on forecasting work and maritime transportation.
The Yangtze River Delta multi-port system (YRDP) is located in the most developed area of China (see Fig. 1). This area has been investigated from different perspectives. Feng et al. (2020) proposed a novel ternary diagram method to visualise the evolution of YRDP. Huang et al. (2022)explored the temporal and spatial characteristics of YRDP by a compositional data method and the results indicated that the development of YRDP has gone through four stages: the evolution of YRDP is characterised by a tendency towards a 'multi-core development' and faces a differentiated pattern of 'peripheral port challenges'. Veenstra and Notteboom (2011) analysed the level of cargo concentration and the degree of inequality in the operations of the container ports to address the dynamics in YRDP.
In this paper, the time series of the container throughput of Shanghai port (SH), Ningbo port (NB), Suzhou port (SZ) and Lianyungang port (LYG) in YRDP were applied for illustration and verification purposes. The reason why we selected those four ports is that SH and NB are international ports, ranked first and third in the world in terms of container traffic, while LYG and SZ are small-scale regional ports in China; thus the forecasting work consists of large and small ports' container traffic time series, making the work more convincing.
The organisation of this paper is as follows. Section 2 describes the methodology, including the SARIMA model, LSTM model, SVR model, and two hybrid models, each with two configurations (configuration 1: S-L1, S-S1 and configuration 2: S-L2, S-S2). In Section 3, the experimental procedure is introduced. The empirical results and discussion are presented in Section 4. Finally, conclusions and future research are proposed in Section 5.
Methodology
This section shows the analytical methods used in this contribution, including SARIMA, SVR, LSTM and the hybrid models.
SARIMA
A more sophisticated and accurate algorithm for analysing and forecasting time series data is the Box-Jenkins method, including the autoregressive model AR(p), the moving average model MA(q), the autoregressive moving average model ARMA(p, q), and the Autoregressive Integrated Moving Average model ARIMA(p, d, q). The form of the ARIMA model is as follows:

φ_p(B)(1 − B)^d x_t = θ_q(B) ε_t

Adding a seasonal factor of period s gives the SARIMA(p, d, q)(P, D, Q) model, with the following compact expression:

φ_p(B) Φ_P(B^s)(1 − B)^d (1 − B^s)^D x_t = θ_q(B) Θ_Q(B^s) ε_t

where φ_p(B) and θ_q(B) are the non-seasonal autoregressive and moving average operators, Φ_P(B^s) and Θ_Q(B^s) are the corresponding seasonal operators, B is the backshift operator, and ε_t is white noise. The detailed parameters are presented in Appendix A.
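The SARIMA fits in this work were obtained with auto.arima in R. Purely for illustration, a roughly equivalent fit in Python with statsmodels could look like the sketch below, where the orders and the name of the throughput series are placeholders rather than the settings selected for any of the ports.

```python
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

def fit_sarima(series: pd.Series, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)):
    """Fit a SARIMA(p,d,q)(P,D,Q)s model to a monthly series and return the results object."""
    model = SARIMAX(series, order=order, seasonal_order=seasonal_order,
                    enforce_stationarity=False, enforce_invertibility=False)
    return model.fit(disp=False)

# res = fit_sarima(throughput)                        # 'throughput' is a hypothetical monthly series
# forecast = res.get_forecast(steps=12).predicted_mean
```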
Support vector machine (SVM)
The SVM algorithm uses kernel functions to map data from a low-dimensional to a high-dimensional space. This method mitigates the curse of dimensionality and the computational complexity while having better scalability and an improved ability to fit nonlinear data (Moscoso-López et al., 2016). Compared to traditional neural network algorithms, the SVM model uses structural risk minimisation, and its scalability has been one of the advantages of the model. For a given sample (x_i, y_i) (i = 1, 2, 3, ..., n), n is the sample volume, x_i is the input vector, and y_i is the output target. The SVM model uses a high-dimensional mapping of the feature space R^n to R^m and then a function approximation in the feature space using a linear regression function. SVM for regression is called SVR:

f(x) = w·φ(x) + b (4)

where w is the weight vector, φ(x) denotes the mapping of the input vector x into the feature space, and b is the bias term. According to statistical learning theory, SVM obtains w and b and fits the regression function by minimizing the objective function.
minimise (1/2)‖w‖² + C Σ_{i=1}^{n} (ξ_i + ξ_i*) (5)

where C denotes the regularisation parameter and the deviations y_i − f(x_i) enter through the loss function; the ε-insensitive loss function is defined as

L_ε(y, f(x)) = max(0, |y − f(x)| − ε) (6)

where ε is the tolerance error. Through Lagrange multiplier techniques, Eq. (5) leads to a dual optimisation problem in the Lagrange multipliers β_i and β_i*, subject to the constraints Σ_{i=1}^{n} (β_i − β_i*) = 0 and 0 ≤ β_i, β_i* ≤ C for i = 1, 2, ..., n.
The training error above ε is denoted as ξ_i*, while the training error below −ε is denoted as ξ_i. The parameter vector w in Eq. (4) is derived by solving the constrained quadratic optimisation problem, giving

w = Σ_{i=1}^{n} (β_i − β_i*) φ(x_i)

The Lagrange multipliers β_i* and β_i are derived by solving a quadratic program.
Finally, the SVR regression is calculated as

f(x) = Σ_{i=1}^{n} (β_i − β_i*) K(x_i, x) + b

where K(x_i, x_j) are kernel functions allowing for the mapping of input data into a high-dimensional feature space where a linear regression can be performed. This contribution uses the Gaussian Radial Basis Function

K(x_i, x_j) = exp(−‖x_i − x_j‖² / (2σ²))

where σ represents the width of the kernel function.
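For illustration, a scikit-learn version of the SVR regressor described above, applied to lagged values of a univariate series, might look as follows; the synthetic series, lag length and hyperparameter values are assumptions, not the configuration used in the paper.

```python
import numpy as np
from sklearn.svm import SVR

def make_lag_features(y, n_lags=12):
    """Build (X, target) pairs from a univariate series using the last n_lags values."""
    X = np.array([y[i - n_lags:i] for i in range(n_lags, len(y))])
    return X, y[n_lags:]

# Synthetic stand-in for a monthly container-throughput series (10 years).
rng = np.random.default_rng(0)
months = np.arange(120)
y = 100 + 0.5 * months + 10 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 2, 120)

X_train, y_train = make_lag_features(y)
svr = SVR(kernel="rbf", C=10.0, epsilon=0.1, gamma="scale").fit(X_train, y_train)
next_month = svr.predict(y[-12:].reshape(1, -1))[0]   # one-step-ahead forecast
```

In practice the inputs would usually be standardised before fitting, and C, ε and σ tuned, for example with GridSearchCV as described later in the modelling process.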
Long Short-Term Memory networks model (LSTM)
LSTM, as a special Recurrent Neural Network, effectively overcomes the problems of vanishing and exploding gradients in machine learning (ML) models and has a strong processing capability for temporal data with relatively long intervals and delays (Huang et al., 2021). The LSTM structure consists of a forget gate f t that controls information transfer, an input gate i t and an output gate o t , together with a cell state C t , which decide which signals are forwarded to the next node, as shown in Fig. 2.
The input sequence {x 1 , x 2 , ..., x t } is fed into the LSTM encoder, where w xi , w hi , and w si represent the weight distributions of the different cellular mechanisms, respectively. In Eq. (12), the term w xi x t represents the external information variables associated with the input gate.
The term h t−1 represents the generic state at moment t−1; since the LSTM model shares cell correlation and implicit node information, it can be considered as being part of the external input, where b is the bias vector and f denotes the sigmoid activation function. The mechanisms of the forget and output gates (as well as the associated parameters) are similar to those of the input gate, and the final state values of the hidden cell are given by the tanh activation function (Eq. (14)) to obtain the predictions.
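A compact Keras sketch of a univariate LSTM forecaster in the same spirit follows; the synthetic series, window length, layer size and training settings are illustrative assumptions, not the configuration reported in the paper.

```python
import numpy as np
import tensorflow as tf

def make_windows(y, n_lags=12):
    """Reshape a univariate series into (samples, time steps, features) windows."""
    X = np.array([y[i - n_lags:i] for i in range(n_lags, len(y))])
    return X[..., np.newaxis], y[n_lags:]

months = np.arange(240)
y = 100 + 0.5 * months + 10 * np.sin(2 * np.pi * months / 12)   # synthetic monthly series
y = (y - y.mean()) / y.std()                                     # simple standardisation

X, targets = make_windows(y)
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(X.shape[1], 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, targets, epochs=30, batch_size=16, verbose=0)
next_value = model.predict(X[-1:], verbose=0)[0, 0]              # one-step-ahead forecast (standardised units)
```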
The hybrid models
The hybrid model can predict more accurately than the single model (Wang et al., 2012; Ruiz-Aguilar et al., 2014). In this paper, we proposed two hybrid models, each with two configurations, to predict the container throughput. Due to the seasonality, complexity and randomness, the time series contains both linear and nonlinear patterns. Therefore, SARIMA and ML-based models are applied to fit the linear and nonlinear patterns, respectively. Then:

Y t = L t + N t (15)

where L t is the linear component and N t represents the nonlinear component.
The SARIMA model is applied to fit the linear part, and the LSTM model and the SVR model are used to forecast the nonlinear part. The SARIMA model provides the forecast value of the linear part, L̂ t , and the residual at time t is equal to the difference between the true value Y t and the forecast value L̂ t :

e t = Y t − L̂ t (16)
Based on the characteristics of the LSTM and SVR, they can overcome the multivariate limitation and also resolve the nonlinearity of the container throughput time series. The nonlinear component is therefore modelled from the residuals as

N̂ t = f(e t−1 , e t−2 , ..., e t−n ) (17)

where f is the nonlinear function calculated by the LSTM model and the SVR model.
The final forecasting values are obtained as

Ŷ t = L̂ t + N̂ t (18)

where L̂ t is the linear forecast calculated by the SARIMA model and N̂ t is the nonlinear forecast calculated by Eq. (17). The hybrid models in Eq. (18) are composed of SARIMA and LSTM, and SARIMA and SVR, respectively. Therefore, these two hybrid models are SARIMA-LSTM and SARIMA-SVR. In this step, we call the hybrid models configuration 1, including SARIMA-LSTM 1 (S-L1) and SARIMA-SVR 1 (S-S1).
The time series of the container throughput is hardly ever purely linear or nonlinear; it contains both linear and nonlinear patterns. So, to overcome this point and further improve the forecasting performance of configuration 1, we proposed configuration 2 (based on configuration 1) as follows:

Ŷ t2 = f(Ŷ t1 , ê t , N̂ t ) (19)

where f is the nonlinear function calculated by the LSTM model and the SVR model, Ŷ t1 is calculated by Eq. (18), ê t is calculated by the SARIMA model and N̂ t is calculated by Eq. (17). Eq. (19) is configuration 2 of the hybrid models, including SARIMA-LSTM 2 (S-L2) and SARIMA-SVR 2 (S-S2).
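To make the two-stage idea concrete, a minimal end-to-end sketch of configuration 1 (SARIMA for the linear part, SVR on the SARIMA residuals for the nonlinear part) is given below; orders, lag length and hyperparameters are placeholders, and configuration 2 would additionally feed the configuration-1 forecast and the SARIMA residual into the ML stage.

```python
import numpy as np
from sklearn.svm import SVR
from statsmodels.tsa.statespace.sarimax import SARIMAX

def hybrid_forecast(y, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12), n_lags=12):
    """One-step-ahead forecast: SARIMA for the linear part plus SVR on the residuals."""
    y = np.asarray(y, dtype=float)
    sarima = SARIMAX(y, order=order, seasonal_order=seasonal_order).fit(disp=False)
    linear_fit = sarima.fittedvalues                          # in-sample L_t
    residuals = y - linear_fit                                # e_t = Y_t - L_t
    X = np.array([residuals[i - n_lags:i] for i in range(n_lags, len(residuals))])
    svr = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, residuals[n_lags:])
    linear_next = sarima.forecast(steps=1)[0]                 # linear forecast for t+1
    nonlinear_next = svr.predict(residuals[-n_lags:].reshape(1, -1))[0]  # nonlinear correction
    return linear_next + nonlinear_next                       # Eq. (18)
```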
Experimental procedures
This section shows the experimental procedure. Firstly, we describe the container traffic time series used in the paper and the division of the dataset. Then, the Anomaly Detection Method (ADM) is introduced to detect anomalous points. The third step is the modelling process, including model training, model loading and forecasting. Finally, the performance of the different models is assessed. The LSTM, SVR and hybrid models were implemented in Python using the LSTM and SVR functions. The SARIMA model was developed in R using the forecast package; the auto.arima function in the forecast package was convenient for generating the parameters. Table 1 displays the explanation of some key notation.
Table 1. Explanation of key notation.
SARIMA: p, the non-seasonal autoregressive order; d, the non-seasonal differencing order; q, the non-seasonal moving average order; P, the seasonal autoregressive order; D, the seasonal differencing order; Q, the seasonal moving average order; φ p , the autoregressive operator; θ q , the moving average operator; Φ P , the seasonal autoregressive operator; Θ Q , the seasonal moving average operator; x t , the container traffic time series.
Support Vector Machine (SVM): φ(x), the kernel function; b, the bias term; C, the regularisation parameter.
Dataset description and division
In this work, the container throughput time series of SH, NB, LYG and SZ were analysed. These time series, shown in Fig. 3, contain monthly records of container traffic from 2012 to 2021. All of the data came from the Ministry of Transport of the People's Republic of China (https://www.mot.gov.cn/).
In this paper, the time series datasets were divided into two periods: the first period is pre-COVID-19, from 2012 to 2019, and the second period is post-COVID-19, from January 2020 to December 2021. For the pre-COVID-19 period, we compared the forecasting accuracy of the different models under various training dataset extensions and forecasting horizons.
Anomaly point inspection and detection
Anomalous points of a time series are usually expressed as abnormal data points relative to some standard or conventional signal, such as an unexpected peak, an unexpected trough, a trend change or a horizontal translation (Nguyen et al., 2021). The time series consists of a trend, a seasonal part and a remainder. We first decompose it using the Seasonal-Trend decomposition procedure based on Loess (STL), remove the trend and seasonal parts, and then check whether the remainder contains anomalous points (Cleveland et al., 1990; Rojo et al., 2017). STL decomposes the time series into three components: trend, seasonal and remainder. After removing the trend and seasonal components, the remainder component is tested using the inter-quartile range (IQR) rule of ±25 of the median, where the IQR is the difference between the 25% and 75% quantiles (Cleveland et al., 1990). Once the anomaly detection is finished (see Fig. 4), the anomalous points are replaced with the median of each container traffic time series to keep the forecasting work accurate.
Table 1. Explanation of key notation. SARIMA: p, the non-seasonal autoregressive order; d, the non-seasonal differencing order; q, the non-seasonal moving average order; P, the seasonal autoregressive order; D, the seasonal differencing order; Q, the seasonal moving average order; φ_p, the autoregressive operator; θ_q, the moving average operator; Φ_P and Θ_Q, the seasonal operators; x_t, the container traffic time series. Support Vector Machine (SVM): φ(x), the kernel function; b, the bias term; C, the regularisation parameter.
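A minimal sketch of this STL-plus-IQR screening step is given below (Python, statsmodels); the threshold constant k is an assumption standing in for the IQR rule quoted above.

```python
# Sketch of STL-based anomaly detection on the remainder component.
# The threshold rule (median +/- k * IQR on the remainder) follows the spirit of the
# method described above; k = 2.5 is an assumed constant, not the paper's exact setting.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

def detect_and_replace_anomalies(series: pd.Series, period: int = 12, k: float = 2.5) -> pd.Series:
    remainder = STL(series, period=period, robust=True).fit().resid
    q1, q3 = np.percentile(remainder, [25, 75])
    iqr = q3 - q1
    median_rem = np.median(remainder)
    is_anomaly = (remainder < median_rem - k * iqr) | (remainder > median_rem + k * iqr)
    cleaned = series.copy()
    cleaned[is_anomaly] = series.median()   # replace anomalous points with the series median
    return cleaned
```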
Modelling process, assessment criteria and robustness
In the modelling process, random initialisation is the first and most important step. In this paper, we used He initialisation (tensorflow.keras.initializers.he_normal()) from the TensorFlow module in Python to initialise the parameters (He et al., 2015). The next step is to find the best parameter combinations with the grid search and cross-validation methods provided by the GridSearchCV function of the scikit-learn module in Python. For the SARIMA model, the auto.arima function in R returns the best parameters. Finally, the Mean Absolute Percentage Error (MAPE), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) were used to evaluate the performance of the models: MAE = (1/n) Σ|y_i − f(x_i)|, MAPE = (100/n) Σ|y_i − f(x_i)|/|y_i|, and RMSE = √((1/n) Σ(y_i − f(x_i))²), where y represents the true values and f(x) denotes the forecast values.
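The three criteria, together with an illustrative scikit-learn grid search of the kind described above, can be sketched as follows; the parameter grid and cross-validation scheme are assumptions for illustration only.

```python
# Error criteria used for model assessment, plus an illustrative SVR grid search.
import numpy as np
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
from sklearn.svm import SVR

def mae(y, f):
    return np.mean(np.abs(y - f))

def mape(y, f):
    return 100.0 * np.mean(np.abs((y - f) / y))

def rmse(y, f):
    return np.sqrt(np.mean((y - f) ** 2))

# Grid search with time-series cross-validation (the parameter grid is an assumption).
param_grid = {"C": [1, 10, 100], "epsilon": [0.01, 0.1], "kernel": ["rbf"]}
search = GridSearchCV(SVR(), param_grid, cv=TimeSeriesSplit(n_splits=5),
                      scoring="neg_mean_absolute_error")
# search.fit(X_train, y_train)  # X_train / y_train would be lagged features and targets
```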
Numerical results and discussion
This section presents the prediction results of the hybrid models and the benchmark models (i.e. the ML models and the SARIMA model). We compared the prediction performance of the various models under different training dataset extensions and forecasting horizons, then analysed the impact of the anomalous events of the COVID-19 pandemic on the predictions, and finally provide some managerial insights based on the forecast results. Table 2 shows the forecasting performance of the different models for various training dataset extensions, measured by three criteria (i.e. MAE, MAPE, and RMSE). Table 2 indicates that the hybrid models (both configuration 1 and configuration 2) have a better forecasting performance than the SARIMA model and the ML models (i.e. SVR and LSTM). For instance, from the MAE criterion we can see that the largest hybrid-model value for NB comes from S-L1, ranging from 9.55 to 10.23 for training dataset extensions 84 and 60, respectively. However, the best-performing single model is the LSTM, whose MAE ranges from 10.13 to 10.90, which is larger than that of S-L1. In the same way, the best single model for SH is also the LSTM, whose MAE ranges from 19.92 to 20.49, which is much larger than S-L1's 9.55 to 10.23. The MAPE and RMSE also indicate this point: the worst forecasting accuracy among the hybrid models for NB comes from S-L1 for both MAPE and RMSE, whose values range from 4.36 to 4.37 and from 10.23 to 10.89, respectively, whereas the best single model for NB is the LSTM, with MAPE and RMSE of 9.19 to 9.46 and 10.78 to 11.95, respectively. This pattern also applies to LYG and SZ. With the extension of the training dataset, the accuracy increases: most of the criteria decrease as the training dataset extension increases for all the forecasting models, except for the RMSE of SVR for LYG and the MAE of S-L2 for SZ. Table 3 shows the forecasting performance of the different models for various forecasting horizons, measured by the same three criteria; here, too, the criteria of the hybrid models are lower than those of the single models. According to Khashei and Bijari (2011), forecasting accuracy decreases as the forecasting horizon increases. However, from Table 3 we can see that the three criteria do not show sufficient evidence for this pattern, because the criteria of the various models vary irregularly across horizons; for example, the most accurate forecasting horizon of S-L1 for the MAE of NB is horizon 24, but for MAPE and RMSE it is horizon 12.
Forecasting performance considering different training dataset extensions and forecasting horizons
For the three single models, according to the three criteria, it is no surprise that the SARIMA always has the largest value, the SVR is lower than the SARIMA, and the LSTM's criteria are the lowest, irrespective of the different training dataset extensions or forecasting horizons (see Tables 2 and 3). This indicates that the LSTM shows the most accurate performance, the SVR is second, and the traditional statistical model SARIMA has the worst performance. When we compared configuration 1 (S-L1 and S-S1) to configuration 2 (S-L2 and S-S2), irrespective of the various training dataset extensions or forecasting horizons, the three criteria show that configuration 2 has noticeably better performance than configuration 1, which means that the configuration 2 we proposed can further improve the prediction performance of configuration 1. Table 4 and Table 5 display the differences of the three criteria between configuration 1 and configuration 2 for the various training dataset extensions and forecasting horizons. All of those values are positive, which also means that configuration 2 improves the forecasting performance of configuration 1 for different training dataset extensions and forecasting horizons.
Impact of COVID-19 on the prediction
This subsection investigates the prediction performance of different forecasting models in the context of anomalous events. In this sense, the COVID-19 pandemic provides a suitable example to test the prediction ability of the different forecasting models using the container throughput time series.
In Table 6, the splitting strategy of the training dataset extensions for the post-COVID-19 period differs from that for the pre-COVID-19 period. The training dataset extensions for the pre-COVID-19 period are split as follows: training dataset extension 84 covers January 2012 to December 2018, extension 72 covers January 2013 to December 2018 and extension 60 covers January 2014 to December 2018; the test dataset covers January 2019 to December 2019. For the post-COVID-19 period, each training dataset extension was postponed by two years, and the test dataset covers January 2021 to December 2021. Table 6 displays the three criteria of the various training dataset extensions for the post-COVID-19 period and shows that the hybrid models also have better predictive power than the single models during the post-COVID-19 period. For example, for NB, the worst hybrid model is S-S1, with an MAE ranging from 11.60 to 12.23, whereas the best single model is the LSTM, with an MAE ranging from 12.71 to 13.65; likewise, the MAPE and RMSE of the LSTM are correspondingly higher than those of S-S1. At the same time, the differences of the three criteria between configuration 1 and configuration 2 are all positive (except for the MAE of SH for training dataset extensions 72 and 60; see Table 7), indicating that configuration 2 also improves on configuration 1 during the post-COVID-19 period. For example, in terms of the MAPE of SH, S-L2 improves on S-L1 by about 0.22-0.41 (see Table 7). Table 8 shows the differences between the three criteria of the corresponding training dataset extensions for the pre-COVID-19 and post-COVID-19 periods. The values in Table 8 are all positive, which means that each criterion is higher post-COVID-19 than pre-COVID-19; in other words, the COVID-19 pandemic lowers the forecasting accuracy.
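Read as date slices of a monthly series, the splitting strategy described at the start of this subsection corresponds to the windows below (a sketch; pandas-style month indexing is assumed).

```python
# Training/test windows implied by the splitting strategy described above (monthly data).
splits = {
    "pre_covid": {
        84: ("2012-01", "2018-12"),
        72: ("2013-01", "2018-12"),
        60: ("2014-01", "2018-12"),
        "test": ("2019-01", "2019-12"),
    },
    "post_covid": {   # each training window postponed by two years
        84: ("2014-01", "2020-12"),
        72: ("2015-01", "2020-12"),
        60: ("2016-01", "2020-12"),
        "test": ("2021-01", "2021-12"),
    },
}
# With a pandas Series `y` indexed by month, y.loc["2012-01":"2018-12"] selects extension 84.
```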
Discussion and managerial insights
The COVID-19 pandemic has led to a slowdown in container transportation and maritime trade (Guerrero et al., 2022). As the pandemic spread around the world, many countries fell into a "lockdown and stagnation" state, global supply chains were disrupted, and Chinese ports were also affected. Pandemic-related restrictions such as the lockdown strategy had a series of negative impacts on port activities. The decline occurred mainly in the first half of 2020: in February, container traffic plummeted by 2.63%, 20.94%, 19.45%, and 39.13% in NB, LYG, SH, and SZ, respectively (see Fig. 5), and the year-on-year growth rate remained negative from January 2020 to June 2020. This suggests that the lockdown strategy had a negative influence on the economy and maritime trade, which in turn affected the container transportation sector (Zhao et al., 2022). After June 2020, the Chinese government efficiently resumed work and production, the transportation industry in the four ports gradually recovered, and the year-on-year growth rate turned positive for the first time since the start of the pandemic; the four ports showed resilience and vitality, and container traffic began to rebound.
After October 2020, the four ports showed a downward trend, as the second wave of the COVID-19 pandemic around the world caused a shock to container transportation; in this context, the four ports declined for three months from October 2020 (see Figs. 3 and 5). In the second half of 2020, the major economies implemented vaccination plans based on their anti-epidemic experience to support economic growth. At the same time, favourable factors such as the recovery of steady economic growth and the signing of the Regional Comprehensive Economic Partnership (RCEP) also provided strong support for the development of foreign trade. NB and SH are among the top two Chinese ports in terms of container traffic and are closely connected with world maritime trade. By 2021, the container traffic in NB and SH had broken new records of 3180 and 4703 thousand TEUs, respectively. The container traffic year-on-year growth rates in NB, SH, LYG and SZ were all positive, and the growth trend returned to that of the pre-COVID-19 period; as a result, container traffic likewise returned to pre-epidemic levels in 2021 (see Fig. 5).
The port industry is traditionally labour-intensive (Trujillo and Nombela, 1999). The epidemic prevention and control measures in China forced ports to apply digital technology, which accelerated the process of port digital transformation. Chinese ports reduced contact risks by improving their automation during the epidemic to ensure the efficient and orderly operation of the entire supply chain, which also improved the understanding and recognition of digitalisation and automation in the port industry. LYG and SZ are small-scale ports in comparison with SH and NB, and their development benefits from the new Chinese development pattern whereby "internal circulation dominated and double circulation promoted each other".
Table 4. Difference of the three criteria between configuration 1 (S-L1, S-S1) and configuration 2 (S-L2, S-S2) for various training dataset extensions during the pre-COVID-19 period. S-L represents the difference between S-L1 and S-L2, and S-S represents the difference between S-S1 and S-S2.
Table 5. Difference of the three criteria between configuration 1 (S-L1, S-S1) and configuration 2 (S-L2, S-S2) for various forecasting horizons during the pre-COVID-19 period. S-L represents the difference between S-L1 and S-L2, and S-S represents the difference between S-S1 and S-S2.
Table 7. Difference of the three criteria between configuration 1 (S-L1, S-S1) and configuration 2 (S-L2, S-S2) for various training dataset extensions during the post-COVID-19 period. S-L represents the difference between S-L1 and S-L2, and S-S represents the difference between S-S1 and S-S2.
According to Zhao et al. (2022), the prediction error can serve as an indicator to measure the impact of the COVID-19 pandemic on maritime transportation: the larger the error, the greater the impact. Proceeding from this point, we compared the accuracy of the different training dataset extensions between the pre-COVID-19 period and the post-COVID-19 period. We found that the prediction errors for the post-COVID-19 period were higher than for the pre-COVID-19 period (see Tables 6 and 8), which indicates that the COVID-19 pandemic had a negative influence on the prediction work; however, different forecasting models have different predictive power, so prediction accuracy alone cannot fully reflect the impact of COVID-19 on maritime transportation. The experimental prediction of the container throughput at NB, SH, LYG and SZ in the YRDP was performed using the hybrid models, the ML models (LSTM and SVR) and the SARIMA model, with MAE, MAPE and RMSE as the measurement criteria for comparing predictive performance. In terms of predictive performance, configuration 2 (S-L2 and S-S2) was the most accurate of the various models, while configuration 1 (S-L1 and S-S1) was more accurate than the SARIMA model and the ML models. At the same time, the accuracy of S-L1, S-S1, S-L2 and S-S2 was also higher than that of the four EMD-BPN models (Wei and Chen, 2012), the SARIMA-ANN models (Ruiz-Aguilar et al., 2014) and the W-LSSVR, EMD-LSSVR and EMD-ANN models (Xie et al., 2019).
In addition, the S-L2 and S-S2 performed better in the context of the COVID-19 pandemic. In this sense, some managerial insights for the prediction of the container throughput were obtained. First of all, the hybrid models can improve on the prediction performance of the single models. Configuration 2 can help policymakers to make an accurate decision during the operational planning of a port, especially in the context of anomalous events such as the COVID-19 pandemic. The results also indicated that, with the increase of the training dataset extensions, the prediction accuracy of the container throughput is higher. This suggests that transportation practitioners should keep a sufficient training dataset and reduce the forecasting horizons to improve the prediction accuracy. Finally, configuration 2 is suitable for the univariate time series, which can be easily implemented by strategic management and policymakers.
Conclusion
In this paper, to enhance prediction accuracy while addressing the nonlinearity and the multivariate limitations in container throughput forecasting, especially in the context of anomalous events (e.g. the COVID-19 pandemic), we proposed two hybrid models, each with two configurations (configuration 1: S-L1 and S-S1; configuration 2: S-L2 and S-S2), and compared them with benchmark models. We then explored how different training dataset extensions and forecasting horizons affect the predictions and analysed the influence of the COVID-19 pandemic on container throughput forecasting and maritime transportation. The conclusions of this study, based on the verification of the container throughput time series of four typical ports in the YRDP, are as follows.
• The hybrid models (configuration 2) we proposed can improve the performance of benchmark single models and also resolve the nonlinear problem and remove the multivariate limit, which provides an efficient decision-making tool for policymakers and port authorities. At the same time, configuration 2 can further improve the accuracy of the traditional hybrid models (configuration 1). • With the increase of the training dataset extensions, the accuracy of the models increased.
• Contrary to popular belief, with the increase of the forecasting horizon, there is insufficient evidence to indicate that the accuracy was lower. • Configuration 2 performs better than other models in the context of the COVID-19 pandemic.
In future research, the models proposed in this paper could be applied to other time series, such as stock prices, GDP and rainfall. Moreover, given sufficient data, the hybrid models in this paper could be extended to improve the accuracy of multivariate time series prediction.
Funding
This work has been partially funded by the MOLIÈRE project from the European Global Navigation Satellite Systems Agency (now EUSPA) under grant agreement No 101004275 and K.C.Wong Magna Fund at Ningbo University.
Declaration of competing interest
None.
Data availability
Data will be made available on request.
|
2022-09-08T13:16:05.658Z
|
2022-09-01T00:00:00.000
|
{
"year": 2022,
"sha1": "1d55e08e06a1fbbdeb64689f9572e11ba6b2754f",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.tranpol.2022.08.019",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "0673853ee63e7b987eb3d48ce95d63dd661b5371",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
250459673
|
pes2o/s2orc
|
v3-fos-license
|
Elevated Adipsin and Reduced C5a Levels in the Maternal Serum and Follicular Fluid During Implantation Are Associated With Successful Pregnancy in Obese Women
Introduction Complement factors mediate the recruitment and activation of immune cells and are associated with metabolic changes during pregnancy. The aim of this study was to determine whether complement factors in the maternal serum and follicular fluid (FF) are associated with in vitro fertilization (IVF) outcomes in overweight/obese women. Methods Forty overweight/obese (BMI = 30.8 ± 5.2 kg/m2) female patients, 33.6 ± 6.3 years old, undergoing IVF treatment for unexplained infertility were recruited. Baseline demographic information, including biochemical hormonal, metabolic, and inflammatory markers, and pregnancy outcome, was collected. Levels of 14 complement markers (C2, C4b, C5, C5a, C9, adipsin, mannose-binding lectin, C1q, C3, C3b/iC3b, C4, factor B, factor H, and properdin) were assessed in the serum and FF and compared to IVF outcome, inflammatory, and metabolic markers using multivariate and univariate models. Results Out of 40 IVF cycles, 14 (35%) resulted in pregnancy. Compared to women with failed pregnancies, women with successful pregnancies had higher levels of adipsin in the serum and FF (p = 0.01) but lower C5a levels (p = 0.05). Serum adipsin levels were positively correlated with circulating levels of vitamin D (R = 0.5, p = 0.02), glucagon (R = 0.4, p = 0.03), leptin (R = 0.4, p = 0.01), resistin (R = 0.4, p = 0.02), and visfatin (R = 0.4, p = 0.02), but negatively correlated with total protein (R = −0.5, p = 0.03). Higher numbers of top-quality embryos were associated with increased levels of C3, properdin, C1q, factors H and B, C4, and adipsin, but with reduced C2 and C5a levels (p ≤ 0.01). Conclusions Higher adipsin and lower C5a levels in the maternal serum during implantation are potential markers of successful outcome in obese women undergoing IVF-assisted pregnancies.
INTRODUCTION
The global obesity epidemic has led to detrimental consequences on human bodily functions including reproductive health (1), with obesity being recognized as one of the leading causes for the decrease in fertility rate. Infertility affects between 8% and 12% of the reproductive age group population, and in certain regions, such as the Middle East and North Africa, a region that includes Qatar, the infertility rate is estimated to be as high as 30%. During the last decade, Qatar has seen a steady rise in the overweight/obese population and a decline in fertility rates among women (2). Obese women have a higher incidence of menstrual dysfunction, anovulation, pregnancy complications, polycystic ovary syndrome, and fertility issues. Furthermore, the rate of miscarriages increases as BMI increases (3). Excess adipose tissue in obese women secretes elevated inflammatory adipocytokines that alter the hormonal balance and cause reproductive disturbances. Elevated levels of inflammatory cytokines, together with elevated insulin secretion, induce hyperandrogenemia and promote granulosa cell apoptosis, which may affect both ovaries (4) and endometrium (5), causing fertility problems.
Complement proteins are secreted by the immune cells, which contribute to danger sensing by activating complement receptors on their target cells. Such activation participates in the type and magnitude of the immune response together with signaling pathways activated in response to pattern recognition receptors (6).
Adipsin, also known as complement factor D, is a member of the trypsin family of peptidases and is secreted mainly from adipocytes, monocytes, and macrophages (7,8). It plays a critical role in the development of the C5-C9 membrane attack complex and the production of several signaling molecules, including the anaphylatoxins, C3a and C5a (9)(10)(11). It is also involved in the first step of activation of the alternate complement pathway (Supplementary Figure 1), where it produces the C3bBb complex (C3 convertase) by factor B and C3b. C3 convertase is responsible for cleaving C3a from C3 and releasing C3b (12). Adipsin regulates adipose tissue homeostasis and elevates glucose secretion (9). It also regulates the differentiation of adipocytes and promotes the accumulation of lipids, which is hypothesized to be a potential cause for the association of adipsin with metabolic disorders (13). Indeed, elevated adipsin levels have been associated with ischemia perfusion (14) and sepsis (15). Furthermore, increased levels of circulating adipsin were closely associated with polycystic ovary syndrome (16), mild cognitive impairment in type 2 diabetic mellitus patients (18), and coronary artery disease (17), suggesting adipsin as a promising biomarker for the diseases. C5a, on the other hand, is a very potent complement factor that plays a chemoattractant role by inducing the migration of many cells involved in the immune response and wound healing, including neutrophils and macrophages (19). It also links innate and adaptive immunity, extending its role in inflammation (19). Previous studies have suggested a role of C5a in the risk of diabetic kidney disease (20) and cardiovascular disease (21).
The plasma levels of adipsin and C5a were shown to be significantly elevated prior to delivery in pregnant women with preeclampsia (22); however, in healthy pregnant women, plasma adipsin and C5a were increased from the third trimester (23).
As noted above, the main function of adipsin is to catalyze the breakdown of complement factor C3; therefore, adipsin may affect downstream molecules such as C3a and C5a that have been shown to be increased in pregnancy, indicating that the complement system is activated during normal pregnancy (24).
The complement system exhibits both damaging and protective roles at the placental level. Activation of complement factors at the fetal-maternal interface protects against infectious agents and removes apoptotic and necrotic cells (25). However, various reports have implicated complement activation in the pathogenesis of adverse pregnancy outcomes (26)(27)(28)(29)(30). Despite evidence suggesting involvement of complement factors in obstetrics diseases, no study has investigated their concentrations with regard to in vitro fertilization (IVF) outcome. The hypothesis of this study was that complement factors are dysregulated in obese/overweight women who fail to have successful pregnancy following IVF treatment. To address this hypothesis, levels of 14 complement factors were assessed in the maternal serum and follicular fluid (FF) and compared in relation to IVF pregnancy outcome in overweight/obese women and other inflammatory markers.
Study Design
This was a prospective exploratory pilot cohort study and was performed from January 2017 to January 2018. Forty young overweight/obese Qatari female patients undergoing IVF for unexplained infertility were recruited at Hamad Medical Corporation. Inclusion criteria for the study are as follows: no concurrent illness, not on any medication for the preceding 9 months except fertility medications, and patient gave written informed consent. Exclusion criteria are as follows: women with diabetes, non-classical 21-hydroxylase deficiency, hyperprolactinemia, and Cushing's disease, and women who had androgen-secreting tumors were excluded from the study. Demographics, anthropometrics, and medical history data were collected, including age, ethnicity, socioeconomic background, vital signs, height, weight, menstrual cycle, period of infertility, medications, complications, comorbidities, and family medical history. Study participants had no medical condition or illness and all women were on folic acid 400 mcg daily, but no other medication. Demographic data are shown in Table 1. Blood samples were collected at the beginning of the IVF cycle (taken in the follicular phase of the cycle) and just prior to hormonal downregulation, and immediately processed and stored at −80°C pending analysis.
All patients underwent a standard IVF antagonist protocol (31). rFSH stimulation was started on day 2 of their menstrual cycle using Gonal-F (Merck Serono). To prevent a premature LH surge, the GnRH antagonist (Cetrotide: Merck Serono) was used. To monitor the ovarian response to stimulation, ultrasound scans were performed from day seven and every 2 days thereafter. The response to therapy was determined by follicular diameter and follicle numbers. Final maturation was initiated when two or more leading follicles were ≥18 mm using human chorionic gonadotrophin (hCG, Pregnyl, Merck Sharp and Dohme).
Oocyte retrieval was performed, and the FF was centrifuged and stored at −80°C until analysis. At the same time as oocyte retrieval, an additional blood sample was taken and prepared as noted above. Transcervical embryo transfer was performed and embryos were classified using standard criteria (32), with top-quality embryos on day 3 defined as per the Alpha Consensus ("Istanbul consensus workshop on embryo assessment: proceedings of an expert meeting," 2011) (33). Embryo transfers were performed either on day 3 or, ideally, on day 5 (blastocyst) for implantation. Blood biochemistry tests were conducted at the chemistry laboratory of Hamad Medical Corporation, Doha, Qatar. Pregnancy outcomes of gestational age at delivery, birth weight, maternal weight, blood pressure, and fetal outcome were recorded. Protocols were approved by the Institutional Review Boards of the Hamad Medical Corporation (15101/15) and Weill Cornell Medical College in Qatar (15-00016).
Human Complement-Related Protein Measurements
MILLIPLEX MAP Kit Human Complement Magnetic Bead Panels 1 and 2 (HCMP1MAG-19K and HCMP2MAG-19K) were used to measure levels of 14 complement factors in the sera and FF of participants according to the manufacturer's instructions (Merck Millipore, USA). Serum samples were diluted 200 times for complement panel 1, containing C2, C4b, C5, C5a, C9, adipsin, and mannose-binding lectin, and 40,000 times for complement panel 2, containing C1q, C3, C3b/iC3b, C4, factor B, factor H, and properdin, as per the manufacturer's instructions. Five-parameter logistic regression algorithms built into the Bio-Plex Manager 6 software were used to assess complement levels in reference to standards. Analysis was conducted using a Bio-Plex 200 instrument according to the manufacturer's instructions (BIO-RAD, Hertfordshire, UK).
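For readers unfamiliar with bead-array standard curves, the sketch below shows a generic five-parameter logistic (5PL) fit and its inversion in Python with scipy; it illustrates the principle only, not the Bio-Plex Manager implementation, and the standard concentrations and signals are made-up example values.

```python
# Generic five-parameter logistic (5PL) standard-curve fit used to interpolate analyte
# concentrations from immunoassay standards. Minimal sketch with scipy; example values only.
import numpy as np
from scipy.optimize import curve_fit

def five_pl(x, a, b, c, d, g):
    """5PL: d + (a - d) / (1 + (x / c) ** b) ** g."""
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

# Fit the curve to known standard concentrations (x) and their measured signals (y).
std_conc = np.array([1.37, 4.12, 12.35, 37.04, 111.1, 333.3, 1000.0])   # example values
std_signal = np.array([55, 140, 420, 1150, 2900, 6100, 9800.0])          # example values
params, _ = curve_fit(five_pl, std_conc, std_signal,
                      p0=[std_signal.min(), 1.0, np.median(std_conc), std_signal.max(), 1.0],
                      maxfev=10000)

def signal_to_conc(y, a, b, c, d, g):
    """Invert the fitted 5PL to estimate concentration from a measured signal."""
    return c * (((a - d) / (y - d)) ** (1.0 / g) - 1.0) ** (1.0 / b)
```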
Statistical Analysis
Comparisons were performed using the t-test, the Wilcoxon-Mann-Whitney test, one-way ANOVA, or linear models, as appropriate, using IBM SPSS Statistics 21. Linear regression models were used when analyzing differences in complement factor levels between pregnancy outcome groups, with age considered as a potential confounder. Correlations were assessed using Pearson's correlation in SPSS version 27. Data are presented as mean ± standard deviation (SD).
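As an illustration of this modelling approach (a linear model with age as a covariate plus Pearson correlations), a minimal Python sketch is given below; the variable names and the small example data frame are hypothetical and stand in for the SPSS analyses actually performed.

```python
# Sketch of the group comparison with age as a covariate, and a Pearson correlation,
# using Python (statsmodels / scipy) rather than SPSS; data and names are illustrative.
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# df is assumed to hold one row per patient with columns:
#   adipsin_serum, age, pregnant (1 = successful, 0 = unsuccessful), vitamin_d
df = pd.DataFrame({
    "adipsin_serum": [2.1, 2.9, 1.6, 3.2, 1.9, 2.7],
    "age":           [36, 29, 40, 31, 38, 30],
    "pregnant":      [0, 1, 0, 1, 0, 1],
    "vitamin_d":     [18, 27, 15, 30, 20, 25],
})

# Linear model: does pregnancy outcome explain adipsin level after adjusting for age?
model = smf.ols("adipsin_serum ~ pregnant + age", data=df).fit()
print(model.summary().tables[1])

# Pearson correlation between serum adipsin and vitamin D.
r, p = stats.pearsonr(df["adipsin_serum"], df["vitamin_d"])
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```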
RESULTS
Based on their pregnancy outcome, patients were dichotomized into successful (n = 14) and unsuccessful (n = 26) pregnancies. As shown in Table 1, women who had successful pregnancies were slightly younger (average 4.5 years, p = 0.01) with higher ALT (p = 0.01) and AST (p = 0.004), but lower LH (p = 0.03) and lesser infertility duration (p = 0.04).
Comparing Complement Factor Levels Between Successful and Unsuccessful IVF Cycles
Linear regression was used to compare levels of complement factors in the maternal serum and FF between women with successful and unsuccessful pregnancies. Among the 14 tested complement factors, levels of adipsin in the serum prior to the IVF cycle (Figure 1A) and in the FF (Figure 1B) were higher in women with successful pregnancies than in women with unsuccessful pregnancies (p = 0.01 and 0.05, respectively). Conversely, serum C5a levels (Figure 1C) were significantly higher in participants with unsuccessful pregnancies (p = 0.01) compared to those with successful pregnancies (Table 2). When the 14 complement factors, including adipsin, were measured in the serum at the time of oocyte collection, no significant differences in complement proteins were seen (data not shown).
Multivariate Analysis of Pre-IVF Serum and FF Comparison With Pregnancy Outcome
A multivariate OPLS-DA comparing levels of complement factors in pre-IVF serum and FF between women with successful versus unsuccessful pregnancies revealed one class-discriminatory component accounting for 47% of the variation in the data due to pregnancy outcome (R-squared-Y = 0.47) (Figure 2A). The corresponding loading score, shown in Figure 2B, confirms higher adipsin levels and lower C5a levels in women with successful pregnancies compared to women with unsuccessful pregnancies.
Correlation of Adipsin and C5a With Covariates
Pearson's correlation between levels of adipsin (serum and FF), C5a, and various inflammatory and metabolic disease markers showed positive correlations between pre-IVF serum adipsin and levels of vitamin D (R = 0.5, p = 0.02), glucagon (R = 0.4, p = 0.03), leptin (R = 0.4, p = 0.01), resistin (R = 0.4, p = 0.02), and visfatin (R = 0.4, p = 0.02), and a negative correlation with total protein (R = −0.5, p = 0.03).
Complement Factor Levels Associated With the Number of Top-Quality Embryos
Linear regression was used to identify the association between complement factors in the pre-IVF serum and FF and the number of top-quality embryos (ordinal). Higher numbers of top-quality embryos were associated with increased levels of C3 (Figure 4A), properdin, C1q, factors H and B, C4, and adipsin (Figure 4B and Table 3). Conversely, C2 levels were reduced with higher numbers of top-quality embryos (Figure 4C). Interestingly, C5a was also negatively correlated with the number of top-quality embryos (R = −0.2, p = 0.05) (Figure 4D).
Comparison of Complement Factors in Serum and FF With Embryo Quality
A multivariate OPLS analysis comparing levels of complement factors in serum and FF in relation to the number of top-quality embryos revealed one class-discriminatory component accounting for 44% of the variation in the data due to the number of top-quality embryos (R-squared-Y = 0.44) (Figure 5A). The corresponding loading score, shown in Figure 5B, confirms greater levels of serum C3, properdin, C1q, factors H and B, C4, and adipsin, but lower C2 (Figure 4C), with higher numbers of top-quality embryos.
DISCUSSION
It is critical to identify novel biomarkers that may identify those women who will have a favorable outcome and those who will not from those who are undergoing assisted reproductive technology; this may guide the therapeutic strategy. Whereas serum may offer a general indication of the woman's health, the FF during IVF treatment reflects the intimate environment of the developing cumulus/oocyte complex (34). Pregnancy activates the complement system by triggering the innate immune response, leading to elevation in the levels of C3a, C4a, and C5a. This elevation compensates for the inhibition of adaptive immunity during the normal pregnancy (24) and plays a critical role in pregnancy outcome (35). However, reports have indicated that circulating levels of C3a and C5a pose increased risk of preeclampsia (36). The interface between maternal and fetal tissues is enriched with complement inhibitors to protect the placenta from consequences of complement activation and adverse pregnancy outcomes (37). Complement components are also present in the mucosal secretions of the fallopian tubes, cervix, and uterus (38), suggesting that pregnancy failure could potentially result from excessive complement activation in the pre-implantation stage (35). In this study, we have assessed the levels of complement factors in the pre-IVF serum, serum at the time of oocyte retrieval, and FF samples obtained from women undergoing IVF cycles and compared their levels in relation to IVF outcome (i.e., successful or unsuccessful pregnancy and number of top-quality embryos). Out of 14 assessed complement factors, only adipsin and C5a were significantly associated with IVF outcome. Higher levels of pre-IVF serum and FF adipsin and lower levels of serum C5a were identified in successful pregnancies. In order to understand the potential mechanisms underlying these associations, levels of various cytokines, adipokines, and myokines were determined in the pre-IVF serum and FF samples from the same women and compared in relation to IVF outcome. Serum adipsin levels were positively correlated with circulating levels of vitamin D, glucagon, leptin, resistin, and visfatin, but negatively correlated with total protein. Higher numbers of top-quality embryos were associated with increased levels of C3, properdin, C1q, factors H and B, C4, and adipsin, but with reduced C2 and C5a levels. However, serum taken at the time of FF removal showed no correlation of the complement proteins or with any other parameters, suggesting that the intervention of the exogenous gonadotrophin therapy and the hormonal changes that resulted in oocyte stimulation have modified the complement system abrogating the predictive indices seen in the pre-IVF sera for these patients. The effect of exogenous gonadotrophin therapy on the complement system during oocyte stimulation is not known and needs further specific investigation. The higher levels of adipsin in the pre-IVF sera and FFs taken from women with successful pregnancies suggest a positive role of adipsin in the pregnancy outcome. The higher serum levels of adipsin prior to implantation could reflect a protective phenotype since serum adipsin levels are negatively associated with insulin resistance, especially in overweight and obese subjects (39). 
Studies have indicated that adipsin improves the maintenance of β-cell function, as adipsin knockout mice exhibit
glucose intolerance due to insulinopenia while replenishment of adipsin boosts their insulin secretion (9). Indeed, studies have shown that insulin resistance lowers implantation rate in the in vitro maturation/IVF embryo transfer cycle (40). Therefore, lower adipsin in failed cycles could reflect insulin resistance-mediated lowering of the maturation/IVF embryo transfer cycle. Our data showed that serum adipsin levels were positively correlated with serum vitamin D, glucagon, and various adipokines such as leptin, resistin, and visfatin. However, other studies suggested a negative role for adipsin in pregnancy being associated with metabolic changes during pregnancy (41,42), including pathogenesis of preeclampsia due to its role in activation of factor B with a direct association with the development of preeclampsia (43,44). Several studies have indicated the important physiological role of adipocytokines in metabolism (45). Leptin was suggested to be involved in the control of reproductive functions by acting both directly on the ovaries and indirectly on the central nervous system (46), although other reports have suggested no correlation with pregnancy outcome (47). Reports have also shown that resistin does not correlate with IVF outcome (48). Visfatin, on the other hand, was found to restore ovarian aging and fertility in aging mice (49). Furthermore, previous studies have shown that women with elevated vitamin D had more successful IVF cycles (50). Inflammation is another critical factor for embryo implantation during pregnancy. A proinflammatory environment is essential during embryo implantation (51), followed by suppression of inflammation for the rest of the pregnancy until onset of labor. However, if chronic or acute inflammation persists for a longer duration within the uterine cavity, this may increase the chance of spontaneous abortion or preterm labor. Obese women are likely to have a low-grade inflammatory state throughout pregnancy, which may compromise embryo implantation. This may explain why the success rate of IVF-assisted pregnancy is lower in obese women. Our data showing higher levels of C5a in the sera taken from women with unsuccessful pregnancies suggest a negative role for C5a in the pregnancy outcome. Our data therefore are in agreement with previous studies implicating elevated C5a levels in trophoblast dysfunction, impaired placental angiogenesis, and adverse pregnancy outcomes (36,52,53). Higher levels of C5a were also associated with preterm birth by increasing contraction frequencies (35), cervical remodeling, and fetal brain injury (54)(55)(56). Therefore, higher maternal C5a serum levels could be an indicator of failed IVF cycles, as suggested by our data.
Our data also indicate that higher numbers of top-quality embryos are associated with increased levels of C3, properdin, C1q, factors H and B, C4, and adipsin, but with reduced C2 and C5a levels. Levels of specific substances in FF were previously shown to be associated with fertilization outcome and early post-fertilization development, including elevated levels of LH, growth hormone (GH), prolactin, 17b-estradiol (E2), and insulin-like growth factor (IGF)-I and lower IL-1 in women with successful pregnancies. Furthermore, LH and GH levels were higher in follicles, resulting in top-quality embryos with the best morphology and fastest cleavage rate (57). The immunomodulatory role of locally produced complement factors, including C3, properdin, C1q, C4, adipsin, and factors H and B, in immunological tolerance and cellular survival was previously established (58)(59)(60). The positive correlation between top-quality embryos and these complement factors could reflect a pro-survival environment for developing oocytes that resulted in better-quality embryos. Future studies are warranted to investigate the functional relevance of these associations in larger independent cohorts.
A strength of this study is that our cohort did not include known causes of infertility, although further studies on specific causes of infertility are needed. The use of pre-IVF serum together with matched serum taken at the time of oocyte retrieval, when the FF samples were collected, is also a major strength. Limitations include the fact that our complement cascade panel comprised only 14 complement-related protein markers, so other crucial proteins involved in the activation of complement cascades were not measured, including protein D hydrolysis and activation mechanisms associated with the MASP proteases. Our pilot study was undertaken in a small cohort of women, and larger studies are needed in the future to confirm and extend our findings. In addition, all the women were Qatari and, therefore, the results may not be generalizable to other ethnic groups.
CONCLUSIONS
This study reports, for the first time, higher adipsin and lower C5a levels in the pre-IVF serum in successful IVF-assisted pregnancies, and shows positive correlations between complement factors and embryo quality, with potential utility as predictive biomarkers of successful pregnancies in obese women.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author.
ETHICS STATEMENT
The study was approved by the Institutional Review Boards of the Hamad Medical Corporation (15101/15) and Weill Cornell Medical College in Qatar (15-00016) research Ethics Committee, and all study participants signed an informed consent form prior to participation. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
MR designed the study and performed the measurements. ID and NR analyzed the data and performed statistical analysis and prepared the tables. YD, LA, TA, MS, MB, EE, HB, and SA supervised clinical studies, recruited the patients, and collected demographics. AB prepared pathway analysis for the manuscript. SA, MR, AA-S, and ME supervised the study and interpreted data. ME wrote the manuscript. All authors contributed to the article and approved the submitted version.
|
2022-07-13T13:08:55.657Z
|
2022-07-13T00:00:00.000
|
{
"year": 2022,
"sha1": "a64f846e8788e2677d213b3053de83375bbe9fb8",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "a64f846e8788e2677d213b3053de83375bbe9fb8",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
}
|
227256066
|
pes2o/s2orc
|
v3-fos-license
|
Mitochondrial dysfunction in sepsis is associated with diminished intramitochondrial TFAM despite its increased cellular expression
Sepsis is characterized by a dysregulated immune response, metabolic derangements and bioenergetic failure. These alterations are closely associated with a profound and persisting mitochondrial dysfunction. This however occurs despite increased expression of the nuclear-encoded transcription factor A (TFAM) that normally supports mitochondrial biogenesis and functional recovery. Since this paradox may relate to an altered intracellular distribution of TFAM in sepsis, we tested the hypothesis that enhanced extramitochondrial TFAM expression does not translate into increased intramitochondrial TFAM abundance. Accordingly, we prospectively analyzed PBMCs both from septic patients (n = 10) and lipopolysaccharide stimulated PBMCs from healthy volunteers (n = 20). Extramitochondrial TFAM protein expression in sepsis patients was 1.8-fold greater compared to controls (p = 0.001), whereas intramitochondrial TFAM abundance was approximate 80% less (p < 0.001). This was accompanied by lower mitochondrial DNA copy numbers (p < 0.001), mtND1 expression (p < 0.001) and cellular ATP content (p < 0.001) in sepsis patients. These findings were mirrored in lipopolysaccharide stimulated PBMCs taken from healthy volunteers. Furthermore, TFAM-TFB2M protein interaction within the human mitochondrial core transcription initiation complex, was 74% lower in septic patients (p < 0.001). In conclusion, our findings, which demonstrate a diminished mitochondrial TFAM abundance in sepsis and endotoxemia, may help to explain the paradox of lacking bioenergetic recovery despite enhanced TFAM expression.
Sepsis is defined as an acute organ dysfunction caused by a dysregulated immune response to an infection, affecting millions of individuals per year worldwide and representing a major healthcare concern 1,2 . Interest has increasingly focused on the link between sepsis-associated organ failure and mitochondrial dysfunction 3,4 . Mitochondria generate most of the adenosine triphosphate (ATP) required for normal cellular function, but are also involved in multiple intracellular signaling and regulatory processes such as intracellular calcium regulation and production of reactive oxygen species [5][6][7] . These important regulatory mechanisms seem to be profoundly disturbed in human sepsis, which can result in mitochondrial dysfunction and reduced oxidative ATP production 3,4,8 .
Impaired mitochondrial functionality and ability to recover likely contribute to organ dysfunction and death 3,4,9,10 . However, mitochondrial dysfunction seems to be highly variable and should not be seen as a general denominator for multiple organ failure in sepsis and septic shock 11 . Nevertheless, describing potential cellular and mitochondrial abnormalities could help to improve our still insufficient understanding of mitochondrial dysfunction in sepsis. Generally, mitochondrial injury and ATP depletion trigger an increased activation of mitochondrial biogenesis, aimed at ameliorating the cellular effects of mitochondrial dysfunction [12][13][14] ; this activation involves a signaling network that converges on the nuclear-encoded mitochondrial transcription factor A (TFAM) 12 . TFAM regulates de novo synthesis of mitochondrial proteins, facilitates mitochondrial DNA replication, and mediates mitochondrial DNA protection 15,16 . TFAM is a ~ 24 kDa protein with non-specific DNA-binding properties. After cytosolic synthesis as a precursor protein (~ 29 kDa), TFAM is shuttled to the mitochondria, crossing the outer and inner membranes. Mature TFAM is then generated by cleavage of a targeting sequence (~ 5 kDa) by a processing peptidase in the mitochondrial matrix 17,18 . Lack of mature TFAM entails mitochondrial dysfunction and an energy crisis, with insufficient TFAM resulting in possible death 19 . The ability to resolve a critical condition such as sepsis-induced organ failure hence could depend on the ability to increase intramitochondrial TFAM abundance so as to restore adequate mitochondrial function 10,12,17,[20][21][22] . Specifically, TFAM plays a central role in the mitochondrial core transcription initiation complex 23 that is required not only for expression of mitochondrial-encoded respiratory chain subunits but also for mitochondrial DNA replication 24,25 . Recent studies, however, provide growing evidence that activation of mitochondrial biogenesis in sepsis, although associated with an increased intracellular TFAM expression, is not necessarily accompanied by recovery of mitochondrial function 10,[26][27][28] . This raises the question as to whether steps in TFAM's production and activity, from nuclear transcription to intramitochondrial actions, are disturbed. Specifically, the functionally important intramitochondrial TFAM has not been explored in cells from septic patients or cellular surrogate models of sepsis.
Accordingly, to test the hypothesis that enhanced TFAM gene expression does not translate into an increased abundance of intramitochondrial TFAM, consistent with the persistence of mitochondrial dysfunction, we studied both peripheral blood mononuclear cells (PBMCs) from sepsis patients and lipopolysaccharide (LPS)-stimulated PBMCs drawn from healthy volunteers.
Results
Baseline characteristics of the septic patients are shown in Table 1. The SOFA score at inclusion was 10 ± 4, and 9 patients required norepinephrine for blood pressure support. Thirty-day mortality was 40%. The 20 healthy volunteers comprised 9 females and 11 males with a mean age of 39 ± 9 years. The age-adapted subgroup of controls (n = 9) comprised 4 females and 5 males with a mean age of 54 ± 7 years.
All these observations were replicated in LPS-stimulated PBMCs from healthy volunteers serving as controls ( Fig. 2a-h). Here, LPS stimulation of PBMCs also evoked a 1.5-fold increase of extramitochondrial TFAM protein at 24 h (p = 0.003) and a twofold increase at 48 h (p < 0.001) compared to unstimulated controls (Fig. 2f,g). However, the functionally important intramitochondrial TFAM diminished over time despite the increase in extramitochondrial TFAM (Fig. 2f,h). Indeed, intramitochondrial TFAM had halved at 24 h (p = 0.038) and further decreased to 40% at 48 h (p = 0.002). Thus, LPS stimulation mirrored the altered intracellular distribution of TFAM showing a decreased intramitochondrial presence despite increased extramitochondrial presence.
To assess the functional relevance of intramitochondrial TFAM in PBMCs from sepsis patients, we quantified the interactions of TFAM with mitochondrial transcription factor B2 (TFB2M), which together form the mitochondrial core transcription initiation complex (Fig. 3a-c). Here, we found a marked decrease of 74% in PLA signals per cell, as a measure of protein interactions (Fig. 3a, p < 0.001), when comparing PBMCs from septic patients (1.2 signals per cell; 95%-CI 0.7 to 1.6, Fig. 3c) to controls (4.5 signals per cell; 95%-CI 3.7 to 5.2, Fig. 3b), in line with diminished mitochondrial TFAM. Of special clinical interest, the diminished protein interactions of TFAM with TFB2M in PBMCs of septic patients inversely correlated with the SOFA score (r² = 0.58; p = 0.011). This may indicate a potential association between intramitochondrial TFAM abundance and the severity of sepsis-related organ dysfunction.
Discussion
Our study reveals that TFAM abundance in mitochondria decreases during the early inflammatory phase of sepsis, despite cellular upregulation of TFAM expression. Deprivation of intramitochondrial TFAM in turn was associated with reductions in mitochondrial DNA copy number, mitochondrial NADH dehydrogenase subunit 1 expression, and decreased cellular ATP content, all suggesting decreased cellular energy supply.
Over the last two decades, several studies have provided substantial evidence that sepsis-related organ failure may relate to mitochondrial dysfunction and lack of bioenergetic recovery 3,9,[29][30][31][32] . Mitochondrial recovery mainly depends on a sufficient upregulation of mitochondrial biogenesis 10,33 . Notably, the early inflammatory response in sepsis amplifies expression and activation of factors stimulating mitochondrial biogenesis and repair such as TFAM, in line with our findings 10,14,34 . Indeed, it seems evolutionarily prudent that an inflammatory response, especially to sepsis, triggers TFAM expression to mitigate harm inflicted by excessive inflammation. In line with other studies, we found an early upregulation of TFAM mRNA and an increase in extramitochondrial TFAM protein in septic patients that was mirrored within 24 to 48 h by LPS stimulation of PBMCs from healthy volunteers 10,26,33,34 . High TFAM concentrations have been described as clinically beneficial, potentially alleviating organ dysfunction, whereas a lack of TFAM has detrimental effects on organ function and outcome 12,19,20 . While the correlation between mitochondrial function in PBMCs and that of organs remains contested, investigations of cells from solid organs would require invasive biopsies, which would hardly be ethically justified. However, tests of peripheral blood PBMCs have been proposed to offer valid information about "general" mitochondrial health 35 . Thus, the timely (within the first 24 h) increases in cellular TFAM expression in PBMCs of septic patients and in endotoxemia, as supported by our results, may be beneficial and promote mitochondrial recovery. Nevertheless, our data cannot prove a causal correlation or fixed association of mitochondrial function between peripheral blood cells and other solid organs in sepsis, especially since other studies in this context provide heterogeneous results needing further clarification [36][37][38][39] .
As shown by the progressive decrease both in mitochondrial DNA copy number and ATP content, mitochondrial function still deteriorated despite increased cellular TFAM concentrations. A compromised mitochondrial biogenesis despite an early activation of important transcription factors has already been described by others, including investigations on mitochondrial function in cardiomyocytes 10,26,32 . Remarkably, following LPS stimulation, impaired recovery of the mitochondrial respiratory chain was observed despite an increase in the nuclear transcription factor PGC-1α, a master regulator of mitochondrial biogenesis 27 . In this context, our results shed light on a potential intracellular maldistribution of TFAM in sepsis and endotoxemia resulting in decreased intramitochondrial TFAM, although its source, i.e., extramitochondrial TFAM, is conserved or even increased.
Nevertheless, there are several questions that need further clarification. For example, is this problem TFAM-specific, or does it also affect other mitochondrial proteins? Are the observed findings different in cells of solid human organs, in particular brain, kidney, heart, and liver? Furthermore, do our observations in PBMCs from sepsis patients and LPS-stimulated controls reflect an adaptive re-programming of electron transport chain function rather than a primary damage-inflicting mechanism of sepsis? Although further exploration of the molecular mechanisms potentially responsible for our observations was beyond the scope of the present study, these questions warrant further work.
Table 1. Baseline characteristics of sepsis patients. Data are presented as n (%) or mean (± SD), as appropriate. The presented characteristics refer to baseline measurements recorded on study inclusion. There were no missing data.
A strength of our study is that we could independently support our finding of decreased intramitochondrial TFAM with a dramatically diminished TFAM/TFB2M protein interaction rate; this protein interaction makes up the human mitochondrial transcription initiation complex. The decrease of this interaction was accompanied by a profoundly affected mitochondrial transcription and replication machinery, as shown by decreases in mitochondrial DNA copy number, mitochondrial NADH dehydrogenase subunit 1 mRNA expression, and cellular ATP content. Therefore, TFAM does not seem to reach its proper site of action where it is needed. However, our data do not allow us to decide whether the described phenomenon is a TFAM-specific problem or whether several proteins are affected simultaneously. Therefore, the results of our PLA could also be influenced by a decreased TFB2M concentration or by post-transcriptional modifications that are currently unknown.
Though our data do not provide causal or more detailed mechanistic molecular insights, our findings provide reason to speculate that a diminished intramitochondrial TFAM concentration may impair mitochondrial recovery and energetics. We also hypothesize that it may contribute to organ dysfunction and death from sepsis. In this context, hampered mitochondrial protein import may represent an interesting mechanism evoking the diminution of intramitochondrial TFAM 40,41. Recent studies also indicate that the complex mitochondrial protein import machinery is strictly regulated, suggesting a remarkable diversity of potential mechanisms underlying our findings 40. Other explanations are also possible; for example, a higher proteolytic activity within the mitochondrion in sepsis and endotoxemia could also explain our results. Our data do not provide a definitive answer as to which mechanisms are responsible for our observations. In addition, a detailed investigation of mitochondrial oxidative metabolism might have expanded our mechanistic insights but was beyond the scope of the present study. Therefore, further research is warranted to elucidate the molecular mechanisms underlying the alterations in cellular and intramitochondrial TFAM distribution, their causes, and their consequences. Aside from mechanistic considerations, it is intriguing to speculate that the extent and duration of the apparent intracellular TFAM maldistribution might represent a prognostic biomarker, since TFAM/TFB2M protein interactions strongly correlated with the SOFA score of our sepsis patients. Further studies in larger groups of patients are needed to explore this hypothesis.
Conclusion
Despite increased extramitochondrial TFAM, both intramitochondrial TFAM abundance and its functionally important protein interaction with TFB2M are decreased in PBMCs from sepsis patients. This is associated with a decreased mtDNA copy number and cellular ATP content, suggesting mitochondrial dysfunction. All these findings can be replicated when PBMCs are exposed to LPS. Taken together, this suggests that intracellular TFAM maldistribution could be an important feature of sepsis and calls for further studies investigating the molecular mechanisms responsible for the observed findings.
Materials and methods
Study design and oversight. We conducted a prospective, observational, single-center, in vitro and in vivo study registered in the German clinical trials database (DRKS00015619) prior to first patient enrollment. The Ethics Committee of the Medical Faculty of the Ruhr-University of Bochum (protocol no. #18-6257) reviewed and approved the study and written informed consent was obtained from healthy subjects and patients or their guardians, as appropriate. This study was conducted in accordance with the revised Declaration of Helsinki, good clinical practice guidelines, and local regulatory requirements.
Patient and volunteer cohorts and treatments.
We recruited twenty healthy subjects from the Medical Faculty of the Ruhr-University Bochum between October 10 and December 21, 2018, who had been free from infection for at least 4 weeks prior to study participation. Blood was drawn and PBMCs were isolated as described below.
For subsequent experiments, PBMCs of healthy subjects were seeded at a density of 2 × 10⁷ cells per well and incubated with or without 10 µg/mL LPS (Supplementary Figure 1; Escherichia coli type O111:B4; L4391, Sigma-Aldrich, St. Louis, MO). Serial in vitro measurements were performed at baseline prior to LPS stimulation and at 0.5, 4, 24, and 48 h.
Septic patients were considered eligible if they fulfilled the criteria for sepsis as defined by the current Sepsis-3 definition 1 and if enrollment, written informed consent, and blood sampling had been completed within the first 24 h after the diagnosis of sepsis. Exclusion criteria were age under 18 years, pregnancy, pre-existing anemia, known mitochondrial disorder, and the decision to withhold or withdraw life-sustaining treatment on the day of study inclusion. Ten septic patients admitted to the intensive care unit (ICU) of the University Hospital Knappschaftskrankenhaus Bochum between December 3, 2018 and February 28, 2019 were included. PBMCs of these patients were isolated as described below; cells were not stimulated and were processed directly. We followed all patients for 30-day survival, commencing from the day of the diagnosis of sepsis.
Isolation of peripheral blood mononuclear cells. Peripheral blood mononuclear cells (PBMCs) were isolated using a density gradient centrifugation protocol (Ficoll Paque solution, GE Healthcare Bio Science AB, Uppsala, Sweden). Briefly, cells were centrifuged in Ficoll Paque solution, forming a PBMC-rich layer that was collected. PBMCs of septic patients were processed directly, as described below. Isolated cells of healthy volunteers were resuspended in full RPMI 1640 medium (Invitrogen, Carlsbad, CA) containing 10% fetal calf serum (FCS) (Biochrom AG, Berlin, Germany) and 100 U/mL penicillin plus 100 μg/mL streptomycin (both Invitrogen), and were held at 37 °C in a humidified atmosphere containing 5% CO2 until further use.
Isolation of mitochondria.
To assess and compare intramitochondrial TFAM concentrations with those of the mitochondria-free cytonucleoplasm, mitochondria were isolated for each measurement using a procedure adapted from the protocol published by Argan et al. 42. Briefly, the supernatant of the PBMCs was collected for quantification of TNF-α, IL-6, and IL-10 as described below; for septic patients, blood serum was used. The cells were then osmotically swollen and mechanically homogenized to release the mitochondria. The mitochondria were separated from the cytonucleoplasm and cellular debris by sequential centrifugation steps, then lysed, and protein was isolated. The quality of the mitochondrial isolation procedure was validated as shown in Supplementary Figures 2 and 3.
Cytokine concentrations. Supernatant of PBMCs was collected as described above and used for quantifying the cytokines TNF-α, interleukin-6, and interleukin-10 with appropriate human ELISA kits (all BioLegend, San Diego, CA) according to the manufacturers' instructions. Each cytokine concentration was derived from the respective calibration standard curve.
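As an illustration of how concentrations are read off a calibration standard curve, the sketch below fits a four-parameter logistic (4PL) model to hypothetical TNF-α standards and inverts it for an unknown sample. The curve model, standard concentrations, and absorbances are illustrative assumptions, not values from the kits used in this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """Four-parameter logistic model commonly used for ELISA calibration curves."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical TNF-alpha standards (pg/mL) and their blank-corrected absorbances
std_conc = np.array([7.8, 15.6, 31.3, 62.5, 125.0, 250.0, 500.0])
std_abs = np.array([0.05, 0.09, 0.17, 0.33, 0.60, 1.05, 1.60])

params, _ = curve_fit(four_pl, std_conc, std_abs, p0=[0.02, 1.0, 150.0, 2.5], maxfev=10000)
a, b, c, d = params

def conc_from_abs(y):
    """Invert the fitted 4PL curve to interpolate a sample concentration."""
    return c * (((a - d) / (y - d)) - 1.0) ** (1.0 / b)

print(round(float(conc_from_abs(0.45)), 1))  # pg/mL for a hypothetical sample absorbance
```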
For PGC-1α quantification, a separate nuclear protein extraction was performed: cells were centrifuged at 4000g and the pellet was resuspended in Pre-Extraction Buffer (Abcam, Cambridge, UK), allowing the cells to swell on ice. After vortexing and further centrifugation, the pellet was dissolved in Complete Lysis Buffer (Active Motif, Carlsbad, CA). The lysate was then sonicated to ensure complete lysis and centrifuged at 13,000g. The concentration of PGC-1α was measured using a dedicated human ELISA kit (Wuhan EIAab Science Co, Wuhan, China) according to the manufacturer's instructions.
Cellular ATP content. The CellTox Green cytotoxicity assay (Promega, Madison, WI) was used to assess the degree of cytotoxicity, which remained below 15% in all cases under our experimental conditions (Supplementary Figure 4). Briefly, cells were seeded in 96-well plates and stimulated as described above; CellTox Green reagent was then added and incubated for 15 min, and fluorescence was recorded at 520 nm. To assess the cellular ATP content, we performed a luciferase-based assay (CellTiter-Glo 2.0 Assay, Promega, Madison, WI) following the manufacturer's instructions. After assessment of cytotoxicity as described above, CellTiter-Glo 2.0 reagent was added to the wells and incubated for 10 min, and the luminescence was subsequently recorded.
Expression of TFAM, mitochondrial NADH dehydrogenase subunit 1, and mitochondrial DNA. To assess the gene products by quantitative polymerase chain reaction, total DNA and RNA were extracted from PBMCs using the QIAamp and RNeasy kits, respectively, according to the manufacturer's instructions (QIAGEN, Hilden, Germany). For mRNA analysis, the purified RNA was reverse transcribed into complementary DNA using the QuantiTect Reverse Transcription Kit (QIAGEN). Polymerase chain reaction was performed in duplicate using the GoTaq qPCR Master Mix (Promega) and specific primers (see Supplementary Figure 5) on a CFX Connect Real-Time System (Bio-Rad Labs). Relative mRNA expression was calculated with the 2^−ΔΔCT method 43 after normalization to beta actin and ribosomal protein lateral stalk subunit P1 as internal controls. Mitochondrial DNA copy number was quantified with the 2^−ΔΔCT method 10 as the ratio of DNA products of mitochondrial NADH dehydrogenase subunit 1 normalized to 18S ribosomal RNA serving as an internal control.
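The 2^−ΔΔCT calculation referenced above reduces to a few lines. The sketch below uses hypothetical Ct values and a single reference gene for illustration (the study normalized to two internal controls):

```python
def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method: normalize the target gene to the
    reference gene in each condition, then compare the treated with the control sample."""
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_treated - dct_control
    return 2.0 ** (-ddct)

# Hypothetical Ct values: TFAM vs. beta actin, 24 h LPS vs. baseline
print(round(ddct_fold_change(24.1, 18.0, 26.3, 18.2), 2))  # >1 indicates upregulation
```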
Mitochondrial interaction of TFAM with mitochondrial transcription factor B2. To quantify the mitochondrial protein interaction of TFAM with transcription factor B2 (TFB2M), a proximity ligation assay (PLA) was performed 24. Primary antibodies against TFAM (1:50, sc-376672, Santa Cruz Biotechnology) and TFB2M (1:50, 13676, Abcam) were incubated for 1 h at room temperature. Proximity probes (anti-Mouse Plus, DUO92001, and anti-Goat Minus, DUO92006; both Sigma-Aldrich, each 1:5) were incubated for 1 h at room temperature, and S3 splint and S3 backbone oligonucleotides (Biomers.net, Ulm, Germany) were hybridized, ligated, and amplified (Supplementary Figure 5). The rolling circle products were visualized with a detection oligonucleotide 44. Images were analyzed with a CellProfiler pipeline quantifying the proximity ligation assay signals with single-cell resolution.
Statistical analysis. This is the primary analysis of these data. The characteristics of the patients are reported as percentages for categorical variables and as means with SD or medians with interquartile ranges (25th; 75th percentile), as appropriate. Categorical variables were compared using McNemar's or Fisher's exact test, as appropriate. Continuous independent variables were compared using Student's t-test or the Mann-Whitney test. Continuous dependent variables were compared using the paired-samples Student's t-test or the Wilcoxon signed-rank test, as appropriate. To explore potential age-related bias, we additionally formed a subgroup covering only controls older than 40 years (Supplementary Figure 6). The relationship between mitochondrial TFAM (protein concentration or protein interactions) and the SOFA score was evaluated using Spearman's correlation.
A p-value of less than 0.05 was considered statistically significant. All CIs were calculated with a coverage of 95%. All analyses were performed using SPSS (version 25, IBM, Chicago, IL, USA). For graphical presentations GraphPad Prism 8 (Graph-Pad, San Diego, CA, USA) was used.
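A minimal sketch of the paired comparisons and the Spearman correlation described above, using SciPy on hypothetical data (the analysis here was performed in SPSS; the values below are illustrative, not study data):

```python
import numpy as np
from scipy import stats

# Hypothetical paired readouts (e.g., TFAM/TFB2M PLA signals per cell) at baseline and 24 h
baseline = np.array([1.8, 2.1, 1.6, 2.4, 1.9, 2.2, 1.7, 2.0, 2.3, 1.5])
after_24h = np.array([1.1, 1.4, 0.9, 1.6, 1.2, 1.3, 1.0, 1.2, 1.5, 0.8])

t_stat, p_paired_t = stats.ttest_rel(baseline, after_24h)   # paired-samples t-test
w_stat, p_wilcoxon = stats.wilcoxon(baseline, after_24h)    # Wilcoxon signed-rank test

# Association between a mitochondrial readout and the SOFA score (Spearman's rho)
sofa = np.array([4, 7, 9, 6, 11, 8, 5, 10, 12, 7])
rho, p_rho = stats.spearmanr(after_24h, sofa)

print(p_paired_t, p_wilcoxon, rho, p_rho)
```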
Data availability
The complete source data of this manuscript is provided as Supplementary Information.
|
2020-12-04T05:07:37.193Z
|
2020-12-01T00:00:00.000
|
{
"year": 2020,
"sha1": "aef124aa32e0314bf38e838dad6fd680de44441a",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-020-78195-4.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "aef124aa32e0314bf38e838dad6fd680de44441a",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
21965547
|
pes2o/s2orc
|
v3-fos-license
|
Demonstration of broad photonic crystal stop band in a freely-suspended microfiber perforated by an array of rectangular holes
It is shown that photonic crystal (PhC) optical reflectors with reflectance in excess of 60% and fractional bandwidths greater than 10% can be fabricated by ion-beam milling of fewer than ten periods of rectangular cross-section through-holes in micron-scale tapered fibers. The optical characteristics agree well with numerical simulations when allowance is made for fabrication artifacts, and we show that the radiation loss, which is partly determined by optical interference, can be suppressed by design. The freely-suspended devices are compact and robust and could form the basic building block of optical cavities and filters.
Introduction
Silica optical fibers are without doubt the most important type of optical waveguide by virtue of their excellent transparency, ease of functionalization by microstructuring or doping, and straightforward, low-loss integration [1]. In the emerging area of nanophotonics, microfiber devices such as high-Q, small-mode-volume cavities created by tapering and structuring of conventional fibers are potentially important tools: the low refractive index of silica leads to a substantial evanescent field outside the waveguide, and the devices are freely positionable, allowing stronger interaction with and more flexible access to micro- or nano-scale optical components and materials outside the waveguide. In comparison, high-index semiconductor-based optical waveguides such as silicon wires are not easily reconfigurable because they have to be attached to a supporting substrate and need additional coupling optics. In addition, they have weak evanescent fields and usually exhibit much higher propagation loss due to material absorption and sidewall roughness [2]. Despite such disadvantages, nearly all nano-photonic devices have so far been based on semiconductor platforms. This can be partly ascribed to the easy creation of high-Q, small-volume cavities in high-index waveguides as a result of the large index difference between semiconductor and air. It is recognized that such cavities are a prerequisite for enhanced light-matter interactions and high-density photonic integration. As an example, a broadband Fabry-Perot cavity in a silicon wire formed by periodic removal of material [3] requires only a few silicon-air periods to make each mirror [we call this basic building block a photonic crystal (PhC) reflector], so the cavity length can be very short. In contrast, to do the same in a conventional germanium-doped silica fiber Bragg grating (FBG) requires hundreds of periods [4,5]; the effective cavity is thus much longer due to the substantial field penetration into the mirrors, and the bandwidth is small. This is simply because of the low index contrast in the silica/doped-silica system.
Employing microfiber, which can be precisely tapered down from a conventional fiber, and then manufacturing structures by etching away selected portions seems like a good solution to improve the refractive index contrast in a fiber-based approach. Plasma-etched corrugations were first made and tested on the surface of 10 µm thick microfibers [6], but strong PhC effects were not observed in this simple prototype reflector. Later, Liu et al [7] and Nayak et al [8] adopted the more versatile focused ion beam (FIB) milling technique to make PhC reflectors in microfibers. In Liu et al's work, shallow gratings were used, but the fractional reflection bandwidth was less than 0.3%. Nayak et al pushed this value to ~2% by creating deeply-indented recesses on both sides of a sub-wavelength fiber. However, both these groups restricted the depth of the etched structures to retain mechanical strength. In order to further raise the index contrast, drilling air holes into the central region of the microfiber, where the field of the guided wave is highest, is a possible way forward, as demonstrated by Ding et al [9], who created a PhC reflector with a reflection bandwidth of ~10% by milling an array of biconvex cross-section air-holes in a 1.34 µm diameter fiber. However, in the latter work a substrate was required for mechanical support, so that applications are very limited compared with a freely suspended fiber. The contact of the microfiber with the supporting material also causes serious light leakage, which greatly degrades the spectral performance.
In order to resolve the conflicting requirements of high index contrast and mechanical strength, in this letter we report work on a slightly bigger diameter (1.7 µm) microfiber with a PhC reflector made using rectangular through-holes and find that a broad reflection bandwidth (~20%) and robust freely-suspended operation can be combined. The symmetrical shape of our etched holes prohibits energy coupling to odd-parity modes and means we need only satisfy a single-even-mode rather than the conventional single-mode condition, which we show is an advantage in that larger diameter fibers can be used. We also measure the radiation losses in the centre of the stop band of such a PhC reflector and show that the variation with the number of periods can be explained as an interferometric process which could be exploited to lower the total radiation loss by careful choice of the number of periods. Figure 1 plots the wave-vector diagrams of periodic quarter-wave stacks consisting of silicon/air [Fig. 1(a)], silica/air [Fig. 1(b)], and silica/germanosilicate [Fig. 1(c)]. The central wavelengths of their stop bands and the refractive index difference between silica and germanosilicate are 1.55 µm and 0.01, respectively. A transfer matrix method was employed to calculate the real and imaginary parts of the wave vector k of the Floquet-Bloch wave as a function of the vacuum wavelength λ0. From Fig. 1, it is seen that the peak value of the imaginary part of k and the width of the stop band in the silicon/air stack are three times those in the silica/air stack and ~150 times greater than those in the silica/germanosilicate stack (the fractional bandwidths are 0.74, 0.24, and 0.0042, respectively). From this comparison, a silica/air periodic structure should behave similarly to a silicon/air one. Figure 2(a) depicts an adiabatically tapered fiber drawn from a conventional single-mode fiber (SMF). The core mode of the SMF on the left is converted to the fundamental mode in the taper waist, and then converted back to the core mode of the SMF on the right with very high efficiency [10], verified by the measured insertion loss of less than 0.1 dB. As required by mode orthogonality, any higher-order mode generated in the taper waist region will be converted to radiation or cladding modes in the SMF, which are heavily attenuated at the lossy interface between the silica cladding and the polymer coating [6]. When rectangular holes are drilled through the central region of the fiber taper, the structural symmetry prohibits energy conversion from the fundamental mode (HE11) to the first three odd modes (TE01, TM01, HE21). This relaxes the usual condition for single-mode operation (V < 2.402) to that for single-even-mode operation (V < 3.832) [11], allowing us to use somewhat thicker and stronger tapers. A thicker microfiber also mitigates against any contamination-induced device degradation. In this work, we fix the diameter of the microfiber to be ~1.7 µm and the working wavelength to λ0 = 1.55 µm [the black dashed line in Fig. 2(b)], where only the fundamental and the first three higher-order guided modes exist, while the latter three guided modes are not excited.
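The stop-band comparison in Fig. 1 can be reproduced qualitatively from the standard Floquet-Bloch dispersion relation of a two-layer quarter-wave stack at normal incidence. The sketch below is an independent illustration rather than the authors' code, and the refractive indices (3.48 for silicon, 1.444 for silica, 1.454 for germanosilicate) are assumed values.

```python
import numpy as np

def quarter_wave_stack_gap(n1, n2, lam_c=1.55e-6, n_points=4000):
    """Fractional stop-band width of a two-layer quarter-wave stack at normal
    incidence, from the dispersion relation cos(K*Lambda) = rhs; |rhs| > 1 means
    the Bloch wavevector K is complex, i.e. the wavelength lies in the stop band."""
    d1, d2 = lam_c / (4 * n1), lam_c / (4 * n2)      # quarter-wave layer thicknesses
    lam = np.linspace(1.0e-6, 2.6e-6, n_points)      # vacuum wavelengths (m)
    k1, k2 = 2 * np.pi * n1 / lam, 2 * np.pi * n2 / lam
    rhs = (np.cos(k1 * d1) * np.cos(k2 * d2)
           - 0.5 * (n1 / n2 + n2 / n1) * np.sin(k1 * d1) * np.sin(k2 * d2))
    stop = np.abs(rhs) > 1.0
    if not stop.any():
        return 0.0
    lam_lo, lam_hi = lam[stop].min(), lam[stop].max()
    return lam_c * (1.0 / lam_lo - 1.0 / lam_hi)     # fractional width in frequency terms

for label, (n1, n2) in {"Si/air": (3.48, 1.0),
                        "SiO2/air": (1.444, 1.0),
                        "SiO2/Ge:SiO2": (1.454, 1.444)}.items():
    print(label, round(quarter_wave_stack_gap(n1, n2), 4))
# prints roughly 0.75, 0.23 and 0.004, close to the values quoted in the text
```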
Experiments and simulations
The lengths of each taper transition and the taper waist are ~20 mm. As shown in Fig. 3(a), nine rectangular holes (1.04 × 0.16 µm²) are milled along a 1.72 µm-thick section of fiber taper with a period Λ = 0.63 µm. The dimensions are determined by scanning electron microscopy. During the milling, a clean silicon wafer serves as a conducting substrate and reduces charging effects. Neither metal nor dielectric coatings [9] are employed, to avoid any contamination. The beam spot size and depth of focus of our FIB facility (FEI DB235) are 14 nm and 10 µm, respectively. After milling, the fiber taper was detached from the silicon wafer. Although the etched holes occupy 72% of the overall cross section, the microfiber device exhibits surprisingly robust mechanical strength when detached from the substrate. The fractional bandwidth of the stop band (estimated as in Ref. [12]) of our periodic waveguide is greater than 10%, which agrees well with the spectral measurements. For further comparison, Figs. 3(b) and 3(c) also show finite-difference time-domain (FDTD) simulated results for two polarizations together with their averages. In the simulation, two 1.26 µm diameter microfibers, whose single-even-mode (V < 3.832) operation condition covers the wavelength region λ0 > 1.1 µm, are used as the input and output waveguides. Via two adiabatic transitions, they are connected with a 1.72 µm-diameter microfiber containing an array of perfectly rectangular through-holes. Such a configuration mimics experiment by ensuring that only the fundamental mode (HE11) contributes to the spectra. Without these fictitious input and output microfibers, even-parity higher-order modes (EH11, HE31, …) would contribute to the transmittance and reflectance in the simulation. Note that, in the short wavelength region (λ0 < 1.46 µm), the HE11 mode couples to these even-parity higher-order modes in the structured microfiber region, while in experiments these modes are converted to cladding modes in the SMF and are strongly attenuated.
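As a rough check of the mode-number arguments above, the normalized frequency V of an air-clad step-index microfiber can be evaluated directly; the silica index used below (≈1.444 near 1.55 µm) is an assumption, and the simple step-index model ignores material dispersion.

```python
import numpy as np

def v_number(diameter_um, lam_um, n_core=1.444, n_clad=1.0):
    """Normalized frequency V of an air-clad step-index silica microfiber."""
    return np.pi * diameter_um / lam_um * np.sqrt(n_core**2 - n_clad**2)

# 1.7 um waist at 1.55 um: above the single-mode limit (V ~ 2.402) but below the
# single-even-mode limit (V ~ 3.832) exploited in this work
print(round(v_number(1.7, 1.55), 2))   # ~3.6
# the 1.26 um input/output fibers of the simulation stay below 3.832 down to ~1.1 um
print(round(v_number(1.26, 1.1), 2))   # ~3.75
```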
It can be seen that the peak wavelength and the positions of the first minima on either side in the measured reflectance spectrum agree well with the simulated result in Fig. 3(b). The fractional bandwidth is ~20%, which is one order of magnitude greater than previously reported for a freely-suspended microfiber reflector [8]. The average modal index in the PhC region is neff = λ0/(2Λ) ≈ 1.23, which is approximately equal to the arithmetic mean of the simulated local modal indices of the unetched and etched sections. This index contrast is far larger than the index modulation in Ref. [8] (Δn0/n0 = 0.01) and leads to a significantly larger reflection bandwidth. The calculated difference in the peak reflectance in TM and TE polarizations [~0.6 dB, comparing the blue and the red lines in Fig. 3(b)] is also close to that observed by monitoring the reflected power at the peak wavelength whilst rotating the paddles of an in-line polarization controller. The only significant discrepancy between measurement and simulation is the lower measured peak reflectance (0.60 compared with 0.77), which may be attributed to non-ideal fabrication.
The measured transmission spectrum in Fig. 3(c) shows a dip at the wavelength λ0 ≈ 1.55 µm in accord with the simulation, but the transmission at shorter wavelengths is much lower than predicted. As mentioned above, for wavelengths shorter than 1.46 µm coupling between the HE11 mode and even-parity higher-order modes becomes important, whilst below 1.42 µm the HE11 mode starts to couple with the radiation mode because the wavevector of the corresponding Bloch mode falls into the air light cone [13] [see Fig. 4(a)]. Both of these effects will degrade the transmission. However, they have been taken into account in the simulations, so we believe the discrepancy between measurement and simulation in Fig. 3(c) is due to the loss induced by a small tilt of the inner walls of the etched holes, which is an artifact of the ion beam etching. The shape of the hole is trapezoidal, and the estimated angle between the two walls is ~9°. Preliminary tests show that this angle can be reduced by further optimization of the fabrication process. Figure 4 shows the wave-vector diagrams of our device and that described in Ref. [8], calculated using a commercial plane-wave-expansion band solver. The stop bands (red lines) of these two periodic structures exhibit fractional bandwidths of ~9.3% and ~1.0%, respectively. Both lie outside the air light cones (the gray areas in Fig. 4), which verifies the existence of leakage-free Bloch guided modes. In contrast, the light cone for n ~ 1.373 (the green area) covers the stop band in Fig. 4(a). This corresponds to the situation in Ref. [9] and may explain why the reflected spectrum in that work exhibits unwanted structure, since coupling to high-order modes in the polymer cladding is then allowed. Although exhibiting strong photonic crystal effects, our structured microfiber suffers from radiation loss, as revealed by the non-complementary transmittance and reflectance spectra [Fig. 3(d)]. Apart from the already mentioned loss associated with unintentional fabrication artifacts, there is additional loss associated with the transitions between structured and unstructured fiber, as pointed out in Ref. [14]. Simulations show that although an incident guided wave in the unstructured fiber predominantly couples to the Bloch wave in the PhC waveguide section, changes in effective modal index can give rise to significant coupling to forward-tilted radiation modes at the interface between these two sections. Similar energy coupling to forward-tilted radiation modes also occurs when the Bloch wave leaves the PhC section. In Ref. [15], we analyzed the interference between such radiated waves in a low-index PhC waveguide and pointed out that it can be engineered by adjusting the length of the PhC section in order to couple part of the radiated energy back into the waveguide. We have examined this hypothesis experimentally by constructing a series of microfiber-based PhC reflectors with the same dimensions as above except for different numbers of periods, N. Radiation loss at the center of the stop band in each device was estimated from the measured transmittance and reflectance spectra, with the result shown in Fig. 5. The undulation in the radiation loss with varying N is a signature of interference, and the position of the maximum radiation loss at N = 4 is very close to that predicted in Ref. [15]. As discussed in more detail in Ref. [15], this interference could be exploited to reduce the total radiation loss by an appropriate choice of the number of periods.
Conclusions
In conclusion, a PhC reflector with a fractional reflection bandwidth in excess of 10% and a device length of only a few wavelengths has been fabricated in a silica microfiber. The measured transmittance and reflectance spectra agree well with simulation when allowance is made for the complexities introduced by the inclined inner walls of the etched holes. We have quantified the radiation loss at the center of the stop band and explain its variation with the number of periods as due to interference. Although the loss at the center of the stop band is presently high (~30%), simulations suggest that it could be reduced by an order of magnitude if the fabrication process can be improved. By drilling holes through the core of a microfiber taper and adopting a single-even-mode rather than the usual single-mode propagation condition, we simultaneously obtain a strong photonic crystal effect and robust, freely-suspended operation, which may inspire future applications in nano-photonics.
|
2018-04-03T04:46:15.416Z
|
2014-02-10T00:00:00.000
|
{
"year": 2014,
"sha1": "79d859bcd1bc4f659501ee23241dcff58cbd0c60",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/oe.22.002528",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "57db789db760c3a82eb11d173fbb25be21265cc7",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
}
|
236954503
|
pes2o/s2orc
|
v3-fos-license
|
Prediction of Soft-Tissue Changes Following Single and Bi-Jaw Surgery: An Evaluative Study
Introduction: Orthognathic surgery is carried out in the hard tissues; however, the patient perceives change in the soft tissue. It is important to accurately predict postoperative facial changes associated with each surgical procedure. This study aims to evaluate the changes in the soft tissues resulting from the movement of the hard tissue following single and bi-jaw surgeries. Materials and Methods: An evaluative clinical study was carried out on a total of 34 subjects which consisted of 52 jaw surgeries. Maxilla and mandible were considered as a separate entity even in bi-jaw cases for evaluation. Surgical procedures performed were either bilateral sagittal split osteotomy, Le Fort I osteotomy or both. Pre- and post-surgical lateral cephalograms were compared to assess the soft-tissue change at various soft-tissue points and were labeled T1 and T2, respectively. The points on maxilla were Point A and PrS on upper lip. The points on mandible were PrI and Point B on lower lip and Pog and Gn on chin. Results: All the points on the maxilla and mandible had a strong correlation between the hard and soft-tissue points except point PrS on upper lip. Discussion: Facial appearance is an important parameter in the present times which influences the social and psychological development of an individual. What patient sees is the external soft-tissue drape whereas orthognathic surgery is carried out on bony components of the face. Thus prediction of soft-tissue changes following surgery is an important part of treatment planning.
at the time of enrolling them for the study. All the data obtained were analyzed by a biomedical statistician.
Inclusion criteria
• Patients requiring orthognathic surgery for the correction of skeletal deformities and have undergone presurgical orthodontic decompensation
• Patients within 17-30 years of age
• The American Society of Anaesthesiologists (ASA) Class I and Class II.
Exclusion criteria
• Prior surgical procedures including esthetic surgery and craniomaxillofacial surgery
• Post traumatic defects
• Underlying pathological conditions
• ASA III and IV
• Temporomandibular joint dysfunction.
Method
A total of 34 subjects were included in the study based on the inclusion and exclusion criteria. In the presurgical phase, cephalometric tracings and mock surgery were performed, followed by the fabrication of surgical guides. Surgical procedures performed were Le Fort I osteotomy for the maxilla, bilateral sagittal split osteotomy (BSSO) for the mandible, or both.
Cephalometric analysis
Presurgical lateral cephalograms (T1) were taken 1 week prior to surgery and postsurgical lateral cephalograms (T2) 6 months after surgery. Lateral cephalograms were taken in the natural head position. All the radiographs were digitized and processed using Adobe Acrobat Pro DC software (Adobe Inc., released 2012, version 11.0, California) by a single investigator. Hard-tissue landmarks of the cephalograms were traced using a modified version of the analysis of Legan and Burstone [2] and Lew et al. [3] Hard- and soft-tissue points were marked as illustrated in Figure 1. The distances between the hard- and soft-tissue points and the vertical reference line in pre- and post-surgical radiographs were recorded [4] [Figures 2a, b and 3a, b].
Observations
The relationship between the changes in soft tissue and those of hard tissue was determined by the Pearson correlation coefficient. It was observed that the correlation coefficient between points Ah and As for maxillary advancement and setback surgeries was statistically highly significant (P = 0.003 and P = 0.000, respectively). This indicates a very strong correlation between the hard and soft tissues of the maxilla in advancement as well as setback surgeries.
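For illustration, the correlation coefficient and the soft-to-hard movement ratio can be computed as below from paired cephalometric displacements; the numbers are hypothetical and not the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical horizontal movements (mm) of hard-tissue Point A (Ah) and the
# corresponding soft-tissue point (As) measured between T1 and T2 cephalograms
hard_A = np.array([2.0, 3.5, 4.0, 5.0, 2.5, 3.0, 4.5, 6.0])
soft_A = np.array([2.6, 4.1, 5.1, 6.0, 3.2, 3.9, 5.4, 7.3])

r, p = stats.pearsonr(hard_A, soft_A)             # strength of the hard/soft association

# Slope of a least-squares line through the origin ~ soft:hard movement ratio
ratio = np.sum(hard_A * soft_A) / np.sum(hard_A ** 2)

print(round(r, 2), round(p, 4), round(ratio, 2))
```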
The graph demonstrated that with every unit of advancement of the maxilla, the upper lip advances by 1.23 units [Figure 4a] and moves back by 0.97 units with every unit of setback [Figure 4b].
The correlation coefficient between points PrSh and PrSs was statistically not significant. The correlation coefficient between the hard- and soft-tissue points on the mandible in relation to the lower lip was found to be highly significant (P = 0.000) for advancement as well as setback surgeries, indicating a strong correlation between the tissues.
The graph further demonstrates that the lower lip will advance by 0.66 units and the soft tissue of mentolabial sulcus will advance
Discussion
The facial skeleton and its soft tissue drape are the determinants of facial harmony and balance. The foundation on which the aesthetics of the face are based is formed by the architecture and topographic relationships of the facial skeleton. However, the visual impact of the face depends entirely on the form and proportion of the soft tissues. [5] Changes in the soft tissue form after surgery depend on various factors such as lip morphology, wound closure, and postoperative swelling. The assessment of these changes requires a duration of around 6 to 12 months. [6] In the present study, 34 patients were assessed for changes occurring in soft tissues with hard tissue movement following orthognathic surgery. A total of 52 jaws were evaluated, considering each jaw as a separate entity even in bi-jaw cases. The surgeries performed were Le Fort I for maxillae and/or BSSO for mandibles. The linear horizontal soft-tissue changes in relation to the hard tissue change were recorded and analyzed.
Changes in the lower lip and chin region were evaluated at four points on the mandible, namely PrI, B, Pog, and Gn. The points evaluated were the same as those used by Ribeiro et al. [4] Storms et al. [7] used the soft-tissue points Li (Labrale inferius), B', Pog', Gn' and Me' (soft tissue Menton), which involved hard tissue points on dental structures as well. Authors have previously investigated the changes in soft tissue with some variations in the landmarks, such as the SNB angle and N-B distance, [8] to determine the vertical parameters. However, the present study was limited to osseous structures to eliminate changes occurring due to dental movements, and only linear horizontal changes were evaluated as the changes occurring due to autorotation are negated.
After performing mandibular advancement surgeries on 15 jaws following BSSO, it was noted that the soft to hard tissue ratios were similar to those obtained in studies by Lines and Steinhauser [9] and Quast, [10] which stated that although the soft and hard tissue chins predictably advanced in a 1:1 ratio, the lower lip changes were more variable, with soft to hard tissue ratios ranging from 0.38:1 to 0.75:1. [11] Talbott [12] stated that in cases of mandibular advancement, the ratio at the lower lip was 0.85:1 and at the chin was 1.04:1. Proffit and Epker [13] showed a mean change of 0.75:1 at the lower lip and 1:1 at the chin. Mommaerts and Marxer [14] stated a change of 0.56:1 at the lower lip and 1.03:1 in the area of the chin.
Changes in soft tissue following surgery were first reported for mandibular setback procedures, with an attempt to quantify the noticeable changes that occurred in the lower lip and chin. [15] The present study revealed that with every 1 unit setback of the mandible, the lower lip moved back by 0.7 units whereas the chin area moved back by 0.8-0.9 units. This was in accordance with previous studies, which stated that for every 1 mm of posterior mandibular skeletal movement, the soft tissue lip receded by 0.6-0.75 mm while the soft tissue chin receded by 0.9-1 mm. [9,16] The results obtained from the present study showed a strong correlation between hard- and soft-tissue points at Pog and Gn, with r = 0.96 and r = 0.85 respectively, which is in concurrence with the studies conducted by Lin and Kerr [17] and Rupperti et al. [18] This indicates accurate prediction of soft-tissue points on the chin. Points PrI and B showed moderate correlation, with r = 0.79 and r = 0.73 respectively, which makes them less predictable. This is in accordance with the study performed by Do and Lam. [19] Le Fort I surgery was performed on 21 maxillae, comprising 8 advancements and 13 setbacks. Various authors have used variables such as pronasale, columella, subnasale, nasolabial angle, and nasal tip angle to evaluate the changes in nasal tip projection. [20] However, the points that were monitored for the soft-tissue change in the present study were Point A and PrS.
In this study, the upper lip followed Point A with a ratio of 1.23:1. The study of Ribeiro et al. [4] gave a lower ratio of 0.85:1 between As and Ah. However, the results from this study are supported by Landes et al., [21] who reported that maxillary advancement had an 84% impact when applying anthropometry, whereas using roentgenocephalometry an advancement had a 105% response, which was seen in our study. Soft-to-hard ratios have ranged from 0.32:1 to 0.93:1, as stated by San Miguel Moragas et al. [22] When V-Y closure and cinch suture were performed together, the ratios ranged from 0.78:1 to 0.93:1.
The ratio obtained at the base of the upper lip (PrSs:PrSh) in this study was 0.64:1. This was in line with the study done by Willmar, [23] who obtained a ratio that ranged from 0.4:1 to 0.80:1 (mean: 0.57:1) in cases where nasal cinch suture and V-Y lip plasty were not performed. In contrast, the ratio ranged from 0.56:1 to 0.78:1 (mean: 0.66:1) if only V-Y closure was performed. [24] Naini et al. [25] in 2017 found that nasal cinch suture along with V-Y plasty led to lip lengthening and showed higher ratios, ranging from 0.9:1 to 0.95:1. [26] The ratio obtained for the upper lip at Point A after setback of the maxilla was 0.97:1, which revealed that the upper lip receded by 0.97 units with every unit setback of the maxilla, whereas the ratio at the base of the upper lip (PrSs:PrSh) was found to be 0.85:1, which was higher than the 0.67:1 given by Lines and Steinhauser [9] in 1974. In 1992, Jensen et al. [11] noted that the upper lip moved back by a ratio ranging from 0.33:1 to 0.76:1 in cases of maxillary setback.
As per the correlation coefficients described by Lin and Kerr, [17] there was a strong correlation between As and Ah in advancement and setback surgeries, which is confirmed in the present study (r = 0.89 and r = 0.87, respectively), whereas a weak correlation exists between PrSs and PrSh in advancement and setback of the maxilla (r = 0.49 and r = 0.48). This implies that other factors contribute more than 55% to the soft tissue response at this point.
It was observed that the soft to hard tissue ratios of the maxilla obtained in this study were higher than those in previous studies. Regardless of the type of maxillary surgery, whether advancement or setback, there were changes in the nasal tip, nasal width, and upward nasal rotation. [27] These changes may be attributed to the modifications made in the soft tissues of the upper lip and a new positioning of the anterior nasal spine during the surgery.
In the case of bi-jaw surgeries, the thickness of the soft-tissue Pog may increase slightly after surgery in patients with skeletal Class III malocclusion and a higher preoperative mandibular plane angle. [28] The predictions of soft-tissue changes were found to be less accurate for bi-jaw surgeries than those for single-jaw surgeries. [29] Non-specificity and large variability in the ratios obtained are drawbacks of the study, as movement of the soft tissues in the vertical direction was not considered.
Confounding factors to the study included:
• Selective case analysis
• Patient compliance
• Error associated with surgical planning
• Splint fabrication
• Anatomical variation in prediction tracing.
Results of this study could have been positively influenced by:
• Larger sample size
• Longer follow up period.
Variation in the values of the soft-tissue changes in the maxillary procedures compared with other studies could be attributed to the vertical movement of the maxilla, which was not factored in.
Conclusion
Cephalometric prediction of orthognathic surgery is considered the gold standard for surgical planning and patient counseling. With its help, an accurate description of the orthodontic and surgical outcome should be made prior to treatment. This aids in evaluating treatment feasibility, optimizing case management, and increasing patients' understanding and acceptance of the recommended treatment.
To improve the outcome of the surgical procedure, changes in the soft tissue must be incorporated in treatment planning. This necessitates establishing norms for the changes occurring in soft tissues following orthognathic surgery in the native population. Some factors that affect the soft-tissue response are inevitable or sometimes difficult to control and predict. Patients should be informed prior to surgery that predictions are only a guide and may not represent the actual surgical outcome.
|
2021-08-09T13:15:47.905Z
|
2021-01-01T00:00:00.000
|
{
"year": 2021,
"sha1": "f2e6668445ddcbecb8ea1228ecc1cbaf16b79719",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8407611",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "813548beb64e6306d3236b76c0a4afe40d10750e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
52910211
|
pes2o/s2orc
|
v3-fos-license
|
Optimization of Polyphenol Extraction from Allium ampeloprasum var. porrum through Response Surface Methodology
Allium ampeloprasum var. porrum has been recognized as a rich source of secondary metabolites, including phenolic acids, flavonoids and flavonoid polymers (proanthocyanidins or condensed tannins), with related health benefits. Both parts of Allium ampeloprasum var. porrum (white bulb and pseudostem) are traditionally consumed either as a vegetable or as a condiment in many Mediterranean countries. The aim of the present study was to optimize the extraction conditions of polyphenols from white leek stem and green leek leaf by implementing a Box-Behnken design (BBD). The optimization considered basic factors affecting extraction efficiency, including extraction time, solvent to plant material ratio and solvent mixture composition. Maximum polyphenol yield was achieved at an extraction time of 80 and 100 min for white leek stem and green leek leaf extracts respectively, solvent to plant material ratio of 5:1 (v/w) and methanol to water ratio of 40:60 (v/v), for both leek extracts. Interestingly, higher total phenolic content was found in green leek leaf extracts compared to white leek stem extracts, due to a possible relationship between polyphenol production and sunlight radiation. High correlation values were also observed between total phenolic content and antioxidant-antiradical activity of optimized leek extracts.
Introduction
Leek (Allium ampeloprasum var. porrum) belongs to the Allium genus and is one of the most important vegetables cultivated in European countries from the Balkan Peninsula to Ireland and in western Asia (e.g., Middle East) [1,2]. Leek is brought to market year round by combining different production methods and cultivars, either in protected conditions or in the field for summer, autumn and winter harvest. Leek is grown for its cylindrical pseudo stem, which is blanched white from growing underground and is made up of the bases of long leaves [3].
Extraction Procedure
The factorial designed experiments (as described in Section 2.7) were conducted with a solvent mixture (methanol:water) composition varying from 50:50 to 90:10 (v/v), a solvent to plant material ratio varying from 3:1 to 7:1 (v/w), and an extraction time between 30 and 90 min. The selection of values for each factor was based on preliminary experimentation and literature data. Approximately 5-6 g of white leek stem and green leek leaf pulp was placed in 50-mL Falcon centrifuge tubes with the appropriate volume of the solvent mixture (methanol:water), as defined by the experimental design. Extractions were performed in an orbital and linear motion shaker "Rotaterm" (J.P. Selecta, Cod: 3000435) at 175 rpm, at room temperature (22 ± 2 °C), for predetermined time periods. Upon completion of the extraction, the extracts were filtered and the extract volume was recorded prior to subsequent determinations.
Determination of Total Phenolic Content (TPC)
The total phenolic content of each sample was determined by applying a micro method of the Folin-Ciocalteu colorimetric assay, as modified by Andreou et al. [21]. Absorbance was measured at room temperature at 750 nm with a Vis spectrophotometer (Spectro 23, Digital Spectrophotometer, Labomed, Inc., Los Angeles, CA, USA). The total phenolic content was expressed as mg gallic acid equivalents (GAE) per 100 g wet weight (ww), using a standard curve of 25-2600 mg/L gallic acid (y = 0.0005x + 0.0786, R² = 0.9989).
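A small sketch of how an absorbance reading is converted into mg GAE per 100 g wet weight with the calibration curve given above; the extract volume, sample weight, and absorbance used below are hypothetical.

```python
def tpc_mg_gae_per_100g(absorbance, extract_volume_ml, sample_weight_g,
                        slope=0.0005, intercept=0.0786):
    """Total phenolic content from a Folin-Ciocalteu absorbance at 750 nm,
    using the gallic acid calibration y = 0.0005x + 0.0786 (x in mg GAE/L)."""
    gae_mg_per_l = (absorbance - intercept) / slope
    gae_mg_in_extract = gae_mg_per_l * extract_volume_ml / 1000.0
    return gae_mg_in_extract / sample_weight_g * 100.0   # mg GAE per 100 g wet weight

# Hypothetical reading: A750 = 0.09 for a 30 mL extract obtained from 5.5 g of leek pulp
print(round(tpc_mg_gae_per_100g(0.09, 30.0, 5.5), 1))
```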
Experimental Design and Statistical Analysis
Experiments were conducted in triplicate and the data including the partial correlations were analyzed by STATISTICA software (Statsoft Inc., 2004, Tulsa, OK, USA). Analysis of variance (ANOVA) and Duncan's multiple range tests were used to determine the significant difference in total phenolic content, at a 95% confidence level (p < 0.05).
The extraction parameters, i.e., extraction time (X1), solvent to plant material ratio (X2) and solvent mixture (methanol:water) ratio (X3), were optimized using the Box-Behnken experimental design by means of the software STATISTICA (Statsoft Inc., 2004). The experimental design involved three process variables, each at three equidistant levels (−1, 0, +1), and the response variable was the total phenolic content (Y). The levels of the three variables were chosen according to preliminary experiments. In total, 15 combinations of process variables were applied; the combination of variables at the center of the levels was run in triplicate. The experimental design determined the effect of the three main variables (X1, X2 and X3) and their interactions on the response variable. The effect of each variable and their interactions on total phenolic content was evaluated using the ANOVA technique. In order to describe the relationships between the response (Y) and the experimental variables (X1, X2, X3), a regression model containing 10 coefficients, including linear and quadratic effects of the factors and linear effects of their interactions, was described by the following equation:
Y = β0 + Σ βiXi + Σ βiiXi² + ΣΣ βijXiXj
where β0 is the constant coefficient (model intercept), βi is the linear coefficient of the main factors, βii is the quadratic coefficient for the main factors, and βij is the second-order interaction coefficient. The 3D response graphs and the profiles for the predicted values and desirability level for the factors were plotted using STATISTICA software.
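The 10-coefficient model can be fitted by ordinary least squares once the design matrix is built from the coded factors. The sketch below uses the standard 15-run, 3-factor Box-Behnken layout with hypothetical responses; the actual fitting in this study was performed in STATISTICA.

```python
import numpy as np
from itertools import combinations

# 15-run Box-Behnken design for 3 factors in coded form (-1, 0, +1), centre point in triplicate
bbd = np.array([
    [-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
    [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
    [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
    [0, 0, 0], [0, 0, 0], [0, 0, 0],
])
y = np.array([8.2, 9.1, 10.4, 12.8, 7.6, 9.9, 11.2, 13.5,
              9.0, 11.8, 12.4, 14.9, 20.8, 21.3, 21.1])   # hypothetical TPC, mg GAE/100 g

# Design matrix: intercept, linear, quadratic and two-way interaction terms (10 columns)
X = np.column_stack(
    [np.ones(len(bbd))] +
    [bbd[:, i] for i in range(3)] +
    [bbd[:, i] ** 2 for i in range(3)] +
    [bbd[:, i] * bbd[:, j] for i, j in combinations(range(3), 2)]
)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - np.mean(y)) ** 2)
print(np.round(beta, 2), round(r2, 3))
```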
Results
A Box-Behnken experimental design for 3 variables was used to optimize the extraction of polyphenols from green leek leaf and white leek stem and to determine the combined effect of the extraction time (X1), the solvent to plant material ratio (X2) and the solvent mixture composition (X3) on the total phenolic content (TPC) (Y). Table 1 presents the levels of the independent process variables (X1, X2 and X3) in their coded and actual form, according to the experimental design, and the observed and predicted values of the response Y (mg GAE/100 g wet weight) for all experiments. The experiments were randomized in order to minimize the effects of unexplained variability in the observed responses due to extraneous factors.
Table 1. Independent variables, their coded (actual) levels and the corresponding observed and predicted responses.
The correlation between total phenolic content (Y) and the three processing variables (X1, X2 and X3) was described by two second-order polynomial equations, one for each section of leek, as shown in Table 2.
Table 2. Polynomial equations and statistical parameters calculated after implementation of a three-factor Box-Behnken experimental design.
The fitted second-order polynomial equations for both leek sections are given in Table 2. Both equations indicated that the models can adequately predict the total phenolic content at different levels of the three process variables influencing the extraction, as the models were statistically significant (p < 0.05) and the predicted values were close to the observed ones (R² = 0.98 and 0.92, for green leek leaf and white leek stem extracts, respectively).
Taking into consideration the analysis of variance (ANOVA) (Table 3) and on the basis of the F-test, for green leek leaf extracts the extraction time (X1) and the solvent to plant material ratio (X2) had a significant (p < 0.05) quadratic effect on total phenolic content, whereas only the interactions between extraction time (X1) and solvent to plant material ratio (X2) and between extraction time (X1) and methanol:water ratio (X3) had a significant (p < 0.05) effect on total phenolic content. For white leek stem extracts, both the solvent to plant material ratio (X2) and the methanol:water ratio (X3) had significant (p < 0.05) linear and quadratic effects on total phenolic content, whereas the extraction time (X1) had only a quadratic effect. Finally, the interactions between extraction time (X1) and solvent to plant material ratio (X2) and between solvent to plant material ratio (X2) and methanol:water ratio (X3) had a significant (p < 0.05) effect on total phenolic content.
Table 3. Analysis of variance (ANOVA) for the total phenolic content (Y) from leek extracts as a function of extraction time (X1), solvent to plant material ratio (X2), solvent mixture (methanol:water) composition (X3) and their interactions.
The theoretical calculation of the optimum conditions for the extraction of total phenolics was evaluated by a non-linear optimization algorithm, and a maximum total phenolic content of 22.114 and 17.620 mg GAE/100 g ww was achieved for green leek leaf and white leek stem extracts, respectively. These maximum values were computed under the optimum conditions; more specifically, at 100 and 80 min extraction time for green leek leaf and white leek stem extracts respectively, and at a solvent to plant material ratio of 5:1 (v/w) and a methanol:water composition of 40:60 (v/v) for both extracts.
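The optimum can be located by maximizing the fitted second-order model within the coded factor bounds, for example with a bounded quasi-Newton routine. The coefficients below are hypothetical placeholders (e.g., taken from a fit like the least-squares sketch above), not the fitted values of this study.

```python
import numpy as np
from scipy.optimize import minimize

def predicted_tpc(x, beta):
    """Second-order model response for coded factor levels x = (x1, x2, x3);
    the coefficient order matches the design matrix used in the fitting sketch."""
    x1, x2, x3 = x
    terms = np.array([1, x1, x2, x3, x1**2, x2**2, x3**2, x1*x2, x1*x3, x2*x3])
    return float(terms @ beta)

# Hypothetical 10-coefficient vector (intercept, linear, quadratic, interactions)
beta = np.array([21.07, 1.16, 1.91, 1.71, -7.58, -5.48, -5.36, 0.38, 0.05, -0.08])

res = minimize(lambda x: -predicted_tpc(x, beta), x0=[0.0, 0.0, 0.0],
               bounds=[(-1, 1)] * 3, method="L-BFGS-B")
print(np.round(res.x, 2), round(-res.fun, 2))   # coded optimum and maximum predicted TPC
```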
The desirability levels for the three extraction variables for optimum polyphenol extraction are presented either as profiles or as desirability surface/contour plots in Figures 1 and 2, indicating that the maximum desirability of 1.0 (on a scale of 0-1) can be achieved with the aforementioned optimum conditions for both extractions. The total phenolic content and desirability level decreased considerably when the extraction time was less than 100 and 80 min, for green leek leaf and white leek stem extracts, respectively. Additionally, for both extracts, the desirability level reached the maximum value of 1 between a solvent to plant material ratio of 5:1 and 7:1 (v/w) and a methanol:water composition of 40:60 and 90:10 (v/v); therefore, it is recommended to use the minimum solvent to plant material ratio and methanol:water composition, for reasons of cost, disposal and ease of separation from the final extract.
The optimized extracts of green leek leaf and white leek stem were further analyzed in order to determine their antiradical and antioxidant activity by ABTS and FRAP assays. Results for the ABTS and FRAP assays, expressed as mg of Trolox equivalents (TE) and Fe(II) respectively, per 100 g of leek wet weight (ww), ranged from 31.03 to 36.56 mg TE/100 g ww and from 5.39 to 7.25 mg Fe(II)/100 g ww. Furthermore, correlation analysis was performed and a strong positive correlation was noticed between the TPC and the antiradical activity (0.74, p < 0.05), between the TPC and the antioxidant activity (0.86, p < 0.05), as well as between antiradical and antioxidant activity (0.78, p < 0.05).
Discussion
The experimental values of total phenolic content obtained with different combinations of independent variables varied from 5.579 to 21.456 and from 5.906 to 12.863 mg GAE/100 g wet weight (ww), for green leek leaf and white leek stem extracts, respectively. These values are comparable to those reported for leeks extracted with 80% ethanol, ranging from 210.67 ± 16.63 mg/kg to 254.80 ± 10.09 mg/kg [24]. Another study determined the content of total phenols in ultrasonic extracts of leek Allium porrum L. treated with ethanol, and the total phenol content (TPC) was found to be 45.39 and 69.46 mg gallic acid equivalent (GAE)/g dry extract for the leaf and stem extract, respectively [25]. Our results are lower than those of 30 leek cultivars reported by other researchers [26], with values ranging from 74.87 to 196.84 mg GAE 100 g−1 fresh weight (fw) in the white stem and from 77.13 to 213.47 mg GAE 100 g−1 fw in the green leaves. In agreement with the above findings, previous studies [13,27] reported an average total phenolic content of wild leek of 5.77 mg GAE/g extract, whereas the respective values were found to be considerably lower for A. porrum (0.369 mg GAE/g extract). The U.S. Department of Agriculture [28] reported a total phenolic content of 47 mg GAE 100 g−1 fw in the bulb and lower leaves of leek.
The comparison of results of several research studies is not always appropriate to estimate extraction efficiency due to variations in plant characteristics, such as plant cultivars, growing seasons and agricultural practices. Additionally, the type of solvent and the extraction technique can influence the total phenol content as well as the variation of moisture content in the original plant, which can affect the expression of results.
Finally, in our study green leek leaf extracts contained higher amounts of total phenolics than white leek stem extracts, suggesting a possible correlation between increased polyphenol production and exposure to sunlight radiation, as previously reported in St. John's wort [29] and barley [30]. Moreover, the high correlation values found among the TPC, FRAP and ABTS assays indicate the significant contribution of the phenolic compounds contained in the optimized leek extracts to their antiradical and antioxidant activity. According to Bernaert et al. [31] and Soininen et al. [32], the main flavonoids identified in Allium species were kaempferol and quercetin derivatives. Among the methods for determining antioxidant capacity in vitro, the DPPH, FRAP and ORAC assays are the most widely used. These assays may yield different results, due to the different activity patterns of the sample antioxidants in each method; therefore, several methods and standards should be used and their results compared in order to confirm the antioxidant capacity of a complex sample. FRAP is the only assay that directly measures antioxidants in a sample and provides information about the ability of a compound to reduce the ferric complex ion, whereas the DPPH, ORAC and ABTS assays are indirect, because they measure the inhibition of reactive species (free radicals) generated in the reaction mixture, and their results also depend strongly on the type of reactive species used. In a previous study [26], extracts of the white shaft and green leaves of 30 leek cultivars were investigated for their antioxidant properties: the white leek shaft had an antioxidant activity of 57 µmol TE g−1 dry weight (dw) (ORAC), 9 µmol Fe2SO4 g−1 dw (FRAP) and 6 µmol TE g−1 dw (DPPH), whereas the green leaves had an antioxidant activity of 101 µmol TE g−1 dw (ORAC), 27 µmol Fe2SO4 g−1 dw (FRAP) and 9 µmol TE g−1 dw (DPPH). Additionally, based on the results of a previous study [13], wild populations of Allium ampeloprasum L. had low antioxidant activity measured by DPPH and inhibition of β-carotene bleaching, and moderate-to-high antioxidant activity measured by TBARS and reducing power methods.
Conclusions
In conclusion, response surface methodology was effectively applied to optimize the extraction of phenolics from the white stems and green leaves of Allium ampeloprasum. Based on the computed regression model, the most significant parameters affecting phenolics recovery were the extraction time, the solvent to plant material ratio (v/w) and the solvent mixture composition (methanol:water, v/v). Moreover, the optimum conditions attained in the present work could be applied to similar plant materials.
Biochemical characterization of the mouse ABCF3 protein, a partner of the flavivirus-resistance protein OAS1B
Mammalian ATP-binding cassette (ABC) subfamily F member 3 (ABCF3) is a class 2 ABC protein that has previously been identified as a partner of the mouse flavivirus resistance protein 2′,5′-oligoadenylate synthetase 1B (OAS1B). The functions and natural substrates of ABCF3 are not known. In this study, analysis of purified ABCF3 showed that it is an active ATPase, and binding analyses with a fluorescent ATP analog suggested unequal contributions by the two nucleotide-binding domains. We further showed that ABCF3 activity is increased by lipids, including sphingosine, sphingomyelin, platelet-activating factor, and lysophosphatidylcholine. However, cholesterol inhibited ABCF3 activity, whereas alkyl ether lipids either inhibited or resulted in a biphasic response, suggesting small changes in lipid structure differentially affect ABCF3 activity. Point mutations in the two nucleotide-binding domains of ABCF3 affected sphingosine-stimulated ATPase activity differently, further supporting different roles for the two catalytic pockets. We propose a model in which pocket 1 is the site of basal catalysis, whereas pocket 2 engages in ligand-stimulated ATP hydrolysis. Co-localization of the ABCF3–OAS1B complex to the virus-remodeled endoplasmic reticulum membrane has been shown before. We also noted that co-expression of ABCF3 and OAS1B in bacteria alleviated growth inhibition caused by expression of OAS1B alone, and ABCF3 significantly enhanced OAS1B levels, indirectly showing interaction between these two proteins in bacterial cells. As viral RNA synthesis requires large amounts of ATP, we conclude that lipid-stimulated ATP hydrolysis may contribute to the reduction in viral RNA production characteristic of the flavivirus resistance phenotype.
Members of the genus Flavivirus in the family Flaviviridae include human pathogens, such as West Nile virus (WNV), Japanese encephalitis virus, tick-borne encephalitis virus, yellow fever virus, dengue virus, and Zika virus. Phenotypic evidence of genetically controlled host resistance to particular virus pathogens has previously been obtained, but few of the genes involved have been identified and characterized (1). Flavivirus resistance and susceptibility in mice are controlled by the alleles of the Flv locus, which encode the 2′-5′-oligoadenylate synthetase (OAS) 1B protein. Flavivirus-resistant mice express a full-length OAS1B protein, while susceptible mice produce a truncated protein (OAS1B-tr) due to the presence of a premature stop codon (2,3). oas1 genes are components of the cellular innate immune response that when activated by viral dsRNA synthesize short 2′-5′-linked oligoadenylates (2-5A). These bind to cytoplasmic RNase L causing it to dimerize and cleave single-stranded cell and viral RNAs (4). Eight orthologs of the oas1 gene (oas1a–h) have been identified in mice (5,6). The proteins produced by only two of the murine oas1 genes (OAS1A and OAS1G) are active synthetases. OAS1B is an inactive synthetase that cannot produce 2-5A (7,8).
The OAS1B protein localizes to the endoplasmic reticulum (ER) through a C-terminal transmembrane domain consisting of 23 amino acid residues (9). OAS1B-tr, the truncated version of OAS1B, lacks this C-terminal transmembrane domain and is therefore unable to anchor to the ER. Flavivirus RNA replication occurs within invaginations in the ER membrane (10). Although flaviviruses can attach to and enter resistant and susceptible mouse cells with similar efficiency, resistant cells produce reduced levels of intracellular flavivirus RNA as well as lower virus yields (9). A yeast two-hybrid screen of a mouse brain library identified two binding partners for OAS1B: ABCF3, which belongs to class 2 of the ATP-binding cassette (ABC) superfamily of proteins; and ORP1L, a protein involved in sterol binding and regulation of late endosome motility as well as protein and lipid transport (9,11). Interaction between OAS1B and ABCF3 was further demonstrated by co-immunoprecipitation in mammalian lysates and co-localization in baby hamster kidney cells by fluorescence microscopy (9). Knockdown of ABCF3 protein levels increased WNV yields but not those of two nonflaviviruses, vesicular stomatitis virus and Sindbis virus, supporting a specific role for ABCF3 in OAS1B-mediated flavivirus resistance (9). Moreover, the flavivirus-specific effect of knockdown of ABCF3 was only seen in resistant mouse embryo fibroblasts (MEFs) that naturally express the full-length OAS1B protein and not in susceptible MEFs expressing the truncated OAS1B-tr (9).
Most ABC proteins contain two nucleotide-binding domains (NBDs) and two transmembrane domains (TMDs) that are present either in the same polypeptide or on separate subunits. The NBDs contain several conserved motifs, including Walker A, Walker B, ABC Signature, Q-loop, and Switch motifs, and the TMDs have limited sequence conservation (12). The Walker A motif of ABC proteins plays a critical role in ATP binding, and Walker B is involved in hydrolysis (12). Analyses of the crystal structures of ABC proteins suggest that their ATP-binding pockets are located at the interface formed by the Walker A motif of one NBD juxtaposed against the signature motif of the other NBD in a head-to-tail configuration (12,13). ABC proteins are normally involved in the cellular transport of a diverse range of substrates in both prokaryotes and eukaryotes, and this process is coupled to the energy of ATP hydrolysis (14,15). Members of the ABC superfamily are divided into three classes (14,15). Although the function, mechanism, and structure of class 1 and class 3 proteins have been elucidated in detail (14,15), little is known about class 2 proteins and their physiological roles. Class 2 proteins are distinct in that they lack the TMD domains but contain two tandemly-linked NBD domains, which also likely participate in a head-to-tail configuration resulting in two ATP-binding pockets (13). Because of the lack of a transmembrane domain, the primary cellular role of most class 2 proteins is believed to be regulatory in nature (15). It has been postulated that some class 2 proteins may complex with other cellular membrane proteins to enable them to transport ligands. The bacterial Mel protein that interacts with the proton-motive force-driven transmembrane pump protein MefE to form a complex involved in the transport of erythromycin is the single example of such a complex available to date (16,17).
The ABC proteins have also been assigned to eight subfamilies, A-H. Subfamilies E and F, which do not contain TMDs, belong to ABC class 2 (15). Subgroup F includes mammalian F1, F2, and F3; yeast GCN20 and EF3; and bacterial EttA, Vga(A), and Uup proteins. Some bacterial ABCF proteins, such as Vga(A), confer antibiotic resistance by drug displacement at the peptidyl transferase center of the ribosome (18–22). EttA, which functions as a translation factor and regulates progression of the 70S initiation complexes into the elongation cycle, also associates with the ribosome (23,24). Association with the ribosome was also reported for other ABCF proteins, including mammalian ABCF1 (ABC50) and yeast EF3, ARB1 (ABCF2), and GCN20 (ABCF3) proteins, which regulate protein translation either at the level of initiation or elongation (25–33). Additional reports suggesting that the eukaryotic F1, F2, and F3 proteins can impact diverse cellular activities, including innate immunity against retroviruses (34), promotion of phagocytosis (35), anti-apoptotic effect (36), and co-localization with a tumor-inducing protein (37) are also available; however, there is no firm consensus about their cellular functions, and none have been characterized biochemically. However, data from limited biochemical analyses of the bacterial ABCF proteins Vga(A), Uup, and EttA are available. All three of these bacterial ABCF proteins have ATPase activity, which was shown to be essential for their function in the cell (23,38,39). The ATPase activity of Vga(A) was found to be inhibited by the antibiotic pristinamycin IIA (38), suggesting direct binding of Vga(A) with its ligand, although there is no known partner protein with a TMD. By contrast, ABC proteins belonging to class 1 and 3 normally bind their ligands only when in complex with their cognate TMD partner proteins (14). Finally, the bacterial ABCF proteins also have about an 80-amino acid long inter-ABC domain linker, which contains conserved sequences and is rich in positively-charged residues (40). In the case of Vga(A) and EttA, the linker region was shown, by mutagenesis and deletion analysis, to be critical for their association with the ribosome and for their function (20,21,23). Interestingly, association of the EttA linker with the ribosome was found to be sensitive to the ATP/ADP ratio, leading to the proposal that this protein plays a role in regulation of protein chain elongation in energy-starved cells (23).
To gain an understanding of mammalian ABCF3 protein functions that may play a role in the OAS1B-mediated flavivirus-resistance mechanism, the ATP-binding and ATPase activities of mouse ABCF3 were characterized, potential ligands of ABCF3 were identified, and the ability of ABCF3 to interact with OAS1B in bacterial cells was analyzed. We showed that purified ABCF3 protein is an active ATPase with both NBDs contributing to the catalytic activity. TNP-ATP binding studies showed that the two NBDs of ABCF3 are asymmetric with NBD2 playing a more important role in nucleotide binding. The substrates of the ABCF3 protein are currently not known. However, many ABC family proteins are known to transport lipids and amphiphilic drugs, and their ATPase activities have been shown to be stimulated or inhibited by these substrates (41–49). Moreover, flavivirus infections modulate host cell lipid metabolism (50) and result in changes in the levels of fatty acids, phospholipids, sphingolipids, and cholesterol in cell membranes (51–53), including in the ER, which is the site of OAS1B/ABCF3 localization and a major site for lipid biosynthesis (9,54). Therefore, in this study we tested the effect of multiple lipids and amphiphilic drugs on ABCF3 activity. Interestingly, the ATPase activity of ABCF3 was found to be modulated by several of the tested lipids, but not by amphiphilic drugs. The basal and lipid-stimulated ATPase activity data obtained with ABCF3 mutated in NBD1 and NBD2 suggested that the two ATP-binding pockets may play different roles in ATP hydrolysis. Co-expression of abcf3 and oas1b in bacteria resulted in alleviation of growth inhibition caused by oas1b expression alone and significantly enhanced OAS1B levels, suggesting an intracellular protein-protein interaction in bacterial cells.
Analysis of the ATPase activity of ABCF3
ABCF3 protein was expressed from pET28a-abcf3 (Fig. 1A) or pGEX-abcf3 (Fig. 1B), as described under "Experimental procedures." These two clones produced ABCF3 with an N-terminal His-tag and GST-tag, respectively. A basal activity of 39 nmol/min/mg in 50 mM HEPES containing 125 mM NaCl, pH 7.5, was observed when the GST-tag was still present (Fig. 1, B and C). In contrast, a basal activity of 125 nmol/min/mg was observed when the GST-tag was removed. His-tagged ABCF3 exhibited a basal ATPase activity of 132 nmol/min/mg that was similar to that of untagged ABCF3 (Fig. 1, A and C), indicating that the presence of the His-tag at the N terminus of ABCF3 had no deleterious effect on its activity. Also, the yield of pGEXf3-expressed ABCF3 protein obtained after removal of the GST tag was significantly lower than that of pET28af3-expressed His-ABCF3 (Fig. 1, A-C). Purified His-tagged ABCF3 protein was used in the subsequent experiments.
Modulation of the ATPase activity of ABCF3 by potential ligands
The substrates of the ABCF3 protein are currently unknown. To identify potential ligands, the effect of various lipids, sterols, and drugs on the ATPase activity of ABCF3 was analyzed (Fig. 2, A-J, gray diamond) as described under "Experimental procedures." The selected lipids and drugs were chosen based on previous reports in the literature (11, 45–49). To remove background activity, the effect of each ligand on the activity of the double Walker A mutant K216A/K531A (described later) was also analyzed (Fig. 2, A-J, tan circle). The normalized activities were then calculated by subtracting the activity of the double mutant from the WT ABCF3 activity at each ligand concentration (Fig. 2, black square). The data in Fig. 2 show that the ATPase activity of ABCF3 was significantly modulated by sphingosine, sphingomyelin, platelet-activating factor (PAF), lysophosphatidylcholine (LPC), lysophosphatidylinositol (LPI), lyso-PAF, cholesterol, and alkyl ether lipids. Sphingosine, sphingomyelin, PAF, and LPC induced nearly a 3-fold enhancement in activity (Fig. 2, A-D). The alkyl ether lipid miltefosine induced a biphasic response with ATPase activity stimulated at low concentrations and inhibited at higher concentrations (Fig. 2E). In contrast, the other two alkyl lipids, edelfosine and perifosine, as well as LPI, lyso-PAF, and cholesterol, inhibited ATPase activity at all concentrations tested, including very low concentrations (Fig. 2, F-J). Several drugs that are known to be substrates of multidrug resistance pumps, such as Hoechst 33342, verapamil, vinblastine, and quinidine, as well as lipids, including phosphatidylcholine, phosphatidylethanolamine, sphingosine 1-phosphate, ceramide, and dihydroceramide, had no effect on the ATPase activity of ABCF3 (Fig. 2K).
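As a minimal sketch of the normalization described above, assuming the WT and K216A/K531A double-mutant activities are available as arrays indexed by ligand concentration (the numbers below are placeholders, not measured values):

```python
import numpy as np

# Placeholder activities (nmol/min/mg) at increasing ligand concentrations;
# replace with the measured WT and K216A/K531A double-mutant values.
ligand_conc_uM = np.array([0, 5, 10, 15, 20])
wt_activity    = np.array([130.0, 210.0, 300.0, 360.0, 370.0])
double_mutant  = np.array([35.0, 38.0, 40.0, 37.0, 36.0])  # background/nonspecific activity

# Normalized activity = WT minus double mutant at each ligand concentration
normalized = wt_activity - double_mutant

# Fold-change relative to the normalized basal (no-ligand) activity
fold_change = normalized / normalized[0]

for conc, act, fold in zip(ligand_conc_uM, normalized, fold_change):
    print(f"{conc:>3} uM: normalized {act:5.0f} nmol/min/mg, fold-change {fold:4.2f}")
```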
Role of the two NBDs of ABCF3 in ATP binding
Similar to other class 2 ABC proteins, mouse ABCF3 contains two tandem NBDs connected by an 80-amino acid long linker sequence, with each NBD containing all of the previously described conserved motifs (Fig. S1A). Alignment of the amino acid sequence of mouse ABCF3 with those of other class 2 eukaryotic and prokaryotic ABC proteins showed a very high degree of sequence similarity in all of the conserved motifs present in both NBD1 (Fig. S1B) and NBD2 (Fig. S1C). The human and mouse ABCF3 proteins showed more than 95% overall sequence identity with each other extending over the entire sequence of these proteins. The inter-ABC domain linker region of bacterial ABCF proteins contains conserved sequences and several charged residues (Fig. S2A) (21,23,40). An alignment of the bacterial and eukaryotic linker sequences showed regions of relatively high homology between the two groups (marked with green highlighted boxes, Fig. S2B) and within members of the eukaryotic group (Fig. S2C), suggesting that the linker region of eukaryotic proteins may also play an important role in the function of these proteins.
To determine the function of each NBD of the mouse ABCF3 protein, point mutations were made in either the conserved lysine in the Walker A motif that is known to be critical for ATP binding or in the conserved glutamate in the Walker B motif that plays an important role in ATP hydrolysis (12,14). Clones containing simultaneous mutations in both NBDs of ABCF3 were also made. The nucleotide-binding characteristics of the purified WT and mutated ABCF3 proteins were initially analyzed by intrinsic tryptophan (Trp) fluorescence quenching. This approach is commonly used to determine conformational changes in proteins in response to the binding of nucleotides and other substrates (42). An emission scan of ABCF3 and NATA (a tryptophan analog) showed that, as expected, the environment of the Trp residues in ABCF3 is more nonpolar than that of NATA (Fig. S3A). Titration of purified ABCF3 protein with ATP or ADP showed saturable quenching, indicating specific binding of each nucleotide (Fig. S3B). However, the corrected fluorescence data could be fitted to single-site Michaelis-Menten kinetics, suggesting that there is only one nucleotide-binding site in WT ABCF3 (Fig. S3B). This may be due to the asymmetric distribution of the Trp residues in ABCF3 (with four located in NBD1 and only one near NBD2, see Fig. S1A), which likely introduces a bias in the Trp-quenching experiments. Analysis of single or double Walker A mutants (K216R, K531R, or K216R/K531R) surprisingly showed that the ATP-binding affinity of each of these mutants was unaffected compared with that of WT ABCF3 (Fig. S3C), further indicating that Trp-quenching analysis may not be suitable for studying nucleotide binding to ABCF3.

[Fragment of the Fig. 2 legend: the indicated ligands (A-J) were added to 5 µg of purified ABCF3 in a 1-ml reaction, and the coupled ATPase assay was carried out as described under "Experimental procedures." Data points are mean ± S.D. of 10 trials (nmol/min/mg); gray diamond, WT ABCF3; tan circle, K216A/K531A mutant; black square, normalized WT activity (WT minus K216A/K531A at each ligand concentration). K, summary of the fold-change in ATPase activity for each ligand (no-ligand activity set to 1.0; >1 indicates stimulation, <1 inhibition), observed at the concentration given in parentheses, with Wilcoxon matched-pairs signed rank tests comparing normalized basal and ligand-stimulated activities (***, p ≤ 0.001; **, p ≤ 0.01; *, p ≤ 0.05).]
To further investigate nucleotide binding, TNP-ATP, a fluorescent analog of ATP, was used. TNP-ATP alone exhibits some fluorescence in solution; however, its interaction with the nucleotide-binding pocket of a protein results in enhanced fluorescence (44, 55–57). TNP-ATP binding to ABCF3 resulted in a 2-fold increase in fluorescence (in the presence or absence of 10 mM MgCl2) compared with TNP-ATP in buffer (Fig. 3A). Moreover, a shift in λmax from 551 to 545 nm was also observed, indicating that TNP-ATP binding occurs within a hydrophobic region in ABCF3. To determine whether TNP-ATP binds to the ATP-binding pocket(s), increasing concentrations of different nucleotides, including ATP, ADP, or AMP, were added to TNP-ATP-bound ABCF3. It was expected that the addition of nucleotides would displace TNP-ATP from the binding pocket and result in a decrease in fluorescence, as reported previously (44, 55–57). The addition of 0.1 mM ATP resulted in a sharp decrease in fluorescence, indicating displacement of TNP-ATP (Fig. 3B). Significantly less displacement was seen with either 0.1 mM ADP or AMP. These results suggest that TNP-ATP binds specifically to the nucleotide-binding pocket(s) in ABCF3 and that ATP binds with higher affinity than either ADP or AMP.
To determine the ABCF3 binding affinity for TNP-ATP, 5 µM ABCF3 was titrated with increasing concentrations of TNP-ATP ranging between 0.1 and 20 µM. TNP-ATP binding to WT ABCF3 followed sigmoidal kinetics, suggesting the presence of two nucleotide-binding sites in this protein (Fig. 3C). The data could be fitted to an allosteric binding model that exhibited a K0.5 of less than 3 µM and a Hill coefficient of 1.8 (Fig. 3H), suggesting positive cooperativity between the two binding sites.
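The allosteric fit reported above can be reproduced with standard nonlinear regression. The sketch below uses SciPy rather than GraphPad Prism, and the titration data are placeholders; only the form of the Hill model and the fitted parameters (K0.5 and the Hill coefficient) correspond to the analysis described in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bmax, k_half, n):
    """Allosteric (Hill) binding model: signal as a function of ligand concentration."""
    return bmax * conc**n / (k_half**n + conc**n)

# Placeholder percent-increase-in-fluorescence data for a TNP-ATP titration (uM)
tnp_atp_uM = np.array([0.1, 0.5, 1, 2, 3, 5, 7.5, 10, 15, 20])
signal = np.array([2, 10, 22, 45, 62, 80, 90, 95, 99, 100])

# Fit Bmax, K0.5, and the Hill coefficient by nonlinear regression
popt, pcov = curve_fit(hill, tnp_atp_uM, signal, p0=[100.0, 3.0, 1.5])
bmax, k_half, n_hill = popt
print(f"Bmax = {bmax:.1f}, K0.5 = {k_half:.2f} uM, Hill coefficient = {n_hill:.2f}")
```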
To determine the effect of Walker A mutations on TNP-ATP binding, titrations were also carried out with the single (NBD1 or NBD2) and double (NBD1/NBD2) mutant ABCF3 proteins. Lysine to arginine substitution mutations in the Walker A motif in each NBD of ABCF3 (K216R or K531R) resulted in higher K0.5 values, implying a lower binding affinity for TNP-ATP, as expected (Fig. 3, D and H). The K216R (NBD1) mutant protein bound TNP-ATP with a 2-fold higher K0.5 than WT ABCF3, whereas the K531R (NBD2) mutant protein showed about a 5-fold higher K0.5 (Fig. 3, D and H). Since the conservative lysine to arginine mutations described above resulted in only limited loss of function of ABCF3 (especially for the NBD1 K216R mutant), nonconservative mutations of lysine to alanine were also constructed to further examine the role of each NBD. As expected, these mutations (K216A, K531A, and K216A/K531A) produced a much more drastic effect on TNP-ATP binding, resulting in incomplete saturation when each protein was titrated with increasing concentrations of TNP-ATP (Fig. 3E). Moreover, the K0.5 value in each case was significantly higher as compared with the lysine to arginine mutations, and the Hill coefficient in each case was <1.0 (Fig. 3H, shown in red font). The combination of the nonsaturating binding curves seen in Fig. 3E and the high error in K0.5 values reported by GraphPad in Fig. 3H suggested that these K0.5 values are most likely underestimated. Overall, these results imply that TNP-ATP binding activity is severely compromised in proteins with the nonconservative lysine to alanine mutations in either NBD and confirm a role for each NBD of ABCF3 in ATP binding.
Although the Walker B motif of ABC proteins plays an important role in ATP hydrolysis (described below), the effect of single (E353Q or E636Q) and double (E353Q/E636Q) point mutations in Walker B on TNP-ATP binding was also investigated (Fig. 3F). As expected, the effect of conservative Walker B mutations on TNP-ATP binding was less severe than that observed for the conservative Walker A mutations. These three mutants displayed a slightly enhanced K0.5, with the double mutant showing the largest increase (Fig. 3H). Nonconservative Walker B mutations (E353A, E636A, and E353A/E636A) were also examined for their effect on TNP-ATP binding. The effect of these mutations was also not as severe as that observed with the nonconservative Walker A mutations (Fig. 3, G and H).
Role of the two NBDs of ABCF3 in ATP hydrolysis
The effect of the Walker A mutations (K216R, K531R, K216A, and K531A) or the Walker B mutations (E353Q, E636Q, E353A, and E636A) on ATPase activity was next determined. The most drastic effect on basal activity was observed for the single nonconservative point mutation, K216A, in NBD1, which resulted in less than 30% residual activity (Fig. 4A, column 1, highlighted in red font). In contrast, a protein with the single K531A mutation in NBD2 retained 100% of the basal activity, whereas the double mutation K216A/K531A showed less than 30% activity, similar to K216A (Fig. 4A, column 1). ABCF3 proteins containing conservative double Walker A (Lys to Arg) or Walker B (Glu to Gln) mutations retained about 60–70% residual activity. The double Walker B (nonconservative Glu to Ala) mutant protein, however, showed normal basal activity for unexplained reasons (Fig. 4A, column 1, highlighted in red font). Because the basal activity does not represent specific ligand-stimulated activity, the finding that the effect of various point mutations on basal activity varied is not surprising.
The ATPase activity of WT ABCF3 was previously shown to be stimulated by sphingosine (Fig. 2A), and the effect of sphingosine on the activities of the Walker A and Walker B mutant proteins was next determined. The NBD1 and NBD2 mutant proteins behaved differently after addition of sphingosine. Although the ATPase activity of WT ABCF3 was stimulated about 3-fold by sphingosine, the activity of the NBD1 mutant K216A was stimulated 15-fold compared with its reduced basal activity (Fig. 4A, compare columns 1 and 3). The overall stimulated activity of K216A (626 nmol/min/mg) was 1.7-fold higher than the stimulated WT protein activity (367 nmol/min/mg). In contrast, the activity of the corresponding NBD2 K531A mutant protein was inhibited 3-fold by sphingosine, and the activity of the double K216A/K531A mutant protein was unaffected (Fig. 4A, columns 1 and 3). The protein with the conservative Walker A mutation K216R in NBD1 also showed about a 3-fold stimulation of activity, whereas the K531R NBD2 mutant protein showed a 2-fold decrease (Fig. 4A, columns 1 and 3). The Walker B NBD1 mutant proteins (E353A and E353Q) also showed a 4–6-fold stimulation in activity, whereas the activities of the NBD2 mutant proteins (E636A and E636Q) were decreased by about 1.5-fold, overall indicating a similar trend for the NBD1 and NBD2 mutations. Since the nonconservative double Walker A mutation K216A/K531A was most detrimental to the basal (Fig. 4A, column 1) and sphingosine-stimulated (column 3) ATPase activities, the residual activity of this mutant likely represents background or nonspecific activity. Therefore, the ATPase activity data were normalized by subtracting the basal and stimulated activity of the double mutant from the corresponding activities of WT ABCF3 and all other mutants (Fig. 4A, columns 2 and 4). A scatter plot of the normalized basal and sphingosine-stimulated ATPase activities of the WT and mutants is shown in Fig. 4B. After normalization, the ATPase activity trends remained the same. The activity of the different NBD1 mutants was stimulated by sphingosine, and the activity of the different NBD2 mutants was inhibited, although the degree of fold-stimulation or inhibition was altered to varying degrees for the different mutant proteins. For example, the normalized activity of the K216A mutant protein with sphingosine was on average nearly 300-fold higher compared with its basal activity (Fig. 4A, column 4). This is due to the raw basal activities of the K216A and K216A/K531A mutants being very similar, as stated above, and thus after normalization K216A exhibited minimal basal activity (Fig. 4A, column 2), resulting in a much higher fold-change of stimulated activity with a broader range (164–337) as shown in Fig. 5.
To determine whether sphingosine enhances the catalytic activity of the NBD1 mutant K216A by increasing its affinity for ATP, 5 µM purified WT or K216A protein was titrated with increasing concentrations of TNP-ATP in the presence of 15 µM sphingosine. While the addition of sphingosine did not produce a significant change in the binding affinity of WT ABCF3 for TNP-ATP (Fig. 4C), the saturation curve no longer exhibited sigmoidal behavior, and the kinetics yielded a Hill coefficient of 1.0 instead of the 1.8 seen in the absence of sphingosine (Fig. 4G). As predicted, the binding affinity of the NBD1 mutant K216A for TNP-ATP was significantly enhanced by the presence of sphingosine (Fig. 4, D and G). Moreover, the binding kinetics of the K216A mutant exhibited saturable binding, which is in contrast to the incomplete saturation seen in the absence of the ligand (Fig. 4D). The K0.5 for K216A in the presence of sphingosine was about 10-fold lower than that seen in the absence of sphingosine and was now in the same range as for WT ABCF3. Despite the very high binding affinity, the binding curves were not sigmoidal, and the kinetic data showed a Hill coefficient of 1.1 (Fig. 4G). Surprisingly, addition of sphingosine also resulted in saturable binding of TNP-ATP to the NBD2 mutant K531A and the double mutant K216A/K531A with significantly reduced K0.5 values (Fig. 4, E-G), even though the ATPase activities of these mutants were not stimulated by sphingosine (Fig. 4, A and B).
The two ATP-binding pockets in WT ABCF3 protein are shown as P1 and P2 in the linear schematic shown in Fig. 5A, and the negative effect of point mutations on pocket function is indicated in Fig. 5, B-D. The accompanying table in Fig. 5 summarizes the differential effects of NBD1 and NBD2 mutations on TNP-ATP binding and the ATPase activities shown in Fig. 4. Based on the data shown in Figs. 4 and 5, A-D, a model of the function of each pocket was generated (Fig. 5E). The ATP-binding pockets are expected to be located at the interface of NBD1 and NBD2 in ABCF3 and to be formed by a head-to-tail interaction previously seen in other ABC proteins (12). Pocket 1 (P1) is formed by association of Walker A and Walker B regions of NBD1 with the signature motif of NBD2, whereas pocket 2 (P2) contains the opposite arrangement. The possible implications of this model for the catalytic mechanism of ABCF3 are discussed later.
Co-expression of OAS1B and ABCF3 in bacteria
OAS1B protein was previously shown to be localized to the ER membrane in mammalian cells (9). This is proposed to result from the presence of a putative TM domain located at the C terminus of OAS1B (9). As bacteria contain only a cell membrane and lack organelle membranes, we hypothesized that an ABCF3-OAS1B complex formed in bacterial cells might localize to the cell membrane and provide a model for studying function, including lipid transport, by this complex. Co-expression of OAS1B and ABCF3 was analyzed in Escherichia coli cells. The expression of full-length oas1b alone from pGEX-oas1b at 20°C resulted in complete growth arrest within 30 min of induction of protein expression in E. coli cells; however, expression of the truncated OAS1B protein lacking the putative TM domain (from pGEX-oas1b(Δtm)) induced no growth inhibition (Fig. 6A). Western blot analysis showed significantly higher levels of the OAS1B(ΔTM) protein compared with the full-length OAS1B (Fig. 6B, compare lanes 2-4 with lanes 5-7). This result was expected because samples used in Fig. 6B, lanes 2-4, were derived from viable cells, whereas samples in lanes 5-7 were derived from growth-inhibited cells. Overall, the differential growth effect observed was attributed to the presence of a TMD on the full-length OAS1B protein. Most of the OAS1B(ΔTM) protein was sequestered in the inclusion body (IB) fraction (Fig. 6B).
The growth inhibitory effect of full-length OAS1B was also analyzed in two other bacterial expression systems. When full-length OAS1B was expressed at 20°C from the extremely low-expression, pACYC-based pSU2718 vector (60), growth inhibition was initially seen, but the cells recovered after about 1 h of induction (Fig. 6A). Expression from pSU2718-oas1b at 37°C, however, resulted in no growth inhibition (Fig. 6A). In contrast, expression of full-length OAS1B from the higher copy number pED-oas1b clone at 37°C resulted in severe growth inhibition that was not reversed until 3 h (Fig. 6C). Growth inhibition, although to a lesser extent, was also seen when OAS1B was expressed at 30°C (Fig. 6D). Expression of ABCF3 alone did not have a negative effect on bacterial growth under any of the tested conditions (Fig. 6, C and D).

[Fig. 6 legend (panels A-D): A, Rosetta 2(DE3)pLysS cells containing pGEX-oas1b or pGEX-oas1b(Δtm), and BL21 cells containing pSU2718 or pSU2718-oas1b, were grown at 37°C to mid-log phase (A600 = 0.6) and induced with 0.25 mM IPTG for 3 h at 20°C; BL21 cells containing pSU2718 or pSU2718-oas1b were also separately induced at 37°C, and cell growth was monitored for 3 h after induction (a representative growth experiment is shown). B, Western blot analysis with anti-OAS1 antibodies (1:500) of the inclusion body (I.B.), cytosol (C), and membrane (M) fractions of pGEX-oas1b(Δtm)- or pGEX-oas1b-containing cells collected at the 3-h time point in A; 25 µg of each sample was loaded on 10% SDS-polyacrylamide gels (lane 1, marker; lanes 2-4, OAS1B(ΔTM); lanes 5-7, OAS1B). C and D, effect of co-expression of OAS1B and ABCF3 on growth: E. coli Rosetta 2(DE3)pLysS cells containing pED, pED-oas1b, pED-abcf3, or pED-oas1b-abcf3 were grown at 37°C to mid-log phase (A600 = 0.6) and induced with 0.25 mM IPTG for 3 h at 37°C (C) or 30°C (D), with growth monitored at A600 for 3 h after induction.]
To determine whether co-expression of abcf3 would impact the growth inhibitory phenotype of oas1b expression, the growth of E. coli transformed with the pED clones expressing abcf3 (pED-abcf3), oas1b (pED-oas1b), or both abcf3 and oas1b (pED-oas1b-abcf3) genes was analyzed at 37 or 30°C. Interestingly, co-expression of oas1b and abcf3 completely alleviated cell growth inhibition at both 37°C (Fig. 6C) and 30°C (Fig. 6D), suggesting that an intracellular interaction between ABCF3 and OAS1B had occurred.
The cellular distribution of each expressed protein was next analyzed. The cells induced at 30 or 37°C were lysed, and the cytosolic, membrane, and IB fractions were prepared. The proteins in each fraction were separated by SDS-PAGE and detected by Western blotting with either anti-OAS1 or anti-ABCF3 antibody (Fig. 6, E-H). An 8–10-fold increase in the level of total (T) OAS1B in cells co-expressing oas1b and abcf3, compared with that in cells expressing oas1b alone, was observed both at 30 and 37°C (Fig. 6, G and H, compare lanes 5 and 9, and K and L, T). However, the majority of the OAS1B protein produced under co-expression conditions was found in the IB fraction, with about a 25-fold increase in the accumulation of OAS1B in IB at 37°C compared with expression of OAS1B alone (Fig. 6, G, compare lanes 2 and 6, and K, IB). At 30°C, a 20-fold higher level of OAS1B was observed in the IB fraction (Fig. 6, H, compare lanes 2 and 6; and L, IB) with lower OAS1B levels detected in the cytosolic and membrane fractions. The results indicate that due to the sequestration of overexpressed OAS1B in inclusion bodies at both temperatures, OAS1B localization to the membrane remained the same or decreased in co-expressing cells. Moreover, co-expression at 30 or 37°C had little or no effect on the stability or cellular distribution of ABCF3 (Fig. 6, I and J).
Discussion
Recent studies have demonstrated the involvement of the full-length OAS1B protein in conferring a flavivirus resistance phenotype in mice (9,61,62). In yeast two-hybrid and subsequent in vitro pulldown experiments, ABCF3 and ORP1L were identified as potential OAS1B partners that may play a role in the flavivirus resistance mechanism (9). However, the specific roles of these partners in the resistance phenotype have not been determined.
In this study, the nonhydrolyzable ATP analog TNP-ATP was used to gain an understanding of the nucleotide-binding properties of ABCF3. Interestingly, we found that TNP-ATP binding to ABCF3 follows allosteric kinetics and exhibits positive cooperativity with a Hill coefficient of 1.8. The two NBDs in ABCF3 are each thought to participate in forming an ATP-binding pocket (Fig. 5E), and the data obtained suggest cooperativity between the two pockets, with binding of a nucleotide to one pocket in ABCF3 increasing the binding affinity of the other pocket. Conservative mutations (Lys to Arg) in the Walker A motif of either NBD resulted in a decrease in affinity for TNP-ATP, implying a role for each NBD in nucleotide binding. However, the NBD2 mutation (K531R) produced a much larger effect on TNP-ATP binding, suggesting an unequal contribution of the two NBDs.
Results showing an unequal contribution of the two NBDs to the function of the bacterial ABCF protein Vga(A) were previously reported (38). Moreover, the NBD2 mutation in Vga(A) was found to be more detrimental than the NBD1 mutation, as seen in the case of ABCF3.
We determined that ABCF3 is an active ATPase with a basal ATPase activity of about 130 nmol/min/mg. Modulation of ABCF3 activity by several lipids and alkyl ether lipid-based amphiphilic drugs was observed, suggesting an ability of ABCF3 to directly bind these lipids and drugs. Although sphingosine, sphingomyelin, PAF, and LPC enhanced the activity, the alkyl ether lipids miltefosine, edelfosine, and perifosine, as well as LPI, lyso-PAF, and cholesterol either inhibited activity or produced a biphasic response. Alkyl ether lipids are derived from the glycerophospholipid LPC (63), and the results suggest that small changes in lipid structure can produce different effects on ABCF3 activity. Although it is currently not understood why some lipids enhance while others inhibit ABCF3 ATPase activity, differential effects of different substrates on the activities of other ABC proteins have been observed (45,64,65). Strong inhibition of the ATPase activity of Vga(A) and other ABCF proteins by their antibiotic substrates reported previously also suggested direct interaction with their substrates (22,38).
Point mutations in the NBD1 and NBD2 of ABCF3 affected both basal and ligand-stimulated ATPase activity differently, providing further evidence for the asymmetric nature of the two NBDs. Specifically, the NBD1 mutant K216A protein (containing intact pocket 2, Fig. 5B) exhibited significantly reduced basal activity, whereas the NBD2 mutant K531A protein (containing intact pocket 1, Fig. 5C) showed full basal activity. Furthermore, addition of sphingosine to proteins containing mutations in NBD1 resulted in a significantly higher stimulation of activity than observed with the WT protein (Fig. 5B), while the activity of proteins containing NBD2 mutations was not only unstimulated but was inhibited in response to sphingosine (Fig. 5C). Sphingosine also did not stimulate the ATPase activity of the K216A/K531A double mutant (Fig. 5D). Based on these observations, we assume that the 100% basal ATPase activity seen with the K531A mutant results from the intact pocket P1 (Fig. 5C), and the high sphingosine stimulation seen with the K216A mutant comes from the intact pocket P2 (Fig. 5B). Therefore, we propose that pocket 1 is the site of basal catalysis, whereas pocket 2 engages in ligand-stimulated ATP hydrolysis (Fig. 5E). The inhibition of the ATPase activities of the NBD2 mutant proteins also suggests that sphingosine binding produces a dual effect, stimulating the ATPase activity of pocket 2 while inhibiting the activity of pocket 1 (Fig. 5, C and E).
The above data are consistent with the TNP-ATP binding analysis conducted in the presence of sphingosine. While sphingosine restored the binding affinity of the K216A mutant protein to WT levels, the binding occurred with a Hill coefficient of 1.1, suggesting that the enhanced TNP-ATP binding in the presence of sphingosine occurs predominantly to the intact pocket 2 (Fig. 5B). Interestingly, sphingosine-dependent TNP-ATP binding to WT ABCF3 also demonstrated a Hill coefficient of 1.0 in contrast to the cooperative binding seen in the absence of the ligand (Fig. 5A), indicating that in the presence of sphingosine only one site preferentially binds ATP, and this active site corresponds to pocket 2. Surprisingly, addition of sphingosine to the pocket 2 mutant (K531A) or the double mutant (K216A/K531A) protein also resulted in overall high affinity TNP-ATP binding. In contrast to the K216A mutant protein, however, neither the pocket 2 mutant nor the double mutant showed any stimulation of ATPase activity by sphingosine (Fig. 5, C and D). Because TNP-ATP binding normally occurs with a much higher affinity than ATP binding (44,55,56), the simplest explanation for these data may be that sphingosine can enhance TNP-ATP binding to pocket 2 despite the presence of the K531A mutation, but it is unable to restore its catalytic function. Overall, these data indicate the importance of an intact pocket 2 in sphingosine-stimulated ATP binding and catalysis by ABCF3.
Both co-immunoprecipitation of OAS1B and ABCF3 from mammalian cells and co-localization of OAS1B and ABCF3 at the ER membrane of mammalian cells were previously shown (9). The OAS1B-tr protein, which does not contain a C-terminal transmembrane domain, is unable to localize to the ER and does not confer flavivirus resistance. Knockdown of ABCF3 in infected cells resulted in an increase in WNV yields but did not affect the yields of two nonflaviviruses, indicating that the action of ABCF3 is specific for flaviviruses. Furthermore, the effect of ABCF3 knockdown on WNV yields was observed in WNV-infected MEFs expressing full-length OAS1B but not in infected MEFs expressing OAS1B-tr, suggesting that interaction between ABCF3 and OAS1B at the ER plays a role in the OAS1B-mediated flavivirus resistance phenotype in infected cells (9). The data presented here provide strong, but indirect, evidence for interaction between OAS1B and ABCF3 in bacterial cells. We showed that the expression of OAS1B alone in E. coli results in varying degrees of growth inhibition, including complete growth arrest, depending on the copy number of the vector and the temperature of expression. This phenotype is consistent with the inhibitory effect produced by overexpression of some membrane proteins in bacterial cells (58,66). Removal of the C-terminal domain of OAS1B containing the putative TM domain resulted in alleviation of growth inhibition, providing support for the proposal that OAS1B is a membrane-embedded protein (9). Furthermore, co-expression of full-length OAS1B with ABCF3 rescued the growth inhibitory phenotype produced by OAS1B expression alone at 30°C or 37°C, suggesting interaction between OAS1B and ABCF3. Co-expression also unexpectedly resulted in a striking increase in the cellular levels of OAS1B, indicating that ABCF3 protects OAS1B from degradation by cellular proteases. The majority of the OAS1B protein stabilized under the co-expression growth conditions at either 30 or 37°C was, however, sequestered in an insoluble fraction in the cell. It is well-documented that the expression or overexpression of a heterologous membrane protein in bacteria can often result in toxic effects (67,68), proteolysis by housekeeping proteases (69–72), and/or accumulation of the overexpressed protein in inclusion bodies (58,59). Co-expression with an interacting partner protein has been previously shown to result in alleviation of toxicity and protection from proteolysis (69,72,73). We saw evidence of all these phenomena under different expression conditions: toxicity and proteolysis of OAS1B when it was expressed alone but alleviation of growth inhibition and stabilization of OAS1B, followed by sequestration in inclusion bodies, when co-expressed with ABCF3.
To our knowledge, interaction between eukaryotic proteins in bacterial cells has not been shown previously. The pETDuet-1-based bacterial co-expression system described here is not only ideal for examining protein complexes (74–76), but it also offers several distinct advantages for advancing knowledge of the two proteins. The clear growth phenotype (growth inhibition/rescue) could be used to develop a genetic screen for further analyzing the domains involved in interaction between OAS1B and ABCF3. For example, the linker domain of ABCF3 may play a role in interaction with OAS1B. The effect of mutations and/or deletions in this and other domains of either ABCF3 or OAS1B could be tested in the bacterial system by a simple growth inhibition/rescue assay. Furthermore, stabilization of large amounts of OAS1B by ABCF3 and the resulting sequestration in inclusion bodies was unexpected, and this could be utilized to prepare large amounts of OAS1B for biochemical and structural analysis in the future. Some eukaryotic proteins have previously been genetically manipulated to promote inclusion body formation and then recovered from inclusion bodies by solubilization and refolding into a functional form (77,78). Functional integration of OAS1B and ABCF3 into bacterial membranes may also be achievable in the future through further optimization of low-level expression (66,79,80), as was previously shown for G protein-coupled receptors (81–83).
In conclusion, we showed that the mouse ABCF3 is an active ATPase, and its activity is modulated by several lipids, including sphingosine and sphingomyelin, two lipids previously shown to have altered levels in flavivirus-infected cells (50–52). High levels of ATP have been shown to be required for efficient viral RNA synthesis inside membrane replication vesicles (84,85). The dengue NS3 helicase unwinds dsRNA templates in the presence of high levels of ATP but anneals complementary RNA strands when ATP levels are low (86). Although OAS1B protein is not an active 2-5A synthetase, we found it to have an ATPase activity of about 90 nmol/min/mg (Fig. S4, A-C). Therefore, the ABCF3-OAS1B complex, which is anchored in the endoplasmic reticular membrane, may contribute to the reduced level of viral RNA production characteristic of the flavivirus resistance phenotype through its ATP binding and hydrolysis activities, which may be modulated by lipids as shown in this study.
Experimental procedures

[…] purchased from Thermo Fisher Scientific. Nucleotides, pH 7.5, and drugs were prepared in distilled deionized water unless otherwise stated. Cholesterol, sphingosine, sphingosine 1-phosphate, PAF, lyso-PAF, LPC, LPI, alkyl ether lipids, ceramide, dihydroceramide, and quinidine were prepared in ethanol prior to use. 10:0 PC and 14:0 PE were prepared in a buffer consisting of 50 mM MOPS, 125 mM NaCl, pH 7.5, and sonicated before use.
Subcloning of abcf3 and oas1b
A TOPO XL PCR cloning kit (Invitrogen) was used to clone abcf3 or oas1b into the pCR-XL-TOPO vector (pCR). The abcf3 gene was subcloned from pCR-abcf3 into pUC18 using EcoRI and XbaI, into pET-Duet-1 (pED) using NdeI and AvrII, and into pGEX-6p-1 (pGEX) using EcoRI and XhoI restriction sites. abcf3 was then subcloned from pGEX-abcf3 into pET28a using EcoRI and XhoI restriction sites. The pGEX-abcf3 and pET28a-abcf3 clones express ABCF3 containing an N-terminal GST-tag and His-tag, respectively.
The oas1b gene was subcloned from pCR-oas1b into pSU2718 using PstI and HindIII, into pED using NcoI and BamHI, and into the pGEX-6p-1 vector using BamHI and EcoRI restriction sites. A C-terminally truncated version of oas1b, named oas1b(Δtm), was amplified using a forward primer containing a BamHI restriction site and a reverse primer containing a stop codon after nucleotide 1059 of oas1b followed by an EcoRI restriction site. The oas1b(Δtm) fragment was then subcloned into the pGEX-6p-1 vector using BamHI and EcoRI restriction sites to generate pGEX-oas1b(Δtm). To create the double-expression clone pED-oas1b-abcf3, oas1b from pCR-oas1b was subcloned into pED-abcf3 using NcoI and BamHI restriction sites. The pED clones (referred to as pED1b, pEDf3, and pED1bf3) express ABCF3 and/or OAS1B protein without a tag.
Media, growth, isolation, and analysis of cell fractions
E. coli Rosetta 2(DE3)pLysS cells containing pED, pED1b, pEDf3, or pED1bf3 were grown in 50 ml of LB medium with ampicillin (100 µg/ml) at 37°C overnight. The next day these cultures were diluted 1:50 into 250 ml of fresh LB with ampicillin in a 1-liter flask and incubated at 37°C until the mid-log phase was reached (A600 = 0.6). The cultures were then induced with 0.25 mM IPTG and incubated at 37 or 30°C for 3 h following induction. Cells in 100 ml of culture media obtained under different growth conditions were pelleted by centrifugation. The pellets were resuspended in 3 ml of 1× PBS buffer, pH 7.4, containing 20% glycerol (Buffer A), 1 mM DTT, and protease inhibitor cocktail (Roche Diagnostics). Samples were lysed twice by passage through a mini-French pressure cell (Thermo Electron Corp.) at 16,000 p.s.i. to obtain a total cell lysate. After centrifugation at 13,000 × g for 20 min at 4°C, the inclusion body (pellet) was collected, and the supernatant was centrifuged at 100,000 × g for 1 h to obtain the cytosol (supernatant) and the membrane (pellet). The membrane and the inclusion body pellets were resuspended in 250 and 500 µl, respectively, of Buffer A containing 1 mM DTT. The protein concentration of each fraction was determined with a DC™ assay (Bio-Rad).
Western blot analysis
ABCF3 or OAS1B in cellular fractions and as purified proteins were detected by Western blotting. Proteins were separated by 10% SDS-PAGE, and then transferred to a nitrocellulose membrane for 16 h at 4°C. An equal amount of protein was loaded per well unless otherwise indicated in the figure legends. After transfer, the membranes were blocked with 0.2% nonfat dry milk for ABCF3 and 5% BSA for OAS1B. Membranes were incubated with either anti-OAS1 antibody at 4°C for at least 16 h or with anti-ABCF3 antibody for 1 h at room temperature. Rabbit anti-ABCF3 polyclonal antibody (Bethyl Laboratories, Inc.) was diluted 1:2000 with 0.2% nonfat dry milk in 1× TTBS (1% Tween 20 in 20 mM Tris, 500 mM NaCl, pH 7.5). Rabbit anti-OAS1 polyclonal antibody (Abcam Inc.) was diluted to 1:500 with 1% BSA in 1× TTBS. Secondary anti-rabbit goat IgG antibodies obtained from Bio-Rad were diluted to 1:3000 in 0.2% nonfat dry milk in 1× TTBS for ABCF3 or 1% BSA in 1× TTBS for OAS1B detection.
Densitometric scanning and quantification
The nitrocellulose membranes were scanned, and Multi-Gauge version 2.3 software was used for quantification of protein band intensity. The expression of ABCF3 or OAS1B under single expression conditions was designated as 1.0. A fold-change in expression of each protein under double expression conditions was calculated by dividing the amount of each protein in a double expression sample by the amount in a single expression sample from the same gel. Data from at least three independent experiments were combined to obtain average relative expression values.
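A minimal sketch of this fold-change calculation is given below, with placeholder band intensities standing in for the Multi-Gauge quantification values.

```python
# Fold-change in band intensity: double-expression sample divided by the
# single-expression sample from the same gel, then averaged over replicate gels.
# The intensities below are placeholders for Multi-Gauge quantification values.
single_expression = [1520.0, 1480.0, 1610.0]    # arbitrary densitometry units, one value per gel
double_expression = [13200.0, 12100.0, 14800.0]

fold_changes = [d / s for d, s in zip(double_expression, single_expression)]
mean_fold_change = sum(fold_changes) / len(fold_changes)

print("fold-change per gel:", [round(f, 1) for f in fold_changes])
print(f"mean fold-change = {mean_fold_change:.1f}")
```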
Purification of GST-tagged ABCF3
E. coli Rosetta 2(DE3)pLysS cells containing pGEX plasmids were grown in 1 liter of LB medium with ampicillin (100 µg/ml) at 37°C until mid-log phase was reached (A600 = 0.6) and then induced with 0.25 mM IPTG at 20°C for 16 h. ABCF3 protein was purified after expression from the pGEXf3 clone according to the manufacturer's instructions (GE Healthcare) with some modifications. The cell pellets were resuspended in 50 ml of Buffer A containing 10 mM DTT and complete protease inhibitor cocktail. The cells were broken by two passages through a French press followed by centrifugation as described above. The supernatant was mixed with 1.3 ml of washed Glutathione Sepharose (GE Healthcare) for 16 h in a tube revolver at 10 rpm and then transferred to a 10-ml gravity-flow column. To obtain uncleaved GST-ABCF3 protein, the column was washed with three 10-column volumes of Buffer A with 1 mM DTT and eluted twice with 1 ml of 10 mM glutathione in 50 mM Tris-HCl, pH 8.0, with 20% glycerol. To obtain ABCF3 without the GST tag, the column was washed five times with 10 column volumes, three times with Buffer A, and two times with 1× cleavage buffer (GE Healthcare) containing 20% glycerol and 1 mM DTT (Buffer B). The washed Sepharose was then removed from the column, mixed with 920 µl of Buffer B and 80 µl of PreScission protease in an Eppendorf tube, and incubated on a tube revolver for 4 h (10 rpm) at 4°C. The Sepharose was then added back to the column, and the cleaved ABCF3 was eluted from the column twice with 1 ml of Buffer B. The protein concentration was determined using the DC™ assay (Bio-Rad), and aliquots were stored at −80°C until use.
Purification of His-tagged ABCF3
E. coli HMS174(DE3) cells transformed with pET28a DNA encoding the WT or a mutant abcf3 gene were grown in 1 liter of LB medium with kanamycin (30 µg/ml) at 37°C until mid-log phase was reached (A600 = 0.6) and induced with 0.25 mM IPTG at 20°C for 16 h. The cells were pelleted, the cell pellet was resuspended in 10 ml of Buffer A containing 1 mM DTT and complete protease inhibitor, and the cells were lysed with a French press followed by centrifugation as described above. The supernatant was then mixed with 2 ml of Ni-NTA-agarose (previously washed with 40 ml of Buffer A containing 10 mM imidazole) in a closed 10-ml gravity-flow column on a tube revolver at 10 rpm for 1 h at 4°C. The flow-through was collected, and the column was washed with 50 ml of 30 mM imidazole and 1 ml of 100 mM imidazole. The ABCF3 protein was then eluted twice with 1 ml of Buffer A containing 200 mM imidazole. The two elutions were separately dialyzed against 500 ml of Buffer A overnight and again for 2 h the next day before collection. Protein concentration was determined by the DC™ assay (Bio-Rad), and aliquots were stored at −80°C until use.
Site-directed mutagenesis of the Walker A or Walker B motifs of ABCF3
Site-directed mutagenesis of the abcf3 gene was performed using a QuickChange site-directed mutagenesis kit (Stratagene, La Jolla, CA). Mutations in the Walker A (K216A, K531A, K216R, and K531R) or Walker B (E353A, E636A, E353Q, and E636Q) domain of each NBD were created using the pET28a-abcf3 plasmid as a template. Plasmid DNA with a single mutation in one NBD was used as the template to make a second mutation in the other NBD creating the double Walker A (K216A/K531A and K216R/K531R) or the double Walker B (E353A/E636A and E353Q/E636Q) mutants.
ATPase activity assay
The ATPase activity of 5 µg of purified WT or mutant ABCF3 protein was determined using a coupled enzyme assay, as described previously (44,87). The slope of the reaction was measured between 200 and 400 s and used to determine the ATPase activity in nmol/min/mg. Different concentrations of a ligand were added to 5 µg of purified ABCF3 in a 1-ml reaction volume.
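A hedged sketch of how such a rate can be converted to a specific activity is shown below. It assumes an NADH-coupled assay in which one NADH is oxidized per ATP hydrolyzed and uses the NADH extinction coefficient at 340 nm; these assay details, and the absorbance readings, are assumptions for illustration rather than values taken from refs. 44 and 87.

```python
import numpy as np

# Assumed NADH-coupled assay: one NADH oxidized per ATP hydrolyzed, monitored at 340 nm.
EPSILON_NADH = 6220.0   # M^-1 cm^-1 at 340 nm (assumed value)
PATH_CM = 1.0           # cuvette path length in cm (assumed)
PROTEIN_MG = 0.005      # 5 ug of ABCF3 in the reaction
VOLUME_L = 0.001        # 1-ml reaction volume

# Hypothetical A340 readings taken between 200 and 400 s
time_s = np.array([200.0, 250.0, 300.0, 350.0, 400.0])
a340   = np.array([0.9000, 0.8966, 0.8932, 0.8899, 0.8865])

# Linear slope of A340 versus time (absorbance units per second)
slope_per_s, _ = np.polyfit(time_s, a340, 1)

# Convert the slope to nmol ATP hydrolyzed per min per mg protein
rate_M_per_min    = abs(slope_per_s) * 60.0 / (EPSILON_NADH * PATH_CM)  # mol/L/min of NADH oxidized
rate_nmol_per_min = rate_M_per_min * VOLUME_L * 1e9                      # nmol/min in the cuvette
specific_activity = rate_nmol_per_min / PROTEIN_MG                       # nmol/min/mg

print(f"specific activity ~ {specific_activity:.0f} nmol/min/mg")
```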
Analysis of TNP-ATP binding to ABCF3
TNP-ATP binding assays were conducted with purified WT or mutant ABCF3 proteins. TNP-ATP (0.1 to 20 µM) was added sequentially to 5 µM ABCF3 in Buffer A in a total starting volume of 500 µl in each titration. The titrations were performed on an Alphascan-2 spectrofluorometer (Photon Technology International, London, Ontario, Canada) with the following settings: 1.00-mm slit widths at 75 watts with 403 nm excitation and 450–600 nm emission. To determine the increase in fluorescence resulting from TNP-ATP binding to the protein, values obtained from a negative control titration without any added ABCF3 were subtracted from the respective fluorescence values obtained in reactions containing ABCF3. The fluorescence units obtained were then corrected for inner filter effects using Equation 1 (88):

F_{i,cor} = (F_i − F_B)(V_i / V_0) × 10^{0.5·b·(A_ex + A_em)}    (Eq. 1)

In Equation 1, F_{i,cor} is the fluorescence intensity value corrected for inner filter effects; F_i corresponds to the preliminary fluorescence values; F_B is the fluorescence for the blank (no protein) titration at a given point; V_0 is the starting sample volume; V_i is the sample volume at a given point in the titration; b is the optical cell path length measured in centimeters; A_ex is the absorbance at the 403 nm excitation wavelength; and A_em is the absorbance at the 548 nm emission wavelength.
Percent increase in fluorescence was then obtained by using Equation 2,

% increase = ((F_{i,cor} − F_{0,cor}) / F_{f,cor}) × 100   (Eq. 2)

In Equation 2, F_{i,cor} is the corrected fluorescence intensity at a given point in the titration, F_{0,cor} is the corrected fluorescence value at the first titration point, and F_{f,cor} is the final corrected fluorescence value for the titration. Nonlinear regression in GraphPad Prism 6 software was used to analyze binding kinetics based on a single-site, two-site, or allosteric model for binding.
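A single-site fit of the normalized fluorescence increase can also be reproduced outside of Prism; the sketch below uses SciPy's curve_fit with a one-site hyperbolic binding model. The ligand concentrations and responses are made-up placeholders rather than data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site(L, Bmax, Kd):
    """One-site specific binding: response versus free ligand concentration L (uM)."""
    return Bmax * L / (Kd + L)

# Hypothetical TNP-ATP titration (uM) and percent fluorescence increase.
L = np.array([0.1, 0.25, 0.5, 1, 2, 5, 10, 20])
pct = np.array([4, 9, 16, 27, 42, 66, 82, 93])

(Bmax, Kd), cov = curve_fit(one_site, L, pct, p0=[100.0, 2.0])
print(f"Bmax = {Bmax:.1f} %, Kd = {Kd:.2f} uM")
```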
TNP-ATP displacement assays
To determine whether TNP-ATP binds to the nucleotide-binding pocket(s) of ABCF3, titrations were performed with increasing concentrations of ATP (0.1-20 mM), ADP (0.1-20 mM), or AMP (0.1-20 mM). Briefly, 5 μM ABCF3 was mixed with 5 μM TNP-ATP and 10 mM MgCl2 in 500 μl of Buffer A, and the reaction was incubated at room temperature for 5 min before starting the assay (55, 56). Increasing amounts of nucleotide were then added to the sample, and the fluorescence was monitored. For each experiment, a blank titration (sample prepared without ABCF3) was also performed. The fluorescence values were corrected for inner filter effects according to Equation 1 above.
Intrinsic Trp fluorescence quenching analysis
Intrinsic Trp fluorescence of ABCF3 was determined on an Alphascan-2 spectrofluorometer (Photon Technology International, London, Ontario, Canada) with the following settings: 1.00-mm slit widths at 75 watts with 295 nm excitation and 310-370 nm emission. Quenching of intrinsic fluorescence by ATP or ADP was then determined by titrating increasing amounts of nucleotide (5 μM to 5 mM) into a 500-μl reaction volume containing Buffer A and 0.5 μM purified ABCF3 protein. Control titrations containing 10 μM NATA in the 500-μl reaction volume described above were also carried out with ATP or ADP to determine the degree of nonspecific quenching of tryptophan fluorescence. All fluorescence values obtained were corrected for inner filter effects with Equation 1, using the absorbance at 295 nm excitation for A_{ex} and at 330 nm emission for A_{em}. Percent quenching was then obtained with Equation 3,

% quenching = ((F_{0,cor} − F_{i,cor}) / F_{0,cor}) × 100   (Eq. 3)
In Equation 3, F_{i,cor} and F_{0,cor} are the same values as described above. Kinetic analysis was performed using nonlinear regression with GraphPad Prism 6 software, using one- or two-site binding kinetics.
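The percent-quenching calculation of Equation 3, combined with subtraction of the nonspecific quenching observed in the NATA control, can be expressed compactly as below; the correction step and all numbers are illustrative assumptions rather than the exact workflow used in the study.

```python
def percent_quenching(F0_cor, Fi_cor):
    """Equation 3: percent loss of corrected intrinsic Trp fluorescence."""
    return (F0_cor - Fi_cor) / F0_cor * 100.0

# Hypothetical ABCF3 and NATA-control values at one ATP concentration.
abcf3 = percent_quenching(F0_cor=1000.0, Fi_cor=780.0)   # 22 % total quenching
nata = percent_quenching(F0_cor=950.0, Fi_cor=902.5)      # 5 % nonspecific quenching
print(f"specific quenching ~ {abcf3 - nata:.1f} %")
```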
Pharmacokinetic properties of intranasal and injectable formulations of naloxone for community use: a systematic review
• The US FDA has approved two naloxone products for use by laypersons in community settings for emergency treatment of known or suspected opioid overdose: an intranasal spray with a concentrated naloxone dose of 2 or 4 mg in 0.1 ml and an auto-injector for intramuscular (im.) or subcutaneous (sc.) use with a naloxone dose of 0.4 or 2 mg.
• In the absence of head-to-head, comparative efficacy studies, which are not feasible for ethical and logistical reasons, pharmacokinetic data provide important information about effective doses and routes of administration of naloxone for opioid overdose reversal.
• In pharmacokinetic studies, both the approved intranasal spray and the im./sc. auto-injector demonstrated bioequivalence with a previously approved formulation, indicating that naloxone exposure was adequate to reverse an opioid overdose.
• Both the approved intranasal spray and the im./sc. auto-injector demonstrated sufficient plasma exposure within the first 15-20 min after administration.
• Usability studies with laypersons in simulated overdose conditions have found that more than 90% of participants were able to successfully administer naloxone using the approved intranasal spray or im./sc. auto-injector without prior training; however, these studies have identified critical errors with the proper assembly and use of unapproved intranasal kits, even when training had been provided.
• Approved intranasal naloxone is appropriate for most patients, with the exception of those with known nasal pathology (e.g., polyps and chronic intranasal drug use).
• Providing prescriptions for community-use naloxone may reduce future risk in patients who are receiving chronic opioid therapy for pain control or who have histories of illicit opioid use.
Aim: To assess the pharmacokinetic properties of community-use formulations of naloxone for emergency treatment of opioid overdose. Methods: Systematic literature review based on searches of established databases and congress archives. Results: Seven studies met inclusion criteria: two of US FDA-approved intramuscular (im.)/subcutaneous (sc.) auto-injectors, one of an FDA-approved intranasal spray, two of unapproved intranasal kits (syringe with atomizer attachment) and two of intranasal products in development. Conclusion: The pharmacokinetics of the im./sc. auto-injector 2 mg and the approved intranasal spray (2 and 4 mg) demonstrated rapid uptake and naloxone exposure exceeding that of the historic benchmark (0.4 mg im.), indicating that naloxone exposure was adequate for reversal of opioid overdose.
Background
Drug poisoning is the leading cause of accidental death in the USA and is driven largely by overdose of prescription or illicit opioids [1-3]. From 2000 to 2014, the rate of opioid-related (e.g., prescription analgesics and heroin) overdose deaths tripled [2], with a further increase of 16% observed from 2014 to 2015 [3]. A sharp increase was noted in overdose deaths related to fentanyl and fentanyl derivatives, potent synthetic opioid analgesics that can be manufactured or purchased illicitly [4-6]. Of 52,404 deaths caused by drug overdose in the USA in 2015, 63.1% (33,091 deaths) involved an opioid [3]. Each year, there are more than one million emergency department visits for drug poisoning in the USA [7]. From 2008 through 2011, 14% of emergency department visits for unintentional overdose were opioid related [7]. An analysis of the 2010 Nationwide Emergency Department Sample found that 67.8% of emergency department visits for opioid overdose involved prescription opioids, and 16.1% involved heroin (13.4% were unspecified and 2.7% involved multiple opioid types) [8].
Importance
Since its introduction more than 40 years ago, the opioid antagonist naloxone has been used to reverse respiratory and central nervous system depression resulting from opioid overdose [9]. Until 2014, naloxone was approved by the US FDA only in injectable formulations for use by trained healthcare professionals [10]. In response to the increase in fatalities caused by opioid overdose, government agencies and community organizations have worked to establish wider access to naloxone [11-13]. Unapproved intranasal kits contain an injectable formulation of naloxone (e.g., a prefilled syringe); to enable intranasal administration, the user must first attach an atomizer (manufactured by another company but provided in the kit) to the syringe [14]. Such kits have been increasingly available for public use [14] and have been employed successfully by first responders (e.g., emergency medical service personnel, police officers and bystanders) to reverse opioid overdose [15-20]. Although these products are FDA approved as injectables, they are not FDA approved for intranasal administration when included in a kit with an atomizer. Furthermore, little data have been collected on the bioavailability of naloxone when administered using these unapproved intranasal kits [14]. Importantly, human factors studies have found that many laypersons (i.e., individuals with no medical training) were unable to employ unapproved intranasal kits correctly, even after training [21,22]. For example, a prospective usability study of 42 healthy adults found that no participants (0%) could successfully administer a dose of naloxone using an unapproved intranasal kit before training, and fewer than 60% of participants were able to successfully administer a dose of naloxone using this kit after receiving training [21].
The FDA has approved two naloxone products for use by laypersons in community settings for emergency treatment of known or suspected opioid overdose: an auto-injector for intramuscular (im.) or subcutaneous (sc.) use with a naloxone dose of 0.4 or 2 mg (EVZIO®; Kaléo, Inc., VA, USA) [23,24] and an intranasal spray with a concentrated naloxone dose of 2 or 4 mg in 0.1 ml (NARCAN®; Adapt Pharma, Inc., PA, USA) [25]. The efficacy of naloxone for reversing opioid overdose is well established; therefore, FDA approval of these products was based on other data, including compliance with good manufacturing practice requirements for combination products (drug + device) [26], human factors studies demonstrating label comprehension and ease of use [27,28] and pharmacokinetic studies demonstrating adequate bioavailability [27].
Goals of this investigation
The purpose of this systematic review is to summarize the pharmacokinetic properties of formulations of naloxone for community use (i.e., formulations currently available or in commercial development for use by laypersons for opioid overdose reversal) as a means of understanding the speed of onset, adequacy and duration of the clinical effects. With the increasing availability of highly potent synthetic opioids, the naloxone dose required to reverse opioid overdose has increased, and multiple dosing has also become common [29,30]. Consequently, the approved naloxone products and products in development offer larger naloxone doses in the devices. Therefore, it is important to evaluate the pharmacokinetic properties of these new formulations to understand their potential role in reversing overdose of highly potent opioids. A secondary aim is to guide selection of the optimal naloxone product based on patient-specific and product-specific factors such as route of administration, formulation and dosing considerations for community use. Community-use formulations include the im./sc. auto-injector, the approved intranasal spray, unapproved intranasal kits and intranasal in-development products.
Methods
Searches of the MEDLINE and Embase® databases were conducted on 9 November 2017. Search terms included 'naloxone' and ('pharmacokinetic' OR 'pharmacokinetics'), with the dates of publication set as 2000 to present. Congress programs and abstract archives from January 2012 through October 2017 were accessed online for scientific meetings of pain medicine (American Academy of Pain Medicine and PAINWeek), addiction medicine (American Society of Addiction Medicine and Society for the Study of Addiction) and emergency medicine (American College of Emergency Physicians, National Association of EMS Physicians and Society for Academic Emergency Medicine) professionals. These scientific meetings were selected for review based on the authors' clinical and research expertise and the volume of material presented at these conferences relevant to the topic. Abstract and presentation titles were searched electronically for 'naloxone'. In addition, briefing documents from the 2016 FDA advisory committee meeting on naloxone and FDA product labels for naloxone products for community use were hand searched for pharmacokinetic studies not reported in other published or congress sources. This review included original research studies that were published in English and that reported prespecified pharmacokinetic parameters for a community-use formulation of naloxone administered to either human volunteers or patients. The PRISMA guidelines checklist was followed to comply with systematic review methodology. Prespecified pharmacokinetic variables included maximum plasma concentration (Cmax; ng/ml), time to Cmax (tmax; hours), area under the plasma concentration-time curve (AUC; ng•h/ml), terminal elimination half-life (t½; hours) and bioavailability (%). Cmax and AUC assess peak and overall drug exposure, respectively. Tmax is an indicator of the speed of onset, whereas t½ is an indicator of the duration of effect. Most studies with AUC data reported AUC from baseline extrapolated to infinity (AUC0-∞); therefore, AUC0-∞ was selected as the primary assessment of total naloxone exposure. If AUC0-∞ was not reported, AUC from baseline to the last measurable concentration (AUC0-t) was used. Relative bioavailability was based on AUC0-∞ data unless otherwise specified. For the prespecified pharmacokinetic variables, measures of central tendency (mean, geometric mean and median) and variability (percent coefficient of variation, 95% CI and range) were extracted from each study report and summarized. Pharmacokinetic variables reported in units that differed from those described above were converted as appropriate.
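Where studies reported parameters in other units, a simple normalization step suffices; the sketch below shows one way such conversions could be coded, with the field names and example values being hypothetical rather than taken from the included studies.

```python
def normalize_pk(record):
    """Convert a PK record to the review's units: Cmax ng/ml, tmax h, AUC ng*h/ml, t_half h."""
    out = dict(record)
    if record.get("tmax_unit") == "min":
        out["tmax"] = record["tmax"] / 60.0
        out["tmax_unit"] = "h"
    if record.get("auc_unit") == "ng*min/ml":
        out["auc"] = record["auc"] / 60.0
        out["auc_unit"] = "ng*h/ml"
    return out

# Hypothetical study report with tmax in minutes and AUC in ng*min/ml.
print(normalize_pk({"cmax": 3.1, "tmax": 30, "tmax_unit": "min",
                    "auc": 510, "auc_unit": "ng*min/ml", "t_half": 1.9}))
```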
Results
The literature search and study selection are described in Figure 1 [24,31-37]. Seven studies were included in this review [24,31-36]. Three studies with naloxone pharmacokinetic data [38-40] were excluded because they used study-specific, investigator-compounded agents that did not represent formulations or doses currently available for community use (or in development for community use). Table 1 provides a summary of the study designs and formulations/doses used. Naloxone pharmacokinetic data were obtained from two studies of the im./sc. auto-injector, one study of the approved intranasal product, two studies of unapproved intranasal kits and two studies of intranasal products in development. Results for the prespecified pharmacokinetic variables (Cmax, tmax, AUC, t½ and bioavailability) from each study are shown in Table 2 [24,31-36].
FDA-approved products for community use
Naloxone pharmacokinetics for the im./sc. auto-injector were evaluated in two studies that varied with regard to the naloxone doses included. A study of 30 healthy volunteers assessed 0.4 mg of naloxone via im./sc. auto-injector, with 0.4 mg of naloxone im. via standard syringe and needle as the reference product [31]. Pharmacokinetic parameters of the im./sc. auto-injector and im. syringe and needle were similar for mean Cmax (1.2 and 1.1 ng/ml, respectively), median tmax (0.25 and 0.33 h, respectively), mean AUC0-∞ (1.9 and 2.0 ng•h/ml, respectively) and mean t½ (1.3 and 1.4 h, respectively). The relative bioavailability of naloxone for the im./sc. auto-injector compared with im. syringe and needle was 98.3% (Table 2).
A separate study of 24 healthy volunteers evaluated im./sc. auto-injector doses of 0.4, 0.8 mg (administered as two injections of 0.4 mg) and 2 mg [24]. Mean Cmax and AUC0-∞ were dose proportional (Table 2) [24]. Median tmax and mean t½ were similar across doses. Naloxone pharmacokinetics for the approved intranasal spray were evaluated at various doses (2 mg [1 spray], 4 mg [as 1 or 2 sprays] and 8 mg [2 sprays]) in a study of 30 healthy volunteers, with 0.4 mg of naloxone im. via standard syringe and needle as the reference product [32]. Mean Cmax and AUC0-∞ were dose proportional for the approved intranasal spray (Table 2). Mean Cmax, AUC0-∞ and t½ were greater for all doses of the approved intranasal spray compared with the im. reference. Mean Cmax was 3.1 and 5.3 ng/ml, respectively, for the approved intranasal (single spray) 2 and 4 mg, compared with 0.9 ng/ml for the im. reference (Figure 2) [32]. Mean AUC0-∞ was 4.7 and 8.5 ng•h/ml, respectively, for the approved intranasal (single spray) 2 and 4 mg, compared with 1.8 ng•h/ml for the im. reference. In addition, the mean t½ was 1.9 and 2.2 h, respectively, for the approved intranasal (single spray) 2 and 4 mg, compared with 1.3 h for the im. reference. Median tmax was generally similar for the approved intranasal spray (0.3-0.5 h) and the im. reference (0.4 h). Early-stage plasma concentrations for the 4-mg dose of the approved intranasal spray relative to the im. reference are shown in Figure 3 [25]. Compared with
im. administration, the relative bioavailability of naloxone for the approved intranasal spray was 51.9% for 2 mg, 46.2% for 4 mg administered in one spray, 53.5% for 4 mg administered in two sprays of 2 mg and 43.9% for 8 mg (administered in two sprays of 4 mg).
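Dose-normalized relative bioavailability of this kind can be recomputed directly from the reported AUC values; the short sketch below does so for the 4-mg intranasal dose against the 0.4-mg im. reference using the mean AUC0-∞ values quoted above (the function name and rounding are our own, not part of the original analyses).

```python
def relative_bioavailability(auc_test, dose_test, auc_ref, dose_ref):
    """Dose-normalized relative bioavailability (%) from AUC0-inf values."""
    return (auc_test / dose_test) / (auc_ref / dose_ref) * 100.0

# Approved intranasal spray 4 mg (AUC 8.5 ng*h/ml) vs 0.4 mg im. (AUC 1.8 ng*h/ml).
print(round(relative_bioavailability(8.5, 4.0, 1.8, 0.4), 1))  # ~47%, close to the reported 46.2%
```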
Unapproved intranasal kits
A study of 36 adults with chronic rhinitis assessed a commercially available, unapproved intranasal kit (2-mg naloxone as 1 mg/ml in each nostril) compared with 2-mg im. (1 mg/ml in each thigh via standard needle and syringe) [33]. Cmax and AUC0-∞ were lower for the unapproved intranasal kit compared with the 2-mg im. dose.
For both formulations, median tmax (0.25 h) and mean t½ (1.5 h) were similar. Relative bioavailability (which takes dose into account) was 14.6% for the unapproved intranasal kit compared with im. naloxone. The use of an intranasal vasoconstrictor (administered 30 min prior) reduced the naloxone exposure obtained using the unapproved intranasal kit (Table 2).
A study of six volunteers used a population-pharmacokinetic modeling and simulation approach to evaluate unapproved intranasal, im. and intravenous (iv.) delivery of naloxone (commercially available, 0.4 mg/ml) [34]. Pharmacokinetic parameters were not reported separately for each formulation, with the exception of relative bioavailability (derived from the modeling/simulation), which was 4% for unapproved intranasal compared with iv. administration. The relative bioavailability of im. versus iv. administration was 36%.
Community-use formulations in development
Naloxone pharmacokinetics for the two intranasal products in development were evaluated in one study each. A study of 38 healthy volunteers assessed an intranasal spray (Mundipharma) at doses of 1, 2 and 4 mg, with 0.4 mg of naloxone im. via standard syringe and needle as the primary reference product (and also 0.4-mg iv. naloxone) [35]. Geometric mean Cmax was 2.9 ng/ml for the 2-mg intranasal in-development product compared with 1.3 ng/ml for 0.4-mg im. (standard needle and syringe) and 5.9 ng/ml for 0.4-mg iv. (Figure 4) [35]. Also, geometric mean AUC0-∞ was 5.0 ng•h/ml for the 2-mg intranasal in-development product compared with 2.1 ng•h/ml for both the 0.4-mg im. (standard needle and syringe) and the 0.4-mg iv. product. Median tmax was somewhat longer for the 2-mg intranasal in-development product (0.5 h) compared with 0.4-mg im. (0.2 h) naloxone, whereas mean t½ was similar. Plasma concentrations relative to the iv. and im. reference products are shown in Figure 5. Compared with im. administration, the bioavailability for the intranasal in-development naloxone was 50.8% for 1 mg, 47.1% for 2 mg and 48.3% for 4 mg (administered as two sprays of 2 mg).
A different intranasal in-development product (manufactured by dne pharma) was assessed in 12 healthy volunteers; naloxone doses were 0.8 and 1.6 mg (2 sprays of 0.8 mg), with 1.0-mg iv. naloxone as the reference product [36]. Mean Cmax was 2.6 ng/ml for the 1.6-mg intranasal in-development product compared with 14.2 ng/ml for iv. administration (Figure 6) [36]. Mean AUC0-t was 3.1 ng•h/ml for the 1.6-mg intranasal in-development product compared with 4.0 ng•h/ml for 1.0 mg iv. Mean tmax was longer for the 1.6-mg intranasal in-development product (0.3 h) compared with iv. (0.04 h) naloxone, whereas mean t½ was similar (1.3 and 1.2 h, respectively). Compared with iv. administration, the bioavailability for the intranasal in-development naloxone was 54% for the 0.8-mg dose and 52% for the 1.6-mg dose.
Discussion
Two naloxone products for community use have been approved by the FDA for emergency treatment of known or suspected opioid overdose, based on pharmacokinetic and human factors studies: an im./sc. auto-injector and a concentrated naloxone dose via an intranasal spray (no device assembly required) [23-25]. In the absence of head-to-head, comparative efficacy studies in the community-use setting, which are not feasible for ethical and logistical reasons, pharmacokinetic data provide important information about effective doses and routes of administration of naloxone for opioid overdose reversal.
In pharmacokinetic studies, both the im./sc.auto-injector and the approved intranasal spray demonstrated bioequivalence with a previously approved formulation, indicating that naloxone exposure was adequate to reverse an opioid overdose [31,32].By contrast, unapproved intranasal kits (syringe with atomizer attachment) using a commercially available naloxone solution intended for iv.use (0.4 mg/ml, 2 mg/2 ml [predominantly used]) have shown low bioavailability of naloxone relative to iv. (4%) [34] or im.(15%) [33] administration; additionally, the unapproved kits lack the label comprehension or human-use study data needed for FDA approval of a combination drug/device product.The poor bioavailability for the unapproved intranasal kits is likely related to the large volume of the solution that has to be atomized and absorbed in the nasal cavity, which may result in a loss of naloxone from the site of absorption (via drainage, either into the nasopharynx or externally) [41,42].As a consequence of nasopharyngeal drainage, intranasal administration of a large volume of solution fails to bypass the extensive first-pass metabolism associated with oral administration of naloxone [43].The approved intranasal spray addresses this issue by using a highly concentrated solution of naloxone such that the volume of each spray is only 0.1 ml [25].Consistently, an explorative review integrating patent application data for noninjectable naloxone for opioid overdose and scientific publications reported that bioavailability of intranasal naloxone products has a positive association with dose and negative association with volume [44].Although there are concerns of overantagonism with higher doses of naloxone resulting in severe withdrawal symptoms [45,46], the risk of inadequate reversal, especially with overdose of potent opioids such as fentanyl, is far greater than the risk of unpleasant opioid withdrawal reactions [46].No studies have yet assessed the initial dose of naloxone required to reverse a fentanyl-related overdose.
Rapid uptake of naloxone is critically important because opioid overdose may result in respiratory depression with hypoxia, which leads to cardiopulmonary arrest and long-term damage to the central nervous system or death [47]. The need for both rapid onset and adequate duration of the naloxone effect is especially significant in light of the increase in overdose deaths involving high-potency, synthetic opioids [2-4]. Both the im./sc. auto-injector and the approved intranasal spray demonstrated sufficient plasma exposure within the first 15-20 min after administration to garner FDA approval. By contrast, a different intranasal spray was denied approval, potentially because of inadequate early-stage uptake of naloxone [48]. The duration of action is shorter for naloxone compared with most opioids; additional dose(s) may be required if the initial response is inadequate or if signs of overdose (e.g., respiratory depression) recur [23,25,27,49].
The optimal naloxone dose is one that successfully reverses opioid overdose without precipitating acute withdrawal symptoms [50]. However, most of the information necessary to make a precise dose determination (e.g., mu receptor affinity of the opioid taken and dose taken) is unavailable at the time that naloxone is administered, and varying naloxone dosing algorithms have been suggested [43,49,50]. The recent increase in overdose deaths related to potent opioids such as fentanyl [4] has tipped the balance toward the need for adequately high naloxone doses to prevent overdose fatalities. The FDA stance on naloxone dosing is evident in the approval of a new, higher dose (2 mg) for the im./sc. naloxone auto-injector and a limited indication for the lower dose (2 mg) of intranasal naloxone (only for opioid-dependent patients expected to be at risk for severe opioid withdrawal [assuming this information is known at the time of naloxone administration]). The higher dose of the im./sc. auto-injector was developed to ensure that adequate naloxone would be provided for reversing overdose of various types of opioids, including potent opioids such as fentanyl [24]. In fact, an FDA advisory committee voted in 2016 to increase the current pharmacokinetic benchmark (0.4-mg im.) for approval of naloxone products for community use [51,52]. The makers of the im./sc. auto-injector intend to discontinue manufacturing the lower (0.4-mg) dose [53].
Approved intranasal spray initially received FDA approval in 2015 at a dose of 4 mg.A concentrated solution (4 mg/0.1 ml) is used for optimal absorption in the nasal cavity, with repeat dosing available if necessary [25].The recently approved 2-mg dose of approved intranasal spray has a restriction in the 'Indications for Use' section of the label that limits its use to a specific patient population under particular circumstances.Specifically, use of the 2-mg dose is restricted to "opioid-dependent patients expected to be at risk for severe opioid withdrawal in situations where there is a low risk for accidental or intentional opioid exposure by household contacts" [25].In practice, the lower (2-mg) dose of the approved intranasal spray provides a dosing alternative for patients in whom there are concerns about precipitating severe opioid withdrawal living in situations where the lower dose of naloxone will not put other members of the household at risk for opioid overdose [54].The intranasal products in development appear highly similar in both formulation (high concentration and low volume) and device to the approved intranasal spray [32,[35][36].
Although comparative efficacy studies of naloxone formulations in the community-use setting are not feasible, the use of unapproved intranasal naloxone spray in a prehospital setting has been shown to be effective in reversing opioid overdose in retrospective studies [17,19], prospective nonrandomized studies [15,18,20] and in a randomized controlled study with im. naloxone as a comparator treatment arm [16]. A recent survey of first responders and community-based organizations assessing the initial real-world experience of the approved 4-mg intranasal naloxone spray reported successful reversal of opioid overdose in 98.8% of cases [55].
In addition to efficacy, usability is a vital characteristic for community-use formulations of naloxone, which are expected to be used by laypersons in highly stressful situations. Studies have identified critical errors with the proper assembly and use of unapproved intranasal kits by laypersons in simulated overdose conditions, even when training had been provided [21,22]. However, human factors studies have found that more than 90% of participants were
able to successfully administer naloxone using the im./sc. auto-injector [21,22] or the approved intranasal spray [32] without prior training.
A study conducted at an urban hospital in Canada evaluated an emergency-department-based take-home naloxone program for patients at the risk of opioid overdose [56].Of 201 participants, 68.2% accepted an unapproved intranasal kit and training.Since 92% of participants believed that take-home naloxone was 'a good idea', acceptance would likely be greater for an FDA-approved product that can be used successfully without training (instead, a brief explanation should be provided and recipients of the product should be encouraged to read the instructions for use thoroughly).Prescription of approved naloxone products also may reduce the training burden on pharmacists, since the counseling required by standing naloxone protocols in effect at pharmacies in many states is simpler for approved products than for unapproved intranasal kits [57][58][59].
Clinical implications
Providing prescriptions for community-use naloxone to patients at risk of opioid overdose (prescribed opioids or illicit use) may help reduce the number of opioid-related fatalities [56]. A prescription for community-use naloxone may be particularly appropriate for patients receiving daily opioid therapy for chronic pain and for patients who are known (or suspected) users of illicit opioids, based on self-report or observed signs and symptoms. For patients on daily opioid therapy, guidelines from the Centers for Disease Control and Prevention suggest a dose threshold of concern at 50 morphine milligram equivalents (MME) per day [60]. Specifically, the guidelines state "Clinicians should use caution when prescribing opioids at any dosage, should carefully reassess evidence of individual benefits and risks when considering increasing dosage to 50 MME or more per day and should avoid increasing dosage to 90 MME or more per day or carefully justify a decision to titrate dosage to 90 MME or more per day" [60].
Healthcare providers may consider giving a naloxone prescription to patients with chronic pain with a daily opioid dose ≥50 MME and to all patients who are known or suspected users of illicit opioids.Selection of the optimal community-use naloxone product depends on patient-specific and product-specific factors.Approved intranasal naloxone is appropriate for most patients, with the exception of those with known nasal pathology (e.g., polyps and chronic intranasal drug use [e.g., heroin and cocaine]).Auto-injector delivery of naloxone is im.or sc., based on the depth of the needle relative to the patient's clothing and adipose tissue.Because information about sc.absorption of naloxone is limited, we suggest that use of approved intranasal naloxone is preferred in patients who are overweight (BMI of 25-30 kg/m 2 ) or obese (BMI >30 kg/m 2 ).
When selecting the dose of a community-use product, the need for the maximum available safe dose of naloxone (that does not harm the patient) is paramount.For high-concentration, low-volume intranasal spray formulations (both approved and in development), bioavailability relative to im. administration was approximately 50% (Table 2), indicating that similar overall naloxone exposure would be achieved with a 4-mg intranasal dose (using a highconcentration product) and a 2-mg im.dose.For the approved intranasal spray, 4 mg is the first dose of the product approved by the FDA and is considered the standard dose for this product; the 2-mg dose is indicated only for patients considered at risk of severe opioid withdrawal [54].If members of the patient's household are at risk for accidental or intentional exposure to opioids, the 4-mg dose of the approved intranasal spray is indicated.Because the 0.4-mg dose will be discontinued, the im./sc.auto-injector should be prescribed at the 2-mg dose [53].
As with any medication, cost and availability are relevant concerns for patients and their families and caregivers. Prescribers should take into consideration potential socioeconomic barriers to obtaining naloxone products (e.g., insurance coverage and out-of-pocket costs). Although cost is one of the most relevant barriers to gaining access to naloxone, standard metrics for comparing medication costs (e.g., wholesaler acquisition cost) do not reflect the actual costs of these products to patients. FDA-approved naloxone products are covered by most insurers (commercial and public), often with low (or no) copays. For patients without insurance coverage, clinicians can identify community organizations that may provide naloxone at no cost. For community-use naloxone, ease of use under stressful conditions is also of critical importance. Consideration should be given to providing the community-use naloxone product directly to patients, since overdose may occur before a naloxone prescription is filled if dispensing pharmacy resources are not easily available. Education of patients, family members and companions in the use of the naloxone product selected may be provided by nursing staff, as is typical for other newly prescribed medications such as inhalers, epinephrine auto-injectors or glucometers, although naloxone products will universally be administered by bystanders in a community-use setting. The integration of public health resources into emergency departments may serve to reach at-risk and underserved populations [61]. Similarly, public health programs for opioid overdose prevention may target patients who are at-risk users of opioids (via either legitimate prescriptions or illicit sources).
Limitations
Despite a search of recent congress presentations, as well as MEDLINE and Embase, relatively few studies were identified. Methodology varied across studies, including differences in the reference products used, injection sites for the reference im. products, AUC parameters reported and statistical analyses performed. In addition, study participants were primarily healthy volunteers rather than the intended population for naloxone prescription (i.e., patients at risk for opioid overdose). Because of copyright restrictions, it was not possible to show AUC curves for all naloxone products available for community use.
Conclusion
The US opioid epidemic continues to worsen; unintentional overdose of prescription and illicit opioids remain all too common.Two naloxone products for community use have been approved by the FDA (based on compliance with good manufacturing practice requirements for combination [i.e., drug plus device] products, human use/label comprehension studies and pharmacokinetic studies) and have been used successfully by laypersons to reverse opioid overdose.Prescriptions for community-use naloxone may reduce future risk in patients who are receiving chronic opioid therapy for pain control or who have histories of illicit opioid use.Selection of community-use naloxone formulation and dose is based on product-and patient-specific characteristics.It is imperative that providers take into account the need for the maximum available safe dose of naloxone (especially in areas where synthetic opioids such as fentanyl are prevalent).It is also important to prescribe community-use formulations that are simple to use and appropriate for the individual patient.
Future perspective
Turning the tide on the epidemic of opioid overdose deaths will require a multifaceted approach that includes safer opioid prescribing, increased access to treatment programs for opioid abuse (e.g., medication-assisted treatment with behavioral therapies) and increased access to naloxone for opioid overdose reversal [2].Recent US data indicate that opioid prescribing decreased from 2010 through 2015 but remained three-times greater than 1999 levels [62].Wider access to community-use naloxone (in adequate dosages and easy-to-use formulations) is important for reducing the number of opioid-related deaths in the coming years.
Figure 1. Flow chart of study selection. †One study was initially included in the systematic review based on information presented as a poster [37]; however, this study was published after the search was performed [35]. iv.: Intravenous; PI: Prescribing information; PK: Pharmacokinetic.
Financial & competing interests disclosure
SA Ryan reports serving as a consultant to Adapt Pharma, Inc. and to Braeburn Pharmaceuticals, Inc. RB Dunne reports no relevant financial relationships to disclose. The authors have no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript apart from those disclosed. Writing assistance was utilized in the production of this manuscript. Medical writing/editorial support was provided, under the direction of the authors, by N Holland, Synchrony Medical Communications, LLC, PA, USA, and sponsored by Adapt Pharma, Inc., PA, USA. This work is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 Unported License (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Table 1. Summary of included studies.
Table 2. Pharmacokinetic parameters for community-use formulations of naloxone.
Alcohol Recognition by Flexible, Transparent and Highly Sensitive Graphene-Based Thin-Film Sensors
Chemical sensors detect a variety of chemicals across numerous fields, such as automobile, aerospace, safety, indoor air quality, environmental control, food, industrial production and medicine. We successfully assemble an alcohol-sensing device comprising a thin-film sensor made of graphene nanosheets (GNs) and bacterial cellulose nanofibers (BCNs). We show that the GN/BCN sensor has a high selectivity to ethanol by distinguishing liquid–phase or vapor–phase ethanol (C2H6O) from water (H2O) intelligently with accurate transformation into electrical signals in devices. The BCN component of the film amplifies the ethanol sensitivity of the film, whereby the GN/BCN sensor has 12400% sensitivity for vapor-phase ethanol compared to the pure GN sensor, which has only 21% sensitivity. Finally, GN/BCN sensors demonstrate fast response/recovery times and a wide range of alcohol detection (10–100%). The superior sensing ability of GN/BCN compared to GNs alone is due to the improved wettability of BCNs and the ionization of liquids. We prove a facile, green, low-cost route for the assembly of ethanol-sensing devices with potential for vast application.
diodes 27,28 . We are interested in the rich surface chemistry and high absorbance of BCNs and predict that BCNs will have a synergistic relationship with GNs in sensor devices.
In this study, BCNs are designated as the matrix to host GNs, taking advantage of the typical tendency of the BCN microstructure to "wrap" around GNs. The GN/BCN composite forms a thin film, which is capable of serving as a sensing material. We predict that the composite will produce a superior electrical signal in response to ethanol and water in both liquid and vapor phases.
Results and Discussion
Assembly process from raw material to sensor device. We assembled the GN/BCN devices via vacuum filtration followed by a lamination process (Fig. 1a) 29-32 . Figure 1b shows BC hydrogel pellicles with a solid weight content of 0.5 wt.%. The gel contains many ultrafine nanofibers, with water filling as much as 99.5 wt.% of the nanofiber network. The BCNs were obtained by grinding these BC pellicles in a kitchen blender (for 5 min) followed by mild stirring assisted with 2,2,6,6-tetramethylpiperidine-1-oxyl (TEMPO) radical mediation. TEMPO mediation is a treatment commonly used in the wood-processing industry to destroy fiber-fiber chemical bonding. Figure 1c shows the aqueous BCN suspension at a concentration of 0.1 wt.%, obtained following purification. Although the extraction of BCNs is similar to that for plant-based cellulose nanofibers 33 , the difference is that BCNs are based on bacterial cellulose pellicles, which can be grown on a large scale at very low cost. Their extraction is therefore facile and energy efficient. They are also desirable because they are considered a "green" supply with near-zero energy consumption during the extraction process, compared to the energy consumption of plant-based nanofibers, which is often as high as 22, 2.8 and 0.5-2.3 kWh/kg for homogenization, microfluidization and chemical/enzyme treatment, respectively 34,35 . After extraction, the aqueous BCN suspension was mixed with different concentrations of GN powder to form GN/BCN mixtures (Fig. 1d). The mixtures were loaded into vacuum-filtration equipment to form wet thin films with residual solvent (Fig. 1g). The film, supported by a filter membrane, was placed face down on top of a plastic substrate that had been deposited with interdigitated titanium/gold (Ti/Au) (10 and 100 nm thick, respectively) electrodes (Fig. 1e). These electrodes collect the electrical signals of the GN/BCN sensor. Ti was used to increase the adhesion between the Au layer and the plastic substrate. After compressing/drying, the filter membrane was peeled away from the surface of the GN/BCN film, leaving behind a dry GN/BCN thin film deposited on the substrate (Fig. 1f). Figure 2a shows a TEM image of TEMPO-mediated BCNs with an average diameter (D) of 9.9 nm. We measured fiber length (L, within the visible area) at a minimum of 5.5 μm (fibers were very curved and knitted), and therefore their aspect ratio (L/D) was estimated to be on the order of 5000. These fibers are relatively long compared with either ~1-μm-long wood-derived cellulose nanofibers or ~hundreds-of-nm-long cellulose nanocrystals 10 . Long fibers with a high aspect ratio, like those of TEMPO-mediated BCNs, are believed to have good mechanical properties (i.e., flexibility in devices) 10 . Figure 2b,c show graphene sheets randomly distributed within the fibrous BCN network. BCNs contain abundant hydroxyl groups (C-OH) on their surfaces due to their intrinsic cellulosic origin (Fig. 2d). These groups change into carboxyl groups (O=C-O-Na) after TEMPO modification (Fig. 2e). The successful modification of the BCN fibers by TEMPO oxidization was evidenced by the disappearance of the -OH peak at 1647 cm−1 and the increase in the C=O peak at 1607 cm−1 (Fig. 2e). This section shows that TEMPO-treated BCNs were successfully obtained and that homogeneous mixing of GNs with BCNs facilitates formation of a GN/BCN composite thin film. Figure 3 illustrates our studies of the electrical and optical properties of GN/BCN thin films.
Figure 3a shows a 225-nm-thick piece of the semi-transparent GN/ BCN film with 40 wt.% GNs concentration (φ). This value was determined as the critical threshold, η c , which was obtained from the relationship of electrical conductivity and GN concentration for a GN/BCN film (Fig. 3b). At this point, the GNs form a percolated conductive network with conductive particles "just-connected" rather than "under-connected" or "overlapped" (see Fig. 3b illustration). In other words, we are very close to the electrical percolation point. When the GNs are just connected, the vacancies between GNs absorb molecules from the environment, which can cause a quick upshift or downshift in electrical conductivity. Figure 3c plots the current-voltage (I-V) characteristics of 225-833-nm-thick GN/BCN thin films using 40 wt.% GNs measured under voltages between −5 and +5 V. The linear I-V curves indicate that satisfactory Ohmic contact is achieved between the GN/BCN films and the interdigitated Ti/Au electrode. The reciprocal of the I-V slope represents the resistance, and these data show that thinner GN/BCN films have higher electrical resistance. Figure 3d shows the direct transmittance, T direct , against wavelengths by various GN/BCN thin films. T direct is defined as the amount of light that passes through the sample within 2.5° with respect to the total amount of light that passes through the sample. Figure 3e shows that 225-nm-thick GN/BCNs have 50% direct transmittance, which attests to the high transparency of both GNs 4 and BCNs 28-32, 36, 37 .
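Since the films show Ohmic behavior, the film resistance follows directly from a linear fit of the I-V data (resistance being the reciprocal of the I-V slope); the sketch below illustrates this with synthetic points rather than the measured data of Fig. 3c.

```python
import numpy as np

def resistance_from_iv(voltages_V, currents_A):
    """Fit I = V / R through the measured points and return R (ohms) as 1/slope."""
    slope, _ = np.polyfit(voltages_V, currents_A, 1)
    return 1.0 / slope

# Synthetic Ohmic sweep from -5 to +5 V for a film of roughly 2 kOhm.
v = np.linspace(-5, 5, 11)
i = v / 2000.0 + np.random.normal(0, 1e-6, v.size)   # small measurement noise
print(f"R ~ {resistance_from_iv(v, i):.0f} ohm")
```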
Electrical and optical properties of GN/BCN films.
We focus on high transparency because we expect to integrate our thin-film sensor into, or laminate it onto, electronic devices (touch screens, flexible displays, printable electronics, solid-state lighting and thin-film photovoltaics) without deteriorating their optical properties while it detects alcohol. We quantified the transparency (direct transmittance, T_direct) of the GN/BCN thin-film sensor when it was covered with either ethanol or water. As shown in Fig. 3f, the transparency was 34.5% (transparency varied from 30 to 50% among samples) for a GN/BCN thin film exposed to air. The transparency increased slightly to 37.6% when exposed to water and reached its maximum of 41.2% when exposed to ethanol. In general, we ascribe these changes to the smooth surface of the liquid films, which decreased the surface roughness. A decreased surface roughness lowers the scattering factor, allowing more light to pass through the GN/BCN film.
Sensing behavior to liquid-phase and vapor-phase ethanol and/or water. When applying GN/BCN film sensors practically, we used sensitivity (ΔR/R, %) to evaluate their sensing performance when exposed to target liquids or vapors:

ΔR/R = ((R_{in target} − R_{in air}) / R_{in air}) × 100%

where ΔR is the change in resistance, R is the original resistance, R_{in target} is the real-time resistance as the sensing device is exposed to the target and R_{in air} is the initial resistance for the device in air (Fig. 4a). GN/BCN films of various thicknesses were tested to probe the relationship between sensitivity and film thickness. We observed that for all samples, GN/BCN devices had increasing electrical resistance with increasing RH (Fig. 4b). We explain the higher sensitivity of devices made with thinner rather than thicker films by considering the diffusivity in the layered structure as parallel circuits. In a thick film with resistive layers R_1, R_2, ..., R_n (Fig. 4c), the total resistance (R_total) is defined as:

1/R_{total} = 1/R_1 + 1/R_2 + ... + 1/R_n

For example, if we assume a thick film consists of two layers and a thin film has only one layer, their total resistances become:

1/R_{total,thick} = 1/R_1 + 1/R_2,    R_{total,thin} = R_1

Assuming that in dry air the initial value of each layer is R_1 = R_2 = 1 Ω, the initial R_total is 0.5 Ω for the thick film and 1 Ω for the thin film. When the first layer is exposed to targets (H2O, C2H6O), we assume that the resistance R_2 remains the same and that R_1 is doubled (i.e., R_1 = 2 Ω). Thus, with R_2 = 1 Ω, R_total becomes 0.7 and 2 Ω for the thick and thin films, respectively. Sensitivity (ΔR/R) was calculated to be 17% and 100% for the thick and thin GN/BCN films, respectively. This simple model demonstrates that a thin-film GN/BCN sensing device has high sensitivity when exposed to target liquids.
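The parallel-layer argument can be checked numerically; the sketch below evaluates the same toy model (the resistance values and the layer-doubling step are the illustrative assumptions used above). With these assumptions the thick-film sensitivity computes to roughly 33% rather than the 17% quoted, but the qualitative contrast between thin and thick films is unchanged.

```python
def parallel_resistance(layers):
    """Total resistance of resistive layers treated as a parallel circuit."""
    return 1.0 / sum(1.0 / r for r in layers)

def sensitivity(r_target, r_air):
    """Sensitivity (%) as defined in the text: (R_in_target - R_in_air) / R_in_air * 100."""
    return (r_target - r_air) / r_air * 100.0

# Thick film: two 1-ohm layers; thin film: a single 1-ohm layer.
thick_air, thin_air = parallel_resistance([1.0, 1.0]), parallel_resistance([1.0])
# Exposure to the target doubles the resistance of the outermost layer only.
thick_wet, thin_wet = parallel_resistance([2.0, 1.0]), parallel_resistance([2.0])
print(sensitivity(thick_wet, thick_air), sensitivity(thin_wet, thin_air))
```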
Next, the real-time response of a GN/BCN sensor (225-nm thick) to liquid-phase ethanol and water was tested (Fig. 4d). This required that we record the electrical resistance of the GN/BCN film when exposed to dry air followed by a quick insertion of the film into a container filled with pure liquid ethanol (or water) for 5 s. Once removed, the sensor was left to air dry for 5 min. The GN/BCN device showed different sensitivity in response to ethanol and water (Fig. 4e). For example, GN/BCN films had a sensitivity as high as ~15700% to pure liquid ethanol and a sensitivity of 292% to pure liquid water (small irregularities at time of 300 s were caused by the 5 s immersion in target liquids). In comparison, pure GN sensors had a sensitivity of 65% to pure liquid ethanol and a sensitivity of −14% to pure liquid water.
Sensing behaviors of 225-nm-thick GN/BCN devices in response to vapor targets using pure ethanol, pure water and ethanol/water mixtures were further investigated. Figure 5a shows the cyclic sensing test with a 5-min exposure/recovery interval time of GN/BCN sensors. It also illustrates representative cyclic electrical curves of the sensor performed under pure ethanol and in air and compares those with the sensing performance of GN/BCN films exposed to pure water in the vapor phase. The sensitivity of pure GN sensors to the same target vapors is also presented. Average sensitivities of GN/BCN sensors to various vapor targets are plotted in Fig. 5b. The composite GN/BCN device had much higher sensitivity, achieving up to ~12400% sensitivity in response to pure ethanol vapor and 920% sensitivity in response to pure water vapor, compared with the pure GN sensor with 21% sensitivity in response to pure ethanol and −1% in response to 100% water. Overall, sensing behavior was similar in response to vapor targets and liquid targets. Thus, although both types of sensors prove to be smart devices, demonstrating intelligence by "telling" us whether the target is water or ethanol via an electrical signal, the composite does so with much higher sensitivity than the pure GN sensor. The GN/BCN sensor exhibits clear response and recovery behavior and acceptable repeatability: response time and recovery time of 108-147 s and ~0 s, respectively (Fig. 5c). We measured the response of the hybrid sensor to ethanol/water vapor mixtures at varied mass ratios. On average, the sensor exhibited a positive relationship between sensitivity and ethanol concentration over 10-90% (Fig. 5d). The fitting equation for sensitivity Y and ethanol concentration X is Y = 1.52X + 87.72, with a standard deviation (SD) of 29%. These results verify GN/BCN films as suitable smart sensors with low energy consumption, fast response, high selectivity and rapid recovery characteristics.
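The fitted calibration line can be inverted to turn a measured sensitivity into an estimated ethanol concentration; the sketch below uses the coefficients reported above (Y = 1.52X + 87.72) together with a made-up sensor reading.

```python
SLOPE, INTERCEPT = 1.52, 87.72  # sensitivity (%) = SLOPE * ethanol (%) + INTERCEPT

def ethanol_from_sensitivity(sensitivity_pct):
    """Invert the linear calibration to estimate ethanol concentration (%)."""
    return (sensitivity_pct - INTERCEPT) / SLOPE

# Hypothetical reading: a vapor sample producing 150% sensitivity.
print(round(ethanol_from_sensitivity(150.0), 1))  # ~41% ethanol
```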
Relationship between sensitivity and target liquids. Electrical resistance change of GN/BCN sensors
is thought to relate to mass diffusivity in the porous GN/BCN film. Surface pores of GN/BCN thin films permit the penetration of liquid into the internal microstructure. The change in resistance is related to the intrinsic (volume, dielectric) properties of target liquids. Both effective diffusivity D_e (cm^2·s^{−1}) and spreading coefficient S (mN·m^{−1}) can be used to evaluate diffusivity:

D_e = (ε_t δ / τ) D   (Eq. 5)

S = γ_{SG} − (γ_{SL} + γ_{LG})   (Eq. 6)
where D is the diffusion coefficient of the liquid filling the pores, ε_t is the porosity, δ is the constrictivity (dimensionless), τ is the tortuosity (dimensionless) and γ refers to the surface tension. SG, SL and LG represent the interfaces between each pair of phases: solid (S), gas (G) and liquid (L) (Fig. 6b). Here, the spreading coefficient S determines the spontaneous spreading of a drop of liquid placed on a solid substrate. We measured the changes in resistance when we inserted a 225-nm-thick GN/BCN sensor into the three most common types of alcohol used in the laboratory to check whether the response depended on the type of alcohol (Fig. S1). The initial resistance was recorded for around 20 s. Then, the sensor was inserted into the target liquid for 3 s, followed by removal of the sensor and drying in air (around 60% relative humidity, RH). The same sensor was used throughout all experiments. Prior to each type of measurement, the sensor was dried completely with compressed air at 20% RH. We found that the GN/BCN sensor could detect different alcohols: the resistance change was highest for methanol, followed by ethanol and isopropanol. We ascribe the broadly similar responses to the similar physical properties that enter Equations 5 and 6; however, we are unable to identify the origin of the remaining differences, and an in-depth computational study of the multi-physical mechanics will be necessary. Figure 6a shows representative contact angles of pure ethanol and pure water sitting on GN, GN/BCN or BCN films. Overall, GN/BCN films had higher wettability (smaller contact angle) for ethanol than for water (3.4° < 34.8°). Note that rapid absorption of ethanol into the film (coupled with fast evaporation of ethanol) can make the angle measurement difficult and inaccurate; therefore, θ values represent only the initial value at the moment just after the droplet contacted the film surface. Differences in wettability result in differences in penetration and interlayer expansion of the liquid in the films. Therefore, wettability likely contributes considerably to the performance of the sensing device.
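Because γ_SG and γ_SL are not measured directly, the spreading coefficient can be estimated by combining Equation 6 with Young's equation (γ_SG − γ_SL = γ_LG cos θ), giving S = γ_LG(cos θ − 1). The sketch below applies this to the ethanol and water contact angles quoted above; the surface tension values are standard literature numbers, not measurements from this study.

```python
import math

def spreading_coefficient(gamma_lg_mN_per_m, contact_angle_deg):
    """S = gamma_LG * (cos(theta) - 1), from Eq. 6 combined with Young's equation."""
    return gamma_lg_mN_per_m * (math.cos(math.radians(contact_angle_deg)) - 1.0)

# GN/BCN film: ethanol (gamma_LG ~ 22 mN/m, theta ~ 3.4 deg) vs water (~72 mN/m, ~34.8 deg).
print(spreading_coefficient(22.0, 3.4))    # close to zero -> near-complete spreading
print(spreading_coefficient(72.0, 34.8))   # strongly negative -> partial wetting
```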
We also correlated the contact angle and surface tensions of liquid ethanol/water mixtures on the surface of GN/BCN, GN and BCN films (Fig. 6c). For all of the films, cos θ decreases with increasing surface tension γ_LG, suggesting that the ethanol content (which results in different surface tensions) was responsible for the changed wettability (Fig. 6d). When exposed to various concentrations of liquid ethanol, GN/BCN films had a high cos θ, indicating a high absorbance of the liquids and subsequently a large volume of liquid in the devices. Figure 6d shows that S increases as the ethanol content increases; for example, GN/BCN films had the highest S in pure ethanol, suggesting that GN/BCN films had the highest ethanol absorbance, which agrees well with our analysis of the contact angle/surface tension measurements.
We confirmed that the disparity in spreading coefficient results in a difference in the volume of absorbed liquid. BC films have previously been reported to show swelling expansion of up to 6225% from the dried state owing to their hydrophilic functional surface 38 . Humidity sensors based on hydrophilic polyvinyl alcohol/carbon nanotube composites have also been reported to show a swelling expansion effect due to the hydrophilic polymer compound in the devices 39 . The likely reason that our GN/BCN films absorbed different volumes of liquid is a disparity in wettability upon contact with the target liquids or vapors (Fig. 6g,h). It is the large number of hydrophilic groups, including carboxyl and hydroxyl groups, together with air vacancies in the network of BCN fibers, that yields the high absorbance of both ethanol and water (Mechanism I) 40 .
Secondly, it is believed that ionization of liquids plays a vital role in the conductance of GN/BCN films (Mechanism II). It is well known that the electrical conductance of graphene or graphene oxide increases if water is absorbed onto the graphene, which is why these materials are often used in humidity sensors. The mechanism behind the resistance change is called water-induced ionic conductivity. Ionization of water creates hydronium ions (H3O+) that behave as charge carriers 41 :

2 H2O ⇌ H3O+ + OH−

These carriers are mobile, making the electrical path more conductive via a Grotthuss chain reaction through proton transfer (Fig. 6e):

H3O+ + H2O → H2O + H3O+

Water also has a high dielectric constant of about 80.4 (dimensionless), which means that substances whose molecules contain ionic bonds will tend to dissociate in it, yielding solutions containing ions. When Freeman et al. measured the radiation-induced generation of free ions in different solvents, they correlated the dielectric constant with the yield of free ions and confirmed that the relationship is proportional; the values for ethanol and water can be compared in that report 42 .
Under these circumstances, because GN/BCN films become more conductive (lower resistance) when water is absorbed, and because the many -COONa groups in the BCNs can aid proton migration, the increase in conductance from the Grotthuss chain reaction partly cancels the increase in resistance caused by Mechanism I. The net change in resistance (the sensitivity) of GN/BCN films is therefore relatively small, which explains their low sensitivity to water.
In contrast, when GN/BCN films absorbed ethanol they did not become more conductive (dielectric constant 24.3) (Fig. 6f). Ethanol cannot be ionized in the same way, and therefore the number of charge carriers in the GN/BCN network does not increase. Instead, the absorbed ethanol introduces more "insulating" segments into the conductive network, giving the GN/BCN sensor its high sensitivity to ethanol.
Conclusion
We successfully built flexible, transparent, highly sensitive GN/BCN thin-film sensor devices with excellent alcohol-recognition performance. Electrical tests under different liquid environments showed that the GN/BCN sensor exhibited an ultrahigh sensitivity of up to 12400% in response to pure ethanol in the vapor phase, compared with a 920% response to pure water. We attribute this sensing performance to the differing wettability of the BCN films toward the target liquids and to the ionization of the absorbed liquids.
Materials and Methods
Non-polar, hydrophobic GN powder (N002-PDR, Angstronmaterials Company) was used as received. 5 wt.% sodium hypochlorite (NaClO) solution and sodium bromide (NaBr) powders were purchased from RICCA Chemical Company. 2,2,6,6-tetramethyl-1-piperidinyloxy (TEMPO, 98% purity) was purchased from Sigma-Aldrich Company, and ethanol (100 vol.%) was purchased from VWR International. Water was purified by distillation in a Milli-Q (Advantage A10 model) system. BC cubic gels were produced by Thai Agri Foods Public Company Limited. The cubes were cleaned by soaking in distilled water for 15 d; the water was changed every 24 h. Cleaned gels had a BC concentration of 0.5 wt.%. BCNs were extracted from BC hydrogels using a facile TEMPO-mediated blending process. For this, 58.15 g of bacterial cellulose hydrogels, 72.5 g of water and 100 g of ice were blended together by a commercial blender for 5 min. Then, 0.1072 g of TEMPO, 0.7572 g of NaBr and 40.956 g of NaClO solution were mixed with the slurry and stirred at 500 rpm for 20 min. The TEMPO-mediated suspension was sonicated (500 W, 20 kHz, Cole-Parmer Company) at 765 W for 1 min. Next, the TEMPO/NaClO/ NaBr/BCNs aqueous colloidal solution was centrifuged (Centrifuge 5810, Eppendorf Company) at 10,000 rpm for 10 min. Liquid chemicals were removed by repeated centrifugation and the obtained wet treated BCN slurry was dialyzed against pure water for 20 d. The concentration of BCNs was adjusted to 0.45 wt.% by adding water. GN/BCN colloidal solutions were prepared by mixing GN powder and BCN suspension at various concentrations of GN for 20 min using an ultrasonicator. GN/BCN suspensions were centrifuged at 1000 rpm for only ~10 s to remove large aggregates and the remaining homogenous GN/BCN colloidal solutions were used to assemble the sensors.
Sensor devices were assembled by vacuum filtration of GN/BCN colloidal solutions. Filtration with a 47-mm Wheaton filtration assembly and a polycarbonate (PC) filter substrate took 1 min, leaving a thin wet GN/BCN film supported by the membrane filter. Afterwards, the wet GN/BCN film supported by the PC membrane was placed on top of a plastic substrate (0.005, Clear Dura-Lar brand) with the GN/BCN film facing the plastic substrate. This substrate had previously been sputtered with a titanium/gold (10 nm/100 nm thick) interdigitated electrode using a metal-sputter system (Equipment Support Co., Cambridge, England). Next, samples and several layers of soft tissue paper were laminated between two metal cells (2400 g) and dried in a vacuum oven at 60 °C for 4 h. Finally, the supporting materials, including the tissue paper and membrane filter, were peeled away from the GN/BCN layer. TEM images were taken using a Tecnai Twin microscope (FEI). Fourier transform-infrared (FT-IR) spectroscopy measurements were performed on a Nicolet iS10 (Thermo Scientific Inc.). Ultraviolet-visible (UV-vis) spectroscopy measurements were performed on a Cary 100 Conc UV-vis spectrophotometer (Agilent Technologies).
Thicknesses of GN/BCN films were measured with a Dektak 8 profilometer (Veeco Company) equipped with a 12.5-μm radius tip. Sheet resistances were measured using a CMT-SR2000N four-probe system purchased from Materials Development Corporation; data were averaged over ten measurements at different locations. Surface tensions were averaged over twenty measurements by the dynamic ring method on a Kruss K100 tensiometer (Kruss Company) operating at 20 °C. Contact angles were measured with DSA100 equipment purchased from Kruss Company. The sensing performance of GN/BCN sensors was evaluated in a homemade climatic test chamber (30 × 45 × 60 cm) operating at 21.5 °C and equipped with an air humidifier (LB88 dual, Beurer Company). Electrical resistance of the sensors was measured using a U1281A True RMS Multimeter (Keysight Company).
Effects of Temperature on Sound Production and Auditory Abilities in the Striped Raphael Catfish Platydoras armatulus (Family Doradidae)
Background Sound production and hearing sensitivity of ectothermic animals are affected by the ambient temperature. This is the first study investigating the influence of temperature on both sound production and on hearing abilities in a fish species, namely the neotropical Striped Raphael catfish Platydoras armatulus. Methodology/Principal Findings Doradid catfishes produce stridulation sounds by rubbing the pectoral spines in the shoulder girdle and drumming sounds by an elastic spring mechanism which vibrates the swimbladder. Eight fish were acclimated for at least three weeks to 22°, then to 30° and again to 22°C. Sounds were recorded in distress situations when fish were hand-held. The stridulation sounds became shorter at the higher temperature, whereas pulse number, maximum pulse period and sound pressure level did not change with temperature. The dominant frequency increased when the temperature was raised to 30°C and the minimum pulse period became longer when the temperature decreased again. The fundamental frequency of drumming sounds increased at the higher temperature. Using the auditory evoked potential (AEP) recording technique, the hearing thresholds were tested at six different frequencies from 0.1 to 4 kHz. The temporal resolution was determined by analyzing the minimum resolvable click period (0.3–5 ms). The hearing sensitivity was higher at the higher temperature and differences were more pronounced at higher frequencies. In general, latencies of AEPs in response to single clicks became shorter at the higher temperature, whereas temporal resolution in response to double-clicks did not change. Conclusions/Significance These data indicate that sound characteristics as well as hearing abilities are affected by temperatures in fishes. Constraints imposed on hearing sensitivity at different temperatures cannot be compensated even by longer acclimation periods. These changes in sound production and detection suggest that acoustic orientation and communication are affected by temperature changes in the neotropical catfish P. armatulus.
Introduction
Ectothermic animals are dependent on environmental heat sources and control their body temperature through external means. Compared to endothermic animals, they maintain relatively low metabolic rates. In general, the speed of all metabolic processes is influenced by the body temperature, which depends on the ambient temperature [1,2,3,4,5]. Therefore, ambient temperature affects various physiological processes such as neuronal and muscular activities, including all sensory systems in ectothermic animals [6,7,8,9,10,11].
In various climates, fish have to deal with seasonal and diurnal fluctuations in water temperature. Fish either cope with temperature fluctuations or they migrate. Thus, the thermal tolerance range of fish species differs to some degree. Certain physical constraints cannot be compensated for even when animals are acclimated [12,13], suggesting the presence of an optimum temperature range.
Fish have evolved the largest diversity of sound-producing mechanisms among vertebrates, and sounds are emitted in numerous contexts: e.g. disturbance situations, during courtship, competitive feeding, territorial encounters (for reviews see [14,15,16,17]. Representatives of some catfish families possess two different sound-producing mechanisms [18,19]. High-frequency stridulation sounds are emitted when pressing ridges of the dorsal process of the pectoral spine against the groove of the pectoral girdle while abducting or adducting pectoral spines [20,21,22,23,24]. In contrast, vibrations of the swimbladder by sonic muscles result in the emission of low-frequency drumming sounds [15,18,25]. In the family Doradidae or thorny catfishes, a thin round bony plate termed elastic spring ('Springfeder'; [26]) vibrates the swimbladder. The elastic spring is rapidly pulled forward during contractions of sonic muscles which originate at the occipital bone and insert at the elastic spring [19,27].
Effects of temperature have not been studied in broadband stridulation sounds so far, but have been studied in low-frequency sounds such as drumming sounds. In general, the sound duration and the fundamental frequency increased with rising ambient temperature, whereas the pulse period decreased due to the higher muscle contraction rate (Gobiidae: [28,29]; Sciaenidae: [30]; Triglidae: [31]; Batrachoididae: [32,33]. Brawn [34] observed a temperature-dependent increase in the number of sounds produced in the cod Gadus callarias. Fish depend on hearing for analyzing the acoustic scene, for orientation, prey and predator detection and for intraspecific communication [35,36,37]. Ambient temperature affects hearing in invertebrates and ectothermic vertebrates. Such effects have been examined in insects [38,39,40], amphibians [41,42,43] and reptiles [44,45]. In general, raising the temperature increased both the most sensitive (best) frequency and the absolute sensitivity [46,47]. The number of action potentials increased and the temporal tuning of auditory neurons shifted to higher rates of amplitude modulation [48]. Similar results have been found in the tuning of the auditory system in cicadas and locusts [38,40].
In fish, only a few studies investigated the effects of temperature changes. Dudok van Heel [49] found that the European minnow, Phoxinus phoxinus, can discriminate between higher frequencies at higher ambient temperature. In goldfish, Carassius auratus, warming increased the spontaneous activity and sensivity of auditory neurons, the best frequency at a given signal level and the responsiveness to an acoustic stimulus [50]. The walleye pollock, Theragra chalcogramma, showed a reduced auditory sensitivity at lower ambient temperature within hours [51]. Wysocki et al. [13] showed that the eurythermic channel catfish, Ictalurus punctatus, and the stenothermic tropical catfish Pimelodus pictus exhibited higher hearing sensitivity at higher temperatures, especially at the highest frequency tested. Differences between temperatures were more pronounced in the eurythermic catfish species.
Sound characteristics are important for coding information in agonistic and reproductive contexts (conflict resolution, distress situations, courtship, establishment of territories). Fish often produce series of short broad-band pulses, for example in the stridulation sounds of catfishes and gouramis [18,52], with distinct temporal patterns and variable interpulse intervals [52,53]. Severals studies suggest that temporal patterns are important carriers of information in fish [53,54]. Wysocki and Ladich [54] showed that the auditory system of the catfish Platydoras armatulus (formerly P. costatus) and the croaking gourami Trichopsis vittata were able to process each pulse within a stridulation sound.
The present study was designed to investigate the effects of temperature on (1) sound production and sound characteristics, (2) the absolute auditory sensitivity and (3) the ability of the auditory system to resolve temporal patterns of sounds in the Striped Raphael catfish.
The neotropical catfish P. armatulus [55] was chosen because this group produces two different sound types (swimbladder and pectoral stridulatory sounds) and because it possesses accessory hearing structures (Weberian apparatus). Groups with accessory hearing structures that couple air-filled cavities acoustically to the inner ear are most likely affected by temperature changes as shown previously [13,56]. Platydoras armatulus inhabits the Amazonian river system and is known to emit both types of sounds in distress situations [18]. This is the first study in which the effects of temperature on both vocalization and hearing have been examined in the same fish species.
Stridulation sounds
All P. armatulus produced sounds by moving the pectoral fins forward (abduction, AB) and backward (adduction, AD), utilizing either one or both fins at the same time. Fish could also move fins without emitting sounds or lock spines in an abducted position. Subjects usually started producing sounds with an adduction movement because they spread their pectoral fins in an adducted position during handling. Stridulation sounds consisted of series of broadband pulses with main energies ranging from 0.3 to 1.3 kHz (Fig. 1). All fish emitted stridulation sounds when hand-held (but not all produced drumming sounds).
In AD- and AB-stridulation sounds, sound duration showed significant differences between temperatures (AD-stridulation sounds: Friedman test, χ² = 14.250, df = 2, p ≤ 0.01; AB-stridulation sounds: Friedman test, χ² = 10.750, df = 2, p < 0.01). In both sound types, duration was significantly shorter at 30°C (Wilcoxon test for AD, 22°C versus 30°C: Z = 2.38). The pulse period showed great variability among and within individuals. In general, the periods were longest in the centre of the stridulation sounds and became shorter at the beginning and at the end of the stridulation sounds (Fig. 1; see Material and Methods). The mean minimum pulse period ranged from 7.4-8.8 ms in AD- and from 5.1-7.7 ms in AB-stridulation sounds (Tab. 1). A Friedman test (χ² = 7.40, df = 2, p < 0.05) followed by a Wilcoxon test revealed that the minimum pulse periods in AB-stridulation sounds were significantly shorter at 30°C than at 22°C repeated (Z = −2.521, p < 0.05). The minimum pulse periods of AD-stridulation sounds and the maximum pulse periods of AD- and AB-stridulation sounds did not change with temperature.
Sound pressure levels did not change significantly with temperature and remained almost constant at about 137 dB re 1 μPa (Friedman test, χ² = 2.250, df = 2, p > 0.05) (Tab. 1). In contrast, the dominant frequency revealed significant differences between 22°C and 30°C and between 22°C and 22°C repeated (Wilcoxon test, Z = −2.380, p ≤ 0.05); it approximately doubled from 601.6 Hz after fish were acclimated to 30°C.

Drumming sounds

P. armatulus emitted two different types of drumming sounds: series of short drumming sounds and single long drumming sounds. Series of short drumming sounds were recorded in 6 out of 8 animals but not at all temperatures (22°C: N = 4; 30°C: N = 4; 22°C repeated: N = 1). Long drumming sounds, in contrast, were recorded in every individual but again not at every temperature (22°C: N = 5; 30°C: N = 8; 22°C repeated: N = 5). The long drumming sounds revealed a harmonic structure with fundamental frequencies (drumming muscle contraction rate) between 100 and 150 Hz (Fig. 3).
P. armatulus produced more stridulation than drumming sounds. Stridulation sounds were produced by each individual at both temperatures, which was not the case for drumming sounds. Stridulation sounds and drumming sounds were often emitted simultaneously. In general, long drumming sounds were longer than stridulation sounds, in some cases exceeding 300 ms. Long drumming sound duration did not change significantly with temperature (Tab. 2) (Kruskal-Wallis test, χ² = 1.411, df = 2, p > 0.05). Accordingly, the mean number of pulses in drumming sounds did not change either (Kruskal-Wallis test, χ² = 3.740, df = 2, p > 0.05).
Auditory abilities
Best hearing occurred at 0.5 and 1 kHz at both temperatures (Tab. 3, Fig. 5). A two-factorial ANOVA revealed that the auditory sensitivity differed between temperatures (F(2,126) = 13.46, p < 0.001) and that there was a significant interaction between temperature and frequency (F(10,126) = 2.15, p ≤ 0.05). Thus, changes in auditory sensitivity showed different trends at different frequencies. The hearing sensitivity was higher at the higher temperature, and differences were more pronounced at higher frequencies (0.5-4 kHz).
Waveforms and latencies in response to single clicks
AEPs of P. armatulus in response to clicks consisted of a series of negative and positive deflections whose amplitude decreased when lowering the SPL. AEPs started with a negative peak (Fig. 6). The most constant peaks (N1, P1, N2 and P2) occurred in the AEPs in response to a single-click presentation at 22°C and 30°C. Significant differences in latencies of peaks P1, N2 and P2 were found between temperatures (P1: Friedman test, χ² = 12.0, df = 2, p < 0.01; N2: Friedman test, χ² = 13.231, df = 2, p < 0.01; P2: Friedman test, χ² = 12.250, df = 2, p < 0.01). The delay in the onset of P2 was significantly longer at the lower temperature (Tab. 4) (22°C and 30°C: Wilcoxon test, N = 8, p ≤ 0.05; 30°C and 22°C repeated: Wilcoxon test, N = 8, p ≤ 0.05). The peak-to-peak amplitude between the first positive peak and the second positive peak increased with rising temperature. N1 and N2 tended to fuse at the higher temperature, whereas P1 almost disappeared (Fig. 6).
Temporal resolution measurements
Two distinct AEPs were detectable in response to double-clicks at click periods of 5 ms down to 1.5 ms (Fig. 7). At shorter click periods, the responses to the first and to the second click were partly overlaid (Fig. 7). The minimum resolvable click period was 0.81 ms. Near the hearing threshold, N1 and N2 as well as P2 and P3 tended to merge until one negative and one positive peak remained. AEP shape and latency varied within and between individuals. No significant difference was observed in the minimum resolvable click periods between temperatures (Friedman test: χ² = 3.5, df = 2, p > 0.05). Mean minimum gap width ranged from 0.81 (±0.09 SE) to 1.00 ms.
Discussion
Physiological processes depend on the surrounding temperature in ectothermic animals. This leads to the assumption that both sound production (sound characteristics) and sound detection are affected by the temperature in fishes. Previous studies reveal that, in several vocalizing species, temperature change induced changes in temporal characteristics of sounds including sound duration, dominant/fundamental frequency, and/or sound pressure level [28,29,30,31,32,33,57]. In addition a few studies showed that temperature also affects hearing [13,51]. However, the present study is the first one investigating such effects on sound communication by studying sound characteristics and hearing abilities in parallel in the same species.
Temperature effects on sound characteristics
In general, sound duration and fundamental or dominant frequency increased, whereas pulse period and pulse duration decreased with rising ambient temperature. Note, however, that not all sound characteristics are affected by temperature changes in the species studied and that opposite trends have been observed in a few cases.
The duration of stridulation sounds in P. armatulus was affected significantly at elevated ambient temperature. Both AB- and AD-stridulation sounds became significantly shorter at the higher temperature. This is probably because pectoral muscles contract faster, taking less time for a complete pectoral fin sweep [19].
Stridulation sounds were influenced by temperature, whereas duration of drumming sounds did not change in the current study. Similarly, in the searobin Prionotus carolinus, Connaughton [58] reported no relation between sound duration and temperature variation. Temperature effects on drumming sounds are a well-studied topic in fish biology. Drumming sounds in piranhas, Serrasalmus nattereri, in the oyster toadfish, Opsanus tau, and in the gobies Padogobius bonelli and P. nigricans became shorter at higher temperatures [28,29,33,57]. In contrast, drumming sound duration in the weakfish, Cynoscion regalis, and in the Lusitanian toadfish, Halobatrachus didactylus, increased with rising ambient temperature [30,32]. Thus, results on sound duration influenced by temperature showed different trends. For instance Amorim [31] reported that in H. didactylus 'knocks became shorter and 'grunts' became longer at higher temperature. So far, sound characteristics are temperature-dependent, although no conclusions could be drawn about which factors are responsible for sound lengths either increasing or decreasing with temperature change.
The maximum and minimum pulse periods of stridulation sounds showed temperature-dependence to some degree. The minimum period became shorter in AB-stridulation sounds at higher temperature, and a significant difference was also found between the two cold measurements, whereas in AD-stridulation sounds no trend was detected. The shorter pulse periods at higher temperatures most likely decreased the duration of AB-stridulation sounds because the number of pulses was constant. The lack of such a relationship in AD-stridulation sounds is probably because the minimum and maximum pulse periods do not reflect the mean pulse period of sounds completely. Dominant frequency of stridulation sounds tended to increase with temperature. No comparable studies have been conducted on the temperatureeffects on stridulation sound characteristics.
In drumming sounds of P. armatulus, the mean pulse period tended to decrease with increasing temperature. The fundamental frequency which reflects the muscle contraction rate increased from approximately 75 Hz to 100 Hz. Drumming muscles are fast-contracting muscles consisting of many thin myofibrils encircled by layers of sarcotubules [27]. A temperature change may affect the pulse pattern generator circuits and the muscle contraction properties that change the contraction rate of the drumming muscles. A warmer sarcoplasmic reticulum can cycle calcium more rapidly in the oyster toadfish Opsanus tau [28,32,59]. Studies on the Arno goby, Padogobius nigricans, the searobin Prionotus carolinus and the oyster toadfish, Opsanus tau, reported a rise in fundamental frequencies with higher temperature [29,33,58]. These studies did not investigate if, due to this outcome, pulse periods decreased with elevated temperature. Interestingly, Connaughton et al. [30] described shorter pulse duration but increasing pulse periods in the weakfish at higher temperature. Nevertheless, sound characteristics such as pulse period and fundamental and/or dominant frequency showed an overall strong correlation with ambient temperature. In P. armatulus, no temperature effect was found on the sound pressure level in stridulation sounds. Those levels ranged from 136.4 to 137.9 dB. Connaughton [58] observed that the sound pressure level of the searobin Prionotus carolinus was not influenced by temperature as well. In contrast, lower sound pressure levels have been described in the piranha and the weakfish at lower temperatures [30,57].
Temperature effects on hearing
In several ectothermic animals, temperature-dependent effects on the auditory system have been reported. Amphibians showed lower hearing thresholds at higher surrounding temperature [46,47]. In insects, warming above ambient temperature increased the characteristic hearing frequency or best frequency, the spike rate and the sensitivity [38,40].
Higher temperatures induced a frequency-dependent change in sensitivity in all fish species investigated so far [13,51]. Dudok van Heel [49] was the first to describe temperature effects on auditory function in fishes. He trained blinded European minnows (Phoxinus phoxinus) to react to different frequencies. At higher temperature, the upper limit of frequency discrimination shifted from 1200 Hz up to 1600 Hz, so the detectable frequency range became wider. Wysocki et al. [13] asked whether ambient temperature influences auditory sensitivity differently in a eurythermal and a stenothermal catfish. Hearing thresholds of the stenothermic tropical catfish Pimelodus pictus decreased when the temperature was raised from 22 to 30°C [13]. Pimelodus pictus and P. armatulus showed a similar frequency-dependent increase in sensitivity when the ambient temperature was increased by 8°C (Fig. 8).
The eurythermal North American channel catfish Ictalurus punctatus differed considerably from the stenothermal tropical catfishes P. pictus and P. armatulus ([13] and current study). The channel catfish exhibited larger changes in hearing sensitivity when the temperature changed, especially at the highest frequency tested. In I. punctatus, hearing sensitivity at 4 kHz increased by 23 dB when the temperature was raised from 18 to 26°C. Hearing thresholds of the tropical catfish P. pictus showed smaller differences (maximum change: 5 dB) for a similar temperature change of 8°C.
Several factors explain the phenomenon that hearing sensitivity at higher frequencies is more affected by temperature changes than at lower frequencies. Fay and Ream [50] concluded that temperature-dependent effects on the nervous system in goldfish, Carassius auratus, may reflect changes in the release and reuptake of neurotransmitter at the synapses between hair cells and auditory nerve fibres. Elevated temperature increased the cells' spontaneous activity, sensitivity, best frequency and responsiveness. Wysocki et al. [13] argued that high-frequency hearing needs faster firing of action potentials due to synchronization with the shorter sound cycles. The refractory periods and transduction processes are perhaps more temperature-dependent than those of longer cycles of lower frequencies. This would be consistent with the frequencydependent improvement of hearing in the present study.
Latencies decreased in three out of four peaks (P1, N2 and P2) at higher temperatures in P. armatulus. This result might be explained by temperature dependence of spike conduction velocity, of spike shape and perhaps of synaptic delay. Short latencies indicate better hearing capability at higher temperature [56]. In addition, Wysocki and Popper [60] observed different AEP shapes at different temperatures. At higher temperature, peaks tended to fuse, especially the first and the second negative peaks, and AEP amplitude increased.
In the locust Locusta migratoria, higher temperatures resulted in a better resolution of gaps [39]. No such change with temperature was found in the current study. Wysocki and Ladich [54] reported that the mean minimum resolvable pulse period of the Lined Raphael catfish was 0.52 ms, measured at 25°C. The current study found a mean value of 0.86 (±0.05) ms at 32 dB above hearing threshold at both temperatures investigated; two distinct AEPs were clearly traceable at click periods exceeding 3.5 ms (according to [54]). The minimum pulse periods in the stridulation sounds (2 ms) and in the drumming sounds (6 ms) of P. armatulus as measured in the present study are longer than the minimum resolvable click period. This indicates that catfishes can encode the temporal information of sounds from conspecifics, independent of changes in ambient temperature.
Temperature and acoustic communication
Many catfish species produce sounds in various behavioural contexts such as disturbance, agonistic behavior and male courtship display [19,61,62]. Thus, the detection of stridulation and drumming sounds is an important factor in catfish behavior. In disturbance situations, catfish are likely to emit more stridulation sounds, whereas in intraspecific contexts more drumming sounds are produced [62]. Accordingly, stridulation sounds may have a warning or defense intention, while drumming sounds play an important role in intraspecific communication [62,63].
Temperature affects sound characteristics in both stridulation sounds (duration) and drumming sounds (pulse period, fundamental frequency). Both observations agree with the fact that the muscle contraction rate increases with temperature. Higher contraction speed of the pectoral abductor and adductor muscles results in shorter AB- and AD-stridulation sounds. Similarly, a higher drumming muscle contraction rate results in shorter pulse periods and a higher fundamental frequency. Stridulation sounds tended to have higher dominant frequencies and shorter pulse periods. Sound frequencies of both sound types shift to higher frequencies with rising temperatures, and hearing sensitivity increased at higher frequencies. Thus, low-frequency (0.1 and 0.2 kHz) drumming sounds and in particular high-frequency stridulation sounds (above 500 Hz) will be better detectable at higher temperatures. The lower hearing thresholds, together with the faster response of the auditory system (shorter latencies of AEP waves), lead to the assumption that changes in temporal patterns of both types of sounds (duration, pulse periods) are detected and that acoustic communication is facilitated at higher temperatures in catfishes. The habitat temperature typically ranges between 23° and 30°C. Studies on vocalizing species are required to determine whether this effect is more pronounced in eurythermic than stenothermic fish species.

Figure 8. Comparison of the change in hearing sensitivity in the Amazonian catfishes Pimelodus pictus and P. armatulus (current study). Differences are shown in both species after acclimation for at least 3 weeks to either 22°C or 30°C. doi:10.1371/journal.pone.0026479.g007
Ethics Statement
The study protocol was approved by the Austrian Federal Ministry of Science and Research, permit number GZ 66.006/0023-II/10b/2008.
Animals
Lined Raphael catfish [55] were kept in a community tank (110 × 55 × 30 cm, 25 ± 1°C) and a total of 8 adult specimens of P. armatulus were used in the present study. They were obtained from a local pet supplier. Groups of four fish were introduced into two experimental tanks (70 × 40 × 30 cm) which were equipped with half flower pots and whose bottom was covered with sand. The water was filtered by external filters and a 12:12 hour light-dark cycle was maintained. Fish were fed with frozen chironomid larvae and flake food five days per week. The size of the fish was as follows: total length 126.2-142.5 mm; standard length 108.6-121.1 mm; body mass 27.9-41.8 g. The sex of the fish was not determined because this was not possible without killing the animals.
Temperature in the experimental tanks was changed using submersible heaters by approximately one degree per day until final temperatures of 22 ± 1°C and 30 ± 1°C, respectively, were achieved. Fish were acclimated for at least three weeks to each experimental temperature, first to 22°C, then to 30°C and finally to 22°C again. Auditory measurements were conducted between 24 h and 4 weeks prior to sound recordings. Fish recovered completely within one day.
Sound and video recordings
Sound and video recordings were conducted in a sound-proof room in a separate recording tank (50 × 27 × 30 cm) either at 22 ± 1°C or at 30 ± 1°C, depending on the acclimation temperature in the experimental tank. Fish were hand-held at a distance of 5 to 10 cm from the hydrophone, which was positioned in the middle of the recording tank. In order to avoid overlap of stridulation sounds generated simultaneously by both pectoral fins, one fin was fixed.
Sounds and fin movements were recorded using a hydrophone (Brüel & Kjaer 8101) connected to a power supply (Brüel & Kjaer 2804) and an amplifier (AKG B29L), and a video camera (Sony VX1). Both acoustic and video signals were recorded simultaneously on a hard-disk video recorder (Panasonic DMR-EX95V). Video recordings were necessary to determine which sounds were produced during abduction and adduction of the pectoral fins.
Sound pressure levels (RMS fast, L-weighting) were recorded using a sound level meter (Brüel & Kjaer Mediator 2238) which was connected to the power supply of the hydrophone. Three walls of the recording tank were lined on the inside with acoustically absorbent material (air-filled packing foil) and its bottom was covered with fine sand. The table supporting the recording tank was placed on a vibration-isolating concrete plate.
Sound analysis
Sounds were analysed using Cool Edit 2000 (Syntrillium Software Corporation, Phoenix, USA) and STx Soundtools 3.7.8 (Institute of Sound Research at the Austrian Academy of Sciences). P. armatulus produced sounds during the adduction (AD) and abduction (AB) of the pectoral fins [18]. The following sound characteristics were determined in stridulatory sounds: the sound duration (ms), the number of pulses, the minimum and maximum pulse period (ms), the dominant frequency (Hz) and the sound pressure level (dB re 1 μPa) (Fig. 9). In each individual, five AD- and five AB-stridulation sounds (a total of 10 sounds) were examined. In the drumming sounds, the sound duration (ms), the number of pulses, the mean pulse period (ms) and the fundamental frequency (Hz) were determined. Sound pressure levels could not be determined for AB- and AD-stridulation sounds separately because the sound level meter does not allow SPL readings at such short intervals. Furthermore, SPLs could not be determined for drumming sounds because fish produced stridulation sounds, which were much louder, at the same time.
The pulse period was defined as the time between the peak amplitudes of two subsequent pulses within a sound. In stridulation sounds, only sounds consisting of at least four pulses were used for pulse period measurements. The averages of the minimum and maximum pulse periods of stridulation sounds (each N = 3) were calculated separately for each fish instead of a total mean, owing to the large variability in these sound characteristics. For each individual, 60 pulse periods were measured at each temperature. The dominant frequencies of stridulation sounds were measured using cepstrum-smoothed power spectra (filter bandwidth 1 Hz, 50% overlap, number of coefficients 100, Hamming filter), determined from five AD- and five AB-stridulation sounds, thus 10 stridulatory sounds per fish. A sound file made up of stridulation sounds was created separately to determine individual-specific dominant frequencies.

Figure 9. Drawings of the ventral side of the catfish and oscillogram of an AD- and an AB-stridulation sound. The upper drawings illustrate the fin movement during production of AB- and AD-sounds; the lower oscillogram shows the temporal sound characteristics measured. Sound duration was measured from the beginning to the end of a sound. The pulse period was defined as the time between the peak amplitudes of two subsequent pulses within a sound. A minimum and a maximum pulse period are shown within a stridulatory sound. doi:10.1371/journal.pone.0026479.g008
In drumming sounds, pulse periods were defined as the time between subsequent drumming muscle contractions. Pulse periods were analyzed in at least four drumming sounds per fish (10 pulse periods per fish). The mean pulse period was calculated for each fish. The fundamental frequency of drumming sounds was determined from sound power spectra calculated from 10 sounds per fish. Again, a sound file consisting of drumming sounds of one specimen was created to calculate the fundamental frequency of each individual.
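A minimal sketch of how a dominant or fundamental frequency can be read off a recorded sound is given below. It uses a plain windowed FFT peak pick rather than the cepstrum-smoothed spectra of the actual analysis (which was carried out in STx/Cool Edit), so the function, sampling rate and 50 Hz floor are illustrative assumptions, not the study's code.

```python
import numpy as np

def dominant_frequency(signal, fs, fmin=50.0):
    """Estimate the dominant frequency (Hz) of a recorded sound as the
    peak of its power spectrum above fmin."""
    sig = np.asarray(signal, dtype=float)
    sig = sig - sig.mean()
    spectrum = np.abs(np.fft.rfft(sig * np.hanning(sig.size))) ** 2
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
    valid = freqs >= fmin
    return freqs[valid][np.argmax(spectrum[valid])]

# Synthetic check: a 120 Hz "drumming" tone sampled at 44.1 kHz.
fs = 44100
t = np.arange(0, 0.3, 1.0 / fs)
print(dominant_frequency(np.sin(2 * np.pi * 120 * t), fs))  # ~120 Hz
```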
Auditory sensitivity measurements
Auditory sensitivity was measured using the auditory evoked potential (AEP) recording technique described by Kenyon et al. [64] and modified by Wysocki and Ladich [54,65]. Test subjects were secured in a round plastic tub (35 cm diameter, 15 cm height, lined on the inside with acoustically absorbent material, 1 cm layer of fine sand) filled with water and adjusted so that the nape of the head was just above the surface of the water, and a respiration pipette was inserted into the animal's mouth. The water temperature was either 22 ± 1°C or 30 ± 1°C, depending on the temperature in the holding tanks.
Respiration was achieved by a temperature-controlled gravity-fed water circulation system. To immobilize the animals and to reduce the myogenic noise level, they were injected with a curariform agent (Flaxedil; gallamine triethiodide; Sigma-Aldrich, Vienna, Austria). The required dosage was 1.5-2.8 μg g⁻¹ and allowed the fish to perform opercular movements during the experiment. The plastic tub was positioned on an air table (TCM Micro-g 63-540) which rested on a vibration-isolating concrete plate. The entire setup was enclosed in a walk-in soundproof room which was constructed as a Faraday cage (interior dimensions: 3.2 × 3.2 × 2.4 m).
The AEPs were recorded using silver wire electrodes (0.32 mm diameter) that were pressed firmly against the skin, which was covered by small pieces of tissue paper to keep it moist, in order to ensure proper contact during experiments. The recording electrode was placed in the midline of the skull over the region of the medulla and the reference electrode cranially between the nares. Shielded electrode leads were attached to the differential input of an a.c. preamplifier (Grass P-55, Grass Instruments, West Warwick, RI, USA; gain 100x, high-pass at 30 Hz, low-pass at 1 kHz). A ground electrode was placed in the water near the subject. Both stimuli presentation and AEP-waveform recording were accomplished using a Tucker-Davis Technologies (TDT, Gainesville, FL, USA) modular rackmount system (TDT System 3) controlled by a Pentium PC containing a TDT digital processing board and running TDT BioSig RP Software.
Presentation of sound stimuli
Sound stimulus waveforms were generated using TDT SigGen RP software and fed through a power amplifier (Alesis RA 300, Alesis Corporation, Los Angeles, CA, USA). A dual-cone speaker (Tannoy System 600, frequency response 50 Hz to 15 kHz ± 3 dB), mounted 1 m above the test subjects in air, was used to present the stimuli during testing. Sound stimuli consisted of tone bursts presented at a repetition rate of 21 s⁻¹. Hearing thresholds were determined at frequencies of 0.1, 0.2, 0.5, 1, 2 and 4 kHz, presented in random order. Rise and fall times were one cycle at 0.1 and 0.2 kHz and two cycles at all other frequencies. All bursts were gated using a Blackman window.
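As an illustration of the gated tone bursts described above, the sketch below generates a sine burst with Blackman-shaped onset and offset ramps. The way the ramps are applied, the burst length and the sampling rate are assumptions for demonstration and not the TDT SigGen settings used in the study.

```python
import numpy as np

def tone_burst(freq_hz, n_cycles, fs, ramp_cycles=2):
    """Sine burst with Blackman-shaped rise/fall ramps of `ramp_cycles` cycles."""
    n = int(round(n_cycles * fs / freq_hz))
    t = np.arange(n) / fs
    burst = np.sin(2 * np.pi * freq_hz * t)
    n_ramp = int(round(ramp_cycles * fs / freq_hz))
    ramp = np.blackman(2 * n_ramp)          # first/second half used as rise/fall
    envelope = np.ones(n)
    envelope[:n_ramp] = ramp[:n_ramp]
    envelope[-n_ramp:] = ramp[n_ramp:]
    return burst * envelope

# Example: a 0.5 kHz burst with two-cycle rise and fall times.
stimulus = tone_burst(freq_hz=500.0, n_cycles=8, fs=44100, ramp_cycles=2)
```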
The stimuli were presented at opposite polarities (180° phase-shifted) for each test condition and the corresponding AEPs were averaged by the BioSig RP software in order to eliminate stimulus artefacts. The sound pressure level (SPL) of the tone-burst stimuli was reduced in 4 dB steps until the AEP waveform was no longer apparent. The lowest SPL for which a repeatable AEP trace could be obtained, as determined by overlaying replicate traces, was considered the threshold [56,66]. A hydrophone (Brüel & Kjaer 8101, Naerum, Denmark; frequency range 1 Hz to 80 kHz ± 2 dB, voltage sensitivity −184 dB re 1 V μPa⁻¹) was positioned near the right side of each fish (2 cm away) to determine absolute SPL values underwater, close to the subjects.
Temporal resolution measurements
In order to analyze the temporal resolution ability at different temperatures, the technique described by Wysocki and Ladich [54] was applied. Single clicks and double-clicks were generated using TDT System II and TDT SigGen software and fed through a DA1 digital-analog converter, a PA4 programmable attenuator, and a power amplifier (Denon PMA 715R) to the air speaker (Tannoy System 600). Each type of stimulus (single click and double-click) was presented to the animals at a repetition rate of 35 s⁻¹. Double-click stimuli were presented at 28 dB above hearing threshold. Ten different click periods were presented, beginning with the shortest click period. Click periods tested were 0.3, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4 and 5 ms.
The amplitudes of the responses to the second click of each pair of clicks were measured and compared to the response to a single click, following the method used in Wysocki and Ladich (2002). The most consistent peaks were used for analysis. The AEP components were labeled P for positive peaks (directed upwards) and N for negative peaks (directed downwards), numbered in ascending order. The main peaks for analysis were N1, N2, P2 and P3. First, the hearing threshold in response to a single click was determined, followed by a presentation of double-clicks at 28 dB above hearing threshold.
A point-to-point subtraction operation was conducted [54] to isolate the response to the second click within a pair of clicks. The AEP in response to a single click was subtracted from the response to a double-click. The shortest click period at which a second response was still detectable was classified as the minimum resolvable click period.
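A minimal sketch of this point-to-point subtraction is given below, assuming both averaged AEP traces are sampled identically and aligned to the first click; the variable and function names are illustrative.

```python
import numpy as np

def isolate_second_click_response(double_click_aep, single_click_aep):
    """Subtract the single-click AEP (aligned to the first click) from the
    double-click AEP; the residual is the response evoked by the second click."""
    a = np.asarray(double_click_aep, dtype=float)
    b = np.asarray(single_click_aep, dtype=float)
    n = min(a.size, b.size)
    return a[:n] - b[:n]

# The minimum resolvable click period is then the shortest click period at
# which this residual still contains a detectable AEP waveform.
```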
Latency measurements
The latency was defined as the time between the onset of the single click stimulus and the first four constant peaks of the AEP recorded in responses to this click stimulus. The most constant peaks in the AEPs were N1, P1, N2 and P2 (see Fig. 2 in [54]). The single click was presented at 28 dB above hearing threshold.
Statistical analyses
All data were tested for normal distribution using the Kolmogorov-Smirnov test, and when data were normally distributed, parametric statistical tests were applied. Stridulation sound data determined at the three different experimental temperatures were compared using non-parametric tests (Friedman test followed by a Wilcoxon test). A Kruskal-Wallis test was applied to calculate differences in drumming sound characteristics because only five individuals produced drumming sounds at all temperatures. Audiograms obtained at the three temperatures (22°C, 30°C and 22°C repeated) were compared by a two-factorial analysis of variance (ANOVA) using a general linear model in which one factor was temperature and the other was frequency. The temperature factor alone should indicate overall differences in sensitivity between temperatures, and its interaction with the frequency factor whether different tendencies exist at different frequencies of the audiograms. A post-hoc test (Bonferroni) revealed differences between temperatures. All statistical tests were run using SPSS 17.0. The significance level was set at p ≤ 0.05.
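The statistical pipeline described above was run in SPSS; as a rough equivalent, the hedged Python sketch below shows how the Friedman test with follow-up Wilcoxon signed-rank comparisons could be reproduced with SciPy. The data are random placeholders, not values from the study.

```python
import numpy as np
from scipy import stats

# Placeholder sound-duration data (ms) for 8 fish at the three acclimation
# temperatures (22 C, 30 C, 22 C repeated) -- not the study's data.
rng = np.random.default_rng(0)
dur_22, dur_30, dur_22_rep = rng.normal(loc=[[120.0], [90.0], [115.0]],
                                        scale=10.0, size=(3, 8))

# Friedman test across the three repeated measurements per fish ...
chi2, p = stats.friedmanchisquare(dur_22, dur_30, dur_22_rep)
print(f"Friedman: chi2 = {chi2:.3f}, p = {p:.3f}")

# ... followed by pairwise Wilcoxon signed-rank tests when significant
# (a Bonferroni correction would be applied across the three pairs).
if p <= 0.05:
    print(stats.wilcoxon(dur_22, dur_30))
```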
A Dimerization Function in the Intrinsically Disordered N-Terminal Region of Src
SUMMARY The mode of regulation of Src kinases has been elucidated by crystallographic studies identifying conserved structured protein modules involved in an orderly set of intramolecular associations and ligand interactions. Despite these detailed insights, much of the complex behavior and diversity in the Src family remains unexplained. A key missing piece is the function of the unstructured N-terminal region. We report here the function of the N-terminal region in binding within a hydrophobic pocket in the kinase domain of a dimerization partner. Dimerization substantially enhances autophosphorylation and phosphorylation of selected substrates, and interfering with dimerization is disruptive to these functions. Dimerization and Y419 phosphorylation are codependent events creating a bistable switch. Given the versatility inherent in this intrinsically disordered region, its multisite phosphorylations, and its divergence within the family, the unique domain likely functions as a central signaling hub overseeing much of the activities and unique functions of Src family kinases.
INTRODUCTION
The Src family kinases (SFKs) are a closely related family of non-receptor tyrosine kinases that play important roles in many different cellular signaling pathways (Erpel and Courtneidge, 1995; Parsons and Parsons, 2004; Thomas and Brugge, 1997). Their participation in such a wide repertoire of cellular pathways and the plethora of substrates phosphorylated by SFKs preclude the assignment of a unique function to this family, and it appears that SFKs play a fundamental role in regulating many aspects of metazoan life. The SFK family encompasses 11 members in humans by the Manning classification, including a core group of closely related members Src, Yes, Fyn, Fgr, Blk, Hck, Lck, and Lyn and a more distantly related group of Frk, Srm, and Brk (Manning et al., 2002). Src, Yes, and Fyn are widely expressed in most tissue types, whereas the other members have a much more restricted expression pattern. All SFKs share a simple modular domain structure consisting of SH3, SH2, and catalytic kinase domains (KDs) followed by a short C-terminal tail, all of which are highly homologous within the family. However, N-terminal sequences of ~70-80 residues are entirely divergent, imparting a unique identity and possibly unique and nonredundant functions to individual members. The more closely related core SFK members also share a few residues at the extreme N terminus following cleavage of the starting methionine, providing a recognition motif for myristoylation of the N-terminal glycine by N-myristoyl transferase (NMT) (Resh, 1994, 1999). This motif, along with several charged amino acids within the first 10-17 residues that enhance affinity to membrane lipids, is referred to as the SH4 domain and accounts for the localization of these SFKs to membranes (McCabe and Berthiaume, 1999; Resh, 1994).
Decades of efforts have revealed considerable insights into how the activities of SFKs are regulated by their modular structure (extensively reviewed in Boggon and Eck, 2004;Roskoski, 2004;Sicheri and Kuriyan, 1997;Xu et al., 1999). Key determinants of their activity are phosphorylation and protein-protein interactions. Phosphorylation at Tyr 527 (using chicken c-Src numbering) by C-terminal Src kinase (CSK) is inhibitory, while phosphorylation at Tyr 416 is activating, although neither of these phosphorylations by themselves exert full positive or negative regulatory control. Pioneering crystallographic studies reveal that Src is held in an autoinhibited state by intramolecular interactions wherein the SH2 domain is bound to its phosphorylated C-terminal Tyr 527 and the SH3 domain engages a proline-rich SH2-KD linker region and the kinase N-lobe. In this state, the active site of the KD is disrupted by displacement of the aC helix. These intramolecular interactions are weak in nature and can be destabilized by solicitation of the SH2 and SH3 domains by higher-affinity ligands (Moarefi et al., 1997) or by dephosphorylation of Tyr 527 by a number of phosphatases (Roskoski, 2005). Auto-phosphorylation at Tyr 416 stabilizes the activation loop, an event that in most kinases is required for catalytic activity. Thus, the selection of substrates by Src is directly influenced by ligand recognition via the SH3 and SH2 domains and not simply by the substrate motif preferences of its catalytic domain active site in what has been termed the "turned on by touch" mode of signaling (Boggon and Eck, 2004). As such, the activities of SFKs are greatly affected by their molecular surroundings. These structural modules are highly homologous across the entire family, leaving unanswered why the family is so large and how individual members can perform unique functions.
A key feature of SFKs that is missing in current models of their regulation is their unique domains (UDs). This is because the UD is intrinsically disordered and has eluded analysis in the crystallographic studies that have informed much of our current understanding of SFK regulation. However, considerable lines of evidence suggest that the UD is critically involved in the regulation of SFKs. Although the UD sequences are not conserved between family members, they are well conserved across species, suggesting that the UD is not merely a spacer region but rather harbors specific functions. Furthermore, although there is little sequence homology between the UDs of SFKs, studies using nuclear magnetic resonance (NMR) and small-angle X-ray scattering reveal compaction of the UD mediated through several long-range interactions between aromatic residues that are conserved within the SFK family, suggesting that these disordered regions share at least some common structural features (Arbesú et al., 2017). The UDs of individual SFK members confer specific signaling functions distinct from other members, and in fact some of these functions can be lost by UD deletion or transferred by UD-swapping experiments (Carrera et al., 1995; Hoey et al., 2000; Summy et al., 2003; Werdich and Penn, 2005). Further consistent with a regulatory function, the UDs of SFKs are phosphorylated and dephosphorylated on several sites, affecting their signaling functions (Amata et al., 2013, 2014; Hansen et al., 1997; Johnson et al., 2000; Joung et al., 1995). How these phosphorylations regulate function remains largely unknown, awaiting more mechanistic insights into the functions of the UD. The most detailed function described thus far is for the UD of Lck. Lck binds the T cell coreceptors CD4 and CD8 on the surface of T lymphocytes through its UD (Shaw et al., 1989). Although its UD is unstructured in isolation, it adopts an organized heterodimeric solution structure when complexed with zinc and the cytoplasmic tails of CD4 or CD8, providing a glimpse of how this disordered region can mediate a highly specific interaction that enables Lck to function as a second messenger signaling molecule for these surface receptors (Kim et al., 2003). Such a function has yet to be demonstrated for any other SFK UD, and in fact more recent efforts seem to imply considerable versatility in the functions of UDs.
The interest in understanding the functions of intrinsically disordered proteins (IDPs) or intrinsically disordered regions (IDRs), such as the UD of SFKs, has exploded in recent years as the sheer abundance and relevance of IDPs and IDRs in the eukaryotic proteome has become apparent (Dunker et al., 2002; Dyson, 1999, 2015). IDPs frequently exhibit multiple interaction capabilities and function as protein interaction hubs in cell-signaling networks (Dunker et al., 2005; Kim et al., 2008). Although intrinsically disordered, they often contain small elements that fold upon binding to protein targets, mediating interactions characterized by high specificity but modest affinity. Such interactions begin and end with rapid kinetics and thus are highly dynamic and often transient (Dyson and Wright, 2005; Wright and Dyson, 1999). The modest affinities of IDPs allow for a substantial degree of regulation through post-translational modifications, enabling them to function as signaling hubs (Borg et al., 2007; Gsponer and Babu, 2009; Van Roey et al., 2012). The emerging evidence and the developing paradigms for how IDRs can mediate key regulatory functions have renewed our interest in understanding the functions of the disordered UDs at the N-terminal region of SFKs.
In this work, we began by asking whether SFKs signal as dimers, what structural elements may be mediating the formation of such dimers, and how dimerization affects signaling function. The evidence implicates the UD along with the N-terminal myristoylation as critical mediators of dimerization and highly relevant for kinase activity and substrate phosphorylation.
Src Forms Dimers Involving the SH4-UD and KD Regions
Dimerization is a key mechanism mediating the signaling functions of many protein kinases. However, a role for dimerization has not been described for SFKs. We began this study by asking whether Src dimerizes, and if so, to determine the structural determinants that mediate dimerization and the potential relevance of dimerization in Src signaling. Constructs were generated using the full-length human c-Src or various deletion mutants lacking one or more of its functional domains. The structures and nomenclature used for all these constructs are schematically depicted in Figure 1. All constructs were generated in duplicate with either a hemagglutinin (HA) tag or a FLAG tag at the C terminus, enabling simple detection of dimerization through HA or FLAG co-immunoprecipitation assays. Transient transfection of these constructs was done in SYF cells (Src-Yes-Fyn null), which lack the expression of the SFK family in order to eliminate any background effects related to competition from endogenously expressed SFK proteins. The transfection efficiencies are high and the expression levels achieved in these experiments are roughly comparable to the endogenous expression of Src in many human cell lines, and thus the interactions detected in these studies are not due to artifactually high expression ( Figure S1). The high transfection efficiency of GFP and full-length Src provides a reasonable estimate of this transfection technique in this cell type, although there may be differences in efficiency among the many different mutant constructs used.
From these HA or FLAG co-immunoprecipitation assays, it is clear that Src forms dimers ( Figure 2A). Dimerization of Src requires the N-terminal region containing the SH4 and UD ( Figure 2B, lane 3 versus lane 7; Figure 2C, lane 1 versus lane 2). Dimerization is maintained in constructs lacking the SH3-SH2 domains ( Figure 2B, lane 3). Dimerization is greatly impaired, although not completely lost, in constructs lacking the UD and SH3-SH2 domains but maintaining the SH4 domain ( Figure 2B, lane 5). Dimerization is also greatly impaired, although not completely lost, in constructs lacking myristoylation due to G2A mutation ( Figure 2C, lane 3). The SH3-SH2 domains have no ability to dimerize with themselves ( Figure 2D, lane 5) and are dispensable for dimerization of full-length Src ( Figure 2B, lane 3). The KD has no ability to dimerize with itself ( Figure 2B, lane 7); however, the KD is not dispensable for the dimerization of Src, and constructs lacking the KD are impaired in dimerization ( Figure 2D, lane 3). The N-terminal SH4-UD segment is not able to dimerize with itself. This cannot be demonstrated through the simple expression of an SH4-UD segment, since this instrinsically disordered segment by itself is unstable and very poorly expressed ( Figure S2). However, the SH4-UD region fused to mCherry is stable and expressed and shows no detectable self-dimerization ( Figure 2E). Taken together, these data suggest that the N-terminal SH4-UD region and the KD region are the functional determinants of dimerization. Both myristoylation and the adjacent UD region have functions that contribute to the dimerization of Src. This is evident in assays using constructs that lack only the UD domain or constructs that are defective in myristoylation due to mutation of the myristoylated glycine 2 residue. In these assays, separate deletion of myristoylation or the UD region partially, but not completely, diminishes dimerization ( Figures 2B and 2C).
Dimerization Is Asymmetric
The assays described above query the dimerization of identical constructs. In experimental designs that query the dimerization of nonidentical constructs, it is evident that the N-terminal SH4-UD is only required on one partner for dimerization (Figure 2G, lane 2). A partial decrease in dimerization is evident when the SH4-UD region is present in only one partner. On the other hand, deletion of the KD in either partner prevents dimerization (Figure 2H, lanes 2 and 3). This implicates a more complex role for the KD, which is explored further below.
Dimerization Is Direct
The observed interaction of Src proteins is a direct interaction and not a molecular proximity afforded by larger cellular macromolecular protein complexes. This is evident when recombinant purified HA-tagged and FLAG-tagged Src proteins are incubated by themselves in vitro (Figure 2I). Purified Src run on a native gel migrates in two molecular weight forms consistent with monomers and dimers (Figure 2J). The relative ratio of monomers to dimers in this assay is likely not a faithful reflection of in-cell dimerization due to dimer disruption by the stringent purification process. Larger oligomeric forms may also be potentially disrupted during purification, and the macromolecular environment of Src in cells may involve larger forms. The cell-based assays suggest that the minimal regions necessary for dimerization are the N-terminal region and the KD. The interaction of these two domains can also be recapitulated in vitro. While the N-terminal SH4-UD region cannot be expressed and purified by itself, it is stable and can be purified when fused with glutathione S-transferase (GST). This recombinant purified SH4-UD-GST protein binds a purified KD, confirming the affinity of the N-terminal region of Src for the C-terminal KD of Src (Figure 2K).
Src Dimerization Is Directly Evident in Living Cells
The dimerization of Src observed in co-immunoprecipitation assays is an in-cell interaction and not a post-lysis artifact. For more direct evidence of Src dimerization in living cells, we took an orthogonal approach using a protein fragment complementation assay. The complementation of fragmented SNAP tags is particularly useful to detect stable protein-protein interactions, as it is not complicated by the irreversible complementation seen with many split fluorescent proteins (Mie et al., 2016). The signal generated by the complementation of split SNAP tags fused to the C terminus of Src provides direct evidence in living cells of Src dimerization (Figure 3). The dimerization observed in this assay requires the N-terminal SH4-UD of Src, consistent with the biochemical studies discussed above (Figure 3).
Y419 Phosphorylation and Open Conformation Are Required for Src Dimerization
The dimerization of Src is linked with its activation state and conformational dynamics. Dimerization is promoted by the open active state of Src, and the closed tethered state is nonpermissive to dimerization. This is evident in dimerization assays using the Y530F mutant that promotes the open state by destabilizing the Y530-SH2 interaction (Liu and Pawson, 1994) or using the 530YEEI mutant that promotes the closed state by stabilizing the Y530-SH2 interaction (Schindler et al., 1999) (Figure 4A). Autophosphorylation of Y419 is required for dimerization, as evident in Y419F mutation analysis (Figure 4B). This is also evident in kinase-dead constructs that cannot autophosphorylate and thereby fail to dimerize (Figure 4C). Experiments with kinase inhibitors provide more subtle insights into the requirement for autophosphorylation in dimerization. If Src inhibitor treatment is applied immediately upon Src expression, then dimers will not form (Figure 4D, lane 2). However, if Src is inactivated long after its expression and dimerization, then preformed dimers are not disrupted despite clear dephosphorylation of Y419 (Figure 4D, lane 3; Figure S3). The observed reduction of dimerization following 1 hr of late Src inhibitor exposure is not due to disruption of preformed dimers; rather, it is due to the natural turnover of Src and the inhibition of dimerization in newly expressed Src molecules. This becomes most evident if the late initiation of Src inhibitor treatment is allowed to continue for a prolonged time frame beyond the lifetime of Src proteins that may have been previously engaged in dimers (Figure 4D, lane 4). These experiments are consistent with a scenario wherein autophosphorylation of Y419 is a prerequisite for dimerization but is thereafter dispensable once the dimer is formed.
Y419 Phosphorylation and the SH4-UD Are Required in cis
Assay designs querying the dimerization of nonidentical constructs provide more clarity regarding the requirement for kinase activity and autophosphorylation in dimerization. Autophosphorylation at Y419 is required in only one partner for dimerization (Figure 4E, lanes 6 and 7). Since the N-terminal region was also shown to be required in only one partner, a logical next question is whether these two requirements are in cis or in trans.
Comparing dimerization in a variety of experimental arms designed to test whether the N-terminal region and the Y419 autophosphorylation are required in cis or in trans, it is evident that this requirement is in cis (Figure 4E, lane 6 versus lane 7). Similarly, an assay designed to test whether the N-terminal region and catalytic kinase activity are required in cis or in trans revealed that this requirement is also in cis (Figure 4F, lane 3 versus 4). Taken together, this means that for dimerization to occur, partner A must contain an SH4-UD region and be autophosphorylated on its Y419 in order to dimerize with partner B. The autophosphorylation of Y419 is an intermolecular phosphorylation involving another partner A, as Src does not have the ability for intramolecular Y419 autophosphorylation. The fact that Y419 autophosphorylation is an intermolecular event was demonstrated in a more specifically designed experiment using the K298M kinase-dead mutant of Src. Constructs were generated that contain the 20-kDa C-terminal SNAP tag. These constructs migrate higher on gels, allowing for easy identification of simultaneously expressed tagged and untagged forms of Src. In this experimental design, it is evident that the kinase-dead mutant of Src is autophosphorylated on Y419 in an intermolecular event by a wild-type Src (Figure 4G, lanes 4 and 5).
Dimerization Involves the N-Terminal Myristate Binding to a KD Pocket in trans
The evidence thus far identifies the SH4-UD region as well as the KD to be determinants of Src dimerization. How these regions can mediate dimerization is not evident from our current understanding of Src structure, since the SH4-UD domain is intrinsically disordered and almost all of the X-ray crystallographic studies have been performed on constructs lacking the N-terminal region or lacking myristoylation. Studies of the closely related Abl kinase provide some insights into potential binding interfaces. Abl is myristoylated similar to Src. However, the myristoyl moiety can engage in an intramolecular interaction with the KD (Nagar et al., 2003). In this interaction, the myristoyl moiety embeds deep into a hydrophobic pocket within the base of the C-lobe of the KD (Figure S4). This engagement of the myristate group induces conformational changes in the C-lobe that affect its regulation. Crystallographic studies of Src show structural features at this site of the KD that could potentially also bind myristate, and this is supported by NMR studies (Cowan-Jacob et al., 2005). In structural alignment, the overall architecture of this myristoyl pocket in Abl and Src appears nearly identical (Figure S4). To explore the possibility that this type of myristate-KD interaction may mediate the dimerization of Src, we focused on mutational studies of this hydrophobic pocket in Src (Figure 5A). Several bulkier residues lining this pocket were mutated to alanine, opening up this pocket in the KD of Src and potentially making it more permissive to myristate binding. These mutations, to varying degrees, increase the dimerization observed between the KD segment and the N-terminal region of Src (Figure 5B). On the other hand, T459 is a lone small amino acid at the gate of this pocket, and a T459F mutation introduces a bulkier residue into this site, making it more restrictive for myristate binding. Consistent with this, the T459F mutation decreases the dimerization observed between the KD and the N-terminal region of Src (Figure 5C). Taken together, these studies support the hypothesis that dimerization involves the interaction of the myristoylated N-terminal region of one partner with the KD hydrophobic pocket of another. Alternatively, it remains possible that an intramolecular myristate-KD interaction forces a conformational state that exposes another dimerization interface. To distinguish between these possibilities, the restrictive T459F mutation or the more permissive L494A mutation was introduced into the KD either in cis or in trans with the SH4-UD region. In these experiments, it is evident that the restrictive T459F mutation diminishes dimerization only when it is introduced in trans with the SH4-UD domain (Figure 5D, lane 2 versus lane 3), and similarly, the more permissive L494A mutation enhances dimerization only when it is introduced in trans with the SH4-UD domain (Figure 5E, lane 2 versus lane 3). These experiments are most consistent with a mode of dimerization wherein the N-terminal region of one partner binds to the KD of the other partner.
The binding of the myristate to the KD would be expected to come at the cost of membrane localization, since it would no longer be available for embedding into the plasma membrane. Consistent with this, the mutations that increase myristate-KD binding demonstrate a decrease in membrane localization, whereas the mutations that decrease myristate-KD binding demonstrate an increase in membrane localization (Figure S5). The myristate-KD binding is also potentially subject to competition by appropriately designed small-molecule inhibitors. Such inhibitors have been developed that bind the myristate-binding pocket in Abl (Zhang et al., 2010) (Figure S4D). This allosteric inhibitor of Abl, although not specifically designed for the analogous hydrophobic pocket in Src, does show some weak activity in increasing the membrane localization of Src, consistent with its ability to at least partially compete out the myristate-KD interaction (Figure S5D). These data are consistent with the notion that the observed dimerization of Src is mediated through myristate binding within the hydrophobic pocket of the KD, analogous to Abl. Whether the UD segment also makes specific interactions with residues on the KD C-lobe is not addressed by these experiments, although numerous experiments with deletion constructs described earlier reveal that both myristoylation and the UD segments are important for dimerization.
N-Terminal Region Enhances Kinase Activity
The N-terminal region of Src is not only important in its dimerization but also contributes to kinase activity. Deletion of the N-terminal SH4-UD region reduces Src autophosphorylation (Figure 6A). This impairment of Src autophosphorylation activity is not due to increased Y530 phosphorylation (Figure 6A) and cannot be rescued by forcing the open and active conformation through the Y530F mutation (Figure 6B). Both the myristoyl moiety and the UD contribute to activity, with a greater impact from the loss of myristoylation. Myristoylation may directly enhance catalytic activity by its binding to the KD or, alternatively, may mediate an increase in substrate phosphorylation through localization effects or by endorsing a specific molecular microenvironment. As an example, the catalytic activity of the Src-related cytoplasmic Abl kinase is known to be regulated by its myristoylation signal. To more directly explore the role of the N-terminal SH4-UD region in the catalytic activity of Src independent of its effects on compartmentalization, we turned to in vitro kinase assays. Src was expressed and purified from HEK293T cells so that it would be properly myristoylated (Figure 6C). Assay conditions were established to be in the linear range with respect to duration and concentration (Figure S6). Deletion of the N-terminal SH4-UD region substantially diminishes the kinase activity of Src measured through its autophosphorylation (Figure 6D). These are partial decreases relative to wild-type, and the elimination of myristoylation has a bigger impact than the elimination of the UD (Figures 6D, 6E, and S6). Partial differences can also be appreciated in cell-based experiments by increasing expression levels through increasing the amounts of plasmid DNA transfected (Figure S7). Similar results are obtained when assaying the kinase activity of Src against purified substrates, including paxillin, enolase, or Trask/CDCP1 (Figure 6F). The concentration-activity relationship of wild-type (WT) Src is consistent with an increase in specific activity as a function of concentration (Figure 6E), a finding that is consistent with a dimerization-driven mode of activation. Although it is difficult to compare soluble protein concentrations in vitro to compartmentalized proteins in cells, the cellular concentration of Src is estimated to be in the 1-10 nM range (Gee et al., 1986; Milo et al., 2010), and thus these in vitro experiments are in a concentration range that can potentially inform physiologic behavior. Moreover, disruption of Src N-terminal myristoylation in several human cell lines by a selective inhibitor of N-myristoyltransferase also diminishes Src autophosphorylation and phosphorylation of the substrate FAK (Figures 6G and 6H), although in the cellular context, effects on subcellular or molecular localization may also contribute to the observed effects.
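As an illustration of the concentration-activity analysis described above, the short sketch below computes specific activity (autophosphorylation signal per unit of enzyme) from a dilution series. The enzyme concentrations and densitometry values are invented placeholders, not data from this study; the point is only that a rising signal per molecule with increasing concentration is the behavior expected for a dimerization-driven mode of activation.

```python
# Illustrative specific-activity calculation from a hypothetical kinase-assay
# dilution series. If the enzyme behaved as an isolated monomer, the signal
# per nM of enzyme would be roughly constant across concentrations; a steady
# rise instead points to concentration-dependent (e.g., dimerization-driven)
# activation.
enzyme_nM = [1, 2, 5, 10, 20]                  # assumed Src concentrations
pY419_signal = [1.0, 2.6, 8.5, 21.0, 50.0]     # hypothetical densitometry units

for conc, signal in zip(enzyme_nM, pY419_signal):
    specific_activity = signal / conc          # signal per nM of enzyme
    print(f"{conc:>3} nM Src: total signal {signal:>5.1f}, "
          f"specific activity {specific_activity:.2f} per nM")
```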
Dimerization Is Required for Signaling Activity
Interfering with kinase activity or Y419 autophosphorylation prevents dimerization, as shown by the mutational and drug inhibition studies described above. The reverse also holds true, such that interfering with dimerization interferes with the cellular phosphorylation activities of Src. This was determined by the identification and use of constructs that can interfere with dimerization in a dominant-negative manner. The Y419F-K298M-Y530F triple mutant of Src, which lacks its two main regulatory phosphorylation sites and is also kinase dead, has been used in the field as a dominant-negative version of Src. Our studies would suggest that this triple mutant exerts its dominant-negative activities by interfering with the dimerization of Src. Consistent with this notion, this triple-mutant Src, while deficient at self-dimerization, does bind WT Src and diminishes the autophosphorylation activity of WT Src and the transphosphorylation of the substrate FAK (Figure 7A). Since we have shown that dimerization is mediated through the N-terminal SH4-UD region, we should be able to reproduce this dominant-negative or inhibitory activity through the expression of a competing SH4-UD segment alone. Indeed, we find through stepwise deletion of the KD and SH3-SH2 regions of this dominant-negative construct that its dominant-negative function is preserved and in fact enhanced by the smaller constructs, possibly related to the higher expression of these smaller constructs (Figure 7B). The minimal SH4-UD region by itself is unstable and not expressed, but a GFP-fused version is well expressed and demonstrates that the inhibitory activity is entirely contained within this minimal region (Figure 7B, lane 5). More detailed analysis of this minimal SH4-UD-GFP construct reveals that it binds WT Src and successfully outcompetes a WT Src partner in the process, and the result is inhibition of Src Y419 autophosphorylation activity and inhibition of phosphorylation of its substrate, FAK (Figure 7C).
DISCUSSION
The experiments here provide structural and functional insights into the dimerization of Src. They show that Src does dimerize and that dimerization is mediated by the myristoylated N-terminal region of one partner and the hydrophobic pocket within the KD C-lobe of the other partner. Dimerization requires Y419 phosphorylation in cis with the N-terminal region, but this phosphorylation is dispensable after dimer formation. The fact that Y419 phosphorylation is required for dimerization is corroborated by Y419F mutation, kinase-dead Src mutants, and kinase inhibitor studies. But why phosphorylation of Y419 in cis is required for the N-terminal region to engage the KD of a partner in trans is not immediately clear. Stable protein interactions often involve intermediate conformational states, and it is possible that the Y419-phosphorylated state of Src provides an appropriate platform for a transient conformation that precedes the more stable dimeric state. It is also possible that when Src is not phosphorylated at Y419, the N-terminal region is bound to specific proteins or engaged in a specific conformational state and that Y419 phosphorylation releases it from such a state, making it available for dimerization. Regardless of the structural basis, the functional link between Y419 phosphorylation and dimerization provides important clues regarding the role of Y419 phosphorylation in the activities of Src kinases. In the earliest days of this field, Y419 phosphorylation was thought to be integral to the activation process of Src because of reduced signaling and biologic activities observed with Y419F mutation of Src and emerging evidence from other kinases that phosphorylation in the activation loop of kinases is essential for access to the active site (Adams, 2003; Nolen et al., 2004). But numerous lines of evidence proved inconsistent with this generalization. The Y419F mutation only produces partial changes in observed activities, and these appear to be context dependent, possibly substrate dependent, discordant between in vitro and in-cell experiments, and not observed in all circumstances (Boerner et al., 1996; Ferracini and Brugge, 1990; Kmiecik and Shalloway, 1987). In in vitro kinase reactions, substrate phosphorylation occurs in competition with Y419 autophosphorylation (Sun et al., 2002). Furthermore, the Y419F mutant of Src retains activity in vitro (Ferracini and Brugge, 1990; Piwnica-Worms et al., 1987), as does the Y419A mutant of Hck (Porter et al., 2000). In some cell signaling functions, Y419 phosphorylation appears redundant. For example, Src catalytic activity is required to phosphorylate FAK and mediate integrin signaling and cell spreading and migration, yet this integrin-induced function is performed without a detectable induction in Y419 phosphorylation of Src and is unmitigated by the Y419F mutation of Src (Cary et al., 2002). Furthermore, the v-src oncogene retains its tumorigenic properties despite Y416F mutation (analogous to human Y419F) (Snyder and Bishop, 1984). Cowan-Jacob et al. (2005) reported crystallographic studies of the unphosphorylated form of Src, revealing the A-loop to be in the open extended state with the active site exposed, consistent with the conformation of an active kinase. The active conformation was seen regardless of whether the kinase was bound to an inhibitor or AMP-PNP or was in apo form. Although seemingly not required to stabilize its extended conformation, it was hypothesized that Y419 phosphorylation of the A-loop may facilitate binding of certain substrates.
It is clear now that while Y419 phosphorylation alters the biologic behavior of Src, this is not due to on or off switching of catalytic activity. Our studies establishing a functional link between Y419 phosphorylation and dimerization provide further insights into the role of Y419 phosphorylation. Our findings suggest that Y419 phosphorylation, by enabling dimerization, may promote Src to participate in specific macromolecular complexes and thereby affect substrate access and selection by Src, affecting its functions in this more specific way rather than through a nonspecific activation of catalytic function. It is possible that monomeric Src has biologic actions distinct from dimeric Src and that Y419 phosphorylation can dictate this transition. While our study establishes the existence and importance of Src dimerization, the techniques used here do not rigorously address the stoichiometry of dimerization. Future lines of study will need to determine more specifically how much of Src is involved in monomers versus dimers versus larger oligomers and establish the dynamics of these macromolecular events using techniques more suited for these queries.
The fact that Y419 autophosphorylation is required for dimerization (Figure 4) and that dimerization is required for Y419 autophosphorylation (Figure 7) presents a seemingly difficult regulatory relationship to reconcile at first glance, since it seems to present a substantial barrier to the establishment of an autophosphorylated Src dimer. But this likely proceeds through transient unstable states of dimerization without phosphorylation, or vice versa, and a high threshold to adopt an initial stable phosphorylated dimer. This, in fact, describes a bistable switch common in biologic systems. While some signaling events in biological systems provide graded outputs, operating like a rheostat, other signaling events in nature provide binary outputs, operating like a switch in the on or off position. Such biological systems are stable in the off position and stable in the on position and are thus called bistable switches (Chatterjee et al., 2008; Pomerening, 2008). The barrier to activation in such bistable switches functions to ensure that the signal is not activated prematurely, and the self-perpetuating mechanism functions to ensure continuous and sustained output once activated, functioning as a sort of biological memory for an initiating signal that does not itself persist. The relationship between Src autophosphorylation and dimerization appears to describe such a bistable switch and suggests a binary mode of signaling output inherent in at least some of the functions of the Src family.
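The bistable behavior invoked here can be illustrated with a deliberately minimal toy model: a single variable representing the amount of autophosphorylated, dimerization-competent Src that is produced through cooperative positive feedback and removed by first-order turnover. All rate constants, the Hill coefficient, and the starting values in the sketch below are arbitrary assumptions chosen only to show how two stable states can coexist; they are not derived from the data in this study.

```python
# Minimal, illustrative bistable-switch model. x is the amount of Src in the
# autophosphorylated/dimeric state (arbitrary units); it is generated by a
# cooperative positive-feedback term and lost by first-order turnover.
def dxdt(x, basal=0.01, vmax=1.0, k=0.5, n=4, turnover=0.6):
    feedback = vmax * x**n / (k**n + x**n)   # self-reinforcing activation
    return basal + feedback - turnover * x   # weak basal formation minus decay

def steady_state(x0, dt=0.01, steps=20000):
    x = x0
    for _ in range(steps):                   # simple forward-Euler integration
        x += dt * dxdt(x)
    return x

# Two different initial conditions settle into two different stable states:
print(f"start at 0.05 -> {steady_state(0.05):.3f}  (stays 'off')")
print(f"start at 0.60 -> {steady_state(0.60):.3f}  (locks 'on')")
```

Once the system has crossed the activation barrier, the feedback term keeps it in the high state even if the initiating perturbation is removed, which is the qualitative behavior suggested for the autophosphorylation-dimerization loop.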
Many kinases are known to dimerize, although the modes of dimerization vary considerably between kinase families (Lavoie et al., 2014). The best-understood function of dimerization is the regulation of the on or off state of kinases through orthosteric or allosteric mechanisms. In the case of Src kinases, the observed bistable switch linked with dimerization is unlikely to correspond to catalytic on or off states. Consistent with this, and as described above, the Y419F mutant of Src is impaired in dimerization yet retains activity in in vitro and in vivo studies. More likely, the dimerization of Src affects its biologic behavior, perhaps including access to specific macromolecular complexes or engagement of specific substrates. The fact that the N-terminal region mediates dimerization when binding a partner KD yet mediates membrane localization when embedding in the plasma membrane creates a functional competition that may have additional relevance to the regulation of the activities of Src kinases. The N-terminal region is also known to mediate interactions with other proteins (Shaw et al., 1989; Vonakis et al., 1997), also potentially in competition with its dimerization function, providing yet another layer of complexity in the functions of this region. The multifunctional nature of the N-terminal domain is likely enabled by its lack of an ordered structure, allowing it to adopt different conformations during interactions with different partners. The flexibility inherent in the UD makes it an attractive candidate as a regulatory hub that governs the overall activities of Src kinases. Adding additional layers of diversity is the fact that dimerization in the Src family is not restricted to homodimers. We also readily detect Src-Fyn heterodimers and Fyn-Fyn homodimers in similar HA or FLAG pulldown assays (not shown). Since the N-terminal UDs are divergent between Src family members, the existence of various homodimeric and heterodimeric complexes of Src kinases creates even broader diversity in the potential signaling activities associated with the various homo- and heterodimers.
Our understanding of the mechanisms underlying the regulation of Src kinases was largely informed by the crystallographic studies of the late 1990s. These developments came during an era of pioneering studies revealing how structured protein modules have evolved to regulate protein function. Now, two decades later, many of the complexities in the Src family remain difficult to explain, and current models of Src regulation have major conceptual gaps that await deeper insights. These gaps may potentially be filled by a new leap in proteomics, recognizing the sheer abundance and functional significance of unstructured protein regions. Indeed, we may have underestimated the importance of the N-terminal region of Src kinases, and this unstructured region may underlie much of the complexity inherent in the functions of this kinase family. This would be consistent with an increasing body of evidence regarding the critical role of IDRs in regulating protein function. These regions can harbor structural plasticity, adopting different structures on different targets (Dyson and Wright, 2005), or they can bind via dynamically heterogeneous conformations, or they can bind without any apparent order (Baker et al., 2007; Mittag et al., 2010; Tompa and Fuxreiter, 2008; Wang et al., 2011). The conformational entropy inherent in IDRs facilitates the allosteric coupling of protein domains, placing IDRs at the heart of the processes that regulate protein functions. Since the interactions of IDRs are based on limited or transient structural features and lower-affinity interactions, they are much more affected by post-translational modifications, and it is well appreciated that IDRs are a hub for the regulation of protein function through post-translational modification (Borg et al., 2007; Gsponer and Babu, 2009; Van Roey et al., 2012). Indeed, post-translational modifications of the proteome are biased toward IDRs, including phosphorylation, glycosylation, hydroxylation, acetylation, sulfation, ADP ribosylation, SUMOylation, and ubiquitination (Iakoucheva et al., 2004; Pejaver et al., 2014). The N-terminal region of Src is no exception to this and is known to undergo multiple phosphorylations with consequent effects on its observed activities (Amata et al., 2013, 2014; Hansen et al., 1997; Johnson et al., 2000; Joung et al., 1995). The presence of other post-translational modifications of the N-terminal UD of Src has not been studied to date, but our work provides a compelling case that the functions of Src kinases may be regulated in part by post-translational modification of the N-terminal UD. IDRs such as the UD of Src can function as signaling hubs, coordinating the activities of Src kinases in response to specific modifications. Such modifications can bias the binding affinities of the N-terminal region between the KD of a dimerization partner, the plasma membrane, or an interacting protein or cytoskeletal macromolecule.
The dimerization of Src has been considered and detected by other studies but with conflicting results and no functional insights. Weijland et al. (1997) purified and studied a truncated Src protein missing the N-terminal SH4-UD region and found it to be monomeric by gel filtration chromatography, consistent with our findings using a similarly truncated construct. Kemble and Sun (2009) studied bacterially purified Src with an interest in redox effects and reported that oxidation induces a covalent dimerization of Src in vitro mediated through disulfide Cys bridging. Irtegun et al. (2013) reported Y416 phosphorylation in the closed state of Src and in their studies conducted ultracentrifugation of EGFP-fused Src, observing both monomeric and dimeric forms. These studies suggest the possibility that Src may form dimers, but the lack of myristoylation, the presence of dimerizing tags, or the lack of in-cell evidence has not allowed confirmation and insightful analysis of dimerization. Le Roux et al. (2016a, 2016b) studied the kinetics of in vitro liposome binding to N-terminal Src regions and reported that the myristoylated SH4 region alone binds with kinetics that suggest dimerization is involved. The near-complete lack of protein content in this minimal construct and its dependency on lipid composition in the liposome may indicate a role for this region in the clustering of Src at the membrane and in membrane microdomains rather than dimerization involving protein interfaces. The arrangement of Src in some packed crystal structures of KD fragments has been interpreted as dimeric (Breitenlechner et al., 2005), but the relevance of these crystal findings to the physiologic structure of the full-length native protein is doubtful.
The importance of Src myristoylation has been known for some time, but this has largely been attributed to its role in membrane localization. Src constructs defective in myristoylation have altered phosphorylation activities (Bagrodia et al., 1993; Linder and Burr, 1988) and loss of transforming ability (Kamps et al., 1985, 1986), but the functions of the myristoyl moiety extend beyond simple membrane localization. In some proteins, myristate is known to bind intramolecularly within hydrophobic pockets of the protein. In these scenarios, alternative binding modes (i.e., to the membrane versus the protein) affect protein function and define a myristoyl switch, such as seen in HIV-1 Gag, recoverin, and Abl (Ames et al., 1996; Hantschel et al., 2003; Nagar et al., 2003; Resh, 2004; Tanaka et al., 1995). Src may also be regulated by a myristoyl switch, and our results support this model, but structural evidence for such a binding mode has thus far eluded crystallographic studies. Patwardhan and Resh (2010) conducted a biochemical analysis of myristoylated and non-myristoylated Src and reported effects on kinase activity, a possible association with the KD hydrophobic pocket, and enhanced protein stability and half-life. These are partly consistent with our findings, but this study did not investigate dimerization or signaling functions. The importance of the N-terminal UD for autophosphorylation activity has been demonstrated for the more distantly SFK-related protein Srms. The N-terminal UD of this protein is required for its catalytic activity (Goel et al., 2013), although the lack of N-terminal myristoylation in Srms highlights key differences between these distantly related kinases.

Cloning of constructs-The coding sequence of human c-Src was amplified from c-Src cDNA clone sc125208 (Origene). The primer sequences for this and all other construct cloning and mutation are provided in Table S1. Some constructs were cloned by amplification of the desired segment with appropriate primer extensions for cloning into pDONR221. The amplified PCR product was recombined by using BP Clonase II (ThermoFisher Scientific) into pDONR221. Some constructs were made by deletion cloning, i.e., outward PCR amplification of the entire plasmid omitting the desired deletion region. These were done with a low number of amplification cycles using Pfu Ultra HF polymerase and 5′-phosphorylated primers, followed by a self-ligation reaction and DpnI digestion. Point mutations were created by site-directed mutagenesis using Pfu Ultra HF polymerase and whole-plasmid amplification, followed by DpnI digestion and ligation in bacteria. All constructs were sequenced across the insert to verify the identity of the construct and rule out the presence of undesired mutations. The constructs in pDONR221 were moved to destination vectors using LR Clonase II (ThermoFisher).
Purification of Src proteins-Plasmids containing the desired WT or mutant Src constructs with in-frame C-terminal tags (V5, HA, FLAG) followed by a PreScission protease cleavage site and GST were transfected into HEK293T cells (typically 40-60 µg plasmid per 15 cm dish). The next day, the cells were treated with 1 mM dasatinib (unless otherwise stated) for 1 hour to dephosphorylate Src, and the cells were lysed in HEK293T lysis buffer (20 mM Tris-HCl, pH 7.5, 1% Triton X-100, 10% glycerol, 400 mM NaCl, 1 mM EDTA, 200 nM DTT, 1 mM PMSF, 2 µg/ml leupeptin, 2 µg/ml aprotinin, 1 nM sodium vanadate). The lysates were incubated at 4°C for 30 minutes with constant rotation and then centrifuged at 16,000 × g for 5 min. The supernatant was collected and incubated with Glutathione Sepharose 4B beads (GE Healthcare) at 4°C for 3 hours with constant rotation. The beads were washed 4 times with HEK293T lysis buffer and once with PPCB (PreScission protease cleavage buffer: 50 mM Tris-HCl, 150 mM NaCl, 1 mM EDTA, 1 mM dithiothreitol (DTT), pH 7.0). Next, the beads were incubated with 1 U PreScission protease (GE Healthcare) in PPCB at 4°C overnight with constant rotation. The next day, the supernatant containing the purified Src proteins was carefully collected. Typically, around 10% of the sample was run on a gel for Coomassie staining. The purified proteins were stored at −20°C in 50% glycerol.
Split SNAP-tag complementation assay-The split SNAP-tag complementation assay was performed by splitting the SNAP tag into nSNAP and cSNAP fragments following Gln91, according to a previously established complementation assay for living cells (Mie et al., 2012). Modified pDEST40 vectors were engineered with C-terminal nSNAP or cSNAP tags following 2XHA or 2XFLAG tags to generate pDEST40-2XHA-nSNAP and pDEST40-2XFLAG-cSNAP expression vectors. All constructs were confirmed by sequencing. WT and mutant Src inserts were then cloned into these vectors. These vectors were transfected into SYF cells using Lipofectamine 2000 according to the manufacturer's instructions (Invitrogen). Twenty-four hours after transfection, cells were labeled with SNAP-Cell Oregon Green (1 µM) for 30 min. The cells were washed three times to remove unreacted substrate, counterstained with 5 µM Hoechst 33342 for 2 minutes, and incubated in fresh medium for 30 min. Cells were fixed with 4% formaldehyde solution (Thermo Scientific) and permeabilized with 0.02% Triton X-100. Cells were blocked and stained with the indicated anti-HA antibody (Y-11 sc-805 Rb) at 1:500 and anti-FLAG antibody (Sigma F1804 Ms) at 1:200, followed by the appropriate secondary antibodies conjugated with Alexa Fluor 546 (excitation at 488 nm and emission at 585-615 nm) or Alexa Fluor 647 (excitation at 633 nm and emission from 650 nm). Cells were imaged using a spinning disk confocal Nikon TI inverted microscope (40× objective; lasers 405, 488, 561, and 648 nm). For each transfection, at least 49 cells were analyzed using ImageJ. The complementation signal was plotted as a function of the expression levels of the individual constructs.
Immunoprecipitation assays-Equal amounts (250 ng-2 µg) of the indicated HA- and FLAG-tagged Src constructs were co-transfected into SYF cells (6 cm or 10 cm dishes). Total cellular lysates were harvested in modified RIPA buffer (10 mM Na phosphate (pH 7.2), 150 mM NaCl, 0.1% SDS, 1% NP40, 1% Na deoxycholate, protease inhibitors, and 1 mM sodium orthovanadate). For immunoprecipitation studies, 200-300 µg of lysate in mRIPA was incubated overnight with mouse anti-HA antibodies (clone F7, Santa Cruz Biotechnology); immune complexes were collected with protein G-Sepharose beads (GE Healthcare) and washed, and the denatured complexes were separated by SDS-PAGE, transferred to membranes, and immunoblotted with appropriate antibodies. For western blotting, 30 µg of each lysate was separated by SDS-PAGE, transferred to a membrane, and immunoblotted using appropriate primary and secondary antibodies and enhanced chemiluminescence visualization.
For detection of Src dimers using native gel electrophoresis, Src and SrcG2A constructs were purified from HEK293T cells as described above. Equal amounts of purified Src and SrcG2A were loaded onto NativePAGE Novex Bis-Tris gels (Life Technologies). Electrophoresis was performed in the presence of Coomassie G-250 in the cathode buffer according to the manufacturer's instructions (#BN1002BOX, Life Technologies). Gels were transferred onto a PVDF membrane and blocked with 3% bovine serum albumin. The membranes were stained with primary antibodies overnight at 4°C, washed 3 times with TBST (Tris-buffered saline containing 0.5% Tween), stained with secondary antibodies for 1 hour at room temperature, and detected using the chemiluminescence method.
In vitro interaction of purified S4-UD with KD-For purification of the N-terminal S4-UD region fused to GST, the construct did not contain the PreScission protease cleavage site. The S4-UD-GST fusion construct was expressed in HEK293T cells, and lysates were loaded on Glutathione Sepharose 4B beads (GE Healthcare) at 4°C for 3 hours and subsequently washed 4 times with HEK293T lysis buffer and once with mRIPA. To demonstrate the interaction with the KD, the purified Src KD was incubated with the S4-UD-GST-bound beads at 4°C overnight in mRIPA buffer. The next day, the bound proteins were eluted with elution buffer (25 mM glutathione, 50 mM Tris, pH 8.8, 10 mM DTT, 200 mM NaCl, 10% glycerol) and boiled in Laemmli buffer. The proteins were separated by SDS-PAGE and immunoblotted.
In vitro dimerization of purified Src proteins-HA- and FLAG-tagged Src constructs containing the Y530F mutation were purified from HEK293T cells as described above, but the cells were not treated with dasatinib in order to preserve the endogenous Y419 phosphorylation. The Src proteins were dialyzed against buffer D (50 mM Tris-HCl, 150 mM NaCl) using Slide-A-Lyzer MINI Dialysis Devices (Thermo Fisher Scientific). The dialyzed proteins were co-incubated in the presence of kinase assay buffer (see below) and immunoprecipitated with mouse anti-HA antibody or normal mouse IgG control antibodies. The immune complexes were washed and boiled, separated by SDS-PAGE, transferred to a membrane, and immunoblotted with anti-FLAG antibodies.
In vitro kinase assay-The kinase assay buffer and ATP were purchased from SignalChem (Richmond, BC). The indicated amounts of purified Src proteins were incubated in kinase assay buffer (5 mM MOPS, pH 7.2, 2.5 mM β-glycerophosphate, 4 mM MgCl2, 2.5 mM MnCl2, 1 mM EGTA, 0.4 mM EDTA, 10 mM DTT, 250 mM ATP) at 30°C for 10-30 min. The reactions were stopped with 5× Laemmli buffer and boiling for 5 min. The results were analyzed by SDS-PAGE and western blotting. Recombinant human paxillin was purchased from Raybiotech (Norcross, GA), and recombinant enolase (ENO3) was purchased from Sino Biological Inc. (Beijing, PRC). All recombinant proteins used as substrates were purified from bacteria and therefore were not phosphorylated prior to the kinase reactions. HEK293T cells transfected with the desired Src constructs were treated with 1 mM dasatinib (a Src and Csk inhibitor) for 1 hour prior to lysis to dephosphorylate Src.
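A small worked example of the dilution arithmetic behind assembling these reactions may be helpful. Only the final enzyme concentrations (10 nM for the autophosphorylation reactions and 4 nM for the substrate reactions, per the figure legends) come from the text; the stock concentration and reaction volume below are assumed values used purely for illustration.

```python
# Worked C1*V1 = C2*V2 dilution arithmetic for setting up the kinase reactions.
# The 500 nM stock and 30 µL reaction volume are assumptions for illustration;
# the 10 nM and 4 nM final enzyme concentrations are the ones used in the text.
def stock_volume_uL(final_nM, reaction_uL, stock_nM):
    return final_nM * reaction_uL / stock_nM

for final_nM in (10, 4):
    v = stock_volume_uL(final_nM, reaction_uL=30.0, stock_nM=500.0)
    print(f"{final_nM} nM Src in a 30 uL reaction -> {v:.2f} uL of a 500 nM stock")
```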
Extraction of membrane and cytosolic fractions-Separation of membrane and cytosolic fractions was performed with the Mem-PER™ Plus Membrane Protein Extraction Kit (Thermo Fisher Scientific). Briefly, the FLAG-tagged versions of the indicated Src constructs were transfected into SYF cells. The next day, the cells were lysed in permeabilization buffer and spun at 16,000 × g for 15 min. The supernatant containing cytosolic proteins was carefully transferred to a new tube. Any residual droplets of supernatant remaining around the pellets were removed carefully with a P10 pipette. Next, the pellets were resuspended in solubilization buffer and incubated at 4°C for 30 minutes with constant mixing. After spinning at 16,000 × g for 15 minutes at 4°C, the supernatants containing solubilized membrane and membrane-associated proteins were collected. The collected cytosolic and membrane proteins were run on SDS-PAGE and visualized with anti-FLAG antibodies. The efficiency of the fractionation procedure was monitored with antibodies against HSP90 (which localizes to the cytosolic fraction) and Caveolin-1 (which localizes to the membrane fraction).
QUANTIFICATION AND STATISTICAL ANALYSIS
Quantification of image-based signals was done using ImageJ software (https://imagej.nih.gov/).
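For readers unfamiliar with this kind of analysis, the sketch below reproduces the basic logic of background-corrected densitometry on a synthetic image; it is not the exact ImageJ procedure used here, and the image, regions of interest, and intensities are invented for illustration.

```python
import numpy as np

# Background-corrected densitometry on a synthetic "blot": sum the pixel
# intensities inside a band ROI and subtract the mean of a nearby empty
# region scaled to the same area. The image and ROI coordinates are
# placeholders, not real data.
img = np.full((60, 200), 10.0)        # uniform background
img[20:40, 50:120] += 45.0            # a synthetic band

band_roi = img[20:40, 50:120]
background_roi = img[20:40, 130:200]  # empty lane next to the band
corrected = band_roi.sum() - background_roi.mean() * band_roi.size
print(f"background-corrected band intensity: {corrected:.0f}")
```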
• The unique domain of one partner interacts with the kinase domain of another partner
Figure 1. Index to Constructs
The various constructs designed for use in this study are shown here with the nomenclature used to refer to them. All constructs were created in both HA-and FLAG-tagged versions for easy analysis of dimerization. In addition, some of these constructs were further modified by point mutations and these are indicated in the figure labels. Immunoblots of whole-cell lysates were also performed as indicated.
(I) The Y530F mutant of Src tagged with HA or FLAG was purified from HEK293T cells. The Y530F mutant was used to best demonstrate in vitro dimerization, since Src purified from these cells has significant Y530 phosphorylation, which can promote the closed conformation. Coomassie stains of the purified proteins are shown on the left. The purified proteins were mixed and incubated for 2 hr, followed by addition of anti-HA antibodies or isotype control immunoglobulin G (IgG). The anti-HA immunoprecipitates were immunoblotted as indicated and shown on the right. A portion of the input mixture prior to immunoprecipitation was also immunoblotted as indicated.
(J) Src and SrcG2A constructs were purified from HEK293T cells and analyzed in blue native gels followed by immunoblot using anti-Src antibody.
(K) The GST-tagged S4-UD construct was expressed and purified from HEK293T cells by pull-down on glutathione Sepharose beads. The FLAG-tagged KD construct was expressed and purified from HEK293T cells using glutathione Sepharose beads, and the GST tag was cleaved off. The purified proteins were mixed and incubated overnight at 4°C in mRIPA buffer; the beads were pulled down, eluted in GST elution buffer, separated by SDS-PAGE, and immunoblotted as indicated. Experiments with the various constructs were performed 2-4 times, and representative data are shown here.
(B) The two expression signals and the complementation signal were quantified in individual cells in the WT Src transfection and the S32-KD construct transfection. The expression of the two constructs (from HA and FLAG immunostains) is depicted on the x and y axes, whereas the intensity of the complementation signal is depicted by the diameter of the circles on the graph and has been corrected for the background evident in vector-transfected controls. Complementation of the full-length WT Src is best compared with the S32-KD construct in ranges of similar expression, such as the 2-8 range along the x and y axes. Experiments were done 3 times, and representative data are shown here.
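A minimal sketch of this kind of per-cell plot is shown below, with expression of the two constructs on the axes and the background-corrected complementation signal encoded in marker size. The per-cell values are random placeholders standing in for the ImageJ measurements, and the background constant is an assumption.

```python
import numpy as np
import matplotlib.pyplot as plt

# Per-cell plot in the style described above: HA and FLAG expression on the
# axes, background-corrected complementation signal as marker size. The data
# below are random placeholders, not measurements from this study.
rng = np.random.default_rng(0)
ha_expr = rng.uniform(1, 10, 49)        # HA immunostain intensity per cell
flag_expr = rng.uniform(1, 10, 49)      # FLAG immunostain intensity per cell
raw_signal = rng.uniform(0, 5, 49)      # raw SNAP complementation signal
background = 0.8                        # signal in vector-transfected controls
signal = np.clip(raw_signal - background, 0, None)

plt.scatter(ha_expr, flag_expr, s=40 * signal + 5, alpha=0.6, edgecolor="k")
plt.xlabel("HA-tagged construct expression (a.u.)")
plt.ylabel("FLAG-tagged construct expression (a.u.)")
plt.title("Complementation signal encoded as marker size")
plt.savefig("complementation_scatter.png", dpi=150)
```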
Figure 4. Dimerization of Src Requires Y419 Autophosphorylation in cis with the N-Terminal Region
(A-C) SYF cells were transfected with the indicated constructs. HA-tagged proteins were immunoprecipitated from cell lysates and immunoblotted with the indicated antibodies.
Immunoblots of whole-cell lysates were also performed as indicated.
(D) SYF cells were transfected with HA-tagged and FLAG-tagged full-length wild-type Src and treated with DMSO control or 1 mM dasatinib, and anti-HA immunoprecipitates were immunoblotted as indicated. The dasatinib treatment was started either early or late, or the late treatment was continued for a prolonged period, as described in the text. Immunoblots of whole-cell lysates were also performed as indicated. The double bands seen in some immunoblots are due to a higher-migrating Y338-phosphorylated form of Src.

(D) In vitro kinase reactions were carried out with 10 nM of the indicated purified V5-tagged Src proteins for 10 min and assayed by anti-pY419Src immunoblotting. ATP was omitted in the negative control arms. (E) The same in vitro kinase reactions were performed using different concentrations of protein as indicated. The quantified autophosphorylation results are shown here, and the corresponding immunoblots of these reactions are shown in Figure S6. The data points represent the average of n = 2; error bars represent SEM.
(F) Three separate in vitro kinase reactions were performed using 4 nM Src proteins and each of the three indicated purified recombinant substrates. Kinase reaction linear conditions were previously established and are as follows: 40 nM paxillin in reaction for 20 min, 250 nM enolase in reaction for 20 min, and 70 nM Trask/CDCP1 in reaction for 10 min. Substrate phosphorylation was assayed by anti-pTyr immunoblotting and kinase and substrate proteins immunoblotted with V5 and substrate-specific antibodies. ATP was omitted in the negative control arms. (G) The indicated cell lines were treated with 1 µM DDD85646 for 30 hr and cell lysates immunoblotted as indicated. Anti-pY416Src antibodies cross-react with all members of the Src family.
(H) The same lysates were assayed specifically for Src autophosphorylation by anti-Src immunoprecipitation and immunoblotted as indicated. Experiments with the various constructs were performed 2-3 times, and representative data are shown here.
Multifunctional and Self-Healable Intelligent Hydrogels for Cancer Drug Delivery and Promoting Tissue Regeneration In Vivo
Regenerative medicine seeks to understand how materials fundamentally affect cellular functions in order to improve the retention, restoration, and revitalization of damaged tissues and to advance cancer therapy. As potential candidates in regenerative medicine, hydrogels have attracted much attention over the past two decades due to their ability to mimic the native cell-extracellular matrix (ECM) in cell biology, tissue engineering, and drug screening. In addition, hydrogels with a high capacity for drug loading and a sustained release profile are applicable in drug delivery systems. Recently, self-healing supramolecular hydrogels, as a novel class of biomaterials, are being used in preclinical trials with benefits such as biocompatibility, native tissue mimicry, and injectability via reversible crosslinks. Meanwhile, localized delivery of therapeutic agents is beneficial because it can deliver higher doses of therapeutic agents to the targeted site and can help overcome post-surgical complications, inflammation, and infections. These highly promising materials can help address the limitations of current drug delivery systems and the high clinical demand for customized drug release systems. To this aim, the current review presents the state-of-the-art progress of multifunctional and self-healable hydrogels for a broad range of applications in cancer therapy, tissue engineering, and regenerative medicine.
Introduction
Tissue engineering offers alternative ways to address the current challenges associated with autologous grafts by developing highly porous biomaterials and scaffolds to encapsulate cells that spread and reorganize into tissue-like architectures to replace damaged tissues [1,2]. Among many scaffolding biomaterials, hydrogels are considered a unique group of three-dimensional (3D) polymeric substances due to their biomimetic properties, biocompatibility, porous structure, capability to absorb and retain a high amount of water, and adaptability through interchangeable sol-gel conditions [3,4]. Hydrogels are widely utilized in various healthcare applications such as wound dressings, contact lenses, sensors, and drug delivery systems [5]. The most common water-soluble polymers include poly(vinylpyrrolidone), poly(acrylic acid), poly(vinyl alcohol), poly(ethylene glycol), polyacrylamide, and some polysaccharides. Many hydrogels have been patented for drug delivery, such as SUPPRELIN LA, which is utilized for the treatment of children with central precocious puberty. Other representative hydrogel systems and their applications include PHEMA hydrogel fibers (PHEMA/beta-CD hydrogels) for ocular drug administration [8], HS-TENG prepared by freeze drying (PVA/PDAP/MWCNT) for wearable applications [10], TC-Gel hydrogel nanoparticles (TA@CNCs/PANI) as sensors for electronic skin devices [11], and 3D-printed PDMS/PEDOT:PSS/PAA (polyacrylamide and poly(3,4-ethylenedioxythiophene) polystyrene sulfonate hydrogel) for neural attachment [12].

The application of various injectable hydrogels is, however, limited by the difficulty of controlling gelation time. Slow gelation can give rise to rapid material dispersion and loss of cargo, while overly rapid crosslinking causes clogging of the injection device [13]. Many factors are involved in the communication between hydrogels and stem cells and affect stem cell survival, such as polymer type, stiffness, porosity, degradation, and compatibility [14,15]. The shape of smart hydrogels can change under the effect of an appropriate stimulus because of reversible covalent or physical cross-links [16]. Combining hydrogels that have flexible three-dimensional structures with self-healing properties has produced novel intelligent hydrogels [17]. Recently, intelligent hydrogels with self-healing properties have been investigated, self-healing meaning the capability of the material to recover its initial structure after being damaged [18,19]. This phenomenon resembles the healing process of natural organisms and occurs via the reversibility of dynamic covalent crosslinks, without the need for an external stimulus such as low pH, an enzymatic environment, electric fields, temperature, or UV light [20]. These hydrogels mimic the 3D ECM and can regenerate after minor injuries similar to native tissues. In other words, self-healing capability refers to a gel's ability to recover its original shape via noncovalent interactions, dynamic covalent bonds, and physical bonds (Figure 1) [21]. Supramolecular hydrogels consist of self-assembling peptides and hosts such as cyclodextrins and polysaccharides, which can improve interactions between two different types of polymers and give rise to higher self-healing properties [22]. Self-healing hydrogels have beneficial properties, including injection via needles without clogging, homogeneous encapsulation of payloads, recovery to their initial state, and easy delivery to damaged tissue [13]. Furthermore, intelligent hydrogels have great potential for targeted drug delivery, non-invasive and remote-controlled therapies, regenerative medicine, tissue engineering, and implanting artificial organs.
Dynamic covalent bonds show stable and slow dynamic equilibria, whereas noncovalent interactions show rapid but fragile dynamic equilibria [23]. However, despite their capacity for recovery, the application of self-healing hydrogels is restricted by poor conductivity and lower fracture energies than those of native tissue [23]. One approach to overcome the limitations of intelligent hydrogels is the combination of synthetic and natural polymers with nanomaterials to enhance biocompatibility and physical capability [14].
The local delivery of therapeutic agents and drugs is currently utilized for inflammation prevention, cancer elimination, and the regeneration of injured tissue [24]. Noticeably, local delivery can confine the toxic effect of a drug to the tumor area without cytotoxic effects on the adjacent healthy tissues. Self-healing hydrogels have been suggested as a way to avoid a chronic foreign-body immune response after surgery. Besides, they can release drugs over a long time and can be made stimuli-responsive for on-demand drug release [1]. The structure of intelligent hydrogels can be altered by different types of stimuli from internal (e.g., in-body) or external (i.e., out-of-body) sources. Internal and external sources are classified as biological, physical, and chemical [25].
This manuscript aims to summarize important studies conducted toward developing self-healing hydrogels for various applications in regenerative medicine and cancer drug delivery. We examine supramolecular hydrogels with dynamic bonds that possess mechanical and conductive properties suitable for cell and drug delivery. In this regard, the current main challenges and issues in applying hydrogels as scaffolds for drug or biologic agent delivery to maintain, regenerate, and modify lost or damaged tissues are discussed and summarized.
Tissue Engineering Applications and Cancer Drug Delivery
Several properties of synthetic and natural hydrogels, such as biocompatibility, biodegradability, mechanical properties (relevant to bone and muscle tissue), and electrical properties (relevant to cardiac and nerve tissues), should be taken into account for better integration with native organs [26,27]. Furthermore, scaffolds should trigger cell attachment, proliferation, and differentiation without cytotoxicity and with a minimal immune reaction [28]. Numerous studies have focused on the development of self-healing hydrogels that spontaneously possess electroactivity together with stable mechanical properties [26]. The dynamic nature of reversible crosslinks facilitates recapitulating the viscoelastic nature of the ECM and leads to stronger self-restoring behavior and shear-thinning (a decline in viscosity as shear stress rises), which is essential for injection [27]. Figure 2 summarizes the main elements in tissue engineering based on engineering stem cells, nanoparticles, and novel self-healing polymers to enhance the structural and functional resemblance to the tissues. Recent investigations have tried to modify mesenchymal stem cells (MSCs) by incorporating beneficial genes/drugs to augment the differentiation capacity or induce the production and secretion of growth factors [29][30][31], which is beyond the scope of this article. This section will consider recent developments of self-healing hydrogels focusing on cancer drug delivery, bone, cardiac, neural, and lung tissue engineering.
Cancer Drug Delivery
Cancer is considered the second leading cause of death globally [32,33]. Due to the lack of reliable treatment alternatives, the high systemic doses required by chemotherapy, and drug resistance [26], matrices with sustained drug release features are crucial [34]. Smart self-healing injectable hydrogels provide sustained release of bioactive agents, such as micro- or macromolecular drugs [35]. Natural polymers such as HA, collagen, gelatin, alginate, and chitosan have widely been used as drug delivery vehicles [36]. Hydrogels can be classified based on different parameters, including network size (macrogels, microgels, nanogels), composition (homo- or copolymeric), electrical charge (non-ionic, cationic, anionic, amphoteric, or zwitterionic), and crosslinking (physical or chemical) [37]. Some hydrogels can respond to an environmental stimulus, such as ionic strength, pressure, temperature, pH, or light, and this response is reflected in their structure [37].
Self-healing hydrogels offer benefits compared with conventional hydrogels for drug delivery, such as homogeneous encapsulation of loaded drugs [38], preventing drug diffusion, improving the treatment effect of drugs, decreasing toxicity to normal tissues [33], and reduced viscosity with increasing shear stress together with resistance to the stress-induced formation of cracks [39], which prolongs their lifetime during injection [40]. Here, we report some sensitive hydrogels with self-healing ability used to control cancer drug release (Figure 3). pH-sensitive self-healing polysaccharide-based hydrogels were prepared as doxorubicin (Dox) drug delivery vehicles for hepatocellular carcinoma therapy through a dynamic covalent Schiff-base linkage between amine groups from carboxyethyl chitosan (CEC) and benzaldehyde groups from dibenzaldehyde-terminated poly(ethylene glycol) (PEGDA). Since chitosan is biodegradable, nontoxic, and biocompatible, it is used for extensive applications in clinical fields. Release of Dox from the CEC/PEGDA hydrogel was evaluated at pH 4, 5.5, and 7.4. The rate of drug release at pH 4.0 was the most rapid, with approximately 96% released over four days. In addition, 92 and 89% of Dox were released in PBS at pH 5.5 and 6.8, respectively, after seven days of incubation. On the other hand, 42% of Dox was released in PBS at pH 7.4 after seven days. Furthermore, their results showed that Dox concentration plays a crucial role in both the burst and the cumulative release ratio [35]. In another study, curcumin-loaded liposomes (Cur-Lip) were encapsulated in thiolated chitosan (CSSH), which was fluidic at room temperature and gelled quickly at 37 °C. The CSSH/Cur-Lip gel led to a cumulative release of curcumin of approximately 31% at 12 h in breast cancer treatment [41]. Protein-based hydrogels have also been utilized as prominent scaffolds in various biomedical applications such as delivery, sensing, tissue engineering, and fabricating artificial organs due to their biocompatibility, biodegradability, and viscoelasticity in the body [42]. For example, Upadhyay et al. utilized bovine serum albumin (BSA) as a scaffold to promote gelation by using glutaraldehyde as an external crosslinking agent or by internal crosslinking through a disulfide bridge. The release of Dox from the BSA hydrogel occurred over five days to the extent of almost 37, 26, and 21% in PBS at pH 5.5, 6.8, and 7.4, respectively, which resulted in the death of three cancer cell lines (MCF-7, HeLa, and MDA-MB-231) [43]. Yang et al. used a dynamic network of chitosan derivatives consisting of glycol chitosan crosslinked by benzaldehyde groups to form Schiff bases to steadily deliver a highly concentrated antitumor drug (Taxol) for seven days for the treatment of human hepatocarcinoma tumors [44].
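Cumulative-release profiles such as those summarized above are often compared by fitting a simple empirical release model. The sketch below fits a first-order model, M(t)/M_inf = 1 - exp(-kt), to two illustrative data sets; the numbers are invented stand-ins loosely shaped like a fast release at acidic pH and a slower release at pH 7.4, not values taken from the cited studies.

```python
import numpy as np
from scipy.optimize import curve_fit

# First-order release model: cumulative release approaches a plateau m_inf
# with rate constant k. The time points and percentages below are illustrative
# placeholders, not data from the cited papers.
def first_order(t, m_inf, k):
    return m_inf * (1.0 - np.exp(-k * t))

t_days = np.array([0.5, 1, 2, 3, 5, 7])
release_acidic = np.array([40, 60, 78, 85, 90, 92])    # % released (hypothetical)
release_neutral = np.array([10, 16, 25, 31, 38, 42])   # % released (hypothetical)

for label, data in (("acidic pH", release_acidic), ("pH 7.4", release_neutral)):
    (m_inf, k), _ = curve_fit(first_order, t_days, data, p0=(100.0, 0.5))
    print(f"{label}: plateau ~{m_inf:.0f}% released, rate constant k ~{k:.2f}/day")
```

Comparing the fitted plateau and rate constant across pH values gives a compact way to express the pH sensitivity that these hydrogels are designed to exploit.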
Nowadays, photothermal therapy (PTT) plays a crucial role in treating cancers owing to its low toxicity and high efficiency [45]. The fabrication of self-healing hydrogels is a promising approach to sustaining drug release through hydrophilic-hydrophobic interactions. In other words, some nanomaterials, including gold, carbon nanotubes, graphene, and Fe3O4, can be activated with near-infrared (NIR) light, which can be exploited to produce thermoresponsive self-healing hydrogels [46]. Among them, polydopamine (PDA) is the most efficient because of its distinctive adhesion, biocompatibility, and good dispersion in hydrogels, where it generates heat that changes the morphology of the hydrogel [47]. Wang et al. fabricated a thermoresponsive self-healing hydrogel through dynamic covalent enamine bonds between polyetherimide (PEI) and a poly(2-(dimethylamino)ethyl methacrylate) (PDMAEMA) copolymer. They incorporated PDA (0.2 wt.%) into the hydrogel to control the release of Dox under NIR laser irradiation; upon irradiation, the hydrogel network switched from hydrophilic to hydrophobic. The PDA concentration in the hydrogel is important because of its effect on strength and injectability [48].
Another thermoresponsive self-healing hydrogel was prepared from graphene oxide conjugated to branched polyethyleneimine (BPEI-GO) interacting with a chondroitin sulfate multialdehyde (CSMA) hydrogel [49]. BPEI-GO was dispersed in CSMA to provide sustained drug delivery with a photothermal effect [49]. Graphene oxide (GO) was used in this study because of its mechanical strength, utility in tumor therapy, surface modifiability, good colloidal stability, biocompatibility, and near-infrared (NIR) absorption, which converts absorbed light into heat. The results indicated that more Dox was released from BPEI-GO at pH 6.5 (57.05%) than at pH 7.4 (33.89%) or pH 10.0 (36.73%) within 24 h at 30 wt.% CSMA [49]. Furthermore, codelivery offers several benefits through the synergistic effects of different agents (drug/drug or drug/gene) while reducing side effects [50]. One limitation of hydrogel-based codelivery is maintaining the sequential and sustained release of multiple drugs [51]. Yavvari et al. used a catechol-modified chitosan hydrogel (CAT-Gel), injectable and self-healing, assembled via catechol-Fe(III) coordination, for the local delivery of doxorubicin hydrochloride (Dox) and hydrophobic docetaxel (DTX) anticancer drugs in murine lung and breast cancer models. Their results demonstrated that Dox was released from the hydrogel faster, over 20 days, with an antitumor effect, whereas DTX was released slowly because of its interactions with the hydrogel network, as shown in the rheological studies [51].
Notably, nanogels have advantages for drug delivery, including easy modification with targeting ligands such as aptamers, prolonged blood circulation, and reduced drug dosage, enabling specific recognition of target cells. For example, nucleoside analogues such as floxuridine (F), a cytotoxic drug, have been loaded into DNA and RNA spherical nanogels via solid-phase synthesis or enzyme-mediated transcription [52]. Zhao et al. proposed an ATP-responsive DNA nanogel for the delivery of DOX to cancer cells. This self-healing core-shell system is composed of DNA chains (DNA1 and DNA2) rich in guanine (G) and cytosine (C) bases, bearing a carboxyl group at the 5' terminus of both single-stranded DNA chains that forms amide bonds with biocompatible carboxymethyl chitosan (CMCS). CMCS-DNA1 and CMCS-DNA2 can then be hybridized with an ATP aptamer acting as the crosslinking agent to form core-shell nanogels, NGs@DOX. The results revealed the effect of ATP on the release of DOX from NGs@DOX: the fluorescence intensity of released DOX was markedly enhanced at a higher ATP concentration of 0.3 mM. In addition, pH (6.5 versus 7.4) plays an essential role in DOX release from NGs@DOX; the fluorescence intensity of DOX released in PBS at pH 6.5 was higher than that at pH 7.4 under the same ATP trigger. Temperature is another important factor in DOX release from NGs@DOX: although there was no significant increase in DOX release in A549 tumor cells at 4 °C, clear fluorescence intensity was observed at 37 °C. ATP was inhibited with iodoacetic acid (IAA) to confirm its role in DOX release from the nanogel. Uptake of NGs@DOX by A549 cells at 37 °C was demonstrated after incubation for 1, 2, 3, and 4 h, and the fluorescence intensity of NGs@DOX-treated A549 cells was slightly higher than that of free DOX after 2 h of incubation. Notably, cell viability exceeded 80% in the group treated with NGs without DOX, indicating the good biocompatibility of the carriers. DOX-loaded nanogels produced lower viabilities than free DOX because of the better cellular uptake of NGs@DOX and the ATP-responsive release of DOX in cancer cells [53].
Cancer immunotherapy has progressed rapidly because it elicits targeted and durable antitumor responses using the host's own immune system. Leach et al. proposed a peptide hydrogel for the delivery of synthetic cyclic dinucleotides (CDNs), which stimulate strong antitumor responses in preclinical models. A self-assembling peptide nanofiber hydrogel with the sequence K2(SL)6K2 (MDP) showed an eight-fold slower CDN release rate than a standard collagen hydrogel, thereby reducing infiltration into normal tissue in a local delivery model of head and neck cancer tumors in wild-type C57BL/6 mice with a single injection of hydrogel [54]. Furthermore, gel-based systems can promote the retention of immunotherapeutics such as antibodies within a tumor. A sprayable immunotherapeutic fibrin gel composed of fibrinogen and thrombin with CaCO3 nanoparticles was used to deliver anti-CD47 antibody (aCD47) and control its release in the acidic tumor microenvironment over six days in a mouse model; blocking the interaction of CD47 on cancer cells enhanced their phagocytosis by macrophages, dendritic cells, and neutrophils [55]. Thermosensitive hydrogels based on a PEG and poly(γ-ethyl-L-glutamate) formulation have been evaluated for synergistic antitumor efficacy, regulating immune function and inducing cell-cycle arrest, through local codelivery of a cytokine (IL-15) and cisplatin (CDDP) in C57BL/6 mice bearing melanoma. This hydrogel releases its therapeutic agents over several weeks after subcutaneous injection; after 2 days of incubation, approximately 66% of the drug (CDDP) was released from the hydrogel, compared with 37% of IL-15 [56]. Another tumor-associated stimulus that can alter hydrogel structure is reactive oxygen species (ROS). Gu et al. used poly(vinyl alcohol) (PVA) with a ROS-labile linker, N1-(4-boronobenzyl)-N3-(4-boronophenyl)-N1,N1,N3,N3-tetramethylpropane-1,3-diaminium (TSPBA), to deliver gemcitabine (GEM) and enhanced antitumor responses through local release of an anti-PD-L1 blocking antibody (aPDL1) in B16F10 melanoma and 4T1 breast tumors [57]. Table 2 summarizes the in vitro and in vivo studies of self-healing hydrogels for cancer drug delivery.
Bone
Bone extracellular matrix (ECM) mainly comprises 60-70% inorganic material (primarily hydroxyapatite, HAp), 20-40% organic constituents (primarily type I collagen), 5-10% water, and about 3% lipids and proteins such as growth factors [25]. Conventional bone treatment methods, including autografts, allografts, and xenografts, are limited by the potential for infection, the risk of donor-site morbidity, high nonunion rates with host tissue, and adverse immune responses [58]. Several factors are involved in bone tissue engineering (BTE), including an osteogenic scaffold, osteogenic stimulating factors, and osteogenic cells. Hydrogels have been widely used in BTE because of their capability to mimic bone ECM and to effectively deliver growth factors, mRNAs, miRNAs, drugs, and nanoparticles [59]. Thus, self-healing hydrogel scaffolds have emerged as a promising alternative to auto- and allografts; they are biocompatible, nontoxic, and capable of improving cell differentiation and new tissue formation by facilitating the diffusion of nutrients [14,60].
Ideal hydrogels for bone regeneration need to be osteoconductive, osteoinductive, and osteocompatible to enhance bone regeneration, and nonimmunogenic to avoid causing inflammation [61]. These features can be pursued through the choice of the hydrogel polymer backbone, the crosslinking chemistry, and various functionalizations. Meanwhile, mechanical strength is one of the vital issues for bone hydrogel scaffolds, which must withstand the load-bearing conditions of native bone while expediting new tissue ingrowth. A double-network hydrogel was recently used to develop robust hydrogels with high self-healing capacity, consisting of a strong, rigid polymeric network and catechol groups providing reversible and irreversible crosslinks [14]. Notably, the incorporation of nanoparticles, bioglass, and ceramics can optimize the physical and chemical properties of hydrogels as well as increase vascularization. For example, biocomposites containing β-TCP with Mn2+, Zn2+, and Sr2+ ions and silk fiber polymer trigger strong osteogenesis and cell proliferation, and also show the capability of promoting immunomodulation [28].
Other requirements for bone tissue engineering are a suitable pore size and interconnected porosity, which can be achieved by varying the concentration and type of polymers, the crosslinkers, and the fabrication method [62]. These factors help improve the controlled release of encapsulated drugs and the exchange of oxygen and nutrients in hydrogels [63]. Hence, several studies have investigated injectable and 3D-printed natural and synthetic hydrogels with a high potential for self-healing [64]. Since crosslinking plays an important role in increasing the mechanical properties and stability of hydrogels, in this section we discuss the polymer sources and crosslinking technologies used to construct BTE hydrogels, together with their advantages and limitations for bone repair.
Injectable Hydrogels for Bone Tissue Engineering
Injectability and self-healing properties play critical roles in providing less invasive therapies for patients. Common approaches to achieving injectable hydrogels include dynamic covalent bonds, electrostatic interactions, thermal gelation, imine bonds, Diels-Alder chemistry, and host-guest interactions. The reversible nature of the crosslinkers contributes to the shear-thinning property of the hydrogel [65]. Previous reports have discussed the advantages and drawbacks of natural and synthetic polymers [59]. Commonly utilized self-healing hydrogels are based on prepolymers such as protein-DNA complexes, HA [66,67], PEG [68], elastin-like polypeptides (ELP) [69], chondroitin sulfate, and silk fibroin [70]. Backbone polymers such as polypeptides, polyacrylamide, and carbon nanotubes can form novel hydrogels via linker DNA sequences that confer self-healing or self-repairing properties. The advantage of using "X"-shaped DNA as a crosslinker is its responsiveness to temperature, UV, enzymes, pH, and light. For example, Li's group synthesized polypeptide-DNA conjugates as crosslinkers for the polymer poly(L-glutamic acid-co-γ-propargyl-L-glutamate). A total of 5-6 ssDNAs were conjugated to each polypeptide backbone, and the sticky ends of the DNA could be labeled with 5(6)-carboxyfluorescein (5(6)-FAM), 5(6)-carboxy-X-rhodamine (5(6)-ROX), or both, visualized as green, red, and orange, respectively (Figure 4a) [71]. Chain-exchange reactions in the DNA double helix were found to be the reason for self-healing between differently labeled pieces (Figure 4b). Diffusion of the DNA chains removed the interfaces between the parts, and by the next day the three colored hydrogels had merged completely and adopted the shape of the container at 4 °C (Figure 4c). Figure 4d shows that the mechanical strength of the merged material recovered to 80% of its original value, and that a hydrogel (4 wt.%) cut into pieces healed and regained its original mechanical strength within 5 min [71]. Notably, recent studies have focused on DNA-based nanocomposite hydrogels because dynamic covalent bonds formed via imine-based reactions enable sustained drug release. In this regard, Basu et al. prepared nanocomposite DNA-based hydrogels through dynamic, reversible imine covalent bonds and the incorporation of silicate-based nanoparticles (nSi), which improve the shear strength of the hydrogel by establishing electrostatic interactions with the phosphate groups of the DNA network. This optimized DNA hydrogel showed sustained release of simvastatin, an osteogenic drug, for more than a week [72].
Ureido-pyrimidinone (UPy) hydrogels, with their reversible and dynamic quadruple hydrogen bonding, offer both self-healing and shear-thinning properties for the bone-cartilage (osteochondral) interface [39]. UPy hydrogels have been developed for drug delivery systems without requiring the incorporation of hydrophobic spacers for gelation. For instance, Hou et al. prepared a self-integrating, injectable supramolecular polysaccharide hydrogel from dextran (DEX) with multiple pendant UPy crosslinking units along the DEX backbone, loaded with bone morphogenetic protein 2 (BMP-2), to support the growth of both bone and cartilage tissue in an in vivo subcutaneous implantation model in nude mice (Figure 5). Three constructs with different cell compositions were assessed: chondrocytes alone, BMSCs/BMP-2, and a self-integrated implant with the two cell types encapsulated on the two sides of the gel disk. Markers of osteogenesis and chondrogenesis were evaluated by Alizarin red staining for bone and Alcian blue staining for cartilage, visualizing the formation of bone and cartilage within the single cell-type groups, respectively (Figure 5A,B). Self-integrated osteochondral implants were also formed (Figure 5C), shown at higher magnification in Figure 5D. The rheological behavior of the hydrogels depends on the DEX and UPy concentrations: when the DEX concentration was increased from 10 to 12.5% (w/w), the storage modulus rose from 170 to 700 Pa, enhancing the mechanical performance of the hydrogel [39]. Peptide amphiphile (PA) hydrogels are self-assembling hydrogels with inherent bioactivity and biocompatibility for 3D cell culture, drug delivery, and applications in hard tissues [73-75]. The PA headgroup is hydrophilic and can be attached covalently to longer hydrophobic segments. PAs have been widely utilized to regenerate dentin by delivering both dental stem cells and growth factors [67,68] through matrix metalloproteinase (MMP)-cleavable linkers, which improve the viability, migration, and spreading of human mesenchymal stem cells as well as angiogenesis. For example, the self-assembling PuraMatrix™ hydrogel scaffold is composed of a 16-amino-acid peptide sequence called RADA16 with noncovalent β-sheet structures [76]. The injectable RADA16 hydrogel successfully slowed BMP-2 release in vitro, and the incorporation of cell adhesion motifs into RADA16 hydrogels enhanced osteoblast attachment and migration into the scaffold [76].
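As a brief illustration of how gelation is typically judged from oscillatory rheology in systems like the DEX-UPy hydrogel, the sketch below computes the loss factor tan(delta) = G''/G' for two compositions. The storage moduli follow the values quoted above, but the loss moduli are assumed purely for illustration, so the output is a sketch rather than data from the cited work.

```python
# Minimal sketch: classify gel-like vs sol-like behavior from oscillatory rheology.
# G' values follow the text; G'' values are assumed for illustration only.
storage_modulus = {10.0: 170.0, 12.5: 700.0}   # G' in Pa at the given DEX content (% w/w)
loss_modulus = {10.0: 60.0, 12.5: 110.0}       # G'' in Pa (assumed)

for dex, g_prime in storage_modulus.items():
    g_double_prime = loss_modulus[dex]
    tan_delta = g_double_prime / g_prime
    state = "gel-like (elastically dominated)" if tan_delta < 1.0 else "sol-like"
    print(f"DEX {dex}% w/w: G' = {g_prime:.0f} Pa, G'' = {g_double_prime:.0f} Pa, "
          f"tan(delta) = {tan_delta:.2f} -> {state}")
```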
Gacanin et al. demonstrated a multifunctional protein-DNA hybrid hydrogel consisting of PEGylated human serum albumin (HSA-PEG) conjugated to rationally designed ssDNA sequence linkers to deliver the recombinant Rho-inhibiting C3 toxin (C2IN-C3lim-G205C) [77]. C2IN-C3lim-G205C can selectively reduce osteoclast migration in vitro without affecting the viability, activity, and proliferation of bone-forming osteoblasts, and is therefore considered an inhibitor of osteoclast formation for local delivery in osteoporosis. This hybrid hydrogel has several advantages, such as self-healing with favorable injectability, rapid gelation under physiological conditions without any chemical crosslinker or toxic groups, and a high capacity for loading and controlled release through a DNA-cleaving enzyme (DNase). The results showed a significant decrease (≈96%) in the expression of essential osteoclast markers and in the resorption activity of osteoclast cells [77]. Electrostatic attraction is one of the most common mechanisms used to form self-healing hydrogels for bone tissue engineering. Silk fibroin (SF) offers several benefits for bone-related applications, such as strong mechanical properties, biocompatibility with FDA approval, tunable biodegradability, and minimal or no immunogenicity [14]. For example, Shi et al. studied a dynamic SF-based hyaluronic acid (HA) hydrogel modified with bisphosphonate (BP) groups able to bind reversibly to Ca2+ ions coated onto silk microfibers (mSF) [78]. Although HA is a critical ECM component in angiogenesis, wound repair, matrix organization, morphogenesis, and cell signaling, one of its serious drawbacks is its high degradation rate in vivo; conjugation with synthetic materials can enhance the strength, toughness, and durability of HA. Silk-based hydrogels themselves exhibit low mechanical properties, which can be improved by incorporating a UV-crosslinkable compound [78].
The HA-BP-acrylamide (Am-HA-BP) prepolymer provided greater stability and an almost 15-fold increase in storage modulus. The Am-HA-BP hydrogel showed osteogenic properties in vitro and a considerable increase in bone regeneration; the rate of bone formation increased to 220% of that in untreated groups in vivo. Silk fibroin (SF) was biomineralized by immersion in calcium phosphate, and in the next step the natural polymer HA, modified with bisphosphonate (BP) groups as binders to chelate Ca2+ ions, produced a fully reversible and dynamic double network (Figure 6A). The autonomous self-healing property was confirmed by macroscopic observation (Figure 6B). Human mesenchymal stem cells (hMSCs) were seeded on the surfaces of both Am-HA-BP+mSF and Am-HA-BP·CaP@mSF hydrogels for 14 days, and cell viability was determined by fluorescence staining with phalloidin-tetramethylrhodamine (TRITC). The results revealed that mineralized mSF increased cell adhesion, and the self-healing silk hydrogel formed in combination with Am-HA-BP could fill irregularly shaped bone cavities without the risk of liquid material leakage (Figure 6C). On the Am-HA-BP·CaP@mSF hydrogel, the expression of osteocalcin (OC), a late osteogenesis marker, and Col I, an early marker, was decreased, whereas the expression of VEGF, which is strongly associated with vascularization during bone repair, was increased in comparison with the Am-HA-BP+mSF hydrogel (Figure 6C). Am-HA-BP·CaP@mSF hydrogels were implanted into rat cranial critical-size defects (diameter: 8 mm) to confirm bone regeneration in vivo at four and eight weeks. New bone was observed at the implant-tissue interface, whereas no noticeable mineralized tissue formed in the untreated group (Figure 6D). In addition, HA-BP and acrylated BP (Ac-BP) with magnesium chloride (MgCl2) were reported to form electrostatic interactions that improve the shear-thinning property, compressibility, and stress-relaxation profile of this Am-HA-BP hydrogel; this nanocomposite hydrogel supported the differentiation of hMSCs into osteoblasts [78].
Numerous bioactive molecules, such as genes, growth factors, and small-molecule drugs, which are crucial in molecular signaling, can be delivered by self-healing hydrogels. Zhang et al. proposed a nanocomposite hydrogel based on methacrylated HA conjugated with thiol-glycolated pamidronate (thiol-Pam) and self-assembled Pam-Mg nanoparticles (NPs). Magnesium ions (Mg2+), a critical cofactor for the enzymatic activity of alkaline phosphatase (ALP), could thus be released steadily and promote the differentiation of hMSCs into osteoblasts. The dynamic coordination between Mg2+ and pamidronate (Pam) provided desirable injectability and efficient stress relaxation. Moreover, loading the synthetic glucocorticoid dexamethasone (Dex) as the pro-drug Dex phosphate (DexP) gave effective sustained drug release from the hydrogel, which improved bone regeneration in a rabbit model [65]. Table 3 summarizes the work conducted on bone regeneration using self-healing hydrogels. Table 3. Examples of self-healing hydrogels for bone regeneration.
Self-healing mechanism | Materials | Application(s) | Ref.
Imine bond | Nanocomposite oxidized alginate (OA) with DNA nucleotides | Inducing osteogenic differentiation and migration of human adipose-derived stem cells | [72]
Hydrogen bonds | Dextran polysaccharide (DEX) with multiple pendant UPy groups | Bone regeneration in a nude mouse model | [39]
Electrostatic interactions | Polypeptide backbone derived from human serum albumin | Treatment of osteoporosis | [75]
Electrostatic interactions | SF-HA | Bone regeneration as a carrier for cell and drug delivery in a rat model | [78]
Thiol-ene | HA-Pam-Mg | Bone regeneration with Dex drug delivery in a rabbit model | [65]
Cardiac Tissue Engineering
Myocardial infarction (MI) has one of the highest mortality rates worldwide. Although natural hydrogels, including collagen, chitosan, fibrin, alginate, HA, and Matrigel, are used for myocardial injection therapies, they show batch-to-batch variation [79]. Several synthetic hydrogels and self-assembling peptides can deliver cells and drugs by injection, forming a gel in situ without bacterial contamination, to restore the damaged myocardium. Among the biomaterials used for cell delivery, injectable hydrogels with high conductivity have recently been studied extensively as potential cell and drug delivery carriers. Self-healable hydrogels can mimic the myocardium with minimally invasive delivery, which is helpful for regenerating the damaged myocardium [14].
Injectable Hydrogels for Cardiac Tissue Engineering
Synthetic hydrogels are typically formed via chemical or physical crosslinking, self-assembly, thermal switching, photo-induced polymerization, or noncovalent interactions; the dynamic nature of the supramolecular assemblies holding these networks together allows sol-gel switching under mild conditions, e.g., with UPy units [14]. Bastings and coworkers noted that catheter-based drug delivery approaches are considerably less invasive than surgical implantation [80].
Catheter delivery of myocardially active growth factors by UPy-modified poly(ethylene glycol) (PEG) chains has been proposed for the treatment of MI. The UPy hydrogels showed self-healing within minutes, and the storage modulus of the 10 wt.% hydrogel of UPy conjugated via alkyl-urea spacers to 10 kDa PEG matched the mechanical stiffness of native cardiac tissue. Furthermore, this hydrogel, as a carrier of growth factors, considerably reduced the size of the infarct scar in a porcine model and improved the activation of resident regenerative cells to promote rapid cardiac tissue regeneration. The results showed that the UPy-alkyl-urea 10 kDa hydrogel released the growth factors over seven days, with an initial 41% release followed by sustained release up to 97% at the end [80]. Notably, double-network hydrogels containing both covalent and noncovalent interactions can improve mechanical strength. For instance, guest-host interactions between CD and ferrocene (Fc) respond to electrical stimulation, and a photo-switchable crosslinker of CD and trans-azobenzene switches from the gel phase to the sol phase upon light exposure when delivered into the infarcted region of cardiac tissue in a sheep model [13,81]. Furthermore, an injectable hyaluronic acid (HA) hydrogel with guest-host interactions between CD as host and adamantane (AD) as guest was used for the local and sustained delivery of miR-302 mimics to the infarcted mouse heart to promote mammalian cardiac regeneration over two weeks [82]. For hydrogels self-assembled from guest (adamantane, Ad) and host (β-cyclodextrin, CD) moieties, a secondary covalent crosslinking step can be applied in situ after injection, through Michael-acceptor reactivity under catalytic conditions, to stabilize the network via pendant groups. This mechanism enables shear-thinning delivery with high retention in an MI pig model [83].
Recently, nanomaterial-based hydrogels have attracted considerable biomedical attention as a way to enhance the thermal and electrical conductivity of hydrogels [84]. For instance, the incorporation of carbon nanotubes (CNT) into a photocrosslinkable gelatin methacryloyl (GelMA) hydrogel led to a stable beating rate and F-actin fibers three times more abundant and more homogeneous than in pristine GelMA. The study by Shin et al. revealed that the CNT-GelMA hydrogel integrates well and improves cell-to-cell communication among neonatal rat cardiomyocytes seeded on the hydrogel [84]. In addition, a reversible Schiff-base reaction between aldehyde-oxidized alginate (ALG-CHO) and the amine groups of gelatin enables controlled H2S gas release aimed at the complex symptoms of MI, increasing vascularization and providing mechanical performance adapted to the dynamic cardiovascular environment while integrating tightly with the myocardial tissue [85].
Wu et al. proposed the combination of a conductive cardiac patch and an injectable self-healing hydrogel, in which the patch consisted of gelatin-dopamine (GelDA) conjugated via ionic coordination to dopamine-modified polypyrrole (DA-PPy). A Schiff-base reaction between oxidized sodium hyaluronic acid (HA-CHO) and hydrazide-modified hyaluronic acid (HHA) improves mechanical support, and this hydrogel enhances adhesion and promotes angiogenesis after MI. The combined internal (hydrogel injection) and external (patch) therapy showed improved storage modulus, conductivity, and gelation time, with an outstanding effect in enhancing cardiac function after MI compared with single-hydrogel systems (Figure 7) [86]. Dong et al. developed a self-healing, electroactive, and biocompatible hydrogel based on a chitosan-graft-aniline tetramer (CS-AT) polymer [79]. The dynamic covalent Schiff-base linkage between chitosan amine groups and PEG-DA benzaldehyde groups makes it an excellent vehicle for cell therapy within cardiac tissues. The CS-AT hydrogel showed conductivity close to that of native cardiac tissue, and encapsulated myoblasts (C2C12) and cardiac cells (H9c2) showed a linear-like release profile controlled by the density and type of cells, with a degradation profile spanning 45 days without inducing a considerable inflammatory reaction. The proliferation of C2C12 cells was boosted (Figure 8A-C), and, as shown in Figure 8D, the number of cells increased significantly on days 2 and 3 compared with day 1. After injection of hydrogel containing C2C12 cells (1 × 10^6 cells mL−1) through 22-gauge needles (Figure 8E), the cells maintained good viability and morphology under confocal microscopy (Figure 8F-H). Since repairing impaired cardiac tissue with more than one cell type is desirable, ADMSCs as stem cells (red) and C2C12 cells (green) could be injected together and fused (Figure 8I-L) [79]. Finally, self-healing hydrogels formed through host-guest interactions between adamantane- and β-cyclodextrin (CD)-modified HA were studied for encapsulating endothelial progenitor cells to enhance neovascularization in an ischemic rat model [83]. In this study, the stiffness and retention capacity of the hydrogel increased compared with the untreated hydrogel [83]. Furthermore, HA hydrogels modified with CD were developed for the delivery of siRNA against MMP2 (siMMP2) for the treatment of infarcted myocardium. This structure contains HA modified with CD for host-guest interaction with cholesterol-conjugated siRNA, so that the siRNA is retained for more than two weeks. Moreover, hydrazone bonds are formed between aldehyde groups of aldehyde-modified HA (ALD-HA) and hydrazide groups of the hydrazide-modified HA macromer (HA-MMP-HYD), and these erode in response to MMP activity to sustain the release of siRNA [87].
Recently, the stem cell-derived secretome has been studied as a potential alternative to conventional stem cell therapy for myocardial regeneration. For instance, a nanocomposite hydrogel containing Laponite, a disk-shaped nanoclay nanoparticle, can adjust the release from a secretome-loaded hydrogel via electrostatic interactions with the hydrogel [88]. In other words, a highly crosslinked gel with Laponite was able to release the secretome produced by human adipose-derived stem cells (hASCs) over a long time to improve angiogenesis and cardioprotection in vitro and in vivo [88]. Table 4 shows some of the studies on heart tissue regeneration using self-healing hydrogels. Table 4. Examples of self-healing hydrogels for heart regeneration.
Injectable Hydrogels Used for Neural System Applications
The mammalian central nervous system (CNS) has a weak capability to regenerate neurons or axons after damage [89]. Although stem cell therapy has shown benefits for repairing CNS damage, the loss of most cells at the target site limits stem cell transplantation. Hydrogels improve cell survival by carrying stem cells, particularly self-healing hydrogels with high stability upon in situ gelation. Electroconductive self-healing hydrogels can regulate the migration, differentiation, metabolism, adhesion, and proliferation of electrically excitable cells, which is critical for neural tissue engineering [90]. For example, a nanocomposite self-healing hydrogel made from N-carboxyethyl chitosan (CEC) with polypyrrole (DCP) nanoparticles (~40 nm) and a unique aldehyde-terminated difunctional polyurethane (DFPU) as linker showed electroconductive properties in vitro and in vivo; this hydrogel stimulated the proliferation, attachment, and differentiation of neural stem cells (NSCs) [90]. Recently, semi-interpenetrating polymer network (SIPN) hydrogels, consisting of linear, branched, and crosslinked polymeric networks, have been studied for cell encapsulation [91]. For example, SIPN hydrogels in which HA was incorporated into a chitosan-based self-healing hydrogel, with appropriate stiffness and good injectability, enhanced axonal growth in a zebrafish traumatic brain injury model [91]. In contrast, although self-assembling amyloid nanofibrils have been used as biomaterials for encapsulating murine neural stem cells and neural-like PC12 cells, the weak mechanical properties of the resulting hydrogel limit its application inside the body [91,92]. In another approach, self-healing hydrogel composites made from chitosan-cellulose nanofibers (CS-CNF) dramatically improved strain sensitivity via the interaction of the cellulose nanofibers (CNFs) with the reversible Schiff-base crosslinks in the CS self-healing hydrogel. Moreover, neural stem cells encapsulated in this hydrogel showed significantly better oxygen metabolism and neural differentiation. The results demonstrated that cell viability and cell metabolism in samples with a low concentration of CNF (CS-CNF1/2) were higher than in samples with high CNF content (CS-CNF3 and CS-CNF4) and in the pristine CS hydrogel (Figure 9A) [93]. The bioenergetics of NSCs in the hydrogels was measured by the cells' oxygen consumption rate (OCR). Figure 9B shows that the OCR in CS-CNF2 increased compared with cells in the pristine CS hydrogel, meaning that cells in the CS-CNF2 hydrogel produced more ATP even after the addition of rotenone. Both mitochondrial function and non-mitochondrial respiration were considerably improved in CS-CNF2, whereas they were diminished in CS-CNF4, compared with the pristine CS hydrogel (Figure 9C-E) [93]. Some injectable hydrogels with both biological and mechanical functionality have been fabricated from collagen type I [94]. A collagen hydrogel crosslinked with 4S-StarPEG (PEG ether tetrasuccinimidyl glutarate) was used as a carrier for genetically modified rat bone marrow MSCs overexpressing glial cell line-derived neurotrophic factor (GDNF), providing a well-tolerated cell delivery platform [94,95]. Since MSCs delivered to the CNS typically show low survival after transplantation, this hydrogel improves cell survival and graft integration; however, MSCs cannot survive properly in the scaffold, which also shrinks in volume after gelation in vitro and in vivo over several days [94].
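To clarify how the mitochondrial versus non-mitochondrial respiration of encapsulated NSCs is usually derived from OCR measurements of the kind discussed above, the sketch below splits a baseline OCR into the two components using the residual OCR after rotenone. All numbers are assumed placeholders, not values from the cited study.

```python
# Minimal sketch: partition oxygen consumption rate (OCR) into mitochondrial and
# non-mitochondrial respiration. The readings below are assumed for illustration.
baseline_ocr = {"CS": 40.0, "CS-CNF2": 65.0, "CS-CNF4": 30.0}        # pmol O2/min (assumed)
post_rotenone_ocr = {"CS": 12.0, "CS-CNF2": 15.0, "CS-CNF4": 10.0}    # pmol O2/min (assumed)

for gel, total in baseline_ocr.items():
    non_mito = post_rotenone_ocr[gel]   # respiration insensitive to rotenone
    mito = total - non_mito             # rotenone-sensitive, mitochondrial part
    print(f"{gel}: mitochondrial OCR ~{mito:.0f}, non-mitochondrial OCR ~{non_mito:.0f} pmol O2/min")
```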
Notably, a self-healing hydrogel composed of dynamic imine bonds between chitosan and oxidized sodium alginate was used to carry NSCs into the CNS. The hydrogel stiffness could be delicately tuned in the range of 100 to 1000 Pa to precisely fine-tune the differentiation and growth of NSCs in a mouse model [96]. This system produced a homogeneous cell distribution when the NSC-loaded hydrogels were injected into mouse brains. Because electrical properties are vital for engineering neural tissues, some studies have focused on the mechanical and electrical features of the microenvironment that support the development of neural cells. In this approach, Hou et al. formulated a graphene-based self-healable electroconductive hydrogel to support the growth of neural-like PC12 cells under nonphysiological conditions [97]. Thus, self-healing and electroconductive hydrogels for neural tissue applications remain under active investigation [97]. The percentage of transplanted cells that survive is limited by acute inflammation after injury or transplantation. Therefore, an intelligent double-layer alginate hydrogel system was designed in which an inner layer of alginate modified with MMP and RGD polypeptides improves neural stem cell (NSC) adhesion, and an outer layer contains Cripto-1 antibodies to facilitate the differentiation of NSCs into dopaminergic neurons for the treatment of Parkinson's disease [98]. In summary, the studies covered here reveal the capacity of injectable self-healable hydrogels to modulate the growth, survival, and differentiation of neural cells for stem cell therapy of the degenerated CNS, and underline the importance of exploiting the potential of biomaterials for biomedical applications. Table 5 contains a summary of self-healing hydrogel applications in neural tissue engineering. Table 5. A summary of self-healing hydrogels used in vitro and in vivo for neural tissue engineering.
Self-healing mechanism | Materials | Application(s) | Ref.
Imine bond | CEC-DFPU/DCP | Neural repair in a zebrafish brain injury model | [90]
Imine bond | SIPN | Neural repair in a zebrafish brain model | [91]
Imine bond | CS-CNF | Neural stem cell encapsulation and differentiation | [93]
Imine bond | Collagen-4S-StarPEG | Stem cell therapy for the degenerated CNS | [95]
Ionic interaction | Calcium alginate gel beads | Neural stem cell encapsulation and differentiation in a mouse model | [98]
Injectable Hydrogel Used for Lung-Related Applications
The lung is a structurally and functionally complex organ [98]. Lung disease is among the leading global causes of death [99] and a health concern that severely affects patients' quality of life [100]. Treatments are limited for lung diseases such as cystic fibrosis (CF), chronic obstructive pulmonary disease (COPD), and idiopathic pulmonary fibrosis (IPF), and lung transplantation is often the only option for end-stage patients [101]. Acute respiratory distress syndrome (ARDS) is a major cause of respiratory failure arising from increased lung endothelial and epithelial permeability, leading to hypoxemia, pulmonary oedema, and loss of lung compliance [102,103]. ARDS occurs in more than 10% of intensive care unit (ICU) patients worldwide, many of whom receive mechanical ventilation in the ICU [104]. ARDS can also develop after a pulmonary infection or trauma that generates pro-inflammatory cytokines, which can promote acute organ dysfunction [105].
Although ARDS is a serious illness with a high incidence, no direct therapies have been developed for it [106]. Currently, the most commonly used strategies are protective mechanical ventilation and fluid restriction, which improve oxygen perfusion in the lungs [107]. Meanwhile, pharmacologic treatments such as surfactants, glucocorticoids, antibiotic therapy, antioxidants, and a wide range of other anti-inflammatory treatments have been shown to be ineffective [108].
One of the most common problems is the shortage of lung tissue for transplantation. In addition, transplant recipients must take immunosuppressive drugs for the rest of their lives, which can itself cause problems. Recently, a new approach, cell-based lung tissue engineering, has been proposed. The main components of tissue engineering are (I) a suitable biological or artificial 3D scaffold, (II) a source of cells or stem cells, (III) the growth factors necessary for cell differentiation and proliferation, and (IV) a bioreactor that supports the biologically active 3D construct [100,101]. Hydrogels are among the components used for scaffolding (Figure 10) [100]. The use of biological scaffolds retains much of the complexity of the extracellular matrix (ECM) and its biological activity. Hydrogels are well suited for lung tissue cell scaffolds because their structure and mechanical properties resemble the ECM of soft tissue. Collagen and hyaluronic acid are among the compounds used to make hydrogels suitable for soft tissue and are widely used because of their biocompatibility and biodegradability [109].
A superior generation of hydrogels is the self-assembling type, which is sensitive to temperature, pH, and salt concentration and forms at physiological body temperature [110]. Collagen I hydrogel matrices and fibrinogen-fibronectin-vitronectin hydrogels (FFVH) are among the lung tissue scaffolds in use.
FFVH scaffolds have been suggested for end-stage lung disease, and laboratory results with this type of hydrogel scaffold indicate proper adhesion and distribution of cells in the extracellular matrix. Collagen I hydrogel matrix scaffolds differentiate mesenchymal stem cells into epithelial and endothelial lineages more efficiently and also provide better cell preservation. ECM-compatible natural hydrogels such as collagen I and Matrigel, used as 3D matrices, are suitable for differentiating lung cells and supporting lung tissue morphogenesis owing to their mechanical and biochemical properties [100].
The use of ECM hydrogels as scaffolds for lung cells has shown favorable results. In a study by Link et al., using an appropriate concentration of genipin made the resulting hydrogel more similar to lung tissue [110]. Dunphy et al. also studied collagen-elastin constructs designed to match the properties of the alveolar wall; the study showed how elastin affects the stiffness of collagen hydrogels, and the inclusion of pulmonary fibroblasts in these constructs produced results similar to an alveolar wall [99]. The study by Pouliot et al. confirmed that hydrogels obtained from decellularized lungs are very promising for in vitro extracellular matrix modeling of the lung and for the development of clinical therapies [101].
The use of tissue-specific decellularized ECM (dECM) scaffolds for tissue engineering, both as modern clinical therapies and for improving in vitro ECM study models, is increasing. Most ECM hydrogels are produced using an overall strategy in which the decellularized tissue is lyophilized, ground, and then digested enzymatically, most often with pepsin, in an acidic environment for 24-72 h to achieve adequate solubility. Recently, lung ECM hydrogels have been developed as a model for studying ECM obtained from diseased and healthy lungs, as a protective treatment for radiation-induced lung damage in vivo, and as bioinks for three-dimensional additive manufacturing. This highlights the potential impact of lung ECM hydrogels in several areas. ECM hydrogels are also readily remodeled by immune cells, which is one of their advantages [101].
The combination of the natural biological materials alginate and gelatin with a carbodiimide crosslinker is being considered as a new concept for tissue engineering. The high-molecular-weight polymer alginate is biocompatible and nontoxic. Gelatin is a natural water-soluble polymer and one of the most widely used materials for tissue engineering applications because of its biodegradability and biocompatibility. The study by Shulimzon confirms the effectiveness of three-dimensional gelatin-alginate hydrogel scaffolds in lung tissue regeneration [111]. Many researchers and physicians believe that the use of scaffolds improves tissue regeneration and can be useful in lung diseases such as COPD. Moreover, mesoporous silica scaffolds and poly(lactide-co-glycolide) matrices have been fabricated to modulate immune cell function for tumor vaccines in animal models with hopeful results and may serve as a platform for the design of vaccines against SARS-CoV-2 [112]. Recently, a polymer-nanoparticle (PNP) hydrogel system has been shown to sustain antigen release and elicit strong antibody responses to adjuvanted subunit antigens, making it suitable as an injectable vaccine. Gale et al. used the receptor-binding domain (RBD) of the SARS-CoV-2 spike protein (10 µg) as antigen together with the FDA-approved adjuvants CpG and Alum, loaded in an injectable self-healing hydrogel made from hydroxypropylmethylcellulose (HPMC-C12) combined with poly(ethylene glycol)-b-poly(lactic acid) (PEG-PLA). Their results indicated that slow release of adjuvant and antigen over 18 days led to the highest neutralizing antibody titers [113]. The study by Wu et al. shows that HTCC temperature-sensitive hydrogels are highly promising for delivering H5N1 influenza split antigens (A/Anhui/1/2005(H5N1)) at a concentration of 150 µg/L and for vaccination against influenza through intranasal administration in female Balb/c mice [114].
Since the respiratory system is exposed to the external environment and to airborne microorganisms (such as bacteria, viruses, and fungi), respiratory infections and diseases occur worldwide [115]. Nanogels have been suggested to tackle this problem by loading antibiotic agents into the hydrogel, protecting the antimicrobial agents from deactivation and diminishing adverse effects by decreasing drug exposure to the rest of the body. Nanogels are formed by physical and chemical crosslinking of polymers, making them promising for biomedical applications [116]. Chen et al. proposed an injectable self-healing thiolated PEG hydrogel for the sustained release of clarithromycin (CAM) and budesonide (BUD) in rabbit models of acute bacterial rhinosinusitis (ABRS) with Staphylococcus pneumonia infection, which showed a therapeutic effect over two weeks. The injectable, self-healing capability of the PEG hydrogel depends on the high-affinity, reversible binding between sulfhydryl groups and silver ions, which gives rise to controlled drug release without adverse side effects as well as inhibition of inflammatory responses [117]. Additionally, injectable smart hydrogels based on poly(ε-caprolactone-co-lactide) ester-functionalized hyaluronic acid (HA-PCLA) for the delivery of an immunomodulatory factor (an OVA-expressing plasmid, pOVA) could induce antibody production and effective inhibition of human lung carcinoma in vivo [118]. Table 6 summarizes selected studies of self-healing hydrogels for lung tissue engineering. Table 6. A summary of self-healing hydrogels used in vitro and in vivo for lung tissue.
Self-healing mechanism | Materials | Application(s) | Ref.
Ionic interaction | HPMC-C12/PEG-PLA | COVID-19 vaccine for sustained release of the RBD antigen | [113]
Ionic interaction | HTCC hydrogel/split antigen | H5N1 influenza vaccine for sustained release of the H5N1 antigen | [114]
Ionic interaction | CAM@Hydrogel/silver | Antibacterial and anti-inflammatory properties | [117]
Imine bond | HA-PCLA | Delivery of pOVA to inhibit human lung carcinoma | [118]

Injectable Hydrogel Used for Wound Healing Applications
Physical or thermal damage leads to chronic wounds, especially in people who suffer from diabetes. Hydrogels have benefits for treatment and for reducing inflammation because of their excellent permeability and biocompatibility and because they provide a moist environment for wound repair. In addition, antioxidant hydrogels can decrease the levels of reactive oxygen species (ROS) to prevent oxidative stress and thus aid wound repair [119]. Li et al. reported that sodium alginate/ZnO hydrogel beads are capable of sustained curcumin release; these hydrogel beads showed pH sensitivity and controlled curcumin release with high antioxidant activity [120]. Liang et al. showed that gelatin-grafted dopamine (GT-DA) and polydopamine-coated carbon nanotubes (CNT-PDA) have antibacterial, adhesive, antioxidant, and conductive properties; the GT-DA/CS/CNT hydrogel could release the antibiotic doxycycline for the treatment of full-thickness wound defects [121]. Moreover, the N-deacetylated derivative of chitosan, with its functional groups, can be bioconjugated with oxidized chondroitin sulfate (OCS) to make an injectable, self-healing, and antibacterial hydrogel network for drug delivery without any chemical crosslinker. Li and coworkers showed that an N,O-carboxymethyl chitosan/oxidized chondroitin sulfate (N,O-CMC/OCS) hydrogel has a long gelation time (133 s), inherent antibacterial activity, and stable performance with fibroblast cells, and is suitable for endothelial cell delivery to damaged skin [122]. Recently, kaolin has been utilized to stimulate blood clotting via the contact of its negative surface charge with factor XII and platelets. Tamer et al. demonstrated the hemostatic and antibacterial properties of a polyvinyl alcohol/kaolin (PVA/Kaolin) hydrogel; kaolin improved the swelling capacity and pore size of the fabricated hydrogels, leading to better absorption of wound exudates, and the antibacterial properties of this hydrogel could be boosted by loading penicillin-streptomycin (Pen-Strep) to prevent skin infections [123]. Although numerous hydrogels with inherent antibacterial capability or loaded antibacterial drugs have been used, drug resistance limits their wide use and is considered a critical issue. Liang and coworkers proposed loading graphene oxide (GO) into gelatin methacrylate (GM) and glycidyl methacrylate-functionalized quaternized chitosan (QCSG) to overcome drug resistance in skin damaged by methicillin-resistant Staphylococcus aureus (MRSA) infection in mice. Their results indicated that GO, with its negative surface charge, good photothermal features, and antibacterial properties, could repair damaged skin; in addition, methacrylate groups increased the mechanical properties of gelatin [124]. Table 7 summarizes selected studies of self-healing hydrogels for wound healing. Table 7. A summary of self-healing hydrogels used in vitro and in vivo for wound healing.
Conclusions
This review has provided a glimpse of the numerous applications of self-healing hydrogels as biomaterials, which are highly desirable because of useful advantages such as the recovery of their shape and mechanical properties after damage, moldability, and smooth injectability. The aim of current strategies is to design and fabricate systems that can mimic the in vivo conditions of different tumor types and restore damaged tissues and organs. Many studies have sought to create sensitive hydrogels to control the delivery of anti-cancer drugs at the tumor site. However, multiple factors must be addressed to achieve this aim accurately, including the immunological response, reaction time, degradation rate, surface hybridization, and inflammatory reactions.
Studies were also outlined on supramolecular hydrogen-bonding interactions that enable tissue ingrowth and cell migration, improving the hydrogels' capacity to thrive in physiological environments and to deliver anti-cancer drugs. Although these hydrogels promote proliferation, differentiation, and cell spreading, research on them in the clinical phase is still insufficient. Given the growing demand for materials that fully imitate native structures for cell development and growth and that sustain drug release, the applications of self-healing hydrogels have increased. The scope of this review is interdisciplinary, spanning chemistry, medicine, physics, nanoscience, biology, and mechanical engineering, which together can address some of the current issues.
Future Perspectives
Desirable scaffold structure and biological function have been improved through concurrent advances in vascularization and immunomodulation. Markedly, the design of a biocompatible polyethylene glycol hydrogel based on the CRISPR system, with single-stranded DNA and the Cas12a endonuclease, could enable sustained drug release and nanoparticle delivery [125]. It is expected that in the near future conventional hydrogels will be replaced by intelligent self-healing hydrogels, solving the problem that the UV light used for crosslinking is usually unable to penetrate the deeper parts of the body.
Conflicts of Interest:
The authors declare no conflict of interest.
Non-pharmacological care for patients with generalized osteoarthritis: design of a randomized clinical trial
Background
Non-pharmacological treatment (NPT) is a useful treatment option in the management of hip or knee osteoarthritis. To our knowledge, however, no studies have investigated the effect of NPT in patients with generalized osteoarthritis (GOA). The primary aim of this study is to compare the effectiveness of two currently existing health care programs with different intensity and mode of delivery on daily functioning in patients with GOA. The secondary objective is to compare the cost-effectiveness of both interventions.
Methods/Design
In this randomized, single blind, clinical trial with active controls, we aim to include 170 patients with GOA. The experimental intervention consists of six self-management group sessions provided by a multi-disciplinary team (occupational therapist, physiotherapist, dietician and specialized nurse). The active control intervention consists of two group sessions and four sessions by telephone, provided by a specialized nurse and physiotherapist. Both therapies last six weeks. The main study outcome is daily functioning during the first year after the treatment, assessed on the Health Assessment Questionnaire. Secondary outcomes are health related quality of life, specific complaints, fatigue, and costs. Illness cognitions, global perceived effect and self-efficacy will also be assessed for a responder analysis. Outcome assessments are performed directly after the intervention, after 26 weeks and after 52 weeks.
Discussion
This article describes the design of a randomized, single blind, clinical trial with a one year follow up to compare the costs and effectiveness of two non-pharmacological interventions with different modes of delivery for patients with GOA.
Trial registration
Dutch Trial Register NTR2137
Non-pharmacological treatment (NPT) is considered to be important in the management of OA in order to reduce the impact of OA on pain and physical functioning [8]. Current OA research on NPT options focuses mainly on the hip and knee joint [9]. An abundance of research literature illustrates that NPT is a useful treatment option in the management of hip or knee OA [9]. The initial focus of NPT should lie on self-management and patient-driven treatments rather than on passive therapies delivered by allied health professionals [8]. Provision of information and patient education about the objectives of treatment and the importance of changes in lifestyle, exercise, pacing of activities, weight reduction, and other measures to unload damaged joints is supported by two meta-analyses [10,11] on the efficacy of non-pharmacological interventions in chronic diseases. Increasing functional capacity [12-14] and encouraging the patient to undertake and maintain regular exercise [15] have also been found effective.
There is a lack of evidence concerning the optimum mode of care delivery. The more traditional face-to-face contact is by far the most evaluated type of therapy delivery. However, telephone contact aimed at promoting self-care appears to be more cost-efficient [16] and has also been associated with improvements in joint pain [17,18] and physical function [18] for up to a year in patients with knee OA. Moreover, in a recent study by Eakin et al. (2009), telephone counselling was suggested as a feasible mode of delivering lifestyle interventions to patients with chronic conditions and demonstrated modest improvements in diet and physical activity [19].
To our knowledge, only one study has investigated the effect of a non-pharmacological intervention in the management of GOA [20]. However, in this study all GOA patients had recently undergone major joint replacement, so the results cannot be generalized to a population in which joint replacement is not (yet) an option. Taking into account a. the extensive body of literature on NPT for hip or knee OA, b. the substantial group of patients, and c. the fact that OA in multiple joints is more disabling than OA in a single joint [21-23], it is remarkable that research on the efficacy of NPT options in GOA has hitherto been neglected.
Considering the latter, we infer that the development and evaluation of a treatment programme is warranted. To do so, we installed an expert group consisting of a physiotherapist, an occupational therapist, a specialized nurse, a rheumatologist, and two researchers, all of whom have extensive experience with GOA patients. Consequently, the expert group systematically conceptualized a definition of GOA and a treatment programme tailored to the needs of patients with GOA, based on recommendations for the management of hip and knee OA [8,9] and on the clinical experience of the health care providers. This resulted in a best-evidence, multi-disciplinary treatment programme. Since there is no information about the optimal treatment intensity and mode of delivery, we decided to compare the effectiveness of a fully supervised multi-disciplinary program to an active control [24] (i.e. a telephone-monitored program combined with two supervised contact moments). Because of the complex nature of GOA and the fact that guidelines for hip and knee OA recommend multiple NPT modalities, both interventions are multi-disciplinary [8]. We hypothesize that both programmes have beneficial effects on the patients' quality of life and ability to cope with their disease; however, we expect the face-to-face programme to be superior to the telephone programme with respect to daily functioning.
The primary aim of this study is to compare the effectiveness of a supervised multi-disciplinary programme to an active control on daily functioning in patients with GOA during the first year after treatment. Secondary aims of the study are to investigate the short-term effects of interventions and to compare the cost-effectiveness of both interventions.
Methods/Design
A pragmatic randomized, single blind, clinical, superiority trial with active controls will be used to study the aforementioned aims. The study will be performed at the outpatient rheumatology departments of the Sint Maartenskliniek Hospitals in the cities of Woerden and Nijmegen in The Netherlands. Both centres have piloted the interventions, and rooms well equipped for group-based treatments are available in both centres.
Patients referred by their rheumatologist to the outpatient department for multi-disciplinary NPT who are eligible for both the GOA health care program and the trial are informed about the trial. Subsequently, consenting patients are randomly allocated to one of the two groups and followed with questionnaires for a total of 52 weeks (Figure 1).
The trial has been reviewed by the Institutional Review Board of the University Medical Centre Nijmegen (protocol number 2009/290), which concluded that the study does not fall within the remit of the Medical Research Involving Human Subjects Act. The study can therefore be carried out in the Netherlands without approval by an accredited ethics board.
Eligibility Criteria
Men and women (≥ 18 years old) are eligible to enter the trial if they are diagnosed with GOA (see our definition in the following paragraph), are motivated to alter their lifestyle (assessed by a standardized set of questions), are willing to participate in a group, and are able to comply with the planned time schedule of both treatment conditions. Patients are excluded if they 1. are awaiting surgery, 2. have already participated unsuccessfully in a self-management program, 3. are considered unable to participate in a group because of limited psychological functioning (based on the clinical judgment of a psychologist), 4. are illiterate, 5. are not capable of communicating in Dutch, or 6. are incapable of coming to the hospital.
For patients who meet the inclusion and exclusion criteria but decide not to participate in the study, baseline demographics (i.e. age and sex) will be collected to assess possible selection bias.
Definition of GOA
The inclusion criteria above require that patients are diagnosed with GOA. However, no uniform definition of GOA is available in the literature. A pragmatic literature search yielded numerous definitions of GOA [7,25-33], mainly used in genetic studies and largely based on the distribution of joints with radiological changes.
In clinical practice, the term GOA refers to the combination of clinical symptoms and radiographic changes in multiple joints that can be attributed to OA, as obtaining a full picture of radiological changes in all joints is neither feasible nor desirable in clinical practice. For the purpose of this project we formulated a pragmatic definition of GOA based on literature findings and on the consensus of several clinicians and health professionals with experience in patients with GOA. In our definition, signs and symptoms are combined with radiological changes. In this project a patient is defined as having GOA if he or she meets the following three conditions: (a) experiencing complaints in three or more groups of joints; (b) having at least two objective signs that indicate OA in at least two joints (objective signs indicating OA are malalignment, palpable osteophytes/nodules, crepitations over the full range of motion, and limited range of motion) or radiographic signs (the presence of joint space narrowing and/or osteophytes); and (c) being limited in daily functioning (Health Assessment Questionnaire score [34] > 0.5).
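For illustration, the three conditions can be expressed as a simple check (a Python sketch; the input variables are hypothetical simplifications of the clinical assessment, not part of the protocol):

```python
def meets_goa_definition(n_symptomatic_joint_groups, n_joints_with_oa_signs, haq_score):
    """Pragmatic GOA definition used in this protocol (simplified, illustrative only).

    a. complaints in >= 3 groups of joints;
    b. >= 2 joints with at least two objective signs, or radiographic signs, of OA;
    c. limited daily functioning (HAQ > 0.5).
    """
    return (n_symptomatic_joint_groups >= 3
            and n_joints_with_oa_signs >= 2
            and haq_score > 0.5)

# Example: 4 symptomatic joint groups, OA signs in 3 joints, HAQ = 0.9
print(meets_goa_definition(4, 3, 0.9))  # True
```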
Interventions
During a six-week treatment period, patients will receive one of the following two treatment programmes: an interdisciplinary, group-based self-management programme (experimental intervention) or a telephone-based self-management programme (active control). Both pilot-tested interventions were developed from a clinical and pragmatic perspective, meaning that both had to be useful and feasible in clinical practice. This resulted in two interventions that were satisfactory to both patients and health care providers. From a research perspective, however, the contrast between the two interventions may be limited. To clarify the differences between the groups, figure 2 provides a detailed overview of the content of the non-pharmacological treatment in both arms, following the recommendations of Perera et al 2007 [35].
For both interventions, manuals and standardized presentations were created. At baseline and every six months, all care providers meet to assess and improve adherence to the treatment protocols.
Experimental intervention group
Patients (eight per group) allocated to the experimental intervention group attend six therapeutic group sessions and one group evaluation. During these six sessions, patients aim to improve daily functioning by optimising their current lifestyle (i.e. physical activity and diet) and by enhancing self-efficacy to control the consequences of the disease in everyday life (i.e. activity pacing, pain management and daily functioning). To enhance patients' self-efficacy, the 5As model of behaviour change counselling is used, an evidence-based approach suitable for a broad range of behaviours and health conditions. The 5As consist of: Assessing the patient's behaviour, beliefs and motivation; Advising the patient based upon personal health risks; Agreeing with the patient on a realistic set of goals; Assisting the patient to anticipate barriers and develop a specific action plan; and Arranging follow-up support [36]. The following example illustrates the use of the 5As model. A participant wears a pedometer to determine his or her physical activity level (A1). Together with the health care provider, the patient discusses the outcome (A2) and sets a goal to increase the level of physical activity (A3). Both the health care provider and the patient must believe the goal is adequate (A3) and realistic (A4). Subsequently, patient and therapist closely monitor the personal goals (A5). In addition to the self-management programme, patients are also enrolled in an exercise programme aimed at (1) improving the quality of movement and (2) implementing the learned exercises in the home situation.
Active control group
Patients enrolled in the active control group attend two group sessions (eight patients per group) and are further monitored through four telephone contacts [37]. As with the experimental intervention, the active control group aims to optimise the patients' current lifestyle (i.e. physical activity and diet) and to enhance the patients' self-efficacy to control the disease (i.e. activity pacing, pain management and daily functioning). Again, all patients set personal goals for the items mentioned above. Progress on these personal goals is monitored by the health care provider through planned telephone contacts. Patients are asked to self-monitor their own health status [37] by completing activity and dietary diaries.
Health care providers
A total of 14 health care providers (five physiotherapists, three occupational therapists, five specialized nurses, and one dietician) are involved in the therapy sessions. All health care providers are specialized in the management of patients with musculoskeletal disorders and have experience in teaching self-management principles to groups. Moreover, all care providers took the course 'motivational interviewing'.
The experimental intervention will be provided by one of three physiotherapists, one of three occupational therapists, one of two specialized nurses and one dietician. In the active control group, the two group sessions are provided by one of two physiotherapists and two of three specialized nurses. The telephone contacts will be provided by the specialized nurses. Assignment of health care providers to the therapy programmes was done on the basis of availability.
[Figure 2. Timeline of the experimental intervention and the active control. Elements shown include: general intake by PT and OT; eligibility assessment; baseline assessment; measurement of outcomes; group education on the health care programme, diaries and expectations; group education on osteoarthritis, pain and medication; general exercise programme; group education about physical activity; group education on activity pacing; recreational activity; group-based monitoring of personal goals; group education on food consumption; specific exercise programme based on the PSK scores; group education on acceptance and helplessness; evaluation and setting goals for the future; and monitoring of personal goals via telephone by a specialized nurse.]
Primary outcome: daily functioning
The primary outcome of the study is the Stanford Health Assessment Questionnaire (HAQ) Disability Index during the first year after treatment [34,38]. The HAQ is a patient-reported outcome questionnaire containing 20 questions covering eight domains of activities of daily living. For each item, there is a four-level response set scored from 0 to 3, with higher scores indicating more disability (0 = without any difficulty; 1 = with some difficulty; 2 = with much difficulty; and 3 = unable to do). Both the total score and each of the subscores range from 0 (no disability) to 3 (severe disability). A between-group change of 0.26 points is considered clinically relevant [39]. The HAQ has been found to be more responsive for measuring functioning than the WOMAC, a questionnaire widely used in hip and knee OA [40].
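For orientation, a simplified sketch of HAQ-DI scoring is shown below (Python, with hypothetical responses); it takes each domain score as the highest item response in that domain and the index as the mean of the eight domain scores, omitting the aids/devices adjustment of the full instrument:

```python
from statistics import mean

def haq_di(domain_items):
    """Simplified HAQ Disability Index: mean of per-domain maxima (range 0-3).

    `domain_items` maps each of the eight domains to its item responses
    (0 = without any difficulty ... 3 = unable to do). The aids/devices
    adjustment of the full instrument is omitted here.
    """
    domain_scores = [max(items) for items in domain_items.values()]
    return mean(domain_scores)

responses = {
    "dressing": [1, 0], "arising": [2, 1], "eating": [1, 1, 0],
    "walking": [2, 2], "hygiene": [0, 1, 1], "reach": [1, 2],
    "grip": [0, 0, 1], "activities": [2, 1, 1],
}
print(haq_di(responses))  # 1.5
```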
Health-related quality of life (clinical efficacy)
To assess the efficacy of the interventions on health-related quality of life (HRQoL) the RAND 36-Item Health Survey 1.0 (RAND-36) will be used [41]. Scores from the eight subscales of the RAND-36 will be aggregated into two summary scores: a Physical Component Summary (PCS) and a Mental Component Summary (MCS). This instrument has been translated and validated for use in Dutch patients [42].
Patients specific complaints (clinical efficacy)
Physical functioning is assessed with the patient specific complaints questionnaire (PSK). The PSK is a patient-specific questionnaire in which the patient is asked to select three activities that he or she perceives as problematic (activities that can easily be avoided are not allowed) and to score their severity on a 10 cm visual analogue scale (VAS) [43].
Fatigue (clinical efficacy)
Fatigue is measured with the "Subjective Fatigue" subscale of the Checklist Individual Strength (CIS) [44]. The CIS is a self-administered questionnaire with 20 items across four subscales: subjectively experienced fatigue (8 items), concentration (5 items), motivation (4 items) and physical activity (3 items). Each item is scored on a 7-point scale ranging from 'totally right' to 'totally wrong'. With a range of 1-7 points per item, the subjective fatigue subscale score ranges from 8 to 56 points. The CIS is a sensitive instrument with good discriminating power and reliability [44].
Health-related quality of life (health economics)
To measure the HRQoL of patients for the purpose of economic evaluation the EuroQol-5D (EQ-5D) will be used [45]. This HRQoL instrument will be completed by the patients and is available in a validated Dutch translation. The EQ-5D is a generic HRQoL instrument comprising five domains: mobility, self-care, usual activities, pain/discomfort and anxiety/depression. The EQ-5D index is obtained by applying predetermined weights to the five domains. This index gives a societal-based global quantification of the patient's health status on a scale ranging from 0 (death) to 1 (perfect health). The utility weights captured by these preferences will enable the derivation of the Quality Adjusted Life Years (QALY) for each intervention and will be used in cost-utility analyses. Patients will also be asked to rate their overall HRQoL on a visual analogue scale (EQ-5D VAS) consisting of a vertical line ranging from 0 (worst imaginable health status) to 100 (best imaginable).
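As an illustration of how EQ-5D utilities could be aggregated into QALYs, the sketch below (Python, with hypothetical utilities at the study's assessment weeks) computes the area under the utility curve with the trapezoidal rule; the actual economic analysis may use a different aggregation:

```python
def qalys(weeks, utilities):
    """QALYs over follow-up as the area under the utility curve
    (trapezoidal rule), converting weeks to years."""
    years = [w / 52.0 for w in weeks]
    area = 0.0
    for i in range(1, len(years)):
        area += (utilities[i] + utilities[i - 1]) / 2.0 * (years[i] - years[i - 1])
    return area

# Hypothetical EQ-5D index values at the study assessment points
weeks     = [0, 6, 13, 26, 39, 52]
utilities = [0.62, 0.68, 0.70, 0.71, 0.69, 0.72]
print(round(qalys(weeks, utilities), 3))  # ~0.70 QALYs over one year
```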
Costs (health economics)
Volumes of care will be measured prospectively using patient-based diaries (complemented by patient chart data if necessary). Per arm (intervention and control), full cost prices will be determined using an activity-based costing approach. Productivity losses for patients will be estimated using a postal questionnaire with a 3-month recall period. The friction cost method will be applied following the Dutch guidelines for cost analysis (Oostenbrink et al., CVZ 2004). Patients' travel time to sessions or the outpatient clinic and the related costs will also be considered (likewise with a 3-month recall period).
The second part of the cost analysis consists of determining the cost price for each unit of consumption, which will then be multiplied by the volumes registered for each participating patient. The Dutch guidelines for cost analyses will be used (CVZ, Oostenbrink et al., 2004). For units of care/resources for which no guideline or standard prices are available, real cost prices will be determined.
Study endpoints
Participants will receive postal questionnaires at baseline and at 6, 13, 26, 39 and 52 weeks after the start of the intervention. The primary endpoint for studying the long-term effects of the interventions is the averaged HAQ score [34] obtained from the 6, 26 and 52 week time points. The 6 week time point will provide a secondary endpoint for investigating the short-term effects of the interventions. Costs will be assessed at the 6, 13, 26, 39 and 52 week time points.
Socio-demographic information, including age, gender, employment status and body mass index, will be collected at baseline. Table 1 outlines all outcome measures that will be collected at baseline and at follow-up evaluations.
Other outcomes
Since no validated outcome measures are yet available for the assessment of health status in patients with GOA, we decided to evaluate effectiveness also with a responder analysis. We developed an adapted version of the OMERACT-OARSI responder criteria as the secondary outcome measure of our study [46]. This composite index permits presentation of the results of symptom-modifying clinical trials in OA based on individual patient responses (responder yes/no). In this study, patients are considered responders if at least 3 of the 6 targeted areas (i.e. physical functioning, pain, fatigue, physical activity, acceptance, and patient global assessment) improve by ≥ 20% [47]. We assess the targeted areas with the following secondary outcome measures: the RAND-36 pain subscale, PSK, CIS, SQUASH, ICQ and PGA (as described below).
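The adapted responder rule can be sketched as follows (Python; the instruments, values and the handling of score direction are hypothetical assumptions for illustration only):

```python
def pct_improvement(baseline, follow_up, higher_is_better):
    """Relative improvement from baseline (positive = improved)."""
    if baseline == 0:
        return 0.0
    change = (follow_up - baseline) / abs(baseline)
    return change if higher_is_better else -change

def is_responder(baseline, follow_up, directions, threshold=0.20, needed=3):
    """Adapted rule: at least `needed` targeted areas improve by >= 20%."""
    improved = sum(
        pct_improvement(baseline[k], follow_up[k], directions[k]) >= threshold
        for k in baseline
    )
    return improved >= needed

# Hypothetical scores for the six targeted areas (True = higher is better)
directions = {"function": False, "pain": False, "fatigue": False,
              "activity": True, "acceptance": True, "global": False}
baseline  = {"function": 1.5, "pain": 60, "fatigue": 40,
             "activity": 5000, "acceptance": 12, "global": 55}
follow_up = {"function": 1.1, "pain": 45, "fatigue": 38,
             "activity": 6500, "acceptance": 15, "global": 50}
print(is_responder(baseline, follow_up, directions))  # True (4 of 6 areas improved >= 20%)
```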
Physical activity
The Short QUestionnaire to ASsess Health-enhancing physical activity (SQUASH) [48] will be used to measure physical activity. The SQUASH measures habitual physical activity level and is structured in a way that allows the results to be compared with international physical activity guidelines. The questions are prestructured into activities at work, activities to/from work, household activities, leisure-time activities and sports activities. Spearman correlation has shown an overall reproducibility of 0.58 (p < 0.05) for the SQUASH. The SQUASH has been validated against an accelerometer, the CSA Inc. Activity Monitor (model AM7164-2.2), showing a Spearman correlation coefficient between CSA readings and total activity score of 0.45 (95% CI 0.17-0.66) [48].
Illness cognitions
Illness cognitions (acceptance and helplessness) are measured using the Illness Cognitions Questionnaire (ICQ). The ICQ is an 18-item questionnaire measuring three generic illness cognitions: helplessness, acceptance and disease benefits. Participants rate the extent to which they agree with the statements on a 4-point Likert scale, ranging from 1 (not at all) to 4 (completely). Higher subscale scores reflect higher levels of agreement with that generic illness cognition. The scale has excellent construct and internal validity [49]. In this study we use the subscales acceptance and helplessness.
Self-efficacy
Self-efficacy is evaluated with the Dutch version of the General Self-Efficacy Scale (GSES), a self-administered 10-item questionnaire concerning problems in daily living and the capability to find solutions to these problems [51]. Each item is rated on a 4-point scale (not at all true, hardly true, moderately true, exactly true), yielding a total score between 10 and 40 points. A higher score represents a higher level of self-efficacy. The GSES was found to be configurally equivalent across 28 nations and forms a single global dimension. High reliability, stability, and construct validity of the GSES have been confirmed in earlier studies [51].
Randomization, allocation concealment and blinding
Participants included in the study are randomly assigned to one of the treatment programmes. Restricted randomization with randomly varied block sizes (2 to 6) will be used [52]. A computer-generated randomization sequence table will be produced with random allocation software [53] by an independent researcher (DJ). Subsequently, an independent person will assign patients to one of the treatment groups. This person has no information about the participants and has no influence on the assignment sequence or on the decision about patient eligibility.
Patients and health care providers allocated to the experimental and active control group will be aware of the allocated arm, whereas the outcome assessor and data analysts will be kept blinded to the allocation.
Sample size
For the sample size calculation we used the statistical package G*Power 3.0.10 [54], applying the equation for the sample size required per group for an unpaired t-test comparing two independent means. To detect a minimal clinically important difference of 0.26 points [39] in mean HAQ scores between the groups, assuming an SD of 0.66 (SE × √N = 0.04 × √271 ≈ 0.66) [40], with 80% power and a two-sided 5% significance level, we will need 102 patients per arm (effect size 0.26/0.66 = 0.39). This sample size calculation applies to analyses with independent t-tests.
In our analyses, however, we will use the baseline HAQ as a covariate. By a straightforward generalisation of the method described in Borm et al [55], it can be shown that in this case the sample size must be multiplied by (1 + (k − 1)ρ)/k − ρ_B², where k is the number of follow-up assessments (3 in our case), ρ_B is the correlation between the outcome measured at baseline and at follow-up, and ρ is the correlation between the follow-up measurements. Although some publications report a test-retest correlation for the HAQ of more than 0.8 [31], there is no direct information about the correlation to be expected in our trial. We expect ρ_B to be smaller than ρ (even within the treatment groups), because the interventions take place between the baseline and follow-up assessments. The interventions may not have the same effect on all patients and may therefore decrease the correlation. When ρ is between 0.7 and 0.9 and ρ_B = ρ − 0.2, the sample size can be reduced by a (design) factor of 0.44 to 0.55. For ρ between 0.8 and 0.9 and ρ_B = ρ − 0.1, the sample size can be reduced by a factor of 0.38 to 0.44. A trial with 55 patients per treatment group will then have at least 80% power (when the design factor is 0.55). In the most optimistic scenario, when the design factor is 0.38, the study has slightly over 90% power.
Finally, as the patients will be treated in groups (clusters) of approximately eight, the patient numbers have to be inflated by a factor of 1 + (8 − 1) × ICC. For ICC = 0.05, this leads to 74 patients per arm. To compensate for possible drop-outs (15%), we plan to enrol 85 patients per treatment group.
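The arithmetic of this calculation can be reproduced as follows (an illustrative Python sketch using a normal approximation to the t-test; the small differences from the figures above, 102/55/74/85, arise from the approximation and from rounding at intermediate steps):

```python
from math import ceil

Z_ALPHA = 1.959964   # two-sided 5% significance level
Z_POWER = 0.841621   # 80% power

def n_per_arm(delta, sd):
    """Approximate n per arm for a two-sided comparison of two independent means
    (normal approximation to the unpaired t-test)."""
    return 2 * ((Z_ALPHA + Z_POWER) * sd / delta) ** 2

def design_factor(k, rho, rho_b):
    """Multiplier for k follow-up assessments with baseline adjustment:
    (1 + (k - 1) * rho) / k - rho_b ** 2 (generalisation of Borm et al. [55])."""
    return (1 + (k - 1) * rho) / k - rho_b ** 2

n_base = n_per_arm(delta=0.26, sd=0.66)                   # ~101 per arm
n_adj  = n_base * design_factor(k=3, rho=0.7, rho_b=0.5)  # design factor 0.55 -> ~56
n_clus = n_adj * (1 + (8 - 1) * 0.05)                     # cluster inflation, ICC = 0.05
n_full = ceil(n_clus * 1.15)                              # allow ~15% drop-out
print(round(n_base), round(n_adj), round(n_clus), n_full)  # 101 56 75 87
```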
Planned data analysis
Study data are entered in Access 2003, exported to the statistical package STATA v10 and stored on a secure network drive. Five per cent of the data will be entered twice to assess the percentage and nature of typing errors. All paper records are stored in a locked cabinet in an anonymised format. The researcher will check for any missing data and will handle them according to the recommendations for the questionnaires. Descriptive statistics will be used to determine participant characteristics. Continuous variables will be reported using means, standard deviations (SD) and interquartile ranges when appropriate; otherwise medians and ranges will be shown. For dichotomous/categorical variables, we will present absolute numbers and percentages. The primary analysis will follow the intention-to-treat principle.
Clinical efficacy
The primary variable, HAQ during the first year after treatment, will be analysed with a random effects model with the HAQ scores after 6, 26 and 52 weeks as the dependent variable. The fixed factors will be assessment (6, 26 or 52 weeks), treatment group, sex and baseline value. To account for the group-wise treatment and the repeated measurements, random effects for group and patient will be included. HAQ immediately after treatment will be evaluated in a random effects model with fixed factors treatment group, sex and baseline value, and a random factor for group. All other continuous variables will be analysed in a similar way. Skewed variables will be transformed before analysis. For dichotomous outcomes, random effects generalised linear models with a Bernoulli distribution and linear link function will be used, similar to those described for continuous outcomes.
Changes in effect size over time will be evaluated by adding the interaction of assessment and treatment group to the model.
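As an illustration of this model structure, the sketch below (Python with simulated data; not the trial's analysis code, which will be run in STATA) fits a simplified version in which the crossed random effects for treatment group and patient are reduced to a single random intercept per patient:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_patients, visits = 170, [6, 26, 52]   # 85 per arm, three follow-up assessments
base = pd.DataFrame({
    "patient":   np.arange(n_patients),
    "treatment": rng.integers(0, 2, n_patients),   # 1 = experimental, 0 = active control
    "sex":       rng.integers(0, 2, n_patients),
    "baseline":  rng.normal(1.2, 0.4, n_patients),
})

rows = []
for week in visits:
    d = base.copy()
    d["week"] = week
    # Hypothetical outcome: some improvement, slightly larger in the experimental arm
    d["haq"] = (d["baseline"] - 0.2 - 0.15 * d["treatment"]
                + rng.normal(0, 0.3, n_patients)).clip(0, 3)
    rows.append(d)
long = pd.concat(rows, ignore_index=True)

# Assessment, treatment group, sex and baseline as fixed factors;
# random intercept per patient (simplification of the planned crossed random effects)
model = smf.mixedlm("haq ~ C(week) + treatment + sex + baseline",
                    data=long, groups=long["patient"]).fit()
print(model.summary())
```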
Health economics
The economic evaluation is based on the general principles of cost-effectiveness analysis and cost-utility analysis. For the cost-effectiveness analysis we will calculate the incremental cost-effectiveness ratio (ICER) as the cost per unit of improvement on the HAQ. For the cost-utility analysis we will calculate the ICER as the cost per QALY gained. This ICER will be evaluated stochastically and its uncertainty will be determined using the bootstrap method and/or the Fieller method. A cost-effectiveness acceptability curve will be derived to evaluate efficiency at different willingness-to-pay thresholds for a QALY. The impact of uncertainty surrounding deterministic parameters (for example, cost prices) on the ICER will be explored using one-way sensitivity analyses over the range of extremes. The economic evaluation is conducted alongside the clinical trial and consequently adheres to the design presented earlier.
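For illustration, the sketch below (Python, with entirely hypothetical cost and QALY data) shows the general form of a non-parametric bootstrap of incremental costs and effects and a cost-effectiveness acceptability curve derived via the net-benefit framework; it is not the planned analysis code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-patient costs (EUR) and effects (QALYs) in each arm
cost_exp, eff_exp = rng.normal(1200, 300, 85), rng.normal(0.72, 0.10, 85)
cost_ctl, eff_ctl = rng.normal(800, 250, 85),  rng.normal(0.68, 0.10, 85)

def bootstrap_pairs(n_boot=2000):
    """Bootstrap incremental costs and effects (experimental minus control)."""
    d_cost, d_eff = [], []
    for _ in range(n_boot):
        ie = rng.choice(len(cost_exp), len(cost_exp), replace=True)
        ic = rng.choice(len(cost_ctl), len(cost_ctl), replace=True)
        d_cost.append(cost_exp[ie].mean() - cost_ctl[ic].mean())
        d_eff.append(eff_exp[ie].mean() - eff_ctl[ic].mean())
    return np.array(d_cost), np.array(d_eff)

d_cost, d_eff = bootstrap_pairs()
icer = (cost_exp.mean() - cost_ctl.mean()) / (eff_exp.mean() - eff_ctl.mean())
print("ICER point estimate (cost per QALY):", round(icer))

# Acceptability curve: probability of positive net benefit at each WTP threshold
for wtp in (10_000, 20_000, 50_000):
    p = np.mean(wtp * d_eff - d_cost > 0)
    print(f"P(cost-effective) at WTP {wtp}: {p:.2f}")
```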
Discussion
To date, research on NPT options for OA has mainly focused on patients with hip and knee OA. In 2008, NICE disseminated multiple recommendations for future OA research based on the research gaps they identified. One of their research questions was: "What are the benefits of individual and combination OA therapies in people with multiple joint region pain?". This study will contribute to the body of evidence on NPT in GOA patients.
A possible limitation of our study is the limited contrast between the content of the experimental and control interventions. Both interventions were developed from a clinical and pragmatic perspective. Since both interventions should be directly implementable in clinical practice after study completion, we created two treatment protocols according to the recommendations outlined in OA guidelines and current best practice. The content of both interventions is very similar, but several critical differences remain, such as the mode of delivery, the number of health care providers involved and the number of group sessions. Specific insights into the effectiveness and costs of these differences will aid health care providers and care vendors in their decision making for the management of patients with GOA.
To our knowledge, we are the first to define GOA from a clinical rather than a radiographic perspective, as no consistent, clinically useful definition of GOA is available. In 1952, Kellgren and Moore defined GOA as involvement of multiple joints combined with Heberden's nodes [31]. Since then, multiple definitions of GOA have been used, for the greater part based on radiological changes. Most definitions state that GOA involves at least three joints [31], although this again has been questioned [27]. The groups of joints most often incorporated in definitions are the hands, neck, lower back, knees and hips [7,56,57], whereas other definitions postulate that the involvement of atypical joints [25,26] or hallux valgus [25,58] is essential for GOA. To date, two specific phenotypes of GOA have been established [28]; however, these phenotypes are far from useful in daily practice as they represent only a very small proportion of patients with OA-like complaints in multiple joints. Considering the low feasibility and desirability of obtaining radiographs of a large number of joints in clinical practice, we believe that clinical signs and symptoms should also be taken into account in the definition of GOA. This is especially relevant since pain at multiple joint sites is associated with lower levels of functioning [21,23,59-61], more pain [21,23,59,60] and higher levels of distress [21,59,62], and since complaints, rather than radiographic OA, are the main motivation for patients to engage in therapy. Therefore, for the purpose of this project we formulated a pragmatic definition of GOA (as described earlier) based upon literature findings and on the consensus of several clinicians and health professionals with experience with patients with GOA.
There is a need for outcome measures to evaluate self-management interventions [63]. Self-management is defined as the individual's ability to manage the symptoms, treatment, physical and psychosocial consequences and lifestyle changes inherent in living with a chronic condition [64]. Characteristically, one or more of these areas are addressed by self-management interventions [63]. In our study we target physical functioning, pain, fatigue, physical activity, and acceptance. However, no comprehensive outcome measure is available to measure all these different aspects. Mulligan et al (2005) state that when designing a self-management intervention, it is important to be clear about what the intervention is designed to achieve, in what areas it is likely to have an effect, and to choose outcome measures accordingly [63]. Therefore, we decided to include a responder analysis, derived from the OMERACT-OARSI responder criteria [46], as one of the secondary measures in our analysis that specifically evaluates the areas we aim to address. In a future publication we intend to evaluate and discuss this method of assessing self-management interventions.
In conclusion, this study will provide additional insights in the effectiveness of non-pharmacological interventions for GOA. The publication of our study protocol enables future readers to compare what was originally intended with what was actually done, thus preventing both "data dredging" and post-hoc revisions of study aims.
Frailty in COPD: an analysis of prevalence and clinical impact using UK Biobank
Background Frailty, a state of reduced physiological reserve, is common in people with chronic obstructive pulmonary disease (COPD). Frailty can occur at any age; however, the implications in younger people (eg, aged <65 years) with COPD are unclear. We assessed the prevalence of frailty in UK Biobank participants with COPD; explored relationships between frailty and forced expiratory volume in 1 second (FEV1) and quantified the association between frailty and adverse outcomes. Methods UK Biobank participants (n=3132, recruited 2006–2010) with COPD aged 40–70 years were analysed comparing two frailty measures (frailty phenotype and frailty index) at baseline. Relationship with FEV1 was assessed for each measure. Outcomes were mortality, major adverse cardiovascular event (MACE), all-cause hospitalisation, hospitalisation with COPD exacerbation and community COPD exacerbation over 8 years of follow-up. Results Frailty was common by both definitions (17% frail using frailty phenotype, 28% moderate and 4% severely frail using frailty index). The frailty phenotype, but not the frailty index, was associated with lower FEV1. Frailty phenotype (frail vs robust) was associated with mortality (HR 2.33; 95% CI 1.84 to 2.96), MACE (2.73; 1.66 to 4.49), hospitalisation (incidence rate ratio 3.39; 2.77 to 4.14), hospitalised exacerbation (5.19; 3.80 to 7.09) and community exacerbation (2.15; 1.81 to 2.54), as was frailty index (severe vs robust) (mortality (2.65; 95% CI 1.75 to 4.02), MACE (6.76; 2.68 to 17.04), hospitalisation (3.69; 2.52 to 5.42), hospitalised exacerbation (4.26; 2.37 to 7.68) and community exacerbation (2.39; 1.74 to 3.28)). These relationships were similar before and after adjustment for FEV1. Conclusion Frailty, regardless of age or measure, identifies people with COPD at risk of adverse clinical outcomes. Frailty assessment may aid risk stratification and guide targeted intervention in COPD and should not be limited to people aged >65 years.
INTRODUCTION
Chronic obstructive pulmonary disease (COPD), characterised by fixed and progressive airflow obstruction, is the third leading cause of death worldwide. 1 COPD is also a condition associated with ageing. While it is estimated that 10% of the adult population worldwide may be living with COPD, 1 the prevalence increases from <5% in people aged <65 years to >20% in people aged >85 years. 2 This has highlighted the need to understand the links between COPD and states associated with ageing, such as frailty. 3 4 However, neither frailty nor COPD exclusively affects older people, and there is no clearly defined threshold above which frailty becomes a clinically meaningful concept. Most studies of frailty have focused exclusively on people over the age of 65, in whom frailty is more common. Frailty can affect people across a range of ages, 5 6 including people aged <65 years in whom it has been far less frequently studied. The clinical implications of frailty at younger ages remain unclear.
WHAT IS ALREADY KNOWN ON THIS TOPIC
⇒ Frailty is common in people with chronic obstructive pulmonary disease (COPD), including in younger people (eg, those aged less than 65 years); however, the clinical implications of COPD in this age group are poorly understood.
WHAT THIS STUDY ADDS
⇒ Frailty in people with COPD aged 40-70 is associated with increased risk of mortality, hospital admission, major adverse cardiovascular events and COPD exacerbations.
⇒ This relationship is independent of the severity of airflow limitation.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
⇒ Current policies for frailty identification tend to focus exclusively on those aged 65 and over. These findings suggest that in people with COPD, identifying frailty in younger people may aid risk stratification and identification of those for whom interventions may be designed and targeted.
Frailty describes a state of reduced physiological reserve. 7 People living with frailty are more vulnerable to decompensation and adverse health outcomes in response to physiological stress. This confers an increased risk of a range of outcomes including mortality, hospital admission, adverse drug reactions and falls. 8 COPD is associated with a range of extrapulmonary complications including cardiovascular morbidity, 9 osteoporosis, 10 and muscle weakness, 11 all of which may contribute to frailty. Frailty is highly prevalent in people with COPD. 12 Most previous studies have focused exclusively on people aged >65 years. 5 13-15 However, none of these studies have explored the clinical implications of frailty in younger people with COPD. Furthermore, while some studies have demonstrated an association between frailty and both severity of airflow limitation [16][17][18] and mortality in people with COPD, [19][20][21] these findings have been inconsistent. [22][23][24][25] It is also not clear if the relationship between frailty and adverse outcomes in COPD is independent of the severity of COPD assessed by airflow limitation.
This study seeks to address these gaps using data from the UK Biobank, a cohort of people aged 40-70, representing a relatively younger age range than most previous studies. It will assess two models of frailty: the frailty index and the frailty phenotype. We aim: (1) to assess the prevalence of frailty in UK Biobank participants with COPD, (2) to explore the relationship between frailty and FEV1 and (3) to quantify the association between frailty and mortality, hospitalisations, major adverse cardiovascular events (MACE) and COPD exacerbations.
METHODS
This is an observational analysis of the prevalence and impact of frailty, assessed using two different definitions, in UK Biobank participants with COPD.
Study population
UK Biobank is a large cohort, recruited by invitation between 2006 and 2010 (5% response rate). Participants were aged between 40 and 70 and had to be registered with a general practitioner and live within 20 miles of one of 22 assessment centres in England, Scotland and Wales. Participants underwent a baseline assessment questionnaire, nurse interview, physical assessment and provided biological samples. Informed consent was also given for linkage to healthcare records including primary care, hospital episode statistics and national mortality records. Currently, linked primary care records are available for 218 570 of the original 502 533 participants. Participants with available primary care data are similar to the wider UK Biobank cohort in terms of age, sex, socioeconomic status and self-reported long-term conditions (online supplemental appendix 1).
Identifying COPD
Participants with COPD were identified from linked primary care data using a previously validated list of diagnostic codes (Read-codes). 26 This code list has been shown to have a high positive predictive value for COPD (86.5%). We included participants with any relevant code occurring prior to UK Biobank baseline assessment. We did not include people with self-reported COPD if they did not have a corresponding primary care Read code.
Spirometry
We assessed the severity of COPD using spirometry data. We relied primarily on spirometry values coded in primary care records in the 2-year period prior to baseline assessment, as the quality of spirometry undertaken in primary care is known to be high. 27 Where no primary care measures were available, we used spirometry data from UK Biobank baseline assessment. These measurements were taken using a Vitalograph Pneumotrac 6800 according to American Thoracic Society/European Respiratory Society guidelines. No postbronchodilator measurements were taken. Criteria for acceptable spirometry values from UK Biobank assessment data were taken from previous UK Biobank studies and are described in full in online supplemental appendix 1. 28 We did not use spirometry to confirm the diagnosis of COPD as UK Biobank spirometry was not postbronchodilator, and previous studies demonstrated that the addition of spirometry only marginally improves the positive predictive value of the diagnostic codes used to identify COPD.
For all analyses using spirometry, we performed sensitivity analyses based on primary care values and UK Biobank values separately.
Assessing frailty
We used two different definitions of frailty, the frailty index and the frailty phenotype, which we analysed in parallel. These are described briefly here with full details in the online supplemental appendix 1.
A frailty index is a non-weighted count of age-related deficits (including comorbidities, symptoms, functional limitations and laboratory values). The frailty index was originally developed by Rockwood and Mitnitski and includes a standard protocol for selecting deficits from a given data set based on specific criteria. 29-31 Deficits should be associated with increasing age and with poor health status; be neither too rare (<1% prevalence) nor ubiquitous; and cover a range of organ systems. 29 We used the frailty index previously developed by Williams et al for UK Biobank. 32 Deficits are summed and then divided by the total number of possible deficits to give a value between 0 (no deficits) and 1 (all possible deficits). We analysed the frailty index as a numerical variable. For estimating prevalence and for presentation in tables, we also categorised the frailty index into robust (0-0.12), mild (0.12-0.24), moderate (0.24-0.36) and severe (>0.36) frailty. Cut-points were selected based on the electronic frailty index used routinely in UK primary care. 33
The frailty phenotype is based on five criteria: low grip strength, weight loss, slow walking speed, exhaustion and low physical activity. Frailty is defined as the presence of 3 or more criteria, with 1 or 2 criteria indicating prefrailty. We have previously adapted the original criteria by Fried et al to UK Biobank (described in detail elsewhere). 5 7 Briefly, cut-offs for grip strength were as per the original frailty phenotype description; weight loss was self-reported and (given the wording of the UK Biobank questionnaire) not specified to be 'unintentional'; and slow walking speed was self-reported (in contrast to the original frailty phenotype, in which gait speed was measured), as were exhaustion and physical activity. A detailed comparison between the UK Biobank and original definitions for each component is given in the online supplemental appendix 1.
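As an illustration of how these two measures translate into scores and categories, the sketch below (Python, with hypothetical inputs; the boundary handling at the frailty index cut-points is an assumption) computes a frailty index from a list of deficits and assigns phenotype categories from the number of criteria met:

```python
def frailty_index(deficits):
    """Non-weighted deficit count divided by the number of deficits considered."""
    return sum(deficits) / len(deficits)

def fi_category(fi):
    """Categories used here (boundary handling assumed): robust <0.12,
    mild 0.12-0.24, moderate 0.24-0.36, severe >0.36."""
    if fi < 0.12:
        return "robust"
    if fi < 0.24:
        return "mild"
    if fi < 0.36:
        return "moderate"
    return "severe"

def phenotype_category(criteria_met):
    """Fried-style phenotype: 0 criteria = robust, 1-2 = pre-frail, >=3 = frail."""
    return "robust" if criteria_met == 0 else ("pre-frail" if criteria_met < 3 else "frail")

deficits = [True] * 13 + [False] * 36    # 13 of 49 deficits present (hypothetical)
fi = frailty_index(deficits)
print(round(fi, 3), fi_category(fi))     # 0.265 moderate
print(phenotype_category(3))             # frail
```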
Covariates
Baseline covariates were taken from UK Biobank assessment centre data. Age, sex and ethnicity were self-reported. Body mass index was calculated based on measured height and weight. Smoking was categorised as current, previous and never, based on self-report. Self-reported frequency of alcohol intake was categorised (never/special occasions, 1-3 times per month, 1-4 times per week, or daily/almost daily).
Outcomes
We assessed the following outcomes by linkage to prospective healthcare records: all-cause mortality; all-cause hospitalisations, MACE; hospitalisation with COPD exacerbation; community COPD exacerbation. Follow-up was 8 years.
Mortality was assessed through linkage to national mortality registers. Hospitalisations were defined as any hospital admission coded as 'urgent' or 'emergency' (excluding 'elective' admissions). MACE was defined using International Classification of Diseases 10th Revision (ICD-10) codes from mortality records (cardiovascular death) and hospital episode statistics (non-fatal myocardial infarction (I21) or stroke (I63-I64)). Hospitalised COPD exacerbations were defined using previously validated ICD-10 codes (acute exacerbation of COPD (J44.0 or J44.1) or lower respiratory tract infection (J22) codes in any position, or COPD code (J44.9) in first position of a hospital episode). 34 Community COPD exacerbations were identified using a previously validated combination of primary care diagnostic codes, symptom codes and prescriptions. 35 We defined an exacerbation as either (1) a medical diagnosis of lower respiratory tract infection or acute exacerbation of COPD, (2) prescription of a COPD-specific antibiotic combined with an oral corticosteroid prescription or (3) two or more respiratory symptoms recorded on the same day as prescription of COPD-specific antibiotics or oral corticosteroids. These criteria were applied after excluding events occurring on the same day as codes suggesting routine annual COPD reviews or provision of rescue medication. 35
Statistical analysis
The overall distribution of each frailty measure was summarised descriptively using bar plots. The relationship between frailty and baseline characteristics was summarised using descriptive statistics (means and SD or counts and percentages for continuous and categorical variables, respectively). For the frailty index, we summarised this data using categories of the frailty index (robust, mild, moderate, severe) as described above.
To assess the relationship between each frailty measure and adverse clinical outcomes, we used Cox proportional hazards models (for all-cause mortality and MACE, modelling time to first event for MACE) and negative binomial models (for all-cause hospitalisations, hospitalised COPD exacerbations and community COPD exacerbations). For MACE, a cause-specific model was used, with participants dying of other causes censored at death with event status set to '0'. All models were initially adjusted for age, sex, socioeconomic status, body mass index, smoking and alcohol frequency (model 1) and then additionally adjusted for FEV1 (expressed as a percentage of predicted FEV1 based on age, height and ethnicity) (model 2). Negative binomial models also included an offset term of log observation time. In all models, fractional polynomials were used to model non-linear associations between numerical variables (frailty index, age, socioeconomic status and per cent predicted FEV1). We assessed interactions using product terms between frailty and age and between frailty and per cent predicted FEV1, to assess whether the association between frailty and outcomes varied depending on age or severity of COPD. Interaction terms were retained if they improved model fit (assessed using the Akaike Information Criterion).
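As a rough illustration of these two model families (not the study's own analysis code, which was written in R and included fractional polynomial terms and additional covariates), the sketch below fits a Cox proportional hazards model and a negative binomial model with a log observation-time offset to simulated data; all variable names and values are hypothetical, and it assumes the lifelines and statsmodels packages are available:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "frail":      rng.integers(0, 2, n),       # 1 = frail, 0 = robust (hypothetical)
    "age":        rng.uniform(40, 70, n),
    "fev1_pct":   rng.normal(65, 15, n),
    "follow_yrs": np.full(n, 8.0),
})
# Hypothetical outcomes, loosely higher-risk when frail
df["time"]  = rng.exponential(10 / (1 + df["frail"].to_numpy()), n).clip(0.1, 8.0)
df["death"] = (df["time"] < 8.0).astype(int)   # censored at 8 years of follow-up
df["admissions"] = rng.poisson(0.5 * (1 + 2 * df["frail"].to_numpy()), n)

# Cox proportional hazards model for all-cause mortality
cph = CoxPHFitter()
cph.fit(df[["time", "death", "frail", "age", "fev1_pct"]],
        duration_col="time", event_col="death")
print(cph.summary[["coef", "exp(coef)"]])      # exp(coef) = hazard ratio

# Negative binomial model for hospitalisations with log observation-time offset
X = sm.add_constant(df[["frail", "age", "fev1_pct"]])
nb = sm.GLM(df["admissions"], X,
            family=sm.families.NegativeBinomial(),
            offset=np.log(df["follow_yrs"])).fit()
print(np.exp(nb.params["frail"]))              # incidence rate ratio for frailty
```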
In sensitivity analyses, we repeated all of the above analyses restricting the sample to those with primary-care-based spirometry values (as UK Biobank spirometry data were not postbronchodilator). We also repeated all analyses using FEV1 expressed as an absolute value instead of as a percentage of predicted FEV1.
Finally, in post hoc analyses, we modelled the relationship between frailty and mortality, and between frailty and hospital admissions in the full cohort (with available primary care data), including a term for the interaction between frailty and COPD. This was to assess whether any relationship between frailty and mortality or hospitalisation was similar in people with and without COPD.
All analyses were performed using R.
Patient and public involvement
Patients were not involved in the planning and conduct of this research.
RESULTS
We identified 3132 UK Biobank participants with a COPD-specific primary care diagnostic code prior to baseline assessment (flow diagram shown in online supplemental appendix 1). Of these, 2820 had spirometry data (2203 from primary care data recorded up to 2 years before baseline assessment, with 617 relying on UK Biobank spirometry), 3011 (96%) had complete data on frailty phenotype variables and 3131 (99.9%) had sufficient data to calculate the frailty index. The total number of participants included in each analysis is shown in the flow diagram in online supplemental appendix 1. The prevalence of frailty was 17% (n=514) using the frailty phenotype, while with the frailty index 28% (n=872) had moderate frailty and 4% (n=121) had severe frailty. For both frailty measures, prevalence was higher in people with COPD than in the wider cohort (online supplemental appendix 1). Baseline characteristics are shown in table 1. The relationship between frailty and per cent predicted FEV1 is shown in figure 1.
Airflow limitation was modestly lower in frailty based on the frailty phenotype (with considerable overlap in the distributions). However, this relationship was not seen between airflow limitation and the frailty index. The relationship between frailty and clinical outcomes is summarised in figure 2. Using both the frailty index and the frailty phenotype definition, presence of frailty was associated with greater risk of all-cause mortality, MACE, all-cause hospitalisations, hospitalisation with COPD exacerbation and community COPD exacerbation. For MACE, CIs for different levels of frailty index, and for prefrailty and frailty, were overlapping. The relative effect of frailty on each of these outcomes was similar before and after adjusting for airflow limitation, with only modest attenuation of the effect estimates.
The predicted risks of clinical outcomes at different levels of frailty and airflow obstruction are shown in figure 3 (all-cause mortality and MACE), figure 4 (all-cause hospitalisation and hospitalised COPD exacerbations) and online supplemental appendix 1 (community COPD exacerbations).
At all levels of frailty, the risk of all-cause mortality rose in a non-linear fashion with lower FEV1. There was no evidence of statistical interaction between either frailty definition and FEV1 or between age and either frailty or FEV1. This implies that, although the relative increase in mortality risk with frailty was similar at all levels of airflow obstruction, the absolute difference in mortality risk between 'robust' and 'frail' individuals was greatest in participants with lower FEV1. Furthermore, although the relative impact of frailty did not vary with age, absolute risk of outcomes is also therefore greater among older participants at any given level of frailty.
For MACE, the relationship with airflow limitation, as well as with frailty, was more modest. However, both were independently associated with a higher risk of MACE.
For hospitalisations and COPD exacerbations (hospitalised or community), there was a clear increase in risk with both airflow limitation and with frailty (figure 4 and online supplemental appendix 1). As with mortality and MACE, there was no evidence of statistical interaction.
In sensitivity analyses based on primary care-coded spirometry data, all results were similar, including the relationship between frailty and FEV1 and the relationship between frailty and clinical outcomes adjusting for FEV1. Findings were also similar when using raw FEV1 values rather than per cent-predicted FEV1. Finally, the relationship between frailty and mortality and between frailty and hospital admissions, on the relative scale, was similar between people with and without COPD (with no evidence of statistical interaction, shown in the online supplemental appendix 1).
[Figure 2. HRs and incidence rate ratios (IRRs) for the association between frailty and clinical outcomes. Two models are presented: model 1 (adjusted for age, sex, socioeconomic status, smoking and alcohol frequency) and model 2 (adjusted for all covariates in model 1 plus forced expiratory volume in 1 s).]
DISCUSSION
Frailty is common in 'middle-aged' as well as older people with COPD and is associated with a range of adverse health outcomes. In UK Biobank participants with COPD, aged between 40 and 70, frailty prevalence was 17% using the frailty phenotype, while using the frailty index 28% had moderate and 4% had severe frailty. The frailty phenotype, but not the frailty index, was associated with lower percent-predicted FEV1. Both frailty definitions were associated with higher all-cause mortality, MACE, hospitalisations and both hospitalised and community COPD exacerbations. The relationship with each of these adverse outcomes was independent of the degree of airflow limitation, for both frailty definitions. However, the difference in absolute risk between frail and robust participants was greatest in those with severe airflow limitation. These findings demonstrate that frailty is a common and clinically significant concept in people with COPD, including those aged <65 years in whom it is not routinely identified and has been infrequently studied.
Our findings that frailty in COPD is associated with mortality independently of FEV1 are consistent with some previous studies, 19 21 although some have shown null associations after adjustment for age and FEV1. 22 23 These studies varied in their frailty definition, sample size and length of follow-up. Frailty has also been associated with exacerbations in two cross-sectional and one longitudinal study. 18 21 36 The association with MACE has not been described in previous studies of frailty in COPD.
Our findings that frailty was common in people with COPD are in keeping with previous epidemiological studies of frailty in COPD 12 as well as the wider literature on the broad physiological implications of COPD. 37 COPD impacts multiple organ systems and is often associated with muscle weakness, osteoporosis and malnutrition. 10 11 The severity of COPD is best characterised by a multidimensional assessment reflecting these broad impacts. For example, the BODE (Body-mass index, airflow Obstruction, Dyspnea, and Exercise) index comprises four domains (body mass index, FEV1, dyspnoea assessed using the modified Medical Research Council scale and exercise capacity based on the 6 min walking distance). It is used to assess the severity of COPD, and it is a superior predictor of mortality in COPD than FEV1 alone. 38 Domains of the BODE index have considerable overlap with features of the frailty phenotype (eg, weight loss and slow walking speed) and are commonly-used deficits within the frailty index. However, the extent to which frailty is caused by these features of COPD, or reflects a physiological decline distinct from COPD, is not clear. The development of frailty is multifactorial with multiple potential causal mechanisms. Many of these, including environmental exposures, systemic inflammation and altered body composition, are closely linked to COPD (either as common causal factors, such as environmental exposures, or as sequelae of COPD that may contribute to the development of frailty). As frailty development is multifactorial, this is likely to vary between individuals and may also differ depending on the measure used to define frailty.
Frailty is a dynamic concept. Longitudinal studies have shown that COPD is associated with the transition from a robust to a frail state using the frailty phenotype. 39 40 Conversely, some people with frailty and COPD undergoing pulmonary rehabilitation show a marked improvement in frailty status. 41 Therefore, while COPD may be a risk factor for frailty progression, the shared features may offer opportunities for interventions targeting both frailty status and COPD. The observation that frailty may improve in the context of pulmonary rehabilitation, as described by Maddocks et al, 41 is consistent with recent reviews of interventions targeting frailty in general, in which exercise and nutritional interventions have shown the most promise in ameliorating frailty. 42 Identification of people with COPD and frailty may, therefore, be beneficial for both identification of risk and for targeted intervention. Our findings demonstrate that this identification should not be limited to 'older' people with COPD, as frailty is prevalent across a wide age range and associated with a range of clinically important outcomes.
Strengths of this study include its large sample size and prospective linkage to a wide range of healthcare outcomes. We also used validated definitions, based on linked diagnostic codes, to identify baseline COPD and subsequent exacerbations. 26 34 35 The range of variables available from the UK Biobank baseline assessment also allows the analysis of two separate measures of frailty. However, there are some important limitations. Our definition of the frailty phenotype was adapted from the original. 5 7 Unlike the original, weight loss was not specified as unintentional in UK Biobank and walking speed was self-reported rather than measured. The frailty index was constructed according to the standard protocol; however, there is a relative lack of functional measures and few measures of sensory or cognitive impairment. UK Biobank is also not nationally representative, with participants being on average more affluent, having fewer comorbidities, and being more predominantly of White ethnicity than the UK population. This lack of representativeness may lead to bias in the estimation of associations between exposure and outcomes. For example, UK Biobank appears to underestimate the risks of mortality, hospitalisation and MACEs associated with high levels of multimorbidity. 43 It is likely, therefore, that our estimates of the associations between frailty and adverse outcomes are conservative. UK Biobank spirometry data are also not postbronchodilator; however, we used primary care spirometry data where possible (available for 70% of participants), which have been shown to be of high quality, and our findings were consistent when restricting our analysis to those with primary care spirometry alone.
Conclusion
Our findings demonstrate that frailty is common in people with COPD, including those under 65 years of age, and has clinically significant implications for this population regardless of which frailty definition is used.
This relationship is independent of the degree of airflow limitation. Identification of frailty in people with COPD may aid risk stratification and identification of those who may benefit from targeted interventions. For this to be beneficial, frailty assessment would need to become integrated into the routine monitoring and management of COPD.
Contributors PH, JL, JKQ, DAM and FM designed the study and wrote the analysis plan. BIN is the data holder under UK Biobank project 14151. PH performed the analysis. PH, BDJ, JKQ, JL, DAM and FM interpreted the findings. PH wrote the first draft. PH, BDJ, JKQ, JL, DAM and FM reviewed this and subsequent drafts and approved the final version for submission. PH, BDJ, BIN, DAM and FM had full access to the data. FM is the guarantor.
Competing interests FM is the principal supervisor of PH (first author), who is funded by a MRC Clinical Research Training Fellowship (Grant reference: MR/S021949/1) which supported PH to do this work. FM is also a principal investigator or co-investigator on grants funded by the MRC, NIHR, Wellcome, CSO, and EPSRC to undertake multimorbidity research. The funds go to FM's institution, the University of Glasgow.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Patient consent for publication Not applicable.
Ethics approval
The UK Biobank has full ethical approval from the NHS National Research Ethics Service (16/NW/0274). All participants gave informed consent for participation in UK Biobank. Access to UK Biobank data was granted under project 14151. Participants gave informed consent to participate in the study before taking part.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement Data may be obtained from a third party and are not publicly available. The UK Biobank data that support the findings of this study are available from the UK Biobank ( www. ukbiobank. ac. uk), subject to approval by UK Biobank.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
Gene-Expressing Liposomes as Synthetic Cells for Molecular Communication Studies
The bottom-up branch of synthetic biology includes—among others—innovative studies that combine cell-free protein synthesis with liposome technology to generate cell-like systems of minimal complexity, often referred to as synthetic cells. The functions of this type of synthetic cell derive from gene expression, hence they can be programmed in a modular, progressive and customizable manner by means of ad hoc designed genetic circuits. This experimental scenario is rapidly expanding and synthetic cell research already counts numerous successes. Here, we present a review focused on the exchange of chemical signals between liposome-based synthetic cells (operating by gene expression) and biological cells, as well as between two populations of synthetic cells. The review includes a short presentation of the “molecular communication technologies,” briefly discussing their promises and challenges.
MOLECULAR COMMUNICATIONS AND SYNTHETIC CELLS (SCs)
Natural organisms coordinate their activities through communication. Isolated cells, tissue cells, as well as higher organisms, share their environment with other living forms. Tactile, physical, and especially chemical signals define in unique and complex manner the sensory world of living beings. Communications in the chemical domain are ubiquitous intercellular processes, and play important roles in all organisms.
Inspired by the already mentioned capabilities of natural organisms, a new branch of biomimetic technology has been proposed which focuses on molecular communications (Nakano et al., 2011(Nakano et al., , 2013. Network engineers have envisioned the exploitation of chemical exchanges as the basis for developing new types of Information and Communication Technologies (the so-called bio-chem-ICTs, Figure 1A). This is an exciting new arena for engineers and biologists that aims at the construction of well-characterized biological parts, devices, and systems that will process chemical information in a controlled and programmable manner, as it happens with classical electric signals. The challenge, here, relies on the ability of managing communication and information processing through chemical signals with the same mastery as nature has done for billions of years. Such a broad and innovative territory of research offers several opportunities for various approaches to synthetic biology, which needs adequate theoretical frameworks, numerical modeling strategies, and experimental methodologies. More generally, bio-chem-ICTs refers to radically new forms of computation, communication, and information processing approaches-at the nano-and micro-scale levels-based on chemical and biochemical systems (Amos et al., 2011). bio)chem-information and communication technology that can be applied to nanomedicine (smart drug delivery systems), smart responsive materials, synthetic biology (construction of biochips), artificial intelligence (AI), hybrid bio-electronic systems and for sensors in environmental monitoring (Nakano et al., 2013). (B) Synthetic cells are cell-like systems, generally built by encapsulating a number of (bio)molecular components into artificial micro-compartments. One of the possible designs focuses on liposome-based SCs operating by gene expression (Luisi, 2002;Luisi et al., 2006). With this aim, TX-TL kits produce the protein(s) of interest starting from the corresponding DNA sequence. The SC membrane can be functionalized with membrane proteins as pores (Noireaux and Libchaber, 2004) and receptors (Hamada et al., 2014); cytoskeletal proteins have been implemented as well (Maeda et al., 2012). (C) The principles of autopoiesis (self-production) (Varela et al., 1974), which guides the long-term goal of constructing SCs that produce all their components. Autopoiesis provides insights into the spatial and dynamical organization that a chemical system should be endowed with in order to display self-maintenance, organizational closure, homeostasis and reproduction achieved by the internal processes of manufacturing and assembling its components. (D) Schematic representation of a SC which produces and releases a signal molecule into the environment. The signal is perceived by a natural cell (e.g., a bacterium) that consequently activates a response (for example, a reporter protein, an enzyme operating as an actuator to perform a certain operation, including a reply signaling) (Nakano et al., 2011;Stano et al., 2012). Table 1 reports several cases of unidirectional or bidirectional molecular communications between SCs, or between SCs and natural cells. (E) The vision of using SCs as smart drug delivery systems or for enzyme replacement therapy (Leduc et al., 2007). 
SCs, intended as a biotechnological evolution of current liposomes for drug delivery, reach and bind to the target cells by a molecular recognition mechanism, activate their internal circuits in response to chemical stimuli, and consequently act, in a programmable manner, to accomplish a certain task (e.g., producing a therapeutic or diagnostic agent (Ding et al., 2018; Krinsky et al., 2018), or a secondary easy-to-detect signal, etc.). The chemical stimulus can be an endogenous chemical that derives from the target cell itself (as shown in the cartoon) or from other tissues (not shown), as well as purposely added exogenous chemicals (not shown).
Owing to our direct involvement in the field (Rampioni et al., 2014, 2018), and considering recent exciting reports, in this review we present and discuss the intersection between the bio-chem-ICT idea of exchanging chemical signals in a programmable way, and the bottom-up synthetic biology approach focused on the construction of cell-like systems based on gene expression inside liposomes (Luisi, 2002; Noireaux and Libchaber, 2004; Luisi et al., 2006; Ichihashi et al., 2010; Stano et al., 2011; Nourian and Danelon, 2013; Spencer et al., 2013). For simplicity, we will refer to these systems simply as "synthetic cells" (SCs, Figure 1B), keeping in mind that they are rather simple mimics of biological cells.
In this mini-review, the principles on which liposome-based SCs operate will be summarized, together with an explanation of why they could contribute significantly to molecular communication technologies on account of their inherent possibilities in terms of design, modeling, control, programmability, and modularity. Next, recent experimental reports focused on chemical communication between SCs and natural cells (or with other SCs) will be reviewed (see also Lentini et al., 2016), while the opportunities and challenges facing this novel research arena will be discussed in the final section.
Before advancing in the discussion, two notes of warning are intended for readers unfamiliar with this research field. First, the term "synthetic cell" is also used in synthetic biology to indicate living cells generated either by engineering biological cells (e.g., metabolic engineering, genetic optimization, or reprogramming) or by the transplantation of an entire synthetic genome into a living cell deprived of its own genome. Second, bottom-up synthetic biology approaches aiming at constructing cell-like systems are not restricted to liposome-based SCs. No less interesting are systems based on other types of compartments (Walde et al., 1994; Martino et al., 2012; Huang et al., 2014; Karzbrun et al., 2014; Dora Tang et al., 2015; Rideau et al., 2018), or those based on new artificial molecules (Kurihara et al., 2011; Marguet et al., 2013; Taylor et al., 2015). Interested readers can refer to recent reviews for a broader discussion (Buddingh and van Hest, 2017; Salehi-Reyhani et al., 2017; Göpfrich et al., 2018; Schwille et al., 2018). The current review will focus only on SCs based on gene expression inside liposomes.
BASIC PRINCIPLES ON LIPOSOME-BASED SCs OPERATING VIA GENE EXPRESSION
SCs based on gene expression inside liposomes find their origin in early studies on cell models aiming at achieving minimal lifelike behaviors (Morowitz et al., 1988; Luisi and Varela, 1989; Schmidli et al., 1991; Oberholzer et al., 1995a,b, 1999; Szostak et al., 2001; Luisi, 2002; Pohorille and Deamer, 2002; Mansy and Szostak, 2009). Born within the origins-of-life community, this research was intended as a means of investigating the emergence of life on Earth, more precisely by demonstrating the emergence of life as a system-level phenomenon due to a particular type of organization (the autopoietic one). Hence, the autopoietic (self-production) theory (Varela et al., 1974; Luisi and Varela, 1989; Luisi, 2003) (Figure 1C) and the chemoton theory (chemical automaton) (Gánti, 1975) are two valuable theoretical frameworks for the construction of SCs which display features of biological organisms. Starting in the early 2000s, SCs and similar constructs became highly relevant also in the context of synthetic biology, either as tools for generating basic knowledge, or as systems designed for applied research, i.e., biotechnology and nanomedicine.
The SCs discussed in this review are liposomes, with a size typically ranging from 0.1 to 10-100 µm: they contain DNA and a cell-free gene expression system. They are made by assembling liposomes in an aqueous phase which contains all the molecules needed for accomplishing protein synthesis from a DNA template (e.g., enzymes, ribosomes, tRNAs, nucleotides, amino acids, etc.). The protein synthesis machinery can derive from a cell extract or from a reconstituted system [such as the PURE system (Shimizu et al., 2001)]. Accordingly, it can be noted that SC technology is based on liposome technology (including microfluidics) and on cell-free systems (including biochemical reconstitution approaches). As a result of the reactions occurring in their aqueous lumen and/or on their boundary surface, SCs can display behavior(s) typical of living cells. For example, SCs produce proteins from a corresponding gene; in turn, the synthesized protein can be an enzyme that converts substrates into products, or a pore-forming protein creating pores in the liposome membrane, or a receptor that binds a signal molecule, etc. More generally, SCs can be functionalized with any chemical network of biological relevance that is functional in vitro.
Several reactions different from gene expression have been successfully performed inside liposomes, confirming the potential of SCs in terms of scope, programmability, and functionality. Some examples are: PCR and RT-PCR (Oberholzer et al., 1995a; Shohda et al., 2011; Lee et al., 2014; Tsugane and Suzuki, 2018), DNA replication (Sakatani et al., 2018; van Nies et al., 2018), and several enzymatic reactions. Moreover, cytoskeletal elements have been reconstituted inside SCs (Cabré et al., 2013; Furusato et al., 2018; Litschel et al., 2018). Ad hoc designed gene circuits lead to SCs that can perform useful operations in a programmable way, including communication, as discussed below. SCs with the capacity of self-producing all their own constitutive components, and which possibly grow and divide as living cells do, are still missing, although interesting reports that show progress in this direction have been published (Kurihara et al., 2011).
This mini-review focuses on SCs capable of communicating with biological cells and with each other. However, other interesting research directions are under current development, including the construction of SCs with nested design (Deng et al., 2017; York-Duran et al., 2017; Hindley et al., 2018), the production of ATP inside SCs (Feng et al., 2016; Altamura et al., 2017; Lee et al., 2018), the attempts at self-producing SC parts (Schmidli et al., 1991; Kuruma et al., 2009; Scott et al., 2016; Li et al., 2017; Exterkate et al., 2018), and the shift from isolated SCs to "SC communities", including tissue-like structures (Hadorn et al., 2013; Booth et al., 2016).
SCs THAT EXCHANGE CHEMICAL SIGNALS: A BOTTOM-UP SYNTHETIC BIOLOGY PLATFORM FOR MOLECULAR COMMUNICATIONS
SCs based on gene expression inside liposomes can be useful tools for developing molecular communication technologies. Current SC technology allows building simple systems capable of exchanging chemical signals, and therefore of performing elementary signal processing. The idea is to design SCs capable of communicating with each other or with biological cells in a programmable manner (Figure 1D). This innovative perspective has manifold theoretical and practical consequences. From the theoretical viewpoint, SCs that can regulate their internal mechanisms in response to external perturbations (the chemical signaling) are de facto experimental tools for investigating minimal cognitive systems (Damiano and Stano, 2018a,b). Considering the proposed extension of the Turing imitation game to the SC realm (Cronin et al., 2006), molecular communication can contribute to the determination of life-likeness criteria for SCs, as recently investigated by the Sheref Mansy group (Lentini et al., 2017). In a more practical perspective, an expansion of current drug delivery strategies can be proposed. Inspired by the scenario depicted by Leduc and collaborators (Figure 1E) (Leduc et al., 2007), SCs could activate internal mechanisms upon perception of chemical signals, thus acting as "intelligent" drug carriers. As an example, SCs could be targeted to specific cells (e.g., tumoural cells) by exploiting antigen-antibody recognition. Once localized, their internal genetic circuit could be activated by chemical stimuli produced by the target cell itself or by other endogenous or exogenous chemical signals. These "smart" SCs could produce and release therapeutics (or drugs) in situ. Note that a recent study has reported SCs (injected into the tumor) that constitutively produce a toxin against breast cancer cells (Krinsky et al., 2018). The therapeutic (or diagnostic) use of SCs is, today, still a hypothetical scenario. Nevertheless, continuous improvements in SC design and construction are expected to favor more rapid prototyping, thus accelerating the path toward applicative purposes.
Sensors, Actuators, Controllers, and Molecular Diffusion
Like hardware robots or conventional communication devices, SCs are embodied systems composed of molecular elements that perform specific operations. Hardware components, such as sensors, controllers, and actuators (Mataric, 2007; Wang et al., 2013), have their molecular counterparts in SCs.
In the context of SCs operating by gene expression, sensors can be protein receptors or RNA aptamers that bind to a signal molecule and consequently change their conformation. This event directly or indirectly affects the "controller system", which is based on the regulation of gene expression by protein receptors or RNA aptamers (riboswitches) at the transcriptional or translational level, respectively. These mechanisms are well understood (Alberts et al., 2014). Depending on its design, the regulatory circuit can involve a single gene or multiple genes. As a result of this sensing-and-regulation system, the synthesis of an actuator (a protein) is promoted or inhibited. In turn, the actuator operates on some further step (e.g., producing a signal molecule, catalyzing a useful reaction, creating a pore on the SC membrane, acting as a controller/regulator of another circuit, etc.). Key examples of this general mechanism will be commented on in section A Survey of Published Reports and listed in Table 1.
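To make the sensor-controller-actuator chain more tangible, the sketch below simulates a deliberately simplified circuit of this kind with lumped mass-action and Hill kinetics. All species names and rate constants are hypothetical illustrations chosen only for this sketch; they do not correspond to any of the circuits used in the studies cited in this review.

```python
# Minimal sketch of the sensor -> controller -> actuator cascade described above,
# with lumped mass-action/Hill kinetics. Species and rates are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

k_on, k_off = 1.0, 0.1      # signal-receptor binding/unbinding (1/(uM*h), 1/h)
k_expr, K_half = 5.0, 0.2   # maximal expression rate (uM/h) and activation threshold (uM)
k_deg = 0.5                 # first-order loss of the actuator protein (1/h)
k_cat = 2.0                 # rate at which the actuator makes the output signal (1/h)

def cascade(t, y):
    S, R, C, A, O = y                        # signal, free receptor, complex, actuator, output
    bind = k_on * S * R - k_off * C          # sensor: signal binds the receptor
    expr = k_expr * C / (K_half + C)         # controller: the complex activates expression
    return [-bind, -bind, bind, expr - k_deg * A, k_cat * A]

y0 = [1.0, 0.5, 0.0, 0.0, 0.0]               # 1 uM external signal, 0.5 uM receptor
sol = solve_ivp(cascade, (0.0, 24.0), y0, dense_output=True)
t = np.linspace(0.0, 24.0, 7)
print("actuator (uM):", np.round(sol.sol(t)[3], 3))
print("output   (uM):", np.round(sol.sol(t)[4], 3))
```

Toy models of this kind are useful mainly for reasoning about response times and orders of magnitude, not for quantitative prediction of any specific circuit.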
To provide SCs with communication capability, water-soluble proteins (sensors, regulators, signal-producing elements, or components of the gene expression machinery) should be either encapsulated or synthesized in the SC lumen. This has become, to some extent, standard practice, at least for some prokaryotic proteins (Stano et al., 2011). It is less trivial, instead, to deal with membrane-associated and integral membrane sensors/receptors, even if reports have shown that this is a feasible goal in SC technology (strategies such as membrane protein reconstitution (Yanagisawa et al., 2011; Altamura et al., 2017; Jørgensen et al., 2017) or synthesis-from-within (Kuruma et al., 2009; Hamada et al., 2014; Soga et al., 2014) have been employed). Genetic circuits of distinctive complexity have already been proven to be functional, also inside liposomes (Noireaux et al., 2003; Shin and Noireaux, 2012; Siegal-Gaskins et al., 2014).
In addition to the molecular elements, in order to establish an intercellular communication channel, the diffusion of the signal molecule in the outer aqueous environment should be taken into account. The signal molecule cannot be directed toward the communication partner, but spreads in all directions, guided by the concentration gradient. Although the average behavior of many signal molecules can be foreseen, individual molecules follow an erratic path. In addition to free diffusion, for closely packed SCs, communication through gap junctions (reconstituted in liposomes) has been proposed (Ramundo-Orlando et al., 2005; Moritani et al., 2010).
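For orientation, the short sketch below evaluates the standard three-dimensional point-source solution of the diffusion equation, which describes the average concentration profile of a burst of signal molecules released by a sender. The diffusion coefficient and the number of released molecules are illustrative assumptions, not measured values from the cited studies.

```python
# Hedged sketch: mean concentration of N signal molecules released at t = 0 from a
# point-like sender, using the standard 3-D diffusion (heat) kernel. D and N assumed.
import numpy as np

D = 500.0          # diffusion coefficient in um^2/s (typical small molecule, assumed)
N = 1.0e6          # number of molecules released at t = 0 (assumed)

def concentration(r_um, t_s):
    """Molecules per um^3 at distance r (um) and time t (s) after release."""
    return N / (4.0 * np.pi * D * t_s) ** 1.5 * np.exp(-r_um**2 / (4.0 * D * t_s))

for t in (0.1, 1.0, 10.0, 100.0):
    print(f"t = {t:6.1f} s  C(10 um) = {concentration(10.0, t):.3e} molecules/um^3")
```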
A Survey of Published Reports
The pioneering experimental report on a simple cell-like system sending a signal molecule to biological cells was published by the Ben Davis group (Gardner et al., 2009). The authors encapsulated the precursors of the formose reaction inside liposomes, and observed that one class of products of the intra-vesicular reaction escaped the liposomes through a channel formed by α-haemolysin and spontaneously reacted with the borate ions present in the external medium to generate furanosyl-boronates structurally similar to the quorum sensing (QS) signal molecule AI-2, which naturally triggers bioluminescence in Vibrio harveyi. Remarkably, the "synthetic" signal released by the liposomes was able to induce a natural behavior (i.e., light emission) in this bacterium.
Despite its great interest as a proof-of-concept study, the SCs used by Ben Davis and co-workers were not based on gene expression, and therefore they lacked those aspects of programmability and control that are peculiar to synthetic biology. Being a novel research area, the literature on liposome-based SCs which operate by gene expression to interface with natural cells (or with other SCs) is, to the best of our knowledge, limited to the six studies summarized in Table 1, together with the already cited study by Gardner et al. (2009). Additional cases involving non-liposome compartments are also available (Gupta et al., 2013; Schwarz-Schilling et al., 2016; Sun et al., 2016; Niederholtmeyer et al., 2018), but these will not be discussed in this mini-review.
In 2014, Sheref Mansy and collaborators designed SCs acting as "translators" for the bacterium Escherichia coli, using theophylline as trigger and isopropyl β-D-1-thiogalactopyranoside (IPTG) as signal molecule (Lentini et al., 2014). These SCs are liposomes containing IPTG, the PURE system as the transcription-translation (TX-TL) machinery, and a DNA template coding for a riboswitch that, after binding to the freely diffusible molecule theophylline, activated the expression of the pore-forming protein α-haemolysin. The authors demonstrated that only in the presence of theophylline did IPTG escape the liposomes through α-haemolysin and activate the expression of the green fluorescent protein (GFP) gene in receiver E. coli cells. In this way, SCs acted as chemical translators allowing E. coli to sense theophylline (a molecule that cannot normally be sensed by E. coli). Adamala et al. (2017) built SCs containing engineered genetic circuits and regulatory cascades. These SCs can be controlled/triggered by external signals, and can be fused together in order to bring together the products of incompatible reactions. In particular, the group led by Edward Boyden showed that, by using cell lysates with transcriptional-translational activity, DNA vectors encoding genes for IPTG (or doxycycline) detection, and permeable chemical inducers such as arabinose or theophylline, the arabinose (or theophylline) activates α-haemolysin production in a first SC population, so that pre-encapsulated impermeable IPTG (or doxycycline) can be released and thus activate a response in a second SC population.
The group of Sheref Mansy recently reported two-way chemical communication between SCs and bacteria (Lentini et al., 2017). They exploited cell extracts to generate SCs able to synthesize molecules perceived by Vibrio fischeri, V. harveyi, E. coli, and Pseudomonas aeruginosa. In particular, the expression of LuxI-like synthases inside liposomes, in the presence of acetyl coenzyme A and S-adenosylmethionine (SAM), resulted in the production of molecules able to activate E. coli- and V. fischeri-based biosensor strains for acyl-homoserine lactone (AHL) detection. Cell extracts operated both for TX-TL reactions and for the synthesis of some AHL precursors. Moreover, it was shown that SCs containing ad hoc designed genetic circuits could express QS signal molecule receptors able to trigger the expression of reporter and QS signal synthase genes (e.g., gfp and luxI) upon perception of QS signal molecules produced by bacteria. The extent to which SCs could "imitate" natural cells in terms of their response to the investigated QS signal molecule was estimated by a sort of cellular Turing test (Cronin et al., 2006).
The signaling between liposome-based SCs and proteinosomes (cell-like particles made of proteins), mediated by glucose, has been recently reported in a joint work by the groups of Sheref Mansy and Stephen Mann (Tang et al., 2018). In this study, the unidirectional signaling pathway was based on: (i) liposome transmitters, containing the PURE system, a DNA plasmid carrying a chemically inducible repression switch (EsaR), a gene coding for α-haemolysin, and glucose; (ii) proteinosome receivers, consisting of a cross-linked, enzymatically active glucose oxidase (GOx)-poly(N-isopropylacrylamide) (PNIPAAm) membrane and encapsulated horseradish peroxidase (HRP). The addition of the permeable AHL molecule N-(3-oxohexanoyl)-L-homoserine lactone (3OC6-HSL) triggered intravesicular α-haemolysin expression and consequent membrane pore formation in the liposome-based SCs, which allowed the release of the glucose contained in the aqueous lumen. Glucose oxidation on the proteinosome membrane produced hydrogen peroxide, which in turn was converted into a fluorescent output by reacting with the HRP encapsulated in the proteinosomes. This study provides an example of molecular communication between two different types of artificial cell-like systems.
A recent report comes from our laboratory, and it deals with unidirectional SC-to-P. aeruginosa communication, based on the QS AHL signal molecule C4-HSL (Rampioni et al., 2018). In particular, SCs were prepared by encapsulating the PURE system inside GVs prepared by the droplet transfer method (Pautot et al., 2003; Fujii et al., 2014), together with butyryl coenzyme A and SAM as precursors, and a plasmid encoding RhlI, the synthase for C4-HSL production. SCs produced C4-HSL (a natural QS signal molecule), which was perceived by P. aeruginosa both in liquid medium and in gel. In particular, P. aeruginosa modified its gene expression pattern in response to the C4-HSL produced by SCs, demonstrating that the reprogramming of gene expression in the bacterial cell is similar when interacting with other bacteria or with SCs. The entire TX-TL mechanism was assessed by rhlI mRNA and RhlI protein quantification, as well as by chemical identification of the C4-HSL signal produced by the SCs. The experimental results interestingly match previously published numerical modeling (Rampioni et al., 2014), confirming the predictive power of in silico simulations in SC research.
Finally, the Tan group reported an interesting study where SCs and bacteria engaged in unidirectional communication in various ways (SCs to SCs, bacteria to SCs, and SCs to bacteria) (Ding et al., 2018). In this case, the QS signal molecule was produced via the EsaI synthase and was perceived by the cognate EsaR receptor. Gene expression in SCs was triggered when binding of the QS signal molecule to EsaR led to derepression of an EsaR-controlled promoter region. Quite interestingly, the authors designed SCs that produce an antimicrobial peptide (Bac2A) in response to QS signal molecules sent by bacteria, a proof of principle of the use of signal processing and actuation dynamics for the generation of SCs interfacing with natural cells. Moreover, SCs embedded in biofilms were also reported.
DIRECTIONS AND CHALLENGES FOR FUTURE WORK
The works compared in Table 1 represent pioneering proof-of-concept studies that will likely stimulate further research to expand SC capabilities related to molecular communications. In this context, several challenges and open questions can be envisaged. Some refer to mechanistic, biochemical, and biological aspects, others to the capability of engineering molecular communications.
With respect to the mechanisms of molecular communication, "sender" and "receiver" SCs have mainly relied on the transmembrane diffusion of signal molecules. This simple approach has been effective because some QS signal molecules, such as short-tail AHLs, can cross the lipid bilayer (Pearson et al., 1999). The generation of α-haemolysin pores is a drastic (yet effective) solution that has been used to bypass the low permeability of SC membranes when signal molecules that are not freely diffusible have been used (e.g., IPTG or glucose), but this causes the release of all the low-MW compounds contained inside SCs (the cut-off molecular weight for the α-haemolysin pore is 3 kDa; Song et al., 1996). An alternative could be the use of DNA nanopores, whose properties are tunable by design (Krishnan et al., 2016). The future employment of more sophisticated import/export mechanisms based on membrane proteins will allow expanding the chemical repertoire of signal molecules secreted or perceived by SCs (e.g., peptides), thus increasing communication capability and specificity. In this respect, ongoing progress on the functionalization of SC envelopes with integral membrane proteins is promising (see section Basic Principles on Liposome-Based SCs Operating via Gene Expression).
Looking at the biological partners of SCs for molecular communications, early studies focused on bacteria, since they are prone to genetic engineering and their intercellular communication systems have been thoroughly studied at the molecular level, especially in the case of QS systems. From a practical viewpoint, SC/bacteria communication is a technological platform for the long-term goal of interfering with bacterial populations and for therapeutic strategies that could be devised against infections. Indeed, the ability of SCs to drive gene expression in response to external cues envisages the generation of injectable SCs endowed with the ability to produce or release an antimicrobial compound only in response to a signal molecule produced by a bacterial pathogen. The study by the Tan group reported in Table 1 (Ding et al., 2018) has provided a proof of principle that SCs can be generated which are able to kill bacteria by a mechanism triggered by the bacteria themselves.
Proving that SCs can communicate with eukaryotic cells is one of the next milestones, especially when nanomedicine applications are devised. This complex task could require the generation of SCs whose internal operations rely on eukaryotic signal synthesis or on more complex signal reception machineries. The relevance of these approaches is that SCs could be employed as intelligent drug-delivery systems that perform a therapeutic action by extracting information from their microenvironment. As mentioned, the generation of SCs constitutively producing a tumor-killing protein (the Pseudomonas exotoxin A) has been recently described (Krinsky et al., 2018). Another task would involve enzyme replacement therapy (Itel et al., 2017). For example, SCs that consume excess phenylalanine could play a therapeutic role in phenylketonuria (Leduc et al., 2007). Notably, Thomas M. S. Chang proposed, in a pre-liposome age, the therapeutic use of enzyme-containing semi-permeable collodion capsules circulating in the bloodstream (Chang, 1964, 1972). The generation of SCs interfacing via molecular communication with neural cells can also be imagined. The resulting hybrid bio/synthetic cell networks could also be exploited for innovative investigations of neural functions (Pinato et al., 2011).
Considering the engineering plan of networking SCs (or SCs and biological cells), the rigorous design of molecular communication channels requires proper modeling of the physical and information levels. At the physical level, stochastic diffusion plays a central role. This peculiar aspect is the ultimate limit of molecular communication (when compared to traditional electromagnetic systems) because it is essentially a random process. Intercellular molecular communications rely on the diffusion of chemical signals under a concentration gradient. They are, therefore, slow stochastic processes; their success depends on a number of factors, such as the sender/receiver ratio, their spatial arrangement, the viscosity of the medium, and the temperature. Numerical models can be useful to understand the limiting factors and the constraints operating at this (inescapable) physical level (Nakano et al., 2011, 2013). The stochastic dimension of molecular communication affects its reliability; coping with it represents an engineering challenge. The second aspect refers to the amount of information transmitted in the molecular communication "channel", and this is a theoretical issue. To apply classical information and communication theory to such a novel scenario, "information" should be defined with respect to the type of signal molecules, the number of sent/received molecules, and the time-dependent concentration profile (switch-like, pulse-like, etc.). A control theory for bottom-up synthetic biology should also be delineated (Del Vecchio et al., 2016). Its peculiarity stems from molecular discreteness, the random timing of sending/receiving, the nature of the "noise", etc.
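As a minimal illustration of the stochastic channel described above, the following Monte Carlo sketch follows individual molecules released by a point-like sender and counts how many are captured by an absorbing spherical "receiver". The geometry, diffusion coefficient, time step, and molecule number are all assumed values chosen only for illustration.

```python
# Hedged Monte Carlo sketch of a diffusion-based channel: point release of signal
# molecules from a sender, capture by an absorbing spherical receiver. All
# parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
D, dt, n_steps = 100.0, 1e-3, 20000            # um^2/s, s, number of steps (assumed)
n_molecules = 2000
receiver_center = np.array([5.0, 0.0, 0.0])    # um from the sender (assumed)
receiver_radius = 1.0                          # um (assumed)

pos = np.zeros((n_molecules, 3))               # all molecules start at the sender
captured = np.zeros(n_molecules, dtype=bool)
capture_time = np.full(n_molecules, np.nan)

sigma = np.sqrt(2.0 * D * dt)                  # Brownian step size per axis
for step in range(n_steps):
    free = ~captured
    pos[free] += rng.normal(0.0, sigma, size=(free.sum(), 3))
    d = np.linalg.norm(pos[free] - receiver_center, axis=1)
    hit_idx = np.flatnonzero(free)[d < receiver_radius]
    captured[hit_idx] = True
    capture_time[hit_idx] = (step + 1) * dt

print(f"fraction captured:   {captured.mean():.3f}")
print(f"median capture time: {np.nanmedian(capture_time):.3f} s")
```

Repeating such runs with different sender/receiver spacings or numbers of released molecules gives a feeling for how strongly the "channel" statistics depend on the physical layout, which is exactly the reliability issue discussed above.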
In conclusion, SCs could significantly contribute to the emergence of a novel research field based on communication with biological cells. Thanks to their modular constructive principle, their biocompatibility, and their programmability, SCs of the type discussed in this review have the unique ability to act as passive carriers of hydrophilic and hydrophobic drugs, and to actively drive gene expression in response to chemical stimuli coming from other cells and from the environment.
At present, the main challenges in this field rely on our capacity to (i) design and build multi-functional SCs based on a proper genetic circuit and auxiliary molecular parts/devices, (ii) build homogeneous populations of SCs that are stable in biological fluids, and (iii) control SC behavior even in a complex and fluctuating environment, such as a human host. All these challenges will probably be solved in the near future thanks to constant improvements in SC technology (in a broad sense, i.e., not necessarily restricted to liposomes). Along this path, there will be room for developing various systems in which in vitro usage will generate opportunities for understanding principles of biological systems and constructing short-term devices (e.g., biosensors).
AUTHOR CONTRIBUTIONS
PS conceived the research; all authors wrote the paper.
|
2019-01-17T14:01:52.076Z
|
2019-01-17T00:00:00.000
|
{
"year": 2019,
"sha1": "0450bb40817fcd4015f57eb9a621aa2d6c390dba",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fbioe.2019.00001/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0450bb40817fcd4015f57eb9a621aa2d6c390dba",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
}
|
247773099
|
pes2o/s2orc
|
v3-fos-license
|
Risk Factors for Venous Thrombosis after Spinal Surgery: A Systematic Review and Meta-analysis
Background Venous thrombosis, comprising DVT and PE, is an orthopedic complication that may be fatal after surgery. This study's purpose was to analyze risk factors for venous thrombosis following spinal surgery to help guide prophylaxis. Methods English-language databases including PubMed, Web of Science, Embase, the Cochrane Library, and Google Scholar were searched by computer for relevant publications on venous thrombosis after spinal surgery. Factors such as preoperative walking difficulties, hypertension, diabetes, heart disease, and preoperative bleeding volume were examined, and study quality was assessed using the NOS scale. Data were analyzed using Review Manager 5.3 software; depending on between-study heterogeneity, data were pooled using fixed-effects or random-effects models. Results A total of 25 studies were included, with a total of 1,927,781 individuals after spine surgery, including 7843 patients with venous thrombosis. The included studies had NOS scores ranging from 5 to 8. According to the meta-analysis, patients with venous thrombosis after spinal surgery had greater age (OR = 7.53, 95% CI (6.73, 8.33)), blood loss (OR = -141.79, 95% CI (-154.68, -128.9), P = 0.00001), and operation time (OR = 76.93, 95% CI (73.17, 80.86), P = 0.00001) than those without; diabetes mellitus (OR = 1.23, 95% CI (1.12, 1.34), P = 0.00001) and a history of walking disability (OR = 2.97, 95% CI (1.77, 4.98), P = 0.0001) increased the incidence of postoperative venous thrombosis. Conclusion Older age, female sex, spinal fusion, large blood loss, long operation time, hypertension, diabetes, and preoperative walking impairment are all risk factors for venous thrombosis following spinal surgery.
Introduction
Patients undergoing major orthopedic surgery have an increased risk of venous thromboembolism (VTE), which includes pulmonary embolism (PE) and deep venous thrombosis (DVT) [1]. VTE is a well-known and feared surgical complication, as well as a leading cause of death [2]. Acute VTE may result in substantial morbidity, poor quality of life, and even death [3]. It may also lead to increased medical costs and a considerable financial burden on patients and their families [4]. The use of pharmacological prophylaxis is well established in a variety of surgical procedures, most notably hip and knee replacements, for which there are well-established criteria and dosing guidelines [5]. Chemoprophylaxis recommendations in spinal surgery are less well defined, and there are currently no clear evidence-based standards in this discipline [6]. Because there is no consensus on the efficacy and safety of chemoprophylaxis in spine surgery, a wide range of treatment options are in use, many of which depend on the surgeon's personal experience with the medication [7]. As a consequence, spine surgeons must be aware of the prevalence of VTE as well as the risk factors that contribute to the development of this condition [8].
An epidural hematoma, a devastating but rare complication of spinal surgery, may result from bleeding problems [9]. As a consequence, spine surgeons should use caution when prescribing anticoagulants. They must weigh the morbidity and mortality of VTE against the potential for permanent neurological disability from an epidural hematoma [10]. Many researchers have examined the risk factors for VTE following spine surgery [11]. Due to limited sample sizes and varying detection technology, the reported incidence varies [12]. These studies found that older patients on prolonged bed rest for paralysis or pain had a higher incidence of VTE than younger patients [13]. Although variables such as the blood D-dimer level [14], the length of the operation, intraoperative blood loss, and the surgical procedure all affect the frequency of VTE after spine surgery, the reported incidence remains inconsistent across studies [15].
A comprehensive review of the available literature was conducted in order to gain a better understanding of VTE incidence and risk factors in patients undergoing spine surgery. To assist surgeons in making well-informed treatment choices, evidence-based information about this subject is presented in a straightforward way.
Materials and Methods
2.1. Search Strategy. PubMed, Web of Science, Embase, the Cochrane Library, and other English-language databases were searched by computer. The English search terms were "Spinal surgery", "Venous thrombosis", and "spine", all from the Medical Subject Headings (MeSH) list. Each database was searched from its inception until December 30, 2021. In addition, in order to include the literature as completely as possible, the reference lists of several articles were screened manually.
Inclusion and Exclusion Criteria.
All included studies were observational studies on the incidence of venous thrombosis after spinal surgery. The following factors were considered: (1) age, (2) gender, (3) BMI, (4) history of hypertension, (5) history of diabetes, (6) history of heart disease, (7) preoperative D-dimer level, (8) history of preoperative walking problems, (9) mode of surgery, (10) mode of anesthesia, (11) surgical site, (12) duration of operation, (13) blood loss, (14) smoking history, (15) alcohol history, and (16) postoperative infection. At least one of the aforementioned indicators had to be reported in the included literature. Exclusion criteria were: (1) summary studies, (2) expert opinions, (3) case reports or case series, (4) abnormal and clinically significant preoperative coagulation function, (5) venous thrombosis occurring before the procedure, (6) blood system diseases, (7) different thrombus prevention techniques used before and after the operation, and (8) relevant information that could not be obtained after contacting the authors.
Selection of Studies.
Two investigators independently reviewed the titles, abstracts, and full texts of the retrieved literature. The eligible studies were then selected in accordance with the inclusion criteria. Discussion and consensus were used to address any disagreements that arose between the investigators. A third author was consulted to settle the matter when no agreement could be reached.
Data Extraction and Quality Assessment. Two researchers independently extracted the data, including the first author's name, the year of publication, the country of the study population, and the details of the inclusion indicators listed in the inclusion criteria. Disagreements were resolved by discussion; if a dispute could not be resolved, a third researcher was consulted.
Statistical Analysis.
For the meta-analysis, Review Manager 5.3 software was employed. For categorical data, the effect statistics were the odds ratio (OR) or relative risk (RR); for continuous data, the weighted mean difference (WMD) or standardized mean difference (SMD) was used, and each effect size is given with a 95% confidence interval (95% CI). The Q test and I² were used to quantify the heterogeneity of the studies. When P > 0.1 and I² < 50%, heterogeneity was considered not significant, and the fixed-effect model was used to pool the data. When P < 0.1 or I² ≥ 50%, heterogeneity was considered substantial; in that case the random-effects model was used, and the cause of heterogeneity was sought, as far as feasible, by subgroup analysis. If the source of heterogeneity could not be determined, the random-effects model was used for the meta-analysis. Differences were considered statistically significant at P < 0.05.
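For readers unfamiliar with the pooling procedure described above, the following sketch reproduces the usual inverse-variance (fixed-effect) and DerSimonian-Laird (random-effects) calculations, together with Cochran's Q and I². The 2×2 counts are made-up illustrative numbers, not data from the studies included in this meta-analysis; Review Manager was the software actually used.

```python
# Hedged sketch of OR pooling with fixed-effect and DerSimonian-Laird random-effects
# models, plus Cochran's Q and I^2. The 2x2 counts below are illustrative only.
import numpy as np
from scipy import stats

# each row: events_exposed, total_exposed, events_unexposed, total_unexposed (hypothetical)
studies = np.array([
    [12, 150, 20, 480],
    [ 8, 200, 15, 600],
    [30, 400, 25, 900],
    [ 5,  90, 10, 300],
], dtype=float)

a = studies[:, 0]; b = studies[:, 1] - a
c = studies[:, 2]; d = studies[:, 3] - c
log_or = np.log((a * d) / (b * c))
var = 1/a + 1/b + 1/c + 1/d                     # Woolf variance of the log OR

w_fixed = 1.0 / var
pooled_fixed = np.sum(w_fixed * log_or) / np.sum(w_fixed)

Q = np.sum(w_fixed * (log_or - pooled_fixed) ** 2)
df = len(log_or) - 1
I2 = max(0.0, (Q - df) / Q) * 100.0
tau2 = max(0.0, (Q - df) / (np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)))

w_rand = 1.0 / (var + tau2)                     # random-effects weights
pooled_rand = np.sum(w_rand * log_or) / np.sum(w_rand)
se_rand = np.sqrt(1.0 / np.sum(w_rand))
ci = np.exp(pooled_rand + np.array([-1.96, 1.96]) * se_rand)

print(f"Q = {Q:.2f} (P = {1 - stats.chi2.cdf(Q, df):.3f}), I^2 = {I2:.1f}%")
print(f"fixed-effect OR   = {np.exp(pooled_fixed):.2f}")
print(f"random-effects OR = {np.exp(pooled_rand):.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```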
3.1. Selected Study Results. The approach for screening and selecting articles for inclusion in this study is shown in Figure 1. Initially, a total of 2139 studies were identified. Of these, 491 were eliminated as duplicates, and 1600 were excluded following title/abstract assessment. The remaining 48 papers were then subjected to full-text review; 23 of them were rejected because they did not meet the eligibility requirements. Finally, 25 papers satisfied the inclusion criteria and were included in our meta-analysis; the features of these studies are shown in Table 1. One of the 25 studies was prospective, while the other 24 were retrospective. The total number of participants was 3,215,173, of whom 1038 developed VTE, and the overall occurrence of VTE following spine surgery was 0.35% (the occurrence of VTE in the original studies ranged from 0.15% to 29.38%). VTE occurred in 8.43% of Asian patients and 0.33% of Western patients; the difference is statistically significant (P < 0.0001).
3.2. Data Quality Assessment. All 14 studies were critically appraised independently by the two reviewers. The study designs and outcome measures were valid and appropriate to the research questions. The risk of bias in the study design and results was assessed with the revised Cochrane risk of bias tool for randomized trials (RoB 2), latest version 22 August 2019 (Figure 2).
BMI.
Five studies [16-20, 35] examined the association between BMI and postoperative venous thrombosis, two in Chinese populations and two in non-Chinese populations. In the study sample, there were 378 cases of VTE (+) and 5196 cases of VTE (-). Because the studies had considerable statistical heterogeneity (I² = 87%, P < 0.00001), the random-effects model was employed for the meta-analysis. Patients with VTE (+) had a lower mean BMI than patients with VTE (-) (Figure 5).
Operation Methods.
Six studies comprising 49,389 patients examined the association between surgical procedure and the prevalence of venous thrombosis following spine surgery, including 32,032 patients who underwent fusion and 17,357 who did not [23-29]. Because there was no significant heterogeneity across the studies, the fixed-effect model was adopted for the meta-analysis. The findings revealed that the frequency of VTE in patients with nonfusion surgery was greater than that in patients with fusion, and the difference was statistically significant (OR = 1.67, 95% CI (1.40, 1.99), P = 0.00001, Figure 6).
Operative Approach. Three studies comprising 461 patients examined the relationship between surgical approach and the frequency of venous thrombosis following spine surgery [21-23], including 328 cases of posterior surgery and 133 cases of combined anterior/posterior surgery, with 34 cases of VTE (+) and 427 cases of VTE (-). The fixed-effect model was adopted for the meta-analysis since there was no statistical heterogeneity across the studies. The data revealed that the incidence of VTE was higher in patients undergoing posterior surgery alone than in patients undergoing combined anterior/posterior surgery (Figure 7).
3.3.6. Operative Site. Four investigations including 1617 patients examined the link between the surgical site and the incidence of venous thrombosis following spinal surgery [12-16], including 285 cases of cervical surgery and 1332 cases of thoracolumbar surgery, with 75 cases of VTE (+) and 1542 cases of VTE (-). Because there was no statistical heterogeneity between the studies (I² = 0%, P = 0.59), the fixed-effect model was used for this meta-analysis. The findings revealed that the incidence of VTE was lower in cervical surgery patients than in thoracolumbar surgery patients, although the difference was not statistically significant (OR = 1.17, 95% CI (0.66, 2.08), P = 0.59, Figure 8).
3.3.7. Duration of Surgery. A total of 8 investigations comprising 46,840 patients, including 680 cases of VTE (+) and 46,160 cases of VTE (-) [12-19, 35], indicated that the length of surgery was associated with the incidence of venous thrombosis following spine surgery. Because there was statistical heterogeneity across the studies (I² = 98%, P < 0.00001), the random-effects model was utilized for the meta-analysis. The findings revealed that the mean operation time of VTE (+) patients following spinal surgery was greater than that of VTE (-) patients, and the difference was statistically significant (OR = 76.93, 95% CI (73.17, 80.86), P = 0.00001, Figure 9).
3.3.11. Heart Disease. Heart disease was examined in relation to postsurgical venous thrombosis in four studies [20-24] involving 5718 patients, including 1550 cases with heart disease and 4168 cases without heart disease, with 453 cases of VTE (+) and 5265 cases of VTE (-); there was no statistical heterogeneity between the studies (P = 0.95). As a result, a fixed-effect model was adopted for the meta-analysis. The findings revealed that the incidence of VTE following spinal surgery was slightly greater in patients without heart disease than in patients with heart disease, but the difference was not statistically significant (OR = 0.96, 95% CI (0.78, 1.20), P = 0.74, Figure 13).
3.3.14. Publication Bias. Begg and Egger tests revealed no evidence of publication bias in any of the papers included in this review (Figure 16).
Discussion
Complications from spine surgery cause patients bodily, emotional, and social distress, although the incidence of these complications has decreased [34]. Although medications may effectively prevent venous thrombosis, the risk of venous thrombosis following spine surgery is significantly smaller than after joint surgery [37]. However, several investigations have demonstrated that fatal PE has much more severe repercussions and medical risks than epidural hematoma [38]. If the prevention and treatment of venous thrombosis in surgical patients are neglected, preventable PE will endanger the patient's life. Measurement of these indicators is difficult because of various aspects such as research design, sample size, and subject inclusion/exclusion criteria [39]. The reported incidence of venous thrombosis following spinal surgery varies between 0.31% and 31%, showing that there is no unanimity on the occurrence. Moreover, the findings of existing systematic reviews vary. Therefore, a more extensive qualitative and quantitative synthesis of previous research, conducted through meta-analysis/systematic review in line with the inclusion and exclusion criteria, is required to investigate the risk factors for thrombosis following spine surgery.
Surgery type (fusion vs. nonfusion), blood loss (large), operation duration (long), and a past history of hypertension (+), diabetes (+), or preoperative walking problems (+) were all associated with thrombosis risk following spinal surgery [40]. Thrombosis in spinal surgery is closely correlated with the following factors: (1) trauma, blood loss, and blood transfusion caused by the operation, which damage the intima of blood vessels and make the body hypercoagulable; (2) compression of the venous system, such as the inferior vena cava, iliac vein, and femoral vein, caused by prolonged lying posture during the operation; (3) implantation of metal and other artificial materials, such as pedicle screw systems, bone cement, and artificial bone; (4) anesthesia, especially general anesthesia; (5) occurrence of lower limb paralysis, with the lower limbs losing muscular pump and vasomotor reflex function; (6) changes in body fluid balance, electrolyte imbalance, and fluctuation of the internal environment during the perioperative period; and (7) staying in bed or resting for a long time after surgery. These factors are connected to the three pathogenic components of venous thrombosis: vascular wall damage, sluggish blood flow, and a hypercoagulable state. The onset of any of these factors may result in thrombosis. A high concentration of glucose in the blood can damage the vascular endothelium. A meta-analysis found that vascular endothelial damage increased the risk of blood coagulation (OR = 1.49, 95% CI (1.40, 1.58), P = 0.00011); the result of the present study is comparable [41].
Studies on the occurrence of venous thrombosis after spinal surgery have been reported; however, the results are mixed [42]. The present results can be compared with prior findings on hypertension (+), diabetes (+), preoperative walking difficulty (+), surgery type (fusion), age, gender, blood loss, and operation time (long). This study found a statistically significant difference in the incidence of venous thrombosis after spine surgery for these variables [43]. Regarding the age-related incidence of venous thrombosis, this investigation included five previously published studies that complemented each other and increased the sample size; the data suggest that age may be a risk factor for postsurgery thrombosis. In terms of gender and venous thrombosis incidence, this study excluded case-control studies (the researchers believe the control selection was not representative) and studies utilizing prophylactic methods, in favor of three newly published studies [44]. Gender may be a risk factor for thrombosis following spine surgery; however, it is unclear whether gender (for example, male sex) is a risk or protective factor. Based on Zacharia et al., this study comprised five investigations on the impact of blood loss on the incidence of venous thrombosis [45]. Because blood loss varies widely, the SMD was used in this study; the number of fused segments and the operative method may be linked to the heterogeneity. This study included seven studies on the impact of operation time on the incidence of venous thrombosis, increasing the sample size and statistical accuracy. However, the included studies are quite heterogeneous, possibly related to the surgical method, bleeding volume, or other factors. The patient's condition and personal preferences, as well as the doctor's evidence-based decision-making, influence the operation time and the amount of bleeding, as well as the hemostasis required for intraoperative bleeding. The comparison of BMI and the incidence of venous thrombosis after spinal surgery revealed that the heterogeneity of the included studies was substantial, and the difference was not statistically significant [46]. The source of heterogeneity was investigated using subgroups of the study population. Following subgroup analysis, the heterogeneity of each subgroup fell dramatically, suggesting that the population may be the cause of the considerable heterogeneity across groups. The mean BMI difference in different Asian populations even points in the opposite direction. This implies that BMI may not be a direct cause of venous thrombosis following spine surgery, but may have an indirect effect on the development of postoperative thrombosis; intermediate factors include vascular elasticity, blood viscosity, and the degree of vascular wall damage. This research also found that there was no significant difference in preoperative D-dimer levels between the postoperative VTE (+) group and the VTE (-) group. Hence, the present data are insufficient to show that the difference in preoperative D-dimer levels has a statistically significant influence on the incidence of venous thrombosis following spine surgery; large-sample, multicenter research is required. At the same time, the influence of the dynamic shift in postoperative D-dimer level on thrombosis should be considered.
Limitations
Age, gender, BMI, surgery time, and blood loss were all taken into account. Meta-analysis is most informative for estimating effect sizes or testing hypotheses where there is little or no heterogeneity; methodological, clinical, or statistical variables may all contribute to heterogeneity. The majority of the papers in this study are retrospective investigations scored with the NOS. Biases exist in certain studies, such as: (1) research bias due to disparities in inquiry methods between thrombosis and non-thrombosis cases; respondents and investigators may make systematic mistakes, and concern about the patient's medical history, for example, varies. (2) The studies' major outcomes and measurement bias were not addressed, and there were no consistent diagnostic criteria across trials; nor were there consistent preoperative diagnostic criteria for walking dysfunction or hypertension. Variable and method heterogeneity may therefore be present in the meta-analysis. Oral vitamin E, oral contraceptives, aspirin, postoperative functional activity, postoperative immobilization, and other factors contributed to clinical variability. These indicators have not received much attention, and their absence might result in phenotypic differences; as a consequence, the effects of age and gender may be exaggerated or minimized, resulting in clinical diversity. Clinical variability may also emerge as a result of surgical skill, competency, and illness severity, all of which influence operation duration and blood loss. The unequal distribution of these components causes variation in study outcomes. There was insufficient data in the included trials, and there was no subgroup analysis to determine the cause of heterogeneity.
Figure 15: Forest diagram of preoperative walking disorder and the incidence of venous thrombosis after spinal surgery.
Conclusion
There is no strong evidence that these characteristics are independent risk factors for postoperative thrombosis. Diabetes may act through vascular endothelial damage, which is an indirect risk factor for venous thrombosis, and through poor cardiac function; insufficient energy may restrict activity and produce lower-extremity venous stasis (non-independent risk variables). Large-scale multicenter prospective research is needed to evaluate the incidence of venous thrombosis following spine surgery. Nevertheless, this investigation identified risk factors for venous thrombosis following spine surgery. These risks require the spinal surgeon to focus on the postoperative period and on the screening and surveillance of patients for venous thrombosis, which can be fatal. Concerning postoperative venous thrombosis prevention and therapy, a systematic review and meta-analysis based on high-quality original research is still required.
Data Availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
|
2022-03-29T15:12:18.078Z
|
2022-03-27T00:00:00.000
|
{
"year": 2022,
"sha1": "2d69fece04dbca2177cf6ca6284f9ed2184595c1",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/cmmm/2022/1621106.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "36a12871329adb537c1bca2d4ba70a149bb91c3d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
119433192
|
pes2o/s2orc
|
v3-fos-license
|
Quantum mechanics in phase space: The Schrödinger and the Moyal representations
We present a phase space formulation of quantum mechanics in the Schrödinger representation and derive the associated Weyl pseudo-differential calculus. We prove that the resulting theory is unitarily equivalent to the standard "configuration space" formulation and show that it allows for a uniform treatment of both pure and mixed quantum states. In the second part of the paper we determine the unitary transformation (and its infinitesimal generator) that maps the phase space Schrödinger representation into another (called Moyal) representation, where the wave function is the cross-Wigner function familiar from deformation quantization. Some features of this representation are studied, namely the associated pseudo-differential calculus and the main spectral and dynamical results. Finally, the relation with deformation quantization is discussed.
Introduction
A key principle of quantum mechanics states that the fundamental configuration and momentum operators satisfy the commutation relations of the Heisenberg algebra
(1.1) [x̂_j, ξ̂_k] = i δ_{jk}, j, k = 1, ..., n,
all the other commutators being zero. The most standard implementation of this algebra is given by the Schrödinger representation, where x̂_j and ξ̂_j are viewed as the self-adjoint operators
x̂_j = multiplication by x_j, ξ̂_j = -i ∂_{x_j}
acting on the Hilbert space L^2(R^n) of square integrable functions (with support) on the classical configuration space R^n. It is well known that the quantization rules x_j → x̂_j, ξ_j → ξ̂_j do not provide the complete information on how to quantize an arbitrary classical observable, since the formal prescription a(x, ξ_x) ←→ â = a(x̂, ξ̂_x) (where x = (x_1, ..., x_n), ξ_x = (ξ_1, ..., ξ_n)) is order ambiguous. The Weyl pseudo-differential calculus yields the standard (but not unique) solution for this problem [31, 17, 32]. The Weyl correspondence a ←→ â_W is a one-to-one linear map that associates with each symbol a ∈ S'(R^{2n}) a linear operator â_W : S(R^n) → S'(R^n), uniquely defined by
â_W = (1/2π)^n ∫_{R^{2n}} a_σ(z_0) T̂(z_0) dz_0,
where the integral is a Bochner (operator) integral, a_σ is the symplectic Fourier transform of a, and T̂ is the Heisenberg-Weyl operator, defined for all ψ ∈ L^2(R^n) by
T̂(z_0)ψ(x) = e^{i(ξ_{x0}·x - ξ_{x0}·x_0/2)} ψ(x - x_0), z_0 = (x_0, ξ_{x0}).
We now observe that:
(1) The Schrödinger representation is by no means the unique possible implementation of the commutation relations (1.1). The momentum representation in L^2(R^n) is a well known alternative. More generally, any unitary operator U : L^2(R^n) → L^2(R^n) generates another "quantization rule" through the prescription â → U â U^{-1}.
(2) Other interesting representations can be obtained by implementing the observables x and ξ_x as operators acting on Hilbert spaces other than L^2(R^n).
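As a purely illustrative aside (not part of the construction in the paper), the commutation relation (1.1) for n = 1 can be checked numerically, in units with ℏ = 1, by realizing x̂ as multiplication and ξ̂ = -i d/dx as a central finite-difference matrix on a grid:

```python
# Hedged numerical illustration of (1.1) for n = 1: x_hat as multiplication by x,
# xi_hat = -i d/dx via a central finite difference. Grid size and test function
# are arbitrary choices; only interior points are compared.
import numpy as np

N, L = 400, 20.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]

Dmat = (np.diag(np.ones(N-1), 1) - np.diag(np.ones(N-1), -1)) / (2*dx)
X = np.diag(x)
Xi = -1j * Dmat

psi = np.exp(-x**2)                     # a Schwartz-class test function
lhs = (X @ Xi - Xi @ X) @ psi           # [x_hat, xi_hat] psi
rhs = 1j * psi

interior = slice(N//4, 3*N//4)
print("max |[x,xi]psi - i psi| on the interior:",
      np.max(np.abs(lhs[interior] - rhs[interior])))
```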
This second possibility was seriously considered in a series of papers [29, 16, 18, 19, 20, 21, 9] focusing on representations in terms of operators acting on the Hilbert space L^2(R^{2n}) of functions with support on the phase space R^{2n}. The Frederick and Torres-Vega representation [29, 16], leading to the Schrödinger equation in phase space, and the "Moyal" representation, given by the "Bopp shifts" [8]
(1.3) X̂ = x + (i/2)∂_p, Ξ̂_x = p - (i/2)∂_x,
are two examples of this sort. The latter, more symmetric, representation leads to what we shall call the "Moyal-Weyl pseudo-differential calculus", originally presented in [20] and further studied, in connection with the related "Landau-Weyl" calculus, in [18, 21]. The representation (1.3) is intimately connected to the deformation formulation of quantum mechanics. Indeed, for an arbitrary Weyl symbol a ←→ â_W, we have in the Moyal representation [19, 20]
a(X̂, Ξ̂_x) = a(x, p) ⋆ ,
where ⋆ is the Moyal star product [5, 32]. Hence, the stargenvalue equation
(1.4) a(x, p) ⋆ Ψ_λ(x, p) = λ Ψ_λ(x, p)
can be written in the form
(1.5) a(X̂, Ξ̂_x) Ψ_λ(x, p) = λ Ψ_λ(x, p).
Moreover, it was also proved in [19, 20] that the solutions of the eigenvalue equation (1.5) are related to the solutions of the usual eigenvalue equation
(1.6) â_W ψ_λ = λ ψ_λ
by the action of intertwining operators W_φ : L^2(R^n) → L^2(R^{2n}), defined for each φ ∈ S(R^n) by W_φ ψ = (2π)^{n/2} W(ψ, φ), i.e., related to the cross-Wigner distribution W(ψ, φ) by a simple normalization factor. Combining the two results (the equality of the two equations (1.4) and (1.5), and the relation, given by W_φ, of the solutions of eq. (1.5) with those of eq. (1.6)), de Gosson and Luef [20] found a simple proof of the spectral relation between Schrödinger quantum mechanics and the deformation quantization of Bayen et al. [5, 6].
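A standard textbook illustration of the stargenvalue equation (1.4) is the one-dimensional harmonic oscillator H = (x² + p²)/2, whose ground-state Wigner function W_0 ∝ e^{-(x²+p²)} satisfies H ⋆ W_0 = (1/2) W_0 in units with ℏ = 1. The sketch below verifies this numerically by evaluating the star product through the Bopp shifts with finite-difference derivatives; it is an illustration only and relies on nothing beyond these standard formulas.

```python
# Hedged numerical check of the stargenvalue equation (1.4) for the 1-D harmonic
# oscillator H = (x^2 + p^2)/2, hbar = 1. The ground-state Wigner function
# W0 ~ exp(-(x^2 + p^2)) should satisfy H * W0 = (1/2) W0; the star product is
# evaluated via the Bopp shifts H(x + (i/2) d/dp, p - (i/2) d/dx), expanded term
# by term, with derivatives taken by finite differences.
import numpy as np

N, L = 201, 12.0
x = np.linspace(-L/2, L/2, N)
p = np.linspace(-L/2, L/2, N)
X, P = np.meshgrid(x, p, indexing="ij")
W0 = np.exp(-(X**2 + P**2)) / np.pi

dWdx, dWdp = np.gradient(W0, x, p, edge_order=2)
d2Wdx2 = np.gradient(dWdx, x, axis=0, edge_order=2)
d2Wdp2 = np.gradient(dWdp, p, axis=1, edge_order=2)

HstarW = 0.5 * (X**2 * W0 + 1j * X * dWdp - 0.25 * d2Wdp2
                + P**2 * W0 - 1j * P * dWdx - 0.25 * d2Wdx2)

core = (np.abs(X) < 3) & (np.abs(P) < 3)          # avoid grid-edge artifacts
ratio = HstarW.real[core] / W0[core]
print("max |Im(H*W0)| on the core:", np.abs(HstarW.imag[core]).max())
print("Re(H*W0)/W0 mean:", ratio.mean(), "(expected 1/2)")
```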
Some of these results were generalized in [13], where an extension of the "phase space Moyal representation" (1.3) and of the associated pseudo-differential calculus allowed for the precise construction of the quantum theory associated with the extended Heisenberg algebra. The resulting "noncommutative quantum mechanics" displays an extra noncommutative structure in both the configurational and momentum sectors and has played an important role in some recent approaches to quantum cosmology [3, 4, 7, 10] and quantum gravity [14, 30]. Furthermore, the relation of the approach of [13] with the deformation formulation of noncommutative quantum mechanics [1, 2] was studied in [12].
In this paper we intend to further study the structure and the properties of the phase space formulation of quantum mechanics, focusing, this time, on the most central (and simplest) Schrödinger representation, where the operators X̂, Ξ̂_x act on phase space functions Ψ(x, p) ∈ L^2(R^{2n}), and on its relation with the Moyal representation (1.3). In addition, we will also discuss some features of the phase space formulation of mixed quantum states.
More precisely, we will: (1) Present the phase space formulation of quantum mechanics in the Schrödinger representation.
(2) Determine and study the associated Weyl pseudo-differential calculus.
(3) Study the relation of the "Schrödinger phase space representation" with the "Schrödinger configuration space representation". In particular, show that there is a family of isometries T_χ : L^2(R^n) → L^2(R^{2n}) (indexed by χ ∈ L^2(R^n) with ||χ|| = 1) that intertwine an arbitrary configuration space operator â with the corresponding phase space operator Â.
(4) Show that the phase space formulation of quantum mechanics allows for a uniform treatment of both pure and mixed quantum states.
(5) Prove that the Schrödinger phase space representation and the Moyal phase space representation are unitarily related. Determine the one-parameter group of unitary transformations (and its infinitesimal generators) that connects the two representations and use it to prove the main spectral and dynamical results of the Moyal representation.
(6) Discuss the relation of the Moyal representation with the deformation quantization of Bayen et al.
1.1. Motivation: Double phase space formulation of classical mechanics. Let us consider a dynamical system living on the phase space R n ⊕ R n spanned by the canonical variables (x, ξ x ), which satisfy the usual Poisson bracket structure {x, ξ x } = I (where I is the n × n identity matrix), all the others being zero. Let h(x, ξ x ) be the Hamiltonian of the system.
A trivial formulation of this system in the double phase space is obtained by considering the extension of the phase space to R 2n ⊕ R 2n , spanned by the original pair (x, ξ x ) together with an auxiliary canonical pair (p, ξ p ) satisfying {p, ξ p } = I (all the other Poisson brackets being zero), and a new Hamiltonian subjected to the initial data constraints (1.7). The new set of observables yields exactly the same predictions as the original formulation in terms of the standard phase space observables a(x, ξ x ). Notice that the role of the constraints here is only to fix the initial values of the non-physical sector of the theory. The constraints commute with the new Hamiltonian and are thus preserved through the time evolution.
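The following is a minimal sketch of what such a trivial extension can look like; the explicit choices below (a new Hamiltonian independent of the auxiliary pair, and constraints fixing arbitrary constant initial values) are illustrative assumptions, not a restatement of the paper's own equations (1.7).

% Sketch of a trivial double phase space extension (illustrative assumptions).
% Original system: canonical pair (x, \xi_x), Hamiltonian h(x, \xi_x).
% Extended system on R^{2n} \oplus R^{2n}:
\[
  \{x,\xi_x\} = I, \qquad \{p,\xi_p\} = I, \qquad
  H_1(x,\xi_x;p,\xi_p) = h(x,\xi_x),
\]
% with the non-physical sector fixed by initial data constraints such as
\[
  p(0) = p_0, \qquad \xi_p(0) = \xi_{p_0},
\]
% for arbitrary fixed constants p_0, \xi_{p_0}. Since H_1 does not depend on
% (p, \xi_p), these constraints Poisson-commute with H_1 and are preserved by
% the time evolution, as stated in the text.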
Another double phase space formulation of classical mechanics is obtained from the action of the symplectic transformation S (1.9) on the previous double phase space formulation. The classical system is then described in terms of a new set of observables and a new double phase space Hamiltonian which yield, of course, the same physical predictions as the original double phase space formulation in terms of the Hamiltonian H 1 and the observables A 1 . Loosely speaking, this paper is devoted to presenting the quantum counterparts of the two former double phase space formulations of classical mechanics, to studying their properties and their relation with the standard, configuration space formulation of quantum mechanics. In spite of their apparently simple structure, they yield quantum theories which display a set of quite interesting properties, namely a uniform description of pure and mixed states and a remarkable connection with deformation quantization.
Before finishing this section, we note for future reference that the transformation S can be written as the composition of the two following symplectic maps: a dilation S D and a double rotation S R (θ) in the (x, ξ p ) and the (p, ξ x ) planes, respectively.
Notation.
A generic point of the original phase space R 2n = R n ⊕ R n is denoted by z x = (x, ξ x ) and that of the double phase space R 4n = R 2n ⊕ R 2n by Z = (z x , z p ), where z p = (p, ξ p ). We will also use z = (x, p) and z 0 = (x 0 , ξ x0 ). The symplectic form on R 2n is denoted by σ(z x , z' x ); on the double phase space R 4n it is σ x ⊕ σ p . The corresponding symplectic groups are denoted by Sp(2n, σ) and Sp(4n, σ x ⊕ σ p ).
We write S(R n ) for the Schwartz space of rapidly decreasing test functions on R n and S ′ (R n ) for the dual space of tempered distributions. The notation ⟨a, t⟩ stands for the action of the distribution a on the test function t. Symbols (or classical observables) on R 2n are denoted by small Latin letters a, b, ...; if they have support on R 4n they are denoted by capital Latin letters A, B, .... The wave functions in L 2 (R n ) are denoted by small Greek letters ψ, φ, ... and those in L 2 (R 2n ) by capital Greek letters Ψ, Φ, .... The standard inner product on L 2 (R n ) is written (ψ|φ); the one on L 2 (R 2n ) is written ((Ψ|Φ)). The corresponding norms are ||ψ|| and |||Ψ|||.
Operators acting on functions (or distributions) on R n are usually denoted by small Latin letters with a hat, â, b̂, ..., and those acting on phase space functions or distributions usually by Â, B̂, ... if they are in the Schrödinger representation and by Ã, B̃, ... if they are in the Moyal representation. Weyl pseudo-differential operators display a W-superscript. The superscript * denotes both the complex conjugation (for functions) and the adjoint (for operators).
The unitary Fourier transform and its inverse on L 2 (R n ) are defined by
$$\hat{F}\psi(\xi_x)=\left(\tfrac{1}{2\pi}\right)^{n/2}\int_{\mathbb{R}^n}e^{-i\xi_x\cdot x}\,\psi(x)\,dx,\qquad \hat{F}^{-1}\psi(x)=\left(\tfrac{1}{2\pi}\right)^{n/2}\int_{\mathbb{R}^n}e^{i\xi_x\cdot x}\,\psi(\xi_x)\,d\xi_x.$$
The same notation will also be used for the generalized (i.e. distributional) Fourier transform.
Phase space quantum mechanics in the Schrödinger representation: General results
The key object relating the configuration and the phase space Schrödinger representations of the Heisenberg algebra is the map T χ , indexed by χ ∈ L 2 (R n ) with ||χ|| = 1. The map T χ is an isometry from L 2 (R n ) into a subspace H χ of L 2 (R 2n ); hence T χ is also linear, injective and continuous. Notice that the choice of a particular χ plays here the same role as the imposition of the classical constraints (1.7) in the classical double phase space formulation. In fact, just like in the classical case, this imposition is tantamount to the complete specification of the initial data for the unphysical sector of the theory. Notice also that the quantum states do not satisfy the strong (Dirac) version of the quantum constraints [23], which are, in fact, incompatible (because p̂ and ξ̂ p do not commute). They may, however, satisfy a weaker version, which may be seen as a necessary, but not sufficient, condition for the states to be of the form (2.2).
A key property of T χ is that its adjoint T * χ is defined on the whole of L 2 (R 2n ). Proof. Consider the most general states Ψ ∈ L 2 (R 2n ) and ψ ∈ L 2 (R n ). Then: (1) Ψ belongs to the domain of the adjoint, D(T * χ ). It also follows that ψ ∈ L 2 (R n ) if Ψ ∈ L 2 (R 2n ), and so D(T * χ ) = L 2 (R 2n ), which concludes the proof.
The restriction of T * χ to H χ is a one-to-one map and is the inverse of T χ ; the proof is trivial. Moreover, P χ := T χ T * χ is the orthogonal projector of L 2 (R 2n ) onto H χ . Proof. P χ is an orthogonal projector because (i) P χ = P * χ and (ii) P χ P χ = P χ . The first identity follows from T * * χ = T χ . On the other hand, the range of a projector in a Hilbert space is a closed linear subspace of that Hilbert space, and thus also a Hilbert space. Hence, H χ = Ran P χ is a Hilbert space, a subspace of L 2 (R 2n ) with the same inner product. The identity (2.4) follows trivially.
Remark 2.6. Notice that there are many possible choices for the space of states H χ . This is related, of course, to the possibility of choosing χ arbitrarily in L 2 (R n ). However, these (different) phase space formulations are related by the one-to-one, orthogonal transformations T χ' T * χ : H χ → H χ' . We now introduce operators on H χ (Definition 2.7): ÂΨ = T χ â T * χ Ψ , ∀Ψ ∈ D(Â), and call  the "phase space operator associated with â" or, more precisely, the "H χ -representation of â".
In particular, for the fundamental configuration and momentum operators x̂ j and ξ̂ j , the associated operators are X̂ j = x j (multiplication by x j ) and Ξ̂ x j = −i∂ x j , respectively. Hence, each map T χ generates a phase space Schrödinger representation of the Heisenberg algebra from the original configuration space Schrödinger representation.
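As a quick illustration of this definition (a sketch that assumes, as above, that the fundamental operators act on H χ by multiplication by x and by −i∂ x ), consider the one-dimensional harmonic oscillator.

% Illustration (n = 1, hbar = 1): H_\chi-representation of the harmonic oscillator.
% Configuration space operator:
\[
  \hat{h} \;=\; \tfrac{1}{2}\bigl(\hat{\xi}_x^{\,2} + \hat{x}^2\bigr)
         \;=\; \tfrac{1}{2}\Bigl(-\tfrac{\partial^2}{\partial x^2} + x^2\Bigr)
  \quad\text{acting on } L^2(\mathbb{R}).
\]
% Its H_\chi-representation \hat{H} = T_\chi \hat{h}\, T_\chi^* acts on
% \Psi(x,p) \in H_\chi by the same differential expression in x, with the
% variable p left untouched:
\[
  \hat{H}\,\Psi(x,p) \;=\; \tfrac{1}{2}\Bigl(-\tfrac{\partial^2}{\partial x^2} + x^2\Bigr)\Psi(x,p),
\]
% so that, on H_\chi, \hat{H} and \hat{h} share the spectrum {m + 1/2}, in line
% with the spectral results of Theorem 2.10 below.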
We proceed with the study of some properties of the operators Â. Theorem 2.9. Let  be the phase space operator associated with â in the sense of Definition 2.7. Then (1)  is symmetric iff â is symmetric.
Theorem 2.10 (Spectral results). The state Ψ λ ∈ H χ is a solution of the eigenvalue equation ÂΨ λ = λΨ λ (2.7) if and only if ψ λ = T * χ Ψ λ is a solution of the eigenvalue equation âψ λ = λψ λ (2.8). Hence, the two operators  and â display the same spectrum and their eigenfunctions are related by the T χ -transformation.
Proof. The proof is trivial. In one direction it follows from applying T χ to both sides of the eigenvalue equation (2.8) and, in the other direction, from applying T * χ to both sides of (2.7). We also need to notice that Ψ λ = T χ ψ λ belongs to H χ .

Corollary 2.11. We conclude that the physical predictions of the phase space and the configuration space formulations of quantum mechanics are the same. For the average values we have, from Definitions 2.1 and 2.7, ((Ψ|ÂΨ)) = (ψ|âψ), and for the transition amplitudes (and probability amplitudes), from Theorem 2.10, ((Ψ λ |Ψ)) = (ψ λ |ψ). The same conclusion is valid for the dynamics: Ψ(t) = T χ ψ(t) satisfies the phase space Schrödinger equation generated by Ĥ if and only if ψ(t) satisfies the configuration space Schrödinger equation generated by ĥ.

Proof. This theorem is also trivial. Again, the equivalence of the two dynamics can be proved by applying the maps T χ and T * χ to the dynamical equations and by taking into account the relations (2.9, 2.10) and also that the time derivative commutes with T χ (because χ is time independent) and with T * χ .

Mixed states in phase space quantum mechanics

The formalism of the last section was defined in the spaces H χ and provides a description of pure states only. However, one realizes that most of the results can be generalized to the case where the space of states is a Hilbert space larger than a particular H χ , or is even the entire L 2 (R 2n ).
Such generalizations lead to a suitable description of mixed states, as we now show. More precisely, we will discuss (some aspects of) the extension of the formalism of the last section to the case where the Hilbert space of states is of the form H = ⊕ k H χ k for some orthogonal, finite set of functions χ k ∈ L 2 (R n ) such that Σ k ||χ k || 2 = 1.
A generic state in H is of the form Ψ = Σ k T χ k [ψ k ], where each ψ k ∈ L 2 (R n ) is normalized. The normalization of Ψ follows directly from the normalization of each ψ k , since |||Ψ||| 2 = Σ k ||χ k || 2 ||ψ k || 2 = Σ k ||χ k || 2 = 1, and this formula suggests that the square of the norm of χ k can be interpreted as the classical probability associated with the k-component of the mixed state Ψ.
The H-representation of a configuration space operator â generalizes Definition 2.7. Let us now recover the quantum predictions for mixed states from applying the standard rules of "pure state quantum mechanics" to the phase space H-formulation. Let φ α be a normalized eigenfunction of â associated with the non-degenerate eigenvalue α (to make it simpler we shall assume the one-dimensional case, n = 1). Then α is also an eigenvalue of  (in this case, degenerate), and a normalized basis of the α-eigenspace of  is obtained by mapping φ α into each of the subspaces H χ k . Hence the probability associated with the eigenvalue α is (from standard rules) P( = α) = Σ k ||χ k || 2 |(φ α |ψ k )| 2 , which is then a convex combination of the probabilities P(â = α) calculated for each of the components ψ k of the mixed state Ψ. This result thus reinforces the interpretation of the square of the norm of χ k as the classical probability associated with the k-component of the mixed state. The projection of Ψ into the α-eigenspace of  yields a state Υ, and so the collapse of the wave function (produced by a measurement of  with output α) yields the normalized state Υ c . Just like for standard, pure state quantum mechanics, the probability P( = α) is identical to the transition probability |((Ψ | Υ c ))| 2 . Hence, a representation of mixed states in terms of standard wave functions with support on the phase space is possible. We intend to study this topic in more detail in a forthcoming paper.
Phase space Weyl calculus
We start this section with a brief review of the standard definitions and properties of Weyl operators acting on functions with support on the configuration space (for details and proofs the interested reader may refer to [17,19,24,27,28,32]). We then present the extension of these operators to phase space functions and study the main properties of the resulting phase space Weyl calculus.
Standard Weyl calculus.
Let L(S(R n ), S ′ (R n )) be the space of linear and continuous operators of the form S(R n ) −→ S ′ (R n ). In view of the Schwartz kernel theorem, all operators â ∈ L(S(R n ), S ′ (R n )) admit a kernel representation
$$\hat{a}\,\psi(x)=\int_{\mathbb{R}^n}K_a(x,y)\,\psi(y)\,dy,$$
where K a ∈ S ′ (R n × R n ). The Weyl symbol of â is then
$$a(x,\xi_x)=\int_{\mathbb{R}^n}e^{-i\xi_x\cdot y}\,K_a\!\left(x+\tfrac{y}{2},\,x-\tfrac{y}{2}\right)dy$$
and, conversely,
$$K_a(x,y)=\left(\tfrac{1}{2\pi}\right)^n\int_{\mathbb{R}^n}e^{i\xi_x\cdot(x-y)}\,a\!\left(\tfrac{x+y}{2},\,\xi_x\right)d\xi_x,$$
where the integrals are interpreted as generalized Fourier (and inverse Fourier) transforms (i.e. in the sense of distributions). The inverse formula can be re-written in the form
$$\hat{a}^W=\left(\tfrac{1}{2\pi}\right)^n\int_{\mathbb{R}^{2n}}F_\sigma a(z_0)\,\widehat{T}(z_0)\,dz_0,$$
where F σ is the symplectic Fourier transform (1.2). An important property of Weyl operators is that, if b̂ ∈ L(S(R n ), S(R n )), then ĉ = âb̂ ∈ L(S(R n ), S ′ (R n )) and its Weyl symbol is given by
$$c=a\star b,\qquad (4.5)$$
where ⋆ is the "twisted product" or Moyal product familiar from deformation quantization [5,25,32].
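For reference, a standard explicit form of the twisted (Moyal) product in eq. (4.5) is recalled below; the conventions shown (ℏ = 1, symmetric ordering) are the usual ones from the cited deformation quantization literature and are stated here as an assumption rather than as this paper's own normalization.

% Standard differential (asymptotic) form of the Moyal product, with hbar = 1:
\[
  (a \star b)(x,\xi_x)
  \;=\; a(x,\xi_x)\,
        \exp\!\Bigl[\tfrac{i}{2}\bigl(
          \overleftarrow{\partial}_{x}\!\cdot\!\overrightarrow{\partial}_{\xi_x}
          \;-\; \overleftarrow{\partial}_{\xi_x}\!\cdot\!\overrightarrow{\partial}_{x}
        \bigr)\Bigr]\, b(x,\xi_x).
\]
% Restoring hbar and expanding to first order,
%   a \star b = ab + (i\hbar/2){a,b} + O(\hbar^2),
% so the star commutator reduces to i\hbar times the Poisson bracket in the
% classical limit, the defining property used in deformation quantization.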
The Weyl correspondence $a\overset{\mathrm{Weyl}}{\longleftrightarrow}\hat{a}^W$ (given by eqs. (4.1, 4.2)) can be written more straightforwardly as
$$\hat{a}^W=\left(\tfrac{1}{2\pi}\right)^n\int_{\mathbb{R}^{2n}}F_\sigma a(z_0)\,\widehat{T}(z_0)\,dz_0,$$
where the integral is an operator valued (Bochner) integral and $\widehat{T}(z_0):S(\mathbb{R}^n)\to S(\mathbb{R}^n)$ is the Heisenberg-Weyl operator; explicitly,
$$\widehat{T}(z_0)\,\psi(x)=e^{i\left(\xi_{x_0}\cdot x-\frac{1}{2}\xi_{x_0}\cdot x_0\right)}\,\psi(x-x_0).$$
Remark 4.1. The Heisenberg-Weyl operators $\widehat{T}(z_0)$ are unitary operators associated with the self-adjoint Hamiltonians $\sigma(\hat{z}_x,z_0)$. In fact, one can easily check that the unique solution of the initial value problem $i\,\partial_t\psi(t)=\sigma(\hat{z}_x,z_0)\,\psi(t)$, $\psi(0)=\psi_0$, defines a unitary operator $\widehat{U}(z_0,t)$; hence $\widehat{U}(z_0,t)=e^{-it\sigma(\hat{z}_x,z_0)}$ in $S(\mathbb{R}^n)$ and, since both operators are continuous, the identity also holds in $L^2(\mathbb{R}^n)$. Finally, it is trivial to check that $\widehat{T}(z_0)=\widehat{U}(z_0,1)$.
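A short verification of Remark 4.1 is sketched below; it assumes the sign convention σ(z x , z' x ) = ξ x · x' − ξ' x · x for the symplectic form (which is not fixed explicitly above), so it should be read as an illustration under that assumption.

% Verification sketch for Remark 4.1 (assuming \sigma(z,z') = \xi_x\cdot x' - \xi_x'\cdot x, hbar = 1).
% Candidate solution of  i\,\partial_t \psi(t) = \sigma(\hat{z}_x, z_0)\,\psi(t),  \psi(0) = \psi_0:
\[
  \widehat{U}(z_0,t)\,\psi_0(x)
  \;=\; e^{\,i\left(t\,\xi_{x_0}\cdot x \;-\; \frac{t^2}{2}\,\xi_{x_0}\cdot x_0\right)}\,
        \psi_0(x - t\,x_0).
\]
% With \sigma(\hat{z}_x, z_0) = -i\,x_0\cdot\partial_x - \xi_{x_0}\cdot x, differentiating in t
% shows that both sides of the evolution equation coincide; setting t = 1 recovers the
% Heisenberg-Weyl operator \widehat{T}(z_0) written above, so \widehat{T}(z_0) = \widehat{U}(z_0,1).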
The Weyl correspondence satisfies the metaplectic covariance property, which will be useful in section 5. Let $\hat{a}^W\longleftrightarrow a$, let $S\in Sp(2n,\sigma)$ and let $\hat{S}\in Mp(2n,\sigma)$ be (one of the two) metaplectic operators that project onto $S$. Then $\hat{S}^{-1}\,\hat{a}^W\,\hat{S}=(a\circ S)^W$. For a proof see [28,32].
4.2. Weyl calculus in phase space. In this section we consider the extensions of $\hat{a}^W$ to $H_\chi$ and $S(\mathbb{R}^{2n})$. We will focus on the case where $\chi\in S(\mathbb{R}^n)$, so that $T_\chi[\psi]\in S(\mathbb{R}^{2n})$ for all $\psi\in S(\mathbb{R}^n)$. Let us then introduce the notation $S_\chi:=T_\chi\left[S(\mathbb{R}^n)\right]\subset S(\mathbb{R}^{2n})$ and consider the extension of $T_\chi$ to $S'(\mathbb{R}^n)$, defined by duality through $\langle T_\chi[a],t\rangle:=\langle a,T^*_\chi t\rangle$, $\forall t\in S(\mathbb{R}^{2n})$, which is well defined because $\chi\in S(\mathbb{R}^n)$.
By a natural generalization of Definition 2.7 (to operators of the form â : S(R n ) → S ′ (R n )), the extension of $\hat{a}^W$ to $S_\chi\subset S(\mathbb{R}^{2n})$ is given by
$$\hat{A}^W_\chi=\left(\tfrac{1}{2\pi}\right)^n\int_{\mathbb{R}^{2n}}F_\sigma a(z_0)\,T_\chi\widehat{T}(z_0)T^*_\chi\,dz_0,$$
and its action is well defined since
$$T_\chi\widehat{T}(x_0,\xi_{x_0})T^*_\chi\,\Psi(x,p)=e^{i\left(\xi_{x_0}\cdot x-\frac{1}{2}\xi_{x_0}\cdot x_0\right)}\,\Psi(x-x_0,p)$$
belongs to $S(\mathbb{R}^{2n})$ for all $z_0=(x_0,\xi_{x_0})$. Now consider the following Definition 4.3. Let the phase space Heisenberg-Weyl operator be defined by
$$\widehat{T}_{PS}(z_0)\,\Psi(x,p):=e^{i\left(\xi_{x_0}\cdot x-\frac{1}{2}\xi_{x_0}\cdot x_0\right)}\,\Psi(x-x_0,p).$$
Notice that $\widehat{T}_{PS}(z_0)$ coincides with $T_\chi\widehat{T}(z_0)T^*_\chi$ on $H_\chi$. Remark 4.4. From Remark 4.1 it is trivial to conclude that $\widehat{T}_{PS}(z_0)=e^{-i\sigma(\hat{Z}_x,z_0)}$, where $e^{-it\sigma(\hat{Z}_x,z_0)}$ ($\hat{Z}_x=(\hat{X},\hat{\Xi}_x)$, cf. Remark 2.8) is the one-parameter unitary evolution group with infinitesimal generator $\sigma(\hat{Z}_x,z_0)$. Moreover, from the definition of $\widehat{T}_{PS}(z_0)$ we also realize that it acts only on the $x$-dependence of $\Psi$, leaving the variable $p$ untouched. It follows that $\hat{A}^W_\chi$ can be written as the restriction to $S_\chi$ of
$$\left(\tfrac{1}{2\pi}\right)^n\int_{\mathbb{R}^{2n}}F_\sigma a(z_0)\,\widehat{T}_{PS}(z_0)\,dz_0.$$
Now note that the functional form of the previous operator is independent of χ and that its action can be consistently extended to $S(\mathbb{R}^{2n})$. This naturally suggests defining
$$\hat{A}^W:=\left(\tfrac{1}{2\pi}\right)^n\int_{\mathbb{R}^{2n}}F_\sigma a(z_0)\,\widehat{T}_{PS}(z_0)\,dz_0\qquad\text{on }S(\mathbb{R}^{2n}).$$
We now prove that $\hat{A}^W$ is indeed a Weyl operator, whose restriction to $S_\chi$ satisfies $\hat{A}^W|_{S_\chi}=\hat{A}^W_\chi$ for all $\chi\in S(\mathbb{R}^n)$; that is, formally, $\hat{A}^W$, given by eq. (4.12), is a Weyl operator with symbol $A=a\otimes 1$, in coordinates (4.14) $A(x,p,\xi_x,\xi_p)=a(x,\xi_x)$, and so $A\in S'(\mathbb{R}^{2n}\oplus\mathbb{R}^{2n})$.
Proof. Let $\Psi\in S(\mathbb{R}^{2n})$. Since $\widehat{T}_{PS}(z_0)\Psi(z)\in S(\mathbb{R}^{2n})$ for all $z$ and $F_\sigma a\in S'(\mathbb{R}^{2n})$, the operator $\hat{A}^W$, given by eq. (4.13), is well-defined. Performing the substitution $x'=x-x_0$, the action of $\hat{A}^W$ can be written in kernel form, with a kernel $K_A\in S'(\mathbb{R}^{2n}\times\mathbb{R}^{2n})$, where the integral is interpreted in the distributional sense. Comparing this expression with eq. (4.4) we find that $K_A(x,p;x',p')=K_a(x,x')\,\delta(p-p')$, where $K_a\in S'(\mathbb{R}^n\times\mathbb{R}^n)$ is the kernel of the Weyl operator $\hat{a}^W$. From eq. (4.2) it follows that the Weyl symbol of $\hat{A}^W$ is $A=a\otimes 1$. Let us prove the first intertwining relation. For every $\psi\in D(\hat{a}^W)=S(\mathbb{R}^n)$ we have $T_\chi[\psi]\in S_\chi$ and so $\hat{A}^W T_\chi[\psi]=T_\chi[\hat{a}^W\psi]$, where we used the identity $T^*_\chi T_\chi=1$ in $S(\mathbb{R}^n)$. The second intertwining relation follows from a similar computation. The spectral results for the operators $\hat{A}^W$ follow from the ones for $\hat{a}^W$ by a direct application of the previous Theorem.
Corollary 4.8. Let $\hat{a}^W$ and $\hat{A}^W$ be the Weyl and phase space Weyl operators associated with the symbol $a\in S(\mathbb{R}^{2n})$, respectively. Then (i) the eigenvalues of $\hat{a}^W$ and $\hat{A}^W$ are the same.
To conclude the proof of (i) we just notice that for every $\Psi_\lambda\in S(\mathbb{R}^{2n})\setminus\{0\}$ there is always some $\chi\in S(\mathbb{R}^n)$ such that $T^*_\chi[\Psi_\lambda]\neq 0$.
Moyal representation
In this section we construct another phase space representation of quantum mechanics, which is intimately connected with the deformation quantization of Bayen et al. [5,6]. Namely, the eigenvalue equation (for a generic Weyl operator in this representation) is just the Moyal ⋆-genvalue equation (for the Weyl symbol of that operator) and its solutions are thus the ⋆-genfunctions (Theorem 5.8 and Corollary 5.9). Moreover, the Schrödinger equation (in this representation) can be written in terms of the Moyal star product and the Weyl symbol of the Hamiltonian operator (Corollary 5.10). For these reasons we shall call it the Moyal representation. This formulation of quantum mechanics has been studied before (also for the more general case where the canonical structure is given by the extended Heisenberg algebra [13]) using a set of partial isometries L 2 (R n ) → L 2 (R 2n ) (the windowed wavepacket transform, familiar from time-frequency analysis [22]) mapping the standard Schrödinger configuration space representation into the Moyal phase space representation [18,19,20,21]. Here we will follow a different approach by showing that the Schrödinger phase space representation and the Moyal representation are related by the unitary (in fact metaplectic) transformation U associated with the symplectic transformation eq. (1.9). This result allows us to translate the Weyl pseudo-differential calculus and the spectral and dynamical results of the Schrödinger phase space representation directly into the Moyal representation.
In the next subsection we will determine the unitary transformation U explicitly and show that its action on Ψ = T χ [ψ] ∈ S χ is nothing else but the cross Wigner function associated with the density matrix element |ψ⟩⟨χ|. In subsection 5.2 we use the transformation U to determine the Weyl pseudo-differential calculus in the Moyal representation. Finally, in subsection 5.3 we construct the eigenvalue and dynamical equations in this representation and study their relation with the deformation quantization formulation.
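For orientation, the following chain of equivalences summarizes how the spectral problem is transported between the three pictures; it is assembled from statements already quoted above (Theorem 2.10, the intertwining property of the cross Wigner function, and the star-product form of the Moyal representation), so it is a summary rather than an independent result.

% Spectral correspondence between the three representations (summary):
\[
  \hat{a}\,\psi_\lambda = \lambda\,\psi_\lambda
  \;\Longleftrightarrow\;
  \hat{A}\,\bigl(T_\chi[\psi_\lambda]\bigr) = \lambda\,T_\chi[\psi_\lambda]
  \;\Longleftrightarrow\;
  a \star W(\psi_\lambda,\chi) = \lambda\, W(\psi_\lambda,\chi),
\]
% where the middle equivalence is Theorem 2.10, and the last one uses that
% U T_\chi[\psi_\lambda] is proportional to the cross Wigner function W(\psi_\lambda,\chi)
% and that, in the Moyal representation, Weyl operators act by left star-multiplication
% with their symbols.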
Unitary transformation.
In this section we determine the one-parameter quantum evolution groups generated by the operators Ĥ D and Ĥ R , which are obtained after quantizing the Hamiltonians (1.10, 1.11), and use them to determine the explicit form of the unitary operator U. Note that the operators X̂, P̂, Ξ̂ x , Ξ̂ p are given in the phase space Schrödinger representation (5.2): X̂ = x = (x 1 , ..., x n ), P̂ = p = (p 1 , ..., p n ), Ξ̂ x = −i∂ x , Ξ̂ p = −i∂ p . The self-adjointness of Ĥ D and Ĥ R immediately follows from the fact that they are polynomial differential expressions which are formally self-adjoint [15]. We then have: Theorem 5.2. The one-parameter quantum evolution group U D (s), s ∈ R, generated by Ĥ D is the unique solution of the corresponding initial value problem on S(R 2n ) and, since both operators are continuous (and S(R 2n ) is dense in L 2 (R 2n )), the identification extends to L 2 (R 2n ). Theorem 5.3. The one-parameter unitary evolution group U R (θ) generated by Ĥ R is expressed in terms of the partial and inverse partial Fourier transforms, defined as unitary operators on L 2 (R 2n ).
Proof.
Since Ĥ R is self-adjoint and S(R 2n ) ⊂ D(Ĥ R ), the unique solution of the initial value problem is given by the unitary evolution generated by Ĥ R . One checks that the candidate solution, obtained by composing Ψ 0 with the rotation x(θ) = x cos θ + ξ p sin θ , ξ p (θ) = −x sin θ + ξ p cos θ, always remains in S(R 2n ). Hence, the unique solution of (5.5) is given by U R (θ); it is trivial to check that U R (θ) is a linear, unitary (so bounded and continuous) operator. Since the identity holds on S(R 2n ) by (5.6), and both operators are continuous (and S(R 2n ) is dense in L 2 (R 2n )), it extends to L 2 (R 2n ). We are now in a position to define the unitary operator that corresponds to the symplectic transformation eq. (1.9). Corollary 5.4. Consider the unitary operator U obtained by composing the Ĥ D -evolution (up to s = ln √2) with the Ĥ R -evolution (up to θ = π/4). Then U acts on S χ in such a way that, for Ψ = T χ [ψ], the result is (up to a factor of (2π) n/2 ) the cross Wigner function associated with the density matrix element |ψ⟩⟨χ|, i.e. U T χ [ψ] = W (ψ, χ).
Finally, we consider the action of the unitary transformation on the fundamental operators. Theorem 5.5. The operator U maps the Schrödinger phase space representation (5.2) into the Moyal representation (5.10) of the Heisenberg algebra on L 2 (R 2n ). Proof. We first notice that the operators Ĥ D and Ĥ R are self-adjoint and quadratic. Hence, the solution of the Heisenberg equations of motion coincides with the classical solution for both Ĥ = Ĥ D and Ĥ = Ĥ R . The operators (5.10) are the Ĥ R -evolution (up to t = π/4) of the Ĥ D -evolution (up to t = ln √2) of X̂, P̂, Ξ̂ x and Ξ̂ p . The solution (5.10) then follows from the equivalent classical solutions that were obtained in section 1.1; this can be checked, for instance, for X̂ by a direct computation. Finally, since the transformation is unitary, the commutation relations are preserved and the operators (5.10) provide a phase space representation of the Heisenberg algebra.
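For the reader's convenience, the standard Bopp shift form of the Moyal representation is recalled below; since the displayed equation (5.10) is only referred to above, the expressions given here follow the convention of refs. [8,19,20] (ℏ = 1) and should be read as an assumption about that convention rather than a restatement of (5.10).

% Standard Bopp shift realization of the Moyal representation (hbar = 1): left
% star-multiplication by the symbols x and p acts on \Psi(x,p) \in L^2(R^{2n}) as
\[
  \widetilde{X} \;=\; x \;+\; \frac{i}{2}\,\frac{\partial}{\partial p},
  \qquad
  \widetilde{\Xi}_x \;=\; p \;-\; \frac{i}{2}\,\frac{\partial}{\partial x}.
\]
% One checks directly that [\widetilde{X}_j, \widetilde{\Xi}_{x_k}] = i\,\delta_{jk} and that
%   a(\widetilde{X}, \widetilde{\Xi}_x)\,\Psi = a(x,p) \star \Psi,
% which is the identity invoked in the Introduction for the stargenvalue equation.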
5.2. Moyal-Weyl pseudo-differential calculus. A generic operator in the Moyal representation can be obtained from the corresponding operator in the phase space Schrödinger representation by the action of the unitary transformation U. For the operators $\hat{A}^W$ the action of U yields the Moyal-Weyl pseudo-differential operators $\widetilde{A}^W=U\hat{A}^W U^{-1}$, where the domain of U (5.8) was trivially extended to $S'(\mathbb{R}^{2n})$. We then have Theorem 5.6. The Moyal-Heisenberg-Weyl operator $\widetilde{T}_M(z_0):=U\,\widehat{T}_{PS}(z_0)\,U^{-1}$ is given explicitly by
$$\widetilde{T}_M(z_0)\,\Psi(z)=e^{-i(x_0\cdot p-\xi_{x_0}\cdot x)}\,\Psi\!\left(z-\tfrac{1}{2}z_0\right).\qquad (5.12)$$
Proof. We recall from Remark 4.4 that $\widehat{T}_{PS}(z_0)$ is the unitary operator $\widehat{T}_{PS}(z_0)=e^{-it\hat{H}_{z_0}}\big|_{t=1}$ associated with the infinitesimal generator $\hat{H}_{z_0}=\sigma(\hat{Z}_x,z_0)$. Then $\widetilde{T}_M(z_0)=\widehat{T}(z_0,1)$, and $\Psi(z,t)=\widehat{T}(z_0,t)\Psi_0(z)$ is the unique solution of the corresponding initial value problem (5.13). Since, by eq. (5.10), the transformed generator $U\,\sigma(\hat{Z}_x,z_0)\,U^{-1}$ is a first order differential operator, it is trivial to check that $\Psi(z,t)=e^{-i(x_0\cdot p-\xi_{x_0}\cdot x)t}\,\Psi_0\!\left(z-\tfrac{1}{2}z_0 t\right)$ is a solution of eq. (5.13). It follows that $\widehat{T}(z_0,t)$ is unitary and extends trivially to $L^2(\mathbb{R}^{2n})$. Hence, $\widetilde{T}_M(z_0)=\widehat{T}(z_0,1)$ is, in fact, the unitary transformation given by eq. (5.12).
Proof. A simple proof follows from Theorem 4.2 and the fact that U −1 ∈ Mp(4n, σ x ⊕ σ p ) (because it is a unitary transformation generated by quadratic Hamiltonians) and projects into the symplectic transformation S ∈ Sp(4n; σ x ⊕ σ p ) that was calculated explicitly in section 1.
Deformation Quantization.
In this section we succinctly discuss the relation between the Moyal representation and the deformation quantization of Bayen et al. [5,6]. For a complete presentation the reader should refer to [20,21,18]. Theorem 5.8. Let $\widetilde{A}^W:S(\mathbb{R}^{2n})\to S'(\mathbb{R}^{2n})$ be the Moyal-Weyl operator (5.14) written in terms of the Weyl symbol $a\in S'(\mathbb{R}^{2n})$ of $\hat{a}^W$. Then, for every $\Psi\in S(\mathbb{R}^{2n})$, $\widetilde{A}^W\Psi=a\star\Psi$.
Two immediate corollaries are the following. Corollary 5.9. $\Psi_\lambda$ is the right-stargenfunction of $a$, i.e. it satisfies $\Psi_\lambda\star a=\lambda\,\Psi_\lambda$.
Proof. The result follows from Corollary 4.5 by a direct application of the unitary transformation U (taking into account Corollary 5.10).
|
2012-09-09T22:49:09.000Z
|
2012-09-05T00:00:00.000
|
{
"year": 2012,
"sha1": "ed220b2a70af3c7454ff75f6cbdd35f384073784",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1209.1850",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "b29747da8bec358b2545d093cb965772d0156132",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
}
|
53230371
|
pes2o/s2orc
|
v3-fos-license
|
Berberine alleviates oxidized low‐density lipoprotein‐induced macrophage activation by downregulating galectin‐3 via the NF‐κB and AMPK signaling pathways
Macrophage activation plays a central role in neoatherosclerosis and in-stent restenosis after percutaneous coronary intervention (PCI). Galectin-3, mainly expressed on macrophages, is an important regulator of inflammation. This study aimed to investigate the effects of berberine (BBR) on oxidized low-density lipoprotein (ox-LDL)-induced macrophage activation and galectin-3 expression and their underlying mechanisms. THP-1-derived macrophages were pretreated with BBR prior to stimulation with ox-LDL. Galectin-3 expression was measured by real-time PCR, Western blotting, and confocal microscopy. Macrophage activation was assessed by lipid accumulation, expression of inflammatory cytokines, and CD11b and CD86. Plasma galectin-3 levels were measured in patients undergoing PCI at baseline and after BBR treatment for 3 months. BBR suppressed ox-LDL-induced upregulation of galectin-3 and macrophage activation. Overexpression of galectin-3 counteracted the inhibitory effect of BBR on macrophage activation. BBR activated phospho-AMPK and inhibited phospho-NF-κB p65 nuclear translocation. AMPK inhibition and NF-κB activation abolished the inhibitory effects of BBR on galectin-3 expression and macrophage activation. Combination of BBR and rosuvastatin exerted greater effects than BBR or rosuvastatin alone. However, BBR treatment did not further reduce plasma galectin-3 after PCI in patients receiving standard therapy. In conclusion, BBR alleviates ox-LDL-induced macrophage activation by downregulating galectin-3 via the NF-κB and AMPK signaling pathways.
aggregation, and leukocyte infiltration are the distinctive characteristics of the immediate inflammatory response to stent injury, whereas macrophage infiltration is the main manifestation of delayed in-stent neointimal formation (Libby, Schwartz, Brogi, Tanaka, & Clinton, 1992; Welt & Rogers, 2002; M. Zhang et al., 2014). Clinical studies have demonstrated that in-stent NA and ISR are associated with accumulation of lipid-laden foamy macrophages within the neointima (Jinnouchi et al., 2017; Nakazawa et al., 2011). Classically activated foamy macrophage clusters have been a promising target in the progression of NA and ISR (Nakazawa, Ladich, Finn, & Virmani, 2008; Otsuka et al., 2015).
Berberine (BBR) has been used as part of the traditional Chinese medicine to treat diarrhea and gastrointestinal disorders for centuries in China and Korea (Choi et al., 2003), and it has also been shown to inhibit carcinogenesis (Tsang et al., 2013) and exert antibacterial properties (Roser, Grundemann, Engels, & Huber, 2016). In recent decades, studies have shown that BBR has various beneficial effects on the cardiovascular system, including improvement of insulin resistance (Ye et al., 2016) and inhibition of inflammation (Fan et al., 2015) and atherosclerosis (Chen et al., 2014). However, the exact mechanisms of BBR on macrophage activation deserve further research. In the present study, we investigated the effects of BBR on macrophage activation and galectin-3 expression compared with rosuvastatin and their underlying molecular mechanisms. In addition, we investigated whether additive BBR treatment for 3 months further reduced plasma galectin-3 levels in ACS patients following PCI on top of standard therapy including statin.
| Participants
The population for the study comprised patients who successfully underwent primary or elective PCI for ACS at Xinhua Hospital affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China, between July 1, 2016 and June 30, 2017. Patients with an angiographically visible thrombus within target lesions, cardiogenic shock, New York Heart Association Class III/IV heart failure, abnormal liver or renal function, various inflammatory diseases, or infectious diseases were excluded from the study. A total of 45 ACS patients were included in this prospective study. Patients were single-blinded and divided into two cohorts after PCI according to a 2:1 randomization ratio: 30 patients received 300-mg BBR hydrochloride (Shanghai Sine Pharmaceutical Ltd., Shanghai, China) t.i.d., a common clinical dose, in addition to standard therapy, whereas 15 patients received standard therapy alone. As for standard therapy, all patients received clopidogrel (300-mg loading dose and then 75-mg/day maintenance dose), aspirin (300-mg loading dose and then 100-mg/day maintenance dose), and rosuvastatin (20 mg/day); the use of angiotensin-converting enzyme inhibitors or angiotensin receptor blockers, calcium channel blockers, beta-blockers, and/or antidiabetic therapy (including insulin or oral medication) was decided on an individual basis by the attending physician.
| Biochemical analysis
Blood was taken on the morning before coronary angiography for baseline measurements and again after treatment for 3 months. Serum samples were obtained by centrifuging the blood at 1,600 g for 15 min at room temperature within 30 min of venipuncture, and aliquots were stored immediately at −80°C for future analysis. Total cholesterol (TC) was determined by 3-methoxy-5-methylaniline methods (cholesterol multipurpose liquid reagent), low-density lipoprotein cholesterol (LDL-C) by a direct method (LDL-C kit), high-density lipoprotein cholesterol (HDL-C) by an antibody hindrance homogeneous method (high-density lipoprotein cholesterol kit), and triglycerides (TG) by 3-methoxy-5-methylaniline methods (free glycerol determination kit) on the day of blood collection in the laboratories of Xinhua Hospital. Fasting plasma glucose (FPG) was measured by the hexokinase method (glucose assay kit). All the above reagents were from Wako Pure Chemical Industries, Ltd., Osaka, Japan, and assays were performed on Hitachi 008AS analyzers (Hitachi, Tokyo, Japan) on the day of blood collection in the laboratories of Xinhua Hospital. The Westgard multi-rule quality control method was used as the decision rule for quality control in our laboratory in determining both plasma lipid profiles and FPG. The reference materials for the lipid profile (Bio-Rad, Foster City, California, USA) and FPG (Beckman Coulter, Carlsbad, California, USA) were distributed by the Shanghai Clinical Laboratory Quality Control Center. Serum hypersensitive C-reactive protein (hsCRP) was detected by the particle-enhanced turbidimetric immunoassay. Serum aspartate aminotransferase, alanine aminotransferase, blood urea nitrogen, uric acid, and creatinine were detected using routine biochemical methods in the Central Clinical Laboratory of Xinhua Hospital. All assays were performed in a blinded manner.
| Quantification of plasma galectin-3 levels by enzyme-linked immunosorbent assay
Plasma galectin-3 was measured by enzyme-linked immunosorbent assay kits (eBioscience, San Diego, CA, USA) according to the manufacturer's instructions. All samples were assayed in duplicate, and values were analyzed according to standard curves. The lower detection limit for this assay is 0.005 ng/ml. Blood samples used for this analysis were restricted to a single freeze-thaw cycle.
| RNA isolation and real-time PCR
Total RNA was isolated from cells using Trizol (Invitrogen, Carlsbad, CA, USA). Real-time PCR was performed to determine gene expression of galectin-3, interleukin-6 (IL-6), tumor necrosis factor-α (TNF-α), and interleukin-1β (IL-1β). The primer sequences are shown in Table 2. GAPDH was used as an endogenous control. All samples were normalized to internal controls, and the relative expression level was calculated using the 2^−ΔΔCt analysis method.
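For clarity, a small worked example of the 2^−ΔΔCt calculation is given below; the Ct values are purely hypothetical and are not data from this study.

% Worked example of the 2^{-\Delta\Delta Ct} method (hypothetical Ct values).
% Treated sample:  Ct(galectin-3) = 24.0, Ct(GAPDH) = 18.0  =>  \Delta Ct = 6.0
% Control sample:  Ct(galectin-3) = 26.0, Ct(GAPDH) = 18.5  =>  \Delta Ct = 7.5
\[
  \Delta\Delta Ct = \Delta Ct_{\mathrm{treated}} - \Delta Ct_{\mathrm{control}} = 6.0 - 7.5 = -1.5,
  \qquad
  2^{-\Delta\Delta Ct} = 2^{1.5} \approx 2.8,
\]
% i.e. an approximately 2.8-fold higher galectin-3 transcript level in the treated
% sample relative to the control, after normalization to the GAPDH endogenous control.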
| Confocal microscopy
For confocal microscopy, different groups of THP-1-derived macrophages were blocked with 1% bovine serum albumin (Sigma-Aldrich)/PBS for 1 hr at room temperature, and cells were then incubated for 1 hr at room temperature with a rabbit anti-human galectin-3 antibody (1:100; Abcam). After being washed with PBS containing 0.1% Tween-20, samples were incubated with a secondary antibody (Alexa Fluor 647 mouse anti-rabbit IgG at 1:200; Invitrogen) for 2 hr at room temperature. Following fixation, the cell nucleus was stained with 4′,6-diamidino-2-phenylindole (eBioscience). Cells were then examined on an LSM 510 confocal laser scanning microscope (Carl Zeiss Inc., Maple Grove, Minnesota, USA).
| Flow cytometry
The expression of CD86 and CD11b on the surface of macrophages was determined by flow cytometry. After the removal of medium from the wells, cells were collected, incubated with FcR blocking reagent and stained directly with allophycocyanin (APC)-conjugated anti-human CD86 antibody.

Baseline characteristics were not significantly different between the two groups. There were no significant differences in plasma galectin-3, liver and renal function, and coronary angiography details between the two groups.
| Plasma galectin-3 levels, serum lipids, and inflammatory markers of ACS patients after 3 months
Plasma levels of galectin-3 after 3 months were significantly reduced both in the control group, which received standard therapy alone, and in the additive BBR-treated group compared with baseline levels (both p < 0.05; Figure 1a). The decrease in hsCRP in the additive BBR-treated group was significant (p < 0.05), whereas the decrease in hsCRP in the control group did not reach significance (p > 0.05; Figure 1b).
Compared with baseline values, serum levels of TC and LDL-C after 3 months were significantly reduced in the additive BBR-treated group (p < 0.001; Figure 1c,e) and in the control group receiving standard therapy alone (p < 0.01; Figure 1c,e). Serum levels of TG after 3 months were significantly reduced in the additive BBR-treated group (p < 0.001; Figure 1d) but not in the control group. A mild elevation in HDL-C was observed in both groups (p < 0.05; Figure 1f).
| Safety of BBR
Compared with baseline values, alanine aminotransferase, aspartate aminotransferase, blood urea nitrogen, creatinine, uric acid, and estimated glomerular filtration rate after 3 months were not changed in either the BBR group or the control group (p > 0.05; Figure S1), suggesting that BBR has no adverse effect on liver and renal function. No patients from the BBR-treated group complained of any side effects.
Overall, BBR at the dose administered in the present study is very safe in ACS patients.
| Overexpression of galectin-3 counteracted the inhibitory effect of BBR on macrophage activation
We then investigated whether manipulation of the gene expression of galectin-3 could interfere with the inhibitory effects of BBR on macrophage activation.
| Role of the AMPK and NF-κB signaling pathways in the BBR-mediated decrease in galectin-3 expression and macrophage activation
We next examined the involvement of signaling pathways in the regulation of galectin-3 and macrophage activation by BBR. First, we examined the regulation of signaling pathways in macrophages by BBR. THP-1-derived macrophages were pretreated with BBR, rosuvastatin, or the combination of BBR and rosuvastatin for 1 hr, and signaling pathway proteins (p-AMPK, AMPK, NF-κB p65, and p-p65) were examined by Western blotting at 30 min after the induction by ox-LDL (Figure 6a,b). We found that p-AMPK expression was upregulated by BBR and by the combination of BBR and rosuvastatin, but not by rosuvastatin alone (Figure 6a). Intranuclear p-p65 was downregulated by both BBR and rosuvastatin (Figure 6b).

Studies of optical coherence tomography proved that NA and ISR lesions were associated with the presence of activated lipid-laden foamy macrophages (Jinnouchi et al., 2017; Nakazawa et al., 2011). Thus, inhibiting macrophage infiltration and foam cell formation may limit NA and ISR. Over the last few years, our team has made great efforts to investigate the effects of BBR on macrophage activation and atherosclerosis. We first demonstrated that BBR markedly inhibited matrix metalloproteinase 9 in PMA-induced macrophages (Huang et al., 2011). We then found that BBR treatment for 30 days further decreased circulating levels of matrix metalloproteinase 9, intercellular adhesion molecule-1, and vascular cell adhesion molecule-1 in ACS patients after PCI compared with the standard therapy alone (Meng et al., 2012). In ApoE(−/−) mice, we found that BBR derivatives (dhBER and Di-MeBER) inhibited inflammatory responses and reduced plaque size and vulnerability (Chen et al., 2014). In the present study, we further examined the effect of BBR on ox-LDL-induced activation of THP-1-derived macrophages and demonstrated that BBR suppressed ox-LDL-induced macrophage activation, as indicated by reduced lipid accumulation and decreased expression of inflammatory cytokines (TNF-α, IL-1β, and IL-6) and of CD11b and CD86 on macrophages. Consistently, a very recent paper found that BBR suppressed inflammatory responses in RAW 264.7 macrophages (H. Zhang, Shan, et al., 2017).
Galectin-3, abundantly expressed on macrophages and lipid-laden foam cells (Bekkers et al., 2010; van der Veer et al., 2007), contributes to atherosclerotic progression by enhancing the recruitment of monocytes and macrophages and amplifying the inflammatory state through macrophage activation in atherosclerotic lesions (Papaspyridonos et al., 2008; Taylor et al., 2004). It has been shown that both genetic and pharmacological inhibition of galectin-3 reduces atherosclerotic lesions and slows atherosclerotic plaque progression in ApoE knockout mice (Funaro et al., 2011; Hamirani et al., 2014). Plasma galectin-3 has also been reported as a major predictor of cardiovascular mortality in ACS patients following PCI (Ito, 2006; Kloner, 2011; Kumar et al., 2011). In the present study, we demonstrated that BBR downregulated ox-LDL-induced galectin-3 expression on macrophages, and knockdown of galectin-3 abrogated, whereas overexpression of galectin-3 enhanced, the effects of ox-LDL on macrophage activation. Overexpression of galectin-3 counteracted the inhibitory effect of BBR on macrophage activation. These results suggest that galectin-3 mediates the effect of BBR on macrophage activation. However, we did not find that additive BBR treatment for 3 months further reduced plasma galectin-3 levels in ACS patients following PCI on top of standard therapy including statin therapy. Such discrepancy between the in vitro experiments and the clinical study deserves some comments. First, plasma galectin-3 derives from different cell types and tissues (Payne et al., 2011). Second, THP-1-derived macrophages were pretreated with BBR before stimulation with ox-LDL, but BBR was given to patients with established coronary artery disease and after PCI. Notably, the sample size of our patients was quite small. AMPK activation improves macrophage cholesterol homeostasis in mice (Fullerton et al., 2015), inhibits PMA-induced monocyte-to-macrophage differentiation (Vasamsetti et al., 2015), and attenuates atherosclerosis by enhancing the anti-atherogenic effects of high-density lipoproteins (Ma, Wang, Yang, An, & Zhu, 2017) and reducing atheroma-inducing macrophage formation in ApoE(−/−) mice (Wang, Ma, Zhao, & Zhu, 2017). In the present study, phospho-AMPK protein was downregulated in ox-LDL-induced macrophages, and this downregulation was rescued by BBR. Pretreatment with compound C (a specific inhibitor of AMPK) abolished the effect of BBR on galectin-3 protein and macrophage activation. The NF-κB signaling pathway is commonly regarded as a key pathological mechanism of atherosclerosis and other inflammatory diseases (Pamukcu, Lip, & Shantsila, 2011; Yu, Zheng, & Tang, 2015).

FIGURE 5 Overexpression of galectin-3 counteracted the inhibitory effect of berberine (BBR) on macrophage activation. THP-1 cells were infected with lentivirus-Gal-3 or lentivirus-negative control (NC) and then induced to differentiate into macrophages by phorbol 12-myristate 13-acetate. THP-1-derived macrophages were pretreated with phosphate-buffered saline (PBS) or BBR (25 μM) for 1 hr and then induced by 100 μg/ml oxidized low-density lipoprotein (ox-LDL) for 24 hr. (a) Lipid accumulation was confirmed by oil red O staining. (b) IL-6, TNF-α, and IL-1β expression in macrophages was evaluated by real-time PCR. (c, d) CD11b and CD86 on macrophages were measured by flow cytometry. Data are represented as mean ± SD. n ≥ 3. * p < 0.05, ** p < 0.01, and *** p < 0.001 versus ox-LDL group. Scale bar = 100 μm. NS: non-significant
In the present study, BBR also suppressed ox-LDL-induced p-p65 protein nuclear translocation. Activating the NF-κB signaling pathway by prostratin counteracted the inhibitory effect of BBR on galectin-3 expression and macrophage activation. Dumic, Lauc, and Flogel (2000) found that NF-κB p65 was involved in the regulation of galectin-3 expression in a transcriptional manner. Taken together, the effects of BBR on galectin-3 and macrophage activation are mediated by the AMPK and NF-κB signaling pathways.
FIGURE 6 Berberine (BBR) downregulated galectin-3 expression by inhibiting NF-κB and activating the AMPK signaling pathway. (a-d) Macrophages were pretreated with phosphate-buffered saline (PBS), BBR (25 μM), rosuvastatin (25 μM), or the combination of BBR (25 μM) and rosuvastatin (25 μM) for 1 hr and then stimulated by 100 μg/ml oxidized low-density lipoprotein (ox-LDL) for 30 min. Western blots and quantification of (a) AMPK and phospho-AMPK (p-AMPK) and (b) NF-κB p65 and phospho-NF-κB p65 (p-p65). Protein and mRNA expression of galectin-3 measured by Western blotting and real-time PCR on THP-1-derived macrophages pretreated with PBS, BBR, rosuvastatin, or BBR and rosuvastatin, in the presence or absence of (c and d) compound C (10 μg/ml) or (e and f) prostratin (10 μg/ml) for 1 hr before the induction by 100 μg/ml ox-LDL for 24 hr. Galectin-3 was normalized by GAPDH levels. Phospho-AMPK (Thr172) was normalized by total AMPK. p-p65 (Ser536) was normalized by NF-κB p65. GAPDH was the indicator for cytoplasmic protein. Histone H3 was the indicator for nuclear protein. β-Actin was the indicator for both cytoplasmic and nuclear protein. Data are represented as mean ± SD. n ≥ 3. * p < 0.05, ** p < 0.01, *** p < 0.01 versus ox-LDL group; # p < 0.05 versus ox-LDL + BBR group; & p < 0.05 versus ox-LDL + rosuvastatin group. NS: non-significant

Statins not only lower lipids effectively but also have beneficial cardiovascular pleiotropic effects, including inhibiting inflammatory responses, protecting the blood vessel endothelium, and stabilizing atherosclerotic plaques. Statins attenuate plaque vulnerability by reducing the number and activity of macrophages in the blood vessel intima and downregulating inflammatory and thrombotic signals in the endothelium. Statins appear to be safe for use in the majority of patients, but a proportion of patients are still at risk of side effects such as myopathy, hepatic damage, and diabetes from long-term statin use (Patel, Martin, & Banach, 2016). BBR is a safe Chinese medicine without adverse effects on liver and kidney and can also improve insulin resistance (Ye et al., 2016). In the present study, we found that BBR and rosuvastatin had similar effects on downregulating galectin-3 and inhibiting macrophage activation. However, although both BBR and rosuvastatin inhibited p-p65 nuclear translocation, BBR, but not rosuvastatin, activated phosphorylation of AMPK. Activating the NF-κB signaling pathway counteracted the inhibitory effects of both BBR and rosuvastatin on galectin-3 expression and macrophage activation.
However, blocking AMPK only abolished the effect of BBR, but not of rosuvastatin, on inhibiting galectin-3 expression and macrophage activation. So the underlying mechanisms for the actions of BBR and statins may be a bit different. Moreover, we proved that combined treatment of BBR and rosuvastatin exerted greater effect than BBR or rosuvastatin alone. So our results suggest that BBR could be an effective alternative drug for those who cannot tolerate statins, and BBR may provide additional benefits in preventing ISR or NA for ACS patients who undergo PCI procedure and receive standard therapy.
There are several limitations of this study. First, as mentioned above, the sample size of our patients was small. It is reasonable to call for more information from a larger patient base to further study the issue. Second, we did not collect follow-up data to examine whether BBR had a preventive effect on NA or ISR, which warrants further investigation. In addition, because BBR was used as a pretreatment before the stimulation of ox-LDL, the effects of BBR on macrophage activation and galectin-3 expression are protective rather than therapeutic. Whether BBR has therapeutic effects on macrophage activation and galectin-3 expression is yet to be explored.

FIGURE 8 Prostratin counteracted the inhibitory effect of berberine (BBR) on oxidized low-density lipoprotein (ox-LDL)-induced macrophage activation. THP-1-derived macrophages were pretreated with phosphate-buffered saline (PBS), BBR (25 μM), rosuvastatin (25 μM), or BBR (25 μM) and rosuvastatin (25 μM), in the presence or absence of prostratin (10 μg/ml) for 1 hr and then induced for 24 hr by 100 μg/ml ox-LDL. (a) Lipid accumulation was confirmed by oil red O staining. (b) IL-6, TNF-α, and IL-1β expression in macrophages was evaluated by real-time PCR. (c, d) CD11b and CD86 on macrophages were measured by flow cytometry. Data are represented as mean ± SD. n ≥ 3. ** p < 0.01 and *** p < 0.001
In conclusion, BBR alleviates ox-LDL-induced macrophage activation by downregulating galectin-3 via suppressing NF-κB and activating the AMPK signaling pathway, whereas the combination of BBR and rosuvastatin exerts greater effects than rosuvastatin alone.
|
2018-11-15T17:36:51.512Z
|
2018-11-06T00:00:00.000
|
{
"year": 2018,
"sha1": "8ab23989d68aa31496c7bed322fe823270982156",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ptr.6217",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "8ab23989d68aa31496c7bed322fe823270982156",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
}
|
182100926
|
pes2o/s2orc
|
v3-fos-license
|
Coronary Artery Diffuse Aneurysmal Dilation in an Acute Myocardial Infarction Patient
Coronary artery aneurysm (CAA) is a rare disease that is associated with dangerous dormant complications. It is associated with atherosclerotic heart disease in half of the cases during a coronary angiogram. Currently, there are no guidelines for the management of such cases. We present a case of acute ST-segment elevation myocardial infarction in a male patient who was found to have diffuse aneurysmal dilation of the coronary arteries along with 100% occlusion of the right coronary artery. The complexity of the lesions caused him not to be a candidate for either percutaneous or surgical intervention. This raises an important question regarding treatment options in such a rare case.
Introduction
Coronary artery aneurysm (CAA) is an uncommon anomaly defined as a segment of the coronary artery being more than 1.5 times the normal adjacent segment [1]. CAA is associated with coronary artery atherosclerosis that can result in hazardous outcomes. We report the case of a male patient who presented with an ST-segment elevation myocardial infarction and was found to have diffuse aneurysmal dilation of the coronary arteries along with a 100% occlusion of the right coronary artery. The unusual complexity of our case raises important questions regarding the patient's treatment options.
Case Presentation
A 66-year-old male, former smoker, with a past medical history of hypertension, obesity, and obstructive sleep apnea presented with chest pain of three hours duration prior to presentation. Electrocardiogram (EKG) revealed an ST-segment elevation myocardial infarction in the inferior leads (Figure 1). The patient was rushed to cardiac catheterization for an angiogram. The angiogram revealed large aneurysmal dilation of the left main coronary artery (LMCA) (Figure 2). The left anterior descending (LAD) and left circumflex (LCX) coronary arteries showed diffuse aneurysmal dilation affecting the entire length of the vessels (Figure 2). There was plaque rupture and dissection in the proximal LAD (Figure 3). The right coronary artery (RCA) showed proximal aneurysmal dilation with 100% occlusion and a large thrombus, which was the culprit lesion (Figure 4). The lesion was not amenable to intervention, as its diameter was larger than the available cardiac stents. Moreover, due to the complexity of the patient's lesion, he was not a candidate for surgical intervention. An intra-aortic balloon pump (IABP) was inserted to maintain the patient's blood pressure, and he was started on medical management with heparin infusion, dual antiplatelet therapy (DAPT), a beta-blocker (BB), and a high-intensity statin. An echocardiogram revealed moderate left ventricular systolic dysfunction with an ejection fraction of 35%. Although there are no guidelines for the management of such a rare case, heparin was switched to rivaroxaban for long-term anticoagulation and the IABP was removed. The patient was stable during the hospital stay and was discharged for outpatient follow-up.
Discussion
Coronary artery aneurysm (CAA) is not a common pathology but a potentially serious disease. It was first described by Morgagni in 1761 [2]. CAA is defined as a segment of the coronary artery with a diameter more than 1.5 times the adjacent normal segment. The term ectasia is used when there is diffuse dilation that involves at least 50% of the vessel length. Its prevalence varies between studies and falls between 1.2% and 7.4% [3][4][5]. The pathology doesn't affect the coronary arteries equally. The most frequently involved artery is the right coronary artery followed by the left anterior descending coronary artery and the left circumflex artery [1].
CAA can occur with no obvious background disease in a small percentage of cases. However, CAA is associated with atherosclerotic heart disease in half of the cases found during an angiogram. It is presumed that CAA is a form of coronary artery disease [1,4,6,7]. CAA is also associated with vasculitis (Kawasaki disease, Takayasu's arteritis, and polyarteritis nodosa), systemic lupus erythematosus, Marfan syndrome, Ehlers-Danlos syndrome, trauma, and some infections [7,8]. The exact pathology is unknown; however, some possible mechanisms, such as genetic predisposition and arterial wall damage, have been suggested [7].
The literature on CAA is limited to case reports and review articles. Patients who have CAA associated with atherosclerosis usually present with angina and myocardial infarction. Also, life-threatening complications such as rupture, vasospasm, and thromboembolism can occur [7]. An angiogram of the coronary arteries is the gold standard to diagnose CAA [6]; however, multi-slice computed tomography (MSCT) coronary angiogram can be an alternative diagnostic method [9]. There are a limited number of case reports about CAA that are associated with acute ST-segment elevation myocardial infarction [10][11][12][13][14].
Our case is unique, given the complexity of the anomalies and lesions presented by the angiogram. Currently, there is limited data regarding the treatment of CAA that is associated with acute myocardial infarction (MI). Other case reports have described cases of CAA with MI but were lacking the complexity of this case. Our patient had a left main coronary artery (LMCA) and left anterior descending (LAD) and left circumflex (LCX) coronary arteries diffuse aneurysmal dilation affecting the entire length of the vessels with rupture of plaque in the proximal LAD. The right coronary artery (RCA) has shown proximal aneurysmal dilation with 100% occlusion and a large thrombus that wasn't amenable to either percutaneous or surgical intervention. The last resort was to start him on anticoagulation with a heparin drip followed by rivaroxaban upon discharge. These kinds of CAA cases associated with acute MI raise important questions regarding management options.
Conclusions
Coronary artery aneurysm is a rare disease found on a small percentage of coronary angiograms. It is associated with coronary artery disease in half of the cases during an angiogram. Acute myocardial infarction in a large coronary aneurysm is a very rare condition. Medical management is the last resort and the role of anticoagulation remains controversial. Our case is of particular interest, as there are no available guidelines for management. Dual antiplatelet therapy (DAPT), anticoagulation, and continuous follow-ups are the available options for such a rare case.
Additional Information Disclosures
Human subjects: Consent was obtained by all participants in this study.
Conflicts of interest:
In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
|
2019-06-07T21:13:35.975Z
|
2019-05-01T00:00:00.000
|
{
"year": 2019,
"sha1": "55e457ec481c8882121feca4c8aa3da044b1519a",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/case_report/pdf/19827/1564429676-20190729-25397-1ut3hw0.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "55e457ec481c8882121feca4c8aa3da044b1519a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
2717987
|
pes2o/s2orc
|
v3-fos-license
|
Bee Diversity in Thailand and the Applications of Bee Products
This chapter provides information on honey bees (genus Apis) and their reasonably close relative group, the stingless bees within the Meliponini Tribe. Their taxonomic position, common morphology and behaviour are defined and explained. Also, a species identification of the four native Thai honey bee species, including the comb and nest structure, worker morphology, species distribution and description of each species behaviour, is summarized. Beyond their role as pollinators, honey bees and stingless bees have important economic, ecological and social values for many rural people in Asia. Especially, wild honey bees are hunted for their products (honey, brood and wax), providing many people with a useful component of household income. Therefore, the applications of bee products, which are important for many rural people in Asia including Thailand, are briefly outlined.
Introduction
This chapter provides information on honey bees (genus Apis) and their reasonably close relative group, the stingless bees within the Meliponini Tribe. Their taxonomic position, common morphology and behaviour are defined and explained. Also, a species identification of the four native Thai honey bee species, including the comb and nest structure, worker morphology, species distribution and description of each species behaviour, is summarized. Beyond their role as pollinators, honey bees and stingless bees have important economic, ecological and social values for many rural people in Asia. Especially, wild honey bees are hunted for their products (honey, brood and wax), providing many people with a useful component of household income. Therefore, the applications of bee products, which are important for many rural people in Asia including Thailand, are briefly outlined.
The genus Apis
Honey bees are classified in the Apini tribe within the subfamily Apinae and family Apidae (Ruttner, 1988). They are part of the large insect order Hymenoptera that includes bees, wasps, ants and sawflies (Gullan & Cranston, 2000). Apis is the only genus of true honey bees and is comprised of the ten Asian species and one Western species (Oldroyd & Wongsiri, 2006). Some of the most discriminate morphological criteria for worker bees of the genus Apis are: the compound eyes covered with erect long hairs, a strongly convex scutellum, the pollen press on the hind leg, the greatly elongated marginal and submarginal cells of the forewing and the jugal lobe in the hind wing (Oldroyd & Wongsiri, 2006). All honey bee species are highly social insects. Oldroyd & Wongsiri (2006) revealed at least three criteria for defining the eusociality form in honey bees that correspond with that of Wilson (1971). First, an individual larva is reared and cared for by a multitude of workers, and no one larva receives special attention compared to the others (of the same caste), except those going to be queens. Second, they have a pronounced reproductive division of labour, which is that one individual monopolizes reproduction (queen) while others are sterile (workers) for most or all of the time. Third, the form of eusociality in honey bees has overlapping generations. Therefore, during the short life span of workers they are surrounded by their sisters and brothers.
Usually, the social structure of a honey bee colony is composed of a single fertile female queen, several thousand sterile female workers, and, at certain times, a few hundred males (drones) (Fig. 1). The queen and workers both develop from fertilized eggs (diploid, 2n = 32) that are heterozygous at the sex locus. Their different and irreversible development trajectories are thus not directly genetically predetermined but rather are determined solely epigenetically (environmentally) by their feeding and other treatments that they receive as larvae. Unlike the queen and workers, drones or functional males are hemizygotes (haploid, n = 16) and develop from unfertilized eggs under the arrhenotokous sex determination system. Note that fertilised eggs that are homozygous at the sex locus will develop as diploid males, but their functionality and fertility is limited. Within a hive, the queen is the only fertile female, so she is the mother of all diploid (queen and worker) members (Crane, 1990), whilst she is typically (under normal circumstances) the mother of all unfertilized eggs (functional drones) as well. Interestingly, a virgin queen can mate with many drones, and so limits the chances of a matched mating (homozygous at the sex locus) and diploid male production (Gould & Gould, 1988). Such a high level of polyandry is especially the case for A. dorsata queens, which have mating frequencies of up to 88.5 (Wattanachaiyingcharoen et al., 2003). Such polyandry, given sperm mixing, leads to asymmetrical levels of genetic relatedness between workers within colonies and has a profound effect on bee biology and on the evolution of sociality in bees (Oldroyd & Wongsiri, 2006). A queen can release twenty or more pheromones from her mandibular gland (Crane, 1990). These queen pheromones are volatile compounds, which are important in ensuring colony cohesion within the nest and the dominance of the single queen that heads the colony. The queen's mandibular gland pheromones induce retinue physiology and behavior in workers (Slessor et al., 1988). For instance, they inhibit the workers' ovary development, leading to non-reproductive females, and stimulate workers to release pheromones (e.g. Nasonov pheromone) attracting other workers. They can also stimulate workers to forage and regulate worker coherence in a swarm or absconding colony (Crane, 1990).
Although workers are typically sterile (in some circumstances some workers can lay unfertilized eggs which, if not removed by other workers, will develop into drones), they perform many activities in a colony. For example, a very young adult worker cleans vacated brood cells. Then, at about five days old, it can feed young larvae and the queen, since the hypopharyngeal glands located in its head are fully active in synthesizing royal jelly. These glands start to degenerate at about 10 days old (Crane, 1990). Next, the worker switches to producing wax for comb building and to cleaning the colony. At about two weeks of age, the venom sac is full (Crane, 1990), and some worker bees become active as guards of the colony. As the workers develop from two to four weeks of age, their hypopharyngeal glands secrete increasing amounts of invertase and glucose oxidase, enzymes used in making honey from nectar (Gould & Gould, 1988). At the final stage, the workers leave the hive to forage for food. Drones are normally fertile (haploid) males. In all Apis spp. except A. dorsata and A. laboriosa, drones are reared in drone cells on the periphery of the brood nest (Oldroyd & Wongsiri, 2006). These cells are similar to the worker cells in shape and orientation, but the hexagonal cells are about three times larger than those of workers (Gould & Gould, 1988). Drones do nothing except leave the colony and mate with a virgin queen, after which they die. The morphology of the honey bee penis (genitalia) is unique to the genus (Michener, 2007), so it is one of the most useful species identification characters (Radloff et al., 2011).
Diversity and distribution of honey bees
From Oldroyd & Wongsiri (2006), three subgenera of honey bees are currently recognized (Table 1), and these differ in the location and structure of their nests. The two dwarf honey bee species of the subgenus Micrapis, A. florea and A. andreniformis, build a single comb surrounding a twig, while the giant honey bees (subgenus Megapis), A. dorsata and A. laboriosa, build a single massive comb under a branch, a cliff overhang, or the eaves or roof of a building. The cavity-nesting honey bees (subgenus Apis), A. mellifera, A. cerana, A. koschevnikovi, A. nuluensis and A. nigrocincta, build multiple-comb nests in cavities. A recent molecular phylogeny (Lo et al., 2010) added two new taxa to the existing genus Apis, one each in the subgenera Megapis and Apis. Based on Bayesian and maximum parsimony phylogenetic trees, their analysis supports recognition of A. indica (the plains honey bee of south India) as a group separate from the more broadly distributed A. cerana. It also supports classification of the giant Philippines honey bee, A. breviligula, as a species separate from the more broadly distributed lowland A. dorsata. Thus, three subgenera and 11 species of honey bee in the genus Apis are now recognized. The distribution of these species is highly uneven (Fig. 2). Interestingly, nine of these 11 honey bee species can be found in the Southeast Asian region, which, combined with molecular phylogenetic estimates of divergence times within the genus, supports the view that Asia is the most likely birthplace of the genus Apis. In Thailand, there are five Apis species: A. andreniformis, A. florea, A. dorsata, A. cerana and A. mellifera (Rattanawannee et al., 2007). The first four species are native to Thailand, but A. mellifera has been introduced by man (anthropogenically) into the country for the apicultural industry (Wongsiri et al., 1996). To distinguish these four native species, Rattanawannee et al. (2010) showed that geometric morphometric analysis of a single wing alone could be used to identify the four Asian honey bee species in Thailand, and that the sex of the individual does not impede identification. A description of each of the four native species in Thailand is provided below. (Table 1 sources: Ruttner, 1988; Oldroyd & Wongsiri, 2006; Lo et al., 2010.)
Dwarf honey bees, subgenus Micrapis
The existence of the two dwarf honey bee species (A. andreniformis and A. florea) as valid biological species is well established (Radloff et al., 2011), although they are very similar in worker and nest sizes. Both build an exposed single-comb colony and may utilize similar resources in similar habitats (Wongsiri et al., 1996). Considering species-specific morphological characters, A. andreniformis workers have black hairs on the hind tibia and the dorsolateral surface of the hind basitarsus, whilst A. florea workers have white hairs instead (Wu & Kuang, 1987). In addition, A. andreniformis workers have black pigment all over, which makes them look the darkest, while A. florea workers have less black pigment and so appear mostly yellow (hence the common name, red dwarf honey bee). The exception is the pigmentation of the scutellum, which is yellowish in A. andreniformis but tends to be black in A. florea (Wongsiri et al., 1996). Furthermore, the abdominal segments of A. andreniformis queens and drones are all black, whilst in A. florea the queens have entirely orange-yellow abdominal segments and the drones have grey abdominal segments with white hairs (Rinderer et al., 1995).
Although the endophalli of both species have a pair of bursal cornua, the morphology of the drone's endophallus differs between the two species. In A. florea the fimbriate lobe has three protrusions with a strongly curved terminal, whilst in A. andreniformis it has six protrusions with a thick and straight terminal. In addition, the thumb-like bifurcation of the basitarsus of the drone's hind leg (Fig. 3) is comparatively longer in A. florea, being just under a half and about two-thirds of the tibia length in A. andreniformis and A. florea, respectively (Wu & Kuang, 1987). Although the nests of both species are very much alike (Figs. 4 and 5), some clear differences in nest architecture are still observed. When viewed from the edge, cells in the honey storage area of A. florea nests are orientated inwards towards the supporting branch (Wongsiri et al., 1996). In a cross section of the crown of an A. florea nest, there are three levels of internal organization. The first level from the edge contains very long cells that extend to the supporting branch. The second level contains cells coming from the opposite side that have their bases at the sides of cells coming from the other side. The third level contains cells coming from the top of the honey storage area that have the same pattern as cells from the second level. However, some cells opening to the top surface have their bases well away from the supporting cell's base (Rinderer et al., 1996). As a consequence of this comb-building process, the crown of A. florea nests does not contain a midrib (Oldroyd & Wongsiri, 2006). These features contrast with the honey storage area of A. andreniformis nests, where a characteristic crest appearance is evident when viewed from the outside. A cross section of the honey storage area of an A. andreniformis nest reveals a clear midrib structure, where the bases of opposing cells come into contact as in the brood area (Rinderer et al., 1996).
Apis andreniformis Smith, 1858
The black dwarf honey bee or small dwarf honey bee, A. andreniformis, is the smallest species in the genus Apis. It is widely distributed in the tropical and subtropical regions of Asia, especially southern China, India, Burma, Laos, Vietnam, Malaysia, Indonesia and the Philippines (Fig. 2). It is found from coastal flats and foothill areas (1-100 m above sea level) up to mountain and forest areas at about 1,600 m altitude (Wongsiri et al., 1996). The economic value of A. andreniformis has not been documented; however, much of the naturally occurring flora within the range of A. andreniformis probably depends on this bee species for pollination (Rinderer et al., 1995). Since A. andreniformis is a rare and patchily distributed species, very little work has been reported on it. Intraspecific variation of A. andreniformis was examined by Rattanawannee et al. (2007), who sampled 27 colonies (for morphometric analysis) and 32 colonies (for genetic analysis) throughout Thailand; in addition, three colonies for morphometric analysis and five colonies for DNA polymorphism were taken from Tenom in Sabah, Malaysia. For morphometry, 20 informative morphometric characters were used to assess the variation. Principal component analysis (PCA) yielded four factor scores. Within the PCA plots, A. andreniformis from across Thailand and Tenom (Malaysia) formed a single group, a conclusion further supported by a dendrogram generated by hierarchical cluster analysis (see the sketch below). However, linear regression analysis showed clinal patterns in the morphometric characters, with the body size of bees increasing from the South to the North, associated with increasing altitude, but decreasing from the West to the East, associated with decreasing altitude. For genetic variation, sequence analysis of a fragment of the mitochondrial cytochrome b (Cyt-b) gene revealed two groups of A. andreniformis populations in Thailand. However, these results are tentative, pending more extensive analyses of samples across the distribution range of A. andreniformis.
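The kind of analysis described above can be sketched in a few lines of code. The following Python example is illustrative only and is not taken from Rattanawannee et al. (2007): the morphometric matrix is randomly generated as a stand-in for real colony measurements, and the numbers of colonies and characters are assumptions chosen merely to mirror the description above.

```python
# Illustrative sketch of a PCA plus hierarchical cluster analysis of
# worker-bee morphometric characters. All data here are synthetic.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(0)

# Hypothetical matrix: 30 colonies x 20 morphometric characters
# (e.g. wing and leg segment lengths), colony means in mm.
X = rng.normal(loc=3.0, scale=0.2, size=(30, 20))

# Standardize characters so large measurements do not dominate.
X_std = StandardScaler().fit_transform(X)

# Principal component analysis; keep the first four factor scores,
# as in the analyses summarized above.
pca = PCA(n_components=4)
scores = pca.fit_transform(X_std)
print("Variance explained:", np.round(pca.explained_variance_ratio_, 3))

# Hierarchical cluster analysis (Ward linkage) on the factor scores;
# the resulting dendrogram is inspected for distinct morphoclusters.
Z = linkage(scores, method="ward")
dn = dendrogram(Z, no_plot=True)
print("Leaf order of colonies in the dendrogram:", dn["leaves"])
```

In a real analysis, the factor scores would be plotted and the dendrogram inspected to decide whether the colonies fall into one or several morphoclusters.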
Apis florea Fabricius, 1787
The red dwarf honey bee, A. florea, is extremely widespread in Asia, extending from Vietnam and southeastern China across mainland Asia along and below the southern Himalayas, westwards to the Plateau of Iran and south into Oman (Fig. 2) (Hepburn & Hepburn, 2005). However, the main habitat of this species is Pakistan, India, Sri Lanka, Thailand, Indochina, Malaysia, part of Indonesia and Palawan, at altitudes below 1,000 m (Ruttner, 1988). Multivariate morphometric analysis using 20 characters to investigate intraspecific variation in 18 samples of A. florea (360 bees) from Sri Lanka, Thailand, Pakistan, Iran and Oman revealed three morphocluster groups (Ruttner, 1988): (i) Sri Lanka and south India, (ii) Thailand and Oman and (iii) Pakistan and Iran. In addition, the body size of A. florea was observed to increase across the study range from the South to the North. A subsequent analysis using 12 morphometric characters of A. florea from 26 localities in southern Iran revealed two morphoclusters: a larger bee group at high latitudes (29°-34°) and a smaller bee group at lower latitudes (<29°) (Tahmasebi et al., 2002). After combining their data with those of Ruttner (1988) and Mogga & Ruttner (1988), these authors also identified three morphoclusters for all A. florea samples. However, information on the geographical contiguity of this honey bee species was still missing (Radloff et al., 2011).
To fill this gap in the geographical coverage of A. florea, Hepburn et al. (2005) performed a multivariate morphometric analysis of 184 colonies (2,923 individual workers) collected from 103 localities across the full distributional area, from Vietnam and southeastern China to Iran and Oman. They concluded that A. florea is a panmictic species comprising three morphoclusters: northwestern, southeastern and an intermediate form. They suggested that the seasonality of reproductive swarming is temporally continuous, allowing gene flow throughout this panmictic species.
In Thailand, Chaiyawong et al. (2004) performed a multivariate morphometric analysis of 50 A. florea colonies (750 worker bees) from different locations throughout the country. From a PCA and cluster analysis of 22 morphometric characters, they found only a single group of A. florea in Thailand. After reducing the number of characters, a degree of isolation from the mainland group was obtained for Samui Island and Pha-ngan Island, but the bees from these locations were regarded simply as variants. This single Thai morphocluster of A. florea is in close agreement with the report of Nanork (2001), who found no variation among sympatric A. florea in Thailand using PCR-RFLP analysis of the Cyt-b-tRNA coding region of the mtDNA.
Apis dorsata Fabricius, 1793
The common giant honeybee, Apis dorsata, is one of three species of the subgenus Megapis.
Neither Ruttner (1988) nor Engels (1999) separated A. dorsata from the closely related species A. laboriosa. However, several lines of evidence demonstrate a difference between these two giant honey bee species. For example, Underwood (1990) reported that the mating flight of Nepalese A. laboriosa drones occurred during 12:30-14:30 h, whereas the A. dorsata drone mating flight occurred just after dusk, during 18:15-18:50 h (Koeniger et al., 1988), suggesting a prezygotic reproductive barrier. Also, the vocal communication dance performed by A. dorsata workers differs from that of the silent A. laboriosa workers (Oldroyd & Wongsiri, 2006; Kirchner et al., 1996). Furthermore, Arias & Sheppard (2005) examined the relationship between the two taxa using molecular data (Oldroyd & Wongsiri, 2006). This giant honey bee is strikingly distinguished from A. dorsata by the black rather than yellow coloration of the abdomen and by the fact that it never forms colony aggregations as A. laboriosa and A. dorsata do (Lo et al., 2010). Therefore, three species of giant honey bee in the subgenus Megapis of the genus Apis are now recognized.
The distribution of A. dorsata covers a vast geographic area in South and Southeast Asia (Fig. 2). To the West, A. dorsata occurs no farther than the Indus river, and to the East it occurs throughout the Philippines and even crosses the Wallace line. The giant honey bee is reported at altitudes of up to 1,000-1,700 m, or even up to 2,000 m during migration (Ruttner, 1988). In Thailand, A. dorsata is the only species of the subgenus Megapis that can be found. Among honey bee species, individual workers of A. dorsata are relatively large, being about 17 mm long. Thus, the giant honey bees are distinguished from the other four honey bee species in Thailand by their much larger body size and their wings, which are fuscous and quite hairy (Oldroyd & Wongsiri, 2006). The fore and hind wings of A. dorsata workers are 12.96 and 8.91 mm long, respectively (Tan, 2007). The body color of A. dorsata workers is yellow, with tergites 2 and 3 being reddish-brown (Crane, 1990). Unlike the combs of the dwarf honey bees (A. florea and A. andreniformis), in which the crown of the comb always encircles the support, the massive single comb of an A. dorsata colony is always attached under the surface of a stout tree branch or an overhanging rock face, and nowadays sometimes also to the eaves of buildings or other urban structures (Fig. 6) (Paar et al., 2004). Where A. dorsata nests are found in trees, the diameter of the supporting branches varies from 12-30 cm (Morse & Laigo, 1969) or is much larger (Oldroyd & Wongsiri, 2006). A slightly sloping branch is preferred (Tan et al., 1997). The width of A. dorsata combs varies from 43-162 cm, and the height from 23-90 cm (Tan, 2007). Honey is stored in one corner of the uppermost section of the comb, in an area of about 10-20 cm in a large nest (Oldroyd & Wongsiri, 2006). In large colonies, the number of individual workers can exceed 50,000 (Morse & Laigo, 1969). About 3-4 weeks after nesting, a colony of A. dorsata typically has about 4 kg of honey stored in the comb, but the highest recorded amount is 15.7 kg (Tan, 2007). Three further typical characteristics of A. dorsata are as follows. First, colonies are unusual in that nests often occur in dense aggregations of up to 100 or even 200 colonies on a single tree or building (Koeniger & Koeniger, 1980), and these colonies are often separated by only a few centimeters (Figs. 7 and 8). Secondly, nest sites are occupied seasonally year after year; interestingly, queens often return to the same nest site even after an absence of up to 18 months (Paar et al., 2000). In Thailand, aggregations of nests are formed by swarms that arrive at the onset of the dry season. Finally, colonies usually display seasonal migration between alternate nesting sites. Nest sites of these bee populations tend to be occupied for 3-4 months (Paar et al., 2004). Towards the end of this period, colonies abscond, leaving an empty comb (Fig. 8). The swarms leave the nest site for a new site up to 200 km away (Koeniger & Koeniger, 1980), and most likely spend the wet season as combless swarms in mountainous regions (Ruttner, 1988). The proximate cause of migration may be related to the availability of flowers. Absconding A.
dorsata colonies have been observed to travel among habitats with different blooming seasons (Crane et al., 1993). The migration of colonies may also help to control infestations with the parasitic mite Tropilaelaps clareae, which needs bee brood in order to reproduce (Paar et al., 2004). Colonies may therefore reduce infestation levels by this parasitic mite through a period of broodless migration (Rinderer et al., 1994).
Fig. 8. Absconding nests within a colony aggregation of A. dorsata on a single tree in Sakonnakorn, Thailand. Photo by A. Rattanawannee.
Apis cerana Fabricius, 1793
A. cerana can be found throughout Asia, including the great mountain ranges and deserts (Ruttner, 1988), although there is no evidence of A. cerana occurring on the northern Japanese island of Hokkaido; in contrast, it is widely distributed over the other islands of Japan. In Southeast Asia, A. cerana is restricted to the Malayan region, to the west of the Wallace line (Ruttner, 1988; Fig. 2).
A. cerana is a medium-sized bee (in body length), with a forewing length of 7-10 mm (Oldroyd & Wongsiri, 2006). Feral colonies of A. cerana are found in similar locations to A. mellifera colonies, such as tree hollows and clefts in rocks and walls (Fig. 9) (Ruttner, 1988). They usually build three or more parallel combs attached to the roof of a tree hollow (Fig. 10). Among the native Thai honey bee species, only A. cerana can be maintained in hives like A. mellifera (Wongsiri et al., 1986); however, traditional hives for A. cerana are substantially smaller than those constructed for A. mellifera (Ruttner, 1988). The first morphometric analysis of A. cerana was reported in Ruttner (1988), where a PCA of 40 morphometric characters on 93 samples (from 18 Asian countries) revealed four main groups. Group I consisted of A. cerana collected from South India, Sri Lanka, Bangladesh, Burma, Malaysia, Thailand, Indonesia and the Philippines, whereas A. cerana from Afghanistan, Pakistan, North India, China and Vietnam were classified in Group II. A. cerana samples from the central and east Himalaya belonged to Group III, whilst Group IV contained A. cerana from Japan.
In Thailand, Limbipichai (1990) successfully used standard morphometrics to verify a geographic subpopulation of A. cerana split by the Isthmus of Kra, a biogeographic transition area (12° N latitude). This morphometric result was supported by Deowanish et al. (1996), who used PCR-RFLP analysis of the mitochondrial tRNA-leu-COII region and found variation in the banding patterns among Thai samples when the amplicon was digested with BglII, EcoRV, HaeIII, HinfI and NdeI. In addition, A. cerana from the South of Thailand (Hatyai and Samui) could be clearly separated from the mainland population when the tRNA-leu-COII amplicon was digested with EcoRV and HindIII. In support of this, Sihanuntavong et al. (1999) also reported that the A. cerana population from Samui Island (southern Thailand) was distinct from the mainland populations, as determined by PCR-RFLP analysis using DraI restriction of PCR amplicons of the srRNA and lrRNA genes and the COI-COII coding regions. Likewise, Songram et al. (2006) revealed eight distinct RFLP patterns of the ATPase6-ATPase8 gene region when the DNA was digested with VspI. Overall, a strong biogeographic pattern between the northern and southern bee populations in Thailand was revealed.
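As a rough illustration of the logic behind these PCR-RFLP comparisons, the following Python sketch uses Biopython's Restriction module to digest a toy amplicon in silico and report the fragment sizes. The sequence is invented for demonstration and is not a real A. cerana tRNA-leu-COII amplicon; only the enzyme names (EcoRV, DraI) are taken from the studies cited above.

```python
from Bio.Seq import Seq
from Bio.Restriction import EcoRV, DraI

# Invented toy amplicon (not a real A. cerana sequence).
amplicon = Seq("ATGGATATCTTAAATTTAAAGATATCCCTTTAAAGGGCATTTAAAGCCGATATC")

for enzyme in (EcoRV, DraI):
    # search() returns the cut positions along the linear amplicon.
    cuts = sorted(enzyme.search(amplicon))
    # Convert cut positions into approximate fragment lengths for a
    # linear molecule; different haplotypes give different size lists.
    bounds = [0] + [c - 1 for c in cuts] + [len(amplicon)]
    sizes = [bounds[i + 1] - bounds[i] for i in range(len(bounds) - 1)]
    print(f"{enzyme}: {len(cuts)} cut site(s), fragment sizes {sizes} bp")
```

Two haplotypes that differ at a recognition site would give different fragment-size lists, which is what appears as distinct banding patterns on a gel.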
Stingless bees
Meliponini is one of the 19 tribes in the subfamily Apinae, which also includes Apini, Euglossini and Bombini (Michener, 1974, 2000). Apini and Meliponini are the two tribes that contain members displaying a high level of social behavior (Arias et al., 2006). Meliponines are stingless bees whose size, body color and appearance vary greatly. For example, stingless bees of some species have a slender body while those of other species have a wide body; some appear shiny and others are hairy, somewhat like small bumble bees, and some species even look metallic (Crane, 1990). The number of stingless bee species in the Meliponini is still controversial, but it is estimated to be about 50 times the number of Apis spp. (Roubik, 2006). Currently, over 600 species in 56 named genera have been recorded in the tropical and subtropical regions of the world. Of these, 400 known species occur in the Neotropical region and at least 45 species have been described in Southeast Asia (Cortopassi-Laurino et al., 2006). Stingless bees and honey bees are both classified as highly eusocial insects (Michener, 2000), with large perennial colonies, morphologically distinct worker and queen castes, and an intricate division of labour and recruitment to food sources (Peter et al., 1999). They normally have a single egg-laying queen and reproduce by division of a colony between the mother queen and a daughter, which is called reproductive swarming (Roubik, 1989). Meliponines differ from honey bees of the genus Apis in many biologically significant ways. For example, they generally have no sting, do not use water to cool their nest, do not build it from pure wax, and their males feed at flowers while the gravid queens cannot fly (Roubik, 2006). Moreover, Peter et al. (1999) showed that single mating is the rule in stingless bees, in contrast to the well-known multiple mating of honey bee queens (Oldroyd et al., 1997), since diploid males (from sex-allele-matched matings) are not tolerated and lead to the queen being usurped. Stingless bees nest in cavities, whose location differs between species and may be underground, in trees, or in other enclosed spaces such as buildings and termite nests (Crane, 1990). Stingless bee species are recognizable from their characteristic nest entrances and often their particular nest sites (Roubik, 2006). Nests are made of wax secreted from the metasomal terga, mixed with resins and gums collected by the workers. Some species add mud, feces or other materials to certain parts of the construction. In all meliponine species, the composition and texture differ in different parts of the nest (Michener, 2000). Unlike honey bees, they produce brood in the manner of a solitary bee, with an egg placed on top of a food mass in a sealed cell (Michener, 2000). Inside the nest of stingless bees, there are different shapes and arrangements of brood cells and food storages (Fig.
11). Brood cells in many stingless bee species are spherical to ovoid, while food storage containers are small to large spheres, egg-shaped, or even conical or cylindrical (Roubik, 2006). Honey and pollen are usually stored in separate containers called "storage pots". Usually, the pots are constructed together in conglomerates, as are the brood cells. Interestingly, the horizontal brood cells of stingless bees open upwards and are closed after an egg is laid. The egg is positioned on a semi-liquid mix of honey, hypopharyngeal-gland secretion and pollen. All brood cells are destroyed by workers after use and cannot be reused as they are in honey bees (Michener, 2007). More than 50 genera of meliponines have been reported (Arias et al., 2006). In Thailand, only one genus, Trigona, is recognized as endemic, with 32 species currently reported (Klakasikorn et al., 2005). This genus is found extensively in tropical regions: in the Neotropics it ranges from Mexico to Argentina, and in the Indo-Australian region it extends from India and Sri Lanka to Taiwan, the Solomon Islands, southern Indonesia, New Guinea and Australia (Michener, 2000). The Thai names for stingless bees vary across the regions: Channa Rong (Central), Kheetung Nee (North), Khee Suit (Northeast) and Oong (South).
Pollination value
Up to a third of the food we eat is derived from plants that are either dependent on or benefit from insect pollination (Oldroyd & Nanork, 2009), especially by honey bees (Richards, 2001). The European honey bee, Apis mellifera, is the most economically valuable pollinator of agricultural crops worldwide (Conte & Navajas, 2008). However, in most areas of Southeast Asia there is no significant managed pollination industry, and insect-pollinated crops are therefore completely reliant on wild bees, particularly honey bees and stingless bees, for their pollination (Rahman & Rahman, 2000). Because of their dance language and broad foraging range, honey bees can rapidly identify and exploit the available flowers for nectar and/or pollen, or plant sap for propolis, over a wide area (Dornhaus et al., 2006). Therefore, honey bees are better at long-distance dispersal of pollen than solitary arthropods (Oldroyd & Wongsiri, 2006) and may partially compensate for habitat fragmentation by bridging the gaps between isolated plant communities (Johnson & Steiner, 2000). Corlett (2001) reported that 86% of plant species in an extremely disturbed area in Hong Kong were visited by A. cerana. Thus, although A. cerana is probably not a pollinator of all these plants, it does appear to help maintain Hong Kong's diverse flora. The lychee, Litchi chinensis Sonn., is one of the important commercially grown fruit crops in Thailand. Field trials suggested that fruit yield is reduced by as much as 11.2% in the absence of pollinators (Oldroyd & Wongsiri, 2006, citing Sihag, 1995), and the majority of pollinators are honey bees and stingless bees. Wongsiri et al. (1996) reported that A. florea and A. andreniformis are excellent orchard and field crop pollinators, including for longan (Dimocarpus longan Lour.) and mango (Mangifera indica L.). Since A. florea is easy to maintain in orchards and is abundant throughout Thailand, this dwarf honey bee is an excellent pollinator for economic crops and wild plants (Ruttner, 1988). The lowland forests of Asia are dominated by trees of the family Dipterocarpaceae. Since individual trees of each species tend to be widely separated, pollen must be carried over the long distances required for efficient and effective fertilization and gene flow (Itioka et al., 2001). This requires an animal vector that has species fidelity while foraging, a large foraging range and a tendency to visit multiple trees, either as individual foragers or via the transfer of pollen among foragers in the nest. The giant honey bee has all these characteristics (Oldroyd & Nanork, 2009). In addition, Momose et al. (1998) reported that A. dorsata is one of the major pollinators of several dominant components of the forest canopy in Southeast Asian lowland dipterocarp forests, one of the richest terrestrial ecosystems in the world. A. dorsata was reported to pollinate at least 15 species of emergent and canopy trees at Lambir (Momose et al., 1998) and to be the dominant pollinator of the upper strata in rainforests in peninsular Malaysia (Appanah, 1993) and of canopy dipterocarps in Sri Lanka (Dayanandan et al., 1990).
Products value
Bee products (honey, royal jelly, propolis, bee pollen, wax and bee venom) are of increasing economic importance. Honey is mostly consumed as food, while royal jelly, propolis and bee pollen are used as nutritional supplements and applied in cosmetics and traditional medicine. Furthermore, bee venom has long been used in apitherapy. Among the different bee products traded by Thailand during 2008-2010, honey appears to be the only product traded on the world market, according to the statistical records of the Thai Customs Department, Ministry of Finance of Thailand (Tables 2 and 3).
It is obvious that China's market is the biggest, in terms of both importation and exportation. Interestingly, the US exports the highest quantity of honey while relatively little is consumed in the country. Although China exports a large quantity of honey, it imports an even larger quantity. In contrast, Germany is the leading exporter of honey overseas. In addition, although there are many bee farms in countries such as Thailand, Myanmar and Australia, large quantities of honey still have to be imported, which suggests that programs promoting bee keeping should be arranged and supported. (Tables 2 and 3 cover 2008-2010; the data were obtained from the Thai Customs Department, Ministry of Finance, Thailand. ND indicates no data.)
Application of bee products
Not only are bee products consumed as food, as mentioned earlier, but they have also long been used for medical purposes, especially in traditional medicine. Bee products are derived from plants; for example, honey is plant nectar modified by alpha-glucosidase (Kubo et al., 1996), and propolis is collected from plant buds and bark (Castaldo & Capasso, 2002). Since it is hard to control the consistency of the bioactivities of natural products, both in their original form and as crude extracts, it is important to obtain the chemical structure of the active compounds for subsequent chemical synthesis or (bio)assay of the active contents. Many purification steps are needed to obtain a pure compound, and spectroscopic techniques, such as infrared spectroscopy (IR) and nuclear magnetic resonance (NMR), are broadly applied to this end. Once the structures of the bioactive compounds are known, they can be synthesized chemically or serve as templates for modification in subsequent drug development. Currently, the bioactive chemical compounds found in propolis and honey, which mainly belong to the flavonoid and phenolic groups, appear to be similar to those found in the pollen or sap of the foraged plants (Katircioglu & Mercan, 2006), as expected. Some of the bioactivities of bee products are briefly outlined below.
Antimicrobial activity
Antibacterial activity against pathogenic bacteria has been reported for bee products, especially propolis and honey (Boorn et al., 2010). Overall, Gram-positive bacteria are more sensitive to bee products than Gram-negative bacteria (Marcucci et al., 2001). Active compounds may act by inhibiting bacterial RNA polymerase (Takaisi & Schilcher, 1994), degrading the cytoplasmic membrane of bacteria (Cushnie & Lamb, 2005) or causing bacteria to lose their capacity for ATP synthesis, membrane transport and motility (Mirzoeva et al., 1997). For example, the proliferation of Staphylococcus aureus (Gram-positive) and Escherichia coli (Gram-negative) is inhibited by the propolis of Melipona compressipes (Kujumgiev et al., 1999). Furthermore, propolis collected from the same bee species but in different regions, or from different bee species in the same region, shows marked differences in bioactivity levels as well as in the microbes that are susceptible, as expected given the different flora available to or utilized by the different bee species in different regions. For example, propolis collected from Spain yielded a higher antimicrobial activity than that collected from Mongolia (Kujumgiev et al., 1999). However, such geographical, and likely seasonal, variation in the bioactivity of bee products necessitates some form of standardization of their bioactivity. There has been some progress in improving the standardization and acceptance of bee products in medicine, especially medical-grade honey (Kwakman et al., 2011). Manuka honey is one such medical-grade honey with antibacterial bioactivity (Lin et al., 2011). Given the severe problem of bacterial resistance to antibiotics, for example in methicillin-resistant S. aureus, there is a growing need to find new antimicrobial agents. Interestingly, honey from A. mellifera in Ireland (Maeda et al., 2008) and from T. laeviceps in Thailand (Jirakanwisal, 2010) can inhibit the growth of methicillin-resistant S. aureus in vitro better than currently used antibiotics.
In addition, other antibiotic-resistant bacteria, such as gentamicin-resistant E. coli, methicillin-resistant S. epidermidis and vancomycin-resistant Enterococcus faecium, can be killed by medical-grade honey (Kwakman et al., 2008). Beyond pathogenic bacteria, antifungal activity has also been reported for bee products, such as the in vitro inhibition of Candida albicans growth by propolis from Brazil (Kujumgiev et al., 1999). Interestingly, the antifungal activity of a propolis extract from A. mellifera against Phomopsis spp., Fusarium spp., Trichoderma spp. and Penicillium notatum was greater than that seen with ketoconazole, an antifungal drug (Quiroga et al., 2006).
In addition to bacterial and fungal pathogens, severe human diseases are caused by viruses, and because of their high mutation rates the development of new antiviral agents is continually required. With respect to bee products, Kujumgiev et al. (1999) reported that the aromatic acids and flavonoid aglycone compounds in the propolis of M. compressipes from Brazil could inhibit the growth of avian influenza virus in vitro. Furthermore, the in vitro replication of herpes simplex virus is also inhibited by propolis (Erukhimovitch et al., 2006) and honey (Banerjee, 2006).
Anti-inflammatory activity
Inflammation is part of the immune and tissue-damage defense response of vascularized tissues; it aids the removal of invaders such as microbes, allergenic proteins and some chemicals, the clearance of damaged or necrosing tissue, and wound healing, and it also underlies auto-immune reactions. Although required for the healing process and as part of the immune response, inappropriate or chronic inflammation is deleterious and can lead to, for example, asthma, atherosclerosis and rheumatoid arthritis, as well as pain and poor healing. Moreover, individuals differ in their susceptibility to existing anti-inflammatory agents and drugs, so there is a continuing need to find new topical, specific and systemic agents that can control inappropriate or excessive inflammatory responses. With respect to bee products, Paulino et al. (2003) reported anti-inflammatory activity in the ethanolic extract of propolis from A. mellifera in Bulgaria, with a level of activity similar to that provided by indomethacin, an anti-inflammatory drug. Subsequently, Hu et al. (2005) reported that water and ethanolic extracts of propolis from A. mellifera in China could significantly decrease swelling within two hours of treatment.
Other than propolis, anti-inflammatory activity has also been reported for bee pollen, for example in the ethanol extract of pollen from A. mellifera in Brazil (Medeiros et al., 2008). The main active compounds were found to be phenolic compounds, similar to those found in various plants such as berries, vegetables, fruits and tea leaves. Moreover, it was reported that flavanol derivatives from propolis could reduce the allergic symptom of paw edema, inhibit the synthesis of immunoglobulin E (IgE) and immunoglobulin G1, reduce the activity of eosinophil peroxidase and reduce the mobility of pulmonary cells. Thus, propolis is a promising source of new anti-inflammatory agents.
Free radical scavenging activity
Free radicals are oxygen-centered molecules that contain an unpaired electron in the outermost orbital. Although they play an important role in biological processes, such as immunity (intracellular killing of bacteria) and certain redox signaling pathways, their inappropriate expression, in terms of level or cellular location, can lead to serious cell damage: they can bind to low-density lipoprotein (LDL) and other compounds, including proteins and DNA, causing irreversible changes. The bound or modified compounds can be toxic to cells, leading to premature or inappropriate cell death, and can cause mutations in the genetic material, transforming normal cells into cancer cells (Campos et al., 2003). Other than cancer, excess free radicals are linked to a diverse array of disorders, such as atherosclerosis, cerebral ischemia, cardiac ischemia, Parkinson's disease, gastrointestinal disturbance and aging (Ames et al., 1993). It has long been challenging to find new free radical scavenging agents. With respect to bee products, Choi et al. (2006) reported that A. mellifera propolis from different regions in Korea (Yangpyeong, Boryung, Cheorwon and Yeosu) showed free-radical scavenging activity, but with ED50 values that differed between regions. Propolis from the same bee species collected in Portugal showed a similar free radical scavenging effect (Moreira et al., 2008). Both studies also support the idea that natural products from different regions provide the same bioactivity at different efficiencies. Other than propolis, Silva et al. (2005) reported free radical scavenging activity in bee pollen from the stingless bee Melipona subnitida in Brazil. Analysis of the bee pollen revealed that it came from Mimosa gemmulata, a plant in the family Mimosaceae, and from plants in the family Fabaceae. The efficiency of the free radical scavenging activity obtained depended mainly on the organic solvent used in the extraction process: ethyl acetate was the most efficient extraction solvent for recovery of this bioactivity, followed by ethanol and hexane, respectively. The active compounds were identified as naringenin, isorhamnetin, D-mannitol, β-sitosterol, tricetin, selagin and 8-methoxinerbacetin.
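The ED50 values mentioned above summarize a dose-response curve in a single number. The following Python sketch is purely illustrative (it is not taken from Choi et al., 2006): it fits a simple Hill-type curve to invented radical scavenging data, of the kind that might come from a DPPH assay, to estimate an ED50.

```python
# Illustrative sketch: estimating an ED50 from invented dose-response data.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ed50, slope):
    """Fraction of radicals scavenged at a given extract concentration."""
    return conc**slope / (ed50**slope + conc**slope)

conc = np.array([5, 10, 20, 40, 80, 160.0])                 # µg/ml extract
scavenged = np.array([0.08, 0.18, 0.35, 0.60, 0.82, 0.93])  # fraction scavenged

(ed50, slope), _ = curve_fit(hill, conc, scavenged, p0=[30, 1])
print(f"Estimated ED50 = {ed50:.1f} µg/ml (Hill slope {slope:.2f})")
```

A lower fitted ED50 corresponds to a more potent extract, which is how scavenging activities from different regions or solvents can be compared on a common scale.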
Antiproliferative activity
Although cancer research has long been established, cancer remains a leading cause of death and illness worldwide. Owing to the high cost of cancer treatment and the limitations of current therapy, including the evolution and spread of resistance to current chemotherapy agents, alternative and complementary medicines are becoming of increasing interest and potential importance, especially those with a different mechanism of action. Indeed, a significant proportion of cancer research has been focused upon finding new anti-cancer agents. With respect to bee products, Awale et al. (2008) reported that the methanolic extract of red propolis from Brazil has antiproliferative activity against human pancreatic cancer cells (PANC-1) in tissue culture (in vitro). From this extract, forty-three active chemical compounds were identified. Among these, three new compounds, (6aS,11aS)-6a-ethoxymedicarpan, 2-(2',4'-dihydroxyphenyl)-3-methyl-6-methoxybenzofuran and 2,6-dihydroxy-2-[(4-hydroxyphenyl)methyl]-3-benzofuranone, were found. In addition, Umthong et al. (2009) reported that propolis from T. laeviceps in Thailand has antiproliferative activity against the colon cancer (SW620) cell line in tissue culture. The antiproliferative effect of the methanolic extract of this propolis showed a linear correlation with concentration, whereas the water extract revealed a biphasic effect. Bee pollen has also been reported to have antiproliferative activity on cancer cell lines in tissue culture, and this has been linked to the flavonoid composition (Rice-Evans et al., 1997). The antiproliferative activity of A. mellifera bee pollen collected from Cistus incanus L. in Croatia was found to be mediated by phenolic compounds, such as flavonol (pinocembrin), flavanol (quercetin, kaempferol, galangin and isorhamnetin), flavones (chrysin) and phenylpropanoid (caffeic acid). Overall, it is evident that the active chemical compounds and bioactivities depend mainly on the bee species, collection sites, biogeography and other external factors.

Deforestation

Flint (1994) reported that between 1880 and 1980 Southeast Asia showed an average loss of forest cover area of 0.3%, caused primarily by agricultural expansion and commercial logging. Since 1985, deforestation has remained particularly severe in Southeast Asia (Achard et al., 2002). Little is known about how deforestation will affect honey bees, especially the giant honey bee. Liow et al. (2001) revealed that the proportion of stingless bees and honey bees (Hymenoptera: Apidae) was very low in oil palm plantation areas and very high in undisturbed areas, which implies that oil palm plantations neither fulfil the preferences of honey bees nor are able to support them. Palm trees do not produce nectar and their dense leaves render them unsuitable for nest building by A. dorsata (Oldroyd & Nanork, 2009). The removal of nesting trees of A. dorsata is of great concern for their conservation (Oldroyd & Nanork, 2009). Giant honey bees tend to build their nests in aggregations, sometimes with more than 100 colonies on a single tree (Oldroyd et al., 2000). In addition, A. dorsata colonies often migrate long distances but return to their previous nesting site every year (Koeniger & Koeniger, 1980). Thus, the felling of major bee trees may cause a significant decline in the A.
dorsata populations. Although the effects of agricultural landscapes and industrialization have increased significantly in Thailand, deforestation could represent a main threat to wild honey bee and stingless bee populations, and their nesting sites should be protected (Dietemann et al., 2009).
Brood and honey hunting
Honey hunting is the general term for the collection of honey from wild honey bee colonies. Traditional honey hunting plays an important role in the life of Asian people; humans have been hunting wild honey bees for more than 40,000 years (Crane, 1999), and honey hunting remains a widespread practice throughout the region (Oldroyd & Nanork, 2009). The method of honey hunting used for giant honey bees is much the same across Asia. Hunting of A. dorsata and A. laboriosa is particularly ruthless, often involving burning the bees with a smoldering torch of tightly bound brush (Lahjie & Seibert, 1990). In traditional honey hunting, night time is preferred by many hunters, and smoking is considered crucial to disorientate the bees and so reduce the number of stings received. After smoking the bees off the comb, most honey hunters cut down the whole comb, destroying all the brood and food stores. Large numbers of larvae and young bees, and some hundreds of adult bees and drones, are also killed during honey hunting (Tsing, 2003). Many queens must be lost during these harvests, and their colonies perish along with them (Oldroyd & Nanork, 2009). Therefore, these methods of hunting may kill many colonies of A. dorsata within a colony aggregation in a single night.
Honey bee diseases and parasites
Honey bee colonies can be infected by numerous pathogens (viruses, bacteria, fungi and protozoa) and infested by various parasitic insects and mites (Morse & Nowogrodzki, 1990). Normally, feral honey bee populations are not threatened by the parasites and pathogens with which they have co-evolved (Oldroyd & Nanork, 2009). However, Allen et al. (1990) reported that A. laboriosa populations in Nepal were infected by European foulbrood (Melissococcus pluton), which they attributed to environmental stress caused by deforestation. Moreover, A. mellifera colonies have been introduced into many countries in Southeast Asia, and the anthropogenic movement of honey bee populations between countries increasingly exposes wild populations to novel pathogens and parasites to which they have little or no resistance, either alone or after additional stress (Oldroyd & Nanork, 2009). The Tropilaelaps mite is a serious external parasite of honey bees. Its primary host was subsequently recognized as the giant honey bee, A. dorsata (Laigo & Morse, 1968), and it has now been reported throughout the entire distribution range of A. dorsata (Matheson, 1996). It is also associated with other Asian honey bees, including A. laboriosa, A. cerana and A. florea (Delfinado-Baker et al., 1985). The greater wax moth, Galleria mellonella, is the most serious pest of honey bee colonies worldwide. Its larvae cause considerable damage to bee colonies by feeding on the wax combs and on cells containing brood, honey and pollen, and they also destroy the comb structure by forming tunnels inside the comb (Jyothi et al., 1990). Furthermore, Tingek et al. (2004) reported that a conopid fly, Physocephala parralleliventris Kröber (Diptera: Conopidae), parasitizes A. dorsata, A. cerana and A. koschevnikovi in Borneo. This fly grasps foraging bees in flight and deposits a larva on the integument; the larva then penetrates the bee cuticle and consumes the bee from the inside.
Pesticides
Some commercial fruit orchards, particularly longan (Dimocarpus longan), litchi (Litchi chinensis) and citrus, are major nectar producers that are highly attractive to honey bees (Crane et al., 1984). Commercial sunflower (Helianthus annuus) plantations are also one of the most important sources of pollen and nectar for bees in Thailand. However, these commercial crops are regularly sprayed with insecticides, especially during the flowering period. Oil palm (Elaeis spp.) plantations are also regularly treated with insecticides, which may contribute to the observed low numbers of honey bees within oil palm crops (Oldroyd & Nanork, 2009). Colonies of all bee species may lose field bees when foraging on crops that are exposed to insecticides. The regulation of pesticide use is lax in some Southeast Asian countries, which increases the possibility of honey bees being exposed to pesticides (Oldroyd & Wongsiri, 2006).
Impact of climate change
Climate influences flower development and nectar and pollen production, which are directly linked with colony foraging activity and development (Winston, 1987). A major effect of climate change on honey bees stems from changes in the distribution of the flowering plant species (Thuiller et al., 2005) on which the bees depend for food. Rain can also affect honey harvesting; for example, when acacia (Acacia spp.) flowers are washed by rain they are no longer attractive to honey bees, because their nectar becomes too dilute (Conte & Navajas, 2008). Likewise, an overly dry climate can reduce flower nectar production, since many flowers produce no nectar when the weather is too dry, leaving little or nothing for the bees to harvest. In these situations, honey bees can die of starvation.
Conclusion
In Southeast Asia, bee diversity is very high, especially for honey bees (Apis spp.). In Thailand, there are four native honey bees, A. cerana, A. florea, A. dorsata and A. andreniformis, plus the anthropogenically imported A. mellifera. Besides Apis spp., stingless bees also produce honey; in Thailand there are more than 50 species of stingless bees, of which the most common is T. laeviceps. In addition to the biology, diversity and ecology of these bees, both morphometric and genetic variation have been evaluated. Moreover, although the sex of bees can be distinguished easily from their morphology, geometric morphometric analysis of the wings alone can also successfully distinguish the sexes. Bees are classified as eusocial insects, since there are three distinct castes within a hive: the queen, drones and workers. Not only are bees very useful as pollinators, but their products, especially honey, are economically important. Other than being consumed as food, bee products, especially honey, propolis and bee pollen, have long been used in traditional medicine, providing many bioactivities, such as antimicrobial, anti-inflammatory, free radical scavenging and antiproliferative activities, amongst others. Although bees are important in agriculture, it is obvious that the number of hives of these bees is currently decreasing. This may be due to a combination of deforestation, hunting, diseases, pesticides and other factors. Thus, it is very important to consider the conservation of bees and to promote good bee management in each country.
Fig. 1. The size dimorphism between castes of the giant honey bee, Apis dorsata F., is less pronounced than in other Apis species. (A) A queen surrounded by her workers; her thorax is slightly longer and broader than the workers'. (B) Drones have larger eyes (white arrow) but are slightly shorter than workers. Photo by S. Wongvirat.
Fig. 3. Right hind leg of drones of the two dwarf honey bee species, showing the thumb-like bifurcate basitarsus. Photo by A. Rattanawannee.
Fig. 4. A nest of Apis andreniformis in Thailand, showing the sticky resin around the supporting branches. Photo by S. Wongvirat.
Fig. 5. An Apis florea nest in Thailand, showing that the comb is built around a small branch. Photo by S. Wongvirat.
Fig. 6. A massive single-comb nest of Apis dorsata attached under the eaves of a building at Mae Fah Luang University, Chiang Rai, Thailand. Photo by A. Rattanawannee.
Fig. 7. An aggregation of Apis dorsata colonies under the roof of a temple in Chiang Rai, Thailand. Photo by A. Rattanawannee.
Fig. 9. A feral colony of Apis cerana in a coconut tree hollow in Samut Songkham, Thailand. Photo by J. Kaewmuangmoon.
The use of fluorescence enhancement to improve the microscopic diagnosis of falciparum malaria
Background: Giemsa staining of thick blood smears remains the "gold standard" for detecting malaria. However, this method is not very good for diagnosing low-level infections. A method for the simultaneous staining of Plasmodium-parasitized culture and blood smears for both bright field and fluorescence was developed and its ability to improve detection efficiency tested. Methods: A total of 22 nucleic acid-specific fluorescent dyes were tested for their ability to provide easily observable staining of Plasmodium falciparum-parasitized red blood cells following Giemsa staining. Results: Of the 14 dyes that demonstrated intense fluorescence staining, only SYBR Green 1, YOYO-1 and ethidium homodimer-2 could be detected using fluorescent microscopy when cells were first stained with Giemsa. Giemsa staining was not effective when applied after the fluorescent dyes. SYBR Green 1 provided the best staining in the presence of Giemsa, as a very high percentage of the parasitized cells were simultaneously stained. When blood films were screened using fluorescence microscopy the parasites were more readily detectable due to the sharp contrast between the dark background and the specific, bright fluorescence produced by the parasites. Conclusion: The dual staining method reported here allows fluorescence staining, which enhances the reader's ability to detect parasites under low parasitaemia conditions, coupled with the ability to examine the same cell under bright field conditions to detect the characteristic morphology of Plasmodium species that is observed with Giemsa staining.
Background
Plasmodium falciparum malaria is a medical emergency that requires correct diagnosis and appropriate treatment [1-4]. The re-emergence of malaria in Africa and other regions is a disaster [5,6] and has led to an increased number of imported cases in non-endemic regions. One diagnostic option is the use of rapid tests that detect P. falciparum histidine-rich protein 2 (HRP-2) in whole blood [7,8]; however, while these tests are available in some malaria-endemic regions, regulatory issues can limit their availability elsewhere. PCR methods are also effective for diagnosing malaria [9,10]; however, they have not been adopted for widespread clinical use for regulatory and cost reasons [7,11]. The current "gold standard" for malaria diagnosis in most clinical laboratories remains microscopic examination of Giemsa-stained thick and thin blood films [12], but this method requires a reader with experience and well-developed pattern recognition skills to provide an accurate diagnosis [13-15]. Adding fluorescent dyes to blood samples to highlight the presence of parasites within erythrocytes has been considered as a potential method of improving the accuracy of microscopic diagnosis [16-18], but this strategy by itself has several limitations, including cost and regulatory issues. A method that would allow malaria parasites to be simultaneously stained with both a fluorescent nuclear stain, thereby allowing rapid screening of erythrocytes for parasitized forms, and with Giemsa stain, thereby permitting confirmation and speciation of the specimen to take place in the traditional manner, might improve reader performance.
The purpose of this study was to determine whether a dual staining method combining protocols that are currently in use with a fluorescent stain of the nuclear material in parasites could be developed, with the ultimate goal of improving reader accuracy. Despite the development of new diagnostic technologies, microscopic examination remains the method commonly used to diagnose malaria [19], and we therefore attempted to develop an enhancement of this method rather than a replacement for it. A potential source of variation in the accuracy of readers may be differing degrees of pattern recognition proficiency [15]. Blood films present the reader with potentially complex visual fields and frequently require accurate recognition of small numbers of parasites of variable morphology [7].
The use of fluorescence stains was designed to reduce the visual task of perceiving the presence of nucleated material contained in parasitized red blood cells (pRBCs), such that potential parasites could then be examined more carefully in the familiar Giemsa-stained format. Fluorescent dyes have been used in the analysis of malaria in a variety of ways, including rapid diagnostics [20], the detection of different stages of Plasmodium in culture and in animal models using microscopy [21], flow cytometric determination of parasitaemia [22,23], rapid testing of the efficacy of new therapeutics by nucleic acid staining with PicoGreen, and rapid drug screening in microfluorimetric assays [24]. A combination of the fluorescent dyes DAPI and propidium iodide has previously been shown to markedly reduce the length of time required for the detection of malaria parasites in thin blood smears when compared with Giemsa staining [20]. However, staining nucleic acids with fluorescent dyes alone is non-specific for Plasmodium and cannot be used to differentiate species of malaria.
Twenty-two nucleic acid-binding fluorescent dyes were examined to determine their suitability for detecting P. falciparum within red blood cells using three commonly available excitation and emission filter sets. Dyes that provided clear and intense staining of nuclear material were then tested to determine whether they continued to function in the presence of Giemsa stain. A combination of Giemsa followed by SYBR Green 1 was found to produce stained structures under light microscopy that permitted confirmation and speciation while still permitting screening for parasites under fluorescence microscopy. P. falciparum was used because it can easily be cultured; however, since the other species of Plasmodium that cause human malaria can also be visualized with Giemsa stain and contain nucleic acids, the method should be applicable to them as well.

Methods
Plasmodium falciparum cultures
Parasites were grown in type O-positive blood obtained by venipuncture of volunteers. Cultures were maintained by the method of Trager and Jensen [25] by using RPMI 1640 supplemented with 10% human serum (a kind gift obtained under ethical consent from Chemo Day Care at the Princess Margaret Hospital, Toronto, Canada) and 50 µM hypoxanthine (Gibco, Grand Island, NY). Culture isolates of P. falciparum that were used included ITG, 3D7, FCR3 and two patient isolates. No difference was observed in the staining between these lines.
Molecular dyes
The bright field microscopy stain evaluated was Accustain® Giemsa Stain (SIGMA). The fluorescent dyes tested included […] (suspended to a concentration of 1 mg/ml); Hoechst 33258 pentahydrate (bis-benzimide) (powder suspended in DMF to a concentration of 0.67 mg/ml); and ethidium monoazide (EMA) (powder suspended in sterile distilled H2O to a concentration of 1 mg/ml).
Fluorescent staining
Ten microliter samples of P. falciparum cultures (5% haematocrit, with parasitaemia ranging from 1 to 5%) were prepared as films on glass slides. The films were air dried, fixed by flooding the slide with room-temperature absolute methanol (MeOH) and allowed to air dry. Individual fluorescent stains were applied by adding 300 µL of the dye solution (diluted 1:500 in 10 mM Tris pH 8) directly to the dried film. The slide was then placed in the dark for 20 min at room temperature. The dye solution was then gently rinsed away with tap water and the slides were air-dried in the dark. A drop of immersion oil was then placed directly onto the slide and the blood film was examined using fluorescence microscopy. EMA photoactivation was performed by adding a 1:500 dilution of EMA in 10 mM Tris pH 8 to a culture film that had previously been hydrated with 10 mM Tris pH 8. A cover slip was then placed on top of the slide to prevent the EMA solution from drying and the slide was placed in the dark for 5 min to allow the dye to permeate the blood film. The slide was then placed 8 cm away from a Super-Bright 20-LED Pivot Lantern (Innovage Inc., Outdoor™, Concord, ON) and exposed for 20 minutes. Residual EMA solution was gently rinsed off using tap water.
Giemsa staining plus SYBR
Dual staining of culture films was carried out using a 1:25 dilution of Giemsa (Accustain® Giemsa Stain, SIGMA) in 10 mM Tris pH 8 that had been passed through a 0.2 µm filter to remove undissolved material. Prior to use, the diluted Giemsa solution was centrifuged at 10,000 × g for 2 min to pellet particulates in the solution. Giemsa stain (250 µL) was added to each slide by carefully spreading the solution across the entire blood film and left for 30 min at room temperature, after which the stain solution was removed by rinsing with tap water. Excess water on the slide was removed and 300 µL of 10 mM Tris pH 8 was added. The Tris buffer was then removed and 300 µL of a 1:3,000 dilution of SYBR Green 1 in 10 mM Tris pH 8 was added. Care was taken to prevent the slide from drying prior to the addition of the SYBR solution to maximize SYBR Green staining. The slide was placed in the dark and left for 15 min at room temperature. The SYBR Green 1 solution was removed from the slide using a gentle stream of tap water prior to air drying in the dark and examination.
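The sequential nature of the dual stain can also be captured as a simple timed checklist, which may help when batching slides. The sketch below is a non-authoritative summary of the steps above: the 2 min centrifugation, 30 min Giemsa incubation and 15 min SYBR Green 1 incubation come from the protocol, whereas the rinse and drying durations are rough assumptions added only to make the example complete.

# A minimal sketch (timings partly assumed) of the Giemsa-then-SYBR workflow.
from dataclasses import dataclass

@dataclass
class Step:
    description: str
    minutes: float

DUAL_STAIN = [
    Step("Centrifuge filtered 1:25 Giemsa at 10,000 x g to pellet particulates", 2),
    Step("Spread 250 uL Giemsa over the blood film and incubate at room temperature", 30),
    Step("Rinse with tap water; keep the film hydrated with 300 uL 10 mM Tris pH 8", 1),   # assumed duration
    Step("Apply 300 uL SYBR Green 1 (1:3,000 in Tris) and incubate in the dark", 15),
    Step("Rinse gently with tap water and air dry in the dark before examination", 10),    # assumed duration
]

def print_protocol(steps):
    elapsed = 0.0
    for i, step in enumerate(steps, 1):
        print(f"{i}. [t = {elapsed:>4.0f} min] {step.description} ({step.minutes:g} min)")
        elapsed += step.minutes
    print(f"Approximate total: {elapsed:g} min per batch")

if __name__ == "__main__":
    print_protocol(DUAL_STAIN)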
Results and discussion
Staining of Plasmodium falciparum cultures with DNA-specific fluorescent dyes

A series of fluorescent dyes were tested for their ability to provide nucleic acid-specific fluorescence staining of malaria parasites within a red cell that could easily be visualized using one of three standard filter sets: Ex 340-380/BA 435-485 ("DAPI"); Ex 450-490/Em 520 ("fluorescein"); and Ex 546/Em 590 ("rhodamine"). Several dyes produced intense fluorescence that was easily visible (Table 1 and Table 2). The optimal concentration of these dyes under the conditions employed was found to be a 1:500 dilution, with the exception of SYBR Green 1, which produced strong fluorescence at a 1:10,000 dilution.
Dual staining of parasitized red blood cells (pRBCs)
Dyes that demonstrated good nucleic acid-specific fluorescence (Table 1) were tested in combination with Giemsa stain. No fluorescence was observed with any of the dyes when Giemsa and the fluorescent dye were applied simultaneously, or when fluorescence staining preceded Giemsa staining. However, dual staining was observed with SYBR Green 1, YOYO-1 and EthD-2 when the blood films were first stained with Giemsa, followed by staining with the fluorescent dye (Table 2). Four factors were found to be important for optimal dual staining: 1) the blood films had to be stained with Giemsa first, followed by the fluorescent dye; 2) particulates within the Giemsa stain had to be removed from solution by centrifugation at 10,000 × g for 2 min; 3) a dilution of 1:25 of the Accustain® Giemsa Stain (SIGMA-Aldrich) was optimal for staining of culture films; and, most importantly, 4) the culture film had to be hydrated prior to fluorescent staining, as no fluorescent staining was observed if the cell film was allowed to dry prior to addition of SYBR Green 1.
Sequential staining was observed to reduce the intensity of the Giemsa stain. Several methods were tested to see if it was possible to enhance the Giemsa stain while maintaining the fluorescent staining. The first approach involved the use of different buffers, such as 10 mM Tris pH 7.2, or 30 mM potassium citrate in 10 mM Tris, pH 7.4. However, no significant enhancement in either Giemsa or fluorescence staining was observed with these buffers compared to the results obtained with 10 mM Tris pH 8.0.
Previous reports have demonstrated that the binding of fluorescent and other dyes is dependent on the ionic strength of the buffer [18,24], and nucleic acid binding can be shifted from intercalation to minor groove binding by increasing the ionic strength of the buffer. To determine whether competition between visible and fluorescent chromophores could be reduced, dual staining was conducted in the presence of four concentrations of NaCl (diluted in 10 mM Tris pH 8): 0 mM (control), 10 mM (low), 30 mM (medium), and 100 mM (high). The quality of staining with Giemsa alone in the presence of NaCl was not affected at the lower concentrations; however, the highest ionic strength resulted in poor Giemsa staining. "Ring" and "mature" stage parasites were pale blue and the blue staining appeared diffuse as opposed to the more solid colour observed at low ionic strength.
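The salt series above is ordinary C1V1 = C2V2 dilution arithmetic; the sketch below simply makes it explicit. The 1 M NaCl stock and the 10 ml working volume are assumed values chosen for illustration and are not stated in the paper.

# Illustrative buffer arithmetic (stock concentration and working volume assumed).
def nacl_stock_volume(final_mM, final_volume_ml, stock_M=1.0):
    """Solve C1*V1 = C2*V2 for the stock volume V1; result in millilitres."""
    return (final_mM / 1000.0) * final_volume_ml / stock_M

if __name__ == "__main__":
    for final_mM in (0, 10, 30, 100):
        v_ml = nacl_stock_volume(final_mM, final_volume_ml=10.0)
        print(f"{final_mM:>3} mM NaCl in 10 ml of 10 mM Tris pH 8: "
              f"{v_ml * 1000:.0f} uL of 1 M NaCl stock")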
To determine whether prior covalent attachment of a fluor produced acceptable results, EMA was applied to the sample and photoactivated, unbound dye was rinsed away, and the sample was Giemsa stained. EMA becomes covalently linked to DNA upon photoactivation [26]; therefore, covalently linked EMA might remain stably attached to DNA while it was stained with Giemsa, and the intensity of the Giemsa stain might not be diminished by leaching of the chromophores. Staining with photoactivated EMA followed by Giemsa resulted in superior Giemsa staining of pRBCs with well defined red and blue structures; however, non-specific fluorescent staining of red blood cells was also a problem. To overcome the issue of non-specific fluorescent staining, two washes with 40% MeOH (v/v) were used to remove excess EMA prior to Giemsa staining. While this step resulted in a very low level of uniform fluorescence of RBCs and bright parasite-specific fluorescence prior to Giemsa staining, no parasite-specific fluorescence was observed following Giemsa staining (Figure 1). It remains to be determined whether the Giemsa staining displaces the fluorescent dye or quenches EMA fluorescence.
Dual staining with Giemsa and SYBR Green 1
The best combination of dyes for dual staining was observed to be Giemsa and SYBR Green 1. Fluorescent staining of P. falciparum cultures with SYBR Green 1 was well defined, and rings appeared as small oval bodies within the red blood cells. This pattern was easily distinguishable from low intensity debris as it was uniform in shape and did not quench under exposure to excitation light. Non-parasitized red blood cells (RBCs) showed a low level of green fluorescence which made them easily visible, but they remained readily distinguishable from pRBCs, which contained brightly stained parasites.

Cells stained with Giemsa alone showed the typical red staining of nuclear material and blue cytoplasmic staining (Figure 2A). However, the intensity of Giemsa staining of P. falciparum cultures dual stained with SYBR Green 1 was lower than with Giemsa alone (Figure 2B). The morphology of rings was not as well defined and frequently only blue staining was observed. Nevertheless, mature parasites were easily visualized under light microscopy despite being paler and more diffuse.
A method that incorporates Giemsa staining with fluorescence, called "Giemsa plus fluorescence", has long been used for the detection of chromosomes, whereby a photosensitized nucleic acid-binding fluorescent dye produces differential Giemsa staining as viewed under light microscopy [27]. However, this method does not appear to be directly applicable to malaria diagnosis, because the conditions used do not allow visualization of the fluorescent dye but rather serve to enhance visualization of structures under bright field.
Figure 2. Staining of P. falciparum-infected red blood cells. Panel A depicts two different fields of stained P. falciparum-parasitized red blood cells, both viewed using light microscopy. The left hand side was stained with Giemsa alone and the right hand side was stained with Giemsa followed by SYBR Green 1. Panel B depicts the same field of dual stained P. falciparum-infected red blood cells viewed under light (left hand side) or fluorescence (right hand side).

Figure 1. Effect of EMA. P. falciparum-parasitized red blood cells were stained with photoactivated ethidium monoazide (EMA) alone or with photoactivated EMA followed by Giemsa. The images on the top are viewed under light microscopy and the lower images are the same field viewed using fluorescence microscopy.
It is possible that Giemsa can compete with other nucleic acid stains and displace them. Numerous fluorescent dyes have been developed that bind to a variety of molecules such as DNA, RNA and primary amines. While dual staining with two nucleic acid dyes was possible using Giemsa followed by SYBR, the staining was not optimal and one dye frequently competitively inhibited the other. For example, higher concentrations of Giemsa resulted in exclusion of SYBR; at lower concentrations of Giemsa, SYBR often displaced the red nuclear staining typically observed with Giemsa. Methylene Blue, a component of Giemsa, uses different mechanisms to bind to AT and GC bases of DNA. GC binding involves intercalation, and optimal binding occurs at low ionic strength [28]. AT coupling, on the other hand, occurs through minor groove binding, and optimal binding occurs under high ionic strength. P. falciparum has a high AT content, with an average of 80% for the entire genome and 90% in intergenic spaces and in introns [29]. SYBR Green 1 also binds DNA through intercalation and is influenced by ionic strength [30], but attempts to shift either the Giemsa or the fluor staining from intercalation to minor groove binding by increasing ionic strength were not effective in enhancing the Giemsa staining while maintaining good SYBR Green 1 binding.
The application of dual staining, Giemsa + SYBR, to clinical blood films
To determine whether the results obtained with pRBCs from P. falciparum cultures were applicable to clinical samples, blood films (thick and thin) were prepared from whole human blood that either had low levels of P. falciparum added or had been read as positive by an experienced diagnostician. As observed previously, there was little non-specific staining in the negative samples, and the fluorescent staining that was present was easily distinguished from the characteristic pRBC pattern (Figure 3). The nuclei of leukocytes within the blood were also dual stained and provided a reference for the efficiency of staining for each blood film. A count of dual staining in 600 P. falciparum pRBCs revealed 100% dual staining with Giemsa and SYBR Green 1.
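As a rough indication of what 600 out of 600 dual-stained pRBCs implies, the "rule of three" gives an approximate 95% lower confidence bound on the true dual-staining rate. This back-of-envelope calculation is ours and does not appear in the paper.

# Rule-of-three estimate for a proportion observed with zero failures.
def rule_of_three_lower_bound(successes, trials):
    """Approximate 95% lower bound on the success rate when every trial succeeded."""
    if successes != trials:
        raise ValueError("rule of three applies only when no failures were observed")
    return 1.0 - 3.0 / trials

if __name__ == "__main__":
    lb = rule_of_three_lower_bound(600, 600)
    print("Observed dual-staining rate: 100% (600/600)")
    print(f"Approximate 95% lower bound on the true rate: {lb:.1%}")  # ~99.5%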
LED illumination and computer based detection of pRBCs
In order to determine if this method was compatible with an LED light source, a Luxeon 5 mW blue LED was mounted in place of the high intensity lamp used to excite fluors and used to excite a blood smear stained with both SYBR Green 1 and Giemsa. A white LED was used for bright field illumination. As seen in Figure 4, the quality of the stain was comparable to that obtained with the incandescent and arc lamps of traditional microscopes. The use of LED light sources has several desirable features over conventional high intensity lamps: LEDs are inexpensive, light weight, can be powered for long periods by flashlight batteries, are physically robust, have a long service life, can be rapidly switched on, and generate relatively little heat. These features allow their use in virtually any environment.

Figure 3. Dual Giemsa and SYBR Green 1 staining of P. falciparum-inoculated blood.

Figure 4. LEDs can be used for detecting SYBR stained pRBCs. Fluorescence emission from pRBCs excited with a 5 mW blue LED powered by 4 AA batteries. The pRBCs were stained with SYBR Green 1 and were then examined using a Zeiss Axiostar Plus microscope with an LED light source fitted in place of its mercury lamp. Photographs depict fluorescent (left) and white light (right) images of a mature parasite (upper) and a ring stage parasite (lower). Fluorescent intensity of the stained parasites was comparable to that observed with the HBO 50/AC high intensity light source supplied by the manufacturer.
Conclusion
Sequential Giemsa plus SYBR Green 1 staining of pRBCs as described herein provides a method of rapid screening and detection of the pathogen because the bright fluorescence of nuclear staining with SYBR Green 1 is well contrasted with the low RBC autofluorescence and the dark background. The combination of SYBR Green 1 for enhanced detection plus Giemsa for traditional identification and speciation appears to offer a superior diagnostic test.
In the same way that Giemsa is composed of a combination of dyes, it may be useful to combine different fluorescent dyes to further enhance pattern recognition and speciation (Figure 4). While nucleic acid-specific dyes are frequently used to stain fixed tissue, numerous other dyes are known to interact with malaria parasites [31]; therefore, future avenues of research may lead to vital staining of live parasites in RBCs. The emergence of virtual slide technology for tele-pathology applications [20] suggests that entire blood films could be digitally scanned sequentially for sensitivity to different dyes, with the gold standard Giemsa stain applied at the end to provide a reference link to prior observations. The availability of vital dyes, some of which are substrates for the same transporters whose enhanced activity is responsible for resistance to anti-malarial drugs in certain strains of malaria [32], means that these dyes might also be used to characterize responsiveness and resistance to drug therapy.
At first glance, due to the high capital and running costs of mercury lamps, fluorescence-based tests would appear to be impractical for routine use in many clinical laboratories in the developed world and out of reach for laboratories in the developing world. However, the recent emergence of LED-based light sources to replace arc lamp and laser sources [33,34] promises to simplify routine fluorescence screening and reduce costs.
Further, the increased contrast provided by fluorescence-based localization of pRBCs should facilitate machine vision approaches for automated screening of blood smears for parasitaemia levels and for speciation. This in turn sets the stage for remote tele-cytometry [35] of the samples, which may address the experience deficit in dealing with malaria samples. It may, therefore, be practical to introduce an enhanced staining method that will reduce the high levels of skill and experience presently required to accurately diagnose malaria and increase the reliability of these tests.
Authors' contributions
RG and PL performed the cell staining and evaluation, PP and IC conceived of the study, with PP providing expertise on microscopy and staining methods and IC providing malaria culturing and diagnostic guidance. All authors contributed to the preparation of the manuscript and have read and approved of all submitted versions, including its final form.
DCD – a novel plant specific domain in proteins involved in development and programmed cell death
Background: Recognition of microbial pathogens by plants triggers the hypersensitive reaction, a common form of programmed cell death in plants. These dying cells generate signals that activate the plant immune system and alarm the neighboring cells as well as the whole plant to activate defense responses to limit the spread of the pathogen. The molecular mechanisms behind the hypersensitive reaction are largely unknown except for the recognition process of pathogens. We delineate the NRP-gene in soybean, which is specifically induced during this programmed cell death and contains a novel protein domain, which is commonly found in different plant proteins. Results: The sequence analysis of the protein encoded by the NRP-gene from soybean led to the identification of a novel domain, which we named DCD, because it is found in plant proteins involved in development and cell death. The domain is shared by several proteins in the Arabidopsis and the rice genomes, which otherwise show a different protein architecture. Biological studies indicate a role of these proteins in phytohormone response, embryo development and programmed cell death induced by pathogens or ozone. Conclusion: It is tempting to speculate that the DCD domain mediates signaling in plant development and programmed cell death and could thus be used to identify interacting proteins to gain further molecular insights into these processes.
Background
Plants can recognize microbial pathogens by a specific interaction system, which was historically named the gene-for-gene interaction because particular matching genes must be present in the pathogen as well as in the plant. A successful recognition triggers a hypersensitive reaction of individual plant cells, which is a form of programmed cell death in plants. Though a dead cell on its own might already stop the growth of biotrophic pathogens, more importantly the cell death program itself generates unknown signals for neighboring cells. Thereby the plant immune system is activated locally in some cell layers around the original infection to prepare the plant cells for the next microbial attack. Often this signal from the first infection spreads throughout the whole plant and turns on a long-lasting broad pathogen resistance called systemic acquired resistance. Despite the enormous efforts to dissect the machinery of the hypersensitive reaction, many details are still unknown except for the early recognition of the microbial molecules.
Often the programmed cell death in plants requires the signaling compound salicylic acid downstream of the recognition process to proceed beyond restriction points in the cell death program [1]. A conclusive role for salicylic acid has not been worked out, but it is likely to function in signal amplification [2,3] and in the transcriptional activation of genes [4,5].
We have isolated a gene from soybean which is strongly induced during the hypersensitive reaction and serves as a marker for programmed cell death in this system [6]. The gene is not directly responsive to salicylic acid but transcription can be amplified in the presence of this signal molecule. The gene encodes a protein consisting of two domains. The N-terminal domain is extremely rich in the amino acid asparagine (~25%) and was therefore called N-rich protein (NRP) [6]. The exact biological function of the NRP-gene remains to be elucidated.
Here we describe the analysis of a protein domain found in the soybean NRP protein and other plant proteins associated with development. The biological processes associated with these proteins led us to name this novel domain DCD, for its role in development and cell death.
Results and discussion
Sequence analysis revealed a significantly conserved region, which defines the novel DCD domain. The DCD domain is an approximately 130 amino acid long stretch that contains several mostly invariable motifs (Fig. 1). These include a FGLP and a LFL motif at the N-terminus and a PAQV and a PLxE motif towards the C-terminus of the domain. Several amino acids are positionally conserved in all members with a DCD domain, indicating a critical role of these residues in structure and function (Fig. 1). In particular, three cysteines are almost universally (red asterisks in Fig. 1) or subfamily-specifically (green asterisks in Fig. 1) conserved, and may confer a metal-binding feature. The predicted secondary structure is mostly composed of beta strands and is flanked by an alpha-helix at the N- and at the C-terminus. Using the metaserver 3D-Jury [7], no similarities to any other known structural folds could be assigned. The modular nature of the DCD domain is supported by its presence in several protein families with different domain architecture (Fig. 2). The DCD domain is found only in plant proteins and is absent from bacteria, fungi and animals. The two fully sequenced plant genomes from rice and Arabidopsis contain 11 and 7 members with a DCD domain, respectively. At least four subgroups of proteins can be identified by phylogenetic comparison of the DCD domain, each having members in the rice and in the Arabidopsis genome (Fig. 2). A similar picture emerges from the analysis of plant EST sequences, which also cluster to the different subgroups (data not shown). The four subgroups differ in where the DCD domain is located within the protein architecture (Fig. 2). Whereas in subgroup I the DCD domain is found in the C-terminus of the protein, it is found more towards the middle of the protein in subgroup II. The third subgroup (III) is more variable; the proteins are mostly characterized by a DCD domain at the N-terminus, and in one case it is found subsequent to a ParB domain. The fourth subgroup (IV) shares a DCD domain at the N-terminus but contains several KELCH repeats in the C-terminal part of the protein.
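As a simple illustration of how the near-invariant motifs named above can be located in a candidate protein, the Python sketch below scans a sequence for FGLP, LFL, PAQV and PLxE, treating the variable position of PLxE as a wildcard. The toy sequence is invented for demonstration; this is not part of the analysis carried out in this study.

# Illustrative motif scan; the example sequence is hypothetical.
import re

DCD_MOTIFS = {
    "FGLP": r"FGLP",
    "LFL":  r"LFL",
    "PAQV": r"PAQV",
    "PLxE": r"PL.E",   # "." stands for the variable residue
}

def scan_motifs(protein_seq):
    """Return 0-based start positions of each DCD motif in the sequence."""
    return {name: [m.start() for m in re.finditer(pattern, protein_seq)]
            for name, pattern in DCD_MOTIFS.items()}

if __name__ == "__main__":
    toy = "MSTFGLPAAKLFLDDEEPAQVNNPLKEGG"   # made-up sequence containing all four motifs
    for motif, positions in scan_motifs(toy).items():
        print(motif, positions)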
Whereas the majority of DCD domains (families II, III and IV) contain a second conserved cysteine directly following the N-terminal one, family I possesses a putative functional substitute in the "central loop" of the domain (Fig. 1, green asterisks at the top of the alignment).
Using PSI-BLAST, we could identify the DCD domain only in plants. The domain appears to be present in ESTs from dicots (e.g. Arabidopsis), monocots (e.g. rice), gymnosperm trees (e.g. pine), ferns, and mosses (e.g. Physcomitrella). The available sequences from algae are very limited, but the recently sequenced diatom Thalassiosira pseudonana [8] contains a distant member of this domain in a hypothetical protein (Fig. 2). Thus the DCD domain has been present since early in plant evolution, before the separation of diatoms and the green algal lineage leading to higher plants, which occurred about 1 billion years ago.
For three of the proteins with a DCD domain, all clustering into group I, some biological data have been published. These proteins include the B2 protein from carrot, which was found to be strongly and early induced during the developmental shift from undifferentiated cell cultures to somatic embryogenesis [9]. Though the exact function of the protein still has to be elucidated, a role in developmental processes is supported by findings from Arabidopsis transcript profiling with microarrays. Here the DCD-containing protein At2g32910 is only weakly expressed throughout the whole life cycle of Arabidopsis except during embryogenic development. A similar pattern is observed for the gene At5g01660, which has several KELCH repeats next to the DCD domain. This gene is most abundantly expressed in embryos but also in the meristem of the shoot apex.
A second protein with a DCD domain was identified in pea [10]. Here the so-called Gda1 gene is strongly expressed in peas during the vegetative phase but rapidly disappears after shifting the plants into the reproductive phase. The transition is mediated by a change of the light period from short to long days. Interestingly, the GDA1 gene can be rapidly induced by the phytohormone gibberellic acid, a key player in the developmental change from the vegetative to the reproductive phase in plants. The GDA1 transcript accumulated only 15 min after application of gibberellic acid, indicating that the GDA1 gene is a primary response gene to this phytohormone.

Figure 1. Multiple sequence alignment of DCD domains. The alignment was built using T-coffee [21] and refined manually. First column: database accession numbers (GenBank, if available); second column: species names (at: Arabidopsis thaliana; cp: Citrus x paradisi; cr: Ceratopteris richardii; gm: Glycine max; mt: Medicago truncatula; os: Oryza sativa; tp: Thalassiosira pseudonana); third column: start of the domain in the respective sequences. The alignment is coloured by CHROMA [22] (conserved prolines: white on grey; conserved glycines and alanines: green on grey; conserved leucines, isoleucines, phenylalanines, cysteines, valines and tyrosines: yellow on grey; conserved asparagines and glutamines: dark red on grey; conserved glutamic acids: light red on grey; conserved threonines and serines: light blue on grey; conserved aliphatic residues: grey on yellow; conserved hydrophobic residues: black on yellow; conserved small residues: dark green on white; conserved positively charged residues: blue on white; conserved polar residues: dark blue on white; conserved charged residues: pink on white; conserved aromatic residues: blue on yellow; conserved big residues: blue on light yellow; conserved negatively charged residues: red on white). The consensus sequence (conserved in 80% of the sequences) is shown below; h, p, s, l, b, c, a, + and - indicate hydrophobic, polar, small, aliphatic, big, charged, aromatic, positively charged and negatively charged residues, respectively. The predicted secondary structure is taken from the consensus of the alignment (H, helix, or E, beta sheet, predicted with expected average accuracy > 82%; h, helix, or e, beta sheet, predicted with expected average accuracy < 82%) using PHD [23]. Independent predictions were performed with PSIPRED [17] using representatives of distinct groups (accession numbers gi|2369766, gi|50932255, gi|51535545). Asterisks on top of the alignment indicate conserved cysteine residues (red: present in almost all DCD domains; green: subfamily-specific).

Figure 2. Phylogenetic tree of DCD domains in plants and related domain architecture, as predicted and displayed by SMART [19,20] (http://smart.embl-heidelberg.de). *This sequence may contain an incorrectly sequenced C-terminal part.
A third protein with a DCD domain was isolated by [6]. This protein was named N-rich protein (NRP) because of the extremely high content of asparagine (~25%) in the N-terminal half in front of the DCD domain. The NRP gene is rapidly induced during programmed cell death in soybean caused by inoculation with avirulent bacteria. Isogenic bacteria lacking a single Avr gene are not recognized by soybean cells and trigger neither programmed cell death nor the induction of the NRP gene. The gene is induced early in the cell death program, well before the cells lose control of their membrane integrity. Using Phytophthora as a fungal pathogen to inoculate soybean plants, the same response was found as with bacteria, indicating that the NRP gene responds to the cell death program rather than to specific molecules from a particular pathogen. The putative Arabidopsis ortholog (At5g42050) is induced by several stress conditions, including ozone, osmotic and cold stress, as indicated by publicly available transcript profiling data (Genevestigator: https://www.genevestigator.ethz.ch/). Ozone treatment leads to small lesions with cell death similar to a hypersensitive reaction caused by avirulent pathogens. A similar set of genes is activated by both inducers of programmed cell death.
The DCD domain is quite well conserved on the amino acid level throughout the plant kingdom. The domain is present in proteins with different architectures. Some of these proteins contain additional recognizable motifs, like the KELCH repeats or the ParB domain. The latter domain has been attributed to the partitioning of plasmids and chromosomes in bacteria and has a nuclease activity [11].
KELCH motifs are typically composed of ~50 amino acid long stretches which form a beta sheet [12]. They occur as 5 to 7 repeats that form a beta propeller tertiary structure. KELCH motifs are widespread and have been identified in viruses, plants, fungi and mammals. Most of the characterized KELCH motifs are interfaces for protein-protein interaction, often by interaction with proteins of the cytoskeleton [13].
Conclusion
The occurrence of the conserved DCD domain in plant proteins of variable length and different architecture, but present throughout the plant kingdom, suggests a role in protein-protein interaction. Transcription profiling reveals that the genes encoding a DCD domain are upregulated during plant development and programmed cell death. It is tempting to speculate, that the DCD domain mediates the signaling in these processes and could thus be used to identify interacting proteins to gain further molecular insights into these processes.
Methods
Using the protein sequence of Glycine max (gi|57898928) as query for a PSI-BLAST search [14], after one iteration we retrieved homologs in several plant families with high significance (E-values of 8e-30 or lower). A conserved region of ~130 amino acids could be identified, and the borders of the shared region were defined according to the PSI-BLAST pairwise alignments. Further PSI-BLAST searches with this region converge within the first iteration. A multiple sequence alignment was built using T-coffee and refined manually; additional HMM searches [15] with profiles based on this alignment of non-redundant representatives support the findings.
A phylogenetic tree was reconstructed using the non-redundant alignment of 29 sequences (including fragments and one translation (O49932) that likely contains a frameshift at the C-terminus) in MEGA [18], calculated with the neighbor-joining algorithm. Similar topologies were obtained using other methods, e.g. minimum evolution (data not shown), and bootstrap values were calculated to test significance. The domain architecture is predicted and displayed by the Simple Modular Architecture Research Tool (SMART; http://smart.embl-heidelberg.de) [19,20].
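For readers who wish to reproduce the tree-building step with open-source software, the sketch below shows an equivalent neighbor-joining construction using Biopython instead of MEGA. The input file name and format are placeholders, the distances are simple identity-based distances, and bootstrap testing is omitted, so this is an illustration of the workflow rather than the exact analysis reported here.

# Hedged Biopython sketch of the neighbor-joining step (input file is hypothetical).
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("dcd_domains.aln", "clustal")    # non-redundant DCD domain alignment
calculator = DistanceCalculator("identity")                # pairwise identity-based distances
distance_matrix = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
nj_tree = constructor.nj(distance_matrix)                  # neighbor-joining topology

Phylo.draw_ascii(nj_tree)                                  # quick text rendering of the tree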
Regulatory capacity building and the governance of clinical stem cell research in China
While other works have explained difficulties in applying 'international' guidelines in the field of regenerative medicine in so-called low- and middle-income countries (LMICs) in terms of 'international hegemony', 'political and ethical governance' and 'cosmopolitisation', this article on stem cell regulation in China emphasizes the particular complexities faced by large LMICs: the emergence of alternative regulatory arrangements made by stakeholders at a provincial level at home. On the basis of ethnographic and archival research of clinical stem cell research hubs, we have characterized six types of entrepreneurial 'bionetworks', each of which embodies a regulatory orientation that developed in interaction with China's regulatory dilemmas. Rather than adopting guidelines from other countries, we argue that regulatory capacity building is more appropriately viewed as a relational concept, referring to the ability to develop regulatory requirements that can cater for different regulatory research needs on an international level and at home.
Introduction
Stem cell research is hoped to yield knowledge that can translate the regenerative properties of stem cells into stem cell products and therapies. Such regenerative medicine (RM) is expected to extend and heighten the quality of the lives of large numbers of people suffering from old age diseases and protracted and incurable conditions. Critical scientists emphasize the importance of understanding how the cells work in connection with the safety and efficacy of their use (Bianco and Sipp 2014; Bianco 2014). Knowledge of the complexity of the navigation of the cells and their integration into the body is crucial. To maximize safety and efficacy, standards have been developed for scientists to use in their clinical research and applications. However, regulatory standards can both enable and hinder national capacity building, partly depending on a country's international position: when set high, the cost and expertise required for catering to high standards can disable progress in the field. Such dilemmas have frustrated China's efforts to reform the national regulation for clinical stem cell research.
This article discusses how some notions of regulatory capacity building imply that it refers to the adoption of international regulation. International institutions, such as the International Society for Stem Cell Research (ISSCR), tend to assume that regulatory capacity building refers to the ability of countries and institutions to follow 'international' regulatory standards. Alternatively, critics of hegemonic regulatory standards have argued for self-regulation at a national and lower administrative level (Sleeboom-Faulkner et al. 2016). But countries that find international regulations unsuitable to their conditions may also experience problems with self-regulation (Wahlberg et al. 2013), due to competing interests and clashing regulatory needs at home. This article uses the example of stem cell regulation in China to illustrate the regulatory dilemmas faced by a large low- and middle-income country (LMIC), as a result of external and internal pressures to follow international regulatory trends, on the one hand, and pre-existing alternative regulatory arrangements made by stakeholders at home, on the other. In this article, we show why such notions are inadequate and how they can be improved upon. We argue that, rather than adopting guidelines from other countries, the notion of regulatory capacity building needs to be regarded in a relational light, and should refer to the ability to address regulatory discrepancies between the different regulatory needs on an international level and at home.
This article shows that China faces specific dilemmas related to its size, geographical differences in opportunities, diversity in institutional structures, and contradictions between the political centre and peripheral governing institutions. We provide examples of six types of these 'bionetworks' of clinical stem cell research: life science research networks that embody regulatory norms, which are shaped in interaction with China's regulatory dilemmas. The notion of 'bionetwork' as used here emphasizes the entrepreneurial nature of productive life science networks that share certain scientific norms and regulatory practices with an appeal to health needs. Their shared activities include networking, lobbying, managing, trading, and collaboration with scientific, governmental, and commercial institutions (Sleeboom-Faulkner and Patra 2011). We will show that a variety of different regulatory orientations have developed as part of these bionetworks, the most common of which we have described in this article. As discussed below, the notion of regulatory orientation refers to the shared normative delineations of 'good' and 'bad' scientific research and clinical practices underpinning regulatory arrangements in collaborative networks. By illustrating how these norms are related to socio-economic and political conditions, we point out the necessity of adjusting our understanding of regulatory capacity building. As we show below, it needs to have the capacity to deal with a variety of regulatory research needs, both on an international level and at home.
Standards and regulation
Procedural standards are important to the accurate delineation of the steps that are to be taken when specified conditions of a procedure are met, to ensure high quality final products (Timmermans and Epstein 2010). Examples relevant to clinical stem cell research are Standard Operating Procedures (SOPs), and standards and guidelines for preclinical studies, clinical trials, quality controls, Good Manufacturing Practice (GMP), Good Laboratory Practice (GLP), Good Clinical Practice (GCP) and external review by independent expert committees. Such regulation is meant to ensure that there is sufficient evidence for the safety and efficacy of the stem cell products. But the conditions under which the various dimensions of procedural standards develop, including exchangeability, ethical acceptability, political authority, financial support, expertise, political pressure, bureaucracy and reputation, are crucial to whether and how regulation is embedded in society (Timmermans and Epstein 2010: 72). The development of standards harbours a dilemma: although flexible definitions of scientific objects can be preferable from a research perspective (Fox Keller 1999: 136-41), when it comes to clinical applications it is important to have procedural standards in place that link quality standards for final products with standards for the characterization of stem cell lines (Sengoku et al. 2011). Thus, a method that reproducibly induces the same differentiated cell lines from different cell lines or cell types can be part of the protocol examined by an institutional review board (IRB) or drug regulatory authority.
Although international organizations, such as the ISSCR and the International Society for Cellular Therapy (ISCT), and many countries and regions have developed guidelines, the international, national, and regional guidelines for clinical stem cell research differ substantially and are subject to radical change (Sleeboom-Faulkner et al. 2016). International authority has been ascribed to the guidelines of the ISSCR (ISSCR 2008) and ISCT (2015), and many countries have followed the guidelines and standards of drug regulatory authorities in the USA and the EU to enable collaborative research efforts. But in LMICs, such as China and India, the articulation of 'international guidelines' with local practices has led to sustained regulatory dilemmas. Especially in China, life science innovation is earmarked as a main driver for economic progress, and bioscience and biotechnology have become key areas for government support and funding for scientific research over the last decades (CURE 2009; China National Center for Biotechnology Development 2011; Wang 2011; MoST 2013). Although various sets of regulations for clinical stem cell research and applications have appeared, major gaps and questions regarding research governance remain. Regulation directly pertaining to clinical stem cell science has appeared in 2009 (MoH 2009), 2012, 2013, 2015, and 2016 (NHFPC 2016).
Regulatory development of clinical stem cell research in the PRC
In 2009, the MoH promulgated the Management Measures for the Clinical Use of Medical Technologies. This regulation classified a range of new medical technologies and procedures into three categories, where stem cell transplants were classified as 'Category 3'. This category of medical technologies involves serious ethical problems, and safety and efficacy issues that still need to be resolved through clinical trials. The regulation stipulated that clinical applications of stem cell technology had to be halted by 31 October 2009 if they had not applied for or passed auditing (MoH 2009). Although stem cell interventions required MoH approval before clinical application, for-profit clinics and a number of hospitals continued to provide 'stem cell therapy'.
In January 2012, the MoH issued the Notification on Self-Evaluation and Self-Correction Work regarding the Development of Clinical Stem Cell Research and Applications (MoH 2012). It gave stem cell research institutions a period of 6 months for self-evaluation and self-correction, and it announced that the CFDA would not accept any applications until 1 July 2012. Clinical stem cell research and clinical trials came to a virtual standstill in most laboratories and hospitals of academic institutions, although there were exceptions, including military and police academies, private hospitals and some lower-tier academic and medical institutions.
In March 2013, the MoH published three interrelated draft regulations for public comment (MoH 2013). These draft regulations prepared the way for the regulation of clinical stem cell research and applications in China (Sui and Sleeboom-Faulkner 2015). It was not until August 2015 that the MoH published the 'draft' regulation on clinical research and applications that involve human stem cells (NHFPC 2015). It affirmed that stem cell technologies would be regulated as pharmaceutical products, with the exception of routine treatment with haematopoietic stem cells. The CFDA published standards and technical procedures for the collection, manufacture, and storage of stem cells for clinical use in the 'Stem Cell Preparations Quality Control and Pre-clinical Research Guidelines' (CFDA 2015); it also specified the required criteria for safety and efficacy assessment in preclinical studies. Only the highest-level (tier-three) hospitals are permitted to conduct stem cell clinical trials. Applications for these trials are to be addressed to provincial branches of the NHFPC and CFDA, and, assisted by expert committees, the NHFPC and CFDA jointly review the projects. Clinical trials need to be registered online at the Chinese Medicine Registry and Management System (see Rosemann and Sleeboom-Faulkner 2016).
Despite regulatory efforts, the regulatory framework has not allowed clinical stem cell researchers from state laboratories to formally register new clinical procedures and products (Rosemann 2013). Even after the latest reforms, there are still many unresolved regulatory issues regarding market permissions, international collaboration, 'compassionate interventions', and the implementation of regulatory rules for for-profit and other unauthorized stem cell procedures. Speculations exist about the strategic purpose of regulatory policies in China: some argue that they serve to stop rogue stem cell interventions, while others comment that the half-hearted implementation of regulation aims to allow a wide variety of stakeholder efforts, such as those of private hospitals, companies and military hospitals, to forge ahead with clinical stem cell research (Sipp 2009; Cyranoski 2012). Such policy would have rendered elite laboratories as casualties of strategic deliberation, as their translational research is subject to regulatory oversight through the funding they receive. In this article, however, we are interested in indicating why regulatory capacity building has been such a challenge in China.
Conceptualization
The regulatory development of many countries is largely influenced by the global dominance of 'Western' research ethics. Various theories emphasize the global hegemony of Western states on life science industry development and regulatory standards (Birch 2012; Salter et al. 2015) through the capitalist exploitation in life science development (Rajan 2006; Cooper 2008; Petryna 2009), the political and ethical governance of RM (Bharadwaj and Glasner 2008; Gottweis et al. 2009; Thompson 2013; Webster 2013) and 'cosmopolitisation' (Zhang 2012). Such hegemonic conditions could be regarded as disabling, when standards are costly and designs are alien. But not all countries follow Western models. In fact, alternative standards and norms are being developed, and in some countries a permissive regulatory regime is viewed as enabling (Sleeboom-Faulkner et al. 2016). For this reason, the general focus of global theories on hegemony, neo-liberalism, and cosmopolitanism needs to be complemented by a closer examination of how national policies articulate both international and local regulatory orientations in the field of the life sciences. This involves the observation of local modes of stem cell governance, healthcare needs, and economic and scientific ambitions (Sleeboom-Faulkner 2016). In this article, then, we focus on regulatory capacity building, defined as the ability to manage and deal with internal and external regulatory pressures on national regulatory policy-making. We show the challenges faced by a government that has to deal with competing sets of regulations, and argue that these are contingent upon national development strategies and sub-national economic and political developments in the field.
In our view, there is no single global force and no single local pathway that determines the adoption of regulatory standards and values (Ong and Collier 2008); instead, the particular conditions of a country (Sleeboom-Faulkner et al. 2016) and local regulatory developments form both the limitations and the tools of regulatory capacity building. Rather than view regulatory capacity building as the ability to adopt regulatory standards that are developed and vetted elsewhere, we use the term regulatory capacity building to refer to the efforts of agencies and regulators to find pathways that can organize procedures, formulate guidelines, and meet regulatory challenges by users in practice. However, when harmonized with the guidelines of international regulatory organizations, such as the ISSCR, such regulatory capacity building can lead to clashes with and between local stakeholders. Although the regulation may enable the translational research of local elite laboratories, it might clash with local measures taken by existing stem cell networks that work with their own informal regulatory standards. In our view, the regulatory developments that have enabled local economic and technological development in the areas of RM have, until recently, pre-empted the development of what Andrew Barry calls large-scale technological zones (Barry 2006), thereby stymying the development of internationally adjusted regulation.
The harmonization of standards and regulation in stem cell science is believed to enable exchanges in stem cell research and its translation (Eriksson and Webster 2008). In technological zones, such unification takes place in spaces where differences between technical practices, procedures and forms have been reduced or common standards have been established (Barry 2006: 239-45). Technological zones are abstract (not geographical) regions that share a 'community of practice'. The networks in this study, however, are held together and shaped not just by technological knowledge exchanges, but also by entrepreneurial forms of collaboration or bionetworks (Sui and Sleeboom-Faulkner 2015). Such bionetworks develop regulatory orientations instrumental to delineating the rights and wrongs of scientific research and clinical practice. In the case of China, diverging 'local' forms of regulatory harmonization in the field of clinical stem cell research have developed as different communities of practice: diverging spaces of regulatory harmonization have come about across the various bionetworks for a sustained period of time, directly or indirectly supported at various governmental levels. Efforts by the national government to strengthen regulation, aimed at policing and enabling the field, clash with the norms of established communities of practice. In China, this has led to a prolonged regulatory stalemate, frustrating efforts of national harmonization. As we shall argue, this development has been made possible largely due to China's socio-economic and infrastructural diversity, and its political organization: its relatively large size and the concentration of power in Beijing have created geographical differences in opportunities, and a great diversity in institutional structures has come about, characterized by contradictory developments between regulatory policies created by the political centre in Beijing and those created by provincial governing institutions.
Method and overview
Regulatory pathways are historical and path-dependent. An emphasis on both regulatory capacity building and the entrenched development of bionetworks is necessary to understand the development of the technological zones that have emerged. Our approach proceeds from the view that an exclusive focus on how nation states are limited by global hegemonies neglects locally formed hegemonies and the multitude of forms of regulatory orientation that exist at subnational (provincial and municipal) levels. A focus on factors underpinning regulatory development can improve our understanding of the national regulatory impasse. This article illustrates the various dimensions of procedural standards using six cases, showing how uneven conditions and the path-dependency of communities of practice yield various orientations vis-à-vis national regulation. The cases were selected to show the contrasting regulatory and working arrangements of stem cell hubs that express a desire for national and workable regulation at national stem cell conferences. Although the cases in themselves are unique, they represent the main organized forms of stem cell research in China. The six cases also illustrate a variety of regulatory orientations that have developed in interaction with global regulatory trends and the development of local regulatory arrangements. The notion of regulatory orientation, as pointed out above, refers to the shared normative delineations of scientific research and clinical practices underpinning the regulatory arrangements developed in bionetworks. Examples of such normative delineations are making pro-active regulatory contributions to steer the meaning of what is 'good practice', creating alternative regulation to define one's own 'good practice', and toeing the official line to show one adheres to dominant notions of 'good practice'. The local and institutionally entrenched nature of these diverse regulatory orientations, as we shall see, forms a great challenge to regulatory capacity building on a national level.
The materials presented draw on fieldwork in China, which took place between 2007 and 2014. The authors conducted 128 semi-structured interviews in both Chinese (about three quarters) and English with experts engaged in various aspects of clinical stem cell research (policy-makers and bioethicists [18], company managers and staff [17], stem cell scientists [59], and medical professionals [34]) in over twenty stem cell hubs in Beijing, Tianjin, Hangzhou, Shanghai, Changsha, Wuhan, Taizhou, Shenzhen, Harbin, Haikou, and Guangzhou. In addition, we attended and spoke at various conferences on stem cell science in China and Asia. The relevance of these numbers lies in the broad basis for formulating the six most common forms of regulatory orientation, exemplified by six cases or bionetworks. The interviews were analysed by repeated readings, thematic content analysis, and the abductive method (Timmermans and Tavory 2012), through which we identified significant examples and explored the concepts of 'regulatory capacity building' and 'regulatory orientation'. As illustrated, the cases exemplifying the bionetworks correlated with various socio-economic and political characteristics. As these characteristics can explain the different regulatory orientations of the bionetworks, we consider them to express the six most common kinds of regulatory orientation.
The eleven cited interviews with scientists working on RM were conducted by the first author in Chinese (9) and English (2). We limited the number of direct references for practical reasons (word count) and to avoid information that could lead to an undesirable identification of interviewees. The names of interviewees have been replaced with pseudonyms (shown in the Appendix, Table A.1), as the focus of this article is on institutional processes rather than on persons. However, when we draw on materials on well-known figures that can be found in the public domain, we have copied the names used in the publications concerned. We have made sure that the connections relating our interviews to these publicly known individuals cannot be traced.
The next section introduces six bionetworks, followed by a discussion of regulatory orientations and why the notion of regulatory capacity building needs to be relational in order to be effective (Table 1).
Bionetworks and the formation of technological zones
The bionetworks described in this section exemplify the most common types of communities of clinical stem cell research practice, each of which has developed its own regulatory orientation. As discussed below, the locally entrenched bionetworks develop particular 'technological zones' across geographical boundaries. This makes regulatory harmonization particularly challenging.

Beijing's Chinese Academy of Medical Sciences: elite institutions close to power

In 2003, when Zhao first asked permission to use BM-MSC in a clinical trial, no clear guidelines were available for the use of allogeneic cells, defined by the CFDA as Grade-3 new drugs in need of research review. Zhao's group provided regulators with basic explanations of the procedures, and helped create the regulation that gave them permission to go ahead with the BM-MSC trial in patients with GvHD in 2004 (interview Cha; also see Chen 2009). In December, Zhao began to collaborate with another CAMS team in Tianjin, which had access to patients in the People's Liberation Army (PLA) 307 Hospital (People's Daily 2005). In 2006, Phase II of the GvHD clinical trial commenced, but in 2009, when Phase II was close to finishing, the then-SFDA put a general halt to clinical stem cell applications. Nevertheless, Zhao was able to continue recruitment for clinical trials for biliary cirrhosis (ClinicalTrials.gov 2016a) and for GvHD, in collaboration with CAMS, Zhejiang University and various military hospitals, which are regulated separately (ClinicalTrials.gov 2016b). In 2012, Zhao's study was the first 'pilot' case to get permission to conduct clinical trials to test the new regulatory system (interview Cai).

Being an elite institute close to the corridors of power has shaped the regulatory orientation of CAMS through both its dependence and its influence on state power. Thus, it has received substantial state support. For instance, in 2004, the Ministry of Science and Technology (MoST) invested some 40 m RMB (then US$ 4.8 m) into the research (People's Daily 2005; Chen 2009). At the same time, it could help create the regulation from which its own research would benefit, and it had access to a network of hospitals and state supported academies. Most elite laboratories of well-known academies and universities receive state funding through which they are tied to state policies. Such elite laboratories usually develop a regulatory orientation of toeing the official regulatory policy line. However, by being so close to state power, this bionetwork was able to work proactively by contributing to regulatory developments.
The Tianjin stem cell cluster: stem cell industries straddling elite research institutes and private companies
The entrepreneurial cluster around Tianjin Municipality exemplifies the hybridization of state-supported higher educational institutions that have been able to attract private funding. Such clusters combine funding received from state institutions, local governments, and private companies. Their institutional complexity provides them with the leverage to carve out developmental pathways that are not always supported by the central government. While receiving considerable state funding for the IH and the Cell Products & National Engineering Research Centre, Han's network was mainly indebted to local investors. Networking activities between this industrialization hub, the country's largest UCB bank, the placenta bank, and the IH have yielded both wealth and fame. Han has long-term international collaborations with laboratories in France and with Amcell, and occupies important national positions as regulator, as academician, as 'father of family banking in China', as one of the initiators of a licensed UCB bank, and as advocate of research ethics.
The regulatory orientation of elite institutions that are embedded in private- and state-industrial organizations tends to be multiple, whereby international, state, and commercial requirements are taken into account in industrial decision-making. State elite laboratories that advocate 'international' procedures question the standards of the banked MSC cells used in commercial clinical applications. In their view, only transparency can lead to harmonized standards, which they regard as essential to safeguarding their own reputation (interview Hou). But the dense interlacing of powerful state and commercial institutions can be a challenge to regulatory oversight. Furthermore, clamping down on such bionetworks may affect the academic research and industrial services of others, including those of the state itself, as state institutions can benefit from the resources provided by these bionetworks, including biomaterials, bio-banking, and processing services.
The military and stem cell activities: a separate world
Although China has a diverse network of military hospitals and research institutes, which can be found in all major Chinese cities, as one category they constitute a different world from other medical institutions because they follow their own regulatory guidelines. Together with university hospitals, military hospitals are seen as the best medical facilities in China. But military hospitals have their own set of rules and regulations for clinical stem cell procedures, and are overseen by military bodies, separate from the MoH, which answer to the Central Military Commission. Military research institutes providing stem cell therapies that are not authorized by the CFDA include the Academies of Military Medical Sciences (AMMS), which offers a cure for diabetes (AMMS 2014), and Peoples' Liberation Army (PLA) universities, as well as military and police hospitals, PLA hospitals (Shizhentang 2014), and navy hospitals.

The military hospitals were early providers of stem cell interventions. According to An Yihua, director of the stem cell transplant department at Beijing's General Hospital of the Chinese People's Armed Police Forces, Chinese hospitals have been using foetal brain cells to treat patients since the 1980s. An's hospital alone has treated nearly 4,000 patients with neural stem cells since 2003, including foreign patients from twenty countries (Tam 2011a). Many small hospitals followed suit. Top-tier military hospitals, though relatively autonomous from a regulatory point of view, also collaborate with international contract research organizations (CROs) in multicentre clinical trials, such as the collaborative study of a Phase I/II ischaemic stroke trial by Neuralstem and BaYi Brain Hospital (Neuralstem 2014), and with hospitals and research institutes at home. Both CAMS and AMMS have close research links with the military hospitals to further translational research. In addition to state research institutions, there are also private research centres and hospitals that collaborate with the military by providing cell-processing services (interview Dan).
In China, the military has a good name among much of the population. Zhou, the mother of a patient, was told that stem cells were like seeds: after being planted on a liver, they grow, divide and spread, and finally form a healthy liver. The failure of the intervention was published widely and damaged trust relations (Tam 2011a). Leading translational stem cell researchers interviewed, including scientists from CAMS, regard the stem cells derived from healthy aborted foetuses as an obvious advantage for China's research community. The MoH is aware of this. The military provides therapies to study their efficacy rather than to earn profits. As such, the publication of research results at home is thought to be invaluable as a source of experience with stem cell procedures and as a basis for making research progress.
The military, due to their exceptional status, have remained well-financed and well-resourced closed pockets for research and the provision of what are known as experimental stem cell interventions. They have developed their own regulatory orientation, which is rather varied, but the permissiveness of some permits applications disallowed elsewhere in China. Despite the January 2012 Notification (MoH 2012), the military continued to collaborate with both private hospitals and prestigious academic research institutions such as CAS, providing them with access to patients at least until our visit later in the autumn of that year. Although provision is continuing through private clinics, it is not yet clear to what extent the autonomous regulatory orientation of the military networks is being affected by the 2015 draft regulation.
Beike biotech: cell banking and processing without observing standards?
Beike Biotechnology was set up in July 2005 by Xiang (Sean) Hu in Shenzhen. It initially concentrated on the development and commercialization of adult stem cell therapies, and it has been severely criticized for the commercial provision of unapproved stem cell interventions (McMahon 2014). But Beike strategically deployed international standards for biobanking, scientific research, safety, efficacy, and ethics to maintain its large network.
After his PhD and research on biochemistry and molecular biology at the Universities of Gothenburg (Sweden) and British Columbia (Canada), Hu returned to Zhengzhou University in China in 2001, where he decided to focus on translational research for severely disabled patients. Hu soon attracted capital from Hong Kong Science & Technology and Qinghua Universities (Khayashar 2007). In 2006, the Shenzhen government invested 900k RMB (US$4 m) into its industrial zone, to which it invited Beike, and, in 2009, Beike opened its Stem Cell RM Industrial Complex in Taizhou, calling it 'the world's largest stem cell storage and processing facility' (Beike 2014c).
Beike's work in 2010 with Drum Tower Hospital and Jiangsu University exemplifies its collaborations in translational research and clinical stem cell applications (interview Deng) (Beike 2014b). Financed by Jiangsu Province (US$1.8 million), the collaboration aimed to develop clinical applications using hUC-MSC to treat systemic lupus erythematosus (SLE), multiple sclerosis (MS), and other degenerative diseases. Beike provided the facilities, equipment, management framework, and certain proprietary clinical stem cell technologies for the project. Nanjing University Medical School's Drum Tower Hospital was responsible for administering the human trials, enlisting 200 patients, while Jiangsu University brought its biological research and development resources to the production and animal study phases of the project (Sun et al. 2010).
Internationally, Beike has also branched out to Bangkok, Delhi, and Malaysia, and it created a rehabilitation centre in Romania and invested in stem cell ventures in Japan and Brazil (Beike 2014a). Beike organizes international conferences, fostering national and international collaborations (Zeng 2009), and maintains connections with the political leadership. In 2010, Premier Wen Jiabao and President Hu Jintao visited Shenzhen, where they lauded Beike as 'the world's most advanced venture', although the therapies it facilitates have been prohibited since May 2009 (Youtube 2009). Beike has been criticized for selling 'unproven stem cell therapies' for high fees (60–150k RMB in 2012). In 2013, Beike claimed to have treated over 15,000 patients, of whom just over half were Chinese (interview Tu). Revenue is mainly pumped into the company's biobanking branch, which since 2012 has held AABB accreditation and collaborative agreements with provincial hospitals on tissue-bank management (interview Yan).
Collaboration with local funders, well-known researchers, and hospitals is crucial to Beike's development of stem cell products provided through collaborations with provincial hospitals, while industrial areas, universities, and funders are crucial for its biobanking activities. Its international accreditation and proprietary technologies have gained Beike credibility, and its research and publications have helped Beike build up experience and academic capital. As Beike's activities are intertwined with state funding, research and banking, provincial funders, universities, and hospitals, it has considerable leverage, which it uses to lobby the committee formulating the 2015 regulation (personal communication, Yang).
Beike has developed various orientations towards regulation. Although Beike has claimed to adhere to national and international regulations, for a long time it has evaded them by delegating the application of controversial clinical procedures to hospitals, which carry the risks of regulatory violation. On the other hand, Beike has also been developing its own standards for deciding which patients to treat and for measuring treatment progress. In this sense, it has its own regulatory orientation to which it adheres when it can.
Although Beike is best known for its stem cell banking and processing activities, there are other similar industrial networks in operation, such as the 'Strategic Alliance for Huaxia Stem Cell Industry and Technological Innovation'. 1
The Guangzhou Alliance: university-hospital-industry alliances
The Guangzhou Alliance exemplifies one of the various university-hospital-industry alliances that aim to translate RM into clinical applications, rather than making a profit. Other examples of local alliances, financially supported by local industry, have been set up in Shanghai and Shenzhen. On 19 June 2008, twelve research institutes, hospitals, and companies involved in RM in the Guangzhou area forged a collaboration to set up the Guangzhou and RM Alliance to facilitate clinical applications (Guangzhou Shengwu-Yiyaowang 2014). This bionetwork illustrates how it has been possible for a regional organization to formulate its own standards for safety, efficacy, scientific protocols, and ethics. Six stem cell science institutes in Guangzhou started developing clinical applications for the Guangzhou City Large S&T Expert Program (Guangzhou Shengwu-Yiyaowang 2014). The Alliance, headed by Professor Pei Duanqing from the Guangzhou Institute for Biomedicine and Health (GIBH), aimed, first, to further basic stem cell science and technological innovation and to design industrialization strategies; second, to provide technological training and contribute technical equipment to Guangzhou's development and sharing of resources; and, third, to develop clinical stem cell procedures.
One example is the collaboration of a tissue-engineering centre (TEC) with various hospitals in transplanting MSCs into thirty patients with GvHD, twenty-two of whom clearly showed progress (Guangzhou Shengwu-Yiyaowang 2014). Although the TEC received funding from the Ministry of Education for basic stem cell research in 2007, it also received funding from the local government in Guangdong for translational research. In 2000, the research team found that administering BM-MSCs to rats decreases immunological rejection in GvHD, compared to transplantation of BM alone (interview Deng). Until hearing about a Japanese researcher using a mother's BM-MSCs for her child's GvHD, and about Osiris conducting clinical trials on GvHD, the TEC team leader had not planned to apply MSCs clinically. As his university did not have enough funding for clinical trials, and as the funding from the local government was only sufficient for clinical studies, the TEC started collaborating with hospitals from the Alliance with small amounts of funding, initially for 2–3 years. They planned to apply for a state license after the basics had been put in place. To the team leader, this research was not about making money, but about 'returning the favor to the taxpayer' (interview Deng).
The Alliance's labour division stipulated that GIBH provides the technology; two women's hospitals provide biomaterials; the Centre for Cells and Tissue Engineering, Southern Medical University, Guangdong Province People's Hospital, the Third Affiliated Hospital of the Guangzhou Medical Academy, and Guangzhou City's First People's Hospital provide the clinical research basis; while Hanshi, Seer, and Guangzhou Huanhuang S&T Companies commercialize it. The Alliance had established its own rules for conducting research and clinical translation to accommodate patients' demands and fulfil the expectations of local investors in stem cell applications. The Alliance used the following procedure: researchers had to apply for the permission of IRBs before starting clinical research, and register the research with the Guangzhou Hygiene Department.
After experimental stem cell research was denounced in May 2009, the Alliance started to invite SFDA staff as visiting professors, to learn how to conform to the ever-changing standards and regulations and to coordinate its activities with the SFDA. This would facilitate future applications for marketing licenses (interview Deng).
To facilitate clinical stem cell applications with the support of local governments, the regional alliance developed alternative regulation, formulating its own standards for safety, efficacy, scientific protocols, and ethics. However, after the promulgation of the 2009 Management Measures for the Clinical Use of Medical Technologies, it claims to have followed the official line. After the publication of the 2015 draft regulation, alliance research institutions have started to operate on certified hospital premises, as registered experimental interventions can be used as last-resort treatment (CFDA 2015; Rosemann and Sleeboom-Faulkner 2016). However, local governments still exert funding pressures to encourage the provision of stem cell interventions for GvHD and to start clinical trials.
Partly state-dependent enterprises from Changsha: in anticipation of guidelines
The last case exemplifies a semi-private bionetwork that has close links to the state, even though it operates largely independently. Semi-independent research institutions that do not have access to powerful central or regional institutions depend on the state and emphasize that their activities follow state rules. Xiangya Reproductive Hospital's biomedical research in Changsha also shows preparedness to cooperate in forging official guidelines, and it is known for its provision of training courses, ethics activities, and charity. The enterprise goes back three generations: Lu Guangxiu, its current leader, followed in her father's footsteps, and her son followed in hers. In 1984, she opened China's first in vitro fertilization (IVF) clinic, and in 2003 she became President of the Institute of Reproduction & Stem Cell Engineering (Central South University) and President of the Reproductive & Genetic Hospital CITIC-Xiangya. CITIC (China International Trust and Investment Corporation) funded the initial commercialization of the research. In 2004, the National Development and Reform Commission decided to fund a second national centre for stem cells, the National Centre for Human Stem Cell Research Engineering (NC-SCRE) in Changsha, and asked Professor Lu to lead it. The committee invested 20 m RMB, while Lu had to raise an additional 90 m RMB, which was partly provided by the Changsha local and Hunan Provincial governments (interview Li, 5 November 2012). In 2009, Lu formed an enterprise, the Hunan Guangxiu Biological Science Co., Ltd., to build the National Centre and the Hunan Guangxiu Hospital next door. The case of Lu's 'family enterprise' illustrates that those conforming to official guidelines change the direction of their research efforts to basic research, but hope to benefit from state support in the future.
Apart from the clinically graded embryonic stem cell bank, CITIC-Xiangya and the NC-SCRE have an umbilical cord bank, a cord blood bank, a placenta bank, and an induced pluripotent stem cell (iPS) bank. Preparations for the cord blood bank started in 2008. Although they have both a private and a public UCB bank, they now want to focus on the public bank to develop clinical stem cell interventions for patients with cerebral palsy, spinal cord injury, ischaemia (for diabetes), cirrhosis of the liver, and pancreatitis. The head of the UCB bank emphasized that no clinical applications had yet been made: 'Patients keep ringing to ask for help. But it would be a violation of state regulation, and we have no evidence for safety yet' (interview Shang). Lu and her team were the first researchers to engage with and publish on bioethics issues in practice. As soon as the new regulation is promulgated, the Changsha group hopes to receive funding for their UCB projects. Among their contacts in Beijing are Zhao Chunhua, who had permission to use BM-MSCs, and Wu Zuke, a famous academician from AMMS, who works with military hospitals (interview Li). While Zhao and Wu continue their research, Changsha is waiting for the green light.
Although largely independent, this Changsha-based research hub, like other state-dependent institutions engaged in clinical research, needs the support and funding of regulators and potential collaborators in Beijing (CAMS/PUMC) to continue its clinical and research activities. Ethics and research authorization are crucial to its ability to conduct business and to its general credibility. Accordingly, it is keen to follow official guidelines and regulations; from its perspective, regulatory deficit hampers translational research activities.
Discussion: diverging regulatory orientations and regulatory capacity building
Global hegemonic pressures have led governments to follow international guidelines that may not suit a majority of interest groups at home (Sleeboom-Faulkner et al. 2016). In China, initial regulatory reform aimed at policing and enabling the field of clinical stem cell applications in accordance with international guidelines has clashed with the interests of pre-established communities of practice. This led to a prolonged regulatory stalemate, hampering further efforts at national harmonization. The conditions that allowed this development to occur in the first place were related to China's geographical and political characteristics as a large LMIC. Its policy of economic growth, whereby 'some may get rich first' (Deng Xiaoping, cited by Wong 2014), has created the conditions for uneven and unequal socio-economic and scientific infrastructures. The accompanying diversity in regulatory orientations is characterized by contradictory developments between the political centre and peripheral governing institutions.
The six bionetworks of clinical stem cell research, on the one hand, exemplify the variety of shared and diverging regulatory orientations in agreement with these socio-economic inequalities and contradictions, and, on the other hand, reflect the frictions between dominant global regulatory trends and the development of local regulatory arrangements. The development of locally entrenched bionetworks with their particular communities of practice has made the creation of an effective national regulatory infrastructure a major challenge. Local bionetworks have invested in material and intellectual resources, patient recruitment, research networks, commercial relations, and collaborative agreements with municipal, provincial, and national governments over a sustained period of time. They display a range of regulatory orientations in terms of setting standards for safety, efficacy, scientific protocol, licensing, and ethics, shaped variously through local, regional, public, private, and state institutions (see Table 2).
One of the reasons it is hard to change the ways in which clinical stem cell research is practiced in China is its embedding in bionetworks, which feed on local power structures, and the cross-linkages between the bionetworks. Although bionetworks operate around the norms and rules shaped by a shared organizational orientation and scientific practice, they are also tied to other bionetworks with different scientific norms and regulations. These cross-cutting linkages can be found between bionetworks across China and beyond. Thus, we saw that Hanshi in Tianjin was a member of the Guangzhou Alliance; Beijing's CAMS operated a biobank with Tianjin's IH; Changsha works closely with Beijing's PUMC and CAMS, but also with Lu Daopei hospital, which works closely with military hospitals (interview Dan); and, besides having links to the cord blood banks of various provincial capitals, Beike has close links with Sun Yat-Sen University in Guangzhou. Some bionetworks have myriad collaborations with research institutions abroad, which may well thrive due to differences in the permissiveness of national regulatory systems (Sleeboom-Faulkner and Patra 2011). Standards for clinical stem cell applications co-developed by local investors, researchers, and the stem cell industry diverge from official guidelines, and the promulgation of the 2015 draft regulation promised to eliminate these inconsistencies.
Regulatory implications
The implementation of the new draft regulations is likely to reconfigure the position and the regulatory orientations of bionetworks. It is bound to result in unequal access to financial resources, including state funding and industrial investment. Elite institutions are likely to benefit, but the new standards and requirements may be unaffordable to those less well-resourced or without state support. Although the new draft regulation is clear about its requirements for clinical trials, it is less clear about the specification of stem cell lines and the future of clinical stem cell research outside the new regulatory framework. It is unclear whether the clinical use of stem cells will be permitted for patients without other options, and under what conditions. The Guangzhou Alliance, Beike, and Hanshi (interviews Deng; Tu; Cai), as well as some elite institutes (interviews Li; Dan), considered such experimental treatments justified as a last-resort option, and all researchers emphasized the pressure exerted by local funders and patients to develop 'therapies'. On the basis of former trends, it is likely that the provinces and municipalities that have particular vested interests in patient health and clinical stem cell products will interpret the draft regulation in a manner that befits established investment patterns for clinical applications.
Considering that the various bionetworks have developed procedural standards that cater to their own particular 'technological zones', it is not surprising that the national government has been struggling to articulate a set of regulations acceptable to all players. The 2015 draft regulation has foremost accommodated the regulatory demands of elite laboratories. However, the requirements for market approval for clinical trials and the conditions for the routine use of pharmaceutical stem cell products in hospitals have not been published (Sleeboom-Faulkner 2016). The 2015 regulation no longer speaks of controlled research trials (Phases I-III) (MoST 2013), leaving open the possibility of adopting a Japanese or South Korean model that allows conditional market approval on the basis of clinical studies with relatively small numbers of patients (Azuma 2015). In any case, considering China's diversity of bionetworks and large number of medical institutions, a successful implementation of the draft regulation will require considerable investment in regulatory oversight.
It is not clear how the regulation affects the clinical stem cell practices of the army and police hospitals, where many commercial stem cell activities have been located in recent years. As the army and police hospitals conduct a large proportion of clinical stem cell research in China, this may affect the overall development of the field. Furthermore, it is uncertain to what extent the new regulation can be ignored or circumvented. The new regulatory arrangements provide official permission for clinical applications only to the stem cell trials that take place in qualified hospitals. Although the draft regulation allows a clampdown on unauthorized stem cell applications (McMahon 2014), its focus on review could leave China's trade in stem cell products unmonitored (Rosemann and Sleeboom-Faulkner 2016). The new draft regulation also leaves open questions about the international collaborations the stem cell community hopes to maintain. The emphasis of the regulation on preclinical studies, clinical trials, quality controls, and independent expert committees corresponds with guidelines developed by the ISSCR (ISSCR 2008), the US Food and Drug Administration (US-FDA 2015), and the European Medicines Agency (EMA 2007), but clarity on the conditions for market permission, IPR, and the role of foreign research entities in collaborative research is crucial to attract investors and collaborative partners.
Conclusion
This article began by asking why national regulation in China took a long time to develop and, even under the 2015 draft regulation, is still unclear in crucial areas. Rather than just referring to theories that emphasize the debilitating influence of the hegemony of 'Western' stem cell regulation, or concentrating on the ways in which the government may have tried to enable China's varied landscape of clinical stem cell research to develop, we have outlined some of the difficulties of regulatory steering in China as a large LMIC. Apart from being subject to international political and regulatory trends, we showed how the development of procedural standards is complicated by the existence of bionetworks with shared and diverging regulatory orientations. These orientations were shaped in interaction with international, national, and provincial governments and with local financial, economic, and regulatory policies.
Although any country's institutional landscape of clinical stem cell research may be varied, in China this variety has been allowed to flourish and to consolidate through local bionetworks – entrepreneurial scientific networks that share particular scientific norms and practices – for a sustained period of time. The initial, only partly implemented, regulatory conditions in this complex landscape have made it possible for a large number of researchers in China to forge ahead in the clinical stem cell field through unauthorized clinical applications. Nevertheless, already before 2009, the number of stem cell scientists calling for tightly controlled regulation had started to grow; these voices wanted China to take a legitimate position in the global clinical stem cell research field. In this sense, China is an old newcomer: its size, the state's ability to fund state-of-the-art stem cell science, its varied institutional landscape, and its 'permissive' regulation (Sleeboom-Faulkner and Patra 2009) had made China an early starter in the field.
The 2009 regulation was a first visible effort to control and regulate the field through the official announcement of the intention to clamp down on for-profit human stem cell enterprises, a step which started to have a perceptible effect only from 2012. Although the initial development of the stem cell field had benefited from the relatively uncontrolled environment with its diverse range of stem cell networks, it has increasingly become a hindrance to the field's growing cosmopolitanization (Zhang 2012). Thus, the international compatibility of research standards, reputation, and ethics became essential to China's elite centres' efforts to merge with technological zones evolving in the clinical stem cell field, while other bionetworks developed their own idiosyncratic arrangements in line with the aims of local investors and incidental national and international projects. The true challenge China is facing is the double-edged sword of regulatory capacity building: to create national regulation acknowledged by potential collaborators at home and abroad, as well as to cater for the various bionetworks with the potential to fulfil China's political strategy as a world leader in the field of stem cell science.
For this reason, we argue that the notion of regulatory capacity building must not indicate the importation of guidelines from other organizations or countries. Rather, it needs to refer to the ability of a country to relate to scientific communities that have been formed under different conditions. The notion of regulatory capacity building, then, needs to refer to the capacity to develop regulation that deals with the regulatory discrepancies between international and national guidelines, and with the different regulatory orientations among local bionetworks. This means that the implementation of regulation should have enough clout to function as planned in transactions and exchanges with institutions both abroad and at home, while being flexible enough to adapt if implementation is impeded at home. In China, such efforts are complicated by the entrenched financial and research interests and regulatory orientations that are embedded in the various bionetworks, some of which cater to the demands of Chinese as well as international patients, and others of which have unauthorized arrangements with powerful (legitimate) research institutions. On an international level, this means that, to avoid clashes as a result of global regulatory discrepancies, the development of new regulation needs to be more inclusive of researchers in large LMICs such as China.
The hybrid cluster illustrates the state's challenges in implementing national standards of safety, efficacy, and ethics. In 2000, Tianjin set up the National Stem Cell Engineering (NSCE) Industrialization Base, where its research centre developed a technological platform (2002), which was to serve the development of the life sciences. Professor Han, a successful scientist who spent 11 years in Paris, was asked to run the famous Institute of Hematology (IH) of the Chinese Academy of Medical Sciences (CAMS)/Peking Union Medical College (PUMC). The IH has received major funding from the state (IH 2014), and from private sources for the construction of buildings in the TEDA development zone. Han co-created the company Union Stem Cell & Gene Engineering (USCGEN) and, together with Zhao Chunhua, set up the Tianjin Umbilical Cord Blood Bank in 2001. The local government invested over 10 billion RMB in the Tianjin Huayuan Hi-tech Park, where the Tianjin UCB bank was established. Claiming to meet international standards, it obtained a license from the MoH (IH 2014). Under Han's direction, 50-odd hospitals in Tianjin started sending umbilical cord blood (UCB) to the bank. Now USCGEN manages and owns the entire process of UCB collection and research: recruitment, banking, cryopreservation, clinical application of stem cells, R&D, manufacture, and the distribution of monoclonal antibodies and gene chips. In June 2002, USCGEN set up the University for Pregnant Women to persuade couples to donate UCB (Union Stem Cell 2014). With the support of the National Development and Reform Commission and the Tianjin City Government, the Cell Product National Engineering Research Center was set up in 2004. In the same year, however, Han used his shares from USCGEN to establish Tianjin Amcell Gene Engineering Co., Ltd., a producer of human UC-MSCs, adipose-derived MSCs, placenta-derived MSCs, and amniotic membrane-derived MSCs. Its projects are financially supported by Tianjin City and backed by research at the IH. In January 2007, Han also set up Hanshi, or Huaxia Ganxibao Lianmeng (translated as 'The Beijing Health and Biotech Group'), which specializes in placenta UCB banking (HanShi 2011). In 2008, the Tianjin City UCB Bank and the China Bone Marrow Bank linked up with Tianjin Xiehe hospital, which was opened in May 2007 and started to specialize in stem cell transplantation and genetic diagnosis in 2008. It has become a large-scale centre for stem cell storage, research, and applications (interview Li).
In China, various sets of regulation have been issued since 2000. Apart from the 'Drug Administration Law' issued by the Ministry of Health (MoH, 2001) – now part of the National Health and Family Planning Commission (NHFPC) – these include the 'Quality Control Standards for Clinical Drug Trials' (China Food and Drug Administration [CFDA], 2007), the 'Interim Regulations on the Ethical Review of Biomedical Research Involving Human Subjects', the Administrative Measures for Clinical Stem Cell Research Trials, the Administrative Measures for the Research Base of Clinical Stem Cell Trials, and the Guiding Principles for the Quality Control of Stem Cell Research Preparation and Preclinical Research.

The case of the Chinese Academy of Medical Sciences (CAMS) exemplifies a bionetwork close to central power. It relies heavily on state support, and illustrates how the state has affected its standards of protocol creation, safety, and efficacy. CAMS is a leader in immunology and pioneers foetal stem cell research (Eurekalert 2009). Professor Zhao Chunhua leads research on clinical applications of haematopoietic stem cells, complemented with what are controversially known as bone marrow-derived mesenchymal stem or stromal cells (BM-MSC) (cf. Bianco 2014). Zhao was the first in China to receive support from the State Food and Drug Administration (SFDA) (the current China Food & Drug Administration [CFDA]) to start a clinical trial for patients with graft-versus-host disease (GvHD).
Table 1.
Its simultaneous closeness to and regulatory isolation from the state has given the military advantages over other stem cell enterprises. Despite the new draft regulation of 2015, military hospitals can continue to provide unauthorized treatment through arrangements they have with small private clinics, which continue to operate. The clinics operate on a hospital's premises and under its licence (Song 2011; Jourdan 2016).
Table 2. Bionetworks, socio-economic and political conditions, and their regulatory orientations

1. Elite institutions close to main regulatory power hubs (example: CAMS, Beijing). Socio-economic and political conditions: close to state power, with favourable access to state funding; respected; has a say in creating guidelines; under the state's aegis; internationally respected; collaborates with private companies and the military. Regulatory orientation: positive, as guidelines are advantageous and the institutions are under close state control; pioneers authorized clinical trials and research; when formal regulation is defunct, may receive special permissions to continue.

[...] Double regulatory orientation that tends to evade state guidelines and regulation when possible, emphasizing bioethical principles such as informed consent, patients' self-reporting, patient numbers, and high technical standards, e.g. for banking.

5. Regional university-hospital-industry alliances away from Beijing (example: Guangzhou). Socio-economic and political conditions: receive regional funding (from local governments, industry) to translate RM into clinical applications to cater for the growing demand for RM; often relatively isolated from Beijing regulators and power.

[...] Partly dependent on state funding and partly dependent on own/industrial funding; at a distance from Beijing, needs to toe the official line to sustain itself and linked enterprises. Its compliant regulatory orientation follows, supports, and advocates the official line; when state regulation is defunct, activities in the area concerned are hampered.
Could SCGF-Beta Levels Be Associated with Inflammation Markers and Insulin Resistance in Male Patients Suffering from Obesity-Related NAFLD?
One of the pathologic hallmarks of obesity is macrophage infiltration of adipose tissue, which has been confirmed as a source of multipotent adult stem cells. Stem cell growth factor-beta (SCGF-β) shows activity on granulocyte/macrophage progenitor cells in combination with granulocyte macrophage colony-stimulating factor (GM-CSF) and macrophage colony-stimulating factor (M-CSF). Obesity-associated inflammation induces insulin resistance (IR), which is central to nonalcoholic fatty liver disease (NAFLD) or hepatic steatosis (HS). We searched for relationships between levels of SCGF-β and those of C-reactive protein (CRP), interleukin-6 (IL-6), tumor necrosis factor-β (TNF-β), interleukin-12p40 (IL-12p40), interleukin-10 (IL-10), ferritin, GM-CSF and M-CSF, and between SCGF-β concentrations and IR in obese patients with HS. Eighty obese patients were retrospectively studied. Serum cytokine levels were measured by magnetic bead-based multiplex immunoassays. IR was evaluated by homeostatic model assessment (HOMA), HOMA-derived β-cell function (HOMA-B%), quantitative insulin sensitivity check index (QUICKI) and single point insulin sensitivity estimator (SPISE). HS and spleen volume were assessed by ultrasonography (US). SCGF-β and IL-6 levels predicted HOMA values (p = 0.032 and 0.041, respectively) only in males. In male patients, CRP and IL-6 levels predicted SCGF-β concentrations (p = 0.03 and 0.007, respectively), which in turn predicted HS at US (p = 0.037). SCGF-β levels were linked to IR and HS severity with a mediation role of CRP. IL-10 levels negatively predicted SCGF-β concentrations (p = 0.033). M-CSF levels predicted serum concentrations of both TNF-β and IL-12p40 (p = 0.00), but did not predict serum IL-10 (p = 0.30). The prediction of HOMA values by SCGF-β levels, likely mediated by markers of inflammation, characterizes this study, shedding some light on mechanisms inducing/worsening IR in male patients with obesity-related NAFLD.
Introduction
Indeed, only a small group of studies concerning different settings has appeared in the literature and, what is more, they are characterised by a surprising variability in the serum concentrations of this growth factor. For example, serum levels of SCGF-β in patients undergoing bone marrow transplantation ranged from 9760 ± 6810 to 25,010 ± 15,140 pg/mL [1]. Different levels of this cytokine were found in unstable asymptomatic carotid plaques compared to stable plaques, varying from undetectability to levels of 600 pg/mL [2]. Furthermore, SCGF-β was significantly increased in patients suffering from Chagas' disease with advanced heart failure compared to those without heart failure, exceeding 22,940 ± 2638 pg/mL [3]. Recently, authors demonstrated that high levels (>21,000 pg/mL) of serum SCGF-β are associated with non-responsiveness to therapy of HCC [4]. SCGF-β is elevated in the circulation of patients with chronic spinal cord injury compared with uninjured subjects, i.e., 47,037 pg/mL vs. 35,521 pg/mL [5]. Finally, Schirmer et al. found in plasma samples from human collateral circulation a median (interquartile) value of SCGF-β equal to 2624.00 (1646.38) pg/mL [6].
Aim
Considering that AT participates in inflammatory pathways [7] and that recruitment of macrophages into AT involves interactions of innate and adaptive immunity in multiple organs, with the crosstalk between adipocytes and macrophages lying at its core [8,9], we asked whether SCGF-β could have a direct or indirect role in a new AT environment characterised by an inflammatory status leading to IR.

Table S1. Predictions of SCGF-β levels by indices of inflammatory responses. It is noteworthy that CRP is the strongest predictor, while IL-10 negatively predicted SCGF-β; d.v., dependent variable; i.v., independent variable. Significant values are highlighted in bold. The low R-squared in the presence of significance shows that even noisy, high-variability data can have a significant trend: the predictor variable still provides information about the response even though data points fall further from the regression line in the graph.

Table S4. Prediction of HOMA by SCGF-β, M-CSF, TNF-β, IL-12p40, IL-6 and IL-10. Apart from the prediction of HOMA by IL-6, SCGF-β sufficiently predicted insulin resistance, evaluated as HOMA. On the basis of the prediction of HOMA by IL-6, the evaluation of a confounding variable, i.e., CRP, was carried out (see Supplementary Table S9). The low R-squared shows that even noisy, high-variability data can have a significant trend: the predictor variable still provides information about the response even though data points fall further from the regression line in the graph; d.v., dependent variable; i.v., independent variable. Significant values are highlighted in bold.
Conclusion
In other words, this is a possible example of immunometabolic regulation. That said, further confirmation from other studies is still needed, mainly regarding the gender difference, together with more compelling evidence from a purely mechanistic standpoint, to give our hypotheses a stronger construct.
Future directions
Since chronic inflammation is a major factor in obesity and related co-morbidities, the hope is that some of the specific mechanisms could be translated into optimising immune function in the obese during ageing, in order to improve their health.
Methods
To measure statistical associations, we chose a very powerful technique, i.e., regression (https://s3-eu-west-1.amazonaws.com/.../chapter_summary_ch13), which is used to identify the strength of the effect that independent variables have on a dependent variable. The predicted values do not depend on the order of predictors in the equation, in the sense that we are always solving the same equation. The statistical analyses were performed separately on males and females, but are presented as a single group or as separate groups according to their significance.
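As a rough illustration of this kind of analysis (not the study's actual pipeline), the following Python sketch computes HOMA-IR from fasting glucose and insulin using the standard formula (fasting insulin in µU/mL × fasting glucose in mmol/L / 22.5) and then fits an ordinary least squares regression of HOMA on SCGF-β and CRP separately for each sex; the data frame, column names, and values are hypothetical.

import pandas as pd
import statsmodels.api as sm

def homa_ir(glucose_mmol_l, insulin_uu_ml):
    # Standard HOMA-IR: fasting insulin (uU/mL) * fasting glucose (mmol/L) / 22.5
    return insulin_uu_ml * glucose_mmol_l / 22.5

# Hypothetical per-patient data (column names and values are illustrative only).
df = pd.DataFrame({
    "sex":       ["M"] * 6 + ["F"] * 6,
    "glucose":   [5.6, 6.1, 5.2, 6.4, 5.8, 6.9, 5.0, 5.9, 6.3, 5.4, 6.0, 6.6],          # mmol/L
    "insulin":   [14.0, 22.5, 11.8, 25.1, 16.3, 28.4, 9.7, 18.2, 21.0, 12.5, 19.8, 24.3],  # uU/mL
    "scgf_beta": [21000, 25500, 18700, 27100, 22300, 29400,
                  16400, 23800, 26100, 17900, 24500, 27600],                             # pg/mL
    "crp":       [3.1, 5.4, 2.2, 6.1, 3.8, 7.2, 1.8, 4.9, 6.0, 2.5, 5.1, 6.8],            # mg/L
})
df["homa"] = homa_ir(df["glucose"], df["insulin"])

# Ordinary least squares regression of HOMA on SCGF-beta and CRP, run separately by sex,
# mirroring the sex-stratified analysis described above.
for sex, group in df.groupby("sex"):
    X = sm.add_constant(group[["scgf_beta", "crp"]])
    model = sm.OLS(group["homa"], X).fit()
    print(sex, dict(model.params.round(4)), dict(model.pvalues.round(3)))

A real analysis would of course use the measured study variables and check the usual regression assumptions; the sketch only shows how a sex-stratified regression of an insulin-resistance index on cytokine levels can be set up.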
Principles of justice and the idea of practice-dependence
ABSTRACT In recent years, several political theorists have argued that reasonable principles of justice are practice-dependent. In this paper it is suggested that we can distinguish between at least two main models for doing practice-dependent theorizing about justice, interpretivism and constructivism, and that they can be understood as based in two different conceptions of practices. It is then argued that the reliance on the notion of participants that characterizes interpretivism disables this approach from adequately addressing certain matters of justice and that a better way of developing the idea of practice-dependence can be found in a constructivism that starts from the Rawlsian idea of overlapping consensus, but which shifts the focus of that approach from societies to a more open-ended category of domains, and which understands the parties to a possible overlapping consensus as stakeholders in a certain set of interconnected practices.
with, Dworkin's account of legal interpretation. Here the idea seems to be that the relevant triangulation is between (i) our practices and (ii) some moral or political values that are too indeterminate to simply derive a conception of justice from. This approach is explicitly adopted by some of the main proponents of practice-dependent theorizing, such as Aaron James (2005, 2012), and possibly by Ronzoni (2012) as well. The second approach is Rawlsian constructivism, which builds on, but need not be identical with, Rawls' political constructivism. Here the triangulation instead relies on (i) our practices and (ii) ideas about public reason and what is involved in being reasonable.
The argument in this paper will be in favour of the Rawlsian approach as being more promising when developing a framework for practice-dependent theorizing. The argument comes in three main steps. First, the underlying difference between the Dworkinian and Rawlsian approaches will be explained in terms of adherence to two different conceptions of practices. Second, it will be argued that while the Dworkinian approach might be reasonable in a legal context, it does not work as a generalized method for normative theorizing. Third, it will be argued that even if Rawls' political constructivism is in need of further development in order to work as a generalized method for practice-dependent theorizing, this type of generalization can be done by understanding Rawlsian constructivism in terms of a method for articulating normative principles for domains (sets of interconnected practices). 2
Understanding practices
Irrespective of how, more concretely, the idea of practice-dependent theorizing is developed, it seems quite clear that contemporary articulations of this idea have their roots in Rawls' later political theorizing, partly in the constructivist approach developed in Political Liberalism (1993), partly in the application of that approach in The Law of Peoples (1999b), the idea then being that different principles of justice will be reasonable at the domestic and the international/global level because of differences in the underlying practices. 3 Sangiovanni (2008, 138n2) also draws even further on Rawls, at least in his stated understanding of the term practice, namely as 'any form of activity specified by a system of rules which defines offices, roles, moves, penalties, defenses, and so on, and which gives the activity its structure' (Rawls 1955, 3n1).
The question here is not necessarily what practices really are (the social world is probably too messy for definitive definitions), but rather how we should best understand them in order to facilitate normative theorizing. Rawls' understanding of practices clearly then makes sense when seeking to theorize justice. It is a fundamentally distributivist understanding: it involves seeing practices as a form of distributions of rights and duties, where a given distribution will define which moves are open to whom. 4 It is also a picture that fits well with seeing practices as resulting from an ongoing bargaining process where a certain distribution of rights and duties has gradually grown out of repeated interactions, and where there are then normative expectations on us to play in accordance with those rules. 5 We might have very different goals in doing so, and we can play along both in good faith or bad, but the rules are there (although political theory can certainly aim at renegotiating them). Because of its strong focus on rules, we might call this a deontic understanding of practices. Its focus lies on which moves are open or not rather than on any deeper reasons for why certain moves make sense. Indeed, if we want to discuss whether the rules are reasonable or not, we need to take up a standpoint external to the relevant practices, such as constructing a theory of justice. We are then on the level of principles rather than rules.

2 For more on this, see Brännmark and Brandstedt (2019). 3 As suggested by Jubb (2016, 79-80), this frame of reference has arguably led to discussions of practice-dependence insufficiently engaging with moral and political philosophy beyond the global justice literature. 4 That Rawls understands rights and duties within a distributivist paradigm is a point of criticism in Young (1990, 25), but here this understanding will just be taken for granted.
At the time when Rawls articulated this conception, and drew his distinction between two concepts of rules, he was very much in line with Hart's (1961, 9) distinction between two types of rules, and what later became known as his practice theory of rules. Perhaps somewhat ironically, given that some adherents of practice-dependent theorizing both take their starting-point in Rawls and then favour a Dworkinian understanding of interpretation, the foremost critic of Hart was, of course, Dworkin. 6 The main criticism that Dworkin directs against the model proposed by Hart (and indirectly Rawls) is that it cannot adequately handle disagreement or controversy about what the rules say, but also that it gets the relation between practice and rules wrong: a practice is not a set of rules, a practice is something that underpins social rules and that can be used to justify them (Dworkin 1977, 58). For Dworkin, interpreting our practices becomes not just a matter of laying bare the rules that constitute them, but of laying bare what underlies the rules. There is no rock bottom simply consisting of rules. While theorists like Rawls and Hart understand practices in terms of an analogy with games, whether baseball (Rawls 1955, 25) or chess (Hart 1961, 56), the analogy that informs Dworkin's view of understanding practices is rather one with art, as involving 'a way of seeing what is interpreted – a social practice or tradition as much as a text or painting – as if this were the product of a decision to pursue one set of themes or visions or purposes, one "point," rather than another' (Dworkin 1986, 58-59).
We might call the conception of practices underlying Dworkin's model a telic conception of practices. On both conceptions one would certainly accept that practices involve rules, and an adherent of the deontic conception might very well accept that practices often come with ideas about some point(s) or purpose(s), but the difference is that, given a telic conception, you will not have captured the core of a practice if you have only accounted for the relevant rules. Indeed, similar to how some utilitarians have objected to deontological ethics by comparing it to 'rule worship' (Smart 1973, 6), Dworkin (1986, 89) warns of the risk of legal practice collapsing into 'runic traditionalism' without the right kind of interpretative attitude. As argued by Postema (1987, 302-05), the choice here is, however, not between understanding practices either in terms of mechanical performance in accordance with certain rules or in terms of some overarching point or purpose. In actuality, mastering a practice can be understood more in terms of being at home in it, being able to navigate it ably and even improvise in a way that still makes sense. But the latter only presupposes that actions within a practice are meaningful, not that this meaningfulness is understood in terms of some ultimate point or purpose of the practice. A particular action might, for instance, balance different duties together with the agent's own self-interest in an elegant and economical way without this way of excelling being a matter of fulfilling some underlying function of the practice as a whole.
We do not accordingly need to adopt the kind of interpretative stance favoured by Dworkin, and we can certainly still make interpretations without them being Dworkinian, 7 but it could still be a good method. One obvious reason why would be that it might facilitate a certain kind of theorizing or concrete interpretative practice. Dworkin's own account is an account of legal interpretation as guided by an 'adjudicative principle of integrity' (1986, 225), which 'asks judges to assume, so far as this is possible, that the law is structured by a coherent set of principles about justice and fairness and procedural due process' (1986, 243). To the extent that he should be understood as an example of practice-dependent theorizing, 8 it is because his approach involves a triangulation between these higher principles together with the relevant legal framework and practice, working towards a reading of existing practice that is infused by those higher principles, but does not involve straightforwardly deriving legal conclusions from them. Being neither a natural law theorist nor a positivist, Dworkin's approach is about enabling judges to read positive law in the light of justice. It is a form of legal interpretation where great weight is placed on positive law, but where the arc of legal practice still bends towards justice. 9 In practice-dependent political theorizing, we are of course instead trying to arrive at a conception of justice, so it is clearly not about simply using Dworkin's approach. 10 But we can arguably still take up that kind of interpretative stance, adopting a telic rather than a deontic conception of practices. Sangiovanni (as well as James) is explicit about adopting a Dworkinian three-stage model of interpretation, 11 where the first stage is pre-interpretative and is about identifying the shared object of interpretation, and where the interpretative second stage is characterized like this (Sangiovanni 2008, 148): First, the interpreter seeks to determine the point and purpose of the institution in question. What aims and goals is it intended to serve? Second, the interpreter assumes the point of view of the participants in order to reconstruct what reasons they might have for affirming its basic rules, procedures, and standards. Why and how do the participants arrange their affairs to achieve the goals and aims of the institution? In achieving both tasks, the interpreter seeks to understand the institution (or set of institutions) as an integral whole, whose parts work together in realizing a unique point and purpose.
Social reality can occasionally be quite messy, so the idea of understanding institutions as integral wholes should probably here be seen in the light of how Dworkin himself assumes a certain level of coherence in order for the interpretative process to get going. Of course, Rawlsian constructivism is also coherentist, since the end-goal is a state of reflective equilibrium, but there coherence comes at the end of the process rather than being assumed as an interpretive strategy. And even then, such coherence can simply be about consistency and mutual support, similar to how a deontological ethical theory can be coherent without there being a unique point and purpose which the moral framework serves. In contrast, the idea of integral wholes is about something being much more strongly woven together. Once we have this kind of interpretation in place, we turn to the post-interpretative stage, which is where we start moving towards articulating principles of justice. Here both Sangiovanni and James seem to presuppose that we already have a commitment to certain values, but that they are relatively indeterminate. Sangiovanni (2016, 17) talks about 'higher-level principles and values – of justice, legitimacy, solidarity, reciprocity, and so on' which we can then form determinate conceptions of based in our interpretation of the relevant practice(s). James (2012, 27n27) assumes a contractualist moral theory, but where principles of justice are then still tailored and justified in relation to a specific practice. In contrast to Dworkin, the relevant triangulation here is about arriving at a specific determinate conception of moral or political values or principles which are, at least initially, open to several different precisifications. In Dworkin we are reading practice in the light of justice; in Sangiovanni and James we are reading justice in the light of practice. In both cases, the idea is that when confronted with something that allows multiple readings, whether the law or certain higher-level values or principles, the assumption that our practices can be read in terms of their point(s) and purpose(s) allows us to reason our way towards determinate meanings or conceptions.

7 For instance, Rawls is clearly also engaged in interpretation, albeit primarily of the 'public culture' (1993, 13-14) that accompanies our practices. 8 Although see Meckled-Garcia (2013) for an argument that Dworkin's method is 'the opposite of practice-dependence.' 9 But even with respect to what one could take to be an obviously unjust law, like the 1850 Fugitive Slave Act, it could still be the case that the law of the USA at the time, properly interpreted, did include it (Dworkin 1986, 219). 10 A point noted by, e.g. Valentini (2011, 408). 11 Ronzoni (2012, 176) embraces the idea that 'principles of justice for specific practices depend on the nature of those very practices in light of a sound interpretative account of its point, purpose, scope, and actors.'
If we instead work with a deontic conception of practices, we can still seek coherence between the concrete rules governing those practices and a set of higher principles, but such a pure coherentist approach would be much more open-ended in where it would end up. Because of the messy and contested character of social and institutional reality, there will typically be significant indeterminacy in that there will be several different ways in which we could move towards greater coherence, and with nothing more to aid us than the ideal of coherence alone, there will be an arbitrariness in going in any specific direction. What characterizes the Rawlsian approach, however, is precisely that it is not a pure coherentism. It too adds a further factor to the equation, one that allows a kind of triangulation. Understood as a form of practice-dependent theorizing, the Rawlsian works with (i) principles of rationality, reasonableness, and the idea of public reason, and (ii) a particular social and institutional situation, in order to arrive at principles of justice addressing the latter. The reliance on (i) clearly locates Rawlsian constructivism in a Kantian tradition, although while Kant is attempting to derive determinate content from his analysis of practical reason as such (based in his particular metaphysics of the person and society (cf. Rawls 1993, 100)), the Rawlsian constructivist works with a thinner conception of (i) and needs (ii) as another known point in order to arrive at a determinate enough conception of justice. We will return to the Rawlsian approach in Section 3, but first let us look at some problems faced by the Dworkinian approach.
Against Dworkinian interpretivism
Social and institutional reality is often messy and contested. Dworkin's original approach was developed for a particular part of institutional reality that is arguably the most ordered and systematic, namely domestic law. When turning to other practices, one obvious worry about assuming an interpretative stance that operates on an analogy with works of art is that this will lead to a mischaracterization of those practices, making them come out as more harmonious and uncontested than they might actually be. Even to the extent that a given practice is relatively stable, it does not seem necessary that there must be a clear point and purpose to it, weaving it together into an integral whole; it might just represent a point of equilibrium in a social game involving different types of actors who all have their interests, a mere modus vivendi (Rawls 1993, 47). Stability can simply be a matter of how the balance of power aligns with how much different actors benefit from the practice in question. When we make this type of analysis, we tend however to take an external perspective, where the reasons that participants themselves appeal to might very well be understood simply as pretexts for power moves.
In contrast, the interpretivist approach relies on taking an internal perspective, reasoning from the point of view of participants. Yet this reliance on the perspective of participants seems far from unproblematic. To begin with, it seems to presuppose a certain willingness and active involvement in shaping a practice in order for someone to count as a participant, and it seems clear that practices can have a strong impact on people even when we cannot be said to really participate in them. Call this the problem of exclusion. 12 Additionally, there is something about the idea of us all just being 'participants' that hints at a basic equality between us. Yet take a society like Gilead (Atwood 1985). We can certainly describe both men and women there as all being participants in its practices, but would this really give a fair picture, when some have no say in writing the rules of that society? 13 Analysing those involved in a practice merely as participants with different points of view risks masking the power imbalances involved. People can suffer from false consciousness, wishful thinking or have adaptive preferences, and what we then risk doing in our interpretation of the point and purpose of a practice is simply to reify what are really fictions masking the real functioning of the relevant institutions. Call this the problem of masking.
It should be said that both James and Sangiovanni are aware of there possibly being these kinds of difficulties. James is careful to point out that identifying justice claims of participants will not have any 'immediate implication for the status of non-participants' (2005,309), and that it is possible that the justice claims of non-participants can be determined in some other way, while Sangiovanni suggests that we can employ ideology critique even within practice-dependent theorizing (2008,163). Both of these points are certainly correct in terms of identifying what is in principle still possible. But the question here need not, and perhaps should not, be understood as being about whether taking interpretivism as a starting-point positively rules out certain considerations, but about whether doing so misaligns the process of theory construction already from the outset. This might be an issue already with respect to domestic distributive justice, but with many other matters of justice we are dealing with much more fractured practices, and a focus on the category of participants becomes even more problematic. Let us consider three such examples.
To begin with, consider questions about global justice, more specifically the extent to which the more affluent have obligations to the less affluent. This has probably been the main area where practice-dependent theorizing has been applied and the argument is then often that we largely live in a world of sovereign states and that the principal arena for distributive justice is the domestic one (e.g. Miller 2007;Sangiovanni 2007). But is the global stage a reasonable domain for applying Dworkinian interpretation? While on the level of domestic justice we might perhaps already have reasonably shared conceptions of the point and purpose of our domestic institutions, and what we think that justice is about, the international or global level is quite different. It is arguably in large parts an anarchic system, with the actual balance of power between states determining much of what is possible and not, making it considerably more difficult to ascribe a deeper underlying point and purpose to the relevant practices. 14 While the international order certainly has consequences for us all, it seems like a stretch to conceptualize of us all simply as participants in it. Given the power relations involved, and the great discrepancies in wealth that exist between countries, there would seem to be a high likelihood of self-serving ideological constructs playing a significant role in the thinking of people living in the more affluent parts of the world, and adaptive preferences and false consciousness playing a significant role in the thinking of people in the less affluent parts of the world.
The second issue concerns questions about the status and rights of migrants and especially refugees. This has not really been a major area of activity in political theory until relatively recently, with Carens (2013) and Miller (2016) being two major works. Both of them have a strong focus on actual practice, but with Miller probably being the one who most clearly exemplifies practice-dependent theorizing (although not Dworkinian interpretivism per se). If we look at possibly ascribing point and purpose to current institutional arrangements regulating movements of people across borders, we again have the problem that when looking at the overall system level, there does not seem to be much point and purpose to it since it is more or less the product of policy choices made on the level of states with respect to perceived national interests. Miller seems to work from an assumption that we should not start from a systemic perspective already in how he frames his main questions: 'Should we encourage immigrants to join our societies or try to keep them out? If we are going to take some in but refuse others, how should we decide which ones to accept?' (2016, 1). Yet while many of 'us' do indeed belong to a we that can ask questions like these, there is also a broader set of people holding stakes here and more to be said about who should reasonably have a say in constructing the relevant policies. 15 Migrants and refugees certainly exercise agency within the bounds set by the relevant political institutions, but in a system of sovereign states, refugees especially can also be stuck between institutions, not really occupying any stable positions with clear rights and duties. Is thinking in terms of participants really helpful here?
Finally, there are questions about the obligations of people in the present to future generations, a matter which has in recent years primarily been considered in relation to the possible effects of climate change, but which is relevant to the issue of sustainable development in general. Again, this seems like a clear case where focusing on the perspective of participants rather than, say, stakeholders seems problematic (cf. Reglitz 2016). Both of the previous issues were cases where there is a mismatch between (i) who gets to influence the rules that govern the relevant practices and whose interests are accordingly likely to be reflected in the point and purpose that can be read into these practices, and (ii) who will suffer the adverse consequences of them. Future generations have absolutely no influence on what our practices look like, at least not beyond what our ideas about them might lead us to do (currently: very little), and yet our practices might have an enormous impact on them. Of course, the Dworkinian interpretivist might respond that the point is not that we will derive our principles of justice from our current practices, but simply that which principles can count as reasonable principles of justice will always be constrained by current practices. But even if this is perfectly true in principle, it seems doubtful whether Dworkinian interpretation is a viable starting-point in trying to move towards principles of intergenerational justice.

14 This is not to say that Dworkinian interpretation cannot yield anything in this context; James (2012) is a clear example to the contrary. However, even to the extent that we can identify some point and purpose, we might not have good reason to trust such results if we find that the method has an in-built tendency to skew our interpretations in masking and excluding ways.

15 For a different perspective than Miller's, see Abizadeh (2008).
When it comes to domestic justice, one reason why our current practices possibly could work as a reasonable starting point for articulating principles of justice is that the people who have predominantly shaped these practices and the people for whom the relevant political institutions manage their problems of justice more or less coincide. What this means is that in coming to better understand the workings of these practices we will also at the same time come to understand the more precise problems to which our institutions are ideally a solution. Yet in all of the three cases considered above this is not the case. There are mismatches between those who predominantly shape the relevant practices and those who have a stake in the kinds of actions that these practices enable and regulate, and whose lives will be shaped by having them in place. While there might certainly be ways to mitigate these problems, their source is hardwired into the method. The Dworkinian approach is built on taking an idealizing stance in relation to existing practice, one that pushes tensions to the margins and places unifying features at the centre already from the start; hence the problem of masking. And the move from merely taking a more sociological view of practices to an emphasis on the insider perspective seems bound to place the notion of participants at the centre of the interpretative approach; hence the problem of exclusion.
Rawlsian constructivism beyond Rawls
Can Rawlsian constructivism steer clear of the problems of masking and exclusion? At the very least, working with a deontic conception of practices would seem to provide a more promising start since such a conception allows for practices to be characterized by underlying tensions, very different agendas and objectives among different parties to the practice, and stability primarily based in a power balance, a mere modus vivendi. Indeed, the principles of justice that are articulated are supposed to be capable of serving a unifying role, by providing a basis for reasoning together. On the Rawlsian approach, the starting-point is not that of participants in an already coherent and unified practice, but rather parties to certain shared problems who are in need of shared principles in order to be able to handle those problems as a community of reasoners.
Having said this, it should be noted that Rawls' own constructivism, although it is later applied to the international level as well, is developed in reasoning about domestic justice. This means that while a key notion for Rawls is that of parties (to a possible agreement), political constructivism also seems to share with Dworkinian interpretivism an emphasis on insiders, since it is understood as a procedure involving citizens of a society (Rawls 1993, 90). And yet, institutions and practices 16 in one's own society can clearly have significant impact on the status and situation of people outside of it. The problem of exclusion accordingly seems like it could be an issue for Rawlsian constructivism as well. But does constructivism as a method really presuppose an insider-type framing in terms of societies and citizens? It seems far from obvious that this framing really is hardwired into the method, and the goal of the rest of this section is to suggest, at least in outline, how what is still a basically Rawlsian approach can be developed into a generalized constructivism, one that still relies on ideas about rationality, reasonableness, and public reason as forming a known point in a process of triangulation where the other known point involves current practices and institutions.
Even if we restrict ourselves to principles of justice, there are clearly issues of justice in many human contexts or areas of interaction. Family life, the workplace, sports, various associations, and so on; in more or less every area of human interaction there are goods that are somehow distributed, where some might get less and some more, and where our actions have an impact on not just ourselves but others as well. Some of these areas might feature highly reciprocal interactions, others decidedly one-sided ones. And while some areas might be highly formalized and even feature written rules, others will be characterized by more informal and implicit norms (or some mix of formal and informal). But there will be norm-governed offices, roles, moves, penalties, defences, and so on, which provide structure to the relevant activities, i.e. they are practices in the Rawlsian sense. There are certainly questions about how to individuate practices more exactly, but the main point here is that in normative theorizing we are dealing with relatively large-scale areas of life organized and regulated in terms of such practices. Rawls himself suggests that 'there is a special domain of the political' (1989, 242) which his theory of justice as fairness then seeks to articulate principles for, and we could understand a domain precisely as such an interconnected set of practices (cf. Brännmark 2016).
In carving out the political as a domain, Rawls is looking for something that is fit to be theorized and while relevant domains up to a point can be expected to be described sociologically in terms of interconnected sets of practices held together by certain important offices and roles or key goods and values, we should arguably not expect the notion of domains to be purely empirical: what will count as a domain will partly depend on what it takes for a set of practices to be suitably interconnected in order to be theorized in terms of which normative principles are reasonable for regulating interactions between people in that domain. This also means that what counts as a domain will partly depend on how we understand normative theorizing. For now, this issue will however have to be set aside, the important thing instead being that it seems reasonable to accept that a Rawls-style constructivism could in principle be more widely applicable than merely to the domain of the political. And if we look at domains more generally we are not committed to citizens being the only relevant parties to consider; on the contrary, identifying relevant stakeholders, i.e. those types of actors that should reasonably be included as parties to an agreement on principles of justice, will be an important part of describing specific domains.
In terms of the goal towards which theorizing is supposed to move, an important part of Rawls' political constructivism is the idea of a well-ordered society, a society in which 'reflective equilibrium is also general: the same conception is affirmed in everyone's considered judgments' and where 'not only is there a public point of view from which all citizens can adjudicate their claims, but also this point of view is mutually recognized as affirmed by them all in full reflective equilibrium' (2001, 31). 17 He then goes on to explicate the idea of an overlapping consensus, 'to formulate a realistic idea of a well-ordered society' (2001, 32), one where citizens would affirm the same political conception of justice, but would do so based in different comprehensive doctrines. Again, even with this idea of well-orderedness, there does not seem to be any essential tie to the political as a domain. We can distinguish between domains in which the practices that are in place are in place merely as part of a modus vivendi, something that we accept given how things stand now, and well-ordered domains, where our interactions are governed by principles around which there is an overlapping consensus. Modus vivendi domains are not worse by any independent moral standard, but to the extent that there is stability in them it is the stability of a balance of power, whereas in a well-ordered domain we have a stability based in a deeply shared understanding of justice, one that can persist in the face of shifts in the balance of power. While in a modus vivendi domain we are primarily making private use of reason, the shared principles in a well-ordered domain enable a genuinely public use of reason.
What theorists will mainly be working towards is to articulate principles of justice around which an overlapping consensus could be formed, a version of the Kantian emphasis on principles that can be shared, but where actual practice forms an important known point in triangulating towards a conception of those principles. While Kant's universalizability test abstracts from our actual situation and turns on the formal character of our maxims, looking at whether there is some contradiction involved in willing them as universal laws, Rawls' constructivist procedure has a substantive element in that it involves looking at the fit between potential principles of justice and our actual social and institutional situation. In looking for such principles it accordingly becomes important to analyse this actual situation in order to identify both main dividers and unifiers, where these can be assumed to be at least potential resources or obstacles for moving towards a well-ordered domain under conditions of reasonableness. 18 Indeed, any systematic practice-dependent approach will have to have a way of characterizing existing practice and the situation we actually find ourselves in. This inevitably means taking a summarizing approach, and hence eliding certain things, and what is needed then is a way of doing so which still steers clear of the problems of exclusion and masking. This is where Dworkinian interpretivism runs into problems, but where Rawlsian constructivism, in relying on a deontic conception of practices, looks more promising. The suggestion here is that in framing the assessment of how proposed principles of justice can fit with our actual situation, the domains for which we are to articulate principles of justice can be analysed in terms of the following three principal categories (and where all three are in some form already present in Rawls' own approach, but where an attempt is made here to further generalize them):

17 Rawls' specification of full reflective equilibrium is as general and wide, where the latter is a matter of someone having 'carefully considered the leading conceptions of political justice' and 'weighed the force of the different philosophical and other reasons for them' (2001, 31).

18 Being reasonable is mainly about being realistically responsible qua reasoners; it involves 'the willingness to recognize the burdens of judgment and to accept their consequences for the use of public reason' (Rawls 1993, 54), which among other things means accepting 'that many of our most important judgments are made under conditions where it is not to be expected that conscientious persons with full powers of reason, even after free discussion, will all arrive at the same conclusion' (Rawls 1993, 58).

The goods at stake

Areas of interaction tend to revolve around certain goods, although exactly which goods might vary from area to area. In articulating apt principles, we need to know the kinds of goods whose distribution those principles are supposed to handle. Some aspects of human well-being will inevitably be at stake, but such goods can still vary between domains, say, medicine compared to war or in family life compared to academic life, and so on. This kind of identification of goods at stake should be expected to be graded in the sense that certain goods will be the primary concerns in an area, whereas other goods will be more peripheral. Partly this will be due to how, depending on the layout of the relevant practices, there will be certain predominant ways in which the actions of some affect the lives of others; accordingly, what is primarily at stake will to a large extent depend on the character of the relevant practices. If we take future generations as an example, most of the concrete circumstances of specific future individuals will be determined by decisions taken then rather than now, but quantities of natural resources available and qualities of the natural environment are certainly matters that are at stake between us and them. In other cases very different goods will be at stake for different people. When it comes to questions about border controls, for instance, we can on the one hand have people seeking refuge, on the other hand people who primarily have a stake in the functioning of societies where they are already members; these possible tensions will then form a starting-point for practice-dependent theorizing rather than simply being something that independently articulated principles are applied to.
The relevant stakeholders
The problem of exclusion points to a need to move away from a narrow focus on participants, or citizens for that matter, and rely on a more open category for identifying possible parties to an agreement on certain principles, an identification that must be made before we can attempt to articulate such principles. Given that we have started by identifying the goods that are at stake, it seems natural to then identify the different stakeholders involved, and where identifying such stakeholders will also involve identifying certain social positions, in terms of deontic statuses such as rights and duties, that people can occupy within the relevant domain and which govern which moves they can make in relation to which goods. The category of stakeholders is open to there being quite different ways in which we are stakeholders, depending on the concrete relations in which we stand to each other, to the relevant goods, and to the means of acquiring a share of those goods. If we take future generations as an example, we are not stakeholders in the sense that they can affect us through our decisions, but we are still stakeholders in a pool of natural resources where they are also stakeholders. Nor does the notion of stakeholding presuppose reciprocity, and the idea of sharing principles does not presuppose a form of contract between stakeholders. 19 Additionally, as indicated by the example of border controls, in order to be stakeholders in a domain, we do not need to have the same stake in it, only some stake. We can also be stakeholders at different levels. For instance, in looking at global justice, even if most of us are primarily stakeholders in the pie to be divided domestically, the size of that pie is clearly affected by distributive effects of how interactions on a global level are regulated, and so we are stakeholders there as well. This observation points to a picture where global justice and domestic justice might be different in terms of which principles are reasonable where, but also to it being difficult to fully separate the two into distinct areas of inquiry.
The historical context
Practice-independent theorizing often seeks to articulate timeless principles of justice and, then, depending on which stage of history we are at, the distance to that ideal might be greater or lesser. The idea here, however, is that in order to assess which principles could be the object of an overlapping consensus for some relevant set of stakeholders, we cannot have an ahistorical understanding of the problems that those stakeholders need to deliberate about. While principles of justice need to lie at a certain level of abstraction in order to really be principles, in order to address relevant stakeholders they also need to be sensitive to the particular historical context of those stakeholders. There are at least three aspects to the historical context that seem reasonable to take into account in theorizing about justice: (a) The main characteristics of the moral and political traditions that largely govern the understandings that stakeholders will have of the domain in question.
Especially since Political Liberalism it is quite clear how Rawls' own theory of justice is strongly rooted in a particular, albeit relatively broadly conceived, tradition, which also means that as a theory of domestic justice it will not be relevant to all societies. In developing constructivism beyond Rawls' own political constructivism this feature is still an important one, given that we are seeking a general reflective equilibrium. Principles of justice must be conceptually and imaginatively approachable from where we start in our thinking. Sometimes we start in identifiably distinct places and then it will be more likely that the principles around which we can form an overlapping consensus will be weaker. In articulating principles of justice, political theorists can certainly try to affect our self-understanding, but it seems reasonable that in general we should work under an assumption of considerable continuity: we accept that world-views and value systems evolve gradually and that reasonable principles of justice need to be shareable in an area of overlap between world-views and value systems that are recognizably similar to what we find already today, i.e. the world that is supposed to be regulated by those principles.

(b) The main features of the backgrounding institutional framework likely to persist over some time. Ahistorical moral and political theorizing tends to proceed by identifying important (alleged) constants in human lives and societies. For practice-dependent theorizing there will instead be factors that contextually can be taken as if they are constants. To the extent that we seek principles that can be shared as principles regulating our practices, these principles need to be ones that address the types of decisions that we actually tend to take, and how these decisions can be framed or possibly reframed. This is something largely determined by the institutions that are already in place. For instance, if we look at questions of global justice in today's world, then the institutional framework for organized distributive justice, similar to what is found in many individual states, is arguably not there, nor does it even seem to lie on the horizon. The existing framework instead points to questions of justice on the global level having mainly to do with (i) international peace and security, and (ii) fairness in trade and investments. Existing frameworks can also have implications for how reasonable principles of justice need to be framed on a conceptual level. For instance, starting in current practices, there seems to be a prima facie case for framing matters in terms of human rights. While we can certainly have different views about how well human rights work as an international framework, as well as about how they should be interpreted and implemented, human rights simply are what comes closest to being an ethical lingua franca (Tasioulas 2007, 75).

(c) Major historical wrongs or grievances that bear on how current circumstances are understood. Even in articulating principles of justice for specific domains, such domains will need to be described at a relatively high level of abstraction, rather than in all their particularity. This also means abstracting away from much of the historical background to the tensions that actually shape political discussions in specific societies.
At the same time, current practices are often strongly shaped by the particular historical path that has taken us to them and there is a risk of falling prey to the problem of masking if history is not taken into account at all. The idea here is not that certain historical wrongs need to be identifiable as wrongs by some independent standard; what matters for an overlapping consensus is rather the subjective dimension: perceptions of historical wrongs or grievances that can undermine the well-orderedness of a situation where certain principles of justice are to be shared. Take global justice as an example. We could theorize global justice for any world that happens to be a world of sovereign and yet still interdependent states. Our world is such a world. That kind of theory could be applied to possible worlds which have no history of colonialism. 20 Our world is however clearly not one of those. By not directly addressing such a major factor as our history of colonialism, a theory would risk masking the ways in which that history has affected the relative positions of different countries in today's world, and how this history means that principles which are formally egalitarian in the rights and duties they assign risk amounting to simply being the object of a modus vivendi largely favouring those who have gained from the legacy of colonialism. This history therefore seems reasonable as input into the process of theorizing, rather than to be handled as an afterthought. An argument could for instance be made that a conception of global justice that is to be reasonable in the light of the history of colonialism needs to include principles enabling us to address important structural injustices. 21

Note that even with a relatively fleshed-out account of a domain, this kind of analysis of existing practices and their primary accompanying features is more like doing an inventory, understanding the construction site, the basic building materials, and the main features of the people supposed to inhabit the ultimate outcome of the construction process. The constructivist aims at coherence, in the Rawlsian sense of wide and general reflective equilibrium, but in contrast to how the Dworkinian method relies on an assumption of coherence in order to facilitate interpretation, the constructivist method is geared towards identifying the issues that need to be addressed in order to reach a coherent and shared set of principles of justice. The idea of well-orderedness provides a governing idea for this process, but it should certainly be recognized that actually reaching an overlapping consensus is more like an ideal limit. 22 The point is rather that constructivism allows our thinking about justice to be organized and structured already in the present, by thinking in terms of what well-ordered domains would have to be like.
Concluding remarks
Both Dworkinian interpretivism and Rawlsian constructivism are forms of practice-dependent theorizing where we move towards a determinate conception of justice by relying on an analysis of existing practice in order to provide direction for the coherentist reasoning on which both approaches ultimately rely. A main distinguishing point between them is what kind of understanding of practices they assume, and how two different ways of analysing current practices then become natural. It has been argued here that the Dworkinian model, while possibly still reasonable in the case of law, faces two major problems, masking and exclusion, when understood as a generalized method. It has also been argued that while there are aspects of Rawls' own constructivism which might appear to open it up to some such issues, especially the problem of exclusion, the prospects for developing a generalized form of Rawlsian constructivism are considerably more promising and a schematic account of such an approach has also been outlined. There is, of course, much more that needs to be said about its details, but the argument here is mainly that by relying on a deontic conception of practices, the Rawlsian can analyse practices without eliding the kinds of tensions and struggles that exist within real-life practices or the problematic histories that often underpin them.
Clinical language fMRI with real-time monitoring in temporal lobe epilepsy: Online processing methods
The increasing demand for clinical fMRI data has resulted in a need to translate research methods to clinical use. Referrals for language lateralization prior to epilepsy surgery are becoming more common, but time constraints make this unachievable in many busy neuroimaging departments. This study examines whether a single covert verbal fluency paradigm with real-time monitoring and online processing (BrainWave) could replace conventional offline processing (SPM) for the purpose of establishing expressive language dominance prior to epilepsy surgery. We analyzed language fMRI results of 30 patients (17 female; 24 right‐handed; median age: 30.5) with temporal lobe epilepsy. Concordance between visual assessment of SPM and BrainWave was 92.8%. Lateralization indices correlated closely with visual assessments of lateralization with a concordance of 85.7%. BrainWave provided a real-time, fast and accurate display of language lateralization easily applied in a clinical setting using only online image processing.
Introduction
Functional magnetic resonance imaging (fMRI) is widely used to map language activation and evaluate hemisphere dominance for language prior to epilepsy surgery. Numerous studies have shown fMRI to be a valid replacement for the intracarotid amytal test [1][2][3]. Functional magnetic resonance imaging has the advantage of being safe, noninvasive and repeatable. In addition, it can provide data on the intrahemispheric localization of language.
The increasing demand for clinical fMRI data has resulted in a need to translate research methods to clinical use. Routine referrals for language lateralization prior to epilepsy surgery are becoming more common, but the large amounts of data and the lengthy postprocessing times used in research procedures make this unachievable in many busy neuroimaging departments. In the clinical environment, methods need to be quick, reliable, easy to implement and without special equipment.
The verbal fluency paradigm was selected for its ease of understanding for a wide range of patients with epilepsy with varying levels of cognitive abilities. It has been shown to reliably lateralize expressive language [4,5]. Following left anterior temporal lobe resection, up to 40% of patients will develop notable language deficits, particularly a decline in naming ability [6]. Verbal fluency tasks usually generate stronger and wider activations than verb-generation tasks [7].
Most fMRI post processing requires data to be transferred offline. It is time-consuming, requiring many hours of input by highly skilled operators and access to large amounts of disc space. Statistical parametric mapping (SPM), a widely used standard processing tool, is our routine method for language fMRI processing. In conjunction with this, we have applied BrainWave, a real-time and online fMRI-processing package developed by the MRI scanner manufacturer. This can be performed and processed in 15 min, with no requirement for offline post processing.
As with most widely used fMRI, BrainWave uses the phenomena of blood-oxygen level dependent (BOLD) contrast [8]. It is only suitable for block design paradigms. BrainWave has the additional advantage of task-performance monitoring with real-time activation maps, so the need to repeat a scan due to poor task performance is known while the fMRI acquisition is taking place. Data quality is also monitored in real time using a traffic light system to alert the operator to poor data due to patient movement.
The main goal of this study was to compare two image processing systems, the current, widely used standard processing tool, SPM8 (Wellcome Centre for Neuroimaging techniques) and BrainWave (General Electric Healthcare 2003), to establish whether fMRI with online processing could replace conventional offline SPM processing for the purpose of establishing expressive language dominance prior to epilepsy surgery.
Patients
Thirty consecutive patients (17 female) with temporal lobe epilepsy (TLE) who had all been referred for presurgical fMRI evaluation of language dominance were studied. The diagnosis of TLE and its lateralization was established by prolonged video/EEG monitoring and neuroimaging. All the patients completed a questionnaire to establish handedness [9]. Six patients were left-handed, and 24 were right-handed. The median age was 30.5 yrs (range: 18-59). Sixteen patients had a diagnosis of right TLE, 12 had left TLE and 2 had bilateral changes on EEG. Structural MRI showed hippocampal sclerosis (7), amygdala sclerosis (1), cavernoma (4), dysembryoplastic neuroepithelial tumor (DNET) (3), focal cortical dysplasia (5) and MRI negative (11) (Table 1).
All the patients gave informed written consent. This study was approved by the Research Ethics Committee of the National Hospital for Neurology and Neurosurgery and the UCL Institute of Neurology.
MR data acquisition
All scans were performed on a 3 T GE Signa Excite HD scanner (GE Medical Systems, Milwaukee, Wisconsin) at the Epilepsy Society MRI Unit. All data were acquired using an eight-channel array head coil for reception and the body coil for transmission. For the fMRI task, gradient-echo planar T2*-weighted images were acquired providing blood oxygen level-dependent (BOLD) contrast. Each volume comprised 50 2.4/0.1-mm oblique axial slices through the whole brain with a 24-cm field of view, 64 × 64 matrix and in-plane resolution of 3.75 × 3.75 mm. Echo time (TE) was 25 ms, and repetition time (TR) was 2.5 s.
Verbal fluency fMRI paradigm
A variant of a phonemic fluency task was employed [10]. The subjects viewed a single letter projected onto a screen at the end of the scanner couch via a prismatic mirror as they lay in the scanner. The subjects were instructed to covertly generate words in response to the visually presented letters (A, S, W, E and D). Each active condition was presented in blocks lasting for 30 s with ten presentations of a given letter per block and an interstimulus interval (ISI) of 3 s. The active condition was alternated with a 30 s control condition. Five blocks of each condition were performed. In total, the acquisition lasted for 5 min and 10 s.
Scanning and real-time monitoring
Before entering the scan room, the patient was consented and given a verbal explanation and a visual demonstration of the language task. A brief final reminder of the task was given immediately before the scan began.
Patient compliance was monitored in real time during the verbal fluency task using the BrainWave software. The 30-second block design paradigm was alternated between the verbal fluency task and rest, beginning with rest. After the first 30-second task, real-time activation images were viewed on the scanner console by selecting a t-test from the statistical analysis options. Activation maps created in real time by BrainWave were viewed directly on the raw echoplanar imaging (EPI) data or on a high-resolution EPI scan acquired immediately before the functional scan. Activations built up over time and were saved to the scanner disc in DICOM format during or at the end of the acquisition. The real-time activation plots gave assurance that meaningful data were being collected. Poor task performance was immediately evident, and the task could be repeated, if necessary, after another brief explanation. A data quality algorithm in BrainWave monitored signal-to-noise ratio (SNR), ghosting and patient movement and presented the results in real time with a green/yellow/red traffic light display on the real-time viewing console. If preset limits are exceeded, the light turns to red and the operator can stop the scan and coach the patient to assure high-quality EPI data (General Electric Healthcare 2003).

Table 1. Clinical patient data and lateralization indices for the inferior and middle frontal gyri compared with the visual assessment of lateralization using SPM and BW analyses. Shaded areas show bilateral representation in one or more assessment methods.
A 3D T1 Volume (FSPGR; 1.1 mm/256 × 256/1 NEX/24 FOV) was acquired within the same examination. BrainWave post processing was performed online at the scanner console and took 3 min. Echoplanar imaging images were co-registered with the T1 volume displayed in all three orthogonal planes and saved to the scanner disc. The default Z score and P value used for thresholding were Z > 4.56 (P < 0.05). Thresholds could be altered by a simple reprocessing step. A preliminary visual assessment of language laterality could be made before the patient had left the scan room.
SPM analysis
Imaging data were analyzed using statistical parametric mapping (SPM8). The imaging time series was realigned and smoothed with a Gaussian kernel of 8 mm full-width-half-maximum. For each subject, trial-specific responses were modeled by convolving a delta function that indicated each active block onset with the canonical hemodynamic response function (HRF) to create regressors of interest. Each subject's movement parameters were included as confounds, and parameter estimates pertaining to the height of the HRF were calculated for each voxel. One contrast image for the main effect of fluency was created for each subject. The rest condition was used as baseline. We report all activations at a threshold of P < 0.05.
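As an illustration only, the following Python sketch shows the core of such a block-design general linear model: a boxcar covering the alternating 30 s rest and task blocks is convolved with a canonical double-gamma HRF and fitted voxel-wise by least squares. The timing values follow the paradigm described above; the bold data matrix, the omission of the movement confounds, and the simple two-column design matrix are simplifying assumptions made for brevity, not the SPM8 implementation.

import numpy as np
from scipy.stats import gamma

TR, n_scans = 2.5, 124                      # TR 2.5 s; roughly 5 min 10 s of scanning
frame_times = np.arange(n_scans) * TR

# Boxcar: 30 s rest and 30 s task alternating, beginning with rest
boxcar = (((frame_times // 30) % 2) == 1).astype(float)

# Canonical double-gamma HRF sampled at the TR (an SPM-like approximation)
t = np.arange(0, 32, TR)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6
hrf /= hrf.sum()

regressor = np.convolve(boxcar, hrf)[:n_scans]
X = np.column_stack([regressor, np.ones(n_scans)])  # task regressor + constant

def fit_glm(bold):
    # bold: hypothetical (n_scans, n_voxels) array of preprocessed time series
    beta, _, _, _ = np.linalg.lstsq(X, bold, rcond=None)
    resid = bold - X @ beta
    dof = n_scans - X.shape[1]
    sigma2 = (resid ** 2).sum(axis=0) / dof
    c = np.array([1.0, 0.0])                          # contrast: task versus rest
    var_c = sigma2 * (c @ np.linalg.inv(X.T @ X) @ c)
    return (c @ beta) / np.sqrt(var_c)                # voxel-wise t statistics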
BrainWave analysis
BrainWave real-time online image processing applied motion correction by registering all of the scans in the analysis data set to the same reference scan. Functional magnetic resonance imaging images were aligned using Woods AIR method [11] to minimize movement artifact. A motion correction plot indicated the magnitude and direction of rotations and translations detected and corrected during realignment. The fMRI image volumes were smoothed with a Gaussian spatial filter of full-width-half-maximum (FWHM) of 8.0 × 8.0 × 8.0 mm. Scans were then analyzed on a voxel-wise basis using multiple regression (general linear model) generating a t-test map. The method of Worsley and Friston [12] was used to estimate the effective number of degrees of freedom, to account for temporal autocorrelations due to the smoothness of the hemodynamic response. Using the estimate for the number of degrees of freedom, the t-test was converted into an activation Z map. The activation map was then coregistered to the segmented structural T1 volume series.
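The final t-to-Z conversion can be sketched as follows, assuming the effective degrees of freedom have already been estimated; the mapping simply matches tail probabilities of the t and standard normal distributions. The example values, including the effective degrees of freedom, are hypothetical, and this is not the BrainWave implementation.

import numpy as np
from scipy.stats import t as t_dist, norm

def t_to_z(t_map, df_eff):
    # Convert voxel-wise t values to Z scores by matching tail probabilities;
    # the survival function is numerically safer than 1 - cdf for large t values.
    p = t_dist.sf(t_map, df_eff)
    return norm.isf(p)

z_map = t_to_z(np.array([3.0, 5.2, 6.1]), df_eff=80.0)  # df_eff is a placeholder
active = z_map > 4.56     # default BrainWave display threshold quoted in the text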
Image display
The activation maps, co-registered with the segmented structural T1 volume were created in 3 orthogonal planes and saved as an image stack within the patient directory along with the other structural scans. An overview of the entire brain was also archived with areas of activation displayed on the 3D-segmented volume. The images were transferred to a satellite work station for viewing and reporting.
Language maps rating
Statistical parametric mapping and BrainWave images were assessed independently by two raters, a neuroradiologist specializing in fMRI (CM) and a neurologist specializing in fMRI (MC). The images were anonymized, and the raters were blinded to any clinical information. Areas of activation were divided into middle and inferior frontal gyri, and superior and middle temporal gyri. Visual assessment of significant activations in each of these areas was noted followed by an overall visual assessment for left, right or bilateral language dominance. In addition, left- and right-sided activations in the cerebellum were noted; cerebellar activation in the contralateral side to that of language dominance has been noted [13,14]. A quantitative assessment of lateralization indices (LI) of activation in the middle and inferior frontal gyri (MFG/IFG) was performed for comparison with the visual radiological assessment. We calculated the LI for the MFG/IFG using the bootstrap method of the SPM toolbox [15] for the contrast "verbal fluency" for each subject (−1 for left hemisphere activation and +1 for right hemisphere activation).
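The sign convention can be illustrated with a minimal sketch of the underlying ratio. The study itself used the bootstrap LI-toolbox, whereas the version below simply counts suprathreshold voxels in left and right MFG/IFG masks and applies the ±0.4 dominance cut-offs given later in the Results; the default threshold reuses the value quoted above, and all inputs are otherwise hypothetical.

import numpy as np

def lateralization_index(z_map, left_mask, right_mask, z_thresh=4.56):
    # LI = (R - L) / (R + L) over suprathreshold voxels;
    # -1 = fully left-lateralized, +1 = fully right-lateralized, as in the text.
    left = np.count_nonzero((z_map > z_thresh) & left_mask)
    right = np.count_nonzero((z_map > z_thresh) & right_mask)
    if left + right == 0:
        return float("nan")               # no suprathreshold voxels in the masks
    return (right - left) / (right + left)

def classify_dominance(li, cutoff=0.4):
    if li <= -cutoff:
        return "left dominant"
    if li >= cutoff:
        return "right dominant"
    return "bilateral"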
Results
Two of the 30 patients were excluded because of poor data quality. There was good concordance between SPM and BrainWave for the remaining 28 patients (Table 2).
Rater one assessed laterality of verbal fluency the same for SPM and BrainWave activation maps in 26 subjects and with some variation in 2 patients (concordance: 92.8%). A 41-y/o subject with RHS and right temporal dysplasia was reported as left dominant with some right activation on BrainWave but was reported as bilateral on SPM (lateralization index: −0.51). A 46-y/o female with RHS was reported as right dominant on BrainWave and bilateral but with slightly more activation on the right with SPM analysis (lateralization index: 0.25).
Rater two reported 25 of the 28 subjects the same on SPM and BrainWave (concordance: 89.2%). A 32-y/o female with RHS was reported as left dominant but with some right activation on BrainWave and bilateral on SPM (LI: −0.0058). The same 41-y/o subject with RHS and right temporal dysplasia (rater 1, above) was reported as bilateral on SPM and left dominant on BrainWave with some right activation noted. A 59-y/o female with RHS was judged to be bilateral with significant right activation on BrainWave and right lateralized on SPM (LI: −0.038).
In the cases in which the 2 raters disagreed individually, a consensus was reached, and the concordance was 92.8% (Tables 1 and 2).
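The concordance figures are simply the proportion of patients with identical laterality calls on the two methods, as in the toy calculation below; the rating lists are placeholders, not the study data.

def concordance(ratings_a, ratings_b):
    # Fraction of cases where the two laterality calls agree exactly
    agree = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return agree / len(ratings_a)

spm_calls = ["L", "L", "B", "R", "L"]           # hypothetical example ratings
brainwave_calls = ["L", "L", "L", "R", "L"]
print(f"{concordance(spm_calls, brainwave_calls):.1%}")   # 80.0% in this toy case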
Comparison of visual reading with lateralization index of activation in the middle and inferior frontal gyri
Left language dominance was defined by an LI of ≤ −0.4 on the verbal fluency task. Right language dominance was defined by an LI of ≥ +0.4. The range of LI was between −0.99 and +0.72. Lateralization indices correlated closely with visual assessments of lateralization (Table 1, Figs 1 and 2). There were 2 cases which were not concordant between BrainWave and SPM, and these were compared with SPM-derived LIs. Patient 22 had an LI of −0.51 and was visually assessed as left dominant on BrainWave but bilateral on SPM.
Discussion
There was good concordance between the two blinded raters for BrainWave and SPM. Inconsistencies were due to some degree of bilateral asymmetric dominance. A consensus was reached between the raters, and the concordance between the two methods was 92.8%. The two remaining cases with inconsistencies between SPM and BrainWave were in those patients with bilateral asymmetric activations.
Lateralization indices correlated closely with visual assessments of lateralization; the concordance was 85.7%. Visual assessment of BrainWave was just as accurate as the visual assessment of SPM when compared with LI. The discrepancies were in cases with degrees of bilateral activation.
Two patient data sets were excluded due to non-diagnostic results on SPM. BrainWave analysis coped well with both movement artifact in the first case and poor activation in the second case. Results on BrainWave for both cases were considered diagnostic. In two other cases, there were discrepancies between visual assessment and LIs. Visual assessment of SPM and BrainWave concurred as left dominant in both cases; however, the LI reading suggested bilateral representation. In one case, there was artifact which was disregarded by the visual assessment but which contributed to the bilaterality of the LI. In the other, the data quality was suboptimal, and a repeat acquisition may have given a clearer result.
Comparison with previous work
Although scanner-based online fMRI-processing packages with real-time monitoring have been available for some time, little has been written about their validity as an alternative to offline processing methods. In 1995, Cox et al. [16] recognized that the capacity for real-time viewing of fMRI data was desirable for several reasons: 1) for data quality monitoring and motion detection, the need for repeat tasks would become immediately obvious; 2) instant access to initial results would make it possible to develop new paradigms more quickly; 3) interactive paradigms would become possible. Weiskopf et al. [17] recognized the value of immediate quality assurance and functional localizers to guide the main experiment, which real-time fMRI analysis provides.
Fernandez et al. [18] studied 12 patients and 12 control subjects using a semantic decision task to evaluate language lateralization in epilepsy patients. They concluded that real-time analysis was a reliable method for assessing language dominance. Several limitations in real-time processing were highlighted; having to set a predefined statistical threshold a priori and the inability to register activation maps with structural anatomical images. Both limitations have been overcome with the current real-time package used in this study; statistical thresholds can be altered by redefining a Z-score value during image processing and viewing. Activation maps can be overlaid in real time with a high-resolution EPI scan acquired in the same plane with the same slice thickness immediately before the task. In addition, overlay with a structural high-resolution T1 volume is performed immediately after the functional acquisition.
Schwindack et al. [19] compared real-time activation maps with standard offline SPM results in 11 patients with brain tumors. For the real-time analysis, Schwindack used an adapted version of AFNI software (National Institute of Mental Health, Bethesda, Maryland, USA) customized to their GE scanner. They found that motor finger-tapping tasks provided the most consistent activation between the two methods, but they had less success with real-time language paradigms and, thus, recognized the need for further studies.
Clinical interpretation of results
The presurgical determination of language dominance is required to predict and minimize the risk of language deficits after epilepsy surgery. Left TLE is associated with a higher incidence for atypical language dominance compared to healthy controls and right TLE [20]. Atypical dominance is most likely to occur with onset of epilepsy in childhood [21]. Following left anterior temporal lobe resection, up to 40% of patients will develop notable language deficits, particularly a decline in naming ability [6]. Preoperative language fMRI has been shown to predict marked language decline with increasing activation in the left hemisphere, particularly in the temporal lobe, being associated with increasing risk of postoperative impairment [22]. Left-sided TLE is associated with an increased probability of expressive language activation in the right frontal lobe [20].
As with all fMRI, caution is required in the clinical interpretation of results, whether from BrainWave or SPM analysis. Hemispheric language dominance is not dichotomous but follows a continuum, so individual cases are not always clear-cut. It is helpful to use a laterality index to express language laterality as a continuous variable rather than left, right or bilateral. It has been noted that language lateralization with fMRI might be less reliable in the presence of a structural lesion than without [23]. Functional magnetic resonance imaging results cannot be used to determine the extent of neocortex that is needed to subserve language, the area shown being consequent to the thresholds used to display the results. Direct cortical stimulation is required as an additional preoperative assessment to precisely define critical language cortex if surgery is planned close to this area.
Future studies
We have also had success using online BrainWave image processing for other language paradigms. Verb-generation tasks can be modified to a simple block design suitable for real-time scanning; initial results show good concordance with SPM processing, but more data are needed for future comparison. Motor tasks are ideally suited to real-time scanning; immediate activation maps co-registered with the T1 volume provide useful and instant information on structural proximity of the lesion to the motor cortex. Future validation studies are needed in these areas. BrainWave is just one of several commercially available online fMRI-processing packages suitable for setting up a clinical fMRI language service. Brainlab, AFNI and BrainVoyager are alternative packages produced by other manufacturers.
Conclusions
Online image processing using BrainWave provided a fast and accurate display of expressive language lateralization in presurgical epilepsy patients. It can easily be applied in a clinical setting, without the need for intensive offline data processing. BrainWave showed good concordance with the current standard offline method for fMRI analysis, SPM. Real-time activation plots gave assurance that meaningful data were being collected. Cases of poor task performance were immediately evident on BrainWave activation maps allowing for task repetition during the same examination. BrainWave reliably identified typical left language dominance and highlighted atypical cases that may require offline post processing for full clinical evaluation.
T1w dark blood imaging improves detection of contrast enhancing lesions in multiple sclerosis
Purpose In multiple sclerosis (MS) the sensitivity for detection of contrast enhancing lesions (CEL) in T1-weighted scans is essential for diagnostics and therapy decisions. The purpose of our study was to evaluate the sensitivity of T1w MPRAGE scans in comparison to T1w dark blood technique (T1-DB) for CEL in MS. Materials and methods 3T MR imaging was performed in 37 MS patients, including T2-weighted imaging, T1w MPRAGE before and after gadolinium injection (unenhanced-T1 and T1-CE) and T1-DB imaging. After gadolinium application, the T1-DB scan was performed prior to T1-CE. From unenhanced-T1 and T1-CE scans, subtraction images (T1-SUB) were calculated. The number of CEL was determined separately on T1-CE and T1-DB by two raters independently. Lesions only detected on T1-DB scans were then verified on T1-SUB. Only lesions detected by both raters were included in further analysis. Results In 16 patients, at least one CEL was detected by both raters, either on T1-CE or T1-DB. All lesions that were detected on T1-CE were also detected on T1-DB images. The total number of contrast enhancing lesions detected on T1-DB images (n = 54) by both raters was significantly higher than the corresponding number of lesions identified on T1-CE (n = 27) (p = 0.01); all of these lesions could be verified on SUB images. In 21 patients, no CEL was detected in any of the sequences. Conclusions The application of T1-DB technique increases the sensitivity for CEL in MS, especially for those lesions that show only subtle increase in intensity after Gadolinium application but remain hypo- or iso-intense to surrounding tissue.
Results
In 16 patients, at least one CEL was detected by both raters, either on T1-CE or T1-DB. All lesions that were detected on T1-CE were also detected on T1-DB images. The total number of contrast enhancing lesions detected on T1-DB images (n = 54) by both raters was significantly higher than the corresponding number of lesions identified on T1-CE (n = 27) (p = 0.01); all of these lesions could be verified on SUB images. In 21 patients, no CEL was detected in any of the sequences.
Introduction
Magnetic resonance imaging (MRI) is an essential tool in diagnosing and evaluating disease progression in patients with multiple sclerosis (MS). Besides T2 and T1 lesion load, the appearance of contrast enhancing lesions (CEL) is commonly used as a marker for active inflammation and blood brain barrier breakdown and may even predict long-term outcome in patients suffering from MS. [1,2] Furthermore, the detection of CEL has gained importance with revisions of the diagnostic criteria and can be important for treatment decisions. [3,4] The evidence of a CEL can be essential to fulfil the diagnostic criteria for dissemination in time and in later stages the number of CEL has been used as surrogate for insufficient therapeutic suppression of inflammation. The application of contrast agents is routinely used in diagnosing and monitoring MS as well as in phase I, II and III clinical trials. [5] In clinical routine, two-dimensional T1-weighted spin-echo (T1w-SE) sequences or three-dimensional gradient-echo (T1w-GRE) sequences are commonly acquired for CEL detection in MS patients. However, standardized MR protocols for lesion detection are missing as T1w-SE and T1w-GRE both have their individual limitations. [6][7][8][9] The aim of this study was to introduce T1w dark blood (T1-DB) sequences in CEL detection and to compare this novel method with the routinely used techniques. T1-DB sequences are primarily used for vessel wall imaging and have shown high sensitivity for the detection of vessel wall inflammation, cervical artery dissection and venous thrombosis. [10][11][12][13] Furthermore, in recent studies it was demonstrated that at 1.5 Tesla, T1-DB sequences were superior to T1w-SE detecting brain lesions such as primary central nervous system malignant neoplasia and metastases. [14,15] However, these former studies included a small heterogeneous group of potential contrast enhancing lesions and, so far, no study exists examining the potential of T1-DB sequences for CEL detection in a larger homogeneous population of MS. We therefore chose a population of MS patients, where the detection, number and volume of contrast-enhancing lesions is relevant in the diagnostic work-up and is used as endpoint in therapy studies. With the promising results of the recently published studies we hypothesized that post-contrast T1-DB sequences detect more enhancing MS lesions than the routinely used T1w-GRE with superior inter-observer agreement.
Patients
Thirty-seven patients diagnosed with relapsing-remitting MS were consecutively included in this prospective study between February 2014 and August 2016. Inclusion criteria were as follows: age 18-70 years; diagnosis of relapsing-remitting MS according to the 2010 revised McDonald criteria [3]; absence of neurologic conditions other than MS. Patients with progressive forms of MS were excluded. All patients were referred to our department from the MS day hospital and received brain MRI. The study was approved by the local Ethical Committee.

T1w-MPRAGE and T1-DB were both acquired in sagittal plane. All patients received the same Gadolinium-based contrast agent, which was injected with a consistent dose of 0.2 ml/kg of body weight. Subsequently, the T1-DB images were acquired 4 minutes after intravenous contrast agent administration, followed by the acquisition of T1-CE (starting approximately 7 minutes after Gadolinium injection). After image acquisition, unenhanced T1w images were linearly registered to T1-CE images and subtracted afterwards using Analyze 11.0 (AnalyzeDirect, Inc. KS, USA) to obtain a subtraction image (T1-SUB).
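As an illustration of the subtraction step only, the following sketch assumes the unenhanced T1 has already been registered and resampled to the T1-CE grid (the study itself used Analyze 11.0 for registration and subtraction); the file names are hypothetical.

import nibabel as nib

t1_pre = nib.load("t1_unenhanced_registered.nii.gz")   # hypothetical file names
t1_ce = nib.load("t1_ce.nii.gz")

# Voxel-wise difference highlights signal increases due to contrast enhancement
sub = t1_ce.get_fdata() - t1_pre.get_fdata()
nib.save(nib.Nifti1Image(sub, t1_ce.affine, t1_ce.header), "t1_sub.nii.gz")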
Image analysis
Lesion detection was performed by two independent raters with ten and two years, respectively, of specific training in MS image diagnostics and evaluation. In a first reading, T1-DB and T1-CE images were presented separately to the two raters in random order and contrast-enhancing lesions were marked on both images independently. In a second reading, lesions that were identified on T1-DB but not on T1-CE were retrospectively evaluated on T1-CE and on T1-SUB to screen for false-positive findings in T1-DB. Hence, all lesions that were detected on T1-DB but not on T1-CE were compared with T1-SUB images to confirm contrast enhancement.
For further analysis, only lesions that were identified by both raters in the first reading were included. Lesions were outlined using the software Analyze 11.0 and lesion volumes were calculated as the mean of both raters. Lesion count and volume were then compared between T1-CE and T1-DB scans using the Wilcoxon signed-rank test. Statistical analysis was performed using R 3.0.0 and IBM SPSS 21.0 (IBM Corp., Armonk, NY).
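As a concrete illustration of this paired comparison, the sketch below shows how per-patient CEL counts on the two sequences could be compared with a Wilcoxon signed-rank test, and how a lesion volume could be derived from an outlined binary mask. The numerical arrays are invented for demonstration, and scipy/numpy are used here as assumptions in place of R and SPSS, which were the packages actually employed; the voxel volume of 0.9 mm³ echoes the voxel size reported later in the text but is likewise only illustrative.

```python
# Illustrative paired comparison of per-patient CEL counts on T1-DB vs. T1-CE.
# All numbers below are hypothetical and serve only to demonstrate the workflow.
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-patient lesion counts (one entry per patient with at least one CEL).
counts_t1_db = np.array([2, 1, 5, 3, 1, 4, 2, 6, 1, 3, 2, 1, 8, 4, 7, 4])
counts_t1_ce = np.array([1, 1, 2, 2, 0, 2, 1, 3, 1, 2, 1, 0, 4, 2, 3, 2])

def lesion_volume_mm3(mask, voxel_volume_mm3=0.9):
    """Lesion volume from a binary lesion mask: voxel count times voxel volume (mm^3)."""
    return mask.sum() * voxel_volume_mm3

# Paired, non-parametric comparison, as in the study (Wilcoxon signed-rank test).
stat, p_value = wilcoxon(counts_t1_db, counts_t1_ce)
print(f"Wilcoxon statistic = {stat:.2f}, p = {p_value:.4f}")
```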
Results
Thirty-seven patients (11 male and 26 female) with a mean age of 37.3 years (± 11.7 years) were included in the study. Sixteen of these patients presented with at least one contrast-enhancing lesion detected by both raters, either on T1-CE or on T1-DB images. The remaining 21 patients did not show any CEL, neither on T1-CE nor on T1-DB images. For a detailed overview see Figs 1 and 2.
All lesions that were detected on T1-CE were also detected on T1-DB images. Also, all lesions that were only detected on T1-DB images could retrospectively be confirmed on T1-SUB images.
Raters agreed on the number of detected CEL in 89% of patients for T1-CE and in 86% for T1-DB images. For both sequences, the two ratings never differed by more than one lesion.
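To make explicit how these agreement figures could be derived, the short sketch below computes, for one sequence, the fraction of patients in whom both raters reported an identical CEL count and the largest discrepancy between them; the count arrays are hypothetical and the exact procedure used in the study may differ.

```python
# Illustrative computation of inter-rater agreement on CEL counts (hypothetical data).
import numpy as np

def rater_agreement(counts_rater1, counts_rater2):
    counts_rater1 = np.asarray(counts_rater1)
    counts_rater2 = np.asarray(counts_rater2)
    exact_agreement = np.mean(counts_rater1 == counts_rater2)       # fraction of patients with identical counts
    max_difference = np.max(np.abs(counts_rater1 - counts_rater2))  # largest discrepancy in lesion count
    return exact_agreement, max_difference

# Hypothetical per-patient counts from the two raters on one sequence:
agreement, max_diff = rater_agreement([1, 2, 0, 3, 1], [1, 2, 1, 3, 1])
print(f"Exact agreement: {agreement:.0%}; maximum count difference: {max_diff}")
```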
When only including patients with any CEL on T1-CE or T1-DB images, 54 CE-lesions were detected on un-subtracted T1-DB images and 27 CE-lesions on un-subtracted T1-CE images after consensus reading. Using the Wilcoxon signed-rank test, the number of CEL detected on T1-DB images (median = 1.5) was significantly higher than the corresponding number of lesions identified on T1-CE images (median = 1) (z = 3.21; p = 0.01).
Discussion
The aim of this study was to introduce post-contrast T1-DB sequences for CEL detection and to compare this novel approach with the routinely used T1w-GRE. Our results suggest an increased sensitivity for detecting CEL when using T1-DB sequences. In particular, lesions showing only a subtle increase in intensity after the application of contrast medium were more reliably detected on T1-DB than on T1-CE (Fig 3). It was therefore possible to identify patients with active inflammatory processes and blood-brain barrier breakdown who would have been missed if lesion detection had been performed only on the routinely acquired sequences. Furthermore, detection of CEL matters because it can be the essential factor in fulfilling the diagnostic criteria for dissemination in time and can influence treatment decisions.
Due to the high spatial resolution (voxel size = 0.9 mm³) and improved lesion detection in T1-DB sequences, a more accurate lesion localization can be achieved (Fig 4). For example, while on FLAIR and T1-CE images a lesion appeared to be located in the periventricular white matter, involvement of subcortical U-fibres was detected on T1-DB, which helped define the lesion's location as juxtacortical. In view of the 2010 revised McDonald criteria as well as the Magnetic Resonance Imaging in Multiple Sclerosis (MAGNIMS) criteria introduced in 2016, lesion localization is highly important in diagnosing MS and may prompt early therapy decisions. [3,4] Furthermore, T1-DB sequences help to minimize false-positive findings such as arterial or venous vessels by suppressing the blood signal and might therefore prevent unnecessary therapy changes (Fig 5).
Our findings are in line with recent studies that obtained higher detection rates for brain lesions using T1-DB sequences compared with the routinely used T1w-SE or T1w-GRE sequences. [14,15] However, we compared T1-DB not with T1w-SE but with T1w-MPRAGE, a 3D gradient-echo sequence with an initial 180-degree inversion pulse that is routinely used in our institution as the standard sequence for CEL detection in MS patients. While T1w-SE is reported to show better sensitivity to contrast enhancement than T1w-GRE at 1.5 T, it is also prone to flow-related artefacts and is usually acquired in thicker slices, since covering the whole brain in thin slices would take too much time in clinical routine. [7,16] Furthermore, it is uncertain whether T1w-SE imaging shows superior contrast intensity compared with T1w-GRE at higher field strengths. [17] Recent studies reported higher detection rates and reproducibility of contrast-enhancing lesions in patients with cerebral tumors and MS, especially for smaller lesions, using T1w-GRE at 3 T. [8,9,18] Additionally, T1w-GRE imaging provides whole-brain coverage with thin-section thickness in clinically acceptable scanning times. We used 3D T1w-MPRAGE for lesion detection, which has shown superior detection rates for small enhancing brain lesions compared with 2D FLASH sequences. [19] In light of these recent findings, T1w-MPRAGE appears to be a suitable reference sequence for comparison.
Recent studies have shown that serial application of gadolinium-based contrast agents leads to increased signal intensities in certain brain structures, indicating a deposition of gadolinium in the patient's brain. [20][21][22] Since it remains unclear whether this deposition leads to histopathological damage or health-related long-term effects, the application of gadolinium-based contrast agents should be kept to the minimum possible. Given the superior detection rates of CEL in T1-DB owing to its higher signal intensity, a reduction of the contrast agent dose might be possible. However, further investigations with varying doses of contrast agent would be needed.
Since detection of CEL is always highly rater-dependent, it has been suggested to use subtraction imaging to increase sensitivity in lesion detection, especially for lesions with only subtle contrast enhancement. [23][24][25][26] However, automated on-the-fly image registration is not provided by all vendors or workstations. Even where available it still has its limitations, and subtraction without prior registration is prone to motion artefacts and may lead to a number of false-positive results (Fig 6). Therefore, using only T1-SUB for lesion detection is highly inefficient and is not recommended by any of the recently published consensus guidelines. [3,4,27] In the present study, T1-SUB was used to confirm lesion enhancement in T1-DB and to eliminate possible false-positive findings, of which we could not find any. Using T1-DB sequences we achieved superior signal intensity contrast between contrast enhancement and brain parenchyma without depending on post-processing steps to avoid false-positive findings due to motion artefacts.
The main limitation of the present study is the small sample size. Although 37 MS patients received MRI, CEL were detected in only 16 of them, who were subsequently included in the statistical analysis. Nevertheless, we still obtained significantly different detection rates between the two acquisition techniques. Since contrast enhancement is an irregular phenomenon in MS and can be completely absent on MRI, a larger sample size does not automatically assure a higher CEL count. To confirm our results, further studies with larger cohorts are needed.
Another limitation might be the time between the application of contrast agent and image acquisition. In our imaging protocol, T1-DB sequences were acquired approximately 4 minutes after the administration of contrast medium but always before the acquisition of T1-CE, which was started approximately 7 minutes after gadolinium injection. Optimal enhancement of lesions is reported 5 to 10 minutes after intravenous injection of the contrast agent. [28] The time allowed for contrast enhancement was therefore always shorter for T1-DB than for T1-CE, yet higher detection rates were still obtained for T1-DB. Nevertheless, it would be highly interesting to vary the delay between gadolinium injection and image acquisition. In conclusion, T1-DB is a promising MR sequence to increase sensitivity in CEL detection in MS patients, to minimize false-positive findings, and it might ultimately enable a reduction of the contrast agent dose. Due to its short acquisition time, and since no post-processing steps are needed, it can easily be established in clinical routine. However, further investigations with larger sample sizes are needed to confirm our very promising findings.
Non-Muscle Myosin II in Axonal Cell Biology: From the Growth Cone to the Axon Initial Segment
By binding to actin filaments, non-muscle myosin II (NMII) generates actomyosin networks that hold unique contractile properties. Their dynamic nature is essential for neuronal biology including the establishment of polarity, growth cone formation and motility, axon growth during development (and axon regeneration in the adult), radial and longitudinal axonal tension, and synapse formation and function. In this review, we discuss the current knowledge on the spatial distribution and function of the actomyosin cytoskeleton in different axonal compartments. We highlight some of the apparent contradictions and open questions in the field, including the role of NMII in the regulation of axon growth and regeneration, the possibility that NMII structural arrangement along the axon shaft may control both radial and longitudinal contractility, and the mechanism and functional purpose underlying NMII enrichment in the axon initial segment. With the advances in live cell imaging and super resolution microscopy, it is expected that in the near future the spatial distribution of NMII in the axon, and the mechanisms by which it participates in axonal biology will be further untangled.
Introduction
Myosins are one of the largest and most divergent protein families comprising over 35 classes (reviewed in [1,2]). In humans, myosins are encoded by 40 genes belonging to 13 different classes (I, II, III, V, VI, VII, IX, X, XV, XVI, XVIII, XIX, and XXXV) [1,2]. In reports from the early 1970s, in addition to striated and smooth muscle cells, myosins were shown to be present in several other cell types, including platelets, granulocytes, fibroblasts, and neurons (reviewed in [3]). Initially, these myosins were designated as vertebrate cytoplasmic myosins and nowadays are commonly referred to as non-muscle myosins (NM). Non-muscle myosin II (NMII) is present in every cell type and is the most abundant myosin class. The role of NMII in contraction has been broadly studied and is a crucial player in several key biological processes including cell migration, adhesion and cytokinesis, among many others. Actomyosin is the term used to refer to the cytoskeleton arrangement formed by the complex of actin and myosin. When NMII bipolar filaments bind anti-parallel actin filaments (F-actin), actomyosin contractility occurs through the conversion of chemical energy (hydrolysis of ATP) into mechanical energy, inducing myosin heads to move towards F-actin barbed (+) ends.
The actomyosin cytoskeleton is essential in neurons, where it is involved in the establishment of polarity, growth cone motility, axon growth and axon regeneration, radial and longitudinal axonal tension, and in the establishment of synapses. The details on how myosins, specifically NMII, participate in some of these processes are just starting to be unveiled as their comprehensive analysis is dependent on recent advances of microscopy. Here we will discuss the current knowledge on the participation of the actomyosin cytoskeleton on axonal biology and function. We will mainly focus on the highly abundant NMII, starting by discussing its biochemical properties, structural organization and regulation, and formation of bipolar filaments. We will then explore the involvement of NMII in axonal biology, from neuronal polarization to growth cone motility and neurite outgrowth, to its spatial arrangement and function, both in the axon initial segment (AIS) and in the axon shaft.
NMII Isoforms and Structure
NMII is a hexameric protein composed of two regulatory light chains (RLCs) of 20 kDa and two essential light chains (ELCs) of 17 kDa, tightly bound to two heavy chains of 230 kDa (reviewed in [4,5]). Each heavy chain includes an N-terminal head domain (motor) and a long α-helical rod (tail domain), connected by a neck (lever arm) (Figure 1A) [4,5]. The head domains of NMII contain a binding site for ATP and also for actin. The neck region is important for the stability of the molecule. This region includes two conserved IQ motifs (IQxxxRGxxxR) that form an amphiphilic α-helix structure with binding affinity for the light chains (ELCs and RLCs). The neck domain is followed by a long α-helical coiled-coil rod/tail filament, which mediates heavy chain dimerization and filament assembly. The molecule also includes a short non-helical tail of approximately 33-47 amino acid residues (Figure 1A), depending on the isoform [5]. Binding of light chains is required to maintain the structure of NMII and is critical for its proper function. As evidenced by X-ray analyses of different myosin crystals, the structure of the NMII lever arm is stabilized by the binding in tandem of ELC and RLC [6][7][8][9]. Supporting the importance of light chains for NMII stability, ablation or reduction of RLC expression levels triggers aggregation of the NMII heavy chain in Drosophila [10].
In mammalian cells, NMII exists as three isoforms, NMIIA, NMIIB, and NMIIC [4,5,[11][12][13][14][15][16] encoded by three different genes, MYH9, MYH10, and MYH14, respectively. The head domain is highly conserved between the different NMII isoforms, particularly in the actin-binding site. In contrast, both the C-terminal rod and the non-helical tail, that are crucial to determine the assembly of NMII filaments and intracellular location, differ significantly among the three isoforms [5,17]. The pattern of expression of NMII isoforms is cell type and tissue dependent, with few cell types expressing a single NMII. Generally, NMIIA and NMIIB are the predominant isoforms, while NMIIC is expressed in lower amounts. In neurons, expression of the three NMII isoforms has been reported, NMIIB being the predominant one [11][12][13][14]16,18]. Importantly, as will be further addressed below in the context of the axon, the activity, interactors and subcellular localization, may differ amongst the three NMII isoforms [19][20][21].
NMII Regulation and Function
NMII post-translational regulation has been widely covered in recent reviews [5,22,23] and is briefly discussed here. Phosphorylation of RLC is essential for the regulation of the enzymatic activity of NMII. The biological significance of this phosphorylation was initially unknown. In 1975, Adelstein and Conti showed that RLC phosphorylation by myosin light chain kinase (MLCK) isolated from platelets increased the NMII ATPase activity [24]. Later on, it was demonstrated that this phosphorylation regulates the conformation of myosin heads, promoting the assembly of NMII filaments [25], the active unit in cells, as further detailed below. Beyond MLCK, several additional enzymes are able to phosphorylate the RLC on Ser19 and Thr18, such as Rho-associated coiled-coil-containing kinase (ROCK), leucine-zipper-interacting protein kinase (ZIPK), citron kinase, Serine/Threonine-protein kinase 21 (STK21) and myotonic dystrophy kinase-related CDC42-binding kinase (MRCK/CDC42BP) [25][26][27][28][29]. The above kinases display specific intracellular locations and are modulated by a variety of signal transduction pathways that provide an intricate regulation to accurately modulate NMII activity.

Figure 1. (A) NMII structure and activation: the heavy chain comprises the head domain (actin-binding site and ATPase motor), the neck domain bound by ELC and RLC, and the tail domain with a helical coiled-coil rod and a non-helical tail; RLC phosphorylation on Thr18/Ser19 by MLCK or ROCK (reversed by MLCP) unfolds the inactive conformation and allows assembly into actin-bound bipolar filaments. (B) Bipolar filaments form through electrostatic interactions of the rod domains and grow by addition of further NMII molecules. (C) NMII filaments can further assemble into stacks, either by filament concatenation or by filament expansion.
Addition of ATP to non-phosphorylated NMII favors the folded inactive monomer, in which the motor domains of the two heavy chains associate, and the helical tail folds back and interacts with the RLCs [30]. The folded monomer does not bind to actin therefore preventing the unnecessary consumption of ATP [5,31]. When phosphorylation of RLC occurs, the inactive conformation is disrupted allowing NMII to assemble into filaments, the active unit of the molecule ( Figure 1A). To generate movement on actin, nanoscale motion of the NMII head domain is transmitted to the neck domain, which will ensure that the amplification of these movements is translated into a large power stroke [32]. NMII hydrolyzes one molecule of ATP into ADP and P i , and the neck domain changes from a bent to a straight conformation [32]. Consequently, the neck domain rotates generating a power stroke. As a result, an actin filament translocates 5 to 10 nm [33].
RLC phosphorylation is a reversible biochemical process, tightly regulated by several myosin kinases and phosphatases. In the latter case, a RLC phosphatase will decrease NMII activity by favoring the inactive conformation [34]. RLC phosphorylation at Ser19 and Thr18 is reverted by myosin light chain phosphatase (MLCP) ( Figure 1A), which is a multi-protein complex including a catalytic subunit and a myosin-binding subunit encoded by PPP1R12A and referred to as myosin phosphatase target subunit 1 (MYPT1) [35,36]. MYPT1 phosphorylation inactivates protein phosphatase 1 (PP1) and this leads to a significant increase in RLC phosphorylation and NMII activation [34].
Two important kinetic properties that differ among NMII isoforms are the ATPase activity and the duty ratio, i.e., the time that the myosin motor domain remains strongly bound to actin [37]. NMIIA presents the highest rate of ATP hydrolysis and pushes actin filaments forward more rapidly than NMIIB and NMIIC [38]. In contrast, NMIIB has a significantly higher duty ratio than NMIIA and NMIIC, spending more time bound to actin [39]. These observations support the concept that NMIIA and NMIIC may be more involved in contraction of actin filaments, while NMIIB can also maintain tension by crosslinking actin filaments [39]. A MYH10 mutation (R709C) located in the head domain allows these two functions to be separated, providing insight into how NMIIB acts. Although this mutant exhibits very slow ADP release and a disrupted ability to translocate actin, it spends most of the time strongly bound to actin filaments during the ATPase cycle [38,40].
NMII: From Filament Formation to Stacks
The C-terminal tail domains of NMII heavy chains interact electrostatically with each other to form bipolar filaments, positioning the N-terminal head domains on opposite sides of the filament ( Figure 1B). Early studies using electron microscopy revealed that the length of the NMII bipolar filament is approximately 300 nm [41,42], just above the diffraction limit of standard light microscopy. This length was later confirmed using 3D-structured illumination microscopy [43]. Limitations in imaging techniques underlie the fact that questions related to the number of NMII molecules necessary to assemble NMII filaments, and the process of assembly itself, remain unresolved. A single filament includes multiple NMII molecules and the number of molecules is dependent on steric hindrance [41,44]. NMIIA and NMIIB assemble into filaments of approximately 30 monomers, while NMIIC assembles into filaments of fewer molecules, approximately 14 monomers [41,42]. This corroborates Thomas Pollard's initial findings on in vitro recombinant NMII isoforms using electron microscopy [42]. Most NMII filaments are thought to be homofilaments. However, in areas where NMII isoforms (NMIIA, NMIIB and NMIIC), are simultaneously expressed, the assembly of heterofilaments has also been observed [45,46]. Bipolar filaments can additionally form superstructures made of groups of parallel filaments termed as NMII ribbons or stacks [47][48][49] ( Figure 1B). The mechanisms governing their formation are still unclear. NMII filaments may associate amongst each other, in a process known as concatenation [43,44,47] or alternatively they may undergo splitting, with a single filament giving rise to two separate daughter filaments, in a process known as expansion ( Figure 1C) [44].
Actomyosin in Neurons
Neurons are one of the most highly polarized cells in our bodies and their exquisite shape is crucial for the physiology of the nervous system. Neurons possess distinct compartments, the dendrites and the axon that extend from the cell body (soma). Whereas dendrites receive signals from other neurons, the axon transmits signals through the release of neurotransmitters. Typically, the axon is a single long and thin process, also entailing different compartments, with specific functions and structural organizations. These include the axon initial segment (AIS) located close to the cell body, the place where action potentials are generated, the axon shaft, and in the distal axonal tip, the growth cone (during development) or the pre-synaptic terminal (following the establishment of connections). In this review we will focus on the role of NMII in the biology of the different axonal compartments, starting from its outermost region-the growth cone-up to the AIS.
The Actomyosin Cytoskeleton in the Growth Cone
How a round cell breaks symmetry and gives rise to a highly polarized neuron has fascinated neuroscientists for decades. At the tip of the axon and dendrites, growth cones are able to sense and integrate a variety of signals inducing changes in cytoskeletal dynamics, which will ultimately guide them to their targets. The morphological changes that occur during neuronal development in vivo can be recapitulated to some extent in vitro at least for certain neuron types [50,51]. The importance of the actin and microtubule cytoskeletons in neuronal polarization and growth cone formation have been extensively investigated (reviewed in [52]), as well as how intrinsic and extrinsic cues modulate these processes. Here we will focus our attention on the role NMII in the growth cone.
The growth cone is a highly dynamic structure, comprising a central domain, a transition zone and a peripheral domain [53] (Figure 2A). The central domain consists mostly of stable, bundled microtubules, whereas the peripheral domain is enriched in actin, either in the form of finger-like filopodia that dynamically explore the environment for guidance information, or lamellipodia whose turnover contributes to the forward movement of the growth cone. The transition zone is located at the interface between the peripheral and central domains and is enriched in actin arcs, forming a hemicircumferential ring (Figure 2A). The transition zone may restrain dynamic microtubules from protruding into the peripheral domain [54,55]. The continuous rearward movement of F-actin from the leading edge towards the growth cone center (actin retrograde flow), combined with F-actin treadmilling, i.e., the addition of actin subunits to the barbed end and disassembly from the pointed end, is essential for the growth cone response to directional cues [53].
The role of NMII in growth cone organization and dynamics, namely in actin arc formation and movement, peripheral domain actin retrograde flow, actin bundle severing in the transition zone, and axon guidance, has been extensively investigated [56][57][58][59][60][61]. Although NMII is enriched in actin arcs in the transition zone, the development of antibodies against specific NMII isoforms made it possible to determine that the three NMII isoforms are differentially distributed throughout the growth cone [58]. While NMIIA is highly expressed in the axon shaft and central domain [58], NMIIB and NMIIC are enriched in the transition zone and in the peripheral domain [58][59][60]. Using electron microscopy, both NMIIA and NMIIB were found in the growth cone in their active form, i.e., as bipolar filaments [56]. As detailed below, the distinct spatial organization of NMII isoforms within the growth cone may underlie the apparently opposite roles of NMIIA and NMIIB in growth cone actin retrograde flow, guidance and axon growth.
In the transition zone, actin arcs function as NMII-driven contractile structures. Using structured illumination microscopy, NMII filaments in actin arcs were shown to be oriented parallel to actin filaments and to co-localize with regions enriched in actin pointed ends, associated with tropomodulin [48]. Actin arc movement is decreased by ROCK and MLCK inhibition and increased after MLCP inhibition, which is consistent with actomyosin-dependent contraction [61]. Actin arc formation is also sensitive to changes in Rho activity [53,61], and is compromised by NMII inhibition through blebbistatin [53,57]. In fact, when NMII activity is inhibited by blebbistatin, long parallel actin bundles fill the complete growth cone, leading to expansion of the peripheral domain, while NMII disappears from the transition zone [53,57]. Of note, actin arcs interact with microtubules and transport them back into the central domain, as NMII provides for the compressive force necessary for microtubule bundling in the growth cone neck [62].
In the peripheral domain of the growth cone, actin retrograde flow is central for axon guidance. Actin bundles assemble near the growth cone leading edge, translocate rearward by retrograde flow, and recycle through bundle severing. Although the role of NMII in the retrograde flow was initially questioned [63,64], several lines of evidence support that this process is indeed NMII-dependent. Actin retrograde flow is significantly decreased by ATPase inhibitors including 2,3-butanedione-2-monoxime (BDM) [65] and blebbistatin [57], as well as by MLCK inhibitors [61].
The steady-state "treadmilling" of actin filaments maintains actin bundles at relatively constant lengths. When NMII activity is inhibited by blebbistatin, the actin bundle length increases significantly and the severing process within the transition zone is impaired [57]. These observations suggest that actin retrograde flow rate in the growth cone is positively regulated by the activity of NMIIB, the isoform more abundant in the peripheral domain and in the transition zone. However, when actin retrograde flow of NMIIB-knockout mouse growth cones was investigated, an increased rate was found, which was suggested to be due to a functional takeover by other NMII isoforms [66].
Inhibition of NMII activity in primary neuron cultures and in neuronal cell lines has revealed its importance in axon elongation and retraction. During the initial stages of neuronal polarity, extension of undifferentiated minor processes occurs. These will then differentiate into the axon and dendrites, which will elongate in a growth cone-dependent manner. Inhibition of NMII with either blebbistatin, or through the inhibition of its upstream regulators (MLCK or ROCK), promotes the fast growth of minor processes [67] (Figure 2B). These data indicate that NMII negatively regulates neurite outgrowth in the early stages of neuronal polarization. When actin polymerization is inhibited by latrunculin, neurite extension is further potentiated with concurrent NMII inhibition, but fails to reach the magnitude of extension produced by blebbistatin alone [67]. These data suggest that NMII negatively regulates neuronal development by generating contractile forces against F-actin. When microtubule polymerization is disrupted through the use of nocodazole, the increase in minor process length induced by blebbistatin is prevented, indicating that microtubule dynamics is required for blebbistatin-induced neurite outgrowth [67].
Based on classical experiments using 2D neuronal cultures, following the establishment of the growth cone, axon elongation occurs through a molecular "clutch" that links the substrate to the actin cytoskeleton in the growth cone [68]. This attachment to the substrate is thought to be needed for the axon to exert forces necessary for axon extension. The molecular "clutch" is a three-step process sequentially involving protrusion and substrate-attachment of filopodia and lamellipodia, followed by the engorgement of the growth cone by microtubules and organelles, and finalized by the suppression of protrusive activity, and consolidation of a new stretch of stable axon shaft behind the advancing growth cone. In this model, axon growth is mediated by the growth cone that pulls itself and the axon along the substrate through actomyosin-mediated contraction [69,70]. NMII-based contractility restricts microtubules from engorging the growth cone [53,65,[71][72][73]. Accordingly, when NMII activity is inhibited by blebbistatin, filopodia in the peripheral domain present an increased number of microtubules [74]. Interestingly, microtubules can penetrate from the central domain towards the peripheral domain of the growth cone only if the forces generated by the retrograde motor dynein, overcome the NMIIB-driven forces [75].
When axon growth takes place in the presence of the permissive substrate laminin, inhibition of all NMII activity decreases axon elongation (reviewed in [71]). When analyzing the effect of different NMII isoforms on axon extension, several lines of evidence support that, in general terms, NMIIA participates in neurite retraction, while NMIIB is required for neurite outgrowth (Figure 2B). In line with this view, when antisense oligonucleotides against NMIIB are used in neuroblastoma cells, a significant decrease in outgrowth takes place [60]. Moreover, NMIIB knockout superior cervical ganglia neurons have decreased rates of axon outgrowth [76] and axon growth is impaired in MYH10 knockout mice [71]. In relation to NMIIA, an important body of data supports the role of the RhoA/ROCK pathway as an upstream activator of this NMII isoform, modulating its ability to repress axon growth [61,[77][78][79][80]. Inhibition of NMIIA activity results in actin rearrangement in the growth cone and in the loss of focal contacts, which culminates in increased axon growth [79,[81][82][83]. Additionally, it has recently been suggested that RhoA may inhibit axon growth by activating NMII in the actin arc, preventing microtubule protrusion towards the leading edge of the growth cone [54], which is essential for axon growth. Together these data suggest that a tight regulation of NMII activity in the transition zone is necessary to enable a balanced microtubule entry into the peripheral domain, compatible with optimal axon growth.
In summary, NMII regulates several aspects of axon extension, ranging from actin retrograde flow, growth cone engorgement by microtubules, and substrate adhesion. How these different functions are integrated and tuned by the different NMII isoforms during the process of axon growth, remains to be fully clarified. Despite the dominant view that neurons need adhesions to extend axons through actomyosin-mediated pulling force, one should however bear in mind that the molecular clutch model should probably be revisited. Very recently, it has been demonstrated that in the more physiological environment of 3D matrices, growth cones can extend axons independently of adhesions and pulling forces on their substrates [84]. In 3D, microtubules were shown to grow unrepressed by the actomyosin cytoskeleton into the growth cone peripheral domain, enabling a fast amoeboid-like axon elongation. Thus, whereas axon growth in 2D is enhanced by actin destabilization or inhibition of the actomyosin cytoskeleton, this is apparently not the case for axon growth in 3D. This novel view on axon elongation should certainly be further explored in the future.
NMIIA and NMIIB Play Central Roles in Axon Guidance
Neurons are able to respond to both attractant and repellent guidance cues that are translated into alterations in cytoskeletal dynamics leading to changes in growth cone motility, direction and growth rate. Many axon-guidance cues affect the contraction of NMII by modulating the balance between Rho, Rac, and Cdc42 activities [57,85]. The combination of individual NMII isoforms and guidance cues is important for each neuron to grow in a given direction. Although NMII isoforms are quite similar in structure and are capable of partially replacing each other [86], each NMII isoform has been implicated as playing different roles in axon guidance, in a context-dependent manner. As explored above, while NMIIA is generally thought to be responsible for neurite retraction, NMIIB is required for neurite outgrowth as a response to positive guidance cues [60,71,76,79]. Accordingly, NMIIA knockdown promotes neurite outgrowth while NMIIB knockdown inhibits outgrowth on permissive cues such as poly-L-lysine and laminin [59]. Work by Turney and colleagues has nicely shown how the substrate dictates the organization of NMII isoforms within the growth cone, modulating the balance of tension forces [87]. The authors have demonstrated that NMII activity is required for faster axon elongation in response to nerve growth factor (NGF). This occurs through the regulation of two actomyosin-dependent mechanisms: Transverse actin bundling and actin retrograde flow that oppose microtubule advance. In the presence of the permissive substrates laminin and fibronectin, NMIIA and NMIIB display differential roles [87]. Whereas large stable adhesions on fibronectin enhance NMIIA-dependent transverse actin bundling, small transient adhesions on laminin promote NMIIB-dependent slowdown of actin retrograde flow [87]. This is in accordance with the fact that NMIIB KO cells present weaker traction forces on laminin and a more undirected growth cone advance [88], as well as a higher rate of actin retrograde flow [71]. In the absence of NMII activity, NGF failed to stimulate axon elongation, supporting its importance in axon outgrowth [87].
The role of NMIIA and NMIIB in the response to inhibitory molecules may be more complex than that observed with attractive cues. When NMIIB knockout neurites are grown in alternating patterns of permissive and non-permissive guidance cues (laminin-1 and poly-L-ornithine, respectively), the absence of NMIIB enables neurites to cross the barriers without changing direction [59]. In the case of inhibitory chondroitin sulfate proteoglycans (CSPGs), knockdown of either NMIIA or NMIIB reduces axon growth capacity [81,89,90]. In response to the non-permissive cue semaphorin 3A, NMIIA and NMIIB are involved in growth cone collapse and neurite retraction, respectively [91]. NMIIA also mediates growth cone collapse and neurite retraction in response to repulsive guidance molecule (RGMa) [82]. In the case of ephrin-A5, a non-permissive molecule, binding to EphA3 triggers the activation of RhoA/ROCK, activating NMIIA and ultimately leading to axon repulsion [92]. This context-specific role of NMIIA/B in the response to different guidance cues certainly reflects the intricate regulation, balance and spatial distribution of both NMII isoforms.
NMII as a Modulator of Axon Regeneration in the Adult
Following the establishment of synapses, mammalian central nervous system (CNS) axons fail to recapitulate development and are generally unable to regrow after injury. In the last decades, many mechanisms underlying axon regenerative failure have been identified. From the neuronal intrinsic standpoint, following lesion, axons need to induce local cytoskeleton remodeling to promote the formation of a new growth cone [93,94]. Several reports support a strong link between NMII inhibition and increased axon regeneration after injury. In rats, after spinal cord injury, phosphorylated MLC is up-regulated in axons close to the lesion site, in a Rho-dependent manner, and growth cone collapse is mediated by NMIIA [82]. Supporting the causative role of NMIIA in hampering axon regrowth, silencing its gene promotes axon regeneration after contusive spinal cord injury in rats [95]. In regenerating mouse sensory axons, when NMIIA and NMIIB are knocked down or pharmacologically inhibited by blebbistatin, axon regeneration occurs irrespectively of the presence of inhibitory cues including CSPGs and myelin-based inhibitors [81]. Inhibition of NMIIA and NMIIB results in loss of lamellipodia and actin arcs, causing significant microtubule protrusion towards the leading edge of the growth cone. As a result, axon growth rate over non-permissive substrate is accelerated [81]. Likewise, in vivo studies using double knockout mice for NMIIA/NMIIB in retinal ganglion cells showed increased optic nerve regeneration [96]. When the growth cone morphology and axon trajectory were analyzed, the absence of NMIIA and NMIIB abolished almost completely the formation of retraction bulbs, thus enhancing axon extension efficiency [96]. In contrast, in goldfish retinal ganglion cells, MLCK, the kinase that triggers RLC phosphorylation leading to the formation of active NMII bipolar filaments, is upregulated in regenerating axons [89]. In this system, if NMII activity is inhibited through the use of MLCK inhibitors (ML7 or ML9), growth cones of regenerating axons cease to move. This result indicates that, in contrast to mice, NMII activity is needed in goldfish retinal ganglion cells for successful axon regeneration [89]. It is therefore possible that in different species, the modulation of NMII activity generates different outcomes in axon regeneration.
Distribution of NMII Throughout the Axon Shaft: From Enrichment in the AIS to Its Presence Throughout the Axon Shaft
Beyond its function in the growth cone, the existence of active NMII in the AIS [97,98] and throughout the axon shaft [99,100] has recently gained attention. Here we will discuss the potential functional consequences of NMII enrichment in the AIS as well as its role and possible structural organization in the axon shaft.
Why Is Active NMII Enriched in the AIS?
The AIS is a highly organized region generally located in the proximal axon of neurons. It has a specialized cytoskeletal architecture, central for the generation of action potentials and for the establishment of neuronal polarity. The AIS displays structural plasticity, as it can change its length and location relative to the soma in an activity-dependent manner, fine-tuning neuronal excitability [101]. Using rat hippocampal neurons, the downstream mechanisms enabling structural changes at the AIS began to be unveiled, as both its long-term relocation and rapid shortening could be blocked by blebbistatin [98]. These initial data established a link between NMII and AIS function, suggesting that its primary role at the AIS might be to enable activity-dependent morphological alterations [98].
Later, it was demonstrated that pMLC and NMII activity are necessary and sufficient to initiate AIS assembly [97]. Although pMLC is initially abundant and uniformly distributed throughout the axon (DIV2), it accumulates very early at the AIS simultaneously with ankyrin G (the prototypic AIS marker normally considered the prime nucleator of its assembly) [97] (Figure 3). An asymmetric distribution of NMII kinases and phosphatases was suggested to underlie pMLC enrichment at the AIS [97]. From evidence collected using STORM nanoscopy, it was proposed that pMLC associates with actin rings [97] within the axonal membrane periodic skeleton (MPS) [102]. The MPS, a highly regular network composed of actin rings spaced by spectrin tetramers approximately every 190 nm, is thought to maintain axonal structural integrity [103,104]. More recently, tropomyosin 3.1 (Tpm3.1) was also shown to be necessary for the structural and functional maintenance of the AIS, probably by recruiting NMIIB to this structure and thus mediating its possible contractility [105]. Interestingly, pMLC is rapidly lost from the AIS during neuronal depolarization, via Ca2+-dependent mechanisms, leading to destabilization of the actin cytoskeleton [97]. This finding further elucidates the mechanism of activity-dependent structural plasticity of the AIS (Figure 3). Of note, it has long been known that axon diameter is regulated by activity-dependent mechanisms, as axons swell during the generation of an action potential (reviewed in [106] and [107]). Later, NMII activity was also shown to be involved in axonal electrophysiology, as blebbistatin increases action potential conduction velocities in hippocampal neuron cultures [99]. This supports that, in addition to modulating activity-dependent structural plasticity of the AIS, NMII may also be involved in the regulation of axonal conduction. One should bear in mind that the role played by NMII in the AIS, which is probably related to its contractile properties and ability to rearrange the actin cytoskeleton, as well as its spatial distribution in this axonal compartment, are just starting to be unveiled.
The Actomyosin Cytoskeleton as a Key Regulator of Circumferential and Longitudinal Axonal Tension
Although pMLC is enriched in the AIS [97], it is also found throughout the axon shaft [99,100] ( Figure 3). The possible interplay between actin rings in the axon shaft, as potential anchors for NMII filaments, was explored by independent groups that reached similar complementary results [99,100,108]. Through the use of chemical inhibitors (blebbistatin [99,100,108], ML7 [99,100,108], and Y-27632 [108]) as well as shRNA-mediated downregulation, decreased NMII activity/expression was shown to lead to increased axonal diameter [99,100,108]. Whereas downregulation of NMIIA and NMIIB produced similar effects, NMIIC knockdown did not affect axonal caliber, suggesting that this specific NMII isoform is not involved in the regulation of axonal diameter [99]. Together, the data gathered by the above-referred groups support that the actomyosin cytoskeleton participates in the generation of circumferential tension along the entire length of the axon shaft. Conceptually, this finding has important implications in axonal biology as by regulating radial contractility, NMII impacts action potential conduction [99], the efficiency of axonal transport [100] and possibly on the onset of axon degeneration as a sustained NMII inactivation disrupts the MPS entailing the formation of focal axonal swellings [100] (Figure 3).
Using super resolution microscopy, pMLC was shown to be organized in a circular, periodic conformation colocalizing with MPS actin rings, intercalating with βII-spectrin [99]. In turn, NMII heavy chains appeared distributed as multiple filaments with approximately 300 nm of length along the longitudinal axonal axis [99]. NMIIA heavy chains showed sites of colocalization with βII-spectrin [99], whereas NMII head domains colocalized more extensively with periodic actin rings [99,100]. This supports that NMII filaments can crosslink adjacent rings, as previously suggested by platinum-replica electron microscopy [109] (Figure 3). Occasionally, NMII heavy chains colocalized with phalloidin, indicating that these may also be located within individual actin rings [99,100] ( Figure 3). NMII filaments within single actin rings are in principle the conformation that is able to generate the highest contractile force. In contrast, NMII filaments crosslinking adjacent actin rings are not expected to provide for radial contractility but provide for scaffolding. The details of NMII filament composition and structural organization in the axon shaft are just starting to emerge. The nature of their interaction with actin (with MPS actin rings or even with deeper axonal actin structures), together with a more profound understanding of their spatial distribution, will certainly bring new light to our knowledge on the fluctuations in axonal diameter occurring during axonal transport, action potential firing and conduction, and axon degeneration. One should also take into account that recently, the MPS actin rings were suggested to be made of two long parallel intertwined actin filaments [109]. This unusual actin arrangement poses exciting questions on how such filaments might be able to dynamically adapt to oscillations in axonal diameter, and how NMII activity could orchestrate for their contraction.
In addition to controlling axon diameter, NMII is also central in modulating longitudinal axonal tension. In Drosophila, NMII knockdown or reduced NMII activity through treatment with ML7 or Y27632 leads to a reduction in longitudinal axonal contraction [110]. In chick DRG neurons, the NMII inhibitor blebbistatin induces a similar effect, as it blocks axon straightening upon trypsin-induced de-adhesion [111]. Of note, when considering the longitudinal organization of the axonal cytoskeleton, neither drug- nor shRNA-mediated modulation of NMII activity results in alterations of MPS periodicity [99,100,111]. Our knowledge of NMII positioning along the axon will certainly evolve in a manner that will allow us to understand its participation in axonal radial contractility and longitudinal tension, and to comprehend how NMII activity does not interfere with the length of the extended MPS spectrin tetramers.
Conclusions
The actomyosin cytoskeleton contributes to various essential cellular processes. Although the NMII structure, regulation and function in diverse cell types have received intense attention, there are still several open questions related to the organization and role of actomyosin networks in neurons. What is the interplay between different NMII isoforms that allows them to perform independent functions during axon growth and growth cone guidance? How can we reconcile these different functions with the fact that under specific circumstances they may replace each other? Is it possible that NMII regulates axon growth not only in a growth cone-mediated manner but also by involving the actomyosin networks present in the axon shaft? What is the spatial distribution and structure of NMII in axons and how does it accommodate for the simultaneous regulation of axon longitudinal and radial tension? There are also unanswered questions as to the involvement of NMII in brain dysfunction. As detailed above, NMII is a crucial player during different stages of neural development. Accordingly, NMII mutations have been associated with several neurodevelopmental disorders as reviewed before [112]. Beyond development, NMII may also contribute to neurodegenerative diseases such as Alzheimer's disease [113][114][115][116]. In this disorder, among other mechanisms, NMIIB can exert a tensional pressure on the F-actin network that may impair amyloid precursor protein translocation towards the cell membrane [113]. In the context of neurodegenerative disorders, disassembly of the axonal MPS (an actomyosin network), has been implicated in trophic deprivation-mediated axon degeneration [117,118]. Further reinforcing the involvement of NMII in degeneration, prolonged NMII inactivation leads to disruption of periodic MPS actin rings and to the formation of focal axonal swellings [100]. In this respect, could NMII be used as a therapeutic target to revert axonal swellings and axon degeneration? With the recent and powerful advances on super-resolution microscopy and correlative techniques, these and other questions will certainly be an exciting area to explore in the coming years.
Imagining American Indians and Community in Southeast Asia: Resistance, Experience, and History
Introduction
Indigenous North America and Southeast Asia are not often locations of comparison when considering Indigenous experiences. Although there has been a growing interest in the interconnected and shared experiences of Indigenous peoples around the world, there are more well-trodden comparisons, such as between Oceania and North America (Ford, 2008; Hall & Fenelon, 2009; Havemann, 1999), than those of American Indians and Indigenous peoples in Southeast Asia. Yet, these regions share both colonial experiences and colonising personnel. During the 'Age of Exploration,' Magellan, Del Cano and Drake explored both regions. Indigenous communities during the early colonial period were enlisted as proxies in wars between English, Dutch, Spanish and Portuguese colonialists. For instance, the Seven Years War was fought simultaneously in North America and Southeast Asia and, before undermining the Philippine Independence movement in the early 20th century, leaders of the U.S. Army, Arthur MacArthur, Joseph Wheeler, John Pershing and Leonard Wood, fought in the final 'Indian Wars' against the Apache and other southwest Native nations (Miller, 1983). Each region also saw the emergence of anticolonial movements in the 1960s with Indigenous activism (Go, 2003).
There is therefore a growing interest by scholars to examine the extent and terms of Indigenous intercommunity engagement. By 'Indigenous intercommunity engagement', we mean the contact, but also awareness, that Indigenous peoples have with each other, which crosses tribal, ethnic, linguistic and regional boundaries (Cornell, 1988; Lightfoot, 2016; Lima, 2013; Muehlebach, 2001, 2003; Rigney, 2018; Vivian et al., 2016; Wiessner, 1999). Work emphasising this engagement has multiple origins. In a broad sense, and one that emphasises the need for further empirical exploration, such scholarship brings to the forefront the complexity of Indigenous cultural (Forte, 2002), literary (Piatote, 2013) and political life (Alfred & Corntassel, 2005; Baracco, 2017; Singh, 2018) that was likely overlooked or simplified in the past. For instance, Indigenous peoples have collaborated and shared strategies in efforts toward greater self-determination; therefore, scholarship attends to these engagements in order to understand the nature of emancipatory agency (Baracco, 2017; Stastny & Orr, 2014).
Research on Indigenous intercommunity engagement also demonstrates an awareness that exists between Indigenous peoples that harkens to a shared experience. We label this an imagined transnational Indigenous community, borrowing from Benedict Anderson's notion of 'imagined communities' (Anderson, 1983). In defining these communities, Anderson identifies the role of symbols, such as popular media, in creating social relations. This might not be considered 'contact' in the direct or physical sense but a realisation of a shared Indigenous historical experience and contemporary condition between Indigenous peoples that transcends tribal, ethnic, national and regional distance. This article focuses on such an awareness among Filipino/a, Indonesian and Papuan communities in Southeast Asia as they understand American Indians through a shared history of colonial experience, dispossession and material culture. This work follows previous research on transnational understandings of Indigenous peoples' experiences and reflections (Medak-Saltzman, 2015; Muehlebach, 2001, 2003; Tilly, 2002). We seek to further this literature by elaborating on perceptions of American Indians among Indigenous peoples in Southeast Asia and therefore add to the growing literature on Indigenous transnationalism. This article's origin differs from most as these research themes were not the goal of our fieldwork. The material in this article was collected from several periods of fieldwork in Southeast Asia and the United States among tribal communities (2004-2011). This social science research was not focused on international indigenous studies, or perceptions of American Indians in Southeast Asia, or the reverse. The parallels between American Indians and their own communities that were found by informants inspired this paper and are reflected in its composition. We did not incorporate a research design that specifically attended to the perspectives Indigenous peoples have for each other because the primary goal of the interviews was to explore other questions.
Our research was open-ended and qualitative, through which we sought to understand meaning-making among our participants and their individual and communal narratives. Therefore, this paper utilised qualitative phenomenological methods whereby we could look for 'thick descriptions' (Geertz, 1972; Lincoln & Guba, 1985) and the context in which they were embedded. Davidsen (2013) described phenomenological methods as emphasising 'interpretation [as] being inherent in experience' (p. 318). Blended within phenomenological approaches, we used a narrative method (or narrative inquiry) whereby individuals relate their own story. Clandinin describes the narrative method as one in which, like phenomenological approaches, experience 'serves as "the cornerstone" of [...] analysis' (2007, p. 2). We feel that applying combined narrative and phenomenological methods to this material is congruent with our overall research aim, which is to understand how participants see themselves within a greater context of similarity or difference.
We have organised this article into three sections. The first section discusses why focusing on imagined transnational Indigenous communities challenges certain 20th century tenets in the social sciences. In particular, we suggest that Indigenous peoples' engagement with one another refutes the assumption that Indigenous peoples were isolated from each other. The second section focuses on Muslim (Moro) communities in the southern Philippines and their understanding of American Indian colonial experiences. Here we emphasise the U.S. military personnel who fought in the later 'Indian Wars' in the American southwest in the 19th century, who later were involved in colonising Moro populations in the early 20th century. This analysis also relies upon contemporary ethnographic work among communities still engaged in anti-colonial resistance. The third section examines Papuan communities in Eastern Indonesia and how they draw similarities between their experiences and those of American Indians.
Imagined transnational Indigenous communities
Classical works in the social sciences often portray or assume non-Western communities, especially Indigenous ones, as remaining fixed within a geographic region and thus isolated. Foundational theories in the social sciences in the 19th and 20th centuries, such as Marx (1867/1992), Weber (1922/2013) and Durkheim (1912/1915), drew from ethnographic research that assumes social systems are insular. Portraying Indigenous populations as isolated from each other facilitated gradation in civilisations, as interactions between groups assume complex commerce, travel patterns and multiple forms of exchange that colonists were eager to dismiss so that Indigenous peoples could remain 'exploitable' (see Usner, 2009, for a colonial conception of Indigenous labor and its misrepresentation).
Criticisms of the academic literature assuming isolation are plentiful. These critical works typically centre on the material and ideological connections among Indigenous communities, between Indigenous and non-Indigenous communities, and such groups' interactions with international agencies, broader historical processes and the state. Research that has been critical of the isolation assumption, such as Wallerstein (1974) on world systems, Wolf (1982/2010) on the history of local interactions with global processes, Scott (1985) on resistance to the state, and Ferguson (1990) on contemporary development practices, became many of the new foundations for contemporary social science. This new perspective questioning the isolation of both Indigenous and non-Indigenous communities often took materialistic approaches such as political economy at global, national, regional and local scales.
During this re-evaluation of communities, a literature developed arguing that local small-scale communities were connected with one another, but this was a relatively new development. The most influential of this literature were Eugen Weber's Peasants into Frenchmen: The Modernization of Rural France, 1870-1914 (1976) and Benedict Anderson's Imagined Communities: Reflections on the Origin and Spread of Nationalism (1983), which placed nations as recent rather than longstanding social units. Nations were based on a common vernacular and identity, which was spread through schools, mass printing and modern mobility, as well as the technological projects undertaken to make these changes possible. Weber's analysis rests on both the material changes in France such as roads, railroads and military conscription, as well as the spread of a Parisian-based common culture through media and schools to what were at the time provincial regions. Anderson's work followed such an approach but focused on the creation of states as communities in Southeast Asia through historical comparisons with other forms of community such as a tribe or religious sect.
An element to these highly influential works [2] was the indispensable role of the state in creating a new identity in which other existing local identities based on kinship, religion, history and quotidian experience can merge into geographically and demographically larger units (Weber, 1976). Under this model, Indigenous communities came into contact with communities distant to their own, including other Indigenous communities, indirectly through the state, at the same time Indigenous peoples were pushed toward assimilation into a national, non-Indigenous culture (Adams, 1995; Armitage, 1995; Haebich, 2008; McGregor, 1999; McKenzie, 1914). Domestic examples in the United States are plentiful and would include residential boarding schools in the 19th century (McBeth, 1983) and urbanisation efforts (Cornell, 1988) in the 20th century. An international example comes from Medak-Saltzman (2010), who outlines an instance of this in the late 19th century and early 20th century world fairs, when Ainu (Indigenous Japanese) and American Indians were housed together while on 'display' to the public.
According to Anderson, communities are 'imagined because the members of even the smallest nation will never know most of their fellow-members, meet them, or even hear of them, yet in the minds of each lives the image of their communion' (1983, p. 6). We suggest that as the emergence of the state as a community rests on the imaginary, it has also produced an international imagined community of anti-state Indigenous communities [3]. Yet, this other imagined community does not solely rest on a shared experience of resistance to settler states. As Merlan (2009) has found, Indigenous peoples in transnational spaces have also relied on analogous characteristics such as appearance, material culture and stylised behaviour to build an imagined community.
In the three Southeast Asian communities where we conducted fieldwork, individuals identify with American Indians by using the concepts of locality, relationality to the state, race and marginality. These forms of identification also reflect scholarship that defines indigeneity as racial (Marks, 1995), marginal (Bell, 2014; Medak-Saltzman, 2010), relational to the state apparatus (Biolsi, 2001, 2004; Merlan, 2009), as a locality (Basso, 1996), or as a combination of these (Cabo, 1986). The source of our material comes from our experience conducting field research in Southeast Asia in periods from 2003 to 2011 on the islands of Mindanao and Jolo in the Philippines and New Guinea in Indonesia.
Colonial histories, geographies and personnel: Moros and American Indians
We conducted fieldwork on the islands of Mindanao and Jolo with the Maguindanao and Tausug ethnic groups during parts of 2003-05. Based on the literature on resistance to the state (Dove, 1996; Scott, 1985, 1990; Wolf, 1969/1999), this study investigated the history of the rebellion by Muslim Filipinos, who referred to themselves as Moros, a word derived from the early colonial Spanish word for 'Muslim'. We also examined how economic change has shaped this conflict. This work brought us into contact with hereditary elites, including chiefs (datus) and descendants of sultans, as well as farmers, governors, senators, and former members of the rebel armies of the Moro Islamic Liberation Front (MILF) and the Moro National Liberation Front (MNLF). In conversations about the Moro rebellion against the Philippine state that began in the 1960s, many of our informants placed their conflict for independence into a larger global movement that included American Indians. Our informants, therefore, echoed Williams's (1980) criticism of histories of American colonialism in the Pacific which had 'a mistaken consensus [...] that the United States did not have a tradition of holding alien peoples as colonial subjects before 1898' (p. 810). Before we examine how American Indians were part of an imagined community of Moros in the South Philippines, we will construct how Moros understand themselves as Indigenous and give an account of how colonisation in this region was part of a similar history to that of American Indians, which included the same personnel undertaking both colonial projects.
When working with the concept of Indigeneity, it is inevitable that one runs into complexities in how to apply the term, as scholars differ regarding its conceptual boundaries. An analysis of the term and its usage can be found in Beteille (1998), Trigger and Dalley (2010), or Plaice (2006), but in these attempts to define the term, a tension remains between allowing for multiple forms of Indigenous existences while retaining a cohesive concept. Indigenous peoples themselves engage with the concept of Indigeneity differently. Many, but not all, American Indians see their Indigenous identity in their original relationship with space and place as the initial occupants of a region. This perspective, as discussed by Borrows (1999), presents potential limitations that rely on the ability to identify no earlier peoples occupying a territory. Indigeneity is not always connected to an initial occupation or connection to certain land. Maori, who consider themselves Indigenous peoples of Aotearoa (New Zealand), do not state that they originate from New Zealand but from a distant island they call Hawaiki (Hanson, 1989). Their claim does not result from original inhabitancy from 'time immemorial'; rather, it is relative to that of non-Maori New Zealanders.
Indigeneity in Asia is recognised as an unclear concept. Whether it be the absence of state recognition (Erni, 2008) or the notion of 'saltwater colonization' (Baird, 2016, p. 501), what constitutes a settler or Indigenous person is ambiguous. Moro identify their Indigenous status within the context of extended habitation, the long process of political colonisation, and being subject to settler communities. Moro recognise groups (such as the Sama and Atta) that predate their own communities in the area but cannot conceptualise that their own communities belong anywhere else. Undoubtedly, religion plays a considerable role in the distinctions they make between themselves and other parts of the Philippines, but it is the claim of being an autonomous people that has led to the confrontations with the northern Philippines and with the United States. Yet, Moro resistance has not been characterised by a rejection of other religions but has responded to their forced relocation from their lands by ethnic groups from other parts of the Philippines through the Homestead Act (McKenna, 1998). For example, Christians have been well tolerated in regions such as the Province of Sulu, where they consist of converts from among the local population rather than migrants. Instead, their perspective on indigeneity and their relationship with other such groups developed in a deep history of colonialism and settlement, and is grounded in their autonomy over land (Jubair, 1984).
Despite the Spanish colonisation of the Northern Philippines in 1565, Muslim communities and Sultanates in the southern island of Mindanao and the Sulu Archipelago remained independent (McKenna, 1998). Tensions over Spanish incursion into Muslim territories resulted in a series of wars spanning several hundred years. The Sultanates of Mindanao and Sulu, which were the major Muslim political entities in the region, remained independent of Spanish rule during this period, although there were Spanish military installations, settlements, and missions in the Southern Philippines (Newson, 2009).
Following several revolts and rebellions against the Spanish colonial government by native Filipinos, including the Moro and the non-Muslim indigenous Lumad people of the Southern Philippines, as well as Chinese and Indian ethnic groups and Philippine-born Spaniards (Insulares), the Treaty of Paris of 1898 ceded the Philippines and many of Spain's other colonial territories to the United States after the Spanish-American War of 1898. This included the surrender of Muslim communities in the Southern Philippines even though many were not under the control of Spain, as they largely identified themselves as Indigenous and made their claim to their land through long-term habitation (Caballero-Anthony, 2007). However, with the treaty, they were ceded to the United States as concessions made by Spain to provide an additional pretext for controlling the entire archipelago (Linn, 2000).
After establishing control over the Northern Philippines during the Philippine-American War (1899-1902), the U.S. military moved into Mindanao and Sulu to subjugate the entire Philippine Archipelago. Many of the U.S. military leadership in the South Philippines had fought in the final Indian Wars in Arizona and New Mexico. All four of the U.S. military governors of the Philippines were veterans of frontier campaigns against American Indians before their arrival in Southeast Asia. John Pershing and Leonard Wood, officers who served in both North America and the South Philippines, made comparisons between American Indians and Moros in their personal and official records of their time in the South Philippines (Wood, 1904).
During the lead author's fieldwork on the island of Jolo in the Sulu Archipelago, this colonial history was not only recounted by many residents, but often they drew comparisons with the subjugation of American Indians. In pointing to the caldera in the centre of the island of Jolo where the Battle of Bud Dajo (1906) was fought between Moro troops and the U.S. military, they mentioned how the U.S. Army commander, Leonard Wood, was also the individual who captured Geronimo, the Apache resistance leader. Additionally, Tausug made comparisons between the killing of women and children who were hiding in the caldera and the massacre of American Indian women and children: 'this is where the American Army killed our people ...' (Interview, 2004). Individuals recounted details of Wood's life, such as his medical degree and education at Harvard. They also described with admiration the American Indians' militant resistance to American colonialism, which was also a point of pride within their own history. Muslim communities of the Southern Philippines cite their bravery within the context of the U.S. military's need to invent a larger calibre sidearm and the use of repeating rifles. Officers found the current calibre of sidearm insufficient to stop Moro fighters before they reached U.S. lines (Avery, 2012), and infantrymen needed the faster volleys to suppress attackers. Although it is unclear whether the shipments of larger calibre sidearms arrived for campaigns in the Philippines, Tausug and Maguindanao take satisfaction in requiring the U.S. military to raise its firearm standards.
Often during fieldwork interviews, individuals would present photocopied documents while speaking with us: copies of historical agreements between the Muslim Philippines and the U.S. Government dating back to the late 19th century. Most commonly, they would present the Bates Treaty of 1899, a conditional treaty between the United States and the Sultan of Sulu giving partial autonomy to the region in exchange for the free movement of U.S. troops in the area. Other documents that were displayed included a request by Moro noblemen in the 1920s that if the Philippines were given independence, then Muslims wished to remain part of the United States as a territory instead of being made part of the same country as the Northern Philippines. A retired congressman from Sulu made sure to inform us that this request was still standing: 'please, will you take this letter to your government and tell them that we still want to join the U.S.' (Interview, 2004). Both documents were brought to our attention for two purposes: to indicate that the Muslim Philippines had been treated as a political unit with autonomy rather than as part of the Northern Philippines, and that such treaties had been either broken by the United States or that requests for further autonomy had been ignored.
Moro communities made these claims to moral and legal standards within a broader community of marginalisation during the colonial period. This imagined community specifically included American Indians. Informants would describe a history of treaty violations that occurred in North America between the U.S. government and American Indian tribes. Most specifically, they would include tribes from the western United States, such as the Apache and Comanche, as their colonial histories coincided most directly with those of the Southern Philippines, and the images of resistance were more salient. These referential acts, in which one's status or lack of status is identified or placed with other histories, are also practised in similar ways in American Indian communities. For instance, in the tribal meetings of Citizen Potawatomi, accounts of treaties between the Potawatomi and European and U.S. governments, as well as violations of those treaties, are used to explore the condition of the Potawatomi community and its broader outlook toward sovereignty. This history of broken treaties is central to how the tribe understands its history. A large part of the Potawatomi's contribution to the National Museum of the American Indian in Washington D.C. involves the history of treaty violations by the U.S. government. However, unlike in Southeast Asia, broader global context is not often provided. For instance, rarely do members of the Citizen Potawatomi Nation, in recounting European involvement in their tribe's history, reference the Philippines or other colonised peoples as part of a global community.
Although there have been multiple cases of American Indian activists engaging with other Indigenous groups (Muehlebach, 2000), we have experienced few examples of this at quotidian levels on reservations. However, one example of American Indians imagining the similarly marginal status of Southeast Asian communities was in a story about the Vietnam War told during fieldwork with American Indian communities in 2008. A Pueblo Indian in New Mexico recounted how his platoon was captured during the Vietnam War. Most captives were executed, with the exception of the American Indian and a Mexican American, who were tied to posts because the Vietnamese were unsure about their status as 'Americans'. During an attack by Americans on the village where they were being held, the Mexican American soldier was killed by the shelling of their location but the Pueblo Indian escaped. Now a medicine man in his tribe, his experience has been recounted by others within his community (personal communication, 2008). On the several occasions when the authors mentioned the Pueblo veteran's story in discussions with villages in Southeast Asia, communities mulled over the moral meaning of the story, in which an Indigenous person was conscripted and then spared by another repressed group, later to serve his community. This potentially became a meaningful story of marginalisation and recognition across two Indigenous peoples.
As a primarily imagined community, we rarely heard of Moro in the Philippines and American Indians directly engaging each other as a physical community. One such occasion took place when a Moro Datu (chief) was awarded a fellowship for emerging leaders to come to the United States to broaden his understanding of the U.S. government and people. Organised through the Eisenhower Foundation, Datu Ibrahim 'Toto' Paglas, from the island of Mindanao, visited officials and communities throughout the United States in 2005, accompanied by the authors. In addition to trips in Washington DC, New York and California, his itinerary took him to meet Navajo communities in Utah and Arizona. Datu Paglas spoke with intensity about his experiences on the Navajo Reservation and with the communities who hosted him. He described the similarities in the plight of American Indians and the Moro. The concept of the 'reservation' seemed of particular salience to Datu Paglas. Reservations do not have direct correlates with the political units of Muslim communities in the South Philippines, which are geographically associated with Muslim tribes instead of having defined rights and locations. However, in discussions with Datu Paglas, he reflected on the sense of his community and culture being diminished geographically as well as 'trapped'. Datu Paglas also admired the continuity and diversity of political powers of American Indian tribes and wished that such authority existed in his own community. Unlike American Indian tribes, power rooted in longstanding cultural norms among Moro communities is not recognised by the Philippine government.
Datu Paglas and other Moros' identification with American Indians was not absolute in the context of the American 'West'. Other symbols and culture of the American 'Wild West' are highly visible in the South Philippines today. Cowboys are associated with masculinity and ruggedness. Datu Paglas preferred to dress in Western wear with jeans, Concho belts, cowboy hats and boots. When required to wear formal attire for meetings in the United States, he would wear suits that had leather embroidery on the shoulders, appearing somewhat similar to the Lone Ranger. The local shipping company that he started was called 'Cowboy Transportation'. In the Southern Philippines, the association with cowboys and masculinity was stronger than in other parts of the country. Forms of comportment fetishes (Orr, 2012) and displays of masculinity could be found in the form of carrying guns in holsters like cowboys throughout the Muslim Philippines. Masculinity, cowboys and violence were strong components of 'minimal alliance groups' in the Sulu Archipelago that behaved, according to their own definition, something like outlaws in the American West. The double identification of being both victimised and empowered also tethered an imagined community between Moro and the Wild West of cowboys and Indians [4].
Indigeneity, race and material culture: Papuans and American Indians
Although there is a strong identification in experience with American Indians among Moro in the Philippines, Moro communities do not emphasise racial differences between themselves and the people of the Northern Philippines, whom they consider their contemporary colonisers. However, ethnic and racial differences are central to how Indigenous communities in other parts of Southeast Asia identify with American Indians in the context of hegemony. The Indonesian archipelago consists of approximately 17,000 islands, stretching a distance wider than the continental United States. Although Bahasa Indonesia (a variant of the Malaysian language that was historically used as a trade language in the region) is the national language, over 300 different languages are used by different communities in the country. There are major ecological and social divisions splitting Indonesia. These differences were first written about by Westerners as part of scientific expeditions in the area. The English botanist Alfred Russel Wallace, in the 19th century, attempted to codify racial differences in what is now Indonesia in the same way that he divided the region's flora and fauna (see Vetter, 2006, for a description of Wallace's epistemology that attempted to cover botany and human cultural groups). Wallace's organisation drew from 'types' that were found on mainland Asia compared to those common in Oceania, with islands in the Indonesian archipelago representing a gradual transition between the two. This distinction also marks a change from Austronesian to Papuan and Melanesian languages and cultures. The Austronesian migration began approximately 5,000 years ago from Taiwan or Southern China (Lansing et al., 2011), and communities that descend from them are phenotypically lighter in complexion than surrounding communities. This physical or racial distinction impacts how marginality, and thus identification with American Indians, is constructed.
While working as a consultant in conjunction with a development and health NGO in the Indonesian province of West Papua, the lead author worked with Indigenous tribal communities. Although part of the Indonesian Republic, West Papua is part of the island of New Guinea and its population often considered themselves colonised by Indonesia (Rutherford, 2012). Although the social fault lines between Papua and the political centres of Indonesia such as Java included religion, it was not the sole or central means through which Papuans described their status as Indigenous people or their distinction from Indonesia (Mote & Rutherford, 2001). The Papuan communities with whom the lead author worked were both negotiating and resisting engagement in national and global political and economic forces. Their interaction with what they viewed as colonial processes centred on an active mine in the region that brought international and national migration and investment beginning in the 1970s. During this period, there were several labour disputes and strikes by contractual workers who were mostly from local Papuan communities. These strikes were contextualised by workers and the union not only in terms of labour rights and fair treatment but also within the framework of international Indigenous rights. During periods of strikes, they would ask for assistance and recognition from U.S. government officials from the territories in the Western Pacific whom they considered to be sympathetic to their plight as Indigenous Pacific Islanders.
Indonesian Indigenous communities often conceptualise colonialism as a double process (Tajima, 2014). The original colonial period took place during European expansion into the region, which was followed by Indonesian 'independence' after World War II. However, the national period is often seen as an additional colonial period under the control of the island of Java, where the majority of the population resides. This centralisation of power in a distant region was, for many Indonesians, exacerbated by the 53 years of dictatorships following independence. This centralisation also conceptually overlapped with the physical characteristics of Javanese people, who are considered to be of lighter complexion than non-Javanese (Prasetyaningsih, 2007).
The combination of a centralised and racialised hierarchy in Indonesia is how many Indonesian communities both experience contemporary colonialism and relate to other such groups, including American Indians. An aspect of this difference between hegemonic Javanese and non-Javanese is expressed in the concept of 'indigeneity'. In the Indonesian language, the concept is expressed as orang asli, which directly translates as 'original person' (orang = person, asli = original). Although Indonesian is a second language to almost all of its speakers, and command, grammar and vocabulary differ greatly throughout the country, orang asli is a recognised and understood concept in most regions of Indonesia. Despite the fact that the ability to speak Indonesian decreases as one moves further from the centre of power in Java, the importance of orang asli as a concept increases, and thus it is perhaps more widely used the further one is from Java. The term is applied to Indigenous communities outside of Indonesia and Southeast Asia as well. It is used to describe American Indians of both North and South America, though it is less commonly used to describe people in Africa or East Asia. Even though there are Indigenous communities in these areas, orang asli is selectively used by Indonesians to describe North American and Southeast Asian Indigenous communities. The reasons for the selective use of orang asli are not completely understood. The role of American Indians in popular media, especially film, has probably contributed to this. This imagined global community of Indigenous people is set against the more regional concept of bumiputra in Malaysian and pribumi in Indonesian. At least linguistically, this adds a complication to a ready aggregation of Indigenous community from local to global. Both terms are derivations of the Sanskrit words bumi (earth, soil) and putra/pri (prince, son). Thus the terms translate into something similar to native or Indigenous person, akin to orang asli. However, bumiputra and pribumi do not connote a global sense of 'Indigenous'. Instead, they are linguistic stand-ins for being of, or descendants of, Southeast Asian culture distinct from foreign cultures. In Malaysia, the Constitution (Article 153) emphasises the importance of Islam for defining bumiputra, yet it also allows non-Islamic Indigenous communities in Eastern Malaysia into this status. More broadly, in Indonesia pribumi and bumiputra are used to distinguish Southeast Asian culture and communities from longstanding Chinese and Indian communities who have resided in Southeast Asia. The terms are also used to signify the Indonesian or Malay elements within a syncretic cultural system that includes Middle Eastern, South Asian and East Asian components. These terms, in contrast to orang asli, reflect an insular perspective on being Indonesian or Malay apart from others, rather than using indigeneity as a link with other cultural groups. This difference in meanings is apparent in interactions with communities in Indonesia. While speaking in Indonesian, the lead author described his background as orang asli from the United States. Communities understood that this meant 'American Indian' and accepted that the term was used outside of Indonesia, but when pribumi replaced orang asli, informants found it amusing. They explained that although the term means 'son of the soil', it is highly Southeast Asian in meaning and applicability (Balasubramaniam, 2007).
In the communities of West Papua, the distinction between Javanese (Austronesian) and Papuan (Melanesian) phenotypes was central to how indigeneity was understood. Papuans described how they were stereotyped in broader Indonesian models of appearance that favoured lighter skin. The abundance of personal care products such as soaps containing bleach or keputian (whitening) agents, and the advertisements for these products, supports Papuan claims. Additionally, Indonesians, like other Southeast Asians, avoid sun contact because of the potential darkening of the skin [5]. Discrimination based on outward appearance is how Papuans describe the experience of being American Indian. Interpretations of old Western films follow this line of thought (Kelly, 2017). According to Papuans, 'dark' Indians were treated poorly by 'white' military officers. Papuans also described depictions of wars or battles between American Indians and European settler populations through the idiom of berburu (hunting) rather than berperang (to wage war), though we failed to clarify whether the accounts mentioned were from cinema or based in historical accounts. This is also how they described their longstanding conflict with the Indonesian government and, in particular, its military, whom they describe as 'hunting' Papuans. This diverges from how they describe violence between two Papuan tribes, which is given the term perang suku (tribal war). The use of berburu in Papuan parlance follows a pattern suggesting that they see hunting as existing between two groups in which one has a considerable technological advantage over the other. Moreover, they use 'hunting' to describe an ongoing and culturally accepted attack on a group of people to remove them from land or extinguish their existence, which might also be coterminous with 'genocide'.

The technological difference between broader Indonesian societies and Papuan communities has also been a source of imagined community shared by American Indians and Papuans. Although technological and social gulfs have been produced in settler societies to legitimise colonial activities (Williams, 2012), such narratives of perceived backwardness can also become symbols connecting marginalised communities. Traditional Papuan comportment has involved the covering of men's genitalia with a gourd or grass skirt, and women's garments were comprised of grass skirts and beaded necklaces. The absence of livestock from which hides or wool could be produced, as well as few traditional trading opportunities with groups that had cloth, meant that Papuans had relatively less covering than other Indonesian communities. This lack of clothing is often conflated with backwardness in Indonesia and is similar to how the poverty and backwardness of American Indians was once referred to as being a 'blanket-ass Indian' (Orr, 2017, p. 105). Additionally, men carry bows and arrows and machetes in rural regions of New Guinea. This is also associated with primitiveness by Indonesians outside of Papua. For instance, while the lead author accompanied tribal leaders from Papua to Jakarta, they were routinely asked during their stay if they wore clothing while at home or carried bows and arrows. Indonesians were also surprised that Papuans spoke the Indonesian national language. Often, Javanese who asked such questions confided in the lead author that they thought Papua was still backward (terbelakang).
Aspects of comportment were also ways in which Papuans identified with American Indians as Indigenous and marginalised communities. While in Papua, the lead author worked with school programs in which students learned about the outside world. Elementary and secondary school students, upon finding out that the lead author was an American Indian, would talk about the panah (bows and arrows) and feathered headdresses that were common in depictions of Papuan people. Papuan students could even reproduce what is stereotypically thought of as the 'war cry' of American Indians by moving their hands over their mouths while vocalising. These were highly salient connections they had between their own culture, which is known for such comportment, and elements of American Indians. Dances, war cries and feathered displays are also how Papuans are depicted culturally in Indonesia. This is a cultural form that they embrace as a contribution of their society to larger cultural forms in the region. Some also identify their production of large carvings out of a single tree as 'totem poles' and recognise that they are also produced among American Indians. While many Papuans described characteristics of American Indian cultures, distinctions between tribes, which are significant, were less salient to them. They recognised that American Indians, like Indigenous Papuans, were members of different tribes that varied in characteristics, but they focused on broader cultural assemblages. Despite Papua's reputation as being remote and isolated within broader Indonesian society, a number of Papuans the lead author knew had visited the United States and Canada. They typically did so through church groups that funded and organised their visit. Because of the nature of these organisations, much of their trip involved religious activities, including community service and outreach. Several of these visits had components in which recognition of underserved and marginalised communities among American Indians in rural regions of the Northwest took place. Papuans who returned from these trips described both cultural similarities (art, dance, bows and arrows) and sympathy for the marginalisation American Indians also experienced.
Conclusion
American Indian communities have proven socially and politically meaningful to those who directly or indirectly encounter them since some of their earliest contact with communities outside of the Americas (Elliott, 1970). The depth of their impact on the intellectual, cultural and political development of European society remains the subject of contemporary interest (Kupperman, 1995). They maintain a cultural reference point for broader discussions of society, power and politics (Chiappelli, 1976), and such relevance extends to other Indigenous peoples, to form an imagined transnational Indigenous community.
In this article, we have presented how, in seemingly isolated areas in Southeast Asia (Ricklefs, 1969), an imagined community has emerged based on a number of characteristics. Depending on the context of Southeast Asian communities' interactions with outside entities, such as the state or other hegemonic actors, elements of American Indian experiences became more salient for how continuity in this imagined community was constructed. Whether or not the source of their identification with American Indians was found in material culture, race or resistance to colonialism, Southeast Asians focused on the marginality of American Indian communities, which they constructed as similar to their own.

Reference

Wood, L. (1904). Diary of Leonard Wood. Library of Congress. PR 13 CN 2007:040.

Notes

1. The Age of Exploration is generally defined as the exploration of Africa, Asia and North and South America by Europeans from the beginning of the 15th century to the end of the 18th century (Boorstin, 1985).

2. As of 2017, Peasants into Frenchmen has been cited 4,919 times and Imagined Communities has been cited in 85,384 works.

3. One such example can be found within the Philippine Independence Movement, in which José Rizal referred to himself and one of his colleagues as Indios Bravos after reading of the brave Indians in the Wild West show that travelled through the United States and Europe at the end of the 19th century (Delmendo, 2004).

4. Shively's account (1992) of why cowboy material culture is prominent among American Indians might offer an explanation as to the double identification with both cowboys and Indians in Southeast Asia. After showing Anglo Americans and American Indians the same 'western' or 'cowboy' film that both groups generally enjoyed, Shively (1992) found that American Indians identified with the themes of autonomy, freedom and relationships to the land in the film, whereas Anglo Americans enjoyed their identification with the imposition of values onto new territory. This suggests the possibility that symbols that appear to represent contradictory or antagonistic groups may be disaggregated to allow for coherent appeal. A cowboy hat, boots or belt might represent a sense of independence that is congruent with values of autonomy associated with indigeneity.

5. Because of their relatively darker skin tone and feeling of being discriminated against, Papuans also deeply identify with African American communities and culture. Papuan men often dress in clothing similar to Rastafarians in Jamaica. They show great interest in African Americans in cinema and broader culture. The most elated the lead author saw a Papuan village was during the visit of an African American man. This is in contrast to how many Africans and African Americans are treated on the island of Java. See the account of Barack Obama's childhood in Indonesia in 'The Real Story of Obama's Mom' in The Atlantic, April 20, 2011.
Corneal Spheres derived from Human Embryonic and Human Pluripotent Parthenogenetic Stem Cells
Corneal blindness is common. Cornea transplants are the most commonly performed organ transplants, but the need for corneal grafts worldwide far outweighs the supply of healthy donor corneas. Here we describe a differentiation protocol that yields corneal orbs from human embryonic stem cells (hESC) as well as from human pluripotent parthenogenetic stem cells (hpSC), and therefore can be manufactured free of transmissible pathogens. Cornea and other tissues generated from parthenogenetic stem cells that are homozygous at HLA loci have a distinct immunologic advantage over fully allogeneic grafts, and this report is the first to describe multilayered cornea generated from hpSC. The differentiated corneal product is layered and anatomically similar to normal human cornea, expresses appropriate corneal markers at the mRNA and protein (and secreted protein) levels, and is permeable to topical ophthalmic drugs. This 3D stem cell-derived cornea is a foundational step in development of appropriately organized, functional corneal grafts from hESC and hpSC for use in in vitro assays as well as regenerative therapies.
Introduction
The cornea is more than a protective shield; it also serves as an external lens and is therefore critical for refraction and normal vision. An estimated 8 to 10 million people worldwide are blind from corneal disease [1]. In developed countries, corneas are the most common 'organ' transplants, but a significant unmet medical need for corneal grafts persists. Fully artificial corneas have helped significant numbers of patients with severe corneal disease or injury, but do not meet the medical needs of most patients requiring corneal grafting, and so have limited impact on the need for donor corneas [2]. Stem cell-derived corneas, free of transmissible pathogens, represent a promising and safe alternative to cadaveric grafts, provided they can be generated cost-effectively, with function (including barrier, absorption, refraction) equivalent to that afforded by cadaveric grafts. Here we describe a novel multilayer corneal structure generated from human pluripotent stem cells, a first step in the goal of developing a full-thickness corneal graft from a single differentiation protocol.
The cornea is the clear (transparent) front part of the eye and comprises three main cellular layers: an outer non-keratinized stratified epithelium, a middle stromal layer containing a hydrated extracellular matrix (ECM) with scattered corneal fibroblasts (keratocytes), and an inner, single-cell endothelial layer. Each layer is separated by a distinct, specialized basal lamina. Bowman's layer is most anterior, located between the epithelium and the stroma. Descemet's membrane separates the stroma and the endothelium, serving as a basement membrane for the corneal endothelium [3]. Advances in corneal engineering techniques have recently emerged as another important alternative to overcome the limited availability of 3D corneal tissue for transplantation [7,29]. To this end, bioengineered corneas to replace partial or full-thickness corneal defects have recently been developed as an alternative to cadaveric grafts [6,7], and an acellular collagen-based cornea was recently tested in a Phase I trial [8]. Decellularization of animal corneas is another promising method for the development of artificial human corneas by tissue engineering [37].
Adult human cornea harbors resident stem cells in the limbus (between the cornea and conjunctiva) [reviewed in 4], and these limbal stem cells mediate regeneration and wound repair of the corneal epithelium throughout adult life. Recent success in transplanting (autologous, adult) limbal stem cells for corneal burns [5] points to the importance of stem cell biology for therapy of corneal disease. Limbal stem cell transplantation, both autologous and allogeneic, is a major stem cell success story and is now available in many centers, but is not useful for diseases where more than corneal epithelium is compromised; complete 3D-corneas containing all cellular compartments of the cornea are still a necessity for treatment of the many patients with corneal blindness.
Derivation of human pluripotent embryonic stem cell (hESC) lines and recent advances in hESC biology have generated great interest in the field of stem cell-based cornea engineering [9][10][11]. Cell culture technologies that expand undifferentiated hESCs and subsequent methods to direct differentiation into corneal-like cells are improving and open up the possibility of producing corneal constructs for use in cellular therapy. Corneal epithelial cells have been produced by culturing hESC on collagen IV using medium conditioned by limbal fibroblasts [12].
Our focus here is on differentiation of a whole cornea from pluripotent stem cells generally, but particularly human parthenogenetic stem cells or hpSC. hpSC are morphologically similar to hESC, express the same pluripotency markers, have high levels of alkaline phosphatase and telomerase activity, and give rise in vitro and in vivo to derivatives of all three embryonic germ layers. Unlike hESC, hpSC cannot differentiate into extraembryonic tissues [38,39]. Different activation techniques during isolation of hpSC from unfertilized eggs allow creation of either heterozygous hpSC that are genetically identical to the oocyte donor, or HLA homozygous hpSC which are immunologically less complex [13][14][15]. As such, derivatives of these homozygous lines transplanted in appropriate donors will be less immunogenic than traditional cadaveric allogeneic grafts. This report represents a first but significant step toward generation of sterile, functional, less immunogenic 3D corneal grafts for use in regenerative therapies.
To initiate differentiation, the stem cells were allowed to grow for approximately 10 days (with medium replacement every other day) in a humidified 5% CO2 incubator. When colonies formed (morphologically similar to burst-forming units with sharp edges), both LIF and bFGF were removed from the medium, and on day 17, these differentiated cultures were passaged with Collagenase Type IV (890 U/ml; Lifeline Cell Technology) and transferred to deep 6-well plates (Greiner Bio-One, Frickenhausen, Germany) for further differentiation under low-attachment conditions. The cultures were maintained for an additional 100-120 days in a humidified atmosphere of 5% CO2/95% air, with medium added twice a week, during which the dome-like colonies grew in size and became fluid-filled clear spheres. The spheres break off from the adherent cells and float freely; these were retrieved from the medium for characterization.
Histology and immunocytochemistry
Ten corneal orb constructs derived from hESC or hpSC were used for this study. For histological analysis the corneal orbs were cut in half, each half measuring approximately 5 x 10 mm. One half was fixed for 12 hours in 10% buffered formalin, embedded in paraffin and sectioned to a thickness of 5 µm (small size hpSC-derived orbs were fixed in their entirety). Sections were stained with hematoxylin and eosin (H&E), Masson's trichrome, or periodic acid Schiff (PAS) using standard staining protocols. Sections were also stained for cytokeratins, using primary AE1/3 antibody (Chemicon) for pan-cytokeratins.
Preparation of human cornea samples
Non-diseased banked human corneas (n = 2) were obtained from San Diego Eye Bank. The eyes were procured and processed using criteria established by the Medical Standards of the Eye Bank Association of America and the U.S. Food & Drug Administration.
Real-time quantitative PCR (RT-qPCR)
Stem cell-derived corneal constructs were examined for expression of normal human cornea genes using real-time quantitative PCR (RT-qPCR). Total RNA was extracted using the QIAsymphony automatic purification system, according to the manufacturer's instructions (Qiagen, Valencia, CA), and 100-500 ng total RNA was used for reverse transcription with the iScript™ cDNA synthesis kit (Bio-Rad). PCR reactions were run in duplicate using 1/40th of the cDNA per reaction and 400 nM forward and reverse primers or the QuantiTect® Primer Assay, together with QuantiTect SYBR® Green master mix (Qiagen). Real-time PCR reactions were run on the Rotor-Gene® Q (Qiagen). Relative quantification was performed against a standard curve and normalized to the expression levels of one of the following housekeeping genes: cyclin G (CYCG), beta-glucuronidase (GUSB) or TATA box binding protein (TBP). After normalization, the samples were plotted relative to the first sample in the data set, and the standard deviation of the expression measurements was calculated. The sources of the primers are shown in Table 1. RNA isolated from human donor cornea was used as a comparator for gene expression of stem cell-derived corneas.
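The standard-curve quantification and housekeeping-gene normalization described above can be illustrated with a minimal sketch. The function names and the assumption that each gene's own dilution series (Ct values and log10 input quantities) is available alongside the sample Ct values are ours for illustration; they are not part of the original protocol or the Rotor-Gene software.

```python
import numpy as np

def quantities_from_standard_curve(sample_ct, standard_ct, standard_log10_qty):
    """Fit a linear standard curve Ct = m*log10(quantity) + b to a dilution
    series, then invert it to estimate quantities for the sample Ct values."""
    m, b = np.polyfit(standard_log10_qty, standard_ct, 1)
    return 10 ** ((np.asarray(sample_ct, dtype=float) - b) / m)

def relative_expression(target_ct, housekeeping_ct, target_std, housekeeping_std):
    """Normalize target quantities to a housekeeping gene (e.g. GUSB or TBP),
    then express each sample relative to the first sample in the data set.

    target_std and housekeeping_std are (ct_values, log10_quantities) tuples
    from each gene's own dilution series."""
    target_q = quantities_from_standard_curve(target_ct, *target_std)
    ref_q = quantities_from_standard_curve(housekeeping_ct, *housekeeping_std)
    normalized = target_q / ref_q
    return normalized / normalized[0]  # first sample set to 1.0
```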
Drug permeability of stem cell-derived corneas
Corneal disease is often treated with topical drugs, possible because the normal cornea is permeable and allows rapid absorption [reviewed in 16]. The goal of these first permeability studies was qualitative: to test whether compounds with known high and low absorption behaved as expected relative to each other when applied to the differentiated corneal orbs. To test permeability function, intact corneal orbs were incubated with test compounds (10 μM) at 37°C for 30 min. Two model compounds were used, atenolol and antipyrine, as low- and high-permeability reference compounds, respectively. After incubation, the fluid inside the orbs was carefully collected by needle aspiration and used to measure drug concentrations by LC-MS/MS. The apparent permeability coefficient (Papp), a parameter commonly used to express in vitro permeability across a cell monolayer or tissue, was calculated as Papp = Q/(C0 x A x T), where Q is the amount of drug accumulated in the orbs, C0 is the applied drug concentration, A is the surface area of the orbs, and T is the incubation time.
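The calculation itself is straightforward; the sketch below simply restates the Papp formula above in code. The example numbers (amount recovered, orb surface area) are hypothetical values chosen for illustration and are not measurements from this study.

```python
def apparent_permeability(q_accumulated_nmol, c0_nmol_per_ml, area_cm2, time_s):
    """Papp = Q / (C0 * A * T).

    Q  : amount of drug accumulated inside the orb (nmol)
    C0 : applied drug concentration (nmol/mL, i.e. uM)
    A  : orb surface area (cm^2)
    T  : incubation time (s)
    Returns Papp in cm/s, since nmol / (nmol/cm^3 * cm^2 * s) = cm/s.
    """
    c0_nmol_per_cm3 = c0_nmol_per_ml  # 1 mL == 1 cm^3, so the numeric value is unchanged
    return q_accumulated_nmol / (c0_nmol_per_cm3 * area_cm2 * time_s)

# Hypothetical example: 10 uM applied for 30 min to an orb with ~3 cm^2 surface area.
papp = apparent_permeability(q_accumulated_nmol=0.05, c0_nmol_per_ml=10.0,
                             area_cm2=3.0, time_s=30 * 60)
print(f"Papp = {papp:.2e} cm/s")
```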
Determination of growth factor levels in differentiating cornea conditioned medium
Commercial TGF-β1 and EGF ELISA immunoassay kits (Invitrogen) were used for quantitation of these factors in conditioned medium from cultured corneal constructs at different stages of differentiation. The conditioned medium was diluted 1:1 with serum-free DMEM and assayed according to the kit manufacturer's protocols.
Differentiation and morphology of stem cell-derived corneal orbs
The typical early appearance of cells that differentiated into corneal orbs was as small, tightly packed colonies in which individual cells were round, in comparison to the spindle-shaped feeder layer cells (Figure 1A). Often a few healthy stem cell colonies were noticeable the first day after low-density inoculation, and these became more heterogeneous in shape, containing flattened cells with epithelial morphology, attached to the culture dishes (Figure 1B,C). After 5-7 more days of differentiation, the colonies became multilayered, dense clusters in which individual cells were indistinguishable by light microscopy. In some colonies, a distinct clear spherical (orb-shaped) dome was observed, indicating the formation of a more mature cornea-like structure (Figure 1D). On day 17-19 the colonies were subcultured onto deep well plates under low-attachment conditions, and within the next 3-4 weeks, freely floating pellucid spheres (corneal orbs) could be observed under a dissecting microscope. The orbs could be seen by eye at approximately 50 days of culture, with diameters of 1-2 mm (Figure 1E). Spheres with diameters as large as 9-15 mm developed over 120 days (Figure 1F,G); however, the orbs generated from hpSC were significantly smaller and reached diameters of about 4-7 mm. Orbs were generally very fragile until they reached 8-10 mm in diameter, when they could be manipulated without obvious damage. All orbs were translucent and fluid-filled.
Histological analysis demonstrated that the spheres have interior and outer layers similar in arrangement to normal human cornea. These first histological studies indicated that the epithelium of corneal constructs began as a single layer (Figure 2A) and progressively developed additional layers (Figure 2B), following normal developmental patterns. This epithelial layer was PAS-positive, indicating glycogen or other polysaccharide content (Figure 2C). PAS stains are used in pathologic evaluation of corneas to visualize the corneal epithelial basement membrane (and Descemet's membrane). Histologically, the largest layer of the corneal orbs was the stromal layer, morphologically consistent with fibrous tissue. Masson's trichrome staining (Figure 2D) shows that the stromal layer contains collagen fibrils (blue) that were heterogeneous in diameter, some disorganized fibrin strands (pink), and the subepithelial fibrotic membrane, which stained red. (Retention of the red in trichrome staining may indicate a poorly permeable structure.) Cytokeratin staining (Figure 2E) was used to further analyze the corneal orbs (using a pan-cytokeratin AE1/3 antibody). Flattened (pan-cytokeratin-negative) cells were found dispersed between the collagen fibrils, consistent with corneal fibroblasts or keratocytes. The most intense staining was in the superficial epithelial layer (seen at higher power in Figure 2E,F). In addition, in some orbs we identified a thin membrane layer (PAS-negative), between the outer layer of epithelium and the large stromal layer, suggestive of a developing Bowman's membrane, but this membrane was not fully formed and was not present in all orbs. The innermost surface of the orbs, where endothelium should be, was a single-cell layer with no evidence of keratinization (by pan-cytokeratin staining). Routine hematoxylin and eosin (H&E) staining was consistent with an endothelium, but this stain is non-specific.
Overall, histologic evaluation of these novel stem cell-derived orbs indicated significant structural similarity to human cornea, though a distinct endothelium could not be identified. In addition, cultivation of the orbs at an air-liquid interface resulted in thickening of the epithelial layer (comparing Figures 2E and 2F), similar to the increased stratification of human corneal epithelium during development at the time of eyelid opening [17,18].
Immunocytochemistry of cornea-like constructs confirmed expression of typical cornea proteins, and normal adult corneas were used to validate the staining pattern. Stem cell-derived corneas expressed mucin-1, a marker of corneal stratified epithelium (and mucin is a component of tears), throughout the epithelial layer (Figure 3A; normal cornea, 3B), and the gap junction protein connexin-43 (Figure 3C; normal cornea, 3D), most strongly in the suprabasal (central) epithelial cells. Cytokeratins (using a pan-cytokeratin antibody, Figure 3E; normal cornea, 3F) were also expressed, as expected. Specific cytokeratin expression was also examined, and the orbs expressed cytokeratin 19 at low levels (Figure 3G; normal cornea, 3H) and cytokeratin 18 (Figure 3I), for which staining was intense only in the superficial layers of the corneal orbs. Strongly positive staining for the tight junction protein ZO-1 was found in all studied specimens, with appropriate subcellular localization at cell boundaries suggesting the formation of tight junction complexes, a feature of endothelial cell differentiation (Figure 3J). ZO-1 and connexin-43 staining patterns in the corneal orbs and normal human corneas were very similar, suggesting maturation of intercellular junctions in stem cell-derived orbs. Vimentin staining was present in all orbs at low levels, restricted to some stromal cells and the cell layer at the normal anatomic site (innermost) of corneal endothelium (Figure 3K). Stratified epithelial cells were vimentin-negative.
Gene expression of corneal markers
RT-qPCR assays were used to compare corneal gene expression in fully developed human cornea vs. stem cell-derived corneal orbs. Several genes known to function in all three major layers of the cornea were chosen for study (Figure 4). Expression of collagen 5 (Col5), decorin, biglycan, lumican, keratocan, CK8, and CK18 was significantly higher in stem cell-derived corneas than in adult cornea in all samples analyzed. The corneal orbs also expressed many characteristic markers of human corneal epithelium, including importin 13, integrin alpha 9, connexin-43, and enolase-α. Increased expression (vs. adult cornea) of ZO-1, confirming impressions from immunocytochemistry, was also noted in all stem cell-derived corneal orbs.
Drug permeability of corneal orbs
Permeability and transcorneal transport of drugs are critical processes in the pharmacologic treatment of eye disease. To evaluate these functions we performed a small pilot study with two drugs normally delivered topically to the eye. From these studies the apparent permeability coefficient (Papp) of the beta-blocker atenolol was calculated to be 2.60 ± 1.55 (mean ± standard deviation) and the Papp of antipyrine was 15.5 ± 13.1. Thus, the low- and high-permeability drugs were transported as expected relative to each other by the stem cell-derived corneas.
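The apparent permeability coefficient is conventionally computed as Papp = (dQ/dt)/(A·C0), where dQ/dt is the steady-state flux of drug appearing in the receiver compartment, A the exposed tissue area and C0 the initial donor concentration. The sketch below assumes that conventional formula; the flux, area and concentration values are hypothetical, and the paper does not report its raw diffusion data or the units of its Papp values.

```python
# Minimal sketch of an apparent permeability (Papp) calculation, assuming the
# conventional formula Papp = (dQ/dt) / (A * C0). The flux, area and donor
# concentration below are hypothetical; the study does not report these raw values.

def apparent_permeability(flux_umol_per_s, area_cm2, donor_conc_umol_per_ml):
    """Papp in cm/s: steady-state flux (umol/s) divided by the exposed area (cm^2)
    and the initial donor concentration (umol/mL = umol/cm^3)."""
    return flux_umol_per_s / (area_cm2 * donor_conc_umol_per_ml)

# Hypothetical example for a drug crossing a corneal construct:
papp = apparent_permeability(flux_umol_per_s=2.0e-6, area_cm2=0.5,
                             donor_conc_umol_per_ml=1.0)
print(f"Papp = {papp:.2e} cm/s")
```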
Determination of growth factors in differentiating cornea conditioned medium (ELISA)
Corneal-derived EGF has long been known as an important factor in corneal epithelial regeneration [reviewed in 19], while TGFβ1 has pleiotropic effects in the cornea, including maintenance of the stromal layer [20]. Before ninety days of differentiation, neither EGF (in 4/4 samples) nor TGFβ1 (in 2/2 samples) was detectable using ELISA assays. After that time point, 3 of 4 samples had detectable EGF; the concentrations were 4.3, 0.8, and 2.2 pg/mL (at the lower limits of detection of the assay). TGFβ1 was detectable in 3 of 5 samples after 90 days of differentiation; the concentrations were 9.8, 20.6, and 2.8 pg/mL (at the lower limits of detection of the assay).
Discussion
The 3D corneal constructs generated in this work emerge as spheres from pluripotent stem cells with several important features of normal cornea, including the basic anatomic layering, gene and protein expression patterns, rapid permeability to ophthalmic drugs, and no obvious opacity. This 3D structure represents a significant advance, since generation of functional mini-organs from pluripotent stem cells is very unusual. When induced to differentiate without engineering scaffolds or support, differentiation usually produces random collections of cells in culture, without functional organization. In the case of cornea, the anatomic organization is especially notable, since during normal corneal development the stroma is derived from neural crest (migrating into the periocular mesenchyme; Hay 1979), whereas local ectoderm (adjacent to the lens) becomes corneal epithelium (Cintron); yet both layers were generated in a single differentiation protocol from pluripotent stem cells. Under the conditions we describe, generation of the epithelial layer in the corneal orbs appears to be robust by histology and by functional manipulation of the epithelium at an air-liquid interface, supported by expression of multiple developing epithelial genes (cytokeratins 8, 10, and 18, mucin-1, connexin-43, and enolase-α). Generation of the large stromal layer is also consistent, as seen histologically, by PAS staining, and by gene expression (decorin, lumican, biglycan, and keratocan). The stroma appears to be quite differentiated and lacked expression of the early stromal marker Pax6. Some optimization is likely required for the endothelial layer, which may require longer differentiation (or an added differentiation step) to develop. In some corneal orbs we identified cells histologically that were appropriately localized and shaped for endothelial cells. Vimentin and zona occludens staining (expressed during endothelial development) also indicate some endothelial cells in the orb interior, but we cannot yet determine if endothelium is reliably incorporated into the corneas at 100-120 days of differentiation. We have not yet quantified the number of limbal stem cells in the differentiating corneas, but their presence is suggested by the expression of importin-13 (IPO13), characteristic of corneal epithelial progenitor cells. In normal cornea, IPO13 is uniquely expressed by human limbal basal epithelial cells and plays an important role in maintaining the progenitor phenotype and high proliferative potential [21].
In a small pilot study we examined permeability of the corneal orbs. The primary pathway of drug permeation from the surface tear fluid to the anterior chamber of the eye is transcorneal [22], and passage through the corneal epithelium is considered to be the rate-limiting step in the transcorneal penetration of most ophthalmic drugs. Our results suggested rapid corneal permeability of the two drugs tested, as well as the expected greater permeability of antipyrine over atenolol. Some secretory function of the corneas was confirmed at the later stages of differentiation. EGF, which stimulates corneal epithelial cell proliferation and migration [reviewed in 23], and TGFβ1, involved in maintaining corneal integrity in part by counterbalancing the stimulatory effects of EGF [reviewed in 24], were detectable in medium around the developing corneas. Both EGF and TGFβ1 promote keratocyte differentiation [25] and chemotaxis [26].
Though the generation of a 3D cornea from a single cell source rather than via multiple differentiation procedures is somewhat surprising and promising for clinical manufacture, the pluripotent stem cell-derived corneas will also require further manipulation to optimize them for preclinical studies, particularly to perfect the alignment of collagen in the stromal layer to assure optimal refractive properties [27]. Normal corneal fibroblasts produce collagen fibrils of uniform size and spacing, which are then orthogonally arranged in the stroma matrix [28], and changes to this pattern of organization result in loss of corneal transparency. Finally, though the corneal spheres appear to be transparent, their true transparency needs to be quantified by physical measures.
The differentiation process yields corneal orbs with sufficient integrity that we anticipate they can be manipulated in engineering studies and moved to other culture (or co-culture) environments for further differentiation. The orbs are also suitable, after some further development, for combination with various bioengineered corneal materials in a 'combined device-cell therapy' approach taken by other groups [6,29]. These are all research agendas we are currently investigating, to take advantage of recent progress in generating biosynthetic corneas. Among the most promising of these biosynthetic corneas, cross-linked collagen corneal grafts were implanted in a recent Phase I trial into ten patients (9 with keratoconus) who had intact corneal endothelium. The grafts induced endogenous re-epithelialization over the course of about a month and re-innervation starting at about a year, and supported normal tear formation, without prolonged immunosuppression and without pain. Most of the treated patients could tolerate contact lenses to improve vision, although vision was not as good as that of comparable patients who received full-thickness human allogeneic grafts [8].
The smaller size (though similar organization) of corneal orbs generated from hpSC compared to those generated from hESC will require that the protocol for hpSC be modified relative to the protocol used with hESC. We have reported other differences in growth and differentiation patterns between hpSC and hESC lines. For example, when a protocol that yields efficient (~80%) differentiation of hESC into definitive endoderm [30] is applied to hpSC, the efficiency is about half that of hESC, but pre-treatment of the hpSC with trichostatin A increases the efficiency (~75%) to that of hESC differentiation [31]. The yield of neural progenitor cells from hpSC is also less than from hESC, which may depend in part on differences in expression of extracellular matrix proteins necessary to hold neurospheres together during differentiation [32]. Adaptation of protocols that do not require neurospheres is therefore a rational choice for hpSC. Differentiation of retinal pigment epithelial (RPE) cells from hpSC and hESC, accomplished without an intermediate sphere-formation step, yields similar numbers of qualitatively similar RPE [32].
The focus on parthenogenetic stem cells as a source for corneas (and other tissues) is well justified by the clinical literature on the immunogenicity of allogeneic corneal grafts. Rejection of cornea allografts is a major cause of graft failure [33], and a history of rejection puts patients at high risk of future (second) graft loss [34]. HLA matching can improve corneal allograft survival [35] but is not routinely done, based in part on cost and because there is no global consensus on its value. Nonetheless, the major cause of graft failure in the first year after corneal transplantation is cell-mediated rejection [36]. A bank of HLA-homozygous parthenogenetic stem cells covering common HLA haplotypes in the population is, therefore, a potentially valuable source of transplantable cells that can be better matched to reduce the intensity and incidence of rejection.
In summary, we generated free-floating corneal orbs from human ESC, and smaller orbs from parthenogenetic pluripotent stem cells, and these corneas differentiate as a 3D layered structure similar to that of normal human corneas. Priorities for continued pre-clinical development of these corneas are engineering approaches to improve stromal collagen organization and developing variations in culture protocols aimed at increasing the size of hpSC-generated orbs. The clinical need for corneal grafts for blinding diseases, especially outside the U.S., is great, highlighting the importance of further research to optimize these potential stem cell-derived grafts.
A proposed unified framework to describe the management of biological invasions
Managing the impacts of invasive alien species (IAS) is a great societal challenge. A wide variety of terms have been used to describe the management of invasive alien species and the sequence in which they might be applied. This variety and lack of consistency creates uncertainty in the presentation and description of management in policy, science and practice. Here we expand on the existing description of the invasion process to develop an IAS management framework. We define the different forms of active management using a novel approach based on changes in species status, avoiding the need for stand-alone descriptions of management types, and provide a complete set of potential management activities. We propose a standardised set of management terminology as an emergent feature of this framework. We identified eight key forms of management: (1) pathway management, (2) interception, (3) limits to keeping, (4) secure keeping, (5) eradication, (6) complete reproductive removal, (7) containment and (8) suppression. We recognise four associated terms: prevention; captive management; rapid eradication; and long-term management, and note the use of impact mitigation and restoration as associated forms of management. We discuss the wider use of this framework and the supporting activities required to ensure management is well-targeted, cost-effective and makes best use of limited resources.
Introduction
Managing the increasing environmental and socioeconomic impacts from invasive alien species (IAS) is a great societal challenge for the twenty-first century. This is addressed by the Convention on Biological Diversity (CBD 2010) and the Sustainable Development Goals (UN 2015), which commit signatories to introduce measures that prevent the introduction and significantly reduce the impacts of IAS, and control or eradicate priority species. Management involves multiple actions at different stages in the invasion process (Wilson et al. 2017). Management is defined in the EU IAS Regulation as 'any lethal or non-lethal action aimed at the eradication, population control or containment of a population of an invasive alien species.' In the US, the legal definition of invasive species control is ''eradicating, suppressing, reducing, or managing invasive species populations, preventing spread of invasive species from areas where they are present, and taking steps such as restoration of native species and habitats to reduce the effects of invasive species and to prevent further invasions''. Thus, active management may prevent a potential IAS from entering a new area; if introduced, may remove it before it becomes widely established; and if it becomes widely established, may limit its impact by reducing spread and abundance. Management may also include impact adaptation without species intervention or environmental restoration after species removal.
To meet these targets, a shared understanding of the processes involved and their description is needed (Keller et al. 2011). Papers that define and standardise the terminology used to describe the invasion process (Blackburn et al. 2011), the biogeographical status of alien species, pathways (Hulme et al. 2008), risks (Roy et al. 2018) and their impact (Jeschke et al. 2014; Bacher et al. 2018) all support this objective. By contrast, a range of studies, legislation and policy documents use diverse terms to describe the different elements of IAS management. These may be internally consistent, but there is a lack of consistency between them, creating uncertainty in the presentation and description of management amongst policy makers, researchers, stakeholders and managers.
The diverse terms currently in use to describe management can be a source of confusion. For example, 'containment' can either refer to the controlled keeping of an IAS under captive conditions (Scott 2005;Dobson et al. 2013), or reducing the spread of a population in the wild (Grice et al. 2010). 'Eradication' is a widely used term defined as the complete and permanent removal of a population (Bomford and O'Brien 1995). However, this definition does not cover situations where a population has been removed from an area, but there still is a need for the ongoing management of dormant life stages such as seeds (Klimešová and Klimeš 2007;Panetta 2015), or the continued influx of dispersing individuals from neighboring areas (Robertson et al. 2019). Some terms are often linked to advice on how they should be applied, such as 'rapid eradication, removal or response'. While good advice, many successful eradications have been of long-established species (Keitt et al. 2011) and do not fit this description. Appropriate terminology is influenced by spatio-temporal scale, for example eradication from an individual site might constitute spread reduction at larger scales. The terminology of management needs to include direct reference to scale if it is to be meaningfully interpreted. This needs to be flexible enough to include scales varying from the continental, to individual political entities, to particular sites and will also be reflected in the definition of 'borders'. Non-standard terminology or descriptions which do not specify a particular scale make the literature on IAS management difficult to interpret (McGeoch et al. 2010;Latombe et al. 2017). Terminology that does not cover all possible forms of management also risks excluding or under-valuing possible management approaches.
A lack of clarity over terminology can also impact on the effectiveness of legislation. For example, in Iceland, a non-English speaking nation, only two terms are available to describe the management of established species, útrýming (eradication), and stjórnun (all forms of intervention). These terms are used in Icelandic legislation which provides financial support for the management of American mink (Neovison vison) (Stefansson et al. 2016). However, the broad definition of stjórnun reduces its effectiveness, resulting in subsidies for local suppression by hunters at an estimated cost of over $21 m since 1958 (Robert Stefansson unpublished data). While complete eradication of the American mink is unlikely to be feasible in Iceland, more focused use of terminology to define specific management objectives (Bryce et al. 2011) might support a more cost-effective use of subsidies. Defining management terminology, typically produced in English, in ways that can readily be translated into other languages will be of broader benefit.
A range of other methods are widely used to support active management, but do not in themselves involve any form of intervention. These include public education, raising awareness, early detection, monitoring, risk analysis which includes risk assessment, risk management and risk communication; contingency planning and cost-benefit analysis. While important to support effective management, the terminology of these approaches is not considered further in this paper, which limits itself to forms of active intervention.
We see a need for a comprehensive and common terminology with agreed definitions for active IAS management, particularly when these terms are included in legislation, international policies and guidance, the scientific literature or used to define or disseminate best practice. In this paper we propose solutions to these problems. In particular, we: (1) Provide examples of the key terms currently used to refer to the sequence of IAS management to illustrate the diversity of terms in use; (2) Develop a novel IAS management framework compatible with the widely used invasion process framework of Blackburn et al. (2011); and (3) Propose terms and definitions to describe the key elements of this framework.
Current use of terms
We reviewed legislation, guidance and scientific publications dealing with the management of IAS. From this, we identified examples describing terms used and the recommended sequence of management actions to respond to IAS during the invasion process (Table 1). This review was not comprehensive, and other terms have undoubtedly been used to describe other forms of management. However, this selection was intended to highlight the differences in usage and the need for greater consistency. Many sources include terms describing supportive methods such as monitoring, detection or assessment throughout. Other reports restrict themselves to terms describing forms of direct intervention. There was a broad consensus in the literature that prevention formed the initial objective of management. Eradication was also a commonly used term, but used in a range of contexts, including linkage to a rapid response, or as a separate term following this phase. A variety of other terms were used to describe the management of species where eradication is no longer practically feasible, including control, containment, removal, management, asset-based protection, suppression or long-term management. Mitigation often appeared at the end of this sequence, linked to terms such as rehabilitation or restoration, but also appeared as one of the initial management actions (McNeely et al. 2001). This variety of terms and sequences, drawn from a selection of policy and scientific documents, illustrates the problem of inconsistent terminology.
A proposed IAS management framework
We used the invasion framework described by Blackburn et al. (2011) as a starting point, as it has become the standard framework to conceptualize the invasion process. This describes a series of barriers that a species must overcome if it is to become a successful invader. The description of these six barriers is supplemented by four further terms, describing the stages of the invasion process (a copy of this is included as a component of Fig. 2).
To define the possible management actions, we made two additions to this framework. Firstly, we produced descriptions of species status, including the status before and immediately after it progressed through each of the six barriers (Table 2). Secondly, we added reference to a defined 'area of interest' to contextualise the description. Blackburn et al. (2011) also categorised populations based on the route by which the species arrived at a particular status but without a spatial component, limiting its usefulness as the basis to describe management.

Table 1 Examples of terms used to describe the sequence of IAS management actions (source: proposed sequence):
(2004): Prevention-early detection and eradication-control
Simberloff et al. (2005): Prevention-rapid response/eradication-control/containment-restoration/mitigation
Hulme (2006): Risk assessment-pathway and vector management-early detection and rapid response-eradication-mitigation and restoration
Pyšek and Richardson (2010): Prevention-detection and early response-long-term management
Richardson and Blanchard (2011): Prevention-eradication-containment-asset-based protection
IPAPF (2012): Prevention-eradication-containment-control-mitigation
CBD (2010): Prevention-early detection and rapid eradication-management
EU Regulation 1141/2014 (EU 2014): Prevention-eradication-containment-resource protection
Harvey and Mazzotti (2014): Prevention-removal-remediation-monitoring
van Wilgen et al. (2014): Prevention-eradication-control-monitoring
Hawkins et al. (2015): Prevention-eradication-complete removal-control
Robertson et al. (2017): …

Different forms of management can then be described by the effects they have on species status. Considering species status prior to management along with its desired status after management produces a matrix (Fig. 1) which describes 21 potential changes in species status and seven cases where management may maintain a species at a particular status. These 28 possible management actions, each described by a separate element of the matrix, are thus an emergent feature of the Blackburn et al. framework. This long list of management actions can then be summarised down to eight more generic terms to provide a pragmatic and consistent set of descriptions. In some cases, these terms apply to only a single element of the matrix, such as Interception; in others, the same management term applies to a range of elements, such as Eradication.
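To make the emergent structure of the matrix concrete, the short sketch below enumerates every ordered pair of the seven status categories (Table 2) in which the desired end status lies no further along the invasion sequence than the starting status; this reproduces the 21 possible changes plus the 7 maintenance cases described above. The code is purely illustrative and is not part of the original framework.

```python
# Illustrative enumeration of the management matrix: for the seven species-status
# categories (Table 2), every pair in which the desired end status is no further
# along the invasion sequence than the starting status is a potential management
# action. This yields 21 changes in status plus 7 maintenance cases = 28 elements.

STATUSES = ["No Risk", "In Transit", "In Captivity/Cultivation",
            "Surviving", "Reproducing", "Spreading", "Widespread"]

matrix = [(start, end)
          for i, start in enumerate(STATUSES)
          for j, end in enumerate(STATUSES)
          if j <= i]  # management moves a species back along the sequence, or holds it

changes = [(s, e) for s, e in matrix if s != e]
maintained = [(s, e) for s, e in matrix if s == e]
print(len(changes), "possible changes in status")   # 21
print(len(maintained), "maintenance cases")          # 7
```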
We mapped these management alternatives and their associated terms onto the invasion framework from Blackburn et al. (2011) (Fig. 2). Four further terms (Prevention, Captive Management, Rapid Eradication and Long-term Management) were also added to reflect the wider management groupings commonly used in legislation and guidance documents. These definitions are based on changing species status. However, there are cases where management may focus on the impacts associated with the presence of a species rather than the species itself, or deal with the environmental consequences of the removal of a species. To recognise these forms of active management that are not related to changing species status, we added two further terms, Impact Adaptation and Restoration.
Comparison with existing terminology and actions
This novel approach based on changes in species status has a number of advantages over previous definitions of individual management terms. The different forms of management are defined by the start- and end-points of the changes in species status, rather than requiring stand-alone definitions of their own. This obviates the need for complex definitions of often overlapping management terms, which has led to many of the current problems of interpretation. This approach also brings an element of completeness, as all possible changes in species status are included. In this section, we describe each management term used in our framework and compare it with other terms used in the literature. Table 3 provides a published example of each form of management. Table 2 describes the status of species, populations and individuals at the point at which they overcome the different barriers to successful invasion described by Blackburn et al. (2011), including reference to a defined 'area of interest' in each case.

Pathway management

To reduce the uptake of the species and its transport outside the area of interest. This can be defined as changing status from In Transit to No Risk, or maintaining a species as No Risk, with the objective of preventing or reducing the uptake or transport of individuals. Pathway Management is already widely recognised as a key element of IAS management (Hulme et al. 2008). Examples include measures to reduce the uptake of individuals, such as requirements for clean shipping materials and packaging prior to the shipment of goods; regulations such as the Ballast Water Management Convention (Werschkun et al. 2014); or the management of horticultural supply chains (Hulme et al. 2018).
Interception
To intercept individuals when they first enter into the area of interest. This can be defined as maintaining status as In Transit. This includes established processes of surveillance of imports and border inspections to intercept new arrivals. Accepted definitions include 'the detection of a pest during inspection or testing of an imported consignment' and 'the refusal or controlled entry of an imported consignment due to failure to comply with phytosanitary regulations' (FAO 2018).
Fig. 1 Matrix of the possible changes in species status following management at different stages in the invasion process. The rows describe the different categories of species status in the invasion process, ranging from 'no risk' to 'widespread', derived from Table 2. The columns represent the desired change (or maintenance) of status to be achieved following management. The elements of the matrix describe the appropriate form of management to achieve such a change. The colours represent related management types, defined in the associated key.
Captive Management
This is the overarching term to describe Limits to Keeping and Secure Keeping. These actions are rarely explicit in the current descriptions of IAS management (Table 1).
Eradication
To remove the entire population from the area of interest, with no immediate risk of re-invasion. This can be defined as reducing status from either Surviving, Reproducing, Spreading or Widespread to In Captivity/Cultivation or In Transit. Bomford and O'Brien (1995) provide a widely used definition of this term, 'The complete and permanent removal of all wild populations from a defined area by a time-limited campaign', which is compatible with its use in this framework.

Table 3 Example publications illustrating each of the management types described in Fig. 1 (management type: reference, notes):
Pre-border pathway management: Novoa et al. (2015), assesses the risks posed by the introduction of potentially invasive cacti in South Africa, including recommendations for legislation.
Interception: Kenis et al. (2007), presents data on alien insect species introductions in Europe to identify the main source countries and pathways of introduction, with recommendations for pathway management.
Limits to keeping: Keller and Lodge (2007), provides evidence of the risks posed by the sale of live aquatic taxa in North America, recommending the removal of known and likely invasive species from trade, and reductions in the number of contaminant organisms.
Secure keeping: Cassey and Hogg (2015), describes escapes and thefts of invasive species from zoos in Australia, recommending biosecurity and licensing methods to reduce the risks.
Eradication: Anderson (2005), describes the eradication of the invasive marine alga Caulerpa taxifolia from California using coverings and chemical treatments.
Complete reproductive removal: Bryce et al. (2011), describes the removal of American mink from North-East Scotland using traps; although populations remain on land neighbouring the managed area, ongoing monitoring and removal prevents the re-establishment of breeding individuals.
Containment: Grice (2006), identifies weed pest species that should be targeted for containment in Australia; examines the factors affecting the feasibility of containment, and proposes and evaluates the prospects for effective containment under different circumstances.
Suppression: Panzacchi et al. (2007), describes the cost-effectiveness of the wide-scale suppression of coypu Myocastor coypus populations in Italy through trapping and shooting.
Rapid eradication
This is a specific form of Eradication, where the population is managed before it has begun to spread. This term is widely used (Table 1) and highlights a management priority. However, it is not a specific form of management in itself-'rapid' constitutes good advice rather than describing a change in status. Rapid Eradication does not cover all forms of Eradication, which has also been applied to species that have been long and widely established in an area. This is particularly the case for mammals (Keitt et al. 2011;Robertson et al. 2017) although the opportunities vary widely between taxa.
Complete reproductive removal
To remove the entire reproductive population from the area of interest-but with remaining risk of reinvasion or further reproduction if not managed, or the remaining presence of non-breeding forms. This can be defined as reducing status from either Reproducing, Spreading or Widespread to Surviving, or maintaining status as Surviving. Management of this sort requires an on-going effort to maintain the area clear in the face of dormant life stages such as seeds, or the continued influx of new individuals from neighbouring areas. This term does not feature explicitly in most of the existing descriptions of IAS management (Table 1) but is needed as there are a growing number of large-scale control programs (Bryce et al. 2011;Robertson et al. 2017) where the removal is not complete or permanent as required by the current definition of eradication (Bomford and O'Brien 1995;Robertson et al. 2019). However, the area of interest is effectively kept clear of the species, so it is different from Suppression. This form of management is likely to increase as more widespread species are managed at large scales.
Containment
To limit the spread of a reproducing population within the area of interest. This can be defined as maintaining status as Reproducing. This term is already widely used, for example 'Any action aimed at creating barriers which minimises the risk of a population of an invasive alien species dispersing and spreading beyond the invaded area' (EU 2014), or 'Application of phytosanitary measures in and around an infested area to prevent spread of a pest' (FAO 2018).
Suppression
To reduce the distribution or abundance of a population within the area of interest. It can be defined as changing status from either Spreading or Widespread to either Reproducing or Spreading respectively with the objective of reducing the distribution or abundance of a population. Synonyms include reduction, control or population control, or '…Action…with the aim of keeping the number of individuals as low as possible so that …its invasive capacity and impacts…. are minimised' (Population control, EU 2014). Reproducing populations remain after Suppression, so any management will typically need to be repeated indefinitely to maintain its effect. However, some forms of biological control can achieve effective suppression without ongoing management inputs and have particular value. Suppression is a widely used form of management, but its objectives in terms of the degree of suppression or the reduction of impact need to consider the context specific IAS density vs impact relationship (Norbury et al. 2015) if its effectiveness is to be assessed.
Long-Term Management
This is the overarching term which includes Containment, Suppression and Complete Reproductive Removal. This form of management requires the ongoing input of management if the desired outcome is to be achieved and maintained.
No management
For populations that are already widespread in an area and where there is no objective to reduce their abundance or extent, no management is undertaken (maintaining species status as Widespread). If a Widespread population is managed, then its abundance or distribution will be reduced, forming part of Suppression. No Management is synonymous with the concepts of 'Tolerance' or 'Acceptance'. Even with No Management of the species, its impacts may still be reduced through Impact Adaptation. When considering management to change the status of a species to No Risk, in many cases no single method was considered able to achieve this; these cases were classed as Multiple Methods Required. For example, Eradication of a species from a particular area would need to be accompanied by effective Pathway Management to remove all risk of it returning. This is not to say that species cannot be managed to achieve this outcome, just that doing so would require multiple steps.
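As a rough illustration of how the named terms partition the matrix, the sketch below encodes the start- and end-point definitions given in the preceding subsections as a lookup table. It is a simplified, partial reading of Fig. 1: the Limits to Keeping and Secure Keeping cells, the Multiple Methods Required cells and No Management are omitted, and Suppression follows the literal 'respectively' reading of its definition.

```python
# Simplified, partial encoding of the term definitions above as (start status, end status)
# pairs. Limits to Keeping, Secure Keeping, Multiple Methods Required and No Management
# are omitted; Suppression follows the literal "respectively" reading of its definition.

MANAGEMENT_TERMS = {
    "Pathway Management": {("In Transit", "No Risk"), ("No Risk", "No Risk")},
    "Interception": {("In Transit", "In Transit")},
    "Eradication": {(s, e)
                    for s in ("Surviving", "Reproducing", "Spreading", "Widespread")
                    for e in ("In Captivity/Cultivation", "In Transit")},
    "Complete Reproductive Removal": {(s, "Surviving")
                                      for s in ("Surviving", "Reproducing",
                                                "Spreading", "Widespread")},
    "Containment": {("Reproducing", "Reproducing")},
    "Suppression": {("Spreading", "Reproducing"), ("Widespread", "Spreading")},
}

def terms_for(start, end):
    """Return the management terms whose definition covers a given change in status."""
    return [term for term, pairs in MANAGEMENT_TERMS.items() if (start, end) in pairs]

print(terms_for("Widespread", "In Transit"))    # ['Eradication']
print(terms_for("Spreading", "Reproducing"))    # ['Suppression']
```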
By being directly linked to the status of the population before and after management, these terms relate to the direct management of the species. However, management may also be motivated and directed to reduce the impact of an existing species, or one that has been removed from an area. We recognise two further terms, Impact Adaptation and Restoration. They are included here for completeness although they do not refer to changes in species status.
Impact adaptation
No change in the status of the species, but forms of management to reduce associated impacts. This includes payments to compensate for impact caused, changes in human behaviour to avoid situations where the impact might occur, operation of hatcheries or nurseries for native species, selection of resistant genotypes of species that may be impacted, control of nutrient inputs, placing protective covers or deterrents on young trees vulnerable to grazing, responding to increased erosion risk by mechanically stabilising habitats. These may also occur alongside the other direct forms of species management described here.
Restoration
The management of the environment following the change in the status of an IAS. Related terms describing different forms and intensities of management include regeneration, revegetation, replacement, rehabilitation and remediation of a habitat favouring native communities (van Andel and Aronson 2012), with definitions including 'restoring ecosystems following the removal of invasive species' (van Wilgen et al. 2014) and 'restore or rehabilitate degraded areas to their proper ecological function […] after invasive species removal' (USDA 2004).
Discussion
A variety of authors have provided definitions for different forms of IAS management and the sequence in which they might best be applied (see Table 1). However, differences in interpretation, partly due to different schools in invasion biology dealing with different types of environments and taxa (Keller et al. 2011), have led to the use of a wide diversity of overlapping terms and definitions. This brings problems for common understanding, effective communication, awareness raising, meta-analyses and the development of indicators.
In this paper, we propose a novel approach, recognising that management can be described by detailing the start-and end-points of the desired changes in species status. Considering management in the context of the key barriers and stages of the invasion process (Blackburn et al. 2011) and the changing species status associated with each, the alternative forms of management then become emergent features of this existing framework.
This approach has the advantage that different forms of management are defined by the start-and end-point of changing species status, rather than requiring individual definitions of their own. Defining management terms based on changes in species status also supports their effective translation into other languages. This approach also brings an element of completeness, as all possible changes in species status are included in the descriptions. It ensures that the framework is comprehensive, describes distinct management outcomes and includes approaches such as Captive Management or Complete Reproductive Removal which may not be widely used or made explicit in other lists of IAS management, but need to be considered, for example if we are to classify and assess the frequency and effectiveness of different management types. This approach defines IAS management based on the desired change in the status of the species. However, the motivation for management may be different. While management to prevent a species entering an area or becoming established may be driven by the precautionary principle, or by experience of its effects elsewhere; once a species has become widely established, it is likely that management will be motivated by the need to reduce impacts, rather than to manage the species.
Setting clear objectives for IAS management is important to assess success or failure, or to decide that the objective is not achievable. Some objectives are simple; for Interception we can assess if the species was effectively kept out. In others, objectives need greater refinement. When considering Suppression, by what degree should the extent or abundance of the species be reduced for this to be considered successful? The objectives of an action, and indicators to measure success, need to be carefully defined if the cost-effectiveness is to be meaningfully assessed. The framework also includes the need to define the spatiotemporal scale if management is to be usefully described. The removal of an invasive species from an enclosed water body may qualify as Eradication at the scale of the water body, but nationally only contribute to Suppression. The framework also contains a temporal dimension: some forms of management such as Eradication include a discrete end-point, while others such as Containment or Suppression require ongoing inputs. Species status will also change through time as the invasion progresses.
The framework describes discrete management terms. The management of an IAS may develop through time, undertaking a sequence of different management actions with limited objectives, but with cumulative effects. For example, the management of the Ruddy Duck (Oxyura jamaicensis) in the UK began with local Suppression, followed by Limits to Keeping and Complete Reproductive Removal. Given the continuing presence of mobile birds in neighbouring countries, further management is required before Eradication could be achieved (Robertson et al. 2015).
It is also worth emphasising the difference between the full matrix of 28 elements, which is an emergent feature of the invasion process, and our proposed summary of these down to eight management terms. For this summary stage, there is scope to produce other classifications, or to increase the number of sub-categories within the presented terms. However, we recommend that any further management terms are defined by reference to the start-and end-points of management rather than stand-alone definitions. The use and definition of various management terms are also embedded within existing advice and legislation and are unlikely to change in retrospect. However, a more complete and systematic approach to defining and classifying management is still needed, for example if the success and effectiveness of management are to be assessed in a systematic manner.
Effective management needs to be well-targeted, cost-effective and make best use of limited resources. This requires it to be embedded in a wider framework of supporting activities such as public education, risk awareness, detection, monitoring and risk assessment, contingency planning, cost-benefit analysis and risk management, all of which support and inform active management. In future it would be useful to map these supporting activities onto this management framework.
The New Landscape of Renal Biopsy in Kidney Diseases
Sir Salimullah Med Coll J 2023; 31: 65-66
The renal biopsy is an important diagnostic method for renal disease. It can help establish an accurate diagnosis, clarify pathogenesis, and inform prognosis, supporting a rational approach to the treatment of a renal disorder. Moreover, in advanced stages of kidney damage, a biopsy can provide information regarding the possibility of recurrence of the disease following transplantation. The renal biopsy is also crucial in the management of the transplant recipient, representing the most accurate method for determining the presence of antibody- or T-cell-mediated rejection, acute tubular necrosis, cyclosporine nephrotoxicity, or the development of de novo or recurrent glomerulonephritis in the allograft.
Understanding the clinical, morphologic, and histopathological features of renal disease, as well as an in-depth understanding of the anatomy and function of the normal kidney, is necessary for the correct interpretation of a renal biopsy. The pathologist should compare the full set of clinical and laboratory data with findings from light microscopy, immunofluorescence, and electron microscopic analysis to assess a kidney sample.
Most renal biopsies are performed by one of two methods: (1) percutaneous needle biopsy, or (2) open biopsy (wedge sampling of the outer cortex).
A renal biopsy specimen is considered acceptable if it has between 10 and 15 glomeruli, while it is considered insufficient if it has fewer than 6. The likelihood of making an accurate diagnosis rises as the number of glomeruli rises. Additionally, assessment of the corticomedullary junction is necessary for the diagnosis of Focal Segmental Glomerulosclerosis (FSGS); otherwise, the precise diagnosis may be missed.
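A minimal sketch of the adequacy thresholds stated above is given below; the 'borderline' label for specimens with 6–9 glomeruli is an assumption added for illustration, since the text does not name that intermediate category.

```python
# Minimal sketch of the specimen-adequacy thresholds stated above: at least 10
# glomeruli is treated as acceptable (the text gives 10-15 as the acceptable range)
# and fewer than 6 as insufficient. The "borderline" label for 6-9 glomeruli is an
# assumption added for illustration only.

def biopsy_adequacy(n_glomeruli: int) -> str:
    if n_glomeruli < 6:
        return "insufficient"
    if n_glomeruli >= 10:
        return "acceptable"
    return "borderline"  # 6-9 glomeruli: not explicitly categorised in the text

for n in (3, 7, 12):
    print(n, "glomeruli ->", biopsy_adequacy(n))
```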
Indications for renal biopsy include:
• Haematuria of presumed renal origin (in the absence of infection and with normal urological investigation), usually in association with other factors such as significant proteinuria, hypertension, and the presence of serum biomarkers (ANCA & dsDNA)
• Significant proteinuria > 1 gm/day
• Renal involvement of systemic diseases
Renal biopsy is absolutely contraindicated in cases of small kidneys, coagulopathy, and uncontrolled hypertension, whereas relative contraindications include a solitary kidney, an uncooperative patient, and inability to lie flat on the bed.
Kidney biopsy under direct vision can be performed with an open incision or laparoscopically.
The possible indications for laparoscopic kidney biopsy include the following conditions: failed percutaneous biopsy, chronic anticoagulation state/coagulopathy, morbid obesity, solitary kidney, multiple bilateral kidney cysts, kidney artery aneurysm, and uncontrolled hypertension.
The common complications include local pain, minor bleeding in the urinary tract, perinephric hematoma, and uncommonly arteriovenous fistula.
Electron microscopic (EM) examination, along with light (LM) and immunofluorescence (IF) microscopic findings, plays a vital role in the analysis of biopsies. This triad of studies is generally employed in a concerted manner (correlative microscopy). Tissue for LM and EM is fixed rapidly in buffered formalin and glutaraldehyde, respectively. Additionally, tissue for IF is retained fresh on a saline-moistened gauze or Telfa pad (preferable to immersion in saline) for subsequent rapid freezing, or placed in Michel's transport medium (Zeus medium) as a temporizing alternative until ready for processing. The biopsy thus yields three renal tissue cores, which are placed in formalin, glutaraldehyde, and Michel's medium, respectively.
In order to subtype renal cell carcinoma, identify unusual kinds of renal neoplasms, and diagnose metastatic Renal Cell Carcinoma (RCC) in tiny biopsy specimens, immunohistochemical markers are crucial.
Hematoxylin and eosin staining is routinely used to assess the architectural pattern in paraffin-embedded sections and to identify the types of inflammation; however, these sections fail to clearly distinguish the extracellular matrix from the cytoplasm of glomerular, tubular, and mesenchymal cells. Extracellular material, glomerular and tubular basement membranes, mesangial components, and the tubulointerstitial compartment can all be defined to a high degree using periodic acid-Schiff (PAS), periodic acid-methenamine silver (Jones), and Masson trichrome stains.
A renal biopsy is a relatively safe procedure that can reveal detailed information about the molecular and cellular patterns of renal disease. Furthermore, renal biopsy is also helpful for study into the pathogenesis and mechanisms of progressive renal injury as well as new targeted treatments for renal cancer. The clinicopathological correlation is a tremendous challenge for both pathologists and nephrologists. The new era of molecular pathology will definitely transform the landscape of renal pathology and broaden the horizon of the diagnostic utility of the kidney biopsy.
Shahnaj Begum
Professor and Head, Department of Pathology Sir Salimullah Medical College, Dhaka
Environmental factors influencing benthic communities in the oxygen minimum zones on the Angolan and Namibian margins
Thriving benthic communities were observed in the oxygen minimum zones along the southwestern African margin. On the Namibian margin, fossil cold-water coral mounds were overgrown by sponges and bryozoans, while the Angolan margin was characterized by cold-water coral mounds covered by a living coral reef. To explore why benthic communities differ in both areas, present-day environmental conditions were assessed, using conductivity–temperature–depth (CTD) transects and bottom landers to investigate spatial and temporal variations of environmental properties. Near-bottom measurements recorded low dissolved oxygen concentrations on the Namibian margin of 0–0.15 mL L−1 (∼0 %–9 % saturation) and on the Angolan margin of 0.5–1.5 mL L−1 (∼7 %–18 % saturation), which were associated with relatively high temperatures (11.8–13.2 °C and 6.4–12.6 °C, respectively). Semidiurnal barotropic tides were found to interact with the margin topography producing internal waves. These tidal movements deliver water with more suitable characteristics to the benthic communities from below and above the zone of low oxygen. Concurrently, the delivery of a high quantity and quality of organic matter was observed, being an important food source for the benthic fauna. On the Namibian margin, organic matter originated directly from the surface productive zone, whereas on the Angolan margin the geochemical signature of organic matter suggested an additional mechanism of food supply. A nepheloid layer observed above the cold-water corals may constitute a reservoir of organic matter, facilitating a constant supply of food particles by tidal mixing. Our data suggest that the benthic fauna on the Namibian margin, as well as the cold-water coral communities on the Angolan margin, may compensate for unfavorable conditions of low oxygen levels and high temperatures with enhanced availability of food, while anoxic conditions on the Namibian margin are at present a limiting factor for cold-water coral growth. This study provides an example of how benthic ecosystems cope with such extreme environmental conditions since it is expected that oxygen minimum zones will expand in the future due to anthropogenic activities.
…benthic organisms (Oevelen et al., 2009; White et al., 2012). Some framework-forming scleractinian species, with Lophelia pertusa and Madrepora oculata being the most common species in the Atlantic Ocean (Freiwald et al., 2004; White et al., 2005; Roberts et al., 2006; Cairns, 2007), are capable of forming large elevated seabed structures, so-called coral mounds (Wilson, 1979; Wienberg and Titschack, 2017; Titschack et al., 2015; De Haas et al., 2009). These coral mounds, consisting of coral debris and hemipelagic sediments, commonly reach heights between 20 and 100 m and can be several kilometers in diameter. They are widely distributed along the North Atlantic margins, being mainly restricted to water depths between 200 and 1000 m, while records of single colonies of L. pertusa are reported from a broader depth range of 50–4000 m depth (Roberts et al., 2006; Hebbeln et al., 2014; Davies et al., 2008; Mortensen et al., 2001; Freiwald et al., 2004; Freiwald, 2002; Grasmueck et al., 2006; Wheeler et al., 2007).
A global ecological-niche factor analysis by Davies et al. (2008) and Davies and Guinotte (2011), predicting suitable habitats for L. pertusa, showed that this species generally thrives in areas which are nutrient rich, well oxygenated and affected by relatively strong bottom water currents. Other factors potentially important for proliferation of L. pertusa include chemical and physical properties of the ambient water masses, for example aragonite saturation state, salinity and temperature (Davies et al., 2008; Dullo et al., 2008; Flögel et al., 2014; Davies and Guinotte, 2011). L. pertusa is most commonly found at temperatures between 4 and 12 °C and a very wide salinity range between 32 and 38.8 (Freiwald et al., 2004). The link of L. pertusa to particular salinity and temperature within the NE Atlantic led Dullo et al. (2008) to suggest that they are restricted to a specific density envelope of sigma-theta (σθ) = 27.35–27.65 kg m−3. In addition, the majority of occurrences of live L. pertusa comes from sites with dissolved oxygen (DO) concentrations between 6 and 6.5 mL L−1 (Davies et al., 2008), with the lowest recorded oxygen values being 2.1–3.2 mL L−1 at CWC sites in the Gulf of Mexico (Davies et al., 2010; Schroeder, 2002; Brooke and Ross, 2014) or even as low as 1–1.5 mL L−1 off Mauritania, where CWC mounds are in a dormant stage presently showing only scarce living coral occurrences (Wienberg et al., 2018; Ramos et al., 2017). Dissolved oxygen levels hence seem to affect the formation of CWC structures, as was also shown by Holocene records obtained from the Mediterranean Sea, which revealed periods of reef demise and growth in conjunction with hypoxia (with 2 mL L−1 seemingly forming a threshold value for active coral growth; Fink et al., 2012).
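The density envelope of Dullo et al. (2008) can be checked against CTD data by computing the potential density anomaly from practical salinity, in-situ temperature and pressure. The sketch below uses the TEOS-10 gsw package, with gsw.sigma0 as a stand-in for sigma-theta; the sample values are hypothetical and are not measurements from cruise M122.

```python
# Sketch: test whether a CTD sample falls inside the sigma-theta envelope of
# 27.35-27.65 kg m-3 reported by Dullo et al. (2008) for living L. pertusa.
# Uses the TEOS-10 gsw package; gsw.sigma0 (potential density anomaly referenced
# to 0 dbar) is used as a stand-in for sigma-theta. The sample values below are
# hypothetical, not data from cruise M122.

import gsw

def sigma_theta(sp, t, p, lon, lat):
    """Potential density anomaly (kg m-3) from practical salinity, in-situ
    temperature (deg C) and pressure (dbar) at a given position."""
    sa = gsw.SA_from_SP(sp, p, lon, lat)   # Absolute Salinity
    ct = gsw.CT_from_t(sa, t, p)           # Conservative Temperature
    return gsw.sigma0(sa, ct)

def in_lophelia_envelope(sigma0, lo=27.35, hi=27.65):
    return lo <= sigma0 <= hi

# Hypothetical near-bottom sample on the Angolan margin (~400 m depth):
s0 = sigma_theta(sp=35.3, t=7.5, p=400.0, lon=12.5, lat=-9.7)
print(f"sigma-theta = {s0:.2f} kg m-3, in envelope: {in_lophelia_envelope(s0)}")
```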
Another essential constraint for CWC growth and therefore mound development in the deep sea is food supply. L. pertusa is an opportunistic feeder, exploiting a wide variety of different food sources, including phytodetritus, phytoplankton, mesozooplankton, bacteria and dissolved organic matter (Kiriakoulakis et al., 2005;Dodds et al., 2009;Gori et al., 2014;Mueller et al., 2014;Duineveld et al., 2007). Not only quantity but also quality of food particles are of crucial importance for the uptake efficiency as well as ecosystem functioning of CWCs (Ruhl, 2008;Mueller et al., 2014). Transport of surface organic matter towards CWC sites at intermediate water depths has been found to involve either active swimming (zooplankton), passive sinking, advection, local downwelling, and internal waves and associated mixing processes resulting from interactions with topography (Davies et al., 2009;Thiem et al., 2006;White et al., 2005;Mienis et al., 2009;Frederiksen et al., 1992). With worldwide efforts to map CWC communities, L. pertusa was also found under conditions which are environmentally stressful or extreme in the sense of the global limits defined by Davies et al. (2008) and by Davies and Guinotte (2011). Examples are the warm and salty waters of the Mediterranean and the high bottom water temperatures along the US coast (Cape Lookout; Freiwald et al., 2009;Mienis et al., 2014;Taviani et al., 2005). Environmental stress generally increases energy needs for organisms to recover and maintain optimal functioning, which accordingly increases their food demand (Sokolova et al., 2012).
For the SW African margin, one of the few records of living CWC comes from the Angolan margin (at 7° S; Le Guilloux et al., 2009), which raises the question of whether environmental factors limit CWC growth due to the presence of an oxygen minimum zone (OMZ; see Karstensen et al., 2008), or whether this is related to a lack of data. Hydroacoustic campaigns revealed extended areas off Angola and Namibia with structures that morphologically resemble coral mound structures known from the NE Atlantic (M76-3, MSM20-1; Geissler et al., 2013; Zabel et al., 2012). Therefore, two such mound areas on the margins off Namibia and Angola were visited during the RV Meteor cruise M122 "ANNA" (ANgola and NAmibia) in January 2016. During this cruise, fossil CWC mound structures were found near Namibia, while flourishing CWC reef-covered mound structures were observed on the Angolan margin. The aim of the present study was to assess present-day environmental conditions at the southwestern African margin to explore why CWCs thrive on the Angolan margin and are absent on the Namibian margin. Key parameters influencing CWCs, including hydrographic parameters as well as chemical properties of the water column, were measured to characterize the difference in environmental conditions and food supply. These data are used to improve understanding of the potential fate of CWC mounds in a changing ocean.

The SW African margin is one of the four major eastern boundary regions in the world and is characterized by upwelling of nutrient-rich cold waters (Shannon and Nelson, 1996). The availability of nutrients triggers a high primary production, making it one of the most productive marine areas worldwide, with an estimated production of 0.37 Gt C yr−1 (Carr and Kearns, 2003). Remineralization of high fluxes of organic particles settling through the water column results in severe mid-depth oxygen depletion and an intense OMZ over large areas along the SW African margin (Chapman and Shannon, 1985). The extension of the OMZ is highly dynamic, being controlled by upwelling intensity, which depends on the prevailing winds and two current systems along the SW African margin, i.e., the Benguela and the Angola currents (Kostianoy and Lutjeharms, 1999; Chapman and Shannon, 1987; Fig. 1). The Benguela Current originates from the South Atlantic Current, which mixes with water from the Indian Ocean at the southern tip of Africa (Poole and Tomczak, 1999; Mohrholz et al., 2008; Rae, 2005) and introduces relatively cold and oxygen-rich Eastern South Atlantic Central Water (ESACW; Poole and Tomczak, 1999) to the SW African margin (Mohrholz et al., 2014). The Angola Current originates from the South Equatorial Counter Current and introduces warmer, nutrient-poor and less oxygenated South Atlantic Central Water (SACW; Poole and Tomczak, 1999) to the continental margin (Fig. 1a). SACW is defined by a linear relationship between temperature and salinity in a T–S plot. While the SACW flows along the continental margin, the oxygen concentration decreases continuously due to remineralization of organic matter on the SW African shelf (Mohrholz et al., 2008). Both currents converge at around 14–16° S, resulting in the Angola–Benguela front (Lutjeharms and Stockton, 1987).
In austral summer, the Angola-Benguela front can move southward to 23° S (Shannon et al., 1986), thus increasing the influence of the SACW along the Namibian coast (Junker et al., 2017; Chapman and Shannon, 1987) and contributing to the pronounced OMZ due to its low initial oxygen concentration (Poole and Tomczak, 1999). ESACW is the dominant water mass at the Namibian margin during the main upwelling season in austral winter, expanding from the oceanic zone about 350 km towards the coast (Mohrholz et al., 2014). The surface water mass at the Namibian margin is a mixture of sun-warmed upwelled water and water of the Agulhas Current, which mixes in complex eddies and filaments and is called South Atlantic Subtropical Surface Water (SASSW) (Hutchings et al., 2009). At the Angolan margin the surface water is additionally influenced by water from the Cuanza and Congo rivers (Kopte et al., 2017; Fig. 1). Antarctic Intermediate Water (AAIW) is situated in deeper areas at the African continental margin and can be identified as the freshest water mass at around 700-800 m depth (Shannon and Nelson, 1996).
Coral mounds along the Angolan and Namibian margins
During RV Meteor cruise M122 in 2016, over 2000 coral mounds were observed between 160 and 260 m water depth on the Namibian shelf. All mounds were densely covered with coral rubble and dead coral framework, while no living corals were observed in the study area (Fig. 2a, b). A few species were locally very abundant, viz. a yellow cheilostome bryozoan, which was the most common species, and five sponge species. The bryozoans were encrusting the coral rubble, whereas some sponge species reached heights of up to 30 cm (Fig. 2a, b). The remaining community consisted of an impoverished fauna overgrowing L. pertusa debris. Commonly found sessile organisms were actiniarians, zoanthids, hydroids, some thin encrusting sponges, serpulids and sabellid polychaetes. The mobile fauna comprised asteroids, ophiuroids, two shrimp species, amphipods, cumaceans and holothurians. Locally high abundances of Suffogobius bibarbatus, a fish that is known to be adapted to hypoxic conditions, were observed in cavities in the coral framework. Dead corals collected from the surface of various Namibian mounds date back to about 5 ka, pointing to a simultaneous demise of these mounds during the mid-Holocene (Tamborrino et al., 2019). On the Angolan margin, CWC structures varied from individual mounds to long ridges. Some mounds reached heights of more than 100 m above the seafloor. At shallow depths (∼ 250 m) some isolated smaller mounds were also present. All mounds showed a thriving CWC cover, which was dominated by L. pertusa (estimated 99 % relative abundance), along with some M. oculata and solitary corals. Mounds with a flourishing coral cover were mainly situated at water depths between 330 and 470 m, whereas single colonies were found over a broader depth range between 250 and 500 m (Fig. 2c, d; Hebbeln et al., 2017). Additionally, large aggregations of hexactinellid sponges (Aphrocallistes, Sympagella) were observed. First estimates for coral ages obtained from a gravity core collected at one of the Angolan coral mounds revealed continuous coral mound formation during the last 34 kyr until today.
Methodology
During RV Meteor expedition M122 in January 2016, two conductivity-temperature-depth (CTD) transects and three short-term bottom lander deployments (Table 1, Fig. 1) were carried out to measure environmental conditions influencing benthic habitats. In addition, weather data were continuously recorded by the RV Meteor weather station, providing real-time information on local wind speed and wind direction.
Lander deployments
Sites for deployment of the NIOZ-designed lander (ALBEX) were selected based on multibeam bathymetric data. On the Namibian margin the bottom lander was deployed on top of a mound structure (water depth 220 m). Off Angola the lander was deployed in the relatively shallow part of the mound zone at 340 m water depth and in the deeper part at 530 m (Fig. 1, Table 1). Additionally, a GEOMAR satellite lander module (SLM) was deployed off-mound at 230 m depth at the Namibian margin and at 430 m depth at the Angolan margin (Fig. 1, Table 1). The ALBEX lander was equipped with an ARO-USB oxygen sensor (JFE Advantech™), a combined OBS-fluorometer (Wet Labs™) and an Aquadopp (Nortek™) profiling current meter. The lander was furthermore equipped with a Technicap PPS4/3 sediment trap with 12 bottles (allowing daily samples) and a McLane particle pump (24 filter units, each filtering 7.5 L of seawater at 2 h intervals) to sample particulate organic matter in the near-bottom water (40 cm above bottom).
The SLM was equipped with a 600 kHz ADCP Workhorse Sentinel 600 from RDI, a CTD (SBE 16V2™), a combined fluorescence and turbidity sensor (WET Labs ECO-AFL/FL), a dissolved oxygen sensor (SBE™) and a pH sensor (SBE™). From the SLM only pH measurements are used here, complementing the data from the NIOZ lander.
CTD transects
Vertical profiles of hydrographic parameters in the water column, viz. temperature, conductivity, oxygen and turbidity, were obtained using a Sea-Bird CTD-Rosette system (Sea-Bird SBE 9 plus). The additional sensors on the CTD were a dissolved oxygen sensor (SBE 43 membrane-type DO sensor) and a combined fluorescence and turbidity sensor (WET Labs ECO-AFL/FL). The CTD was combined with a rosette water sampler consisting of 24 Niskin® water sampling bottles (10 L). CTD casts were carried out along two downslope transects (Fig. 1). Owing to technical problems, turbidity data were only collected on the Angolan slope.
Hydrographic data processing
The CTD data were processed using the Sea-Bird SBE 11plus V 5.2 data processing software and were visualized using the program Ocean Data View (Schlitzer, 2011; version 4.7.8).
Hydrographic data recorded by the landers were analyzed and plotted using the program R (R Core Team, 2017). Data from the different instruments (temperature, turbidity, current speed, oxygen concentration, fluorescence) were averaged over a period of 1.5 h to remove shorter-term trends and occasional spikes. Correlations between variables were assessed by Spearman's rank correlation tests.
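The block averaging and rank-correlation step described above can be sketched in R as follows; the data frame and column names are placeholders rather than the actual lander output format.

```r
# Minimal sketch of the lander data processing, assuming a data frame 'lander'
# with a POSIXct column 'time' and numeric columns 'temp', 'oxygen',
# 'turbidity' and 'fluorescence' (placeholder names).

lander$window <- cut(lander$time, breaks = "90 min")   # 1.5 h averaging blocks
lander_avg <- aggregate(cbind(temp, oxygen, turbidity, fluorescence) ~ window,
                        data = lander, FUN = mean)

# Spearman rank correlations between the block-averaged variables
cor.test(lander_avg$temp, lander_avg$oxygen,       method = "spearman")
cor.test(lander_avg$temp, lander_avg$fluorescence, method = "spearman")
```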
Suspended particulate matter
Near-bottom suspended particulate organic matter (SPOM) was sampled by means of a phytoplankton sampler (McLane PPS) mounted on the ALBEX lander. The PPS was fitted with 24 GF/F filters (47 mm Whatman™ GF/F filters precombusted at 450 °C). A maximum of 7.5 L was pumped over each filter during a 2 h period, yielding a time series of near-bottom SPOM supply and its variability over a period of 48 h.
C/N analysis and isotope measurements
Filters from the phytoplankton sampler were freeze-dried before further analysis. Half of each filter was used for phytopigment analysis, and a quarter of each filter was used for analyzing organic carbon, nitrogen and their stable isotope ratios. The filters used for carbon analysis were decarbonized by vapor of concentrated hydrochloric acid (2 M HCl supra) prior to analysis. Filters were transferred into pressed tin capsules (12 mm × 5 mm, Elemental Microanalysis), and δ15N, δ13C and the total weight percent of organic carbon and nitrogen were analyzed with a Delta V Advantage isotope ratio MS coupled online to an elemental analyzer (Flash 2000 EA-IRMS) by a ConFlo IV (Thermo Fisher Scientific Inc.). The reference gas was purified atmospheric N2. Benzoic acid and acetanilide were used as standards for δ13C; acetanilide, urea and casein were used as standards for δ15N. For δ13C analysis a high-signal method including a 70 % dilution was used. Values are reported relative to VPDB and to atmospheric N2, respectively. Precision and accuracy, based on replicate analyses and comparison with international standards, were ±0.15 ‰ for δ13C and δ15N. The C/N ratio is based on the weight ratio between total organic carbon (TOC) and N.
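For readers unfamiliar with the conventions used here, the sketch below spells out the standard delta notation and the weight-based C/N ratio in R; all numerical inputs are illustrative placeholders rather than measured values, and the VPDB reference ratio is an approximate, commonly cited value.

```r
# Delta notation: per mil deviation of a sample isotope ratio from a standard.
delta <- function(R_sample, R_standard) (R_sample / R_standard - 1) * 1000

# Illustrative example for delta-13C against an approximate VPDB 13C/12C value
R_vpdb <- 0.0111802                                # approximate reference ratio
delta(R_sample = 0.010938, R_standard = R_vpdb)    # about -21.7 per mil

# C/N ratio from weight percentages of TOC and N (placeholder values)
toc_wt <- 2.5    # weight % organic carbon
n_wt   <- 0.34   # weight % nitrogen
toc_wt / n_wt    # about 7.4
```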
Phytopigments
Phytopigments were measured by reverse-phase high-performance liquid chromatography (RP-HPLC, Waters Acquity UPLC) with a gradient based on the method published by Kraay et al. (1992). For each sample, half of a GF/F filter was used and freeze-dried before extraction. Pigments were extracted using 95 % methanol and sonication. All steps were performed in a dark and cooled environment. Pigments were identified by means of their absorption spectrum, fluorescence and elution time. Identification and quantification took place as described by Tahey et al. (1994). The absorbance peak areas of chlorophyll a were converted into concentrations using conversion factors determined with a certified standard. The phaeopigment / chlorophyll a ratio gives an indication of the degradation status of the organic material, since phaeopigments form as a result of bacterial or autolytic cell lysis and grazing activity (Welschmeyer and Lorenzen, 1985).
Tidal analysis
The barotropic (due to sea level and pressure changes) and baroclinic (internal "free" waves propagating along the pycnoclines) tidal signals obtained by the Aquadopp (Nortek™) profiling current meter were analyzed from the bottom pressure and from the horizontal flow components recorded 6 m above the seafloor, using the T_Tide Harmonic Analysis Toolbox (Pawlowicz et al., 2002). The mean and linear trend were removed from the data before analysis.
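T_Tide performs a full multi-constituent harmonic analysis; the stripped-down R sketch below only illustrates the underlying idea by fitting the M2 constituent to a detrended bottom-pressure record by least squares. The input series and its 10 min sampling interval are assumptions.

```r
# Least-squares fit of the M2 constituent to a bottom-pressure record.
# 'pressure' is an assumed numeric vector sampled every 10 minutes.

dt_hours  <- 10 / 60
t         <- seq_along(pressure) * dt_hours      # time in hours
m2_period <- 12.4206                             # M2 period (h)
omega     <- 2 * pi / m2_period

p_detrended <- residuals(lm(pressure ~ t))       # remove mean and linear trend

fit <- lm(p_detrended ~ sin(omega * t) + cos(omega * t))
a <- coef(fit)[2]
b <- coef(fit)[3]

sqrt(a^2 + b^2)          # M2 amplitude (dbar), e.g. > 0.35 dbar off Namibia
summary(fit)$r.squared   # fraction of pressure variance explained by the M2 fit
```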
Namibian margin
The hydrographic data obtained by CTD measurements along a downslope transect from the surface to 1000 m water depth revealed distinct changes in temperature and salinity throughout the water column. These are ascribed to the different water masses in the study area (Fig. 3a). In the upper 85 m of the water column, temperatures were above 14 °C and salinities > 35.2, which corresponds to South Atlantic Subtropical Surface Water (SASSW). SACW was situated underneath the SASSW and reached down to about 700 m, characterized by temperatures from 14 to 7 °C and salinities from 35.4 to 34.5 (Fig. 3a). A deep CTD cast about 130 km from the coastline recorded a water mass with the signature of ESACW, having a lower temperature (by 1.3 °C) and lower salinity (by 0.2) than SACW (at 200 m depth; not included in the CTD transects of Fig. 4). Underneath these two central water masses, Antarctic Intermediate Water (AAIW) was found with a temperature < 7 °C.
The CTD transect showed decreasing DO (dissolved oxygen) concentrations from the surface (6 mL L−1) towards a minimum at 150 to 200 m depth (0 mL L−1). The lowest DO concentrations were found on the continental margin between 100 and 335 m water depth. The DO concentrations in this pronounced OMZ ranged from < 1 mL L−1 down to 0 mL L−1 (corresponding to 9 % to 0 % saturation). The zone of low DO concentrations (< 1 mL L−1) stretched horizontally over the complete transect from about 50 km to at least 100 km offshore (Fig. 4c). The upper boundary of the OMZ was relatively sharp compared to its lower limit and corresponded with the border between SASSW at the surface and SACW below.
Within the OMZ, a small increase in fluorescence (0.2 mg m−3) was recorded, whereas fluorescence was otherwise not traceable below the surface layer (Fig. 4d). Within the surface layer, the highest surface fluorescence (> 2 mg m−3) was found ∼ 40 km offshore. Above the center of the OMZ, fluorescence reached only 0.4 mg m−3.
Angolan margin
The hydrographic data obtained by CTD measurements along a downslope transect from the surface to 800 m water depth revealed distinct changes in temperature and salinity throughout the water column, related to four different water masses. At the surface, a distinct shallow layer (> 20 m) with a markedly lower salinity (27.3-35.5) and higher temperature (29.5-27 °C, Fig. 3b) was observed. Below the surface layer, SASSW was found down to a depth of 70 m, characterized by a higher salinity (35.8). SACW was observed between 70 and 600 m, showing the expected linear relationship between temperature and salinity. Temperature and salinity decreased from 17.5 °C and 35.8 to 7 °C and 34.6. At 700 m depth AAIW was recorded, characterized by a low salinity (< 34.4) and temperature (< 7 °C, Fig. 3b).
The CTD transect showed a sharp decrease in the DO concentrations underneath the SASSW from 5 to < 2 mL L−1 (Fig. 5). DO concentrations decreased further to a minimum of 0.6 mL L−1 at 350 m and then increased to > 3 mL L−1 at 800 m depth. The lowest DO concentrations were not found at the slope but 70 km offshore, in the center of the zone of reduced DO concentrations between 200 and 450 m water depth (< 1 mL L−1). Compared to the Namibian margin (see Fig. 4), the hypoxic layer was situated further offshore and slightly deeper, and overall DO concentrations were higher (compare Fig. 4c). Also, the boundaries of the hypoxic zone were not as sharp. Fluorescence near the sea surface was generally low (around 0.2 mg m−3, with small maxima of 0.78 mg m−3) and not detectable deeper than 150 m depth. A distinct zone of enhanced turbidity was observed on the continental margin between 200 and 350 m water depth.
Namibian margin
Bottom temperatures ranged from 11.8 to 13.2 °C during the deployment of the ALBEX lander (Table 2, Fig. 6), showing oscillating fluctuations with a maximum semidiurnal (Δt ∼ 6 h) change of ∼ 1 °C (on 9 January 2016). The DO concentrations fluctuated between 0 and 0.15 mL L−1 and were negatively correlated with temperature (r = −0.39, p < 0.01). Fluorescence ranged from 42 to 45 NTU during the deployment and was positively correlated with temperature (r = 0.38, p < 0.01). Hence, both temperature and fluorescence were negatively correlated with DO concentrations (r = −0.39, p < 0.01) and turbidity (optical backscatter, r = −0.35, p < 0.01). Turbidity was low until it increased markedly during the second half of the deployment. During this period, on 6 January, wind speed increased from 10 m s−1 to a maximum of 17 m s−1 and remained high for the next 6. The wind direction changed from counterclockwise cyclonic rotation towards alongshore winds. During the strong wind period, colder water (correlation between wind speed and water temperature, r = −0.55, p < 0.01) with a higher turbidity (correlation of wind speed and turbidity, r = 0.42, p < 0.01) and on average higher DO concentrations was present. The SLM lander recorded an average pH of 8.01.
Maximum current speeds measured during the deployment period were 0.21 m s−1, with average current speeds of 0.09 m s−1 (Table 2). The tidal cycle explained > 80 % of the pressure fluctuations (Table 3), with the semidiurnal M2 signal (principal lunar semidiurnal) generating an amplitude of > 0.35 dbar and thus being the most important constituent. Before 6 January, the current direction oscillated between SW and SE, after which it changed to a dominantly northerly current direction (Fig. 6).
The observed fluctuations in bottom water temperature at the deployment site imply a vertical tidal movement of around 70 m. This was estimated by comparing the temperature change recorded by the lander to the respective temperature-depth gradient based on water column measurements (CTD site GeoB20553, 12.58 °C at 245 m, 12.93 °C at 179 m). Due to these vertical tidal movements, the oxygen-depleted water from the core of the OMZ is regularly being replaced with somewhat colder and slightly more oxygenated water (up to 0.2 mL L−1).
Angolan margin
Mean bottom water temperatures were 6.73 °C at the deeper site (530 m) and 10.06 °C at the shallower site (340 m; Fig. 7, Table 2). The maximum semidiurnal (Δt ∼ 6 h) temperature change was 1.60 °C at the deepest site and 2.4 °C at the shallow site (Fig. 7). DO concentrations at the deep site were a factor of 2 higher than those at the shallow site, i.e., 0.9-1.5 vs. 0.5-0.8 mL L−1, respectively (range of 4 % to 14 % saturation across both sites), whereas the range of diurnal fluctuations was much smaller than at the shallow site. DO concentrations were negatively correlated with temperature at the deep site (r = −0.99, p < 0.01), while positively correlated at the shallow site (r = 0.91, p < 0.01). Fluorescence was low during both deployments and showed only small fluctuations, being slightly higher at the shallow site (between 38.5 and 41.5 NTU at both sites). Current speeds were relatively high (between 0 and 0.3 m s−1, average 0.1 m s−1) and were positively correlated with temperature at the shallow site (r = 0.31, p < 0.01) and negatively correlated at the deep site (r = −0.22, p < 0.01). Analysis of the tidal cycle showed that it explained 29.8 %-54.9 % of the horizontal current fluctuations. The M2 amplitude was 0.06-0.09 m s−1 and was the most important signal (Table 3). A decrease in turbidity was observed during the deployment at the shallow station. This station was located directly below the turbidity maximum between 200 and 350 m depth as observed in the CTD transect (Fig. 5). In contrast, a relatively constant and low turbidity was observed for the deep deployment. Turbidity during both deployments was positively correlated with DO concentrations (r = 0.47, p < 0.01, shallow deployment; r = 0.50, p < 0.01, deep deployment). The SLM lander recorded an average pH of 8.12.
The short-term temperature fluctuations imply a vertical tidal movement of around 130 m (12.9-9.1 °C measured by the lander, corresponding to 218-349 m depth in the CTD profile above the lander at station GeoB20966).
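The excursion estimates quoted above follow from mapping the temperature range recorded at the lander onto the temperature-depth relation of the CTD profile; a sketch for the Angolan case, with the profile reduced to the two calibration points given in the text, could look as follows.

```r
# Estimate the vertical tidal excursion from the bottom-water temperature range.
# The CTD profile is reduced here to the two points quoted in the text
# (station GeoB20966); a real profile would contain many depth levels.

ctd_temp   <- c(9.1, 12.9)    # deg C (ascending)
ctd_depth  <- c(349, 218)     # corresponding depths (m)
lander_rng <- c(12.9, 9.1)    # warmest and coldest lander temperature (deg C)

depths <- approx(x = ctd_temp, y = ctd_depth, xout = lander_rng)$y
diff(range(depths))           # about 130 m vertical excursion
```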
Namibian margin
The nitrogen (N) concentration of the SPOM measured on the filters of the McLane pump fluctuated between 0.25 and 0.45 mg L−1 (Fig. 8). The highest N concentration corresponded with a peak in turbidity (r = 0.42, p < 0.01). The δ15N values of the lander time series fluctuated between 5.1 ‰ and 6.9 ‰, with an average value of 5.7 ‰. Total organic carbon (TOC) showed a similar pattern to nitrogen, with concentrations ranging between 1.8 and 3.5 mg L−1. The δ13C value of the TOC increased during the surveyed time period from −22.39 ‰ to −21.24 ‰, with an average of −21.7 ‰ (Fig. 8a). The C/N ratio ranged from 6.8 to 8.5 and was on average 7.4 (Fig. 8b). During periods of low temperature and more turbid conditions, TOC and N concentrations as well as the δ13C values of the SPOM were higher.
Chlorophyll a concentrations of the SPOM were on average 0.042 µg L−1 and correlated with the fluorescence record (r = 0.43, p = 0.04). The concentration of chlorophyll a degradation products found during the lander deployment (0.248 µg L−1) was about 6 times higher than that of chlorophyll a, giving a phaeopigment / chlorophyll a ratio of 6.5 (not shown). Additionally, carotenoids (0.08-0.12 µg L−1) and fucoxanthin (0.22 µg L−1), which are common in diatoms, were major components of the pigment fraction. Zeaxanthin, indicating the presence of prokaryotic cyanobacteria, was only observed in small quantities (0.066 µg L−1).
Angolan margin
In general, TOC and N concentrations of SPOM were higher at the shallow site than at the deep site. Nitrogen concentrations varied around 0.14 mg L−1 at 340 m and around 0.1 mg L−1 at 530 m depth (Fig. 8b). The δ15N values at the shallow site ranged from 1.6 ‰ to 6.2 ‰ (3.7 ‰ average) and were even lower deeper in the water column, viz. a range of 0.3-3.7 ‰ with an average of 1.4 ‰. The TOC concentrations were on average 1.43 mg L−1 at 340 m and 0.9 mg L−1 at 530 m, with corresponding δ13C values ranging between −23.0 ‰ and −24.2 ‰ (average −23.6 ‰) at the shallow site, and between −22.9 ‰ and −23.9 ‰ (average −23.4 ‰) at the deep site.
The chlorophyll a concentrations of the SPOM collected by the McLane pump varied between 0.1 and 0.02 µg L −1 , with average phaeopigment / chlorophyll a ratios of 2.6 and 0.5 at the shallow and deep sites, respectively. Phytopigments recorded by the shallow deployment included 0.3 µg L −1 of fucoxanthin, while at the deep site only a concentration of 0.1 µg L −1 was found. No zeaxanthin was recorded in the pigment fraction.
Discussion
Even though the ecological-niche factor analyses of Davies et al. (2008) and Davies and Guinotte (2011) predict L. pertusa to be absent along the oxygen-limited southwestern African margin, CWC mounds with two distinct benthic ecosystems were found. The coral mounds on the Namibian shelf host no living CWCs; instead, the dead coral framework covering the mounds was overgrown with fauna dominated by bryozoans and sponges. Along the slope of the Angolan margin, an extended coral mound area with thriving CWC communities was encountered. It is probable that differences in present-day environmental conditions between the areas influence the faunal assemblages inhabiting them. The potential impact of the key environmental factors will be discussed below.
Short-term vs. long-term variations in environmental properties
On the Namibian margin, seasonality has a major impact on local mid-depth oxygen concentrations due to the periodically varying influence of the Angola Current and its associated low DO concentrations (Chapman and Shannon, 1987). The lowest DO concentrations are expected from February to May, when SACW is the dominating water mass on the Namibian margin and the contribution of ESACW is smaller (Mohrholz et al., 2008). Due to this seasonal pattern, the DO concentrations measured in this study (January; Fig. 4) probably do not represent the minimum concentrations, which are expected to occur in the following months (February to May; Mohrholz et al., 2014), but nevertheless give a valuable impression of the extent of the OMZ. Interestingly, we captured a flow reversal after 6 January, from a southward to an equatorward current direction during high wind conditions on the Namibian margin (Fig. 6), leading to an intrusion of ESACW with higher DO concentrations (by 0.007 mL L−1 on average) and lower temperatures (by 0.23 °C on average, Fig. 5) than the SACW. This led to a temporary increase in the DO concentrations. This shows that variations in the local flow field have the capability to change water properties on relatively short timescales, which might provide an analogue to the water mass variability related to the different seasons (Mohrholz et al., 2008). Such relaxations are possibly important for the survival of the abundant benthic fauna present on the relict coral mounds (Gibson et al., 2003). Other seasonal changes, like riverine outflow, do not have decisive impacts on the ecosystem, since only relatively small rivers discharge from the Namibian margin. This is also reflected by the dominantly marine δ15N and δ13C isotopic signatures of the SPOM at the mound areas (Fig. 8; cf. Tyrrell and Lucas, 2002). Flow reversals were not observed during the lander deployments on the Angolan margin, where winds are reported to be weak throughout the year, providing more stable conditions (Shannon, 2001). Instead, river outflow seems to exert a strong influence on the DO concentrations on the Angolan margin. The runoff of the Cuanza and Congo rivers reaches its seasonal maximum in December and January (Kopte et al., 2017), intensifying upper water column stratification. This stratification restricts vertical mixing and thereby limits ventilation of the oxygen-depleted subsurface water masses. In addition, rivers transport terrestrial organic matter to the margin, which is reflected by the δ15N signals of the SPOM (−1 ‰ to 3 ‰; Montoya, 2007), well below the average isotopic ratio of marine waters of 5.5 ‰ (Meisel et al., 2011). The δ13C values are also in line (Boutton, 1991; Holmes et al., 1997; Sigman et al., 2009) with the δ13C values of terrestrial matter, which is on average −27 ‰ in this area (Boutton, 1991; Mariotti et al., 1991). The C/N ratio of the SPOM is higher compared to material from the Namibian margin, also confirming admixing of terrestrial matter (Perdue and Koprivnjak, 2007). This terrestrial matter contains suitable food sources as well as less suitable ones, like carbon-rich polymeric material (cellulose, hemicellulose and lignin), which cannot easily be taken up by marine organisms (Hedges and Oades, 1997). The combined effects of decreased vertical mixing and additional input of organic matter potentially result in the lowest DO concentrations of the year during the investigated time period (January), since the highest river outflow and therefore the strongest stratification are expected during this period.
Main stressors -oxygen and temperature
Environmental conditions marked by severe hypoxia and temporary anoxia (< 0.17 mL L−1) likely explain the present-day absence of living CWCs along the Namibian margin. During the measurement period, the DO concentrations off Namibia were considerably lower than the thus far recorded minimum concentrations near living CWCs (1-1.3 mL L−1), which were found off Mauritania, where only isolated living CWCs occur. Age dating of the Namibian fossil coral framework showed that CWCs disappeared about 5 ka, which coincides with an intensification in upwelling and therefore most likely a decline in DO concentrations (Tamborrino et al., 2019), supporting the assumption that the low DO concentrations are responsible for the demise of CWCs on the Namibian margin. Although no living corals were observed on the Namibian coral mounds, we observed a dense living community dominated by sponges and bryozoans. Several sponge species have been reported to survive at extremely low DO concentrations within OMZs. For instance, along the lower boundary of the Peruvian OMZ, sponges were found at DO concentrations as low as 0.06-0.18 mL L−1 (Mosch et al., 2012). Mills et al. (2018) recently found a sponge (Tethya wilhelma) to be physiologically almost insensitive to oxygen stress and to respire aerobically under low DO concentrations (0.02 mL L−1). Sponges can potentially stop their metabolic activity during unfavorable conditions and restart their metabolism when some oxygen becomes available, for instance during diurnal irrigation with water of somewhat higher DO concentrations. The existence of a living sponge community off Namibia might therefore be explained by the diurnal tides occasionally flushing the sponges with more oxic water, enabling them to metabolize when food availability is highest (Fig. 6). Increased biomass and abundances in such temporarily hypoxic-anoxic transition zones have already been observed for macro- and megafauna in other OMZs and are referred to as the "edge effect" (Mullins et al., 1985; Levin et al., 1991; Sanders, 1969). It is very likely that this mechanism plays a role for the benthic communities on the Namibian as well as the Angolan margin. Along the Angolan margin, low oxygen concentrations apparently do not restrict the proliferation of thriving CWC reefs, even though DO concentrations are considered hypoxic (0.5-1.5 mL L−1). The DO concentrations measured off Angola are well below the lower DO limits for L. pertusa based on laboratory experiments and earlier field observations (Schroeder, 2002; Brooke and Ross, 2014). The DO concentrations encountered at the shallow mound sites (< 0.8 mL L−1) are even below the so far lowest limits known for single CWC colonies from the Mauritanian margin (Ramos et al., 2017b). Since the DO concentrations measured in the present study were even lower than the earlier established lower limits, this could suggest a much higher tolerance of L. pertusa to oxygen levels as low as 0.5 mL L−1 (4 % O2 saturation), at least over limited time periods.
In addition to oxygen stress, heat stress is expected to put additional pressure on CWCs. Temperatures at the CWC mounds off Angola ranged from 6.4 to 12.6 °C, with the upper limit being close to reported maximum temperatures (∼ 12-14.9 °C; Davies and Guinotte, 2011), which is hence expected to impair the ability of CWCs to form mounds (see Wienberg and Titschack, 2017). The CWCs also occurred outside of the expected density envelope of 27.35-27.65 kg m−3, at densities well below 27 kg m−3 (Fig. 3; Dullo et al., 2008). In most aquatic invertebrates, respiration rates roughly double with every 10 °C increase (Q10 temperature coefficient = 2-3, e.g., Coma, 2002), which at the same time doubles energy demand. Dodds et al. (2007) found a doubling of the respiration rate of L. pertusa with an increase in ambient temperature of only 2 °C (viz. Q10 = 7-8). This would limit the survival of L. pertusa at high temperatures to areas where the increased energy demand (due to increased respiration) can be compensated by high food availability. Higher respiration rates also imply that enough oxygen needs to be available for the increased respiration. However, this creates a negative feedback, since with increased food availability and higher temperatures the oxygen concentration will decrease due to bacterial decomposition of organic matter.
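The Q10 reasoning used here can be made explicit with the generic relationship R2 = R1 * Q10^((T2 - T1)/10); the sketch below uses the generic Q10 range of 2-3 quoted above, and the temperature steps and the mid-range Q10 of 2.5 are illustrative assumptions rather than measured values.

```r
# Generic Q10 scaling of respiration: factor increase over a warming step dT.
q10_factor <- function(q10, dT) q10^(dT / 10)

q10_factor(q10 = 2, dT = 10)    # respiration doubles over 10 deg C (Q10 = 2)
q10_factor(q10 = 3, dT = 10)    # triples over 10 deg C (Q10 = 3)

# Illustrative factor over the ~2 deg C tidal temperature range at the mounds,
# assuming a mid-range Q10 of 2.5 (an assumption, not a measured value).
q10_factor(q10 = 2.5, dT = 2)   # about 1.2
```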
Survival of L. pertusa under hypoxic conditions along the shallow Angolan CWC areas is probably positively influenced by the fact that periods of highest temperatures coincide with the highest DO concentrations during the tidal cycle. Here, the increase of one stressor is probably compensated by the reduction of another stressor. On the Namibian margin and at the deeper Angolan mound sites the opposite pattern was found, with the highest temperatures occurring during the lowest DO concentrations. However, at the deeper Angolan mound sites DO concentrations are higher and temperatures are more within a suitable range compared to the shallow sites (0.9-1.5 mL L−1, 6.4-8 °C, Fig. 7). Additionally, ex situ experiments have shown that L. pertusa is able to survive periods of hypoxic conditions similar to those found along the Angolan margin for several days, which could be crucial in periods of most adverse conditions (Dodds et al., 2007).
Food supply
As mentioned above, environmental stresses like high temperature or low DO concentration result in a loss of energy (Odum, 1971; Sokolova et al., 2012), which needs to be balanced by an increased energy (food) availability. Food availability therefore plays a significant role in faunal abundance under hypoxia or unfavorable temperatures (Diaz and Rosenberg, 1995). Above, we argued that the survival of sponges and bryozoans on the relict mounds off Namibia, and of CWCs and their associated fauna at the Angolan margin, may be partly due to a high input of high-quality organic matter, compensating oxygen and thermal stresses. The importance of food availability for CWCs was already suggested by Eisele et al. (2011), who mechanistically linked CWC mound growth periods with enhanced surface water productivity and hence organic matter supply. Here we found evidence for high quality and quantity of SPOM in both areas, indicated by high TOC and N concentrations (Figs. 6 and 7) in combination with a low C/N ratio (Fig. 8), low δ15N values and only slightly degraded pigments.
The Namibian margin is known for its upwelling cells, where phytoplankton growth is fueled by nutrients from deeper water layers, producing high amounts of phytodetritus (Chapman and Shannon, 1985), which subsequently sinks down to the relict mounds on the slope. Benthic communities on the mounds off Namibia occur at relatively shallow depths; hence, downward transport of SPOM from the surface waters is rapid and the time for decomposition of the sinking particles in the water column is limited. The higher turbidity during lower current speeds provides additional evidence that the material settling from the surface is not transported away with the strong currents (Fig. 6).
At the Angolan coral mounds, SPOM appeared to have a signature corresponding to higher-quality organic matter compared to the SPOM off Namibia. The phytopigments were less degraded, and the δ15N values as well as the TOC and N concentrations of the SPOM were lower. However, here lower δ15N values and a higher phaeopigment / chlorophyll a ratio are likely connected to an admixture of terrestrial OM input, which might constitute a less suitable food source for CWCs (Hedges and Oades, 1997). On the other hand, the riverine input delivers dissolved nutrients, which can support the growth of phytoplankton, indirectly influencing food supply (Kiriakoulakis et al., 2007; Mienis et al., 2012). Moreover, the variations in food quality at the shallow Angolan reefs, which were relatively small during this study, did not seem to be related to the presence of other environmental stressors. At the Angolan margin we see a rather constant availability of SPOM. The slightly higher turbidity during periods of highest DO concentrations (Fig. 7) suggests that the SPOM on the Angolan margin originates from the bottom nepheloid layer on the margin directly above the CWC mounds (Fig. 5e), which may represent a constant reservoir of fresh SPOM. This reservoir is probably fueled by directly sinking as well as advected organic matter from the surface ocean.
Tidal currents
The semidiurnal tidal currents observed probably play a major role in the survival of benthic fauna on the SW African margin. On the Namibian margin, internal waves deliver oxygen from the surface and deeper waters to the OMZ and thereby enable benthic fauna on the fossil coral framework to survive in hypoxic conditions (Fig. 9a). At the same time, these currents are probably responsible for the delivery of fresh SPOM from the productive surface zone to the communities on the margin, since they promote mixing between the water masses as well as vertically displacing the different water layers.
On the Angolan margin, internal tides produce slightly faster currents and vertical excursions of up to 130 m, which are twice as large as those on the Namibian margin. Similar to the Namibian margin, these tidal excursions deliver oxygen from shallower and deeper waters to the mound zone and thereby supply water with more suitable characteristics to those parts of the OMZ which otherwise may be unsuitable for CWCs (Fig. 9b). Internal tides are also responsible for the formation of a bottom nepheloid layer at 200-350 m depth (Fig. 5e). This layer is formed by trapping of organic matter as well as by bottom erosion due to turbulence created by the interaction of internal waves with the margin topography, which intensifies near-bottom water movements. These internal waves can travel along the density gradients between water masses, which are located at 225 and 300 m depth (Fig. 3). Internal tides are amplified where the characteristic slope of the internal M2 tide critically matches the bottom slope of the Angolan margin, as is known from other continental slope regions (Dickson and McCave, 1986; Mienis et al., 2007). As argued above, this turbid layer is likely important for the nutrition of the slightly deeper situated CWC mounds, since vertical mixing is otherwise hindered by the strong stratification.
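The criticality argument refers to the standard slope of internal wave characteristics, s = sqrt((omega^2 - f^2)/(N^2 - omega^2)); a sketch of this calculation is given below, in which both the latitude and the buoyancy frequency N are assumed, order-of-magnitude values rather than results of this study.

```r
# Characteristic slope of internal M2 tide rays; critical (amplifying) reflection
# occurs where the bottom slope matches this value. Latitude and N are assumed.

omega_m2 <- 2 * pi / (12.4206 * 3600)            # M2 frequency (rad s^-1)
lat      <- -10                                  # assumed latitude (deg)
f        <- 2 * 7.292e-5 * sin(lat * pi / 180)   # Coriolis parameter (rad s^-1)
N        <- 2e-3                                 # assumed buoyancy frequency (rad s^-1)

s <- sqrt((omega_m2^2 - f^2) / (N^2 - omega_m2^2))
s                        # ray slope (rise over run)
atan(s) * 180 / pi       # equivalent angle, here on the order of a few degrees
```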
Conclusions
Different environmental properties, including temperature, DO concentration, food supply and tidal movements, explain the present conditions of the benthic communities on the southwestern African margin. The DO concentrations probably define the limits of a suitable habitat for CWCs along the Namibian and Angolan margins, whereas high temperatures constitute an additional stress by increasing the respiration rate and therefore the energy demand. On the Namibian margin, where DO concentrations dropped below 0.01 mL L−1, only fossil CWC mounds covered by a community dominated by sponges and bryozoans were found. This benthic community survives as it periodically receives waters with slightly higher DO concentrations (> 0.03 mL L−1) due to regular tidal oscillations (semidiurnal) and erratic wind events (seasonal). At the same time, the high quality and quantity of SPOM sinking down from the surface water mass enables the epifaunal community to survive despite the oxygen stress and to sustain its metabolic energy demand within the Namibian OMZ, while CWCs are not capable of withstanding such extreme conditions. In contrast, thriving CWCs were encountered on the Angolan coral mounds despite the overall hypoxic conditions. The DO concentrations were slightly higher than those on the Namibian margin but nevertheless below the lowest thresholds so far reported for L. pertusa (Davies et al., 2008, 2010). In combination with temperatures close to the upper limits for L. pertusa, metabolic energy demand probably reached a maximum. High energy requirements might have been compensated by the generally high availability of fresh resuspended SPOM. Fresh SPOM accumulates on the Angolan margin just above the CWC area and is regularly supplied due to mixing by semidiurnal tidal currents, despite the restricted sinking of SPOM from the surface due to the strong stratification.
CWC and sponge communities are known to play an important role as a refuge, feeding ground and nursery for commercial fishes (Miller et al., 2012) and have a crucial role in marine benthic-pelagic coupling (Cathalot et al., 2015). Their ecosystem services are threatened by the expected expansion of OMZs due to anthropogenic activities like rising nutrient loads and climate change (Breitburg et al., 2018). This study showed that benthic fauna is able to cope with low oxygen levels as long as sufficient high-quality food is available. Further, reef-associated sponge grounds, as encountered on the Namibian margin, could play a crucial role in taking over the function of CWCs in marine carbon cycling as well as in providing a habitat for associated fauna, when conditions become unsuitable for CWCs.
Author contributions. UH analyzed the physical and chemical data, wrote the paper and prepared the figures with contributions from all authors. FM, GD and ML designed the lander research. DH and CW led the cruise and wrote the initial cruise plan. FM and ML collected the data during the research cruise. WCD was responsible for water column measurements with the CTD. AF and ML provided habitat characteristics, including species identification of both CWC areas. KJ performed the tidal analysis and provided, together with SF, data from the SLM lander. All authors contributed to the data interpretation and discussion of the paper.
Long‐term temporal patterns in flight activities of a migrant diurnal butterfly
Recent studies demonstrated that the Painted Lady (Vanessa cardui), a cosmopolitan diurnal butterfly, performs long-range migration between subtropical Africa and north-western Europe, covered by individuals belonging to up to six generations. Here we analyze temporal patterns of the complete annual migratory activity of the Painted Lady in Hungary, located on its Central European migratory route, which was almost completely unstudied before. To do so, we used field occurrence data collected between 2000 and 2019 and estimated temporal patterns in migratory activity by fitting kernel density functions on the daily mean number of individuals and on observation frequency. The temporal distributions of kernel density estimates were analyzed as a function of time and of key climatic predictors of the study area. We found that (i) the timing of spring arrivals has been advancing; (ii) the relative intensity of the first and last migratory peaks of the Painted Lady significantly increased during the past decades; and (iii) the intensity of the last migratory peak is related to the mean temperature of the previous month. This indicates that the migration is shifting to earlier dates and that the volume of the migration has substantially intensified, evoking mutually nonexclusive, competing hypotheses. Our study indicates strengthening migration activities of a southerly distributed, long-distance migrant diurnal butterfly, most probably linked to the northward shift of wintering areas induced by warming trends in the southern parts of Europe. However, the complexity of the likely processes leading to changing migratory strategies calls for further research in both breeding and wintering areas.
Introduction
During the past decades, evidence has been mounting that a number of insect taxa fully or partially migrate between breeding areas and wintering grounds with sufficient nutrient availability, on large geographical or even intercontinental scales, involving regularity both in time and space (Rainey, 1963; Schaefer, 1969; Urquhart & Urquhart, 1978; Dingle, 1996; Hu et al., 2016). On global scales, among a number of migratory insects, a handful of iconic migrants have also been described, mostly consisting of large butterflies, dragonflies or major crop pests (Holland et al., 2006; Chapman et al., 2015). Moreover, other less conspicuous groups of insects also migrate, such as hoverflies (Jauker & Wolters, 2008) and aphids (Tenhumberg & Poehling, 1995).
Recently, climate-induced phenological shifts in migration regimes have frequently been documented in a wide range of organisms, including vertebrates, plants and migratory insects (Cleland et al., 2007;Gordo, 2007;Taylor, 2008;Bell et al., 2015). For example, evidence is accumulating that both diurnal and nocturnal lepidopterans respond to current climatic trends by advancing first emergence dates and prolonging late autumn activities (Dell et al., 2005;Sparks et al., 2006;Végvári et al., 2015). These behavioral changes have been shown to affect migration and wintering strategies as well as population dynamics and extinction risks in a number of species (Bale & Hayward, 2010).
For example, several studies have been carried out on the migration strategies of the Monarch Butterfly (Danaus plexippus) (Zipkin et al., 2012; Zhan et al., 2014), the conclusions of which highlight the complex relationship between climate and migratory performance and suggest that attempts to understand how Monarchs will be able to cope with predicted climate conditions will be challenging. This implies that current and predicted climatic processes might have key relevance for the conservation of migratory lepidopterans (Lemoine & Böhning-Gaese, 2003; Oberhauser & Peterson, 2003). Among diurnal migratory lepidopterans, the Painted Lady (Vanessa cardui, Nymphalidae) has proved to be an optimal model organism for insect migration research, as (i) it exhibits long-distance migratory movements between subtropical Africa and northern Europe, covered by up to six generations (Stefanescu et al., 2013; Talavera & Vila, 2016; Talavera et al., 2018); (ii) although the western flyway connecting NW Europe and West Africa has been thoroughly investigated, migratory routes further to the east are practically undescribed (Stefanescu et al., 2007; Stefanescu et al., 2013; Talavera & Vila, 2016; Talavera et al., 2018; Menchetti et al., 2019); (iii) as it is common and non-protected, Painted Ladies can be easily collected and reared under artificial conditions; (iv) larval food includes common plants (e.g., thistles Carduus and Cirsium spp. and nettles Urtica spp.) that are easy to grow in laboratory studies (Stefanescu, 1997; Janz, 2005; own unpublished observations); and (v) its cosmopolitan distribution covers all continents with the exception of South America. Although the migratory movements of the western population of the Painted Lady have long been studied (Pollard et al., 1998), the effects of climatic variability on migratory strategies have not been investigated before.
Here we aim to analyze temporal patterns of the complete annual migratory activity in the Painted Lady throughout Hungary. To do so, we used field occurrence data collected by volunteers and all of the authors of this work, between 2000 and 2019; thus spanning 20 years, which has been shown to be long enough to detect climatic fingerprints on migration strategies (Végvári et al., 2015). We hypothesized that as a response to current climatic patterns, (i) the spring migration is advancing as a reaction to warming springs in North-Africa and South-Europe; (ii) the population size is growing, as the winter mortality decreases in this warm-adapted species, due to elevated temperatures; and (iii) the intensity of migration is related to mean temperature values and precipitation totals (Holland et al., 2006;Sparks et al., 2007).
Painted Lady data
We collected occurrence data of adult Painted Ladies between 2000 and 2019 using (i) publicly available records of entomological websites, which collate data on a broad range of insects between late February and late November-thus covering the whole length of the migratory period of the easily identifiable study species in the study area-uploaded by citizen scientists the data of whom are regularly controlled by specialists (www.izeltlabuak.hu, lepketerkep.termeszet.org) and (ii) our observations recorded between 2014 and 2019, following the methodology of the Hungarian Butterfly Survey (www.lepkeszet.hu). The observations were collected using standard manual GPS devices or mobile phones, providing location coordinates with a resolution less than 10 meters. Applying these records, first we created a database including the following variables for each observation: date, time of the day (hours and minutes), geographical coordinates, and number of individuals counted along a transect. To control for observation effort, we grouped the complete set of observations by day and calculated the daily mean number of individuals. As the length of survey walks showed no temporal differences, we consider the daily mean value of observed individuals to be unaffected by sampling bias. Further, as we found no effect of weekend days on the number of individuals (linear regression with Poisson error distribution, F 1, 1945 = 1.131, P = 0.288, multiple R 2 = 0.001), we consider the dataset to be unaffected by temporal variance in observation activity.
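A compact R sketch of the daily aggregation and the weekend-effect check is given below; the data frame 'records' and its columns 'date' and 'count' are placeholder names for the raw observation table, not the actual database fields.

```r
# Daily mean number of individuals and a weekend-effect check.
# 'records' is an assumed data frame with columns 'date' (Date) and 'count'
# (number of individuals counted along a transect).

daily <- aggregate(count ~ date, data = records, FUN = mean)   # daily means

# Weekend indicator (Saturday = 6, Sunday = 7) and Poisson-error regression
records$weekend <- format(records$date, "%u") %in% c("6", "7")
summary(glm(count ~ weekend, family = poisson, data = records))
```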
To estimate potential spatial autocorrelation patterns in the records, we performed Moran's I test for the (i) mean and (ii) median number of individuals as well as for (iii) the observation frequency, defined as the number of species occurrences reported per day for each year. As a result, no spatial autocorrelation pattern emerged in the (i) mean number of individuals (Moran's I, observed = 0.010, expected = −0.0001, N = 1102, P = 0.268); (ii) median number of individuals (Moran's I, observed = 0.009, expected = −0.001, N = 1102, P = 0.310); and (iii) observation frequency (Moran's I, observed = 0.004, expected = −0.001, N = 1102, P = 0.619). This implies that the response variables are not affected by spatial autocorrelation.
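Moran's I can be computed, for example, with the ape package and an inverse-distance weight matrix, as sketched below; the object and column names are placeholders, and plain Euclidean distance on longitude/latitude is used here as a rough proxy for true distances.

```r
# Spatial autocorrelation of mean counts via Moran's I (ape package).
# 'obs' is an assumed data frame with columns 'lon', 'lat' and 'mean_count'.
library(ape)

d <- as.matrix(dist(cbind(obs$lon, obs$lat)))   # pairwise distances
w <- 1 / d                                      # inverse-distance weights
diag(w) <- 0
w[!is.finite(w)] <- 0                           # guard against coincident points

Moran.I(obs$mean_count, weight = w)
```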
To investigate the potential effects of the temporal distribution of sample days, we considered all sampling dates of the citizen science project for collecting butterfly data and analyzed the temporal patterns of the number of sampling days divided into 10-d periods. To do so, for each study year we fitted linear regressions on the number of sampling dates as a function of the number of 10-d periods during the flight season of the Painted Lady. This calculation showed that the number of survey dates was independent of time in all of the study years (linear regression, N = 16, P ≥ 0.227 for all cases). Therefore, we consider that the temporal distribution of the Painted Lady data was not influenced by temporal changes in sampling intensity in our dataset.
To test the effect of elevation on the temporal distribution of Painted Lady records, we first divided the dataset into records from low (< 300 m) and high (≥ 300 m) altitudes, as a threshold of 300 m above sea level is supported by habitat distributions in the study region and also by the histogram of elevation data in our dataset (Mezősi, 2016). In the next step, we calculated the temporal distribution of the Painted Lady by fitting kernel density estimates and compared the distributions of the (i) Julian date and (ii) relative height of the 95 % quantile of the kernel density peaks by two-sample t-tests, as the kernel density curves are normally distributed. We found that neither the 95 % Julian dates nor the relative heights differed between high and low elevations (t-test, t = −0.055, df = 24.073, P = 0.619 for Julian dates and t = −1.652, df = 28.479, P = 0.109 for relative heights), indicating no difference in kernel density distributions between low and high elevations.
Flight activity
To assess temporal changes in migration phenology, we first computed two metrics of spring migration phenology, which control for the nonindependence of records collected on the same observation days: (i) earliest arrival dates, defined as the Julian date of the 5% percentile of the height of the first migration wave; and (ii) median arrival dates, defined as the Julian date of the 50% percentile of the height of the first migration wave, estimating the arrival time of the bulk of the population; as well as a temporal predictor of the last migratory peak, quantified as the Julian date of the 95% percentile of the height of the last migratory peak, characteristic of late summer migration (Tryjanowski & Sparks, 2001; Végvári et al., 2015).
To identify separate peaks of flying activity, which represent separate migratory waves, we applied kernel density estimation fitted on the (i) daily means of individual numbers and (ii) frequency of observations, restricted to years with at least 15 individuals in total, using the default function available in the R 3.4.4 statistical programming environment (R Development Core Team, 2018). Kernel density estimates can be considered a 'smoothed' version of a histogram. The height of distinct modes assesses the relative intensity of migration waves. We used the default kernel bandwidth, which is derived from the raw data and which is scale-invariant (R Development Core Team, 2018). For example, in the resulting distributions, a unimodal kernel density distribution indicates a single migratory wave in a year, while a bimodal distribution is a proxy for two migratory waves in a given year. Consequently, we calculated the number of flight activity peaks by numerically computing the first derivatives of the curves and identifying local maxima by computationally finding zero slopes of the derivative (Végvári et al., 2015).
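The peak-detection step can be sketched as follows; 'julian_day' is a placeholder vector of observation dates (day of year) for a single year, standing in for the daily means actually used.

```r
# Kernel density estimate of flight activity for one year and its local maxima.
kde <- density(julian_day)                    # default, data-derived bandwidth

slope <- diff(kde$y)                          # numerical first derivative
peaks <- which(diff(sign(slope)) == -2) + 1   # sign change + to - marks a maximum

length(peaks)               # number of migratory waves in that year
kde$x[peaks]                # Julian dates of the activity peaks
kde$y[peaks] / max(kde$y)   # relative intensity (height) of each peak
```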
Climatic responsiveness
Temporal patterns in activity peaks were analyzed using two approaches.
First, to investigate temporal patterns in the timing of activity peaks, we fitted linear regressions on the (a) Julian dates and (b) relative intensity (height) of the (i) 5% (FED), (ii) 50% (MED), and (iii) 95% (LED) percentiles of kernel density maxima fitted on the mean number of individuals per Julian date as a function of year. FED-shifts describe the modifications in the migration strategies of the earliest arriving part of the population, which is a robust predictor of population responses in fast changing climatic processes. In contrast, changes in MED represent the temporal shifts in the timing of arrival of the population bulk. Alternatively, changes in LED is a predictor of modification in wintering/hibernation strategies (Diamond et al., 2011;Végvári et al., 2015).
Next, we repeated the kernel density analyses separately for the southern and the northern part of Hungary, divided by the mean latitude of the latitudinal range of Hungary, as the southern part of the country is a candidate wintering area of Painted Ladies (Z. Varga, pers. comm.), which implies that the number of migratory waves can differ from that of the northern part, owing to differential dispersal and migration strategies.
In the second approach, we repeated these analyses for kernel density estimates fitted on the frequency of observations (Supp et al., 2015). To estimate relationships between flight activity and climatic parameters, we again used two approaches. First, for each year, we calculated the precipitation totals and mean temperature averaged for Hungary and fitted linear regressions on the annual number of flight activity peaks as a function of these climatic proxies. Second, for each activity peak, we calculated the precipitation totals and mean temperature for the (i) actual and (ii) previous months as well as for the (iii) previous year, averaged for the study area, and fitted multivariate linear regressions on the relative height of the kernel density estimate as a function of the Julian date of the activity peak as well as the precipitation sum and mean temperature of the (i) actual and (ii) previous month as well as of (iii) the previous year.
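The two regression approaches can be summarized in a few lines of R; the data frame 'peaks' and its columns are placeholders for the peak metrics and climate covariates described above.

```r
# Sketch of the regression approaches. 'peaks' is an assumed data frame with one
# row per year (or per activity peak) and placeholder columns: year, fed_date,
# peak_height, peak_date, temp_prev_month, precip_prev_month.

# (1) Temporal trend in the timing of the first arrivals (FED)
summary(lm(fed_date ~ year, data = peaks))

# (2) Peak intensity as a function of timing and previous-month climate
summary(lm(peak_height ~ peak_date + temp_prev_month + precip_prev_month,
           data = peaks))
```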
Results
Our dataset includes 1947 records in total, spanning 20 years (2000-2019, Fig. 1). The number of kernel density peaks ranged between 1 and 3 (Fig. 2) considering the whole of the country, and ranged between 1 and 5 in South-Hungary, and between 1 and 3 in the northern part of the country. In the whole of the country, the mean of annual medians of observation data was found to be 22 June (interquantile range: 15 May to 9 July, Table 1).
Mean number of individuals
Temporal trends

We detected no temporal trend in the number of activity peaks, identified as the number of local maxima of kernel density estimates (linear regression, b = 0.018, df = 12, P = 0.579). Similarly, no temporal patterns emerged when considering the southern and the northern part of the country separately (linear regression, b = 0.206, df = 6, P = 0.155 for South-Hungary and linear regression, b = 0.004, df = 11, P = 0.911 for North-Hungary). We found no correlation between the annual number of migration peaks of the separate regions (Pearson's correlation, r = 0.095, P = 0.824).
Climatic responsiveness
We detected a significant advancement of temporal patterns in FED (linear regression, b = −2.788, df = 12, P = 0.017, Table 2, Fig. 3), but not in MED and LED dates (linear regression, N = 13, P ≥ 0.080 for both cases, Table 2). In contrast, the relative intensity (height) of all three migration metrics increased significantly over time (linear regression, df = 12, P ≤ 0.031 for all cases; Table 2; Fig. 4A-C).
Across the 13 years with representative sample size, we found no relationship between the annual number of flight activity peaks and annual precipitation totals or mean temperatures calculated either for the actual or the previous year (linear regression, N = 13, P ≥ 0.598 for all cases, Table 2).
In contrast, the relative intensity of the 95% percentile of the last flight peak was negatively associated with the Julian date (linear regression, b = −0.015, df = 24, P = 0.047) and positively related to the mean temperature of the previous month (linear regression, b = 0.111, df = 25, P = 0.008, Table 2). All other associations between the 5%, 50%, and 95% percentiles of the kernel density estimates and the monthly climatic parameters calculated for the actual or previous months proved to be nonsignificant (linear regressions, df = 25, P ≥ 0.055 for all cases, Table 2).
Frequency of observations
Similar to the FED trends calculated for the mean number of individuals, we detected a significant temporal pattern in FED dates as a function of year (linear regression, b = −3.410, df = 12, P = 0.017), indicating the advancement of the arrival of the first individuals. In line with the MED and LED trends for the mean number of individuals, no temporal patterns emerged for the timing of the bulk and of the last of the migrating waves (linear regressions, df = 12, P ≥ 0.075 for both cases).
In contrast to kernel density estimates fitted on the mean number of individuals, the relative intensity of the 5% percentile of the first migratory wave estimated by kernel density distribution fitted on observation frequency showed no temporal trend (b < 0.0001, df = 12, P = 0.459). Whereas the 50% percentile of the first migratory peak exhibited no temporal pattern across the years (b = 0.024, df = 12, P = 0.139), the volume of the last migratory peak, measured as the 95% percentile of the kernel density estimate increased significantly as a function of years (b = 0.161, df = 12, P = 0.043), similarly to kernel density estimates fitted on the mean number of individuals.
Across the 12 years with representative sample size, we found no relationship between the annual number of flight activity peaks and either annual mean temperature (linear regression, b = −0.030, df = 11, P = 0.904) or annual precipitation totals (linear regression, b = 0.001, df = 11, P = 0.587). Similarly, the annual number of migratory peaks was independent of both the annual mean temperature (linear regression, b = −0.417, df = 11, P = 0.086) and the precipitation totals (linear regression, b = 0.001, df = 11, P = 0.648) of the previous year.
The relative intensity of the flight peak was related neither to the Julian date of the peak nor to any of the climatic proxies calculated for the current or previous months (multivariate linear regression, df = 21, P ≥ 0.214 for all cases), consistent with the results based on kernel density estimates fitted to the mean number of individuals.
Discussion
Our study provided the following key results. First, we consistently (i.e., supported by models fitted to kernel density estimates computed using both the mean number of individuals and the observation frequency) detected an advancement of the arrival of the 5% percentile of the first migratory wave across the study years, which indicates a shift of the timing of the spring arrival of Painted Ladies to earlier dates. Second, the intensity of the first and last migratory peaks (the latter a proxy of late summer migration), measured as the 5% and 95% percentiles of the kernel density estimates, increased significantly between 2000 and 2019. Third, the volume of the 95% percentile of the kernel density estimate of the last peak during late summer migration was positively related to the mean temperature of the previous month, shown only by linear regressions fitted to the kernel density estimates computed using the mean number of individuals.
The advancement of the timing of the earliest arrivals is in line with a number of studies consistently showing, across a broad taxonomic range, that even long-distance migrants change migration strategies by arriving earlier on the breeding grounds, which is considered an adaptation to warmer temperatures (Jonzén et al., 2006; Sparks et al., 2007). Indeed, the earliest arriving individuals benefit from (i) reduced competition for resources and (ii) avoiding predators that focus on the arrival of the bulk of the population, that is, which optimize for matching food peaks (Jonzén et al., 2007). In contrast, the timing of the median Julian dates of the first migratory wave and the 95% percentile of the last migratory wave did not change during the past two decades, implying that both the bulk of the population and those arriving during the late summer migration wave exhibit a conservative migration strategy, which we previously demonstrated in a larger taxonomic subset of a lepidopteran family (Végvári et al., 2015).
Our finding that the relative intensity of both the spring and the late summer activity peaks of the Painted Lady significantly and consistently increased during the past decades implies that the volume of the migration waves of Painted Ladies has substantially intensified, independent of the timing of migration. To explain this pattern, we propose several mutually nonexclusive hypotheses. First, this finding is in line with a number of investigations demonstrating increasing population sizes in insects with southern distribution areas, which benefit from current warming trends (Vanhanen et al., 2007). Indeed, this pattern might be related to decreased mortality due to improved wintering conditions, or to decreased distances between breeding and wintering sites, involving lower predation risk and a smaller likelihood of adverse climatic events. In line with this hypothesis, current field observations indicate that Painted Ladies might winter as far north as both South- and North-Hungary (János Tóth, Zoltán Varga, unpublished data). Second, warming winters might increase the fitness of the first emerging individuals, which might thus have higher breeding success, resulting again in increased population sizes arriving in Central Europe (Sparks et al., 2007). Third, bioclimatic studies indicate that climatic processes might induce substantial changes in migratory routes, which suggests that during the study period we might have detected individuals from various wintering areas; the plausibility of this is also coupled with fast changes in precipitation patterns of the Sahel zone in Africa, where Painted Ladies are known to winter (Biasutti, 2019). Further, several migratory animal species have changed migration strategies not only by advancing the onset of spring migration but also by shifting their wintering grounds northwards (Bókony et al., 2019). Indeed, current climatic scenarios consistently indicate a fast northward shift of the frost-free zones within Europe, supported by the novel overwintering of southerly distributed bird species not experienced before the 1980s (Leito et al., 2015; Bókony et al., 2019).
Our finding that the volume of the last migratory wave was positively related to the mean temperature of the previous month, supported by models based on the mean number of individuals but not by those based on observation frequency, indicates that increased temperatures improve survival rates, which parallels the preference of Painted Ladies for hotter and drier climates.
Parallel to our findings, the high complexity of the relationships between climatic parameters and breeding phenology in Monarch Butterflies along their migratory route has already been documented by previous studies, which found that climate acts in conflicting ways during the spring and summer seasons, as the various generations along the same migratory track experience different climatic trends (Zipkin et al., 2012). Similarly, a study on this migrant diurnal butterfly indicates increased climatic sensitivity through the predicted fast northward shift of the wintering areas (Batalden et al., 2014).
In line with our findings, previous studies anticipate an increase of the migratory population of the Painted Lady, eventually leading to its classification as an agricultural pest, warranting further studies of its migratory behavior and food plant selection that might inform agricultural planning. For example, the caterpillars of migrant Painted Ladies can also feed on soybean and sunflower if allowed by the timing of migration (Poston et al., 1977; Charlet et al., 1987), which has been shown to be related to current climatic processes (Hódar & Zamora, 2004).
The increasing migratory population size of the southerly distributed Painted Lady parallels the findings of previous studies showing that the number of species of migratory lepidopterans in the south of the United Kingdom has been rising steadily, which is closely linked to rising temperatures in SW Europe. It is predicted that further climate warming within Europe will increase the number of migratory lepidopterans reaching the United Kingdom, and the consequences of this influx also need urgent attention (Sparks et al., 2007).
Migratory animals have been shown to be especially vulnerable to current global climatic trends owing to their dependence on spatially distributed resources, which may be differentially influenced by climate (Lemoine & Böhning-Gaese, 2003). Thus, a number of populations of migratory lepidopterans (Oberhauser & Peterson, 2003) are considered endangered by current climatic trends, highlighting the importance of longitudinal studies on the climatic responsiveness of model butterflies; this motivated us to analyze the temporal patterns of migratory activity of the Painted Lady throughout Hungary. Indeed, recent studies highlight the highly complex responsiveness of insects to current climatic trends, which is amplified by the interconnectedness of climatic processes acting on various (i) generations, (ii) stages of the life cycle, and (iii) sections of migratory routes (Forrest, 2016). These relationships pose exceptional challenges for predicting changes in phenology, breeding and wintering ranges, as well as population dynamics and extinction risks in migratory insects.
In sum, our study indicates intensifying migration activity of a southerly distributed, long-distance migrant diurnal butterfly, most probably linked to the northward shift of wintering areas induced by warming trends in the southern parts of Europe. However, the complexity of the processes likely leading to changing migratory strategies calls for further research in both the breeding and the wintering areas.
The Bioaccessibility of Yak Bone Collagen Hydrolysates: Focus on Analyzing the Variation Regular of Peptides and Free Amino Acids
The lack of a bioaccessibility test for yak bone collagen hydrolysates (YBCH) limits their development as functional foods. In this study, simulated gastrointestinal digestion (SD) and absorption (SA) models were utilized to evaluate the bioaccessibility of YBCH for the first time. The variation in peptides and free amino acids was primarily characterized. There was no significant alteration in the concentration of peptides during the SD. The transport rate of peptides through the Caco-2 cell monolayers was 22.14 ± 1.58%. Finally, a total of 440 peptides were identified, more than 75% of them with lengths ranging from 7 to 15. The peptide identification indicated that about 77% of the peptides in the beginning sample still existed after the SD, and about 76% of the peptides in the digested YBCH could be observed after the SA. These results suggested that most peptides in the YBCH resist gastrointestinal digestion and absorption. After the in silico prediction, seven typical bioavailable bioactive peptides were screened out and they exhibited multi-type bioactivities in vitro. This is the first study to characterize the changes in peptides and amino acids in the YBCH during gastrointestinal digestion and absorption, and provides a foundation for analyzing the mechanism of YBCH’s bioactivities.
Introduction
Collagen peptide is mainly produced by extraction, hydrolysis, and refining of fresh animal tissues rich in collagen, such as skins, bones, tendons, and scales [1]. In addition to providing a nutritional function, collagen peptide possesses the ability to regulate physiological activities such as modulating immunity, reducing obesity, alleviating osteoporosis, improving bone density, promoting skin health, etc. [2][3][4][5][6]. With people's increasing attention to health, especially since the COVID-19 outbreak, the demand for collagen peptide is increasing. It has been reported that the global collagen peptide market reached USD 598.1 million in 2020 alone, with a projected annual growth rate of 5.8% from 2021 to 2028 (https://www.zionmarketresearch.com/report/collagen-peptides-market, accessed on 14 January 2023). Currently, considering the product cost and the fact that peptides are the main components of protein hydrolysates, most collagen peptide on the market is sold in the form of collagen hydrolysates.
Yak bone collagen hydrolysates (YBCH) are manufactured by the enzymatic hydrolysis of yak bone collagen. They are a mixture of peptides and free amino acids, among which peptides account for about 88% (w/w) [7]. YBCH have been widely applied in the fields of food and cosmetics due to their various physiological activities. In recent years, studies on YBCH have mainly focused on two aspects. One is to obtain bioactive peptides (BAP) through multistage chromatographic purification and separation. For example, Ye et al. isolated and identified two novel peptides (GPSGPAGKDGRIGQPG and GDRGETGPAGPAGPIGPV) with osteoblast proliferation-promoting activity by employing an ultrafiltration membrane system and high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) [8]. In a similar way, it has been reported that antioxidant peptides can be isolated from YBCH [9]. In addition, many researchers and consumers are more interested in the bioactivity of YBCH than in isolating and purifying BAP from them. Therefore, the other research aim regarding YBCH is to verify their bioactivities through in vivo animal experiments. For example, the immunomodulatory effects of YBCH were tested on cyclophosphamide-induced immunosuppressed BALB/c mice [6]. After the intervention, immunity-related signs indicated that the produced YBCH could effectively prevent and ameliorate immunosuppression by improving innate and adaptive immunity. Additionally, to further confirm the positive effects of YBCH on osteoporosis in vivo, YBCH were supplemented to ovariectomy-induced osteoporotic rats [5]. After ovariectomy, the osteoporosis-related indices of the rats that received YBCH were significantly ameliorated compared with those of the control group. Serum untargeted metabolomics revealed that YBCH intake could protect against or reverse ovariectomy-induced osteoporosis by regulating amino acid metabolism and lipid metabolism. In our previous study, the modulating effects of YBCH on the gut microbiota of mice were investigated [10]. After a 30-day intervention, the ratio of Firmicutes to Bacteroidetes in the fecal microbiota was reduced and the amount of short-chain fatty acids in the fecal matter of the mice was remarkably elevated. Moreover, the anti-obesity effects of YBCH on high-fat-diet mice were investigated [11]. The joint analysis of the microbiome and untargeted metabolomics suggested that the alleviating effects of YBCH on obesity might be achieved by modulating gut microbiota amino acid metabolism.
In summary, these studies showed that YBCH possess multiple types of bioactivity and can be utilized as functional foods or dietary supplements. However, unlike polysaccharides, which can resist gastrointestinal digestion and absorption, peptides face severe challenges in the digestive tract. In addition to the harsh gastrointestinal digestive environment, a variety of proteases and peptidases in the digestive fluid further promote the decomposition of peptides [12]. More importantly, the instability of peptides significantly influences their function and metabolism owing to the loss of key amino acids or the alteration of peptide size [1]. Therefore, whether for developing functional foods or for clarifying the mechanism of their biological effects, it is necessary to evaluate the bioaccessibility of YBCH during gastrointestinal digestion and absorption. Nevertheless, few reports have investigated the changes in YBCH during gastrointestinal digestion and absorption. Owing to their simplicity and universality, simulated gastrointestinal digestion and absorption models have been widely applied in the food field to predict the outcomes of in vivo digestion [13]. Moreover, with the rapid development of peptide identification technology and bioinformatics, a variety of peptide databases and in silico prediction tools have been developed, which make it easier to identify bioactive peptides and analyze their bioactivities [14]. In this study, the objective was to predict the variation pattern of free amino acids and peptides of YBCH during in vivo digestion by employing simulated gastrointestinal digestion and absorption. The peptides in the samples were identified by HPLC-MS/MS, and the biological activities of the bioavailable peptides were predicted by in silico analysis and verified by in vitro tests. This study not only provides a foundation for analyzing the mechanism of YBCH's biological activity but also promotes the application of YBCH in the fields of food and medicine.
Chemical Agents and Preparation of YBCH
The preparation of YBCH was conducted as previously described [10], and the details are shown in the Supplementary Materials. Pepsin from porcine gastric mucosa (P6887) and pancreatin from porcine pancreas (P7545) were purchased from Sigma-Aldrich Co. (St. Louis, MO, USA). Caco-2 cells and murine macrophage RAW 264.7 cells were obtained from the Cell Bank of the Chinese Academy of Sciences (Shanghai, China). Angiotensin-converting enzyme (ACE, from rabbit lung) and the dipeptidyl peptidase IV (DPP-IV) inhibitor screening kit (MAK203) were purchased from Sigma Chemical Co. (St. Louis, MO, USA); Hippuryl-Histidine-Leucine (HHL) and Hippuric Acid (HA) were from Shanghai Maclean Biochemical Technology Co. (Shanghai, China). The cell counting kit (CCK-8) was bought from Dojindo Laboratories (Kumamoto, Japan). The IL-1β, IL-6, TNF-α, and NO assay kits were purchased from Nanjing Jiancheng Bioengineering Institute (Nanjing, China). Other cell culture-related agents such as Dulbecco's Modified Eagle Medium (DMEM), 0.25% (w/v) trypsin-0.91 mM EDTA, and Hank's balanced salt solution (HBSS) were obtained from Gibco Life Technologies (Grand Island, NY, USA). Essential amino acid standard solutions (1 nmol/µL) were purchased from Sigma-Aldrich Co. (St. Louis, MO, USA). Unless stated otherwise, all other chemicals were of analytical grade and purchased from China Pharmaceutical Group Chemical Reagent Co., Ltd. (Shanghai, China).
Simulated Gastrointestinal Digestion (SD)
The SD was operated as previously described with a slight modification [15]. Briefly, 10 g of YBCH was added to 385 mL of deionized water. The solution was stirred well and adjusted to pH 2.0 with 1 M HCl. Pepsin was added to the solution at an enzyme-to-substrate (E/S) ratio of 1:50 (w/w), and the mixture was incubated for 2 h in a water bath shaker (37 °C, 0.01 g) to simulate gastric digestion. The pepsin activity was terminated by adjusting the pH to 7.0 with 1.0 M NaOH. Then, pancreatin was added to the mixture at an E/S ratio of 1:25, and the mixture was placed in the water bath shaker for 2 h (37 °C, 0.01 g) to simulate intestinal digestion. Samples were taken every 30 minutes during the whole process, and digestion in each sample was inactivated by incubation in boiling water for 10 min. After cooling to room temperature and centrifugation (20,000× g, 20 min), the supernatants were lyophilized and stored at −80 °C until further analysis (for a maximum of 2 weeks).
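As a quick check of the enzyme-to-substrate arithmetic above (a convenience helper for the reader, not part of the published protocol), the stated E/S ratios translate into the following enzyme masses for the 10 g of YBCH used here:

```python
# For 10 g of YBCH, an E/S of 1:50 (w/w) corresponds to 0.2 g pepsin and
# 1:25 corresponds to 0.4 g pancreatin.
def enzyme_mass(substrate_g: float, es_ratio: float) -> float:
    """Enzyme mass (g) for a given substrate mass and E/S ratio (enzyme/substrate, w/w)."""
    return substrate_g * es_ratio

print(enzyme_mass(10.0, 1 / 50))   # 0.2 g pepsin for the gastric phase
print(enzyme_mass(10.0, 1 / 25))   # 0.4 g pancreatin for the intestinal phase
```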
Cell Culture
Caco-2 cells were cultured in DMEM medium (containing 10% fetal bovine serum and 1% penicillin-streptomycin) and incubated at 37 °C with 5% CO2. The medium was changed every day. The cells were digested and sub-cultured with 0.25% trypsin solution when they reached 80-90% confluence. Cells in the logarithmic growth phase were used for the experiments. The Caco-2 cells used in this study were at passages 25-35.
Cytotoxicity Test
The CCK-8 kit was employed to detect the survival rate of Caco-2 cells after exposure to different concentrations of simulated digested YBCH. Caco-2 cells were seeded in a 96-well microplate at a density of 1 × 10^5 cells/mL. After incubation for 24 h, the culture medium was removed. The Caco-2 cells were randomly divided into a blank control group (DMEM medium only) and test groups (simulated digested YBCH at 30 mg/mL, 10 mg/mL, 3.3 mg/mL, 1.1 mg/mL, 0.37 mg/mL, and 0.12 mg/mL). The blank wells contained only the same amount of phosphate buffer without cells. Each test was performed in triplicate. After incubation for 6 h, the culture medium was aspirated and 100 µL of fresh culture medium was added. Then, 10 µL of CCK-8 was added to each well and incubated for 4 h at 37 °C with 5% CO2. After that, the optical density (OD) value of each well was measured by a microplate reader at a wavelength of 450 nm. The cell survival rate was calculated according to Formula (1):

cell survival rate (%) = (OD1 − OD3)/(OD2 − OD3) × 100 (1)

where OD1, OD2, and OD3 are the absorbances of the test groups, the blank control group, and the blank wells group, respectively.
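Formula (1) can be expressed as a short helper. This is a sketch assuming the standard CCK-8 relation implied by the definitions of OD1-OD3 above; the replicate OD readings are hypothetical.

```python
# Cell survival rate relative to the untreated control, per Formula (1):
# OD1 = test wells, OD2 = blank control (cells, no sample), OD3 = blank wells (no cells).
import numpy as np

def survival_rate(od_test, od_control, od_blank):
    """Cell survival rate (%) relative to the untreated control."""
    return (np.mean(od_test) - np.mean(od_blank)) / (np.mean(od_control) - np.mean(od_blank)) * 100

print(survival_rate([1.02, 0.99, 1.05], [1.04, 1.01, 1.06], [0.10, 0.11, 0.10]))
```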
Transport Studies
The cellular transport study of the simulated gastrointestinal digested YBCH was conducted as previously described with a slight modification [16]. Briefly, Caco-2 cells were seeded in 24-well transwell inserts (6.5 mm diameter, 0.4 µm pore size; Corning, NY, USA) at a density of 1 × 10^5 cells/cm^2. The volumes of medium on the apical side (AP) and basolateral side (BL) were 0.4 mL and 0.6 mL, respectively. During the incubation, the medium was refreshed every two days in the first week and every day thereafter until the monolayer integrity evaluation. The monolayer integrity was evaluated by measuring the transepithelial electrical resistance (TEER) value, detecting the paracellular permeability of fluorescein sodium, and determining the alkaline phosphatase (ALP) activity of the cell culture medium on the two sides of the transwell, as previously described [17][18][19]. On the test day, the culture medium on both sides was aspirated and the cell monolayers were rinsed twice with HBSS. After that, HBSS was added to both sides (AP, 0.4 mL; BL, 0.6 mL) and incubated for 30 min to stabilize the monolayers prior to the transport studies. Samples (0.4 mL) at a nontoxic concentration (3.3 mg/mL) were added to the AP side, and fresh blank HBSS solution (0.6 mL) was added to the BL side. After incubation for 2 h, the samples on both sides were aspirated and stored at −80 °C until further analysis (for a maximum of 2 weeks). Each test was performed in triplicate.
After the SA, the concentrations of peptides on the two sides of the transwell were analyzed by the OPA (o-phthalaldehyde) assay as previously described [20]. The transport rate was calculated using Formula (2):

transport rate (%) = (C_BL × 0.6)/(C_AP × 0.4 + C_BL × 0.6) × 100 (2)

where C_AP and C_BL are the peptide concentrations (mg/mL) on the AP and BL sides after the SA, respectively.
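Formula (2) translates directly into code. The example below is a small illustrative helper (not the authors' script); the input concentrations simply echo the mean AP/BL values reported later in the Results.

```python
# Fraction of total peptide mass recovered on the basolateral side after the 2 h
# transport experiment (0.4 mL apical, 0.6 mL basolateral), per Formula (2).
def transport_rate(c_ap: float, c_bl: float, v_ap: float = 0.4, v_bl: float = 0.6) -> float:
    """Transport rate (%) from the apical (AP) to the basolateral (BL) compartment."""
    return c_bl * v_bl / (c_ap * v_ap + c_bl * v_bl) * 100

print(round(transport_rate(1.92, 0.37), 2))  # ≈ 22.4, close to the reported 22.14 ± 1.58%
```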
Characterization of the Samples
To comprehensively understand the changes in YBCH during the SD and the SA, the YBCH before and after the SD and the solutions on the AP and BL sides at the end of the cell test were harvested and termed, respectively, the initial sample (STA), the sample after gastrointestinal digestion (SGID), the sample on the AP side (SIA), and the sample on the BL side (SIB). All these samples were then characterized by molecular weight distribution, peptide concentration, free amino acid concentration, and peptide sequence identification. The detection was conducted as previously described with a slight modification [6,20,21], and the details are provided in the Supplementary Materials.
In Vitro Verification
The screened peptides were synthesized by Fmoc solid-phase chemical synthesis (Shanghai RoyoBiotech Co., Shanghai, China). Their structures were verified by LC-MS, and peptides with a purity above 98% were used for the subsequent verification of their antihypertensive, antidiabetic, anti-inflammatory, and antioxidant abilities. The detection methods were performed as previously described [27][28][29], and the details are provided in the Supplementary Materials.
Statistical Analysis
The software SPSS 25.0 (SPSS Inc., Chicago, IL, USA) was employed to analyze the data. One-way ANOVA followed by an LSD test (equal variances assumed) or a Games-Howell test (equal variances not assumed) was performed to evaluate significant differences between samples. Unless stated otherwise, each test was performed in triplicate and the results are presented as mean ± SD. A significant difference was accepted at p < 0.05, and p < 0.01 was considered highly significant.
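A minimal sketch of this group comparison using SciPy in place of SPSS is shown below; the replicate values are hypothetical, and the LSD and Games-Howell post hoc tests themselves are not reproduced here.

```python
# Levene's test checks homogeneity of variances (deciding which post hoc test the
# authors would apply); f_oneway then performs the one-way ANOVA across groups.
from scipy.stats import levene, f_oneway

sta  = [21.9, 22.4, 21.5]   # hypothetical replicates, e.g. peptide concentration (mg/mL)
sgid = [19.9, 19.5, 20.2]

print(levene(sta, sgid))     # equal variances assumed if p >= 0.05
print(f_oneway(sta, sgid))   # one-way ANOVA across the groups
```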
Molecular Weight Distribution and Concentration of Peptides
The molecular weight distributions during the SD and the SA are shown in Figure 1A. During the SD, the proportion of peptides with a molecular weight below 1000 Da gradually increased, while the proportions of peptides above 5000 Da and between 2000 and 3000 Da decreased. Interestingly, the proportions of the 3000-5000 Da (around 9%) and 1000-2000 Da (around 19%) fractions were not significantly altered. After the 2 h SA, the molecular weight distribution on the AP side was similar to that at the beginning (SGID-4.0, Figure 1A); only the 189-500 Da fraction was significantly reduced, from 23.09 ± 2.06% to 13.75 ± 1.09%. On the BL side, peptides with a molecular weight below 500 Da accounted for more than 75%, indicating that tripeptides, dipeptides, and free amino acids were the main components on the BL side.
The alteration in peptide concentration was also monitored during the SD and the SA (Figure 1B). The concentration of peptides in the STA was 21.95 ± 1.73 mg/mL, while it was 19.87 ± 0.40 mg/mL after the SD. Although there was a reduction in the concentration of peptides during the SD, statistical analysis indicated that the difference was not significant (p > 0.05, Games-Howell test). After the SA, the peptide concentrations on the AP and BL sides were 1.92 ± 0.04 mg/mL and 0.37 ± 0.04 mg/mL, respectively.
Free Amino Acids Alteration

The total concentration of free amino acids nearly doubled after the SD (from 135 ± 3 × 10^−2 mg/mL to 266 ± 8 × 10^−2 mg/mL, Figure 2A). This might be induced by the increase in arginine, tyrosine, phenylalanine, leucine, and lysine (Figure 2A). These amino acids were the main components of the free amino acids after the SD; their total mass was above 2.0 mg/mL and their increase ratios were all above 80%. Interestingly, none of these five amino acids was significantly altered during the simulated gastric digestion; the remarkable increase occurred during the subsequent simulated intestinal digestion.

Comparing the concentrations of free amino acids before and after the SA, the most increased amino acid was glycine (Figure 2B). The concentration of glycine increased from 0.98 × 10^−2 mg/mL to 5.89 × 10^−2 mg/mL, an increase of 501.02%. This was followed by proline and tyrosine, whose increase ratios were 338.46% and 234.41%, respectively. Moreover, leucine, phenylalanine, methionine, alanine, and histidine all increased by more than 100%. The free amino acid distribution on the two sides of the transwell is shown in Figure 2B: 82.5% of the proline was transported to the BL side, whereas only 6.40% of the arginine was distributed on the BL side. These results suggest that intestinal cells might have different transport capacities for the different free amino acids in YBCH.
Transport Study

The cell toxicity and monolayer integrity were first studied before the SA. Results indicated that the digested YBCH had no toxicity to the Caco-2 cells. The survival rate of the cells under different concentrations of the digested YBCH was above 90%. Notably, the cell survival rate was almost 100% at concentrations below 3.33 mg/mL (Figure 3A). To avoid inaccurate results of transport and absorption due to saturation of the peptides during transport, the concentration was set at 3.33 mg/mL for the subsequent SA test. After 21 days of incubation, the TEER value was 595 ± 18.06 Ω·cm² (Figure 3B) and the ALP ratio of the AP to BL side was 7.43 ± 0.51 (Figure 3C). Moreover, the paracellular permeability rate of fluorescein sodium was 2.63 ± 0.34%, which was significantly lower than that of the blank control (23.89 ± 0.6%, Figure 3D). These results indicate that the cell monolayer was suitable for the transport study. Calculated according to the concentrations of peptides on the two sides (Figure 1B), the transport rate was 22.14 ± 1.58%.
Identification of Peptides

Amongst all the samples taken, a total of 440 peptides were identified (the information on the 440 peptides was uploaded to Mendeley Data, https://data.mendeley.com/datasets/s3j9vpfdff/1, accessed on 23 February 2023, file name: S1-peptideSummary). The number of peptides in the STA, SGID, SIA, and SIB was 251, 248, 232, and 97, respectively (Figure 4A). Among these identified peptides, one peptide (PGPAGPAGP) in the STA and two peptides (PGPAGPA and PGAVGPA) in the SIA belonged to both the collagen alpha-1(I) chain and the collagen alpha-2(I) chain. The shortest peptide length was 7, while the longest was 31 (Figure 4B). The percentage of peptides with lengths ranging from 7 to 15 was more than 75%. The uniqueness and overlap of peptides in each sample are displayed in the Venn diagram (Figure 4C). The number of peptides identified in both the STA and the SGID was 193. During the SD, 55 new peptides were produced. Compared with the peptides in the SIA, only six new peptides appeared in the SIB after the SA. Notably, 145 peptides existed simultaneously in the STA, SGID, and SIA. A volcano plot was employed to display the alteration in the relative content of peptides during the SD and the SA. Compared with the SGID, the relative content of 25 peptides in the STA was significantly upregulated and that of 19 peptides was downregulated (Figure 4D). In addition, the relative content of 12 peptides was altered in the SIA when compared with that of the SIB; among these, upregulated and downregulated peptides each accounted for half (Figure 4E).
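The bookkeeping behind the Venn diagram amounts to set operations on the identified sequences. The sketch below is illustrative only, with placeholder sequences standing in for the real peptide lists.

```python
# Treat each sample's identified peptides as a set and count shared and newly
# appearing sequences across STA, SGID, SIA and SIB.
sta  = {"PGPAGPAGP", "GPAGPPGPIGNV", "GPPGPAGPAG"}
sgid = {"PGPAGPAGP", "GPPGPAGPAG", "PAGPAGPIGPV"}
sia  = {"PGPAGPAGP", "GPPGPAGPAG"}
sib  = {"GPPGPAGPAG"}

print(len(sta & sgid))               # peptides surviving simulated digestion
print(len(sgid - sta))               # peptides newly produced during digestion
print(len(sta & sgid & sia))         # peptides present from the start through the AP side
print(len(sta & sgid & sia & sib))   # peptides also recovered on the BL side
```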
Prediction of the Biological Activity of Bioavailable Peptides
BAP can exert their physiological activity in vivo only if they resist decomposition in the digestive tract. Therefore, the 145 peptides that existed simultaneously in the STA, SGID, and SIA deserve particular attention. Moreover, 73 of these peptides were also identified in the SIB. These results suggest that these 73 peptides might be absorbed by the intestinal epithelial cells, whereas the other 72 peptides might resist absorption. For more convenient analysis and statistics, the 73 peptides and the 72 peptides were divided into two groups, named the anti-digestion group (AD) and the anti-digestion and anti-absorption group (ADA), respectively (the information on the 145 peptides was uploaded to Mendeley Data, https://data.mendeley.com/datasets/s3j9vpfdff/1, accessed on 23 February 2023, file name: S2-basic information of AD and ADA). The bioactivity of these peptides was predicted by employing in silico tools.
The peptides with a predicted biological activity score above 0.8 were selected and reordered according to their intensity in the SIA (the BAP scores of the 145 peptides were uploaded to Mendeley Data, https://data.mendeley.com/datasets/s3j9vpfdff/1, accessed on 23 February 2023, file name: S3-properties of AD and ADA). Finally, a total of 6 and 10 typical peptides were screened from the AD group and the ADA group, respectively (Table 1). The shortest peptide length was 7, while the longest was 19. The molecular weights of these peptides ranged from 571.75 to 1484.94, and most of them were hydrophobic. Moreover, only five peptides originated from the collagen alpha-2(I) chain. Furthermore, the potential bioactivity of the nontoxic and non-allergenic peptides was predicted (Table 2). The prediction suggested that all these peptides (except FGFDGDF) might be applied as antihypertensive peptides (AHTP), antidiabetic peptides (ADP), and anti-inflammatory peptides (AIP). In addition, the predicted antioxidant activities of the peptides in the AD and ADA groups were similar, with very close prediction scores.
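The screening step (predicted score ≥ 0.8, ranked by SIA intensity) can be sketched as a simple table filter. The column names and the three example rows below are illustrative, not the published dataset.

```python
# Keep peptides whose predicted bioactivity score is at least 0.8 and sort them
# by MS intensity in the SIA sample, then list the retained sequences per group.
import pandas as pd

peptides = pd.DataFrame({
    "sequence":      ["PGPMGPSGPR", "GPAGPAGPIGPVG", "FGFDGDF"],
    "group":         ["AD", "ADA", "ADA"],   # anti-digestion vs. anti-digestion/-absorption
    "pred_score":    [0.86, 0.91, 0.82],
    "sia_intensity": [3.2e6, 1.1e7, 8.4e5],
})

screened = (peptides[peptides["pred_score"] >= 0.8]
            .sort_values("sia_intensity", ascending=False))
print(screened.groupby("group")["sequence"].apply(list))
```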
Verification of the Biological Activity of Bioavailable Peptides
To further confirm the prediction results, the biological activities of the screened peptides were verified in vitro. The antihypertensive and antidiabetic abilities of these peptides were evaluated by determining their IC50 values against ACE and DPP-IV. For ACE inhibition, only PGPMGPSGPR had an IC50 value below 10 mM, while those of the other peptides (except FGFDGDF) ranged from 11.7 ± 1.34 mM to 17.07 ± 2.45 mM (Table 3). Considering in vivo application, such high IC50 values suggest that these peptides have little potential as ACE inhibitors. The IC50 value against ACE of FGFDGDF was not determined because of its low predicted score as an AHTP (Table 2); more importantly, we found that FGFDGDF could not be completely dissolved in the reaction solution. Among the seven peptides, GPAGPAGPIGPVG had the best inhibitory ability against DPP-IV, with a low IC50 value of 0.07 ± 0.01 mM. In addition, GPPGPAGPAG, FGFDGDF, and PAGPAGPIGPV also performed well; their IC50 values against DPP-IV were lower than 1 mM (Table 3). To verify the ability of these peptides as AIP, their alleviating effects on lipopolysaccharide (LPS)-induced inflammation in murine macrophages were evaluated. The results indicated that these peptides had no cytotoxicity towards macrophages and could significantly inhibit the release of inflammatory factors (Figures S1 and 5). The inhibition rates of these peptides on IL-1β and NO were all above 50%. Except for AGPAGPAGPAGPR, which had low inhibition rates on TNF-α (mean 34.96%) and IL-6 (mean 25.9%), most peptides possessed inhibition rates above 50% for these two inflammatory factors. The hydroxyl radical scavenging activity and ferric ion chelating activity of the seven peptides were measured to characterize their antioxidant ability; both activities were below 50% for all seven peptides. However, consistent with the prediction results (Table 2), there was no difference in the antioxidant activity of Seq1 and Seq2 in the AD group, and an analogous situation was observed among Seq3, Seq4, Seq5, Seq6, and Seq7 in the ADA group (Figure 5).
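IC50 values such as those in Table 3 are typically obtained by fitting a dose-response curve to inhibition measurements. The sketch below shows one generic way to do this with a four-parameter logistic model; the concentration and inhibition values are made up for illustration and this is not the assay protocol used here.

```python
# Fit a rising four-parameter logistic curve (% inhibition vs. concentration) and
# read off the IC50 as the concentration giving half-maximal inhibition.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic curve for % inhibition rising with concentration."""
    return bottom + (top - bottom) / (1 + (ic50 / conc) ** hill)

conc  = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0])     # mM, hypothetical dilution series
inhib = np.array([5.0, 12.0, 30.0, 55.0, 78.0, 92.0])   # % inhibition, hypothetical

params, _ = curve_fit(four_pl, conc, inhib, p0=[5, 95, 0.3, 1],
                      bounds=([0, 50, 1e-3, 0.2], [20, 120, 10, 5]))
print(f"estimated IC50 ≈ {params[2]:.2f} mM")
```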
Discussion

Bioaccessibility deserves particular attention when developing new functional products because it is the most important factor affecting the biological activity of functional ingredients. In this study, to further expand the application of YBCH, the bioaccessibility of YBCH was evaluated by employing simulated gastrointestinal digestion and absorption, and the changes in peptides and free amino acids were monitored during the SD and the SA. The results indicated that only slight changes in the concentrations of peptides and free amino acids occurred (Figures 1 and 2). Moreover, the identification of peptides showed that about 77% of the peptides in the STA could be identified in the SGID after the SD, and about 76% of the peptides in the SGID could be observed in the SIA after the SA (Figure 4C). Although 91 peptides in the SIA were also identified in the SIB, the transport rate of peptides was only 22.14 ± 1.58% (Figure 1B). These results suggest that most peptides in YBCH resist gastrointestinal digestion and absorption. The structural parameters of peptides can significantly influence their bioaccessibility during digestion and absorption in vivo [12]. From the changes in peptides and free amino acids, we speculated that the stability of YBCH might be attributed to their specific physicochemical properties, including molecular weight and amino acid composition.
The low molecular weight of peptides not only promotes their absorption but might also be beneficial for their stability. It has been reported that low-molecular-weight peptides might escape proteases owing to the smaller number of protease recognition and cleavage sites in their sequences [12]. In this study, peptides with a molecular weight below 3000 Da were the major components of the STA (accounting for more than 70%). After the SD, the proportion of peptides with a molecular weight above 3000 Da was finally reduced to around 13.36% (Figure 1A). Thus, the large number of low-molecular-weight peptides present in YBCH might be an important reason for the stability of YBCH during the SD. Moreover, the steric hindrance of these low-molecular-weight peptides might be another important factor helping them avoid protease or peptidase cleavage. GPAGPPGPIGNV and NAPHMR are BAPs derived from YBCH and sea cucumber gonad, respectively [30,31], and it has been reported that they can resist simulated gastrointestinal digestion. The steric hindrance values of these two BAPs were calculated to be 0.57 and 0.52 using an in silico tool named AntiCP 2.0 (https://webs.iiitd.edu.in/raghava/anticp2/predict.php, accessed on 14 January 2023) [32]. However, most of the 145 peptides found simultaneously in the STA, SGID, and SIA possessed higher predicted steric hindrance values than the above-reported BAPs (the steric hindrance prediction scores of the 145 peptides were uploaded to Mendeley Data, https://data.mendeley.com/datasets/s3j9vpfdff/1, accessed on 23 February 2023, file name: S3-properties of AD and ADA). Therefore, the steric hindrance of these low-molecular-weight peptides might increase the difficulty of enzymatic hydrolysis during gastrointestinal digestion.
In addition, peptides with different molecular weights might behave differently during gastrointestinal digestion and absorption. For example, during simulated gastric digestion, the proportion of peptides with a molecular weight above 5000 Da was only reduced from 17.65 ± 1.23% to 13.31 ± 0.72%, whereas at the beginning of the simulated intestinal digestion this proportion sharply decreased from 13.31 ± 0.72% to 6.33 ± 0.37%. On the contrary, for peptides with a molecular weight of 2000-3000 Da, the proportion was reduced by half (19.22 ± 0.91% to 10.81 ± 1.15%) during the simulated gastric digestion, while only a slight decrease (10.08 ± 0.54% to 7.45 ± 0.49%) was observed during the simulated intestinal digestion (Figure 1). Additionally, molecular weight not only influences the permeability rates of peptides but also affects their absorption pathway. Many reports have suggested that the molecular weight of peptides is negatively related to their permeability rates [33,34], which might be attributed to their different paths through the intestinal epithelial cells. For example, most di- and tripeptides are transported via PepT1, a member of the H+-dependent carrier family that is widely distributed on intestinal epithelial cells, whereas larger peptides might be absorbed through a paracellular route via tight junctions or by transcytosis [35]. However, it should be noted that the paracellular diffusion area occupies only 0.01% of the human gut surface area, and transcytosis is an energy-dependent transport route [36,37]. Therefore, in fact, most peptides cannot be effectively absorbed by the intestinal epithelial cells. It remains difficult to accurately quantify and identify small peptides in protein hydrolysates, especially di- and tripeptides [38]. In this study, limited by the detection technology, no peptides with a length below seven residues were identified. However, based on the molecular weight distribution of the peptides in the SIB and the SIA (Figure 1A), it can be speculated that most di-/tripeptides (molecular weight below 500 Da) in the YBCH were absorbed by the Caco-2 cell monolayers. Meanwhile, the major fraction of YBCH, peptides with molecular weights ranging from 500 Da to 2000 Da, was not altered significantly during the SA.
The number and location of different amino acids in a peptide sequence can significantly alter the bioavailability of the peptide. For collagen-derived peptides in particular, it is necessary to pay attention to the effects of proline and glycine on the bioaccessibility of peptides, mainly because there are a large number of repeated "G-X-Y" motifs in collagen alpha chains, where the X and Y positions are usually occupied by proline or hydroxyproline [1]. Proline and hydroxyproline have long been recognized as important factors increasing the stability of peptides during gastrointestinal digestion [35]. On the one hand, this might be because proline and hydroxyproline are not cleavage sites of digestive proteases and peptidases such as pepsin, trypsin, and chymotrypsin [12]. During the SD and the SA, the free proline content was very low (<0.01 mg/mL, Figure 2). Conversely, the amino acids that are cleavage sites for most proteases, such as arginine, lysine, phenylalanine, leucine, and tyrosine, were the major components of the free amino acids (Figure 2). On the other hand, proline carries a rigid pyrrolidine ring, structurally similar to the γ-lactam moiety of pyroglutamyl peptides, which can increase the steric hindrance of peptides towards proteases [39,40]. In this study, the composition of the peptide sequences in the STA showed that only 15 peptides did not contain proline. The numbers of peptides containing two, three, and four prolines in their sequences were 55, 86, and 50, respectively. Notably, seven peptides contained eight prolines in their sequences (Supplementary Materials, Figure S2). Therefore, the high proline content of these sequences might enhance the steric hindrance of these peptides towards proteases and peptidases. In addition, prior studies have noted the effects of the amino acids located at the N- and C-termini of peptides on their ability to resist digestion and absorption. For example, peptides with an N-terminal isoleucine, lysine, methionine, proline, or valine, or with a C-terminal valine, have high permeability [41]. In this study, a total of 125 peptides in the STA had glycine at their N-terminus, accounting for 49.8% of all peptides in the STA. At the C-terminus of the peptides in the STA, the most widespread amino acid was alanine (57 peptides), followed by arginine (46 peptides) and glycine (45 peptides) (Supplementary Materials, Figure S3). Although there were 26 peptides with a C-terminal valine, it has been reported that the amino acids at the N-terminus contribute more to bioaccessibility than those at the C-terminus [40]. Therefore, the scarcity of permeability-promoting amino acids at the N-terminus of the peptides in YBCH might enhance their ability to resist digestion and absorption.
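The terminal-residue tallies quoted above can be reproduced with a few lines of code once the peptide list is available; the sequences below are placeholders for the identified peptides.

```python
# Count how many peptide sequences start with glycine (N-terminus) or end with
# valine (C-terminus), mirroring the statistics reported for the STA sample.
from collections import Counter

peptides = ["GPAGPPGPIGNV", "PGPAGPAGP", "GPPGPAGPAG", "PAGPAGPIGPV"]

n_term = Counter(seq[0] for seq in peptides)    # first residue of each peptide
c_term = Counter(seq[-1] for seq in peptides)   # last residue of each peptide

print("N-terminal glycine:", n_term["G"])
print("C-terminal valine:", c_term["V"])
```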
Moreover, we found that differences in molecular weight and amino acid composition might explain why the 73 peptides in the AD group might be absorbed by the intestinal epithelial cells while the other 72 peptides in the ADA group might resist absorption. Regarding molecular weight (Supplementary Materials, Figure S4A), the number of peptides with a molecular weight below 1000 Da was 44 in the AD group but 36 in the ADA group, and there were fewer peptides with a molecular weight between 1000 Da and 1500 Da in the AD group. Regarding peptide length (Supplementary Materials, Figure S4B), most peptides in the AD group were between 7 and 10 residues long, accounting for about 60%. These results indicate that, at the same peptide length, the molecular weight of peptides in the AD group might be lower than that in the ADA group. Thus, the higher molecular weight of peptides in the ADA group might be one reason for their different behavior during intestinal epithelial cell absorption. Statistical analysis showed that 15 peptides in the AD group had a C-terminal valine, accounting for about 20.5% of the peptides in that group (Supplementary Materials, Figure S4C), whereas only five peptides (6.9%) in the ADA group did. Thus, the higher proportion of peptides with a C-terminal valine in the AD group might contribute to their absorption during the SA. To sum up, the molecular weight and amino acid composition of the peptides might be two important factors influencing their bioaccessibility during gastrointestinal digestion and intestinal epithelial cell absorption.
The traditional screening process for BAP usually requires multiple purification steps combined with in vitro activity verification to continuously narrow the search within the protein hydrolysates, with the sequences of the peptides finally determined by LC-MS/MS. This procedure is time-consuming and, more importantly, any BAP obtained in this way still needs to be tested for its ability to resist digestion before being developed as a functional ingredient; peptides screened in this manner may therefore prove unsuitable for in vivo use. By constructing a reliable simulated in vitro digestion and absorption model, the changes in peptides during the entire digestion and absorption process can be effectively monitored. This not only provides a comprehensive picture of the behavior of the peptides during digestion but also yields results that are closer to the real situation in vivo. Then, combined with increasingly mature in silico tools, the desired bioactivity can be predicted and further verified in vitro and in vivo. This approach might be more efficient and reliable. In this study, although peptides shorter than seven residues could not be identified owing to limitations of the detection technology, 440 peptides were obtained across all the samples taken. This undoubtedly provides a treasure trove of peptides whose bioactivities deserve further exploration. The most interesting finding was the 145 peptides that existed from the start to the end of the simulated digestion and absorption. After theoretical calculation and in silico prediction, two typical anti-digestion BAPs and five typical anti-digestion and anti-absorption BAPs were screened (Tables 1 and 2). After in vitro activity verification, some of these peptides proved worthy of further testing in vivo. In addition, in combination with our previous work, their bioactivities and their effects on the composition of the gut microbiota should be further investigated and validated.
Conclusions
The bioaccessibility of functional ingredients is very important for elucidating their mechanisms of action and for promoting their application. Although YBCH have long been developed as dietary nutrients and functional foods in China owing to their various biological activities, such as antioxidant and immunomodulatory effects, their bioaccessibility has not been reported. Notably, few reports exist on the variation pattern of free amino acids and peptides during gastrointestinal digestion and absorption, which limits the application of YBCH. Different from previous studies focused on investigating the activity of YBCH, in this study simulated gastrointestinal digestion and absorption were conducted to characterize the alteration in the peptides and free amino acids of YBCH in the digestive tract. The results indicated that most peptides in YBCH resist digestion and absorption, which might be due to the low molecular weight of the peptides, the high-frequency distribution of proline, and a terminal amino acid composition poor in permeability-promoting residues (abundant glycine at the N-terminus and little valine at the C-terminus). Moreover, the joint utilization of in silico analysis and in vitro tests suggested that various types of BAPs could be obtained after YBCH digestion and absorption. This study provides new insights into the application of YBCH, and the bioactivities of the obtained peptides should be further verified in the future.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/foods12051003/s1, Figure S1: The toxicity of the seven screened peptides towards murine macrophages; Figure S2: The number of prolines in the peptide sequences; Figure S3: The terminal residues of the peptides in the starting digestion samples; Figure S4: The statistical analysis of the molecular weight distribution (A), peptide length (B), and amino acid composition (C) of the peptides in the AD group and the ADA group.
Research on Factors Affecting Employee Productivity in Shanghai
With the development of the social economy, all industries are constantly reforming and innovating. In the process of enterprise development, human resources have become the key resource for enterprise survival and growth. Employee retention has attracted the attention of top management, which is generally aware of the complexity and importance of employee management. The main purpose of this study was to determine the significant impact of selected factors on employee retention in the manufacturing sector. The researcher collected 112 questionnaires through an online survey; 1 questionnaire contained invalid data, leaving 111 valid responses. The questionnaires were delivered by physical "delivery and collection" to manufacturing companies in Shanghai, China, such as Shanghai Automotive Group Co., Ltd., Bright Food (Group) Co., and Shanghai Pharmaceutical Group Co. This study therefore analyzes the three main factors that, in the opinion of the researcher, directly affect employee productivity: working environment, welfare measures, and rewards and recognition.
INTRODUCTION
Human resources are the most important resource of an enterprise and the wellspring of its core competence. Competition between enterprises today is, in essence, competition for talent; an enterprise that wants to develop and grow must find ways to keep its employees, draw out their full potential, fully motivate their initiative at work, and make them work hard for the enterprise. Employee retention strategies are the means by which an enterprise retains employees by maintaining a good working environment, proposing retention policies, and meeting employees' needs. The ultimate purpose is to improve employees' job satisfaction and reduce the costs of recruiting and training new employees. Employee retention strategies can also help the enterprise provide effective employee communication (Vishwakarma and Rao, 2017).
PROBLEM STATEMENT
Nowadays, retaining the best employees has become one of the biggest challenges for enterprises. Because employee turnover brings various costs to the enterprise, enterprises strive to retain competent employees. When competent employees leave, turnover reduces the enterprise's performance. Therefore, the enterprise must first find out what makes employees leave, and then develop the necessary strategies to retain them. An enterprise that retains its employees can gain a competitive advantage over its competitors; this includes research on comprehensive human resource practices (Lewis and Sequeira, 2016).
Many factors can affect employee turnover, and these have been verified by different models and theories. The literature surveyed in this study mentions working environment, welfare measures, and rewards and recognition as the main factors that influence employee retention (Mahesh, 2017). Enterprises usually apply these factors when formulating and implementing employee retention strategies. In this research, the researcher focuses on the manufacturing sector in Shanghai, China, to find out the effects of working environment, welfare measures, and rewards and recognition on employee retention. The objectives are to examine the relationships between working environment, welfare measures, and rewards and recognition and employee retention.
OBJECTIVES
The main objective of the research is to identify the effect of working environment, welfare measures and rewards & recognition on employee retention among employees of the manufacturing sector in Shanghai, China.
Relationship between Working Environment and Employee Retention
The social scientist Abraham Maslow outlined a pyramid showing what he called the human hierarchy of needs; he divided human needs into five basic levels, with safety and health needs sitting near the base of the pyramid. A lack of these conditions will trigger anxiety or depression and, in turn, prevent employees from working. In general, the working environment is the physical environment, such as noise, equipment condition, ventilation and temperature (Tetteh, Fentim and Dorothy, 2015). Employees want to stay in an enterprise that has a good working environment and atmosphere, and if working conditions are poor they are unwilling to stay and work (Tsai, 2016).
The main focus of enterprises has been to provide better jobs for employees and to retain them by providing a good work environment. Thus, an enterprise creates and maintains an environment that makes employees feel comfortable and keeps them in the enterprise (Ali and Zia-ur-Rehman, 2014). Mohanty and Mohanty (2014) considered that employees want to stay in an enterprise that provides a positive working environment, where they feel their work is valuable and that they receive attention (Mohanty and Mohanty, 2014). The enterprise should focus on managing the working environment in order to make better use of human resources and create more value.
Employees in an organization need three types of environment: a learning environment, a supportive environment and a work environment. The supportive environment that the organization can provide takes the form of work-life balance, such as flexible working hours, a good work schedule, vacations, dependent care, telecommuting and so on. The working environment includes efficient management, challenging assignments, advanced equipment, friendly colleagues, clarity of work and responsibilities, and recognition. A lack of these will prompt employees to look for new opportunities, so the working environment should address how employees feel in every respect. This forms the basis of the first hypothesis of this paper: H1: There is a significant relationship between working environment and employee retention.
Relationship between Welfare Measures and Employee Retention
Naveen (2017) argues that improving welfare is a critical component of organizational productivity. After globalization, because various factors have continuously changed employees' working conditions, organizations plan to implement various welfare measures to reduce employees' physical and psychological problems (Naveen, 2017).
According to Stoffers, Neessen and Gorissen (2015), if the work is simple and requires only minimal training to achieve expert results, the enterprise can adopt a low-wage strategy, and if the enterprise is competing in a tight labor market it can adopt a high-wage strategy. An enterprise that offers generous welfare measures to its employees will have a large number of candidates applying for positions and a lower turnover rate than other enterprises (Stoffers, Neessen and Gorissen, 2015).
According to Nandhini, Usha and Palanivelu (2015), employees need regular welfare measures for their progression and performance in the field. The service sector is the most important division creating additional services, and it needs welfare measures for its improvement. These welfare measures help to protect employees' welfare and motivate them, ensuring employee pride and, in turn, greater efficiency (Nandhini, Usha and Palanivelu, 2015). This forms the basis of the second hypothesis of this paper: H2: There is a significant relationship between welfare measures and employee retention.
Relationship between Rewards & Recognition and Employee Retention
Rewards & recognition play an important role in attracting and retaining competent employees, especially those who have delivered excellent performance or bring unique skills to the enterprise. The enterprise invests considerable capital in training and development and is concerned with formulating and implementing strategies and policies that reward people fairly, equitably and consistently in accordance with their value to the enterprise (Mbugua, Waiganjo and Njeru, 2014).
Rewards & recognition focus on developing corporate culture, supporting core values, and increasing employee motivation and commitment. According to Sankalpana and Jayasekara (2017), rewards & recognition are the basic elements of employee retention management and indicate how much employees can gain from the enterprise. Therefore, enterprises have a responsibility to design attractive rewards & recognition to attract and retain valuable employees (Sankalpana and Jayasekara, 2017). Rewards, as part of the incentive system, can improve the employee retention rate. There is a close positive correlation between rewards and job satisfaction, which in turn helps to retain employees. Terera and Ngirande (2014) recognize that flexibility, along with lucrative career options, is a critical incentive for all employees. The enterprise must recognize the importance of rewards & recognition, because they leave a lasting impression on the employee and continue to shape the employee's perception of their value to the enterprise. All enterprises can use intrinsic rewards and recognition to improve employee commitment and retention. This forms the basis of the third hypothesis of this paper: H3: There is a significant relationship between rewards & recognition and employee retention.
Vroom's Expectancy Theory
Expectancy theory holds that people's initiative correlates positively with their expectations and motivation. The integrated expectancy model of incentive theory reflects the relationship between needs and goals through three factors: first, the employee should believe that effort will result in acceptable performance; second, the employee should believe that acceptable performance will produce the desired reward; third, the employee must value the rewards, so that performance increases when they work hard (Rakhra, 2018). The theory assumes that individuals can accurately predict future first- and second-order outcomes and their valence. If this were the case, employers would not view labour mobility as a management challenge, since employees would be expected to calculate outcomes accurately before joining the organization, eliminating the possibility of labour mobility. In practice, employees do not accurately predict first-level and second-level outcomes, so there is no cost-effective identification of a preferred employer. This limits the ability of the theory to fully explain the determinants of labour mobility in public organizations (Abdullah Al Mamun and Nazmul Hasan, 2017).
Research Framework
Based on the above reviews, the following conceptual framework has been developed.
Sampling and Location
The sample size of this study was determined using a general rule proposed by Hair, Black, Babin, Anderson, and Tatham (2006). According to Hair et al. (2006), a minimum of 20 cases is needed for each variable. In this research there is one dependent variable and three independent variables, so a total of 80 samples is an acceptable sample size for this study. In order to obtain opinions from a wider range of respondents, the researcher distributed 100 questionnaires in a specific area. The target population of this research was employees in the manufacturing sector in Shanghai, China. The respondents were employees over 18 years of age, regardless of gender and race. In addition, the targeted respondents came from at least three different companies to make the information more precise and complete.
Research Instrument
The data were collected from both primary and secondary sources. The primary data were obtained through a survey conducted by the researcher. The questionnaire solicited responses from employees on various aspects of employee retention and was designed on a five-point Likert scale, where 1 stands for strongly disagree, 2 for disagree, 3 for neutral, 4 for agree and 5 for strongly agree. The target population in this study comprised three manufacturing enterprises in Shanghai, China, and the study population was the younger employees in these three enterprises. Because the population was manageable, the researcher used a census study in which all elements participated.
DATA ANALYSIS
A questionnaire with 23 questions was published on the web to collect the data, and 111 responses were obtained from employees in the manufacturing sector in Shanghai, China. The questionnaire results were entered into SPSS for data analysis, and the results were then generated.
Reliability Test
According to Hair et al. (2006), reliability tests are performed to ensure the consistency of measurement results in repeated tests, and the Cronbach's alpha coefficient is obtained by measuring internal consistency. According to Nunnally and Bernstein (1994), the Cronbach's alpha coefficient should not be lower than 0.7, but according to Sekaran (2003) it should be greater than 0.5. The researcher followed Sekaran's (2003) proposal, whereby the Cronbach's alpha value should be at least 0.5, and values below 0.5 are not accepted. Table 2 shows the results of the reliability test. As can be clearly seen from Table 2, all Cronbach's alpha values exceed the 0.5 threshold, thereby supporting the reliability of the study.
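As an illustration of how such a reliability check could be reproduced outside SPSS, the sketch below computes Cronbach's alpha for a block of Likert-scale items with pandas and numpy. The item columns and response values are hypothetical placeholders, not the study's data.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (rows = respondents)."""
    items = items.dropna()
    k = items.shape[1]                          # number of items in the construct
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses to the items measuring "working environment".
df = pd.DataFrame({
    "we1": [4, 5, 3, 4, 2, 5],
    "we2": [4, 4, 3, 5, 2, 4],
    "we3": [5, 4, 2, 4, 3, 5],
})
alpha = cronbach_alpha(df)
print(f"Cronbach's alpha = {alpha:.3f}")  # accept the construct if alpha >= 0.5 (Sekaran, 2003)
```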
Correlation Analysis
Table 3 shows the results of the correlations between the dependent variable (employee retention) and the independent variables (working environment, welfare measures and rewards & recognition), which are used to examine the hypotheses in this research. Employee retention shows a low correlation with working environment, with welfare measures and with rewards & recognition. In addition, working environment shows a moderate correlation with welfare measures and a low correlation with rewards & recognition, and welfare measures show a low correlation with rewards & recognition. The correlations between the dependent variable and the independent variables are given in Table 3 below.
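A minimal sketch of how such a correlation matrix could be reproduced, assuming the cleaned responses are stored in a DataFrame with one column per construct score; the column names and values below are hypothetical.

```python
import pandas as pd
from scipy import stats

# Hypothetical construct scores (mean of each respondent's items per construct).
scores = pd.DataFrame({
    "employee_retention":  [3.2, 4.0, 2.8, 3.6, 3.0],
    "working_environment": [3.8, 4.2, 3.0, 3.4, 3.2],
    "welfare_measures":    [3.5, 3.9, 2.9, 3.7, 3.1],
    "rewards_recognition": [3.0, 4.1, 2.7, 3.3, 3.4],
})

print(scores.corr(method="pearson"))  # Pearson correlation matrix

# p-value for one pair, e.g. retention vs. working environment
r, p = stats.pearsonr(scores["employee_retention"], scores["working_environment"])
print(f"r = {r:.3f}, p = {p:.3f}")
```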
Multiple Regression Analysis
Multiple regression analysis was used in this research to examine these variables. According to Table 4, the value of R² is 0.020, which means that the independent variables (working environment, welfare measures, rewards & recognition) together explain only 2% of the variance in the dependent variable (employee retention) in this study. All three independent variables were included in the model. The relative strength of each independent variable's contribution to the dependent variable is: working environment (B = .056), welfare measures (B = .066), and rewards & recognition (B = -.045).
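The regression reported above (R² and the unstandardized B coefficients) could be reproduced with statsmodels as sketched below; the variable names and values are hypothetical, not the study's data.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical construct scores; in the study these come from the questionnaire data.
data = pd.DataFrame({
    "working_environment": [3.8, 4.2, 3.0, 3.4, 3.2, 4.0, 2.9, 3.6],
    "welfare_measures":    [3.5, 3.9, 2.9, 3.7, 3.1, 4.1, 2.8, 3.5],
    "rewards_recognition": [3.0, 4.1, 2.7, 3.3, 3.4, 3.8, 3.1, 3.2],
    "employee_retention":  [3.2, 4.0, 2.8, 3.6, 3.0, 3.9, 2.9, 3.4],
})

X = sm.add_constant(data[["working_environment", "welfare_measures", "rewards_recognition"]])
y = data["employee_retention"]

model = sm.OLS(y, X).fit()
print(model.summary())  # reports R-squared, unstandardized B coefficients and p-values
```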
Hypotheses Results
The table below shows the results of hypothesis testing in this research, based on the multiple regression analysis.
To Find the Significant Relationship between Employee Retention and Working Environment
One of the objectives of this study was to find the relationship between employee retention and working environment, and H1 was created based on this objective. The hypothesis was rejected based on the analysis conducted in this research. This result differs from previous research, which indicates that there is a significant relationship between employee retention and working environment. According to Mohanty and Mohanty (2014), employees want to stay in an enterprise that provides a positive working environment, where they feel their work is valuable and that they receive attention (Mohanty and Mohanty, 2014). Based on previous research, the working environment influences employee retention: a good working environment makes employees like the enterprise and stay in it. Besides, according to Tsai (2016), employees want to stay in an enterprise with a good working environment and atmosphere, and with insufficient working conditions, such as poor lighting and unsatisfactory furniture, employees are unwilling to perform for longer periods of time (Tsai, 2016).
To Find the Significant Relationship between Employee Retention and Welfare Measures
The second objective of this research was to find the relationship between employee retention and welfare measures, and H2 was created based on this objective. The hypothesis was rejected based on the analysis conducted in this research. This result differs from previous research, which indicates that there is a significant, positive relationship between employee retention and welfare measures. Naveen (2017) argues that improving welfare is a critical component of organizational productivity. In his research, Naveen found that welfare not only helps employees achieve better results in their work, but also influences employees' sentiments when they feel that management is interested in their wellness and happiness (Nirushan, 2017). In addition, according to Nandhini, Usha and Palanivelu (2015), employees need welfare measures for their progression and performance in the field (Nandhini, Usha and Palanivelu, 2015).
To Find the Significant Relationship between Employee Retention and Rewards & Recognition
The last objective of this research was to find the relationship between employee retention and rewards & recognition, and H3 was created based on this objective. The hypothesis was rejected based on the analysis conducted in this research. This result differs from previous research, which indicates that there is a significant, positive relationship between employee retention and rewards & recognition. According to Sankalpana and Jayasekara (2017), rewards & recognition are the basic elements of employee retention management. There is a close and positive correlation between promotions and job satisfaction, which in turn helps in retaining employees.
In addition, Terera and Ngirande (2014) recognize the importance of rewards & recognition as a critical incentive for all employees.
IMPLICATION
This study aims to examine the factors that affect employee retention in the manufacturing industry in Shanghai, China. In China's manufacturing industry there are problems that can affect whether employees stay or leave. Several key factors affect employee retention, such as the working environment, welfare measures, and rewards & recognition. When the other characteristics of enterprises are the same, the employee retention rate will be higher if these factors are in good condition. Employee loyalty is the driving force of enterprise development (S and Krishnan, 2016). High loyalty means that employees love the enterprise and are willing to work hard for it, and highly loyal employees are the guarantee of the enterprise's sustainable development.
Implication of the Employee
The study of employee retention and its influencing factors can help Chinese manufacturing enterprises retain their outstanding employees. In this study, the problem was fully identified, its symptoms were revealed, and the main factors that can affect employee retention were described. This research can therefore help employees consider all the factors that affect their decision to stay or leave and help them choose a workplace. It can help employees choose an enterprise that suits them instead of frequently changing jobs; working and developing in one enterprise is more helpful for improving work efficiency (Mohanty and Mohanty, 2014). The longer an employee stays in an enterprise, the more proficient the employee becomes in the enterprise's business and work, the more experience is accumulated, and the higher the work efficiency.
Implication of the Employer
In addition, research on employee retention and its influencing factors can help employers reduce employee turnover by examining the factors that affect retention. Employers can also limit the risk of large-scale employee losses while investing money and knowledge in their staff. When a large number of employees leave, the enterprise's operations are seriously affected and huge losses follow (Harshani and Welmilla, 2017). Thus, this study could help employers increase profits by reducing turnover and retaining good talent. If an enterprise has constant turnover, it needs to keep recruiting and training new employees, which not only wastes time and energy but also increases the cost of human resource management (Naveen, 2017). If employee loyalty is high, the enterprise can save these costs and use the time and energy to create more value.
Implication of the Enterprises
In addition, this research can help develop China's manufacturing industry, which is on the rise and needs a large number of excellent employees to achieve strong brand performance. Productive employees can help the enterprise compete successfully in the market, attract new customers and provide them with better services. A stable relationship between employees and enterprises is the foundation of enterprise development, and employee loyalty is a powerful driving force for it (Terera and Ngirande, 2014). Only excellent employees can make high-quality products, and high-quality products generate substantial benefits, enabling the enterprise to become an excellent one.
LIMITATION
Like other studies, this study has some limitations. It is only a tentative case study, and the small sample size limits to some extent the generalizability of its findings; future researchers are therefore recommended to take more time and reach a larger sample. One major limitation is that the research focuses only on Shanghai, China, which, although very modern and developed, cannot represent the whole of China, as it is a very large country. Owing to limited time and financial resources, the researcher chose Shanghai as the target area. Another limitation is that a survey questionnaire was used to measure the independent and dependent variables; in a questionnaire survey it is hard to obtain complete answers because of the respondents' time constraints (Chiekezie, Emejulu and Nwanneka, 2017).
CONCLUSION
In this paper, the general conclusions based on the research objectives are presented, and the results of previous studies are used to contextualize the findings. The results of this study were examined and the reasons behind them were discussed. The paper analyses in detail the factors that can influence employee retention and discusses the importance of retention for the future development of the enterprise: if an enterprise wants to develop, it must pay more attention to employee retention. In addition, the implications and limitations of the study were introduced and described, and suggestions for future researchers were proposed.
RECOMMENDATION
Future researchers can build on this research and are encouraged to identify additional independent variables beyond those used in this study, as many other factors can influence employee retention. This study was affected by its limitations: the number of respondents was small, so the results are not comprehensive or precise. Future researchers can increase the number of respondents to obtain more accurate results, which will help analyse the root cause of the problem and possibly solve it. In addition, if future researchers interview employees from different industries and other countries, they can identify more precisely which kinds of employees face which retention problems. Future researchers can also expand the scope of employee retention research beyond Shanghai, China, to other countries and cities around the world, because different countries have different cultural environments, laws and regulations, and therefore different influencing factors. Finally, future researchers can re-evaluate and extend the theories and frameworks mentioned in this research, address specific events, and put forward new theories or arguments relating to the research problem.
A NOVEL STRAIN OF SHIGELLA SPECIES OUTBREAK IN A RESIDENTIAL SCHOOL IN PEMAGATSHEL, BHUTAN, 2012
Background: A cluster of suspected shigellosis was reported from a health center in Pemagatshel district to the Royal Center for Disease Control on 14 May 2012. The investigation was conducted to determine the cause of and risk factors for the outbreak so that appropriate control and prevention measures could be implemented. Methods: A descriptive study was used for the outbreak investigation. The food items and drinks served to boarding students were collected from the mess in-charge in order to assess their risk for the outbreak. The kitchen and its premises were inspected to study likely contamination by rodents and other animals. Water and stool specimens were tested in the laboratory to identify all possible enteric pathogens. Results: 82 boarding students were affected, with an overall attack rate of 28% (82/294). Diarrhea was the predominant symptom, followed by abdominal pain and headache. The onset dates of the cases ranged between 11 and 18 May 2012. Shigella species was isolated from stool specimens and showed resistance to amoxicillin, nalidixic acid, chloramphenicol and sulfamethoxazole. Water specimens collected from the source, the distribution reservoir and tap water at the school were found to be grossly contaminated. Conclusion: The outbreak was caused by a novel strain of Shigella species not detected earlier in Bhutan. The promotion and provision of boiled water will greatly reduce the incidence of shigellosis, especially in boarding facilities.
INTRODUCTION
Shigellosis, or bacillary dysentery, is an acute gastroenteritis which occurs in areas with crowding and poor sanitary conditions.1 Shigella species are the major etiological agents of bacillary dysentery.2 Globally, it is estimated that 164.7 million people are infected annually by Shigella, often through contaminated food or water.1,3 Shigella serogroups are considered to be highly infectious due to their low infectious dose (10-100 organisms).4,5 The dysentery bacillus encompasses four serogroups, namely Shigella dysenteriae, Shigella flexneri, Shigella boydii and Shigella sonnei. Each of these is composed of different serotypes, which are identified based on the structure of lipopolysaccharide O-antigen repeats: Shigella dysenteriae has 15 serotypes, Shigella flexneri 14 serotypes, Shigella boydii 20 serotypes and Shigella sonnei a single serotype.6,7 Shigellosis is a nationally notifiable disease that every health center in the country has to notify to the Public Health Laboratory, now renamed the Royal Center for Disease Control (RCDC), for verification and activation of an outbreak response.8 Annually, suspected shigellosis is reported sporadically from many of the health centers in Bhutan.9,10 On 14th May 2012, a cluster of suspected shigellosis among boarding students of Nganglam Higher Secondary School (NHSS) in Pemagatshel district was reported to RCDC by the medical officer of Basic Health Unit 1 (BHU-1). A team from RCDC was sent to the school on 16th May 2012 to investigate the outbreak. The investigation was conducted to determine the etiological agent of the suspected shigellosis and to study the antimicrobial susceptibility pattern of the isolates. The investigation also aimed to trace the source of the outbreak so that appropriate control measures could be implemented to prevent further spread to the general population in the community.
Epidemiological investigation
A descriptive study was used for the investigation of the outbreak. A suspected case was defined as any boarding student studying at Nganglam Higher Secondary School (NHSS) in Pemagatshel district, Bhutan, with clinical manifestations of diarrhea, with or without abdominal pain, nausea, vomiting or fever, from 10th to 18th May 2012. All school students and teachers were assembled in the dining hall, with permission from the Principal, for active case finding.
The food items and drinks served to boarding students in the past few days were collected from the mess in-charge. The personal hygiene and stool specimens of the cooks were also examined. Face-to-face interviews were conducted with all cases to study their exposure to food items in the past few days.
Environmental investigation
The hygiene of the kitchen and its premises was inspected by the team. The store where vegetables and other culinary items were stacked was also inspected to study likely contamination by rodents and other possible sources. The team visited the school water source to inspect the surrounding sanitation and collect water samples. Water samples were collected from the reservoir, the distribution tank, and taps in the kitchen and the boys' and girls' hostels for testing of indicator bacteria using fecal coliform medium (mFC broth).11
Microbiological investigation
A total of 12 stool specimens from both hospitalized and outpatient cases were collected and subjected to standard microbiological tests. Both macroscopic and microscopic examinations were performed on all the collected specimens. For microscopic examination, a wet mount was prepared using 0.85% normal saline and observed under a light microscope for cells, ova and parasites.
Furthermore, the same specimens were processed for culture and identification of bacterial pathogens. Briefly, suspensions of the stool specimens were made in 0.85% normal saline. The suspensions were enriched in Buffered Peptone Water (BPW), Alkaline Peptone Water (APW) and Preston broth, and plated on MacConkey Agar, Hektoen Enteric Agar and modified Charcoal Cefoperazone Deoxycholate Agar (mCCDA). All media were incubated aerobically, except for mCCDA, which was incubated in a microaerophilic atmosphere at 37°C. The colonies from each medium were identified biochemically using Kligler Iron Agar (KIA), indole, bile esculin, lysine decarboxylase, ornithine decarboxylase and arginine dihydrolase tests, and also by using the bioMérieux Analytical Profile Index (API) 20E. The identified organism was then subjected to antimicrobial susceptibility testing using the Clinical and Laboratory Standards Institute (CLSI) guideline.12
Statistical analysis
The demographics of the cases and the antimicrobial susceptibility pattern of the bacterial pathogen are presented as numbers and percentages. The distribution of cases is presented graphically by date of onset of illness. Ethical clearance from the Research Ethics Board of Health (REBH), Ministry of Health, Bhutan, was not required for an investigation conducted in response to a disease outbreak.
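For readers wishing to reproduce the descriptive statistics and the epidemic curve, a minimal sketch is shown below. The line-list values are hypothetical placeholders chosen only to match the reported totals, not the investigation's actual data.

```python
import pandas as pd

# Hypothetical line list: one row per case, with sex and date of symptom onset.
cases = pd.DataFrame({
    "sex": ["F"] * 44 + ["M"] * 38,
    "onset": pd.to_datetime(["2012-05-13"] * 38 + ["2012-05-11"] * 10 +
                            ["2012-05-12"] * 25 + ["2012-05-14"] * 2 +
                            ["2012-05-16", "2012-05-17"] + ["2012-05-18"] * 5),
})

# Overall and sex-specific attack rates (denominators from the school roll).
boarders = {"F": 110, "M": 184}
overall_ar = len(cases) / sum(boarders.values()) * 100
sex_ar = cases["sex"].value_counts() / pd.Series(boarders) * 100
print(f"Overall attack rate: {overall_ar:.0f}%")
print(sex_ar.round(0))

# Epidemic curve: number of cases by date of onset (cf. Figure 1).
epi_curve = cases.groupby(cases["onset"].dt.date).size()
print(epi_curve)
```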
Epidemiological investigation
The school had a total of 482 students, of whom 294 were boarding students (110 girls and 184 boys). Overall, 82 boarding students were affected in the outbreak, giving an attack rate of 28% (82/294). The sex-specific attack rate was higher among girls (40%) than boys (20%). The median age of affected students was 17 years, with a range of 15-21 years. Diarrhea was the predominant symptom, observed in all cases; the other symptoms included abdominal pain, nausea, vomiting, fever and headache, in that order (Table 1). The index case was detected on 13th May 2012 in the health center. However, on active case finding, cases had actually emerged on 11th May, but these students did not seek medical care and were not detected in the health center.
The onset dates fell between 11th May and 18th May 2012, with the majority of cases reporting onset of symptoms on 13th May. No case was detected on 15th May.
With one case each on 16th and 17th May, the count rose to five on 18th May, and thereafter no cases were reported (Figure 1). None of the students reported consuming food from nearby commercial shops or restaurants; all boarding students had consumed the same food items provided in the mess. According to the students, food provided in the mess was cooked adequately and there was no suggestion of food contamination. The personal hygiene of all four cooks was found to be satisfactory on health screening. However, the students pointed out that consumption of unboiled water could be a risk for the outbreak, as they are not provided with boiled water in the boarding facility (student hostel).
Environmental investigation
The kitchen and its premises were found to be hygienic. Vegetables and other culinary items were properly stacked in the kitchen store, with no trace of rodents. The water source was located around 20-30 minutes' walk from the school and consisted of a running stream in the forest connected to a reservoir. During inspection, the surroundings of the water source were found to be contaminated with cattle feces, with a few cattle grazing nearby. Water specimens collected from the source, the distribution reservoir and tap water at the school were found to be grossly contaminated (Table 2).
Microbiological Result
Six stool specimens were collected from each of the hospitalized and non-hospitalized cases. Mucus and blood were visible in nearly half of the collected stool specimens. Red blood cells and white blood cells were found in all the specimens.
On culture and biochemical testing, Shigella was isolated from five of the 12 stool specimens. The identification was further confirmed by Analytical Profile Index (API) 20E as a "highly pathogenic Shigella species". However, on serotyping, none of the isolates agglutinated with any of the available antisera (DENKA SEIKEN, Japan) or the additional antisera against S. dysenteriae serotypes 13, 14 and 15 (Reagensia AB, Solna, Sweden). Antibiotic susceptibility testing showed that all isolates were susceptible to cefazolin, ciprofloxacin, cephalexin, ceftriaxone and gentamycin, and resistant to amoxicillin and nalidixic acid. Furthermore, except for one isolate, the four other isolates were also resistant to chloramphenicol and sulfamethoxazole.
DISCUSSIONS
This is the first documented outbreak of shigellosis in the country caused by a novel strain of Shigella species that could not be serotyped with the available antisera. Past outbreaks in Bhutan were mostly caused by either Shigella sonnei or Shigella flexneri, which were confirmed by serotyping with the existing antisera.8-10 The clinical manifestations caused by the Shigella serogroups are similar, with mucus and bloody diarrhea.13 However, the dysentery caused by S. dysenteriae is severe compared with the other serogroups.14 The current outbreak was most likely caused by S. dysenteriae, based on the non-fermentation of mannitol, although the isolates tested negative with all antisera. Moreover, the current outbreak caused hospitalization of cases due to loss of body fluids and severe dehydration. With timely intervention at the health center, no case fatality was reported in the outbreak.
Except for one isolate, the remaining four isolates were resistant to amoxicillin, chloramphenicol, sulfamethoxazole and tetracycline. Such multi-drug resistant Shigella (resistant to three or more antibiotics) has also been reported in Nepal, Africa, India and Zimbabwe.15-19 The emergence of MDR Shigella might be associated with the irrational use or overuse of antibiotics in healthcare facilities.
All cases were residing in the boarding facility, with a preponderance of girls. The absence of cases among day-scholar students suggests that they have more control than boarding students over their dietary habits. Moreover, they also reported that they drink boiled or filtered water all the time, whereas boarding students were not supplied with such facilities at the hostel.
The isolation of a single non-serotypeable Shigella species from the stool specimens suggests that all cases were exposed to the same source. Drinking water could possibly be the source of this outbreak, because all water specimens tested at the different sampling points were contaminated. This is also supported by the detection of cases on different dates, which indicates that they might have been exposed to contaminated water at different time points. The epidemic curve shows a difference of seven days between the first and the last case; if this had been a common point-source outbreak, all cases would have had their onset within 1-3 days after infection, because the incubation period of Shigella is 1-3 days.20 Isolation of Shigella from the water specimens would have confirmed the source of the outbreak, but RCDC does not have the sophisticated testing facilities required to isolate bacterial pathogens from water and other environmental specimens.
Interventions taken
The water tanks supplying the school were cleaned thoroughly, and provision of boiled water in the boarding facility was suggested. Health education was also given to all students and teachers on the transmission and prevention of shigellosis.
CONCLUSIONS
The outbreak was caused by a novel strain of Shigella species that had not been detected in any health center in Bhutan. Surveillance of bacillary dysentery has to be continued to monitor the distribution of serogroups and their antimicrobial susceptibility patterns to guide the treatment of patients. The school administration should provide enough boiled drinking water in the school, including the hostels.
ACKNOWLEDGMENT
The team thanks all students, teachers and cooks of Nganglam Higher Secondary School for providing candid responses during the investigation. The team also thanks the Armed Forces Research Institute of Medical Sciences (AFRIMS), Bangkok, Thailand, for helping us confirm the bacterial isolates.
Figure 1. Epidemic curve of shigellosis outbreak in NHSS, 2012, by date of onset
Table 1. Symptoms of cases in an outbreak of suspected shigellosis in NHSS, 2012 (n = 82)
Table 2. Water quality test results at different water sources in NHSS, 2012
A multidimensional sensory evaluation model to investigate the (dis)comfort of body parts in a supine sitting position during a lunch break
Employees who work long hours frequently complain of muscle fatigue caused by prolonged sitting. As a result, products that assist them when resting in a chair in a reclining position, in order to relieve fatigue and improve comfort, are required. To ensure that the new product works as intended, a usability test based on prototyping must be developed. The research process was divided into three stages: firstly, the development of the perception assessment questionnaire; secondly, a confirmatory factor analysis (CFA) was conducted on the perception assessment data of 26 subjects and the measurement model was fitted to verify the reliability and validity of the questionnaire; finally, the sEMG technique was used to verify the comfort level of 21 subjects. Based on usability experiments and an exploration of human factor relationships, this study develops a prototype testing model, which focuses on the comfort perception of body parts, as a means of promoting innovation in the design and manufacturing industry.
not. Thus, the benefits after 10 min are evident immediately after the nap, whereas 20-and 30-min naps initially lead to sleep inertia [3]. According to the aforementioned study, healthy young people should ideally nap for approximately 10-20 min. Due to the constraints of office conditions and, consequently, short rest periods, a specific function that relieves muscle fatigue, by resting in a supine position, and improves physical comfort will be designed to meet the needs of the user for health reasons. However, the success of a new product is dependent on its ability to perform the pre-defined functions and be utilized correctly by the user. A prototype, regardless of fidelity, can be represented in physical or digital form and is used to answer a question or test a hypothesis [4]. It is a method of converting a theory or concept into a real, working system [5] and is crucial in usability testing. According to Kondaveeti et al. [6], prototypes play an important role in the product development process, such as the communication phase, in which the creative originator communicates the needs and functions of the product to the developer, designer or user [7]. Equally significant is the design phase, in which items of concern are transformed into features that are simple to observe and realize. The goal of the modelling phase is to visualize the conceptualized idea and the intended product in a clear and understandable form. Following this phase, the project is reviewed for flaws and improvements are made.
Seat-based rest aids generally define excellent comfort in terms of a high level of usability; examples include subjective and objective comfort measurements used to improve car seats and the development of comfort assessment questionnaires [8][9][10], research methods for classroom seating comfort that help researchers analyse the perception of comfort (or discomfort) under dynamic conditions [11], as well as design and validation studies [12]. Furthermore, with regard to the resting state in the seat, some studies have compared the habitual relaxed sitting position with the neutral sitting position [13]. In addition, subjective measures have also involved the use of body mapping techniques, which aim to assess local comfort more intuitively and accurately by providing visual recognition rather than text.
Not only is comfort embodied in subjective perception methods, but the objective state of the body is also one of the critical evaluation conditions for verifying the accuracy of subjective assessments. Lindegård et al. [14] investigated the perceived effort, comfort, and work techniques of professional computer users and their relationship with the incidence of neck and upper limb symptoms using a combination of subjective questionnaires and behavioral observations. Smulders et al. [15] conducted their study using a combination of subjective comfort questionnaires, postural observations and surface electromyography (sEMG).
Fig. 1. A development method for prototype usability testing based on perceptual evaluation and sEMG.
However, current comfort scales only provide simple degree judgements [19,20], whereas advanced CNN techniques can only categorize and identify; neither is sufficient to capture the complexity and diversity of user perceptions or to provide a broad range of creative concepts for new designs. Therefore, research into sitting-related interaction comfort and related new design concepts requires a variety of perceptual measures to provide an innovative design basis. This study develops a new prototype usability testing model to guide design research by combining perceptual evaluation of sensory vocabulary with sEMG measurements to investigate the interaction between the human body, the product, and the effects on physical comfort. To validate the method, a CFA was used to perform a multi-factor model fitting of the developed multi-perceptual assessment scale, together with local muscle fatigue measurements. The method applies to the development of products that are closely related to human posture, aims to improve product comfort, and can function as a general product usability testing and development method.
Approach of the proposed method
The (dis)comfort of the prototype was measured using both subjective and objective methods in this study. The research was carried out in six steps ( Fig. 1): 1. create the prototype; 2. conduct a literature review and design a body mapping and comfort scale; 3. conduct a usability experiment to assess the subjective perception of comfort following the prototype experience; 4. validate the reliability and validity of the comfort scale using CFA, based on the statistical results of the subjective assessment data; 5. determine which body parts were the most affected; 6. validate the prototype's comfort by creating a control and an experimental group to measure the physiological signals of the body part using sEMG.
Prototype of the experiment -supine sitting position cushion
Supine sitting is a relaxed and languid sitting position whereby one reclines backwards in a chair to rest. The experiment's prototype is based on a previous literature study [19], which demonstrated that people experience the most physical comfort when resting in a chair in a supine sitting position, by maintaining a neutral position, which is the most comfortable position and minimizes the range of muscle movement [13,15,18,21]. The prototype is a three-part, strip-shaped pillow that wraps around and supports the neck, underarms, waist, chest and belly like a scarf ( Fig. 2-a). When the user is sitting back in a chair, the prototype supports and stabilizes the neck, and the middle part of the pillow supports and stabilizes the neck and shoulders, preventing neck swaying and discomfort ( Fig. 2-b).
The left section of the prototype has a concealed pouch and strap at the end, which can be folded from the right end to the left end to wrap the cylindrical prototype in the pouch, then it is tied tightly with the strap (Fig. 3). The weight of the prototype is 1 kg and the overall dimensions are 36 × 20 cm, therefore, it is easy to carry outdoors, to the office, to the car or may be used on a plane.
Factors relating to the degree of sitting (dis)comfort
Zhang et al. [22] proposed a model that illustrates the interaction between comfort and discomfort, showing how the two transition into one another. For example, when performing a task over a longer period of time, fatigue and discomfort increase while comfort decreases; conversely, when the biomechanics feel good, comfort increases. Vink [23] proposes a comfort model based on Vink and Hallbeck [24] and Naddeo et al. [25] to describe the relationship between product comfort and product design characteristics. In this paper, we combine these studies and develop a new comfort model for product experience (Fig. 4), in which an artifact (A) and a human (H) are in an environment where the act of use (U) leads to an interaction between the human and the artifact (I), resulting in a human response (B), which is perceived in the human brain (P) and influenced by expectations (E), resulting in feelings of comfort (C) or discomfort (D).
The prototype used in this study is a new reclining seat rest pillow, for which user experience feedback was continuously collected between 2020 and 2021, with graphic feedback obtained through reviews and interviews on online sales platforms. Based on the most frequently mentioned user perception terms, and referring to the relevant literature for terms related to sitting comfort and discomfort, a total of 65 terms were collected from 21 references [9,16,[25][26][27][28][29][30][31][32][33][34][35]. Potential factors were selected, analysed and generalized from the literature to obtain multiple sets of semantic differential terms as factors for the assessment of comfort and discomfort. Comfort and discomfort were described as having both psychological and physiological origins; therefore, no clear distinction was made, and the words were instead classified according to the semantic differences between comfort and discomfort. Fig. 5(a) summarizes the physical and psychological sensations and extracts representative terms, including relaxation, fatigue, pain, etc., and Fig. 5(b) summarizes the physical and environmental sensations and extracts representative terms, including support, shape, temperature, etc.
In order to verify the suitability of the factors identified for the assessment of (dis)comfort, a user survey questionnaire with a five-point scale (evaluating the perceptual terms of subjective (dis)comfort) was conducted, in which the responses 1 (very unsuitable), 3 (average) and 5 (very suitable) were used (Appendix A). The first part of the questionnaire is a personal data survey, which includes questions relating to gender, age, height, weight, occupation and time spent using the product, and asks whether there have been any joint or muscle problems in the last three months. The second part uses semantic differential terms to describe the user's assessment of the comfort and discomfort experienced with the prototype, e.g., "do you think [fatigue-relief] is an appropriate description of the subjective physical comfort and discomfort after using the product?" Finally, the questionnaire was distributed via email and social networking software to users (Table 1) who had purchased and used the product, who were asked to evaluate the suitability of the terms for describing comfort and discomfort, with terms showing appropriate semantic differences filtered according to their scores. However, in terms of the generality of the questionnaire data, a mean score greater than the middle value (3) indicates that the term is suitable as a description of the comfort and discomfort experienced by the body; nevertheless, an assessment around the middle value is still rather vague and not sufficiently precise in terms of perception.
A cut-off score of >3.5 was established and checked for reasonableness. The selected body parts were combined with the perceptual terminology of (dis)comfort to form an evaluation questionnaire and measurement model, and a perceptual evaluation experiment of the prototype's usability was set up. CFA was then used to fit the model to the evaluation data. If the measurement model fits well, the results of the selection of body parts and (dis)comfort perceptual terms are statistically justified.
Body mapping
Body mapping is a visual representation of the human body divided into parts, which are then evaluated using a standardized scale for each part. Currently, the types of body parts in the perception questionnaire are determined by the study's needs, and there is no agreement or standard for the scale. The body mapping diagram, proposed by Fu et al. [19] for the supine sitting posture was used in this study, with 10 body parts, including six above the chest area and four from the waist to the chest area.
Subject demographics
The survey had 27 respondents, 14 men and 13 women. The subjects' ages ranged from 18 to 50, with 51.85% between the ages of 31 and 40, who had used the product for more than a week. In addition, the height range was 160-185 cm, the weight range was 49-90 kg and the body mass index (BMI) range was 17.36-26.42, which is generally in line with health standards ( Table 2). The subjects included individuals from a range of occupations, including a student, a producer, a marketer, a technicist, a teacher as well as managers.
Experimental methods for evaluating the perceived usability of prototypes
The prototype usability tests for this study were carried out in a university's product design laboratory. The laboratory is divided into three sections: the experimental area, the observation area and the waiting area, with a desk and an office chair in the experimental area to simulate a typical office. The temperature in the room was kept at 25-28 °C, and the experimental process was quiet, with an experimentalist in charge of maintaining order and managing the experimental process. Since most people are more likely to be tired after lunch, the subjects were placed on their backs in office chairs for a simulated rest between 11:00 and 15:00. The experiment lasted for three to four hours per day over a six-day period. The control group (resting in a supine position without the prototype) was tested first, followed by the experimental group (resting in a supine position with the prototype), with each participant completing both the control and experimental conditions for 12-15 min each. Finally, the "Questionnaire for the Evaluation of Perceived (Dis)Comfort of Body Parts" was completed within 10 min, providing timely feedback on the perceived comfort level experienced.
Hwang and Salvendy [36], based on predictions using observational data with a variety of experimental conditions, suggest that the optimal number of usability assessment subjects is generally [10 ± 2] and this can be applied to general or basic assessment situations. However, to obtain more accurate data, certain studies have recruited a larger number of subjects for their experiments [37,38]. In this study, 26 subjects were used, 11 males and 15 females, all aged 18-25 years; details of their height, weight and BMI are shown in Table 2.
Confirmatory factor analysis
Byrne [39] proposes five steps that are commonly used to perform a confirmatory factor analysis: the first step is to build a hypothetical measurement model based on theory; the second step is to evaluate model identification, that is, to convert the model that the researcher wishes to test into a statistical model for analysis; the third step is to estimate the parameters, both by implementing a structural equation model and by selecting an appropriate path analysis; the fourth step is to assess the fitness of the model. Fit indices distinguish grossly misspecified models by means of absolute fit, relative fit, parsimonious fit and information criterion indicators, and convergent and discriminant validity must also be analysed. The fifth step is the model correction stage, where standardized residuals and modification indices (MI) are useful statistical calculations that detect model irregularities and are used to correct the model; a low MI reflects good model fit, and MI is generally kept below 15 [40,41].
Maximum likelihood (ML) is a common method of parameter estimation and assumes multivariate normality. The method generally requires a sample size of at least 100-200 before it is considered adequate and can be used to obtain reliable findings [42,43]. In this study, the ML method was used to estimate the parameters of the data; when the absolute value of the skewness coefficient is greater than 3 or the absolute value of the kurtosis is greater than 10, the data deviate from normality. According to Jaccard and Wan [44], maximum likelihood estimation is then no longer suitable and the asymptotic distribution-free (ADF) method should be used instead [45,46], although a larger sample size is required. In this study, Amos 22.0 software was used to perform the confirmatory factor analysis. Factor structure validation for the latent variables in the model included: goodness-of-fit, Cronbach's alpha coefficients, composite reliability (CR) and average variance extracted. Nunnally and Bernstein [47] suggested a CR of >0.70 as an indicator of construct reliability for latent variables (Equation (1)), and Bagozzi and Yi [48] suggested an evaluation criterion of average variance extracted (AVE >0.50), indicating that the latent variables analysed in this study have convergent validity (Equation (2)).
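The two equations referenced above are not reproduced in the extracted text; the standard formulations of composite reliability and average variance extracted, which Equations (1) and (2) presumably correspond to, are:

\[ \mathrm{CR} = \frac{\left(\sum_{i=1}^{n}\lambda_i\right)^{2}}{\left(\sum_{i=1}^{n}\lambda_i\right)^{2} + \sum_{i=1}^{n} e_i} \tag{1} \]

\[ \mathrm{AVE} = \frac{\sum_{i=1}^{n}\lambda_i^{2}}{\sum_{i=1}^{n}\lambda_i^{2} + \sum_{i=1}^{n} e_i} \tag{2} \]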
In these formulas, λ is the standardized factor loading and e is the standardized error variance. The construct validation indexes included goodness-of-fit, Cronbach's alpha coefficient, composite reliability and convergent validity (Table 3), which were used to verify the fit of the perceived body part comfort/discomfort constructs in relation to the index scales.
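A minimal numerical sketch of the CR and AVE calculation for one latent construct is shown below; the standardized loadings are hypothetical values, not the study's results.

```python
import numpy as np

# Hypothetical standardized factor loadings for one latent (dis)comfort construct.
loadings = np.array([0.72, 0.68, 0.81, 0.75])
errors = 1 - loadings ** 2          # error variance of each standardized item

cr = loadings.sum() ** 2 / (loadings.sum() ** 2 + errors.sum())
ave = (loadings ** 2).sum() / ((loadings ** 2).sum() + errors.sum())

print(f"CR  = {cr:.3f}  (acceptable if > 0.70)")
print(f"AVE = {ave:.3f}  (acceptable if > 0.50)")
```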
sEMG measurements
Subject demographics
The sEMG measurement experiment involved 21 healthy adults (11 males and eight females), whose ages ranged from 20 to 27 years. In addition, the height range was 154-180 cm, the weight range was 41-85 kg and the BMI range was 16.51-26.23, which is generally in line with health standards, as detailed in Table 4. None of the subjects had any history of neck, back or shoulder pain or any neurological disorders (Appendix C). Each subject provided informed consent, and the Medical Ethics Committee of the School of Medicine of Huaqiao University approved the study and conducted an ethical review of the psychological and ergonomic experiments.
Methods for measuring sEMG and analyzing data
The sEMG measurement experiment was also carried out in a university's Human Factors Engineering Laboratory. The laboratory is divided into three sections: an experimental area, an observation area, and a waiting area, with the experimental area outfitted with desks and office chairs to simulate a typical office setting. The room temperature was kept between 25 and 28 °C. During the experiment silence was required, and an experimentalist was in charge of managing the flow and order of the experiment. The subjects were instructed to sit on their backs in office chairs between 11:00 and 15:00, after lunch. The experiment was conducted over 10 days, lasting three to four hours per day, and was similar to the perception assessment experiment in that it was divided into a control and an experimental group, with each test lasting 15 min for each subject. sEMG data were collected from 21 subjects at a sampling rate of 1000 Hz.
Based on the results of the subjective assessment of the perceived comfort of body parts in this study, combined with the study by Xu et al. [49], it was concluded that the sternocleidomastoid muscle (SCM) in the neck region is more likely to develop fatigue than the trapezius and splenius capitis muscles after prolonged sitting at work. According to that research, SCM fatigue occurs after about 20 min; it was therefore decided to use the first 15 min of data.
As a result, the electrodes were placed roughly one-third of the way between the sternal notch and the mastoid process [15,50,51], and data from the left and right SCMs were collected independently (Fig. 6). The surface sensor had a 10 mm diameter and a 20 mm inter-axis distance [52]. Furthermore, before the sensors (electrodes) were applied, the skin was washed with water and then wiped with alcohol.
Muscle potential activity signals were recorded in this experiment using the surface EMG module of a US BIOPAC MP160 polysomnographic recorder. The instrument had a sampling frequency of 1000 Hz, and most of the data acquisition and filtering was handled by the direct transmission system, the EMG100C EMG amplifier, the wireless signal transceiver and AcqKnowledge 5.0 software. As muscles fatigue, the power spectrum shifts from high to low frequency and the mean power frequency (MPF) decreases. The MPF indicator was used to extract sEMG frequency-spectrum or power-spectrum features before performing linear regression analysis in MATLAB. A fall in the MPF value over time, i.e., a downward slope of the regression line, indicates muscle fatigue, and vice versa.
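The following is a minimal sketch of this analysis, not the authors' processing code; the window length, the segment length for the spectral estimate and the synthetic input signal are assumptions for illustration.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import linregress

def mean_power_frequency(segment, fs=1000):
    """MPF of one sEMG segment: power-weighted mean frequency of its spectrum."""
    freqs, psd = welch(segment, fs=fs, nperseg=min(len(segment), 1024))
    return np.sum(freqs * psd) / np.sum(psd)

def mpf_trend(emg, fs=1000, window_s=10):
    """Split the recording into consecutive windows, compute the MPF of each
    window and regress MPF against the window midpoint time.
    A negative slope indicates a shift toward low frequencies (fatigue)."""
    win = int(window_s * fs)
    n_windows = len(emg) // win
    times, mpfs = [], []
    for k in range(n_windows):
        seg = emg[k * win:(k + 1) * win]
        times.append((k + 0.5) * window_s)
        mpfs.append(mean_power_frequency(seg, fs))
    fit = linregress(times, mpfs)
    return fit.slope, fit.pvalue, np.array(mpfs)

if __name__ == "__main__":
    # Synthetic 15-min example at 1000 Hz (replace with exported SCM data).
    fs, duration = 1000, 15 * 60
    emg = np.random.randn(duration * fs)  # placeholder signal
    slope, p, _ = mpf_trend(emg, fs)
    print(f"MPF slope = {slope:.4f} Hz/s, p = {p:.3f}")
```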
Results of the screening of factor terms for (dis)comfort
The (dis)comfort factor terminology was also screened using feedback from real users regarding their experience with the prototype. The physical characteristics and occupational descriptions of the 26 users are shown in Table 1. A Cronbach's alpha coefficient of 0.769 indicated high reliability, with a significant gender difference for [fatigue-relief] and no significant gender difference for the other terms. There was also no significant difference in score between [fatigue-relief] and the remaining term groups; the data results are shown in Table 5.
The evaluation of each group of terms indicated a high degree of consistency. To screen for more appropriate descriptions, the terms with assessment scores above 3.5 were selected. In this study, these six groups of terms were used as factors to assess comfort and discomfort; they were combined with 10 body parts to construct a "perception of body part (dis)comfort evaluation questionnaire" as the basis for a measurement model, and a CFA fit was then performed to verify its validity.
Questionnaire for the evaluation of perceived (dis)comfort of body parts
Semantic differential terminology suitable for describing comfort and discomfort was combined with an established body map, and a five-point scale was used as the measure to construct a questionnaire for the evaluation of perceived (dis)comfort of body parts (Appendix B).
Experimental results of the perceived (dis)comfort evaluation of the prototype experience
After the completion of the prototype usability experiment, 26 questionnaires for the control group and 26 for the experimental group were collected, totalling 52 questionnaires, each covering 10 body parts with six question items per body part. To facilitate identification and statistical analysis, the scale scores in the questionnaires were adjusted from [-2, 2] to [1, 5] before analysis. The mean of the factors assessed for (dis)comfort is denoted C_p. In this study, the (dis)comfort threshold was set at 3: C_p = 3 means there is no significant tendency towards comfort or discomfort, C_p < 3 indicates a bias towards perceived discomfort and C_p > 3 indicates a bias towards perceived comfort. The perceived comfort of the control group is evaluated in Appendix D, where the mean C_p of all six factors was greater than 3 only for the upper arm (3.038-3.423) and buttock (3.230-3.615), with C_p ≤ 3 for the other parts. The perceived comfort evaluation data for the experimental group are presented in Appendix E, where the C_p values of the six assessment factors for all 10 body parts were greater than 3; for four parts, namely the side of the head, side of the neck, upper arm and elbow, they were all greater than 3.5. The perception assessment data from the control group (CG) and the experimental group (EG) were compared, as shown in Fig. 7. The curves in Fig. 7(a) show the mean C_p values of the assessment factors, and the curves in Fig. 7(b) show the standard deviations (SD) of the assessment values. The two sets of data have distinctive features: the C_p values of the six factors for each body part show some clustering, indicating that the differences in scores between the assessment factors are minimal, and the standard deviations are also clustered, with the curves fluctuating little, indicating little overall variation in the assessment. However, the difference between the CG and the EG across the 10 body parts was more obvious, with the overall curves fluctuating more, indicating significant differences in the comfort of the body parts. The area with the least variation was the buttocks, indicating that the prototype had less influence on the experience of comfort in this area.
CFA measurement models and fitted indexes
In this study, a six-factor scale for the evaluation of perceived body (dis)comfort was developed, and research data were obtained through a prototype usability experiment. Although the data initially presented positive findings indicating that the scale is a good measure of (dis)comfort, further validation of the observed variables through CFA is required. These six variables are strain-relaxation, fatigue-relief, ache-wellbeing, support (bad-good), shape (unsuitable-suitable) and thickness (unsuitable-suitable).
The differences in perceptual experience under the influence of the prototype are shown in Fig. 7, where some body parts, such as the side of the head (SH), side of the neck (SN), back of the head (BH), back of the neck (BN) and back (BC), are more strongly perceived as comfortable, while others are not. In this study, the five body parts with the most significant perceived differences in comfort experience were selected from the "Questionnaire for the Evaluation of Perceived (Dis)Comfort of Body Parts" data, and CFA models were constructed for the CG and the EG, respectively. A set of 10 measurement models was thus constructed to identify and validate the veracity of the measurements of the six observed variables and to ensure that they were also reflective of the unobservable (latent) variables. This method was used to confirm the feasibility of the body-part (dis)comfort perception evaluation scale in prototype usability testing.
Construct validity was assessed based on the model's goodness-of-fit indexes. The measurement models of the CG and EG for the SH are represented in Fig. 8. The CFA results for both models indicate that the structural fit of the models assessed on the six (dis)comfort perception factors was satisfactory. For the SH(CG) model, CMIN (chi-square, χ2) = 12.275, with smaller values indicating that the hypothetical model differs less from the actual data, p = 0.584 (0.05 ≤ p ≤ 1.00), and the fit index CMIN/DF = 0.877 (< 2-5) is at the required level. The absolute fit indices RMSEA = 0.000 (< 0.08), GFI = 0.882 and AGFI = 0.822 were close to 0.9, and the incremental fit index CFI = 1.000 (≥ 0.90) was at the required level (Fig. 8-a). In addition, the initial fitting of the SH(EG) model revealed that its absolute fit index was poor; after amending the model in relation to the MI, it reached the required level, which also confirmed that the data had a potential multicollinearity problem (Fig. 8-b). Comparing the fit indices with those of the SH(CG) model, all indicators of the SH(EG) model are slightly better, except for the four indexes p, GFI, AGFI and TLI, which are slightly worse, as shown in Table 5.
Fig. 8. CFA measurement model for the side of the head (CG and EG).
The fit indices for the SN(CG) model were slightly poor, with GFI = 0.876 and AGFI = 0.782; however, all other indices were at the required level (Fig. 9-a). The fit of the SN(EG) model was significantly better than that of the SN(CG) model, with all fit indexes reaching the required level and GFI = 0.951 and AGFI = 0.906, indicating that the model fit was excellent (Fig. 9-b). However, both models were modified using the MI, indicating a potential multicollinearity problem in the observed variables. Specific data are presented in Table 6.
In addition, the measurement models of the CG and EG for three other body parts, namely the BH, BN and BC, are shown in Appendix F. The fit of the BH(CG) model was slightly inadequate with AGFI = 0.852, and the fit of the BH(EG) model was slightly inadequate with AGFI = 0.871, while all other indexes met the requirements. The fit indexes of the CG measurement model for the BN (AGFI = 0.83, RMSEA = 0.124, GFI = 0.833 and AGFI = 0.708) were slightly below the required level, while all other indexes were met. Not all fit indices for the 10 measurement models were optimal, but they achieved a marginal fit and the models were acceptable.
Composite reliability and convergent validity of the measurement model
In this study, the composite reliability and convergent validity were calculated using Equations (1) and (2); the average variance extracted suggested by Bagozzi and Yi [48] was evaluated against AVE > 0.50, and the composite reliability suggested by Nunnally and Bernstein [47] was evaluated against CR > 0.70. Both indexes attained the standard level, which indicates good composite reliability and convergent validity of the latent variables.
As shown in Table 7, some of the standardized factor loadings did not meet the standard requirement, but the values were > 0.5 and relatively close to 0.70, and could therefore be accepted.
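For reference, a small helper in the spirit of Equations (1) and (2) can compute CR and AVE from standardized loadings; the loading values below are placeholders, not the study's data.

```python
def composite_reliability(loadings, errors=None):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    if errors is None:
        errors = [1 - l ** 2 for l in loadings]  # usual assumption for standardized loadings
    s = sum(loadings)
    return s ** 2 / (s ** 2 + sum(errors))

def average_variance_extracted(loadings, errors=None):
    """AVE = sum of squared loadings / (sum of squared loadings + sum of error variances)."""
    if errors is None:
        errors = [1 - l ** 2 for l in loadings]
    sq = sum(l ** 2 for l in loadings)
    return sq / (sq + sum(errors))

if __name__ == "__main__":
    # Hypothetical standardized loadings for the six (dis)comfort factors.
    loadings = [0.72, 0.68, 0.81, 0.59, 0.75, 0.70]
    cr = composite_reliability(loadings)
    ave = average_variance_extracted(loadings)
    print(f"CR = {cr:.3f} (criterion > 0.70), AVE = {ave:.3f} (criterion > 0.50)")
```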
sEMG measurement experimental strategy
Based on the assessment of the prototype's perceived (dis)comfort on body parts, it was found that the neck region had the highest perceived comfort value and showed the greatest difference in perception between using the prototype (EG) and not using it (CG). An objectively measured prototype usability experiment was therefore established, using sEMG, to further analyse the differences between the left and right sides of the neck and the level of comfort. The subjects had been free of joint and muscle pathologies over the previous three months. They were asked to sit in an office chair during lunchtime hours for the experiment and were measured under both conditions, with and without the product, after which the physiological data were compiled and analysed. Chairs with armrests but no headrests were chosen because they are commonly used at work and for computer work.
The sEMG experiment lasted for 10 days, with two subjects per day, and was conducted at one-day intervals between 11:00 and 15:00. The test required dividing the participants into two groups, non-wearing and wearing, with each group alternating between using the product and resting in an office chair for around 30 min. Data for the left and right SCMs were collected using sEMG sensors placed at the one-third position between the sternal notch and the mastoid process.
sEMG data analysis for experimentation
The sEMG experiment was carried out on the SCM muscles of 21 subjects divided into two groups: control and experimental. To compare SCM muscle fatigue between the CG and the EG, the raw MPF data were exported and fitted in MATLAB using linear regression analysis. The resulting sEMG data had high precision, accuracy and reliability.
In Fig. 10, the sEMG filtering data of the left and right SCMs for five of the total (n = 16) subjects are specifically shown, and the CG (Fig. 10-a) and the EG (Fig. 10-b) are compared side by side. It was found that there were significant differences in both the left and right SCM signal fluctuations in the subjects, with more significant differences between the CG and the EG.
In the CG (prototype not used; Fig. 11-a and b), the left and right SCMs showed significant muscle fatigue (P < 0.05) after 15 min of rest, and the data were statistically significant (Table 8): the slopes of the MPF regression lines were negative (CG-SCM-R, β = -0.142; CG-SCM-L, β = -0.097). In the EG (prototype used), the left SCM showed no muscle fatigue and the slope of its MPF regression line was positive (EG-SCM-L, β = 0.185); in the right SCM, however, the slope of the MPF regression line was negative (EG-SCM-R, β = -0.128), indicating muscle fatigue, but to a lesser extent than in the CG (Table 8). Fig. 11 (c) and (d) further demonstrate that resting in an office chair with this prototype is effective in relieving neck muscle fatigue during short (15 min) rests. There is therefore a significant usability difference between the EG and the CG, with the EG showing a good comfort effect.
Discussion
This study examines the existing literature on the subject, creates a prototype usability experiment with a perceived comfort assessment model and sEMG measurements, and validates and investigates the data generated by the experiment. The procedure was divided into three stages: the first was the creation of a perception assessment questionnaire, the second was the confirmatory factor analysis and the third was the verification of comfort using sEMG on the neck area.
Questionnaire to assess the perception of (dis)comfort of body parts based on prototype experience
In the first phase, eight pairs of perceptual terms from the literature were summarized, and the questionnaire was used to filter the perceptual terms based on the prototype experience. The six pairs of terms with the highest scores were chosen; the data not only had high reliability but also did not differ by gender, indicating that the perceptual terms could be used to describe the prototype's comfort and discomfort. At this stage, the questionnaire's reliability and validity had not been confirmed and only a hypothesis had been proposed; the questionnaire was used as a subjective perception measurement tool in the second phase of the experiment.
Prototype (dis)comfort perception evaluation and sEMG measurement
Perceptions of the comfort experiment and factor validation made up the second stage of the study, which involved setting up the experimental site, organizing the subjects and planning and managing the experimental process. Based on the findings of this phase, five of the 10 body parts with strong perceptions of comfort were measured in a CFA model. Some of the models had slightly poorer AGFI indicators, such as SH(EG), SN(CG), BN(EG) and BC(EG), but the GFI indicators were all satisfactory, implying that the CG and EG measurement models for the five body parts fit well and achieved satisfactory levels of construct reliability and convergent validity. This implies that the six-factor comfort and discomfort perception scale proposed in this study is appropriate for use in the prototype test and can provide an accurate measure of the perceived user experience. Furthermore, the measurement model's good fit justifies the use of a cut-off value of 3.5 in the screening process of "2.3.1 Factors relating to the degree of sitting (dis)comfort".
The third stage comprised the sEMG measurement of body parts and linear regression analysis of the data, which focuses on muscle physiological signal measurement and comfort identification, and necessitated the establishment of a prototype usability experiment similar to that of the second stage. sEMG measurement experiments are relatively expensive and time-consuming, and the data are detailed and elaborate; the goal here was to validate the reliability of sEMG for (dis)comfort measurement in prototype usability testing. Using sEMG to measure muscle fatigue more accurately in the SCM not only validated the logic and reliability of the body perception assessment method, but also demonstrated the prototype's effectiveness in improving local body comfort and relieving muscle fatigue in the supine sitting position. The CG-SCM-L, CG-SCM-R and EG-SCM-R all showed significant (p < 0.05) fatigue, whereas the EG-SCM-L showed no fatigue and its fatigue indicators were markedly lower than those of the CG-SCM-L.
According to the behavioral observations of the subjects during the experiment, the body can adopt three reclining postures when resting in a chair in a supine position: neutral, leaning left and leaning right. There was no significant recline bias in the sEMG CG, but in the sEMG EG the subjects tended to recline to the left, so there was no significant fatigue in the left SCM under the intervention of the prototype (Fig. 11 c), while the right SCM was less affected by the prototype and therefore showed some fatigue (Fig. 11 d).
The effect of time and posture on experimental results
The best time to conduct the experiment is between 11:00 and 15:00, after lunch, which means that the time available for the experiment is limited and it can only be completed over several days. The body comfort perception assessment experiment can be completed an average of four times per day, as participants only need to complete the prototype experience and answer the questionnaire, whereas the sEMG measurement experiment can only be completed twice per day because of the constraints of the equipment. The ambiguity and variability of human perception are high; therefore, accurate measurement timing is critical for data accuracy and reliability.
Based on the observations and brief exchanges with the subjects during the experiment, it was interesting to note that the differences in height and weight of the subjects resulted in significant differences in the description of feelings related to the prototype. There were also differences in sitting posture, with some leaning left, others leaning right, some stretching their legs forward and others retracting them backwards. These phenomena result in varying levels of comfort in the body parts during the perceptual assessment, as well as significant differences in the sEMG data of the neck muscles between the left and right sides. The duration of the test and the subjects' postural adjustment may be the main reasons for this, as the subjects naturally adjust their posture to relax their muscles when they feel weak. This is why, as a normal physiological phenomenon, the two sides of the SCM produce different muscle responses.
Conclusion
The goal of this research was to create a prototype testing model based on body part comfort perception experiments and to investigate human factors based on analysis of the experimental data. The purpose was to effectively validate new products and design concepts, thereby promoting innovation in industrial design and manufacturing. After summarizing previous literature on body part types, scales and perceptual terms of comfort and discomfort, a "Questionnaire for the Evaluation of Perceived (Dis)Comfort of Body Parts" based on body mapping and a "Factor Word Screening Questionnaire for (Dis)Comfort" based on semantic differential terms were developed. The questionnaire data were then used to select body parts and perceptual terms appropriate for the prototype test.
The results of the CFA showed that the CG and EG models for the five key body parts in the comfort/discomfort perception evaluation fitted to a satisfactory level, with the fit metrics satisfying RMSEA < 0.08, SRMR ≤ 0.10, CFI > 0.90, TLI > 0.90, GFI > 0.80, AGFI > 0.80 or ≈ 0.80, AVE > 0.50 and CR > 0.70, and with a standardized loading value of ≥ 0.5 for each model variable, which was considered an acceptable fit. The experimental subjects were healthy young adults; however, gender- and age-based groupings have not yet been explored in depth. Differences in physical condition may have had an impact on the study, which could be investigated further in the future. Multi-sensory comfort is a complex psychological, physiological, physical and environmental characteristic and therefore affects the fit of the measurement model, which is not perfect but reaches the standard level. In addition, the type of office chair may also affect the results of the experiment, such as the difference between a chair-back with head support and one without. From the experimental observations, it was found that there are three leaning directions in the supine sitting position, and there is some stability when the prototype intervenes, such as continuous leaning to the left; however, the significance of the leaning direction could not be confirmed because of the limited sample. The sEMG results of the muscles on both sides of the neck also support this finding.
The results of the perceptual evaluation and sEMG experiments support the conclusion that our prototype usability testing model is a useful tool, not only for assessing the comfort of the supine sitting position but also for validating the feasibility of a new design. If the results show that the prototype is capable of significantly improving comfort for a body part, the detailed design of the product prototype will be optimized, and comfort improvement measures will be proposed to refine the prototype iteratively from the standpoint of the overall design concept. Our proposed method is a generic model that can be used not only for the prototype in this study but also for usability testing of other similarly functioning products, for example as a reference for cushion or sleep-aid comfort studies.
Ethics approval
All human subjects in this study gave their written consent to participate in our research.
Author contribution statement
You-Lei Fu, Ph.D.: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Wrote the paper. Ruoqi Dai: Performed the experiments; Analyzed and interpreted the data. Xiaoshun Xie; Wu Song: Contributed reagents, materials, analysis tools or data.
Data availability statement
Data will be made available on request.
Declaration of competing interest
The authors declare no competing interests.
A Conceptual Framework for Equipment Maintenance Automation under a Pyroprocessing Automation Framework
For most of the remote maintenance activities of equipment in a hot cell, replacing breakdown modules is preferred over in situ repair because of insufficient space in the cell and the limited operability of remote handling tools. In such cases, the maintenance operation can be decomposed into transport of the new modules to the failed equipment, replacement of the broken modules with new ones, and then transport of the broken parts to the reserved space for further repair or disposal. In this respect, transfer is the most basic operation during remote maintenance, which is also true for the maintenance of pyroprocessing equipment. Hence, this paper proposes a maintenance automation framework for automated pyroprocessing equipment from the standpoint of module transfer. For the maintenance automation framework, maintenance-related functions and events are defined, and they are integrated with the pyroprocess automation framework. The proposed framework is verified by a case study on the maintenance of a large module through a hardware-in-the-loop simulation.
Introduction
Pyroprocessing is a technology that reduces the volume and toxicity of spent nuclear fuel by means of electrolysis with high-temperature molten salt. In these electrochemical processes, the material from nuclear fuel is transmuted into a metal form by an electrolytic reaction with electrodes. The group of transuranic elements is recovered and distilled for reuse. Korea Atomic Energy Research Institute (KAERI) has been studying pyroprocessing technology and developed PRIDE (PyRoprocessing Integrated inactive DEmonstration facility) for studying integrated engineering-scale pyroprocess performance and scale-up issues using depleted uranium with surrogate materials [1][2][3]. PRIDE has an argon cell where the oxygen and moisture levels are controlled within 50 ppm. In the argon cell, major process units such as the electrolytic reducer, cathode processor, electrorefiner, salt distiller, liquid cadmium cathode type electrowinner, residual actinide recovery apparatus, cadmium distiller, and waste molten salt treatment apparatus were installed [1]. In addition, auxiliary apparatuses for basket replacement or material distribution have also been developed to support the integrated processes.
Integrated processes at PRIDE are carried out by sequentially transferring the process material contained in baskets to the next equipment. The transfer of baskets between equipment during the processes is carried out by remote handling devices such as mechanical master-slave manipulators, cranes, or bridge-transported dual-arm servo manipulators [2]. As a result, the progress speed of the process depends mainly on the skill of the operator, which means uniform process quality control is difficult to achieve. Moreover, as the capacity or number of items of equipment increases, there is a limit to depending only on conventional manual operations, since the conventional remote tools do not have sufficient workspace, degrees of freedom, and payload. The exposure time of the workers can also increase. In these circumstances, automation of the pyroprocess is a possible solution, and an automation framework for pyroprocessing based on material transfer automation was proposed for automating integrated processes [4]. However, there has not been a great deal of research on the automatic control of integrated processes with multiple pieces of equipment. In previous studies, automation in radiation environments such as hot cells mainly focused on the automation of unit equipment or handling devices using robot systems in order to increase worker safety during routine processes and to reduce errors caused by workers [5][6][7].
On the other hand, the remote maintenance of process equipment that operates in hot cells should be considered in cases of failure. In general, when equipment failure occurs, maintenance operations based on replacement are preferred to in situ repair due to insufficient workspace in the hot cells and the limited operability of remote handling tools. In this case, remote maintenance is performed in the following order: (1) new modules are brought into the hot cell; (2) they are then transferred to the broken equipment; (3) the failed modules are replaced with new ones; (4) the failed parts are transferred out of the cell. Hence, maintenance tasks can be interpreted as the sequential transport of modules. This means that automation of maintenance can be achieved in a similar way to material transfer automation if the replacement operation is simplified. Against this background, this paper proposes a concept of maintenance automation in hot cells from the viewpoint of module transfer automation. For this purpose, the functions and events for automating module maintenance have been defined, taking into account the characteristics of pyroprocess equipment, and they are integrated with a pyroprocessing automation framework [4]. Maintenance-related functions specified in the automation standards are also considered in the proposed framework. In order to verify the feasibility of the proposed concept, a hardware-in-the-loop simulation environment was developed, in which the equipment and cell resources necessary for maintenance operations are virtually implemented. A case study shows the feasibility of the proposed concept.
This paper is organized as follows. Section 2 reviews the pyroprocessing execution system (PES), which is a pyroprocessing automation framework, and Section 3 addresses the proposed maintenance automation concept and considers the integration of the concept with the PES. Section 4 presents the results of a case study for maintenance automation in the pyroprocess.
Automation Framework of Pyroprocessing
Since the pyroprocess is a batch process in which the material flows and the basket transfers are closely related, a concept for automating the pyroprocess that links the transfer tasks of baskets with the progress of the process has been presented. Ryu et al. analyzed the transfer routes of the pyroprocess with respect to material flow and classified pyroprocesses into process blocks in which the same baskets can be used [8]. In addition, based on the material transfer relationships between the equipment or between the process blocks, studies on the equipment layout for automated pyroprocessing have been performed [9,10]. Furthermore, the PES was proposed as a pyroprocessing automation framework in order to control the automated integrated pyroprocessing operations and to manage the material flows during the process [4]. The manufacturing execution system (MES) is a control execution system for automated production presented in the international standard ISA-95 [11]. The MES includes level 3 functions for manufacturing operations management systems such as production planning, production execution, operation management, and production management. The PES implements the MES concept for pyroprocessing automation. For this purpose, the MES functions were modified to apply them to pyroprocessing. As a result, the PES can control integrated process operations through basket transfer automation and can also conduct systematic lot management by tracking material flows such as the creation, distribution, or merging of materials during processes. Figure 1 shows the functional architecture of the PES and the messages exchanged between those functions.
Pyroprocessing Execution System
The PES receives process plans from a higher planning level and schedules and arranges the detailed orders based on the plans. The detailed schedules are transmitted to a lower control level such as block controllers for process equipment (pEC) and block controllers for material handling equipment (mhEC). During the given tasks performed by the process and handling equipment, the block controllers report the results to the PES whenever noticeable events occur. The PES manages and monitors the entire workspace by using the event messages and sends some information to the higher layer.
Pyroprocessing is a static process whose sequence does not change frequently, and its production rate is generally stable. As a result, the process plans will be simple, but the detailed operation orders can be more complex because the orders should be scheduled under simultaneous consideration of the process plan, raw process materials, work-in-process (WIP), and the availability of all equipment.
For this purpose, in the PES, the Pyro-DSP (pyroprocessing dispatcher) schedules and arranges the detailed orders based on the predefined dispatch rule, the shop-floor status, and the material handling requests (MHReq) for basket loading or unloading sent from the pEC. The detailed orders are then sent to the pEC and the Pyro-MCS (pyroprocessing material control system) simultaneously; the pEC commands the relevant process units to prepare for the handling operations and the Pyro-MCS commands the mhEC to perform the tasks. According to the commands, the material handling equipment transfers the process baskets from the equipment of one process to that of another. During this operation, the material handling equipment can communicate with the process equipment to check and confirm the basket identification to be handled. After the transportation is completed, the material handling equipment reports the command results, and the process equipment reports the start of the process.
The PES monitors the status of the workspaces by using a WIP manager and an equipment manager (EQPMgr). All information related to material handling during the process is stored and tracked by the material handling history. The information includes the event name, date, container information (identification of current and previous baskets), WIP information (identification of lot, product, and process), and moving paths (current equipment and destination). This information can be used to track and analyze the material's history and to control the product quality.
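As an illustration only (the class and field names are assumptions, not the PES's actual schema), a material handling history entry carrying the information listed above could look like this:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MaterialHandlingRecord:
    """One entry of the material handling history kept by the PES."""
    event_name: str          # e.g. a transfer-completed or process-start event
    date: datetime
    current_basket_id: str   # container information
    previous_basket_id: str
    lot_id: str              # WIP information
    product_id: str
    process_id: str
    current_equipment: str   # moving path: where the basket is now
    destination: str         # and where it is being sent

# Example: a basket handed from one process unit to the next.
record = MaterialHandlingRecord(
    event_name="TransferComplete",
    date=datetime.now(),
    current_basket_id="BASKET-014",
    previous_basket_id="BASKET-007",
    lot_id="LOT-2019-03",
    product_id="U-METAL",
    process_id="ELECTROREFINING",
    current_equipment="ER-01",
    destination="SD-01",
)
print(record)
```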
Conceptual Framework for Maintenance Operation Automation
In general, the maintenance of equipment that operates in a hot cell takes more time since a human operator cannot access the equipment directly and the performance of the available remote handling tools is not sufficient. For this reason, the development of equipment to be operated in a hot cell primarily focuses on the operability and maintainability of the remote handling equipment. For example, the International Thermonuclear Experimental Reactor (ITER) utilizes a remote maintenance management system that monitors and maintains equipment during its life cycle, and the system also emphasizes the remote compatibility of equipment with remote handling devices [12]. On the other hand, in the general manufacturing industry, maintenance is closely related to process planning since equipment failure directly affects the production rate. Hence, computerized maintenance management systems have been applied in order to reduce the downtime of equipment and manage maintenance schedules and resources efficiently [13]. Similarly, ISA-95 also defines the required activities and information for maintenance operations management at the MES level, as shown in Figure 2 [11]. These functions for managing maintenance instructions and required devices, maintenance capability, maintenance history, and so on are also required for pyroprocessing automation so that maintenance operations are systematically controlled.
Equipment Model for Automated Pyroprocessing.
Since maintenance operations depend greatly on the equipment, a specified equipment model is required for maintenance automation. The main equipment of pyroprocessing uses electrochemical reactions between the electrodes and materials contained in a basket in a molten salt bath. Because of this, it is common that electrodes and baskets are inserted into the vessel from above. Therefore, the general structure of the equipment has a stacked form, with a heater, reaction vessels, and a cover for heat shielding on a frame. In addition to the insert slots, the heat-shielded cover can have additional mechanical structures for lifting electrodes and baskets, or utility modules for off-gas treatment.
In order to maintain this stacked equipment remotely, it is advantageous for the remote handling equipment to access the equipment from above. When one module is transferred during a maintenance operation, the modules stacked above it must also be transferred. From this observation, it is possible to generate a work order for maintaining modules automatically.
As shown in Figure 3, this study adopted the class diagram of the unified modeling language (UML) to define the equipment modules systematically. Since this equipment module model is based on the physically replaceable module, this model is not exactly the same as the equipment model in ISA-88, which is based on the functional module [14]. The attributes of a module include the equipment, identification, and status of the module, as well as maintenance-related information and the parent and child modules. The maintenance-related information includes remote handling (RH) equipment and tools, RH class, installation date, and expected life. The RH equipment and tools are assigned in consideration of the module's weight, sizes, and mechanical interfaces. The RH class is determined based on the maintenance period and safety, as shown in Table 1 [15,16]. It can be noted that the expected life for preventive maintenance can be used to plan the material handling schedule, whose period corresponds to that life.
Each module can have a parent module and one or more child modules, and the relation between those modules can be defined as composition or aggregation. If two modules have a composition relation, both modules should be transferred together during maintenance operations. If they have an aggregation relation, they can be transferred individually. Therefore, these relations can be used to generate a work order for modular equipment maintenance systematically.
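A minimal sketch of such a module model and of transfer-order generation for stacked equipment follows; the class names, attributes and traversal logic are illustrative assumptions rather than the implemented PES data model.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class Relation(Enum):
    COMPOSITION = "composition"   # must be transferred together with the parent
    AGGREGATION = "aggregation"   # can be transferred individually

@dataclass
class Module:
    module_id: str
    equipment_id: str
    rh_class: int                 # remote handling class (cf. Table 1)
    rh_equipment: str             # handling equipment/tool assigned to this module
    expected_life_days: int       # used to plan preventive maintenance
    relation_to_parent: Relation = Relation.AGGREGATION
    parent: Optional["Module"] = None
    children: List["Module"] = field(default_factory=list)

def transfer_order(broken: Module) -> List[Module]:
    """Modules to move in order to replace `broken` in stacked equipment:
    every composition-related child stacked on it comes off first, then the
    broken module itself; aggregation-related children need not be moved."""
    order: List[Module] = []
    for child in broken.children:
        if child.relation_to_parent is Relation.COMPOSITION:
            order.extend(transfer_order(child))
    order.append(broken)
    return order

# Example: a heat-shielded cover (composition) installed on a reaction vessel.
vessel = Module("VESSEL-01", "ER-01", rh_class=2, rh_equipment="LARGE-MODULE-HANDLER",
                expected_life_days=730)
cover = Module("COVER-01", "ER-01", rh_class=2, rh_equipment="LARGE-MODULE-HANDLER",
               expected_life_days=1460, relation_to_parent=Relation.COMPOSITION, parent=vessel)
vessel.children.append(cover)
print([m.module_id for m in transfer_order(vessel)])  # ['COVER-01', 'VESSEL-01']
```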
Maintenance Concept under PES.
When equipment operating in a hot cell breaks down, it is maintained by replacing the faulty modules rather than fixing them in place, since the spare space is not sufficient and the operator cannot access the equipment directly. The replaced parts are decontaminated in a separate space before they are repaired by a human operator or disposed of as waste [17]. A similar method can be considered for the maintenance of the pyroprocessing equipment.
In this case, maintenance in a hot cell is carried out in the following order: bringing in new modules, transferring them to the faulty equipment, detaching the faulty modules, installing the new modules, and transferring out the faulty modules. Except for the replacement itself, most of the maintenance operations consist of transferring modules. Preventive maintenance can be regarded as material handling with a relatively long handling cycle but a high priority. Thus, maintenance tasks can be interpreted as a series of module transportation tasks, and therefore maintenance operations can be automated in a similar way to process automation. In other words, the equipment modules are maintained automatically in the same manner as the baskets that are transferred automatically during the pyroprocess.
To automate the maintenance operations under the PES, the activity models for maintenance operations management defined in ISA-95 (Figure 2) should be considered. Since the replacement of modules during maintenance is basically similar to the automated transfer of process material, most of the activity models can be managed by the PES functional architecture shown in Figure 1, except for the maintenance definition management and maintenance resource management activities. Hence, the WDMgr and RSCMgr for those activities, respectively, are implemented in the PES. Moreover, a block controller for the cell equipment (rscEC) is added, which controls the cell ports that bring new modules into the cell and transfer them out during maintenance. The updated PES functional architecture for maintenance automation is shown in Table 2 and Figure 4.
In addition to the EqpDown event, a maintenance-related event, ProcAbort, has been added for the case in which a process is interrupted by the equipment breaking down. When a ProcAbort event occurs, an event message is transmitted from the pEC to the RSCMgr. The PES then checks the inventory for spare modules and generates the maintenance work order. The internal messages exchanged in the PES include information on the failed equipment (EqpID), the modules to be transferred (ContID and LotID), and the handling equipment for maintenance (mheID). An identification is assigned to the maintenance process (ProcID = FIX). EqpID and ContID can be used as error codes to be reported.
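The sketch below illustrates this message flow; the field names, the inventory check and the Python types are assumptions for illustration, with only the identifiers named above (EqpID, ContID, LotID, mheID, ProcID = FIX) taken from the framework.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProcAbortEvent:
    """Event sent from the pEC to the RSCMgr when equipment breaks down."""
    eqp_id: str        # identification of the failed equipment
    cont_id: str       # affected container/module; also usable as an error code

@dataclass
class MaintenanceWorkOrder:
    proc_id: str       # maintenance processes are identified as "FIX"
    eqp_id: str
    cont_id: str       # module to be transferred, with its lot
    lot_id: str
    mhe_id: str        # handling equipment assigned for the maintenance transfer

def handle_proc_abort(event: ProcAbortEvent, spare_inventory: dict) -> Optional[MaintenanceWorkOrder]:
    """On a ProcAbort event, check the inventory for a spare module and,
    if one is available, generate the maintenance work order."""
    spare = spare_inventory.get(event.cont_id)
    if spare is None:
        return None  # no spare module available: maintenance must be scheduled differently
    return MaintenanceWorkOrder(
        proc_id="FIX",
        eqp_id=event.eqp_id,
        cont_id=event.cont_id,
        lot_id=spare["lot_id"],
        mhe_id=spare["handling_equipment"],
    )

order = handle_proc_abort(
    ProcAbortEvent(eqp_id="ER-01", cont_id="VESSEL-01"),
    spare_inventory={"VESSEL-01": {"lot_id": "SPARE-LOT-7",
                                   "handling_equipment": "LARGE-MODULE-HANDLER"}},
)
print(order)
```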
An Example Implementation of the Maintenance Automation Concept
A case study was performed to verify the performance of the proposed remote maintenance automation concept. In this case study, a reaction vessel was maintained as a large module within a hardware-in-the-loop system (HILS) that was developed using Siemens Tecnomatix [18].
Virtual Pyroprocessing Hot Cells.
A virtual environment for this study was built to simulate an automatically operated pyroprocessing hot cell. As shown in Figure 5, the virtual hot cell was divided into two areas (i.e., a process operation area and an in-cell maintenance area), so that the maintenance operations can be performed safely in a separated space without any interference from operations in the process operation area. Moreover, the repaired equipment can be tested in this space before it is moved to the process operation area for operation.
In the process operation area, various conceptual designs of the pyroprocess equipment and support equipment were implemented for process automation studies. Cell equipment such as transfer ports and auxiliary equipment were placed in the in-cell maintenance area. In addition, remote handling equipment for process operations and maintenance whose coverage lies within this hot cell was also considered.
As mentioned, since the main pyroprocessing equipment has a stacked form, it can be modularized to be assembled from above using remote handling equipment. Each module has a physical interface, which is compatible with the handling equipment, and it is assumed that the electricity or utilities are connected automatically when the module is installed on the parent module. In this equipment model, the vessels were put on the frame, which is equipped with a heater. In a similar way, a heat-shielded cover was installed on the vessels. It can be said that the relation between the reaction vessel and the heat-shielded cover is composition, since the cover is installed statically on the vessel, and the vessel cannot be replaced without removing the cover. On the other hand, baskets and electrode modules have an aggregation relationship with the cover because they can be transferred to other equipment frequently during the process. As a result, they may not be transferred with the cover during the cover's maintenance. By considering these relationships between the parent and child modules, the work order for maintenance can be managed systematically.

HILS for Pyroprocessing Automation Study

The HILS consists of the PES, the block controllers, hardware user interfaces for equipment control, and a virtual pyroprocessing hot cell. In the virtual environment, 3-dimensional models for the automated process equipment, handling equipment, cell equipment, inventories, and controllers are realized. Some signals and actions required for equipment control in the virtual hot cell are linked with the hardware user interfaces such as push buttons, motors, and limit switches. The PES and block controller can command and monitor both the virtual environment and the hardware for automation studies. Each system in the HILS communicates via an OPC protocol.
In the simulations, the PES and block controller manage the integrated processes. When an event signal is triggered through the hardware user interfaces during the process, the signal is transmitted to the block controller and the PES via an OPC server. The PES then generates process schedules or a work order, including material handling commands, and the block controller sends the schedules to the virtual environment and hardware. After the given tasks, the real hardware devices and virtual equipment report the results to the block controller. In such a sequence, this HILS can be used to verify the automated operations, including maintenance.
A Case Study for Maintenance Automation.
Using the HILS, a case study was performed to verify the proposed maintenance automation concept. A fault event for process equipment can be created by an operator using the hardware user interfaces, and the PES and block controller receive the event's message through the OPC server. The PES then generates work orders for maintenance based on the error codes in the message, and the block controller sends commands sequentially to the process equipment, material handling equipment, and cell equipment. The material handling equipment communicates with the process equipment and cell equipment to check and confirm the module to be replaced. Figure 8 shows a simplified sequence diagram for these maintenance operations.
As an automated maintenance scenario, it was assumed that a reaction vessel filled with molten salt failed, for example, because a crack was found on the vessel wall. In a real system, when the equipment breaks down, the pEC informs the EQPMgr and RSCMgr of this fault event. In this study, the event was instead generated when a button on the hardware user interfaces was pressed by an operator. The event then updated the related memory on the OPC server. Referring to the OPC server, the block controller sent the event messages about the fault to the PES and also requested the maintenance (an MHReq for unloading the failed part). The fault event was visualized in the virtual hot cell (the color of the equipment changed, as shown in Figure 9(a)). The PES then created a work order for the reaction vessel replacement using the handling equipment for large modules. According to the order, the block controller commanded the related process equipment, handling equipment, and cell equipment sequentially in the virtual hot cell. Figure 9 shows the whole maintenance procedure for the reaction vessel.
First, when the maintenance operations start, the large-module handling equipment automatically transfers the entire equipment to the in-cell maintenance area (Figure 9(b)). The entire equipment is ordered to be moved because the process operations can be interrupted during the maintenance of large modules and the workspace in the process operation area is insufficient to replace large modules. In addition, transferring the entire equipment makes it easier to treat the process materials in the vessel during maintenance, and it is also possible to test the equipment before transferring it back to the process operation area.
Afterwards, a new module is brought into the hot cell for the replacement (Figure 9(c)), and the vessel is replaced with the new one (Figure 9(d)). In this step, the failed vessel is moved with the cover to a temporary stand, the new vessel is installed on the equipment, and then the cover is transferred back onto the new vessel of the equipment. After this replacement, the repaired equipment is tested and transferred back for reinstallation (Figure 9(e)). The maintenance operations are completed after the failed module is transferred to a decontamination hot cell for further repair or disposal (Figure 9(f)).
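The case-study procedure can be summarized as an ordered list of transfer commands. The following sketch mirrors steps (a)-(f) of Figure 9; the command strings and controller names are assumptions for illustration.

```python
from typing import List

def maintenance_sequence(eqp_id: str, failed_module: str, new_module: str,
                         handler: str = "LARGE-MODULE-HANDLER") -> List[str]:
    """Ordered transfer commands for large-module maintenance,
    following the reaction-vessel case study."""
    return [
        f"{handler}: move {eqp_id} to in-cell maintenance area",
        f"cell port: bring {new_module} into the hot cell",
        f"{handler}: move cover of {eqp_id} to temporary stand",
        f"{handler}: remove {failed_module} and install {new_module}",
        f"{handler}: reinstall cover on {eqp_id}",
        f"pEC: test repaired {eqp_id}",
        f"{handler}: move {eqp_id} back to process operation area",
        f"cell port: transfer {failed_module} out for decontamination",
    ]

for step in maintenance_sequence("ER-01", "VESSEL-01", "VESSEL-02"):
    print(step)
```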
As shown, it was verified that the maintenance operations were automatically performed according to the detailed schedules commanded by the PES and the block controller systems in the virtual pyroprocessing hot cell. This concept can also be applied to the replacement automation of other modules.
Conclusion
In this paper, a conceptual framework for automating the maintenance of pyroprocess equipment in a hot cell was proposed for the first time. In this framework, maintenance operations are interpreted as the sequential transportation of equipment modules, so that the maintenance tasks can be automated by automating the module transfer. For this reason, the conceptual framework for maintenance automation was developed based on a process automation framework. The feasibility of the proposed framework was verified by a simulation study of a large module's automated maintenance in a hardware-in-the-loop system with virtual pyroprocessing hot cells. Further studies will be performed for various situations by using the pyroprocess automation framework, including maintenance automation.
Figure 6(a) shows a modeling example of the pyroprocess equipment in the virtual hot cell, and Figure 6(b) indicates the corresponding UML class diagram of the equipment.
Figure 6: Equipment model. (a) An example of process equipment and (b) UML model.
Figure 8: Simplified sequence diagram for automated maintenance operation.
Figure 9: Maintenance automation procedure in the case of the reaction vessel replacement. (a) An item of equipment fails; (b) it is transferred to the in-cell maintenance area; (c) a new module is brought into the cell; (d) the failed module is replaced; (e) the repaired equipment is transferred back; (f) the failed module is transferred out.
Table 1: Remote handling (RH) classification for process equipment maintenance.
Table 2: Implementation of maintenance operations management functions in the PES.
Electric-Magnetic Duality of Topological Gauge Theories from Compactification
In this note, we discuss electric-magnetic duality between a pair of 4d topological field theories (TQFTs) by considering their compactifications to 2 dimensions. These TQFTs control the long-distance behavior of loop and surface operators in 4d gauge theories with gapped phases. These were recently used in work by S. Gukov and A. Kapustin in detecting phases not distinguishable by the Wilson-'t Hooft criterion and by A. Kapustin and the author to construct discrete theta-angles for lattice Yang-Mills theories. The strong-weak duality is manifested in an exchange of dynamical and background degrees of freedom in the compactified TQFTs.
Introduction
The two theories that were discussed in [2] describe a Higgs and a confining phase where the unbroken gauge group and 't Hooft fluxes each take values in some finite group. Electric-magnetic duality in general should exchange these phases while dualizing the gauge group. We review these theories below.
What we will call the Higgs theory is defined by a high energy gauge group G which is broken down to a finite abelian subgroup Γ. Writing P for the principal G-bundle, there is a gauge field A, which is a connection on P, and the phase of a Higgs field φ, which is a section of the associated bundle P ×_G h, where h is the Lie algebra of the quotient G/Γ = H. There is also a Lagrange-multiplier field λ, which is a 3-form valued in the dual space P ×_G h*. We will also write g for the Lie algebra of G and t : G → H for the quotient map. In fact, g and h are isomorphic by the induced map t_*, but we keep the distinction explicit.
The action of this theory is a simple constraint, S = ∫ λ ∧ d_A φ. This action is manifestly topological. The 1-form wedging λ is the covariant derivative d_A φ on the associated bundle where φ lives. The action thus enforces the constraint that φ is covariantly constant. This implies that A is flat and has holonomies in ker t = Γ. In fact, this theory has a lattice description as an untwisted Dijkgraaf-Witten theory for Γ.
This TQFT has unscreened Wilson loops with charges in Γ̂ = Hom(Γ, U(1)) and unconfined magnetic surface operators with charges in Γ. For more on this TQFT's relationship to Higgs phases, we refer the reader to [2].
The other theory is what we will call the confining theory. Its field content is slightly harder to describe mathematically. It has a high energy gauge group G with a monopole condensate whose charges generate the finite normal subgroup Γ of G. We will again write H for the quotient G/Γ and t : G → H for the quotient map. The field content includes a 2-form field B valued in g. There is also a connection A on a principal H-bundle and a Lagrange-multiplier 2-form λ valued in h*.
It is worth discussing gauge transformations in the abelian case for 2-bundles. The gauge field A has its ordinary gauge transformations, which don't affect B, but B can be shifted by a connection α on an arbitrary principal G-bundle. This gauge transformation acts as B → B + dα and A → A + t_*(α). Note that the topological class of the A gauge bundle can change under this transformation. For more on gauge transformations, as well as the local description for the non-abelian case, see [2].
The action for the confining theory is S = ∫ λ ∧ (dA − t_*(B)). This action is also manifestly topological. The expression wedging λ is something like a covariant derivative of A with respect to B. The constraint dA = t_*(B) implies a sort of flatness for B, where B's holonomies around surfaces depend only on their homology class. For the abelian case this just means dB = 0, but it is more complicated in general. It also forces these holonomies to be in ker t = Γ. Indeed, in [4] it is shown that this theory has a formulation depending only on Γ. This TQFT has unscreened 't Hooft loops with charges in Γ as well as unconfined electric surface operators with charges in Γ̂. For more on this TQFT's relation to confining phases, we refer the reader to [2] and [4].
Note that a theory with this field content and action can also be defined if t is not surjective but instead has finite cokernel. A detailed analysis of this case is forthcoming in [5].
The result, proven in [4] using lattice methods, which we explore in the continuous regime via compactification, is that the Higgs theory defined by the data 1 → Γ → G → H → 1 is dual to the confining theory associated to the dual data 1 → Γ̂ → Ĥ → Ĝ → 1, where Ĝ and Ĥ denote the Langlands-dual groups to G and H, respectively. This is an instance of electric-magnetic duality. This duality does not hold in general when there are theta-angles; see [4].
We analyze these theories by compactifying them. This takes some explanation, since they by nature don't depend on the metric. What we mean by compactification is that we consider quantities integrated over the compactified directions local in the lower dimensional effective theory. For more on this technique, see [3].
The paper is organized as follows. First we give a path integral argument for the duality in the abelian case. Then we use compactification to determine the partition function, vector space assigned to a 3-manifold, and category assigned to a Riemann surface. We conclude with a discussion of the duality from this perspective.
The author would like to thank Anton Kapustin, Alex Rasmussen, and Brian Swingle for useful discussions. This work is based on the author's SURF project in the Summer of 2012, and the author would like to thank the California Institute of Technology for the funding.
Path Integral Proof of Duality in the Abelian Case
In the abelian case, there is a path integral argument that the two theories are dual. It suffices to consider the case G = H = U(1) and Γ = Z_n. The map t is given by multiplication by n. We follow [7]. One proceeds by introducing an auxiliary H gauge field C to the Higgs theory and adding a Lagrange multiplier B setting C to be trivial modulo gauge transformations, so that the Lagrangian is modified to λ(dφ + C − nA) + B dC. The 2-form field B needs to be more general than a global 2-form for it to kill the holonomies of C around non-trivial cycles. We can integrate it out to obtain the Lagrangian for the Higgs theory. Instead of integrating out B, however, we can gauge φ to zero and integrate out λ to obtain the Lagrangian nBdA.
Dualizing A in a similar manner, we arrive at the Lagrangian of the confining theory, proving that the abelian theories are dual.
Note that in the non-abelian case, it is not clear how to make sense of terms such as BdA, but this is a convincing argument for duality in general, since it is shown in [4] that the theories only depend on Γ anyway, so we can always use an abelian model. Also, non-trivial theta-angles do not always allow the fields to be integrated out.
Codimensions 0,1, and 2 for the Higgs Theory
The theories we've discussed so far are presented by Lagrangians, but since they are topological, there is another perspective one can take which is due to Atiyah [1]. That is, these TQFTs should assign numbers to closed 4-manifolds, vector spaces to 3-manifolds, categories to 2-manifolds, and increasingly complicated algebraic objects to closed manifolds of higher codimension. These should all be compatible in the sense that this defines a functor from a certain 4-category of cobordisms to a C-linear 4-category of vector spaces.
To summarize, the functor Z_Higgs sends a closed 4-manifold Σ_4 to |Γ|^{b_1(Σ_4) − 1} (3), a closed 3-manifold Σ_3 to C[Hom(H_1(Σ_3), Γ)] (4), and a Riemann surface Σ_2 to a disjoint union of copies of Rep_Γ labeled by H^1(Σ_2, Γ). Throughout, the notation C[A] will mean the vector space with basis elements corresponding to elements of the set A. The rest of the data describe how the mapping class group acts in each case. For codimension 1, it acts through its action on the homology. In codimension 2 it is less clear, as we discuss at the end.
First, we calculate the partition function on a closed 4-manifold Σ_4. We use the Lorentz gauge d⋆A = 0. The action then gets a new set of gauge-fixing terms, in which χ_0 is a 0-form Lagrange multiplier field valued in g*. Since the induced map t_* : g → h is an isomorphism, one can change variables, instead integrating over the h-valued forms t_*A. One should also change variables and integrate over (t_*^†)^{-1}χ_0, where t_*^† : h* → g* is the adjoint. We're going to need to take determinants of these maps, so we should use compatible Killing forms on G and H and use these to identify g with g* and h with h*. Then it is easy to check that the determinants are simply the number of sheets of the cover: det t_* = |Γ|, det (t_*^†)^{-1} = det t_*^{-1} = |Γ|^{-1}. The resulting action is that for the theory with G = H and t = id. The only contribution is thus from how the path integral measure transforms under this change of variables. Using the determinants calculated above, we have DA = |Γ|^{b_1 − B_1} D(t_*A) and Dχ_0 = |Γ|^{B_0 − b_0} D(t_*^†χ_0), where b_k is the kth Betti number and B_k is the "number" of k-forms. The Betti numbers appear because they are the dimension of the space of harmonic forms, which are the zero modes for this action, and we must remove them when considering the determinants in the path integral measure. The B_k give cutoff-dependent terms, so we may discard them. Noting b_0 = 1 for connected Σ_4, we thus obtain the result (3). The cut-off dependent terms are discussed in greater detail in a similar context in [7].
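Collecting these factors into one line (a consolidation of the computation above, with the cutoff-dependent B_k pieces discarded and b_0 = 1 for connected Σ_4):

$$ Z_{\mathrm{Higgs}}(\Sigma_4) \;=\; |\Gamma|^{\,b_1 - B_1}\cdot|\Gamma|^{\,B_0 - b_0}\;\longrightarrow\;|\Gamma|^{\,b_1 - b_0}\;=\;|\Gamma|^{\,b_1(\Sigma_4)-1}. $$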
To calculate the Hilbert space assigned to a closed 3-manifold Σ 3 , we compactify the theory on Σ 3 to obtain a 0+1 dimensional theory, whose Hilbert space is that which we seek. This theory has only finitely many configurations, so this space will be a sum of 1-dimensional vector spaces for each one, which are in bijection with Hom(H 1 (Σ 3 ), Γ). We thus obtain the result (4).
The other important datum in codimension 1 is the action of the mapping class group of Σ 3 . This has an evident action on the space of configurations via its action on H 1 (Σ 3 ), and there is no source of phase factors in the data of this theory to complicate this action, so this must be it.
We can check the action of the mapping class group in a special case as well. Consider Σ 3 a 3-torus, and glue Σ 4 from Σ 3 by exchanging the longitude and latitude of a 2-torus factor of Σ 3 . The result is Σ 4 = S 1 × X 3 , where X 3 is the 3-sphere with the two components of the Hopf link identified. This space has H 1 (X 3 ) = Z corresponding to a segment from one component to the other, so b 1 (Σ 4 ) − 1 = 1. Meanwhile, the trace of the corresponding operator on the Hilbert space is |Γ|, so we see that the two calculations agree.
To calculate the category assigned to the Riemann surface Σ_2, consider another Riemann surface X and Σ_4 = Σ_2 × X. This 4-manifold is necessarily torsion-free. This allows us to write A = A^{1,0} + A^{0,1}, with A^{1,0} locally a 1-form along Σ_2 and a 0-form along X, and A^{0,1} a 0-form along Σ_2 and locally a 1-form along X, and likewise for the Lagrange multiplier λ. We use this notation throughout. The Lagrangian then splits into terms involving d_2, the covariant derivative along Σ_2, and d_X, that along X. Integrating over Σ_2, we see that the resulting theory is a direct sum of Higgs theories on X labeled by background Γ-connections on Σ_2, i.e., by Hom(H_1(Σ_2), Γ) = H^1(Σ_2, Γ).
Boundary conditions for the 2d Higgs theory are given by Wilson lines, so for each of these theories the category is Rep Γ , the category of complex representations of Γ. The whole category is the disjoint union of these for each direct summand.
We can check this answer with the previous analysis. It is easily seen that the compactification gives the correct Hilbert space and partition function for Σ 2 × S 1 and Σ 2 × S 1 × S 1 , respectively.
We can also see this directly from the category. The Hilbert space assigned to Σ_2 × S^1 should be the "dimension" of this category, the vector space of natural transformations from the identity functor to itself. Each summand Rep_Γ contributes a |Γ|-dimensional space of such natural transformations, one for each irreducible representation, so the total dimension is |Γ|^{b_1(Σ_2)+1} = |Hom(H_1(Σ_2 × S^1), Γ)|, in agreement with (4).
Codimensions 0,1, and 2 for the Confining Theory
We perform the same analysis for the confining theory. To summarize, we find

Z_conf(Σ_4) = |Γ|^(b_2(Σ_4) − b_1(Σ_4) + 1),   (6)
Z_conf(Σ_3) = C[H^2(Σ_3, Γ)],   (7)
Z_conf(Σ_2) = ⊕_Γ (Rep_Γ)^{⊠ b_1(Σ_2)},   (8)

with the action of the mapping class group in codimension 1 acting through its action on cohomology. As with the Higgs theory, the action is unclear in codimension 2. We discuss this at the end.
To calculate the partition function on a closed 4-manifold Σ_4, we use the Lorentz gauge d ⋆ B = 0. This requires a two-stage BRST action, in which the g*-valued Lagrange multiplier π_1 needs to be gauge fixed as well, introducing a further multiplier E_0. We perform a change of variables as before, instead integrating over t_* B, t_*^{†−1} π_1, and t_* E_0. The resulting action is that for a confining theory with G = H and t = id, which describes a trivial theory, so again the only contribution is from the change of measure in the path integral. Once again, zero modes are harmonic forms, so the path integral measure transforms by |Γ|^{b_2(Σ_4) − b_1(Σ_4) + 1}, up to some cutoff-dependent terms that we discard. Since the remaining integral is the partition function of a trivial theory, we can normalize it to be just 1. This gives the answer quoted above in (6).
The Hilbert space assigned to a closed 3-manifold Σ 3 will again be a sum of 1-dimensional vector spaces for each of the finitely many vacuum configurations on Σ 3 × R, which is homotopy equivalent to Σ 3 .
There is some subtlety to these configurations when Σ_3 has torsion. To see this, first consider the abelian case: H = G = U(1), t is multiplication by n, and so Γ = Z_n. The 1-form gauge transformations are parametrized by an arbitrary U(1) connection ξ.
The equations of motion imply that B defines a homomorphism H 2 (Σ 3 ) → Z n . This homomorphism determines B up to gauge transformations.
It remains to see what data A contributes to the configuration. Once we fix a representative for B, the remaining gauge transformations are those determined by flat connections ξ. These are determined by their holonomy morphism H_1(Σ_3) → U(1). We see that we can use this to cancel all but the n-torsion holonomy of A. Thus, the vacuum data sits in a short exact sequence {A data of n-torsion holonomy} → {vacuum data} → {B holonomy data}.
In fact, this is the universal coefficient sequence, so the vacuum configurations are cohomology classes in H^2(Σ_3, Z_n).
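For reference, the universal coefficient sequence being invoked here (a standard fact, stated with Z_n coefficients) reads
\[
0 \to \operatorname{Ext}^1\bigl(H_1(\Sigma_3), \mathbb{Z}_n\bigr) \to H^2(\Sigma_3, \mathbb{Z}_n) \to \operatorname{Hom}\bigl(H_2(\Sigma_3), \mathbb{Z}_n\bigr) \to 0 ,
\]
with the B holonomy data landing in the Hom term and the residual torsion holonomy of A accounting for the Ext term.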
The non-abelian case is harder to consider this way because the gauge transformations are much more involved. However, as explained in [4], the answer only depends on Γ. Thus we obtain the Hilbert space Z_conf(Σ_3) = C[H^2(Σ_3, Γ)]. It is easy to check that the dimension of this space equals the partition function Z_conf(Σ_3 × S^1).
The mapping class group of Σ 3 permutes these configurations via its action on the cohomology group, and there is no source of phases that could complicate this action, so it acts by permutation also on the Hilbert space.
To go to codimension 2, we will use the abelian notation for simplicity, but, as before, the same results hold in the non-abelian case.
On Σ_4 = Σ_2 × X, as in the Higgs case, we can split the fields and the Lagrangian accordingly. A connection α = α^{1,0} + α^{0,1} defines a gauge transformation, and likewise for B^{0,2} and A^{0,1}.
For any 1-cycle γ in Σ_2, we obtain a gauge field C_γ on X by integrating B^{1,1} over γ. Under a gauge transformation, it becomes C_γ + d_X ∫_γ α^{1,0}; the integral on the right is a scalar field on X, so this is indeed an ordinary gauge transformation. Though these fields are nonlocal in the 4d theory, we consider them local in the effective 2d theory on X obtained by compactifying on Σ_2. We will use them to define boundary conditions for this theory.
Consider a 2-chain Λ in Σ_2 with ∂Λ = γ − γ′. Then C_γ − C_γ′ = ∫_{∂Λ} B^{1,1} = ∫_Λ d_2 B^{1,1}. The equations of motion imply dB = 0 and hence d_2 B^{1,1} = −d_X B^{2,0}, so on-shell C_γ − C_γ′ = −d_X ∫_Λ B^{2,0}, which is a gauge transformation. Thus, if we want to consider boundary conditions, we can consider γ to be a homology class and take C_γ defined with respect to some representative. The middle term in the action above sets C_γ to be flat and to have holonomy in Z_n. Identifying C_γ and C_γ′ for homologous 1-cycles γ and γ′, we thus obtain b_1(Σ_2) copies of the 2d Higgs theory. The rest of the action is composed of a 2d confining theory (which is in fact trivial) and a background Z_n flux through Σ_2.
The partition function as calculated by this decomposition is in agreement with equation (6). For X = S 1 × R, there are n sectors for the background flux through Σ 2 , in which the b 1 (Σ 2 ) gauge fields each contribute n vacua, while the B field on X doesn't contribute any nontrivial configurations since b 2 (X) = 0. Thus, the Hilbert space is also in agreement with equation (7).
Only the gauge fields can be used to define boundary conditions for the effective 2d theory on X. These are given by Wilson loops. The category of boundary conditions is thus a sum over the n background flux sectors of (Rep_{Z_n})^{⊠ b_1(Σ_2)}, where the tensor product denotes the Deligne product. For general Γ we have the result in equation (8). Just as in the Higgs case, one can verify that this category has the right dimension.
Duality and Concluding Remarks
From the results argued above, we see some superficial disagreement between the two theories. For one, the partition functions (3) and (6) differ by a factor |Γ|^{χ(Σ_4)}. However, such a factor can be obtained by adding a curvature-dependent term to the action, and topological theories should be considered only modulo such terms.
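As a quick consistency check on this factor (assuming Σ_4 is closed, connected, and oriented, so that b_4 = b_0 = 1 and b_3 = b_1 by Poincaré duality), the difference of the exponents in (6) and (3) is
\[
(b_2 - b_1 + 1) - (b_1 - 1) = b_0 - b_1 + b_2 - b_3 + b_4 = \chi(\Sigma_4).
\]
For example, on T^4 both exponents equal 3 and χ(T^4) = 0.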
The Hilbert spaces (4) and (7) for Γ and its Pontryagin dual are isomorphic by Poincaré duality. In detail, H^2(Σ_3, Γ) is naturally isomorphic to Hom(H_1(Σ_3), Γ), and the action of the mapping class group on the Hilbert spaces is via its action on these groups.
The question of how the categories (5) and (8) are isomorphic is more subtle. They have the correct dimension, but it is unclear how the mapping class group could act on them. While a cohomology group appears in (5), for the confining theory there is no obvious topological index that the mapping class group could permute. We leave the solution to this puzzle to later work and perhaps to a different approach.
Another curious aspect of the duality is that in compactifying we see certain degrees of freedom become background fluxes, while others remain dynamical data. For instance, in the 2d compactification of the confining theory, we saw a background B-flux and dynamical gauge fields, while the 2d compactification of the Higgs theory had background electric fluxes and a single dynamical gauge field. The dynamical and background data appear to be interchanged by the duality map. This is a manifestation of the strong-weak coupling exchange.
On the role of theory and modeling in neuroscience
In recent years, the field of neuroscience has gone through rapid experimental advances and a significant increase in the use of quantitative and computational methods. This growth has created a need for clearer analyses of the theory and modeling approaches used in the field. This issue is particularly complex in neuroscience because the field studies phenomena across a wide range of scales and often requires consideration of these phenomena at varying degrees of abstraction, from precise biophysical interactions to the computations they implement. We argue that a pragmatic perspective of science, in which descriptive, mechanistic, and normative approaches each play a distinct role in defining and bridging levels of abstraction will facilitate neuroscientific practice. This analysis leads to methodological suggestions, including selecting a level of abstraction that is appropriate for a given problem, identifying transfer functions to connect models and data, and the use of models themselves as a form of experiment.
Introduction
Recent technological advances in neuroscience have prompted the growth of new experimental approaches and subfields that investigate phenomena from single neurons to social behavior. However, rapid growth has also revealed a need to develop new theoretical frameworks (Phillips 2015) that integrate the growing quantities of data and to establish relationships between their underlying processes. While neuroscience has a strong history of interactions between experimental and theoretical approaches (Marr 1991;Hodgkin and Huxley 1952;O'Keefe and Nadel 1978), there is still disagreement as to the nature of theory and its role in neuroscience, including how it should be developed, used, and evaluated by the community (Goldstein 2018;Bialek 2018).
We argue that an idealized view of scientific progress, in which science is a problem-solving enterprise that strives to explain phenomena, is well-suited to inform scientific practice. In neuroscience, the phenomena of interest are those that pertain to neurons, the nervous system, and its contribution to cognition and behavior. Because these phenomena span a wide range of spatiotemporal scales, their explanations often require a "multi-level" approach that combines data from dramatically different modalities. Descriptive, mechanistic, and normative explanations each play distinct roles in building a multi-level account of neural phenomena: descriptive explanations delineate an abstract characterization of a phenomenon, while mechanistic and normative explanations bridge abstractions of different levels. Collectively, these operations unify scientific theories across disparate experimental approaches and fields. We show how this view facilitates the bidirectional interaction between theory and experimentation as well as theory development.
What is a theory and what is it good for?
Theories are the primary tools by which scientists make sense of observations and make predictions. Given this central role, it is surprising how little methodological attention is given in scientific training to the general nature of theories. Traditional descriptions of science tend to be based on the processes of theory identification and falsification, in which theories are proposed as universal truths about the world, tested, provisionally accepted if found to be compatible with experimental data, and rejected when found to be incompatible (Popper 1959). According to these traditional descriptions, when theories are incompatible with experimental data, the conceptual framework on which they are based is called into question and a new framework is found that can better account for the data (Popper 1959;Kuhn 2012;Lakatos 1980). However, historical, philosophical, and sociological analyses argue that these views do not account for how theory is used in practice (Lakatos 1980;Firestein 2015;Godfrey-Smith 2003;Feyerabend 1993;Ben-Ari 2011;Kaiser 2014;Laplane et al. 2019). For example, theories are rarely, if ever, decisively testable; scientists can have a variety of attitudes towards a theory rather than simply accepting or rejecting it (Lakatos 1978;van Fraassen 1980;Mermin 1989;Ben-Ari 2011;Kaiser 2014); and although new discoveries can provide answers to open questions, the new questions they prompt may be more consequential (Firestein 2012).
A pragmatic view: science as problem-solving
We propose that a pragmatic view of the scientific enterprise (James 1907;Ben-Ari 2011;Laudan 1978;Douglas 2014) is better suited to inform scientific practice. In this view, science is a process through which we solve empirical problems and answer questions about observable phenomena (Laudan 1978;Douglas 2014;Firestein 2015;Haig 1987;Nickles 1981). Empirical problems can range from matters of basic scientific interest (for example, "How does the brain process visual signals?" or "How does an animal select between alternative choices?"), to those with more obvious applications (such as "Which brain functions are disrupted in schizophrenia?"). Like any other problem, a scientific problem can be seen as a search to achieve a desired goal, which is specified by the statement of the problem (Newell and Simon 1972). However, scientific problems are often ill-defined (Bechtel and Richardson 2010), in part because the search space and solution criteria are not always explicitly stated and in part because they evolve with additional discoveries (Firestein 2012). For example, the discovery of multiple memory and decision-making systems raises further questions of how those systems interact (Balleine and Dickinson 1998;O'Keefe and Nadel 1978;Daw, Niv, and Dayan 2005;Scoville and Milner 1957;Squire 1987;Nadel 1994;Schacter 2001), while the question "How does the pineal gland generate consciousness?" (Descartes 1637) is now considered outdated. Further, scientific problems are never definitively solved, but are only deemed "adequately solved" by a research community. What is seen as an adequate solution in one socio-historical context may not be in another, as new data become available, standards change, or alternative solutions are presented. While a continuously evolving landscape of problems and proposed solutions might seem to counter a notion of progress in science, scientific theories have been used to explain and control progressively more phenomena over the course of the scientific record (Laudan 1978;Douglas 2014). According to the pragmatic view, this progress results from community-maintained standards of explanation, under an overarching drive to better predict and control natural phenomena of potential relevance to society (Hacking 1983;Douglas 2014).
We can thus define a scientific explanation (Hempel and Oppenheim 1948; Woodward 2019) as a proposed solution to an empirical problem, and scientific theories to be the ideas we use to form explanations. Where traditional views have tried to specify the form theories take, the pragmatic view sees theory structure as closely tied to its function and context. As a result, a theory can include a wide and complex range of structural elements, including those that are not formalized (Winther 2021). While theories may be spelled out in the scientific literature, they are more often used implicitly in the explanation of phenomena and design of experiments. By shifting theories from "proposals of truth to be falsified" to "proposed problem-solving tools", the pragmatic view prompts us to assess a theory by its utility: what empirical problems it can solve, how easily it can be used to solve them, and how good its solutions are. It also requires criteria to evaluate the quality of solutions to a problem and a set of standards by which we measure the utility of the theory such as accuracy, simplicity, falsifiability, generalizability, and reproducibility (Chang 2007b, 2011;van Fraassen 1980;Laudan 1978;Schindler 2018). Through competition to solve empirical problems, theories become more precise, provide clearer and more concise explanations, can be used to make more reliable and accurate predictions, and can be applied to larger domains.
Conceptual frameworks provide constructs and constraints
Assessing scientific explanations inevitably involves considerations that are not directly related to solution quality, but are instead constraints on the form solutions can take. These constraints constitute a conceptual framework (Table 1): a language within which explanations are proposed. In effect, a conceptual framework is a set of foundational theories that provide a conceptual structure on which further theories within that program are built (Lakatos 1978;Laudan 1978;Kuhn 2012).
The stability of such a framework allows its component theories to change without rebuilding their conceptual foundations. For example, under the modern framework of neuropsychiatry, psychiatric disorders are framed in terms of biophysical dysfunctions in neural structure. Current debates about the underpinnings of schizophrenia include hypotheses of dysfunction within dopaminergic or glutamatergic systems, dysfunctional pruning of dendrites, and dysfunctional oscillatory dynamics (Moghaddam and Javitt 2012;Glausier and Lewis 2013;Uhlhaas and Singer 2015;Howes et al. 2017). However, they all lie within a general framework of biophysical changes in neural processes. The consistency of this founding idea allows us to modify theories without disrupting the foundational premise, which allows them to be directly compared and contrasted.
While explanations are naturally comparable within a framework, theories under different frameworks are composed of fundamentally different objects and describe the world in different terms, which makes them difficult to compare. For example, explanations under the traditional psychoanalytic framework (Luyten et al. 2015) are fundamentally different from those under the modern neuropsychiatry framework (World Health Organization 2021; American Psychiatric Association 2013; Cuthbert and Insel 2013;Insel and Cuthbert 2015). The two frameworks are composed of fundamentally different objects and are described in different terms: in contrast to the neuropsychiatric framework, psychoanalytic explanations for schizophrenia invoke unconscious conflicts and/or distorted ego functions as the key factors underlying psychosis (Luyten et al. 2015). Even the categorizations of psychiatric phenomena are different under these frameworks, making direct comparisons of explanations for the same phenomena across frameworks difficult (Feyerabend 1993).
Despite the difficulties in directly comparing theories across frameworks, not all frameworks are equivalent. One can compare conceptual frameworks by asking how well their theories allow us to predict and control our environment (Lakatos 1978). This is not to say that all research requires a direct application, but rather that consideration of practical components is necessary for a complete understanding of scientific progress (Laudan 1978;Douglas 2014). For example, the psychoanalytic framework implies treatment based on analytic therapy, while the modern neuropsychiatry framework suggests medication as a key component. Furthermore, under the new framework known as computational psychiatry, psychiatric disorders are attributed to computational "vulnerabilities" in the systems architecture of the brain (A. D. Redish 2004;A. D. Redish, Jensen, and Johnson 2008;Montague et al. 2012;A. D. Redish and Gordon 2016;Huys, Maia, and Frank 2016). Theories in this new framework suggest that such disorders would be treatable by changing information processing: by modifying the physical substrate (e.g. through electrical stimulation or pharmacological changes), enhancing compensation processes (e.g. through cognitive training), or changing the environment (e.g. by giving a student with ADHD extra time on a test). The pragmatic view suggests that the ultimate adoption (or not) of this framework will come down to how successfully it can be applied to unsolved problems.
Models as the interface between theories and phenomena
While "theoretical" work may appear further from "applied" science than its experimental counterpart, models can act as an interface between theory and phenomena. A model consists of a structure and an interpretation of how that structure relates to its target phenomena ( (Frigg and Hartmann 2006), also known as the model's "construal" (Weisberg 2013)). For example, the equation / is a mathematical structure that is interpreted to represent the temporal dynamics of the membrane potential, , of a passive cell with time constant, , and resting potential, (Hille 2001;Hodgkin and Huxley 1952;Koch and Segev 1989;Rall 1992;Gerstner et al. 2014). Models whose structure consists of mathematical equations or computational processes are amenable to simulation and analytical treatment. Models can be constructed from many different kinds of interpreted structures, such as physical structures that are interpreted to represent the double helix of DNA (Watson and Crick 1953) or diagrammatic structures that are interpreted to represent protein interactions involved in signaling cascades (Alon 2006). Many "animal models" used in experimental neuroscience are physical structures interpreted to represent other phenomena, such as the 6-OHDA rat or the MPTP monkey, which are interpreted to represent the pathology of Parkinson's disease (Dorval and Grill 2014;Schultz et al. 1989).
In creating a model, a researcher has to make foundational assumptions in the terms they use, the form those terms take, and the relationships between them. These assumptions instantiate aspects of a theory in an explicit expression with a well-defined form. The voltage equation above instantiates the theory that a neuron's electrical properties arise from a semipermeable membrane (Hodgkin and Huxley 1952;Rall 1992), while the 6-OHDA model instantiates the theory that Parkinson's disease arises from dopaminergic dysfunction (Langston and Palfreman 2013). This explicit formulation of theories can force us to confront hidden assumptions (Marder 2000), and provide useful insights for the design of experiments or potential engineering applications.
Further, in selecting some aspects of a phenomenon to include, and others to ignore, creating a model abstracts a multi-faceted phenomenon into a concise, but inevitably simplified, representation. Thus, models simultaneously act as an instantiation of a theory and an abstraction of a phenomenon (Rosenblueth and Wiener 1945;Stafford 2009). This dual role of models is the foundation of their use in explanation (Cartwright 1997).
Descriptive, mechanistic, and normative explanation
The terms "descriptive", "mechanistic", and "normative" are widely used in neuroscience to describe various models. A pragmatic view prompts us to consider how these terms relate to the type of problem they are used to solve (Kording et al. 2020). In doing so, we find that these labels correspond to three different explanatory approaches in neuroscience, which are used to solve three different types of problems: "what" problems, "how" problems, and "why" problems (Dayan and Abbott 2001). (See Figure 1).
Descriptive explanations
The first problem often encountered in scientific research is: What is the phenomenon? Phenomena are not divided into discrete entities a priori, but instead appear as a continuous multifaceted stream with many possible methods of observation and many aspects that could be observed. Thus, the set of characteristics that define a phenomenon are often unclear. This problem is addressed with a descriptive explanation (David M. Kaplan and Bechtel 2011). For example, to explain the spikes observed from a hippocampal neuron we could use a theory of "place cells" (O'Keefe and Nadel 1978): a collection of ideas that defines the relationship between neural activity in the hippocampus and an animal's position in an environment, which can be instantiated in a model that specifies that relationship in an equation (O'Keefe and Nadel 1978;A. D. Redish 1999;Laura Lee Colgin 2020). Descriptive models are founded on basic assumptions of which variables to observe and how to relate them. At its heart, a descriptive explanation is simply a selective account of phenomenological data; indeed descriptive models are often called phenomenological models (Craver 2007;David Michael Kaplan 2011) or, when they are well-established, phenomenological laws (Cartwright 1997).
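One common way to write such a descriptive model as an equation (an illustrative formalization, not necessarily the specific one used by the cited authors) is a Gaussian tuning curve,
\[
f(\mathbf{x}) = r_{\max}\, \exp\!\left( -\frac{\lVert \mathbf{x} - \mathbf{x}_c \rVert^2}{2\sigma^2} \right),
\]
which relates the cell's firing rate f to the animal's position x through three descriptive parameters: the field center x_c, the field width σ, and the peak rate r_max.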
Mechanistic explanations
After addressing the "what" question, one might ask: How does the phenomenon arise? This problem is addressed with a mechanistic explanation, which explains a phenomenon in terms of its component parts and their interactions (Machamer, Darden, and Craver 2000;Craver 2007;Bechtel and Richardson 2010). For example, to explain the activity of place cells, we can create explanations based on afferent information from other structures, internal connectivity patterns, and intra-neuronal processing, which can be instantiated in a model that specifies how they interact to produce neural firing (A. D. Redish 1999;Hartley et al. 2000;Barry et al. 2006;Solstad, Moser, and Einevoll 2006;Fuhs and Touretzky 2006;Giocomo, Moser, and Moser 2011;Sanders et al. 2015). A mechanistic model is founded on an assumption of which parts and processes are relevant, and illustrates how their interaction can produce a phenomenon or, equivalently, how the phenomenon can emerge from these parts. Often these parts are considered to be causally relevant to the phenomenon, and a mechanistic explanation is often also referred to as a causal explanation (Machamer, Darden, and Craver 2000;Craver 2007;Bechtel and Richardson 2010).
Mathematical mechanistic models in neuroscience often take the form of a dynamical system (Koch and Segev 1989;Ellner and Guckenheimer 2006;Izhikevich 2007;Ermentrout and Terman 2010;Gabbiani and Cox 2017;Gerstner et al. 2014;Börgers 2017), in which a set of variables represent the temporal evolution of component processes or their equilibrium conditions. For example, the classic Hodgkin-Huxley model uses a set of four coupled differential equations to represent the dynamics of membrane potential and voltage-dependent conductances, and shows how an action potential can emerge from their interaction by producing a precise prediction of the progression of the membrane potential in time (Hodgkin and Huxley 1952). However, qualitative mechanistic models, in which complex processes are summarized in schematic or conceptual structures that represent general properties of components and their interactions, are also commonly used in neuroscience. For example, Hebb considered a conceptualization of neural processing in which coincident firing of synaptically connected neurons strengthened the coupling between them. From this model, Hebb was able to propose how memories could be retrieved by the completion of partial patterns and how these processes could emerge from synaptic plasticity, as cells that were coactive during a particular stimulus or event would form assemblies with the ability to complete partially-activated patterns (Hebb 1949).
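Returning to the quantitative example, the four coupled Hodgkin-Huxley equations mentioned above take the familiar form
\[
C_m \frac{dV}{dt} = -\bar g_{\mathrm{Na}}\, m^3 h\,(V - E_{\mathrm{Na}}) - \bar g_{\mathrm{K}}\, n^4 (V - E_{\mathrm{K}}) - \bar g_{L}(V - E_{L}) + I_{\mathrm{ext}}, \qquad
\frac{dx}{dt} = \alpha_x(V)(1 - x) - \beta_x(V)\,x, \quad x \in \{m, h, n\},
\]
where the voltage-dependent rate functions α_x and β_x are empirical fits to voltage-clamp data.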
Mechanistic models represent the (assumed) underlying processes that produce the phenomenon (Craver 2007;David M. Kaplan and Bechtel 2011). They can be used to make predictions about situations where the same processes are presumed to operate (Ellner and Guckenheimer 2006). This includes the effects of manipulations to component parts, and circumstances beyond the scope of data used to calibrate the model.
Normative explanations
In addition to the mechanistic question of "how", we can also ask the question: Why does the phenomenon exist? This kind of problem is addressed with a normative explanation, which is used to explain a phenomenon in terms of its function (Barlow 1961;Kording, Tenenbaum, and Shadmehr 2007;Bialek 2012). A normative explanation of place cells would appeal to an animal's need to accurately encode its location, and could instantiate that need in a model of a navigation task (O'Keefe and Nadel 1978;A. D. Redish 1999;McNaughton and Nadel 1990;Zilli and Hasselmo 2008). Appealing to a system's function serves as a guiding concept that can be a powerful heuristic to explain its behavior based on what it ought to do to perform its function (Dennett 1989). This kind of explanation has a long history in the form of teleological explanation, which explains a thing by its "purpose" (Aristotle, n.d.), and is often used implicitly in biological sciences; for example, stating that the visual system is "for" processing visual information. In neuroscience, functions often come in the form of cognitive, computational, or behavioral goals.
When quantified, normative models formalize the goal of the phenomenon in an objective function (also known as a utility or cost function), which defines what it means for a system to perform "well". These models are founded on an assumed statement of a goal and the constraints under which the system operates. For example, models of retinal function formalize the goal of visual processing using equations that represent the ability to reconstruct a sensory signal from neural responses, under the constraints of sensory degradation and a limited number of noisy neurons (Rieke et al. 1997;Field and Rieke 2002;Doi and Lewicki 2014). Such an approach also relies on an assumption of an underlying optimization process. This assumption is often justified by appealing to evolution, which might be expected to optimize systems (Parker and Smith 1990;Barlow 1961;Bialek and Setayeshgar 2008). However, evolution does not guarantee optimality due to limitations of genetic search (Gould 1983;Gould and Lewontin 1979). Moreover, there are numerous processes in physical, biological, neurological, and social systems that can drive phenomena towards a state that maximizes or minimizes some objective function; however, these processes each also have their own unique limitations. For example, physical processes that minimize surface-to-volume ratio create hexagonal tessellations in beehives, but this process is limited by the physical properties of construction (Thompson 1992;Smith, Napp, and Petersen 2021). Economic markets might be expected to optimize the balance between offer and selling price, but are limited by imperfect and unbalanced information and the limited decision-making abilities of agents (Kahneman, Knetsch, and Thaler 1991;Gigerenzer and Gaissmaier 2011;Fox 2009;Shleifer 2000;Akerlof 1978). Similarly, supervised learning might be expected to optimize object discrimination, but its implementation in the brain would be limited by constraints such as synaptic locality and the availability of credit signals and training data (Hunt et al. 2021;Hamrick et al. 2020;Häusser and Mel 2003;Richards et al. 2019;Takeuchi, Duszkiewicz, and Morris 2014;McNaughton, Douglas, and Goddard 1978). Where each of these processes might be expected to bring systems toward an optimal solution, the constraints under which they operate may themselves impose distinct signatures on the systems they optimize.
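Schematically, and as a generic template rather than any particular published model, a quantified normative model takes the form of a constrained optimization,
\[
\theta^{*} = \arg\max_{\theta}\; \mathcal{U}(\theta) \quad \text{subject to} \quad c_i(\theta) \le 0,
\]
where the objective U encodes the assumed goal (for instance, the fidelity of stimulus reconstruction in an efficient-coding model) and the constraints c_i encode limits such as noise, metabolic cost, or a fixed number of neurons; the observed system is then compared against the optimum θ*.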
The descriptive / mechanistic / normative classification depends on context
Theories and models do not exist in isolation, but are embedded in scientific practice. As the descriptive/mechanistic/normative categorization reflects the problem being solved, it can be applied to both theories and models depending on the context, i.e., the kind of explanation, in which they are being used. In general, this categorization is independent of whether an explanation is accepted by the scientific community. For instance, a mechanistic explanation does not cease to be mechanistic if it is not adopted, e.g., because some of its predictions are not experimentally corroborated. Further, models with the same structure can be used for different purposes, and can thus be assigned to a different category in different contexts. For example, the integrate-and-fire model can be used as a descriptive model for membrane potential dynamics, or as a mechanistic model for the neuronal input-output transformation; and while the Hodgkin-Huxley model was discussed above as a mechanistic model for the problem of spike generation, it was originally proposed to be "an empirical description of the time course of the changes in permeability to sodium and potassium" (Hodgkin and Huxley 1952). In fact, theories often start as an effort to solve one class of problem, and over time develop aspects to address related problems of different classes, resulting in a theory with descriptive, mechanistic, and normative aspects.
Levels of abstraction
In selecting some aspects of a phenomenon to include, and others to ignore, a model abstracts a multifaceted phenomenon into a more concise, but inevitably simplified, representation. That is, in making a model we replace a part of the universe with a simpler structure with arguably similar properties (Rosenblueth and Wiener 1945;Weisberg 2013). It could be argued that abstraction is detrimental to model accuracy (i.e. that "The best material model for a cat is another, or preferably the same cat"), and is only necessary in light of practical and cognitive limitations (Rosenblueth and Wiener 1945). However, abstraction is important in scientific practice, and its role extends beyond addressing those limitations (Potochnik 2017).
Descriptive models define abstractions at different levels
Abstraction is most obvious when we consider the construction of descriptive explanations. First, abstractions are made when researchers decide which aspects of a phenomenon not to include. For example, the cable equation, which describes the relationship between axonal conductance and membrane potential (Rinzel and Ermentrout 1989;Rall 1992;Gerstner et al. 2014), does not include details about intracellular organelles, the dynamics of individual ion channels, or the impact of nearby neurons on the extracellular potential. Importantly, these models do not include many larger-scale effects (such as the neuron's embedding in a circuit, or the social dynamics of the agent) or smaller-scale factors (Vinogradov, Hamid, and Redish 2022). The process of abstraction thus applies to both phenomena at smaller scales (organelles) and at larger scales (social interactions of the agent) that are hypothesized to be unnecessary to address the question at hand. Each of these factors is abstracted away, leaving only the features chosen to be represented in a model's structure.
Second, the aspects that are included must be represented in an idealized form. For instance, ionic flux through the cell membrane is not a strictly linear current function of voltage and conductance, but we often idealize it as such for tractability (Rall 1992;Koch and Segev 1989;Hille 2001). These idealizations are assumptions about a phenomenon which are, strictly speaking, false, but are used because they serve some purpose in creating the model (Potochnik 2017).
Classic accounts of neuroscience emphasize analysis at different levels of abstraction (Craver 2007;Marr 1982;Shepherd 1994;Sejnowski, Koch, and Churchland 1988;Wimsatt 1976) (Box 1). However, despite the ubiquity of level-based views of neuroscience and a number of proposed schemes, no consensus can be found on what the relevant levels of abstraction are, or even what defines a level (Guttinger and Love 2019). Suggestions of different level schemes range from those of computational abstraction (Colburn and Shute 2007;Wing 2008), which simplifies a process to be independent of its specific implementation or physical substrate, to levels of conceptual abstraction, which delineate the degree of idealization vs relatability to data (O'Leary, Sutton, and Marder 2015), and levels of physical abstraction, which are used to deal with different spatiotemporal scales. However, recent analyses suggest that natural phenomena are not organized into levels in a universally coherent manner (Potochnik and McGill 2012;Potochnik 2017, 2020). From a pragmatic view, levels of abstraction need not reflect discrete "levels" in nature, but are indicative of our problem-solving strategies and constraints. Because different abstractions can facilitate different research aims (Potochnik 2017), multiple descriptive models, which abstract different features to different degrees, are needed to represent the same phenomenon.
Mechanistic and normative models connect levels of abstraction
Without links between them, we would be left with a hodgepodge of different descriptions. However, unification has been noted as a strong desideratum for scientific theories (Schindler 2018;Keas 2018). The relationship between different descriptions of the same phenomena can often be expressed in terms of a mechanistic explanation. For example, we might describe single-neuron activity in terms of membrane currents, or by listing a set of spike times: a natural reduction in the dimensionality that can result from many possible combinations of currents (Golowasch et al. 2002;Prinz, Bucher, and Marder 2004). A mechanistic model (e.g., (Hodgkin and Huxley 1952)) that demonstrates how spike times emerge from currents connects the descriptions at the two levels and, in addition, does so asymmetrically, as it does not claim to be a mechanism by which currents emerge from spike times. By bridging descriptions that each abstract different features to different degrees, mechanistic explanations create a multi-level 'mosaic unity' in neuroscience (Craver 2007), in which descriptions are grounded through their interconnections, and more abstract features are grounded in their emergence from less abstract counterparts (Craver 2007;David M. Kaplan and Bechtel 2011;Bechtel 2008;Craver 2002;Oppenheim and Putnam 1958).
In contrast, a normative explanation connects descriptions by appealing to the ability of less abstract features to satisfy a description of more abstract goals. For example, the mammalian hypothalamus could be described as maintaining body temperature like a thermostat (Tan and Knight 2018;Morrison and Nakamura 2011) or as a circuit of interconnected neurons. A normative model connects the two descriptions by explaining the negative feedback loop in the circuit through its ability to achieve those thermostatic functions. Because functions exist over a range of levels, from cellular to behavioral or computational, we could imagine a "multi-level" approach to understanding the mammalian hypothalamus that in turn uses the goal of a negative feedback loop to explain the developmental processes that establish hypothalamic connectivity. Like their mechanistic counterparts, normative explanations establish links between descriptions which each have their own utility for different problems, by virtue of their unique abstractions.
Thus, the three-fold division of explanatory labor in neuroscience falls naturally into the different roles a model can play in terms of levels of abstraction. Descriptive explanations define abstractions of phenomena at different levels, while mechanistic and normative explanations bridge levels of abstraction. Descriptive models, rather than "mere" descriptions of phenomena (as they're sometimes dismissed), are the necessary foundation of both normative and mechanistic models. In turn, mechanistic and normative explanations connect a description at a "source" level to a description at a higher or lower "target" level ( Figure 2). Each of the terms that represent the components of mechanistic models and the constraints of normative models are descriptive models at a lower level of abstraction, while those that represent the emergent properties of mechanistic models and the goals of normative models are descriptive models at a higher level of abstraction. Given their multi-level nature, a dialogue between descriptive, normative, and mechanistic models is needed for a theoretical account of any neuroscientific phenomenon.
[FIGURE 2 NEAR HERE]
At what level of abstraction should a model be built?
As different abstractions trade off advantages and disadvantages, the selection of which abstraction to use is highly dependent on the problem at hand (Herz et al. 2006). Current neuroscientific practice generally attempts two approaches for selecting the appropriate level of abstraction, which serve different purposes. The first approach is to try to find as low a level as possible that still includes experimentally supported details and accounts for the phenomenon. For example, one might explain the phenomenon of associative memories using compartmental models of pyramidal cell networks, including specific active conductances, dendritic compartments, and pharmacological effects on different inputs arriving at different compartments, and identifying the consequences for learning and recall (Hasselmo 1993). The multiplicity of parameters and variables used in this approach provides many details that can be matched to observable features of a phenomenon and can capture unexpected properties that emerge from their interaction. However, these details need to be extensively calibrated to ensure the model is accurate, and can be very sensitive to missing, degenerate, or improperly tuned parameters (Traub, Jefferys, and Whittington 1999;Traub et al. 1991). The second approach is to try to find the most abstract level that can still account for the phenomenon. For example, we might instead appeal to the classic Hopfield network, in which units are binary (+1, -1), connections are symmetrical, and units are updated using a very simple asynchronous rule (Hopfield 1982;Hertz, Krogh, and Palmer 1991). While more abstract models sacrifice the ability to make predictions about lower-level details, their insights are often more robust to specific (e.g. unobserved) physiological details, and by reducing a complicated system to a small number of effective parameters, they allow for powerful analysis of the influences on the system's properties. Further, abstract models can provide conceptual benefits such as intuition for how the system works and the ability to generalize to other systems that can be similarly abstracted (Gilead, Trope, and Liberman 2019;Gilead, Liberman, and Maril 2012;O'Leary, Sutton, and Marder 2015).
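To make the more abstract end of this trade-off concrete, here is a minimal Python sketch of a Hopfield-style network as characterized above (binary units, symmetric Hebbian connections, asynchronous updates); the network size, number of patterns, and random seed are illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
n_units, n_patterns = 100, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

# Hebbian weights: strengthen connections between co-active units; no self-connections.
w = (patterns.T @ patterns) / n_units
np.fill_diagonal(w, 0.0)

def recall(cue, n_sweeps=10):
    """Asynchronous updates until the state settles (pattern completion)."""
    state = cue.copy()
    for _ in range(n_sweeps):
        for i in rng.permutation(n_units):
            state[i] = 1 if w[i] @ state >= 0 else -1
    return state

# Cue the network with a corrupted version of the first stored pattern.
cue = patterns[0].copy()
cue[rng.choice(n_units, size=25, replace=False)] *= -1
print("overlap with stored pattern:", (recall(cue) @ patterns[0]) / n_units)

Despite discarding virtually all biophysical detail, the sketch exhibits the pattern-completion behavior that motivates this level of abstraction.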
Another important consideration is the ability of models at different levels to interface with different experimental modalities or scientific fields. Every measurement is itself an abstraction, in that it is a reduced description of the part of the universe corresponding to the measurement (Chang 2007b). For example, fMRI measures blood flow across wide swaths of cortex, but abstracts away the interactions between individual neurons, while silicon probes measure extracellular voltage but abstract away intracellular processes, and calcium imaging measures neuronal calcium levels, but abstracts away the electrophysiology of neuronal spiking. All of these are discussed as "neural activity", but they likely reflect different aspects of learning, performance, and dynamics. Moreover, subsequent processing abstracts these signals even further, such as correlation (functional connectomics) in fMRI, sorting voltage signals into putative cell "spiking" from silicon probes, and treating calcium transients as "events" from calcium imaging. The abstraction made by one measurement device might lend itself to explanations at a given level, but not others, and the measurements available are important considerations when selecting which abstractions to make in our models.
Similarly, models at different levels are often used by distinct scientific fields or communities. The existence of a literature with a rich body of relevant work can provide details and support for components of a model outside of the immediate problem of interest. Integrating theories and models across these different fields can be particularly beneficial for scientific progress (Wu, Wang, and Evans 2019; Grim et al. 2013); however, crossing levels can be a sociological problem as well as a methodological one because different fields of study often use different languages and operate under different conceptual frameworks.
In general, it is important that researchers spell out the abstractions being made in their models, including their purposes as well as their limitations. By being concrete about the abstractions made, researchers can increase the reliability of their theories. Importantly, as noted above, it is useful to acknowledge not only the simplifications made about smaller-scale phenomena, but also the simplifications made as to larger-scale interactions that have been abstracted away from a theory.
Theory development and experimentation
Traditional views emphasize the use of experiments to test proposed theories (Popper 1959), and even consider an interplay in which theories suggest new experiments and unexpected experimental results reveal the need for new theories (Firestein 2015;Laudan 1978). However, theories do not arise fully formed, but are developed over time through an interaction with experimentation (Bechtel 2013;Laudan 1978;Hacking 1983;Douglas 2014;Firestein 2015). We now consider two crucial pieces of that dialogue: the domain of a theory, or the phenomena it is intended to pertain to, and a translation function, which specifies how it should relate to phenomena in its domain. Experimentation plays two key roles in relation to theory: 1) grounding model assumptions and 2) assessing the quality of model-based explanations. We then discuss an often underappreciated form of experimentation, in which models themselves are the experimental subjects. These modeling experiments allow us to explore the (sometimes hidden or unexpected) implications of a theory itself, to identify its underlying inconsistencies, and to predict novel phenomena. Together, this reveals a picture in which theory development is not relegated to simply proposing theories-to-be-tested, but instead entails a complex experimental paradigm in which models play an active role in the simultaneous development, assessment, and utilization of theories within explicit conceptual frameworks.
Linking Theory and Phenomena
The domain of a theory is the set of phenomena that it purports to explain (Kuhn 2011, 2012;Mitchell, Keller, and Kedar-Cabelli 1986;A. D. Redish 1997). The domain is therefore a set of data-imposed constraints, and the theory should provide an explanation consistent with those constraints. Theoretical studies should be explicit about what phenomena do and do not lie in their intended domain. In practice, nascent theories are often evaluated not only by their ability to explain data in their proposed domain (Feyerabend 1993;Laudan 1978) but also by their potential to expand beyond the initial domain with further development (Lakatos 1978). For example, the theory that action potentials arise from voltage-dependent changes in ionic permeability (Hodgkin and Huxley 1952;Goldman and Morad 1977;Katz 1993;Hille 2001) should apply to the domain of all action potentials in all neurons. Early theories of action potential function identified voltage-gated sodium currents as the primary depolarizing component and formalized their action in models that developed into the Hodgkin-Huxley framework (Hodgkin and Huxley 1952). When some action potentials were later found to be independent of sodium concentrations, it was straightforward to incorporate other voltage-gated channels within the same framework (Hille 2001;Koch and Segev 1989;Gerstner et al. 2014).
By instantiating a theory in a specific structure (Rosenblueth and Wiener 1945;Stafford 2009), models play a key role in connecting a theory to phenomena in its domain. However, no model is directly comparable to experimental data by virtue of its structure alone. As noted above, a model also consists of an interpretation of how that structure relates to its target phenomena (Weisberg 2013). This interpretation is specified by a translation function: a statement of how the model's components map onto its target phenomena. A translation function may be as straightforward as "variable V represents the membrane potential in millivolts", but it can also be less constrained, e.g., "variable V describes the slow changes in the membrane potential and ignores all spiking activity". In other cases, the translation function can be complex, as parts of the model can have a loose correspondence to general features of large classes of data, and can represent highly abstract effective parameters or qualitative behaviors. For example, the units in Hopfield's attractor network models (Hopfield 1982;Hopfield and Tank 1985;Hertz, Krogh, and Palmer 1991) are not meant to directly correspond to measurable properties of biological neurons, but are instead intended to reflect qualitative features, namely that neural populations are "active" or not. In effect, the translation function spells out the abstractions made by the model. Specifying the translation function of a model is as important as defining its structure (Weisberg 2013). While these descriptions are often provided for highly abstract models, models that describe finer spatio-temporal scales (such as detailed compartmental models of neurons) are often considered to be "biologically realistic" and assume a simple or obvious translation function. However, it is important to remember that these models are also abstractions, albeit at a different level, and a proper description of the abstractions made will help clarify both the uses and the limitations of such models. By specifying the intended correspondence between model terms and phenomena, the translation function operationalizes the concepts associated with those terms in the theory (Bridgman 1927;Chang 2007a).
Experiments ground model assumptions
With a well-defined translation function in hand, we can now consider the ways in which models are informed by experimental data. As outlined above, the components of descriptive, mechanistic, and normative models are each based on a different set of foundational assumptions. These assumptions are not generally arbitrary, but are informed by experimental observations and results.
Descriptive models are founded on an assumed relationship between variables, which is generally formulated to capture an observed regularity in experimental data. These initial observations often rely on "exploratory" experiments, which attempt to identify empirical regularities and the constructs with which to describe them (Steinle 1997). In specifying the characteristic properties of a phenomenon, descriptive explanations delineate the attributes that are expected to be replicable in future experiments and play a foundational role in subsequent mechanistic and normative models. This is extremely important for the current replication controversy (Baker 2016;Goodman, Fanelli, and Ioannidis 2016;Fanelli 2018). A recent National Academy report (National Academies of Sciences and Medicine 2019) characterizes replicability as the ability to obtain consistent results across multiple studies, and contrasts it with reproducibility, defined as the ability to get the same results when applying the same analyses to the same data. Several authors have suggested that the replication crisis is in fact a crisis of theory development, as it is the scientific claims (not data) that should be replicable (drugmonkey 2018; A. D. Redish et al. 2018;Smaldino 2019). We suggest that this crisis stems from three sources: 1) a failure to define domains correctly, assuming that limited observations correspond to a much larger range of phenomena than they actually do, 2) a failure to formalize observations in adequate descriptive models (e.g., an overreliance on correlation, or assumed simple relationships), and 3) a failure to connect those descriptive models with mechanistic or normative models that integrate descriptions at different levels of abstraction.
Mechanistic models are founded on a set of parts and interactions that are assumed to be relevant to a target phenomenon. The existence of candidate parts/interactions can be informed by experimental observations, and their relevance (or irrelevance) to a given target phenomenon is often derived from experimental or natural interventions (Pearl 2009). Once the decision is made to include a part/interaction in a mechanistic model, its corresponding terms can be parameterized by virtue of the descriptive models at their source level of abstraction. For example, when trying to explain the phenomenon of burst spiking in thalamocortical neurons, we might observe the presence of a hyperpolarization-activated current (Ih) which, when blocked, disrupts burst spiking (McCormick and Pape 1990). We can then calibrate the parameters used to model Ih with values acquired through slice experiments.
Experimental data can also inform the founding assumptions (goal/constraints) of normative models. For example, when trying to explain the responses of visual neurons, we might parameterize the constraints of an efficient coding model with data from retinal photoreceptors (Field and Rieke 2002). As with mechanistic models, normative parameters rely on the descriptive models we have for photoreceptor properties. However, grounding an assumed function (e.g. "vision") in experimental data can be more challenging. This arises from a notable asymmetry between mechanistic and normative approaches: while the founding assumptions of a mechanistic model (parts/interactions) are less abstract than their target phenomena, the founding assumptions of normative approaches (a function/goal) are generally more abstract than the phenomenon they are used to explain. This often results in normative approaches being termed "top-down", in contrast to "bottom-up" mechanistic modeling. In practice, functions are often operationalized via performance on a specified task, rendering them groundable in experimental data. For example, the assumed goal of primate facial recognition areas is grounded in the change in facial recognition abilities when those neural systems are manipulated or absent, neural responses to facial stimuli, and in the coupling of those areas with sensory and motor areas providing a behavioral circuit (Gross, Bender, and Rocha-Miranda 1969;Tsao et al. 2006;Moeller et al. 2017;Grimaldi, Saleem, and Tsao 2016).
Experiments assess solution quality
As has been noted by many previous authors, we cannot definitively "confirm" theories (Popper 1959), nor can we definitively test/falsify the validity of a theory in isolation (Duhem 1991;Lakatos 1980). However, a theory's utility does not require absolute confidence in its validity, but only a track record of solving problems in its domain. By instantiating theories in a model with a well-defined translation function, we can assess the quality of solutions proposed with a given theory by comparing the behavior of those models to experimental observations. In the case of descriptive models, model fitting can estimate confidence intervals and goodness-of-fit for the best-fitting parameter values, and can even be used to quantitatively compare candidate models to determine which can best explain experimental data with the fewest parameters. A researcher might build a mechanistic model with terms that correspond to the proposed parts to see if they are able to reproduce features of the data, or test the model's ability to predict the effect of experimental manipulations. Alternatively, a researcher can hypothesize that the system is performing some function, make a normative model that instantiates the goal, and see if properties of the data match those expected from a system optimizing that goal. In these "confirmatory" (theory-driven/hypothesis-testing) experiments, models are used to apply existing theories to account for observed phenomena, compare possible instantiations of a theory, or even compare theories with overlapping domains to see which better accounts for the phenomenon. In each case, the assumptions of the model act as a hypothesis and the degree of similarity between model and experimental data is used to assess the sufficiency of a theory (and its specific model instantiation) to account for a phenomenon.
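As a toy illustration of this kind of quantitative model comparison (the synthetic data, least-squares fits, and the Gaussian-likelihood form of the AIC are assumptions made for brevity here, not a prescription from the cited literature):

import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 60)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.5, x.size)   # synthetic data; ground truth is linear

def aic_for_polynomial(degree):
    """Fit a polynomial descriptive model and score it with AIC (lower is better)."""
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    n, k = x.size, degree + 1
    return n * np.log(rss / n) + 2 * k             # Gaussian-likelihood AIC, up to a constant

for degree, name in [(1, "linear"), (3, "cubic")]:
    print(f"{name} model: AIC = {aic_for_polynomial(degree):.1f}")
# The penalty 2k favors the model that accounts for the data with fewer parameters.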
However, the value of modeling is often in its ability to show insufficiency of a theory/model to account for experimental data. Rather than invalidating the theory, this can often prompt updates to the theory or a search for yet-unobserved relevant phenomena. For example, early models of head-direction tuning found that a mechanism based on attractor networks required recurrent connections not supported by anatomical data (A. D. Redish, Elga, and Touretzky 1996). This incompatibility led to subsequent analyses which found that the tuning curves were more complicated than originally described, matching those seen in the model without the recurrent connections (Blair, Lipscomb, and Sharp 1997). Similarly, the usefulness of normative models often lies in their ability to identify when a system is performing suboptimally (Parker and Smith 1990). Such a finding can provide additional information about unexpected functions or constraints. When there is a mismatch between a normative model and observed phenomena, one could hypothesize that the agent is optimizing a different goal (Fehr and Schmidt 1999; Binmore 2005), that new constraints limit the processes available (Simon 1972; Mullainathan 2002), that historical processes limited the optimization itself (Gould and Lewontin 1979; Gould 1983), or that computational processes limit the calculations available to the system (A. Nadel 1994; Schacter 2001; Webb, Glimcher, and Louie 2021). For instance, several studies have found that foraging subjects tend to remain at reward sites longer than needed (Nonacs 2001; Camerer 1997; Carter and Redish 2016) and accept longer-delay offers than would be expected to maximize total reward (Wikenheiser, Stephens, and Redish 2013; Schmidt, Duin, and Redish 2019). However, optimality could be restored by assuming an additional factor in the cost function (Simon 1972), subsequently characterized as "regret": an increased cost of making a mistake of one's own agency compared to equivalently poor outcomes that were not due to recognizable mistakes (Wikenheiser, Stephens, and Redish 2013; Steiner and Redish 2014; Zeelenberg et al. 2000; Coricelli et al. 2005). Similarly, Fehr and colleagues have found that normative explanations of behavior in a multi-player game require an additional component with information about one's companion's success in addition to one's own, in order to account for the observed behavior (Fehr and Krajbich 2014; Fehr and Schmidt 1999; Binmore 2005).
Modeling experiments explore theory implications
Confirmatory experiments can even be carried out without direct comparison to data, as phenomena at both the target and source levels of abstraction can be pure theoretical entities. Similar to their benchtop counterparts, we can treat different parameters or model instantiations as independent variables in the experiment, and test their sufficiency to account for different aspects of the phenomenon as the dependent variables (Omar, Aldrich, and Gerkin 2014;Gerkin, Jarvis, and Crook 2018). One can use these models as experiments to test the feasibility of theoretical claims in tractable idealized systems. For example, Hopfield's attractor network models (Hopfield 1982;Hopfield and Tank 1985) provided strong support for Hebb's theory (Hebb 1949) that increased connectivity from co-active firing could create associative memory, by showing that strong connections between simple neuron-like entities were sufficient to produce cell assemblies that could be accessed through a pattern-completion process (Hertz, Krogh, and Palmer 1991).
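A minimal sketch of this kind of modeling experiment is shown below: Hebbian outer-product weights store binary patterns in a Hopfield-style network, and asynchronous updates complete a corrupted cue back to a stored memory. The network size, patterns, and noise level are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def train_hopfield(patterns):
    """Hebbian outer-product learning rule; self-connections are removed."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)
    return w / patterns.shape[0]

def recall(w, cue, n_sweeps=10):
    """Asynchronous updates descend the network's energy to a stored attractor."""
    state = cue.copy()
    for _ in range(n_sweeps):
        for i in rng.permutation(len(state)):
            state[i] = 1 if w[i] @ state >= 0 else -1
    return state

# Store two random +/-1 patterns in a 100-unit network.
patterns = rng.choice([-1, 1], size=(2, 100))
w = train_hopfield(patterns)

# Corrupt 20 of 100 units of the first pattern and let the network pattern-complete.
cue = patterns[0].copy()
flip = rng.choice(100, size=20, replace=False)
cue[flip] *= -1
recovered = recall(w, cue)
print("overlap with stored pattern:", (recovered == patterns[0]).mean())
```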
Like their physical analogues (e.g. the 6-OHDA rat or the MPTP monkey), models can be used for exploratory experiments as well. Exploration of the Hopfield model (Hopfield 1982; Hopfield and Tank 1985; Kohonen 1980, 1984) revealed novel properties of categorization, tuning curves, and pattern completion in the neuron-like entities, which were later identified experimentally (K. Obermayer, Blasdel, and Schulten 1992; Klaus Obermayer et al. 2001; Swindale and Bauer 1998; Swindale 2004; de Villers-Sidani and Merzenich 2011; Nahum, Lee, and Merzenich 2013; Freedman et al. 2001, 2003; Lakoff 1990; Rosch 1983; Wills et al. 2005; L. L. Colgin et al. 2010; Yang and Shadlen 2007; Jezek et al. 2011; Kelemen and Fenton 2016). Exploratory modeling experiments can instantiate idealized aspects of a theory to help build intuition for the theory itself. Hopfield's model and its subsequent derivatives have provided researchers with a deeper understanding of how memories can be accessed by content through pattern-completion processes and given rise to concepts such as "basins of attraction" (Hopfield 1982; Hertz, Krogh, and Palmer 1991). These computational discoveries can help build understanding of the theory, and lead to predictions and ideas for new experiments.
Modeling experiments are especially useful in the context of theory development (Guest and Martin 2020). When a phenomenon cannot be readily explained using an existing theory, assumptions can be made as the basis of a modeling experiment. The behavior of this model can then be used to evaluate the sufficiency of these assumptions to account for the phenomenon. Often, these modeling experiments precede a well-formed theory, and a theorist will perform numerous experiments with different models in the process of developing a theory (van Rooij and Baggio 2020). Over time, specific successful model formulations can become closely associated with the theory and develop into its canonical instantiations that make the theory applicable to a wider range of problems and give more precise solutions.
Conclusions
A scientific theory is a thinking tool: a set of ideas used to solve specific problems. We can think of theoretical neuroscience as a field which approaches problems in neuroscience with the following problem-solving methodology: theories exist within conceptual frameworks and are instantiated in models which, by virtue of a translation function, can be used to assess a theory's ability to account for phenomena in the theory's domain or explore its further implications. (See Figure 3.) We identified three kinds of explanations that play distinct roles in this process: those in which descriptive theories and models are used to define the abstractions by which we describe a phenomenon; those in which mechanistic theories and models are used to explain phenomena in terms of lower-level parts and their interactions; and those in which normative theories and models are used to explain phenomena in terms of a function at a higher level of abstraction.
These considerations lead to a more concrete view of theory in neuroscience under the pragmatic view: a theory is a set of assumptions available to be instantiated in models, whose adequacy for problems in their domain has been vetted via experimentation, and with a well-established translation function that defines their connection to phenomena. Over time and through the development of canonical model formulations, theories become more rigorous, such that researchers agree on how they should be implemented to explain specific domains. A theory in this sense is not a formal set of laws, but a continuously developing body of canonical models and model-phenomenon correspondences, bound together partly by history and partly by shared problem-solving methods and standards (Bechtel 1993).
What recommendations can we take away from this perspective? First and foremost, that scientists should be explicit about the underlying components of their theory. Reliability of theoretical work depends on being explicit about the domain that the theory purports to cover, the abstractions used (what has been ignored and left out), and the translation function to connect the theory to actual measurements. Furthermore, thinking of the pragmatic aspects suggests being explicit about what problems the work proposes to solve, what conceptual frameworks the theory fits in, and what the founding assumptions of the models are.
Finally, it is interesting to consider that we might apply our taxonomy to our own framework. The concept that 'the ultimate goal of a theory is to provide tools that allow one to better explain and control one's environment' is a normative theory of the goal of scientific theories; the concept that 'models instantiate theories and allow one to test their viability and their relationship to phenomena' is a mechanistic theory of how those theories achieve that goal; and the concept that 'theories live within a framework that a community applies to them' is a descriptive theory of theories. One could imagine a metascientific research program which studies the available phenomena -for example, the scientific literature -to test and further develop those theories, and even the use of models of the scientific process itself (e.g. (Devezer et al. 2019)). The benefits of such a research program could prove as impactful for scientific practice as other theories have proven for manipulation of phenomena in their domain.
Figure 1:
The three explanatory processes that underlie scientific explanations. Descriptive theories address the question of "what is the phenomenon?" and identify the repeatable characteristics of that phenomenon. Mechanistic theories address the question of "how does the phenomenon arise?" and explain the phenomenon in terms of the parts and interactions of other phenomena at lower levels of abstraction. Normative theories address the question of "why do the phenomena exist?" and allow a comparison of the phenomenon to an identified function or goal. Normative theories allow the determination of whether a process is achieving its goal -inadequacies generally imply an incomplete understanding of the limitations engendered by processes at a lower level of abstraction.
Figure 2:
Interactions between three explanatory processes and levels of abstraction. Descriptive explanations define an idealized abstraction of specific aspects of a phenomenon for discussion, measurement, and repeatability. Mechanistic explanations account for properties of a phenomenon by their emergence from less abstract phenomena, while normative explanations account for those properties by appealing to their ability to perform more abstract goals.
Figure 3:
How the various components discussed in this manuscript interact. The domain of a theory is the set of phenomena which it purports to explain. Theories are instantiated in models, which are an abstraction of phenomena in the domain, as specified by a translation function. By constraining the form solutions can take, a conceptual framework defines a way of looking at a problem, within which models and theories can be proposed. Note that a given model can instantiate more than one theory and a theory can be instantiated by more than one model.
Box captions
Box 1: Levels of abstraction Box 2: What makes a good neuroscientific theory? What makes a good model?
High-fidelity CRISPR-Cas9 variants with undetectable genome-wide off-targets
CRISPR-Cas9 nucleases are widely used for genome editing but can induce unwanted off-target mutations. Existing strategies for reducing genome-wide off-targets of the broadly used Streptococcus pyogenes Cas9 (SpCas9) are imperfect, possessing only partial or unproven efficacies and other limitations that constrain their use. Here we describe SpCas9-HF1, a high-fidelity variant harboring alterations designed to reduce non-specific DNA contacts. SpCas9-HF1 retains on-target activities comparable to wild-type SpCas9 with >85% of single-guide RNAs (sgRNAs) tested in human cells. Strikingly, with sgRNAs targeted to standard non-repetitive sequences, SpCas9-HF1 rendered all or nearly all off-target events undetectable by genome-wide break capture and targeted sequencing methods. Even for atypical, repetitive target sites, the vast majority of off-targets induced by SpCas9-HF1 were not detected. With its exceptional precision, SpCas9-HF1 provides an alternative to wild-type SpCas9 for research and therapeutic applications. More broadly, our results suggest a general strategy for optimizing genome-wide specificities of other RNA-guided nucleases.
SpCas9-HF1 retains high on-target activities
To determine how robustly SpCas9-HF1 functions at a larger number of on-target sites, we performed direct comparisons between this variant and wild-type SpCas9 using additional sgRNAs. In total, we tested 37 different sgRNAs, 24 targeted to EGFP and 13 targeted to endogenous human gene targets. For 20 of the 24 sgRNAs tested using the EGFP disruption assay (Extended Data Fig. 2a) and 12 of the 13 sgRNAs tested using a T7 endonuclease I mismatch assay (Fig. 1c), we found SpCas9-HF1 exhibited at least 70% of the on-target activities observed with wild-type SpCas9 at the same sites (Fig. 1d). Indeed, SpCas9-HF1 showed highly comparable activities (90-140%) to wild-type SpCas9 with the vast majority of sgRNAs (Fig. 1d). Three of the 37 sgRNAs tested showed essentially no activity with SpCas9-HF1 (EGFP sites 9 and 23, and RUNX1 site 2), and examination of these target sites did not suggest any obvious differences in the characteristics of these sequences compared to those for which we saw high activities (Supplementary Table 1). Overall, SpCas9-HF1 possesses comparable activities (greater than 70% of wild-type SpCas9 activities) for 86% (32/37) of the sgRNAs we tested.
Genome-wide specificity of SpCas9-HF1
To test whether SpCas9-HF1 exhibits reduced off-target effects in human cells, we used the genome-wide unbiased identification of double-stranded breaks enabled by sequencing (GUIDE-seq) method 8 to assess eight different sgRNAs targeted to sites in the endogenous human EMX1, FANCF, RUNX1, and ZSCAN2 genes. The sequences targeted by these sgRNAs have variable numbers of predicted mismatched sites in the reference human genome (Extended Data Table 1). Assessment of on-target double-stranded oligodeoxynucleotide (dsODN) tag integration (by restriction-fragment length polymorphism (RFLP) assay) and indel formation (by T7 endonuclease I assay) for the eight sgRNAs revealed comparable on-target activities with wild-type SpCas9 and SpCas9-HF1 (Extended Data Fig. 3a and 3b, respectively), demonstrating that these GUIDE-seq experiments were working efficiently and comparably with the two different nucleases.
These GUIDE-seq experiments showed that with wild-type SpCas9, seven of the eight sgRNAs induced cleavage at multiple off-target sites (ranging from 2 to 25 per sgRNA), whereas the eighth sgRNA (FANCF site 4) did not yield any detectable off-target sites (Fig. 2a, b). The off-target sites identified harboured one to six mismatches distributed throughout various positions in the protospacer and/or PAM sequence (Fig. 2c and Extended Data Fig. 4a). However, with SpCas9-HF1, a complete absence of GUIDE-seq detectable off-target events was observed for six of the seven sgRNAs that induced off-target effects with wild-type SpCas9 (Fig. 2a, b). Among these seven sgRNAs, only a single detectable genome-wide off-target was identified, for FANCF site 2, at a site harbouring one mismatch within the protospacer seed sequence (Fig. 2a). As with wild-type SpCas9, the eighth sgRNA (FANCF site 4) did not yield any detectable off-target cleavage events when tested with SpCas9-HF1 (Fig. 2a). Notably, with all eight sgRNAs, SpCas9-HF1 did not create any new nuclease-induced off-target sites (not already observed with wild-type SpCas9) detectable by GUIDE-seq.
Figure 1 | Identification and characterization of SpCas9 variants bearing substitutions in residues that form non-specific DNA contacts. a, Schematic depicting wild-type SpCas9 interactions with the target DNA-sgRNA duplex, based on PDB accessions 4OO8 and 4UN3 (adapted from refs 28 and 29, respectively). b, Characterization of SpCas9 variants that contain alanine substitutions in positions that form hydrogen bonds with the DNA backbone. Wild-type SpCas9 and variants were assessed using the human cell EGFP disruption assay when programmed with a perfectly matched sgRNA or partially mismatched sgRNAs. Error bars represent s.e.m. for n = 3; mean level of background EGFP loss represented by red dashed line. c, On-target activities of wild-type SpCas9 and SpCas9-HF1 across 13 endogenous sites measured by T7 endonuclease I assay. Error bars represent s.e.m. for n = 3. d, Ratio of on-target activity of SpCas9-HF1 to wild-type SpCas9. The median and interquartile range are shown; the interval with >70% of wild-type activity is highlighted in green.
To confirm these GUIDE-seq findings, we used targeted amplicon sequencing to more directly measure the frequencies of indel mutations induced by wild-type SpCas9 and SpCas9-HF1. For these experiments, we transfected human cells only with sgRNA- and Cas9-encoding plasmids (without the GUIDE-seq tag). We used next-generation sequencing to examine the on-target sites and 36 of the 40 off-target sites that had been identified for six sgRNAs with wild-type SpCas9 in our GUIDE-seq experiments (four of the 40 sites could not be specifically amplified from genomic DNA). These deep sequencing experiments showed that: (1) wild-type SpCas9 and SpCas9-HF1 induced comparable frequencies of indels at each of the six sgRNA on-target sites, indicating that the nucleases and sgRNAs were functional in all experimental replicates (Fig. 3a, b); (2) as expected, wild-type SpCas9 showed statistically significant evidence of indel mutations at 35 of the 36 off-target sites (Fig. 3b) at frequencies that correlated well with GUIDE-seq read counts for these same sites (Fig. 3c); and (3) the frequencies of indels induced by SpCas9-HF1 at 34 of the 36 off-target sites were statistically indistinguishable from the background level of indels observed in samples from control transfections (Fig. 3b). For the two off-target sites that appeared to have statistically significant mutation frequencies with SpCas9-HF1 relative to the negative control, the mean frequencies of indels were 0.049% and 0.037%, levels at which it is difficult to determine whether these are due to sequencing or PCR error or are bona fide nuclease-induced indels. Based on these results, we conclude that SpCas9-HF1 can completely or nearly completely reduce off-target mutations that occur across a range of different frequencies with wild-type SpCas9 to levels generally undetectable by GUIDE-seq and targeted deep sequencing.
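As a hedged illustration of the style of statistical comparison used for such deep-sequencing data (a one-sided Fisher exact test on pooled indel versus total read counts, followed by Benjamini-Hochberg correction, as described in the Figure 3 legend below), the sketch applies these tests to invented read counts. It is not the study's analysis pipeline, and all numbers are placeholders.

```python
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

# Placeholder pooled read counts per off-target site: (indel reads, total reads).
# These numbers are illustrative only, not data from the study.
nuclease_counts = [(52, 100_000), (7, 100_000), (3, 100_000)]
control_counts = [(4, 100_000), (5, 100_000), (2, 100_000)]

p_values = []
for (mut_n, tot_n), (mut_c, tot_c) in zip(nuclease_counts, control_counts):
    table = [[mut_n, tot_n - mut_n],
             [mut_c, tot_c - mut_c]]
    # One-sided test: are indels more frequent in the nuclease-treated sample?
    _, p = fisher_exact(table, alternative="greater")
    p_values.append(p)

reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
for i, (p, q, sig) in enumerate(zip(p_values, p_adj, reject), start=1):
    print(f"off-target {i}: p={p:.2e}, BH-adjusted={q:.2e}, significant={sig}")
```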
Figure 2 | Genome-wide specificities of wild-type SpCas9 and SpCas9-HF1 with sgRNAs targeted to standard, non-repetitive sites. a, Off-target cleavage sites of wild-type SpCas9 and SpCas9-HF1 with eight sgRNAs targeted to endogenous human genes, as determined by GUIDE-seq. Read counts represent a measure of cleavage frequency at a given site; mismatched positions within the spacer or PAM are highlighted in colour. b, Summary of the total number of genome-wide off-target sites identified by GUIDE-seq for wild-type SpCas9 and SpCas9-HF1 with the sgRNAs used in panel a. c, Off-target sites identified for wild-type SpCas9 and SpCas9-HF1 for the eight sgRNAs, binned according to the total number of mismatches (in the protospacer and PAM) relative to the on-target site.
We next assessed the capability of SpCas9-HF1 to reduce genome-wide off-target effects of sgRNAs designed against atypical homopolymeric or repetitive sequences. Although we and other researchers now try to avoid on-target sites with these characteristics due to their relative lack of orthogonality to the genome, we wished to challenge the genome-wide specificity of SpCas9-HF1 with sites that have very large numbers of known off-target sites in human cells. Therefore, we used previously characterized sgRNAs 4,8 that target either a cytosine-rich homopolymeric sequence or a sequence containing multiple TG repeats in the human VEGFA gene (VEGFA site 2 and VEGFA site 3, respectively) (Extended Data Table 1). In control experiments, we again found that each of these sgRNAs induced comparable levels of GUIDE-seq dsODN tag incorporation (Extended Data Fig. 3c) and indel mutations (Extended Data Fig. 3d) with both wild-type SpCas9 and SpCas9-HF1, demonstrating that SpCas9-HF1 is not impaired in on-target activity with either of these sgRNAs. Importantly, these GUIDE-seq experiments revealed that SpCas9-HF1 was highly effective at reducing off-target sites of these sgRNAs, with 123/144 sites for VEGFA site 2 and 31/32 sites for VEGFA site 3 not detected (Fig. 4a and Extended Data Fig. 5). Examination of wild-type SpCas9 off-target sites not detected with SpCas9-HF1 showed that they each possessed a range of total mismatches distributed at various positions within their protospacer and PAM sequences: 2 to 7 mismatches for the VEGFA site 2 sgRNA and 1 to 4 mismatches for the VEGFA site 3 sgRNA (Fig. 4b and Extended Data Fig. 4b); also, nine of these off-targets for VEGFA site 2 may be recognized by an alternate potential base pairing interaction with the sgRNA that might occur with a single bulged base 12 at the sgRNA-DNA interface (Extended Data Figs 5 and 6). Overall, the sites that were still mutated by SpCas9-HF1 possessed a range of 2 to 6 mismatches for the VEGFA site 2 sgRNA and 2 mismatches in the single site for the VEGFA site 3 sgRNA (Fig. 4b), with three of the off-target sites for the VEGFA site 2 sgRNA having an alternative potential single bulge alignment (Extended Data Figs 5 and 6). Notably, no new nuclease-induced off-target sites were induced by SpCas9-HF1 with either of the two sgRNAs. Collectively, these results demonstrate that SpCas9-HF1 can be highly effective at reducing off-target effects of sgRNAs targeted to simple repeat sequences and can also have substantial impacts on sgRNAs targeted to homopolymeric sequences.
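A simple sketch of how off-target sites can be binned by their number of mismatches to the intended protospacer and PAM (as tallied above) is given below. The sequences are invented examples, and counting mismatches across the 20-nt protospacer plus an NGG PAM, with the degenerate N position ignored, is an assumed convention rather than the study's exact rule.

```python
from collections import Counter

def count_mismatches(on_target: str, off_target: str) -> int:
    """Count mismatched positions between two equal-length sequences,
    ignoring positions where the on-target sequence has an 'N' (e.g. the PAM N)."""
    assert len(on_target) == len(off_target)
    return sum(1 for a, b in zip(on_target, off_target) if a != "N" and a != b)

# Invented example: 20-nt protospacer + NGG PAM for the on-target site,
# and a handful of hypothetical off-target sites.
on_target = "GAGTCCGAGCAGAAGAAGAA" + "NGG"
off_targets = [
    "GAGTCCGAGCAGAAGAAGAG" + "TGG",
    "GAGTCTGAGCACAAGAAGAA" + "AGG",
    "AAGTCCGAGCAGAAGAAGAA" + "CGG",
]

bins = Counter(count_mismatches(on_target, site) for site in off_targets)
for n_mm in sorted(bins):
    print(f"{n_mm} mismatch(es): {bins[n_mm]} site(s)")
```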
Refining the specificity of SpCas9-HF1
Previously described methods such as truncated sgRNAs 14 and the SpCas9 D1135E variant 15 can partially reduce SpCas9 off-target effects, and we therefore wondered whether these might be combined with SpCas9-HF1 to further improve its genome-wide specificity. Testing of SpCas9-HF1 with matched full-length and truncated sgRNAs targeted to four sites in the human cell-based EGFP disruption assay revealed that shortening sgRNA complementarity length substantially impaired on-target activities (Extended Data Fig. 7a). By contrast, SpCas9-HF1 with an additional D1135E substitution (a variant we call SpCas9-HF2) retained 70% or more activity of wild-type SpCas9 with six of eight sgRNAs tested using our human cell-based EGFP disruption assay ( Fig. 5a and Extended Data Fig. 2b). We also constructed SpCas9-HF3 and SpCas9-HF4 variants harbouring additional L169A or Y450A substitutions, respectively, at positions whose side chains are believed to mediate non-specific hydrophobic interactions with the target DNA on its PAM proximal end 28,31 (Fig. 1a). The Y450 residue is notable for participating in a base stacking interaction with the sgRNA 31 and undergoing a 120 degree shift upon target binding to create its hydrophobic interaction with the DNA 28,32 . SpCas9-HF3 and SpCas9-HF4 retained 70% or more of the activities observed with wild-type SpCas9 with the same six out of eight EGFP-targeted sgRNAs ( Fig. 5a and Extended Data Fig. 2b).
Figure 3 caption (fragment): …Table 4). Hypothesis testing using a one-sided Fisher exact test with pooled read counts found significant differences (P < 0.05 after adjusting for multiple comparisons using the Benjamini-Hochberg method) for comparisons between SpCas9-HF1 and the control condition only at EMX1 site 2 off-target 1 and FANCF site 3 off-target 1. Significant differences were also found between wild-type SpCas9 and SpCas9-HF1 at all off-target sites, and between wild-type SpCas9 and the control condition at all off-target sites except RUNX1 site 1 off-target 2. c, Scatter plot of the correlation between GUIDE-seq read counts (from Fig. 2a) and mean per cent modification determined by deep sequencing at on- and off-target cleavage sites with wild-type SpCas9.
We next sought to determine whether SpCas9-HF2, -HF3, or -HF4 could reduce indel frequencies at two off-target sites that remained susceptible to modification by SpCas9-HF1, one with the FANCF site 2 sgRNA and another with the VEGFA site 3 sgRNA. For the FANCF site 2 off-target, which bears a single mismatch in the seed sequence of the protospacer, we found that SpCas9-HF4 (containing the additional Y450A substitution) reduced indel mutation frequencies to near background level as judged by T7 endonuclease I assay, while also beneficially increasing on-target activity (Fig. 5b), resulting in the greatest increase in specificity among the three variants (Fig. 5c). For the VEGFA site 3 off-target site, which bears two protospacer mismatches (one in the seed sequence and one at the nucleotide most distal from the PAM sequence), SpCas9-HF2 (containing the additional D1135E substitution) showed near background levels of indel formation as determined by T7 endonuclease I assay while showing modest effects on on-target mutation efficiency (Fig. 5b), leading to the greatest increase in specificity for this off-target site among the three variants tested (Fig. 5c).
Discussion
The SpCas9-HF1 variant characterized in this report reduces all or nearly all genome-wide off-target effects to undetectable levels as judged by GUIDE-seq and targeted next-generation sequencing, with the most robust and consistent effects observed with sgRNAs designed against standard, non-repetitive target sequences. Our observations suggest that off-target mutations might be minimized by using SpCas9-HF1 to target non-repetitive sequences that do not have closely matched sites (for example, bearing 1 or 2 mismatches) elsewhere in the genome; such sites can be easily identified using existing publicly available software programs 33 . An interesting question will be to determine whether SpCas9-HF1 induces off-target mutations at frequencies below the detection limit of existing unbiased genome-wide methods (Supplementary Discussion). We also discuss other practical considerations for targeting sites of interest with SpCas9-HF1, including the use of sgRNAs with non-G or mismatched 5′ nucleotides (Extended Data Fig. 7b) and altering the PAM recognition specificity of SpCas9-HF1 (Extended Data Fig. 8), in the Supplementary Discussion.
Further biochemical experiments and structural characterization will be required to define the mechanism by which SpCas9-HF1 achieves its high genome-wide specificity. We do not believe that the four substitutions we introduced alter the stability or steady-state expression level of SpCas9 in human cells, because titration experiments with decreasing concentrations of expression plasmids suggest that wild-type SpCas9 and SpCas9-HF1 behave comparably as their amounts are lowered (Extended Data Fig. 9). Although our initial rationale for making the substitutions in SpCas9-HF1 was to decrease the energetics of interaction between the Cas9-sgRNA complex and the target DNA (as has been previously proposed to explain the increased specificities of transcription activator-like effector nucleases bearing substitutions at positively charged residues 34), recent work has provided greater mechanistic insights into SpCas9 recognition and cleavage. These studies suggest alternative and more detailed models (for example, formation of an active cleavage complex through conformational changes or kinetics of off-target site recognition 35,36) that might be affected by the substitutions in our SpCas9-HF1 variant (Supplementary Discussion).
More broadly, our results validate a general strategy for the engineering of additional high-fidelity variants of CRISPR-associated nucleases. We found that introducing substitutions at other non-specific DNA contacting residues can further reduce some of the very small number of residual off-target sites that persist for certain sgRNAs with SpCas9-HF1. Thus, we envision that variants such as SpCas9-HF2, SpCas9-HF4, and others might be used in a customized fashion to eliminate any potential off-target sites that might be resistant to the specificity improvements of SpCas9-HF1. In addition, our variants might be combined with substitutions in residues that contact the non-target DNA strand, alterations that have been shown to reduce SpCas9 off-target effects while our manuscript was under review 37. Overall, our results demonstrate that the approach of mutating non-specific DNA contacts is highly effective at increasing SpCas9 specificity and suggest it might be extended to other naturally occurring and engineered Cas9 orthologues [38][39][40][41][42], as well as other CRISPR-associated nucleases 43,44.
Figure 5 caption (fragment): …and HF variants at the FANCF site 2 and VEGFA site 3 on-target sites, as well as off-target sites from Fig. 2a and Extended Data Fig. 5 resistant to the effects of SpCas9-HF1. Per cent modification determined by T7 endonuclease I assay; background indel percentages were subtracted for all experiments; error bars represent s.e.m. for n = 3. c, Specificity ratios of wild-type SpCas9 and HF variants with the FANCF site 2 or VEGFA site 3 sgRNAs, plotted as the ratio of on-target to off-target activity (from panel b).
The Limits of Free Health Care Schemes: the Case of Access to Care for Beneficiaries of the Moroccan Medical Assistance Scheme (RAMed) Suffering From Cancer
Background: The article discusses the limitations of a free-of-charge scheme for the poor in the case of cancer patients. The literature on free access to hospitals, especially on the African continent, has already mentioned these limits: occasional payments for care and transport. The particularly ambitious Moroccan free-of-charge scheme (RAMed) presents the same problems. Methods: It is based on a qualitative survey of 120 patients and 30 doctors or nurses with whom we conducted semi-structured interviews over several months. Results: The results show that patients continue to pay for care and medical imaging as well as their transport to the hospital. They pay for care and examinations that are not available at the hospital or wait for them to be available, which is a danger to their chances of survival. Conclusions: The limitation of the RAMed is that it does not cover the cost of transport or the structural deficiencies of the hospital. The result is the paradox of a free service that is costly for patients. We stress that targeted policies cannot replace structural policies and, on their own, do not remedy inequalities, particularly territorial inequalities.
generalization, the RAMed had more than ten million affiliated persons, including 6,345,525 with active rights [15]. However, as early as 2014 the public authorities began to worry about the sustainability of the scheme. Evaluations were initiated, including one conducted by the National Observatory of Human Development (ONDH).
Several questions arose. (1) The first concerned the number of beneficiaries. Initially, it had been planned that they would be divided into non-contributory "poor" and contributory "vulnerable" (with a ceiling of 600 DH per household). The project designers had estimated that 65% would be contributors. After the implementation of RAMed, they represented only 16% of the population covered while, during the same period, the High Commission for Planning (HCP) counted, in 2014, 1,605,000 poor people (in terms of relative poverty). The criteria used to be considered "poor" by the RAMed were thus much more inclusive than those used by the HCP, which seems to be due primarily to the targeting instruments used [16]. This resulted in strong pressure from the demand for free care on a hospital system that was already considered insufficiently equipped at the time of the generalization of RAMed [17]. (2) The second issue was the financing of the system. None of the financial modalities planned to support the scheme had been implemented; they still are not [18]. Free care is currently financed by the ordinary operating budget of hospitals, whereas it was planned that the care provided would be invoiced and the bills paid by the State according to a mechanism to be put in place. (3) The third concerned the state of the health infrastructure and its capacity to accommodate RAMed beneficiaries. As indicated in the report on the sectoral strategy of the Ministry of Health for the period 2012-2016 [17], the health infrastructure at the time of the launch of RAMed was already insufficient, since at the time when the Moroccan government was beginning to worry about RAMed's sustainability, the share of the State budget devoted to it was 5.6% (2014), whereas the share recommended by the WHO was 9%.
Overall impact on beneficiaries
In addition to these strictly financial issues, there is also the question of the impact of the scheme on its beneficiaries [19,20,21]. The first scientific publications point to a number of problems, already highlighted by Yates: the continuation of patient payments and the difficulties related to their transportation from their places of residence to the care structure. A survey of 186 hemodialysis beneficiaries of RAMed in the Souss-Massa region highlighted these difficulties: lack of medical transport and ambulances to take patients from their homes to the hemodialysis center; payment of at least part of the medical analyses, as hemodialysis patients have to resort to private imaging and medical analysis centers to make up for the shortcomings of the public sector. To meet these expenses, patients are forced to go into debt [21]. This pathology-related situation is confirmed, from a general point of view, by a study based on the analysis of 2013 and 2015 data from the Moroccan Household Survey Panel Data (MHSPD) of the National Observatory of Human Development (ONDH). It shows that the expenditures of households benefiting from RAMed are equivalent to those of comparable households not benefiting from RAMed [20]. In other words, the effects of the disease on household impoverishment are equivalent; however, they are not equivalent in terms of the effects on health, since the main part of the care is covered by the RAMed, and from this point of view, populations suffering from serious pathologies have effective access to the care they need (as shown by the case of dialysis patients). The result is that, while the public health objective seems to be substantially achieved, the objective of combating inequalities is marking time and the overall well-being of patients remains limited.
Method And Approach
The choice of a qualitative approach, involving semi-structured interviews, stems from the fact that it is necessary to enter into the daily life of people in order to evaluate the moral and financial costs of a care system. At this stage, we cannot study the beneficiaries of the system from above. The information bias that results from doing so stems from a general tendency, present in the evaluation of public policies, to underestimate the practical and ordinary consequences of these policies [22]. The paradox, as Anne Revillard [23] strongly emphasizes, is that evaluation bypasses the consequences on individuals in order to focus on the performance of public action, yet this performance is only a means to an end; and it is in relation to the end -the impact on the lives of everyone -that a public policy must be evaluated.
Description of the survey
We conducted a qualitative survey using semi-structured interviews, involving patients (N = 120) and health care personnel (N = 30), notably doctors and nurses. We did not distinguish between forms of cancer, as we were interested in the difficulties of access for patients. Certainly, difficulties or facilities inherent to particular forms of cancer appeared, but this point remains secondary in our survey. A little more than two thirds (68%) of the patients we interviewed came from or resided in rural areas. This was due to the cities we had chosen: Khenifra, the capital of a rural province in the Middle Atlas, Meknes and Fez, which receive patients from the surrounding rural areas, especially Khenifra. This over-representation of rural people was deliberate, since our research focused on identifying and assessing access difficulties. In both Khenifra and Meknes, our surveys were conducted with patients from a provincial hospital; in Fez, it was a University Hospital Center (CHU).
Most of the interviews were conducted in situ, in the oncology centers of Meknes and Fez. This allowed us to observe at the same time the daily routine of the care facilities, the arrival of patients, their care, interactions and, sometimes, altercations with the caregivers. The interviews generally lasted between half an hour and an hour. They were conducted either in Moroccan dialect Arabic (Darija) or in French, especially with the doctors and head nurses. The interviews we conducted focused on access to care. To do so, we asked patients to describe the steps they have taken and are taking to remedy their ailments, from the symptoms or incentives (e.g., prevention campaigns) that led them to seek care to the sequence of diagnosis and care. The survey was authorized by the Ethics Committee of the Faculty of Medicine of Fez and supported by the Moroccan Cancer Research Institute (IRC).
In addition, we use, in this article, some interviews conducted at the Hassan II University Hospital Center (CHU) in Fez on the occasion of an earlier survey conducted in 2015, concerning access to care for RAMed beneficiaries. The more general framework of the care sectors considered allows us to better situate the difficulties encountered by cancer patients. This survey was authorized by the Ethics Committee of the Faculty of Medicine of Rabat and financed by the National Observatory of Human Development (ONDH) as part of an evaluation program of RAMed.
Results Of The Survey
Our survey showed that the initiation of patients' † therapeutic itineraries did not concern the main course of care, generally surgery (if necessary) and chemotherapy, but sometimes radiotherapy, examinations and, more broadly, the phases preceding this main course or following it. Examinations are generally problematic, in the sense that their availability is not guaranteed. These problems are all the more present when dealing with care structures far from the university hospital, as shown in the interview below: "Until now, we have been able to have free chemo sessions, scans and certain analyses, but we have to go elsewhere when the analyses do not exist here. We had problems when she had complications in the brain, these analyses cost 2000 DH ‡ , and they did not exist in the hospital, neither in Sidi Said nor in Mohamed V. So we were obliged to take an appointment here, at the CHU, we had an appointment for one month (...) -So, what are the things that your mother has been paying for all along? -Doctor's fees, biopsy, MRI, ECG, chemo tests, injections at 100 DH, and treatment after chemo.
-And the analyses for the ganglion, did she also pay for them? -Yes, the biopsy at Moulay Ismail Hospital, but the analysis at the private laboratory at 600 DH. » § In addition to these problems of availability of medical services, there are problems of transport, as patients have to travel from their homes to the place of care or examination. Depending on the circumstances, transportation problems also involve accommodation problems, if patients are received as outpatients but cannot be cared for on the day of their arrival. In general, however, patients note the advantages of the RAMed and consider quite unanimously that "without it, [they would] not have been able to be treated". Nevertheless, their journeys are marked by payments and these are linked to deficiencies affecting the place of care to which they belong.
The unavailability of care
The notion of deficiency appears to be important for qualifying the unavailability of care. Under the terms of the law instituting the RAMed, as well as from the point of view of the organization of the health system, not all care and not all examinations are and need to be available in each medical facility. Some require the presence of a category of care or examination, others do not. We considered unavailability to be a deficiency when the care or test was supposed to be available but was not. The interviews conducted with medical and nursing staff were very useful in that they gave us an overview of the care and examinations available and supposed to be available in the various institutions.
In fact, the medical pathways appear to be riddled with deficiencies: "People in Oujda [the respondent lives in Oujda] do the chemotherapy sessions, but when they want to do the radiotherapy, they have to go to the clinic and there they have to have a lot of money.
-And what did you do?
-We came to Fez where they refused to give him radiotherapy and then to Rabat where we were told that radiotherapy was available in Oujda. In Oujda, they gave us a technical form (sic) to get rid of us. When we showed the form to Fez and Rabat, the people in charge of the department told us that we should not come here at all.
-And? -We went to a private clinic. -How much did it cost you?
-9000 DH for a session plus 900 DH for the scanner, he needed 3 or 4 sessions.
-So in total?
-And for the MRI? -We were given an appointment for two months from now but we did it in private the same day. We gave him a 500 DH injection.
(…) -When she found out she was sick, did she have the RAMed card? -Yes, she already had it.
-So thank God.
-Yes, of course, you know the people who don't have RAMed in Oujda do the same treatments and they pay 600 DH per session; some have to do 30 sessions.
-600 DH the session at the clinic?
-No 600 DH at the public hospital but in a clinic it costs 1500 DH.
(…) -So at first you paid for the MRI? -Yes, 300 DH.
-And the analyses? How much is it?
-The biopsy at 500 DH and the first analyses at 300 DH.
-So someone loaned you the money for the MRI? -Yes... 3000 DH -And to get here? Transportation costs?
-Someone else loaned us the money.
-How many kilometers is that?
-How long does it take? -Three hours by car and five hours by bus.
-And by train?
-The train is too expensive.
-How much does a round trip cost? -She must always be accompanied because of her health condition.
-Are you the one who always comes with her? -Yes, it's still me and I have to leave my little girl in Oujda.
-This time who came with her? -This time there is his sister and me.
-How much did it cost you? -It is 100 DH per seat on the bus, that is to say 600 DH for a round trip.
-Were you asked for money (bribery) to get the RAMed card?
-Not at all.
(…) -Has your budget been affected by the disease? -Yes, a lot, when we want to make purchases we give up because we prefer to keep the money for the expenses of the illness.
-And morally? -It has changed our lives.
-When you arrive in Fez at the bus station, how do you get to the hospital? -By cab (20 DH) and also a cab when we want to go from our home to Oujda to the bus station.
-What about fatigue? -Yes, a lot especially since she is sick. » ** This case is quite long. The therapeutic itinerary began with radiotherapy, which was not available in Oujda, where the patient lived, so she moved to Fez and then to Rabat, in order not to have to leave the public system where she benefited from RAMed coverage of her care. The patient was finally treated in Fez, at the Oncology Center of the University Hospital. She therefore travels from Oujda to Fez for her radiotherapy sessions, covering more than 300 kilometers. This trip is the consequence of a deficiency, since radiotherapy should be available in Oujda. The cost of transport and accommodation is doubled, since the patient is accompanied by her daughter. It should also be noted that the MRI was performed in the private system, in order to avoid the two-month waiting period indicated to the patient. This was due to the large number of people who needed to undergo this examination and the lack of available machines. However, the question of delays does not only concern the examinations. It can be a more global issue for the entire medical care, as shown in the case below. The patient was sent by a clinic to the University Hospital of Fez after it diagnosed a brain cancer requiring surgery that the patient could not afford. He therefore turned to the public system but had to wait several months before he could be operated on. When he was discharged, a CT scan was required, which had to be done in a clinic, again because of the length of the waiting time: "Who gave him this appointment? -University Hospital. It was very far away as an appointment, that's the problem. The appointments are very far away.
-Patients cannot bear to wait for these long periods.
-Yes, so when he was admitted to the hospital, he had surgery last November. He was here for 15 days. When he was discharged, he was asked to have a CT scan, we were given a very distant appointment, so we did it at the clinic. » † † This situation is reflected in all the interviews we conducted. The long waiting times are a direct consequence of the disproportion between the number of people being cared for and the resources available. In general, we found that all patients, at one time or another, had to leave the medical pathway, either for procedures that were covered but unavailable, or for expenses that were not covered, such as medication (other than those provided as part of hospital care) or transportation to get from their home to the hospital facility. At this point, the patient and family are left to fend for themselves, i.e., to take care of the step that cannot be accomplished within the medical pathway. As the interviews show, these situations are painful. First of all, because they involve choosing to pay not to wait or to pay for access to medication. The patient and his or her family are then forced to make a trade-off between the stress felt, the pain or the risk of loss of life chances, on the one hand, and the expenses required for other essential household needs, on the other. This often results in the use of assistance from close family members (children, parents, siblings) and sometimes more distant ones (cousins). However, these auxiliary resources can only be called upon on an ad hoc basis. As for arbitration, when it is negative, i.e. when the patient cannot make a detour through the private sector in order to benefit from examinations or care not available in the public sector, it implies a risk that can be significant for him or her, as one nurse explains: "If the patient is developing complications, the examinations allow us to know it, but if we have to wait three months to have the appointment, then carry out the examinations, wait months for a specialized opinion, during this time the pathology evolves and we find ourselves with situations where, because of this delay, complications appear with irreversible consequences. Nothing can be done. We watch helplessly as the patient's condition deteriorates because he is poor. What are you going to do? » ‡ ‡
The problem of transport
The problem of transportation appears to be considerable, as patients must, in most cases, travel to the cancer center. There are a dozen public centers in a territory comprising 62 provinces and 12 regions, but there are not as many centers as regions. Rabat (the capital) has 3 structures dedicated to oncology. There are 9 for the rest of the territory. Regardless of whether this is sufficient, the relatively small number of oncology facilities indicates that some of their patients have to travel to receive care. It should be added that, if certain facilities are deficient, patients have to travel to another oncology center than the one to which they belong, and therefore generally to a more distant center. We have thus noted displacements from Oujda to Rabat, Fez and Meknes and displacements from Meknes to Fez. Distance has direct effects on the duration and cost of travel; these effects are all the more important because patients are often accompanied by a family member. The two interviews below describe these difficulties: "What do you use for transportation to get to the cancer center?
-A bus at 40 DH one way.
-How long does the trip take? -3 hours.
-Do you go alone or are you accompanied? -My husband always accompanies me.
-How much does it cost you to go there and back? -200 DH approximately, 40 DH multiplied by 4, that is to say 160 DH, but the remainder is for the cabs. In any case, if you set out with a blue note [a 200 DH note], it is gone by the end of the day.
-Since the beginning of your illness, how many times have you traveled for treatment? -I would not know how to answer you, at the beginning I stayed quite often in Meknes, now I move almost every two or three weeks. » § § "Since 5 o'clock in the morning, I am awake, I made the dawn prayer and I came to Meknes from Khenifra. I would have liked to have these treatments in Khenifra without moving, just walking and coming back, but unfortunately I have transportation costs in addition... Once I finish, I return to Khenifra the same day. It costs me 110 DH to make a round trip the same day, not to mention the cabs for 20 DH, in Meknes, plus 14 DH of cab in Khénifra, without forgetting the food. For example, I haven't eaten yet, plus I have to eat light, diet, it's even more expensive! (...) It's not only the fatigue on the physical level, because I'm waiting for the moment when I go home myself, but especially the fatigue of asking your relatives for money that weighs heavily on my conscience and affects my dignity. » *** As can be seen, the problem of transport does not follow on from Thaddeus and Maine's work on the three delays [24], the second delay being that for going from the patient's home to the place of care. In fact, transport is considered in relation to its duration and the urgency of the need for it. Thus, much of the literature on this issue, beginning with Thaddeus and Maine's article, focuses on access to care for childbirth, i.e., the accessibility of care in the event of an emergency. It is not the emergency that prevails in the case of cancer patients. On the other hand, transportation difficulties can lead to irregularities in access to care, and these can be cumulative. In interviews with both health care staff and patients, it appeared that a chemotherapy session could be postponed for three weeks (until the next scheduled session) because the patient could not find or pay for transportation or because the road was not passable (this situation only occurs in winter when there is snow). In some cases, however, the period without care may be longer, when the patient could not find or pay for transportation (first postponement) and then could not leave home because of snow (second postponement). These situations do not appear to be the most numerous, but they do exist. They show that what is at stake in the lack of transport, in the case we are interested in, is not the immediate risk but the long-term risk, the stress and discomfort of the patients as well as their impoverishment. This, however, is not the result of inadequate hospital or RAMed services, but is the result of the lack of a stable, efficient, and robust ambulance and medical transport system in Morocco, and the lack of a system for paying for transport costs. There is therefore not only a problem of access to care, but also a specific problem of access to ambulances [25]. Patients are thus required to pay for their own travel from their homes to hospitals and oncology centers.
† The continuation of the remedies necessary for the treatment of the disease, when they differ from the care pathway provided by the RAMed.
Discussion
What emerges from the field survey that we conducted is the importance of conversions of medical itineraries into therapeutic itineraries, i.e. situations where the different therapeutic phases do not take place within the same care system, because the latter is incomplete. In other words, the RAMed beneficiaries we surveyed, even though they are included in a free access to care system, are still looking for health resources and are obtaining them at their own expense. The same phenomenon can be found in other situations as indicated in the introduction (see above "Introduction"). These are not isolated cases: they concern all the people we met (N = 120). Of course, the payments are of varying importance, but there are always payments: intra-urban and inter-city transport costs, examinations (most often medical imaging), care (radiotherapy, in particular), purchase of medicines, accompaniment costs (for patients who cannot travel alone); and several payments are often combined.
Structural problems
This situation suggests the existence of structural problems, at least two of which seem massive: (1) the organization and management of transport and (2) the availability of the necessary resources in the public system for the completion of the medical pathways for people with cancer. To become aware of these problems, a particular investigation is needed, involving not just looking at the available figures for the increase in hospital care or the continuation of household health expenditure, but at what points in the medical pathway this health expenditure takes place. These expenditures could take place upstream of the medical pathway, in a phase of self-medication or independent consultation of private practitioners; they could involve recourse to alternative medicine or result from a voluntary departure from the medical pathway.
On the contrary, what appears to be happening is that patients' expenditures are taking place within the stages of the medical pathway, in order to complete them. In other words, patients subsidize the free treatment they receive. This state of affairs is general and characterizes access to care for RAMed beneficiaries. However, people with cancer are not ordinary beneficiaries since they are supposed to be part of the National Cancer Prevention and Control Plan (2010-2019), or NCCP (PNPCC by its French acronym) † † † , launched in 2010. This plan includes ensuring the provision of care for patients at all levels of the public health system, coordinating them and developing the human resources assigned to these different levels. This plan is supported by the Lalla Salma Foundation ‡ ‡ ‡ , named after the wife of King Mohammed VI, whose action is undeniable [26]. However, this plan does not seem to have a significant impact on the two types of dysfunction that we have mentioned, because they are not so much the result of shortcomings in oncology as of shortcomings in access to care as a whole. Just as the RAMed, as a free-of-charge system, cannot ensure that the necessary resources are present in the health care system, the PNPCC cannot ensure that these resources, which are lacking in general, are present in particular, except possibly in a one-off manner.
In fact, what emerges here is the difficulty of targeted policies. Some resources can only benefit specific categories if they benefit all categories of the population. This is the case for transport infrastructure, including roads, the education system, and the health system. Roads are not built for one type of user but for all users. Similarly, a health system can only take care of a certain category of patients if it is able to take care of all patients. We do not train oncologists, we train doctors first. Thus, when we describe the difficulties of a certain category of patients by following their therapeutic itineraries, we realize that their difficulties are the difficulties of other categories of patients. Looking at things more globally, it appears that the free-of-charge system is not able to guarantee access to all the care involved in the medical pathways, because it is dependent on the state of the hospital system, to which it makes no contribution. Initially, it was planned that the bills of the patients benefiting from free care would be paid by the State, which would have helped to support the hospitals according to their activity. As this mechanism was never put in place, hospitals took over the RAMed from their regular resources, which impoverished them [18]. Indeed, RAMed is not exactly a social security system in that it does not pay for care (and medical transport) where and how it is available (in the public or private sector). It only provides free access to a predefined set of resources (those of the hospital), which it does not help to finance. As a result, cancer patients who are beneficiaries of RAMed find themselves in difficult human and economic situations, even though they are covered by a free-of-charge system. The implementation of a National Cancer Prevention and Control Plan (2010-2019) does not profoundly change the situation in the short or medium term, even if it contributes to an overall improvement in the long term.
A holistic perspective
This situation should lead us to consider the contribution of free health care systems differently: if they are not linked to efforts to develop the health care system, involving an increase in hospital resources and the existence of real out-of-hospital logistics for access to care, they lead to relatively large payments by patients. The question of infrastructure and resources therefore remains central to access to care, as does the question of the share of the state budget devoted to health. From this point of view, the improvement of the situation of a category of patients -in this case those with cancer -is largely dependent on the overall improvement of the resources of the health system and the overall development of certain services such as medical transport.
Perhaps a transformation is needed of the paradigm that applies to the management of the social sector and, more broadly, to state interventions in developing countries. For the sake of efficiency, localized and sectoral solutions seem to be preferred by state or international actors. It is indeed easier to pursue a circumscribed project of excellence than to reform all the ins and outs of a dysfunction. This is notably the logic of action of NGOs. It is thus simpler and more rewarding to constitute "islands of prosperity and excellence" [27] than to improve entire sectors of public action. However, these islands are themselves dependent on the rest of the sectors to which they are interconnected, and some of what they do continues to depend on them. Holistic perspectives and structural actions are therefore still needed. In this case, it is clear that the creation of the RAMed has improved access to care for cancer patients, but it has not done more for them than it has for other categories of patients. Similarly, the establishment and conduct of the National Cancer Prevention and Control Plan (2010-2019) has not improved their situation with respect to the two major problems we have indicated: the cost to patients of certain unavailable medical procedures and the cost of transportation. In fact, there seem to be only two solutions: (1) increase the resources of the hospital system and set up a real system of medicalized transport, or (2) compensate the payments of cancer patients with a specific allowance. The latter would certainly be a localized and transitory action, and above all a compensation rather than an improvement of public action; the two solutions are complementary. They rest on two simple principles: the holism of public action (recurrent dysfunctions cannot be resolved without getting to the bottom of things) and the right to compensation when a public service fails to deliver the goods it is supposed to produce. The application of these principles has a budgetary cost; their non-application has a human cost. This only underlines an empirical reality that is manifested in other situations where free access to health care is implemented: free care alone is not enough. Its effect depends both on the general conditions of free care delivery, as we have shown, and on the local circumstances of its implementation [28,29]. In other words, free care is always limited and variable in scope.

††† See https://www.sante.gov.ma/Documents/Synthese_PNPCC_2010-1019.pdf

‡‡‡ https://www.contrelecancer.ma/fr/
Conclusion
Our results underline that the success of free health care is largely dependent on the state of the health infrastructure, but also on the transport infrastructure and the development of the territory. This question should be developed by also addressing the effect of distance on cancer diagnosis. A study conducted in Morocco on breast cancer showed that diagnosis and treatment were delayed when patients lived more than 100 km from the place of treatment [30]. This shows that free care is far from sufficient to compensate for territorial inequalities; in fact, it leaves them unchanged. Of course, it increases access to care, but inequalities in access remain. In other words, the fight against health inequalities cannot be limited to the implementation of free hospital care. Paradoxically, such care can even increase inequalities, insofar as those who can afford the transport, care or medical imaging that is lacking in the hospital benefit more. This situation creates great disarray among the beneficiaries.
From a methodological point of view, the survey by semi-structured interviews seems better able to show the importance of this disarray, by showing the place it occupies in the lives of patients and their families. It is not a secondary problem in relation to care; on the contrary, it concerns the very possibility of benefiting from care. When the free care system is not sufficient, the quality of life of patients is directly affected.
Declarations Ethical Approval and Consent to participate
The study was validated by the Ethics Committee of the Faculty of Medicine of Fez.
Consent for publication
The authors agree to the publication of the article.

Availability of supporting data

These are qualitative interviews, several of which are being used for another publication. The data are under embargo until that publication has appeared; thereafter, the interviews will be conditionally available.
Basal glutamate in the hippocampus and the dorsolateral prefrontal cortex in schizophrenia: Relationships to cognitive proficiency investigated with structural equation modelling
Abstract Objectives Schizophrenia is characterised by deficits across multiple cognitive domains and altered glutamate-related neuroplasticity. The purpose was to investigate whether glutamate deficits are related to cognition in schizophrenia, and whether glutamate-cognition relationships differ between schizophrenia and controls. Methods Magnetic resonance spectroscopy (MRS) at 3 Tesla was acquired from the dorsolateral prefrontal cortex (dlPFC) and hippocampus in 44 schizophrenia participants and 39 controls during a passive visual viewing task. Cognitive performance (working memory, episodic memory, and processing speed) was assessed in a separate session. Group differences in neurochemistry and mediation/moderation effects were investigated using structural equation modelling (SEM). Results Schizophrenia participants showed lower hippocampal glutamate (p = .0044) and myo-Inositol (p = .023) levels, with no significant differences in the dlPFC. Schizophrenia participants also demonstrated poorer cognitive performance (p < .0032). SEM analyses demonstrated no mediation or moderation effects; however, an opposing dlPFC glutamate-processing speed association between groups was observed. Conclusions Hippocampal glutamate deficits in schizophrenia participants are consistent with evidence of reduced neuropil density. Moreover, SEM analyses indicated that hippocampal glutamate deficits in schizophrenia participants, as measured during a passive state, were not driven by poorer cognitive ability. We suggest that functional MRS may provide a better framework for investigating glutamate-cognition relationships in schizophrenia.
Introduction
Schizophrenia (SCZ), a neurodevelopmental disorder with illness onset typically in teens and young adults (Insel 2010), is one of the most debilitating, life-long mental illnesses (WHO 2002; Kessler et al. 2005), and treatment has had limited impact in restoring function. The pathology of SCZ has been attributed to a combination of brain network and neurotransmitter dysfunction that may also be inter-related and lead to subsequent cognitive dysfunction (Benes 2000; Abbott and Bustillo 2006; Carlsson 2006; Brambilla et al. 2007; Diwadkar 2012; Diwadkar, Bustamante, et al. 2014; Diwadkar, Bakshi, et al. 2014). Cognitive dysfunction in SCZ is highly generalised, cutting across domains such as working memory (WM) (Jansma et al. 2004; Tan et al. 2005), executive function (Sullivan et al. 1994), cognitive control (Carter et al. 2001), and learning and memory (Toulopoulou et al. 2003). Notably, sub-networks associated with these domains intersect in the dorsolateral prefrontal cortex (dlPFC) and hippocampus, two areas that lie at the core of the syndrome. The effects of these regional and network deficits are presumably exacerbated by dysfunctions in the interplay between glutamate (Glu) and γ-aminobutyric acid (GABA), the brain's major excitatory and inhibitory (E/I) neurotransmitters. Glu and GABA function are tightly integrated and together help to facilitate neural engagement. More importantly, this integration mediates the neuroplasticity of microcircuits sub-serving cognitive proficiency across multiple domains (Stephan et al. 2006; Isaacson and Scanziani 2011; Lauritzen et al. 2012; Tatti et al. 2017). It has been presumed that Glu is altered in SCZ; however, little work has attempted to directly relate Glu levels in key areas such as the dlPFC and hippocampus to cognitive ability in SCZ participants using multivariate analytical approaches such as structural equation modelling (SEM) (Castner and Williams 2007; Eichenbaum 2017).
Over the past three decades, proton magnetic resonance spectroscopy (¹H MRS) has been a viable method for estimating in vivo Glu levels from localised brain areas in both health and disease. ¹H MRS studies in SCZ have demonstrated somewhat consistent results of lower Glu levels in the medial PFC (mPFC) but less consistent results in the dlPFC and hippocampus (Keshavan et al. 2000; Steen et al. 2005; Marsman et al. 2013; Merritt et al. 2021; Smucny et al. 2021). These inconsistencies have vexed the field (Keshavan et al. 2000; Steen et al. 2005; Marsman et al. 2013; Smucny et al. 2021) and are probably driven by multiple factors, including (a) sub-optimal methodological applications in detecting and measuring Glu with low precision and accuracy [e.g. reporting Glu as a summation of Glu plus glutamine (Glx), or reporting Gln levels greater than Glu levels], (b) expressing outcome measures as a ratio of levels between metabolites, (c) partial volume effects due to poor localisation of structures, and (d) using inappropriate or incomplete a priori knowledge when modelling ¹H MRS data. Another crucial limitation is that ¹H MRS was typically acquired without constraining behaviour (i.e. participants relaxed and simply kept the head still), creating interpretational challenges. As we have noted (Stanley and Raz 2018), not only is ¹H MRS sensitive in detecting dynamic changes in Glu levels induced by a task, but the signal is also sensitive in differentiating Glu levels between varying 'rest' conditions (e.g. eyes closed vs fixating on a crosshair) (Lynn et al. 2018) or sleep/non-sleep (Bartha et al. 1999). Given that behaviour is a strong modulator of neuronal signals and their emergent signatures (Logothetis 2008), we surmise that a lack of behavioural constraint during ¹H MRS acquisitions may increase the variability of Glu, undermining sensitivity for detecting group differences and leading to inconsistencies across studies (Bartha et al. 1999; Lynn et al. 2018).
Here we used a simple behavioural constraint (attention to a visual flashing checkerboard) to constrain the acquisition of Glu in the dlPFC and the hippocampus. The active state provides a measure of visuo-attentional constraint, but without 'loading' on either the dlPFC or the hippocampus. Then, using structural equation modelling (SEM) analyses, Glu measures were related to behavioural data in multiple domains [WM, episodic memory (EM), and processing speed (PS)], acquired from the same participants in a separate session. The following questions were addressed: (a) do basal Glu levels from the dlPFC and/or hippocampus mediate differences in cognitive ability between SCZ and HC participants; and (b) does diagnosis moderate distinct associations between basal Glu levels and cognitive ability?
Participants
Forty-four DSM-V-diagnosed SCZ participants (35 males and 9 females; mean age ± SD, 31.8 ± 8.4 years; age range 18.9-49.2 years; 40 right-handed) and 39 HC individuals (29 males and 10 females; mean age ± SD, 28.6 ± 6.7 years; 19.8-49.2 years of age; 33 right-handed) provided informed consent to participate. All procedures were approved by Wayne State University's IRB. SCZ patients were identified by the treating physicians (AA and LH). A research psychologist (UR) confirmed diagnosis using the Structured Clinical Interview for the Diagnostic and Statistical Manual of Mental Disorders Axis I (SCID-I) (APA 2013). All SCZ participants had been stabilised on antipsychotic medication for at least 3 months. Participants with a schizoaffective disorder were excluded (minimizing confounding effects from mood dysregulation).
Clinical symptom severity ratings were carried out using the Positive and Negative Syndrome Scale (PANSS) instrument (Kay et al. 1987), and general intelligence was estimated using the Wechsler Abbreviated Scale of Intelligence (Psychological Corporation 1999). The most likely date of onset of psychotic symptoms (hallucinations, delusions, or disorganisation of thinking; bizarre or catatonic behaviour) and the date of diagnosis for SCZ participants were determined using all clinical information, including medical records, reports by family members or significant others, and the SCID interview. HC participants were free of psychiatric treatment or Axis-I psychopathology (past/present). Participants were screened prior to entering the study to exclude any significant past/current medical and/or neurological illness (e.g. hypertension, thyroid disease, diabetes, asthma requiring prophylaxis, seizures, or significant head injury with loss of consciousness). The two groups did not differ in age or gender distribution, although Full-Scale IQ was expectedly lower in the SCZ group. Table 1 provides demographic information.
¹H MRS/MRI protocol

MR data were acquired on a 3 Tesla Siemens Verio system using a 32-channel volume head coil. The acquisition occurred in the morning (10:00-12:00 h) to reduce circadian confounds. A set of T1-weighted axial images covering the brain was first collected [3D Magnetisation Prepared Rapid Gradient Echo, TR = 2150 ms, TE = 3.5 ms, TI = 1100 ms, flip angle = 8°, FOV = 256 × 256 × 160 mm³, 160 axial slices of 1 mm, pixel resolution = 1 × 1 × 1 mm³, acquisition time = 4:59 min]. These images were resampled and used to prescribe, in order, the placement of the ¹H MRS voxel locations: right hippocampus (anterior and body portion; 1.8 × 3.0 × 1.3 cm³, or 7.0 cm³) followed by the right dlPFC (Brodmann area 9/46; 2.0 × 1.5 × 1.5 cm³, or 4.5 cm³). The placement of both voxels in the right hemisphere was driven by previous studies demonstrating ¹H fMRS effects in the right hippocampus (Stanley et al. 2017). Angulation and rotation of the MRS voxel were allowed accordingly to minimise partial volume effects. Additionally, to ensure consistent and reliable placement of the ¹H MRS voxels between participants, the location and orientation of these voxels were systematically derived using a predefined location mapped on a standard brain, as described in Woodcock et al. (2018).
The behavioural constraint imposed during both ¹H MRS measurements involved participants passively viewing a flashing visual checkerboard (4 Hz). As we have shown, the constraint ensures that the measurement condition reflects a 'non-task-active' steady-state level of Glu with minimal variability (2018). The single-voxel ¹H MRS protocol included the PRESS sequence (the MRS package was developed by Edward J. Auerbach and Małgorzata Marjańska and provided by the University of Minnesota under a C2P agreement) with outer volume saturation and Variable Power and Optimised Relaxation Delays for water suppression based on Tkáč and Gruetter (2005); TE = 23 ms, TR = 3.37 s, 2048 complex points, 2 kHz bandwidth, nine consecutive measurements for the hippocampus and seven consecutive measurements for the dlPFC, with each measurement consisting of 8 averages. Additionally, a fully relaxed unsuppressed water signal with a TR of 10 s and 4 averages was acquired in both locations to eliminate any potential T1 partial saturation effects for absolute quantification.
Prior to combining the individual consecutive measurements for each region, the 0th- and 1st-order phase and frequency shift were corrected (Zhu et al. 1992) using the LCORAW option in LCModel (Provencher 1993). Also, the first measurement in each scan was removed because the signal had not yet reached the equilibrium steady state. For each combined ¹H MRS spectrum, LCModel with a simulated basis set (Provencher 1993) was used to quantify Glu and the other ¹H metabolites, including N-acetyl aspartate (NAA), phosphocreatine plus creatine (PCr + Cr), glycerophosphocholine plus phosphocholine (GPC + PC) and myo-Inositol, as well as glutamine, alanine, aspartate, gamma-aminobutyric acid, glucose, glutathione, lactate, N-acetylaspartylglutamate, scyllo-inositol and taurine. FreeSurfer and FSL tools were used to tissue-segment the T1-weighted images and estimate the tissue fraction values within each voxel location (Woodcock et al. 2018). The unsuppressed water signal, the grey matter, white matter and CSF voxel content values from the tissue segmentation procedure, and the T1 and T2 relaxation times of metabolites were used to quantify absolute levels (Gasparovic et al. 2006; Posse et al. 2007).
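To make the water-referenced quantification step concrete, the sketch below illustrates the general form of such a calculation. It is a simplified, hypothetical illustration: the water-content constants, the omission of T1/T2 relaxation and proton-number corrections, and the function and variable names are assumptions of this sketch rather than the exact procedure used in the study.

```python
# Hypothetical, simplified sketch of water-scaled absolute quantification:
# the metabolite signal is referenced to the unsuppressed water signal and
# corrected for the grey matter (GM), white matter (WM) and CSF fractions
# of the voxel. Relaxation (T1/T2) and proton-number corrections are
# deliberately omitted; water contents are approximate literature values.
PURE_WATER_MM = 55510.0                                  # pure water, mmol/L
WATER_CONTENT = {"gm": 0.78, "wm": 0.65, "csf": 0.97}    # assumed water fractions

def absolute_level(s_met, s_water, f_gm, f_wm, f_csf):
    """Return an approximate metabolite concentration in mmol/L."""
    # Effective water concentration contributed by each tissue compartment
    voxel_water = PURE_WATER_MM * (f_gm * WATER_CONTENT["gm"]
                                   + f_wm * WATER_CONTENT["wm"]
                                   + f_csf * WATER_CONTENT["csf"])
    # Scale the metabolite-to-water signal ratio and correct for the CSF
    # fraction, which contains essentially no metabolites.
    return (s_met / s_water) * voxel_water / (1.0 - f_csf)
```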
Cognitive test battery
Episodic memory (EM). (a) The Logical Memory subtest from the Wechsler Memory Scale (Wechsler 2009) measures the ability to verbally recall the reading of two stories (passages), one at a time. A score of total correct items for both stories was calculated for the immediate and delayed (20 min) recall. (b) The Memory for Names task from the Woodcock-Johnson Psychoeducational Battery (Woodcock and Johnson 1989) measures associative memory (learning to associate pictures of an imaginary 'space creature' with the creature's name) and was tested twice (immediately and after a 20 min delay) by identifying the creatures named by the examiner. The total number of correct responses was scored for the immediate and delayed recall modes.

Working memory (WM). (a) The Listening Span task (Salthouse et al. 1989) requires participants to answer simple questions about a sentence while simultaneously remembering the last word of the sentence. Each block contained three trials, and the test item increased by one for each successive block, with a maximum of seven test items. A total score of correctly answering the question and correctly recalling the final word of the sentence across all trials was calculated. (b) The Computation Span task (Salthouse et al. 1989) requires participants to solve simple arithmetic problems while simultaneously remembering the last digit of each problem. Each block contained three trials, and the test item increased by one for each successive block, with a maximum of seven test items. A total score of correctly answering the question and correctly recalling the final digit of the problem across all trials was calculated.
Processing speed (PS). Three two-choice reaction time tasks (numeric: odd vs even numbers; verbal: consonants vs vowels; figural: symmetric vs asymmetric figures), each consisting of 40 stimuli, were presented in a randomised fast or slow condition, and each task had two sets of trials (Schmiedek et al. 2010). The key outcome measurements reflecting PS were the mean reaction times for each task.
Statistical analyses and modelling
Prior to analysis, data were evaluated for missing-data patterns and screened for assumptions of normality and linearity. Of the available sample, 6% (n = 5) were missing regional metabolite measures, 1.2% (n = 1) were missing the Logical Memory test and 2.4% (n = 2) were missing the Computation Span (CSPAN) test. Data were missing at random (Little's χ² = 42.59, p = .49), and cases with missing data were removed by pairwise deletion in the primary analysis of regional metabolites. The second analysis of cognitive correlates in the SEM framework included all cases with full-information maximum likelihood estimation, a covariance estimation that does not require imputation and introduces no bias under the assumption that data are missing at random (Little et al. 2014). With the available data, one case presented as a univariate outlier in hippocampal NAA and another case in a reaction time measure; one case was a multivariate outlier. Because univariate and multivariate normality were reasonably met, these cases were retained in the analyses.
A series of general linear models (including age and grey matter tissue fraction as covariates) were used to determine group differences in metabolite levels (Glu, NAA, PCr + Cr, GPC + PC and myo-Inositol) from each location. As we subsequently note in the results, the high quality of the ¹H MRS spectra allowed us to confidently report absolute Glu levels (obviating the need to embed values in a summation of Glu plus glutamine, as is typically done). A Bonferroni correction was applied for multiple comparisons across the five ¹H MRS outcome measurements per voxel location as well as across the ten outcome measurements from the cognitive tasks (i.e. significance thresholds of p < .01 and p < .005, respectively).
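For illustration, the Bonferroni thresholds quoted above follow directly from dividing the nominal alpha by the number of comparisons; a minimal sketch (variable names are ours):

```python
# Bonferroni-adjusted significance thresholds described above:
# alpha divided by the number of outcome measurements tested.
alpha = 0.05
n_metabolite_tests = 5    # Glu, NAA, PCr+Cr, GPC+PC, myo-Inositol per voxel
n_cognitive_tests = 10    # ten cognitive outcome measurements

threshold_mrs = alpha / n_metabolite_tests        # 0.01
threshold_cognitive = alpha / n_cognitive_tests   # 0.005
print(threshold_mrs, threshold_cognitive)
```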
SEM implemented in Mplus (version 7.4) was applied to evaluate the relation between basal Glu levels and cognitive ability as a function of diagnosis. SEM simultaneously tests hypotheses of group differences in latent cognitive ability, accounting for correlations among cognitive domains, and quantifies the unique relation with basal Glu levels in each region. Two alternative hypotheses were tested: (a) do basal Glu levels partially account for cognitive deficits in SCZ participants as compared to HC (i.e. mediation); and (b) does the relation between basal Glu level and cognition differ between SCZ participants and HC (i.e. moderation)? Confirmatory factor analysis specified latent constructs of EM, WM, and PS that were equivalent between groups and were used in further hypothesis testing. All models included age and grey matter tissue fraction as covariates. Model reliability was evaluated by a compendium of fit indices: χ² non-significance, comparative fit index (CFI > 0.9 indicates good fit), root mean square error of approximation (RMSEA < 0.10 indicates good fit), and standardised root mean square residual (SRMR < 0.08 indicates good fit). Path coefficients and indirect effects (mediation) were interpreted for effect magnitude and statistical significance (p < .05); all coefficients were bootstrapped with bias correction (5000 draws) (Hayes and Scharkow 2013) to estimate 95% confidence intervals (BS 95% CI). Moderation was tested with a grouped modelling approach, including constraints for equal factor loadings and variances, and freely estimating the intercept and variance of regional Glu measures. Group differences in the magnitude of the path coefficients were tested for statistical significance by an approximate z-test (p-value threshold < .05).
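To illustrate the core of the mediation test, the sketch below estimates a single-mediator indirect effect (the diagnosis-to-Glu path multiplied by the Glu-to-cognition path) and a simple percentile bootstrap interval. This is a deliberately reduced illustration with made-up variable names: the actual analysis used latent cognitive constructs, covariates and bias-corrected bootstrap intervals in Mplus, none of which are reproduced here.

```python
import numpy as np

def indirect_effect(diagnosis, glu, cognition):
    """Product-of-coefficients indirect effect for one observed mediator.

    diagnosis: 0/1 group coding; glu: mediator; cognition: outcome.
    """
    # Path a: diagnosis -> Glu (simple regression slope)
    a = np.polyfit(diagnosis, glu, 1)[0]
    # Path b: Glu -> cognition, controlling for diagnosis
    X = np.column_stack([glu, diagnosis, np.ones_like(glu)])
    b = np.linalg.lstsq(X, cognition, rcond=None)[0][0]
    return a * b

def bootstrap_indirect_ci(diagnosis, glu, cognition, n_boot=5000, seed=0):
    """Percentile bootstrap 95% CI for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(diagnosis)
    draws = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)   # resample cases with replacement
        draws[i] = indirect_effect(diagnosis[idx], glu[idx], cognition[idx])
    return np.percentile(draws, [2.5, 97.5])
```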
¹H MRS spectral quality and voxel placement
Four ¹H MRS spectra were rejected for poor quality (dlPFC: 2 SCZ; hippocampus: 1 HC and 1 SCZ), one for an error in voxel placement (dlPFC: 1 SCZ), and there were four incomplete scans (dlPFC: not collected in 1 SCZ; hippocampus: not collected in 1 HC and 2 SCZ). The S/N ratio of NAA was comparable between groups for the dlPFC (χ² = 1.44, p = .23) and hippocampus (χ² = 2.92, p = .087) (Table 2). The full-width-at-half-maximum (FWHM) values of NAA were comparable between groups in the dlPFC (χ² = 3.77, p = .052), but demonstrated a broader spectral peak in the hippocampus of SCZ participants compared to HC (χ² = 4.57, p = .033) (Table 2). The correlation between FWHM and Glu in either region was not significant. The Cramér-Rao Lower Bound (CRLB) values of Glu ranged between 3% and 8% (mean ± SD, 4.2 ± 0.7) for the dlPFC and between 5% and 14% (6.9 ± 1.6) for the hippocampus. An example of a typical quantified ¹H MRS spectrum from the dlPFC and hippocampus is shown in Figure 1. Additionally, regarding the consistency in placing the ¹H MRS voxel in the two locations, the grey matter tissue fraction values were not significantly different between groups in the right dlPFC (χ² = 3.35, p = .067), though values were lower in the right hippocampus of SCZ vs HC (χ² = 4.81, p = .028) (Table 2).
Group differences in Glu and other metabolite levels
There were no significant group differences in Glu levels or in the other four metabolites (NAA, PCr + Cr, GPC + PC and myo-Inositol) in the right dlPFC. However, Glu levels were significantly lower in the SCZ group (χ² = 8.12, p = .0044) in the right hippocampus, an effect in which the grey matter tissue fraction term was not significant in the model (χ² = 2.05, p = .15). Additionally, myo-Inositol levels in the right hippocampus were significantly lower in the SCZ group (χ² = 5.14, p = .023) (Table 2).
Cognitive performance
SCZ participants demonstrated deficits in EM performance, with significant impairments on all outcome measurements (i.e. immediate and delayed recall on the Logical Memory and Memory for Names tasks; all p < .0001) (Table 3). Similarly, performance on the WM tests (Computation and Listening Span) was poorer in SCZ participants (both p < .0001) (Table 3). Finally, performance on PS was significantly poorer in SCZ participants with numerical tokens (p = .0032) but not with verbal or figural tokens (Table 3).
Mediation and moderation effects between Glu and cognition
Prior to testing hypotheses in relation to SCZ diagnosis, we examined the bivariate relations between Glu levels and latent cognitive ability within the entire sample. Right hippocampal Glu showed a significant positive correlation with WM, although other correlations were weak (Table 4). There was little support for the hypothesis that Glu levels may mediate SCZ-related differences in cognitive ability in any of the three domains. The model had excellent fit (χ² = 83.64, p = .07, CFI = .97, RMSEA = .05, SRMR = .06) and accounted for 95% of the variance in latent EM ability (p < .001), 46% in WM ability (p < .001), and 28% in PS (p < .01) (Figure 2). Independent of diagnosis, Glu levels in the right hippocampus (all β = −.11 to .08, p ≥ .24) and right dlPFC (all β = .00 to .11, p ≥ .36) were not significantly correlated with cognitive performance (Figure 2). The cumulative indirect effect of SCZ diagnosis on cognitive ability via Glu levels and grey matter fraction was not significant for EM (standardised indirect effect = −.03, p = .52; BS 95% CI: −.12, .04), WM (standardised indirect effect = −.06, p = .19; BS 95% CI: −.14, .01), or PS (standardised indirect effect = −.03, p = .60; BS 95% CI: −.15, .07). These results indicate that while SCZ participants demonstrated lower Glu levels in the hippocampus, this reduction did not significantly account for diagnosis-related differences in cognitive ability.
Discussion
To our knowledge, this is the first study to investigate inter-group differences in basal levels of Glu (and other metabolites) from both the dlPFC and hippocampus in SCZ participants and controls when ¹H MRS data were acquired under a specific behavioural constraint. Such constraints are an important determinant of the stability and reliability of estimated Glu in either region (Lynn et al. 2018). We then used SEM analyses to evaluate inter-relationships between Glu, cognitive ability (WM, EM and PS) and diagnosis. We demonstrated: (1) significantly lower basal levels of Glu and myo-Inositol in the right hippocampus of SCZ participants, combined with non-significant group differences in the neurochemistry of the right dlPFC; (2) poorer performance on all three cognitive domains (WM, EM and PS) in SCZ participants compared to HC; (3) a significant association between right hippocampal Glu levels and WM performance across the full sample; (4) non-significant mediation, by right dlPFC or right hippocampal Glu levels, of group differences in cognitive ability in the SEM analyses; and (5) non-significant moderation effects, although the association between right dlPFC Glu levels and PS was distinct between the groups.
Implication of Glu and Myo-Inositol in SCZ
Consistent with several (including recent 7 Tesla) ¹H MRS studies (Merritt et al. 2016; Wang et al. 2019; Godlewska et al. 2021; Smucny et al. 2021; Wijtenburg et al. 2021), basal Glu levels from the dlPFC were not significantly different between SCZ participants and HC. However, basal Glu and myo-Inositol levels from the hippocampus were significantly lower in SCZ participants. This effect is notable because both metabolites were expressed as absolute levels, and it stands out amidst a majority of null effects [for review, see Smucny et al. (2021)]. It is worth noting that two recent studies have reported decreased hippocampal Glu and myo-Inositol ratios relative to PCr + Cr in SCZ participants compared to HC, consistent with our current results (Stan et al. 2015; Singh et al. 2018). Additionally, these biochemical deficits in the anterior portion and body of the right hippocampus were specific to Glu and myo-Inositol (i.e. NAA, PCr + Cr and GPC + PC basal levels were not different between groups) and held after adjusting for tissue content, despite SCZ participants demonstrating lower grey matter content in the right hippocampus (Table 2).
As the central excitatory neurotransmitter, Glu is actively engaged in facilitating the neural activity and plasticity that sub-serve cognitive ability across domains and brain regions. Moreover, task-induced changes in Glu levels (indexed using ¹H functional MRS) show that changes in Glu are driven by shifts in the E/I balance of microcircuits in response to task-related changes in neural engagement (Stanley and Raz 2018). Therefore, if behaviour is not constrained (absent any neural perturbation) during ¹H MRS acquisition, observed deficits in Glu cannot 'directly' support a dysfunction in the E/I balance, or an alteration in the modulation or neurotransmission of the glutamatergic system. Instead, lower basal Glu levels in the right hippocampus of SCZ participants may reflect differences in tissue morphology, such as a reduction in the density of the neuropil associated with the glutamatergic system (i.e. related cell bodies, dendritic arbour and supporting cells or astrocytes). This would also be consistent with post-mortem studies in SCZ demonstrating decreased spine density in the hippocampus (Rosoklija et al. 2000).
The other significant finding is a reduced hippocampal myo-Inositol level in SCZ, which has been previously reported in SCZ participants but mainly in the mPFC (Das et al. 2018; Jeon et al. 2021). Myo-Inositol, which is generally viewed as a cerebral osmolyte, is an intermediate of several important pathways involving inositol-polyphosphate second messengers (Ross and Bluml 2001; Maddock and Buonocore 2012). More importantly, there is evidence of preferential localisation of myo-Inositol in astrocytes, and hence myo-Inositol is often viewed as a marker of glia (Ross and Bluml 2001; Coupland et al. 2005; Kim et al. 2005); for example, myo-Inositol levels are significantly higher in the cerebellum, where astrocyte content is greater compared to cortical areas (Pouwels and Frahm 1998). Therefore, the evidence of lower Glu levels, as noted above, is consistent with lower myo-Inositol levels, as both may potentially reflect reduced neuropil density, including cell bodies/dendritic arbour and astrocytes, in the right hippocampus of SCZ vs HC.
Mediation and moderation effects
The SCZ sample demonstrated deficits across three domains (WM, EM, and PS), all of which are commonly reported as core features of SCZ (Gold and Harvey 1993; Heinrichs and Zakzanis 1998; Aleman et al. 1999). Also, while WM performance correlated with hippocampal Glu levels across participants, there was no evidence that the observed inter-group difference in hippocampal Glu was driven (or mediated) by the disparity in cognitive ability between groups in any of the three domains. More generally, there was no evidence that associations between Glu and cognition were moderated by diagnosis group. The only exception was a weak and indirect moderation effect demonstrated by the group interaction for the dlPFC Glu-PS association: here, HC demonstrated increasing right dlPFC Glu levels with improving PS, while the opposite trend was observed in SCZ. Because SEM is specifically suited for identifying potential mediation/moderation effects, the lack of an inter-relationship between Glu, cognition and diagnosis is an important negative finding. Our effects extend recent evidence (Reddy-Thootkur et al. 2020) from conventional ¹H MRS studies demonstrating little to no support for associations between hippocampal or dlPFC Glu-related measurements (Glu levels or ratios and Glx) and cognitive measurements. Therefore, the above-noted hippocampal Glu and myo-Inositol deficits in SCZ, which are presumed to implicate the tissue morphology of the hippocampal microstructure, are not associated with cognitive ability related to WM, EM and PS. This re-emphasises the utility of task-based ¹H fMRS over conventional ¹H MRS, given the former's ability to detect dynamic changes in Glu in response to task perturbation. Thus, task-based ¹H fMRS may provide greater insight in probing potential dysfunctions in shifting the E/I balance under specific cognitive processes in SCZ (Stanley and Raz 2018).
Strengths and limitations
In general, it is particularly difficult to acquire high-quality ¹H MRS data from the hippocampus, which tends to lead to unreliable Glu measurements [e.g. reporting glutamine levels greater than Glu levels (van Elst et al. 2005; Olbrich et al. 2008; Rusch et al. 2008)]. In this study, the data quality was relatively poorer in the hippocampus compared to the dlPFC; however, the key metrics (S/N, FWHM and CRLB values of Glu, 5-14%) were reasonable for reliable Glu quantification. This also suggests that the acquisition of ¹H MRS under behavioural constraint is an important determinant in reducing variability (Lynn et al. 2018). We reiterate several additional strengths, including (a) utilising an automated voxel placement procedure for consistent and reliable voxel placement across participants (Woodcock et al. 2018), (b) optimising the voxel dimensions of both locations to ensure minimal partial volume effects, (c) utilising a short TE to minimise potential T2 relaxation effects between groups, as well as a long TR of 10 s for the water-unsuppressed measurement to ensure that the complete water signal is acquired, (d) utilising the appropriate tissue fraction values for estimating absolute levels, and (e) performing both the ¹H MRS and the cognitive battery assessment within days of each other.
Conclusion
These results are the first to show significant and specific reductions in basal Glu and myo-Inositol in the right hippocampus (though not the right dlPFC) in SCZ participants. The observed Glu deficits in the hippocampus were not associated with cognitive performance, and this notable null effect must shape our understanding of brain-behaviour relationships in the illness. The Glu and myo-Inositol reductions in the hippocampus appear to reflect more general pathology, potentially related to a loss of cellular processes in the neuropil, that may be uncoupled from cognitive deficits in the illness. Approaches utilising task-based ¹H fMRS would be better positioned to address potential dysfunctions in Glu related to task conditions.
Figure 1 .
Figure 1. From left to right: voxel location (red box) superimposed on the structural MRI images from the (a) right dlPFC and (b) right hippocampus, next to examples of typical modelled ¹H MRS spectra (black: acquired; red: modelled; blue: modelled Glu signal; residual shown below) and plots depicting mean metabolite levels (± SEM) for Glu, NAA, PCr + Cr, GPC + PC and myo-Inositol in HC (blue bars) and SCZ (red bars) participants. * indicates significantly lower Glu and myo-Inositol levels in SCZ compared to HC participants.
Figure 2 .
Figure 2. Latent variable path model assessing the effects of diagnosis (SCZ vs HC) on cognition (episodic memory, working memory and processing speed) mediated by neurochemistry (hippocampal Glu vs dlPFC Glu). Latent factor loadings were fixed to 1 and measurement residuals were freely estimated (denoted with *) to specify latent cognitive constructs of each domain. Estimated paths are illustrated with straight arrows, with solid lines indicating significant paths (p < .05) and dashed lines indicating non-significant paths. Model R² values and significance for each latent cognitive outcome are reported.
Table 3 .
Mean cognitive test scores (±SEM) of both HC and SCZ subjects.
Table 4 .
Association between Glu levels and cognitive ability by region. Standardised estimates within the structural equation model are reported with respect to latent cognitive domain constructs. Coefficients were bootstrapped with bias correction to produce 95% confidence intervals (CI), reported with lower and upper limits (LL, UL).
Review on radiological evolution of COVID-19 pneumonia using computed tomography
BACKGROUND Pneumonia is the main manifestation of coronavirus disease 2019 (COVID-19) infection. Chest computed tomography is recommended for the initial evaluation of the disease; this technique can also be helpful to monitor disease progression and evaluate therapeutic efficacy. AIM To review the currently available literature regarding the radiological follow-up of COVID-19-related lung alterations using the computed tomography scan, to describe the evidence about the dynamic evolution of COVID-19 pneumonia and to verify the potential usefulness of radiological follow-up. METHODS We used pertinent keywords on PubMed to select relevant studies; the articles we considered were published until October 30, 2020. Through this selection, 69 studies were identified, and 16 were finally included in the review. RESULTS Summarizing the included works' findings, we identified well-defined stages within the short follow-up time frame: radiographic deterioration reaches a peak roughly within the first 2 wk; after the peak, an absorption process and repairing signs are observed. At later radiological follow-up, with the limitation of the little evidence available, the lesions usually had not resolved completely. CONCLUSION Following computed tomography scan evolution over time could help physicians better understand the clinical impact of COVID-19 pneumonia and manage the possible sequelae; a longer follow-up is advisable to verify the complete resolution or the presence of long-term damage.
INTRODUCTION
SARS-CoV-2, which stands for severe acute respiratory syndrome coronavirus 2, was first identified in December 2019 in Wuhan, China. The coronavirus disease 2019 caused by SARS-CoV-2 rapidly spread from China to the rest of the world within a few months, leading the World Health Organization to declare it a pandemic on March 11, 2020 [1].
The transmission of SARS-CoV-2 happens through direct, indirect or close contact with infected people through infected secretions, such as saliva and respiratory secretions or their respiratory droplets. The main organ affected is the lung, with pneumonia being the major manifestation of the infection [2].
The gold standard for SARS-CoV-2 diagnosis is real-time reverse transcription-polymerase chain reaction. However, computed tomography (CT) is recommended for initial evaluation and diagnosis, and it is also useful in monitoring disease progression and evaluating therapeutic efficacy [3,4].
Until now, many reports have focused on CT scan features at diagnosis [5][6][7]. On the other hand, there are relatively few studies evaluating serial temporal changes in patients who underwent repeated CT examinations and, particularly, in the late follow-up.
Our aim is to review the literature currently available on the radiological follow-up of COVID-19-related lung alterations using the CT scan to describe the evidence about the dynamic evolution of COVID-19 pneumonia.
MATERIALS AND METHODS
We conducted this systematic review according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Statement [8]. The primary aim was to collect, describe and discuss the dynamic radiological evolution of COVID-19 pneumonia.
Data extraction and synthesis
The study characteristics (first author, year of publication, type of study, number of patients included, CT scan follow-up, dynamic evolution and main CT manifestations) were extracted from the included articles by a single author (Casartelli C). Two reviewers (Perrone F and Casartelli C) initially performed the data extraction, and then it was independently reviewed by an additional reviewer (Bersanelli M).
Any doubt or disagreement was discussed with a fourth investigator (Buti S) and resolved with all investigators' consensus.
Most of the reports have considered moderate/common pneumonia; if pneumonia was not explicitly classified, most of the articles included patients with a good and defined prognosis, who were ultimately discharged from the hospital, while patients with severe/critical pneumonia were generally excluded.
Four studies also included a minority of patients with severe/critical pneumonia [10,14,17,20]; the 11 patients in the case series described by Sun Q et al [23] all had severe pneumonia.
Scoring system
The most common score used to evaluate dynamic CT evolution was a semiquantitative scoring system, which considered the total area of involvement of the lesions. The nature of the semi-quantitative scoring system was similar in the studies considered, even with some adjustments and discrepancies among them.
For example, Liang et al [11] assigned each lung lobe a score of 0-4 based on its percentage of involvement; the overall lung total severity score was then obtained by summing the five lobe scores, with a possible range from 0 to 20.
Zhou et al [12] divided each lung into six zones, and the total score, given by the sum of the scores of the individual lung regions, could reach a maximum of 48. Zhang et al [15] used yet another adaptation of the system, based on the lung segments involved, assigning a score according to the percentage of ground glass opacities (GGOs) and consolidation, with a possible range from 0 to 36.
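As an illustration of how such a lobe-based total severity score is computed, a minimal sketch follows; the percentage cut-offs shown are a commonly used convention and are an assumption here, since the exact bands differ between the cited studies.

```python
# Minimal sketch of a lobe-based semi-quantitative CT severity score:
# each of the five lobes scores 0-4 according to its percentage involvement,
# and the total severity score is the sum across lobes (range 0-20).
def lobe_score(percent_involved):
    if percent_involved == 0:
        return 0
    elif percent_involved < 25:
        return 1
    elif percent_involved < 50:
        return 2
    elif percent_involved < 75:
        return 3
    return 4

def total_severity_score(lobe_percentages):
    """lobe_percentages: involvement (%) of each of the five lobes."""
    assert len(lobe_percentages) == 5
    return sum(lobe_score(p) for p in lobe_percentages)

# Example: three lobes involved to different degrees
print(total_severity_score([10, 0, 30, 60, 0]))   # 1 + 0 + 2 + 3 + 0 = 6
```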
The study from Liu et al[17], analyzing the CT of discharged patients, focused the score on non-GGO lesions since extended GGO areas were defined as a basic manifestation of convalescence, which could lead to an overestimation of the CT score.
Other authors, considering the limited accuracy and sensitivity of the semi-quantitative score based mainly on visual evaluation, proposed evaluating the dynamic evolution with quantitative techniques. For example, Feng et al [16] measured the total lesion volume (VT) and mean CT value (CT) and, from these, calculated the mass (m) as m = VT × (CT + 1000) [16].
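A minimal sketch of this calculation (function and variable names are ours) makes the formula explicit; adding 1000 HU shifts the attenuation scale so that air (-1000 HU) contributes nothing to the "mass".

```python
# Quantitative lesion "mass" as described above: total lesion volume
# multiplied by (mean CT attenuation + 1000 HU).
def lesion_mass(total_volume_ml, mean_ct_hu):
    return total_volume_ml * (mean_ct_hu + 1000.0)

# Example: a 120 mL lesion with a mean attenuation of -600 HU
print(lesion_mass(120.0, -600.0))   # 120 * 400 = 48000
```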
In the report from Wang et al [10], quantitative CT measurements of pulmonary opacities, including volume, density and location, were extracted through deep learning algorithms.
In another report, quantitative CT features were automatically calculated using intelligent artificial algorithms, giving back the percentage of GGO volume, consolidation volume and total lesion volume [15].
Radiological dynamic evolution: Severity and timing
Almost all the reports present a short-term radiological follow-up, focusing on the first few weeks from symptom onset and studying serial CT scans approximately within the first 4 wk, during hospitalization (Table 1). It has been observed that the initial CT features and the dynamic evolution of COVID-19 pneumonia have specific characteristics and regularity.
Several reports identify well-defined stages, from the onset of the symptoms to radiological recovery.
The most common pattern of radiographic evolution found is as follows: first, there is rapid, progressive radiographic deterioration, during which the lesions keep growing until they reach a peak; once this peak is reached, the lesions stop growing, are gradually reabsorbed, and repairing signs appear. Almost all the studies found that the peak was reached roughly within 2 wk after symptom onset, after which lung abnormalities started to decrease.
There are some exceptions. Zhang et al[15] found an earlier peak, 8 d after symptoms onset, and lung lesions improved after 11 d. Wang et al [22] discovered a similar peak at around 6-11 d; in this case, though, a significant extent of lung lesions was found for longer times after the peak, showing a slower recovery.
Specific patterns of temporal evolution and relative peaks are shown in Table 2. When severe pneumonia was considered separately, the disease seemed to have a slightly longer evolution, showing the peak later than for moderate pneumonia cases.
In the report from Zhang et al [20], severe pneumonia exhibited a peak approximately 17 d after symptoms onset (compared to moderate pneumonia, which peaked at 12 d in the same study). In the report from Wang et al[10], the opacity volume kept increasing even after 15 d in the severe/critical group. Four reports had taken into account a longer CT follow-up, considering CT scan after discharge [14,17,19,24].
Zhuang et al [19] considered both CT during hospitalization and the first CT after discharge (22-51 d after symptom onset). During the latter phase, further absorption of the lung lesions compared with the previous radiological exam was observed, but not all patients showed complete resolution. Liu et al [17] studied the radiological evolution during the first few weeks after discharge, in particular 1, 2 and 3 wk after discharge. The aim was to determine the cumulative percentage of complete radiological resolution at each time point. They discovered that lung lesions could be entirely absorbed with no sequelae, and they suggested that the optimal time point for an early radiological estimation might be 2 wk after discharge. In their analysis, the cumulative percentage of complete radiological resolution was 8%, 42%, 50% and 53% at discharge and during the 1st, 2nd and 3rd week after discharge, respectively [17]. Wang et al [14] conducted a study including both common and severe pneumonia, showing that approximately one third of cases had complete absorption of lesions in the first 1-2 mo after symptom onset (median day 38). In their study, patients with more severe lung involvement at days 8-14 (the peak) were more prone to have pulmonary residuals. Urciuoli and Guerriero [24] considered a longer follow-up, with CT studied up to 4 mo after the onset of symptoms; the sample of this report was relatively small, as it considered only 6 patients with mild pneumonia. Interestingly, the follow-up CT scan revealed the persistence of lung abnormalities in 5 cases out of 6, even though all patients were completely asymptomatic at that point [24].
CT scan features of lung lesions at follow-up
The main features of lung lesions in the retrieved reports were multiple, bilateral, with a peripheral subpleural distribution.
In the short-term follow-up some features recurred. Consolidations and GGOs were always described, and often a mixed pattern was noted. Consolidations were more frequent during the peak, sometimes with accompanying signs such as a "crazy paving pattern" or "vascular thickening sign;" after the peak, they were gradually absorbed.
GGOs were described mainly in the early phase, but they could be observed also in later stages. In fact, in the report from Pan et al[18] the proportion of GGOs was similar in each stage. In those from Wang et al [22], the observed trend of GGOs was described as "first falling then rising" as they were present both in the first phase and in the last CT scan.
After the peak, besides GGOs, repairing CT signs, such as linear opacities, fibrous stripes, subpleural line sign and fibrosis shadows, were noted. Wang et al[13] proposed, in the absorption process, a particular sign called "fishing net on trees." This sign "indicated that the pulmonary lesions were in the stage of obvious absorption but not complete absorption. CT showed that the large area of consolidation was reduced, the density was reduced, the edge had shrunk, and there were significantly more bands and incomplete absorption of fibrosis shadows. The area was similar to a fishing net hanging on a branch that was not fully spread under the background of the increased bronchovascular bundle" [13].
In the longer-term follow-up, CT scans showed various presentations. Zhuang et al [19] observed in the first CT scan after discharge further absorption of the lung lesions. Also, GGOs, consolidations and linear opacities were still found in some patients. In the case series of Urciuoli and Guerriero[24], 2 patients presented persistence of a mixed pattern with GGO and fibrous streaks, 1 patient fibrotic stripes, 1 patient a mixed pattern with interlobular septal thickening and patchy GGOs and 1 patient fibrotic pattern [24].
Wang et al [22], who followed the CT scan until 4 wk after discharge, found mainly linear opacities. Liu et al[17] still observed in some patients GGOs and fibrous stripes even at the 3 wk radiological follow-up, even with a decreasing trend (GGO during the 1 st week and fibrous stripes during the 3 rd wk). Two additional signs were found during the evolution: "tinted" sign and bronchovascular bundle distortion. The "tinted" sign was demonstrated to coincide with an extension of the GGO area and a decrease in its density. According to the authors, the appearance of this pattern probably implied the gradual resolution of inflammation with re-expansion of alveoli. The bronchovascular bundle may be caused by inflammatory distraction or subsegmental atelectasis [17].
DISCUSSION
Current evidence of the temporal evolution of COVID-19 pneumonia derives from studies evaluating a relatively short follow-up period, and data about long-term radiological (and clinical) sequelae are still awaited [17,22,25,26]. The hallmark of early COVID-19 pneumonia includes bilateral, peripheral GGOs and consolidation often showing features resembling organizing pneumonia, such as a perilobular distribution and "reversed halo" sign (i.e. a focal, rounded area of ground-glass surrounded by a ring or arc of denser consolidation) [27,28]. These findings are non-specific and variably comprise foci of edema, organization and diffuse alveolar damage that are not too far removed from patients with other acute injuries, even noninfectious [29,30]. Notably, up to 56% of patients have been reported to demonstrate no abnormalities in the first 3 d after onset of symptoms, while conversely patients with no symptoms may show abnormal CT findings [31]. Moreover, still in the initial phase of the disease, pulmonary opacities may be unilateral and lack the characteristic peripheral distribution, possibly reducing diagnostic confidence in differentiating COVID-19 from potential mimickers such as heart failure and other infections [21,32].
The severity of acute COVID-19 manifestations is likely to peak within 2 wk from the disease onset, though the reported temporal evolution varies depending on the studied population [12,13,18,21,31]. In this phase, patients may show an increasing extent of pulmonary consolidation, which parallels lung injury evolution. With the awareness of the heterogeneous studies included in the present analysis and the intrinsic individual variation of the disease course, patients have been found to enter the so-called absorption stage roughly 14 d from the disease onset [12,13,18,21]. During this period, consolidation tends to wane, while other findings such as linear opacities, parenchymal bands and reticulation possibly emerge, sometimes leading to a "fibrotic-like" appearance [26]. Even in this last case, it remains unclear whether residual abnormalities truly represent irreversible disease or will resolve over time, as no studies with a follow-up period greater than 6 mo have been performed so far [26,33]. Remarkably, most studies examined CT patterns in isolation at various time points rather than the temporal changes of each pulmonary finding, providing valuable information about the overall disease evolution but missing the opportunity to examine regional linkages between patterns. Future studies are needed to explore how underlying pathogenetic pathways such as diffuse alveolar damage and an autoinflammatory response would determine the imaging features of COVID-19. In this regard, the role of baseline risk factors such as vascular thrombosis and interstitial lung abnormalities remains poorly investigated.
Besides providing clues to assess COVID-19 morphological changes, CT has been used to enrich clinical and laboratory findings to quantify disease severity in the acute setting and longitudinal evolution [12,18,21]. Various methods have been employed to assess CT lung involvement in COVID-19, including qualitative, semi-quantitative and software-based quantitative scoring systems [12,18,21,[34][35][36][37]. In the included works, most CT scores were based on semi-quantitative methods, while only two studies used artificial intelligence techniques. Several parameters such as symptoms, oxygenation status and laboratory measures of infection and inflammation have been found to correlate with parenchymal involvement at CT, highlighting the potential role of imaging in predicting the clinical course of COVID-19 and optimizing patient care [38][39][40]. However, further evidence is needed to demonstrate CT scoring usefulness to manage COVID-19 and its actual impact on clinical decision-making in the acute and follow-up setting.
Clinical compendium: Pulmonary sequelae of COVID-19
The clinical counterpart of long-term radiological outcomes of COVID-19 pneumonia is a topic of growing interest. After the first wave of COVID-19, the awareness of patients suffering from residual symptoms, persistent beyond the acute phase of the disease, became very common, leading to the description of a post-COVID syndrome or Long-COVID [41]. However, the type and severity of respiratory impairment or functional sequelae are still unknown.
The current knowledge gained from the previous coronavirus outbreaks (SARS-CoV-1 in 2002-2004 and Middle East respiratory syndrome coronavirus in 2012) and the general understanding about outcomes in acute respiratory distress syndrome suggest that some COVID-19 survivors might experience impaired lung function and exercise limitation, and that some of them may develop interstitial lung disease in the mid to long term [42][43][44].
Until recently, only a few retrospective studies with small samples had shown that patients might experience a reduction of forced vital capacity (13 patients at 6 wk) [45], and of forced vital capacity, forced expiratory volume in the first second, total lung capacity (TLC) and diffusing capacity of the lung for carbon monoxide (DLCO) (55 patients at 3 mo) [46].
In one of the largest cohorts studied to date describing the medium-term consequences of the infection (767 patients, follow-up at median time of 81 d after discharge), 51.4% of the patients reported being still symptomatic, with fatigue (55.0%), exertional dyspnea (45.8%) and post-traumatic psychological consequences (30.5%) as the most reported symptoms. Impaired lung function was found in 19% of the patients (reduced DLCO with or without restrictive pattern) [47].
Anastasio et al [48] recently published a study of 379 patients evaluated 4 mo after the diagnosis of COVID-19. Almost 69% of the patients reported at least one residual symptom. Patients who had had pneumonia showed lower SpO2 at rest and during the six-minute walking test, and lower TLC, compared with patients without prior pneumonia. Furthermore, the authors found an association between the SpO2/FiO2 ratio and the pneumonia severity index during the acute phase and mid-term alterations in SpO2 at rest and during the six-minute walking test, TLC, residual volume and forced vital capacity [48].
In an Italian study enrolling 238 patients, DLCO was reduced to less than 80% of the predicted value in more than half of the patients at the 4 mo follow-up, and in 15.5% of cases it was less than 60%. More than 50% of the patients showed functional impairment as assessed with the Short Physical Performance Battery and the 2-minute walk test [49].
In another large cohort of 647 patients evaluated at 3 mo of follow-up, patients reported ongoing symptoms, in particular fatigue (13%), palpitations (10%) and dyspnea (9%). These symptoms were significantly more frequent in patients who had experienced severe COVID-19 than in non-severe patients. In this cohort, only 81 patients were assessed with lung function tests; more than half of them showed reduced DLCO. As with symptoms, an impaired DLCO was more frequently associated with severe than non-severe cases (68% vs 42%). On multivariate analysis, a CT total severity score > 10.5 and acute respiratory distress syndrome were significantly associated with impaired DLCO [50].
Similar results were found in a smaller cohort of 22 patients at 3 mo of follow-up. Furthermore, on multivariate analysis, low TLC was associated with the need for mechanical ventilation, and low forced expiratory volume in the first second with a high APACHE II score [51].
Long-term follow-up will help clarify the impact of COVID-19 pneumonia on lung pathophysiology. It is therefore advisable to schedule serial follow-up in patients who still present lung function impairment or exercise limitation.
CONCLUSION
At present, the available literature focuses on the acute phase of the radiological follow-up of COVID-19 pneumonia and describes well-defined stages in the first few weeks after symptom onset.
The most common pattern seems to be a peak of lung involvement reached roughly within the first 2 wk, characterized mainly by the growth of GGOs and consolidations. After that peak, these manifestations are gradually absorbed, and reparative signs, such as linear opacities, fibrous stripes, the subpleural line sign and fibrotic shadows, tend to appear.
When considering later follow-up, up to 4 mo, lesions are usually not completely absorbed. Longer follow-up is clearly needed, especially to establish whether these later signs are reversible and how they affect patients' condition. Following CT evolution over time could help physicians better understand the clinical impact of COVID-19 pneumonia and manage its possible sequelae.
Research background
Pneumonia is the main manifestation of severe acute respiratory syndrome coronavirus 2 infection. Chest computed tomography is an effective way to detect coronavirus disease 2019 pneumonia and to track its evolution over time.
CFA with binary variables in small samples: a comparison of two methods
Asymptotically optimal correlation structure methods with binary data can break down in small samples. A new correlation structure methodology based on a recently developed odds-ratio (OR) approximation to the tetrachoric correlation coefficient is proposed as an alternative to the LPB approach proposed by Lee et al. (1995). Unweighted least squares (ULS) estimation with robust standard errors and generalized least squares (GLS) estimation methods were compared. Confidence intervals and tests for individual model parameters exhibited the best performance using the OR approach with ULS estimation. The goodness-of-fit chi-square test exhibited the best Type I error control using the LPB approach with ULS estimation.
INTRODUCTION
In the behavioral and social sciences, datasets often consist of binary variables. For example, essentially all test data are binary because multiple choice, true/false, and other question formats are usually coded in terms of whether the answer is correct or not. Many other types of tests require a diagnosis; e.g., classifying someone as depressed, mentally ill, or having a learning disability, also results in binary data. A critical question in such data is whether they represent indicators of underlying latent categorical variables, or, instead, are indicators of underlying continuous latent variables. In medical diagnosis, such as the outcome of an HIV test, the latent attribute is often considered binary, i.e., a person is either HIV positive or HIV negative. With most educational and psychological data, on the other hand, it is typically believed that the latent construct of interest is continuous, and a positive score on a binary indicator simply means that a certain threshold on the latent trait has been exceeded.
When a distinction is made between continuous latent attributes and their observed binary indicators, the Pearson correlations among the binary variables will not accurately represent the correlations among the latent attributes. The oldest measure of a relationship between two dichotomous variables that represent categorized continuous variables is the tetrachoric correlation coefficient (Pearson, 1900). In the population, the tetrachoric correlation is defined simply as a product-moment correlation between two underlying quantitative variables that have a joint bivariate normal distribution. The sample tetrachoric correlation is computed on two dichotomous variables and represents an estimate of the association between the underlying continuous constructs.
The matrix of sample tetrachoric correlations can be used to conduct a factor analysis of binary variables and to fit more general structural equation models (Christoffersson, 1975; Muthén, 1978, 1984, 1993; Lee et al., 1990, 1995; Jöreskog, 1994, 2002-2005). The approach of Christoffersson (1975) obtains parameter estimates directly by fitting the model to sample proportions using a generalized least squares (GLS) approach based on the asymptotic covariance matrix of sample proportions. This approach has recently been extended and generalized (Maydeu-Olivares and Joe, 2005; Maydeu-Olivares, 2006). Muthén (1978, 1984) proposed a less computationally intensive approach which first estimates sample thresholds and sample tetrachoric correlations, then fits the model to the sample tetrachoric correlations using a GLS approach based on the asymptotic covariance matrix of the tetrachoric estimator. Lee et al. (1995) proposed yet another approach that estimates thresholds and tetrachorics simultaneously (for each pair of variables) rather than sequentially and incorporates continuous variables.
Whether one fits the model to sample frequencies or to sample tetrachorics, this methodology is mathematically and computationally complex. The definition of the tetrachoric correlation itself involves an integral (see below), and requires complex computational algorithms (Kirk, 1973;Brown, 1977;Divgi, 1979). Many approximations to this coefficient have been proposed to reduce the computational intensity at a time when computer time was limited and costly. At least ten simple approximations have been proposed over the years, starting with Pearson (1900) and continuing on with Walker and Lev (1953), Edwards (1957), Lord and Novick (1968, p. 346), Digby (1983), and two more by Becker and Clogg (1988). Even though nowadays computers can handle large tasks, some of these proposed approximations are so good the question naturally arises whether they can be used directly to fit factor analytic and more general correlation structure models. These approximations may be particularly useful in smaller sample sizes, when more computationally intensive approaches may break down. For example, simulation work implies that sample sizes of 100, 250, or even 1000 may be needed at a minimum for these methods, depending on the model and the particular version of the estimation method (Flora and Curran, 2004;Beauducel and Herzberg, 2006;Nussbeck et al., 2006). Yet many researchers have smaller data sets and are faced with understanding their latent structure. Small samples are common in applications where measurements are expensive (e.g., fMRI measurements of absence or presence of activity in multiple brain regions), when specific types of participants are difficult to obtain (e.g., Parkinson's patients, executives, identical twins), or when research volunteers must be monetarily compensated for their participation in a lengthy assessment. When the purpose of the study is to assess the tau-equivalence of a unidimensional scale, a large sample size may not be required to accurately estimate the common factor loading. Bonett and Price (2005) proposed yet another approximation based on the odds ratio (OR) which improves on Becker and Clogg (1988) in terms of accuracy. They also provided asymptotic standard errors for this approximation. Additionally, Bonett and Price (2007) suggested that this methodology could be adapted to correlation structure models if a consistent estimator of the covariance matrix of the new tetrachoric approximation is obtained. In this paper, we develop the technical details for this new correlation structure methodology based on the Bonett and Price (2005) coefficient, and we compare the performance of this odds-ratio methodology (hereafter, OR) against the methodology of Lee et al. (1995, hereafter, LPB). The LPB methodology is available in EQS (Bentler, 2006).
Three simulation studies were conducted to compare the OR and LPB approaches. In Study 1, sample tetrachoric correlations and their standard errors were compared, without any structured model. In Study 2, a confirmatory factor analysis (CFA) model was fit to data using GLS with either the LPB or the OR asymptotic covariance matrix estimator. In Study 3, a CFA model was fit to data using unweighted least squares (ULS) estimation with robust standard errors and test statistics (Satorra and Bentler, 1994) using either the LPB or the OR asymptotic covariance matrix estimator.
CORRELATION STRUCTURE MODELS WITH BINARY VARIABLES
Without loss of generality, assume that each observed variable takes on the values 1 or 2. For each pair of binary variables, a 2 × 2 contingency table can be computed, using either sample frequencies or sample probabilities. Table 1 illustrates the notation used in such contingency tables. Here, $f_{ij}$ is the sample frequency and $p_{ij}$ is the sample probability that the pair of variables takes on the values $(i, j)$, and the "+" notation is used to indicate marginal sample frequencies and probabilities. We add 0.5 to each cell in the frequency table before computing sample probabilities. It can be shown that adding 0.5 to each cell frequency of the 2 × 2 table minimizes the bias of the log-transformed odds ratio (Agresti, 2013, p. 617). This small sample correction disappears asymptotically.
Let $z = (z_1, \ldots, z_s)$ be an $s \times 1$ vector of observed binary variables. Let $y = (y_1, \ldots, y_s)$ be an $s \times 1$ vector of underlying continuous variables, and we assume that $y \sim N(0, \Sigma)$. The variables $z$ were obtained by categorizing the variables $y$ as follows:

$$z_a = 1 \text{ if } y_a \le h_a, \qquad z_a = 2 \text{ if } y_a > h_a, \tag{1}$$

where $a = 1, \ldots, s$. The threshold $h_a$ for each variable is related to the probabilities for $z_a$ through $P(z_a = 2) = 1 - \Phi(h_a)$, where $\Phi(x)$ is the cumulative distribution function of the standard normal distribution. Thus, the observed marginal probabilities $p_{2+}$ can be used to obtain estimates of the thresholds. Without loss of generality, assume that $\mathrm{diag}(\Sigma) = I$, since the scale of the underlying continuous variables generally cannot be recovered after categorization has occurred. The off-diagonal elements of $\Sigma$ are tetrachoric correlations. The tetrachoric correlation $\rho_{ab}$ between $y_a$ and $y_b$ is related to the probabilities of $z_a$ and $z_b$ as follows:

$$P(z_a = 2, z_b = 2) = \int_{h_a}^{\infty} \int_{h_b}^{\infty} \phi_2(y_a, y_b; \rho_{ab})\, dy_b\, dy_a, \tag{2}$$

where $\phi_2$ is the standard bivariate normal density with correlation $\rho_{ab}$. Thus, the observed sample probabilities $p_{22}$ from each bivariate contingency table can be used to compute an estimate of the tetrachoric correlation, but the computations involved are complicated. We assume that the continuous latent variables $y$ are generated by a latent variable model. In this study, we hypothesize that the underlying continuous variables $y$ were generated from a factor model:

$$y = \Lambda \xi + \zeta, \tag{3}$$

where $\Lambda$ is the $s \times m$ matrix of factor loadings with many elements fixed to 0, $\xi$ is the $m \times 1$ vector of factors, and $\zeta$ is the $s \times 1$ vector of errors. This implies the following covariance structure for $\Sigma$:

$$\Sigma(\theta) = \Lambda \Phi \Lambda' + \Psi,$$

where $\Phi = \mathrm{cov}(\xi)$ with $\mathrm{diag}(\Phi) = I$ for model identification, $\Psi = \mathrm{cov}(\zeta)$, and $\theta$ is the vector of all model parameters (i.e., factor loadings and factor covariances). The diagonal of $\Sigma(\theta)$ is fixed to be 1, and hence the parameters in $\Psi$ are dependent on other parameters and do not need to be directly estimated.
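To make the definitions above concrete, here is a minimal Python sketch (our own illustration, not part of the original derivations; it assumes NumPy and SciPy, and the function names are ours) showing how a threshold is recovered from a marginal probability and how the bivariate-normal integral in (2) yields the cell probability p_22:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def threshold_from_marginal(p2_plus):
    """Recover h_a from the observed marginal probability P(z_a = 2) = 1 - Phi(h_a)."""
    return norm.ppf(1.0 - p2_plus)

def cell_prob_22(h_a, h_b, rho):
    """P(z_a = 2, z_b = 2) = P(y_a > h_a, y_b > h_b) for a standard bivariate normal
    with correlation rho -- the double integral in equation (2)."""
    joint_cdf = multivariate_normal(mean=[0.0, 0.0],
                                    cov=[[1.0, rho], [rho, 1.0]]).cdf([h_a, h_b])
    # Upper-right rectangle probability by inclusion-exclusion.
    return 1.0 - norm.cdf(h_a) - norm.cdf(h_b) + joint_cdf

if __name__ == "__main__":
    h_a = threshold_from_marginal(0.40)   # 40% of cases have z_a = 2
    h_b = threshold_from_marginal(0.55)
    print(round(cell_prob_22(h_a, h_b, rho=0.5), 4))
```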
THE OR METHOD
Instead of computing the tetrachoric correlation as defined implicitly by (2), the OR method computes another coefficient of association between $z_a$ and $z_b$, defined in the population as

$$\rho^*_{ab} = \cos\left(\frac{\pi}{1 + w_{ab}^{c}}\right), \tag{4}$$

where $\pi$ in the numerator refers to the irrational number (3.1415...), $w_{ab} = \dfrac{\pi_{11}\pi_{22}}{\pi_{12}\pi_{21}}$, $\pi_{ij}$ is the population counterpart of $p_{ij}$ in Table 1 corresponding to the variables $z_a$ and $z_b$, $c = 0.5\left(1 - |\pi_{1+} - \pi_{+1}|/5 - (0.5 - \pi_{\min})^2\right)$, and $\pi_{\min}$ is the smallest marginal probability. In the sample, we estimate the odds ratio as

$$\hat w_{ab} = \frac{(f_{11} + 0.5)(f_{22} + 0.5)}{(f_{12} + 0.5)(f_{21} + 0.5)},$$

so that the sample odds ratio is defined even if the frequency table has zero counts. Estimates of the cell probabilities are also computed from the 2 × 2 table of frequency counts with the 0.5 additions to obtain $\hat c$ and the following tetrachoric estimate:

$$\hat\rho^*_{ab} = \cos\left(\frac{\pi}{1 + \hat w_{ab}^{\hat c}}\right). \tag{5}$$
Bonett and Price (2005) found that this approximation to the tetrachoric correlation was more accurate than the most accurate previously existing approximation, that of Becker and Clogg (1988). The quality of the approximation in (4) varies as a function of the population tetrachoric correlation and of the population thresholds for the two variables. We have studied the difference between the tetrachoric correlation implicitly defined by (2) and the approximation in (4) using the plotting feature of Mathematica 5. It was found that the larger the correlation between the variables, the greater the potential bias, and the more extreme the thresholds (as long as they were opposite-signed), the worse the approximation. Figures 1 and 2 illustrate the approximation error of $\rho^*_{ab}$. In Figure 1, the difference $(\rho^*_{ab} - \rho_{ab})$ is plotted as a function of the tetrachoric correlation $\rho_{ab}$ when the thresholds are fixed to −0.8 and 0.3. The approximation gets worse for higher absolute values of the correlation, peaking when the correlation is about 0.9, at which point the OR approximation underestimates the tetrachoric by 0.08. If the threshold −0.8 is replaced with −1.5, the approximation error at this point reaches −0.13. Of course, when thresholds are high and opposite-signed, all existing methods will have trouble because the cell probabilities will be close to zero. Figure 2 plots $\rho^*_{ab} - \rho_{ab}$ as a function of one threshold, fixing the other threshold to 0.8 and the tetrachoric correlation to 0.5. The approximation error is minimal for any positive value of the other threshold, and does not exceed 0.08 if the other threshold is less extreme than −1.2. For high negative values of this threshold, however, the approximation error becomes considerable. Again, this is the situation where the standard tetrachoric approaches tend to break down as well. We will provide some empirical evidence on the breakdown of these estimators below. A particular advantage of the OR method is that an estimate of the asymptotic covariance matrix $\hat V_{\rho^*}$ of the $s(s-1)/2$ vector $\rho^*$ can be computed easily. First, the covariance matrix of the vector of log-odds ratios $\log(w)$ is computed, using standard results about multinomial distributions (e.g., Agresti, 2013). Then, the asymptotic covariance matrix of the transformation given by (5) is computed using the delta method. In this step, $\hat c$ is treated as a constant, since its variance is small relative to the variance of $\rho^*$ (Bonett and Price, 2005). The resulting expressions for the elements of $\hat V_{\rho^*}$ are simple compared to the complicated expressions for the covariances of the tetrachoric correlations, and can be easily programmed using matrix-based languages such as R, SAS IML, Gauss, or Matlab. Details of the derivation and the typical elements of $\hat V_{\rho^*}$ are given in the Appendix (see Supplementary Material).
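As a concrete illustration of the estimator in (5), the following Python sketch computes the Bonett-Price OR approximation from a 2 × 2 frequency table, including the 0.5 cell correction; the function name and example table are our own, and this is a sketch rather than a reference implementation of the method.

```python
import numpy as np

def or_tetrachoric(freq):
    """Bonett-Price odds-ratio approximation to the tetrachoric correlation.
    freq is a 2x2 table of cell frequencies [[f11, f12], [f21, f22]]."""
    f = np.asarray(freq, dtype=float) + 0.5        # small-sample correction
    p = f / f.sum()                                # cell probabilities
    w = (p[0, 0] * p[1, 1]) / (p[0, 1] * p[1, 0])  # sample odds ratio
    p1_plus, p_plus1 = p[0, :].sum(), p[:, 0].sum()
    p_min = min(p[0, :].sum(), p[1, :].sum(), p[:, 0].sum(), p[:, 1].sum())
    c = 0.5 * (1.0 - abs(p1_plus - p_plus1) / 5.0 - (0.5 - p_min) ** 2)
    return np.cos(np.pi / (1.0 + w ** c))

if __name__ == "__main__":
    print(round(or_tetrachoric([[40, 10], [15, 35]]), 3))   # positive association
```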
In our OR approach, GLS parameter estimates are obtained by minimizing the fitting function

$$F_{OR}(\theta) = (\hat\rho^* - \rho(\theta))'\, \hat V_{\rho^*}^{-1}\, (\hat\rho^* - \rho(\theta)), \tag{6}$$

where $\theta$ is the vector of parameters of the structure $\Sigma(\theta) = \Lambda\Phi\Lambda' + \Psi$ and $\rho(\theta)$ is the vector of its off-diagonal elements. We note that because $\hat\rho^*$ is consistent for $\rho^*$ in (4) but not for $\rho$, the vectorized version of $\Sigma$ implicitly defined by (2), the estimator in (6) is not consistent for $\theta$ when the model holds. Thus, this estimator should not be used in large sample sizes, but its simplicity may offer advantages at smaller sample sizes. Approximate standard errors for the model parameters can be obtained from the roots of the diagonal of $(\hat\Delta^{*\prime} \hat V_{\rho^*}^{-1} \hat\Delta^*)^{-1}$, where $\hat\Delta^*$ is the matrix of model derivatives evaluated at the OR parameter estimates. An approximation to the model fit statistic can also be computed as $T_{OR} = (N-1)F_{OR}$ and referred to a chi-square distribution with $s^* - q$ degrees of freedom, but the quality of this approximation is not known.
THE LPB METHOD
The LPB method (Lee et al., 1995) was developed to handle any combination of categorical and continuous variables by estimating a correlation matrix that is a mixture of Pearson, polyserial, and polychoric correlations and obtaining an appropriate estimate of its variability. Note that a polychoric correlation between two binary variables is a tetrachoric correlation. A unique feature of the LPB approach is that it estimates sample thresholds and each polychoric correlation simultaneously. For binary variables, the LPB method is asymptotically equivalent to all other existing methods, e.g., Christoffersson (1975), Jöreskog (1994), and Muthén (1984). All of these are limited-information approaches, estimating each $\rho_{ab}$ from the corresponding 2 × 2 contingency table based on the variables $z_a$ and $z_b$.
Let $i, j = 1, 2$ and let $f_{ij}$ be the sample frequencies, as before. We again employ the 0.5 addition to these frequencies to reduce the likelihood of non-convergence. Binary variables only have one finite threshold, but for convenience let us define, for variable $z_a$, $h_{a,1} = -\infty$, $h_{a,3} = \infty$, and $h_{a,2} = h_a$. Then, estimates of the thresholds and tetrachoric correlation are obtained by minimizing the negative log-likelihood

$$F_{ab}(h_a, h_b, \rho_{ab}) = -\sum_{i=1}^{2}\sum_{j=1}^{2} f_{ij} \log \pi_{ij}(h_a, h_b, \rho_{ab}), \tag{7}$$

where $\pi_{ij}(h_a, h_b, \rho_{ab}) = \int_{h_{a,i}}^{h_{a,i+1}} \int_{h_{b,j}}^{h_{b,j+1}} \phi_2(y_a, y_b; \rho_{ab})\, dy_b\, dy_a$ is the model-implied probability of cell $(i, j)$. Denote the minimizer of (7) by $\hat\beta_{ab} = (\hat h_a, \hat h_b, \hat\rho_{ab})'$. Let $\hat\rho = \{\hat\rho_{ab}\}$ be the vector of estimated tetrachoric correlations. The LPB method obtains parameter estimates by minimizing the fitting function

$$F_{LPB}(\theta) = (\hat\rho - \rho(\theta))'\, \hat V_{\rho}^{-1}\, (\hat\rho - \rho(\theta)), \tag{8}$$

where $\theta$ is the vector of parameters of the correlation structure model $\Sigma = \Sigma(\theta)$. The matrix $\hat V_{\rho}$ is the appropriate submatrix of the covariance matrix of the threshold and tetrachoric estimates, computed as the triple product $\hat H^{-1} \hat\Omega \hat H^{-1}$, where $\hat H$ is a block-diagonal matrix with blocks of the form $\hat H_{ab}$, consistently estimating $H_{ab} = \lim_{N\to\infty} N^{-1}\, \partial^2 F_{ab}/\partial\beta_{ab}\,\partial\beta_{ab}'$, and $\hat\Omega$ is an estimate of the asymptotic covariance matrix of the corresponding vector of first derivatives of (7). Details can be found in Poon and Lee (1987) and Lee et al. (1995). Standard errors for the parameter estimates can be obtained from the roots of the diagonal of $(\hat\Delta' \hat V_{\rho}^{-1} \hat\Delta)^{-1}$, where $\hat\Delta$ is the matrix of model derivatives evaluated at the LPB parameter estimates. The test statistic $T_{LPB} = (N-1)F_{LPB}$ is asymptotically chi-square distributed with $s^* - q$ degrees of freedom.
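The pairwise estimation step in (7) can be sketched in Python as follows. This is a hedged illustration of the idea (simultaneous estimation of the two thresholds and the tetrachoric correlation for one pair of binary variables) rather than the LPB implementation itself; it assumes SciPy's bivariate normal CDF and a general-purpose optimizer, and the reparameterization of rho through tanh is our own device to keep it inside (-1, 1).

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import minimize

def cell_probs(h_a, h_b, rho):
    """2x2 cell probabilities implied by thresholds (h_a, h_b) and correlation rho."""
    p11 = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]]).cdf([h_a, h_b])
    p12 = norm.cdf(h_a) - p11
    p21 = norm.cdf(h_b) - p11
    p22 = 1.0 - p11 - p12 - p21
    return np.array([[p11, p12], [p21, p22]])

def pairwise_ml(freq):
    """Estimate (h_a, h_b, rho) for one pair by minimizing the negative log-likelihood."""
    f = np.asarray(freq, dtype=float) + 0.5        # same 0.5 correction as in the text

    def negloglik(beta):
        h_a, h_b, z = beta
        p = cell_probs(h_a, h_b, np.tanh(z))
        return -np.sum(f * np.log(np.clip(p, 1e-12, None)))

    res = minimize(negloglik, x0=np.zeros(3), method="Nelder-Mead")
    h_a, h_b, z = res.x
    return h_a, h_b, np.tanh(z)

if __name__ == "__main__":
    print(pairwise_ml([[40, 10], [15, 35]]))
```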
ROBUST APPROACHES BASED ON ULS ESTIMATION
The OR and LPB approaches described above involve GLS estimation, as the fitting functions (6) and (8) involve inverses of asymptotic covariance matrices of sample estimates of tetrachoric correlations. These weight matrices grow very quickly in size as the number of variables increases and may be very unstable in smaller sample sizes. GLS estimation, although asymptotically efficient, may not perform properly in small samples (Hu et al., 1992; West et al., 1995). Evidence exists that its analogs for categorical data also perform poorly in smaller sample sizes (Muthén, 1993; Flora and Curran, 2004). ULS estimation, which uses a simpler consistent but inefficient estimator, uses corrected standard errors and test statistics (Yang-Wallentin et al., 2010; Savalei, 2014). These ULS methods exist for both continuous and categorical data (Muthén, 1993; Satorra and Bentler, 1994) and have been found to perform well in smaller samples (Yang-Wallentin and Jöreskog, 2001; Savalei and Rhemtulla, 2013). We develop and study ULS estimates with robust standard errors and test statistics for both the OR and the LPB approaches. ULS estimation with robust standard errors is implemented as follows for the OR approach. Saturated estimates of the population tetrachoric correlations are obtained according to (5). Estimates of the model parameters are obtained by minimizing $F_{LSOR} = (\hat\rho^* - \rho(\theta))'(\hat\rho^* - \rho(\theta))$, and standard errors for these parameter estimates are computed from the roots of the diagonal of the robust covariance matrix $(\hat\Delta^{*\prime}\hat\Delta^*)^{-1} \hat\Delta^{*\prime} \hat V_{\rho^*} \hat\Delta^* (\hat\Delta^{*\prime}\hat\Delta^*)^{-1}$.
The model test statistic is computed as $T_{LSOR} = (N-1)\,F_{LSOR}/\hat k$, where $\hat k$ is the Satorra-Bentler scaling constant computed from $\hat V_{\rho^*}$ and the model derivatives. The correction by $\hat k$ is intended to bring the mean of the distribution of $T_{LSOR}$ closer to that of a chi-square distribution with $s^* - q$ degrees of freedom, but because the OR correlations are approximate, this statistic may be a very rough approximation, and its usefulness is to be determined. The robust LPB method is developed similarly. Saturated estimates of the population tetrachoric correlations are obtained from (7). Estimates of the model parameters are obtained by minimizing $F_{LSLPB} = (\hat\rho - \rho(\theta))'(\hat\rho - \rho(\theta))$, and the robust covariance matrix and scaled test statistic are computed analogously, with $\hat\rho$ and $\hat V_{\rho}$ in place of $\hat\rho^*$ and $\hat V_{\rho^*}$.

We now describe the results of three simulation studies designed to investigate the performance of GLS and ULS estimation with the OR and LPB methods. The goal of Study 1 was to compare the saturated estimates of the tetrachoric correlations: the OR approximation $\hat\rho^*$ and the LPB estimate $\hat\rho$. Study 2 investigated parameter estimates, standard errors, and test statistics obtained from GLS estimation. Finally, Study 3 investigated the performance of ULS estimation with robust standard errors and test statistics. The focus was on small sample performance.
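Before turning to the simulation studies, here is a minimal Python sketch of the ULS fitting step just described, assuming that a saturated matrix R of tetrachoric estimates (from either the OR or the LPB formulae) is already available. It fits the 2-factor structure used below by minimizing the unweighted sum of squared residuals; the parameterization, function names, and toy input are our own, and the robust standard errors and scaled statistic are omitted.

```python
import numpy as np
from scipy.optimize import minimize

def implied_corr(theta, pattern):
    """Model-implied correlation matrix Lambda Phi Lambda' with a unit diagonal."""
    loadings, phi = theta[:-1], np.tanh(theta[-1])
    Lambda = np.zeros((len(pattern), 2))
    for i, factor in enumerate(pattern):
        Lambda[i, factor] = loadings[i]
    Phi = np.array([[1.0, phi], [phi, 1.0]])
    Sigma = Lambda @ Phi @ Lambda.T
    np.fill_diagonal(Sigma, 1.0)          # Psi absorbs the remaining unique variance
    return Sigma

def uls_fit(R, pattern):
    """ULS estimates: minimize the sum of squared off-diagonal residuals."""
    iu = np.triu_indices_from(R, k=1)

    def loss(theta):
        return np.sum((R[iu] - implied_corr(theta, pattern)[iu]) ** 2)

    start = np.concatenate([np.full(len(pattern), 0.5), [0.0]])
    est = minimize(loss, start, method="L-BFGS-B").x
    return est[:-1], np.tanh(est[-1])     # loadings, factor correlation

if __name__ == "__main__":
    pattern = [0, 0, 0, 0, 1, 1, 1, 1]    # 8 items, 4 per factor
    # Toy input: the population correlation matrix implied by loadings 0.8, phi = 0.5.
    R = implied_corr(np.concatenate([np.full(8, 0.8), [np.arctanh(0.5)]]), pattern)
    loadings, phi = uls_fit(R, pattern)
    print(np.round(loadings, 3), round(phi, 3))
```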
METHOD
Data were generated from a model similar to one used by Lee et al. (1995) to evaluate their method. This is a CFA model with 8 variables and 2 factors, with covariance structure $\Sigma(\theta) = \Lambda\Phi\Lambda' + \Psi$, where

$$\Lambda' = \begin{pmatrix} \lambda & \lambda & \lambda & \lambda & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \lambda & \lambda & \lambda & \lambda \end{pmatrix}, \qquad \Phi = \begin{pmatrix} 1.0 & 0.5 \\ 0.5 & 1.0 \end{pmatrix}.$$

The factor loadings λ were set to equal either 0.6 or 0.8. With factor loadings of 0.6, the correlations among variables within the same factor are 0.36, and the correlations among variables across different factors are 0.18. With factor loadings of 0.8, the correlations among variables within the same factor are 0.64, and the correlations among variables across different factors are 0.32. The generated continuous data were then categorized to create dichotomous data using a set of eight thresholds. The thresholds were chosen to be either mild or moderate. The mild set of thresholds was (0.5, −0.5, 0.5, −0.5, 0.5, −0.5, 0.5, −0.5). This set of thresholds is relatively homogeneous and cuts the continuous distribution very near its center. The moderate set of thresholds was (−1, 0.8, −0.6, 0.2, −0.2, 0.6, −0.8, 1). This set of thresholds is more heterogeneous, and the cut-off point is often far from the center. It also creates some pairings of high opposite-signed thresholds, a difficult situation for most methods to handle. Sample size was set to N = 20, 50, or 100. With continuous data, sample sizes in the 20-40 range were studied by Nevitt and Hancock (2004). Thus, there were a total of 12 conditions in this 2 (λ = 0.6 or 0.8) × 2 (thresholds mild vs. moderate) × 3 (N = 20, 50, 100) design. This design remained the same across the three studies. Although some SEM simulation studies have used 5000 or more replications per condition, the LPB method is computationally intensive, and 500 replications were generated within each condition.
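To make the data-generating design concrete, the following Python sketch (an illustration we add here; the variable names and seed are arbitrary) draws multivariate normal data from the 8-variable, 2-factor model and dichotomizes it at the stated thresholds:

```python
import numpy as np

rng = np.random.default_rng(1)

def generate_binary(n, loading, thresholds, phi=0.5):
    """Simulate n cases from the 2-factor model and dichotomize at the thresholds."""
    Lambda = np.zeros((8, 2))
    Lambda[:4, 0] = loading
    Lambda[4:, 1] = loading
    Phi = np.array([[1.0, phi], [phi, 1.0]])
    Sigma = Lambda @ Phi @ Lambda.T
    np.fill_diagonal(Sigma, 1.0)                   # unit-variance latent responses
    y = rng.multivariate_normal(np.zeros(8), Sigma, size=n)
    return (y > np.asarray(thresholds)).astype(int) + 1   # coded 1/2 as in the text

mild = [0.5, -0.5, 0.5, -0.5, 0.5, -0.5, 0.5, -0.5]
moderate = [-1, 0.8, -0.6, 0.2, -0.2, 0.6, -0.8, 1]
z = generate_binary(n=50, loading=0.8, thresholds=mild)
print(z[:5])
```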
The goal of Study 1 was to examine the correlations and their standard errors produced by the OR method and the LPB method; a saturated model was thus fit to the data. The goal of Study 2 was to assess the GLS estimates in both the OR and LPB methods. The 2-factor model was fit to the data, and GLS estimation was carried out with the weight matrix computed using either the OR or the LPB formulae. The goal of Study 3 was to examine the ULS estimates with robust standard errors and test statistics. The 2-factor model was fit to the data using ULS estimation, and the standard errors and test statistics were corrected using the asymptotic covariance matrix computed from either the OR or the LPB formulae.
To compare the accuracy of the estimated parameters, average estimates of all parameters were computed, as well as their empirical standard deviations. Additionally, the root mean squared error (RMSE), which is the square root of the average squared deviation of the parameter estimate from its true value, was computed. This measure may be preferred to the empirical standard deviation because it combines bias and efficiency, and is thus an overall measure of the quality of an estimator. The OR method relies on an approximation to the tetrachoric correlation and will produce biased parameter estimates. To compare the accuracy of the standard errors, estimated standard errors are reported, to be compared to both the empirical standard errors and to the RMSE. To evaluate the performance of the test statistics in Studies 2 and 3, empirical rejection rates are reported.
STUDY 1
The results for Study 1 are presented in Table 2. The four types of generated data are labeled as follows: Condition I represents mild (homogeneous) thresholds; Condition II represents moderate (heterogeneous) thresholds; Condition A represents high factor loadings (0.8); and Condition B represents lower factor loadings (0.6). For readability, the results are combined by the size of the correlation. In the A conditions, all population correlations were either 0.64 or 0.32. In the B conditions, all population correlations were either 0.36 or 0.18. The LPB method had trouble achieving convergence in some conditions. When fewer than 500 replications converged, the actual number of replications is noted in the last column of the table. The OR method converged for all replications under all conditions. The LPB method did not converge in about 4% of the cases at the smallest sample size with heterogeneous thresholds (the II conditions). For the converged replications, standard error estimates associated with the LPB estimator were sometimes enormous, leading to nonsensical average estimated standard errors. To deal with this problem, estimated standard errors greater than 100 were excluded from that column only. This occurred only at N = 20, and the number of replications that were thus removed is noted in the table. This problem largely went away when the sample size was N = 50 or higher.
Examining average parameter estimates, we find that both the OR and the LPB method underestimate the size of the correlations, and this bias is worse for (a) smaller sample sizes, (b) larger correlations, and (c) heterogeneous thresholds. The worst case is in Condition IIA for N = 20, when the average estimate of the correlation of 0.64 is 0.43 for the OR method and 0.45 for the LPB method. The LPB correlations are slightly closer to the true value but this difference is small. We have reason to believe that this downward bias occurs because of the addition of 0.5 to the frequency tables to remedy zero frequency cells. Without the 0.5 addition, the LPB method is extremely unstable and often cannot proceed with the computations. We advocate this small sample correction, therefore, despite its impact in terms of small sample bias. By N = 100, the average value of the estimated correlations is reasonably close to the true value.
Even though we report empirical standard errors, the comparison of empirical and estimated standard errors is technically only appropriate for the LPB method, because this method produces consistent parameter estimates. However, we find that empirically, the two methods do not differ much in terms of bias, and we proceed with comparing estimated standard errors to both empirical standard errors and to the RMSEs. For the LPB method, the empirical and the estimated standard errors are very close in most cases. However, the estimated standard error is always less than the actual empirical standard error. This is expected as estimated standard errors are based on asymptotic results. This pattern is reversed for the OR method. The estimated standard error for the OR method is always larger than the empirical standard error, which is actually appropriate given the bias. The difference is most pronounced for the largest correlation of 0.64 when thresholds are heterogeneous and sample size is small. The most appropriate measure of the overall quality of the estimator, combining both bias and efficiency, is the RMSE. The average RMSE difference between OR and LPB methods is −0.00004, which is slightly in favor of the OR method but is tiny. The largest difference is in Condition IIB at N = 20, where the difference in RMSEs is −0.01 (0.32 vs. 0.33). The RMSE difference is in favor of OR for smaller correlations. Based on number of converged cases, the RMSE measure of bias and efficiency of parameter estimates, and the quality of estimated standard errors, we conclude that the OR method slightly outperforms the LPB method, and this difference is most pronounced in smaller samples.
STUDY 2
The results for Study 2 are presented in Table 3. For readability, the results are combined by type of parameter: factor correlation or average factor loading. The population factor correlation was 0.5 in all conditions. In the A conditions, all loadings were 0.8, and in the B conditions, all loadings were 0.6, so that an average is appropriate. The LPB method failed to converge in all replications for N = 20. Fitting even a small structural model with six parameters to such a sample size may be difficult. Notably, the OR method reaches convergence for the majority of the cases at N = 20. In addition to convergence problems, outlying cases presented more of a problem in this study. Whereas in Study 1 outlying cases were only observed for estimated standard errors, in this study outlying cases were observed for parameter estimates as well, and they occurred for both methods, making it difficult to conduct any meaningful comparisons. Thus, outlying replications were defined as any replication where the absolute value of any parameter estimate exceeded 100. The columns labeled "OR N" and "LPB N" report the number of cases used in the analysis, with the number of excluded outliers in parentheses. The difference is due to non-convergence. For example, in Condition IA, the OR method produced 488 converged cases, of which 2 were outliers, resulting in a total of 486 usable cases. The LPB method generally had more trouble with convergence than the OR method did, with the most pronounced difference occurring when factor loadings were high and thresholds were heterogeneous (Condition IIA). Only 168 cases converged for the LPB method in this condition at N = 50, compared to 494 cases for the OR method. Convergence was generally worse for both methods when thresholds were heterogeneous. Examining the average estimates of the factor correlation, we find that both methods overestimate its value, more so at the smaller sample sizes, and LPB is more biased than OR in all conditions. By N = 100, the estimates produced by the OR method are reasonable (the average estimated factor correlation is around 0.56-0.59 across the four conditions), but the bias of the LPB method is still substantial, with the average estimate ranging from 0.58 to 0.70. The bias of the LPB estimator is worse for heterogeneous thresholds. The average factor loadings are somewhat biased downward for the OR method at N = 20, and LPB is unable to produce any estimates at this sample size. The average factor loadings at higher sample sizes for the OR method are very reasonable, but for the LPB method they are somewhat biased upward. The surprising conclusion, therefore, is that the OR method seems to be less biased, on average, than the LPB method, despite the theoretical prediction of the opposite pattern. This result illustrates the difference between asymptotic results and small sample behavior.
Because in smaller samples the bias of parameter estimates is substantial, the RMSE and the empirical standard error often differ significantly. It is thus unclear how to evaluate the performance of the estimated standard errors. However, comparing them to either the empirical standard errors or to the RMSE leads to similar conclusions: the estimated standard error is severely downward biased for both methods at smaller sample sizes. The empirical standard error is huge, especially for factor correlations at N = 20 (OR only), and this is not reflected in the estimated standard error. The difference is substantial for factor loadings as well: it is on the order of 0.1 for homogeneous thresholds and 0.2 for heterogeneous thresholds. However, at N = 50, and when thresholds are homogeneous, the OR method produces more comparable empirical and estimated standard errors. The LPB method still exhibits substantial bias. For heterogeneous thresholds, both methods require at least N = 100 before the estimated standard errors are reasonably similar to the empirical standard errors. The difference in the RMSEs is in favor of the OR method in 14 out of 16 comparisons, and this difference is more pronounced for factor loadings. The OR method thus appears to be superior both in terms of convergence rates and in overall quality, using the bias/efficiency RMSE measure. Table 3 also presents the estimated coverage probabilities for the 95% confidence intervals of the two model parameters. The estimated coverage probabilities for the OR and LPB approaches are far below the nominal 0.95 level, and neither confidence interval approach can be recommended with GLS estimation. Table 4 reports the rejection rates of the goodness-of-fit test statistics using the OR and LPB approaches with GLS estimation. Good performance is not expected here, as sample sizes are too small to have reached convergence to chi-square for the LPB statistic, and the OR statistic is not chi-square distributed because the OR estimator is not consistent.
The LPB statistic rejects too many models across all sample sizes and conditions. It therefore cannot be used to evaluate model fit in such small samples. The OR statistic performs poorly at N = 20, over-accepting models. At larger sample sizes, it performs nearly optimally for higher factor loadings (the A conditions), and over-rejects models for lower factor loadings (the B conditions), though not nearly as much as the corresponding LPB statistic. The goodness-of-fit test using GLS estimation performs better using the OR approach than the LPB approach.
STUDY 3
The results for Study 3 are presented in Table 5. The format of presentation is the same as for Study 2. The most noticeable difference as compared to Study 2 is that ULS estimation has led to drastically fewer convergence problems as compared to GLS estimation. Convergence is still worse for heterogeneous thresholds, but at least 85% of cases converged in all conditions even at the smallest sample size. There is generally no difference in convergence rates between the OR and LPB methods, except at the smallest sample size of N = 20 for conditions with heterogeneous thresholds, when the LPB method produces quite a few more non-convergent cases. We implemented the same method of outlier deletion based on parameter estimates as in Study 2. Interestingly, the number of outlying cases that had to be excluded is somewhat greater for ULS estimation than for GLS estimation; it may be that cases that failed to converge under GLS are more likely to produce poor parameter estimates under ULS. Even though ULS estimation is used in both approaches, they still differ because a different saturated estimator of the tetrachoric correlation was used in the optimization. For small sample sizes (100 or less), ULS estimation is better than GLS estimation: the average ULS parameter estimates appear to be much more accurate than the GLS estimates from Study 2. There is not much difference across methods or across conditions in the estimates of the factor correlation. Interestingly, the factor correlation is almost always overestimated. The OR method does somewhat better, producing averages closer to the true value of 0.5. The average factor loading is again underestimated, but the bias is considerably less. Here, the OR method does better with higher factor loadings (the A conditions), while the LPB method does better with lower factor loadings (the B conditions).
Estimated robust standard errors with ULS estimation are much more similar to the actual empirical standard errors than for the GLS estimates in Study 2. With ULS estimation, the OR method tends to match empirical and estimated standard errors a bit better for the factor correlation, while the LPB method does a bit better with factor loadings, excluding some cases where at N = 20 this method still produces very large standard errors. Interestingly, empirical standard errors across methods are nearly identical for the A conditions (higher factor loadings), but the LPB method is slightly more efficient in the B conditions (lower factor loadings). Returning to the RMSE as a global measure of estimator quality, we find that the differences in RMSEs are in favor of the LPB method in 19 out of 24 conditions; however, the largest difference in RMSEs is 0.016, and the average is 0.004, so that the advantage is minimal. Table 5 also presents the estimated coverage probabilities for the 95% confidence intervals of the model parameters. The Type I error rate for a test that the parameter value equals zero is (100 - Cov)/100, where Cov is the estimated coverage probability. The OR approach has estimated coverage probabilities that are closer to 0.95 and Type I error rates that are closer to 0.05 than the LPB approach.
Table 6 reports the rejection rates for the robust goodness-of-fit test statistics for both methods. These are Satorra-Bentler scaled chi-square statistics (Satorra and Bentler, 1994), which rely on the estimated asymptotic covariance matrix of the sample correlations but do not require its inverse. Neither of these statistics is chi-square distributed, and both are approximations. The LPB statistic has a mean that equals that of a chi-square variate, while the OR scaled statistic is incorrect even in the mean because the original OR saturated estimator is biased. The ULS test statistic based on the OR method over-accepts models in nearly all conditions. The LPB robust statistic performs quite well, except in the A conditions (lower factor loadings and heterogeneous thresholds), where it over-rejects models.
Lastly, we briefly compare the results of Study 2 and Study 3. It is often said that GLS estimation is asymptotically efficient while ULS estimation is inefficient. Our results show that the word "asymptotically" is important in the definition of the efficiency of the GLS estimator. Not only does the simple ULS estimator have the advantage of greater stability, as captured by the high convergence rates, but it also appears to be more efficient in the smaller samples studied here. The average difference in the RMSEs between the GLS and the ULS estimators is 0.036 for the OR method and 0.058 for the LPB method, so that the ULS estimator actually has less empirical variability around the true parameter values in the sample sizes studied. While these numbers are small, they nonetheless demonstrate that an estimator with the best asymptotic properties is not necessarily the best estimator in practice.
DISCUSSION
This paper developed the statistical theory for a new structural modeling methodology based on a recently proposed OR estimator of the tetrachoric correlation (Bonett and Price, 2005), including both GLS and ULS estimation methods. We also extended the Lee et al. (1995) method to ULS estimation with robust corrections to the standard errors and test statistics. The algebra and statistics used to develop these extensions follow directly from Satorra and Bentler (1994).
The new OR methodology is easy to implement. It does not require integration, as the direct tetrachoric estimator does, and can be easily programmed. Its asymptotic covariance matrix is also easy to compute. The GLS OR approach outperforms the GLS LPB method in all conditions. Perhaps the main advantage of the OR method is that it converges more often than the LPB method, especially when the sample size is small and/or there are moderate-size thresholds. Moderate-size opposite-signed thresholds often lead to the breakdown of traditional methods. The ULS OR approach is largely equivalent to the ULS LPB approach.
Obviously, larger sample sizes will give more reliable parameter estimates as well as more powerful test results. The corrected test statistic (Satorra and Bentler, 1994) for the ULS LPB method worked well in much smaller samples than have recently been studied or recommended in categorical variable research (e.g., Flora and Curran, 2004;Beauducel and Herzberg, 2006;Nussbeck et al., 2006). Of course, at very small sample sizes the test statistic may not be very useful as it may lack power. However, the power issue notwithstanding, this robust statistic for the LPB approach maintains Type I error remarkably well.
In the conditions studied, there was no detectably greater bias in parameter estimates when the OR methodology was used. Asymptotically, there will be a bias, particularly when the correlations are very large and based on very dissimilar thresholds, as we illustrated with the Mathematica 5 plots. The values of the correlations and thresholds we used in our simulations were chosen to represent more typical values that should show some minimal bias. Evidently, when the sample size is not too large and the bias is considered within a structural model based on many correlations with varying potential for bias, such bias is not necessarily visibly propagated to the model's fundamental parameters. Further research is needed to determine the sample size at which the LPB method performs better than the OR method. Such a determination should, however, be made in a relative sense, since the very conditions that will likely yield problems for the OR method, such as extreme but opposite thresholds associated with positively correlated variables, will also cause traditional tetrachoric-based methods to break down. While under some circumstances no method may perform perfectly, we predict relatively favorable success for the OR method in moderate sample sizes.
We developed robust least squares approaches both for the OR and LPB methods based on the Satorra and Bentler (1994) methodology, and found that the ULS estimator and the associated robust standard errors were very good. Whether or not an estimator that may be more efficient asymptotically, such as the diagonally weighted least squares (DWLS) estimator, would perform better at such small samples as those studied here remains to be determined in future research. The ULS and the DWLS estimators have been found to perform similarly (Maydeu-Olivares, 2001), and ULS may be preferred (Rhemtulla et al., 2012). We suspect that the stability of ULS in small samples may be more important in practice than any theoretical and asymptotic improvements in efficiency.
In addition to CFA applications, the OR approach is promising in other settings. The OR computations are extremely fast and could have important applications in the exploratory factor analysis of questionnaires with a large number of dichotomous items. Zou (2007) developed accurate methods of constructing confidence intervals for the difference between Pearson correlations computed from the same sample. The Zou confidence interval approach can now be extended to OR tetrachoric approximations using the new results given in the Appendix.
The OR approach has now been implemented in the current version of EQS so that researchers can compare the results of this new method with those of other methods. Programmers who want to develop OR methods for other SEM packages will now be able to check their results against the EQS results.
Non-Epicurean Desires
In this paper, it is argued that there can be necessary and non-natural desires. After a discussion about what seems wrong with such desires, Epicurus’ classification of desires is treated similarly to Kripke’s treatment of the Kantian table of judgments. A sample of three cases is suggested to make this point.
Introduction
There are impossible desires, those that cannot be satisfied due to physical limitations: flying like a bird, teleporting to the other side of the world, and the like. But this paper will talk about conceptual impossibilities, that is, whatever cannot be thought of by definition. Let us see why it is largely assumed that there are also conceptually impossible desires.
Epicurean desires
After describing Epicurus' classification of desires, we will focus on his assumed opposition between necessity and vanity.
Analogy I: Modal tables
Due to its connection with epistemological concepts like belief, will, or intention, desire has been studied as a modality of action impacting on human judgment (see, e.g., Gardiès, 1979). Like necessity or knowledge, desire can be studied as a proper, "boulic" modality of judgment.
There also seems to be a structural analogy between epistemic and boulic modalities, based on the distinction between subjective and objective contents of judgment. The objective part is expressed by the concept of knowledge, which makes the transition from belief to truth through justification; in the same way, the objective part of desire is expressed by the concept of will, which transitions from desire to reality through intention. Table 1 helps to see this analogy in a clearer way.
The sort of impossibility on which I want to focus is not located in the object of desire; rather, it relates to inconceivability and lies at a definitional level. A look at Epicurus' theory of desire may throw some light upon it. Epicurus distinguishes three kinds of desires: natural and necessary desires; natural but non-necessary desires; and vain desires. The first category splits into three different goals: happiness (eudaimonia); bodily well-being (aochlèsia); and life for itself (survival means). The second category corresponds to merely natural desires that can be dispensed with: sexual activities, pleasures of the table, and the like. The third category of desires is considered to be the source of pain and an unhappy life: avidity, anger, thirst for power, etc. Epicurus strongly advises us to avoid these bad desires because they are "devoid" (kenai) or empty desires. These are the most important in what follows, due to their radical opposition to necessary desires: quests for power, riches, or honour are said to be "empty" desires insofar as they do not have a proper end.
Vanity against necessity!
This statement may seem difficult to accept for anyone who has not studied theoretical philosophy: who can claim that power, riches, and honour are not frequent targets in our everyday life? Who has never dreamt about seducing the prettiest boy or girl in one's class, or becoming the president of one's state? Of course, Epicurus would reply that such desires are not "empty" in the sense of having no finality. Rather, the trouble with these vain desires is their self-growing and endless development: they can never be fully satisfied, insofar as their temporary satisfaction always leads to other, more demanding desires. For this reason, Epicurus urges us to renounce them, because attempting to fulfil them completely is a vain project. In this respect, the third category of desires corresponds to the sort of impossible desires mentioned at the beginning of the paper.
Vanity for necessity?
And yet, a capitalist-friendly agent could reply that the self-cumulative feature of vain desires is a virtue: just as Protestants bless a sense of effort and hardworking behaviour, it might be replied to Epicurus that nothing great can be accomplished without obstinacy. Or even that the criterion of asceticism for a good life is just a bad excuse for lazy losers. Here is where Epicureans depart from capitalists, roughly speaking: vain desires are vicious for the former, while they are virtuous for the latter. Pleasure is not the whole story.
Therefore, the controversy lies in the moral value of necessity and finiteness: according to the Epicureans, good desires are those that can be satisfied within the limits of human nature; knowledge of this nature is a precondition of happiness.
Is there any sort of sufficient reason behind such a classification of desires? Let us try to tackle this issue, and even to challenge the Epicurean taxonomy of desires.
Non-Epicurean desires
First, let us consider the way in which Epicurus came to his famous classification. Then let us see how far it can be altered in a relevant way.
A combinatorial picture
Two basic elements are used by Epicurus, namely, naturalness and necessity. A combination of both results in the three above kinds of desires; however, a pure combination of the two elements should yield not three but, rather, four elements. Let Na and Ne be symbols for naturalness and necessity, respectively. Then the powerset of the basic set of desires D = {Ne, Na} is P(D) = {{Ne,Na}, {Ne}, {Na}, {}}. The first subset {Ne,Na} is the set of both necessary and natural desires, whilst the third {Na} is the set of natural and not necessary desires. Somehow ironically, the fourth subset of "empty", i.e., neither necessary nor natural desires, corresponds to the empty subset {}. As for the second element {Ne}, it is never mentioned in Epicurus' theory of desire: the subset of necessary and not natural desires. Admittedly, philosophers will mostly reply that such a combinatorial picture presents a typically anachronistic reading that is both immaterial and misleading for any serious philosophical investigation. But does it?
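For readers who want to see the combinatorics spelled out, a few lines of Python (a purely illustrative sketch added here, using the labels Ne and Na from above) enumerate the four subsets of D:

```python
from itertools import chain, combinations

D = ["Ne", "Na"]  # necessity, naturalness
subsets = list(chain.from_iterable(combinations(D, r) for r in range(len(D) + 1)))
print(subsets)    # [(), ('Ne',), ('Na',), ('Ne', 'Na')] -- four combinations, not three
```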
Our initial question about whether there are conceptually impossible desires can thus be reformulated as follows: can there be desires that are both necessary and not natural? A momentary reflection should be sufficient to answer negatively; but this is just the beginning of the investigation: the point is not to rely upon commonsensical beliefs but, rather, to find their roots and see how these can be relevantly questioned.
Table 1. An analogy between epistemic and boulic modalities (epistemic / boulic): objective knowledge / will; subjective knowledge ("I am sure of it!") / subjective will ("I want it!"); belief / desire; doubt / aboulia; justification / intention.
Analogy II: Kantian table
An analogy between two sorts of modalities was drawn above. The same can be drawn between Epicurean desires and Kantian judgments (in Kant, 2007 [1781]), despite their assumed difference in nature. At the same time, the comparison made by Gardiès (1979) between epistemic and boulic modalities refers to two kinds of judgment. Could desire be viewed as a special sort of judgment? Or should so-called boulic modalities be restricted to the sole case of will, i.e., the objective part of desire? In fact, the following analogy does not need to fulfil the condition that desires be proper sorts of judgment. For analogy differs from identity: an analogy consists in saying that whatever holds for a with respect to b also holds for c with respect to d, even if there is no logical interconnection between the elements of the pairs a,b and c,d. For our present concerns, let a and b be symbols for the Epicurean necessary and natural desires, respectively; and let c and d be symbols for the Kantian analytic and a priori judgments. The same table (Table 2) appears as with the previous analogical table (Table 1) of epistemic and boulic modalities, accordingly:
Table 2. Epicurean desires and Kantian judgments: (1) necessary, natural / analytic, a priori; (2) necessary, cultural / analytic, a posteriori; (3) contingent, natural / synthetic, a priori; (4) contingent, cultural / synthetic, a posteriori.

This combination (necessary and cultural) seems meaningless, indeed, given that culture is the domain par excellence of contingent things like habits, norms, or taboos. Nevertheless, such an idea is not more absurd than the hypothesis of analytic a posteriori judgments.
Table 3 can be qualified in several ways: by restricting, extending, or squarely cancelling its valid combinations.
Restriction of Kant's table
According to Kant (2007 [1781], B15-16), the judgment "7 + 5 = 12" is both synthetic and a priori: it is synthetic because it is not analytic, insofar as the predicate concept "equal to 12" is not contained within the subject concept "7 + 5"; it is a priori, because the justification of such a predication does not depend upon experience. The main problem concerns what Kant meant with "analyticity": a containment relation between subject and predicate in a judgment. How can it be warranted that the number 12 contains the sum of 7 and 5? What is the source of such a relation?
A controversy arose at the end of the 19th century between those who took logic to ground mathematics (e.g., Frege, 1980 [1884]; Carnap, 1947) and those who did not (e.g., Poincaré, 1968 [1902]). According to the former, "7 + 5 = 12" is not a synthetic but, rather, an analytic a priori judgment: the concept "7 + 5" is taken to be necessarily identical to the concept "12". Like Kant, this assumes a connection between arithmetic and apriority: no such justification can stem from the domain of experience, given that it is in principle not possible to find counterevidence against what is grounded a priori, i.e., universally. Against Kant, the concept of analyticity has been separated from the criterion of containment and updated by the logical positivists: an analytical judgment is a judgment that is true by definition, according to the meanings given to its terms in a given language. This more conventional definition helps to avoid the psychologistic connotation of analyticity. Above all, it shows how a controversy can be raised in philosophy with a redefinition of its main concepts. Why not do the same with the Epicurean table of desires?
Extension of Kant's table
According to Kripke (1980), there is no restriction at all in the Kantian table of judgments: each row constitutes a proper judgment on its own, including the case of necessary a posteriori judgments. Again, a prior redefinition of the basic terms is required to go from a negative to a positive reception of Kripke's strategy. The same should apply to an extension of Epicurus' table of desires. Take Kripke's famous case of analytic a posteriori judgment: "Water is H2O". Strictly speaking, "analytic" should be replaced by "necessary" in Kripke's terminology; and "judgment" should be turned interchangeably into "proposition" or "sentence". How can such a sentence be true in every case, anyway?
Note that Kantian judgments are lexicalized by positive concepts, unlike some of the Epicurean desires. However, it is not difficult to find positive counterparts to "not necessary" and "not natural": contingent, for the former; cultural, for the latter. Consequently, our main issue can be reformulated as follows: can there be desires that are both necessary and cultural?
An obvious similarity arises between the way in which Epicurus and Kant made use of their respective concepts: there are three and only three possible combinations, in both cases. This is displayed in Table 3, where the same shaded row (2) is ruled out by the two theories.
Firstly, Kripke proposes a redefinition of analyticity in terms of necessary truth, i.e., truth in every possible world. Whether "possible" is to be taken in the same sense as "conceivable" or not is not at issue now, despite the close connection between conceivability and our issue of conceptually impossible desires.
Second, Kripke's point is that there are some sentences that are both necessary and based on experience. Thus, the chemical nature of water is taken to be a scientific fact; but as a fact, it needs to be discovered by experimental methods before it is shown to be true necessarily. There seems to be a clear-cut difference between the present controversy between Kant and Kripke and the opposition between Kant and the logical positivists: in the latter case, the two sides agreed that experience has no role to play in the justification of arithmetic judgments; in the former case, however, Kripke claims that experience does contribute to the justification of analytic judgments.
Isn't there some misunderstanding here when it comes to the usual distinction between the origins of a concept and its justification? For if Kant accepted the empirical origin of concepts like numbers and operation signs, this does not mean that he thereby accepted the empirical foundation of an arithmetical identity such as "7 + 5 = 12". In a nutshell, doesn't Kripke, with his necessary aposteriority, reproduce the mistake made by John Stuart Mill? According to Mill (1974 [1843]), the empirical observation that adding seven apples to five oranges resulted in a set of twelve fruits was taken to be an argument for the empirical foundation of mathematics. Just as Frege (1980 [1884]) stressed this confusion between occasion and foundation, Kripke (1980) could equally be blamed for reproducing the same conceptual flaw.
Yet this is not the case. I take this distinctive reception of Kripke's hypothesis to rely upon a deep revision of what "analyticity" means. From a Kantian perspective, analyticity is closely related to the categories of pure reason, i.e., to what stands in the a priori conditions of thought. No such transcendental analysis seems to be at hand in the Kripkean account of necessary a posteriori sentences: "Water is H2O" is a necessarily true sentence not in the light of pure reason but, rather, as a discovery holding in every world. Kantians must view rigid designators as a regressive emancipation of metaphysics from epistemology; in any case, the changes in analytic philosophy from Kripke (1980) to the two-dimensionalism of Chalmers (2006) should be tolerated with respect to the classification of desires too.
Cancellation of Kant's table
Recall how Quine also shook the ongoing debate around the distinction between analytic and synthetic. According to Quine (1951), there is no difference in nature but, rather, a mere difference in degree between these two kinds of judgment. Mathematical and logical sentences are "more analytically true" than truths from empirical sciences, but there is no purely analytic or synthetic sentence in the sense assumed by the Kantian table of judgments. Besides, Quine claimed that Carnap's distinction between "external" and "internal" questions relies upon an arbitrary distinction between facts and theories. Every true sentence has an empirical content, Quine argued, in the sense that true sentences always have a linguistic and a factual component. Whether Quine's rejection of pure analyticity should be endorsed is not the point; rather, the controversy helps to call attention to those who accept the Epicurean table unreflectively.
Entailment
In any case, there is something common between the aforementioned tables: both locate the problem in row (2). A logical analysis shows that this corresponds to the issue of entailment.
Analogy III: Truth-table
Another deep epistemological obstacle seems to justify the open consensus around Epicurus' taxonomy of desires: the set-theoretical relation of inclusion between necessary and natural desires. A logical link with set theory is easily made through the connective of the conditional, which is said to approximate the relation of entailment. Little wonder that the Kantian table of judgments nicely matches the truth-table characterizing the logical connective of the conditional, "if … then" (Table 4).
The prohibited shaded row (2) is the case in which the antecedent p is true and the consequent q is false, in the conditional sentence p ⊃ q. A logical interpretation of this table comes to the same result as a Kantian interpretation of analytic a posteriori judgments: it is impossible for the complex sentence p ⊃ q to be true whenever p is true and q is not true (i.e., false), just as it is impossible for a desire to be entertained whenever it is said to be necessary and not natural (i.e., cultural). The same sort of inclusive relation is presupposed by Kant's transcendental philosophy: no judgment can be analytically true and a posteriori at once, according to the Kantian reading of analyticity in terms of the categories of pure reason inherent to human nature. Similarly, no desire can be necessary and cultural at once, according to the Epicurean reading of necessity in terms of the properties inherent to human nature. Drug addiction cannot be said to be "necessary" in this sense: it is made necessary, and such a necessitation is not a feature of human nature at all but a mere by-product of cultural devolution.
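For concreteness, the truth-table in question is the standard table of the material conditional (this spells out what the caption of Table 4 describes, with row (2) as the "forbidden" case):

      p   q   p ⊃ q
(1)   T   T     T
(2)   T   F     F
(3)   F   T     T
(4)   F   F     T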
By analogy, any disagreement about the content of the preceding table is a disagreement about the logical relation between its terms. On the one hand, the logicists think that there is no entailment but, rather, an equivalence or bi-conditional relation between analyticity and apriority: whatever is analytic (or synthetic) is therefore a priori (or a posteriori), and conversely; on the other hand, Kripke thinks that there is no logical connection at all between the two: whether a sentence is necessary (or not) entails nothing particular about its being a priori or a posteriori. Finally, Quine cancels the logical relation by refusing any single occurrence of antecedent and consequent.
At any rate, whoever wants to break with the limits of Kant's transcendental reason and opt for Kripke's possible worlds should also tolerate the same stance with respect to the limits of human nature. If so, why not extend necessity beyond the realm of naturalness, just as Kripke did by building a channel between necessity and aposteriority?
Relative necessity
Nothing of this kind can be conceived with necessity, so long as the latter is associated with eternity. On the one hand, whatever is eternal is standing and cannot change, in opposition to the poietic feature of cultural things. On the other hand, the conjunction of necessity and culture can be validated if necessity is reduced to a relative or context-dependent sense of irrevocability.
Borrowing from the Aristotelian distinction between relative (haplos) and absolute (katolou) forms of true predication, a logical truth is said to be either relative or absolute according to whether its truth depends on given premises or not. Syllogisms are relatively necessary truths, insofar as the conclusion cannot be validated without accepting at least two prior sentences. The same holds for theorems in modern logic: a given formula can be a theorem in one logical system and not a theorem in another, as witnessed by "p or not p": this is a theorem in classical logic, but not in intuitionist logic.
Comparison is not reason, however, and an advocate of natural philosophy might reply that the way in which logicians handle necessity has nothing to do with the topics that Epicurus dealt with. My reply, again, is that Kant's concept of analyticity also differs from what Kripke meant by necessity: the former was in accordance with the epistemic categories of pure reason, whereas the latter concerns metaphysical truth in every possible world. Therefore, in the following I shall give a relative non-answer to the initial question: are there impossible desires, conceptually speaking?
Necessitation
A relative sense of necessity implies that whatever has been accepted in the past cannot be modified afterwards, just like the rules of a game. Such an analogy with game theory echoes what Bouveresse (1987) has said about Wittgenstein's language-game theory: quoting Goethe in his preface, Bouveresse compares the foundations of language games with the mysterious sources of human societies by depicting them as products of necessitated conventions. This quotation insists on a paradoxical link between necessity and conventionality: convention is a product of contingent decision, and whatever is contingent cannot be necessary, by definition. The explanation is this: conventions are contingent by definition; however, they are made necessary or necessitated once accepted within a given area of study: language, with Wittgenstein; mind, with Epicurus.
An ultimate epistemological obstacle has to be overcome in order to proceed with desires as Kripke did with judgments: the view that human nature is given once and for all independently of human cultures, just as Kant located analytic judgments beyond particular experience. Anyone who has sympathy for transhumanism should make the jump without difficulty. But it can be done without even assuming such a metaphysical stance.
Three cases for cultural necessitation will now be suggested, namely: neurotic desires; mimetic desires; and taboos, as a negative version of necessitated "counter-desires".
In Freud's (1991 [1916]) theory of the unconscious mind, "the ego is not the master in its own house". According to Freud, this is due to the opposition between two instances of mind, viz. conscious and unconscious. Whether or not psychoanalysis is a reliable theory is not the point; rather, the explanatory role of neuroses is that they give an example of unnatural desires that are formed after birth and through the negative effect of repressive education. If such uncontrollable desires are accepted as neuroses, then there are some desires that are both impossible for the agent to restrain and non-natural.
Following Girard (1966), desire is an essentially mimetic process: far from the naturalistic picture given by Epicurus, some desires result from a triangular relationship between the owner of the desired object, the object itself, and the desiring agent. Desire has thus been made necessary by socialization, and everyone desires what another has. Such a cogent theory helps to account for current human behaviors like seduction or consumption. La Rochefoucauld (2002 [1665], n. 136) summarizes it in the following words: "There are some people who would never have fallen in love if they had not heard there was such a thing". Anyone who admits the prominence of such social desires may object to their necessity, in the sense of their being inherent to human nature. But still, it can be accepted by anyone who sees in them an impetus that cannot be cancelled out by the social conditions of life.
Finally, a reverse form of cultural necessity may be found in taboos. Following Lévi-Strauss (1969 [1949]), taboos can be seen as a third example of what socialization may make necessary through the force of education, after Freud's neuroses and Girard's desires of desires. In fact, taboos proceed as counter-desires: they are feelings produced by rules of prohibition in a given community, and the stronger they are the more natural they appear. Disgust provoked by the incest taboo, for example, is a feeling where agents do more than merely not desire something: they desire not to do what is made shameful by the tacit rules of the community. Therefore, anthropology and psychoanalysis jointly argue for the necessitation of some desires under the impetus of socialization, whether in a positive sense of lust or a negative sense of reluctance.
Conclusion
I have attempted to make room for an allegedly conceptual impossibility: necessary and non-natural desires, starting from the ancient classification of Epicurus. By means of a comparison with the controversial Kantian table of judgments, what entitled Kripke to justify the fourth "forbidden" kind of judgment should equally make the fourth kind of desire conceivable.
And yet, why has the latter never been even mentioned in the philosophical literature? It may be because of the commonsensical opinion that nothing necessary can stand outside the range of natural things. Such an entrenched opinion has to face the arguments above regarding necessitation. It may also be because this fourth combination goes beyond the definition afforded by Epicurus in his theory of desires. If a whole philosophy of nature is implied by this classification, then no desire can be said to be both necessary and non-natural. The commonsensical opinion is much stronger, suggesting that such desires are barely conceivable whether inside or outside of Epicurus' theory.
Did Kripke defeat common opinion about analyticity and aposteriority, if there is one? Or did he only make sense of such combined judgments outside of Kant's philosophy? This is what follows from his non-Kantian definition of analyticity, after the first amendment of the concept by the logical positivists. This is what has been undertaken in the present paper, thereby showing that what is said to be impossible is not so inconceivable after all.
In fact, the central problem concerns the conditions for making sense of an idea inside or outside of a philosophical system. Is the occurrence of necessary and cultural desires an external or an internal question? Quineans should question the very distinction between what a concept means inside and outside of a philosophical system.
Are there non-Epicurean desires, in conclusion? Not impossibly, at any rate.
Table 2. An analogy between Epicurus' classification of desires and Kant's classification of judgments.
Table 3. An analogy between Epicurus' forbidden desires and Kant's forbidden judgments.
Table 4. The truth-table of the logical conditional and its "forbidden" truth-condition.
Temporal Anchoring of Events for the TimeBank Corpus
Today's extraction of temporal information for events heavily depends on annotated temporal links. These so-called TLINKs capture the relation between pairs of event mentions and time expressions. One problem is that the number of possible TLINKs grows quadratically with the number of event mentions; therefore most annotation studies concentrate on links for mentions in the same or in adjacent sentences. However, as our annotation study shows, this restriction results, for 58% of the event mentions, in less precise information about when the event took place. This paper proposes a new annotation scheme to anchor events in time. Not only is the annotation effort much lower, as it scales linearly with the number of events, it also gives a more precise anchoring of when the events happened, as the complete document can be taken into account. Using this scheme, we annotated a subset of the TimeBank Corpus and compare our results to other annotation schemes. Additionally, we present some baseline experiments to automatically anchor events in time. Our annotation scheme, the automated system and the annotated corpus are publicly available (1).
Introduction
In automatic text analysis, it is often important to precisely know when an event occurred. A user might be interested in retrieving news articles that discuss certain events which happened in a given time period, for example articles discussing car bombings in the 1990s. The user might not only be interested in articles from that time period, but also in more recent articles that cover events from that period. Knowing when an event happened is also essential for time-aware summarization, automated timeline generation as well as automatic knowledge base creation. In many cases, time plays a crucial role for facts stored in a knowledge base, for example for the facts when a person was born or died. Also, some facts are only true for a certain time period, like being the president of a country. Event extraction can be used to automatically infer many facts for knowledge bases, however, to be useful, it is crucial that the date when the event happened can precisely be extracted.
* Guest researcher from the School of Electrical and Computer Engineering, University of Tehran.
(1) https://www.ukp.tu-darmstadt.de/data/timeline-generation/temporal-anchoring-of-events/
The TimeBank Corpus (Pustejovsky et al., 2003) is a widely used corpus using the TimeML specifications (Saurí et al., 2004) for the annotation of event mentions and temporal expressions. In order to anchor events in time, the TimeBank Corpus uses the concept of temporal links (TLINKs) that were introduced by Setzer (2001). A TLINK states the temporal relation between two events or an event and a time expression. For example, an event could happen before, simultaneous with, or after a certain expression of time. The TimeBank Corpus served as the dataset for the shared tasks TempEval-1, 2 and 3 (Verhagen et al., 2007; Verhagen et al., 2010; UzZaman et al., 2013).
In this paper we describe a new approach to anchor every event in time. Instead of using temporal links between events and temporal expressions, we consider the event time as an argument of the event mention. The annotators are asked to write down the date when an event happened in a normalized format for every event mention. The annotation effort is for this reason identical to the number of event mentions, i.e. for a document with 200 event mentions, the annotators must perform 200 annotations. When annotating the event mentions, the annotators are asked to take the complete document into account. Section 3 presents our annotation scheme, and section 4 gives details about the conducted annotation study.
The number of possible TLINKs scales quadratically with the number of events and temporal expressions. Some documents of the TimeBank Corpus contain more than 200 events and temporal expressions, resulting in more than 20,000 possible TLINKs. Hand-labeling all links is extremely time-consuming, and even when using transitive closures and computational support, it is not feasible to annotate all possible TLINKs for a larger set of documents. Therefore, all annotation studies limited the number of TLINKs to annotate. For example, in the original TimeBank Corpus, only links that are salient were annotated. Which TLINKs are salient is fairly vague, and this results in a comparably low reported inter-annotator agreement. Furthermore, around 62% of all events do not have any attached TLINK, i.e. for most of the events in the original TimeBank Corpus, no temporal statement can be made.
In contrast to the sparse annotation of TLINKs used in the TimeBank Corpus, the TimeBank-Dense Corpus used a dense annotation, and all temporal links for events and time expressions in the same sentence and in directly succeeding sentences were annotated. For a subset of 36 documents with 1729 events and 289 time expressions, they annotated 12,715 temporal links, which is around 6.3 links per event and time expression. Besides the large effort needed for a dense annotation, a major downside is the limitation that events and time expressions must be in the same or in adjacent sentences. Our annotation study showed that in 58.72% of the cases the most informative temporal expression is more than one sentence apart from the event mention. For around 25% of the events, the most informative temporal expression is even five or more sentences away. Limiting the TLINKs to pairs that are at most one sentence apart poses the risk that important TLINKs are not annotated and consequently cannot be learned by automated systems.
A further drawback of TLINKs is that it can be difficult or even impossible to encode temporal information that originates from different parts in the text. Given the sentence:
December 30th, 2015 - During New Year's Eve, it is traditionally very busy in the center of Brussels and people gather for the fireworks display. But the upcoming [display]_Event was canceled today due to terror alerts.
For a human it is simple to infer the date for the event display. But it is not possible to encode this knowledge using TLINKs, as the date is not explicitly mentioned in the text.
To make our annotations comparable to the dense TLINK annotation scheme of the TimeBank-Dense Corpus, we annotated the same documents and compare the results in Section 5. For 385 out of 872 events (44.14%), our annotation scheme results in a more precise value for the date on which an event happened.
Section 6 presents a baseline system to extract event times. For a subset of events, it achieves an F1-score of 49.01%, while human agreement for these events is 80.50%.
Previous Annotation Work
The majority of corpora on events use sparse temporal links (TLINKs) to enable anchoring of events in time. The original TimeBank (Pustejovsky et al., 2003) only annotated salient temporal relations. The subsequent TempEval competitions (Verhagen et al., 2007; Verhagen et al., 2010; UzZaman et al., 2013) are based on the original TimeBank annotations, but tried to improve the coverage and added some further temporal links for mentions in the same sentence. The MEANtime corpus (van Erp et al., 2015) applied a sparse annotation and only temporal links between events and temporal expressions in the same and in succeeding sentences were annotated. The MEANtime corpus distinguishes between main event mentions and subordinated event mentions, and the focus for TLINKs was on main events.
More dense annotations were applied by Bramsen et al. A drawback of the previous annotation works is the limitation that only links between expressions in the same or in succeeding sentences are annotated. In case the important temporal expression that defines when the event occurred is more than one sentence away, the TLINK will not be annotated. Consequently, retrieving the information when the event occurred is not possible. Increasing this window size would result in a significantly increased annotation effort, as the number of links grows quadratically with the number of expressions.
Our annotation is the first for the TimeBank Corpus that does not try to annotate the quadratically growing number of temporal links. Instead, we consider the event time as an argument of the individual event mention, and it is annotated directly by the annotators. This reduces the annotation effort by 85% in comparison to the TimeBank-Dense Corpus. This allows an annotator to annotate significantly more documents in the same time. Also, all temporal information, independent of where it is mentioned in the document, can be taken into account, resulting in a much more precise anchoring of events in time, as Section 5 shows.
Event Time Annotation Scheme
The annotation guidelines for the TimeBank Corpus (Saurí et al., 2004) define an event as a cover term for situations that happen or occur. Events can be punctual or last for a period of time. They also consider as events those predicates describing states or circumstances in which something holds true. For the TimeBank Corpus, the smallest extent of text (usually a single word) that expresses the occurrence of an event is annotated.
The aspectual type of the annotated events in the TimeBank Corpus can be divided into achievement events, accomplishment events, and states (Pustejovsky, 1991). An achievement is an event that results in an instantaneous change of some sort. Examples of achievement events are to find, to be born, or to die. Accomplishment events also result in a change of some sort; however, the change spans over a longer time period. Examples are to build something or to walk somewhere. States on the other hand do not describe a change of some sort, but that something holds true for some time, for example, being sick or to love someone. The aspectual type of an event does not only depend on the event itself, but also on the context in which the event is expressed.
Our annotation scheme was created with the goal of being able to create a knowledge base from the extracted events in combination with their event times. Punctual events are a single dot on the time axis, while events that last for a period of time have a begin and an end point. It can be difficult to distinguish between punctual events and events with a short duration. Furthermore, the documents typically do not report precise starting and ending times for events, hence we decided to distinguish between events that happened on a Single Day and Multi-Day Events that span over multiple days. We used days as the smallest granularity for the annotation, as none of the annotated articles contained any information on the hour, the minute or the second when the event happened. In case a corpus contains this information, the annotation scheme could be extended to include it as well.
For Single Day Events, the event time is written in the format YYYY-MM-DD. For Multi-Day Events, the annotator annotates the begin point and the end point of the event. In case no statement can be made on when an event happened, the event will be annotated with the label not applicable. This applies only to 0.67% of the annotated events in the TimeBank Corpus which is mainly due to annotation errors in the TimeBank Corpus.
He was sent into space on May 26, 1980. He spent six days aboard the Salyut 6 spacecraft.
The first event in this text, sent, will be annotated with the event time 1980-05-26. The second event, spent, is a Multi-Day Event and is annotated with the event time beginPoint=1980-05-26 and endPoint=1980-06-01.
In case the exact event time is not stated in the document, the annotators are asked to narrow down the possible event time as precisely as possible. For this purpose, they can annotate the event time with after YYYY-MM-DD and before YYYY-MM-DD.
In 1996 he was appointed military attache at the Hungarian embassy in Washington. [...] McBride was part of a seven-member crew aboard the Orbiter Challenger in October 1984.

The event appointed is annotated after 1996-01-01 before 1996-12-31, as the event must have happened sometime in 1996. The Multi-Day Event part is annotated with beginPoint=after 1984-10-01 before 1984-10-31 and endPoint=after 1984-10-01 before 1984-10-31.
To speed up the annotation process, annotators were allowed to write YYYY-MM-xx to express that something happened sometime within the specified month and YYYY-xx-xx to express that the event happened sometime during the specified year. Annotators were also allowed to annotate events that happened at the Document Creation Time with the label DCT.
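As a rough illustration of how these label formats can be handled downstream (this is our own sketch, not part of the released annotation tooling; the class and function names are hypothetical, and labels such as "not applicable" are not covered), each label can be mapped onto an inclusive date interval:

from dataclasses import dataclass
from datetime import date
import calendar

@dataclass
class EventTime:
    # Inclusive bounds on when the event (or one of its end points) may have happened.
    earliest: date
    latest: date

def parse_label(label, dct):
    """Map a label such as '1998-08-xx', 'after 1996-01-01 before 1996-12-31' or 'DCT'
    onto a date interval; dct is the document creation date."""
    label = label.strip()
    if label == "DCT":
        return EventTime(dct, dct)
    if label.startswith("after") or label.startswith("before"):
        earliest, latest = date.min, date.max
        tokens = label.split()
        for keyword, value in zip(tokens[0::2], tokens[1::2]):
            y, m, d = (int(x) for x in value.split("-"))
            if keyword == "after":
                earliest = date(y, m, d)
            else:
                latest = date(y, m, d)
        return EventTime(earliest, latest)
    y, m, d = label.split("-")
    if m == "xx":                                   # sometime during the year
        return EventTime(date(int(y), 1, 1), date(int(y), 12, 31))
    if d == "xx":                                   # sometime during the month
        last = calendar.monthrange(int(y), int(m))[1]
        return EventTime(date(int(y), int(m), 1), date(int(y), int(m), last))
    return EventTime(date(int(y), int(m), int(d)), date(int(y), int(m), int(d)))

Under this reading, DCT collapses to a one-day interval at the document creation date, while the after/before labels become (possibly one-sided) bounds.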
The proposed annotation scheme requires that event mentions are already annotated. For our annotation study we used the event mentions that were already defined in the TimeBank Corpus. In contrast to the annotation of TLINKs, temporal expressions do not need to be annotated in the corpus.
Annotation Study
The annotation study was performed on the same subset of documents as used by the TimeBank-Dense Corpus, with the event mentions that are present in the TempEval-3 dataset (UzZaman et al., 2013). Cassidy et al. selected 36 random documents from the TimeBank Corpus (Pustejovsky et al., 2003). These 36 documents include a total of 1498 annotated events. This allows us to compare our annotations to those of the TimeBank-Dense Corpus (see Section 5).
Each document has been independently annotated by two annotators according to the annotation scheme introduced above. We used the freely available WebAnno (Yimam et al., 2013). To speed up the annotation process, the existing temporal expressions that are defined in the TimeBank Corpus were highlighted. These temporal expressions are in principle not required to perform our annotations, but highlighting them helps to determine the event time. Figure 1 depicts a sample annotation made with WebAnno. The two annotators were trained on 15 documents distinct from the 36 documents annotated for the study. During the training stage, the annotators discussed the decisions they had made with each other.
After both annotators completed the annotation task, the two annotations were curated by one person to derive one final annotation. The curator examined the events where the annotators disagreed and decided on the final annotation. The final annotation might be a merge of the two provided annotations.
Inter-Annotator-Agreement
We use Krippendorff's α (Krippendorff, 2004) with the nominal metric to compute the Inter-Annotator-Agreement (IAA). The nominal metric considers all distinct labels equally distant from one another, i.e. partial agreement is not measured. The annotators must therefore completely agree.
Using this metric, the Krippendorff's α for the 36 annotated documents is α = 0.617. For the TLINK annotation of the TimeBank-Dense Corpus, a Kappa agreement between 0.56 and 0.64 was reported. Comparing these numbers is difficult, as the annotation tasks were different. According to Landis and Koch (1977), these numbers lie on the border of a moderate and a substantial level of agreement.
Disagreement Analysis
In 648 out of 1498 annotated events, the annotators disagreed on the event time. In 42.3% of the disagreements, the annotators disagreed on whether the event mention is a Single Day Event or a Multi-Day Event. Such disagreement occurs when it is unclear from the text whether the event lasted for one or for several days. For example, an article reported on a meeting and due to a lack of precise temporal information in the document, one annotator assumed that the meeting lasted for one day, the other that it lasted for several days. A different source for the disagreement has been the annotation of states. They can either be annotated with the date where the text gives evidence that they hold true, or they can be annotated as a Multi-Day Event that begins before that date and ends after that date.
Different annotations for Multi-Day Events account for 231 out of the 648 disagreements (35.6%). In this category, the annotators disagreed on the begin point in 110 cases (47.6%), on the end point in 57 cases (24.7%) and on the begin as well as on the end point in 64 cases (27.7%). The Krippendorff's α for all begin point annotations is 0.629 and for all end point annotations it is 0.737.
A disagreement on Single Day Events was observed for 143 event mentions and accounts for 22.1% of the disagreements. The observed agreement for Single Day Events is 80.5% or α = 0.799. Most disagreements for Single Day Events were whether the event occurred on the same date as the document was written or if it occurred before the document was written.
Measuring Partial Agreement
One issue with the strict nominal metric is that it does not take partial agreement into account. In several cases, the two annotators agreed in principle on the event time, but might have labeled it slightly differently. One annotator might have taken more clues from the text into account to narrow down when an event happened. One annotator, for example, annotated an event with the label after 1998-08-01 before 1998-08-31. The second annotator took an additional textual clue into account, namely that the event must have happened in the first half of August 1998, and annotated it as after 1998-08-01 before 1998-08-15. Even though both annotators agree in principle, under the nominal metric these would be considered distinct annotations.
To measure this effect, we created a relaxed metric that measures mutual exclusivity: the metric measures whether two annotations can be satisfied at the same time. Given that the event happened on August 5th, 1998, the two annotations after 1998-08-01 before 1998-08-31 and after 1998-08-01 before 1998-08-15 would both be satisfied. In contrast, the two annotations after 1998-02-01 and before 1997-12-31 can never be satisfied at the same time and are therefore mutually exclusive.
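A minimal sketch of such a mutual-exclusivity check, reusing the hypothetical EventTime interval from the sketch above (our own simplification of the relaxed metric): two annotations are compatible exactly when their date intervals overlap.

def mutually_exclusive(a, b):
    """Two annotations are mutually exclusive if no single date can satisfy both,
    i.e. their date intervals do not overlap."""
    return a.latest < b.earliest or b.latest < a.earliest

# Example: 'after 1998-02-01' and 'before 1997-12-31' share no admissible date,
# so the pair is counted as a real (mutually exclusive) disagreement.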
Out of the 648 disagreements, 71 annotations were mutually exclusive. Computing the Krippendorff's α with the above metric yields a value of α_ME = 0.912.

Annotation Statistics

Table 1 gives an overview of the assigned labels. Around 58.21% of the events are either instantaneous events or their duration is at most one day. 41.12% of the events are Multi-Day Events that take place over multiple days. While for Single Day Events there is a precise date for 55.73% of the events, the fraction is much lower for Multi-Day Events. In this category, only in 19.81% of the cases is the begin point precisely mentioned in the article, and only in 15.75% of the cases is the end point precisely mentioned.
The most prominent label for Single Day Events is the Document Creation Time (DCT). 48.28% of Single Day Events happened on the day the article was created, 33.49% of these events happened at least one day before the DCT, and 17.43% of the mentions refer to future events. This distribution shows that the news articles and TV broadcast transcripts from the TimeBank Corpus mainly report on events that happened on the same day.
For Multi-Day Events, the distribution looks different. In 76.46% of the cases, the event started in the past, and in 65.10% of the cases, it is still ongoing.
Most Informative Temporal Expression
Not all temporal expressions in a text are of the same relevance for an event. In fact, in many cases only a single temporal expression is of importance, namely the expression stating when the event occurred. Our annotations allow us to determine the most informative temporal expression for an event. We define the most informative temporal expression as the expression that has been used by the annotator to determine the event time. We checked for all annotations whether the event date can be found as a temporal expression in the document and computed the distance to the closest one with a matching value. The distance is measured as the number of sentences. 421 out of 1498 events happened on the Document Creation Time and were excluded from this computation. The Document Creation Time is provided as additional metadata in the TimeBank Corpus, and it is often not explicitly mentioned in the document text. Figure 2 shows the distance between the most informative temporal expression and the event mention. In 23.68% of the cases, the time expression is in the same sentence, and in 17.59% of the cases, the time expression is either in the next or in the previous sentence. It follows that in 58.72% of the cases the most informative time expression cannot be found in the same or in the preceding or succeeding sentence. This is important to note, as previous shared tasks like TempEval-1, -2, and -3 (Verhagen et al., 2007; Verhagen et al., 2010; UzZaman et al., 2013) and previous annotation studies like the TimeBank-Dense Corpus only considered the relation between event mentions and temporal expressions in the same and in adjacent sentences. However, for the majority of events, the most informative temporal expression is not in the same or in the preceding/succeeding sentence.
For 7.31% of the annotated events, no matching temporal expression was found in the document. Those were mainly events where the event time was inferred by the annotators from multiple temporal expressions in the document. An example is that the year of the event was mentioned in the beginning of the document and the month of the event was mentioned in a later part of the document.
Comparison of Annotation Schemes
Depending on the application scenario and the text domain, the use of TLINKs or the proposed annotation scheme may be advantageous. TLINKs have the capability to capture the temporal order of events, even when temporal expressions are completely absent in a document, which is often the case for novels. The proposed annotation scheme has the advantage that temporal information, independent where and in which form it is mentioned in the document, can be taken into account. However, the proposed scheme requires that the events can be anchored on a time axis, which is easy for news articles and encyclopedic text but hard for novels and narratives.
In this section, we evaluate the application scenario of temporal knowledge base population and time-aware information retrieval. For temporal knowledge base population, it is important to derive the date for facts and events as precisely as possible (Surdeanu, 2013). Those facts can either be instantaneous, e.g. a person died, or they can last for a longer time like a military conflict. Similar requirements are given for time-aware information retrieval, where it can be important to know at which point in time something occurred (Kanhabua and Nørvåg, 2012).
We use the TimeBank-Dense Corpus with its TLINK annotations and compare those to our event time annotations. The TimeBank-Dense Corpus annotated all TLINKs between Event-Event, Event-Time, and Time-Time pairs in the same sentence and between succeeding sentences, as well as all Event-DCT and Time-DCT pairs. Six different link types were defined: BEFORE, AFTER, INCLUDES, IS INCLUDED, SIMULTANEOUS, and VAGUE, where VAGUE encodes that the annotators were not able to make a statement on the temporal relation of the pair.
We studied how well the event time is captured by the dense TLINK annotation. We used transitive closure rules to also deduce TLINKs for pairs that were not annotated. For example, when event_1 happened before event_2 and event_2 happened before date_1, we can infer that event_1 happened before date_1. Using this transitivity allows relations to be inferred for pairs that are more than one sentence apart. For all annotated events, we evaluated all TLINKs, including the TLINKs inferred from the transitivity rules, and derived the event time as precisely as possible. We then computed how precise the inferred event times are in comparison to our annotations. Preciseness is measured in the number of days. An event that is annotated with 1998-02-13 has a preciseness of 1 day. If the inferred event time from the TLINKs is after 1998-02-01 and before 1998-02-15, then the preciseness is 15 days. A more precise anchoring is preferred.
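The closure step can be pictured with a small sketch (our own simplification; the full rule set covers more relation pairs than the single BEFORE/BEFORE composition shown here, and the triple format is ours):

def close_before(tlinks):
    """Saturate the single rule: (a BEFORE b) and (b BEFORE c) => (a BEFORE c).
    Each TLINK is a (source, relation, target) triple."""
    closed = set(tlinks)
    changed = True
    while changed:
        changed = False
        before_pairs = [(a, b) for (a, rel, b) in closed if rel == "BEFORE"]
        for a, b in before_pairs:
            for b2, c in before_pairs:
                if b == b2 and (a, "BEFORE", c) not in closed:
                    closed.add((a, "BEFORE", c))
                    changed = True
    return closed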
The TimeBank-Dense Corpus does not have a link type to mark that an event has started or ended at a certain time point. This makes the TLINK annotation impractical for the durative events that span over multiple days. According to our annotation study, 41.12% of the events in the TimeBank Corpus last for longer time periods. For these 41.12%, it cannot be inferred from when to when the events lasted.
In 487 out of the 872 Single Day Events (55.85%), the TLINKs give a result with the same precision as our annotations. For 198 events (22.71%), our annotation is more precise, i.e. the time window where the event might have happened is smaller. For 187 events (21.44%), no event time could be inferred from the TLINKs. This is due to the fact that there was no link to any temporal expression even when transitivity was taken into account.
For the 487 events where the TLINKs resulted in an event time as precise as our annotation, the vast majority were events that happened at the Document Creation Time. As depicted in Table 1, 421 events happened at DCT. For those events the precise date can directly be derived from the annotated link between each event mention and the DCT. For all other events that did not happen at the Document Creation Time, the TLINKs result in most cases in a less precise anchoring in time, and for around a fifth of these cases in no temporal anchoring at all, while we do anchor them.
We can conclude that even a dense TLINK annotation gives suboptimal information on when events have happened, and due to the restriction that TLINKs are only annotated in the same and in adjacent sentences, a lot of relevant temporal information gets lost.
Automated Event Time Extraction
In this section, we present a baseline system for automatic event time extraction. The system uses temporal relations in which the event is involved and anchors the event to the most precise time. For this purpose, we have defined a two-step process to determine the events' time. Given a set of documents in which the events and time expressions are already annotated, the system first obtains a set of possible times for each of the events. Second, the most precise time is selected or generated for each event.
For the first step, we use the multi-pass architecture introduced by Chambers et al., which was trained and evaluated on the TimeBank-Dense Corpus. Chambers et al. describe multiple rule-based and machine-learning-based classifiers to extract relations between events and temporal expressions. This architecture extracts temporal relations of the types BEFORE, AFTER, INCLUDES, IS INCLUDED, and SIMULTANEOUS. The classifiers are combined into a precision-ranked cascade of sieves. The architecture presented by Chambers et al. does not produce temporal information that an event has started or ended at a certain time point and can therefore only be used for Single Day Events.
We use these sieves to add the value of the temporal expression and the corresponding relation to a set of possible times for each event. In fact, for each event we generate a set of <relation, time> tuples in which the event is involved.
Police confirmed Friday that the body found along a highway [...]

For example, one sieve adds [IS INCLUDED, Friday_1998-02-13] and a second sieve adds [BEFORE, DCT_1998-02-14] to the set of possible event times for the confirmed event.
Applying the sequence of sieves yields the various temporal links for each event.
In the next step, if the event has a relation of type SIMULTANEOUS, IS INCLUDED or INCLUDES, the system sets the event time to the value of the time expression. If the event has a relation of type BEFORE and/or AFTER, the system narrows down the event time as precisely as possible. If the sieve determines the relation type as VAGUE, the set of possible event times remains unchanged.
Algorithm 1 demonstrates how the event time is selected or generated from a set of possible times. Applying the proposed method on the TimeBank-Dense Corpus, we obtained some value for the event time for 593 of 872 (68%) Single Day Events. For 359 events (41%), the system generates the event time with the same precision as our annotations. Table 2 gives statistics of the automatically obtained event times.
Algorithm 1 Automatic Event Time Extraction
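A sketch of the selection logic just described, reusing the hypothetical EventTime interval and imports from the earlier sketch (this is an approximation in our own notation, not a reproduction of the authors' Algorithm 1):

def select_event_time(candidates):
    """Derive the most precise event time from a set of <relation, time> tuples.
    'candidates' is an iterable of (relation, datetime.date) pairs."""
    earliest, latest = date.min, date.max
    for relation, value in candidates:
        if relation in ("SIMULTANEOUS", "IS INCLUDED", "INCLUDES"):
            return EventTime(value, value)       # anchor directly to the time expression
        if relation == "BEFORE":
            latest = min(latest, value)          # tighten the upper bound
        elif relation == "AFTER":
            earliest = max(earliest, value)      # tighten the lower bound
        # VAGUE relations leave the set of possible event times unchanged
    if earliest == date.min and latest == date.max:
        return None                              # nothing could be inferred
    return EventTime(earliest, latest)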
To evaluate the output of the proposed system, we evaluated how precise the automatically obtained event times are in comparison with our annotations. Table 3 (Evaluation results of the proposed system in comparison with our annotations) shows that for 41% of the events, the proposed system generates the same event time as our annotations.
In this work we focused on the automated anchoring of Single Day Events and presented a baseline system that relies on the work of Chambers et al. Its F1-score of 49.01% is comparatively low next to the human score of 80.50%. However, only in 5.38% of the cases is the automatically inferred event time plainly wrong. In most cases, either no event time could be inferred (31.99%) or it was less precise than the human annotation (21.44%).
Extending the described approach to Multi-Day Events is not straightforward. The TimeBank-Dense Corpus, and consequently the system by Chambers et al., does not include a TLINK type to note that an event has started or ended at a certain date; hence, extracting the begin point and end point for Multi-Day Events is not possible. A fundamental adaptation of the system by Chambers et al. would be required.
In contrast to Single Day Events, extracting the event time for Multi-Day Events requires more advanced logic. The start date of the event must be before the end date of the event. The relation to events that are included in the Multi-Day Event must be checked to avoid inconsistencies. The development of an automated system for Multi-Day Events is the subject of our ongoing work.
Conclusion
We presented a new annotation scheme for anchoring events in time and annotated a subset of the TimeBank Corpus (Pustejovsky et al., 2003) using this annotation scheme. The annotation guidelines as well as the annotated corpus are publicly available (2). In the performed annotation study, the Krippendorff's α inter-annotator agreement was considerably high at α = 0.617. The largest disagreement resulted from events for which it was not explicitly mentioned when the event happened. Using a more relaxed measure for Krippendorff's α, which only assigns a distance to mutually exclusive annotations, the agreement increased to α_ME = 0.912. We can conclude that after little training, annotators are able to perform the annotation with high agreement.
The effort for annotating TLINKs, on the other hand, scales quadratically with the number of events and temporal expressions. This imposes the often-used restriction that only temporal links between events and temporal expressions in the same or in succeeding sentences are annotated. Even with this restriction, the annotation effort is quite significant, as on average 6.3 links per mention must be annotated. As Figure 2 depicts, in more than 58.72% of the cases the most informative temporal expression is more than one sentence apart from the event mention. As a consequence, inferring from TLINKs when an event happened is less precise, as temporal information that is more than one sentence away can often not be taken into account.
For the 872 Single Day Events, the correct event time could be inferred from the TLINKs only in 487 cases. For 187 Single Day Events, no event time at all could be inferred, as no temporal expression was within the one sentence window of that event.
A drawback of the proposed scheme is the lack of temporal ordering of events beyond the smallest unit of granularity, which was in our case one day. The scheme is suitable to note that several events occurred on the same date, but their order on that date cannot be encoded. In case the temporal ordering is important for the application scenario, the annotation scheme could be extended and TLINKs could be annotated for events that fall on the same date. Another option is to increase the granularity, but this requires that the information in the documents also allows this more precise anchoring.
Sources of economic fluctuations in France: A structural VAR model
This paper studies the economic fluctuations of an open economy such as the French economy. A system of variables containing output, the price level, the trade balance, the real exchange rate and oil prices is analyzed by applying the structural vector autoregressive (SVAR) methodology initiated by Sims (1980). This set of variables allows us to evaluate the main sources of impulses behind the fluctuations of the French economy. Five structural shocks are identified using the long-run constraints implemented by Blanchard and Quah (1989). From the SVAR dynamic properties, the impulse response functions and the variance decomposition, the French economy is shown to be particularly vulnerable to supply and oil price shocks, these two shocks respectively contributing 40% and 35% of the economic disturbance. France is also hit by important external shocks which damage its trade balance position. Finally, it is found that shocks related to economic policy (demand shocks) have a quite limited impact on economic activity.
Introduction
The 1980s brought a methodological and theoretical revival of the analysis of economic fluctuations. The aim of this paper is to deal with the empirical treatment of economic disturbances. Sims (1980) was the pioneer of fluctuations analysis within the vector autoregressive model, where impulses are apprehended as innovations in a statistical sense. These VAR models were introduced as an alternative to the traditional econometric models. Sims proposed a new form of modeling based on no a priori restrictions and where no distinction is made between exogenous and endogenous variables.
Since the pioneering work of Sims (1980), the main empirical work dealing with the sources of economic fluctuations has relied on the vector autoregressive model. These canonical VAR models, however, posed some problems related to the identification of shocks, and they faced much criticism for being "atheoretical" models. These criticisms led to the birth of structural VAR models, models in which the identification of shocks is conducted by imposing constraints drawn from economic theory. It is this structural VAR methodology that we apply to the French economy in order to identify the main shocks at the origin of the fluctuations of economic activity.
In the second section we reconsider the theoretical and methodological revival of fluctuations analysis. In the third and fourth parts of the paper we explore the data used and estimate the structural VAR model. Finally, we interpret the results.
The methodological and theoretical revival of fluctuations analysis
The 1980s brought a methodological and theoretical revival to the analysis of economic fluctuations. The methodological revival was initiated by Sims (1980); it was in line with the impulse-propagation approach suggested by Frisch (1933) and Slutsky (1927).
On a theoretical level, real business cycle theory constitutes a true theoretical revival of fluctuations analysis: it proposes to explain the main part of economic fluctuations within the neo-classical growth model disturbed only by shocks affecting total factor productivity. This marks the abandonment of the debate on the relative importance of monetary versus fiscal shocks, and the debate on the relative importance of supply and demand shocks emerges.
Real Business cycle theory
At the beginning of the eighties, the relevance of equilibrium monetary theory was rejected on a theoretical as well as on an empirical level. It is in this context that real business cycle theory, or RBC (1), appeared, with the pioneering models of Kydland and Prescott (1982) and Long and Plosser (1983) in a closed economy.
Real business cycle theory considers economic fluctuations as the optimal response of economic agents to shocks to total factor productivity. Real business cycle models thus conceive the evolution of economic aggregates as the result of the decisions of a great number of agents seeking to maximize their utility and constrained only by technological resources. Real business cycle theory attributes an insignificant role, or even no role, to monetary policy.
(1) For Real Business Cycles.
These basic models were followed by many extensions: extensions to open economies, with the international real business cycle models of Backus, Kehoe and Kydland (1992, 1994, and 1995); and extensions to other shocks in addition to the technological shock, borrowing theoretical assumptions from Keynesian theory.
Criticisms addressed to the basic real business cycle models led to the development of an abundant literature, with increasingly sophisticated models. The results of these developments were not always satisfactory, especially concerning the reproduction of the stylized facts.
The methodological contribution of real business cycle theory is, however, acknowledged by a large part of the profession. Parallel to this movement within real business cycle theory, a new school of thought was emerging: the new Keynesian macroeconomics. The New Keynesians (NK) share with the partisans of real business cycle theory the view that macroeconomics requires stronger microeconomic foundations. However, NK economists believe that market imperfections are the key to understanding the real world. The introduction of NK ideas into RBC models seems to make the results definitely more satisfactory, in the sense that these models are accepted by the economics profession and that their empirical results are more realistic.
The introduction of price rigidity was sufficient to restore a role for monetary policy, which is neutral and without effect in basic RBC models. Some economists saw in this "marriage" between RBC and Keynesian economics the birth of "the New Neoclassical Synthesis" (Goodfriend and King, 1997).
Nowadays, macroeconomic models incorporate the principal theoretical elements of RBC models. They adopt their general structure and seek to identify the impulse responses of agents in a general equilibrium framework. On the other hand, the way in which the models define and identify the cycles is substantially different from the original contributions, and various types of imperfections and rigidities are introduced. These imperfections proposed by the New Keynesians are related to the imperfect nature of competition on goods markets, the specificity of financial market exchange, etc.
During the last decades, the RBC initials gradually disappeared and those of DSGE (dynamic stochastic general equilibrium) appeared. Sims (1980) proposed a tool for fluctuations analysis based on impulses, defined as statistical innovations. Since Sims' contribution of 1980, the main empirical work on the sources of economic fluctuations has relied on the vector autoregressive methodology.
The methodological revival: VAR model
The purpose of Sims consists in evaluating the contribution of the various innovations of a system to the dynamics of each variable. To distinguish the impulse responses from the propagation mechanisms, he proposes the Choleski method of orthogonalization. Following criticisms, and in particular those concerning the impossibility of interpreting shocks economically through the Choleski decomposition, many authors suggested basing the orthogonalization of shocks on a structural model of the innovations: the structural VAR model. Shapiro and Watson (1988), Blanchard and Quah (1989) and Gali (1992) proposed to identify structural impulses which are interpretable economically: supply shocks, demand shocks, economic policy shocks, and so on. Their methods of identification are based on restrictions drawn from economic theory. From an econometric point of view, the structural impulses are estimated as a function of the canonical innovations, obeying constraints resulting from economic theory. The imposed restrictions can be of different kinds and their economic implications diametrically opposite. One distinguishes two types of restrictions used in the recent literature: short-run restrictions and long-run restrictions.
Short-run constraints relate to the instantaneous responses of the variables to shocks. Long-run restrictions are related to the long-term responses to shocks. These developments gave birth to structural VAR models, i.e., VAR models in which it is possible to give an economic definition to the various shocks.
Data characteristics and VAR model estimation
The purpose of this section is to analyze the economic disturbances in an open economy, the French economy. The empirical study presented in this section is based on the VAR methodology initiated by Sims (1980).
These recent developments in time series econometrics are applied to a system of variables including output, prices, the trade balance, the real exchange rate and the oil price. This system of variables makes possible the evaluation of the main sources of disturbance in the French economy. Thus, in a vector autoregressive model including these variables, five structural shocks are identified with the help of the Blanchard and Quah (1989) decomposition method.
Long term characteristics of the data
Before estimating the model, we must first check the order of integration and test for the possible presence of a cointegration relationship between the variables.
We use quarterly data extending from 1978Q1 to 2007Q4:
- y: logarithm of GDP
- p: logarithm of the consumer price index
- se: logarithm of the trade balance
- tc: logarithm of the real effective exchange rate
- pp: logarithm of the oil price
Tests of stationarity
To analyze the long-term properties of the data, we use three different methods: the Augmented Dickey-Fuller test (1979), the Phillips-Perron test (1988) and the Kwiatkowski-Phillips-Schmidt-Shin test (1992).
- The character ∆ indicates the first difference of a variable.
- All the variables are in logarithms.
According to the unit root tests, all the variables of the model are non-stationary in levels; they are integrated of order one.
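For readers who want to reproduce this preliminary step, the sketch below shows how the three unit root tests could be run in Python; the data file, column names and test options are assumptions made for illustration, not the authors' code.

```python
# Illustrative sketch of the unit root testing step (ADF, Phillips-Perron, KPSS).
import pandas as pd
from statsmodels.tsa.stattools import adfuller, kpss
from arch.unitroot import PhillipsPerron  # the PP test is provided by the `arch` package

data = pd.read_csv("france_quarterly.csv", index_col=0)   # hypothetical data set
series_names = ["y", "p", "se", "tc", "pp"]                # logs of GDP, CPI, trade balance, REER, oil price

for name in series_names:
    level = data[name].dropna()
    first_diff = level.diff().dropna()
    for label, s in [("level", level), ("first difference", first_diff)]:
        adf_stat, adf_pval, *_ = adfuller(s, autolag="AIC")
        pp_pval = PhillipsPerron(s).pvalue
        kpss_stat, kpss_pval, *_ = kpss(s, regression="c", nlags="auto")
        print(f"{name} ({label}): ADF p={adf_pval:.3f}, PP p={pp_pval:.3f}, KPSS p={kpss_pval:.3f}")

# A series is treated as I(1) when the unit-root null (ADF, PP) is not rejected in levels
# but rejected in first differences, while KPSS rejects stationarity in levels only.
```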
Cointegration relationship
To test for the possible existence of cointegration between the variables, we use the tests implemented by Johansen (1991) and Johansen and Juselius (1990). We consider the vector $X_t$ of dimension (5×1). The general representation of the model in VECM form is:

$\Delta X_t = \Pi X_{t-1} + \sum_{i=1}^{k-1} \Gamma_i \Delta X_{t-i} + \varepsilon_t$

The method suggested by Johansen and Juselius rests on two assumptions: on the one hand, the vector $X_t$ must be I(1); on the other hand, the vector of residuals must be white noise. The strategy of the test consists in analyzing the rank of the matrix $\Pi$. If the rank of $\Pi$ is zero, there is no cointegration between the variables. If the rank of $\Pi$ is $r$, there exist two matrices $\alpha$ and $\beta$ of dimension (n×r) such that $\Pi = \alpha \beta'$, where $\beta$ contains the $r$ cointegration vectors and $\alpha$ contains the weights (loadings) associated with each cointegration vector.
To determine the number of cointegration vectors $r$, Johansen proposes two statistics: the trace test and the maximum eigenvalue test.
The trace statistic is:

$\lambda_{trace}(q) = -T \sum_{i=q+1}^{N} \ln(1 - \hat{\lambda}_i)$

The null hypothesis H0 is $r \le q$, i.e., there are at most $q$ cointegration vectors. This test is equivalent to testing the rank of the matrix $\Pi$, since testing the existence of $r$ cointegration vectors amounts to testing the corresponding rank. Three cases can arise:
- rank($\Pi$) = 0, which means that $r = 0$: there is no cointegration; $X_t$ is integrated of order 1 but not cointegrated, and a VAR model can then be estimated on the first differences $\Delta X_t$.
- rank($\Pi$) = $r$, with $r < N$: $X_t$ is cointegrated with rank $r$, so $r$ cointegration relations exist and a VECM can be estimated.
- rank($\Pi$) = $N$: the matrix $\Pi$ is of full rank, $X_t$ is stationary, and there is no cointegration; a VAR model can be estimated directly on $X_t$.
The maximum eigenvalue test statistic is:

$\lambda_{max}(q, q+1) = -T \ln(1 - \hat{\lambda}_{q+1})$

The null hypothesis H0 is $r = q$; the alternative is $r = q + 1$.
The results of these tests are conditional on the VAR estimation, and consequently on the choice of lag. Using the AIC criterion, the selected lag number is equal to 3. We carry out the tests assuming the absence of trend and constant in the cointegration relationship and in the VECM.
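A minimal sketch of how the Johansen tests could be run with statsmodels is given below; the data source and variable names are assumptions, and `det_order=-1` corresponds to the case without constant and trend used here.

```python
# Illustrative sketch of the Johansen cointegration tests (trace and maximum eigenvalue).
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen

data = pd.read_csv("france_quarterly.csv", index_col=0)   # hypothetical data set
X = data[["pp", "se", "y", "tc", "p"]]                     # the five series in (log) levels

# det_order=-1: no deterministic terms; k_ar_diff=2 matches a VAR(3) in levels
result = coint_johansen(X, det_order=-1, k_ar_diff=2)

print("Trace statistics:           ", result.lr1)          # H0: rank <= q
print("Trace 5% critical values:   ", result.cvt[:, 1])
print("Max-eigenvalue statistics:  ", result.lr2)          # H0: rank = q vs rank = q + 1
print("Max-eig 5% critical values: ", result.cvm[:, 1])
# If no statistic exceeds its critical value, the rank of Pi is zero and a VAR in
# first differences, rather than a VECM, is the appropriate specification.
```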
VAR model identification
Based on the results above, the VAR model in matrix form is:

$X_t = \sum_{i=1}^{p} A_i X_{t-i} + \varepsilon_t$

where $X_t = (\Delta pp_t, \Delta se_t, \Delta y_t, \Delta tc_t, \Delta p_t)'$ is the column vector of the explained variables, which depends on its $p$ lags, the $A_i$ are the square matrices of coefficients to be estimated, and $\varepsilon_t$ is the vector of residuals.
$\varepsilon_t$ represents, at each date $t$, the part of $X_t$ that is not explained by its past. These residuals are also regarded as innovations or impulses.
From the vector of variables $X_t$, we define five shocks: a supply shock, a real demand shock, a nominal demand shock, an external shock (a shock on the trade balance) and an oil price shock.
We consider the residual from the first equation to be an oil price shock, the residual from the second equation to be an external shock, the residual from the GDP equation to be a supply shock, and the residuals from the fourth and fifth equations to be demand shocks (real and nominal, respectively).
However, this definition of the shocks is likely to be misleading, since the residuals of a canonical VAR model are generally correlated. Consequently, we adopt the structural VAR method, in which the identification of the shocks is based on constraints derived from economic theory.
Two types of restrictions are mentioned in the literature: short-run restrictions and long-run restrictions. To identify the structural shocks, we choose the identification of Blanchard and Quah (1989), which imposes long-run constraints; in other words, we constrain certain shocks not to have long-run effects on certain variables.
We attempt to estimate the following structural VAR model:

$X_t = D(L)\,\eta_t$

The left-hand side of this equality is the vector of variables entering the VAR system. The right-hand side is a mixture of the structural shocks $\eta_t$ (the exogenous forces of the system) and the matrix of lag polynomials $D(L)$, which contains the coefficients associated with these shocks.
The identification of the structural shocks is done by imposing long-run constraints on the matrix $D(1)$.
The long-run responses to the shocks are given by the matrix $D(1)$, whose element $d_{ij}(1)$ measures the long-run effect of the $j$-th structural shock on the $i$-th variable. The identification of the shocks in a system of 5 variables requires 10 constraints. The first constraints are drawn from the open economy assumption: it follows from this hypothesis that domestic shocks do not affect, in the long run, the variables generating the external shocks (the oil price and the trade balance), i.e., $d_{13}(1) = d_{14}(1) = d_{15}(1) = d_{23}(1) = d_{24}(1) = d_{25}(1) = 0$. This yields 6 constraints.
The following constraints are drawn from the theoretical assumption commonly accepted since the work of Blanchard and Quah (1989), namely the distinction between supply and demand shocks. Economic theory assumes that supply shocks can affect economic activity in the long run, whereas demand shocks affect economic activity only in the short run. This yields two additional constraints:

$d_{34}(1) = d_{35}(1) = 0$
The next constraint allows the distinction between the two demand shocks: the real demand shock (generally assimilated to a fiscal shock or a foreign exchange rate shock) and the nominal demand shock (a monetary shock, for example). A real demand shock is supposed to have a long-run effect on the real exchange rate, whereas a nominal shock does not:

$d_{45}(1) = 0$
Finally, the last constraint is related to the oil price: the oil price shock is supposed to be the only shock that can have a long-run effect on the oil price. This gives the last constraint, $d_{12}(1) = 0$ (the three remaining zero restrictions in this row were already taken into account through the assumption of independence between internal and external shocks). The matrix representing the long-run effects of the structural shocks on the model variables is therefore lower triangular:

$D(1) = \begin{pmatrix} d_{11} & 0 & 0 & 0 & 0 \\ d_{21} & d_{22} & 0 & 0 & 0 \\ d_{31} & d_{32} & d_{33} & 0 & 0 \\ d_{41} & d_{42} & d_{43} & d_{44} & 0 \\ d_{51} & d_{52} & d_{53} & d_{54} & d_{55} \end{pmatrix}$
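Numerically, a lower-triangular D(1) of this kind can be imposed through a Cholesky factorization of the long-run covariance of the reduced-form VAR, as in the sketch below; the data handling, variable ordering and lag order are assumptions made for illustration, not the authors' own code.

```python
# Minimal sketch of the Blanchard-Quah (1989) long-run identification with a
# lower-triangular D(1), assuming the ordering (d_pp, d_se, d_y, d_tc, d_p).
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

data = pd.read_csv("france_quarterly.csv", index_col=0)        # hypothetical data set
dX = data[["pp", "se", "y", "tc", "p"]].diff().dropna()         # first differences
res = VAR(dX).fit(3)                                            # reduced-form VAR(3)

k = dX.shape[1]
A1 = sum(res.coefs[i] for i in range(res.k_ar))                 # A(1) = A_1 + ... + A_p
C1 = np.linalg.inv(np.eye(k) - A1)                              # long-run MA matrix C(1)
sigma = np.asarray(res.sigma_u)                                 # reduced-form residual covariance

# D(1) D(1)' = C(1) Sigma C(1)', so the lower Cholesky factor imposes the 10 zero restrictions.
D1 = np.linalg.cholesky(C1 @ sigma @ C1.T)
D0 = np.linalg.inv(C1) @ D1                                     # impact matrix of the structural shocks
eta = (np.linalg.inv(D0) @ np.asarray(res.resid).T).T           # recovered structural shocks
```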
Lag selection criteria
The results of the AIC, SC and HQ criteria are reported in the table below. They lead to different conclusions: the order of the VAR model is 1 according to the SC and HQ criteria, and 3 according to the AIC, FPE and LR criteria. The majority of the lag length criteria thus suggest the use of 3 lags in the analysis, so we take p = 3.
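As an illustration, statsmodels reports all of these criteria in one call; the data set and the maximum lag below are assumptions.

```python
# Sketch of the lag selection step (AIC, SC/BIC, HQ, FPE).
import pandas as pd
from statsmodels.tsa.api import VAR

data = pd.read_csv("france_quarterly.csv", index_col=0)   # hypothetical data set
dX = data[["pp", "se", "y", "tc", "p"]].diff().dropna()

selection = VAR(dX).select_order(maxlags=8)
print(selection.summary())          # information criteria for each candidate lag
print(selection.selected_orders)    # e.g. {'aic': 3, 'fpe': 3, 'hqic': 1, 'bic': 1}
```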
Impulse response functions and variance decomposition
The SVAR model is estimated by first estimating the ordinary VAR model, then applying the long-run identifying constraints presented above, and finally recovering the structural shocks, their impacts and their contributions to the fluctuations of the French economy. The main validation tests of the VAR model are reported in the appendix.
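Continuing the identification sketch given after the constraint matrix above (it reuses `res` and `D0` defined there), structural impulse responses and variance shares can be computed as follows; the horizon and labels are illustrative.

```python
# Sketch of structural impulse responses and forecast-error variance decomposition,
# reusing `res` (fitted VAR) and `D0` (structural impact matrix) from the earlier sketch.
import numpy as np

horizon = 20
psi = res.ma_rep(maxn=horizon)                       # reduced-form MA matrices Psi_0 ... Psi_h
irf = np.array([P @ D0 for P in psi])                # responses to unit structural shocks

# Share of the forecast-error variance of each variable explained by each shock
cum_sq = np.cumsum(irf ** 2, axis=0)[-1]             # summed squared responses over the horizon
shares = cum_sq / cum_sq.sum(axis=1, keepdims=True)

shocks = ["oil price", "external", "supply", "real demand", "nominal demand"]
variables = ["d(pp)", "d(se)", "d(y)", "d(tc)", "d(p)"]
for var, row in zip(variables, shares):
    print(var, dict(zip(shocks, np.round(row, 2))))
```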
Sources of growth rate fluctuations
Chart 1 shows the impulse response functions of the growth rate to the various shocks, i.e., the effects of the five shocks on the growth rate. We notice a positive and significantly persistent effect of supply shocks, in the short run as in the long run, whereas the real and nominal demand shocks affect economic activity only transitorily. Indeed, as predicted by economic theory, a positive supply shock involves an improvement in the activity level, and this improvement remains fairly durable in the long run, whereas the impacts of demand shocks are insignificant and tend to zero in the long run.
Concerning the external shocks, the impulse response functions show a negative effect of these shocks on the trade balance. This negative effect can be explained by the degradation of France's price competitiveness resulting from the progressive appreciation of the euro against the dollar, which leads to a fall in exports and a decrease in economic activity. This degradation can also be explained by the rise in the oil price, among other reasons.
Moreover, based on the model results, we notice the importance of the oil price effect on economic activity. The impulse response function of GDP shows an important effect of the oil price shock on the activity level: the growth rate responds negatively to an oil price shock, the initial response is the largest one, and in the long run the effect tends to become insignificant. Jalles (2009) adopts several oil price specifications and also finds a significant impact of oil price shocks on French economic activity. The table below reports the contribution of each shock to growth rate fluctuations. We notice the prevalence of supply and oil price shocks in the explanation of growth rate dynamics. Indeed, whatever the chosen horizon, short or long run, the oil price shock explains between 40% and 35% of the variability of activity, and the supply shock explains between 21% and 45% of the variability.
France, as an importer of oil energy, is particularly vulnerable to oil price shocks.
Concerning demand shocks, the variance decomposition shows that these shocks make a limited contribution to long-run economic activity. This is consistent with our theoretical assumptions, which stipulate that demand shocks do not have a permanent effect on GDP. Demand shocks (real or nominal) contribute less than 6% of the variability of economic activity.
In addition, the introduction of the open economy assumption into our model enables us to evaluate the contribution of external shocks, captured here by shocks on the trade balance. This shock has a considerable effect on economic activity; its contribution is around 12% in the long run and exceeds 22% in the short run.
To summarize, the variance decomposition of GDP shows a prevalence of supply and oil price shocks in the explanation of economic fluctuations.
Sources of price fluctuations
Chart 2 indicates a positive impact of nominal demand shocks. These nominal demand shocks reflect the evolution of the money supply and highlight the close correlation between the price level and the monetary aggregates. Chart 2 shows that a nominal demand shock leads to an increase in the general price level, and this rise in prices remains constant in the long run.
In addition, the impulse response functions reveal the important contribution of supply shocks to the variation of the price level. A positive supply shock (a technological shock, for example) makes production more efficient, leads to an increase in output and thus lowers the price level, as shown in the chart below. The effects of supply shocks are also more important than those of real demand shocks.
This result can be explained by the limited effects of French fiscal policy, since the European Stability Pact constrains the budget deficit to 3% of GDP.
Chart 2. Impulse response functions of the inflation rate.
The variance decomposition of the inflation rate shows a very substantial contribution of the impulses driven by economic policy. The contribution is largest for nominal demand shocks, at about 40%, and this prevalence of nominal demand shocks remains stable whatever the time horizon. In addition, as noted when discussing the impulse response functions, supply shocks contribute a significant part of the variation of the inflation rate, a contribution of around 35%.
Finally, considering the degree of openness of the French economy, it would be interesting to highlight the impact of external shocks on the price level.
Through the variance decomposition, we notice a very weak contribution of the trade balance shock to price disturbances. This contribution is approximately 4% in the long run and less than 1% in the short run.
We also notice that an oil price shock is accompanied by an increase in the price level, essentially in the long run. In the short run the relative contribution of the oil price shock is almost equal to zero, whereas in the long run it is around 13%. This increase (even if it remains fairly contained) cuts down the purchasing power of households and decreases consumption and, consequently, the growth rate.
The rise in oil prices of recent years, combined with an unstable geopolitical environment, seems to be durable rather than temporary. One can expect a continuing rise in oil prices; economic policies should take this new international reality into account by defining more rigorous energy policies.
Sources of foreign exchange rate fluctuations
Chart 3. Impulse response functions of the foreign exchange rate.
Concerning the nature of the impact, theoretically a real demand shock due to the worsening of the budget deficit involves an appreciation of the real foreign exchange rate and a deterioration of the external position.
Our empirical results are in conformity with this theoretical assumption. The impulse response functions show an appreciation of the real foreign exchange rate when the economy is hit by a real demand shock, which is probably caused by the overvaluation of the euro/dollar exchange rate, since fiscal policy in France is relatively controlled (the European Stability Pact).
Concerning the remaining domestic shocks, we notice a fairly high contribution of supply shocks and a very limited contribution of nominal demand shocks; they contribute respectively 36% and 2% of the disturbances. The impulse response functions reveal that the effects of nominal demand shocks are unstable and close to zero. On the other hand, a positive supply shock involves a depreciation of the real foreign exchange rate.
Finally, concerning the contribution of the external and oil price shocks to foreign exchange rate fluctuations, it appears from the variance decomposition that these two shocks make a significant contribution, but one that remains lower than that of the internal shocks. Indeed, the external shocks contribute approximately 11% of the foreign exchange rate variability, and oil price shocks contribute 7%.
Taking these results into account, we can conclude that foreign exchange rate disturbances are due primarily to real demand shocks, which contribute approximately 50%.
Sources of trade balance fluctuations
Chart 4. Impulse response functions of the trade balance.
Based on the impulse response functions and the variance decomposition table, one notices the importance of nominal demand shocks; they contribute approximately 30% of the variation of the trade balance in the short run as in the long run. The remaining shocks have an insignificant effect on the trade balance. However, it is important to underline that an increase in oil prices contributes to the appreciation of the real exchange rate in the short run and to its depreciation in the long run.
Since the EMU, the appreciation of the euro/dollar exchange rate has worsened the French external position. The improvement of the French external balance should come through the improvement of the terms of trade, the diversification of its trading partners, especially through the acquisition of new market shares (emerging markets), the reduction of labour costs, or investment in research and innovation.
Conclusion
The results found in this paper underline the vulnerability of France to internal as well as external shocks. They especially highlight the importance of supply and oil price shocks in economic fluctuations. Indeed, through the two VAR model instruments (impulse response functions and variance decomposition), we clearly notice the prevalence of supply shocks in the explanation of GDP fluctuations (between 21% and 45% depending on the time horizon).
Oil price shocks explain between 40% and 35% of the economic disturbances. France, as a net importer of oil energy, is particularly vulnerable to oil price shocks; an oil shock has a negative and durable effect on economic activity. It is therefore important to find alternative solutions to reduce the economy's dependence on oil energy.
Concerning demand shocks, particularly those relating to monetary and fiscal policies, the results show that their effects are quite limited; this is probably due to the restrictive European economic policies adopted since the 1990s, which have been strongly constrained since 1 January 1999 with the establishment of the European Monetary Union.
The improvement of French economic activity requires better coordination and better governance of European economic policies. It also requires the improvement of energy policies in order to attenuate the dependence of the French economy.
Regarding the French external position, which must be taken seriously into account, France must encourage firms to invest and innovate in order to improve competitiveness. France should benefit from the strong growth of emerging countries like China or India by orienting its exports towards expanding sectors. It should also follow the German example by encouraging investment in small and medium-sized companies.
Table 1. Unit root tests. Note: * this critical value relates to the model with constant and without trend.

Table 3. Cointegration test: maximum eigenvalue test. Given the various tests carried out, we conclude that the five series are non-stationary and not cointegrated in levels, so we specify a model in first differences. The VAR model is consequently built on the growth rate of GDP (∆y), the first difference of the real effective exchange rate (∆tc), the first difference of the trade balance (∆se), the inflation rate (∆p) and the first difference of the oil price (∆pp).

Table 3. Lag selection criteria of the VAR model.

Table 4. Variance decomposition of the growth rate.

Table 5. Variance decomposition of the inflation rate.

Table 6. Variance decomposition of the foreign exchange rate. From this table, we notice a prevalence of domestic shocks in the explanation of foreign exchange rate fluctuations, particularly the real demand shock, which is assimilated in our model to a fiscal shock or to an adjustment of the foreign exchange rate; it contributes approximately 50% of the foreign exchange rate disturbance.

Table 7. Variance decomposition of the trade balance.
Alkali Release from Aggregates in Long-Service Concrete Structures: Laboratory Test Evaluation and ASR Prediction
This paper proposes a simple model for predicting the development of deleterious expansion from alkali-silica reaction (ASR) in long-service concrete structures. This model is based on some composition and reactivity parameters related to ASR, including the long-term alkali contribution by aggregates to concrete structures. This alkali contribution was estimated by means of a laboratory extraction test, appositely developed in this study in order to maximize the alkali extraction within relatively short testing times and with low leaching solution/aggregate ratios. The proposed test is a modification of the Italian Standard test method UNI 11417-2 (Ente Nazionale Italiano di Normazione) and it consists of subjecting an aggregate sample to leaching with saturated calcium hydroxide solution in a laboratory autoclave at 105 °C. Nine natural ASR-susceptible aggregates (seven sands and two coarse aggregates) were tested and the following optimized test conditions were found: leaching solution/aggregate weight ratio = 0.6; solid calcium hydroxide/aggregate weight ratio = 0.05; test duration = 120 h. The results of the optimized alkali extraction tests were used in the proposed model for predicting the potential development of long-term ASR expansion in concrete dams. ASR predictions congruent with both the field experience and the ASR prevention criteria recommended by European Committee for Standardization Technical Report CEN/TR 16349:2012 were found, thus indicating the suitability of the proposed model.
Introduction
Deleterious expansions associated with the Alkali-Silica Reaction (ASR) have been repeatedly reported in recent decades among the main causes of deterioration of concrete structures [1,2].
ASR is a slowly expansive reaction between certain forms of alkali-reactive silica (opaline silica, flint, and cryptocrystalline quartz) and/or certain silicate minerals present in concrete aggregates and the hydroxyl ions in concrete pore solution, mainly associated with the alkaline metal ions (sodium and potassium) [3][4][5]. This reaction leads to the formation of an alkali-silicate gel, which absorbs water and swells, causing internal expansive pressure [6][7][8].
Most of the alkalis are available in the concrete since the construction phase of the structures and come primarily from the cement, with eventual minimal contributions from the mix water and the chemical admixtures. However, enhancement of the alkali concentration in concrete may arise during the service life of concrete structures when aggregates with alkali-bearing minerals (i.e., volcanic glasses, feldspars, micas, clay minerals, nepheline, and zeolites) release their alkalis into the concrete pore solution, over a long time [9][10][11][12].
The concrete alkali enrichment is a consequence of the alkaline cations release from aggregates. In order to maintain the electroneutrality of the pore solution, it is assumed that the hydroxide ion concentration increases as well [13], thus triggering and/or accelerating ASR expansion development, even if the initial alkali content of the concrete is insufficient to promote ASR expansion development [10].
This chemical mechanism is generally accepted as an explanation for the observed cases of discrepancy between measured and expected alkali contents in field concrete structures showing late or progressive ASR-induced distress, such as, for example, concrete dams, in consideration of their very high content of aggregates and their required long service life. In concrete dams containing alkali-reactive aggregates, deleterious ASR expansion may occur a long time after construction (an initiation phase of more than 20 years), even if low-alkali Portland cement is used as an ASR preventive measure [14]. Slowly increasing expansions, together with the resulting distress, may still be observed over a very long timescale, even 40 years after construction (undefined propagation phase) [15,16]. This expansive phenomenon may gradually reduce the serviceability and even the load-carrying capacity and safety of the concrete structures, with high costs of repair and replacement [17][18][19][20][21].
However, based on their own experimental results, some authors [22] recently questioned the existence of a relationship between alkali metal enrichment and hydroxide ion concentration increase in the concrete pore solution.
Enhancement of the alkali concentration in concrete may also arise during the service life of concrete structures when they are exposed to external sources of potential alkalis (for example, deicing salts in the case of concrete pavements and bridges) [23][24][25].
Differently from the alkali release from concrete aggregates, the alkali contribution from external sources concerns only some concrete structures and it is mainly localized at structure surface layers of some components (e.g., bridge decks). Moreover, it is very difficult to quantify this contribution in terms of increased concrete alkali content, also due to surface leaching phenomena [26]. Therefore, long-service concrete structures not exposed to external alkali sources were only considered in the present study.
For such structures, the knowledge of the amount of alkalis releasable from aggregates represents an issue of particular concern in developing models aimed at predicting eventual long-term deleterious ASR expansion of mass concrete.
Considering that the determination of the alkali release from aggregates in the field conditions is very difficult, laboratory test methods have been proposed in the literature for estimating the long-term alkali contribution by aggregates to concrete structures.
These tests are directly carried out on concrete aggregates, by using caustic solutions, such as saturated calcium hydroxide solution or 0.7 M NaOH and 0.7 M KOH solutions, as extraction media. The use of saturated calcium hydroxide solution at boiling temperature is specified by the Italian standard test method UNI 11417-2 (test duration 6 h) [27] and the French LPC 37 test method (test duration 7 h) [28]. The use of 0.7 M NaOH and 0.7 M KOH solutions at relatively low temperatures (38 °C or 60 °C) was investigated by several researchers [10,[29][30][31] for contact times varying between 180 and 580 days. The test with NaOH and KOH solutions with solid lime in excess is currently under investigation and validation by RILEM (International Union of Laboratories and Experts in Construction Materials, Systems and Structures) [32]. The role played by calcium hydroxide in extracting alkalis from aggregates is of great importance. Indeed, sodium and potassium hydroxides alone can give rise to ion exchanges between the solid phase and the aqueous solution placed in contact without increasing the pH or hydroxide ion concentration. On the contrary, the presence of calcium ions in this solution produces Ca-Na and Ca-K ion exchanges able to enrich the solution with Na+ and K+ ions, and solid calcium hydroxide dissolves, thus supplying OH− ions to maintain the solubility equilibrium and electroneutrality with the alkali ions. This ion exchange with Ca2+ can potentially increase the alkali leaching [29]. The above-mentioned leaching tests provide different results, depending on the aggregate fineness, the solution adopted, and the operating conditions (solution/aggregate and calcium hydroxide/aggregate ratios, temperature, testing times, etc.). If carried out at the same temperature, saturated calcium hydroxide solution is much less aggressive compared to sodium and potassium hydroxide solutions, due to a lower pH of the solution and the formation of reaction products able to incorporate some of the released alkali ions [10,30]. However, being generally performed at the boiling temperature, leaching tests with saturated calcium hydroxide solutions are very rapid compared to the time-consuming leaching tests based on sodium and potassium hydroxide solutions performed at 38 °C or 60 °C.
Presently, alkali releases obtained through all laboratory tests proposed in the literature still need to be verified with those in field conditions, and procedures for this verification need to be developed. By the way, this is within the present plan of activity of RILEM TC 258 AAA.
In the present study, in order to maximize the alkali release from concrete aggregates within relatively short testing times and with low leaching solution/aggregate ratios, an appositely modified version of the current Italian UNI standard test method was developed using nine natural ASR-susceptible aggregates (seven sands and two coarse aggregates) with different petrographic and alkali-reactivity characteristics.
Moreover, a simple model was developed to predict the potential effect of alkali release from the aggregates investigated on the development of deleterious ASR expansion in long-service structures, such as concrete dams. In this model, the results of the proposed alkali extraction test were used together with some parameters that are characteristic of alkali-silica reaction and related to the composition of concrete.
A Simple Model for Evaluating the Effect of Alkali Release from Aggregates on Deleterious ASR Expansion Development
The proposed model was based on the knowledge of four key parameters: (1) the initial alkali content of the concrete mix used for the structure construction, Lac 0 ; (2) the Threshold Alkali Level, TAL, of the aggregate used in the concrete mix; (3) the long-term alkali contribution by this aggregate to concrete mix, Lac agg ∞ ; and (4) the efficacy parameter, R max , related to the cement used in concrete mix. As reported in our previous papers [33,34], the Threshold Alkali Level is an appropriate reactivity parameter for assessing the alkali-reactivity of concrete aggregates. The aggregate TAL (kg Na 2 Oeq/m 3 ) is defined as the minimum alkali content of a standard concrete mix made with Portland cement and the aggregate under examination, above which deleterious expansion of concrete will occur. The higher is the aggregate reactivity, the lower the TAL value. The aggregate TAL can be determined through laboratory accelerated expansion tests using standard concrete mixes with different alkali contents [33].
The efficacy parameter R max (kg Na 2 Oeq/m 3 ) is defined as the maximum alkali consumption by a blended cement in correspondence of the maximum concrete alkali content, Lac * , that does not produce deleterious ASR expansion [35,36]. This efficacy parameter is related to the adsorption/incorporation of alkalis by the hydration products of the cement. In the case of using Portland cement, R max is taken equal to zero. For blended cements containing active mineral additions such as granulated blast furnace slag, coal fly ash, or natural pozzolan, R max is above zero. The parameter R max depends on the type, properties and amount of active mineral addition of blended cements [37,38] and can be determined for each blended cement through laboratory expansion tests on concrete mixes of standard composition made with the cement under examination, an aggregate with known TAL value and different alkali contents [36].
In concrete structures containing ASR-susceptible aggregates, deleterious expansion will occur when the amount of alkali-silicate gel formed by alkali-silica reaction and the concomitant water absorption by this gel are sufficient to develop deleterious swelling pressures.
Based on the TAL and Rmax definitions, the maximum concrete alkali content Lac* (kg Na2Oeq/m3) not producing deleterious ASR expansion in concrete can be expressed as:

$Lac^{*} = TAL + R_{max}$  (1)

where TAL and Rmax are as previously defined. The concrete alkali content Lac (kg Na2Oeq/m3), related to the alkali contributions of the various concrete mix components (cement, admixtures and aggregates), can be expressed as:

$Lac = Lac_{0} + Lac_{agg}^{\infty}$  (2)

where Lac0 (kg Na2Oeq/m3) is the alkali content of the concrete mix at the time of structure construction (initial concrete alkali content) and Lac_agg∞ (kg Na2Oeq/m3) is the long-term alkali contribution from the aggregates.
The value of Lac 0 is calculated from the concrete mix composition by assuming a total release of alkalis from Portland or blended cement and eventual chemical admixtures and no alkali release from the aggregates.
As anticipated, the value of Lac agg ∞ may not be determined in the field conditions. In this study, the long-term alkali contribution from aggregates was assumed to be equal to the alkali contribution, Lac agg lab , evaluated from laboratory extraction tests under optimized conditions, established as described at Section 4.1.1. Therefore, Lac agg ∞ was replaced by Lac agg lab in Equation (2). According to Equations (1) and (2), deleterious ASR expansion in concrete will develop only if the following condition is satisfied: Combining Equations (1) and (3) yields: Equation (4) holds for blended cements containing active mineral additions (R max > 0), while this equation reduces to Lac > TAL for concrete mixes made with Portland cements (R max = 0). Therefore, the value of R max represents a measure of the capability of a specific blended cement in counteracting ASR expansion development in concrete. Values of R max equal to 3.80, 5.50, and 8.10 kg Na 2 Oeq/m 3 have been reported [39] for some commercial cements, i.e., a blast-furnace slag cement (CEM III/B), a coal fly ash cement (CEM IV/A-V) and a natural pozzolan cement (CEM IV/B-P), respectively. The cement nomenclature is in accordance to European Standard EN 197-1 [40].
It follows that the development of deleterious ASR expansion in long-service concrete structures will be mainly dependent on the composition of the concrete mix (Lac 0 ), the type of cement (R max ) and the alkali-reactivity of aggregate (TAL) used in the concrete mix.
It is evident that, irrespective of the type of cement (Portland or blended cement) and the long-term alkali contribution by aggregates (Lac agg ∞ ), the risk of deleterious ASR development will be irrelevant if aggregates with high TAL values and concrete mixes with relatively low Lac 0 values are selected.
On the other hand, if aggregates with relatively low TAL values are used (that is increasingly frequent in the concrete construction industry) together with Portland cements, the long-term alkali contribution by aggregates (Lac agg ∞ ) could be very critical in promoting the long-term deleterious ASR expansion development.
In this regard, for concrete mixes designed with given Lac 0 values, combination of Equations (2) and (4) allows the calculation of the maximum long-term alkali contribution from aggregates, Lac agg ∞ , for which no deleterious ASR expansion will develop in concrete (maximum tolerable alkali contribution). Conversely, if the long-term alkali contribution from aggregates is known (i.e., evaluated by laboratory extraction tests), the above two equations may be used to calculate the maximum Lac 0 value that will not produce long-term deleterious ASR expansion (maximum tolerable Lac 0 ). On the basis of the cement content of the concrete and the maximum tolerable Lac 0 value, it is also possible to calculate the maximum tolerable alkali content of the cement, in terms of percentage Na 2 Oeq.
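As a worked illustration of Equations (1)-(4), the short sketch below computes the maximum tolerable Lac0 and the corresponding cement alkali limit; the TAL, aggregate contribution and cement content used are placeholder values, while Rmax = 3.80 kg Na2Oeq/m3 is the value reported above for a blast-furnace slag cement.

```python
# Minimal sketch of the screening rule: deleterious ASR is predicted when
# Lac0 + Lac_agg exceeds TAL + Rmax (all quantities in kg Na2Oeq per m3 of concrete).
def asr_expansion_expected(lac0, lac_agg, tal, r_max=0.0):
    """r_max = 0 for Portland cement, > 0 for blended cements."""
    return lac0 + lac_agg > tal + r_max

def max_tolerable_lac0(lac_agg, tal, r_max=0.0):
    """Largest initial concrete alkali content that avoids deleterious expansion."""
    return tal + r_max - lac_agg

def max_cement_alkali_percent(lac0_max, cement_content):
    """Convert a tolerable Lac0 (kg Na2Oeq/m3) into a cement alkali limit (% Na2Oeq)."""
    return 100.0 * lac0_max / cement_content

# Placeholder inputs: TAL = 3.0 and Lac_agg = 1.2 kg Na2Oeq/m3, 300 kg/m3 of a
# blast-furnace slag cement with Rmax = 3.80 kg Na2Oeq/m3.
lac0_max = max_tolerable_lac0(lac_agg=1.2, tal=3.0, r_max=3.80)
print(lac0_max, max_cement_alkali_percent(lac0_max, cement_content=300.0))
print(asr_expansion_expected(lac0=4.0, lac_agg=1.2, tal=3.0, r_max=0.0))  # Portland cement case
```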
Aggregates Tested
Seven natural sand-sized aggregates were tested in this study: four sands came from Italian quarries (referred to as S1-S4), the other three sands, known for their high alkali contents, came from Norway, Portugal and Spain (labelled as S5, S6, and S7, respectively). Table 1 gives the lithological composition of these sands.
Two coarse aggregates (size gradation 12-32 mm), coming from the same quarries of sands S1 and S2 and exhibiting similar lithological composition as the respective sands, were also tested and designated as aggregates C1 and C2, respectively.
According to the RILEM AAR-1 petrographic method [41], all the aggregates investigated were classified as Class III-S (very likely to be alkali-silica reactive).
Each aggregate was analyzed for the contents of sodium and potassium by the following test procedure: (1) grinding of the aggregate sample to fineness below 45 µm; (2) alkaline fusion of a powdered aggregate-lithium metaborate mixture (weight ratio 1:6) in platinum crucible at a temperature of 1000-1200 • C; (3) quenching of the melt with water and dissolution of the solid with 10% HNO 3 solution; (4) appropriate dilution of the resulting solution with deionized water and determination of the sodium and potassium ion concentrations by flame Atomic Absorption Spectrophotometer (AAS) according to the procedure reported in [42]. Table 1. Lithological compositions of sands investigated.
- S1: medium to fine grained sedimentary carbonate rocks including mono- or polycrystalline quartz, rarely showing undulatory extinction angle, flint and chalcedony
- S2: sedimentary carbonatic rocks and sandstones with flint as the main alkali-reactive phase
- S3: arenaceous, quartzitic-feldspathic and epidote rocks, with fine flints (sometimes with a fibrous-radiate texture typical of chalcedony), mono- and polycrystalline quartz and fine-grained quartzites with a marked undulatory extinction angle
- S4: similar to sand S3, except for a smaller amount of flint and a remarkable presence of carbonate rocks
- S5: cataclasite, a metamorphic rock formed by mechanical fracturing on fault lines; the main constituents of this sand were feldspar particles within a strongly stressed quartz matrix, fractured feldspars, dark minerals and mica
- S6: granite, an intrusive igneous rock mainly containing strained and micro-crystalline quartz, K-feldspar, plagioclase and biotite
- S7: granodiorite, an intrusive igneous rock similar to sand S6, with a higher content of plagioclase and a lower content of K-feldspar than S6, containing strained and poorly crystalline quartz, biotite and hornblende

Table 2 gives the contents of sodium and potassium of each aggregate, both in terms of g alkaline metal (Na, K)/kg dry aggregate and in terms of g alkalis (Na2O, K2O, Na2Oeq)/kg dry aggregate.
Prior to leaching tests (extraction of alkalis from aggregates), all the aggregates were characterized for their grain size distribution by dry sieving. The two coarse aggregates were used as received, while the different sands were used with the same grain size gradation that was obtained by appropriate recombination of the various grain size fractions. Tables 3 and 4 give the grain size gradations of the recombined sands and the coarse aggregates, respectively.
Leaching Test Procedure
As anticipated, the release of alkalis from aggregates was evaluated by using the extraction test method reported in the Annex A of the Italian Standard UNI 11417-2 [27], both in its current version and in a modified version, appositely developed in the present study.
The current UNI 11417-2 test method consists of subjecting the aggregate sample under examination to a solubilizing attack by reflux with a saturated calcium hydroxide solution for a contact time, tc, of 6 h at the boiling temperature. The weight ratio between the saturated Ca(OH)2 solution (leachant) and the dry aggregate sample (L/S ratio) is equal to 0.6 g leachant/g dry aggregate, and the weight ratio between solid calcium hydroxide (CH) and dry aggregate (CH/S ratio) is equal to 0.05 (5 g CH/100 g dry aggregate). The solubilizing attack is due to the action of OH− ions on the siloxane and silanol groups of the reactive silica contained in the aggregate and to the subsequent ion exchange between the alkaline metals (Na and K) and calcium ions arising from calcium hydroxide, with formation of NaOH and KOH (base exchange) [29]. Therefore, this process is commonly referred to as extraction (or release) of alkalis from aggregates.
In the modified version of the UNI test method, the glass boiler equipped with the reflux condenser was replaced by a laboratory autoclave having a volumetric capacity of 1.3 L (Figure 1) placed in a laboratory oven operating at a temperature of 105 °C. Moreover, the amount of aggregate subjected to leaching tests was reduced from 400 g (standard test method) to 200 g. With this autoclave it was possible to prolong the test duration well beyond six hours, in order to maximize the alkali release from the aggregates.
Using the two different pieces of equipment (glass boiler or autoclave) and the different amounts of aggregate, leaching tests on sand S1 were first performed under the test conditions specified in the UNI standard test method.
The main reasons for the choice of testing sand S1 were the great availability of this material in both fine and coarse grain size gradations and its known relatively high alkali-reactivity in field conditions.
Successively, leaching tests with the above test modifications were performed on sand S1 by changing the CH/S ratio from 5 to 16 g CH/100 g dry aggregate, the L/S ratio from 0.6 to 3.3 g leachant/g dry aggregate and the contact time, t c , over the range from 6 to 240 h.
At the end of each leaching test, the autoclaved sample was rapidly cooled, filtered on 0.45 µm paper filter and the solid residue was washed with deionized water in order to quantitatively remove the leachate. The resulting solution (filtrate + washing) was acidified and then analyzed for the concentrations of sodium and potassium ions by AAS. In some cases, pH measurements were also performed on filtrate solutions and blank samples (i.e., samples obtained from tests performed under the same conditions but without aggregate specimens).
Finally, under the optimized CH/S and L/S ratio conditions, leaching tests at different t c values were also performed on the other aggregates investigated in order to better characterize their alkaline metal releases.
In the case of coarse aggregates C1 and C2, due to their much lower specific surface area as compared to the respective sands, the release of sodium and potassium was evaluated only at the ultimate testing time investigated (240 h). Moreover, in order to guarantee the complete immersion of coarser grains of the aggregates in the leachant solution, the L/S ratio was equal to 3.3 g leachant/g dry aggregate, corresponding to the upper value of the L/S ratio range investigated for sands. The CH/S ratio was kept equal to 5 g CH/100 g dry aggregate.
For each aggregate and test condition considered, three replicate tests were performed and the results obtained were averaged. For each test specimen, the average value was obtained from measurements on at least three solution sampling, by discarding results giving a coefficient of variation higher than 10%.
Alkali Releases from Sands
The results of preliminary leaching tests on sand S1 showed that the releases of sodium and potassium determined according to the UNI standard test method (use of a glass boiler with reflux; amount of aggregate tested = 400 g) and the modified version of the UNI standard test method (use of the autoclave; amount of aggregate tested = 200 g) were similar when the same operating conditions (L/S ratio, CH/S ratio, tc) were adopted. The releases of sodium and potassium, expressed in terms of mg metal released/kg dry aggregate, Mr, were equal to 10.8 mg Na/kg dry aggregate and 12.5 mg K/kg dry aggregate (standard test) and 12.9 mg Na/kg dry aggregate and 13.9 mg K/kg dry aggregate (modified test). Figure 2 shows the effect of increasing the CH/S ratio on the release, Mr, of sodium and potassium from sand S1 when the modified test method was used and the values of the contact time, tc, and L/S ratio were identical to those adopted by the UNI standard test method (tc = 6 h; L/S = 0.6 g leachant/g dry aggregate).
Figure 2. Effect of the solid calcium hydroxide (CH)-to-dry aggregate (S) ratio (CH/S) on the alkaline metal release from sand S1.
These results indicated that the CH/S ratio did not significantly affect the alkaline metal release from the aggregate (percent variations of M r lower than about 12% for both sodium and potassium), at least within the range of CH/S values considered (5-16 g CH/100 dry aggregate). This was because the amount of solid CH used in the leaching test was largely in excess of the quantity needed to maintain the lime saturation condition in the leaching solution throughout the test. Therefore, the CH/S ratio of 5 g CH/100 g dry aggregate was maintained in the modified version of the UNI standard test method.
As shown in Figure 3, the release of sodium and potassium was only slightly increasing with the L/S ratio when the leaching tests were carried out using the same values of t c (6 h) and CH/S ratio (5 g CH/ 100 g dry aggregate) of the UNI Standard test method. Maximum percentage variations of about 14% for sodium and 8% for potassium were observed.
In contrast, the release of the two alkaline metals was found to increase greatly with increasing contact time, tc, irrespective of the L/S ratio used. This effect is highlighted by Figure 4, where the Mr-tc curves for K and Na are reported for two different L/S ratios (0.6 and 3.3 g leachant/g dry aggregate) and the same CH/S ratio (5 g CH/100 g dry aggregate). Except for the series (K; L/S = 3.3), for which the points were interpolated with a polynomial equation by the method of least squares (R² = 0.9782), all the curves in this figure were drawn by joining the experimental points. For both sodium and potassium, the Mr-tc curves exhibited an asymptotic trend. Values of Mr greater than 90% of those achieved at the ultimate testing time investigated (240 h) were attained after 120 h of testing.
As expected, at long contact times (120-240 h), the Mr values obtained for both sodium and potassium with the two L/S ratios differed more significantly than the respective values determined at shorter contact times. At 240 h, an increase of the L/S ratio from 0.6 to 3.3 g leachant/g dry aggregate produced a percentage Mr increase of 30% for sodium and 14% for potassium, against 15% for Na and 6% for K at tc = 6 h.
Based on the results of Figure 4, and also considering that: (1) the L/S ratio in real concrete structures is much lower than 0.6 g leachant/g dry solid and (2) that the laboratory tests with L/S ratios of less than 0.6 could pose operational difficulties, the L/S ratio of the UNI standard test method (L/S = 0.6 g leachant/g dry solid) was maintained in the modified version of the alkali extraction test.
Using the values of CH/S = 5 g CH/100 g dry aggregate and L/S = 0.6 g leachant/g dry aggregate of the UNI standard test method, leaching tests were performed on the other six sands by varying the contact time, tc, in order to evaluate the alkaline metal releases from these sands and, at the same time, to confirm the asymptotic trend of the Mr-tc curves. Figures 5 and 6 show, respectively, the Mr-tc curves obtained for sodium and potassium for sands S2-S7. For comparison, the results for sand S1 (the same data reported in Figure 4) are re-proposed.
The results in Figures 5 and 6 confirmed the asymptotic trend of the Mr-tc curves for all the sands investigated. The Mr values determined after 120 h for both alkaline metals were 92-98% of the respective values determined after 240 h. Therefore, it was possible to significantly reduce the test duration with respect to the ultimate testing time investigated, and a contact time of 120 h was selected as the optimal contact time in the modified version of the alkali extraction tests. The selection of this time also arose from the experimental evidence of slurry stiffening when some sands (particularly sands S3 and S4) were tested up to 240 h, a phenomenon that greatly hindered the quantitative recovery of the leachate from the autoclaved sample.
Therefore, apart from the use of an autoclave as extraction reactor (in place of a glass boiler with reflux) and the reduction of the amount of aggregate tested (from 400 g to 200 g), the only substantial modification of the operating parameters of the UNI standard test method consisted of significantly prolonging the duration of the extraction test, from 6 h to 120 h.
Table 5 summarizes the Mr values of sodium and potassium determined for all the sands investigated under the optimal test conditions (CH/S = 5 g CH/100 g dry aggregate; L/S = 0.6 g leachant/g dry aggregate; tc = 120 h). In this table, the percentage release (wt %) of sodium and potassium and the alkali release, M′r, expressed in terms of mg Na2O, K2O, or Na2Oeq/kg dry aggregate, are also reported.
The results in Table 5 revealed that, in terms of percentage metal release, the releases of sodium from sands S1-S4 (0.93-2.90%) were similar to those determined for the sands coming from other countries, S5-S7 (1.29-2.84%). In contrast, the releases of potassium from sands S1-S4 (0.41-1.47%) were much lower than those determined for sands S5-S7 (1.00-3.35%). Due to the great differences in alkaline metal contents of sands of different origin (Table 2), sands S5-S7 exhibited Mr values (mg metal/kg dry aggregate) four to six times higher than those determined for sands S1-S4. The overall results of this study were congruent with those reported in the literature by other researchers for alkali extraction tests with sodium and potassium hydroxide solutions as leaching media [29][30][31].
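The conversion from metal releases to oxide releases follows the usual molar-mass factors; the sketch below applies them to the preliminary-test values for sand S1 quoted earlier (12.9 mg Na/kg and 13.9 mg K/kg), and is an illustration rather than the authors' calculation.

```python
# Sketch of the metal-to-oxide conversion used to express releases as Na2O equivalent.
NA_TO_NA2O = 61.98 / (2 * 22.99)   # Na2O / 2 Na  ~ 1.348
K_TO_K2O = 94.20 / (2 * 39.10)     # K2O / 2 K    ~ 1.205
K2O_TO_NA2OEQ = 0.658              # conventional Na2O-equivalent factor for K2O

def na2o_equivalent(mr_na, mr_k):
    """mr_na, mr_k in mg metal/kg dry aggregate -> mg Na2Oeq/kg dry aggregate."""
    return mr_na * NA_TO_NA2O + mr_k * K_TO_K2O * K2O_TO_NA2OEQ

print(na2o_equivalent(mr_na=12.9, mr_k=13.9))   # sand S1, modified preliminary test
```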
In order to confirm the assumed mechanism of alkali release reported in the Introduction section, pH measurements were performed on the filtrate solutions obtained for sands S1 and S5, whose relatively high reactivity in the field was known. The measurements performed at different contact times between the solid specimens and the saturated calcium hydroxide solution revealed negligible changes in the pH values (pH = 12.16-12.36) in comparison to blank samples (pH = 12.22-12.32). Due to the alkali reactivity of these sands, a reduction of pH would have been expected with increasing testing time. Therefore, the results obtained appeared to be consistent with the hypothesis of a progressive replacement of the OH− ions consumed by ASR, through lime dissolution, balancing the alkaline metals released by the aggregates.
Alkali Releases from Coarse Aggregates
Table 6 gives the releases of sodium and potassium, Mr (mg alkaline metal/kg dry aggregate), determined for the coarse aggregates C1 and C2 under the following test conditions: CH/S ratio = 5 g CH/100 g dry aggregate; L/S ratio = 3.3 g leachant/g dry aggregate; tc = 240 h. In this table, the percentage releases of sodium and potassium and the alkali releases (Mr) expressed in terms of mg alkaline metal oxide/kg dry aggregate are also reported. As expected, in spite of the higher L/S ratio and the longer contact time used for testing the coarse aggregates, the percentage releases of sodium and potassium from aggregates C1 and C2 (0.05 and 0.22% for sodium; 0.11 and 0.16% for potassium) were found to be much lower than those obtained for the respective sands (0.93 and 0.71% for sodium; 1.43 and 0.41% for potassium).
Comparing the data in Tables 5 and 6, the ratios between the alkali releases (mg Na2Oeq/kg dry aggregate) from coarse aggregates and sands coming from the same source ranged from 0.13 to 0.14.
These results show the important role played by the specific surface area of the aggregate in the release process under attack by the saturated calcium hydroxide solution, as already pointed out in the literature [13].
Although the release of alkalis from coarse aggregate is very low, it cannot be neglected when calculating the alkali contribution of the combined aggregate (sand + coarse aggregate) to the concrete mix. This is because of the high combined aggregate content of concrete mixes (1800-2100 kg/m3, depending on the type of structure) and the high proportion of coarse aggregate in the combined aggregate (55-65% by weight).
If the release of the coarse aggregate is unknown, it is possible to estimate its alkali contribution to concrete in terms of an equivalent percentage increase of the sand content, ∆XS%, on the basis of the following relations. The alkaline metal release (Na or K) from the combined aggregate, MrComb, is the weighted sum of the releases from its two fractions, MrComb = XS·MrS + XC·MrC, where XS is the weight fraction of sand in the combined aggregate, XC is the weight fraction of coarse aggregate in the combined aggregate, and MrS and MrC are the alkaline metal releases from sand and coarse aggregate, respectively. The increased weight fraction of sand, X'S, is the sand fraction that alone would yield the same release as the combined aggregate, i.e. X'S·MrS = MrComb; combining the two relations gives X'S = XS + XC·MrC/MrS. The percentage increase of the sand content is then ∆XS% = 100·(X'S − XS)/XS = 100·XC·MrC/(XS·MrS), and the estimated alkaline metal release from the combined aggregate, M*rComb, is obtained as M*rComb = XS·(1 + ∆XS%/100)·MrS. Using the Mr values in Table 5 (MrS) and Table 6 (MrC) and these relations, the ∆XS% values were calculated for the combinations S1-C1 and S2-C2 for XS equal to 0.35 or 0.45. The ∆XS% values thus calculated are reported in Table 7. As can be noted from Table 7, the ∆XS% values depended on both the type of alkaline metal and the sand-coarse aggregate combination considered, ranging from 14.4 to 24.2% for sodium releases and from 18.9 to 36.1% for potassium releases. An overall average ∆XS% value of 23.3% was calculated.
Using the average ∆XS% value and the MrS values reported in Table 5, the estimated values of MrComb for sodium and potassium, indicated as M*rComb, were calculated with the last relation above and compared in Table 7 with the respective MrComb values.
The values of MrComb and M*rComb were found to be in good agreement, their maximum percentage difference being equal to 11.5%. Therefore, a ∆XS% value of 23% may be used for estimating the alkali contribution of coarse aggregate to the concrete mix in terms of equivalent sand content.
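To make the estimation procedure concrete, the sketch below implements the relations above in Python; the input values are illustrative placeholders, not the measured releases of Tables 5 and 6.

```python
def equivalent_sand_increase(x_s, m_rs, m_rc):
    """Percentage increase of the sand content, dXs%, that reproduces the
    alkali release of the coarse-aggregate fraction.

    x_s  : weight fraction of sand in the combined aggregate (x_c = 1 - x_s)
    m_rs : alkaline metal release from the sand (mg/kg dry aggregate)
    m_rc : alkaline metal release from the coarse aggregate (mg/kg dry aggregate)
    """
    x_c = 1.0 - x_s
    return 100.0 * x_c * m_rc / (x_s * m_rs)


def estimated_combined_release(x_s, m_rs, d_xs_percent):
    """Estimated release from the combined aggregate (mg/kg combined aggregate)
    obtained by increasing the sand content by d_xs_percent."""
    return x_s * (1.0 + d_xs_percent / 100.0) * m_rs


# Illustrative placeholder values (not the actual Table 5/6 data):
x_s, m_rs, m_rc = 0.35, 300.0, 10.0
d_xs = equivalent_sand_increase(x_s, m_rs, m_rc)
print(f"dXs% = {d_xs:.1f}%, estimated combined release = "
      f"{estimated_combined_release(x_s, m_rs, d_xs):.1f} mg/kg")
```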
Application of the Proposed Model for Evaluating the Effect of Alkali Release from Aggregates on Deleterious ASR Development in Long-Service Structures
The model described in Section 2 was applied to predict the potential effect of alkali release from the aggregates investigated in this study on the development of ASR expansion in concrete dams.
A typical concrete mix composition for concrete dams is as follows: cement content = 200 kg/m3; water content = 120 kg/m3; aggregate content = 2100 kg/m3, of which 35% as sand and 65% as coarse aggregate. Use of two types of cement was considered: a Portland cement (CEM I) with Na2Oeq = 1.0% and a pozzolanic cement made with natural pozzolan (CEM IV/B-P) with Na2Oeq = 2.20% and Rmax = 8.10 kg Na2Oeq/m3 [40].
Based on the assumed concrete composition, the Lac0 values were equal to 2.0 kg Na2Oeq/m3 for the Portland cement concrete and 4.4 kg Na2Oeq/m3 for the pozzolanic cement concrete.
An alkali release from coarse aggregate equivalent to that resulting from an increase of the sand content equal to 23% by weight (Section 4.1.2) was considered.
Using the values of alkali releases from each sand determined by the laboratory extraction tests and expressed in terms of g Na2Oeq/kg dry aggregate (Table 5) and the above concrete mix composition, the Lac_agg_lab (kg Na2Oeq/m3) values were calculated for each combined aggregate. In Table 8, the values of Lac_agg_lab are reported together with the TAL values of the aggregates investigated and the Lac values calculated with Equation (1) for concrete mixes made with Portland cements (Rmax = 0). The TAL values of aggregates 1 and 2 were available in [36], those of aggregates 3 and 4 in [33], while the TAL of aggregate 5 was calculated from the data reported in [43]. No information was available about the TAL values of aggregates 6 and 7.
With respect to the Lac0 value of the considered Portland cement concrete mix (2.0 kg Na2Oeq/m3), the significance of such contributions was very different: marginal for concretes made with aggregates 1-4 (percentage Lac0 increases of 5-13%) and highly relevant for concretes containing aggregates 5-7 (percentage Lac0 increases of 30-56%).
According to this classification, aggregate 5 was classed as rapidly reactive, aggregate 1 as moderately reactive, aggregates 3 and 4 as slowly reactive, and aggregate 2 as non-reactive. The classification of aggregate 2 was in contrast with its reactivity classification (Class III-S) obtained from petrographic examination (Table 1).
Comparing the Lac and TAL values reported in Table 8 for each aggregate, deleterious ASR expansion development may be predicted only for the dam concrete mix made with Portland cement and aggregate 5 (Lac > TAL). It is noteworthy that no deleterious ASR expansion was predicted for the concrete mix made with aggregate 1 (Lac < TAL), although this aggregate is characterized by moderate alkali-reactivity according to its TAL value and is also known for its deleterious expansive behaviour in some ordinary concrete structures.
In the case of the Portland cement concrete mix made with aggregate 5, for which deleterious ASR expansion was predicted in the long term, Equations (2) and (4) were used to calculate the maximum Lac0 value for which no deleterious ASR expansion will develop (maxLac0 = TAL − Lac_agg_lab). A maxLac0 value of 1.68 kg Na2Oeq/m3 was obtained, corresponding to a cement alkali content of 0.84% Na2Oeq for a concrete mix with 200 kg/m3 of cement.
Alternatively, replacement of the Portland cement with the pozzolanic cement CEM IV/B-P (Rmax = 8.1 kg Na2Oeq/m3) in concrete made with aggregate 5 (Lac0 = 4.4 kg Na2Oeq/m3) would yield a Lac value of 5.52 kg Na2Oeq/m3, which is much lower than the TAL + Rmax value of 10.9 kg Na2Oeq/m3 (Lac* value). As a result, no deleterious ASR expansion would be expected in the long term for such a concrete mix.
This result confirmed the ability of pozzolanic cements containing natural pozzolan to counteract ASR expansion development in concrete structures, in accordance with both practical experience and the precautionary measures recommended by the European Report CEN/TR 16349 [45] and by RILEM [46].
It should also be emphasized that differences between laboratory expansion measurements and the field behavior of concrete may often exist [47]. This should be taken into account as much as possible, if necessary by seeking suitable correlations able to verify and validate the laboratory predictions.
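A minimal Python sketch of the prediction check applied above, assuming the simple additive form Lac = Lac0 + Lac_agg and the acceptance criterion Lac > TAL + Rmax for deleterious expansion; the aggregate-5 parameters shown are back-calculated from the values quoted in the text (maxLac0 = 1.68 and Lac = 5.52 kg Na2Oeq/m3) and are given only for illustration.

```python
def asr_expansion_expected(lac0, lac_agg, tal, r_max=0.0):
    """Return True if deleterious ASR expansion is predicted in the long term,
    i.e. if the total available alkali content exceeds the threshold alkali
    level plus the cement efficacy parameter (all in kg Na2Oeq/m3)."""
    lac = lac0 + lac_agg
    return lac > tal + r_max


# Aggregate 5, values inferred from the text (TAL ~ 2.80, Lac_agg_lab ~ 1.12):
TAL_5, LAC_AGG_5 = 2.80, 1.12

# Portland cement CEM I (Lac0 = 2.0, Rmax = 0): expansion predicted.
print(asr_expansion_expected(lac0=2.0, lac_agg=LAC_AGG_5, tal=TAL_5))              # True

# Pozzolanic cement CEM IV/B-P (Lac0 = 4.4, Rmax = 8.1): no expansion predicted.
print(asr_expansion_expected(lac0=4.4, lac_agg=LAC_AGG_5, tal=TAL_5, r_max=8.1))   # False
```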
Conclusions
Based on the results of this study, the following conclusions can be drawn:
(1) The modified alkali extraction test with saturated calcium hydroxide solution is suitable for maximizing the alkali extraction from concrete aggregates. With respect to the Italian standard test method UNI 11417-2, the modifications consist of replacing the glass boiler with reflux by a laboratory autoclave operated at 105 °C, reducing the amount of aggregate tested from 400 to 200 g, and prolonging the test duration from 6 to 120 h. No change of the liquid/aggregate ratio (L/S = 0.6 g leachant/g dry aggregate) or the solid lime/aggregate ratio (CH/S = 5 g CH/100 g dry aggregate) was found to be necessary.
(2) With the use of the modified extraction test, the amounts of alkaline metals released from the sands investigated were found to vary from 46 to 690 mg/kg dry aggregate for sodium and from 71 to 576 mg/kg dry aggregate for potassium, corresponding to percentage releases of 0.93-2.84% for Na and 0.41-3.35% for K. These results are congruent with those available in the literature for concrete aggregates subjected to leaching tests using sodium hydroxide and potassium hydroxide as leaching media.
(3) For the coarse aggregates, much lower releases of alkaline metals were found (6-11 mg/kg dry aggregate for both sodium and potassium), due to their lower specific surface area compared to the respective sands. However, these very low releases may not be neglected when calculating the alkali release from combined aggregates (sand + coarse aggregate), in consideration of the very high content of coarse aggregate in concrete mixes. As demonstrated in this study, if only the alkali release from sand is known, the contribution of the coarse aggregate can be estimated in terms of an equivalent increase of the sand content of 23 wt %.
(4) A simple model is proposed to predict the potential effect of alkali release from aggregates on deleterious ASR expansion development in long-service concrete structures. The model is based on four key parameters relevant to the components of the concrete mix: the initial alkali content of the mix used for the construction of the structure (Lac0), the efficacy parameter (Rmax) related to the cement, the Threshold Alkali Level (TAL) of the aggregate, and the long-term alkali contribution by this aggregate to the concrete mix (Lac_agg_∞), the last being estimated from the results of laboratory optimized extraction tests (maximum alkali release).
(5) Application of the above model to a typical dam concrete mix leads to ASR expansion predictions that are congruent with both field experience and the ASR prevention criteria recommended by the European Technical Report CEN/TR 16349:2012 and by RILEM specifications, thus indicating the suitability of the proposed model.
Middle East respiratory syndrome coronavirus neutralising serum antibodies in dromedary camels: a comparative serological study
Summary Background A new betacoronavirus—Middle East respiratory syndrome coronavirus (MERS-CoV)—has been identified in patients with severe acute respiratory infection. Although related viruses infect bats, molecular clock analyses have been unable to identify direct ancestors of MERS-CoV. Anecdotal exposure histories suggest that patients had been in contact with dromedary camels or goats. We investigated possible animal reservoirs of MERS-CoV by assessing specific serum antibodies in livestock. Methods We took sera from animals in the Middle East (Oman) and from elsewhere (Spain, Netherlands, Chile). Cattle (n=80), sheep (n=40), goats (n=40), dromedary camels (n=155), and various other camelid species (n=34) were tested for specific serum IgG by protein microarray using the receptor-binding S1 subunits of spike proteins of MERS-CoV, severe acute respiratory syndrome coronavirus, and human coronavirus OC43. Results were confirmed by virus neutralisation tests for MERS-CoV and bovine coronavirus. Findings 50 of 50 (100%) sera from Omani camels and 15 of 105 (14%) from Spanish camels had protein-specific antibodies against MERS-CoV spike. Sera from European sheep, goats, cattle, and other camelids had no such antibodies. MERS-CoV neutralising antibody titres varied between 1/320 and 1/2560 for the Omani camel sera and between 1/20 and 1/320 for the Spanish camel sera. There was no evidence for cross-neutralisation by bovine coronavirus antibodies. Interpretation MERS-CoV or a related virus has infected camel populations. Both titres and seroprevalences in sera from different locations in Oman suggest widespread infection. Funding European Union, European Centre For Disease Prevention and Control, Deutsche Forschungsgemeinschaft.
Introduction
In 2012, a new betacoronavirus, Middle East respiratory syndrome coronavirus (MERS-CoV), was identified in patients with severe respiratory disease in the Middle East. As of Aug 2, 2013, 94 laboratory-confirmed cases, including 46 deaths, have been reported to WHO. 1 Illness associated with MERS-CoV infection is characterised primarily by mild-to-severe respiratory complaints, most requiring hospital admission for acute respiratory distress syndrome. Comorbidities and immunosuppression seem to predispose for infection and severe disease, [2][3][4][5][6] and unpublished serological studies suggest that asymptomatic infections occur. 7 All cases reported so far have been linked to Jordan, Qatar, Saudi Arabia, and United Arab Emirates. Human-to-human transmission has been reported, particularly in health-care settings, but on the basis of available evidence the basic reproduction number (R0) is thought to be low, suggesting that the virus is not transmitted readily. 6,8 Therefore, the primary reservoir of MERS-CoV is probably animals. Different coronaviruses have various hosts including wildlife, livestock, poultry, pets, and human beings. Coronaviruses can adapt to new host species, as shown by the zoonotic origin of several human coronaviruses. 9 Human coronavirus OC43 has recent common ancestry with bovine coronaviruses. 10 Rhinolophid bats were identified as a likely reservoir for severe acute respiratory syndrome coronavirus (SARS-CoV), which emerged in people in 2002-03, through intermediate carnivorous hosts. 11 Molecular clock analysis 12 showed that bat and civet strains of viruses closely related to SARS-CoV only diverged a few years before the outbreak. Human coronavirus 229E has a common ancestor with coronaviruses found in Ghanaian Hipposideros spp bats. 13 MERS-CoV is able to replicate in various bat cell lines 14 and phylogenetic analyses show that it is closely related to betacoronavirus lineage C viruses from Pipistrellus spp bats in Europe and Asia. [15][16][17][18] Molecular clock dating of epidemiologically unlinked isolates of human MERS-CoV estimated their divergence from a common ancestor in mid-2011, 4,19 with a cluster of isolates from the eastern Arabian peninsula diverging in late 2012. 4 This finding could suggest that the diversity of MERS-CoV in people is the result of multiple independent, geographically structured, zoonotic events in the Middle East. 4,19 Possible animal reservoirs need to be identified to determine how circulation of MERS-CoV is maintained and to break the chain of transmission. 20 MERS-CoV can infect cells of several species, including human beings and bats. 14 The functional receptor is conserved between species, suggesting that receptor use is not an important barrier to cross-species transmission. 21 Data for exposure history of patients are scarce, but suggest contact with livestock, including dromedary camels and goats. 2,4,5 Food and Agriculture Organization data from 2011 show that cows, goats, sheep, and dromedary camels are the main sources of meat and milk in Jordan, Saudi Arabia, and United Arab Emirates. 22 Serological studies are best suited to screen animal populations, but have not yet been reported for MERS-CoV in animals, although several methods have been described for testing antibodies of people. 23,24 For specificity, WHO recommends use of a combination of screening assays with recombinant spike protein, and confirmatory testing by neutralisation assays.
Here, we describe antibody profiling of serum samples from major livestock species that might be relevant to the epidemiology of MERS-CoV in the Middle East, using samples collected from herds inside and outside the region.
Serum sample collection
We sampled a cohort of 105 dromedary camels (Camelus dromedarius) from two herds on the Canary Islands. 50 were male, 55 were female, 88 were adults, nine were age 3-4 years, seven were age 2 years, and one was age 3 months. Both herds had the same owner, with frequent exchange of animals between the herds. One herd is from a coastal dune habitat with no other livestock, while the other herd is in an inland valley close to a tropical fruit farm, in particular mango and papaya, which could attract fruit bats, and nearby (roughly 500 m) to horse and goat farms with 25 and 300 animals, respectively. The camels were born in the Canary Islands except for three adults, which were imported from Morocco. 25 The camels are used in the tourist industry. 80 sera were taken in April-June, 2012, nine in May, 2013, and 16 paired sera were taken in these months in both 2012 and 2013, all for routine veterinary purposes. Samples were obtained by jugular puncture.
50 female dromedary camels from Oman were sampled in March, 2013. The camels were aged 8-12 years and belonged to different owners from separate locations. The camels are retired racing camels now used for breeding, and blood was taken by jugular puncture for routine screening for brucellosis. Sera were collected for veterinary purposes from two llamas (Lama glama), six alpacas (Vicugna pacos), and two Bactrian camels (Camelus bactrianus) in the Netherlands. Sera were collected for veterinary purposes from two Bactrian camels, 18 alpacas, five llamas, and two guanaco (Lama guanicoe) in Buin Zoo in Chile. Sera from cattle (n=40), domestic goats (n=40), and sheep (n=40) were from routine submission to the Dutch Animal Health Service. Sera from Spanish domestic goats (n=40) were provided by the Instituto de Investigación en Recursos Cinegéticos (Ciudad Real, Spain) from submissions for tuberculosis control in 2011. All sera were obtained in agreement with local regulations and Dutch import regulations with regard to animal disease legislation. Positive human control sera for the three antigens used on the microarray were taken as described previously. 24 All samples were stored at -20°C until testing.
Laboratory procedures
We tested the sera for the presence of IgG antibodies reactive with MERS-CoV, SARS-CoV, and human coronavirus OC43 S1 antigens in a protein microarray. The receptor-binding domains, which contain the S1 subunit of the spike proteins of MERS-CoV (residues 1-747), SARS-CoV (residues 1-676), and human coronavirus OC43 (residues 1-760), were expressed, purified, and spotted on glass slides. Slides were incubated with serum and species-specific conjugates, as previously described. 24 Goat sera were incubated with Alexa Fluor 647-conjugated rabbit anti-goat IgG Fc fragment (Jackson Immuno Research, West Grove, PA, USA); cow sera with Alexa Fluor 647-conjugated goat anti-bovine IgG (H+L; Jackson Immuno Research); sheep sera with Alexa Fluor 647-conjugated donkey anti-sheep IgG (H+L; Millipore, Temecula, CA, USA); and camelid sera with Dylight 650-conjugated goat anti-llama IgG (H+L; Agrisera, Vännas, Sweden). Fluorescence signals were quantified as described previously. 24 We tested the sera for IgG reactivity at a dilution of 1/20 and set an arbitrary cutoff at an average signal intensity of 20 000 relative fluorescence units (RFU). This high cutoff was chosen to reduce cross-reactive false positives. 24 We present results as RFU for each set of quadruplicate spots per antigen. Negative fluorescent intensities (caused by background correction) were assigned to 0. Analyses were done with GraphPad Prism (version 6.02). Sera were heat-inactivated before virus neutralisation by incubation for 30 min at 56°C. Two-fold serial dilutions of sera were prepared using 96-well plates, starting dilution 1/10. MERS-CoV was diluted in Iscove's modified Dulbecco's medium (IMDM) supplemented with clemizole penicillin (penicillin G), streptomycin, and 1% fetal bovine serum, to a dilution of 2000 tissue culture infective dose 50 per mL. 50 μL virus suspension was added to the plates and the plates were incubated at 37°C for 1 h. The mixtures of virus and serum were then incubated on 96-well plates containing Vero cells for 1 h, followed by washing with phosphate buffered saline and incubation with IMDM and 1% fetal bovine serum for 3-4 days, after which endpoint titres were measured. All tests were repeated twice independently.
Figure 2: MERS-CoV and human coronavirus OC43 or bovine coronavirus cross-reactivity. Combinations of the mean fluorescent intensities of reactions of sera with MERS-CoV and human coronavirus OC43 antigens from 105 Spanish dromedary camels (A). Plaque reduction neutralisation tests for bovine coronavirus and MERS-CoV (B): two representative sera are shown (numbers 15 and 5, corresponding to camel ID numbers in table 2) in dilutions of 1/40, 1/160, and 1/640, as well as the virus input control; all samples were tested in duplicate (only one well shown) and titres were expressed as the serum dilution resulting in a plaque reduction of at least 90%. IgG reactivity of both camel sera to MERS-CoV antigen and human coronavirus OC43 antigen in a two-step dilution series in the microarray (C). IgG reactivity of two two-step serially diluted Omani dromedary camel sera with human coronavirus EMC antigen and human coronavirus OC43 antigen in the microarray (D). RFU=relative fluorescence units. MERS-CoV=Middle East respiratory syndrome coronavirus.
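To illustrate the microarray scoring logic described in this section (average of quadruplicate spots, negative background-corrected intensities set to zero, and an arbitrary 20 000 RFU seropositivity cutoff), here is a minimal Python sketch; the function names and example values are illustrative and are not part of the published analysis pipeline.

```python
RFU_CUTOFF = 20_000  # high cutoff chosen to limit cross-reactive false positives


def mean_rfu(quadruplicate_spots):
    """Average signal of the four replicate spots for one antigen;
    negative background-corrected intensities are set to zero first."""
    clipped = [max(0.0, rfu) for rfu in quadruplicate_spots]
    return sum(clipped) / len(clipped)


def is_seropositive(quadruplicate_spots, cutoff=RFU_CUTOFF):
    """Call a serum reactive with an antigen if the mean spot signal exceeds the cutoff."""
    return mean_rfu(quadruplicate_spots) > cutoff


# Illustrative spot intensities (RFU) for one serum against a single antigen:
print(is_seropositive([48_000, 51_500, 47_200, -300]))   # True
print(is_seropositive([9_800, 11_200, 10_500, 8_900]))   # False
```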
We tested the neutralisation activity of sera against MERS-CoV (Erasmus MC isolate) and bovine coronavirus (Nebraska strain) by plaque reduction neutralisation test (90% plaque reduction) with African green monkey kidney cells (cell line Vero B4; DSMZ ACC 33) or bovine kidney cells (cell line PT; CCLV-RIE11) in a 24-well plate format. Virus (30-60 plaque-forming units) and heat-inactivated sera (diluted from 1/40 to 1/640) were pre-incubated in 200 μL of serum-free OPTIpro medium (Life Technologies, Karlsruhe, Germany) at 37°C for 1 h. Virus adsorption was done at 37°C for 1 h. Supernatants were removed and overlaid with Avicel resin (FCM BioPolymer, Brussels, Belgium). 5 Assays were stopped after 3 days by fixation with 8% paraformaldehyde for 30 min. All samples were tested in duplicate and titres were expressed as the serum dilution resulting in a plaque reduction of at least 90%.
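The endpoint determination just described (titre = highest serum dilution giving at least 90% plaque reduction relative to the virus input control, tested in duplicate) can be sketched in Python as follows; the dilutions and plaque counts shown are hypothetical.

```python
def prnt90_titre(control_plaques, plaques_by_dilution):
    """Return the reciprocal of the highest serum dilution with >= 90% plaque
    reduction, or None if no dilution reaches the threshold.

    control_plaques     : mean plaque count of the virus input control
    plaques_by_dilution : {reciprocal dilution: mean plaque count of duplicates}
    """
    titre = None
    for reciprocal, plaques in sorted(plaques_by_dilution.items()):
        reduction = 1.0 - plaques / control_plaques
        if reduction >= 0.90:
            titre = reciprocal  # keep the highest dilution that still passes
    return titre


# Hypothetical duplicate-averaged plaque counts for one camel serum:
counts = {40: 1.0, 160: 2.5, 640: 18.0}
print(prnt90_titre(control_plaques=45.0, plaques_by_dilution=counts))  # 160
```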
Role of the funding source
The sponsors had no role in study design, data collection, data analysis, data interpretation, or writing of the report. The corresponding author had full access to all the data in the study and had final responsibility for the decision to submit for publication.
Results
Sera were tested for IgG antibodies reactive with MERS-CoV, SARS-CoV, and human coronavirus OC43 S1 antigens in a protein microarray (figure 1). Human coronavirus OC43 is serologically closely related to bovine coronavirus, 26 diverging at the end of the 19th century. 10 Bovine coronavirus circulates in cows, sheep, goats, and Old and New World camelids. [27][28][29][30] Because bovine coronavirus S1 was not available, human coronavirus OC43 S1 antigen was used as a proxy. Sera from three llamas, four alpacas, one guanaco, and two Bactrian camels reacted with human coronavirus OC43 antigen. One cow and one goat serum reacted with human coronavirus OC43 antigen, as did sera from 16 of 105 (15%) Spanish dromedary camels. All sera from cattle, sheep, and goats tested negative for MERS-CoV antigen, but sera from 15 Spanish camels (14%) did react with MERS-CoV antigen (figure 1). The reactivity was highly specific: the same sera did not bind to SARS-CoV antigen but a positive control specimen did. No correlation existed between the reactivity of sera with MERS-CoV antigen and human coronavirus OC43 antigen (figure 2). All but one serum sample that reacted with MERS-CoV antigen were from adult animals. One reactive serum was from a 2-year-old animal. To confirm the presence of MERS-CoV-specific IgG in the Spanish camel sera, we used a MERS-CoV neutralisation assay to test a subset of 49 camel sera with different degrees of reactivity with MERS-CoV and human coronavirus OC43 antigen according to microarray. Nine Spanish camels had MERS-CoV neutralising antibodies with titres varying between 1/20 and 1/320 (table 1). Three of the 12 sera reacted with MERS-CoV spike antigen but did not neutralise MERS-CoV, most likely because of recognition of non-neutralising epitopes. All MERS-CoV neutralising sera had (almost) saturating reactivity with MERS-CoV antigen on the microarray, whereas reactivity with human coronavirus OC43 antigen varied from negative to 50% of saturating reactivity (table 1). The variable human coronavirus OC43 signals suggest that MERS-CoV did not generally cross-react with human coronavirus OC43 or bovine coronavirus antigens. All nine camels with MERS-CoV neutralising antibodies were born and raised on the Canary Islands; seven were female, two were male. Eight camels were adults, one was 2 years old.
To show that the reactivity of the camel sera with human coronavirus OC43 antigen according to the microarray was caused by the presence of bovine coronavirus IgG, and to further exclude MERS-CoV neutralising activity caused by cross-neutralisation by the bovine coronavirus antibodies, we tested camels that had sufficient serum left (n=15) in a comparative MERS-CoV and bovine coronavirus plaque reduction neutralisation test (figure 2, table 2). All camel sera neutralised bovine coronavirus, but with varying titres, suggesting a lower cutoff than 20 000 RFU for OC43 in the microarray (figure 1). Five camels had high neutralising antibody titres against bovine coronavirus (and a mean signal intensity of greater than 50 000 RFU for human coronavirus OC43 antigen on microarray) but were negative for MERS-CoV neutralisation, suggesting that cross-neutralisation in this direction did not occur and that the MERS-CoV neutralising activity was not caused by the presence of bovine coronavirus neutralising antibodies. A serum sample from a patient who had MERS neutralised MERS-CoV with a high titre (1/640) but neutralised bovine coronavirus less efficiently (titre 1/80). The latter finding was most probably caused by previous infection with human coronavirus OC43: this patient had a high titre (1/>5120) in a human coronavirus OC43 recombinant spike immunofluorescence assay and a saturating signal with human coronavirus OC43 antigen in the microarray. Two human serum samples positive for human coronavirus OC43 did not neutralise MERS-CoV, one of which neutralised bovine coronavirus at a titre of 1/80 (table 2).
We tested 50 sera from dromedary camels in Oman at a dilution of 1/20 by microarray and MERS-CoV neutralisation test. All the sera showed saturating reactivity with MERS-CoV antigen on the microarray, no SARS-CoV antigen reactivity, and human coronavirus OC43 antigen reactivity varying between negative (below the cutoff of 20 000 RFU) and saturating signals (figure 1, table 1). Serial dilution of two sera with saturating reactivity for both antigens at the initial dilution of 1/20 showed that MERS-CoV antigen reactivity was still above the cutoff at 1/5120, whereas human coronavirus OC43 antigen reactivity fell below the cutoff at dilutions of 1/80 to 1/320 (figure 2D). Consistent with the microarray data, all sera had high MERS-CoV neutralising capacity, with titres varying between 1/320 (seven of 50 samples) and 1/2560 or more (16 of 50 samples).
Discussion
In this study we describe the presence of MERS-CoV neutralising antibodies in dromedary camels both in a MERS-CoV-linked region (Oman) and in an unlinked region (the Canary Islands). All the sera from dromedary camels from Oman and some from Spain had specific IgG reactivity with the MERS-CoV receptor binding domain S1. We confirmed our expectation that another betacoronavirus, bovine coronavirus, circulated in these camelids. 29 Spanish camels (9%) had specific neutralising antibodies against MERS-CoV that were clearly not caused by cross-neutralisation by bovine coronavirus antibodies.
Our study is the first in which animals have been tested for the presence of antibodies specific to MERS-CoV (panel). Animal screening is necessary to understand the epidemiology of MERS-CoV. At present, bats are thought to be the ultimate reservoirs for several established human coronaviruses as well as SARS-CoV. Accordingly, phylogenetic analysis has shown that MERS-CoV is related to betacoronavirus lineage C viruses found in Pipistrellus spp bats. 15,16 However, direct transmission of MERS-CoV to people from bats seems unlikely. 4,19 The identification of possible intermediate hosts that are probably in closer contact with people (eg, livestock) is urgently needed. Common livestock species in the Middle East include dromedary camels but also cattle, sheep, and goats. Based on the available data, we cannot rule out circulation of a MERS-related coronavirus in these species; sera were not available from epidemiologically linked regions.
The high prevalence of MERS-CoV neutralising antibodies in dromedary camels from Oman suggests circulation of MERS-CoV or a closely related virus in this population. However, attempts to identify viral sequences in Spanish camel sera and faecal samples using pancoronavirus and specific betacoronavirus 2C PCR methods 15,31,32 were unsuccessful (unpublished data), as was untargeted amplification followed by deep sequencing of faecal samples (unpublished data). These results imply that the camels were not actively shedding the virus at the time of sampling.
Less than 10% of the animals in the Canary Islands had MERS-CoV neutralising sera, with titres up to 1/320. This low seroprevalence means either that exposure of the animals to other putative reservoirs is rare 33 or that the virus is absent in this closed-off population of roughly 2000 animals. 25 We cannot rule out that the population might have once had an outbreak but that, by the time of sampling, antibody titres had waned and no new introductions of the virus had occurred. The camels have contact with wild rodents, rabbits, pigeons, and doves, and possibly also with bats. Seven insectivorous bat species, including three Pipistrellus spp, are native to the Canary Islands, while Egyptian fruit bats (Rousettus aegyptiacus) have been introduced. 34 The 100% seroprevalence with high titres in Omani camels from different owners and locations suggests a different situation in the Middle East, with widespread circulation of MERS-CoV or a closely related virus. This difference in epidemiology might be because the virus circulating in the Middle East is different to that circulating in Spain, with increased animal transmissibility and human infections. 35 In addition, the Omani camels were once racing camels now held for breeding and might be kept in circumstances that favour virus transmission. For cattle, a relation has been established between the incidence and effects of respiratory diseases, management practices, and animal transport. 36,37 To our knowledge, the camel populations in Oman and the Canary Islands are not connected. Camels on the Canary Islands were originally imported in the 15th century from the Horn of Africa for labour and transport. Nowadays, import of animals from Africa is banned because of the risk of foot-and-mouth disease. Only three camels in our study were originally imported from Morocco, more than 18 years ago. Because the closest relatives of MERS-CoV were identified very recently in Neoromicia zuluensis bats from Africa, 38 the introduction of MERS-CoV or related viruses into some African camel populations could have occurred decades ago, giving a possible explanation for MERS-CoV antibodies in camels from the Canary Islands. In the Middle East, huge numbers of camels are imported from Africa to meet the demand for meat. The top five camel breeding countries are all African, and Saudi Arabia and United Arab Emirates are in the top five camel meat producing countries. 22 This increased turnover of animals in the Middle East compared with the Canary Islands could also affect the epidemiology of a virus, through more frequent influx of immunologically naive animals.
Systematic review
We searched PubMed for "novel coronavirus EMC" or "MERS-CoV" and identified 43 reports in English linked to the Middle East respiratory syndrome coronavirus (MERS-CoV) published before July 22, 2013. None of these reports described a serological study for MERS-CoV-specific antibodies in animals.
Interpretation
Our report describes the first MERS-CoV serological study of major livestock relevant to the Middle East. Our study shows that MERS-CoV or a related virus has infected dromedary camel populations. Both titre levels and seroprevalences in sera from different locations in Oman suggest widespread infection of camelids with MERS-CoV or a closely related virus. Targeted studies are needed to confirm our findings and their possible relevance to human cases of MERS-CoV. Comparative seroprevalence testing of historical and more recent samples from camels from different regions for which epidemiological background information is available, as well as virological assessment of samples from seroconverting animals, are needed to identify and characterise this MERS-CoV-related virus. In the meantime, we recommend a detailed case history of confirmed MERS-CoV cases, with review of any animal exposures including animal products, and targeted, prospective serosurveys to establish whether camels or their products are a potential source of human infections.
Targeted studies are needed to confirm our findings and their possible relevance in relation to the human cases of MERS-CoV. Comparative seroprevalence testing of historical and more recent samples from camels for which epidemiological background information is available, as well as virological assessment of specimens from seroconverting animals, are needed to identify and characterise this MERS-CoV-related virus. In the meantime we recommend a detailed case history of people with MERS-CoV, with review of any animal exposures including animal products, and targeted, prospective serosurveys to establish whether camels or their products are a potential source of human infections.
Finding the Missing Pieces of Food Safety Training Puzzle on Nile Cruises: a Delphi Approach
The Nile cruises operating between Luxor and Aswan are major contributors to tourism and hospitality in Egypt. Research has found that one of the vital challenges is the absence or lack of effective management of food safety, since several food poisoning incidents on Nile cruises have been reported. However, there is no in-depth evidence on the features of food safety training on Nile cruises. This study aims to determine the consensus among a sample of experts on the main features of food safety training on the Nile cruises. A panel of 30 experts participated in a modified, three-round Delphi technique (DT) for conducting this study. The panel included academics, food safety trainers, Nile cruise managers, food safety auditors, and tourism and health ministries experts. The findings exposed that ineffective training needs analysis and evaluation and vague legal requirements are the most important features of food safety training on Nile cruises. The findings of this study may be useful for cruise management, food safety trainers and auditors, as well as policymakers for future effective food safety training.
Introduction
Foodborne diseases (FBD) are a general health-related issue, with an estimated more than 600 million persons falling sick each year (WHO, 2019). Among developing countries, the Middle East and North Africa (MENA) region has the third-highest estimated burden of FBD per population (Todd, 2017). WHO (2015) estimated that 100 million persons living in the MENA region fall sick with an FBD illness yearly, and 32 million of those affected are children below five years. Additionally, around 70% of the burden of FBD in this region is due to Escherichia coli (E. coli), norovirus, Campylobacter, and Nontyphoidal Salmonella (Todd, 2016).
improper food-holding temperatures and poor staff personal hygiene during food preparation and handling. On the other side, in the hotel sector there is much evidence that food poisoning occurred due to malpractices of food handlers. Unfortunately, such incidents influence the tourism and hospitality industry negatively. The top Egyptian tourist destination (Hurghada) was ranked at the top of the "worst" destinations in terms of illnesses (2017-2018), with 380 cases. Tragically, in August 2018, a couple died at the Steigenberger Aqua Magic Hotel (a five-star hotel) in Hurghada. The post-mortem tests showed E. coli bacteria was a factor in both deaths. In particular, it was also found that those resorts accounted for 95% of food illness claims in Egyptian resorts (Arabian Business, 2019). Another outbreak occurred in Alexandria, where four persons were hospitalised after consuming spoiled Fesikh (fermented mullet fish). This increased the total reported cases of food poisoning due to consuming spoiled Fesikh in Alexandria to 70 patients, including one fatality (Osama, 2019; Khalife, 2019). Many food safety-related incidents have occurred in the last few years on Nile cruises. For instance, in 2018, three food-poisoning incidents occurred in a single week in the floating hotels in Luxor and Aswan. A hundred and twenty-five persons were involved in those incidents, which led to a huge drop in tourism and the closure of the floating hotels in the governorates of Luxor and Aswan. Meanwhile, the General Administration of Food Control launched a campaign against 28 floating hotels in Luxor Governorate. The campaign resulted in the issuance of 92 reports regarding food safety violations, which were sent to the competent prosecution, which is conducting investigations. In addition, it was reported that 278.5 kg of food and 69.5 liters of drinks and juices that were spoiled and unfit for human consumption were destroyed. Unfortunately, the causes of such incidents were mainly a lack of hygienic/clean potable water, malpractices of food handlers, and unclean kitchens and worktops onboard the ships. The investigation of the incidents underlined the ineffective food safety training as well as the inconsistency of training requirements among related entities, including the General Administration of Food Control and the Ministry of Health (Elhawary, 2018). To minimise FBD incidents, food handlers must be trained/instructed effectively on food safety and hygiene. Food safety training is a fundamental element of every foodservice business for ensuring food safety and for applying food safety management systems, i.e., HACCP and ISO 22000. It is also a legal requirement that all food handlers be trained and/or supervised commensurate with their job activities. Every public or private food business is responsible for training all employees, whether they are full-time, part-time, or casual (Nguyen-Viet et al., 2017). Previous research has underlined that adequate food safety training of all food handlers may have a positive effect on health inspection scores and improve some food safety behaviours, such as hand hygiene (Bryan, 2002). Additionally, food safety training is important for preventing/minimizing FBD (Seaman and Eves, 2010; Wolfe et al., 2010). The training features should be considered and depend on the characteristics of the food handling and the roles of food handlers (Abdelhakim, 2016).
However, to date and according to the best of our knowledge, there is no in-depth study on food safety training on Nile cruises based on experts' opinion of the required features of effective food safety training. Thus, this study aims to judge the features of the current food safety training on Nile cruises using a modified Delphi technique. The findings of this study may help in deciding the priorities of this sector, particularly after the launch of the National Food Safety Authority of Egypt (NFSA) in 2017.
Importance of food safety training
Food safety training is one of the keys to effective food safety systems implementation. In addition, it must be considered at different stages of employment to produce safer food, and it must be developed to meet new challenges and requirements. This means training newly hired staff before they start work, then after a year, and so on (Springer, 2009). It was also found that effective training of food handlers assists in the production of safe food, maintaining product quality, decreasing waste, and complying with legal requirements. Adikari et al. (2016) stated that food handlers' knowledge should be improved by training programs about food safety and hygiene to provide safe food. According to Adam (2018), food handlers on Nile cruisers were found to have a positive attitude towards food safety systems, but they had a low level of food safety knowledge and ineffective training. Such training is essential for combating these food safety risks (Cotterchio et al., 1998). However, although food safety training significantly improved food handlers' knowledge, it did not positively impact food handling-related practices or behaviours (Seaman and Eves, 2006). Besides, although certification helps managers to better understand food safety practices, it does not mean that food handlers will transfer the knowledge gained from training to the workplace (Duffy, 2008).
Regulation requirements for food safety training
Legislative requirements are the main enforcement driver for the implementation of food safety training among food businesses. Consequently, most if not all countries around the world have their own legislative requirements for food hygiene training. In Ireland, for instance, the law specifies that "food handlers are supervised and instructed or trained in food hygiene matters commensurate with their activity", and managers and supervisors must ensure that this requirement is met (Food Safety Authority of Ireland, 2006). As another example, European Union Regulation (EC) No 852/2004, in Annex II (General Hygiene Requirements), Chapter XII (Training), states that food business operators are to ensure, firstly, that "Food handlers are supervised and instructed and/or trained in food hygiene matters commensurate with their work activity" and, secondly, that "Those responsible for the development and maintenance of the HACCP system or for the operation of relevant guides, have received adequate training themselves in the application of the HACCP principles". In Egypt, despite the establishment of the National Food Safety Authority (NFSA) by virtue of the new Law 1/2017 (NFSA, 2019) as the national food control system, there is no general legislation that requires all food handlers to be trained/instructed on food safety. However, with time, it is expected that the NFSA will issue such legislation in the near future.
Timing of Training
According to the requirements of the legislation, all food handlers must be trained/instructed on food safety and hygiene in relation to their job tasks. The stages of food safety training differ based on the legal requirements, the organization's requirements, and food handlers' experience and work duties. For instance, in Ireland and the UK, the "Industry Guide" (JHIC, 1997) planned various stages (1, 2, and 3) of food safety training based on three categories of food handlers (A, B, and C) in the catering industry. Stage 1 normally takes place before the job and, ideally, can be included in the induction training programs. In this stage, topics including personal hygiene and kitchen sanitation are covered for any new food handlers in categories A, B, or C. It is recommended that this stage of training lasts between half an hour and one hour, because further training will follow later. Stage 2 develops food safety awareness further. It should be provided within the first few weeks of starting work for full-time staff and may be extended to eight weeks for part-time staff (JHIC, 1997). In this stage, the basic principles of food hygiene should be covered in relation to the business and the duties of employees. This level of training should take about three hours and cover many topics. Finally, Stage 3 considers food handlers with high-risk duties (Categories B and C) who require training beyond informal training (Stage 2) to comply with legislation, although this need not lead to a qualification (JHIC). This stage is highly developed compared to the previous stages, as it includes training on food safety management systems such as the HACCP system. In Egypt, although there is not much evidence on the required levels of food safety training that food handlers should obtain, some catering businesses may train their full-time staff on advanced levels of food safety provided by several accredited examination organizations, including Highfield Qualifications.
Training sources (Internal vs. External)
In general, training may be delivered by in-house trainers/instructors or by external or outsourced trainers/instructors. Deciding to depend on internal sources, external sources, or both for food safety training is subject to many factors, including the nature of the process/food service, the facilities available for training delivery, the type of training (practical or theoretical, on-job or off-job), the number of trainees, the management role in training, the available budget for training, and the legal requirements (e.g., Abdelhakim, 2016; Ajlouni & Gaungoo, 2018). Whether in-house or external, food safety training should be delivered based on TNA and food handlers' roles. In addition, trainers need to be qualified, have the required knowledge of the competencies which must be achieved, have practical skills in the subject, and be experienced in training and presentation (Gruenfeldova et al., 2019). Furthermore, food safety training may be formal or informal, and it may be delivered to groups or on an individual basis according to the needs of the food business (Seaman & Eves, 2006).
Training levels and staff roles
Food safety training themes/contents/levels should be comprehensive and meet the level of food handlers (Abdelhakim et al., 2018). For example, the basic level should be delivered to professionals such as assistant chefs and waiters, the intermediate level to supervisors, and the advanced level to managers. While the topics are the same at all levels, the training outcomes differ according to the target group of food handlers (Abdelhakim et al., 2018). The main food safety issues that should be covered during training include foodborne illness, food safety legislation and enforcement, food hygiene, food safety hazards, food handlers and personal hygiene, the design and layout of food premises and equipment, cleaning and disinfection, HACCP, ISO 22000, pest management, food receiving, storing, preparing, and serving, and food safety training and education (e.g., Ajlouni & Gaungoo, 2018; Sprenger, 2009).
Training evaluation and effectiveness
Training evaluation is a continuous process. It starts with the TNA stage and runs in conjunction with all stages of the training cycle until after training delivery. This means that training evaluation is conducted before, during, and after training. The main purpose of training evaluation is to determine the effectiveness of training programmes (Kirkpatrick and Kirkpatrick, 2006) as well as to help organizations choose, monitor, and evaluate different training courses (Seaman, 2010). Training evaluation also helps in evaluating the financial investment in training (Kirkpatrick and Kirkpatrick, 2006). According to the Kirkpatrick model, training should be evaluated on four levels (reactions, learning, behaviour, and results) that reflect the hierarchy of stages of evaluating training programmes (Kirkpatrick and Kirkpatrick, 2006). Training evaluation allows trainees and managers/supervisors to identify the productivity resulting after training (Wallace, 2014). However, in the case of food safety training, most evaluations are knowledge (learning) based (Abdelhakim, 2018), and consequently, most assessments of food handlers' food safety cognition have focused on the knowledge, attitudes, and reported practices model (Rennie, 1994). On the other hand, the results of training are rarely evaluated at the organisational level (Seaman, 2010). Thus, most training evaluation is ineffective and requires improvement.
Training recurrence/refreshing
Refresher training or retraining is fundamental for maintaining and improving the food safety behaviours of food handlers. Food safety training should not be a one-time occurrence. Training recurrence gives food handlers repeated exposure and more opportunities to update, review, and perfect learned skills (Soon et al., 2012; McFarland et al., 2019). Food safety training should recur at least annually, with frequency based on the risk of the food handled. The refresher training should focus on the hot issues in handling food, including hand washing and temperature control along the food supply chain. It is also desirable that food handling behaviours are observed after refresher training and that food handlers be awarded recertification (Soon et al., 2012).
Barriers to effective food safety training
Previous research has identified many barriers to effective food safety training, including sociocultural characteristics of food handlers such as low educational level and language (Seaman and Eves, 2006). Another barrier is ineffective legal requirements (obligations) for attending food safety training, as in the case of Egypt, where the NFSA was only recently launched in 2017 (NFSA, 2020). Moreover, the availability of financial resources affects the type/model of training, as does the size of the catering establishment, which affects the type and quality of training programmes. Furthermore, time availability, management and peer attitudes, formal or informal delivery of food safety training courses, and external or internal providers may all affect the choice and effectiveness of a training programme (Worsfold et al., 2004; Abdelhakim, 2016; Fox, 2020). Besides that, Abdelhakim et al. (2018) and Abdelhakim et al. (2019) underlined that absent or ineffective training needs assessment (TNA) and evaluation act as barriers to effective food safety training.
The study context
According to the Egyptian Hotels Guide (2010), there are 264 Nile cruises of different categories: five-star (n=189), four-star (n=46), three-star (n=24), two-star (n=3), and unclassified/uncategorized (n=2). These Nile cruises are mainly owned and managed by four companies: Travco Nile Cruises, Spring Tour, Nile Exploration, and Seti First. There is disparity among these companies regarding the number and classification of the cruisers they own. For instance, while Travco Nile Cruises owned 14 five-star and two four-star cruisers, Seti First owned and managed two five-star and five three-star cruisers.
Delphi technique
The root of the Delphi technique (DT) is traced to the ancient Greek god Apollo, whose Delphi oracle was seen as his most expert, truthful, and reliable informant (Delbecq et al., 1975). The DT is a research method that is "designed to obtain the consensus of opinions of a group of experts (via) a series of intensive questionnaires interspersed with controlled opinion feedback" (Dalkey & Helmer, 1963:458). The DT is also known as the "expert judgment approach". This method is based on the opinion of a group of experts in the field of the study, with indirect discussion taking place so that each member of the experts' panel gives an opinion away from the influence of the group's opinion (Delbecq et al., 1975). The DT is one of the methods for data collection and analysis in different research designs: qualitative, quantitative, and mixed methods. DT has been used in different disciplines, including tourism and hospitality (e.g., Jones et al., 2013; Gil-Lafuente et al., 2014; Fefer et al., 2016; Sobaih et al., 2012; Paraskevas, 2012). Additionally, many studies on food safety and food safety training were Delphic in approach (Johnston et al., 2014; Kim et al., 2013). For conducting this study, a modified two-round DT was used. The DT was modified by eliminating the first-round questionnaire (Murry & Hammons, 1995), which was replaced by a semi-structured interview with a panel of experts (n=7) as well as content analysis of the current food safety training courses. The answers of the first round (#1) were used to develop the questionnaire for the second round (#2) to obtain the panel's perspectives on the features of effective food safety training on cruises (Figure 1).
Delphi panel selection
A purposive sample was used to recruit the panel of experts for this study. All the targeted panelists were working in Luxor and were related to food safety issues on cruises. They were hired as representatives from the most related entities: the Tourism Ministry (TM), the Egyptian Tourism Federation (ETF), cruiser managers, the Luxor Health Office, the Luxor Veterinary Office, and food safety auditors and third parties (e.g., Crystal, SGS). Obtaining responses from experts with different backgrounds served to diversify the perspectives. For obtaining highly reliable results, Dalkey (1969) recommended that if the Delphi approach is used with more than 13 subjects, the level of reliability will exceed 0.80. In the same vein, Delbecq et al. (1975) suggest that 10-15 subjects are sufficient if their background is homogeneous. Thus, based on the criteria suggested by Keeney et al. (2011), this study selected a panel of 30 experts categorised into three sub-groups: (1) food safety auditors and/or third parties (e.g., URS, SGS, TUV) (n=9); (2) cruiser managers (e.g., F&B managers, executive chefs, quality managers and/or supervisors) (n=15); and (3) formal bodies and entities (e.g., Health Office, Luxor Veterinary Office, Luxor Tourism Office) (n=6). The experts were nominated based on their work experience in relation to food safety and cruises.
Interview and Beta test results
The interview results revealed that there is variation in the features of the current food safety training provided for cruise employees. Table 1 summarises the common and generic features according to the interviews.
Table 1. The general features of food safety training on Nile cruises
Availability of training: Mainly there is generic food safety training for cruise staff.
Legal requirement: There is a general requirement but not a specific legal requirement for food safety training.
Rate of recurrence: Training is done infrequently; it is not periodic or constant.
Training source: Mainly outsourced training by food safety auditors and the Egyptian Tourism Federation.
Training needs analysis (TNA)
The first aspect of food safety training on Nile cruises was the availability of such specific training. The findings exposed that most of the experts (5/7) reported that they provided and/or received food safety training. For instance, expert 4 mentioned that "Of course, there are training courses offered by the ministry of tourism to the quality controllers and inspectors the food monitor before they are hired".
The nature of the training was also highlighted, as expert 4 underlined that "It is only a theoretical but not practical training". The training was provided both by public entities, including the "ministry of tourism and ministry of Health" (experts 3, 5), and by private organisations such as "Crystal and SGS" (experts 3, 5). Additionally, after some food poisoning outbreaks had taken place in some resorts in Sharm El-Sheikh, the Ministry of Tourism reconsidered the food safety training for all food handlers in the foodservice sector; this was explained by experts 3 and 5: "Yeah. The training is carried out within the Ministry's programs for training. Currently, there is a study for the preparation of various exercises by the Minister's advisors for training". On the other hand, nearly a third of respondents (n=2) indicated that there is no food safety training on Nile cruises, as is clear from experts 6 and 7: "This is supposed to provide periodic programs to clarify healthy methods of food hygiene and safety". Based on the interview results, the DT findings are presented in the following sections.
Profile of Delphi panel
After piloting, 23 out of the 30 experts completed the two rounds of the DT. Most panelists (52.2%) had more than ten years of work experience, e.g., as food safety auditors or trainers (13%). Also, more than half of them (60.9%) reported that they had attended food safety training programmes three times or more, and 82.6% of them reported that the training was certified. Table 3 shows the main objectives of food safety training among the panelists. It is obvious that the top five objectives of food safety training were cross-contamination, food poisoning and foodborne illness (M=1.67); personal hygiene (hand hygiene and protective uniform) (M=2); food safety legislation/requirements (M=4.47); cleaning, disinfection, and sterilisation of equipment and tools (M=6.13); and cross-contact and food allergies (M=6.60).

Table 3 Objectives of food safety training

On the other hand, the lowest-rated objectives of food safety training were the stream of food service, food safety management systems (HACCP, ISO 22000), and building an employees' food safety culture. These findings are inconsistent with previous research (e.g., Ajlouni & Gaungoo, 2018; Sprenger, 2009; Seaman & Eves, 2006), which mentioned that the main food safety issues that should be covered during training include foodborne illness, food safety legislation and enforcement, food hygiene, food safety hazards, food handlers and personal hygiene, the design and layout of food premises and equipment, cleaning and disinfection, HACCP, ISO 22000, pest management, and food receiving, storing, preparing and serving. Most, if not all, countries around the world have their own legislative requirements for food hygiene training. The findings showed that the experts agreed that the current food safety training in Egypt follows the main legal requirements of the Tourism Ministry, authority requirements (Health Center), third-party auditors (e.g., SGS, URS, Crystal), Health Ministry requirements, and the Veterinary Office. These entities differ in their requirements and enforcement of the legal aspects of food safety and food safety training (Table 4). This may be due to the absence of a national food safety authority/agency. However, the National Food Safety Authority (NFSA) was recently established by virtue of the new Law 1/2017 (NFSA, 2019) as the national food control system. Nevertheless, there is no general legislation that requires all food handlers to be trained/instructed on food safety.
Source of food safety training on Nile cruises
Tabulated findings revealed that the food safety training on Nile cruises is mainly delivered by external providers (e.g., food safety auditors/certification bodies). These certification bodies are contracted with Nile cruises for all food safety-related issues, including training. On the other hand, some Nile cruises provide training internally through their supervisors, trainers, and managers. Finally, a few cruisers depend on both external and internal food safety training as the last source of training on Nile cruises. These findings support previous studies (e.g., Abdelhakim, 2016; Ajlouni & Gaungoo, 2018). These studies underlined that depending on internal or external sources of food safety training, or both, is subject to many factors, including the nature of the process/food service, the facilities available for training delivery, the type of training, the number of trainees, the management's role in training, and the available budget for training. In addition, trainers need to be qualified, have the required knowledge of the competencies which must be achieved, have practical skills in the subject, and be experienced in training and presentation (Gruenfeldova et al., 2019).
Food safety retraining on Nile cruises
Results in Table 6 indicate that the panelists agreed on three issues related to food safety retraining on Nile cruises. Firstly, and at the top of the list, there was agreement that the recurrence of food safety training is infrequent/rare (M=1.2). Surprisingly enough, in the second order they agreed that there is no food safety training recurrence or retraining of food handlers. Finally, for a few cruisers, panelists agreed that training refreshing is conducted annually and/or according to timetables. These findings are in line with previous research (Soon et al., 2012; McFarland et al., 2019). Training recurrence gives food handlers repeated exposure and more opportunities to bring up to date, review, and perfect learned skills. Therefore, food safety training should be recurrent periodically according to the cruisers' training policy.
Training evaluation
The consensus on food safety training evaluation on Nile cruises is illustrated in Table 7. Out of the 23 experts, 21 participated in the evaluation of the current food safety training. The findings revealed that the trainers/instructors who deliver food safety training are the main ones responsible for evaluating the training. Depending on the source of training, the instructors/trainers may be internal (e.g., quality managers/chefs/supervisors) and/or external, from third-party auditors such as SGS. In general, the methods most used by instructors for evaluating courses are exams/tests of knowledge, as well as questionnaires for assessing the reaction of trainees towards the training course methods, time, classes, etc. These findings are well cited in previous literature (e.g., Kirkpatrick, 2006; Abdelhakim, 2018), and consequently most assessments of the food safety cognition of food handlers have focused on the knowledge, attitudes, and reported practices model (Rennie, 1994, 1995). On the other hand, the results of training are rarely evaluated at the organisational level (Seaman, 2010). Thus, most training evaluation is ineffective and requires improvement. The findings of this study are in line with previous research that indicated barriers to effective food safety training, including the socio-cultural characteristics of the food handlers, such as low educational level and languages (Seaman and Eves, 2006). Additionally, there are the ineffective legal requirements (obligation) of obtaining and/or attending food safety training and/or the lack of legal enforcement, as in the case of Egypt, where the NFSA was only recently launched in 2017 (NFSA, 2020). Moreover, the availability of financial resources affects the type/model of training, as does the size of the catering establishment, with its effect on the type and quality of training programmes. Furthermore, time availability, management and peer attitudes, formal or informal delivery of food safety training courses, and external or internal providers may all affect the choice and effectiveness of a training programme (Worsfold et al., 2004; Abdelhakim, 2016; Fox, 2020). Besides that, Abdelhakim et al. (2019) and Abdelhakim et al. (2018) underlined that the absence of, or an ineffective, training needs assessment (TNA) and evaluation act as barriers to effective food safety training.
The consensuses for effective future food safety training on Nile cruises
The consensuses on the main procedures for effective food safety training on Nile cruises are ranked in Table 9.

Table 9 Consensuses for effective food safety training on Nile cruises
- Increase the practical aspects of training using case studies and the analysis of reports of previous audits/inspections (4; 6.13)
- Conduct effective and continuous evaluation of training (before, during and after) (5; 6.60)
- Update the training steps based on the previous courses as well as the legal requirements and incidents.

The tabulated consensuses are in agreement with the systematic approach of any training cycle, which basically aims at the development, delivery, and continuous improvement of a training programme. It contains a systematic series of phases to help ensure that the food safety training on cruises is meeting and achieving the planned objectives and intended outcomes (Egan et al., 2007).
Conclusions, implications, and further research
This study aimed to identify the current food safety training features for Nile cruises in Upper Egypt. A sample of 30 experts was used in a modified two-round DT. The findings exposed, from Nile cruises managers and supervisors as well as experts from governmental entities, that the most notable feature of food safety training is the ineffective and vague legal requirement of food safety training among Nile cruises. Besides, this study demonstrated that the current food safety training is not effective, since there are many criteria and sources of training which differ from each other; this led to variation in the features of the food safety training provided for Nile cruises' staff. The study concluded with consensuses for future effective food safety training on Nile cruises. This study has significant implications for Nile cruises and other hospitality establishments. First, it may help food safety trainers and auditors to put in place effective food safety training, given the lack of comprehensive and risk-based food safety training material available in Egypt, specifically considering the Nile cruises. Also, all governmental entities (e.g., the Tourism Ministry) and private food safety sector trainers (food safety auditors) should choose/develop food safety training courses that focus on current NFSA food safety requirements, on the risks related to current food handling practices, and on specific behaviours that can help in reducing the potential risks of FBD. Like other research, this study has some limitations that should be considered in future research. Firstly, while the modified two-round DT seems well suited for collecting data from panelists, it is still imperfect for giving an in-depth understanding of the features of food safety training. Thus, using other qualitative methods, such as face-to-face interviews conducted individually or as focus groups with the same panelists after the Delphi sessions, may be useful to corroborate the findings. Secondly, a quantitative study assessing the food safety knowledge, attitudes, and behaviours of food handlers on board the Nile cruisers would be helpful in better understanding the features of the current food safety training. In addition, observing actual food-handling practices on the cruises during work would be helpful for developing and evaluating food safety training. Finally, this study targeted cruises in Upper Egypt (Luxor); future studies may survey all cruises in the country.
AEG-1 is involved in hypoxia-induced autophagy and decreases chemosensitivity in T-cell lymphoma
Background This study was to examine the link between astrocyte elevated gene-1 (AEG-1) and hypoxia induced-chemoresistance in T-cell non-Hodgkin’s lymphoma (T-NHL), as well as the underlying molecular mechanisms. Methods Expression of AEG-1, LC3-II, and Beclin-1 were initially examined in human T-NHL tissues (n = 30) and normal lymph node tissues (n = 16) using western blot, real-time PCR and immunohistochemistry. Western blot was also performed to analyze the expression of AEG-1, LC3-II, and Beclin-1 in T-NHL cells (Hut-78 and Jurkat cells) under normoxia and hypoxia. Additionally, the proliferation and apoptosis of Hut-78 cells exposed to different concentration of Adriamycin (ADM) in normoxia and hypoxia were evaluated by MTT and Annexin-V FITC/PI staining assay. Finally, the effects of AEG-1 on Hut-78 cells exposed to ADM in hypoxia were assessed by MTT and Annexin-V FITC/PI staining assay, and 3-MA (autophagy inhibitor) was further used to determine the underlying mechanism. Results AEG-1, LC3-II and Beclin-1 expression were significantly increased in T-NHL tissues compared with normal tissues. Incubation of Hut-78 and Jurkat cells in hypoxia obviously increased AEG-1, LC3-II and Beclin-1 expression. Hypoxia induced proliferation and reduced apoptosis of Hut-78 cells exposed to ADM. AEG-1 overexpression further increased proliferation and decreased apoptosis of Hut-78 cells exposed to ADM in hypoxia. Moreover, overexpression of AEG-1 significantly inversed 3-MA induced-changes in cell proliferation and apoptosis of Hut-78 cells exposed to ADM in hypoxia. Conclusions This study suggested that AEG-1 is associated with hypoxia-induced T-NHL chemoresistance via regulating autophagy, uncovering a novel target against hypoxia-induced T-NHL chemoresistance. Electronic supplementary material The online version of this article (10.1186/s10020-018-0033-6) contains supplementary material, which is available to authorized users.
Background
Lymphoma, a type of blood cancer, is roughly classified into Hodgkin's lymphoma (HD) and non-Hodgkin's lymphoma (NHL), and NHL represents the most common malignancy (Hadzipecova et al., 2007). T-cell lymphoma (T-NHL) accounts for approximately 15% of NHL in the United States (Tian et al., 2016). Currently, chemotherapy still remains the major choice for the treatment of T-NHL, especially at the advanced stages, but T-NHL is not that sensitive to conventional chemotherapy (R et al., 1987). These chemotherapy options ultimately yield poor outcomes in T-NHL patients, mainly resulting from the development of chemoresistance in T-NHL. Actually, more than 90% of deaths from cancer are associated with drug resistance and metastasis (Ahmad et al., 2012).
Hypoxia is a common characteristic of solid tumors (Zhang et al., 2016a). A hypoxic environment triggers various adaptive responses in hepatocellular carcinoma (HCC) that allow survival in a harsh environment, and it provides a strong selective pressure for the survival of HCC, which results in the "survival of the fittest" and elimination of the inferior (Bogaerts et al., 2015; Zhang et al., 2016b). Reports have also revealed that HCC cells in hypoxia are more resistant to chemotherapy than cells growing in normoxia (Bogaerts et al., 2015; Zhang et al., 2016b; Lionel et al., 2012). Hypoxia in the tumor microenvironment is the major cause of drug resistance in cancer chemotherapy (Cosse & Michiels, 2008), but the mechanism by which hypoxia induces drug resistance in tumors is unclear. Several studies have shown that this process is mediated by autophagy. Song et al. (Song et al., 2009) found that autophagy acted as a protective mechanism contributing to HCC chemotherapy resistance under hypoxic conditions, and that chemotherapy-induced cell death in hypoxia was less than that in normoxia. They also observed that autophagy was significantly increased in hypoxia, and that inhibition of autophagy by 3-MA or RNA interference increased cell death and reduced drug resistance. In normoxia, the antitumor drug 4-HPR resulted in cell death by inducing apoptosis; while in hypoxia, 4-HPR induced autophagy, and 3-MA or chloroquine further enhanced apoptosis and reduced the survival of cells exposed to 4-HPR, suggesting that autophagy can prevent tumor cell death and may mediate hypoxia-induced drug resistance to 4-HPR (Liu et al., 2011; XW et al., 2010). These studies fully demonstrate that autophagy is involved in the process of resistance induced by hypoxia.
Astrocyte elevated gene-1 (AEG-1) was initially cloned as a neuropathology-related gene in primary human fetal astrocytes in 2002 (Kang, 2002). Several studies have demonstrated the important role of AEG-1 in the progression of different tumors, including proliferation, metastasis, chemoresistance, and angiogenesis (Chang et al., 2016; X M & KK, 2013). Autophagy can be monitored through the accumulation of the autophagy marker LC3-II. Silencing AEG-1 in a variety of tumor cell lines reduced LC3-II accumulation and restored chemosensitivity (Bhutia et al., 2010; Zou et al., 2016; Xie & Zhong, 2016). Besides, hypoxia inducible factor 1α (HIF-1α) promoted AEG-1 expression by binding to the AEG-1 promoter (Zhao et al., 2017). However, it is unclear whether AEG-1 participates in the regulation of autophagy and chemoresistance induced by hypoxia in T-NHL.
Tissue samples
Patients who were diagnosed with T-NHL at The First Affiliated Hospital of Zhengzhou University were included in the study after obtaining their oral and written informed consent. The biopsy specimens of the patients (n = 30) were prepared by the Department of Clinical Pathology as paraffin-embedded tumor tissue sections. The control group consisted of 16 lymph node samples obtained from normal lymph nodes in discarded tissue after standard operations; candidates with any kind of tumor were excluded. This study was reviewed and approved by the Ethics Committee of the Medical Faculty at the First Affiliated Hospital of Zhengzhou University (Scientific Research-2017-LW-73).
Cell culture and treatment
T-NHL cell lines (Hut-78 and Jurkat) were obtained from the Cell Bank of Chinese Academy of Science (Shanghai, China). Cells were cultured in RPMI-1640 medium supplemented with 10% heat-inactivated FBS (fetal bovine serum), 50 U/ml penicillin and 50 U/ml streptomycin (Sigma-Aldrich, St. Louis, MO, USA) at 37°C in a humidified atmosphere containing 5% CO2.
Hypoxia treatment was performed by placing the cells in a sealed chamber (Thermo Forma) filled with a gas mixture of 1% O2, 5% CO2, and 94% N2.
Plasmid construction and cell transfection
The pcDNA3.1 vector was purchased from Invitrogen (USA). pcDNA3.1-AEG-1, a plasmid containing AEG-1, was constructed by Invitrogen (USA). The plasmid constructs carrying siRNA against AEG-1 and HIF-1α were designed and constructed as previously described (Yan et al., 2012). Hut-78 and Jurkat cells were seeded in six-well plates at a density of 1 × 10^6 cells per well. Subsequently, transfection was performed with Lipofectamine™ 2000 (Invitrogen, USA) according to the manufacturer's instructions. Stably transfected cells were verified by RT-PCR and western blot analysis.
Immunohistochemical assay
Standard immunoperoxidase procedures were used to visualize AEG-1 and LC3-II expression, as previously described (Yan et al., 2012). Briefly, paraffin sections were deparaffinized in xylene, passed through a graded series of alcohols (100, 95 and 75%) and re-hydrated in water followed by Tris-buffered saline. Following antigen retrieval, slides were incubated with 3% H2O2 to block endogenous peroxidase activity. Slides were then blocked with 5% normal serum and incubated with anti-AEG-1 and anti-LC3-II antibodies. After washing, the tissue sections were treated with a biotinylated anti-rabbit secondary antibody (Zymed Laboratories Inc., South San Francisco, CA, USA), followed by further incubation with streptavidin-horseradish peroxidase complex (Zymed). Tissue sections were then immersed in 3,3′-diaminobenzidine and counterstained with 10% Mayer's hematoxylin, dehydrated and mounted.
RNA extraction, reverse transcription and real-time PCR
Total RNA from cultured cells was extracted using the TRIzol reagent according to the manufacturer's instructions. The cDNA synthesis was performed in accordance with the protocol of the Takara Reverse Transcription System for real-time PCR [Takara Biotechnology (Dalian) Co., Ltd., China] with 2 μg RNA, and reverse transcription was performed with random primers. Real-time PCR primers were designed according to http://www.ncbi.nlm.nih.gov. The sequences of the PCR primers used were as follows: AEG-1, forward 5'-CGGTACCCCGGCTGGGTGAT-3′ and reverse 5'-CTCCTCCGCTTTTTGCGGGC-3′; HIF-1α, forward 5'-GTCGGACAGCCTCACCAAACAGAGC-3′ and reverse 5'-GTTAACTTGATCCAAAGCTCTGAG-3′; GAPDH, forward 5'-CGGAGTCAACGGATTTGGTCGTATTGG-3′ and reverse 5'-GCTCCTGGAAGATGGTGATGGGATTTCC-3′. Real-time PCR analysis was carried out on a LightCycler real-time PCR instrument using the SYBR Green I kit (Tiangen Biotech Co., Ltd., Beijing, China) according to the manufacturer's instructions. Each reaction was carried out in triplicate. Data were analyzed using the 2^−ΔΔCt method as described elsewhere (Fan et al., 2005).
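To make the quantification step concrete, the following minimal sketch implements the 2^−ΔΔCt arithmetic referred to above. The Ct values, the grouping into tumour and control samples, and the function name are illustrative only; they are not the study's data or code.

```python
import numpy as np

def ddct_fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method.

    dCt  = Ct(target gene) - Ct(reference gene), per sample
    ddCt = dCt(sample of interest) - mean dCt(control group)
    fold change = 2 ** (-ddCt)
    """
    dct_sample = np.asarray(ct_target_sample) - np.asarray(ct_ref_sample)
    dct_control = np.mean(np.asarray(ct_target_control) - np.asarray(ct_ref_control))
    ddct = dct_sample - dct_control
    return 2.0 ** (-ddct)

# Illustrative Ct triplicates (not measured values): AEG-1 vs. GAPDH
fold = ddct_fold_change(
    ct_target_sample=[24.1, 24.3, 24.0],   # AEG-1, tumour sample
    ct_ref_sample=[18.2, 18.1, 18.3],      # GAPDH, tumour sample
    ct_target_control=[27.5, 27.7, 27.4],  # AEG-1, normal lymph node
    ct_ref_control=[18.0, 18.2, 18.1],     # GAPDH, normal lymph node
)
print(fold.mean())
```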
Western blotting assay
Total proteins were extracted by lysing cells in buffer (50 mM Tris pH 7.4, 150 mM NaCl, 0.5% NP-40, 50 mM NaF, 1 mM Na3VO4, 1 mM phenylmethylsulfonyl fluoride, 25 mg/ml leupeptin and 25 mg/ml aprotinin). The lysates were cleared by centrifugation and the supernatants were collected. Proteins were extracted using the protein extraction kit following the manufacturer's instructions. Protein concentration was determined using protein assay reagent (Bio-Rad, Hercules, CA, USA). Equal amounts of protein were separated on SDS-PAGE, transferred to PVDF membranes, incubated with antibodies against AEG-1, HIF-1α, LC3-I, LC3-II, Beclin-1, and GAPDH, followed by incubation with the secondary antibodies. The membrane was then washed three times and visualized with diaminobenzidine. Quantification of the proteins was performed with the ECL system (Pierce Biotechnology Inc., Rockford, IL, USA). Each value represents the mean of triplicate experiments, and is presented as the relative density of protein bands normalized to GAPDH.
MTT cell viability assay
The MTT assay was carried out as previously described (Yan et al., 2012). Cells were seeded in a 96-well plate at a concentration of 2.5 × 10^4/ml (100 μl/well). Six parallel wells were assigned to each group. Then, 20 μl/well of 5 mg/ml MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) was added at different times after seeding, and the plates were incubated for another 4 h. The supernatant was removed and the product converted from MTT was dissolved by adding 150 μl/well dimethylsulfoxide (DMSO). The plate was gently shaken for 15 min at room temperature and an enzyme-linked immunosorbent assay reader was used to measure the absorbance of each well at 570 nm.
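As a small illustration of how such absorbance readings are typically converted into a relative viability figure, here is a hedged sketch; the blank correction, the well values and the normalisation to an untreated control are assumptions for the example, not the authors' exact calculation.

```python
import numpy as np

def viability_percent(a570_treated, a570_control, a570_blank=0.0):
    """Percent viability relative to an untreated control, after blank subtraction."""
    treated = np.asarray(a570_treated, dtype=float) - a570_blank
    control = np.mean(np.asarray(a570_control, dtype=float) - a570_blank)
    return 100.0 * treated / control

# Illustrative absorbances for the six parallel wells of one ADM dose
print(viability_percent([0.61, 0.58, 0.63, 0.60, 0.59, 0.62],
                        [0.92, 0.95, 0.90, 0.93, 0.94, 0.91],
                        a570_blank=0.05))
```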
Annexin V-FITC flow cytometric analysis
The Annexin V-FITC apoptosis detection kit (BD Biosciences, San Jose, CA, USA) was adopted to detect early apoptosis, as previously described (Yan et al., 2012). Briefly, after culturing for 48 h, each group of cells was harvested, washed twice with pre-chilled PBS and resuspended in binding buffer (10 mM HEPES-NaOH pH 7.4, 144 mM NaCl and 25 mM CaCl2) at a concentration of 1 × 10^6 cells/ml. One hundred microliters of this solution (1 × 10^5 cells) was mixed with 5 μl of Annexin V-FITC and 5 μl of PI (BD Biosciences) according to the manufacturer's instructions. The mixed solution was gently vortexed and incubated in the dark at room temperature (25°C) for 15 min. Four hundred microliters of 1X dilution buffer were added to each tube and cell apoptosis analysis was performed by flow cytometry (BD FACSCalibur) within 1 h. At least 10,000 events were recorded and represented as dot plots.
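The apoptotic fractions reported from such dot plots are usually obtained by quadrant gating of the Annexin V-FITC and PI signals. The sketch below shows that idea on synthetic event data; the thresholds, the synthetic distributions and the absence of fluorescence compensation are deliberate simplifications, not the gating actually applied with the FACSCalibur acquisition software.

```python
import numpy as np

def apoptosis_fractions(annexin, pi, annexin_cut, pi_cut):
    """Quadrant gating of Annexin V-FITC / PI dot-plot events.

    Early apoptosis:          Annexin+ / PI-
    Late apoptosis/necrosis:  Annexin+ / PI+
    Returns both as percentages of all events.
    """
    annexin = np.asarray(annexin)
    pi = np.asarray(pi)
    early = np.mean((annexin > annexin_cut) & (pi <= pi_cut))
    late = np.mean((annexin > annexin_cut) & (pi > pi_cut))
    return 100.0 * early, 100.0 * late

# Synthetic fluorescence intensities standing in for >= 10,000 recorded events
rng = np.random.default_rng(0)
annexin = rng.lognormal(mean=2.0, sigma=0.8, size=10000)
pi = rng.lognormal(mean=1.5, sigma=0.7, size=10000)
print(apoptosis_fractions(annexin, pi, annexin_cut=20.0, pi_cut=15.0))
```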
Statistical analysis
The SPSS13.0 software (SPSS, Inc., Chicago, IL, USA) was used for all statistical analyses, and results are expressed as mean ± SEM. The comparison between two groups was evaluated by Student's t test; the comparison between multiple groups was performed using one-way analysis of variance (ANOVA), followed by the Tukey's test. Results were considered statistically significant at P < 0.05.
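For readers reproducing this analysis outside SPSS, an equivalent open-source sketch of the stated tests (Student's t test, one-way ANOVA followed by Tukey's test) is given below, using SciPy and statsmodels on made-up viability values; the group names and numbers are placeholders, not study results.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative viability readings (% of control) for three treatment groups
rng = np.random.default_rng(1)
groups = {
    "control": rng.normal(100, 5, 6),
    "ADM_normoxia": rng.normal(55, 6, 6),
    "ADM_hypoxia": rng.normal(75, 6, 6),
}

# Two-group comparison: Student's t test
t, p = stats.ttest_ind(groups["ADM_normoxia"], groups["ADM_hypoxia"])
print(f"t = {t:.2f}, p = {p:.4f}")

# Multi-group comparison: one-way ANOVA followed by Tukey's post hoc test
f, p_anova = stats.f_oneway(*groups.values())
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(f"ANOVA F = {f:.2f}, p = {p_anova:.4f}")
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```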
Results
AEG-1, LC3-II, Beclin-1, and HIF-1α are significantly up-regulated in T-NHL tissues

To examine the expression of AEG-1 in T-NHL, tumor tissues (n = 30) and normal lymph node tissues (n = 16) were first employed and analyzed by RT-PCR and western blot. AEG-1 expression was significantly up-regulated in tumor tissues compared with normal tissues, both at the mRNA (Fig. 1a) and protein levels (Fig. 1b). Western blot analysis also revealed elevated levels of the autophagy-related markers LC3-II and Beclin-1 in T-NHL tissues (Fig. 1b). Additionally, the HIF-1α level was also elevated in T-NHL tissues (Fig. 1b). Immunohistochemical staining further confirmed high levels of AEG-1 and LC3-II in T-NHL tissues, which were rarely detected in normal tissues (Fig. 1c).
Knocking down HIF-1α inhibits expression of AEG-1, LC3-II and Beclin-1 in T-NHL cells under hypoxia
The transcription factor HIF-1α is a master regulator of the cellular response to hypoxia, and it has been reported that HIF-1α promoted AEG-1 expression by binding to its promoter (Zhao et al., 2017). Here, to further elucidate the detailed role of HIF-1α in T-NHL under hypoxia, HIF-1α was first silenced in Jurkat (Fig. 3a) and Hut-78 (Fig. 3b) cells, and RT-PCR was performed to assess transfection efficiency. Moreover, western blot results revealed that AEG-1, LC3-II and Beclin-1 expression, as well as the LC3-II/LC3-I ratio, were remarkably decreased in Jurkat cells transfected with HIF-1α siRNA under hypoxia (Fig. 3c-e). Similarly, under the hypoxic condition, Hut-78 cells transfected with HIF-1α siRNA exhibited the same trend (Fig. 3f-h).
AEG-1 reduces chemosensitivity of Hut-78 cells under hypoxia
Hut-78 cells were first treated with different doses of ADM under normoxia and hypoxia for 24 h. The MTT assay revealed that ADM dose-dependently decreased cell viability both in normoxia and hypoxia, while cell viability in hypoxia was much higher than that in normoxia (Fig. 4a). In contrast, cell apoptosis was significantly increased in a dose-dependent manner both in normoxia and hypoxia, but the apoptosis of cells incubated in hypoxia was markedly decreased compared with that in normoxia (Fig. 4b). These results indicated that hypoxia attenuated the response of Hut-78 cells to ADM. Then RT-PCR was performed to assess the transfection efficiency of pcDNA3.1-AEG-1 and AEG-1 siRNA in Hut-78 cells. AEG-1 expression was significantly increased in cells transfected with pcDNA3.1-AEG-1, but significantly decreased in cells transfected with AEG-1 siRNA (Fig. 4c). Besides, western blot revealed that AEG-1 overexpression markedly up-regulated Beclin-1 expression and the LC3-II/LC3-I ratio, whereas these were significantly down-regulated in cells transfected with AEG-1 siRNA, both in normoxia and hypoxia (Fig. 4d). In contrast, p62 expression was markedly down-regulated in cells with AEG-1 overexpression, but AEG-1 siRNA significantly up-regulated p62 expression, both in normoxia and hypoxia (Fig. 4d). In particular, Beclin-1 expression and the LC3-II/LC3-I ratio in hypoxia were prominently increased, while p62 expression in hypoxia was prominently decreased, in comparison with normoxia (Fig. 4d). Further, under hypoxic conditions, AEG-1 overexpression markedly enhanced the viability of Hut-78 cells following ADM treatment (Fig. 4e), while cell apoptosis was notably reduced (Fig. 4f). These results indicated that AEG-1 blunted the sensitivity of Hut-78 cells to ADM in hypoxia.

Fig. 1 Relative expression of AEG-1, Beclin-1 and LC3-II in T-NHL tissues and normal lymphoid tissues. a Detection of AEG-1 in 30 T-NHL tissues and 16 normal lymphoid tissues using RT-PCR. b Expression of AEG-1, Beclin-1, LC3-I, LC3-II and HIF-1α were detected by western blot. c AEG-1 and LC3-II were detected by immunohistochemical assay. Bar = 20 μm. *** p < 0.001, T-NHL tissues vs. normal lymphoid tissues
Discussion
Increasing evidence suggests that AEG-1 acts as an oncogene and is involved in many aspects of tumorigenesis, including protection from serum starvation-induced apoptosis and the promotion of tumor growth, angiogenesis and migration (Emdad et al., 2009; Emdad et al., 2007). High expression of AEG-1 has been reported in ovarian cancer tissues compared to normal ovarian tissues (Blanco et al., 2011). Besides, microarray analysis also confirmed that AEG-1 is associated with the regulation of chemoresistance (Meng et al., 2013). Actually, AEG-1 was verified to be up-regulated in T-NHL and to be associated with tumor growth in our previous study (Yan et al., 2012), but its effect on chemosensitivity in T-NHL is not understood.
In addition, we also proposed a mechanism by which AEG-1 enhanced the chemoresistance of Hut-78 cells in hypoxia. A large number of studies have demonstrated that autophagy plays a vital role in hypoxia-induced drug resistance (Liu et al., 2010; Ko et al., 2012; Rzymski et al., 2009). Thus, to illuminate the specific role of autophagy in the chemoresistance of Hut-78 cells exposed to hypoxia, 3-MA (an autophagy inhibitor) was selected. We found that inhibition of autophagy under hypoxia attenuated the cell viability and increased the apoptosis rate of Hut-78 cells. Furthermore, AEG-1 partially abolished the effect of 3-MA on the response of Hut-78 cells to ADM in hypoxia, as revealed by MTT and apoptosis assays, indicating that AEG-1 reduced the chemosensitivity of Hut-78 cells by inducing autophagy. It was reported that activation of autophagy inhibits tumor metastasis through the induction of HIF-1α (Indelicato et al., 2010). Previous studies have confirmed that hypoxia can induce autophagy through at least three pathways, including activating transcription factor 4, hypoxia-inducible factor 1 and AMP-activated protein kinase (Liu et al., 2010; Rzymski et al., 2009; Kim et al., 2011). Actually, we also observed that the inhibition of HIF-1α significantly down-regulated AEG-1, LC3-II and Beclin-1 expression, and the LC3-II/LC3-I ratio, in T-NHL cells exposed to hypoxia. Unfortunately, the detailed relationship between HIF-1α and AEG-1 in the chemoresistance of Hut-78 cells exposed to hypoxia is not clear, and needs further investigation.
Conclusion
In this paper, our data presents evidence that AEG-1, LC3-II, Beclin-1, and HIF-1α are significantly up-regulated in T-NHL tissues, and hypoxia triggers AEG-1, LC3-II and Beclin-1 expression in T-NHL cells (Hut-78 and Jurkat cells). AEG-1 also reduces chemosensitivity of Hut-78 cells in hypoxia. Further, AEG-1 enhances chemoresistance of Hut-78 cells exposed to hypoxia by promoting autophagy. This study contributes to the target therapy against the drug resistance in T-NHL.
Availability of data and materials
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Generalized and multiplexed $q$-plates: experimental implementation
In this paper we generalize the concept of $q$-plate, allowing arbitrary functions of both the radial and the azimuthal variables, and study their effect on uniformly polarized beams in the near and far-field regime. This gives a tool for achieving beams with hybrid states of polarization (SoPs), and alternative phase and intensity distributions. We also implement an experimental device based on a liquid crystal on silicon (LCoS) display for emulating these generalized $q$-plates. Moreover, we propose an application that takes advantage of the pixelated nature of this kind of devices for creating arbitrary superpositions of vector and vortex beams by representing onto the LCoS randomized combinations of two different $q$-plates, i.e. multiplexed $q$-plates. Great agreement is found between theoretical and experimental results.
Introduction
Vortex beams carrying orbital angular momentum (OAM) have proven to be useful in a large number of applications, ranging from classical implementations such as optical communications [1], microscopy [2], micro-manipulation [3] and micro-machine design [4], to the realization of quantum information protocols in high dimensional Hilbert spaces [5] and multilevel quantum key distribution [6]. On the other hand, vector beams, characterized by showing a non-uniform distribution of the state of polarization (SoP), have been widely studied because of their tight focusing properties [7], besides their potential application to communications [8], optical tweezers [9], quantum entanglement [10] and more.
While light propagates through a homogeneous and isotropic medium, SoP and vorticity are separately conserved; but they may be coupled in the presence of anisotropic and inhomogeneous media. In 2006 Marrucci et al. introduced for this purpose the q-plate, which can be thought of as a half-wave retarder where the principal axis rotates with the azimuth angle [11]. Hence, its matrix representation in the Jones formalism has the form

$$ M_q(\theta) = \begin{pmatrix} \cos 2q\theta & \sin 2q\theta \\ \sin 2q\theta & -\cos 2q\theta \end{pmatrix}, \qquad (1) $$

where 2q is the number of times the principal axis of the retarder gives a whole turn around the center of the element. Although in the first years the design of q-plates was mainly oriented to the conversion of spin angular momentum to OAM, over time these elements evolved towards the objective of obtaining vector and vortex beams from complex superpositions of SoPs and OAM. Q-plates are typically inhomogeneous and anisotropic devices where the spin to orbital conversion (STOC) is related to the Pancharatnam-Berry phase. Even though they are highly versatile elements, with many applications in the field of singular optics, different approaches extending the concept of q-plates were proposed in order to obtain greater flexibility in the design and diversity of responses. Some of them are based on metasurfaces [12], which allow the combined use of the dynamic and geometric phases. Others are based on creating q-plates with different q values depending on the region of the element [13], or on making use of spatial light modulators (SLMs) to design q-plates with a nonlinear dependence on the azimuthal coordinate for binary codification [14]. In a recent paper [15], we proposed the use of a generalized q-plate, allowing in its design arbitrary functions of the azimuthal coordinate, giving rise to the generation of alternative kinds of vector and vortex beams. We also came up with a device based on a parallel aligned LCoS display, capable of achieving such distributions, emulating the generalized q-plate proposed.
Here we make use of this experimental device to create generalized q-plates with arbitrary modulations of the polarization field of a beam, in both polar coordinates r and θ, in such a way that we are able to explore complex vector and vortex beams, with alternative polarization and phase structures. Besides, we take advantage of the pixelated nature of the LCoS display and propose a scheme for achieving arbitrary superpositions of vector or vortex beams with different q values by emulating multiplexed q-plates, defined as discontinuous random combinations of two different q-plates. Superposition of vortex beams carrying OAM has shown multiple applications, for example, for creating arbitrary OAM qudit states for quantum information [16], for optical trapping and micro-manipulation using residual OAM resulting from a superposition [17], or for optical communications [18]; while superposition of vector beams has been used for 3D polarization control [19], improved interferometry [20] and more.
The Jones matrix that describes a generalized q-plate is

$$ M(r,\theta) = \begin{pmatrix} \cos 2\Phi(r,\theta) & \sin 2\Phi(r,\theta) \\ \sin 2\Phi(r,\theta) & -\cos 2\Phi(r,\theta) \end{pmatrix}, \qquad (2) $$

and it represents a half-wave retarder in which the principal axis angle is an arbitrary function Φ(r, θ). When a linearly polarized beam passes through such an element, it becomes a vector beam with a structured linear polarization pattern in which the azimuth of the polarization vector varies as a function 2Φ(r, θ). On the other hand, when impinging with a circularly polarized beam, it acquires a phase 2Φ(r, θ), while the electric vector inverts its sense of rotation [15].
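As an illustration of how Eq. (2) acts on uniformly polarized light, the following minimal numerical sketch builds the per-pixel Jones matrix for an example plate function and applies it to linear and circular inputs. The grid size, the choice Φ = θ/2 and the circular-polarization sign convention are our own assumptions for the example, not part of the experimental device.

```python
import numpy as np

def qplate_jones(phi):
    """Per-pixel Jones matrix of a generalized q-plate (Eq. 2): a half-wave
    retarder whose principal axis is rotated by phi(r, theta)."""
    c, s = np.cos(2 * phi), np.sin(2 * phi)
    return np.array([[c, s], [s, -c]])            # shape (2, 2, N, N)

# Illustrative grid and plate function: phi = q*theta with q = 1/2
N = 256
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)
theta = np.arctan2(Y, X)
M = qplate_jones(0.5 * theta)

# Vertically polarized input -> vector beam whose azimuth rotates as 2*phi
E_lin = np.einsum("ijyx,j->iyx", M, np.array([0.0, 1.0]))
azimuth = np.arctan2(E_lin[1].real, E_lin[0].real) % np.pi

# Left-circular input (one common convention) -> opposite handedness out,
# carrying the geometric phase 2*phi, i.e. an optical vortex
E_circ = np.einsum("ijyx,j->iyx", M, np.array([1.0, 1.0j]) / np.sqrt(2))
vortex_phase = np.angle(E_circ[0])                # equals 2*phi (mod 2*pi)
```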
This way, generalized q-plates allow both the generation of vortex beams with phase singularities carrying OAM, and vector beams with polarization singularities, showing many potential applications in the field of singular optics. As seen in a previous work [15], interesting effects arise when these fields are propagated towards the far field regime, as the singularities tend to split giving place to different combinations.
In section 2 we present the experimental device used and explain how it can manage to mimic the behaviour of a generalized q-plate. In section 3 we show simulated and experimental results for some of these generalized q-plates on uniformly polarized input beams, in the near and far field approximation. In section 4 we show how to create superposition of vector or vortex beams by means of multiplexing different q-plates in the same element. This can be seen as a discontinuous generalized q-plate. Pixel by pixel modulation offered by SLMs makes this approach possible for experimental implementation. The main conclusions are given in section 5.
Experimental device
We propose a compact device that emulates the effect of the generalized q-plates described above by making use of a parallel aligned reflective liquid crystal on silicon (PA-LCoS) display. In this case we use a PLUTO-NIR-010-A phase only SLM by HOLOEYE, which adds a programmable pure phase modulation to the horizontal component of the input beam. The experimental setup is sketched in Fig. 1.
A He-Ne laser beam is focused on a pinhole (P) by means of a microscope objective (O), and then collimated using a convergent lens L1. A polarization state generator (PSG), composed of a linear polarizer and a quarter wave plate, is used to create arbitrary uniformly polarized beams, which then go through the generalized q-plate stage. For our purpose, the first half of the SLM is programmed with a phase modulation ζ = −2Φ(r, θ) − π and the second half with a phase modulation η = −ζ = 2Φ(r, θ) + π. The quarter wave plate QWP2 is oriented at 45° with respect to the LC director, introducing a net −90° rotation of the polarization vector due to the double passage; and the lens L2, in a 4f configuration with a mirror (M) at focus, forms a real inverted image of the first half of the SLM over the second half. A pair of non-polarizing beam-splitters is used to redirect the incident beam through the 4f system (transmission of the input beam by the first beam-splitter is blocked). This way, one can write down the Jones matrix describing the whole effect of both halves of the SLM. The phase programmed in the first half of the SLM is added to the horizontal component of the incident field which, after the 4f system, turns vertical. Then, the phase programmed in the second half is added to the former vertical component, which turns horizontal. This resembles the regular behaviour of a q-plate, but applied to orthogonal linear states of polarization. Quarter wave plates QWP1, oriented at 45°, and QWP3, oriented at −45°, transform the input and output beams accordingly, in order to add the respective phase modulations to circularly polarized orthogonal components of the input field, hence emulating the behaviour of a generalized q-plate. The Jones matrix of the whole device can be calculated as the product of the matrices of these elements; this matrix representation coincides with that of the generalized q-plate shown in Eq. 2, thus emulating all the expected behaviors. In addition, since it is possible to program the PA-LCoS pixel by pixel and to make modifications at video rates, this scheme gives great flexibility in the design of the generalized q-plates. Figure 2 shows an example of a phase function that can be addressed to the SLM. The function shown is the one required for emulating the conventional q-plate with q = 1/2. Phase functions addressed to the SLM are flipped in order to compensate for the inversion caused by lens L2 and the odd number of reflections. In the lab situation, a blazed grating is added to the phase functions in order to get the desired beam into a diffracted order on the Fourier plane, allowing the spurious light to be filtered out with the spatial filter SF. Also, a uniform phase value can be added to the respective phase function, in order to correct the phase shift introduced by reflection in the beam-splitters (see appendix). Finally, the beam that emerges from the generalized q-plate goes through a characterization stage. Lenses L3 and L4 form a second 4f system which images the output beam onto a CCD camera. The spatial filter (SF) located at the Fourier plane is used to block out the spurious diffracted light, leaving only that coming from the programmed phase modulation. Alternatively, L4 may be replaced by a microscope objective in order to obtain a magnified image of the Fourier plane on the CCD. A Stokes polarimeter formed by a rotating quarter wave plate (QWP) and a fixed linear polarizer (LP) is used to perform polarization measurements.
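The following sketch shows how the two half-screen phase masks described above (ζ = −2Φ − π and η = 2Φ + π, optionally with a blazed grating and the compensating flip) could be tabulated for a pixelated display. The screen size, grating period, flip axis and which half is flipped are placeholders chosen for the example, not specifications of the actual PLUTO display or of the alignment used in the experiment.

```python
import numpy as np

def slm_phase_maps(phi, grating_period_px=16, flip_second_half=True):
    """Phase functions for the two SLM halves of the emulated q-plate:
    zeta = -2*phi - pi and eta = +2*phi + pi, wrapped to [0, 2*pi).
    A horizontal blazed grating is added so the modulated light goes into a
    diffracted order that the spatial filter can isolate later."""
    ny, nx = phi.shape
    blaze = 2 * np.pi * np.arange(nx)[None, :] / grating_period_px
    zeta = np.mod(-2 * phi - np.pi + blaze, 2 * np.pi)
    eta = np.mod(+2 * phi + np.pi + blaze, 2 * np.pi)
    if flip_second_half:
        # compensate the image inversion introduced by the 4f relay (assumed axis)
        eta = eta[:, ::-1]
    return zeta, eta

# Example: conventional q = 1/2 plate, Phi = theta/2, on an illustrative half-screen
ny, nx = 1080, 960
yy, xx = np.mgrid[0:ny, 0:nx]
theta = np.arctan2(yy - ny / 2, xx - nx / 2)
zeta, eta = slm_phase_maps(0.5 * theta)
```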
Generalized q-plates with radial and azimuthal dependence
Beams created from q-plates with arbitrary functions Φ of the azimuthal and radial coordinates may show a variety of interesting effects in their amplitude, phase and polarization structure. In this section we show some examples of the different behaviors found.
Polynomial growth in θ
First we study functions that depend only on the azimuthal coordinate, including regular q-plates. The first example is the non-linear function Φ(θ) = q(2π)^{1−p} θ^p, with p any integer power. The multiplicative constant (2π)^{1−p} fixes the total azimuthal variation at q times 2π, avoiding discontinuous phase steps after a 2π period in θ. These beams illustrate in a simple way the effect of non-linearities in the q-plate function. Figure 3 shows the simulated and measured intensity and polarization structure of the created beams, as well as the azimuth and ellipticity of the polarization ellipses, at the output plane of the generalized q-plates, for q = 1/2 with powers p = 1 and p = 2. The input beam (emerging from the PSG) is linearly polarized in the vertical direction. As anticipated, the result is a vector beam with a structured linear polarization pattern in which the azimuth of the polarization vector varies as a function 2Φ(r, θ). Figure 4 shows the corresponding results at the Fourier plane (far field diffraction). Polarization measurements were performed by using an imaging Stokes polarimeter, composed of a rotating quarter wave plate followed by a vertical linear polarizer. The intensity sensed at the CCD for a set of different angles of the wave plate (see appendix) gives the necessary information to calculate the Stokes parameters S_i(x, y) of the measured beam. The azimuth angle gives the orientation of the polarization ellipses and is obtained as ψ = arctan(S_2/S_1)/2, while the ellipticity angle gives the form of the polarization ellipses and can be calculated as χ = arcsin(S_3/S_0)/2; it ranges from −π/4 to π/4 and is positive for a right-handed sense of rotation of the electric vector, and negative for a left-handed sense of rotation [21]. We used these two parameters instead of the raw Stokes parameters because they show more effectively the local nature of the polarization ellipses, which helps to identify singular behaviours, as shall be seen shortly. From these parameters we plotted the polarization ellipses over the intensity distribution using a color code based on χ: for χ = 0 (linear polarization) we used green and for χ = ±π/4 (circular polarization) we used red. Intermediate colors represent elliptical polarization.
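A minimal sketch of the last step of that analysis, converting measured Stokes parameter maps into the azimuth and ellipticity-angle maps used in the figures, is given below. The clipping and the use of arctan2 for quadrant handling are our own safeguards for noisy pixels, not part of the paper's procedure.

```python
import numpy as np

def ellipse_parameters(S0, S1, S2, S3):
    """Azimuth psi and ellipticity angle chi from Stokes parameter maps:
    psi = 0.5 * arctan(S2 / S1), chi = 0.5 * arcsin(S3 / S0) in [-pi/4, pi/4]."""
    psi = 0.5 * np.arctan2(S2, S1)                        # orientation of the ellipse
    ratio = np.clip(S3 / np.maximum(S0, 1e-12), -1.0, 1.0)
    chi = 0.5 * np.arcsin(ratio)                          # > 0 right-handed, < 0 left-handed
    return psi, chi

# Illustrative single "pixel": right-circular light -> chi = +pi/4
print(ellipse_parameters(1.0, 0.0, 0.0, 1.0))
```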
For a regular q-plate (p = 1) the polarization field in the Fraunhofer regime shows a "donut" intensity distribution, due to the central polarization vortex [22]. Conversely, the beams obtained from the non-linear polynomial q-plates show a break in the cylindrical symmetry of the SoP and intensity distributions. In the general case it can be seen that, instead of showing a central singularity with topological charge 2q, they show 4q isolated points of circular polarization (c-points) around which the polarization azimuth angle ψ rotates by π. The topological charge is measured as the number of times ψ gives a whole turn around the beam's axis, so the topological charge of these c-points is ±1/2.
When input light is circularly polarized, the STOC phenomenon can be observed. Fig. 5 shows the intensity and polarization structure of the created beams, as well as the phase profile and ellipticity angle, at the Fourier plane (far field diffraction) of the generalized q-plates, for q = 1 with powers p = 1 and p = 2, when the input beam is left circularly polarized. Phase measurements were performed using a phase shifting interferometry technique. An outer region of the SLM is used to create a reference beam which interferes with the object beam at the Fourier plane. This is achieved by adding a blazed grating to the phase function of the q-plate and the same grating to a circular region of the SLM with uniform phase. An example of the SLM phase function addressed during a measurement with this technique is shown in Fig. 6. Displacing the reference grating results in a phase shift of the reference beam on the Fourier plane [23]. Measuring the intensity of the interference pattern for 4 consecutive π/2 phase shifts of the reference beam, the phase of the object beam can be obtained according to

$$ \Psi(x, y) = \arctan\!\left[\frac{I_4(x, y) - I_2(x, y)}{I_1(x, y) - I_3(x, y)}\right], $$

where I_i represents the intensity measured in the i-th step [24].

Figure 6: Phase addressed to the SLM in order to perform phase shifting interferometry. A blazed grating is added to the q-plate function, obtaining a characteristic fork diagram. The reference beam phase can be shifted by displacing its respective blazed grating, which has the same period and orientation as the object grating.
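A small numerical sketch of that four-step retrieval follows, checked against a synthetic fringe pattern rather than experimental data; the use of arctan2 (which returns the phase wrapped to (−π, π]) is our choice for the example.

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Object phase from four interferograms recorded at reference phase
    shifts of 0, pi/2, pi and 3*pi/2: Psi = arctan[(I4 - I2) / (I1 - I3)]."""
    return np.arctan2(I4 - I2, I1 - I3)

# Synthetic test: a charge-2 vortex phase, as produced by a q = 1 plate
N = 128
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
true_phase = 2 * np.arctan2(Y, X)
interferograms = [1.0 + 0.8 * np.cos(true_phase + d)
                  for d in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]
recovered = four_step_phase(*interferograms)      # matches true_phase (wrapped)
```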
In the circularly polarized case for a linear q-plate, the phase distribution shows a central singularity (phase vortex) with topological charge 2. Polarization remains uniformly circular, but inverts its sense of rotation to right handed, which shows the STOC phenomenon. In the non-linear case, two isolated vortexes with topological charge 1 appear, and cylindrical symmetry is lost. In the general case, when losing linearity, the central singularity of topological charge 2q is divided into 2q singularities of topological charge ±1. In all these cases experimental and theoretical results show great agreement, except in the regions with very low intensity, in which measurements have a higher discrepancy.
Polynomial q-plates in r and θ
Following a wider scheme, we implemented generalized q-plates of the form

$$ \Phi(r, \theta) = \Phi_R(r) + \Phi_\Theta(\theta). $$

Thus, the phase addressed to the SLM can be thought of as two phase masks dependent on r and θ, respectively. We used, for the sake of simplicity, only polynomial functions for Φ_R and Φ_Θ, but the following treatment can be equally done using arbitrary 2D functions, as shall be seen in the next section.
The generalized q-plate function studied is 2Φ(r, θ) = 2q_r π (r/r_0)^{p_r} + 2q_t (2π)^{1−p_t} θ^{p_t}. This describes a family of spiral functions like those shown in Fig. 7. Regarding the azimuthal dependence, this function shows polynomial growth with power p_t and total variation 2πq_t. The radial part shows polynomial growth from the center with power p_r, reaching a value of q_rπ at r = r_0, r_0 being the plate radius. Results obtained in the far field, when input light is vertically polarized, are shown in Fig. 8. When p_t = 1 (linear in θ), cylindrical symmetry in the polarization and intensity distributions is preserved. In the case shown in the first two rows of figure 8, the result is a cylindrical vector beam with a central singularity of topological charge 2q_t = 1. In this case, in addition, the intensity and polarization azimuth vary radially. For instance, along the first concentric intensity maximum the polarization is radial, while along the next minimum it is azimuthal. This is the effect of the radial dependence.

Figure 8: Results for generalized q-plates with polynomial growth both in r and θ, in the far field regime, when the input beam is vertically polarized. The first and second rows show the case with q_r = 1, p_r = 1, q_t = 1/2 and p_t = 1. The third and fourth rows show the case with q_r = 1, p_r = 2, q_t = 1/2 and p_t = 2.
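As a rough illustration of how such far-field (Fourier-plane) distributions can be simulated, the sketch below propagates each Cartesian component of the near-field vector beam with a 2D FFT. The grid, the Gaussian illumination, the aperture radius and the specific parameter values are arbitrary choices for the example and do not reproduce the experimental sampling.

```python
import numpy as np

def far_field(E):
    """Fraunhofer (Fourier-plane) field: 2D FFT of each Jones component."""
    return np.array([np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(c))) for c in E])

N = 512
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)
r = np.hypot(X, Y)
theta = np.mod(np.arctan2(Y, X), 2 * np.pi)

# 2*Phi(r, theta) = 2*q_r*pi*(r/r0)**p_r + 2*q_t*(2*pi)**(1 - p_t) * theta**p_t
q_r, p_r, q_t, p_t, r0 = 1.0, 1, 0.5, 1, 1.0
two_phi = 2 * q_r * np.pi * (r / r0) ** p_r + 2 * q_t * (2 * np.pi) ** (1 - p_t) * theta ** p_t

illumination = (r <= r0) * np.exp(-(r / 0.7) ** 2)        # illustrative input beam
# Vertically polarized input through the plate: (E_x, E_y) = (sin 2Phi, -cos 2Phi)
E_near = illumination * np.array([np.sin(two_phi), -np.cos(two_phi)])
E_far = far_field(E_near)
intensity = np.abs(E_far[0]) ** 2 + np.abs(E_far[1]) ** 2
```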
There is a wide variety of vector beams that can be created by changing the four parameters in the expression of Φ. By increasing the power p_t, it is observed that the central singularity splits into several singularities with lower topological charge, and the linearly polarized vector beam becomes a beam with a hybrid SoP. When q_r = 1, p_r = 2, q_t = 1/2 and p_t = 2, as shown in the third and fourth rows of Fig. 8, there is a central intensity minimum around which the polarization vector rotates, with topological charge 2q_t = 1. This singularity takes place between two regions of opposite polarization rotation handedness. The experimental results are less satisfactory in the regions with very low intensity, since the relative error of the polarization measurement is increased.
The degree of freedom in r allows the spatial distribution of intensity, and thus the position of polarization singularities and critical points, to be modulated. We chose to show the linear and quadratic cases because they are commonly used in singular optics for representing phase functions related to axicons and lenses, respectively. Figure 9 shows results for the same cases as figure 8, when input light is left circularly polarized. Low intensity regions match the ellipticity minima (left handed polarization) of the case with linearly polarized input, and polarization azimuth singularities are "replaced" by phase singularities. A vertically polarized beam can be described as a balanced superposition of left and right circularly polarized beams, and after passing through the generalized q-plate, left circular polarization turns right, and vice versa. Then, it is reasonable that when input light is left circularly polarized, regions of the output beam corresponding to left circular polarization show no intensity.
In this case, when growth in θ is non-linear, the output intensity, phase and polarization distributions lose the cylindrical symmetry. These phase singularities can be locally seen as vortexes that carry OAM with topological charge 2q_t = ±1.
Multiplexed q-plates and beam superposition
The possibility of using arbitrary functions Φ in the definition of the generalized q-plate, and the ability to implement them by means of SLMs, which allow pixel by pixel modulation, gives the capability of multiplexing various plates in the same device simultaneously. This creates a superposition of vector or vortex beams coming from different phase functions.
The use of an SLM leads to representing the function Φ (as a discontinuous version of itself) on an array of square elements (pixels), which can take discrete phase values. Multiplexing can be achieved by selecting randomly two complementary sets of pixels, and representing on each set a different function. This random multiplexing scheme has been used previously, for example, to increase the depth of focus of diffractive lenses [25]. This feature is possible due to the pixelated structure of the SLM and cannot be accomplished with conventional q-plate devices.
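A minimal sketch of that random multiplexing step follows: each pixel is assigned to one of two q-plate functions, with the fraction of pixels per function controlling the relative weight of the two beams in the superposition. The grid size, the q values and the 50/50 weighting are illustrative choices.

```python
import numpy as np

def multiplex(phi_a, phi_b, weight_a=0.5, seed=0):
    """Random pixel-wise multiplexing of two q-plate functions: each pixel is
    assigned to phi_a with probability weight_a, and to phi_b otherwise."""
    rng = np.random.default_rng(seed)
    mask = rng.random(phi_a.shape) < weight_a
    return np.where(mask, phi_a, phi_b)

N = 512
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)
theta = np.mod(np.arctan2(Y, X), 2 * np.pi)

# Multiplex q = 1/2 and q = 1 linear q-plates with equal pixel shares
phi_multiplexed = multiplex(0.5 * theta, 1.0 * theta)
```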
As a simple example, we show the result of combining pairs of linear q-plates, where Φ(θ) = qθ, with different q values. The functions for q_1 = 1/2 and q_2 = 1 are shown in figure 10, together with the discontinuous Φ(r, θ) resulting from multiplexing both. Results for other combinations are shown in figure 11 (for input linear polarization) and figure 12 (for input left circular polarization). The superposition of beams coming from each set of pixels is obtained. This scheme can generate superpositions of alternative vector or vortex beams with arbitrary topological charges. The relative weight of the components in the superposition can be varied by changing the size of the set of pixels assigned to each component, which can be easily done given the flexibility and speed provided by the SLMs. The number of superimposed beams can be increased, and is limited by the resolution of the SLM. The Fraunhofer diffracted field of the generated beams coincides with the Fourier transform of the field at the q-plate plane. For a function g(r, θ) separable in polar coordinates this can be written in terms of an infinite sum of weighted Hankel transforms [23],

$$ \mathcal{F}\{g\}(\rho, \varphi) = \sum_{k=-\infty}^{\infty} c_k \, (-i)^k \, e^{ik\varphi} \, H_k\{g_R(r)\}(\rho), \qquad (7) $$

where c_k is a complex coefficient and H_k is the Hankel transform operator of order k, J_k being the kth-order Bessel function of the first kind, and g(r, θ) = g_R(r)g_Θ(θ).

Figure 11: Far field diffraction resulting from multiplexed q-plates when the input beam is vertically polarized. The first and second rows show the combination of q = 1/2 and q = 1 (columns 1-3), and the combination of q = 1/2 and q = 3/2 (columns 4-6). The third and fourth rows show the combination of q = 1/2 and q = 2 (columns 1-3), and the combination of q = 3/2 and q = −3/2 (columns 4-6).
The far field obtained from a q-plate with an even 2q value only shows terms with even k value in the expression of Eq. 7, so the factor (−i)^k = ±1. When 2q is an odd number, only terms with odd k value appear, and then (−i)^k = ±i. In the case of a superposition of q-plates with topological charges of the same parity, the phase factors of both contributions differ by an even multiple of ±π/2, i.e. they are either in phase or in counterphase. This creates uniform linearly polarized vector beams with dark polarization singularities caused by destructive interference where the added polarization vectors are parallel and have opposite phases. On the other hand, if the combined 2q values have different parity, the phase factors of the contributions differ by an odd multiple of ±π/2, i.e. they are in quadrature. Hence, c-points take place at spots where the added polarization vectors are orthogonal. The central singularity, characteristic of all cylindrical vortex beams irrespective of their parity, appears in all cases.

Figure 12: Results for the same multiplexed q-plates as Fig. 11, when the input beam is left circularly polarized.
Those q-plates with a higher q value create donut-shaped beams with larger radii, so in the superposition the inner region of the beam shows the structure of the one created by the q-plate with the lower q value, while in the outer region the higher q value predominates. When the input polarization is left circular (Fig. 12), only the right circular projection of the beams shown in Fig. 11 remains. Intensity minima in these cases are phase vortexes that carry OAM, as expected. If q-plates with opposite q values are represented, the result is a beam that carries no OAM and whose polarization is uniform, with opposite handedness to the input polarization. The intensity distribution shows 4q azimuthal interference fringes, which is why these are usually referred to as petal beams [26]. Azimuth angle measurement for this case shows a noisy alternation between the values −π/2 and +π/2; this error is irrelevant, since both values indistinctly represent vertically polarized light. As previously addressed, regions of the images with very low intensity show less agreement with the theoretical results.
The scheme that we propose not only theoretically allows the use of generalized and multiplexed q-plates with arbitrary functions (of which only a few examples are given here as a demonstration), but also gives speed and flexibility to the experimental implementation, thanks to the use of a phase-only LCoS display with high spatial resolution and video-rate operation. This opens a wide range of possibilities for the creation of alternative kinds of vector and vortex beams.
Conclusions
We proposed a generalization of the concept of q-plate, including in its definition nonlinear functions of both the radial and azimuthal variables, which allows creating new vector or vortex beams, depending on the input state of polarization. We simulated the effect of this kind of element on uniformly vertically polarized and left circularly polarized beams and implemented them experimentally. Good agreement between simulated and experimental results was achieved.
In the near field regime, the function Φ determines the polarization azimuth for incident linear polarization and the phase profile for incident circular polarization. In the far field regime, it is found that when losing linearity in the azimuthal variable, the conventional central singularity of vector/vortex beams splits into several singularities of minimum topological charge. In those cases where the input light is linearly polarized, the output beam can exhibit either c-points with topological charge ±1/2, as well as other types of critical points of the ellipticity, or dark polarization singularities. The distribution of left and right circular polarization regions is symmetrical. Circularly polarized input beams result in the appearance of phase vortices, which carry OAM with topological charge ±1. The intensity profiles and singularity distributions depend on the particular chosen function Φ(r, θ), which makes it possible to model distributions of any known optical singularity. This provides a tool for achieving beams with hybrid SoPs and novel intensity distributions.
We applied this generalization to the creation of multiplexed q-plates, defined by discontinuous functions that consist of different q-plates encoded on complementary sets of randomly picked pixels of an SLM. We obtained, in the far field regime, the superposition of the vortex/vector beams generated by the individual q-plates involved, a result that cannot be accomplished by conventional q-plate devices. This has potential applications in many fields, including quantum and classical communications, interferometry, optical trapping and micromanipulation, and singular optics in general.
Acknowledgments
This work was supported by UBACyT Grant No. 20020170100564BA. M.V. holds a CONICET Fellowship.
Appendix: Mueller-Stokes polarimetry and the effect of the beam-splitters
The state of polarization (SoP) of a light beam (represented by its Stokes parameters) and the polarimetric behavior of a material medium (represented by its Mueller matrix), can be experimentally determined by means of a Stokes or Mueller polarimeter, respectively. A Mueller polarimeter is composed of two modules. The first one, the polarization state generator (PSG), is used to generate light beams with different and carefully selected SoPs. These beams, after interacting with the sample, are characterized by a polarization state detector (PSD). By taking an appropriate number of measurements in specific PSG and PSD configurations, a linear system of equations can be constructed, whose solution is the Mueller matrix of the sample. A Stokes polarimeter is just the PSD of a Mueller polarimeter. Given an incident light beam, its intensity must be registered for a sufficient number of properly selected PSD states. Then, the Stokes parameters can be reconstructed by solving another linear system of equations.
In this work we used a Mueller polarimeter and also its PSD as a Stokes polarimeter. Each module was composed of a fixed linear polarizer and a fixed-retardance rotating retarder. For both instruments a measurement algorithm known as the Synchronous detection scheme was implemented. In this method each module adopts at least five measurement configurations, which correspond to equally-spaced angular positions of the fast axis of the rotating retarder. The Mueller polarimeter was calibrated by measuring the Mueller matrix of air. Since the selected Stokes polarimeter was its PSD, it became calibrated as well. For further details see reference [27].
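To make the synchronous-detection idea concrete, here is a minimal sketch, not the authors' code, of how Stokes parameters can be recovered from intensity readings taken at equally spaced angular positions of the rotating retarder; the quarter-wave retardance, the five chosen angles, and the ideal-element Mueller matrices are illustrative assumptions.

import numpy as np

def retarder(theta, delta):
    """Mueller matrix of a linear retarder with fast axis at angle theta and retardance delta."""
    C, S = np.cos(2 * theta), np.sin(2 * theta)
    cd, sd = np.cos(delta), np.sin(delta)
    return np.array([
        [1, 0, 0, 0],
        [0, C**2 + S**2 * cd, C * S * (1 - cd), -S * sd],
        [0, C * S * (1 - cd), S**2 + C**2 * cd,  C * sd],
        [0, S * sd,          -C * sd,            cd],
    ])

POLARIZER_H = 0.5 * np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])

def stokes_from_intensities(angles, intensities, delta=np.pi / 2):
    """Least-squares reconstruction of the Stokes vector from PSD intensity readings."""
    # Each row maps an input Stokes vector to the intensity seen behind the fixed polarizer.
    A = np.array([(POLARIZER_H @ retarder(t, delta))[0] for t in angles])
    S, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return S

# Example: simulate readings for right-circular light and recover its Stokes vector.
angles = np.deg2rad(np.arange(5) * 36.0)          # five equally spaced retarder positions
S_true = np.array([1.0, 0.0, 0.0, 1.0])
I_meas = np.array([(POLARIZER_H @ retarder(t, np.pi / 2) @ S_true)[0] for t in angles])
print(np.round(stokes_from_intensities(angles, I_meas), 6))

Running the snippet recovers the input Stokes vector (1, 0, 0, 1) up to numerical precision; the same least-squares structure underlies the Mueller-matrix measurement, with the PSG providing the probe states.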
Since beam splitters are crucial components of the device described in Section 2, we measured their Mueller matrices in order to characterize any polarimetric properties that could affect the set-up. The Mueller polarimeter already described was used. In the ideal case, the Mueller matrix for transmission is the identity, while the matrix for reflection is that of an ideal mirror, diag(1, 1, −1, −1). To account for the measured deviations in our model we calculated, from the measured Mueller matrices, the respective Jones matrices by means of the method described in reference [21]. This can be interpreted as follows. During transmission, the horizontal and vertical polarization components are unequally attenuated, while during reflection an additional spurious phase is added to the vertical component. Even though the amplitude modulation is unbalanced, the imbalance produced by a transmission is compensated by that produced by a subsequent reflection, and vice versa, so we can describe this as a global real factor A that represents a neutral attenuator. On the other hand, assuming that each beam splitter introduces a different spurious phase shift, say α and β, the Jones matrix that describes the net effect of the pair reduces accordingly. Therefore, the total effect of the beam splitters can be decomposed into a complex factor Ae^{iα}, which does not affect the polarization performance of the device, and a phase shift that can be corrected by adding a uniform term α − β to the function addressed to the first half of the LCoS. This had to be taken into account when generating the vector and vortex beams with the proposed device.
The competitive effect of Electro-chlorination over chlorination for controlling disinfection by-product formation in phenol and aniline enriched groundwater
Disinfection is an essential step to protect humans from microorganisms present in drinking water. However, the formation of disinfection by-products (DBPs) is associated with adverse health effects, and the presence of organic pollutants in groundwater results in even more detrimental effects. Therefore, a better treatment technique is required to disinfect and remove organic pollutants simultaneously in order to control the formation of DBPs. Electro-chlorination (EC) was carried out using graphite electrodes at a current density of 0.54–1.09 mA/cm², with sodium chloride for in-situ hypochlorite generation, to treat groundwater contaminated with phenol and aniline. The comparative study between chlorination and EC showed a significant level of oxidation of phenol and aniline, with reductions of up to 98.48% and 99.47%, respectively, in the EC process. Owing to the higher mineralization rate of aniline, both the chlorination and the EC method are found to be effective for aniline. However, only the EC method is found to be appropriate and effective for treating phenol-contaminated water, as the chlorination method resulted in the formation of complicated phenolic by-products. Gas chromatography/mass spectrometry (GC-MS) was used to assess the by-product formation of chlorination and EC in contaminated groundwater through full-scan analysis.
Introduction
The existence of anthropogenic organic contaminants in groundwater has been the subject of profound study worldwide in recent years. Anthropogenic organic pollutants have been identified as contaminants in surface water, sewage, groundwater, and potable water (Postigo and Barceló 2015; Lapworth et al., 2015). Pesticides and pharmaceuticals with their metabolites, steroids, industrial additives, hormones, water treatment by-products, personal care products, fire retardants, and food additives are the most prominent of these pollutants (Stuart et al., 2012). Several of them may have a detrimental effect on human health and the environment, emphasizing the need to consider their environmental role more carefully. In addition, coke-based factories are deemed responsible for generating large amounts of wastewater containing extremely hazardous, mutagenic as well as carcinogenic contaminants, including […] including THMs, HAAs, haloacetonitriles (HANs), and emerging iodinated THMs (i-THMs) in the treated water, which were found to be cytotoxic and genotoxic in nature.
To overcome these problems, a treatment technique is needed that will treat the organic loading and disinfect the water at the same time. The human health risk was estimated for exposure to aniline- and phenol-contaminated water, where Ys(t) is the phenol or aniline concentration in the shower room at time t (min).
According to USEPA, the threshold limit for HQ is taken as 1 (Mohanta et al., 2020).
A comparison study was done to identify the most suitable and effective method for chlorinating aniline- and phenol-contaminated groundwater. The chlorine demand of the raw water sample […]. In this study, EC was carried out to avoid the formation of chlorinated by-products […]
Results and discussions
3.1. Effect of chlorination and electro-chlorination on aniline and phenol concentrations
In the raw groundwater sample, the aniline and phenol concentrations were found to be 0.34 and 0.271 mg/L, respectively, which is 58.4 and 292 times higher than the desired limit in […] 2.50E-06, respectively, therefore contributing negligible weight to the total hazard quotient.
In the case of chlorination, there was an insignificant reduction in HQ, while a considerable reduction was observed for the electro-chlorinated water sample. Many phenolic compounds with complicated structures were present in the raw water. Simple chlorination resulted in the formation of even more complex phenolic by-products. However, in the EC process these compounds were dissociated into simpler forms, resulting in a drastic reduction of the phenol content. The THQ due to aniline-contaminated groundwater was found to be 3.08, 2.73 and 3.33 for men, women and children, respectively. The high HQ value even at a low aniline concentration is due to its low reference dose (RfD) of 0.0068 mg/kg-day, representing a higher probabilistic non-cancer risk for the exposed population. However, a remarkable reduction in HQ was observed for both the chlorination and the EC process.
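To illustrate how a hazard quotient of this kind is typically obtained, the following is a minimal sketch assuming the standard USEPA chronic-daily-intake formulation for the ingestion route; the intake rate, body weight, exposure duration and frequency are illustrative placeholders (only the aniline RfD of 0.0068 mg/kg-day is taken from the text), and the exposure model actually used in the study, which also involves a shower-room term, is not reproduced here.

def hazard_quotient(conc_mg_per_L, rfd_mg_per_kg_day,
                    intake_L_per_day=2.0, exposure_freq_days=365, exposure_years=30,
                    body_weight_kg=70.0, averaging_years=30):
    """HQ = chronic daily intake / reference dose (USEPA-style, ingestion route)."""
    averaging_time_days = averaging_years * 365
    cdi = (conc_mg_per_L * intake_L_per_day * exposure_freq_days * exposure_years) / (
        body_weight_kg * averaging_time_days)            # chronic daily intake, mg/kg-day
    return cdi / rfd_mg_per_kg_day

# Aniline in the raw groundwater (0.34 mg/L) against its RfD of 0.0068 mg/kg-day.
print(round(hazard_quotient(0.34, 0.0068), 2))           # values above 1 indicate potential non-cancer risk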
The organic compounds in the raw and treated groundwater were identified using the mass spectral library database of the National Institute of Standards and Technology (NIST) (Fig 3). The results showed that the organic pollutants found in the groundwater were evidently degraded, and only Decanedioic acid was found in abundance in the EC-treated water, which is neither toxic nor hazardous in nature. Phenol was found in all the samples; however, its relative abundance was negligible in the case of EC treatment. Many new by-products, such as 2-Hydroxybiphenyl, 2,3,4,6-tetrachlorophenyl ester, 6-Fluoro-2-trifluoromethylbenzoic acid, 2-Phenylamino-5,6(4H)dihydro-1,3-thiazine, 5-chloro-4,6-diphenyl-, 2-Hydroxybiphenyl, 2-Oxo-4-phenyl-6-(4-chlorophenyl)-1,2-dihydropyrimidine, etc., were formed in the chlorination process (Table 2). In the EC process, the phenol present in the water sample was converted into carboxylic acid, carbon dioxide and water, as shown in Fig 2, and was found at low relative abundance in the form of 1,4-benzenedicarboxylic acid. Some of these compounds are corrosive and toxic and may cause environmental as well as health hazards.
Aniline was present in the form of 2-Chloroaniline-5-sulfonic acid in the raw water; however, it was not identified in the chromatograms of either of the treated groundwater samples, owing to its higher mineralization rate into end products and intermediate products (Singh et al., 2021).
Figure: GC-MS analysis chromatograms. (a) Raw groundwater; (b) ex-situ chlorinated water; (c) in-situ electro-chlorinated water.
Renormalization group and relations between scattering amplitudes in a theory with different mass scales
In the Yukawa model with two different mass scales, the renormalization group equation is used to obtain relations between scattering amplitudes at low energies. Considering fermion-fermion scattering as an example, a basic one-loop renormalization group relation is derived which makes it possible to reduce the problem to the scattering of light particles on the "external field" substituting a heavy virtual state. Applications of the results to the problem of searching for new physics beyond the Standard Model are discussed.
Introduction
An important problem of present-day high energy physics is the search for deviations from the Standard Model (SM) of elementary particles which may appear due to heavy virtual states entering extended models and having masses much greater than the W-boson mass m_W [1]. One approach to the description of such phenomena is the construction of effective Lagrangians (EL) appearing owing to the decoupling of heavy particles. In principle, it is possible to write down many different EL describing effects of new physics beyond the SM. In Ref. [2] the EL generated at tree level in a general renormalizable gauge theory have been derived. These objects by construction contain a great number of arbitrary parameters responsible for specific processes. But it is well known that a renormalizable theory includes a small number of independent constants due to relations between them. The renormalizability of the theory results in renormalization group (RG) equations for scattering amplitudes [3]. In Ref. [4] it has been proven that the RG equation can be used to obtain a set of relations between the parameters of the EL. Two main observations were used. First, it has been shown that a heavy virtual state may be considered as an external field scattering the SM light particles. Second, the renormalization of the vertices describing scattering on the external field can be determined by the β- and γ-functions calculated with light particles only. Hence, the relations mentioned above follow. As an example, the SM with a heavy Higgs scalar has been investigated. In the decoupling region the RG equations for scattering amplitudes have been reduced to the ones for vertices describing the scattering of light particles on the external field substituting the corresponding heavy virtual field. In Ref. [4] only the scalar field of the theory was taken as the heavy particle, and no mixing between the heavy and the light fields at the one-loop level was considered. Here, we investigate the Yukawa model with a heavy scalar field χ and a light scalar field ϕ. The purposes of our investigation are twofold: to derive the one-loop RG relation for the four-fermion scattering amplitude in the decoupling region, and to find out whether this relation can be reduced to the equation for the vertex describing the scattering of light particles on the external field when mixing between heavy and light virtual states takes place. In Ref. [4] specific algebraic identities originating from the RG equation for the scattering amplitude were derived. When the explicit couplings in the EL are unknown and represented by arbitrary parameters, one may treat these identities as equations depending on the parameters and the appropriate β- and γ-functions. If, due to a symmetry, the number of β- and γ-functions is less than the number of RG relations, one can obtain a nontrivial system of equations for the parameters mentioned. This was shown for the gauge couplings [4]. In the present paper we derive RG relations for the EL parameters in a model including one-loop mixing of heavy and light fields.
Renormalization group relation for amplitude
The Lagrangian of the model describes a Dirac spinor field ψ Yukawa-coupled to the light scalar ϕ and the heavy scalar χ. The S-matrix element for four-fermion scattering at the one-loop level is written as a sum of contributions, where s = (p_1 + p_2)², S_{1PR} is the contribution from the one-particle-reducible diagrams shown in Figs. 1–2, and S_{box} is the contribution from the box diagram. The one-loop polarization operator of the scalar fields, Π_{φ1φ2}, and the one-loop vertex function Γ are defined as usual through the Green functions, where S_ψ is the spinor propagator in the momentum representation. The renormalized fields, masses and charges are defined in Eq. (4). Using dimensional regularization (the dimension of the momentum space is D = 4 − ε) and the MS renormalization scheme [5], one can compute the renormalization constants, Eq. (5). From Eq. (5) we obtain the appropriate β- and γ-functions [5] at the one-loop level, Eq. (6). Then the S-matrix element can be expressed in terms of the renormalized quantities (4). After renormalization, the contribution from the one-particle-reducible diagrams is expressed through the functions Π^{fin}_{φ1φ2} and Γ^{fin}, which are Π_{φ1φ2} and Γ without the terms proportional to 1/ε. Since the quantity S_{box} is finite, the renormalization leaves it unchanged. Introducing the RG operator (8) at the one-loop level [6], we find that the relation (9) holds for the S-matrix element, where S^{(0)}_{1PR} and S^{(1)}_{1PR} denote the tree-level and one-loop contributions to S_{1PR}, respectively. The first term in Eq. (11) originates from the one-loop correction to the fermion–scalar vertex. The remaining terms are connected with the polarization operator of the scalars.
The third term describes the one-loop mixing between the scalar fields. It is canceled in the RG relation (9) by the mass-dependent terms in the β-functions produced by the non-diagonal elements of Z_φ. Eq. (9) is a consequence of the renormalizability of the model. It ensures that the leading logarithm terms of the one-loop S-matrix element reproduce the appropriate tree-level structure. In contrast to the familiar treatment, we are not going to improve scattering amplitudes by solving Eq. (9). We will use it as an algebraic identity implemented in the renormalizable theory. Naturally, if one knows the explicit couplings expressed in terms of the basic set of parameters of the model, this RG relation is trivially fulfilled. But the situation changes when the couplings are represented by unknown arbitrary parameters, as occurs in the EL approach [1], [2]. In this case the RG relations are algebraic equations depending on these parameters and the appropriate β- and γ-functions. In the presence of a symmetry the number of β- and γ-functions is less than the number of RG relations. So, one has a nontrivial system of equations relating the parameters of the EL. Such a scenario is realized for the gauge coupling, as was demonstrated in [4]. Although the simple model considered here has no gauge couplings and no relation between the EL parameters occurs, we are able to demonstrate the general procedure of deriving the RG relations for EL parameters in a theory with one-loop mixing. This is essential for dealing with the EL describing deviations from the SM. At energies s ≪ Λ² the heavy scalar field χ is decoupled. So, the four-fermion scattering amplitude consists of the contribution of the model with no heavy field χ plus terms of order s/Λ². The expansion of the heavy scalar propagator in Eq. (10) results in an effective contact four-fermion interaction, and the tree-level contribution to the amplitude takes the corresponding contact form. In the decoupling region the lowest-order effects of the heavy scalar are described by the parameter α only. The method of constructing the RG equation in terms of the low-energy quantities G_ϕ, λ, m, M, α was proposed in [6]. As demonstrated in [6], a redefinition of the parameters of the model allows one to remove all the heavy-particle loop contributions from Eq. (11). Let us define a new set of fields, charges and masses ψ̃, G̃_ϕ, G̃_χ, Λ̃, m̃, M̃. One is able to rewrite the differential operator (8) in terms of these new low-energy parameters, where the β̃- and γ̃-functions are obtained from the one-loop relations (6) and (15). Hence, one immediately notices that the β̃- and γ̃-functions contain only the light-particle loop contributions, and all the heavy-particle loop terms are completely removed from them. The S-matrix element expressed in terms of the new parameters satisfies the RG relation (20), where α̃ = G̃²_χ/Λ̃² is the redefined effective four-fermion coupling. As one can see, Eq. (20) includes all the terms of Eq. (11) except for the heavy-particle loop contributions. It depends only on the low-energy quantities ψ̃, G̃_ϕ, α̃, λ, m̃, M̃. The first and the second terms in Eq. (20) are just the one-loop amplitude calculated within the model with no heavy particles. The third and the fourth terms describe the light-particle loop correction to the effective four-fermion coupling and the mixing of heavy and light virtual fields.
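For orientation, the following is a generic sketch, in our own notation and normalization rather than the paper's equations (8)–(9), of an RG operator of the type introduced in [6] and of the order-by-order relation it imposes on a renormalized amplitude S:

\[
  \mathcal{D} \;=\; \mu^{2}\frac{\partial}{\partial\mu^{2}}
  + \sum_{i}\beta_{g_{i}}\frac{\partial}{\partial g_{i}}
  + \sum_{j}\gamma_{m_{j}}\, m_{j}^{2}\frac{\partial}{\partial m_{j}^{2}}
  - \sum_{a} n_{a}\,\gamma_{a},
  \qquad \mathcal{D}\,S = 0,
\]

so that at one loop \( \mu^{2}\partial_{\mu^{2}} S^{(1)} + \mathcal{D}^{(1)} S^{(0)} = 0 \), where \( \mathcal{D}^{(1)} \) collects the one-loop β- and γ-terms, g_i are the couplings, m_j the masses, and n_a, γ_a the numbers and anomalous dimensions of the external fields.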
Elimination of one-loop scalar field mixing
Due to the mixing term it is impossible to split the RG relation (18) for the S-matrix element into relations for the individual vertices. Hence, we are not able to consider Eq. (18) within the framework of the scattering of light particles on an external field induced by the heavy virtual scalar, as was done in [4]. But this is an important step in deriving the RG relation for the EL parameters. Fortunately, there is a simple procedure allowing one to avoid the mixing in Eq. (20). The way is to diagonalize the leading logarithm terms of the scalar polarization operator in the redefinition of φ, χ, G_ϕ, G_χ. The appropriate β̃-functions then contain no terms connected with mixing between light and heavy scalars. So, the fourth term in Eq. (20) is removed, and the RG relation for the S-matrix element becomes Eq. (23). At α̃ = 0, Eq. (23) is just the RG identity for the scattering amplitude calculated in the absence of the heavy particles. The terms of order α̃ describe the RG relation for the effective low-energy four-fermion interaction in the decoupling region. The latter can be reduced to the RG relation for the vertex describing the scattering of the light particle (fermion) on the external field √α̃ substituting the virtual heavy scalar, Eqs. (24)–(27), where D̃^(1) denotes the one-loop part of the reduced RG operator. Eqs. (23)–(27) are the main result of our investigation. One can derive them with only the knowledge of the EL (13) and the Lagrangian of the model with no heavy particles; one also has to ignore all the heavy-particle loop contributions to the RG relation and the one-loop mixing between the heavy and the light fields. Eqs. (23)–(27) depend on the effective low-energy parameters only. But since the difference between the original set of parameters and the low-energy one is of one-loop order, one may freely substitute one for the other in Eqs. (23)–(26).
Discussion
Let us discuss the results obtained. The RG relation for the four-fermion scattering amplitude has been derived in the decoupling region s ≪ Λ². It was shown that one can redefine the parameters and the fields of the model in order to remove all the heavy-particle loop contributions from the RG relation. Then the RG relation depends on the low-energy physics parameters only. As the RG operator coefficients and the difference between the original parameters and the redefined ones are of one-loop order, one can substitute one set of parameters for another at the lowest order. Thus, we extend the result of Ref. [4] to the case when mixing terms are present. The additional transformation of fields and charges allows one to diagonalize the leading logarithm terms of the scalar polarization operator and to avoid the contributions to the RG relation originating from the one-loop mixing between heavy and light fields. Since the difference between the diagonalized fields and charges and the original ones is of one-loop order, one may simply omit one-loop mixing terms in the RG relation at the lowest order. Then it is possible to reduce the RG relation for the S-matrix element to the one for the vertex describing the scattering of light particles on the external field induced by the heavy virtual particle. In fact, this result is independent of the specific features of the considered model, as was shown in [4].
The RG relations of the type considered may be used in searching for dependences between the parameters of EL describing physics beyond the SM. For example, suppose a symmetry requires the same charge structure for several effective Lagrangians. Then the number of unknown β̃- and γ̃-functions is less than the number of RG relations, and it is possible to derive non-trivial solutions for the parameters. The present results allow one to omit the one-loop mixing diagrams in the construction of the RG relations for the tree-level EL.
Comparison and classification of six reference currents extraction algorithms for harmonic compensation on a stochastic power network: Case of the TLC hybrid filter
Abstract This article presents a comparison and classification of six algorithms for extracting reference currents used for harmonic compensation in the case of a TLC hybrid filter on a stochastic electrical network. With the ultimate goal of determining the most robust and reliable algorithm, the electrical network presents four variable configurations: operation with sinusoidal and balanced voltage, sinusoidal and unbalanced voltage, balanced and disturbed voltage, or unbalanced and disturbed voltage. First, the six algorithms or methods, identified in the time domain, are presented, with fifth-order harmonics taken in positive sequence injected during the execution of the six algorithms for the four network operating hypotheses. Second, a cross-comparison of performance is made on the basis of indicators such as the THD (total harmonic distortion) and the UFI (unbalance factor) required by the IEEE 519-2014 and EN 50160 standards, followed by a cross-validation against results available in the literature. These results facilitate the choice of the most appropriate algorithm depending on the nature of the electrical network in each case study.
Introduction
The proliferation of non-linear loads is growing exponentially with technological evolution. Power electronic loads have received special attention from many researchers and engineers as a source of harmonic contamination of power systems, causing degradation of the power factor, overheating and even complete destruction of equipment, errors in instrument measurements, and breakdown of capacitors (Akagi et al., 1999). In order to limit harmonic currents, the standard IEEE 519-2014 was adopted: the overall harmonic distortion must be at most 5% of the initial value before filtering (Kim & Akagi, 1999). Therefore, the 5% THD limit for current has always been the performance target that researchers and designers strive to achieve. In order to deal directly with harmonic problems and to respect the 5% limit, conventional passive harmonic filters are applied. However, due to their major weaknesses of bulky size and inflexible mitigation capability, innovative harmonic mitigation tools and efficient hybrid filters, combining passive and active topologies, are being developed to replace them. In addition, the development of hybrid filters is being spurred by the emergence of power semiconductor switching devices such as insulated-gate bipolar transistors (IGBTs) and thyristors, and by the availability of powerful controllers such as digital signal processors (DSPs) (Shu et al., 2022). On the other hand, the success of the compensation or filtering process depends on the reliability and robustness of the algorithm for extracting the reference currents, knowing that current control laws cannot act correctly when the network voltage is disturbed and unbalanced. Based on this observation, the aim of this work is to provide a solution to this problem through the comparative evaluation of techniques for extracting the reference currents. This paper reviews six control strategies (original pq, modified pq, pq pseudo-mapping matrix, pqr, NFpq and DCAP methods) under operating conditions with sinusoidal and balanced voltage, sinusoidal and unbalanced voltage, balanced and disturbed voltage, or unbalanced and disturbed voltage, with the ultimate goal of determining the most suitable method in the event of disturbances.
The P-Q method ON A 4-wire network
The original p-q theory is defined by a transformation of the electrical quantities from the a, b, c coordinates into the α, β, 0 coordinates. A homogeneity exists between the powers expressed in the two coordinate systems. This method is based on the notions of instantaneous active power p(t) and reactive power q(t), which is the originality of the method (Marini et al., 2019), as shown in figure 1.
With v_h(t) = 3v_0(t) and i_n(t) = 3i_0(t) the neutral unbalance voltage and current in the a, b, c reference, v_0(t) and i_0(t) are the zero-sequence (homopolar) components in the a, b, c coordinates. The zero-sequence voltage and current in the α, β, 0 coordinates are obtained from the corresponding transformation, and the transformation of the currents (and likewise of the voltages) of a three-phase system into the orthogonal α, β, 0 coordinate system is carried out with C_33, the Concordia transformation matrix. In this theory, for a four-wire application, we consider that the zero-sequence circuit is independent of the circuit in the α, β coordinates; we therefore define two active powers p_0 and p_αβ in addition to the reactive power q_αβ. The total instantaneous active power is defined as the sum of the two active powers, and the instantaneous reactive power is the same as for the 3-wire p-q system ("IEEE Recommended Practice and Requirements," 2014; Mistry & Patel, 2017; Shu et al., 2022). The powers and the currents in the α, β, 0 reference are then expressed accordingly, and the reference currents follow as a function of the powers to be compensated. Once the reference currents in the α, β, 0 coordinates are calculated, the inverse Clarke/Concordia matrix C⁻¹_33 gives these currents in the a, b, c coordinates (Hanna Nohra et al.).
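To make the p-q extraction chain concrete, here is a minimal numerical sketch, not taken from the paper, of the classical three-wire p-q computation: Clarke (Concordia) transform, instantaneous p and q, removal of the DC (fundamental) part with a simple moving-average filter standing in for the low-pass filter, and inverse transform to obtain the a, b, c reference currents. The sampling rate, the filter length, and the choice to compensate both the oscillating power and all of q are illustrative assumptions.

import numpy as np

# Power-invariant Clarke (Concordia) transform, alpha-beta rows only (3-wire case).
C = np.sqrt(2 / 3) * np.array([[1.0, -0.5, -0.5],
                               [0.0,  np.sqrt(3) / 2, -np.sqrt(3) / 2]])

def pq_reference_currents(v_abc, i_abc, fs, f_fund=50.0):
    """Return a, b, c reference (compensation) currents from sampled voltages/currents."""
    v_ab = C @ v_abc                                     # shape (2, N): v_alpha, v_beta
    i_ab = C @ i_abc
    p = v_ab[0] * i_ab[0] + v_ab[1] * i_ab[1]            # instantaneous real power
    q = v_ab[1] * i_ab[0] - v_ab[0] * i_ab[1]            # instantaneous imaginary power

    # Moving average over one fundamental period extracts the DC part of p;
    # the oscillating part p~ (plus all of q) is what the filter must supply.
    n = int(fs / f_fund)
    p_dc = np.convolve(p, np.ones(n) / n, mode="same")
    p_osc = p - p_dc

    denom = v_ab[0] ** 2 + v_ab[1] ** 2
    i_ref_alpha = (v_ab[0] * p_osc + v_ab[1] * q) / denom
    i_ref_beta = (v_ab[1] * p_osc - v_ab[0] * q) / denom
    return C.T @ np.vstack([i_ref_alpha, i_ref_beta])    # back to a, b, c

# Tiny demo: balanced voltages, load current polluted by a fifth harmonic.
fs = 10_000
t = np.arange(0, 0.1, 1 / fs)
ang = 2 * np.pi * 50 * t
v_abc = np.vstack([np.cos(ang), np.cos(ang - 2 * np.pi / 3), np.cos(ang + 2 * np.pi / 3)])
i_abc = v_abc + 0.2 * np.cos(5 * ang)
print(pq_reference_currents(v_abc, i_abc, fs).shape)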
Modified P-Q method on the 4-wire network
This technique applies the Concordia transformation to the simple (phase) source voltages and line currents in order to obtain the real, imaginary, and zero-sequence instantaneous powers (Dubey et al., 2021; Sun et al., 2020; Yu & Tao, 2016). The direct component must then be eliminated, the fundamental component being transformed into a direct component and the harmonic components into oscillatory terms (Marini et al., 2019; Naderipour et al., 2018; Shu et al., 2022). The principle is as follows: consider the phase voltages v_a(t), v_b(t), v_c(t) and line currents i_a(t), i_b(t), i_c(t) of a three-phase system with a zero-sequence component. Each block of the current-extraction diagram contains an equation, as shown in figure 2. The Concordia transformation brings the three-phase system from the a, b, c axes to the α, β, 0 coordinates. The resulting powers can be expressed as the sum of a direct component and a ripple component, and the load currents are written in the α, β, 0 axes. Using 2.30, we obtain the expressions of the real and imaginary powers, and then the expression of the currents in the α, β, 0 plane as a function of the instantaneous powers. Like equation II.31, equation II.32 then takes the corresponding form. Depending on the function given to the filter, it is possible to compensate the harmonic current and the reactive power, or only one of them (Abolfathi et al., 2017; Bhople & Rayarao, 2018; Chang et al., 2006; Hoon et al., 2017; Mistry & Patel, 2017).
Since we want to compensate for both at the same time, equation II.23 takes the corresponding form. Finally, the reference currents are obtained using the inverse Concordia transformation.
P-Q pseudo-mapping matrix method
The modified p-q method could not compensate the neutral current because the expression of the zero-sequence reference component is not equal to the zero-sequence current of the circuit. The authors therefore propose to modify the matrix so that the zero-sequence currents of the reference and of the power circuit are equal, which leads to the pseudo-mapping matrix method. The powers are defined in the same way and the concept remains the same (Chicco et al., 2009; Hernández et al., 2011; Li et al., 2020; Santos et al., 2020; Suresh et al., 2011), as shown in figure 3.
The diagram in figure II.3 shows the extraction of the reference currents; the only change is that equation (II.25) is used instead of (II.26).
The P-Q-R method
This method, introduced by Kim, performs a double transformation: first a transformation of the phase-to-neutral voltages and line currents from the a, b, c coordinates to the α, β, 0 coordinates, then a second transformation from the α, β, 0 coordinates to the p, q, r coordinates, as shown in figure 4. Its principle is stated as follows (Al-Mawali et al., 2011; Andari & Beheshti, 2011; Barutçu et al., 2019; De Araujo et al., 2021; Edri et al., 2021; Oliveira et al., 2018). The figure below illustrates the identification of the reference currents during harmonic current and reactive power compensation, and the equations to be introduced in each block are collected as data. Using the Concordia transformation, we obtain the two relations in the α, β, 0 frame; the following transformation, called p-q-r, is then applied. The instantaneous active and reactive powers are given by the corresponding equation, which yields the currents in the p-q-r axes. Depending on the function assigned to the filter, it is possible to compensate the harmonic current and the reactive energy, or only one of them. If we want to compensate the harmonic current and the reactive power, Equation 1 takes the corresponding form, which gives the currents in the α, β, 0 axes.
The NFP-Q method
This method is based on the p-q theory, with the difference that the zero-sequence component of the current is separated from the zero-sequence power (Andari & Beheshti, 2012; Chang & Low, 2008; Hu et al., 2015; Infield et al., 2004; Jannesar et al., 2019). The zero-sequence component is distributed over the three line currents and the compensation is independent of the zero-sequence voltage. This method is intended for a network with balanced voltages, as shown in figure 5. The system of currents is then expressed as a function of the quantities in the α, β, 0 reference as in (II.35), and we can then write the currents in the a, b, c reference accordingly. With this equation, the zero-sequence component is separated from the current system. The currents are treated as in a 3-wire network and the zero-sequence component is then added back (Hanna Nohra et al., 2019).
The reference current then follows, and the current in the a, b, c reference is obtained from it.
The DCAP method
The patented DCAP method (Hanna Nohra et al., 2014) is based on drawing sinusoidal and balanced currents on the source side, rather than on distributing equal powers over the phases. It is developed for a polyphase system. One seeks the desired active sinusoidal source currents i_sa(t), which provide a unity power factor with the corresponding voltage even in the event of a disturbed voltage system. Such a sinusoidal current, which ensures a unity power factor with a disturbed voltage, necessarily has a linear relationship with the fundamental component v_f(t) of the disturbed voltage v(t), as in (II.41) and as shown in figure 6.
where G_f is the conductance relating the fundamental current and voltage. It should be noted that a sinusoidal current is equivalent to its fundamental component.
The instantaneous values of the desired sinusoidal currents at the source, in phase with their corresponding fundamental voltages, are given accordingly. Finally, by substituting the expression of the voltage in (II.42), the desired active currents at the source are obtained. The fundamental conductance of the polyphase system is then defined, and the system of currents in the phases is obtained as a function of this conductance. By replacing the conductances with their values, one finally obtains the desired active sinusoidal currents in the source, in phase with their fundamental voltages, as a function of the total power drawn at the source (Hanna Nohra et al., 2019).
The reference currents can then simply be deduced from the load currents and the currents in (II.80), with i = a, b, c the phase index of the three-phase system.
In calculating the power of the load, we do not extract the fundamental components of the voltages; the instantaneous voltages actually present in the circuit are taken into account. The power of the load is given accordingly (Akagi et al., 1999; Hanna Nohra et al., 2014; Harrison; Kalair et al., 2017; Kim & Akagi, 1999; Padilla-Vento, 2021), where P_f is the power absorbed by the filter and required to regulate the DC bus voltage; it is expressed in the corresponding equation. The basic concept of compensation in the known methods, or classical algorithms, for extracting reference currents consists of distributing the power equally over the three phases in order to balance the currents, which is not valid when the voltages are unbalanced. From these considerations, the DCAP (Direct Control for Active Power) method, which is based on another approach to compensation, makes it possible to formulate the active current (and then the reference currents) in a way other than those traditionally used, resulting in a balanced current system even when the network conditions are highly unfavourable. Thus, the compensation is performed in the presence of the negative-sequence and zero-sequence components of the network. The first author to have developed this new control approach is (Hanna et al., 2018).
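As an illustration of the conductance-based idea behind DCAP, the following is a minimal sketch, not the patented implementation, in which the desired source current of each phase is taken proportional to the fundamental component of its own voltage through a single conductance G_f computed from the total load power; the way the fundamental voltage is extracted here (an ideal FFT band-pass) and the neglect of the filter regulation power P_f are simplifying assumptions.

import numpy as np

def fundamental(v, fs, f_fund=50.0):
    """Crude extraction of the fundamental component of a sampled waveform via FFT."""
    spectrum = np.fft.rfft(v)
    freqs = np.fft.rfftfreq(v.size, 1 / fs)
    mask = np.isclose(freqs, f_fund, atol=freqs[1] / 2)   # keep only the 50 Hz bin
    return np.fft.irfft(spectrum * mask, n=v.size)

def dcap_reference_currents(v_abc, i_abc, fs):
    """Sketch of DCAP-style extraction: balanced sinusoidal source currents in phase
    with the fundamental voltages; reference = load current minus desired current."""
    v_f = np.array([fundamental(v, fs) for v in v_abc])   # fundamental voltages
    p_load = np.mean(np.sum(v_abc * i_abc, axis=0))       # average load power (P_f neglected)
    g_f = p_load / np.sum(np.mean(v_f ** 2, axis=1))      # one conductance for all phases
    i_desired = g_f * v_f                                  # sinusoidal, balanced source currents
    return i_abc - i_desired                               # currents the filter must inject

fs = 10_000
t = np.arange(0, 0.2, 1 / fs)
ang = 2 * np.pi * 50 * t
v_abc = np.vstack([np.cos(ang), 0.9 * np.cos(ang - 2 * np.pi / 3), np.cos(ang + 2 * np.pi / 3)])
i_abc = 2 * v_abc + 0.3 * np.cos(5 * ang)                  # unbalanced, distorted load current
print(dcap_reference_currents(v_abc, i_abc, fs).shape)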
System modeling
At this level, it is important to recall the spirit of the research in this work: contributing to the classification of the six algorithms for extracting reference currents according to certain indicators. To do so, the electrical system in this article is studied in four cases of network voltage, as shown in table 1:
Hypothesis 1: sinusoidal and balanced voltage.
Hypothesis 2: sinusoidal and unbalanced voltage.
Hypothesis 3: balanced and disturbed voltage.
Hypothesis 4: unbalanced and disturbed voltage.
The synoptic diagram of the TLC hybrid filter, shown in figure 7, is studied in these four cases with several objectives which are developed and commented on in our results. The voltage system consists of a fundamental voltage and a harmonic of order 5, (V_{a,5}), expressed with the fundamental quantities and the order-5 harmonic voltages taken in positive sequence, where V_5 is the maximum amplitude of the order-5 harmonic. Table 1 shows the numerical values of the voltage in the four cases studied.
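The following is a small sketch, with illustrative amplitudes since the numerical values of Table 1 are not reproduced here, of how the four test-voltage hypotheses can be synthesized: a fundamental that is either balanced or unbalanced, optionally summed with a positive-sequence fifth-order harmonic acting as the disturbance.

import numpy as np

def test_voltages(t, unbalanced=False, disturbed=False, v1=230 * np.sqrt(2), v5_ratio=0.1):
    """Three-phase test voltages for the four hypotheses (all amplitudes are illustrative)."""
    w = 2 * np.pi * 50
    mags = np.array([1.0, 0.85, 1.1]) if unbalanced else np.ones(3)   # assumed unbalance pattern
    shifts = np.array([0.0, -2 * np.pi / 3, 2 * np.pi / 3])           # a, b, c phase shifts
    v = v1 * mags[:, None] * np.cos(w * t + shifts[:, None])
    if disturbed:
        # Fifth harmonic injected in positive sequence: same phase ordering as the fundamental.
        v += v5_ratio * v1 * np.cos(5 * w * t + shifts[:, None])
    return v

t = np.arange(0, 0.04, 1e-4)
hyp = {1: (False, False), 2: (True, False), 3: (False, True), 4: (True, True)}
for k, (unb, dist) in hyp.items():
    print(f"Hypothesis {k}: shape {test_voltages(t, unb, dist).shape}")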
The filter parameters used in the rest of this work are the following.
Results and discussion
It is important to note at this level that a fourth connection is made on the neutral wire in order to observe its behaviour over time for the four operating hypotheses.
Original PQ method
Figure 8(a) shows a THD of 1.73% and a UFI of 0.33%, satisfactory results with respect to the IEEE 519-2014 standard, which for voltages below 69 kV specifies a THD limit of between 3% and 5%, and to the EN 50160 standard, which requires a UFI of less than 2%. Hypotheses 2, 3 and 4 present THDs of 1.79%, 2% and 0.33%, respectively, also in accordance with the standard. In UFI, on the other hand, cases 2 and 3 present 5.33% and 16.66%, respectively, which are of poor quality with regard to the requirements of the above standard, while case 4 offers 1.73%, which is validated by the standard in force stated above. Figure 8 also shows the evolution of the current in the neutral for each case.
Figure 8. Results with the original PQ method for the currents (i_a, i_b, i_c) and the neutral current for the 4 hypotheses (A, B, C, D, E, F, G, H).
Figure 9 shows THDs of 1.73%, 2%, 1.8% and 1.7% for hypotheses 1, 2, 3 and 4, respectively, which also comply with the IEEE 519-2014 standard; on the other hand, only case 1 has a UFI of 0.33%, which conforms to the standard, whereas cases 2, 3 and 4 present 5%, 8.33% and 3.33%, respectively, which do not meet the standard. Nevertheless, we note that the modified PQ method improved case 3 by half: with a UFI of 8.33%, it seems so far to be the most suitable method for this operating hypothesis. Figure 10 shows THDs of 1.73%, 2.1%, 0.33% and 2% for cases 1, 2, 3 and 4, respectively, which are also satisfactory with respect to the IEEE 519-2014 standard; on the other hand, only cases 1 and 3 have UFIs of 0.33% and 1.76%, which conform to the standard, whereas cases 2 and 4 show 20% and 13.76%, respectively, which do not meet the standard. Nevertheless, we note that the pq pseudo-mapping matrix method improves case 3 by half, better than the first two methods, with a UFI of 1.76%; it seems to be the most suitable method for this operating hypothesis. Figure 11 shows THDs of 1.73%, 2.12%, 1.73% and 2.23% for cases 1, 2, 3 and 4, respectively, which also comply with the IEEE 519-2014 standard; on the other hand, only cases 1 and 3 have UFIs of 0.33% and 0.66%, which conform to the standard, whereas cases 2 and 4 present 28.33% and 36.2%, respectively, which do not meet the standard. Nevertheless, the PQR method succeeds in improving case 3 with a UFI of 0.66%; it seems to be the most suitable method, compared with the first three, for this working hypothesis. Figure 12 shows THDs of 1.73%, 1.95%, 1.74% and 1.95% for cases 1, 2, 3 and 4, respectively, which also comply with the IEEE 519-2014 standard; on the other hand, only cases 1 and 3 have UFIs of 0.33% and 1%, which comply with the standard, whereas cases 2 and 4 present 16.66% and 16.66%, respectively, which do not meet the standard. Nevertheless, the NFPQ method succeeds in improving case 3 with a UFI of 1%, making it among the most suitable methods for this operating hypothesis; it is important to note, however, that the PQR method gives 0.66% UFI, which remains a better result than that obtained with the NFPQ method. Figure 13 shows THDs of 1.73%, 1.73%, 1.73% and 1.74% for cases 1, 2, 3 and 4, respectively, which also comply with the IEEE 519-2014 standard, and cases 1, 2, 3 and 4 have UFIs of 0.33%, 0.66%, 0.66% and 1%, respectively, which all conform to the standard. This method alone gives satisfaction for cases 1, 2, 3 and 4, with a better result in case 4 than all the other methods developed. The DCAP algorithm therefore stands out as the best for all network operating conditions in this work. The research spirit of this article is to contribute to a classification of six algorithms for extracting reference currents within the framework of harmonic compensation for a stochastic electrical network, which for the particular case of this work presents four configurations. A general presentation of all the parameters or indicators is collected in a table, with the objective of facilitating the discussion, but above all of validating the results obtained.
Regarding the UFI values for the hypotheses (1, 2, 3, 4), we notice that the THD quality is better for the modified PQ method over the hypotheses (1, 2, 3, 4) in comparison with the original PQ. These results are corroborated by the work of (Li-Wang et al., 2018). The observation is similar for the pq pseudo-mapping matrix method, which also offers 50% satisfaction in UFI for the hypotheses (1, 2, 3, 4) and 100% in THD under the same assumptions; we can also see that its THD quality is lower than that of the first two methods. These results are also corroborated by the work of (Hanna et al., 2018). The NFPQ and PQR methods are likewise at 50% and 100%, respectively, for their common UFI and THD over the hypotheses (1, 2, 3, 4). Finally, the DCAP method offers 100% satisfaction in both THD and UFI for the hypotheses (1, 2, 3, 4) and, in the end, appears as the most satisfactory extraction method. These results are supported by the work of (Hanna Nohra et al., 2014).
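Since the comparison above relies on THD and the unbalance factor, the following is a minimal sketch, not the paper's code, of how both indicators can be computed from sampled steady-state currents: THD from the FFT harmonic content relative to the fundamental, and the unbalance factor from the ratio of negative- to positive-sequence fundamental phasors (Fortescue components); the sampling parameters and waveforms are illustrative.

import numpy as np

A = np.exp(2j * np.pi / 3)                       # Fortescue rotation operator

def fundamental_phasor(x, fs, f0=50.0):
    """Complex phasor of the f0 component of a sampled waveform (integer periods assumed)."""
    n = x.size
    k = int(round(f0 * n / fs))                  # FFT bin of the fundamental
    return 2 * np.fft.fft(x)[k] / n

def thd(x, fs, f0=50.0, n_harmonics=20):
    """Total harmonic distortion of one phase, as a fraction of the fundamental."""
    n = x.size
    spectrum = np.abs(np.fft.fft(x)) * 2 / n
    k = int(round(f0 * n / fs))
    harm = [spectrum[h * k] for h in range(2, n_harmonics + 1) if h * k < n // 2]
    return np.sqrt(np.sum(np.square(harm))) / spectrum[k]

def unbalance_factor(i_abc, fs, f0=50.0):
    """Negative-/positive-sequence ratio of the fundamental currents (in %)."""
    ia, ib, ic = (fundamental_phasor(x, fs, f0) for x in i_abc)
    i_pos = (ia + A * ib + A**2 * ic) / 3
    i_neg = (ia + A**2 * ib + A * ic) / 3
    return 100 * abs(i_neg) / abs(i_pos)

fs = 10_000
t = np.arange(0, 0.2, 1 / fs)
ang = 2 * np.pi * 50 * t
i_abc = np.vstack([np.cos(ang), 0.95 * np.cos(ang - 2 * np.pi / 3), np.cos(ang + 2 * np.pi / 3)])
i_abc = i_abc + 0.02 * np.cos(5 * ang)           # small residual fifth harmonic
print(f"THD(a) = {100 * thd(i_abc[0], fs):.2f} %, UFI = {unbalance_factor(i_abc, fs):.2f} %")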
Conclusion
In this article, a contribution was made to the classification of six algorithms for extracting reference currents for harmonic compensation with the TLC hybrid filter, under operating conditions with sinusoidal and balanced voltage, sinusoidal and unbalanced voltage, balanced and disturbed voltage, and finally unbalanced and disturbed voltage, with the ultimate goal of determining the algorithm best suited to the disturbances that the electrical network may undergo. This offers the possibility of optimizing the compensation step by injection into the electrical network thanks to a robust and reliable extraction algorithm. Harmonic voltages of order 5 taken in positive sequence were used during the execution of these six algorithms. It emerges that the DCAP algorithm is the extraction technique that best withstands disturbances, with results validated by both the IEEE 519-2014 and EN 50160 standards; moreover, (Hanna Nohra et al., 2014), in their work with active compensators, had already presented the advantages of this new approach to extracting reference currents. These results are reinforced in the four operating conditions developed for our case study.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Data availability statement
The data that contributed to the realization of this article come partly from the literature and partly from the protocols of several laboratory experimental tests, and may support possible future submissions for publication.
A Weyl-Type character formula for PDC modules of gl(m|n)
In 1994, Kac and Wakimoto suggested a generalization of the Bernstein–Leites character formula for basic Lie superalgebras, and the natural question was raised: to which simple highest weight modules does it apply? In this paper, we prove a similar formula for a large class of finite-dimensional simple modules over the Lie superalgebra gl(m|n), which we call piecewise disconnected modules, or PDC. The class of PDC modules naturally includes totally connected modules and totally disconnected modules, the two families for which similar character formulas were proven by Su and Zhang as special cases of their general formula. This paper is part of our program for the pursuit of elegant character formulas for Lie superalgebras.
Introduction
It has long been known that character formulas for simple finite dimensional representations of Lie superalgebras are a nontrivial extension of the classical case. The problem originates from the existence of the so-called atypical roots. In the absence of these roots, Kac proved in 1977 that the Weyl character formula generalizes in a straightforward fashion [K2, K3]. In 1980, an elegant Weyl-type character formula was proven by Bernstein and Leites [BL] for representations of atypicality 1 (see Section 2.4). Let L(λ) be a finite dimensional simple representation of highest weight λ and atypical root β; then
\[
  e^{\rho}R\cdot\operatorname{ch}L(\lambda)\;=\;\sum_{w\in W}(-1)^{l(w)}\,w\!\left(\frac{e^{\lambda+\rho}}{1+e^{-\beta}}\right).
\]
Great efforts were made to generalize this formula to all finite dimensional modules of gl (m|n). It was shown in [VHKT] that such a formula does not hold in general but does hold for important families of modules, such as the covariant and contravariant modules. In [KW1], Kac and Wakimoto stated a similar formula for the case when all of the atypical roots are simple, which was proven by the authors in [CHR]. Modules satisfying the Kac-Wakimoto character formula were called tame in [KW1], however the term tame was used differently in [KW2].
In [S1, S2], Serganova proved an algorithmic character formula in terms of generalized Kazhdan-Lusztig polynomials. Brundan gave an explicit algorithm for computing these Kazhdan-Lusztig polynomials [B] by using techniques from the theory of quantum groups. Using Brundan's algorithm, Su and Zhang proved a closed formula that consists of an alternating sum of Bernstein-Leites characters. A new approach using super duality was pioneered by Cheng, Wang and Zhang in [CWZ].
There are two classes of representations for which the Su-Zhang formula consists of one Bernstein-Leites term, namely the totally connected and the totally disconnected ones, (where the former contains the covariant and contravariant modules [MV,Corollary 3.5]). In this paper, we generalize these two classes to the class of piecewise disconnected modules (see Definition 16). Roughly speaking, these are the modules whose highest weight splits into components, each of which resembles a totally connected module while the relation between these components resembles a totally disconnected module.
Let L(λ) be a piecewise disconnected module of highest weight λ with respect to the standard choice of simple roots. We prove the following 1-term character formula for L(λ):
\[
  e^{\rho}R\cdot\operatorname{ch}L(\lambda)\;=\;\frac{(-1)^{\left|(\lambda_{\rho})^{\Uparrow}-\lambda_{\rho}\right|_{S_{\lambda}}}}{t_{\lambda}}\sum_{w\in W}(-1)^{l(w)}\,w\!\left(\frac{e^{(\lambda_{\rho})^{\Uparrow}}}{\prod_{\beta\in S_{\lambda}}\bigl(1+e^{-\beta}\bigr)}\right), \tag{1.1}
\]
where S_λ is a maximal orthogonal set of atypical roots; the weight (λ_ρ)^⇑ is obtained by adding certain atypical roots to λ + ρ; |(λ_ρ)^⇑ − λ_ρ|_{S_λ} is the number of such roots added; and t_λ is a positive integer determined by the lengths of the atypical components of λ (see Definitions 15, 21 and 24). Our proof uses Brundan's algorithm [B] and is based on ideas from [SZ]. Unlike the totally connected and totally disconnected cases, for a general piecewise disconnected weight λ, the weight (λ_ρ)^⇑ appearing in formula (1.1) does not correspond to a highest weight vector for any choice of simple roots.
A homogeneous element x ∈ g0 has degree 0, denoted deg(x) = 0, while x ∈ g1 has degree 1, denoted deg(x) = 1. We define a bilinear operation on g by letting [x, y] = xy − (−1)^{deg(x)deg(y)} yx on homogeneous elements and then extending linearly to all of g.
Then g has a root space decomposition g = h ⊕ ⊕_{α∈∆0} g_α ⊕ ⊕_{α∈∆1} g_α, where the set of roots of g is ∆ = ∆0 ∪ ∆1, with ∆0 = {ε_i − ε_j : 1 ≤ i ≠ j ≤ m} ∪ {δ_k − δ_l : 1 ≤ k ≠ l ≤ n} and ∆1 = {±(ε_i − δ_k) : 1 ≤ i ≤ m, 1 ≤ k ≤ n}. The Weyl group of g is W = Sym(m) × Sym(n), and W acts on h* by permuting the indices of the ε's and by permuting the indices of the δ's. In particular, the even reflection s_{ε_i−ε_j} interchanges the i and j indices of the ε's and fixes all other indices, while s_{δ_k−δ_l} interchanges the k and l indices of the δ's and fixes all other indices.
The corresponding decomposition ∆ = ∆^+ ∪ ∆^− is given by ∆^+_0 = {ε_i − ε_j : i < j} ∪ {δ_k − δ_l : k < l} and ∆^+_1 = {ε_i − δ_k : 1 ≤ i ≤ m, 1 ≤ k ≤ n}. The standard choice of simple roots has the unique property that W fixes ∆^+_1. Moreover, it contains a basis for ∆^+_0, which we denote by π0.
2.3. Finite dimensional modules for g = gl(m|n). For each weight λ ∈ h*, the Verma module of highest weight λ is the induced module M(λ) := Ind^g_{n^+ ⊕ h} C_λ, where C_λ is the one-dimensional module on which h ∈ h acts by scalar multiplication by λ(h) and n^+ acts trivially. The Verma module M(λ) has a unique simple quotient, which we denote by L(λ). Given λ ∈ h*, we use the abbreviation λ_ρ := λ + ρ. For each λ ∈ h*, let L0(λ) denote the simple highest weight g0-module with respect to π0. The Kac module of highest weight λ with respect to π is the induced module K(λ) := Ind^g_{g0 ⊕ n^+_1} L0(λ), defined by letting n^+_1 := ⊕_{α∈∆^+_1} g_α act trivially on the g0-module L0(λ). Its unique simple quotient is L(λ). For a proof of the following proposition see for example [M, 14.1.1].
Proposition 1. Let g = gl(m|n) and λ ∈ h*. Then,
The atypicality of L(λ) is the maximal number of linearly independent roots β_1, ..., β_r such that (β_i, β_j) = 0 and (λ_ρ, β_i) = 0 for i, j = 1, ..., r. Such a set S_λ = {β_1, ..., β_r} is called a λ_ρ-maximal isotropic set, and we assume that the elements of S_λ are ordered so that β_i = ε_{p_i} − δ_{q_i} and q_i < q_{i+1}. As in [KW1], we denote the atypicality of L(λ) by atp(λ_ρ) = r. The module L(λ) is called typical if this set is empty, and atypical otherwise. For the standard choice of simple roots the set S_λ is uniquely determined.
Let P denote the set of integral weights and P^+ the set of dominant integral weights. When studying the characters of simple finite dimensional atypical modules, we may restrict without loss of generality to the case that λ ∈ P^+; see Remark 8 in [CHR].
2.5. Weight diagrams. The weight diagrams studied in this paper were introduced by Brundan and Stroppel in [BS1]. They were used by Grusson and Serganova in [GS] to give algorithmic character formulas for basic classical Lie superalgebras.
Let λ ∈ P^+ and write λ_ρ as in (2.2). If an integer t is not the value of any entry of λ_ρ, then we refer to the place holder above t as an empty spot. Note that each × corresponds to some atypical root β_i. We number the ×'s left to right, which is consistent with the chosen ordering of S_λ.
Example 3.
Denote by E the algebra of rational functions Q(e^ν, ν ∈ h*). The group W acts on E by mapping e^ν to e^{w(ν)}. For β ∈ ∆^+_1, we identify elements of the form 1/(1 + e^{−β}) with their expansion as a geometric series in the domain e^{−β} < 1. Since ∆^+_1 is fixed by W, expanding commutes with the action of W. The Weyl denominator of g is defined to be
\[
  R \;=\; \frac{\prod_{\alpha\in\Delta^{+}_{0}}\bigl(1-e^{-\alpha}\bigr)}{\prod_{\alpha\in\Delta^{+}_{1}}\bigl(1+e^{-\alpha}\bigr)}.
\]
If ν is not regular, then ν has a non-trivial stabilizer in W, so the stabilizer of ν in W must contain a reflection σ [G, 4.1].
2.7. Character formulas and Kazhdan–Lusztig polynomials. Serganova introduced the generalized Kazhdan–Lusztig polynomials K_{λ,µ}(q) in [S1] to give an algorithmic character formula for finite dimensional irreducible representations of gl(m|n). Brundan gave a new algorithm in [B] for computing the generalized Kazhdan–Lusztig polynomials for gl(m|n) which can be described in terms of paths (see Section 2.8).
2.8. Paths. We recall Brundan's algorithm [B] to compute K λ,µ (q) using weight diagrams. We define a right move map from the set of (labeled) weight diagrams to itself in two steps.
Definition 6. Let D_µ be a weight diagram for µ ∈ P^+, and choose a labeling of the ×'s with indexing set {1, ..., r}. Then for each ×, starting with the rightmost ×, "mark" the next empty spot to the right of it (which is unmarked). The right move R_i is then defined by moving ×_i to the empty spot it marked.
Define a partial order ⪯ on P by µ_ρ ⪯ λ_ρ if and only if λ_ρ and µ_ρ have the same typical entries, atp(λ_ρ) = atp(µ_ρ), and the i-th atypical entry of µ_ρ is less than or equal to the i-th atypical entry of λ_ρ.
Remark 8. For each µ, λ ∈ P^+, there exists a path from D_µ to D_λ if and only if µ_ρ ⪯ λ_ρ [B].
Let P λ,µ denote the set of paths from D µ to D λ . If P λ,µ is non-empty, it contains a unique longest path, which sends the i-th × of µ ρ to the location of the i-th × of λ ρ . We call this path the trivial path from D µ to D λ and denote its length by l λ,µ .
The following is a corollary of Theorem 5, Lemma 9 and Equation (2.4).
3. Piecewise disconnected weights
3.1. Piecewise disconnected weights. We will see that some simple highest weight modules have particularly nice character formulas. In this section we characterize their highest weights.
Definition 11. A weight λ ∈ P + is called totally connected if in the weight diagram D λ there are no empty spots between the ×'s.
Definition 12. A weight λ ∈ P + is called totally disconnected if the diagram D λ contains at least one empty spot between every two ×'s.
Remark 13. Definitions 11 and 12 are equivalent to those given in [SZ, Section 3.7].
Definition 14. Let λ ∈ P + . We call a nonempty contiguous subsection of the weight diagram D λ an atypical component if it contains an ×, but does not contain an empty spot and is maximal with this property. If × j and × k belong to the same atypical component then we write j ∼ k.
Definition 15. Let λ ∈ P + . Enumerate the atypical components of D λ left to right T 1 , . . . , T N , and let t i be the number of ×'s contained in T i for i = 1, . . . , N . We define t λ = t 1 !t 2 ! · · · t N !.
Definition 16. We call a weight λ ∈ P + and the corresponding weight diagram D λ piecewise disconnected if t i ≤ s i , where s i is the number of empty spots between T i and T i+1 , for i = 1, . . . , N − 1.
Remark 17. A totally connected weight λ is piecewise disconnected with N = 1 and t λ = r!. A totally disconnected weight λ is piecewise disconnected with N = r and t λ = 1. Here r = atp(λ ρ ).
Example 19. If
then the corresponding weight diagram D λ is not piecewise disconnected.
Remark 20. A weight λ ∈ P + is totally connected if and only if for every µ ∈ P + the only possible path from D µ to D λ is the trivial path, whereas it is totally disconnected if and only if there exists µ ∈ P + with r! paths from D µ to D λ , where r = atp(λ ρ ).
3.2. Definition of (λ ρ ) ⇑ . In this section, we define the integral weight (λ ρ ) ⇑ which appears in the statement of the main theorem (Theorem 25). Let λ ∈ P + and write λ ρ as in (2.2). We refer to the coefficient a i (resp. b j ) as the ε i -entry (resp. δ j -entry). If ±(ε k − δ l ) ∈ S λ , then we call the ε k and δ l entries atypical. Otherwise, an entry is called typical.
Definition 21. If λ ∈ P + is piecewise disconnected, we denote by (λ ρ ) ⇑ the element obtained from λ ρ by replacing each atypical entry with the maximal atypical entry in the atypical component to which it belongs.
4. Main theorem
The main theorem of this paper is as follows.
4.1. A map from the set of paths to Sym(r). In this section, we define for each λ, µ ∈ P + an injective map from the set of paths P λ,µ to Sym(r), where r is the atypicality of λ, and describe the image of this map when λ is piecewise disconnected. The image of such a map for general λ was described by Su and Zhang in [SZ, Section 3.8].
For λ, µ ∈ P + , number the ×'s of D µ left to right × 1 , . . . , × r and number the ×'s of D λ left to right ×̌ 1 , . . . , ×̌ r . Then a path θ ∈ P λ,µ determines uniquely an element σ θ of Sym(r) given by the ordering × k → ×̌ σ θ (k) . In this way, we define the map Θ λ,µ : P λ,µ → Sym(r). The map Θ λ,µ is injective, since a path is determined by this ordering. The image of the trivial path is the identity element of Sym(r).
There are two paths from D µ to D λ , namely, the trivial path and the path R 1 R 1 R 1 R 2 R 2 R 2 R 3 R 3 which can be computed as follows.
The image of this non-trivial path under the map Θ λ,µ is the cycle (23). There are no other paths, because if the 4 and 5 positions were filled before the 7 position then the 7 position would be held, making the path impossible to complete.
In the following lemma we describe the image of Θ λ,µ for an arbitrary piecewise disconnected weight.
Proof. Let θ ∈ P λ,µ . Since the ×'s move in order from left to right to their respective destinations, we have that × k ≤ ×̌ σ θ (k) . This ensures that σ(µ ρ ) ⪯ λ ρ . When an × reaches its destination, it holds the next empty spot after it. Hence, the ×'s must go in order into each atypical component so that every spot can be filled, that is, if j < k and j ∼ k then σ −1 θ (j) < σ −1 θ (k). Hence, we always have inclusion. When λ is piecewise disconnected, these conditions on σ ∈ Sym(r) are sufficient to define a path θ from D µ to D λ which satisfies × k → ×̌ σ θ (k) . Indeed, the number of empty spots following an atypical component and preceding the next is greater than or equal to the number of ×'s in a given atypical component, so an × does not hold an ×̌ spot.
Remark 29. If λ is not piecewise disconnected then Lemma 28 does not hold. See [SZ, Section 3.8] for a description of the image in the general case.
In the following lemma we change the defining conditions of the set from Lemma 28 by replacing λ ρ with (λ ρ ) ⇑ , and then we show that this does not change the set.
4.2. A bijection of indexing sets. In this section, we change the indexing set of the character formula in (2.5) from P λ to a particular subset of (λ ρ − NS λ ).
Definition 32. For μ̄ ∈ C Lexi λ,reg , define d λ,μ̄ to be the number of paths from D µ to D λ , where µ is the unique dominant element in the W-orbit of μ̄.
The following lemma is proven using techniques from [SZ, Section 4.1].
Lemma 33. One has

Proof. By Corollary 10 it suffices to show that for each µ ∈ P λ ,

Let w ′ ∈ W be such that w ′ (µ ρ ) = µ. To complete the proof it is sufficient to show that |λ ρ − µ| S λ = l λ,µ + ℓ(w ′ ). The number |λ ρ − µ| S λ is the sum of the differences between the atypical entries of λ ρ and µ. This is equal to the number of moves in the trivial path l λ,µ plus the number of spots being skipped. We will show that ℓ(w ′ ) is exactly the number of spots skipped in the trivial path. The element w ′ ∈ W for which w ′ (µ ρ ) = µ can be described explicitly in terms of the trivial path θ. Denote θ = R i 1 ◦ · · · ◦ R i N ; then w ′ = w 1 · · · w N , where each w j is defined as follows. Suppose that the move R i j moved the × at n j to an empty spot at n j + k j + 1, namely, it skipped over k j spots with >'s and <'s. Then w j = s 1 · · · s k j , where s i is of the form s ε l −ε l+1 if the i-th skip is over the > of ε l and is of the form s δ l −δ l+1 if it is over the < of δ l . It is easy to see that this expression is reduced, so ℓ(w j ) = k j is the number of spots skipped in the move R i j . Also ℓ(w ′ ) = Σ j ℓ(w j ), so ℓ(w ′ ) is exactly the number of spots skipped in the trivial path.

4.3. Paths and permutations for piecewise disconnected weights. In this section, we show that if λ ∈ P + is a piecewise disconnected weight, then for each µ ∈ P λ there exists a t λ to 1 map from the set of paths from µ to λ to a certain subset of the Weyl group. This is a crucial step in the proof of the main theorem.
Let W r be the subgroup of W that permutes S λ . Then W r ∼ = Sym(r) and is generated by elements of the form s ε p i −ε p j s δ q i −δ q j , which interchange β i and β j . So |W r | = r! and all w ∈ W r have positive sign.
Fix λ ∈ P + , and recall the notation of Section 3.1. We define the subgroup of W r that preserves the atypical components of λ ρ , that is, W r (t λ ) := {w ∈ W r : j ∼ k whenever w(β j ) = β k }. So w ∈ W r (t λ ) and λ β ∈ T i imply that λ w(β) ∈ T i . Clearly, W r (t λ ) ∼ = Sym(t 1 ) × · · · × Sym(t N ), and hence W r (t λ ) has cardinality t λ .
Proof. Let j be such that (λ ρ ) β j < ν β j ≤ ((λ ρ ) ⇑ ) β j and ν β i ≤ (λ ρ ) β i for all i > j. By definition of (λ ρ ) ⇑ , all the integers between (λ ρ ) β j + 1 and ((λ ρ ) ⇑ ) β j are entries of λ ρ . The typical entries of ν are the same as those of λ ρ , and there are r − j + 1 atypical entries which are strictly greater than (λ ρ ) β j . This implies that there must be equal entries of the same type, and hence ν is not regular.
|
2016-06-19T06:24:08.000Z
|
2014-07-01T00:00:00.000
|
{
"year": 2014,
"sha1": "66f96375458f04667e3d851ef655a0d65be7c8b5",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "66f96375458f04667e3d851ef655a0d65be7c8b5",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
}
|
261695067
|
pes2o/s2orc
|
v3-fos-license
|
Comparative analysis of gut microbiota between common (Macaca fascicularis fascicularis) and Burmese (M. f. aurea) long-tailed macaques in different habitats
The environment has an important effect on the gut microbiota—an essential part of the host’s health—and is strongly influenced by the dietary pattern of the host as these together shape the composition and functionality of the gut microbiota in humans and other animals. This study compared the gut microbiota of Macaca fascicularis fascicularis and M. f. aurea in mangrove and island populations using 16S rRNA gene sequencing on a nanopore platform to investigate the effect of the environment and/or diet. The results revealed that the M. f. fascicularis populations that received anthropogenic food exhibited a higher richness and evenness of gut microbiota than the M. f. aurea populations in different habitats. Firmicutes and Bacteroidetes were the two most abundant bacterial phyla in the gut microbiota of both these subspecies; however, the relative abundance of these phyla was significantly higher in M. f. aurea than in M. f. fascicularis. This variation in the gut microbiota between the two subspecies in different habitats mostly resulted from the differences in their diets. Moreover, the specific adaptation of M. f. aurea to different environments with a different food availability had a significant effect on their microbial composition.
Nanopore sequencing of bacterial 16S rRNA gene. The full-length bacterial 16S rRNA gene from 120 fecal samples of long-tailed macaques was successfully sequenced using high throughput nanopore sequencing. In total, 2,444,551 sequencing reads were obtained from 120 samples with an average read per sample of 20,371 (Table 1). The average classified reads were 18,091 per sample. According to the rarefaction analysis, all the samples had sufficient sequencing depth for estimation of the bacterial diversity (Fig. 2).
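A rarefaction curve of the kind used to judge sequencing depth can be sketched as follows. This is only an illustration with made-up OTU counts and depths, not the rarefaction procedure of the study's actual pipeline.

# Minimal rarefaction sketch (illustrative): subsample reads from one sample's OTU
# count vector at increasing depths and count how many OTUs are observed at each depth.
import numpy as np

def rarefaction_curve(otu_counts, depths, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    reads = np.repeat(np.arange(len(otu_counts)), otu_counts)  # one entry per read
    curve = []
    for d in depths:
        d = min(d, len(reads))
        observed = [
            len(np.unique(rng.choice(reads, size=d, replace=False)))
            for _ in range(n_iter)
        ]
        curve.append(np.mean(observed))
    return curve

# toy OTU counts for one sample
counts = np.array([500, 300, 120, 60, 15, 4, 1])
print(rarefaction_curve(counts, depths=[10, 100, 500, 1000]))

A curve that flattens as the depth grows indicates that most OTUs in the sample have already been observed, which is the criterion behind the statement above.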
Bacterial diversity in the gut microbiome in Mff and Mfa. Bacterial alpha diversity (level of diversity within individual samples) comparisons between Mff and Mfa in the respective mangrove and island populations were evaluated based on the Chao1 index (Fig. 3a), while the richness and evenness of bacterial operational taxonomic units (OTUs) were determined using the Shannon diversity index (Fig. 3b). Statistical comparisons of indices between groups were carried out using a Kruskal-Wallis test, accepting significance at the P < 0.05 level.
The Chao1 index and Shannon's diversity between different habitat types of the M. fascicularis subspecies were compared. The Chao1 index of the Mff-KPE population on the island had a significantly higher OTU richness (P = 0.0021) than the Mff-BTB population in the mangrove forest. Likewise, the Shannon's diversity was noticeably and significantly higher (P = 0.0002) for the Mff-KPE island population than the Mff-BTB mangrove population. Similarly, the Mfa-PNY population living on the island showed a significantly higher OTU richness (P = 0.0021) and Shannon's diversity index (P = 0.0332) than the Mfa-MFRC mangrove population. Overall, the alpha diversity of Mff was significantly higher than that for the Mfa populations in both habitat types.
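The alpha-diversity comparison above combines per-sample indices with a Kruskal-Wallis test. The sketch below is illustrative only (the study used QIIME2); the OTU counts are made up, and the Chao1 form shown is the common bias-corrected estimator.

# Sketch of an alpha-diversity comparison: Shannon and Chao1 per sample from OTU
# counts, then a Kruskal-Wallis test between two groups of samples (toy data).
import numpy as np
from scipy.stats import kruskal

def shannon(counts):
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum()

def chao1(counts):
    s_obs = (counts > 0).sum()
    f1 = (counts == 1).sum()          # singletons
    f2 = (counts == 2).sum()          # doubletons
    return s_obs + (f1 * (f1 - 1)) / (2 * (f2 + 1))  # bias-corrected form

group_a = [np.array([30, 12, 7, 1, 1, 2]), np.array([25, 9, 5, 3, 1, 1])]
group_b = [np.array([60, 2, 1, 1, 0, 0]), np.array([55, 4, 2, 1, 0, 0])]

shannon_a, shannon_b = [shannon(c) for c in group_a], [shannon(c) for c in group_b]
chao1_a, chao1_b = [chao1(c) for c in group_a], [chao1(c) for c in group_b]
print(kruskal(shannon_a, shannon_b))   # H statistic and P value for Shannon diversity
print(kruskal(chao1_a, chao1_b))       # H statistic and P value for Chao1 richness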
To further examine the differences between the samples, beta diversity (level of diversity or dissimilarity between samples) analysis was performed using the Bray-Curtis cluster analysis index to compare the microbial community compositions between Mff and Mfa in mangrove and island populations. The beta diversity (Fig. 3c) between Mff and Mfa in different habitat types (mangrove and island) was significantly different (P = 0.001, permutational multivariate analysis of variance [PERMANOVA]). However, the Mfa-MFRC mangrove population had a significant divergence from the other populations.

Figure 1. The total number of food items (per visit) consumed by Mff and Mfa living in a mangrove forest or on an island. The black, grey, and white columns indicate marine invertebrates, plants, and anthropogenic foods, respectively.

Table 1. Summary of the sequencing and reads classification (mean ± SD) in each population of Macaca fascicularis fascicularis (Mff) and M. f. aurea (Mfa) in the two respective habitat types.
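The beta-diversity step described above, Bray-Curtis distances followed by a PERMANOVA on the grouping, can be sketched as follows. This is an illustrative reconstruction with a made-up OTU table; it assumes scikit-bio is available and is not the QIIME2 workflow actually used in the study.

# Sketch of a Bray-Curtis / PERMANOVA beta-diversity comparison (toy data).
import numpy as np
from scipy.spatial.distance import pdist, squareform
from skbio import DistanceMatrix
from skbio.stats.distance import permanova

otu_table = np.array([            # rows = samples, columns = OTUs (toy counts)
    [30, 12, 7, 1], [25, 9, 5, 3],    # "mangrove" samples
    [5, 40, 2, 10], [4, 38, 3, 12],   # "island" samples
])
ids = ["m1", "m2", "i1", "i2"]
grouping = ["mangrove", "mangrove", "island", "island"]

rel = otu_table / otu_table.sum(axis=1, keepdims=True)     # relative abundances
dm = DistanceMatrix(squareform(pdist(rel, metric="braycurtis")), ids)
print(permanova(dm, grouping, permutations=999))           # pseudo-F statistic and P value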
Taxonomic composition of the gut microbiota in Mff and Mfa at different habitats. Firmicutes was the most dominant bacterial phylum among the mangrove and island Mff populations (Fig. 4, upper panel) with a mean ± SD proportion of 57.6 ± 14.6% and 57.3 ± 5.9%, respectively. The Bacteroidetes accounted for 24.0 ± 10.2% and 28.9 ± 6.8% in the Mff mangrove and island populations, respectively, making it the second most abundant phylum. However, the Firmicutes to Bacteroidetes ratio was not significantly different between the Mff mangrove and island populations (4.7 ± 8.1 and 2.1 ± 1.9 for the mangrove and island populations, respectively; Mann-Whitney U test; P < 0.05). Proteobacteria, which made up 8.7 ± 18.0% and 4.4 ± 2.0% of the bacteriome in the Mff mangrove and island populations, respectively, was the third most abundant phylum. Similarly, Firmicutes was the most dominant phylum in the Mfa mangrove (74.7 ± 27.2%) and island (64.9 ± 17.9%) populations, while Bacteroidetes was the second most abundant phylum in both Mfa populations (5.4 ± 10.7% and 20.6 ± 13.2% for the mangrove and island populations, respectively). In contrast to the Mff populations, the Firmicutes to Bacteroidetes ratio in the Mfa-PNY island population (8.9 ± 12.4) was significantly lower than in the Mfa-MFRC mangrove population (68.3 ± 82.5; Mann-Whitney U test; P < 0.001), which was aligned with the higher abundance level of Bacteroidetes in the Mfa-PNY island population. The Proteobacteria (14.0 ± 25.8%) and Verrucomicrobia (6.7 ± 6.1%) were the third most abundant phyla in the Mfa mangrove and island populations, respectively.
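The Firmicutes-to-Bacteroidetes comparison above amounts to a per-sample ratio followed by a Mann-Whitney U test between populations. The sketch below uses made-up relative abundances and is only an illustration of that calculation.

# Sketch of a Firmicutes/Bacteroidetes ratio comparison with a Mann-Whitney U test.
import numpy as np
from scipy.stats import mannwhitneyu

# toy per-sample relative abundances (%) of the two phyla in two populations
firmicutes_pop1 = np.array([74.0, 80.5, 69.2, 77.8])
bacteroidetes_pop1 = np.array([4.8, 3.1, 7.5, 5.9])
firmicutes_pop2 = np.array([63.0, 66.4, 61.9, 68.2])
bacteroidetes_pop2 = np.array([21.5, 18.7, 24.0, 19.9])

ratio_pop1 = firmicutes_pop1 / bacteroidetes_pop1
ratio_pop2 = firmicutes_pop2 / bacteroidetes_pop2
stat, p = mannwhitneyu(ratio_pop1, ratio_pop2, alternative="two-sided")
print(f"mean F/B ratio pop1 = {ratio_pop1.mean():.1f}, pop2 = {ratio_pop2.mean():.1f}, P = {p:.4f}")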
The top 10 most dominant genera in the bacterial communities of both macaque subspecies in the mangrove and island populations were identified and are shown in Fig. 4, middle panel. The most dominant bacterial genus in Mff was Oscillibacter at 13.3 ± 6.5% and 12.2 ± 3.3% in the mangrove and island populations, respectively. The other predominant bacteria in the bacterial microbiome of Mff were Prevotella, Clostridium sensu stricto, Clostridium XlVa, Faecalibacterium, and Intestinimonas. The proportions of these bacteria varied across samples and were different between the Mff mangrove and island populations. Oscillibacter was also the most dominant bacterial genus in the fecal microbiome of the Mfa island population (17.8 ± 7.6%) and was higher than that in the Mfa mangrove population (4.9 ± 4.5%). In contrast, Clostridium sensu stricto was the most predominant bacterial genus in the Mfa mangrove population (23.1 ± 22.9%), with a significantly higher abundance (P < 0.0001) than in the Mfa island population (4.9 ± 7.4%).
At the bacterial species level, Oscillibacter valericigenes was the most dominant species in Mff (13.38 ± 6.56% and 12.20 ± 3.36% in the mangrove and island populations, respectively) (Fig. 4, lower panel). The other less dominant bacterial species were Prevotella copri, Intestinimonas butyriciproducens, and Faecalibacterium prausnitzii; however, their abundance varied between populations. In contrast, Clostridium sardiniense was the most predominant bacterial species in the Mfa mangrove population (14.0 ± 15.5%) with a significantly higher abundance (P < 0.00001) than in the Mfa island population (0.03 ± 0.1%).
Comparison of the bacterial species between the different macaque subspecies (Mff and Mfa) in the same habitat types (island or mangrove) was examined by Mann-Whitney U tests (P < 0.05). The results revealed that the Firmicutes and Bacteroidetes were the two most abundant phyla in the Mfa and Mff island populations; however, the proportion of Firmicutes was not significantly different between them. In contrast, Bacteroidetes were significantly higher in the Mff (28.9 ± 6.8%) than in the Mfa (20.6 ± 13.2%) island populations. Similarly, the mangrove population of Mfa showed a significantly higher abundance of Firmicutes (74.7 ± 27.2%) and a lower abundance of Bacteroidetes (5.4 ± 10.7%) than the Mff mangrove population.
The taxonomic bacterial composition at the genus level showed that the Mfa island population (17.8 ± 12.2%) had a significantly higher abundance of Oscillibacter than the Mff island population (12.2 ± 3.3%), while Clostridium sensu stricto was significantly higher in the Mfa mangrove population (23.1 ± 22.9%) than in the Mff mangrove population (5.8 ± 11.1%). Further comparison at the bacterial species level revealed that Oscillibacter valericigenes was significantly more abundant in the Mfa island population (17.8 ± 7.6%) than in the Mff island population (12.2 ± 3.3%), while the Mfa mangrove population (14.0 ± 15.5%) had a significantly higher abundance of Clostridium sardiniense than the Mff mangrove population (0.4 ± 1.6%).
Differential abundance of gut bacteria between Mff and Mfa in different habitat types.
The taxonomic abundance of the gut bacterial microbiota of Mff and Mfa living in the mangrove forest and on the island was compared further using LEfSe analysis (LDA score > 2, P < 0.05) 39 , as shown in Fig. 5. Differences in the gut bacterial microbiota between the Mff and Mfa populations in the different habitat types of mangrove forest and island were identified. Porphyromonadaceae, Phascolarctobacterium succinatutens, Acidaminococcaceae, and Prevotella fusca were the most enriched taxa in the Mff-BTB mangrove population, while the Mff-KPE island population had a greater number of significantly enriched taxa, including Tannerella forsythia, Bdellovibrionaceae, Rikenella microfusus, Barnesiella viscericola, Ethanoligenens harbinense, Olivibacter sitiensis, and Fibrobacter intestinalis. In contrast, the Mfa-MFRC mangrove population was enriched in Lachnospiraceae incertae sedis, Clostridium saccharolyticum, and Eubacterium hallii. Moreover, Haloferula helveola and Bacteroides fluxus were abundant in the Mfa-PNY island population. These results indicate the significant differences in the compositional abundance of gut microbiota between Mff and Mfa in the mangrove and island populations.
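The screening step behind a LEfSe-type analysis can be sketched very roughly as a per-taxon Kruskal-Wallis test across groups. The example below is only that screening step with made-up abundances; the study itself ran LEfSe on the Galaxy server, which additionally applies a subclass Wilcoxon test and ranks the retained taxa by a linear discriminant analysis effect size, keeping those with an LDA score > 2, none of which is reproduced here.

# Simplified per-taxon screening in the spirit of LEfSe (illustrative only).
import numpy as np
from scipy.stats import kruskal

taxa = ["Porphyromonadaceae", "Lachnospiraceae", "Bdellovibrionaceae"]
# toy relative abundances: one row per taxon, one array per group of samples
group1 = np.array([[5.1, 4.8, 6.0], [1.0, 1.2, 0.9], [0.1, 0.0, 0.2]])
group2 = np.array([[1.2, 0.9, 1.5], [6.3, 5.8, 7.1], [0.1, 0.1, 0.0]])

for name, a, b in zip(taxa, group1, group2):
    stat, p = kruskal(a, b)
    flag = "candidate" if p < 0.05 else "not significant"
    print(f"{name}: P = {p:.3f} ({flag})")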
Discussion
Comparative analysis of the gut bacterial microbiota between Mff and Mfa living in different habitat types (mangrove forest and island) revealed the potential influence of different environments and diets on their gut bacterial composition. Overall, the gut bacteria's alpha diversity in Mff was significantly higher than in the Mfa populations. This difference likely resulted from the increased enrichment of bacterial species in Mff populations (BTB and KPE), such as Oscillibacter valericigenes, Prevotella copri, Faecalibacterium prausnitzii, and Intestinimonas butyriciproducens, which was primarily influenced by the consumption of anthropogenic foods. A previous study in rhesus macaques (M. mulatta) also indicated that the population which consumed anthropogenic foods exhibited a higher microbial richness compared to the wild population that freely foraged for natural foods 40 . Generally, the gut microbial diversity tends to be higher in wild animals compared to captive animals, which is mainly attributed to the complexity of their diet in their natural habitats 41,42 . One possible explanation for the higher bacterial richness in the Mff KPE and BTB populations is that, apart from the natural foods in their natural habitats, these Mff could access anthropogenic foods regularly. These findings suggest that the gut microbiome's bacterial composition in Mff is primarily influenced by the types of food they consume rather than the habitat types they inhabit. Nevertheless, the effect of host (macaque) genetics, which differ between Mff and Mfa, cannot be ruled out.
The gut microbiota of Mff and Mfa in this study were mainly composed of two phyla, the Firmicutes and Bacteroidetes, which were most likely similar to that of humans and other NHPs, including other wild and captive Thai Mff [34][35][36][37][43][44][45][46] . Note that the composition of Firmicutes in the Mfa-MFRC mangrove population was highest among the four examined populations of long-tailed macaques. Thus, the relative abundance of Firmicutes exhibited variations among different subspecies and habitat types. Specifically, the Mfa-MFRC mangrove population showed a higher relative abundance compared to the Mfa-PNY island population, and the Mfa populations displayed a higher relative abundance compared to the other two Mff populations. Firmicutes species contain numerous genes encoding enzymes related to energy metabolism, and these bacteria can produce a wide variety of digestive enzymes to decompose various substances, assisting the host in the digestion and absorption of nutrients 47 . According to previous studies, a higher ratio of Firmicutes to Bacteroidetes is associated with a higher absorption of dietary energy 48,49 . Bacteroidetes species help the host in metabolizing the proteins and carbohydrates in the diet 50,51 . Taken together, it can be suggested that the abundance of Firmicutes and the ratio of Firmicutes to Bacteroidetes are related to the genetic characteristics (leading to a different subspecies of Mff and Mfa), habitat type (mangrove forests or island), and anthropogenic foods (only in Mff populations). The higher ratio of Firmicutes to Bacteroidetes may partially be related to the consumption of the high-energy mollusk foods that were observed to be heavily consumed in the Mfa populations in this study. These Mfa populations were observed to primarily rely on the natural food sources available in their respective habitats subject to their specific foraging techniques to acquire these foods. In addition, the abundance of bacteria belonging to the phylum Proteobacteria in the Mfa-MFRC mangrove population was significantly higher than in the Mfa-PNY island population, which could reflect the effects of the habitat type and food items.
During fecal specimen collections, we discovered that the Mff-KPE populations, especially adults, sporadically used percussive stone tools for opening oysters. Thus, the higher bacterial species richness observed in the Mff-KPE population can be attributed to their consumption of anthropogenic foods and their stone-tool use behavior, which allows them to access more food items requiring foraging techniques. This indicates that while diet plays a significant role in bacterial diversity, stone-tool use behavior also contributes to the bacterial diversity. However, due to the short-time stay and lack of individual animal identification and stone-tool use in this study, we were unable to collect data on the proportion of food types consumed by the monkeys on a daily basis. Besides, the data on stone-tool use by each population was obtained from previous studies 24,27,38 , and were also confirmed during the field observations. This limitation hinders our ability to analyze the microbiome composition at an individual level based on the proportion of food consumption. To address this limitation in future research, it would be beneficial to identify each animal individually, collect data on the proportion of their daily food consumption with or without stone-tool use, and then analyze the microbiome composition at the individual level. Thus, collecting data on the proportion of food items acquired through stone-tool use and without the use of stone tools for each individual animal would allow for a more detailed analysis of the relationship between stone-tool use, dietary habits, and the gut microbial profiles. Such an individual-level analysis would provide valuable insights into how specific dietary behaviors shape the gut microbiota within each population of M. fascicularis, contributing to a deeper understanding of the factors driving gut microbiome variation in these macaque populations.

At the genus level, our results indicated that the microbiome of long-tailed macaque populations was enriched with Prevotella, which is one of the most predominant genera in the human microbiome. In line with these findings, a previous study also reported that the macaque microbiome exhibited a higher abundance of Prevotella than the human microbiome 52 . The predominance of Prevotella was associated with a diet high in carbohydrate and fiber from plant sources 53 . Similarly, western lowland gorillas (Gorilla gorilla gorilla) that consumed a high number of fruits had a high relative abundance of Prevotellaceae 54 . These findings suggest that populations with a higher abundance of Prevotella possess the capacity to effectively break down and utilize the natural plant-based diet.
At the bacterial species level, Oscillibacter valericigenes was the most dominant species in the Mfa and Mff populations with the exception of the Mfa-MFRC mangrove population. Oscillibacter valericigenes is a representative bacterium in the Oscillibacter group that can produce valerate 55 , a short-chain fatty acid that can replace butyrate as an energy source for colonocytes. This bacterium's abundance showed its potential relevance to the macaque's health. These results are also consistent with a previous study reporting a significant abundance of O. valericigenes in healthy humans 56 . Similarly, Faecalibacterium prausnitzii was present in all four populations of long-tailed macaques examined in this study, which is supported by previous studies reporting that F. prausnitzii is the dominant butyrate producer of Clostridium cluster IV, among the most common bacteria in the microbiome of humans, and that it exhibits anti-inflammatory effects 57 and enhances the gut barrier functions 58 . The depletion of F. prausnitzii is associated with Crohn's disease 59 . Note that the microbiome of the Mfa-MFRC mangrove population was enriched with Clostridium sardiniense and less diversified. These findings are significant for the health of long-tailed macaques because a reduced diversity in the gut microbiota results in fewer microbial metabolic pathways interacting with food items and providing fewer nutritional benefits to the hosts. Similarly, in other mammalian species, a low gut microbial diversity has also been associated with heightened vulnerability to opportunistic pathogens 60 . Clostridium sardiniense is a glycolytic cluster I species that uses anaerobic carbohydrate fermentation to produce butyrate 61 . This species can also promote a more severe infection of Clostridioides difficile in mice by modulating the virulence, growth, and colonization of the pathogen 62 . Also, the reduced Chao1 and Shannon alpha diversity of the microbiome in the Mfa-MFRC mangrove population could potentially be attributed to the higher abundance of C. sardiniense. It is essential to highlight that the Mfa-MFRC mangrove population in this study are wild animals and are not habituated to human presence. Due to COVID-19 restrictions, human activities were limited during the field observations of the Mfa-MFRC mangrove population. As a result, these animals predominantly relied on natural food sources, leading to a less diverse range of microbial species compared to the Mff populations, which had access to both natural and anthropogenic foods.
The LEfSe-based differential species abundance analysis of the Mff-BTB mangrove population revealed that Porphyromonadaceae and Phascolarctobacterium succinatutens were the most enriched taxa. The Porphyromonadaceae have a potential role as adiposity modulators by producing two short-chain fatty acids: acetate and propionate 63,64 . Phascolarctobacterium succinatutens is known for its utilization of succinate and has been previously identified in the gut of healthy humans 64 . These results suggest that the Mff-BTB mangrove population have a specific diet that promotes the growth and proliferation of Porphyromonadaceae and P. succinatutens. These bacteria are known to thrive on certain dietary components, such as complex carbohydrates and fibers, which are abundant in the macaques' food sources in the mangrove habitat.
Tannerella forsythia, a well-known oral human pathogen 65 , was found to be more abundant in the Mff-KPE island population, as indicated by the LEfSe analysis. Periodontitis in humans is strongly associated with the presence of T. forsythia, and this species has a significant role in the pathogenicity of the microbiota in subgingival plaques 66 . In the short-time observations during fecal specimen collection and our previous observations before the COVID-19 episode, the Mff-KPE island population was seen to be heavily provided with fresh and leftover foods by humans compared to the other three macaque populations. Thus, it is possible that their diet, which includes anthropogenic food, might have contributed to their higher abundance of T. forsythia in the gut microbiota.
Following the LEfSe analysis, the Mfa-PNY island population, which did not receive anthropogenic foods, showed an enrichment of Haloferula helveola and Bacteroides fluxus. Haloferula helveola is commonly associated with marine environments 67 , and is not known to inhabit the human gut in any marked abundance according to the data from the U.S. NIH Human Microbiome Project 68 and the search engine of EZBioCloud 69 . This is in accord with a previous report that indicated that marine invertebrates were the main food source of the Mfa-PNY island population 38 . Bacteroides fluxus has been isolated from the feces of healthy human individuals 70 . Nevertheless, one case of its presence in an abdominal infection has been reported 71 . Overall, the higher abundance of these bacterial species in the Mfa-PNY island population can be attributed to their specific diet, which includes marine-based foods, and their adaptation to a distinct island habitat, which likely influenced the composition of their gut microbiome.
According to the LEfSe analysis, the Mfa-MFRC mangrove population was enriched with bacterial species from the family Lachnospiraceae. These bacterial species are known to degrade complex polysaccharides, producing butyrate that can be utilized for energy 72 . This finding aligns with the dietary habits of herbivores, which are known to have a higher abundance of Lachnospiraceae compared to omnivores 73 . The results may reflect the plant-based dietary sources available to the Mfa-MFRC mangrove population in their habitat.

In conclusion, this is the first report to compare the gut microbiomes of different subspecies of M. fascicularis (Mff and Mfa) living in two different habitat types (mangrove forest and island). The results revealed a significant difference in the gut microbiome associated with the different genetic background of the animals (between the two subspecies of M. fascicularis) and their diverse dietary habits (comparing between mangrove forest and island habitats, as well as anthropogenic foods). The latter factor could be associated with the use of stone tools in foraging for foods. It was previously reported that the Mfa-PNY island population used percussive stone tools daily 24,32,38 , while the Mfa-MFRC mangrove population performed only food-pounding behaviors 30 . The food-pounding behavior is when the animals use the food (i.e., a shell) to pound the food or to pound the stone, while the stone-tool use behavior is using the stone to pound the food, as seen in the Mfa-PNY macaques 30 . Furthermore, the study offered intriguing insights into the potential influences of stone-tool use and anthropogenic foods on the macaque's health, as evidenced through their gut microbiome. The higher gut bacterial diversity observed in the Mff populations, especially in the Mff-KPE island population with access to anthropogenic foods and stone-tool use behavior, suggested that both diet and stone-tool use play significant roles in shaping the gut microbiome. In contrast, the reduced diversity in the Mfa-MFRC mangrove population that relies solely on natural food sources may reflect limitations in accessing a diverse range of microbial species. However, to comprehensively elucidate the influences of stone-tool use and diet acquisition on macaque health through the microbiome, further research is needed to investigate the individual-level relationship between stone-tool use, dietary habits, and gut microbial profiles. This is the next question for us to explore further.
Methods
Permit and ethical note. The permits for research and sample collection in the four populations of free-ranging long-tailed macaques sampled in this study in Thailand were approved by the Department of National Parks, Wildlife, and Plant Conservation of Thailand. The Institutional Animal Care and Use Committee (IACUC) of the National Primate Research Center of Thailand-Chulalongkorn University approved the study's experimental protocols (Protocol Review no. 2075007). The research adhered to the American Society of Primatologists (ASP) Principles for the Ethical Treatment of Non-Human Primates. All methods were performed in accordance with the relevant guidelines and regulations.
Study sites and consumed food items. Two subspecies of free-ranging Macaca fascicularis (Mff and Mfa) at two habitat types (island and mangrove forest), giving a total of four populations, in Thailand were selected for this study (Table 2). The subspecies were identified based on their geographical distribution and morphological characteristics 23,27,30 . The information regarding the food consumed by the monkeys was gathered through direct observation of foraging animals and their consumed foods, or by observing the remaining food item(s) after the animals had finished eating. Food items were identified and photographed using a Nikon COOLPIX W300 (Nikon, Japan).
Fecal specimen collection.
A total of 120 freshly defecated specimens (n = 30 for each population) were non-invasively collected using the fecal swab method in their natural habitats. In each location, the survey was conducted over at least five consecutive days, at 7:00-16:00 h (see Table 2). To avoid contamination with the soil microbiome, the fecal samples were collected from the inner part using cotton swabs (Citoswab, China). Samples were preserved in 2 mL of DNA/RNA shield (Zymo Research, USA) for viral inactivation and nucleic acid stabilization. To avoid double collection, the physical characteristics (i.e., color, texture, and shape) of each fecal specimen were recorded.

DNA extraction. DNA was extracted using the ZymoBIOMICS™ DNA Miniprep kit (Zymo Research, USA).
Briefly, 750 µL of fecal suspension were lysed in a ZR BashingBead™ lysis tube using a TissueLyser LT (Qiagen, Germany) at 50 Hz for 3 min. The cell lysate was then extracted following the manufacturer's instructions. The concentration of DNA was determined from the absorbance at 260/280 nm using a NanoPhotometer® C40 (Implen, Germany).
PCR amplification and sequencing on MinION™.
The full-length bacterial 16S small subunit ribosomal RNA (16S rRNA) gene, ca. 1,500 bp in size, was amplified by PCR with the specific primers 16S-V1F 5′-TTT CTG TTG GTG CTG ATA TTG CAG RGT TYG ATYMTGG CTC AG-3′ and 16S-V9R 5′-ACT TGC CTG TCG CTC TAT CTT CCG GYT ACC TTG TTA CGA CTT-3′ 74 . The 10 µL PCR reaction mixture consisted of 5 µL of 2 × UltraHiFi mix (Tiangen, China), 2 µL of PCR Enhancer (Tiangen, China), 0.25 µM each of the forward and reverse primers, 1.5 µL of ddH 2 O, and 1 µL of the nucleic acid template. The PCR was thermal cycled at 94 °C for 2 min, followed by 25 cycles of 98 °C for 10 s, 60 °C for 30 s, and 68 °C for 45 s, and then a final 68 °C for 5 min. The amplicons were barcoded by a five-cycle PCR using the barcode primers based on the PCR Barcoding Expansion 1-96 kit (EXP-PBC096; Oxford Nanopore Technologies, UK). The barcoded libraries were enriched using a QIAquick® PCR Purification kit (QIAGEN, Germany) following the manufacturer's instructions. The enriched libraries were quantified by the Quant-iT™ dsDNA HS Assay kit using a Qubit 4 fluorometer (Invitrogen, USA), and then equimolarly pooled for multiplexing. The pooled library was enriched using 0.5 × Agencourt AMPure XP beads (Beckman Coulter, USA). Afterwards, the library was subjected to end repair and adaptor ligation steps using the Ligation Sequencing Kit (SQK-LSK114). Finally, the library was loaded onto an R10.4.1 flow cell and sequenced on a MinION™ Mk1C sequencer (Oxford Nanopore Technologies, UK).
Data analysis.
The FASTQ files were generated from the FAST5 data based on a super-accuracy model with a minimum acceptable quality score (Q > 10) using the Guppy basecaller software v6.0.7 (Oxford Nanopore Technologies, UK) 75 , while MinIONQC was used for the evaluation of the quality of the reads 76 . Porechop v0.2.4 was used for adaptor trimming and demultiplexing of FASTQ sequences 77 . NanoCLUST was used for clustering, polishing, and taxonomically classifying the filtered reads, based on the size of the sequences for the V1-V9 region of 16S rRNA gene sequences from the Ribosomal Database Project (RDP) database 78,79 . The files were converted into QIIME (Quantitative Insights Into Microbial Ecology) format, and the QIIME2 toolkit v2021.2 was used for calculation of the alpha diversity using the Chao1 and Shannon indices, and the beta diversity by Bray-Curtis cluster analysis 80 . MicrobiomeAnalyst was used for the visualization of normalized data 81 . Finally, the Galaxy server was used for the differential abundance analysis of gut microbiota using linear discriminant analysis Effect Size (LEfSe) with P < 0.05 and a linear discriminant analysis (LDA) score > 2 39 .
Figure 2. Rarefaction analysis showing that an adequate sequencing depth was obtained for estimating the diversity of all the samples.
Figure 5. Differential abundance analysis by Linear discriminant analysis Effect Size (LEfSe) of the gut bacterial microbiome of Mff and Mfa living in a mangrove forest and on the island. The bar plots indicated the differentially abundant bacterial microbiota at different taxonomic ranks. The LDA score shows the effect size and ranking of each differentially abundant taxon (LDA score > 2, P < 0.05).
Table 2. Subspecies, code, location, geographical coordinate, habitat types, and date of specimen collection of the wild Mff and Mfa populations in this study.
|
2023-09-13T06:17:07.004Z
|
2023-09-11T00:00:00.000
|
{
"year": 2023,
"sha1": "fb4b3323ae6b4119a5c6e0bc946dea2f7280be05",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-023-42220-z.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "13491564aea8ef07974814ebbaabbd04e3df6448",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
264814600
|
pes2o/s2orc
|
v3-fos-license
|
Social Media Interactive Advertising and Purchase Intention of the UAE Customers: An Empirical Analysis
The advent of social media technologies, specifically the phenomenal growth of online and interactive advertising, has assisted various organizations in responding to and communicating with respective consumers at sustainable expenditures through various online channels, including social media, with immense potential and popularity levels. Limited studies have been conducted to examine the connection between consumer engagements by small and medium enterprises (SMEs) and their respective promotional performances, as consumer purchase intention would be notably affected by the ability of an enterprise to interact and share information. Therefore, the current study aims to evaluate the impacts of attitude, brand loyalty, brand image, and brand awareness on interactive advertising and engagement with consumers via social media in the United Arab Emirates (UAE) to promote SME digital lifestyle products. Accordingly, a quantitative survey was administered to evaluate 308 responses from customers of companies vending digital lifestyle products before conducting partial least square structural equation modeling (PLS-SEM) to analyze the collected data. The results demonstrated significant positive relationships between both brand loyalty and brand awareness and consumer purchase intention, respectively, whereas the relationships between attitude and brand image and consumer purchase intention were separately discovered to be insignificant.
INTRODUCTION
The advent of new technologies, particularly in the mobile technology domain, has brought about significant transformations in the business landscape, leading to an exponential growth of social activities and interactions on digital platforms (Ahmad & Khalid 2017). In response to this digital revolution, businesses have increasingly turned to social media interactive advertising as a powerful tool to effectively engage with their target audience and drive meaningful outcomes. Numerous studies have demonstrated the substantial influence of social media interactive advertising on consumer behavior and business success. For example, Liang, Choi and Joppe (2020) found that interactive features in social media advertising positively impact consumer engagement and purchase intention, while Hassan, Shahzad and Bashir (2021) highlighted a strong connection between social media interactive advertising and brand loyalty.
With the recognition of the importance of establishing a robust social media presence, businesses strategically select suitable platforms to optimize the efficacy of their interactive advertising campaigns (Voorveld et al. 2018). However, it is impractical and resource-intensive to maintain an active presence on every digital platform. Therefore, understanding the impact of social media interactive advertising becomes crucial for businesses seeking to unlock the consumer journey towards purchase intention and create a symbiotic learning experience with their target audience. These interactive advertising campaigns enable businesses to actively engage consumers, build brand awareness, and influence consumer attitudes and perceptions (Jara, Parra & Skarmeta 2014; Pentina & Koh 2012).
Moreover, social media interactive advertising offers businesses an opportunity to foster trust and enhance brand equity. Sundaram, Mitra and Webster (1998) emphasize the role of interactive features in advertising in developing consumer trust, which is vital for establishing long-term relationships and driving customer loyalty. The impact of social media interactive advertising goes beyond immediate customer engagement and brand awareness; it directly influences key business outcomes such as sales revenue, customer loyalty, and market share (Voorveld et al. 2018).
In summary, social media interactive advertising plays a pivotal role in the success of businesses in the digital era. It enables businesses to actively engage with their target audience, foster brand-consumer relationships, influence consumer behavior, and drive business outcomes. The interactive nature of these campaigns, coupled with strategic platform selection, contributes to enhanced brand equity, customer trust, and long-term business growth.
By addressing these aspects, the current study aims to evaluate the impacts of attitude, brand loyalty, brand image, and brand awareness on interactive advertising and engagement with consumers via social media in the United Arab Emirates (UAE), specifically focusing on promoting small and medium enterprises (SMEs) digital lifestyle products. Through a quantitative survey and partial least square structural equation modeling (PLS-SEM) analysis, this research seeks to provide insights into the relationships between these variables and consumer purchase intention in the UAE context.

Al-Tenaiji and Cader (2010) discovered that the user presence on social media was significantly increasing in the UAE and highlighted the consequential impacts of social networking sites in the country, due to its global third rank in terms of social networking site membership, which was ahead of Canada and the USA. For example, the UAE hospitality sector exhibited a prominent presence on the online platform, where hotel consumer complaints, such as those at Dubai Festival City, were resolved swiftly via multiple engagement touchpoints on social media. Additionally, competitions were active and frequent, as demonstrated by a wide variety of posts regarding attractive travel expenses, such as special packages for presidential suites or extra nights in the spa package on Twitter. The survey conducted by Al-Tenaiji and Cader (2010) also discovered that UAE organizations could reach 48% of online audiences through social networks, while the remaining 52% extended their communications to the organizations via multiple social networking sites, after becoming aware of a specific enterprise through advertising (23%), communication (15%), and brand awareness (19%). Consequently, the active communication role provided by social media has motivated different shopping malls to subsequently create their online presence, as indicated by the Dubai Mall with 31,000 followers on Instagram, 477,000 on Facebook, and 66,000 on Twitter (Wally & Koshy 2014).
According to Al-Hubaishi, Ahmad and Hussain (2017), the effective employment of online government services is present in the UAE. For instance, Hassan (2013) demonstrated a highly aggressive utilization of online services through smartphones, with 38.1% of the government services being accessible and available online. Similarly, numerous affordable mobile data plans provided by the National Telecom Service Providers, such as Emirates Integrated, Etisalat, and other telecommunication companies, have catalysed the upward trend of an online presence. As such, the UAE government enthusiastically ensures that the Emirates of the country (Dubai, Umm Al-Quwain, Abu Dhabi, Ajman, Ras Al-Khaimah, and Sharjah) and various private organizations collaborate towards the development of functional and convenient online systems to promote a multitude of SME products and services, which is a considerably high priority of the current government. Therefore, the current study sought to investigate multiple existing interactive advertising methods performed by global conglomerates and SMEs in the UAE to influence consumer behavior via social media, which was perceived as a superior advertising means due to various benefits, such as high constant awareness, contact-based efficiencies, rapid brand conversion, stable consumer retention, and high levels of pertinent locatability (Ha 2012). Examples of interactive advertising methods include online advertising (e-mail, classified advertisements, banners) and wireless interactive television advertising, which allow higher extents of marketing gamification or personalization by a particular brand (Bradley & Domingo 2020).
In the past decades, a rapid influx and application of mobile-based technologies, internet systems, and websites has been demonstrated. The widespread uptake of these technologies was conducive to mass technology adoption and generated emerging opportunities to develop an eclectic range of services and offerings. Accordingly, a deeper comprehension of the role played by social media in providing advertising platforms is required to determine how it differs from other conventional advertising forms in terms of the scope of activities, usage, and coverage. Specifically, the SMIA in the current context is related to various forms of social media engagement between consumers and companies. Hence, the current study employed three measures to determine social media engagement from three different aspects, namely functional, emotional, and communal.
CONSUMER PURCHASE INTENTION
Various industries across the globe expend tremendous amounts of financial resources and endeavours when promoting respective products via social media and digital platforms. Correspondingly, improving the effectiveness of advertising campaigns in facilitating preferable consumer behaviours is a constant and frequent challenge that bears on the feasibility of implementing the campaigns. Particularly, social media advertisements are continuously created and organised by incorporating various key elements that could entice potential customers' attention (Dwivedi et al. 2017; Shareef et al. 2017). Moreover, companies repeatedly gain advantages from social media activities to distinguish themselves from other competitors in elevating consumer purchase intentions in the future (Wang & Kim 2017). As a result of the high engagement degree with consumers on social media, interactive advertising actions exert a strong influence in shaping a superior corporate image and healthier future purchase behaviour while boosting positive customer experiences.
Consumer behaviour can be evaluated by measuring purchase intention as an important index, which serves as a representation of the possibility or degree to which consumers demonstrate high willingness levels for a commodity purchase as observed by the marketers (Wiwutwanichkul 2007). Accordingly, purchase intention scores were regularly adjusted by certain scholars to closely support available data collected from consumers within a limited time before forecasting actual purchase behaviour (Bemmaor 1995). Although certain circumstances might not be pertinent in assessing actual purchase behaviour, intention-based data could be potentially applicable in a majority of scenarios, as shown by Creyer and Ross (1997), and purchase intention was frequently deduced from actual purchase behaviour patterns by numerous researchers.
Consumer purchase intention can be measured in terms of the probability of a consumer purchasing a product or service, wherein the higher the demonstrated purchase probability or willingness, the higher the purchase intention. Besides, consumer purchase intention could act as a practical indicator by providing adequate knowledge for the marketers regarding the experiences, preferences, and current external environments of consumers before gathering relevant information, examining alternatives, and ultimately implementing purchase intention. By referring to the suggestion and improvement of Fishbein and Ajzen (1975), the attitude (subjective norms) and external factors (normative norms) of consumers constructed the measure of consumer purchase intention in the current study. Thus, this study aimed to discover the effects of both subjective norms and normative norms on consumer purchase intention towards the SME products amongst the UAE consumers.
ATTITUDE TOWARDS SOCIAL MEDIA ADVERTISING
Previous studies, such as Knoll (2016) and Kumar et al. (2016), propounded that social media advertising (SMA) consists of advertisements posted by firms with interactive and instant features, therefore allowing consumers to perform interactive actions on social media. Apart from the interactive and instantaneous nature of the SMA, several SMA aspects, including the built-in polls, quizzes, and pools, which empower consumers to comment, explore, share, like, and follow the social media posts instantly, are subsequently enabling the advertisements to be further interactive. Particularly, the SMA provides managers and consumers with special metrics of advertisement popularities to swiftly appraise the numbers of shares, comments, and received likes. The SMA is considered innovative due to its emergence as the main component of content viewing in a provided social media platform, which contrasts with the traditional Web 1.0 media advertisements. According to Berthon et al. (2012), social media has shifted power from enterprises to consumers and transformed the consumers from passive participants into active influencers during the advertising process. Consequently, addressing the SMA effect on consumer behaviour should be performed to enhance the rapid digitalisation process of the world and the industry.
The SMA effect could be explored via different approaches and on different platforms, for example, the analysis of short text messages on Twitter, the examination of long messages on Facebook, or the investigation of videos on YouTube. In this regard, organisations are advised to create official social media pages or YouTube channels to determine the most appropriate interactive advertising tools (Hudson et al. 2016), as different features and unique interfaces with respective SMA specifications exist in each social media platform. For instance, the SMA appears primarily in the form of in-stream video advertisements on YouTube (which are skippable before and during the beginning of a video) and in-search advertisements (which are displayed amongst the search results) (Hatzithomas, Fotiadis & Coudounaris 2016). Furthermore, in-stream video advertisements would incidentally emerge when a consumer watches a video, while in-search advertisements would surface instead when consumers deliberately search for related content on YouTube, which nevertheless is a video platform. Similarly, when consumers subscribe to the YouTube channel of a brand, advertisements would be subsequently exposed during their video watching by offering identical content and experiences, despite the dissimilarity in information sources (Johnston et al. 2018). Accordingly, the current study researchers postulated that consumers globally would exhibit varying attitudes on social media sites due to heterogeneous cultures, thus emphasising the importance of behavioural responses to the SMA, which was continually disregarded in past research (Wang, Min & Han 2016).
BRAND IMAGE, BRAND LOYALTY, AND BRAND AWARENESS

According to Hyun, Kim and Lee (2011), advertising is indispensable to generating positive emotional responses, including excitement and happiness, and favourable evaluations, such as perceived value, service quality, and consumer satisfaction. Similarly, Sundar and Kalyanaraman (2004) also posited that effective interactive advertising could influence evaluative responses while elevating positive emotions to enhance reliable information processing processes from a specific experience. Accordingly, De Pelsmacker, Geuens and Anckaert (2002) propounded that effective advertising could be described in three evaluative measures, namely clarity, likability, and informativeness. Specifically, advertising would be considered efficient when a commercial was transformed into highly informative and entertaining information embedded in consumer minds. As such, brand equity is subjective to the implementation of effective advertising means before elevating brand awareness and increasing the brand potential amongst target consumers (Sasmita & Mohd Suki 2015). Meanwhile, Jothi, Neelamalar and Prasad (2011) suggested that the SMA is one of the branding strategies in engendering multitudinous business benefits, propagating promotional ideas, encouraging brand and service adoption amongst a target consumer group, facilitating sustainable market competition, updating relevant details to the target audience, boosting the presence of the brand or service, motivating consumer interactions with the brand, and ultimately harnessing social benefits.
TRUST
There is a growing interest in understanding the role of trust and its impact on consumer behavior and purchase intentions (Nuttavuthisit & Thøgersen 2017). Trust plays a crucial role in marketing strategies as it influences consumers' buying plans and willingness to purchase (Hemmerling et al. 2015). De Morais Watanabe, Alfinito, Curvelo and Hamza (2020) define trust as a belief, expectation, or feeling of loyalty that arises from the intent, completeness, or ability of an exchange partner. Consequently, trust can significantly influence consumers' purchasing behavior. In the context of this study, the aim was to investigate the effects of consumers' trust and buying behavior in interactive advertising on social media, with a focus on deepening our understanding of this market in the United Arab Emirates.
Trust is a complex concept, and multiple definitions of trust are found in the literature across disciplines. Rousseau et al. (1998) conducted an extensive literature study on trust and identified willingness and confident expectations as essential elements in all definitions, regardless of the underlying discipline. According to the Oxford English Dictionary, trust is defined as having confidence in the quality of people or things, accepting or approving something without investigation or evidence, and having expectations about something based on credible value, honesty, and loyalty.
One widely cited definition in the literature is by Mayer, Davis and Schoorman (1995), which emphasizes the expectation that another person will act in ways that are beneficial for the trustor, regardless of the trustor's capacity to control or monitor them. This definition also highlights the trustor's willingness to make themselves vulnerable to the trustee's actions. In other words, trust involves a willingness to take risks and the recognition that something important may be at stake in a trusted relationship.
A practical example that demonstrates trust in consumer behaviour is the act of buying petrol at a specific petrol station. This decision reflects trust, as consumers make themselves vulnerable to the potential outcome of the transaction. Drivers rely on the petrol brand based on their belief that the brand will fulfil their expectations (Mayer, Davis & Schoorman 1995).
METHODS
The current study employed a quantitative research method, using a survey questionnaire as the primary instrument to collect relevant data (Akanmu, Hassan & Bahaudin 2020). The survey aimed to assess the relationship between various independent variables, including attitude, brand image, brand loyalty, brand awareness, and social media interactive advertising (SMIA) of SME products in the UAE, and the dependent variable of consumer purchase intention.
To ensure data reliability and minimise bias, the questionnaire design was refined through a pilot study. Prior to the main data collection, the pilot study tested the questionnaire's clarity, comprehensibility, and internal consistency with a small group of participants who were not part of the final study sample. The feedback and responses from the pilot study were then used to refine the questionnaire and support its validity and reliability.
In the survey questionnaire, a 5-point Likert scale was used to measure respondents' agreement with each item, ranging from 1 for 'strongly disagree' to 5 for 'strongly agree'. This odd-numbered scale was selected on the recommendations of Krosnik (1999) and Pearse (2011) to balance contradictory objectives and prevent ambiguous interpretations. Additionally, respondents were allowed to select multiple answers (up to two or three) for some items to regulate response directions, although the specific items allowing multiple answers were not identified.
For the selection of customers in the study, a simple random sampling technique was employed. Simple random sampling is a probability sampling method in which each member of the target population has an equal chance of being selected. By using simple random sampling, the researchers aimed to obtain a representative sample of customers in the UAE and increase the generalisability of the findings to the wider population of customers.
Regarding the selection of small and medium enterprises (SMEs), convenience sampling was utilised. Convenience sampling is a non-probability sampling technique that involves selecting participants based on their accessibility and availability. In this case, the researchers selected SMEs that were conveniently accessible or willing to participate in the study. While convenience sampling may not provide a fully representative sample of all SMEs in the UAE, it can still provide valuable insights and practical implications, given the researchers' specific objectives and constraints.
Furthermore, partial least squares (PLS) analysis was utilised in the current study. PLS is a statistical method used to analyse complex relationships between variables in structural equation modelling, and it allows the measurement and structural models to be examined simultaneously. In this study, PLS was employed to examine the relationships between the independent variables (attitude, brand image, brand loyalty, brand awareness, trust, and SMIA) and the dependent variable (consumer purchase intention), and to assess the strength and significance of these relationships. Specific details of the PLS application, such as the model specifications or algorithm settings, were not reported.
In summary, the methods involved a quantitative approach using a survey questionnaire. The questionnaire was refined through a pilot study to ensure clarity and reliability. Simple random sampling was used to select customers, while convenience sampling was employed for SMEs. The relationships between variables were analysed using PLS.
RESULT ANALYSIS
The hypotheses shown in Figure 4, developed from the variables discussed above, were appraised after validating the goodness of the model by running the partial least squares (PLS) algorithm. The path coefficients of each variable are portrayed in Figure 3. Selected measurement items and their codes were as follows:

Brand image: the product has a good image (BI2); the product has a distinguished image compared to other products (BI3).

Brand awareness: I can quickly recall a logo or symbol of a particular brand or product on social media (BA1); some features of a particular product or brand that appears on social media come quickly to my mind (BA2); I am aware of this product or brand shown on social media (BA3); I recognise this brand or product compared with other competing products or brands on social media (BA4); I am aware of what this product or brand looks like (BA5).

Trust: this social media website keeps its obligations and promises (TR2); the social media website has sufficient and plentiful information (TR3); the website's infrastructure is dependable (TR4); secure personal privacy is offered on the website (TR5).

The bootstrapping technique, a component of the Smart-PLS software, was conducted to test the significance of each path coefficient, evaluate the T-statistics through consistent bootstrapping, and generate the p-values, as delineated in Table 5. Accordingly, a significant relationship between attitude and purchase intention was not discovered (β = 0.042; t = 0.56; p = 0.578), thus rejecting the first hypothesis. In contrast, the findings demonstrated a significant positive association between brand loyalty and purchase intention (β = 0.89; t = 4.21; p < 0.001), therefore accepting the second hypothesis. Meanwhile, brand awareness was also shown to possess a significant positive relationship with purchase intention (β = 0.37; t = 2.12; p = 0.034) at the 95% confidence level, thus accepting the third hypothesis. The results (β = -0.60; t = 1.61; p = 0.11) manifested an insignificant relationship between brand image and purchase intention, thus not supporting the fourth hypothesis. Finally, the results (β = 0.195; t = 3.095; p = 0.002) manifested a significant positive relationship between trust and purchase intention, thus supporting the fifth hypothesis.
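For readers who wish to see how bootstrapped t-statistics and p-values of this kind are obtained, the sketch below illustrates the general procedure in Python. It is not the Smart-PLS routine used in the study: the data are simulated, the number of resamples and the variable layout are assumptions, and a simple standardized regression stands in for the full PLS-SEM path model.

```python
import numpy as np
from scipy import stats

def bootstrap_path_coefficients(X, y, n_boot=2000, seed=42):
    """Bootstrap standardized regression (path-style) coefficients.

    X : (n, k) matrix of predictor scores (e.g., mean Likert scores for
        attitude, brand loyalty, brand awareness, brand image, trust).
    y : (n,) vector of purchase-intention scores.
    Returns point estimates, bootstrap t-statistics and two-sided p-values.
    """
    rng = np.random.default_rng(seed)
    n = len(y)

    def standardized_betas(Xs, ys):
        Xz = (Xs - Xs.mean(0)) / Xs.std(0)              # z-score predictors
        yz = (ys - ys.mean()) / ys.std()                # z-score outcome
        Xd = np.column_stack([np.ones(len(yz)), Xz])    # add intercept column
        coef, *_ = np.linalg.lstsq(Xd, yz, rcond=None)
        return coef[1:]                                 # drop the intercept

    beta_hat = standardized_betas(X, y)
    boot = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)                     # resample rows with replacement
        boot[b] = standardized_betas(X[idx], y[idx])

    se = boot.std(axis=0, ddof=1)                       # bootstrap standard errors
    t = beta_hat / se
    p = 2 * (1 - stats.norm.cdf(np.abs(t)))             # normal approximation
    return beta_hat, t, p

# Purely illustrative data: 300 respondents, 5 hypothesized antecedents.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = 0.4 * X[:, 1] + 0.2 * X[:, 4] + rng.normal(size=300)
betas, t_vals, p_vals = bootstrap_path_coefficients(X, y)
print(np.round(betas, 3), np.round(t_vals, 2), np.round(p_vals, 3))
```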
In summary, the findings indicate that the respondents' attitude and the brand image did not exert a significant influence on their purchase intention towards digital lifestyle products, even though such purchases are often regarded as reflections of consumers' personalities. Moreover, the small standard error values suggest that consumer perceptions were consistent with one another, indicating a high consensus amongst the customers of digital lifestyle product firms in the UAE.
DISCUSSION
THE RELATIONSHIP BETWEEN ATTITUDE AND PURCHASE INTENTION
As illustrated in Table 5, a positive relationship between attitude and purchase intention (H1: β = 0.042; t = 0.56; p = 0.58) was discovered, although the association was insignificant, which contrasted with past findings (Dwivedi, Kapoor & Chen 2015; Sano 2014) and thus did not support the first hypothesis (H1). Nevertheless, the positive relationship suggested that technology adoption by consumers could be due to increased work performance and efficiency (Davis 1989). Similarly, previous studies (Abdelghaffar & Magdy 2012; Park & Kim 2013; Wang 2014) also postulated that the avidity of an individual to purchase a product would be highly dependent on personal product attitude.
A deeper understanding of the technology acceptance model could also assist in improving the existing ability of an organisation to create a higher level of purchase intention towards digital lifestyle products. Notably, when consumer acceptance of an innovative system increases, positive consumer attitudes towards purchase intention also increase. Correspondingly, stronger advocacy for the adoption of digital advertisements by enterprises would influence consumer psychology, shifting it positively towards desired behaviours, especially higher purchase intention towards the products or services offered. Furthermore, the attitude of users towards a particular product is significantly affected by the perception of product user-friendliness. For instance, King and He (2006) showed that the attitude of a user to a brand was significantly linked with the perceived direct benefits received from technology adoption. The result also supported Hofstede (2001), who investigated the high ranking of the UAE in social media usage and discovered that uncertainty avoidance was the adoption rationale of the citizens, although they frequently struggled with low familiarity with the available digital platforms and would only be highly motivated by the presence of a secure system.

H1: Attitude has a positive impact on purchase intention.
H1M: Attitude towards SMIA engagement has a positive impact on purchase intention.
THE RELATIONSHIP BETWEEN BRAND LOYALTY AND PURCHASE INTENTION
Brand loyalty was previously shown to be a crucial element in significantly influencing consumer purchase intention (Bilgin 2018). Similarly, the relationship between brand loyalty and purchase intention was demonstrated to be significantly positive in the current study (H2: β = 0.89; t = 4.21; p < 0.001), thus concurring with past studies (Dolnicar et al. 2011; Godey et al. 2016). Accordingly, consumer decisions are repeatedly dependent on specific values and desires before proceeding with the purchase of a product from a particular brand, which constitutes brand loyalty, even when proximate substitutes are available (Back 2005; Chaudhuri & Holbrook 2001).
Brand loyalty is frequently reported to permit increased premium prices while maintaining comparative product advantages, sustainable long-term profits, and competitive production and marketing expenditures (Dolnicar et al. 2011; Tepeci 1999). The airline industry is a typical example, where reputation, service quality, frequent-flyer membership, and member pricing are crucial factors in retaining consumer loyalty (Dolnicar et al. 2011; Robinson & Kearney 1994; Wen & Chi 2013). Furthermore, the brand innovativeness perceived by consumers can promote brand loyalty corresponding to behavioural and functional specifications (Kunz, Schmitt & Meyer 2011). Gradually, brand loyalty evolves from various essential elements, such as optimal entertainment, user-friendly platforms, distinguished services, comfortable environments, and superior product quality. Similarly, prestige-seeking clients who are highly committed to a unique brand would also be influenced owing to the ability of the brand to fulfil specific premium requirements (Chang & Ko 2014).

H2: Brand loyalty has a positive impact on purchase intention.
H2M: Brand loyalty towards interactive engagement on social media has a positive impact on purchase intention.
RELATIONSHIP BETWEEN BRAND AWARENESS AND PURCHASE INTENTION
Table 5 delineates that brand awareness possessed a significant positive relationship with purchase intention (H3: β = 0.37; t = 2.12; p = 0.034), hence supporting the study hypothesis and concurring with past findings (Alhaddad 2015; Godey et al. 2016), thereby bridging the knowledge gap that existed regarding the relationships between different factors, such as brand loyalty and brand association, amongst UAE consumers.
Partial least squares structural equation modelling (PLS-SEM) was employed to analyse brand awareness as an independent variable and establish its impact on consumer purchase intention as the dependent variable. Accordingly, the result asserted the positive impact of brand awareness on elevating consumer purchase intention, with brand awareness simultaneously acting as brand equity perceived by consumers depending on their knowledge and understanding of the brand. In addition, social media have become a platform for increasing awareness of a brand or product, as consumers can continuously access manifold updates and information about the brand or product. Correspondingly, social media make it highly convenient to recognise a brand's product and to 'compare and contrast' the close substitutes of each product on a specific digital platform before purchasing. For instance, the mere sighting of an advertisement package could render swift brand recognition. Resultantly, the significant positive relationship between brand awareness and consumer purchase intention discovered in the current study supported past findings (Chung, Lee & Heath 2013; Huang & Sarigöllü 2011; Pouromid & Iranzadeh 2012).

H3: Brand awareness has a positive impact on purchase intention.
H3M: Brand awareness towards interactive engagement on social media has a positive impact on purchase intention.
RELATIONSHIP BETWEEN BRAND IMAGE AND PURCHASE INTENTION
In contrast to previous studies, which manifested a positive correlation between brand image and purchase intention (Aaker 1996; Rio, Vazquez & Iglesias 2001), the current study findings did not discover a significant association (H4: β = -0.60; t = 1.61; p = 0.11). As such, the results differed from past findings, which suggested that the younger generations, who were concurrently consumers, were highly predisposed to products or brands with a positive image (Faircloth, Capella & Alford 2001; Rubio, Oubina & Villasenor 2014; Vahie & Paswan 2006). These predilections were attributed to convictions regarding the uniqueness and establishment of a particular brand in influencing the expectations and attitudes of younger cohorts (Jamil & Wong 2010). Notwithstanding the insignificant result, a brand with a favourable image would still be expected to hold an advantageous position and relatively higher market share. Moreover, consumers' awareness of a green corporate image and its respective elements would hugely influence consumer purchase intention towards a particular product when the image is employed as the primary information source in advertising the product (Chung, Lee & Heath 2013; Huang & Sarigöllü 2014; Norazah 2013).

H4: Brand image has a positive impact on purchase intention.
H4M: Brand image towards interactive social media engagement has a positive impact on purchase intention.
THE RELATIONSHIP BETWEEN TRUST AND PURCHASE INTENTION
Social interactions have been identified as a major influence on consumer trust in purchasing decisions (Lu, Fan & Zhou 2016). This assertion resonates with the findings of this research, which establish a significant positive impact of trust on purchase intention (H5: β = 0.195; t = 3.095; p = 0.002), in agreement with previous studies such as Mardsen (2010). Adoption of a new technology by users strongly depends on their establishing trust in the technology through which the interactive advertising is delivered, as this addresses two critical conditions (Gambetta 2000): the risk of vulnerability and uncertainty. Previous studies have reported that assurance of consumer security is the most important consideration when a new system is adopted (Fang et al. 2006). Social media as an interactive engagement channel is an evolving development in the UAE, and it comes with its own peculiarities: lack of consumer confidence in the process, the sophistication expected of the user, user security, limited person-to-person interaction, and the risk of the provider accessing the user's personal information may all impair user trust in the system. This constitutes a major security risk (Radomir & Nistor 2013).
Common issues associated with personal identification information include data theft, data loss, and an increased potential for fraud (Suh & Han 2003). This prompted Halaweh (2011) to conduct empirical research establishing that a consumer's trust in a system strongly influences purchase intention, as confirmed by this study. He further reported that users adopt the internet as a viable tool for transacting for any product only once they consider the system safe (Hung, Chang & Yu 2013; van Velsen, Wentzel & van Gemert-Pijnen 2015).

H5: Trust has a positive impact on purchase intention.
H5M: Trust towards interactive social media engagement has a positive impact on purchase intention.
CONCLUSION
The current study findings contribute significant insights for practitioners, managers, and various stakeholders by delineating the effects of attitude, brand loyalty, brand awareness, brand image, and trust on consumer purchase intention. Specifically, brand loyalty, brand awareness, and trust were shown to be significant influencing factors among UAE consumers of digital lifestyle products, whereas attitude and brand image were not. Thus, policymakers within the industry should place greater focus on restructuring and strategising relevant policies and practices, aligning and positioning their approaches with the required advanced technologies before implementing respective goals and missions at every corporate level. The industry could also be further enhanced by emphasising the necessity of integrating technological platforms to facilitate consumer engagement and improve service quality, thereby elevating firm performance and competitive advantage.
The inconsistencies in past literature regarding the relationships between the independent variables of attitude, brand loyalty, brand awareness, and brand image and the dependent variable of consumer purchase intention motivated the current study to appraise these relationships through another dimension, namely interactive advertising, among digital lifestyle product companies. Despite the presence of both significant and insignificant associations, a deeper understanding of interactive advertising is imperative for enterprises to achieve higher degrees of engagement with consumers and to influence consumer purchase intention. Furthermore, the current study generates significant awareness among digital lifestyle product firms, enabling them to determine consumer purchase intention and the corresponding behavioural criteria, such as time, target, action, and context, which encompass customer focus, life safety, and innovation during the implementation of interactive advertising technologies. Hence, interactive advertising serves as a pragmatic approach to increasing consumer purchase intention for SME products, potentially leading to high sales performance akin to the results observed from traditional marketing communication. Ultimately, the digital lifestyle product industry is expected to excel by upholding the insights discovered in the current study and applying additional practices and strategies executed by similar industries.
TABLE 1. Attitude coding
TABLE 2. Brand equity dimensions: Brand loyalty, brand image and brand awareness coding
TABLE 6. The mediating effect of SMIA
FIGURE 4. Research framework with hypotheses
Evaluation of lipid profiles and hematological parameters in hypertensive patients: Laboratory-based cross-sectional study
Introduction: Hypertension and dyslipidemia are the two coexisting and synergizing major risk factors for cardiovascular diseases. The cellular constituents of blood affect the volume and viscosity of blood, thus playing a key role in regulating blood pressure. Overweight and obesity are key determinants of adverse metabolic changes including an increase in blood pressure. The aim of this study was to evaluate lipid profiles and hematological parameters in hypertensive patients at Debre Markos Referral Hospital, Northwest Ethiopia. Methods: Laboratory-based cross-sectional study was conducted in 100 eligible hypertensive patients at the hospital. The required amount of blood was withdrawn from the patients by healthcare professionals for immediate automated laboratory analyses. Data were collected on socio-demographic factors, anthropometric measurements, blood pressure, lipid profiles, and hematological parameters. Result: The mean serum levels of triglyceride, total cholesterol, and low-density lipoprotein were significantly higher than their respective cut-off values in the hypertensive patients. Besides, 54%, 52%, 35%, and 11% of the hypertensive patients had abnormal low-density lipoprotein, total cholesterol, triglyceride, and high-density lipoprotein levels, respectively. Higher levels of low-density lipoprotein, hemoglobin, and red blood cell count were observed in the hypertensive patients whose blood pressure had been poorly controlled than the controlled ones (p < 0.05). Waist circumference had a significant positive association with the serum levels of total cholesterol and white blood cell count (p < 0.05). Conclusion: Hypertensive patients had a high prevalence of lipid profile abnormalities and poorly controlled blood pressure which synergize in accelerating other cardiovascular diseases. Some hematological parameters such as red blood cell count are also increased as do the severity of hypertension.
Introduction
Cardiovascular diseases (CVDs) including hypertension are increasing globally. This increment has become a major concern in resource-limited countries such as Ethiopia. In 2000, about 1 billion people (26.4% of adults) were estimated to have hypertension worldwide, and there is a likelihood of increasing to more than 1.5 billion by 2025 as a result of a high number of aging population in many developed countries and an increasing incidence of hypertension in developing ones. 1 In Ethiopia, it has approximately been estimated that about 35.2% of the population are suffering from hypertension. 2 Several risk factors (modifiable and non-modifiable) play a role in the progression of hypertension. 3 Concerning the genetic and environmental factors affecting hypertension, a study showed that age, sex, hyperlipidemia, diabetes, high body mass index (BMI), alcohol drinking, sodium intake, and others were found to be associated with hypertension. An excessive daily intake of cholesterol and saturated fats, as well as subsequent lipid abnormalities leading to dyslipidemia (hypertriglyceridemia and hypercholesterolemia), is associated with obesity and, consequently, hypertension. 4 Hypertension and dyslipidemia, coexisting in 15%-31%, are the two major risk factors for CVD and account for more than 80% of deaths and disabilities in low-and middleincome countries. 5 These risk factors have an adverse effect on the vascular endothelium, which results in enhanced atherosclerosis resulting in CVD. 6 Abnormalities in serum lipid levels can be recognized as a major modifiable CVD risk factor and has been identified as a risk factor for essential hypertension giving rise to the term dyslipidemic hypertension. 7 Hypertension is not the mere determinant of damage of cardiovascular system, and the likelihood of hypertensive patients, with uncontrolled blood pressure, to develop target organ damage is markedly affected by coexisting risk factors. Among them, lipoproteins are heavily implicated in the atherosclerotic process and greatly influence the impact of hypertension on development of target organ injury and hence cardiovascular morbidity and mortality. 8 In addition, there are number of disputes in various studies with respect to variability of hematological parameters in patients with hypertension and normotensive subjects. The pathophysiology of hypertension is multifactorial which is affected by sympathetic overactivity contributing to changes in hematological parameters such as hematocrit, viscosity, and hypercoagulability of blood. These factors vary the kinetics of blood flow acting as contributory risk factor for coronary artery diseases, stroke, and thromboembolism. 9 Thus, the hematological parameters will give an insight to prognosis of the disease as well. Although different studies have been done on lipid profiles and hematological parameters in hypertensive patients in different parts of the world, [10][11][12] there are no ample data on the condition in Africa and Ethiopia in particular. Moreover, there are no reports on the evaluation of lipid profiles and hematological parameters in the study area as well.
Worldwide, there is broad variation in serum lipid profile among different population groups. Therefore, evaluation and monitoring of modifiable risk factors can be beneficial to reduce CVD-associated morbidity and mortality. Hence, the aim of this study was to evaluate lipid profiles and hematological parameters among hypertensive patients in Debre Markos Referral Hospital (DMRH).
Study area, design, and period
The study was conducted at DMRH, Debre Markos, located at 300 km northwest of the capital of Ethiopia, Addis Ababa. A laboratory-based cross-sectional study was conducted to evaluate the serum levels of lipid profiles and hematological parameters among hypertensive patients at the hospital from October 2016 to January 2017.
Population
This study included all eligible hypertensive patients attending the DMRH outpatient department during the study period. However, patients younger than 20 years or older than 70 years, those taking lipid-lowering medications, patients with thyroid disease, pre-eclampsia/eclampsia, or hematologic derangements, and hypertensive patients with co-morbid diabetes mellitus were excluded from the study in advance to control for confounding factors.
Sample size determination and sampling method
The sample size was determined based on the prevalence of hypertension in Ethiopia (19.6%), as reported by a systematic meta-analysis, 13 using the single-population proportion formula with a confidence level of 95%. After sample size adjustment, 100 patients were enrolled in this study for blood sample collection and related data gathering. While a purposive sampling technique was implemented to select the healthcare facility, a simple random sampling technique was used to recruit the study participants during the study period.
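As a rough illustration of the single-population proportion calculation referenced here, the short Python sketch below applies the standard formula n = Z²·p·(1 − p)/d². The 5% margin of error is an assumption on our part; the paper states only the 95% confidence level and the final adjusted sample of 100.

```python
import math

def single_proportion_sample_size(p, margin=0.05, confidence=0.95):
    """Single population proportion formula: n = Z^2 * p * (1 - p) / d^2."""
    # Two-sided critical value for the chosen confidence level (1.96 for 95%).
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence]
    return math.ceil(z**2 * p * (1 - p) / margin**2)

# Prevalence of hypertension (19.6%) reported for Ethiopia in the cited meta-analysis.
print(single_proportion_sample_size(0.196))   # -> 243 before any adjustment, with an assumed 5% margin
```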
Variables
Lipid profiles of hypertensive patients (serum total cholesterol (TC), triglycerides (TGs), high-density lipoprotein cholesterol (HDL-C), and low-density lipoprotein cholesterol (LDL-C)) as well as hematologic parameters (hemoglobin, hematocrit, red blood cell (RBC), white blood cell (WBC), platelets, and RBC indices) were considered as dependent variables. On the other hand, socio-demographic factors, anthropometric, and clinical characteristics were also taken as explanatory variables.
Blood sample and data collection procedures
After the study participants had been asked for their consent to be interviewed and to give a blood sample, about 5 mL blood was withdrawn from the study participants, who had fasted overnight. The blood sample was collected by qualified healthcare professionals in the hospital for immediate laboratory analyses. In addition, the questionnaire was filled with a face-to-face interview, and some anthropometric indicators were also assessed and measured side by side as well.
The collected blood was allowed to stand for 30 min at room temperature to allow complete clotting and clot retraction. Samples were then centrifuged at 3500 r/min for 15 min to extract serum. The serum was then used to determine the levels of TC, HDL-C, and TG. LDL-C was calculated using the Friedewald formula. 14 About 2 mL of the blood was collected in EDTA-coated tubes, and hematological profiles were determined for all samples using a hematological analyzer (ACT-8; Coulter Electronics). Safety precautions were taken while handling and disposing of blood.
Test principles of the laboratory analytes
Lipid profiles. A commercial kit developed by Coxon and Schaffer was used to estimate serum TC concentration. 15 Desirable or normal cholesterol levels were considered to be those below 200 mg/dL. For determination of serum TG concentration, a commercial kit developed from Cromatest ® Cholesterol MR (Linear Chemicals S.L., Barcelona, Spain) was used. 16 Desirable or normal fasting TG levels were considered to be those below 200 mg/dL and are further categorized as borderline, 200-400 mg/dL; high, 400-1000 mg/dL; and very high (>1000 mg/dL). HDL-C was determined by a kit developed from the same source as TG. HDL was measured directly in serum. The apoB-containing lipoproteins in the specimen are reacted with a blocking reagent that renders them non-reactive with the enzymatic cholesterol reagent under conditions of the assay. The apoB-containing lipoproteins are thus effectively excluded from the assay and only HDL-C is detected under the assay conditions. A low HDL-C concentration was considered to be a value below 45 mg/dL. HDL-C values are also used in the calculation of LDL-C (as shown below). LDL-C was calculated from measured values of TC, TG, and HDL-C according to the relationship: LDL-C = TC − HDL-C − (TG/5), where TG/5 is an estimate of very-low-density lipoprotein cholesterol (VLDL-C) and all values are expressed in milligrams per deciliter. Desirable levels of LDL-C are those below 100 mg/dL in adults.
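The Friedewald relationship and the cut-off values quoted above can be expressed as a small helper, sketched below in Python. The TG > 400 mg/dL validity caveat is a standard limitation of the formula rather than a detail stated in this paper, and the example values are illustrative.

```python
def friedewald_ldl(tc, hdl, tg):
    """Estimate LDL-C (mg/dL) from total cholesterol, HDL-C and triglycerides.

    LDL-C = TC - HDL-C - TG/5, where TG/5 approximates VLDL-C.
    The estimate is generally considered unreliable when TG exceed ~400 mg/dL.
    """
    if tg > 400:
        raise ValueError("Friedewald estimate unreliable for TG > 400 mg/dL")
    return tc - hdl - tg / 5.0

def flag_lipid_profile(tc, hdl, tg):
    """Apply the cut-off values used in the study (all in mg/dL)."""
    ldl = friedewald_ldl(tc, hdl, tg)
    return {
        "LDL-C": ("desirable" if ldl < 100 else "abnormal", round(ldl, 1)),
        "TC":    ("desirable" if tc < 200 else "abnormal", tc),
        "TG":    ("desirable" if tg < 200 else "abnormal", tg),
        "HDL-C": ("desirable" if hdl >= 45 else "abnormal", hdl),
    }

# Illustrative patient values only.
print(flag_lipid_profile(tc=227, hdl=50, tg=180))
```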
Hematological parameters. The Coulter method, automated hematology analyzers, was used to accurately count and size cells by detecting and measuring changes in electrical resistance when a particle (such as a cell) in a conductive liquid passes through a small aperture. Hemoglobin, hematocrit, platelets, RBC count, WBC count, and RBC indices were determined. The RBC count, WBC count, and platelets were determined by the principle of electronic impedance. The hemoglobin which was freed by the lysis of RBCs was combined with potassium cyanide to form a cyanmethemoglobin compound. The absorbance was then measured by spectrophotometry at 550 nm wavelength. Mean corpuscular volume (MCV) was calculated directly from RBC histogram. Mean corpuscular hemoglobin (MCH) was calculated from hemoglobin level and RBC count. In addition, MCH concentration (MCHC) was calculated according to the hemoglobin and hematocrit values. The hematocrit was measured as a function of the numeric integration of MCV. 17
Anthropometrical measurements
The weight of hypertensive patients was measured using a standard balance, and the height was measured using a height measuring device attached to the balance. BMI was then calculated from the body weight (kg) and height (m). 18 Using the World Health Organization (WHO) classification, 19 four categories of BMI can be identified as follows: underweight, <18.5 kg/m2; normal, 18.5-24.9 kg/m2; overweight, 25.0-29.9 kg/m2; and obesity, >30 kg/m2. Waist circumference (WC) and hip circumference (HC) of the patients were also measured. WC was measured over light clothing at the level halfway between the iliac crest and the costal margin in the mid-axillary line after exhaling, when the lungs are at their functional residual capacity, with the subject in standing position with the body weight evenly distributed across the feet. HC was measured over light clothing at the level of the greater trochanters with the subject in standing position and both feet together. 18 Two consecutive recordings were made for each site to the nearest 0.5 cm using a non-stretchable fiber measuring tape on a horizontal plane without compression of skin. The mean of the two sets of values was used. 20 Waist-to-hip ratio (WHR) was calculated by dividing WC by HC. 18 While the cut-off point considered for WC was >80 cm for females and >90 cm for males to define central obesity, the cut-off taken for WHR was >0.8 for females and >0.9 for males as per the criterion of the WHO. 21
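A minimal sketch of these anthropometric calculations, using the WHO BMI categories and the sex-specific WC and WHR cut-offs quoted above, might look as follows in Python; the example measurements are invented for illustration.

```python
def bmi(weight_kg, height_m):
    """Body mass index in kg/m^2."""
    return weight_kg / height_m**2

def bmi_category(b):
    """WHO classification as used in the study."""
    if b < 18.5:
        return "underweight"
    if b < 25.0:
        return "normal"
    if b < 30.0:
        return "overweight"
    return "obese"

def central_obesity_flags(waist_cm, hip_cm, sex):
    """Sex-specific WC and WHR cut-offs reported in the study (WHO criteria)."""
    whr = waist_cm / hip_cm
    wc_cut, whr_cut = (80.0, 0.8) if sex == "female" else (90.0, 0.9)
    return {"WHR": round(whr, 2),
            "high_WC": waist_cm > wc_cut,
            "high_WHR": whr > whr_cut}

# Illustrative measurements only.
b = bmi(70.0, 1.68)
print(round(b, 1), bmi_category(b), central_obesity_flags(92.0, 100.0, "female"))
```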
Data quality control and management
The data collection questionnaire was well prepared and all variables were filled on the data extraction format daily. All the laboratory procedures were handled by medical laboratory technologists. All the tests were also standardized and automated.
Data processing and analysis
After checking for completeness and cleaning, processing and analysis of the data obtained from laboratory analyses of the blood samples and questionnaires were performed by coding and entering the data into Epi-Data statistical software version 3.1 and then exporting the data to Statistical Package for Social Sciences (SPSS) software version 23 package, and the different variables were tested and analyzed. Simple descriptive statistics were used to present the socio-demographic and clinical characteristics of the study subjects. While chi-square (χ 2 ) tests were used to compare categorical variables, continuous variables were presented as mean ± standard deviation (SD) and were compared using Student's t-tests for groups. Other associations were performed with Pearson's correlation coefficient as well as multiple linear regression analysis. A p-value of <0.05 was considered to be statistically significant in all the analyses.
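The statistical comparisons described here (independent-samples t-tests, Pearson's correlation, and linear regression) can be reproduced generically with SciPy, as in the hedged sketch below; the input vectors are simulated stand-ins whose means and group sizes merely echo figures reported elsewhere in the paper, not the actual study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Illustrative vectors standing in for the study variables.
tc_controlled = rng.normal(193, 55, size=38)      # TC in BP-controlled patients
tc_uncontrolled = rng.normal(227, 34, size=62)    # TC in poorly controlled patients
age = rng.normal(51, 12, size=100)
ldl = 90 + 0.8 * age + rng.normal(0, 30, size=100)

# Independent-samples t-test comparing the two blood-pressure groups.
t_stat, p_val = stats.ttest_ind(tc_controlled, tc_uncontrolled, equal_var=False)

# Pearson correlation between age and LDL-C.
r, p_corr = stats.pearsonr(age, ldl)

# Simple linear regression: share of LDL-C variation explained by age (R^2).
slope, intercept, r_lr, p_lr, se = stats.linregress(age, ldl)

print(round(p_val, 3), round(r, 2), round(r_lr**2, 3))
```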
Operational definitions
Dyslipidemia. It is the abnormally elevated level of any or all lipids and/or lipoproteins in the blood; in other words, a defect in lipoprotein metabolism, for example, increased cholesterol, increased TG, increased LDL, and decreased HDL.
Controlled blood pressure. It is the blood pressure that is controlled by antihypertensive drug(s) (Enalapril, Nifedipine, Hydrochlorothiazide, Amlodipine) and/or non-pharmacological treatment, that is, systolic blood pressure (SBP) is lower than 140 mmHg and diastolic blood pressure (DBP) is lower than 90 mmHg.
Uncontrolled or poorly controlled blood pressure. It is the blood pressure not well controlled despite the antihypertensive drugs prescribed, that is, SBP is greater than or equal to 140 mmHg and/or DBP is 90 mmHg or more.
Anthropometric indicators. They are the parameters for the measurement of the human body and its individual parts thereby yielding a quantitative index of their variability. They include height, weight, BMI, WC, and WHR.
Socio-demographic characteristics
The study enrolled 100 sample hypertensive patients, 45 (45%) females, and 55 (55%) males. The average age of hypertensive patients was 51.21 (±12.30) years. The majority of hypertensive patients were found within the age group of 40-59 years. Most of the patients in the study were married (71%), urban residents (78%), and above secondary school (41%). While half of the patients (50%) had a history of alcohol drinking behavior, only 2% of hypertensive patients did smoke cigarette, and there were more male drinkers and smokers than females. In addition, most of the patients (55%) had a history of performing different forms of physical activity (Table 1).
Anthropometric and clinical features
This study revealed that the average BMI is high-normal (24.60 kg/m 2 ) in the study participants. About 31% and 10% of hypertensive patients were overweight and obese, respectively. Females are affected more frequently than males (18% overweight and 10% obese vs 13% overweight and 0% obese). The study also showed that 36 of 55 (65.45%) males and 41 of 45 (91.11%) females had WC greater than their respective cut-off values, and 49 of 55 (89.09%) males and all (100%) female hypertensive patients had WHR higher than the cut-off value. Among the hypertensive patients, 38% of them were found to have a family history of hypertension, and more than half of the patients (62%) had uncontrolled blood pressure despite at least one antihypertensive drug prescribed. Whereas mean SBP and DBP were found to be 138.18 (±17.86) and 84.55 (±9.19) mmHg, respectively, in males, the mean SBP and DBP of females were 146.22 (±29.64) and 87.33 (±17.63) mmHg, respectively ( Table 2).
Levels of lipid panels in hypertensive patients
The mean levels of lipid profile in BP-controlled and BP poorly controlled male and female hypertensive patients are shown in Table 3. The result of this study showed that in male BP-controlled and BP poorly controlled hypertensive patients, the average TC levels were 193.42 (±54.91) and 227.00 (±34.07) mg/dL, and the levels of LDL-C were found to be 106.85 (±38.31) and 128.00 (±36.00) mg/dL, respectively. In addition, the results of this study showed that in the plasma of female BP-controlled and BP poorly controlled hypertensive patients, the average TG levels were 148.88 (±45.59) and 262.59 (±180.53) mg/dL, and the levels of LDL-C were found to be 92.00 (±33.36) and 127.72 (±57.56) mg/dL, respectively. In both sexes, LDL-C levels were significantly higher ( p < 0.05) in patients whose BP is poorly controlled than the controlled ones. However, patients with poor BP control had significantly higher levels ( p < 0.05) of TC in males and TG in females as compared to good BP control.
Among the 100 hypertensive patients, only 46 (46%) had a desirable level of LDL-C, that is, below the 100 mg/dL cut-off value for the metabolite. The remaining 54 (54%) had an undesirable level of LDL-C, that is, greater than 100 mg/dL. On the other hand, while only 11 (11%) of hypertensive patients showed an undesirable level of HDL-C, in most of the patients (89%) the serum HDL-C level was found to be within the normal range, that is, 45 mg/dL or above (Figure 1).
Among the sampled hypertensive patients, 48 (48%) had a normal serum TC level, that is, below the 200 mg/dL cut-off for the metabolite, whereas the remaining 52% had an abnormal TC level (>200 mg/dL). While 65% of hypertensive patients had a desirable level of serum TG, in 35% of the patients serum TG levels were abnormally high (>200 mg/dL). The proportions of lipid profile abnormalities stratified by sex are depicted in Figure 2. LDL-C and TC abnormalities were more prevalent in males than in their female counterparts.
Levels of hematological parameters in hypertensive patients
The average levels of the hematological parameters are shown in Table 4. The study showed statistically significant elevation in WBC and RBC levels in BP poorly controlled male hypertensive patients as compared to BP-controlled ones (p < 0.05). Also, it showed statistically significant elevation in platelet levels in BP poorly controlled female hypertensive patients as compared to BP-controlled ones (p < 0.05).
Socio-demographic characteristics and dependent variables
Bivariate, Pearson's correlation, analyses showed that age was positively correlated with serum, LDL-C (r = 0.274, p < 0.05), HDL-C (r = 0.310, p < 0.05), and TC (r = 0.399, p < 0.05) in hypertensive patients. Linear regression analysis also showed that 7.5%, 9.6%, 7.2%, and 0.8% of the variations in serum LDL-C, HDL-C, TC, and TG levels, respectively, are explained by age. Chi-square test showed that patients who did not perform physical exercise had serum TG level above the cut-off value (p < 0.05). Independent samples t-test also showed that the mean serum TC level was significantly higher (p < 0.05) in hypertensive patients who had been drinking alcohol than who had not been drinking. In addition, abnormal lipid profiles prevailed in patients having smoking habit. Whereas correlation analyses showed that age is positively associated with RBC count (r = 0.290, p < 0.05) and levels of hematocrit (r = 0.197, p < 0.05). Also, linear regression analyses showed that 3.9% and 8.4% of the variations in hematocrit and RBC levels, respectively, can be explained by age.
Anthropometric, clinical features, and dependent variables
WC had statistically significant positive correlation with the serum levels of TC (p < 0.05). Correlation analysis also showed that there was positive association between WHR and TG, TC, and LDL-C levels. In hypertensive patients, SBP had statistically significant correlation with LDL-C (r = 0.311, p < 0.05) and TG (r = 0.311, p < 0.05). One-way analysis of variance (ANOVA) with Tukey post hoc test also showed that there was a statistically significant variation in the serum TG level between hypertensive patients who had followed the care for more than 5 years as compared to those who had followed the care for less than a year (p < 0.05).
Among hypertensive subjects, there was a statistically significant positive correlation between SBP and RBC count.
Discussion
This study evaluated the serum lipid parameters (LDL-C, HDL-C, TC, and TG) and hematological parameters (RBC, WBC, hematocrit, hemoglobin, platelets, MCV, MCH, and MCHC) in outpatient hypertensive subjects. Significantly larger proportions of patients were found to have elevated levels of TC, TG, and LDL-C. Some hematological parameters such as hematocrit, WBC, and RBC were also increased in parallel with blood pressure in patients. Anthropometric indicators such as WC were also higher than their respective cut-off values in hypertensive patients.
Levels of lipid profiles in hypertensive patients
The result of this study revealed that the average levels of serum TC and TG were found to be higher than their respective cut-off values. In addition, the mean LDL-C and HDL-C levels were significantly higher than their respective cut-off values. These higher mean levels of TC, TG, and LDL-C in hypertensive patients are in agreement with the results of other related studies which are conducted in different parts of the world including Ethiopia. [22][23][24][25] A rising trend was also observed for prevalence of lipid abnormalities and serum levels of TG, TC, LDL-C, and decreasing serum level of HDL-C with the severity of hypertension indicating that they are associated with hypertension. These results are in trajectory with a study done by Nayak et al. 26 In this study, the results of the prevalence of different lipid profile abnormalities have been summarized as per the criteria of National Cholesterol Education Program Adult Treatment Panel III (NCEP ATP III). Abnormally high serum level of LDL-C was the most frequently occurring serum lipid profile abnormalities among hypertensive patients, followed by high levels of TC and TG. However, low HDL-C was found to be the most infrequent lipid abnormality in hypertensive patients. But the abnormalities often occurred together rather than in isolation. These findings are comparable with a study done in Nigeria by Osuji et al. who reported that abnormally high serum level of TC was the most frequently occurring serum lipid profile abnormalities among newly diagnosed Nigerian hypertensive patients followed by high levels of LDL-C and low HDL-C. However, elevated TG was found to be the most infrequent lipid abnormality in their study. 27 Despite the comparable results of serum TC, TG, and LDL-C with other studies done in different parts of the world, the study has shown that the serum level of HDL-C was found to be higher as compared to most of the previous studies. However, the elevated HDL-C level was in line with one study done in Nigeria. 28 The reason for this variation in the study area is not clear but may partially be explained that larger proportion of the patients are alcohol drinkers and it is well established that moderate alcohol intake raises HDL-C level by increasing the transport rate of apolipoproteins A-I and A-II. [29][30][31] Abnormalities in serum lipid profiles play a central role in endothelial functional abnormality which is important in the pathogenesis of atherosclerosis, thrombosis, insulin resistance, and hypertension. Lipoproteins rich in TG and LDL-C have been recognized to be toxic to endothelium, while HDL-C may have protective role. Abnormally high serum TC levels are considered to be risk factors for developing macrovascular complications such as coronary heart disease (CHD), stroke, and hypertension. 32
Levels of hematological parameters in hypertensive patients
This study showed statistically significant elevation in WBC and RBC levels between BP-controlled and BP poorly controlled male hypertensive patients, whereas it showed statistically significant variation in platelet levels between BP-controlled and BP poorly controlled female hypertensive patients. Studies have shown that WBC count has been found to be associated with hypertension and its complications. 33 Inflammation may contribute to increasing resistance of microvascular capillary, initiation of platelet aggregation, and increased levels of catecholamine, and there is considerable evidence of an association between inflammation and hypertension. 34 Hemoglobin is the most important determinant of whole blood viscosity. 35 Studies have shown that the concentrations of hemoglobin increased with hypertension in humans. However, only a limited number of large population-based studies have shown a link between hemoglobin concentration and blood pressure. In another study conducted among unselected public employees who did not receive any medication, hemoglobin concentration was significantly associated with hypertension. 36,37 In a recent study involving a large cohort of blood donors who were relatively healthy, hemoglobin concentrations were positively associated with both SBP and DBP. 38,39 Studies also revealed that three erythrocyte parameters (RBC, hemoglobin, and hematocrit) were found to be associated with hypertension in their cohort study. Hematocrit determines blood viscosity, regulates peripheral vascular resistance, and therefore, in principle, blood pressure. 39,40 Therefore, this study generally is concordant to numerous studies conducted in hypertensive subjects.
Socio-demographic characteristics and dependent variables
The majority of hypertensive patients were found within the age group of 40-59 years. This is in line with the previous studies done both in developed and developing countries which consistently reported that age is associated with hypertension. 22,28 In addition, this study revealed that age was positively correlated with serum, LDL-C (r = 0.274, p < 0.05), HDL-C (r = 0.310, p < 0.05), and TC (r = 0.399, p < 0.05), in hypertensive patients. This finding is in corroboration with the previous studies. 24,27 It is also further supported by studies that reported direct correlation of age and cholesterol levels. 41,42 As we age, there is a natural tendency for the blood pressure to rise which could be because of an increase in stiffness of the arteries in the vasculature and endothelial atherosclerotic changes. Wen et al. also reported that there is an age-related progression of arterial stiffness. Blood pressure has an increasingly positive association with arterial stiffness as age increases. 43 Similarly, the results of epidemiological studies have revealed the relation of age with arterial stiffness in patients with hypertension; as age advances, so do the prevalence of hypertension and arterial stiffness. 44,45 Hypertension is usually related to other cardiovascular risk factors such as dyslipidemia, diabetes, and obesity. The presence of these cardiovascular risk factors and the resulting endothelial dysfunction may play a role in the pathophysiology of hypertension. 46 As a study done by Jung et al., 47 the adverse impact of insulin resistance on BP was accentuated in older individuals and may have a greater impact than further aging. Plasma insulin concentrations were also found to be correlated (r = 0.31, p < 0.01) with hypertension. 48 This study also showed that the mean serum TC level was higher in hypertensive patients who had been drinking alcohol than who had not been drinking. In addition, abnormal lipid profiles prevailed in patients having smoking habit, which is in line with a study done in Greece. 49
Anthropometric, clinical features, and dependent variables
This study showed that most of the hypertensive patients (62%) had poorly controlled blood pressure: only 38% of patients had well-controlled blood pressure. Although the study did not assess reasons for such high proportion of poorly controlled hypertensive patients, it could possibly and partly be attributed to noncompliance to antihypertensive drugs, poor follow-up in the hospital, lack of adequate health education and counseling related to hypertension and its precipitating risk factors, and financial constraints for antihypertensive drugs and care.
Anthropometric indicators are related to different pathological conditions. Although BMI is a widely used indicator to reflect obesity generally, it fails to account the proportion of weight related to muscle mass or regional distribution of excess fat in the body, both of which influence the health risks related to obesity. Individuals having same BMI may significantly vary in their abdominal fat distribution or mass. 50 For these reasons, a measure of obesity that takes into account the increased risk of obesity-related illnesses because of the accumulation of abdominal fat is desirable. As a result, there is a new tendency to use WC and WHR. This study tried to investigate the associations of some anthropometric indices (BMI, WC, and WHR) and lipid abnormalities in hypertensive patients in the study area. Concordant to a previous study, 51 the result of this study showed that there is correlation between the anthropometric indicators and lipid abnormalities.
There was a positive association between BMI and the lipid profiles. In addition, WC had a significant positive association with the serum levels of TC and weak association with TG, LDL-C, and HDL-C levels. Correlation analysis also showed that there was positive association between WHR and TG, TC, and LDL-C levels and an inverse relation with HDL. TC level, among lipid profiles, showed the closest relationship with WC and WHR ratio. As regression analysis of the study showed, WC and WHR can better predict lipid abnormalities in hypertensive patients.
An increased WC is most likely associated with elevated risk factors because of its relation with visceral fat accumulation. The mechanism may involve excess exposure of the liver to fatty acids and release of detrimental adipocytokines and lower levels of beneficial adipocytokines. These have multiple detrimental effects, including proinflammatory damage, altered signaling pathways, and reactive oxygen species production on beta cells and other tissues resulting in disease states such as hypertension and diabetes. 52 In addition, the accumulation of visceral fat may bring about an increase in sympathetic overactivity which is associated with insulin resistance and hence increasing the activity of the renin-angiotensin-aldosterone system as visceral adipocytes increase angiotensinogen secretion as compared to the subcutaneous fat. 53 Mechanical effect could also be exerted by the accumulation of visceral fat resulting in renal compression and promoting a rise in arterial blood pressure. 54 In hypertensive patients, SBP had statistically significant correlation with serum LDL-C and TG levels which tended to rise as the duration of hypertension advances. Plethora of studies such as a study conducted in Europe, 55 another study carried out in India, 12 in Nigeria, 27 and a study conducted in Ethiopia 24 are in trajectory with this study. Hypertension and lipid abnormalities are well known to frequently coexist and synergize to be risk factors for CVD. The coexistence of increased blood pressure and lipid abnormalities has many clinical implications. Because hypertension and lipid abnormalities synergize to be risk factors for CVD, both of them should cautiously be intervened. Central obesity and consequent insulin resistance which are underlying factors that play major roles in the pathogenesis of both hypertension and dyslipidemia may link the association. Lipid abnormalities, characteristic of metabolic syndrome, were found to predict hypertension and it had also been shown in cohort studies that dyslipidemia in apparently healthy individuals lead to hypertension. 7,56 This study also revealed that blood pressure had statistically significant positive correlations with RBC count, hemoglobin, hematocrit, and platelet levels. Although this finding is in corroboration with some of the earlier studies, 9,38 it is unlike the finding of Divya and Ashok 12 who reported that hemoglobin and hematocrit showed a negative correlation with SBP among hypertensive patients. This study also showed that WC and WBC count had significant association. Similar finding had been reported in studies conducted in South Korea 57 and Iran. 58 On the other hand, RBC count showed a statistically significant inverse association with WHR in hypertensive patients. As discussed above, WC is related to visceral fat accumulation which leads to release of detrimental proinflammatory cytokines that can increase the WBC count.
Limitation of the study
Although this study prospectively incorporated important laboratory-based findings in hypertensive patients, it is not without potential limitations. The sample size is small and the study was conducted in a single healthcare setting owing to budget constraints. The study also targeted only hypertensive patients taking medications and did not include a comparison with normotensive subjects. In addition, as a cross-sectional study, it cannot address the future impacts of controlled and poorly controlled blood pressure.
Conclusion
The study concluded that hypertensive patients in the study area had a high prevalence of lipid profile abnormalities and poorly controlled blood pressure. Some hematological parameters, such as RBC count and WBC count, also increased as blood pressure increased. Significantly higher proportions of hypertensive patients were overweight and obese, which seems to contradict the claim that overweight is less prevalent in the Ethiopian population. Among the lipid profiles, the TC level showed the closest relationship with WC and WHR. WC and WHR can better predict lipid abnormalities in hypertensive patients.
The Machine as Artist: An Introduction
With the understanding that art and technology are continuing to experience an historic and rapidly intensifying rapprochement—but with the understanding as well that accounts thereof have tended to be constrained by scientific/engineering rigor on the one hand, or have tended to swing to the opposite extreme—it is the goal of this special issue of Arts to provide an opportunity for artists, humanists, scientists, and engineers to consider this development from the broader perspective which it deserves, while at the same time retaining a focus on what must surely be the emerging core of our subject: the state of the art in mechatronics and computation is such that we can now begin to speak comfortably of the machine as artist—and we can begin to hope, as well, that an aesthetic sensitivity on the part of the machine might help lead to a friendlier and more sensitive machine intelligence in general.
If we can accept the 1967 founding of the journal Leonardo [1] and the 1968 publication of Jack Burnham's Beyond Modern Sculpture [2] as milestones-and the latter of which had an extensive chapter on "Robot and Cyborg Art"-it must come as a shock to realize that the study of electronic techno-art has been established as a formal discipline for half a century, and which study since placed in brackets with the appearance of at least two comprehensive surveys [3,4].It continues to be the case, however, that there has also been constant and now breath-taking progress, and to the extent that we can at present begin to think of the machine, not as the artist's subject matter or medium, but as creator or co-creator.Indeed, it is this subject to which the current special issue of Arts is dedicated; and we begin by noting that the literature bears ample witness to this emergence, and with the contributions documented therein falling into several major sub-fields: 1.
1. The kinetic or robotic art works whose movement and/or behavior has become so sophisticated that we are entitled to regard them as performance artists in their own right [5-8].
2. The algorithmic studio assistants set loose to embellish computer-mediated graphic or sculptural works of art, which work is then output via large-format ink-jet printer or additive manufacturing system, or as video [9-14].
3. The autonomous and cleverly-designed painting robots which, drawing upon the emergent properties of minimally-intelligent systems, are nonetheless able to create striking abstract works [15,16].
5. The purely computational/AI systems which qualify themselves as aesthetically competent entities, if not actual artists, by their ability to predict the style period and/or author of existing works of graphic art [24-28].
6. The purely computational/AI systems capable of isolating and capturing the style of a given work of graphic art and applying it in an aesthetically-pleasing manner to an arbitrary image [29-37].
7. The purely computational/AI systems capable of generating striking imagery based on otherwise mundane or even random visual input fields [38-42].
It is of particular interest and significance, moreover, that these sub-fields tend to overlap within the genre of the traditional graphic arts-the physical robotic systems producing sophisticated portraits, and the purely computational systems generating sophisticated analyses and transformations of historic and well-known paintings-for we have here a coming-together of a number of critical threads.
This overlap is due, in the first place, to the fact that graphic art can of course be represented by two-dimensional arrays of pixels, and is thus ideally suited for computational analysis. Indeed, virtually all of the important results reported under categories 5, 6, and 7 above have been achieved with that same family of computational techniques-the "deep neural network", or DNN-that has also been responsible for the recent and unprecedented victories of computer over human in master-level Go and Poker tournaments. In other words, the graphic arts have emerged as a vital research arena for the artificial intelligence community, and to some extent as a replacement for the board game-and along with this circumstance comes the opportunity for our own contributors to address the larger questions associated with AI.
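As a purely illustrative aside (not drawn from any of the works cited above), the following minimal Python sketch shows what it means to treat an image as a two-dimensional array of pixels and to apply a single convolution filter, the elementary local operation that deep neural networks stack and learn at scale; the tiny "image" and the filter are hypothetical.

```python
import numpy as np
from scipy.signal import convolve2d

# A tiny synthetic grayscale "image": an 8 x 8 array of pixel intensities,
# dark on the left half and bright on the right half.
image = np.zeros((8, 8))
image[:, 4:] = 1.0

# A hand-written vertical-edge filter; a DNN learns thousands of such filters.
edge_kernel = np.array([[1.0, 0.0, -1.0],
                        [2.0, 0.0, -2.0],
                        [1.0, 0.0, -1.0]])

feature_map = convolve2d(image, edge_kernel, mode="valid")
print(feature_map)  # non-zero responses only along the vertical boundary between halves
```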
And the ultimate question at this point is no longer whether or not artificial intelligence will be capable of achieving some real degree of autonomy [43]; the question, rather, is the degree to which such an autonomous or semi-autonomous intelligence can be designed to operate in a consistently humane and responsible manner [44], and with "responsible", in this day and age, understood to include an environmental dimension.
But of course it is not merely the status of the graphic arts as a computer-friendly medium that should encourage its various practitioners to take on the question of a humane AI: the far larger point is that the graphic arts represent a creative and non-competitive and distinctly human activity-an activity, in fact, intimately associated with the emergence of humankind from a preoccupation with mere survival [45,46]-and an activity as well in which the entire focus is on sensitivity of observation and execution.
In short-and if we can thereby conclude with Herbert Marcuse that "the aesthetic values are the non-aggressive values par excellence" [47]-then the addition of aesthetic capabilities to the machine intelligence armamentarium would perhaps bring us an important step closer to the addition, as well, of a sense of empathy and responsibility-and it is this possibility that we would like to propose as the focus of our special edition on "The Machine as Artist".
But let us emphasize here-and as strongly as possible-that it is not only those who have been involved with the computational graphic arts who are making, or who are in a position to make, an important contribution to the genesis of a "friendly AI". In particular, the artists and scientists and engineers who have worked to bring the robot out of the factory and into public gallery and exhibition spaces are playing a critical role in introducing machine intelligence as a physical as well as mental presence, and we are eager to hear more of their work; and to the extent that our basic thesis is correct, most such contributions will tend to have at least some bearing on the question, "Can there be a humane intelligence apart from the sense of balance and harmony and attention to detail that we normally associate with aesthetics?" Given, however, the speculative and cross-disciplinary nature of this question, it is anticipated that many of the submissions to this special edition will take the form of scholarly essays or even communications (albeit still subject to peer review); i.e.,-and at the risk of repeating ourselves-we hope to provide here an opportunity for specialists in the fields of computer science, neuroscience, anthropology, and art history to share their thoughts on a more open-ended basis.
In this context-and we rush here to our conclusion, and by way of returning to our central theme-the status of the graphic arts is given a powerful boost by the fact that so distinct is the emergence, and so invariant over time the performance and reception of certain of its styles, that we are entitled to regard it as a phenomenon-a phenomenon as yet imperfectly understood, but no less worthy of study, and potentially no less rewarding, than the phenomenon of a certain mineral ore able to fog unexposed photographic plates. Or in other words, we have here a near-ideal venue for interaction between the humanities and the sciences in respect to the question of a humane machine intelligence; and in support of this claim we exhibit following a group of drawings from the Chauvet Cave created some 32,000 years ago (Figure 2)-and the freshness and clarity and sensitivity of which must instill in us a deep wonder.
And given, finally, that no modern intellectual enterprise can be complete without a reference to the very real environmental threat facing our planet, we note that here also the graphic arts have a critical role to play, and as likewise deeply embedded in our culture and history-and there is perhaps no better example than Audubon's depiction of the Swallow-tailed Kite (Figure 3).
A computational analysis of the exquisite lines thereof (refined, as we must note, by the master engraver Havell) would almost certainly reveal, from a human factors standpoint, some noteworthy, if not indeed uncanny, qualities; but what should strike us as most uncanny is the fact that the collected set of such images-the graphic art created by Audubon under humble circumstances as he trekked through the wilds of North America-has been responsible for an outpouring of public commitment to environmental preservation to which no modern public relations campaign can bear comparison; i.e., we have here an example of the fact that art has a very real and unique power, and a greater appreciation and understanding of which has now become a vital matter.
Intravenous Injections of a Rationally Selected Oncolytic Herpes Virus as a Potent Virotherapy for Hepatocellular Carcinoma
As a clinical setting in which novel treatment options are urgently needed, hepatocellular carcinoma (HCC) exhibits intriguing opportunities for oncolytic virotherapy. Here we report the rational generation of a novel herpes simplex virus type 1 (HSV-1)-based oncolytic vector for targeting HCC, named Ld0-GFP, which was derived from oncolytic ICP0-null virus (d0-GFP), had a fusogenic phenotype, and was a novel killer against HCC as well as other types of cancer cells. Compared with d0-GFP, Ld0-GFP exhibited superior cancer cell-killing ability in vitro and in vivo. Ld0-GFP targets a broad spectrum of HCC cells and can result in significantly enhanced immunogenic tumor cell death. Intratumoral and intravenous injections of Ld0-GFP showed effective antitumor capabilities in multiple tumor models, leading to increased survival. We speculated that more active cell-killing capability of oncolytic virus and enhanced immunogenic cell death may lead to better tumor regression. Additionally, Ld0-GFP had an improved safety profile, showing reduced neurovirulence and systemic toxicity. Ld0-GFP virotherapy could offer a potentially less toxic, more effective option for both local and systemic treatment of HCC. This approach also provides novel insights toward ongoing efforts to develop an optimal oncolytic vector for cancer therapy.
INTRODUCTION
Hepatocellular carcinoma (HCC) is the sixth most common malignancy and the third most common cause of cancer-related death worldwide. 1,2 Although curative treatments such as liver resection, liver transplantation, and local ablation have improved the outcome in early stage HCC, most patients are not considered as candidates for these therapies because of an advanced tumor stage or inadequate liver function at the time of diagnosis. 3 This limits their treatment to fewer options, such as target-oriented chemotherapeutic methods and inhibitor drugs. HCC patients generally present with poor prognosis; no effective treatment is available for most patients, and the 5-year relative survival rate for patients with advanced stage HCC is below 11%. 4 Therefore, a more innovative and effective treatment for dealing with advanced stage HCC is required to improve patient survival.
In this regard, oncolytic virotherapy offers a promising therapeutic option for treating advanced stage HCC, with tremendous advantages, such as tumor selectivity, safety, effectiveness, immunomodulation, and fewer adverse effects. 5,6 The lead oncolytic virus (OV) in HCC clinical trials, JX-594, has demonstrated evidence of clinical benefit and been granted orphan drug status by the US Food and Drug Administration (FDA). 7 In 2015, the OV T-VEC showed therapeutic benefit against melanoma and became the first FDA-approved oncolytic virotherapy to treat advanced melanoma. 8 To date, a number of OVs, including adenovirus, reovirus, measles, herpes simplex virus, enterovirus, Newcastle disease virus, and vaccinia, have shown single-agent clinical activity and evidence of clinical synergy with immune checkpoint blockade. 9,10 Among the OVs, human herpes simplex virus 1 (HSV-1) is one of the agents having several features that meet the requirements for oncolytic virotherapy, and various forms of genetically modified vectors have been developed for cancer therapy. 5,11 The most advanced candidates, including T-Vec, G207, 1716, G47Δ, and HF10, have been evaluated in clinical trials, showing evidence of benefits in treating various types of advanced cancer, such as melanoma, glioma, head and neck cancer, and breast cancer. 12,13 Some efforts have been made to test the antitumor activity of HSV-1-based OVs in preclinical models of HCC, with some evidence of antitumor efficacy. 14-19 A number of HSV-1-based oncolytic vectors were designed to target HCC using tissue-specific promoters to drive the expression of an essential viral gene; 20-22 however, few efforts have concentrated on engineering OV cellular specificity and enhancing its antitumor potency against HCC. 23 New approaches to the treatment of HCC are being continuously investigated to facilitate the development of treatments with superior efficacy and lower toxicity. d0-GFP is an ICP0-null, replication-selective HSV-1 virus, as previously described. 24 ICP0 plays a key role in blocking IFN-induced inhibition of viral infection, 25 so ICP0-null HSV-1 replicates more efficiently in cancer cells, in which IFN signaling is often defective, than in normal cells. Here we introduced a rational design and generated OV Ld0-GFP for targeting HCC, which was selected and obtained by repeated passage of d0-GFP in HCC cells and has superior oncolytic activity and tumor selectivity. Ld0-GFP enhances the oncolytic activity by forming large syncytia, and it induces immunogenic cell death in a variety of HCC cell types. In this study, the oncolytic activity of Ld0-GFP against HCC was investigated both in vitro and in vivo, and the safety profile of Ld0-GFP was investigated in immunocompetent mice. Such a safe and potent OV seems to be a good choice as a treatment for patients with HCC. These results add value to our understanding of the mechanisms of action of tumor-specific oncolytic vectors.
Development of a Novel OV against HCC
To generate the oncolytic HSV-1 vectors for HCC, we first repeated passage of d0-GFP in Hep3B, QGY7703, and SMMC7721 cell lines, and we screened the fusogenic d0-GFP progenies for targeting HCC ( Figure 1A). The screening strategy is depicted in Figure 1B. For every round of d0-GFP passage, the cells were infected with viruses at an MOI of 1 and then harvested at 72 h post-infection for subsequent re-infection. After seven rounds of repeated infection, the d0-GFP progenies, which can form fusogenic plaque, were subjected to single-plaque purification, and those that can form syncytia like plaque were selected for preliminary assessment.
To obtain the most potent OVs for targeting HCC, ten fusogenic d0-GFP progenies were picked out and evaluated by testing their replication difference in U-2 OS cells and cell-killing ability on both HCC cell lines (QGY7703) and the hepatic normal cell line (L-02). OV, which had the greatest replication efficiency and relatively higher tumor-killing selectivity, was selected for further assessment. It showed that d0-GFP-7 had highest replication efficiency among those fusogenic d0-GFP progenies, and it had almost equivalent replication efficiency to d0-GFP ( Figure S1). After the comparative evaluation of their cell-killing ability in HCC cell lines (QGY7703) and the hepatic normal cell line (L-02), compared to d0-GFP, we discovered d0-GFP-7 (named Ld0-GFP) with good tumor selectivity, which exhibited high lytic capacity in HCC cells but low lytic capacity in liver normal cells ( Figures 1C and 1D). Our data showed that the dose required to kill 50% of cells (IC 50 ) of Ld0-GFP was at least 26-fold lower than that of d0-GFP in QGY7703 cells, but the IC 50 of Ld0-GFP was at least 2.5-fold higher than that of d0-GFP in L-02 cells ( Figures 1E and 1F), suggesting Ld0-GFP was a superior candidate as a selective killer against HCC.
To assess the oncolytic characteristics of Ld0-GFP, we first compared the plaque size between d0-GFP and Ld0-GFP in SMMC7721 cells. The plaque size of Ld0-GFP was significantly larger than that of d0-GFP due to the syncytia-forming ability of Ld0-GFP (Figures 2A and 2B). During the infection, Ld0-GFP could induce SMMC7721 cancer cell fusion so as to exhibit higher cell-killing activity, and the obvious cell death was only observed at 24 h post-infection of Ld0-GFP ( Figure 2C). U-2 OS cells are widely accepted as a common cell model for studying HSV-1 replication and yield. After ten rounds of viral propagation in U-2 OS cells at an MOI of 0.005, Ld0-GFP progenies were syncytial and homogeneous ( Figure S2).
Next, we evaluated the replication efficiency and oncolytic potency of Ld0-GFP and d0-GFP in U-2 OS cells. As shown in Figures 2D and 2E, Ld0-GFP had better viral yields than d0-GFP only at 24 h postinfection, and later Ld0-GFP and d0-GFP showed similar replication efficiency. However, Ld0-GFP induced significantly higher cell killing than d0-GFP at 24, 48, and 72 h post-infection. Although Ld0-GFP and d0-GFP showed comparable replication efficiency, the oncolytic potency of Ld0-GFP was significantly enhanced in U-2 OS cells, suggesting that the oncolysis-induced cell fusion may contribute to the enhanced cell-killing capability of Ld0-GFP against HCC at late stage.
Ld0-GFP Targets a Broad Spectrum of HCC Cancer Cells with Improved Oncolytic Activity
To explore the oncolytic efficacy of Ld0-GFP in vitro, we first compared the cell-killing effects of d0-GFP and Ld0-GFP viruses on various cultured human HCC cell lines. Of the 11 HCC cell lines that we tested, Ld0-GFP showed markedly enhanced oncolysis compared to d0-GFP (Figure S3; Figure 3A). As shown in Figure 3B, the IC 50 of Ld0-GFP was at least 5-fold lower than that of d0-GFP in HepG2, Huh7, QGY7703, MHHC97H, and Hep3B cells, and the IC 50 of Ld0-GFP was at least 2-fold lower than that of d0-GFP in the remaining HCC cell lines, besides PLC/PRF/5 in which the IC 50 of Ld0-GFP was only 1.41-fold lower than that of d0-GFP. Our data showed Ld0-GFP exhibited increased cell-killing ability not only in high permissive HCC cell lines (HCCLM3, PLC/PRF/5, and Hep3B) but also in less permissive HCC cell lines (SK-HEP-1, BEL7404, and MHHC97H). All these data suggested Ld0-GFP showed superior antitumor capabilities and targets a broad spectrum of HCC cancer cells. Moreover, we tested the in vitro activity of the viruses in the mouse H22 cells, and the IC 50 of Ld0-GFP was at least 2-fold higher than that of d0-GFP irrespective of the relatively low permissivity of mouse cells to HSV-1 (Figure S4). Additionally, our data showed Ld0-GFP exhibited increased cell-killing ability in non-HCC tumor cells, such as H1299 and HCT116 cells (Figures S5A-S5C).
Ld0-GFP Induces Strong Immunogenic Cell Death in HCC Cell Lines
To explore the cell death types involved in Ld0-GFP-induced oncolysis, we examined the apoptosis markers after treatment with Ld0-GFP or d0-GFP. Annexin V/propidium iodide (PI)-labeled fluorescence-activated cell sorting (FACS) analyses showed significant upregulation of annexin V staining at 24 h after viral infection in four HCC cell lines ( Figure 4A). Ld0-GFP induced stronger cell apoptosis than d0-GFP in HCC cell lines, and this induction of cell apoptosis was in a dose-related fashion ( Figure 4B). However, due to the cell destruction ability of OVs, the cells may be directly destructed when exposed to a high dosage of virus infection, thus the percentage of cell apoptosis was relatively lower in some HCC cells after treatment with OVs at an MOI of 10.
Similar results were obtained when we determined the late apoptosis or necrosis at 24 h after viral infection in four HCC cell lines (Figure S6). To determine the immunogenic profile of virus-infected HCC cell lines, HCC cell lines were infected with Ld0-GFP or d0-GFP at various MOIs. The supernatants harvested from the infected cells were analyzed for expression of the immunogenic cell death (ICD) determinants (extracellular ATP and HMGB1) at 24 h after viral infection. The secreted ATP and HMGB1 were evidently upregulated in the supernatants of Ld0-GFP-infected HCC cells compared to d0-GFP-infected HCC cells, and this induction of secreted ATP and HMGB1 was in a dose-related fashion ( Figures 4C and 4D). All these data suggested Ld0-GFP induced stronger immunogenic cell death by activating the ICD pathway compared to d0-GFP. 26
Safety Profile of Ld0-GFP in BALB/c Mice
To evaluate the safety and potential toxicity of Ld0-GFP, we established two different toxicity evaluation models, including the murine lethal challenge model and systemic challenge model (Figures 5A and 6A). For the murine lethal challenge model, the BALB/c mice were challenged through a single intracerebral inoculation of Ld0-GFP or d0-GFP (1 × 10⁵ plaque-forming units [PFU] per dose). Mice were challenged with HSV-1 wild-type strain KOS (1 × 10⁴ PFU per dose) as a parallel positive control.
It was observed that 90% of mice survived in both the Ld0-GFP-challenged and d0-GFP-challenged groups, while all mice died in the KOS-challenged group (Figure 5B). The results showed that Ld0-GFP and d0-GFP exhibited comparably reduced neurovirulence in vivo. On days 1, 5, 15, and 30, the histological analysis of whole brains of virus-injected mice and vehicle-injected mice was performed by H&E staining. Obvious pathological abnormality was observed in the brains of KOS-injected mice, but not in those of the Ld0-GFP-injected mice and d0-GFP-injected mice (Figure 5C). It was observed that the brain tissue around the KOS-injected site was severely injured compared to that around the Ld0-GFP- or d0-GFP-injected site, which led to the deaths of KOS-injected mice within 1 week. Although slight injury was found around the injection route of brain tissue both in d0-GFP-injected mice and Ld0-GFP-injected mice on day 5 post-injection, all mice finally survived and recovered to normal.
Moreover, we established a systemic challenge model to evaluate the toxicity of Ld0-GFP in mice through a single high dose of intravenous injection of virus (5 × 10⁷ PFU per dose) or PBS (vehicle). A significant difference in body weight between the Ld0-GFP-injected group and the KOS-injected group was observed, and there was no difference in body weight between the Ld0-GFP-injected group and the d0-GFP-injected group during the course of the study (Figure 6B). On days 1, 5, 7, and 30, the histological analysis of vital tissues of virus-injected mice and vehicle-injected mice (n = 2 for each group), including heart, liver, spleen, lung, and kidney, was performed by H&E staining. No obvious pathological abnormality was observed in hearts, livers, spleens, and kidneys of virus-injected mice (Figure 6C). Acute lung injury was observed in KOS-injected mice, but not in Ld0-GFP-injected mice and d0-GFP-injected mice (Figure 6C). All the evidence supports the conclusion that Ld0-GFP is relatively safe in mice.
Preclinical Evaluation of Ld0-GFP in HCC Mouse Models
To further evaluate the antitumor potential of Ld0-GFP in vivo, we established three different preclinical tumor models, including the subcutaneous xenograft nude mice model bearing Huh7 and Hep3B HCC (Figure 7A) and the syngeneic HCC mouse model and orthotopic HCC model bearing mouse H22 HCC in situ (Figure 8A). For subcutaneous xenograft models, after the implanted tumor volume reached 100 mm³, mice in each model were randomized to receive three doses of intratumoral injection of Ld0-GFP or d0-GFP (5 × 10⁶ PFU per dose). Mice received PBS (vehicle) as a parallel negative control. It was observed that tumor growth was significantly inhibited in the Ld0-GFP-treated group compared to the d0-GFP-treated or vehicle-treated group (Figures 7B and 7D). The results showed that Ld0-GFP exhibited excellent therapeutic efficacy in HCC xenografted immunodeficient mice. Additionally, no obvious toxicity was observed in the virus-treated group during the treatment. However, obvious body weight change was observed in vehicle-treated groups, possibly due to the adverse effect of rapid tumor growth on nude mice (Figures 7C and 7E).
For the syngeneic HCC mouse model, after the implanted tumor volume reached 100 mm³, mice were randomized to receive three doses of intravenous injection of Ld0-GFP or d0-GFP (1 × 10⁷ PFU per dose). Mice received PBS (vehicle) as a parallel negative control. It was observed that tumor growth was significantly inhibited in the Ld0-GFP-treated group compared to the d0-GFP-treated or vehicle-treated group (Figure 8B), and prolonged survival time was observed in the Ld0-GFP-treated group (Figure 8C). Since mice from the vehicle-treated group started to die on day 30 after virus treatment, we thereafter followed up the long-term survival.
Ld0-GFP therapy induced robust tumor eradication and durable cures without relapse in 62.5% of the mice implanted with H22 tumors during a 150-day follow-up (Figure 8C), showing higher efficacy compared to d0-GFP therapy (durable cures in 37.5% of the mice). Moreover, we established orthotopic HCC mice bearing mouse H22 HCC in situ to evaluate the oncolytic efficacy of Ld0-GFP in the context of the liver microenvironment through three doses of intravenous injection of virus (1 × 10⁷ PFU per dose); consistent with the previous subcutaneous xenograft models, remarkably reduced tumor size and prolonged survival were observed in the Ld0-GFP-treated group (Figure 8D). As shown in Figure 8E, the liver tumor sizes were significantly reduced in the Ld0-GFP-treated group compared to the d0-GFP-treated group at 10 or 20 days after the initial treatment.
DISCUSSION
Treatment options and their outcomes in HCC have not changed significantly in decades. Sorafenib has been the standard therapy for patients with unresectable HCC since 2007; however, the clinical efficacy of sorafenib is still unsatisfactory, prolonging survival by only 2-3 months in patients with advanced HCC. 27 Lenvatinib has been demonstrated to be non-inferior to sorafenib in overall survival in untreated advanced HCC. 28 Recently, a combination therapy of lenvatinib and an anti-PD-1 inhibitor has been suggested as a potential new treatment option for advanced HCC, but the potential toxicities of this form of immunotherapy are still largely unknown. 29 There is still an urgent need for improved, less toxic local agents for long-term HCC control. Therefore, the aim of this study was to investigate the potential of oncolytic Ld0-GFP as a new therapeutic agent against HCC.
Our study focused on developing a novel OV for HCC by enhancing the antitumor activities of an ICP0-null oncolytic HSV-1 (Ld0-GFP) in HCC cells. Of the 11 HCC cells tested, we found Ld0-GFP to be the most potent at killing HCC cells. Surprisingly, the enhanced oncolysis was restricted to HCC cells and was not observed in normal liver cell lines. Ld0-GFP showed a greater antitumor effect than d0-GFP but had less toxicity toward normal cell lines. This agrees with published observations that show that the continuous adaptation of a virus in specific cell lines at a high MOI can result in a greater anti-cancer effect. 30,31 Due to the adaptation of Ld0-GFP in HCC cells, the majority of HCC cell lines studied were susceptible to direct oncolysis by Ld0-GFP. Ld0-GFP kills tumor cells efficiently and directly through both replication and cell membrane fusion. These two cytolytic mechanisms may also produce a synergistic effect through syncytial formation that facilitates the spread of the OV in tumor tissue as well as bystander killing of uninfected tumor cells. 32-34 We sequenced the whole genomes of both d0-GFP and Ld0-GFP, and the amino acids of all open reading frames (ORFs) in the virus genome were compared (Table S1). Ld0-GFP had two vital syncytial mutations, gKsyn1 (Ala-to-Val at position 40) and gB (Glu-to-Asp at position 816), which were reported to participate directly in the fusion of HSV-1-infected cells. 30,35-37 Other nonlethal mutations in the UL9, UL12, and UL13 genes were also observed, but not reported to participate directly in the fusion of HSV-1-infected cells, which may play a role in enhanced cell-killing ability of Ld0-GFP in HCC cells. Specifically, syncytial mutations that cause extensive virus-induced cell fusion can arise in at least two of the glycoproteins: glycoprotein K (gK) and glycoprotein B (gB). 30,37 Because the gB and gK are late genes of which the expressions are dependent on viral DNA replication, an OV carrying these syncytial mutations will maintain the safety of the original virus, because syncytial formation will only occur in replication-permissive tumor cells, but not in replication-restricted normal nondividing cells. 33 We hypothesized that Ld0-GFP may be modified on viral glycoproteins to increase the cell-killing ability in HCC cells, but not in normal hepatic cells, by introducing some syncytial mutations gK/A40V and gB/E816D, although the underlying mechanisms were not fully understood in this study.
In addition to the direct cytotoxic effect of OVs, it is also well recognized that the antitumor immunity of OVs may play a vital role in controlling tumor growth. It was reported that oncolytic adenovirus and herpes virus can induce the oncolysis of the cancer cells and make them release damage-associated molecular patterns (DAMPs) to induce innate immune response within the tumor, remodeling the tumor microenvironment from immunosuppressive to immune active. 38 Understanding the immunogenicity of dying or dead cancer cells induced by OVs is important when considering their potency for cancer immunotherapy. 39 Our study revealed that ICD was the primary death pattern induced by Ld0-GFP in HCC cells. Ld0-GFP possessed much higher capability to induce ICD than d0-GFP. To determine whether this Ld0-GFP-induced cell death was in fact immunogenic, the in vitro characteristics of ICD in HCC cells were investigated. Two types of DAMPs, released ATP and HMGB1, have been significantly induced after Ld0-GFP infection. Moreover, Ld0-GFP possessed much higher capability to release ICD determinants than d0-GFP. The importance of ICD for initiating an antitumor response has previously been demonstrated, so Ld0-GFP may have better potency to initiate an antitumor response by inducing ICD. 40 In vivo, we demonstrated that virotherapy was more effective at promoting tumor regression in the subcutaneous xenograft model, syngeneic HCC mouse model, and orthotopic HCC model. As expected, intratumoral injection of Ld0-GFP exerted superior therapeutic effects on the HCC xenografts implanted on the nude mice and immunocompetent mice. We speculated that more active cell-killing capability of OV and enhanced immunogenic cell death may lead to better tumor regression. Although the correlation of oncolytic HSV replication/killing in vitro with antitumor activity in immunocompetent models has been challenged, 41,42 we believe that direct killing activity of OVs, magnitude of immunogenic cell death to release DAMPs, and initiation or augmentation of a host antitumor immune response should all play an essential role in oncolytic virotherapy. Moreover, intravenous injection of Ld0-GFP significantly prolonged the overall survival in the orthotopic HCC model, and the efficacy of Ld0-GFP systemic infusion is better than that of d0-GFP systemic infusion, thus demonstrating that Ld0-GFP may be amenable to systemic administration, thereby targeting metastatic disease. Overall, these data indicated that Ld0-GFP could be more effective as a single agent for both local and systemic treatments of HCC. 43 A preliminary systemic toxicity assessment was conducted in BALB/c mice following intravenous injection of Ld0-GFP at a single high dose (5 × 10⁷ PFU). Neither illness nor significant body weight loss was observed in the Ld0-GFP-treated and d0-GFP-treated groups, while illness and significant body weight loss were observed in the KOS-treated group. Acute lung injury was found in KOS-injected mice, but not in Ld0-GFP-injected mice and d0-GFP-injected mice by histological analysis on days 1, 5, and 7 post-injection. Neurovirulence evaluation results showed that Ld0-GFP had neurovirulence similar to that of d0-GFP, both of them showing significantly lower neurovirulence than KOS. All these data demonstrate that Ld0-GFP possesses a safety profile with less toxicity and neurovirulence. 44
Following the success of immune checkpoint inhibition in multiple solid tumors, there are numerous trials evaluating the role of anti-PD-1 agents in HCC. 45,46 Given the limited response rate (18%) in the management of advanced HCC, there is still an urgent need to create new strategies to maximize the potential of anti-PD-1 immunotherapy. The development of OVs as novel immune sensitizers has recently accelerated; the most notable example is T-Vec, which helps overcome resistance to anti-PD-1 antibodies in patients with advanced melanoma, therefore promoting intratumoral T cell infiltration and improving anti-PD-1 immunotherapy. We are currently exploring the use of multiple sensitizers, including small-molecule inhibitors and immune checkpoint antibodies, 47 to facilitate the effectiveness of Ld0-GFP.
In summary, this study developed a novel HSV-1 vector, Ld0-GFP, showing the increased tumor selectivity and improved oncolysis capability against HCC, which depends on efficient and selective viral replication and cancer cell killing in HCC cells. Furthermore, the utility of Ld0-GFP as a potent anti-cancer agent was demonstrated by its potential to elicit cell apoptosis and several ICD-related DAMPs. In addition, Ld0-GFP is efficacious in three preclinical tumor models by systemic infusion or intratumoral injection, and it is relatively safe for the mice treated by systemic infusion or intracerebral injection. The findings from this study have provided the rationale for the application of a novel OV in treating HCC. The antitumor potential of Ld0-GFP may be potentially enhanced by sequential administration or coadministration of other agents to increase virus spread and replication, as well as in combination with immune checkpoint inhibitors. 48
Viruses and Virus Generation
Ld0-GFP used in this study is based on the d0-GFP virus, which was generated as described previously in our laboratory. 24 The d0-GFP virus carries the EGFP reporter gene under the control of the human cytomegalovirus promoter, replacing the viral ICP0 genes. Ld0-GFP was produced by continuous passage of d0-GFP in three HCC cell lines (Hep3B, QGY7703, and SMMC7721) until fusogenic plaques were observed. For every round of d0-GFP passage, these three HCC cell lines were sequentially infected with viruses at an MOI of 1 and then harvested at 72 h post-infection for subsequent re-infection. Each HCC cell line was infected at least twice. After seven rounds of repeated infection, the harvested viruses were subjected to two rounds of freeze-and-thaw cycles and serially diluted for infection of U-2 OS monolayers. After three passages of plaque purification in cell culture, the EGFP reporter gene and the fusogenic plaque phenotype were used to select and isolate the random mutant viruses.
Ten fusogenic d0-GFP progenies were picked out and evaluated by testing the replication difference and cell-killing percentages on both the HCC cell line (QGY7703) and the normal hepatic cell line (L-02). For the replication efficiency assay, the U-2 OS cells were infected with d0-GFP or d0-GFP progenies at an MOI of 0.05 PFU. After 72 h of infection, the infected cells together with the supernatants were collected and thereafter subjected to virus titration. For the cell-killing ability assay, cells were infected with d0-GFP or d0-GFP progenies at an MOI of 0.001-10 PFU/cell. After 72 h of infection, the number of viable cells was counted by the trypan blue exclusion method. Finally, a novel virus (d0-GFP-7, named Ld0-GFP) with relatively higher replication efficiency in U-2 OS and the highest cell-killing activity in HCC cells (QGY7703), but not in normal liver cells (L-02), was obtained. The IC 50 was interpreted and calculated by non-linear, dose-response regression analysis.
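As an illustration of the non-linear dose-response regression mentioned above, the sketch below fits a four-parameter logistic curve to hypothetical viability data with SciPy; the MOI values and viabilities are invented for demonstration, and the exact regression model used by the authors may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(moi, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve: viability (%) vs. MOI."""
    return bottom + (top - bottom) / (1.0 + (moi / ic50) ** hill)

# Hypothetical viability (%) measured 72 h after infection at MOIs of 0.001-10 PFU/cell
moi = np.array([0.001, 0.01, 0.1, 1.0, 10.0])
viability = np.array([95.0, 82.0, 48.0, 16.0, 5.0])

params, _ = curve_fit(four_pl, moi, viability, p0=[0.0, 100.0, 0.1, 1.0], maxfev=10000)
print(f"Estimated IC50: {params[2]:.3g} PFU/cell")
```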
Virus Titration
The titers of the amplified viruses were determined on U-2 OS monolayers using a classical plaque assay. In brief, a monolayer of U-2 OS cells at a density of 2 × 10⁶ cells per 6-cm dish was infected with serially diluted virus in a volume of 0.5 mL for 1.25 h. After viral entry, the cells were overlaid with 2% methylcellulose medium and incubated at 37 °C in 5% CO₂ for 2 days. Then, the dishes were stained with neutral red overnight, and the plaques were counted manually using a white-light transilluminator (Qilinbeier, China). Viral titers (PFU/mL) were calculated using the equation: plaque number × dilution fold × 2.
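A minimal sketch of the titer arithmetic described above: the factor 2 simply converts the 0.5 mL inoculum to a per-milliliter value; the plaque count and dilution used in the example are hypothetical.

```python
def titer_pfu_per_ml(plaque_count, dilution_fold, inoculum_volume_ml=0.5):
    """Viral titer (PFU/mL) = plaque count x dilution fold x (1 / inoculum volume)."""
    return plaque_count * dilution_fold / inoculum_volume_ml

# e.g., 75 plaques counted at a 10^6-fold dilution with a 0.5 mL inoculum
print(f"{titer_pfu_per_ml(75, 1e6):.2e} PFU/mL")  # 1.50e+08 PFU/mL
```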
Virus Replication Assay
Cells were seeded in 6-cm plates at 10 6 cells/dish and infected with Ld0-GFP (0.05 PFU/cell) or mock infected (10% DMEM). For each time point, the infected cells were either harvested and thereafter subjected to virus titration or examined by fluorescence microscopy.
Evaluation of the Size of Virus Plaques
To determine the plaque size from the various viruses assayed, SMMC7721 monolayers were infected with diluted d0-GFP and Ld0-GFP viruses in 2% methylcellulose medium. After 48 h of infection, the dishes were stained with medium containing 0.01% neutral red overnight, and the visualized virus plaques were photographed. The size of the plaques was measured with a millimeter scale using ImageJ software, and the area for comparison was calculated using the following formula: area = π × (radius of the minor axis) × (radius of the major axis).
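The area calculation above is simply the area of an ellipse; a short sketch with hypothetical ImageJ radius measurements:

```python
import math

def plaque_area_mm2(minor_axis_radius_mm, major_axis_radius_mm):
    """Plaque area approximated as an ellipse: pi x r_minor x r_major."""
    return math.pi * minor_axis_radius_mm * major_axis_radius_mm

# Hypothetical radii of 0.8 mm and 1.1 mm measured in ImageJ
print(f"{plaque_area_mm2(0.8, 1.1):.2f} mm^2")  # 2.76 mm^2
```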
Cell-Killing Assay
Cells were seeded in 6-well plates at 1 × 10⁶ cells/well and infected with d0-GFP or Ld0-GFP at various MOIs of 0.001-10 PFU. For each time point, cell viability was expressed as the percentage of viable cells, which were counted by the trypan blue exclusion method. The IC 50 values were interpreted and calculated as previously described.
Cell Death Assay
Cells were infected with d0-GFP or Ld0-GFP at MOIs of 0.1, 1, and 10 PFU/cell or with mock (10% DMEM). After 24 h of infection, the cells were harvested and stained with annexin V, Pacific Blue flow cytometry kit (Invitrogen, CA, USA) and PI. Apoptotic cell death was determined by FACS analysis using the BD FACSDiva Software on a FACSAria II cell sorter (Becton Dickinson, NJ, USA). ELISA analysis was used to determine the expression of ICD determinants in the supernatants of treated cells. Cells were infected with d0-GFP or Ld0-GFP at MOIs of 0.1, 1, and 10 PFU/cell and mock (10% DMEM). After 24 h of infection, the supernatants were harvested. The released ATP was measured by an ATPlite Luminescence kit (PerkinElmer, MA, USA), and the HMGB1 was measured by an HMGB1 ELISA kit (Tecan, Switzerland).
Animal Experiments
The use of the mice was approved by the Institutional Animal Care and Use Committee at Xiamen University (XMULAC20150016). All mice were purchased from Shanghai Slack Laboratory Animal, and they were housed under specific-pathogen-free conditions in a chamber with controlled temperature and humidity.
Subcutaneous Xenograft Model
An inoculum of 1 × 10⁶ Huh7 or 5 × 10⁶ Hep3B cells was injected subcutaneously into the flank of 5-week-old female BALB/c nu/nu mice in 50 μL sterile PBS. After 20 or 14 days, Huh7 tumors or Hep3B tumors reached an average size of ~100 mm³. Mice were randomized into treatment groups immediately prior to treatment. Virus (5 × 10⁶ PFU) or vehicle (saline) was administered via intratumoral injection every 3 days for three consecutive dosages in total. Tumor growth and body weight were monitored every 3 days. At 21 days after the last treatment, mice received their final measurements, and the volume was calculated according to the following formula: (length × width²)/2.
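The caliper-based volume formula used here and in the syngeneic model below can be expressed as a one-line helper; the measurements in the example are hypothetical.

```python
def tumor_volume_mm3(length_mm, width_mm):
    """Tumor volume estimated from caliper measurements: (length x width^2) / 2."""
    return length_mm * width_mm ** 2 / 2.0

print(tumor_volume_mm3(8.0, 5.0))  # an 8 mm x 5 mm tumor -> 100.0 mm^3
```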
Syngeneic HCC Model
An inoculum of 10⁶ murine HCC cells (H22) was injected subcutaneously into the flank of 6-week-old female BALB/c mice in 50 μL sterile PBS. Mice were randomized into treatment groups on day 7 following tumor inoculation, immediately before treatment. Virus (1 × 10⁷ PFU) or vehicle (saline) was administered via intravenous injection every 3 days for three consecutive dosages in total. Tumor growth and body weight were monitored every 3 days, and the volume was calculated according to the following formula: (length × width²)/2. The overall survival of mice was monitored over a 150-day period.
Orthotopic HCC Model
An inoculum of 5 × 10⁵ murine HCC cells (H22) was implanted into the left liver lobe of 6-week-old female BALB/c mice in 20 μL sterile PBS. After 5 days, the mice were randomized into treatment groups immediately before treatment. Virus (1 × 10⁷ PFU) or vehicle (saline) was administered by tail vein injection every 3 days for three consecutive dosages in total. The overall survival of mice was monitored over a 100-day period. Representative images of livers in Ld0-GFP- and d0-GFP-treated mice were taken 10 and 20 days after the initial treatment.
Neurovirulence Study
The 5-week-old female BALB/c mice were randomly assigned to four groups of 18 mice each; mice were anesthetized with sodium thiopental (60 mg/kg) and inoculated with vehicle (saline), KOS, d0-GFP, or Ld0-GFP by intracerebral injection into the left frontal lobe of the brain, in a volume of 5 μL at a depth of 4.5 mm from the skull surface over a period of 10 min. Ten mice of each group were monitored for signs and symptoms of illness for 30 days following inoculation. For each time point (at 1, 5, 15, and 30 days post-injection), two mice of each group were examined for histology analysis. Paraffin sections (5 μm thick) of the brains of BALB/c mice were stained with H&E.
Systemic Toxicity Study
The 6-week-old female BALB/c mice were randomly assigned to four groups of 18 mice each; mice were inoculated with vehicle (saline), KOS, d0-GFP, or Ld0-GFP by intravenous injection into the tail vein at a dose of 5 × 10⁷ PFU in a volume of 500 μL over a period of 2 min. Ten mice of each group were monitored for weights and examined for histology analysis at 1, 5, 7, and 30 days post-injection. Paraffin sections (5 μm thick) of vital tissues (including heart, liver, spleen, lung, and kidney) of BALB/c mice (two mice for each group) were stained with H&E.
Genome Sequencing
d0-GFP and Ld0-GFP genomic DNA were isolated from infected U-2 OS cells using standard protocols. 49 An unpaired 350-bp Illumina library was generated and double-end sequenced using the HiSeq sequencing platform (Novogene). The resulting reads were assembled initially into large contigs. All ORFs in the virus genome were compared between d0-GFP and Ld0-GFP, using the KOS genome sequence (GenBank: JQ673480) as a reference.
Statistics
Statistical significance was calculated using the unpaired two-tailed Student's t test (if the values follow normal distribution) or a repeated-measure ANOVA, as indicated in the figure legends. Data for survival was analyzed by the log-rank (Mantel-Cox) test. For all statistical analyses, differences were considered significant when a p value was below or equal to 0.05 (*p < 0.05, **p < 0.01, ***p < 0.001, and ****p < 0.0001; ns, not significant). Statistical analyses were performed using GraphPad Prism 7. The numbers of animals included in the study are labeled in each figure.
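For illustration only, the sketch below runs the unpaired two-tailed Student's t test with SciPy on invented endpoint data; survival comparisons by the log-rank (Mantel-Cox) test would require a dedicated survival-analysis package and are not shown.

```python
import numpy as np
from scipy import stats

# Hypothetical endpoint tumor volumes (mm^3) for two treatment groups
ld0_gfp_group = np.array([120.0, 95.0, 140.0, 110.0, 130.0, 105.0])
d0_gfp_group = np.array([260.0, 310.0, 280.0, 330.0, 250.0, 300.0])

t_stat, p_value = stats.ttest_ind(ld0_gfp_group, d0_gfp_group)  # unpaired, two-tailed
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```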
CONFLICTS OF INTEREST
The authors declare no competing interests.
Sampling redesign of soil penetration resistance in spatial t-Student models
Aim of study: To reduce the sample size in an agricultural area of 167.35 hectares, cultivated with soybean, to analyze the spatial dependence of soil penetration resistance (SPR) with outliers. Area of study: Cascavel, Brazil. Material and methods: The reduction of sample size was made by the univariate effective sample size (ESSt) methodology, assuming that the t-Student model represents the probability distribution of SPR. Main results: The radius and the intensity of spatial dependence have an inverse relationship with the estimated value of the ESSt. For the depths of SPR with spatial dependence, the highest estimated value of the ESSt reduced the sample size by 40%. From the new sample size, the sampling redesign was performed. The accuracy indexes showed differences between the thematic maps with the original and reduced sampling designs. However, the lowest values of the standard error in the parameters of the spatial dependence structure evidenced that the new sampling design was appropriate. Besides, models of the semivariance function were efficiently estimated, which allowed identifying the existence of spatial dependence at all depths of SPR. Research highlights: The sample size was reduced by 40%, allowing for lower financial investment in data collection and laboratory analysis of soil samples in the next mappings of the agricultural area. The spatial t-Student model was able to reduce the influence of outliers in the spatial dependence structure. Abbreviations used: ESSt (effective sample size of variables with Student's t-distribution); GPS (global positioning system); OA (overall accuracy); PA (precision agriculture); RNE (relative nugget effect); SD (standard deviation); SE (standard error); SPR (soil penetration resistance); T (Tau concordance index); UTM (Universal Transverse Mercator). Authors' contributions: All authors: conceptualized the paper, statistical analysis of data, final revision and discussion. LEDC: reviewed the literature and edited the working versions of the manuscript.
Introduction
The Brazilian economy is directly related to agribusiness, and soybean (Glycine max (L.) Merrill) lead this scenario, which figures as the main grain exported by Brazil. Given the economic importance of this commodity, to preserve the productivity and increase it, it is important to know the spatial variability of soybean yield and its relationship with the physical and chemical properties of the soil (Sobjak et al., 2016). From this perspective, precision agriculture (PA) techniques use the knowledge of the spatial variability of grain yield and the physical and chemical properties of the soil, to find the ideal application of the nutrient according to local needs (Molin et al., 2015). The premise of PA is to use localized management of agricultural inputs to increase profits, reduce losses, and preserve the environment (Alamo et al., 2012;Bier & Souza, 2017).
Geostatistics can help PA, as its techniques make it possible to determine the spatial dependence structure and describe the spatial variability of the yield of soybean and the soil attributes (Dalposso et al., 2016, 2018; De Bastiani et al., 2017; Schemmer et al., 2017; Fagundes et al., 2018; Grzegozewski et al., 2020). The geostatistical techniques consider the value observed and geographic location of the physical-chemical properties of the soil, considering a sampling of some georeferenced points in the area. Thus, the entire area is characterized by a small representative portion of it (Wang et al., 2013).
Knowing the spatial distribution of soil attributes and agricultural production is possible, even for small farmers. Combining sample planning and spatial statistics techniques, it is possible to characterize the spatial variability of attributes without using equipment with high investment, such as a harvest monitor (Schemberger et al., 2017).
Also, to better understand the nutritional characteristics of the soil, it is important to combine samples of macro- and micro-nutrients and physical attributes, such as soil penetration resistance (SPR), which is related to the analysis of soil compaction. Compacted soils tend to hinder the availability of nutrients and water to the plant, which interferes with the growth of the roots and, consequently, with the development of the plant and the grain, thus affecting productivity (Valadão et al., 2015, 2017; Marinello et al., 2017; Sivarajan et al., 2018; Colombi & Keller, 2019).
Still, in terms of sampling, there are studies that aim to reduce costs with collection and laboratory analysis of the sample. These studies proposed methods to reduce the number of sampling points to be used in future experiments in the agricultural area, without having a considerable loss in its mapping (Griffith, 2005; Guedes et al., 2014, 2016; Domenech et al., 2017; Maltauro et al., 2019). One of the proposals is the effective sample size, which considers that some sample points may be highly correlated with each other, providing unnecessary cost with collection and laboratory analyses, since such points add repeated information regarding spatial dependence (Vallejos & Osorio, 2014). The effective sample size represents the estimation of a new sample size considering the effects of the spatial autocorrelation and the purpose of estimating the sample mean of the value of the georeferenced variable as precisely as possible (Griffith, 2005).
The univariate effective sample size estimation developed by Griffith (2005) assumes that the georeferenced attribute has a normal probability distribution. However, there are georeferenced data that do not present a normal probability distribution, especially because such distribution is sensitive to outliers (Fagundes et al., 2018). In this way, Vallejos & Osorio (2014) suggested another more inclusive approach to calculate the estimated value of the univariate effective sample size, which considers the presence of outliers and assumes that the georeferenced variable has Student's t-distribution. The Student's t-distribution allows the class of errors to be extended to other probability distributions to better accommodate the outliers (Assumpção et al., 2014; De Bastiani et al., 2015; Schemmer et al., 2017).
The estimation of effective sample size requires an initial sampling design in the agricultural area and the knowledge of the spatial dependence structure of the georeferenced variable. Generally, when this information is not previously known and the data collection is being initiated, the initial sample size can be determined by the ratio of area to sample size (Wang et al., 2013). For example, the PA recommend considering a maximum of two hectares per sampling point (Molin et al., 2015).
Considering the availability of information obtained previously from the sample design in an experimental area, this study had as main objectives: i) to consider variables that present Student's t-distribution, using the expectation-maximization (EM) algorithm to model the data (Assumpção et al., 2014); ii) to use the spatial dependence structure of SPR to redefine and to reduce the number of sample elements collected in this area by the univariate effective sample size methodology, considering the existence of sample points correlated with each other.
Material and methods
We developed two studies: in the first, simulated data were considered, and in the second, we used data on SPR obtained in an agricultural area with soybean cultivation. The simulation study complements the agricultural one because, with the simulated data, it is possible to reproduce a variety of scenarios present in the real data. Therefore, the two studies add practical and theoretical knowledge about sample resizing of soil attributes with a spatial dependence structure.
Description of simulations
Consider a stochastic process $\{Y(s_i),\, s_i \in S \subset \mathbb{R}^2\}$, $i = 1, \ldots, n$, stationary and isotropic, in which $\mathbf{Y} = (Y(s_1), \ldots, Y(s_n))^\top$ is an $n \times 1$ random vector, where $Y(s_1), \ldots, Y(s_n)$ are the observed values of the random variable under study at the $n$ sampled spatial locations $s_i$, $i = 1, \ldots, n$, with $s_i \in S \subset \mathbb{R}^2$. Suppose that $\mathbf{Y}$ has an $n$-variate Student's t-distribution (De Bastiani et al., 2015), i.e., $\mathbf{Y} \sim t_n(\mu \mathbf{1}_n, \Sigma, \nu)$, where $\mu$ is the mean of $\mathbf{Y}$, constant at all spatial locations; $\mathbf{1}_n$ is an $n \times 1$ unit vector; $\nu$ ($\nu > 0$) is the fixed degree of freedom; and $\Sigma = \varphi_1 I_n + \varphi_2 R(\varphi_3)$ is an $n \times n$ non-singular scale matrix, where $\varphi_1 \geq 0$ and $\varphi_2 \geq 0$ are the nugget effect and partial sill parameters, respectively, $I_n$ is the $n \times n$ identity matrix, and $R(\varphi_3)$ is an $n \times n$ symmetric matrix, where $\varphi_3 > 0$ is a function of the range ($g(\varphi_3) = a$). The practical range ($a$) is the spatial dependence radius, the distance up to which spatial dependence exists between samples. The parameters $\varphi_1$, $\varphi_2$, and $\varphi_3$ make up the spatial dependence structure of a georeferenced variable (Diggle & Ribeiro Jr, 2007; Soares, 2014). We considered 11 variables (V1, ..., V11) with different spatial dependence structures (Fig. 1A). The variables were obtained by simultaneously varying the spatial dependence radius ($a$) and the intensity of spatial dependence, measured by the relative nugget effect (RNE). As the partial sill $\varphi_2$ was fixed, the RNE was directly influenced by the variation of the nugget effect $\varphi_1$. The smallest spatial dependence radius used was 0.3 km, and the largest ranged between 1.0 and 1.2 km. The remaining practical ranges (0.5 and 0.6 km) were considered intermediate relative to the maximum distance in the agricultural area (1.8 km). The RNE was considered from moderate (between 25% and 75%) to strong (≤ 25%) (Cambardella et al., 1994).
Given the linear spatial model (Uribe-Opazo et al., 2012), we performed 100 simulations for each of the 11 variables using a Monte Carlo experiment based on the Cholesky decomposition of the scale matrix (Cressie, 2015). Each simulation generates a random sample set of these variables, maintaining the characteristics of the spatial dependence structure, and represents different datasets in different agricultural areas or crop years (Mooney, 1997). In these simulations, we fixed the degree of freedom ($\nu = 5$), the mean ($\mu = 5$), the partial sill ($\varphi_2 = 1$), and the exponential model. As sample planning for the simulations, we used the same configuration (lattice plus close pairs) as in the commercial agricultural area under study (Fig. 1C). Other information about the simulations is given in the methodological scheme (Fig. 1B).
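As an illustration of the Monte Carlo step described above, the following sketch simulates one realization of a spatial Student-t field via the Cholesky decomposition of the scale matrix, assuming an exponential correlation model; the regular grid, parameter values and seeds are illustrative placeholders, not the paper's lattice-plus-close-pairs design or fitted values.

```python
import numpy as np

def simulate_t_field(coords, mu=5.0, nugget=0.2, psill=1.0, rng_a=0.3, nu=5, seed=0):
    """Simulate one realization of a spatial Student-t field with an
    exponential correlation model, via Cholesky decomposition of the
    scale matrix (parameter values here are illustrative)."""
    rng = np.random.default_rng(seed)
    n = coords.shape[0]
    # Euclidean distance matrix between sampling locations
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    phi3 = rng_a / 3.0                      # exponential model: practical range a = 3 * phi3
    R = np.exp(-d / phi3)                   # correlation function r_ij
    sigma = nugget * np.eye(n) + psill * R  # scale matrix Sigma = phi1*I + phi2*R(phi3)
    L = np.linalg.cholesky(sigma)
    z = rng.standard_normal(n)              # N(0, I) vector
    g = rng.chisquare(nu)                   # chi-square mixing variable
    return mu + (L @ z) / np.sqrt(g / nu)   # multivariate t_n(mu*1, Sigma, nu)

# Example: 100 Monte Carlo realizations on a hypothetical regular grid
xx, yy = np.meshgrid(np.linspace(0, 1.8, 10), np.linspace(0, 1.8, 10))
coords = np.column_stack([xx.ravel(), yy.ravel()])
sims = np.array([simulate_t_field(coords, seed=s) for s in range(100)])
print(sims.shape)  # (100, 100): 100 realizations at 100 locations
```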
Following the scheme presented in Fig. 1B, after applying the EM algorithm to estimate the parameter vector $\theta$ for each simulated variable, the value of the effective sample size under the Student's t-distribution was estimated ($\widehat{ESS}_t$, Eq. 1) (Vallejos & Osorio, 2014):
$$\widehat{ESS}_t = \frac{\nu + n}{\nu + n + 2}\; \mathbf{1}_n^{\top} \widehat{R}^{-1} \mathbf{1}_n, \qquad (1)$$
where $n$ is the number of simulated sampling points in the original grid ($n \geq 1$); $\nu$ is the degree of freedom ($\nu > 2$); $\mathbf{1}_n$ is an $n \times 1$ unit vector; and $\widehat{R} = [\hat{\rho}_{ij}]$ is the $n \times n$ estimated spatial correlation matrix of the sample points, whose elements, the estimated spatial correlations between the $i$-th and $j$-th sampling points, are given by (Eq. 2)
$$\hat{\rho}_{ij} = \frac{\hat{\varphi}_2\, r_{ij}}{\hat{\varphi}_1 + \hat{\varphi}_2} \ \ (i \neq j), \qquad \hat{\rho}_{ii} = 1, \qquad (2)$$
where $r_{ij}$ are the elements of the $R(\varphi_3)$ matrix, whose calculation depends on the geostatistical model and on the Euclidean distance between observations (De Bastiani et al., 2015); and $\hat{\varphi}_1$, $\hat{\varphi}_2$ are the estimated values of the nugget effect and partial sill parameters, respectively. What differs between the univariate effective sample size for random vectors with a normal probability distribution ($\widehat{ESS}_N$) and for those with a Student's t-distribution ($\widehat{ESS}_t$) is the constant $(\nu + n)/(\nu + n + 2)$. We obtained this constant from the Fisher information matrix for linear spatial models with Student's t-distribution (De Bastiani et al., 2015). As $\nu > 2$ and $n \geq 1$, we have $\nu + n + 2 > \nu + n$, and $\widehat{ESS}_t$ is necessarily lower than $\widehat{ESS}_N$.
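A minimal sketch of Eqs. (1)-(2) is given below, assuming an exponential correlation function and hypothetical parameter values in place of the EM estimates.

```python
import numpy as np

def ess_t(coords, phi1, phi2, phi3, nu):
    """Estimated univariate effective sample size under a Student-t model
    (Eq. 1), for an exponential correlation function. The parameters passed
    in stand in for EM-fitted values; they are not the paper's estimates."""
    n = coords.shape[0]
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    r = np.exp(-d / phi3)                            # r_ij from the exponential model
    rho = (phi2 * r) / (phi1 + phi2)                 # Eq. 2, off-diagonal correlations
    np.fill_diagonal(rho, 1.0)                       # rho_ii = 1
    ones = np.ones(n)
    ess_normal = ones @ np.linalg.solve(rho, ones)   # 1' R^{-1} 1 (Gaussian case)
    return (nu + n) / (nu + n + 2.0) * ess_normal    # Student-t correction constant

# Hypothetical example with a 0.3 km practical range and moderate nugget effect
xx, yy = np.meshgrid(np.linspace(0, 1.8, 10), np.linspace(0, 1.8, 10))
coords = np.column_stack([xx.ravel(), yy.ravel()])
print(round(ess_t(coords, phi1=0.3, phi2=1.0, phi3=0.1, nu=5), 1))
```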
Description of the experimental data
The dataset comes from a commercial area with 167.35 hectares, cultivated with soybean, located in the municipality of Cascavel-Paraná-Brazil, with approximate geographical coordinates of latitude 24.95º South and longitude 53.37º West, and 650 m of average altitude (Fig. 1C). The climate of the region is temperate mesothermic and superhumid, climate type Cfa (Koeppen) (Aparecido et al., 2016), with an average annual temperature of 21ºC. The soil is classified as a Red Dystroferric Latosol with clay texture (EMBRAPA, 2013).
We used a lattice plus close pairs sampling design with 102 sampling points. This design contained a regular grid (with a minimum distance between points equal to 141 m), to which we added 19 sample points (locations). These added locations were at smaller distances (50 m and 75 m) from some points of the regular grid. The sample was georeferenced and located with the aid of a signal-receiving apparatus with a Geoexplore 3 (Trimble®) Global Positioning System (GPS) set up for the Universal Transverse Mercator (UTM) coordinate system.
In this study, soil resistance to root penetration (in MPa) at depths of 0-10 cm (SPR 0-10 cm), 11-20 cm (SPR 11-20 cm), 21-30 cm (SPR 21-30 cm), and 31-40 cm (SPR 31-40 cm) was used. In terms of improvement in soil management, the study of the spatial dependence of SPR has important agricultural relevance, since this soil attribute is inversely related to root growth and crop yield (Gülser et al., 2016). The experimental data of this physical attribute refer to the crop year 2015-2016 and belong to the database of the Laboratory of Spatial Statistics and the Laboratory of Applied Statistics of the Western Paraná State University (UNIOESTE), Cascavel/Brazil.
SPR was measured with a penetrograph as follows: for each sampling point, we performed three readings per centimeter, from 0 to 40 cm, covering the four depths considered (0-10 cm, 11-20 cm, 21-30 cm, and 31-40 cm). The data obtained were converted to MPa, and the value of the SPR at each depth consisted of the arithmetic mean of the three measurements.
Soil penetration resistance was assumed to have a Student's t probability distribution. From the original sampling design and for each depth, we performed the exploratory and geostatistical analyses of SPR (Figs. 2A and 2B, respectively). The analyses performed are described in the methodological scheme of Fig. 2, and more information about the methodology can be found in Cressie (2015).
For each layer of SPR (at depths 0-10 cm, 11-20 cm, 21-30 cm, and 31-40 cm), the value of the effective sample size was estimated ($\widehat{ESS}_t$, Eq. 1) (Fig. 2) by the same methodology applied to the simulated data.
From the estimated $\widehat{ESS}_t$ values in each SPR layer, we redefined a single reduced sample size. The highest estimated value of $\widehat{ESS}_t$ was taken ($n^* = \max(\widehat{ESS}_t)$, Fig. 2) among the variables with spatial dependence, i.e., variables in which the spatial dependence radius is not small relative to the size of the experimental area and whose intensity of spatial dependence (RNE) was at least moderate (Cambardella et al., 1994). Only georeferenced variables with spatial dependence were used in the calculation of $n^*$, since georeferenced attributes without spatial dependence do not present a reduction in the number of sample points (Vallejos & Osorio, 2014).
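As a small illustration of this selection rule, the sketch below applies it to the per-layer estimates reported later in the Results section ($\widehat{ESS}_t$ of 51, 95, 101 and 60, with spatial dependence identified only at depths 0-10 cm and 31-40 cm), reproducing the reduced size n* = 60.

```python
# Per-layer ESS_t estimates and spatial-dependence flags, taken from the Results section
ess_t = {"SPR 0-10 cm": 51, "SPR 11-20 cm": 95, "SPR 21-30 cm": 101, "SPR 31-40 cm": 60}
has_spatial_dependence = {"SPR 0-10 cm": True, "SPR 11-20 cm": False,
                          "SPR 21-30 cm": False, "SPR 31-40 cm": True}

# Keep only layers with spatial dependence, then take the largest ESS_t as n*
candidates = {k: v for k, v in ess_t.items() if has_spatial_dependence[k]}
n_star = max(candidates.values())
print(n_star)  # 60 for this configuration
```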
The highest-value criterion was established because a greater number of sampling points better captures the spatial variability of variables that have different spatial dependence structures (Pautsch et al., 1998; Diggle & Ribeiro Jr, 2007). Therefore, the tendency is to obtain more representative thematic maps concerning the spatial variability of the attribute in the experimental area (Kestring et al., 2015). This is justified by two characteristics: (a) homogeneous variables (with less spatial variability in the area) can be collected with a smaller number of sample units, which avoids redundant data or oversampling; and (b) variables with rapid changes in spatial structure can be collected more intensively, which avoids undersampling.
To verify the suitability of the reduced sample size relative to the original sampling design (Fig. 1C), a random subsample of the original sampling design with sample size n* was selected. For this reduced sample size, the exploratory and geostatistical analyses were also performed (Figs. 2A and 2B, respectively). Finally, we compared the results obtained between the two sample configurations (original and reduced), using the methodologies presented in Fig. 2 (C and D).
The simulations and the statistical and geostatistical analyses were prepared in the software R (R Development Core Team, 2020) using the geoR package (Ribeiro Jr & Diggle, 2001). A computational routine was developed in the software R (R Development Core Team, 2020) using the geoR (Ribeiro Jr & Diggle, 2001) and matrixcalc (Novomestky, 2012) packages (and is available at goo.gl/JrvtnJ) to estimate the effective sample size ($\widehat{ESS}_t$).
Simulation studies
The mean and the standard deviation of the estimated $\widehat{ESS}_t$ values were similar for most pairs of variables in which the values of the nugget effect were different and the range was kept fixed (V1 and V2; V3, V4, and V7; V5 and V6; V9 and V11) (Fig. 3). The estimated $\widehat{ESS}_t$ values evidenced the existence of three groups of variables (Fig. 3). The first two groups presented, respectively, the highest and intermediate estimated values: the group formed by variables V1 and V2, whose estimated mean values of $\widehat{ESS}_t$ were 40 and 44 sample points, in that order; and the group formed by variables V3, V4, V5, V6, and V7, where the estimated mean value of $\widehat{ESS}_t$ ranged from 15 to 31 sample points. These two groups of variables also exhibited high standard deviations, varying from 11 to 14 sample points. The simulated variables V1 and V2 have a small practical range (a = 0.3 km), mainly when compared to the maximum distance between the coordinates of the simulated area (~1.8 km). Variables V3, V4, V5, V6, and V7 exhibited spatial dependence radii slightly higher than those of the first group (ranging from 0.5 to 0.6 km), which contributed to the fact that the estimated $\widehat{ESS}_t$ values were smaller than those obtained in the previous group. The third group, formed by variables V8, V9, V10, and V11, presented the smallest mean values of $\widehat{ESS}_t$ (ranging from 6 to 8 sample points) (Fig. 3). These four variables have in common the largest values of the simulated spatial dependence radius (between 1.0 and 1.2 km). In general, the estimated $\widehat{ESS}_t$ value ranged from 6 to 44 sample points and provided a reduction of between 57% and 95% in the number of sampling points (Fig. 3).
Application of the methodology in soil penetration resistance
The estimated $\widehat{ESS}_t$ value for SPR at depths 11-20 cm and 21-30 cm was 95 and 101, respectively. The SPR observed at depths 0-10 cm and 31-40 cm had higher reductions in the number of sampling points, with $\widehat{ESS}_t$ equal to 51 and 60, respectively, which represents a reduction of between 40% and 50%.
Considering the layers of SPR in which spatial dependence was identified (at depths 0-10 cm and 31-40 cm) and the maximum estimated value of the effective sample size observed in these layers, a sample resizing was obtained, reducing the sample size to 60 sample points. Thus, a new sample configuration with 60 points, chosen randomly from the 102 sample points of the original grid, was selected for the study of spatial dependence of SPR.
In the exploratory analysis, the values of the coefficient of variation (CV) showed that SPR variability is greater at the surface and decreases with increasing sampling depth in the soil (Table 1). The magnitudes of the CVs indicated that there was a medium dispersion of SPR at all depths (Warrick & Nielsen, 1980) (Table 1). Besides, we observed that the reduction in the number of sample points did not influence the SPR variability (Table 1).
The depths 11-20 cm (Fig. 4B) and 31-40 cm (Fig. 4D) showed the greatest number of outliers (four each), located in the central and western regions of the experimental area. Sample points 82 and 34 exhibited outliers in all depth layers of SPR, except at depth 31-40 cm, at which point 34 was not considered an outlier.
We observed in the geostatistical analysis that, for all depth layers of SPR, the spatial dependence structure can be considered isotropic, i.e., it depends only on the distance separating the observed locations and does not differ with direction.
The results for the best values of the degree of freedom ($\nu$) and the shape parameter ($\kappa$) (Table 2) showed that, for both sample sizes, the model and degree of freedom were the same only for SPR at the depth of 31-40 cm. At this depth we verified the lowest values of the standard error (SE) in the Matérn family model with $\kappa = 0.5$ and degree of freedom $\nu = 10$. For SPR at the depth of 0-10 cm, the lowest estimated SE values were found in the Matérn family model with $\kappa = 2.5$ and $\nu = 5$ for the original sampling design, and with $\kappa = 0.5$ and $\nu = 10$ for the reduced sampling design. At depth 11-20 cm, in both sampling designs, the Matérn family model with $\kappa = 2.5$ was adjusted to the semivariance function, but with different degrees of freedom: $\nu = 5$ for the original grid and $\nu = 10$ for the reduced grid.
Finally, at depth 21-30 cm of the SPR, although the sampling designs presented the same value of the degree of freedom ($\nu = 5$), the lowest estimated SE values were obtained by the Matérn family model with $\kappa = 1.5$ and $\kappa = 2.5$, respectively, for the original and reduced sampling designs (Table 2). The estimated values of these SEs at depths 0-10 cm and 11-20 cm of the SPR (Table 2) were smaller in the models estimated with the reduced sampling design than in the models estimated with the original sampling design. Besides, for the other depth layers, the estimated value of the SE of the range function ($\varphi_3$) was also lower for the models estimated with the reduced sample configuration (Table 2). The values obtained by the cross-validation method showed a small increase in the spatial prediction errors with the reduced sampling design (Table 2). The errors increased by 7.5%, 6.2%, 9.6%, and 11.5%, respectively, at depths 0-10 cm, 11-20 cm, 21-30 cm, and 31-40 cm of the SPR, compared with the original sampling design.
For the original sampling design, the spatial dependence structure in the intermediate depth layers of the SPR (11-20 cm and 21-30 cm) presented a pure nugget effect, due to the low values of the practical range (180.5 and 110.2 m) and the low intensity of spatial dependence ($\widehat{RNE} \geq 75\%$; Cambardella et al., 1994) (Table 2).
Considering the reduced sampling design, the intensity of spatial dependence was moderate in these intermediate depth layers ($\widehat{RNE}$ between 25% and 75%; Cambardella et al., 1994). Also, at depth 11-20 cm, there was an increase in the estimated value of the spatial dependence radius relative to the original sampling design (from 180.5 to 209.1 m). In the other depth layers of the SPR, there was a decrease in the estimated practical range (ranging from 7.6 to 31.9 m), compared to that obtained with the original sampling design (Table 2).
The estimated values of the practical range were relatively low for both sampling designs and in all depth layers of the SPR: given that the maximum distance in the experimental area is approximately 1,800 m, the ranges varied from 110.2 to 291.5 m in the original sampling design and from 78.3 to 273.2 m in the reduced sampling design (Table 2).
We observed visual differences between the maps elaborated considering the two sampling designs, which are most noticeable at depth 11-20 cm (Fig. 5B). According to the classification of Anderson et al. (2001), in most of the depth layers of the SPR there was a low percentage of hits between the reference map (original sampling design) and the model map (reduced sampling design), because the estimated value of the overall accuracy (OA) was lower than 85%. This indicates that a smaller number of pixels were classified in the same class interval in both maps, evidencing differences between the maps elaborated considering the two sampling designs. The only exception was at depth 21-30 cm (Fig. 5C), at which similarity between the maps made with the original and reduced sampling designs was observed (OA > 85%). The Tau concordance index (T), unlike the OA, accounts not only for the proportion of pixels classified in the same class interval in the reference and model maps, but also for those whose classification was not the same in both maps. The maps made considering the original and reduced sampling designs presented low to medium accuracy (T < 0.80; Krippendorff, 2004), with the exception of depth 21-30 cm of the SPR (Fig. 5).
Table 2. Estimated values of the parameters that define the spatial dependence structure in each depth layer of soil penetration resistance (SPR, in MPa), from the best values of the shape parameters and considering the original (n = 102 points) and reduced (n* = 60 points) sampling designs. $\nu$: degree of freedom; $\kappa$: shape parameter of the Matérn family model. Estimated values: $\hat{\mu}$: mean; $\hat{\varphi}_1$: nugget effect; $\hat{\varphi}_2$: partial sill; $\hat{\varphi}_3$: function of the range; $\hat{a}$: practical range (kilometers); $\widehat{RNE} = 100\,\hat{\varphi}_1 / (\hat{\varphi}_1 + \hat{\varphi}_2)$: relative nugget effect (%).
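The sketch below illustrates how the two map-comparison measures could be computed for a pair of classified maps; it assumes the simplified Tau formulation with equal a priori class probabilities, and the example maps are synthetic placeholders rather than the paper's kriged surfaces.

```python
import numpy as np

def overall_accuracy(ref, model):
    """Proportion of pixels assigned to the same class interval in the
    reference map (original design) and the model map (reduced design)."""
    ref, model = np.asarray(ref).ravel(), np.asarray(model).ravel()
    return np.mean(ref == model)

def tau_index(ref, model, n_classes):
    """Tau concordance index assuming equal a priori class probabilities:
    T = (Po - Pr) / (1 - Pr), with Pr = 1 / n_classes."""
    po = overall_accuracy(ref, model)
    pr = 1.0 / n_classes
    return (po - pr) / (1.0 - pr)

# Hypothetical maps classified into 4 SPR class intervals
rng = np.random.default_rng(1)
reference = rng.integers(0, 4, size=(50, 50))
model_map = np.where(rng.random((50, 50)) < 0.9, reference, rng.integers(0, 4, (50, 50)))
print(overall_accuracy(reference, model_map), tau_index(reference, model_map, 4))
```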
Some classes of the thematic maps elaborated using the original sampling design presented some null pixels, as can be seen visually at depths of 0-10 cm, 11-20 cm, and 21-30 cm of the SPR (Figs. 5A, 5B, and 5C, respectively). Besides, at depth 21-30 cm, where high values of the OA and Tau accuracy indexes were obtained, a high number of pixels (more than 90% of the total) in the same classes was observed (Fig. 5C). Also at depth 21-30 cm, we observed the formation of circular regions around the sample points (Fig. 5C).
Finally, the estimated values of SPR, using the reduced sampling design, showed the existence of limitations to root growth that varied from low to moderate in almost all of the agricultural area (Canarache, 1991).
Simulation studies
Considering the 100 simulations of each variable, the graph with the means and standard deviations of the estimated values of the univariate effective sample size showed that varying the value of the nugget effect did not generate a relevant change in the estimated value of the effective sample size (Fig. 3). The practical range negatively influenced the estimated values, since the greater the practical range of the variable, the lower the estimated value. Although a different sample configuration and size, and even another probability distribution (normal), were considered, the simulation studies of Vallejos & Osorio (2014) and Dal Canton et al. (2021) reached similar conclusions regarding the influence of the practical range on reducing the number of sampling points.
The large difference in the estimated values (Fig. 3) can be explained by the discrepancy between the variables concerning the values of the spatial dependence parameters, mainly the practical range, whose variation was from 0.3 to 1.2 km.
Studies carried out in agricultural areas smaller than the one considered in this paper (< 50 ha) characterized the spatial dependence of soil attributes using fewer than 50 sample points (Carvalho et al., 2013; Araújo et al., 2014; Tavares et al., 2014), a sample size similar to that obtained in the present study for most simulated variables.
Application of the methodology in soil penetration resistance
The values of the CV obtained by Johann et al. (2004) and Bazzi et al. (2013) showed results similar to those of this work, with a moderate classification for CVs in agricultural areas in Western Paraná with soybean planting and with similar conditions of management, climate, and soil. Besides, the SPR variability was reduced with increasing sampling depth in the soil, corroborating what was obtained in this study.
The SPR at depths 11-20 cm and 21-30 cm practically did not show a reduction in sample size. This fact is justifiable mainly by the influence of the practical range on the estimated $\widehat{ESS}_t$ value, verified in the simulation studies of the present work as well as in Vallejos & Osorio (2014) and Dal Canton et al. (2021). The SPR at depths 11-20 cm and 21-30 cm presented small estimated values of the spatial dependence radius (110.2 and 180.5 m), relative to the size of the experimental area, and also a low intensity of spatial dependence ($\widehat{RNE} > 75\%$; Cambardella et al., 1994) (Table 2). The higher reductions in the number of sampling points presented at depths 0-10 cm and 31-40 cm are due to the higher estimated values of the practical range (291.5 and 222.5 m) (Table 2).
Griffith (2005) obtained a reduction in sample size (from 36% to 45%) similar to that found in this study, which varied between 40% and 50%, using different sample configurations, attribute probability distribution, and soil chemical attributes. Domenech et al. (2017) considered auxiliary information measurement to map the attribute of interest (soil depth to the petrocalcic horizon), and obtained a reduction in sample size similar to this study also (from 50% to 70%), although their methodologies for optimization and selection of sampling points were different from this study. The mentioned authors obtained sample reductions similar to those of the present study, and the thematic maps obtained by them were considered efficient.
Studies were found in the literature that analyzed the spatial variability of SPR at different depths using between 49 and 60 sample points (Rosalen et al., 2011; Rodrigues et al., 2014; Tavares et al., 2014). Considering the new sample configuration, reduced to 60 sample points, these authors used sample sizes similar to this research, although the magnitude of their mapped experimental areas was lower than that of this study (< 50 ha).
Comparing the original and reduced sampling designs, the estimated values of the relative nugget effect (RNE) and the practical range indicate that with a reduced number of sample points there was an increase in spatial dependence and minor changes in the spatial dependence radius (Table 2). Besides, the estimated values of the SEs of the estimated parameters that define the spatial dependence structure were smaller in the estimated models from the reduced sampling design for the majority of the cases (Table 2). This shows that even with a smaller number of sample points, it was possible to verify the existence of spatial dependence in all depth layers of SPR.
The increase in spatial prediction errors after the sampling redesign was already expected, as the number of sample points was reduced by 40%. The literature shows that the greater the number of sample points, the better the result of the interpolation, as shown by the studies of Coelho et al. (2009), Kestring et al. (2015), and Guedes et al. (2016), using different sample densities and metrics to calculate errors. However, the greater the number of observations, the greater the financial cost. Thus, given the magnitude of the sample reduction obtained in this study, the increase in spatial prediction errors can be considered small.
The results also indicate that, even using a smaller number of sampling points in the study area, efficient models were fitted to the semivariance function and were able to identify the existence of spatial dependence in all depth layers of SPR. This is an important feature of this study, since the reduction of the sample size hampers the semivariance calculation (Kestring et al., 2015). However, although it was possible to verify the existence of spatial dependence in all depth layers of SPR, the visual analysis and the accuracy indices (OA and Tau) showed that there are differences between the thematic maps generated with the original and reduced sampling designs, indicating that the sample size influenced the characterization of the spatial dependence of SPR.
Regarding the circular regions around the sampling points identified at depth 21-30 cm of the SPR map (Fig. 5C), we observed a low estimated value of the practical range ($\hat{a}$ = 78.30 m), close to the shortest distance between sample points (~50 m), which resulted in the formation of small subregions centered on the sample points, a phenomenon known as the 'bull's eye effect' (Menezes et al., 2016), also observed by Dalposso et al. (2018) and Dal Canton et al. (2021).
The results showed that the univariate methodology proved to be advantageous, considering the lower cost of the sampling process due to the 40% reduction in the sample size and the results obtained in the characterization of the spatial dependence in the experimental area. Also, the method proposed in this study obtained a single sample size for all attributes, based on the variables with a spatial dependence structure and the maximum estimated value of $\widehat{ESS}_t$ among them.
|
2021-05-05T00:08:56.497Z
|
2021-03-18T00:00:00.000
|
{
"year": 2021,
"sha1": "9140192ee5f2b3e5bd9ff542f9ab8b43f0369d36",
"oa_license": "CCBY",
"oa_url": "https://revistas.inia.es/index.php/sjar/article/download/16949/5062",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e8515b00372798a067690bd4b63229cb2c6ec7d6",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
}
|
26478762
|
pes2o/s2orc
|
v3-fos-license
|
Sparse Digital Cancellation of Receiver Nonlinear Distortion in Carrier Aggregation Systems
In carrier-aggregation systems, digital baseband cancellation of self-interference generated by receiver nonlinearity requires the estimation of the contributions of several reference signals. As the nonlinearity order and the frequency selectivity of the chip response increase, the number of reference signals increases significantly, rendering the estimation of their contributions more complex. We propose a sparsity-based approach for the selection of the reference signals to match the distortion interference using a few reference signals. Simulation results show significant performance improvement over prior art with the same complexity.
I. INTRODUCTION
In carrier aggregation (CA) systems, signals are transmitted and/or received over multiple frequency bands simultaneously. CA is adopted in the long-term evolution (LTE) standard [1]. Due to imperfect chip and board isolation, one or more uplink (UL) signals can leak into the low-noise amplifiers (LNAs) of the receive chains of downlink (DL) signals. Practical LNAs exhibit nonlinear behaviour [2], generating harmonic distortion (HD) and inter-modulation distortion (IMD) of the leaked UL signals that can lie in the DL band. This problem causes significant self-interference that degrades the performance of the victim band [1].
In [3], digital baseband interference cancellers are used, where the UL baseband signals are utilized to regenerate the distortion and subtract it from the received signal. Before reaching the LNA input, UL signals are shaped by the chip frequency response. Hence, the LNA-generated distortion is a nonlinear function of the weighted summation of several delayed versions of the original UL signals. The summation weights represent the chip channel response at the corresponding lags. To avoid using nonlinear cancellation filters, the distortion signal is decomposed into a linear combination of the UL signals raised to different powers and delayed by different lags, called reference signals. The reference signal weights are functions of the original chip response and can be estimated using any linear estimation technique [3]. Digital cancellation of nonlinearity distortion generated at the transmitter and at the receiver was considered in [4], [5] and [6]. In [6], the IMD was between one modulated signal and an unmodulated tone, unlike this paper and [3], where both inter-modulated signals are modulated. Unlike modulated signals, unmodulated tones do not experience a frequency-selective channel.
The reference signals in [3] are static and pre-defined regardless of the chip channel response. Instead, we propose a sparsity-based solution to dynamically select the reference signals, where all candidate reference signals are included in a dictionary matrix. Then, based on their auto-correlation and cross-correlation with the observed signal, a subset of them is selected to represent the distortion signal. The number of selected reference signals is flexibly set based on the design constraints on complexity and power. Unlike [3], our approach enables different subset selection for different chip responses. The rest of the paper is organized as follows. The system model and problem formulation are described in Sections II and III, respectively. Our proposed sparsity-based approach is presented in Section IV. Simulation results are provided in Section V, and the paper is concluded in Section VI.
Notations: Lower- and upper-case bold letters denote vectors and matrices, respectively, and 0 denotes the all-zero vector. Also, $(\cdot)^{*}$, $(\cdot)^{T}$ and $(\cdot)^{H}$ denote the complex conjugate, transpose and conjugate transpose operations, respectively. The notation $|\cdot|$ denotes the absolute value.
II. SYSTEM MODEL
We describe the two main nonlinearity distortion models known in the literature, namely, HD and IMD.
A. Harmonic Distortion Model
In practical transceivers, the UL signal up-converted to carrier frequency $f_{tx}$ leaks into the receiver LNA. Due to its inevitable nonlinearity, the LNA generates the Q-th order harmonic of the leakage UL signal at frequency $Q \times f_{tx}$. In FDD systems where the receiver oscillator frequency $f_{rx} = Q f_{tx}$, the HD of the UL leakage signal will interfere with the desired downlink (DL) signal, de-sensitizing the whole receiver chain. In CA systems, $f_{tx}$ and $f_{rx}$ are the UL and DL carrier frequencies of two different aggregated bands, for example, bands 12 (710 MHz) and 4 (2.13 GHz), respectively, in LTE [1]. We denote the baseband time-domain (TD) UL signal at the digital-to-analog converter (DAC) input by $s(n)$. The UL signal leaks to the LNA input through a finite impulse response (FIR) channel representing the chip response $\{h(k)\}_{k=0}^{L-1}$, where $L$ is the channel length. The TD baseband equivalent of the leakage signal at the LNA input is given by:
$$x(n) = \sum_{k=0}^{L-1} h(k)\, s(n-k). \qquad (1)$$
We write the Q-th order HD signal at the analog-to-digital converter (ADC) output as follows:
$$p(n) = c_0\, x^{Q}(n), \qquad (2)$$
where $c_0$ is related to the Q-th order input-referred intercept point (IIP) of the LNA [2].
B. Inter-Modulation Distortion Model
In carrier aggregation, two UL signals of different frequencies, $f_{tx,1}$ and $f_{tx,2}$, leak into the receiver LNA. As a result of the LNA nonlinearity, these two leakage signals intermodulate, creating an IMD signal sitting at a new frequency $p f_{tx,1} + q f_{tx,2}$, where $p$ and $q$ are nonzero integers. If this new frequency equals the frequency of the downlink signal $f_{rx}$, then the system is said to suffer from IMD. The IMD order is given by $Q_I = |p| + |q|$. In LTE, bands 3 (1750 MHz) and 20 (850 MHz) can cause 3rd-order IMD ($p = 2$, $q = -1$) to band 7 (2660 MHz) [1]. We write the $Q_I$-th order IMD signal at the ADC output for $p, q > 0$ as follows:
$$p(n) = c_1\, x_1^{p}(n)\, x_2^{q}(n), \qquad (3)$$
where $c_1$ is related to the LNA nonlinearity, and $x_1(n)$ and $x_2(n)$ are the TD baseband equivalents of the two UL leakage signals seen at the LNA input, given by:
$$x_i(n) = \sum_{k_i=0}^{L_i-1} h_i(k_i)\, s_i(n-k_i), \quad i \in \{1, 2\}, \qquad (4)$$
where $s_1(n)$ and $s_2(n)$ are the complex baseband TD signals at the inputs of the DACs associated with the two UL chains. Furthermore, $h_1(k)$ and $h_2(k)$ are the FIR channels of lengths $L_1$ and $L_2$, respectively, representing the chip responses between the two UL chains and the LNA input. The IMD expression in (3) can also be written for $p < 0$ and/or $q < 0$ with the following modification: if $p$ or $q$ is negative, we replace the corresponding signal $x_1(n)$ or $x_2(n)$, respectively, in Equation (3) by its complex conjugate.
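A minimal sketch of the IMD model in Eqs. (3)-(4) is shown below; the chip responses, the nonlinearity gain and the signal statistics are placeholder assumptions used only to illustrate that each UL signal is filtered by its chip response before the nonlinearity acts.

```python
import numpy as np

def imd3(s1, s2, h1, h2, c1=1e-3, p=2, q=-1):
    """Baseband IMD sketch following Eqs. (3)-(4): each UL signal is first
    filtered by its chip response, then the nonlinearity combines them; a
    negative p or q conjugates the corresponding leakage signal. The taps
    and gain c1 are placeholders, not measured chip responses."""
    x1 = np.convolve(s1, h1)[:len(s1)]          # x1(n) = sum_k h1(k) s1(n-k)
    x2 = np.convolve(s2, h2)[:len(s2)]          # x2(n) = sum_k h2(k) s2(n-k)
    a = x1 if p > 0 else np.conj(x1)
    b = x2 if q > 0 else np.conj(x2)
    return c1 * a**abs(p) * b**abs(q)           # p(n) = c1 * x1^|p| * x2^|q| (conjugated as needed)

# Placeholder complex baseband UL signals (p = 2, q = -1 as in the band 3/20 -> 7 example)
rng = np.random.default_rng(0)
n = 520
s1 = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
s2 = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
dist = imd3(s1, s2, h1=[1.0, 0.3 + 0.1j, 0.05], h2=[1.0, 0.2j])
```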
A. Problem Formulation
The distortion signal $p(n)$ in (2) or (3), for the HD or IMD schemes, respectively, interferes with the desired DL signal $y(n)$ of power $P_s$, yielding:
$$r(n) = y(n) + p(n) + z(n), \qquad (5)$$
where $z(n)$ is the complex background additive Gaussian (AWG) noise of single-sided power spectral density $N_o$. The cancellation algorithm exploits the knowledge of the transmitted UL signal $s(n)$ (or $s_1(n)$ and $s_2(n)$ in IMD cases) to construct the distortion signal $p(n)$ and cancel it from $r(n)$. The algorithm requires the estimation of the FIR channel $h(k)$ (or $h_1(k)$ and $h_2(k)$) representing the chip response to construct the leakage signal $x(n)$ (or $x_1(n)$ and $x_2(n)$) seen at the LNA input. However, the observed signal $r(n)$ is not linear in the unknown channel $h(k)$ due to the LNA nonlinearity in (2) and (3). Hence, we need to expand the polynomials of the distortion signal using the following multinomial theorem:
$$\left(\sum_{k=0}^{L-1} a_k\right)^{Q} = \sum_{t_0 + t_1 + \cdots + t_{L-1} = Q} \binom{Q}{t_0, t_1, \ldots, t_{L-1}} \prod_{k=0}^{L-1} a_k^{t_k}. \qquad (6)$$
For instance, the first polynomial on the right-hand side (RHS) of (2) is expanded for $Q = 3$ and $L = 2$ as follows:
$$x^{3}(n) = \tilde{h}_0\, s^{3}(n) + \tilde{h}_1\, s^{2}(n)\, s(n-1) + \tilde{h}_2\, s(n)\, s^{2}(n-1) + \tilde{h}_3\, s^{3}(n-1), \qquad (7)$$
where $\{\tilde{h}_t\}_{t=0}^{3}$ are the new parameters to be estimated. Rewriting Eqn. (5) in matrix-vector format, we get
$$\mathbf{r} = \mathbf{D}\mathbf{v} + \mathbf{e}, \qquad (8)$$
where $\mathbf{r} = [r(0), r(1), \ldots, r(P-1)]^{T}$, and $P$ is the number of observed samples used in the estimation process. Furthermore, the columns of the matrix $\mathbf{D}$ represent the distortion reference signals obtained by the expansion of the distortion polynomials. Following the example in (7), the $t$-th column of $\mathbf{D}$ is filled by the samples $s^{t}(n)\, s^{3-t}(n-1)$. The vector $\mathbf{v}$ contains the new parameters to be estimated, representing the contributions of the reference signals in the columns of $\mathbf{D}$. Finally, the vector $\mathbf{e}$ is the error vector containing the desired DL signal $y(n)$, the background noise $z(n)$, and the part of the distortion signal not represented in the columns of $\mathbf{D}$ due to its weak contribution. The linear least squares (LLS) solution of $\mathbf{v}$ is given by the solution of the following minimization problem [7]:
$$\hat{\mathbf{v}} = \arg\min_{\mathbf{v}} \|\mathbf{r} - \mathbf{D}\mathbf{v}\|^{2} = \mathbf{R}^{-1}\mathbf{q}, \qquad (9)$$
where $\mathbf{R} = \mathbf{D}^{H}\mathbf{D}$ is the auto-correlation matrix of the reference signals. Moreover, $\mathbf{q} = \mathbf{D}^{H}\mathbf{r}$ represents the cross-correlation vector between the observed signal and the reference signals. Transforming the nonlinear estimation problem into a linear one comes at the expense of increasing the problem dimension, especially for high distortion orders ($Q$ and $Q_I$) and channel length $L$. In HD schemes, modeling only $x^{Q}(n)$ in the reference matrix $\mathbf{D}$ increases the problem dimension from $L$ to $\tilde{L} = \frac{(Q+L-1)!}{Q!\,(L-1)!}$. For $Q = 3$ and $L = 4$, this corresponds to increasing the problem dimension from $L = 4$ to $\tilde{L} = 20$. The new problem dimension (after the multinomial expansion) becomes even larger for IMD schemes. In Section IV, we show how sparsity-based techniques are used to reduce the number of parameters to be estimated.
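The following sketch illustrates, for the HD case, how the reference matrix D can be built from the multinomial expansion and how the full LLS solution of Eq. (9) is obtained; the chip response, nonlinearity gain and noise level are placeholder assumptions, and circular delays (np.roll) are used instead of true linear delays for simplicity.

```python
import numpy as np
from itertools import combinations_with_replacement

def hd_reference_matrix(s, L, Q=3):
    """Build the reference matrix D for the HD case: one column per monomial
    of total degree Q in the delayed UL samples s(n), ..., s(n-L+1), i.e. the
    terms produced by the multinomial expansion of x^Q(n)."""
    delayed = np.stack([np.roll(s, k) for k in range(L)], axis=1)   # s(n-k), k = 0..L-1 (circular)
    cols = [np.prod(delayed[:, list(c)], axis=1)
            for c in combinations_with_replacement(range(L), Q)]
    return np.stack(cols, axis=1)   # P x L_tilde, with L_tilde = C(Q+L-1, Q)

def lls_estimate(D, r):
    """Full (non-sparse) LLS solution of Eq. (9): v = (D^H D)^{-1} D^H r."""
    return np.linalg.solve(D.conj().T @ D, D.conj().T @ r)

# Toy example with a placeholder chip response and nonlinearity gain
rng = np.random.default_rng(0)
s = (rng.standard_normal(520) + 1j * rng.standard_normal(520)) / np.sqrt(2)
x = np.convolve(s, [1.0, 0.4, 0.1 - 0.2j, 0.05])[:520]              # leakage through the chip response
r = 1e-3 * x**3 + 0.01 * (rng.standard_normal(520) + 1j * rng.standard_normal(520))
D = hd_reference_matrix(s, L=4)                                      # 20 reference signals for Q=3, L=4
v_hat = lls_estimate(D, r)
print(D.shape, np.linalg.norm(r - D @ v_hat))
```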
B. Prior Art and other approaches
The prior-art IMD cancellation approach in [3] models the 3rd-order IMD reference signal as follows:
$$\hat{p}_{\mathrm{prior}}(n) = \sum_{k_1=0}^{L_1-1} \sum_{k_2=0}^{L_2-1} v_{k_1,k_2}\; s_1(n-k_1)\, s_2^{2}(n-k_2), \qquad (10)$$
where the weights $v_{k_1,k_2}$ are to be estimated and the total number of terms in the expansion of (10) is $J_{\mathrm{prior}} = L_1 \times L_2$. Comparing (10) with the exact representation in Eqns. (3) and (4), we find that the model in [3] differs from the exact model. The model in [3] squares $s_2(n-k_2)$, while the squaring should be applied to $s_2$ after passing through the filter $h_2(k)$, i.e., the squaring should go over $\sum_{k_2=0}^{L_2-1} h_2(k_2)\, s_2(n-k_2)$. The IMD reference signal was modeled in [3] as if $s_2(n)$ went through the nonlinearity before being filtered, which is not the case, as discussed in Section II.
Another candidate approach would be to model the distortion as if the nonlinearity were generated before the channel. We call it the Hammerstein-based approach, as it follows the well-known Hammerstein model [8]. For example, the first nonlinearity term on the RHS of (3) modeling the IMD signal is approximated by:
$$x_1^{p}(n)\, x_2^{q}(n) \approx \sum_{k=0}^{J_{\mathrm{Hamm}}-1} g(k)\, s_1^{p}(n-k)\, s_2^{q}(n-k), \qquad (11)$$
where the coefficients $g(k)$ are to be estimated. For both the prior-art and Hammerstein-based approaches, the corresponding reference matrix ($\mathbf{D}_{\mathrm{prior}}$ or $\mathbf{D}_{\mathrm{Hamm}}$) will have $J_{\mathrm{prior}}$ or $J_{\mathrm{Hamm}}$ columns, respectively. The LLS-based estimate of the distortion signature is then obtained for the prior-art approach (and similarly for the Hammerstein-based approach) as follows:
$$\hat{\mathbf{v}}_{\mathrm{prior}} = \left(\mathbf{D}_{\mathrm{prior}}^{H}\mathbf{D}_{\mathrm{prior}}\right)^{-1}\mathbf{D}_{\mathrm{prior}}^{H}\,\mathbf{r}. \qquad (12)$$
IV. PROPOSED ALGORITHM
We propose a novel approach for improved reference signal design while controlling the problem dimension. Our approach comprises two main steps. First, we use the multinomial expansion in (6) and include all the reference signals resulting from this expansion in the reference matrix $\mathbf{D}$. The second step is to obtain a $J_s$-sparse solution of $\mathbf{v}$ in (8) with only $J_s \ll \tilde{L}$ nonzero entries. This sparse solution $\hat{\mathbf{v}}_s$ should keep $\mathbf{D}\hat{\mathbf{v}}_s$ close to the observation vector $\mathbf{r}$. This requirement can be cast into the following optimization problem:
$$\hat{\mathbf{v}}_s = \arg\min_{\mathbf{v}} \|\mathbf{r} - \mathbf{D}\mathbf{v}\|^{2} \quad \text{subject to} \quad \|\mathbf{v}\|_{0} \le J_s, \qquad (13)$$
where $\|\mathbf{v}\|_{0}$ is the $l_0$-norm of the argument vector and represents the number of nonzero entries in this vector. However, the solution of (13) requires an intensive computational burden even for moderate values of $\tilde{L}$ and $J_s$ due to the huge search space. Several techniques have been proposed in the literature to obtain approximate solutions of (13), e.g., Orthogonal Matching Pursuit (OMP) [9], Orthogonal Least Squares (OLS) [10], and FOCUSS [11]. We choose OMP thanks to its implementation simplicity and efficiency. The OMP algorithm takes as input the reference matrix $\mathbf{D}$, the observation vector $\mathbf{r}$, and the required sparsity level $J_s$. The OMP output is an approximate $J_s$-sparse solution $\hat{\mathbf{v}}_{\mathrm{OMP}}$ of (9) as follows:
$$\hat{\mathbf{v}}_{\mathrm{OMP}} = \mathrm{OMP}(\mathbf{D}, \mathbf{r}, J_s). \qquad (14)$$
The OMP technique is described as follows. Initialization: define an empty index set $I_0 = \varnothing$, set the initial residual $\mathbf{r}_0 = \mathbf{r}$, initialize $\hat{\mathbf{v}}_{\mathrm{OMP}} = \mathbf{0}$, and set $k = 1$.
The $k$-th iteration:
1) Compute the correlations between the dictionary columns and the current residual, $\mathbf{g}_k = \mathbf{D}^{H}\mathbf{r}_{k-1}$.
2) Find the index of the most correlated column not yet selected, $c_k = \arg\max_{j \notin I_{k-1}} |g_k(j)|$.
3) Update $I_k = I_{k-1} \cup \{c_k\}$. In this step, the indices of the nonzero elements are augmented by $c_k$, the index of the $k$-th nonzero entry computed at the $k$-th iteration.
4) Solve the least-squares problem restricted to the selected columns,
$$\hat{\mathbf{v}}_{\mathrm{OMP}}(I_k) = \left(\mathbf{D}(:, I_k)^{H}\mathbf{D}(:, I_k)\right)^{-1}\mathbf{D}(:, I_k)^{H}\,\mathbf{r}, \qquad (15)$$
where $\hat{\mathbf{v}}_{\mathrm{OMP}}(I_k)$ holds the $\hat{\mathbf{v}}_{\mathrm{OMP}}$ elements indexed by $I_k$.
5) Compute $\mathbf{r}_k = \mathbf{r} - \mathbf{D}(:, I_k)\,\hat{\mathbf{v}}_{\mathrm{OMP}}(I_k)$, where $\mathbf{r}_k$ is the residual error term at the $k$-th iteration.
6) If $k = J_s$, exit the algorithm; else set $k = k + 1$ and go to Step 1.
In words, OMP tries to find the columns (atoms) of the matrix $\mathbf{D}$ (dictionary) whose linear combination is close (matched) to $\mathbf{r}$. From (15), we find that the maximum size of the matrix to be inverted is $J_s$, which is controlled by the designer and can be made much smaller than $\tilde{L}$. We can actually set $J_s = L$ and let the OMP technique choose the $L$ reference vectors that most closely match the observed signal. The selected set of reference vectors is adaptive and can differ from one channel response to another. This is clearly different from [3], where the reference signals are pre-set regardless of the channel response. The matrix inversion in (15) is efficiently implemented using the Cholesky decomposition algorithm [12]. An efficient OMP implementation using adaptive Cholesky decomposition is proposed in [13], [14], where the decomposition step is not performed at every iteration. Instead, it is observed that the matrix $\mathbf{D}(:, I_k)^{H}\mathbf{D}(:, I_k)$ is the same as the matrix $\mathbf{D}(:, I_{k-1})^{H}\mathbf{D}(:, I_{k-1})$ of the previous iteration except for an extra augmented row and column. This observation saves computations by reusing the Cholesky decomposition of the previous iteration with one more augmented vector obtained by forward substitution [15].
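A compact sketch of the OMP steps above is given below; it uses a plain least-squares solve per iteration rather than the adaptive Cholesky update of [13], [14], and the stopping rule is the fixed sparsity level J_s.

```python
import numpy as np

def omp(D, r, J_s):
    """Minimal OMP sketch following the steps above: greedily pick the column
    most correlated with the current residual, then re-solve least squares on
    the selected columns (Eq. (15))."""
    n_cols = D.shape[1]
    v = np.zeros(n_cols, dtype=complex)
    idx = []                                   # I_k, indices of selected reference signals
    residual = r.copy()                        # r_0 = r
    for _ in range(J_s):
        corr = np.abs(D.conj().T @ residual)   # correlate dictionary columns with the residual
        corr[idx] = 0.0                        # do not re-select previously chosen atoms
        idx.append(int(np.argmax(corr)))       # c_k
        Dk = D[:, idx]
        vk = np.linalg.solve(Dk.conj().T @ Dk, Dk.conj().T @ r)   # LS on selected columns
        residual = r - Dk @ vk
    v[idx] = vk
    return v

# Usage with a dictionary D and observation r built as in Section III:
# v_omp = omp(D, r, J_s=9); cancelled = r - D @ v_omp
```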
V. SIMULATION RESULTS AND DISCUSSION
We simulate the performance of our proposed approach and compare it with the other approaches described in Section III-B. We use practical chip responses provided by well-known manufacturers. In Fig. 1, we simulate the cancellation performance for the 3rd-order IMD mechanism generated by bands 3 and 20 on band 7, as described in Section II-B. The uplink signals of bands 3 and 20 both have bandwidths of 10 MHz, and the receive bandwidth of band 7 is also 10 MHz. The block size P is set to 520 samples. The IMD level before cancellation is -85 dBm, while the IMD-to-noise power ratio (INR) is set to 0 dB. Fig. 1 shows the IMD power levels before and after cancellation over a wide range of the DL receive signal power $P_s$. Using the same number of cancellation taps $J_s = J_{\mathrm{prior}} = J_{\mathrm{Hamm}} = 9$, our sparsity-based approach shows 6 and 9 dB improvements over the prior-art approach in [3] and the Hammerstein-based approach, respectively, described in Section III-B. For our sparsity-based approach, we construct the columns of the dictionary matrix D by setting $L_1 = L_2 = 3$ in (3) and (4); hence, D has 18 columns. For the prior-art approach in [3], we set $L_1 = L_2 = 3$, so $J_{\mathrm{prior}} = L_1 \times L_2 = 9$. In Fig. 1, we also show the performance of our canceller with full complexity, i.e., $J = \tilde{L} = 18$, where all columns of D are utilized in IMD estimation and cancellation. Our approach shows an additional 6 dB of interference suppression over the prior art in [3]. As expected, the residual IMD power level gets higher than that of the original IMD at high DL signal power $P_s$. The reason is the poor estimation accuracy due to the low IMD-to-DL signal power ratio (ISR). However, in practice, the original IMD power will not be fixed at -85 dBm regardless of $P_s$ as in Fig. 1: as the DL signal power increases, the user is typically closer to the base station, and the UL signal is transmitted at lower power levels.
In Figs. 2 and 3, we simulate the performance for the 3rd-order HD mechanism in Section II-A, where band 12 generates interference on band 4. The UL and DL bandwidths of bands 12 and 4, respectively, are both set to 10 MHz. The original HD power level is set to -85 dBm with INR = 0 dB and P = 520 samples. Since the prior art in [3] was proposed for IMD but not for HD, we compare our sparsity-based approach with the Hammerstein-based approach. For our sparsity-based approach, the columns of the dictionary matrix D are constructed by setting L = 4 in (1) and (2); hence, D has 20 columns. In Fig. 2, the residual HD power level is simulated for both approaches using J = 10 cancellation taps over a wide range of $P_s$. The performance of the full-complexity case (J = 20) is also shown. In Fig. 3, we fix $P_s = -95$ dBm and compare the performances of both approaches for different numbers of cancellation taps J. The superiority of our sparsity-based approach is clear: increasing J improves its performance, as it improves the HD estimation by including more component reference vectors in the estimation process. However, increasing J does not improve the performance of the Hammerstein-based approach, because its estimation process does not include the contribution of most of the cross terms of the HD expansion, cf. Eqn. (7).
VI. CONCLUSION
We proposed a sparse linear filter to cancel the distortion signal generated by the HD and IMD of uplink signals leaking into the receiver LNA. The reference signal selection process is dynamic and adapts itself to different chip channel responses by correlating the observed vector with the dictionary reference signals. Our sparsity-based approach provides additional interference suppression over the prior-art approach using the same number of filter taps.
|
2017-08-18T17:13:10.000Z
|
2017-08-18T00:00:00.000
|
{
"year": 2017,
"sha1": "88fe7b482d1b6a30fb9f4710ad99c498de31e168",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "e2750e142958cefa60bb28081fe49db9c2ec47ca",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
}
|
247254968
|
pes2o/s2orc
|
v3-fos-license
|
Breast cancer screening in women with extremely dense breasts recommendations of the European Society of Breast Imaging (EUSOBI)
Abstract Breast density is an independent risk factor for the development of breast cancer and also decreases the sensitivity of mammography for screening. Consequently, women with extremely dense breasts face an increased risk of late diagnosis of breast cancer. These women are, therefore, underserved with current mammographic screening programs. The results of recent studies reporting on contrast-enhanced breast MRI as a screening method in women with extremely dense breasts provide compelling evidence that this approach can enable an important reduction in breast cancer mortality for these women and is cost-effective. Because there is now a valid option to improve breast cancer screening, the European Society of Breast Imaging (EUSOBI) recommends that women should be informed about their breast density. EUSOBI thus calls on all providers of mammography screening to share density information with the women being screened. In light of the available evidence, in women aged 50 to 70 years with extremely dense breasts, the EUSOBI now recommends offering screening breast MRI every 2 to 4 years. The EUSOBI acknowledges that it may currently not be possible to offer breast MRI immediately and everywhere and underscores that quality assurance procedures need to be established, but urges radiological societies and policymakers to act on this now. Since the wishes and values of individual women differ, in screening the principles of shared decision-making should be embraced. In particular, women should be counselled on the benefits and risks of mammography and MRI-based screening, so that they are capable of making an informed choice about their preferred screening method. Key Points • The recommendations in Figure 1 summarize the key points of the manuscript
Introduction
Breast density describes the amount of fibroglandular tissue in the breast relative to the amount of fatty tissue. The epithelial structures within the breast, i.e. the glandular lobes and the ducts, are part of this fibroglandular tissue; therefore, breast cancer mostly originates from this tissue. The amount of fibroglandular tissue is largely genetically determined and depends on hormonal stimulation. It usually decreases over time, particularly after menopause.
The fibroglandular tissue absorbs ionizing radiation (x-rays) and appears white on mammography. Consequently, it is commonly referred to as 'dense' tissue. Since most cancers absorb x-rays to a similar extent as fibroglandular tissue, cancers manifest as white masses on mammograms. Dense (white) tissue on mammograms can therefore hide the similarly dense (white) cancers. This means that dense tissue may prevent detection; i.e., it can 'mask' cancers on mammography [1]. Only the small fraction of cancers that contain calcifications is reasonably well seen on mammography independent of the amount of dense tissue.
The distribution of the individual amount of fibroglandular tissue, and thus of mammographic density across the female population, follows the typical bell-shaped curve (Gaussian distribution) seen for many biological features. In clinical practice, this biological continuum is categorized into four large bins and is described according to the ACR BI-RADS atlas terminology as follows [2]:
a. The breasts are almost entirely fatty (about 10% of the screening population [3])
b. There are scattered areas of fibroglandular density (about 42% of the screening population)
c. The breasts are heterogeneously dense, which may obscure small masses (about 40% of the screening population)
d. The breasts are extremely dense, which lowers the sensitivity of mammography (about 8% of the screening population)
The latter two categories are commonly referred to as 'dense' breasts.
Although visual assessment of density is known to have a relatively high intra- and inter-reader variability, the visual estimation of density has been reported to have a somewhat higher correlation with breast cancer risk than automated assessments [4][5][6]. However, to minimize variability in the selection of women for supplemental or alternative screening based on breast density, automated methods may be preferable [4][5][6].
Besides the risk of masking breast cancer, women with extremely dense breasts in the screening age range have an increased risk of developing breast cancer, which is approximately twice as high as for the 'average' woman, and almost 4-6 times as high as in women with almost entirely fatty breasts [1,7]. This is due to both the absolute higher amount of fibroglandular tissue within the breast and the breast composition [8]. Breast density is independent of other personal risk factors typically used for breast cancer risk prediction, and complementary when used in conjunction with them [9,10]. Breast density is estimated to account for 26% of breast cancers in postmenopausal women [11]. Moreover, a higher breast density has also been associated with an increased breast cancer-specific mortality [12], although these data are not consistent among studies [13,14].
Current evidence on breast cancer screening in women with dense breasts
Screening is widely regarded as one of the most successful approaches to reducing breast cancer mortality in average-risk women and is recommended by the WHO [15] as well as EUSOBI [16]. Based on a meta-analysis of randomized controlled trials reporting on population screening, offering mammography screening to women aged 50-70 reduces breast cancer mortality by 20% [17]. Case-control studies in women actually screened show an even substantially higher mortality reduction of approximately 40% [18].
And yet, unfortunately, current screening strategies still fail to prevent death due to breast cancer in a substantial proportion of women: among every 1000 women screened, disease-specific death is averted in 8, but 11 still die from breast cancer [19]. This is due to the failure of timely detection of biologically relevant breast cancers, i.e. underdiagnosis.
Underdiagnosis is more of a problem in women with extremely dense breast tissue than in other women. In women with largely fatty breasts, the sensitivity of mammography screening is 86 to 89%, meaning that only 11 to 14% of cancers present as interval cancers between two screening rounds. This program sensitivity decreases to 62-68% in women with extremely dense breasts [20]. For full-field digital mammography (FFDM), similarly poor figures were reported, with a program sensitivity of only 61% based on biennial screening [21]. There is currently little data on interval cancers by density for digital breast tomosynthesis (DBT), but it is unlikely that tomosynthesis will overcome the reduction in sensitivity caused by density. Several studies reported increases in cancer detection rates of 20 to 40% also in women with (extremely) dense breasts [22,23], mainly due to the detection of more spiculated masses and architectural distortions, but there is only limited evidence that this leads to reduced rates of interval cancers in these women. According to Conant et al, the sensitivity of DM and DBT was similar, based upon 1-year follow-up [23]. X-ray-based anatomic imaging modalities (whether screen-film mammography, FFDM or DBT) all seem to be heavily affected by breast density and thus lead to underdiagnosis of relevant cancers in these women.
Several studies have investigated supplemental ultrasound as a technique to improve the performance of populationbased screening in women with extremely dense breasts [24][25][26][27][28]. On average, cancer detection increases by 2.3/1000 screens with ultrasound [26]. The added detection with ultrasound is also present when DBT screening is performed [27] and persists in follow-up rounds [26,28]. Unfortunately, the number of false-positive examinations strongly increases with ultrasound. Reported positive predictive values for biopsy vary widely, but are commonly below 10% for findings only observed at ultrasound [26] and remain relatively low, even as specificity increases in follow-up rounds [28]. In a large prospective Japanese study conducted in women aged 40-49 years, it was shown that program sensitivity improved from 77 to 91% with the addition of an ultrasound examination [25]. Moreover, they showed that the frequency of interval cancer was reduced by 50%. Nevertheless, it is not clear whether these findings can be considered a valid reference for European screening programs, where screening focuses on women aged 50-69, where the incidence of breast cancer is much higher and where women tend to have larger, more heterogeneous breasts; the results do provide initial evidence to suggest that, for some women, ultrasound may be beneficial [24][25][26][27][28][29]. Within Europe, supplemental ultrasound has been structurally implemented in Austria for women with dense breasts (BI-RADS classes c and d). In the timeframe from 2014 to 2017, the program showed a sensitivity of 71% and a specificity of 99%. The breast cancer detection rate was similar to EU standards. However, currently, the added value of supplemental ultrasound regarding cancer detection is limited [30].
Accordingly, so far, these results have been insufficient for EUSOBI to recommend that average-risk women undergoing mammographic screening should be informed about their breast density [16].
This reluctance was explained by the following facts:
- We were not convinced that the benefit/risk ratio of supplemental screening for women with extremely dense breasts was positive.
- Many European countries do not offer any form of supplemental screening.
- Informing women about their density, in the absence of high-level scientific evidence for screening alternatives, could increase anxiety and reduce screening participation.
However, this policy is now to change. This change is prompted by an analysis of the results of recent screening studies with contrast-enhanced breast MRI, particularly the DENSE trial and the ECOG-ACRIN EA1141 study.
The DENSE trial is a Dutch nationwide multicenter randomized trial in women with extremely dense breast tissue (as automatically assessed by a computer program, Volpara) with a normal mammographic screening result [31].
Of the women invited for contrast-enhanced MRI, 59% agreed to participate (4783/8061) and underwent MRI screening. Supplemental MRI detected an additional 16.5 cancers /1,000 screens in the first round.
The interval cancer rate was 0.8/1,000, compared to 4.9/1,000 in women invited but not participating and 5.0/1,000 in women in the control group (n = 32,312). In other words, undergoing supplemental MRI screening reduces the frequency of interval cancers by 84%, thus effectively reducing underdiagnosis. The number of benign findings leading to recall was 79.8/1,000 with MRI screening. The PPV of MRI-prompted biopsy was 26.3%, which we deem acceptable because it is similar to the PPV of biopsy reported for mammography.
That MRI indeed detects breast cancers earlier is also apparent from the number of cancers detected at the subsequent mammographic screening round, which was 2.0/1,000, as compared to 6.8/1,000 in the regular population of women with extremely dense breasts.
Furthermore, the next MRI screen (2 years later) yielded a supplemental detection rate of only 5.9/1,000, all of which were stage 0/1 and node-negative, providing further evidence that relevant cancers are detected predominantly earlier. Moreover, the number of benign lesions leading to recall became much smaller in the follow-up round (28.4/1,000), and therefore the PPV remained stable (PPV = 23.5% in follow-up) [32].
The results of the DENSE trial have been modelled in a microsimulation model (MISCAN) to determine the long-term impact of offering breast MRI screening to women with extremely dense breasts [33]. This model was also used to explore other scenarios, for which the model used the measured sensitivity and specificity of mammography and MRI as observed in the DENSE trial, the estimated biological behavior of breast cancers, and information on the efficacy of breast cancer treatment obtained from historical data. The results of this microsimulation model suggest that adding biennial MRI to biennial mammography, as was performed in the DENSE trial, would save 8.6 additional lives per 1,000 women invited, at a cost of 150,000 Euro per life saved, or 22,500 Euro per quality-adjusted life-year (QALY). While this is already deemed cost-effective, alternative strategies using MRI alone (without mammography) dominate this strategy in the model. For example, using MRI alone once every 4 years could be regarded as the most cost-effective screening strategy. This would save 7.6 additional lives per 1,000 women screened at a cost of 75,000 Euro per life saved or 11,500 Euro per QALY. In practice, MRI alone with a frequency of once every 2 to 3 years may be preferred to prevent non-detection of rapidly growing cancers, although a higher frequency may also lead to a somewhat higher false-positive rate (see below).
As the costs of MRI screening are mostly influenced by the cost of the MR scans [33], there is strong interest in breast MRI with shorter scan protocols, generally referred to as abbreviated breast MRI. This may enable a higher throughput and therefore a lower cost per examination.
The EA1141 ECOG-ACRIN study was an international study (mainly conducted in the USA) with 48 sites in academic, community hospital, and private practice settings. It included 1,444 women with dense breasts (heterogeneously dense, category c, or extremely dense, category d), who underwent routine screening both by DBT and abbreviated MRI. Both screening methods were conducted in randomized order and read strictly independently of each other, in order not only to compare the performance of abbreviated MRI with that of DBT, but also to investigate the use of abbreviated MRI as a stand-alone screening method [34]. MRI protocols were variable, but were all shorter than 10 min. In the first screening round of the EA1141 study, the overall cancer detection rate with MRI was 15.2/1,000, as compared to 6.2/1,000 for DBT. The respective sensitivities were 95.7% versus 39.1%. No interval cancers were observed. The positive predictive value for biopsy was somewhat lower for MRI (19.6% versus 31%), although not statistically significantly different. This is likely caused by the fact that a prior DBT needed to be available (i.e. this was a follow-up DBT screening examination), whereas none of the participants could have had a prior MRI (i.e. this was a first-round MRI examination). In summary, the ECOG-ACRIN EA1141 study shows that abbreviated MRI can achieve similar success to the standard MRI protocol used within the DENSE trial. Moreover, the study provided further evidence that in women undergoing MRI for screening, the additional contribution of x-ray-based breast imaging is very limited [34].
In summary, there is accumulating evidence that women with dense breasts are underserved by screening with mammography or DBT alone. This evidence is available both for women with heterogeneously dense and for women with extremely dense breasts. For the latter, there is now level I evidence available on the efficacy of MRI screening in reducing underdiagnosis and breast cancer-specific mortality, and on an improved benefit-risk balance of screening compared to regular mammographic screening. While MRI may also improve cancer detection in women with heterogeneously dense breasts, the risk-benefit balance is currently less clear for this group.
Consequently, EUSOBI will now recommend MRI screening in women with extremely dense breasts as specified in the "EUSOBI recommendations on screening women with dense breasts" section. This recommendation is independent of other recommendations for screening in women at increased risk due to, for example, family history or a personal history of breast cancer. The evidence is strongest for women aged 50 to 70. However, where screening starts at a different age, adopting the recommendation from that starting age could be considered.
Despite the currently available evidence, it will likely not be possible to implement MRI screening for women with extremely dense breasts immediately and everywhere. The availability of equipment, staff, and experience, as well as the general willingness of policymakers to pay for screening tests, varies from country to country and will affect the level to which these recommendations can and will be implemented.
When implementing MRI screening, it is essential to standardize the examinations, educate technologists, radiologists and other involved professionals, and monitor the quality of the images acquired. Radiologists' performance must also be monitored with a specific focus on the prevention of false-positive recalls, as these are considered a major burden to the healthy female population. The availability of MR-guided biopsy is essential for the introduction of breast MRI as a screening technique [35].
Recommendations on how to inform women
Physicians who counsel women about their respective choices regarding breast cancer screening in general, and screening in women with extremely dense breasts in particular, must have expertise in the principles of screening in general, and in screening by imaging in particular.
Such expertise is usually not routinely available among primary healthcare providers.
Accordingly, EUSOBI urges radiologists to assume this important task and directly engage in informing women about the pros and cons of screening. Educating other healthcare providers might be another way to ensure that women receive correct and objective information. The following passages may serve as a guide for women's education.

How to explain the advantages associated with breast MRI screening in women with extremely dense breasts

Based on the modelled results of the DENSE trial [31, 33], supplemental MRI screening markedly reduces interval cancers, detects cancers at an earlier stage, and is expected to prevent a substantial number of breast cancer deaths, while earlier detection may also allow less aggressive treatment. This reduced mortality, and potentially less aggressive treatment, comes at a price.
How to explain the disadvantages associated with breast MRI screening

In essence, screening in general, as well as screening by MRI in particular, has three relevant disadvantages. Women should also be thoroughly informed about these downsides of screening in order to be able to make informed choices.
First, the need to undergo the screening test
For women with extremely dense breasts, this currently implies undergoing a mammogram at least once, i.e. at the start of screening, to establish the presence of extremely dense breasts, and then contrast-enhanced breast MRI, either as a supplemental or stand-alone screening test, once every 2 to 4 years.
Hence, women should be informed about the need for an IV cannulation, the administration of an intravenous contrast agent, and the nature of a 10-min MRI examination [35, 36]. While these examinations are in general well accepted, they are not perceived as pleasant. The administration of a contrast agent implies that there is a very small risk of non-negligible (allergic) contrast reactions. Notably, these allergic or pseudoallergic reactions are rare, and the vast majority of these events are mild.
Possible side effects of the contrast agent are as follows [37]:
- Occasional (about 1 in 100): headache or nausea.
- Rare (less than 1 in 1,000): anaphylactoid or mild anaphylactic reactions leading to rash, mild drop in blood pressure, or tachycardia, not requiring specific treatment.
- Extremely rare (less than 1 in 1,000,000): severe hypersensitivity reactions (anaphylactoid or anaphylactic) with cardiovascular, respiratory or cutaneous manifestations, ranging from mild to severe, potentially life-threatening.
Second, the possibility of false-positive screening findings

Whenever screening findings are abnormal, further assessment is required to establish a final diagnosis and decide whether the finding represents breast cancer or not. Where this assessment confirms the presence of breast cancer, the respective screening finding is considered 'true-positive'; where the assessment proves the presence of a benign change, but no breast cancer, the respective screening finding is considered 'false-positive', possibly better understood when referred to as a 'false alarm'. Women should be informed that supplemental screening tests in general, and screening with breast MRI in particular, when used over several years or even decades, increase the chance that they will at least once experience the situation of a 'false alarm', i.e. receive a positive screening test which, after appropriate assessment, turns out to be a harmless finding. Of all positive (abnormal) screening findings, only about 30% are really cancerous; this value is similar for mammography and for MRI.
Women should also be informed that the 'assessment' to find out whether a positive screening finding corresponds to cancer or not will consist of additional imaging studies for most women, and/or of minimally invasive needle biopsy for some. Particularly the latter is an unpleasant and somewhat painful, yet generally well accepted, procedure [38]. Regardless, it is essential to minimize the need for additional procedures. Based on the current literature, with mammographic screening approximately 1 in 7 women will ever need additional imaging or biopsy; with 2- to 4-yearly MRI screening, this number may increase to approximately 1 in 4 to 5 women.
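As a rough, assumption-laden illustration of how such cumulative figures arise (treating screening rounds as independent and using recall probabilities of roughly 8% in the first MRI round and 3% in later rounds, in line with the DENSE recall rates quoted above), the probability of at least one recall over n rounds is:

\[
P(\text{at least one recall}) = 1 - (1 - p_{1}) \prod_{i=2}^{n} (1 - p_{i})
\]

For example, with a first-round recall probability of 0.08 followed by nine further rounds at 0.03 (biennial MRI from age 50 to 70), this gives 1 - 0.92 x 0.97^9, which is approximately 0.30, i.e. roughly 1 woman in 3 to 4; with only four further rounds (4-yearly MRI) it is about 1 in 5, consistent with the 1 in 4 to 5 quoted above.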
Where the assessment confirms the absence of breast cancer, in other words in women in whom the screening finding was false-positive, women may have experienced (in retrospect, unnecessary) fear of having breast cancer for a few days until the assessment results are available. Therefore, every effort should be made to avoid false-positive findings altogether and to keep the time to the final diagnosis short.
No woman should ever be treated for breast cancer because of a false-positive screening finding. Only when pathologic review undoubtedly shows cancerous tissue should women receive treatment for breast cancer.
Third, the possibility of overdiagnosis

A number of cancers detected during screening would never have become symptomatic before the affected woman would have died of other causes. Diagnosis of such cancers is referred to as 'overdiagnosis'. Unfortunately, overdiagnosis is not knowable at the individual level at the time of cancer detection. In practice, these women will generally be treated for their disease, as currently there is no reliable method to determine whether a specific cancer is life-threatening or represents an 'overdiagnosis'.
Based on the modelled DENSE data, about 25% of mammographically detected cancers (in 1.7% of women) and about 22% of MRI-detected cancers (in 2.1% of women) may represent overdiagnosis [33]. These are mainly low-grade in situ cancers and some very indolent invasive breast cancers. Treatment is tailored to the specific biology of the disease in a given patient. Hence, while overdiagnosis cannot be prevented, its effect is mitigated by adapting the treatment to the aggressiveness of the detected cancer.
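For orientation, the two ways of expressing overdiagnosis above are mutually consistent (a simple back-calculation, assuming both percentages refer to the same screened cohort):

\[
\frac{1.7\%\ \text{of women}}{25\%} \approx 6.8\%\ \text{of women with a mammographically detected cancer}, \qquad
\frac{2.1\%}{22\%} \approx 9.5\%\ \text{with an MRI-detected cancer}
\]

that is, over a full screening history roughly 7 to 10% of women receive a screen-detected cancer diagnosis, of whom about a fifth to a quarter may represent overdiagnosis.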
Shared decision-making
Screening in general, and MRI screening in particular, may be lifesaving. However, it should be realized that, although breast cancer is by far the most frequent type of cancer in women, and although it still represents the first or second most important cause of cancer death in women, the vast majority of women (> 85%) will never develop breast cancer during their lifetimes.

Thus, while all women should be invited to undergo breast cancer screening, only a minority will ever be diagnosed with breast cancer, and only those women can benefit from early diagnosis. The remaining women will never develop breast cancer and in these women, undergoing screening cannot be beneficial (other than assuring a woman that she does not have breast cancer), but can only have negative side effects. From a population standpoint, the (substantial) benefit for the relatively few women who do develop breast cancer justifies the side effects of screening for the vast majority who remain cancer-free.
Still, mammographic screening is commonly criticized because of false-positive findings and overdiagnosis. Even though the benefit/risk ratio increases with MRI screening, in absolute numbers, both the number of false-positive screening tests and the number of overdiagnoses increase. For the individual woman, the recognized side effects may be arguments to deviate from the population-based screening advice. This must be respected.
Choosing not to attend a given screening program, or opting for a less efficient screening method, should be a choice that resides with the individual woman herself. Such a choice is a personal decision that should never be criticized, nor penalized, not even indirectly. However, to enable women to make an informed decision, they must be well informed by their radiologists (breast imagers) and should be able to place this information in the context of their preferences and values. This is the hallmark of shared decision-making. In particular for screening, where multiple options are viable and justifiable, this participatory process is absolutely essential.
It obviously also implies that there is an obligation for the medical community to offer techniques that are proven effective; otherwise, the freedom of choice is essentially denied.
It should be noted that true application of shared decision-making clashes with current measures of the effectiveness of screening programs, which assess quality, among other things, mainly by considering the overall participation rate. Although this is sound from a public healthcare perspective, it ignores the fact that individual women's needs, priorities and values differ. What appears perfectly acceptable to one woman may be unacceptable to another. Of course, achieving or demonstrating a reduction of mortality on a population-wide level requires high participation rates. However, these concerns should not preclude or delay the recommendation of imaging tests that can effectively avoid premature death from breast cancer in individual women, even if such tests are not yet widely available.
Consequently, we should move away from evaluating the participation rate of a one-size-fits-all screening program towards more personalized screening. We should start assessing how a multifaceted screening program fits with the wishes of the women we intend to serve.
Further considerations
This recommendation is only applicable to women with extremely dense breasts. Although women with less dense breasts, e.g. those with heterogeneously dense breasts, might also benefit from other screening approaches, the evidence in this field is currently insufficient to make strong recommendations for practice. Rather, we urge the medical community to also investigate the value of MRI screening for women with less dense breast tissue in high-quality trials.
In the future, factors other than breast density alone could be used to select women at average risk who would benefit most from MRI screening. For example, density could be combined with classic risk calculators in order to select a smaller fraction of women at higher risk for MRI screening [9,39]. Likewise, patient selection using AI assessment of screening mammograms could be employed, as this would likely allow earlier detection of cancers in women with less dense breasts too [40]. However, these approaches are currently not validated in prospective studies, and it therefore remains uncertain whether they could achieve similar or even better results than the selection of women based upon density alone. Still, in the future, this may lead to other selection criteria for MRI screening than we currently recommend. Due to continuous technical innovation, other imaging modalities may eventually offer practical advantages over the currently proposed contrast-enhanced breast MRI examinations, including contrast-enhanced mammography, several ultrasound-based techniques, MRI sequences without intravenous contrast administration, and isotope-based imaging tests [41]. Unfortunately, most of these techniques have not (or only marginally) been tested in screening, and any assumption about their efficacy is therefore premature. Still, some of these techniques could be considered in women at increased breast cancer risk with contraindications to MRI screening, as they have a proven higher clinical sensitivity than mammography.
EUSOBI recommendations on screening women with dense breasts
In women with extremely dense breast tissue at average risk, underdiagnosis of relevant breast cancers is a major challenge, even with high-quality 2D digital mammography or DBT screening. Therefore, these women are underserved by current mammographic screening programs.
In view of all the above, EUSOBI has decided to adopt the recommendation for breast cancer screening provided in Fig. 1.

Acknowledgements The authors wish to thank Dr. EAM Heijnsdijk for providing supplemental insight into the cost-effectiveness analysis of the DENSE trial.
Funding The authors state that this work has not received any funding.
Declarations
Guarantor The scientific guarantor of this publication is R. M. Mann.
Conflict of interest All authors are members of the current executive board of the European Society of Breast Imaging (EUSOBI). The document reflects the current position of EUSOBI. The authors report no relevant disclosures to the contents of this work.
Statistics and biometry
No complex statistical methods were necessary for this paper.
Informed consent Not applicable
Ethical approval Institutional Review Board approval was not required because this is a society recommendation.
Methodology
• Expert consensus based on the current literature assessment

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Dietary Intervention with Blackcurrant Pomace Protects Rats from Testicular Oxidative Stress Induced by Exposition to Biodiesel Exhaust
The exposure to diesel exhaust emissions (DEE) contributes to negative health outcomes and premature mortality. At the same time, the health effects of exposure to biodiesel exhaust emissions are still under scientific debate. The aim of the presented study was to investigate, in an animal study, the effects of exposure to DEE from two types of biodiesel fuels, 1st generation B7 biodiesel containing 7% of fatty acid methyl esters (FAME) or 2nd generation biodiesel (SHB20) containing 7% of FAME and 13% of hydrotreated vegetable oil (HVO), on oxidative stress in the testes, and the possible protective effects of a dietary intervention with blackcurrant pomace (BC). Adult Fisher344/DuCrl rats were exposed by inhalation (6 h/day, 5 days/week for 4 weeks) to 2% of DEE from B7 or SHB20 fuel mixed with air. The animals from the B7 (n = 14) and SHB20 (n = 14) groups, subjected to DEE either filtered by a diesel particulate filter (DPF) or unfiltered, were maintained on standard feed. The rats from the B7+BC (n = 12) and SHB20+BC (n = 12) groups, exposed to DEE in the same way, were fed with feed containing 2% (m/m) of BC. The exposure to exhaust emissions from 1st and 2nd generation biodiesel resulted in the induction of oxidative stress in the testes. Higher concentrations of the oxidative stress markers thiobarbituric acid-reactive substances (TBARS), lipid hydroperoxides (LOOHs), 25-dihydroxycholesterol (25(OH)2Ch), and 7-ketocholesterol (7-KCh), as well as a weakened antioxidant defense (decreased reduced glutathione (GSH) level and GSH/GSSG ratio, and increased oxidized glutathione (GSSG) level), were found. Dietary intervention reduced the concentrations of TBARS, 7-KCh, and LOOHs and the GSSG level, and elevated the GSH level in the testes. In conclusion, DEE-induced oxidative stress in the testes was related to the biodiesel feedstock and the application of DPF. The SHB20 DEE without DPF technology exerted the most pronounced toxic effects. Dietary intervention with BC in rats exposed to DEE reduced oxidative stress in the testes and improved antioxidative defense parameters; however, the redox balance in the testes was not completely restored.
Introduction
Over the last few decades, several studies have emphasized adverse health effects of diesel exhaust emissions (DEE), manifested, among others, by reduced lung function, irritation symptoms, inflammatory responses, cardiovascular effects, and premature deaths [1][2][3][4][5][6]. The health effects of DEE result mainly from emission of gaseous chemicals, such as nitrogen oxides (NOx), hydrocarbons (HC), carbon monoxide (CO), soot, and emission of particulate matter (PM) [7]. Diesel PM is mostly composed of elemental carbon core and substances adsorbed on its surface, such as polycyclic aromatic hydrocarbons (PAH), PAHderivatives, and inorganic compounds (metals, ions, inorganic acids, salts). Both gaseous and particulate components of DEE induce cellular oxidative stress, causing DNA oxidation and single strand break formation, as well as stimulate release of inflammatory cytokines (such as interleukin (IL)-1α and -β and tumor necrosis factor (TNF)-α, as well as IL-8), leading to higher risk of autoimmune reactions [8][9][10].
The exposure of animals to DEE has previously been shown to increase morphological sperm abnormalities, induce ultrastructural changes in Leydig cells, and reduce the level of luteinizing hormone (LH) [11]. Further in vivo studies demonstrated that DEE increased testicular concentration of testosterone and induced degeneration of tubules [12]. In another study, it was found that chronic exposure to DEE impaired the fertility of male mice, decreased the sperm count/motility, and disrupted the spermatogenesis [13]. While the mechanisms underlying the adverse reproductive and developmental effects of DEE from combustion of diesel or biodiesel blends are not fully understood, the promotion of oxidative stress and inflammation have been shown to play an important role in the reproductive toxicity [14].
As significant reduction in DEE exposure due to reduction of diesel oil consumption is currently unlikely, lifestyle changes might be the option to reduce reproductive toxicity, such as intake of dietary natural phytochemicals with established anti-inflammatory and antioxidant properties. Among dietary natural phytochemicals, blackcurrant fruits (Ribes nigrum) (BC) exhibit strong antioxidant, anti-inflammatory, antimicrobial, antitumor, and immunomodulatory properties due to the presence of many polyphenolic compounds, such as flavonoids and anthocyanins [15][16][17][18][19][20][21]. Due to their ROS-scavenging, anti-inflammatory, and metal-chelating abilities blackcurrant fruits have been intensively studied during recent decades [22]. However, to the best of our knowledge there are no data describing a potential protective role of blackcurrant fruits against DEE-induced reproductive toxicity in vivo. Therefore, the present study was conducted to address this question with two main objectives: to determine whether 28-day exposure to DEE from combustion of the 1st and 2nd generation biofuels disrupt the testicular oxidative/anti-oxidative balance and to investigate whether a dietary intervention with blackcurrant pomace can attenuate the DEE-induced testicular oxidative stress in Fisher344/DuCrl rats.
Reagents and Chemicals
The reagents and chemicals were purchased from Sigma-Aldrich (St. Louis, MO, USA), unless otherwise indicated. The list of reagents and chemicals necessary for the animal experiment and biochemical analyses is provided in the Supplementary Materials.
Animal Study
All procedures were approved by the Third Local Animal Care and Use Committee in Warsaw, Poland (Certificate of Approval No WAW3/04/2014), in accordance with Polish and EU standards and legal regulations, as well as in line with the 3R rules. Healthy adult male Fisher344/DuCrl rats (306.9 ± 1.9 g at the beginning of the experiment, n = 59) were obtained from Charles River Laboratories, Inc. (Schulzfeld, Germany). The rats were acclimatized to animal house conditions at the Institute of Veterinary Medicine (WULS-SGGW) for 1 week (22 ± 1 °C, 50 ± 5% relative humidity, 12 h light-dark cycle). The animals were fed AIN-93M pellets for laboratory rats (ZooLab, Sędziszów, Poland) and had free access to water. After the acclimatization period, the rats were randomly assigned into 9 groups, including 8 experimental groups and 1 control group, according to the scheme presented in Figure 1.
Exposure of Animals to DEE
The exposure of rats to DEE and detailed chemical exhaust analysis have been previously described by our group [23][24][25][26]. Briefly, DEE were generated from a Fiat Panda 1.3 JDT (2014) with a Euro 5 engine (Common Rail 3rd generation injection system, 1248 cm 3 , max. power 75 bhp, max. torque 190 Nm). Rats were exposed in whole body inhalation chambers to DEE from two biodiesel blends (B7-the 1st generation biofuel containing 7% v/v fatty acid methyl esters (FAME) or SHB20-the 2nd generation biofuel containing 7% v/v FAME and 13% v/v synthetic hydrotreated vegetable oils (HVO)) diluted to 2% with air, according to the scheme presented at Figure 1.
The inhalation procedure was described in detail in our previously published papers [23][24][25]27] and was also illustrated in Figure S2 in Supplementary Materials. Estimated concentrations of substances in the air inside the test chambers (Tables S6 and S7 in Supplementary Materials) were described in our previously published paper [23]. For inhalation (6 h/day; 5 days a week for 4 weeks, according to OECD Guideline for the Testing of Chemicals No 412 [28]), the individual cages with rats were placed in two separate inhalation chambers equipped with a rack for cages). A modification of the car engine allowed to supply an unfiltered or filtered DEE using diesel particulate filter (DPF) technology. Inhalation was performed separately for each type of biodiesel blend. Animals from four experimental groups (B7 (+DPF), SHB20 (+DPF), B7 (-DPF), SHB20 (-DPF)), exposed to filtered or unfiltered DEE, were maintained on standard feed without blackcurrant pomace (BC). Another four experimental groups (B7+BC (+DPF), SHB20+BC (+DPF), B7+BC (-DPF), SHB20+BC (-DPF)) exposed in the same condition ( Figure 1) were maintained on the same feed supplemented with BC (20 g/kg feed). The animals were given feed and water ad libitum. Chemical analysis of animal feed was performed in Merieux NutriScience Silliker Laboratory (Warsaw, Poland) and the results are presented in Table S1 (Supplementary Materials). Chemical analysis of selected flavonoids and phenolic acids in blackcurrant pomace confirmed that more than 93% (w/w) of analyzed compounds were anthocyanins. The characteristic of phenolics in experimental feed is presented in Table S2 (Supplementary Materials). Individual feed consumption was monitored once a day and body weight gain of rats were determined weekly.
After the experiment (4 weeks), rats were anesthetized with isoflurane (Aerrane Isofluranum, Baxter, Deerfield, IL, USA) and bled by heart puncture. Next, both testes were dissected, washed with ice-cold PBS, weighed, and frozen in liquid nitrogen. The tissues were stored at −80 °C for biochemical analysis. Testicular tissue was homogenized in 5-10 mL of cold buffer (50 mM potassium phosphate, pH 7.5, with 1 mM EDTA) per gram of tissue and then centrifuged at 10,000× g for 15 min at 4 °C.
Histological Assessment of Testis
The collected testicular samples were fixed in 10% buffered formaldehyde for 24 h and embedded in paraffin. Four-micron sections were cut from paraffin blocks, fixed on microscope glass slides, stained with hematoxylin and eosin, and evaluated by a veterinary pathologist.
Oxidative Stress Parameters in Testis
The level of testicular lipid peroxidation was evaluated based on formation of malondialdehyde (MDA) by measuring thiobarbituric acid-reactive species (TBARS) as described by Ohkawa et al. [30] with some modifications. The TBARs concentration was calculated from a standard curve using 1,1,3,3-tetramethoxypropane, which gives MDA upon hydrolysis during the assay. The concentration of lipid hydroperoxides in testicular homogenates was analyzed by the method of Yagi [31]. The absorbance of samples was read at 665 nm. The concentration of lipid hydroperoxides was calculated from a standard curve using cumene hydroperoxide. The concentrations of 25-dihydroxycholesterol (25(OH)2Ch) and 7-ketocholesterol (7-KCh), the two major cholesterol oxidation by-products, were assessed in testicular homogenates using the high-performance liquid chromatography (HPLC-UV) method according to Suchecka et al. [32]. Analyses of lipid peroxidation, lipid hydroperoxides, and oxysterols were assessed in technical duplicates.
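Reading a concentration off a standard curve, as described above for TBARS, is a simple linear inversion. The sketch below is a minimal illustration with hypothetical calibration values (the standard concentrations, absorbances, and sample reading are illustrative, not the authors' data):

```python
# Minimal sketch of standard-curve quantification (hypothetical values):
# fit absorbance vs. standard concentration, then invert the fit for unknowns.
import numpy as np

# Hypothetical calibration points: standard concentration (nmol/mL) vs. absorbance.
std_conc = np.array([0.0, 2.5, 5.0, 10.0, 20.0])
std_abs = np.array([0.01, 0.11, 0.22, 0.45, 0.89])

# Linear fit A = slope * C + intercept.
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def conc_from_absorbance(absorbance: float) -> float:
    """Invert the calibration line to estimate the concentration of an unknown sample."""
    return (absorbance - intercept) / slope

sample_abs = 0.33  # absorbance of a hypothetical testis homogenate sample
print(f"Estimated TBARS concentration: {conc_from_absorbance(sample_abs):.2f} nmol/mL")
```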
Anti-Oxidative Defense System Parameters in Testis
The level of total antioxidant status (TAS) and activity of superoxide dismutase (SOD), glutathione peroxidase (GPx), and reductase (GR) in the testis homogenates were determined spectrophotometrically using Randox reagents (Randox Laboratories Ltd., Crumlin, Co. Antrim, UK), cat no: NX2332, SD125, RS505, GR2368, respectively, according to the manufacturer's protocols. Reduced glutathione (GSH) and its oxidized form glutathione disulfide (GSSG) were determined using HPLC method [33,34] according to the procedures described in [35]. The analysis of TAS level and enzymes activities were performed in technical duplicates.
Statistical Analysis
Statistical analysis was conducted using Statistica software version 13.3 [36]. All data were analyzed using two-way analysis of variance (ANOVA) followed by a post hoc Duncan's test, separately for animals exposed to unfiltered and filtered DEE. The comparison of data from experimental groups versus the control group was performed using one-way analysis of variance (ANOVA) followed by a post hoc Duncan's test. Statistical significance was set at p < 0.05. All results are expressed as mean ± SEM (standard error of the mean). The figures were prepared in GraphPad Prism version 9.2.0 (332) for Windows (GraphPad Software, San Diego, CA, USA) [37].
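For readers who want to reproduce a comparable analysis outside Statistica, the sketch below runs a two-way ANOVA on a hypothetical long-format dataset (group sizes, column names, and values are illustrative, not the study data). Duncan's multiple range test is not available in statsmodels, so Tukey's HSD is shown here as a stand-in post hoc procedure:

```python
# Minimal sketch: two-way ANOVA (fuel x diet) with a post hoc comparison.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical measurements: TBARS in testes by fuel type and diet.
data = pd.DataFrame({
    "fuel":  ["B7"] * 6 + ["SHB20"] * 6,
    "diet":  (["standard"] * 3 + ["BC"] * 3) * 2,
    "tbars": [1.2, 1.3, 1.1, 0.9, 1.0, 0.8, 1.8, 1.9, 1.7, 1.2, 1.3, 1.1],
})

# Two-way ANOVA with interaction term (type II sums of squares).
model = ols("tbars ~ C(fuel) * C(diet)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))

# Post hoc pairwise comparisons across the four fuel x diet groups.
data["group"] = data["fuel"] + "_" + data["diet"]
print(pairwise_tukeyhsd(data["tbars"], data["group"], alpha=0.05))
```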
Animal Observation, Weight Changes, Gonadosomatic Index, and Microscopic Evaluation of Rat Testis
All rats appeared to be in a good health condition without any negative behavioral symptoms throughout the experimental period. Moreover, no exposure-related mortality was noted. The initial animals body weights did not significantly differ between experimental groups (data not shown). Exposure to DEE and dietary intervention with BC pomace revealed lower weekly food intake of rats from SHB20+BC exposed to unfiltered DEE compared with B7+BC and CTR group ( Figure S1A,B). Higher average daily intake of phenolic compounds was observed in rats from B7+BC exposed to DEE with or without DPF-treatment compared to animals from SHB20+BC group (Table S3, Supplementary Materials).
The mean initial body weight of rats did not significantly differ between the control group and the experimental groups exposed to DEE with or without DPF-treatment. Although the animals from the CTR group had a higher initial body weight, one-way ANOVA at each time point revealed a significantly lower body weight only after the first week of the experiment in animals from the SHB20 and SHB20+BC groups exposed to DEE with or without DPF-treatment, as compared to the control group (Figure 2A,B). Dietary intervention with BC did not affect the weight of the rats significantly.
Figure 2. Body weight change of rats exposed to DEE from B7 or SHB20 biofuels with (A) or without (B) DPF-treatment and with or without blackcurrant pomace supplementation (BC) during the experiment. CTR-control group; B7-1st generation biofuel; SHB20-2nd generation biofuel; BC-blackcurrant pomace; DPF-diesel particulate filter. Data are expressed as mean ± SEM; + -Statistically significant difference from the control group (CTR), + p < 0.05; ++ p < 0.01, one-way ANOVA with Duncan's post hoc test.

The effect of DEE exposure on the testis gonadosomatic index (GSI) is shown in Figure 3A,B. Analysis of variance (ANOVA) with Duncan's post hoc test revealed a significant increase of GSI in rats from the B7 (with DPF-treatment) group (Figure 3A), as well as in animals from the B7 (with or without BC) and SHB20 groups without DPF-treatment (Figure 3B), as compared to the corresponding control group (p < 0.05). The results showed no effect of BC supplementation on male GSI.

Figure 3. Gonadosomatic index (GSI) of testes of rats exposed to DEE from B7 or SHB20 biofuels without (A) or with (B) DPF-treatment and with or without blackcurrant pomace supplementation (BC) for 28 days. CTR-control group; B7-1st generation biofuel; SHB20-2nd generation biofuel; BC-blackcurrant pomace; DPF-diesel particulate filter; + denotes statistically significant difference vs. CTR group, p < 0.05. Data are expressed as mean ± SEM.

In all groups, histological analysis of testicular sections revealed normal architecture of the seminiferous tubules, a normal interstitial compartment with Leydig cells, and regular seminiferous epithelium containing germ cells at all stages of differentiation and a normal tubular lumen (Figure 4).

Figure 4. Representative microphotographs of hematoxylin-eosin-stained cross sections of testes of rats exposed to DEE from B7 or SHB20 biofuels with or without DPF-treatment and with or without blackcurrant pomace supplementation (BC) for 28 days. Testes of rats exposed to (A) filtered DEE from B7 biodiesel blend, (B) filtered DEE from B7 biodiesel blend and fed with feed with BC pomace, (C) unfiltered DEE from B7 biodiesel blend and fed with BC pomace, (D) filtered DEE from SHB20 biodiesel blend, (E) filtered DEE from SHB20 biodiesel blend and fed with feed with BC pomace, and (F) testes of control rats. B7-1st generation biofuel; SHB20-2nd generation biofuel; BC-blackcurrant pomace; DPF-diesel particulate filter; CTR-control group. Images were captured at 400× magnification; scale bars correspond to 50 µm.
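The text does not define the gonadosomatic index explicitly; the conventional definition, which we assume was used here, is the testis-to-body-mass percentage:

\[
\text{GSI} = \frac{\text{testis mass}}{\text{body mass}} \times 100\%
\]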
Oxidative Stress Markers
As illustrated in Figure 5A,B, the exposure of rats to SHB20 DEE with DPF-treatment and B7 or SHB20 DEE without DPF-treatment significantly increased the concentration of TBARS in the testes compared with the control group. There was a significant difference in TBARS concentration between B7 and SHB20 DEE groups, with higher concentration in the testes of rats exposed to SHB20 DEE with and without DPF-treatment. Dietary intervention with BC significantly reduced increase of concentration of TBARS in the testes of rats exposed to SHB20 DEE without or with DPF-treatment. Figure 5C,D show that in the testes of rats exposed to B7 or SHB20 DEE without and with DPF-treatment, with exception of B7(-DPF)+BC group, the concentration of lipid hydroperoxides (LOOHs) significantly increased, compared with the control group. There was a significant difference in the concentration of LOOHs in the testes between B7 and SHB20 DEE groups, with higher concentration in rats exposed to SHB20 DEE with DPF. Dietary intervention significantly ameliorated the increase in concentration of LOOHs in the testes of rats exposed to B7 DEE without DPF-treatment and in rats exposed to B7 and SHB20 DEE with DPF-treatment.
Figure 5. The concentration of thiobarbituric acid-reactive substances (TBARS) (A,B) and lipid hydroperoxides (LOOHs) (C,D) in the testes of rats exposed to DEE from B7 or SHB20 biofuels with and without DPF-treatment and with or without blackcurrant pomace supplementation (BC) for 28 days. Data are expressed as mean ± SEM. # -Significant difference between normal and blackcurrant pomace supplemented diet, # p < 0.05; ## p < 0.01; ### p < 0.001; * -Statistically significant difference between B7 and SHB20 DEE, *** p < 0.001; + -Statistically significant difference from the control group (CTR), + p < 0.05; ++ p < 0.01; +++ p < 0.001, one-way ANOVA with Duncan's post hoc test.
As presented in Figure 6A,B, the exposure of rats to B7 or SHB20 DEE with and without DPF-treatment significantly enhanced the concentration of oxidized cholesterol metabolite (25(OH)2Ch), versus the control group. There was a significant difference in the concentration of 25(OH)2Ch between B7 and SHB20 DEE groups, with higher concentration in the testes of rats exposed to DEE from SHB20 biodiesel fuel with DPF-treatment. The latter effects of DEE were prevented by dietary intervention with BC in rats exposed to SHB20 DEE without DPF-treatment.
The results of 7-KCh concentration in the testis, presented in Figure 6C,D, showed that the rats exposed to B7 or SHB20 DEE with or without DPF-treatment had significantly increased concentrations of this oxysterol compared to the CTR group, except the animals from the B7+BC group exposed to DEE with DPF-treatment. Furthermore, the animals exposed to DEE from SHB20 biodiesel fuel with or without DPF-treatment presented a higher concentration of 7-KCh in the testis than the rats exposed to DEE from B7 biofuel. The effects of DEE were inhibited by dietary intervention with BC in rats exposed to B7 DEE with DPF-treatment and in animals exposed to SHB20 DEE without DPF-treatment.

Figure 6. The concentration of 25-dihydroxycholesterol (25(OH)2Ch) (A,B) and 7-ketocholesterol (7-KCh) (C,D) in the testes of rats exposed to DEE from B7 and SHB20 biofuels with and without DPF-treatment and with or without blackcurrant pomace supplementation (BC) for 28 days. Data are expressed as mean ± SEM. # -Statistically significant difference between normal and blackcurrant pomace supplemented diet, ## p < 0.01; ### p < 0.001; * -Statistically significant difference between B7 and SHB20 DEE, * p < 0.05; *** p < 0.001; + -Statistically significant difference from the control group (CTR), + p < 0.05; ++ p < 0.01; +++ p < 0.001, one-way ANOVA with Duncan's post hoc test.
Antioxidant Defense Markers
The effects of dietary intervention with blackcurrant pomace in rats exposed to B7 and SHB20 DEE without and with DPF-treatment on antioxidant defense markers are shown in Figures 7-9. The exposure of rats to B7 or SHB20 DEE without and with DPFtreatment, with exception of SHB20(-DPF) group, increased the total antioxidant potential (TAS) versus controls ( Figure 7A,B). There was a significant difference in the TAS between B7 and SHB20 DEE groups, with higher level in the testes of rats exposed to B7 DEE without DPF-treatment. Dietary intervention significantly increased the TAS in the testes of rats exposed to B7 and SHB20 DEE without and with DPF-treatment.
Figure 7. The total antioxidant status (TAS) (A,B) and superoxide dismutase (SOD) activity (C,D) in the testes of rats exposed to DEE from B7 and SHB20 biofuels with or without DPF-treatment and with or without blackcurrant pomace supplementation (BC) for 28 days. Data are expressed as mean ± SEM. # -Statistically significant difference between normal and blackcurrant pomace supplemented diet, ## p < 0.01; ### p < 0.001; * -Statistically significant difference between B7 and SHB20 DEE, * p < 0.05; *** p < 0.001; + -Statistically significant difference from the control group (CTR), + p < 0.05; ++ p < 0.01; +++ p < 0.001, one-way ANOVA with Duncan's post hoc test.

Figure 7C,D show that the superoxide dismutase (SOD) activity was significantly higher only in the testes of rats exposed to SHB20 DEE without DPF-treatment and with dietary intervention, and in the testes of rats exposed to B7 DEE with DPF-treatment and with BC dietary intervention, compared to the corresponding groups without dietary intervention.
As presented in Figure 8A,B, there was no difference in the activity of glutathione peroxidase (GPx) in the testes between control rats and rats exposed to B7 or SHB20 DEE without and with DPF-treatment. Dietary intervention with BC significantly increased the activity of GPx in the testes of rats exposed to SHB20 DEE without and with DPF-treatment, compared to the corresponding groups without dietary intervention.
As illustrated in Figure 8C,D, no changes in the activity of glutathione reductase (GR) in the testes were observed between rats exposed to B7 and SHB20 DEE without and with DPF-treatment and the control rats. Dietary intervention with BC markedly increased the activity of GR in the testes of rats exposed to SHB20 DEE with DPF-treatment but reduced the activity of GR in the testes of rats exposed to B7 DEE without DPF-treatment, compared to the corresponding groups without dietary intervention.
Figure 8. The activity of glutathione peroxidase (GPx) (A,B) and glutathione reductase (GR) (C,D) in the testes of rats exposed to DEE from B7 or SHB20 biofuels with or without DPF-treatment and with or without blackcurrant pomace supplementation (BC) for 28 days. Data are expressed as mean ± SEM. # -Statistically significant difference between normal and blackcurrant pomace supplemented diet, ## p < 0.01; ### p < 0.001; * -Statistically significant difference between B7 and SHB20 DEE, * p < 0.05; ** p < 0.01; *** p < 0.001; + -Statistically significant difference from the control group (CTR), +++ p < 0.001, one-way ANOVA with Duncan's post hoc test.

Figure 9A,B shows that the concentration of the reduced form of glutathione (GSH) in the testes was significantly higher in the groups of rats exposed to B7 DEE regardless of the DPF-treatment and in rats from the SHB20+BC group inhaling filtered DEE. However, a markedly lower level of GSH was demonstrated in both groups of rats exposed to SHB20 DEE (without and with DPF-treatment), as well as in rats from the SHB20 group with DPF-treatment, versus controls. Dietary intervention with BC significantly increased the concentration of GSH in the testes of rats exposed to SHB20 DEE with DPF-treatment, as compared to the corresponding group without dietary intervention. In turn, the concentration of the oxidized form of glutathione (GSSG) (Figure 9C,D) was significantly higher in rats exposed to B7 and SHB20 DEE without and with DPF-treatment, compared to the control group. Dietary intervention with BC significantly decreased the concentration of GSSG in the testes of rats exposed to SHB20 DEE without and with DPF-treatment, compared to the corresponding groups without dietary intervention.

Figure 9E,F shows a significant reduction in the cellular GSH:GSSG ratio in the testes of rats exposed to SHB20 DEE without or with DPF-treatment, compared to the control group. Dietary intervention prevented a drop in the GSH:GSSG ratio only in the testes of rats exposed to SHB20 DEE with DPF-treatment, compared to the corresponding group without BC.

Figure 9. The concentration of reduced glutathione (GSH) (A,B), oxidized glutathione (GSSG) (C,D), and the GSH to GSSG ratio (E,F) in the testes of rats exposed to DEE from B7 and SHB20 biofuels with and without DPF-treatment and with or without blackcurrant pomace supplementation (BC) for 28 days. Data are expressed as mean ± SEM. # -Statistically significant difference between normal and blackcurrant pomace supplemented diet, ## p < 0.01; ### p < 0.001; * -Statistically significant difference between B7 and SHB20 DEE, * p < 0.05; ** p < 0.01; *** p < 0.001; + -Statistically significant difference from the control group (CTR), + p < 0.05; ++ p < 0.01; +++ p < 0.001, one-way ANOVA with Duncan's post hoc test.
Discussion
Our results revealed that even though exposure to DEE from the combustion of both biofuels had no effect on the general health condition and food intake, a significantly lower body weight gain was observed in rats after the first week of inhalation. This result differs from other studies, which revealed no relationship between exposure to DEE and body weight gain in adult animals [12,38]. However, studies on the effects of in utero DEE exposure showed reduced body weight of male offspring at postnatal day 90 [39]. A precise mechanism of these effects has not yet been established.
The differences in body weight gain did not affect testis morphology. Despite the observed increase in relative testicular weight (GSI) in rats exposed to B7 DEE (with or without DPF), as well as in the SHB20 and B7+BC (without DPF) groups compared to control animals, the rats from all experimental groups displayed normal testicular morphology. This result is in line with the observation published by Watanabe and Oonuki [40], who reported no structural malformations in the testes of diesel exhaust-exposed rats. In contrast to these findings, Kisin et al. [11] and Yang et al. [13] reported clustering of dystrophic seminiferous tubules with arrested spermatogenesis and the presence of degenerating spermatocytes in mice exposed to biodiesel B50. Similarly, Ono et al. [41] and Kubo-Irie et al. [42] showed that exposure to DEE during pregnancy caused harmful effects in the testes of male offspring, manifested as degenerated seminiferous tubules with multinucleated giant cells. The discrepancy between our results and the conclusions of other publications is likely caused by differences in the type of fuel, exposure conditions, and animal species. We tested two types of biofuels: B7 biofuel (containing 7% FAME, fatty acid methyl esters with the chemical structure CH3(CH2)nCOOCH3) and SHB20 biofuel (containing 7% v/v FAME and 13% v/v HVO, hydrotreated vegetable oils free of aromatics, oxygen and sulfur, with the chemical structure CnH2n+2) [43]. The exposure of rats to the gaseous and particulate emissions from the diesel engine for both fuels (2.1-2.2% (v/v) of inhaled air) was environmentally relevant and within the range that humans are likely to be exposed to in a city agglomeration. Based on the results presented in Tables S6 and S7 (in Supplementary Materials), a higher concentration of unburned hydrocarbons was noted inside the inhalation chamber with DPF-treatment, regardless of the type of biodiesel blend. Moreover, the application of DPF reduced the particulate matter concentration in B7 and SHB20 emissions by 92% and 91%, respectively. The concentration of polycyclic aromatic hydrocarbons (PAHs) differed depending on the type of biodiesel blend and DPF-treatment; generally, a more than 90% reduction in PAH concentration was noted inside the chambers with DPF application. However, Kisin et al. [11] tested B50 biofuel containing 50% v/v FAME and determined abnormalities in the mouse reproductive system after exposure to particulate emissions from a diesel engine at a cumulative dose of 60 µg of total carbon per mouse. Likewise, the other authors listed above examined the detrimental effects of particulate emissions from a diesel engine (1.0 mg/m³, from day 2 until day 16 post coitum) on mouse spermatogenesis in offspring. Our assumption is also consistent with the results of our own and other experimental studies showing that the type of biofuel and the concentration of biodiesel blends may affect the toxicological characteristics of diesel particulate matter due to potential increases of certain toxic compounds in its composition [23,44,45]. Dietary intervention with blackcurrant pomace had no effect on food intake, body weight gain, the gonadosomatic index (GSI) of the testes, or the architecture of the seminiferous tubules of rats exposed to DEE.
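For reference, the gonadosomatic index used above to express relative testicular weight is conventionally computed as the gonad-to-body-mass percentage (this is the standard definition; the authors' Methods section should be consulted for the exact form used):

$$\mathrm{GSI}\,(\%) = \frac{m_{\mathrm{testes}}}{m_{\mathrm{body}}} \times 100$$

so a change in GSI can reflect a change in testis mass, in body mass, or in both, which is why it is read alongside the body weight gain data discussed above.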
Dysfunction of the testes is suggested to be associated with the induction of oxidative stress. Due to the relatively higher level of unsaturated fatty acids in the testes than in other organs, male gonads are particularly susceptible to increased oxidative stress induced by, e.g., different environmental factors [46,47]. A well-known symptom of oxidative stress is lipid peroxidation due to the overproduction of free radicals and peroxides in the intracellular and extracellular environment. We therefore decided to measure malondialdehyde (MDA) as thiobarbituric acid-reactive substances (TBARS), a critical biomarker in the study of polyunsaturated fatty acid (PUFA) peroxidation [48,49]. Our results revealed that the exposure of rats to B7 and SHB20 DEE significantly increased the concentration of TBARS, and this effect was most pronounced in rats treated with SHB20 DEE without DPF-treatment. These results corroborate the findings of Liu et al. [50,51], who demonstrated an increased concentration of MDA in the testes of rats intratracheally administered PM2.5 from DEE. An increased level of MDA was also observed in the testes of mice exposed to 3-methyl-4-nitrophenol, a component of diesel exhaust particles [52]. Aside from TBARS, lipid hydroperoxides (LOOHs) are useful biomarkers of early-stage lipid peroxidation. Lipid hydroperoxides, very unstable products of the reaction of peroxyl radicals (ROO•) with polyunsaturated fatty acids, are considered one of the most important markers of cellular oxidative stress [53,54]. Our findings showed that the exposure of rats to DEE significantly increased the concentration of LOOHs, with the highest concentration in rats exposed to SHB20 DEE with DPF-treatment. Another reliable marker of oxidative stress in vivo is the peroxidation of products of cholesterol metabolism, namely oxysterols [55]. In the present study, we determined the levels of two oxysterols that are among the most common products of the reaction between cholesterol and oxygen radicals: 25-dihydroxycholesterol (25(OH)2Ch) and 7-ketocholesterol (7-KCh) [56,57]. 7-Ketocholesterol is the most toxic oxysterol within the organism; it strongly promotes ROS formation in cells and induces oxiapoptophagy, a cell death process triggered by certain oxysterols [56]. We noted that both types of DEE significantly enhanced the concentrations of 25(OH)2Ch and 7-KCh in the testes, and DEE from SHB20 biodiesel fuel appeared to be the most potent in increasing them. To the best of our knowledge, the effects of DEE from the combustion of biodiesel fuels on the generation of oxidatively modified forms of cholesterol in the testes have not been evaluated so far, but Rao et al. [58] reported a marked increase in 7-KCh concentration in plasma lipoproteins (VLDL and LDL/IDL) and in the aorta of mice in response to inhaled particulate matter (PM2.5) during chronic air pollution exposure.
Based on the above-mentioned data, our results clearly demonstrate that both B7 and SHB20 DEE induced oxidative stress in the testes of rats, with stronger effects for SHB20, and that DPF-treatment reduced several of these effects. The highest toxic effects were observed for SHB20 DEE without DPF technology. We believe that the observed differences in oxidative toxicity between B7 DEE and SHB20 DEE resulted from dissimilarities in the composition of the DEE-derived particles. Some studies investigating the exposure of animals to DEE particulate matter indicated that altering the applied engine load changed the composition and biological reactivity of the particles [59,60]. Indeed, the analysis of DEE from the combustion of both biofuels used in this study clearly demonstrated a difference in the chemical characteristics of the emissions from the two tested fuels [26,61,62]. Though the concentrations of the gaseous components (CO, CO2, NOx) were approximately in the same range for B7 and SHB20 DEE, the combustion of SHB20 biofuel generated a higher fraction of ultrafine particles with a diameter of less than 100 nm (~85%) than the combustion of B7 biofuel (~55%). Compared with larger particles, ultrafine particles penetrate rapidly into various body organs, are cleared more slowly, are retained longer after deposition, and have a greater ability to carry toxic chemicals on their surface due to a larger surface-to-mass ratio [63,64]. In line with this, the observed higher toxicity of DEE without DPF-treatment, as compared with DPF-treatment, was most likely related to a higher concentration of particulate matter. The efficiency of DPF-treatment was previously confirmed in our FuelHealth project, showing a reduction in the total mass of particulate matter by approximately 90% [61].
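The cited advantage of ultrafine particles in carrying surface-bound chemicals follows from simple geometry: for an idealized smooth spherical particle of diameter d and density ρ (an approximation only, since real soot exists as irregular agglomerates),

$$\frac{S}{m}=\frac{\pi d^{2}}{\rho\,\pi d^{3}/6}=\frac{6}{\rho d},$$

so the specific surface area scales as 1/d, and a 100 nm particle offers roughly ten times the surface per unit mass of a 1 µm particle of the same density.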
The higher pro-oxidative effect of SHB20 DEE exposure, as compared with B7 DEE, may be attributed to the presence of redox-active transition metals found only in SHB20-derived particles [62]. Particle-associated metals such as copper and iron may undergo the Fenton reaction, promoting the formation of free radicals [65,66]. However, the effects might also be due to more secondary responses arising from the activation of cellular redox machinery [63].
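For clarity, the Fenton chemistry referred to here is the classical iron-catalyzed decomposition of hydrogen peroxide into the highly reactive hydroxyl radical (the copper couple Cu⁺/Cu²⁺ behaves analogously):

Fe²⁺ + H₂O₂ → Fe³⁺ + OH⁻ + •OH
Fe³⁺ + O₂•⁻ → Fe²⁺ + O₂

The second (Haber-Weiss-type) step regenerates Fe²⁺, so even trace amounts of redox-active metal on the particle surface can sustain radical production.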
Since oxidative stress develops because of an imbalance favoring oxidants, leading to disrupted redox signaling and molecular damage, we investigated the effects of DEE on antioxidant defense systems [67][68][69], expressed as the total antioxidant status (TAS), which describes the dynamic equilibrium between prooxidants and antioxidants [70], and as the levels/activities of antioxidant enzymes and low molecular weight radical scavengers. In our study, the treatment with DEE from the combustion of both biofuels, without and with DPF-treatment, significantly increased the level of TAS, with the highest value in the testes of rats exposed to B7 DEE without DPF-treatment. The opposite results were presented by Nemmar et al. [71], who studied the interaction of diesel exhaust particles with human, rat and mouse erythrocytes in vitro and found that TAS decreased in a dose-dependent manner in rat and mouse erythrocytes but was not affected in human erythrocytes. The observed increase of TAS may, in our opinion, reflect an adaptive redox response.
Apart from TAS, enzymatic antioxidants, such as superoxide dismutase (SOD), glutathione peroxidase (GPx), and glutathione reductase (GR), as well as non-enzymatic antioxidants, such as glutathione (GSH), play important roles in the antioxidative defense system. In the present study, we did not find a statistically significant difference in the activities of SOD, GPx, and GR in testicular homogenates from the exposed groups. A similar observation was already reported for testicular GPx activity in male rats after treatment with motorcycle exhaust [72]. However, previous studies examining the testicular activities of SOD and GR following exposure to DEE have yielded somewhat inconclusive findings. Liu et al. [50] showed that long-term exposure of male Sprague-Dawley (SD) rats to PM2.5 (20 mg/kg for 4 weeks) from DEE decreased SOD activity in the testes, whereas Cao et al. [73] demonstrated a dose-dependent increase in SOD activity in the testes of SD rats exposed to fine PM (10 or 20 mg/kg/day for 4 weeks). Along with the lack of changes in SOD, GPx, and GR, we found a decreased level of the reduced form of glutathione (GSH) and a decreased GSH/GSSG ratio in rats exposed to SHB20 DEE, accompanied by an increased level of the oxidized form of glutathione (GSSG). This result may be interpreted as evidence of redox imbalance, which in turn may reflect prolonged oxidative stress and impairment of antioxidant defenses due to a low ability to scavenge ROS. This conclusion is based on several observations showing that the amount of free GSH decreases when cellular systems are unable to counteract oxidative-mediated insults [74]. Moreover, it should be kept in mind that an increased level of GSSG causes GSH depletion, which may result in diminished antioxidant defense [75]. Interestingly, we observed increased levels of GSH and GSSG without an increase in the GSH/GSSG ratio in rats exposed to B7 DEE. The significant increase of the GSH level is probably a response to the increased generation of ROS, and a concomitant increase in the GSSG level may suggest de novo GSH synthesis leading to protection against oxidative stress [76]. Comparing our results to the literature, partially similar results were reported by Kisin et al. [11], who observed a reduced level of GSH in the testes of C57BL/6 mice exposed to particles from the combustion of BD50 biofuel. Another group also demonstrated a decreased GSH level and GSH to GSSG ratio in the testes of ICR mice exposed intragastrically to 1-nitropyrene from DEE [77].
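The emphasis on the GSH:GSSG ratio, rather than on GSH alone, can be read through the half-cell potential of the GSSG/2GSH couple. Assuming the commonly used Nernst formulation (with a standard potential of roughly −240 mV at pH 7.0, a textbook value rather than one measured in this study),

$$E_{\mathrm{GSSG/2GSH}} = E^{\circ\prime} - \frac{RT}{2F}\,\ln\frac{[\mathrm{GSH}]^{2}}{[\mathrm{GSSG}]},$$

so both a falling GSH:GSSG ratio and a falling absolute GSH concentration shift the couple toward a more oxidizing intracellular environment, consistent with the interpretation given above for the SHB20-exposed groups.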
According to the currently accepted hypothesis, oxidant and antioxidative parameters interact directly with pro-inflammatory markers, setting up two possible scenarios. On the one hand, it is believed that when ROS production exceeds their clearance, excessive or accumulated ROS can disrupt the redox balance and induce oxidative stress and a proinflammatory response. On the other hand, a considerable body of experimental evidence has shown that the respiratory burst of inflammatory cells during inflammation can lead to an increased production of ROS, which shifts the oxidant/antioxidant balance. We attempted to clarify this issue by determining the testicular IL-1β, IL-6, and TNFα gene expressions and protein levels in rats exposed to DEE from B7 and SHB20 biofuels with and without DPF-treatment and with or without blackcurrant pomace supplementation (BC) for 28 days (Table S5 in Supplementary Materials). Our results revealed that only DEE from SHB20 biofuel, with and without DPF-treatment, significantly increased IL-6 gene expression. Dietary intervention attenuated this effect, though IL-6 gene expression was still significantly higher than in the control group. The expression of the IL-1β and TNFα genes in the testes was not altered by DEE from B7 and SHB20 biofuels with and without DPF-treatment. In addition, no significant association between the testicular IL-1β, IL-6, and TNFα gene expressions and protein levels was observed. To sum up, the obtained results suggest that the first scenario is more likely, i.e., that the exhaust emissions from the combustion of B7 and SHB20 biodiesel blends induced ROS, which in turn disrupted the redox balance and induced oxidative stress and a weak pro-inflammatory response, manifested as increased IL-6 gene expression.
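The gene expression results cited here come from RT-PCR (primer sequences in Table S4). The quantification method is not restated in this section, but relative expression in such designs is very commonly computed with the Livak 2^(−ΔΔCt) method, sketched here purely as an illustration rather than as the authors' confirmed procedure:

$$\Delta C_t = C_t^{\text{target}} - C_t^{\text{reference}},\qquad \Delta\Delta C_t = \Delta C_t^{\text{exposed}} - \Delta C_t^{\text{control}},\qquad \text{fold change} = 2^{-\Delta\Delta C_t},$$

so, for example, a ΔΔCt of −1 for IL-6 would correspond to an approximately two-fold increase in expression relative to the control group.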
Recapitulating, the presented results clearly revealed that DEE from the combustion of both biofuels triggered ROS-driven oxidative reactions that the antioxidant system was unable to prevent. However, dietary intervention with BC significantly mitigated these redox effects, demonstrated as increased TAS and increased activities of SOD, GPx, and GR in the DEE-exposed rats. We further observed that treatment with BC significantly elevated the GSH level, decreased the concentration of GSSG, and prevented a drop in the GSH:GSSG ratio. These results are in agreement with previous studies showing that anthocyanins, mainly cyanidin-3-glucoside, exert strong antioxidative effects not only by scavenging intracellular ROS, but also by modulating specific signaling pathways of the antioxidant adaptive response [78,79]. For instance, Ye et al. [80] reported that cyanidin-3-glucoside intervention inhibited intracellular ROS and O2− levels, as well as restored the impaired activity of SOD in diabetic db/db mice. Sukprasansap et al. [81] presented dramatically activated gene expression of SOD and GPx in cyanidin-3-glucoside-treated cells. In addition, Zhu et al. [82] showed that cyanidin-3-O-glucoside lowered oxidative stress through a GSH-based antioxidant defense mechanism. Moreover, Casati et al. [83] reported that another blackcurrant anthocyanin, delphinidin-3-rutinoside, exerted its protective effect against oxidative damage by reducing intracellular ROS and increasing intracellular antioxidant factors such as GSH in MC3T3-E1 cells. Dietary intervention with BC also significantly reduced the increased concentrations of TBARS, 7-KCh, and LOOHs in the testes of rats exposed to DEE. These results are in line with previous studies showing the protective effects of anthocyanins from blackcurrants against oxidative stress and other redox-related responses [83,84]. Our results showed high concentrations of anthocyanins in blackcurrant pomace, mainly cyanidin-3-glucoside, delphinidin-3-O-glucoside, cyanidin-3-O-rutinoside, and delphinidin-3-O-rutinoside (Supplementary Materials), which possess ROS-scavenging activity against hydroxyl radicals and superoxide, and metal-chelating abilities in vitro and in vivo [22,85,86]. Cyanidin-3-glucoside markedly decreased oxidative damage, leading to a decreased level of TBARS during serum formation in rats [87]. Similar results were reported by Nowak et al. [88], who reported that anthocyanins from blackcurrant leaves exhibited an inhibitory effect on the generation of TBARS. More recently, Tan et al. [89] reported that cyanidin-3-glucoside substantially decreased MDA levels by improving antioxidant enzyme activities (SOD, GSH-Px, and CAT) in H2O2-induced oxidative stress in HepG2 cells.
Conclusions
In conclusion, the exposure of rats to diesel exhaust emissions (DEE) from the combustion of 1st generation (B7) and 2nd generation (SHB20) biofuels, without and with DPF-treatment, caused oxidative stress in the testes, manifested by increased lipid peroxidation and a reduced capacity of the antioxidant defense system. Our results revealed that dietary intervention with blackcurrant pomace could attenuate these deleterious effects through a reduction in TBARS, LOOHs, 25(OH)2Ch, and 7-KCh, accompanied by an increase in TAS, in the activities of SOD, GPx, and GR, and in the GSH level. This effect of blackcurrant pomace, regarded as a good source of anthocyanins, could be related to their capacity to scavenge free radicals and to modulate specific signaling pathways of the antioxidant adaptive response. Nevertheless, it must be pointed out that, despite the ameliorating effect of dietary intervention with blackcurrant pomace, the redox balance in the testes was not completely restored. We report for the first time that exposure of animals to exhaust emissions from 2nd generation biodiesel (SHB20) produced higher levels of oxidative stress markers in the testes than exposure to conventional B7 biofuel. A novelty of the presented research was to investigate the effects of dietary intervention with blackcurrant pomace, a residue material obtained during juice production, in animals exposed to DEE at a level relevant to human exposure in urban agglomerations. Cars equipped with diesel engines are still very common, especially in many European countries, and contribute to urban air pollution. Considering that traffic-related air pollution negatively affects male reproductive potential and that reducing the impact of diesel cars on the environment is a long-term effort in many countries, there is a need to implement alternative methods diminishing the negative health effects of exposure to DEE. In our opinion, dietary intervention with compounds of high antioxidative potential can be one solution to improve the redox balance in the reproductive system.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/antiox11081562/s1. Figure S1: Average weekly feed intake by rats exposed to DEE from B7 or SHB20 biofuels without (A) or with (B) DPF-treatment and with or without blackcurrant pomace supplementation (BC) for 28 days; Figure S2: The scheme of the exposure of the test fuels to the animals; Table S1: Chemical analysis of experimental feeds composition [90,91]; Table S2: The content of selected phenolics in experimental feeds [92,93]; Table S3: The estimated phenolics mean daily intake in rats exposed to B7 and SHB20 diesel fuel exhaust emission with (DPF+) or without (DPF-) diesel particle filter (DPF) application (mean ± SE); Table S4: RT-PCR primer sequences of analyzed genes; Table S5: Testicular IL-1β, IL-6, and TNFα gene expressions and protein levels in rats exposed to DEE from B7 and SHB20 biofuels with and without DPF-treatment and with or without blackcurrant pomace supplementation (BC) for 28 days; Table S6: Estimated concentration of selected substances in the whole body inhalation chambers (mean ± SD); Table S7: The estimated concentration of the selected aromatic hydrocarbons in the whole body inhalation chambers (mean ± SD).