| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
16190115 | pes2o/s2orc | v3-fos-license |
Plasma levels of microRNA-24, microRNA-320a, and microRNA-423-5p are potential biomarkers for colorectal carcinoma
Background MicroRNAs are stable and easy to detect in plasma, and their plasma levels often change in disease conditions, including cancer. This makes circulating microRNAs a novel class of biomarkers for cancer diagnosis. Analyses of online microRNA databases revealed that the expression levels of three microRNAs, microRNA-24 (miR-24), microRNA-320a (miR-320a), and microRNA-423-5p (miR-423-5p), were down-regulated in colorectal cancer (CRC). However, whether the plasma levels of these three microRNAs can serve as biomarkers for CRC diagnosis and prognosis had not been determined. Methods Plasma samples from 223 patients with colorectal diseases (111 colorectal carcinoma, 59 adenoma, 24 colorectal polyps and 29 inflammatory bowel disease) and 130 healthy controls were collected and subjected to reverse transcription-quantitative real-time PCR (RT-qPCR) analyses for the three microRNAs. In addition, plasma samples from 43 patients were collected before and after surgical treatment for the same RT-qPCR analyses. Results The concentrations of plasma miR-24, miR-320a and miR-423-5p were all decreased in patients with CRC and benign lesions (polyps and adenoma) compared with healthy controls, but increased in inflammatory bowel disease (IBD). The sensitivities of miR-24, miR-320a and miR-423-5p for early-stage CRC were 77.78 %, 90.74 %, and 88.89 %, respectively. Moreover, the plasma concentrations of the three microRNAs were increased in patients who showed clinical improvement after surgery. Conclusions The plasma levels of miR-24, miR-320a, and miR-423-5p have promising potential to serve as novel biomarkers for CRC detection, especially for early-stage CRC, and are superior to the clinical biomarkers currently used for CRC detection, such as CEA and CA19-9. Further efforts to develop the three microRNAs as biomarkers for early CRC diagnosis and prediction of surgical treatment outcomes are warranted. Electronic supplementary material The online version of this article (doi:10.1186/s13046-015-0198-6) contains supplementary material, which is available to authorized users.
Therefore, it is imperative to develop novel sensitive and specific circulating biomarkers for detection of CRC, especially in the early stages.
MicroRNAs are small non-coding RNAs of 18-22 nucleotides in length, which regulate gene expression at the post-transcriptional level by binding to the untranslated regions (UTRs) of mRNAs [5][6][7]. Since their discovery in 1993, emerging evidence has shown that altered expression of microRNAs is associated with cancer, including CRC [1,[8][9][10]. MicroRNAs can be released into the blood. Because plasma microRNAs are protected from RNase digestion, they remain stable for long periods of time even under extremely harsh conditions. This stability and easy detectability make circulating microRNAs ideal candidates to serve as biomarkers for cancer detection [11]. In addition, the abundance of plasma microRNAs normally does not vary between genders [12][13][14][15][16][17]. Therefore, circulating microRNAs show great potential as tumor markers for cancer diagnosis.
It has been reported that miR-24 and miR-320a have tumor-suppressor activity in CRC. MiR-24 inhibits cell cycle progression in a p53- and p21-independent manner, and its expression is down-regulated in CRC [18]. MiR-320a suppresses the initiation, metastasis, and invasion of CRC [19][20][21]. In addition, it also induces G0/G1 arrest in CRC [22]. The expression of miR-320a is inversely associated with the aggressiveness of CRC tumors and CRC cell lines [20]. Although the expression of miR-423-5p, miR-24 and miR-320a is down-regulated in CRC cell lines, it remains unknown whether the plasma levels of the three microRNAs are changed in patients with CRC. To determine whether the abundance of miR-24, miR-320a, and miR-423-5p in the plasma is changed in CRC patients and could serve as a biomarker for therapy evaluation, we measured the plasma concentrations of miR-24, miR-320a, and miR-423-5p in CRC patients before and after surgery, as well as in healthy controls. The results showed that the plasma levels of miR-24, miR-320a, and miR-423-5p were decreased in patients with CRC and benign lesions (polyps and adenoma) compared with healthy controls, but increased in patients with inflammatory bowel disease (IBD). The concentrations of the three microRNAs increased with the clinical improvement of the patients after surgery. In addition, the three microRNAs all showed high detection rates for early-stage CRC. The results indicate that the plasma levels of miR-24, miR-320a, and miR-423-5p have the potential to serve as biomarkers for CRC detection, especially for early-stage CRC.
Ethics statement
The study was carried out according to the ethical principles of the 2008 revised Declaration of Helsinki. All plasma-based studies were approved by the Ethics Committee of the Xiamen University affiliated Zhongshan Hospital. All participants gave written consent and agreed that their information would be stored in the hospital database and used for research purposes.
Plasma sample collection
Blood samples were collected from CRC patients who had been diagnosed and staged according to the International Union Against Cancer (UICC) and American Joint Committee on Cancer (AJCC) TNM staging system for CRC established in 2003. Age- and gender-matched healthy individuals with no history of cancer and in good health based on self-report were recruited from Xiamen University affiliated Zhongshan Hospital between December 2012 and November 2013. IBD was diagnosed based on standard endoscopic, histologic, and radiographic criteria. Patients with other gastrointestinal tract complications, hemolysis, or hyperlipidemia were excluded. Blood samples were collected from patients before operative treatment, chemotherapy, or radiotherapy. At 10 days after the operation, paired plasma samples were collected from 43 randomly selected patients. All plasma samples were prepared from EDTA-K2 tubes and centrifuged as described previously [23]. After the first centrifugation for 10 min at 1,600 g, the supernatants were carefully removed and transferred to a new tube, followed by a second centrifugation at 16,000 g for 10 min to remove residual blood cells. The plasma was then divided into small aliquots and snap-frozen at −80 °C. Clinical characteristics of the CRC patients are summarized in Table 1.
Biochemical analyses
The plasma concentrations of CEA and CA19-9 were measured using the Roche high-sensitivity assay kits on a Cobas e601 system. The cut-off point for CEA is 5 ng/ml and the detection limit is 0.2 ng/ml with a CV of < 5 %. The cut-off point for CA19-9 is 27 U/ml and the detection limit is 0.6 U/ml with a CV of < 5 %. Samples were randomized for testing, and the trained clinical laboratory technician who analyzed and interpreted the data was blinded to sample identity.
MicroRNA isolation
MicroRNA was isolated from 200 μl plasma samples using the miRcute miRNA extraction kit (TIANGEN) according to the manufacturer's instructions. Before isolation, a synthetic microRNA, cel-miR-39 (QIAGEN), was added to each plasma specimen at a final concentration of 5 nmol/ml as a reference. The purified microRNAs were dissolved in 30 μl RNase-free water (PROMEGA) at concentrations ranging from 5-50 ng/μl. The OD260/OD280 absorbance ratio of each sample was between 1.8 and 2.1. All isolated microRNAs were aliquoted and stored in a −80 °C freezer until use.
Statistical analyses
Relative levels of the three microRNAs were quantified using the 2^−ΔΔCq method. The data were then log10-transformed for analyses. The nonparametric Mann-Whitney U test and Kruskal-Wallis test were used to compare the abundance of miR-24, miR-320a and miR-423-5p between the disease and healthy groups. ROC curves were applied to analyze the diagnostic values of the three microRNAs. The Youden index (sensitivity + specificity − 1) was used to identify the optimal cut-off threshold values. All tests were two-sided, and a P value < 0.05 was considered statistically significant. The statistical analyses were carried out with IBM SPSS 19.0, and the graphs were generated using GraphPad Prism 5.0 and Canvas X.
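For readers who want to reproduce this kind of analysis outside SPSS, the following minimal Python sketch shows the two formulas the section relies on: relative quantification by 2^−ΔΔCq against the spiked-in cel-miR-39 reference, and selection of an ROC cut-off by the Youden index. The array values and group labels are made-up placeholders, not data from the study.

```python
# Minimal sketch (not the authors' code): 2^-ΔΔCq quantification and
# Youden-index cut-off selection. Values below are made-up placeholders.
import numpy as np
from sklearn.metrics import roc_curve, auc

# Cq values for a target miRNA and the spiked-in cel-miR-39 reference.
cq_target = np.array([28.1, 27.5, 30.2, 31.0, 29.8, 30.5])
cq_ref    = np.array([22.0, 21.8, 22.1, 22.0, 21.9, 22.2])
is_cancer = np.array([0, 0, 0, 1, 1, 1])  # 0 = healthy control, 1 = CRC

# ΔCq per sample, ΔΔCq relative to the mean ΔCq of the control group,
# then relative level = 2^-ΔΔCq, log10-transformed as in the paper.
dcq = cq_target - cq_ref
ddcq = dcq - dcq[is_cancer == 0].mean()
log_level = np.log10(2.0 ** (-ddcq))

# ROC analysis: low plasma levels indicate CRC, so score = -log_level.
fpr, tpr, thresholds = roc_curve(is_cancer, -log_level)
youden = tpr - fpr  # Youden index = sensitivity + specificity - 1
best = np.argmax(youden)
print(f"AUC = {auc(fpr, tpr):.3f}")
print(f"optimal cut-off (on -log10 level): {thresholds[best]:.3f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```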
Results
Establishing quantitative RT-PCR analyses for detecting miR-24, miR-320a, miR-423-5p and cel-miR-39 in plasma
Since abnormal expression of microRNAs is often associated with cancer progression, we identified candidate microRNAs based on the following criteria: (1) up-regulated or down-regulated in CRC compared with adjacent tissues in the miRCancer database (http://mircancer.ecu.edu/search.jsp) or in our previous study; (2) not previously analyzed in the plasma of CRC patients; and (3) a Cq value in plasma of less than 35. A panel of 7 candidate microRNAs was selected for further analysis. For the training set, plasma samples from CRC patients and healthy controls were randomly selected for RT-qPCR analyses. Among the candidates, miR-24, miR-320a, and miR-423-5p were significantly down-regulated in CRC and were selected for the analyses hereafter. The specific amplification and stability of the three candidate microRNAs in plasma were confirmed in Additional file 1. Melting curve analyses showed only one unique peak for every sample (Additional file 1: Figure S1A-D). Agarose gel electrophoresis also showed a single band for randomly selected samples (Additional file 1: Figure S1E). The calculated slopes and coefficients of determination were −2.3815 (r² = 0.9992) for miR-24, −2.2526 (r² = 0.9974) for miR-320a, −2.2247 (r² = 0.9964) for miR-423-5p, and −2.3732 (r² = 0.9998) for cel-miR-39 (Additional file 1: Figure S1F), indicating amplification efficiencies of 98.15 %, 106.12 %, 107.89 %, and 98.59 %, respectively. Incubation of the samples at 37 °C for up to 24 h or repeated freeze-thaw cycles did not cause significant changes in Cq values (Additional file 1: Figure S2), indicating that the microRNAs were stable in plasma. Moreover, intra-assay and inter-assay variations were all less than 2 %, indicating that the RT-qPCR analyses were accurate and reliable (Additional file 1: Table S1 and S2). These results support the use of RT-qPCR analyses of plasma miRNAs for clinical applications.
CRC patients have reduced abundances of miR-24, miR-320a, and miR-423-5p in the plasma
To determine whether the plasma levels of miR-24, miR-320a, and miR-423-5p were changed in CRC patients, RT-qPCR was carried out to assess the plasma concentrations of the three microRNAs in healthy controls and CRC patients. Compared with healthy controls, the abundances of all three microRNAs in the plasma were reduced in patients with CRC and benign lesions (colorectal adenoma and polyps), but increased in patients with IBD (Fig. 1). The levels of miR-320a and miR-423-5p were inversely correlated with disease progression along the normal-benign lesion-carcinoma sequence. In addition, the plasma levels of miR-320a and miR-423-5p were higher in patients with rectal cancer than in those with colon cancer (Additional file 1: Figure S3A-C). No difference was found between patients with adenoma and polyps, or between patients with Crohn's disease (CD) and those with ulcerative colitis (UC) (Additional file 1: Figure S3D-I).
The plasma levels of miR-24, miR-320a and miR-423-5p have diagnostic value for CRC
To determine whether the plasma levels of miR-24, miR-320a, and miR-423-5p had diagnostic value for CRC, ROC curves were used to analyze their diagnostic sensitivity and specificity (Fig. 2; Table 2). At the threshold of −1.731, miR-24 reached its optimal sensitivity of 78.38 % (Table 2). As shown in Table 2, the PPVs for the three microRNAs were all higher than 72 %. These results indicate that a person with a high plasma level of any one of the three microRNAs has a greater risk of CRC than those with low plasma levels. Additionally, the NPVs and diagnostic efficiencies of the three microRNAs were all higher than 80 %. Together, these results indicate that the plasma levels of the three microRNAs have high diagnostic value.
The plasma levels of miR-24, miR-320a and miR-423-5p can be used for early detection of CRC
Early diagnosis and treatment of CRC are of great value for improving the survival of CRC patients. Currently, CEA and CA19-9 are the two most commonly used diagnostic markers for CRC. Therefore, the performance of the three microRNAs was compared with that of CEA and CA19-9 in detecting early-stage (stage I and stage II) CRC. Among 54 patients with early-stage CRC, CEA and CA19-9 together detected only 11 patients, a sensitivity of 20.37 %. In contrast, plasma miR-24, miR-320a and miR-423-5p were positive in 42 (77.78 %), 49 (90.74 %), and 48 (88.89 %) patients, respectively (Fig. 3). According to the ROC curves, the AUCs of the three microRNAs reached 0.822, 0.897, and 0.839, respectively (Fig. 4a-c). By combining the three microRNAs, the sensitivity increased to 90.74 %, although the specificity dropped to 70.77 % (AUC = 0.941) (Fig. 4d, Table 3). Although the specificity of the three microRNAs was similar to that of CEA and CA19-9, their diagnostic efficiency, NPV, and especially sensitivity were higher than those of CEA and CA19-9. These findings validate the performance of miR-24, miR-320a, and miR-423-5p as plasma markers for early detection of CRC, and indicate that the three microRNAs are better biomarkers for early CRC detection than CEA and CA19-9.
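The diagnostic metrics quoted here (sensitivity, specificity, PPV, NPV, diagnostic efficiency) all derive from a 2×2 confusion table. The sketch below shows the arithmetic; the worked numbers use the paper's early-stage counts for miR-320a (49 of 54 cases positive) together with an assumed control breakdown purely for illustration, since the per-marker control counts live in Table 2/3 rather than in the text.

```python
# Sketch of the standard 2x2 diagnostic metrics. The case counts (49/54 for
# miR-320a) come from the text; the control counts are illustrative only.
def diagnostic_metrics(tp: int, fn: int, tn: int, fp: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),            # detected cases / all cases
        "specificity": tn / (tn + fp),            # cleared controls / all controls
        "PPV": tp / (tp + fp),                    # positive predictive value
        "NPV": tn / (tn + fn),                    # negative predictive value
        "efficiency": (tp + tn) / (tp + fn + tn + fp),  # overall accuracy
    }

# miR-320a, early-stage CRC: 49 of 54 patients positive (sensitivity 90.74 %).
# 130 healthy controls with an assumed 110 true negatives (hypothetical split).
metrics = diagnostic_metrics(tp=49, fn=5, tn=110, fp=20)
for name, value in metrics.items():
    print(f"{name}: {value:.4f}")
```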
The changes in plasma levels of miR-24, miR-320a and miR-423-5p after surgery predict the risk of post-surgery metastasis
To determine whether the plasma levels of the three microRNAs had predictive value for clinical improvement after surgery, the plasma levels of the three microRNAs in 43 patients were compared before and after surgery. The data showed that the plasma levels of miR-24, miR-320a, and miR-423-5p were increased after surgery in 37 of the 43 patients (86.05 %).
Fig. 3 Plasma levels of miR-24, miR-320a and miR-423-5p are better biomarkers for early detection of CRC than the currently used CEA and CA19-9. a-f Two-parameter classification is used to detect early stages of CRC. The cut-off value for CEA is 5.0 ng/mL, and 27 U/mL for CA19-9; the data are presented as log2. The cut-off values for miR-24, miR-320a and miR-423-5p are −1.731, −1.006, and −0.854, respectively, calculated from the ROC curves. g Detection rates of the three microRNAs, CEA, and CA19-9 in a total of 54 patients with early stages of CRC
Discussion
About 52.5 % of microRNA coding sequences are located at fragile sites and cancer-associated genomic regions [24], and are therefore more vulnerable to mutations or environmental influences. Aberrant microRNA expression and mutations have been shown to contribute to cancer initiation and progression [25][26][27][28], making microRNAs potential biomarkers for cancer diagnosis and progression. However, assessing microRNA expression in CRC tissue requires invasive examination, which limits the value of tissue microRNAs as biomarkers for CRC diagnosis and prognosis. In our previous experiments, normal colorectal cells (CCD-18Co) showed higher expression of these microRNAs than CRC cell lines (LS 174T, HCT-8, SW480 and SW620) (Additional file 1: Figure S4). Herein, we report that the expression of miR-24, miR-320a and miR-423-5p in the plasma of CRC patients was reduced and that changes in the plasma levels of miR-24, miR-320a and miR-423-5p predicted the risk of post-surgery metastasis in CRC patients. These results indicate that the plasma levels of miR-24, miR-320a and miR-423-5p can serve as novel biomarkers for CRC diagnosis and prognosis. MiR-24 is a master regulator encoded by the miR-23a-27a-24-2 gene cluster. Its expression is frequently down-regulated in a variety of cancers [18,[29][30][31][32], including CRC. It has been shown that miR-24 can function as a tumor suppressor in CRC, suppressing proliferation, migration, and invasion [18]. Moreover, miR-24 elicits its tumor-suppressive activity by repressing DHFR expression in CRC cells. MiR-320 is also a tumor suppressor. Expression of miR-320 is down-regulated in many human malignancies [33][34][35]. Overexpression of miR-320 inhibits colon cell proliferation, migration, and invasion. Moreover, miR-320 regulates the Wnt/β-catenin pathway partly by targeting FOXM1, which promotes tumor initiation and progression [36]. In addition, miR-423-5p contributes to proliferation and invasion in gastric cancer by targeting TIF1 [37]. However, the role of miR-423-5p in CRC is largely unknown.
To our knowledge, this is the first comprehensive study of the expression and clinical significance of plasma miR-24, miR-320a, and miR-423-5p in CRC patients. A reduced plasma level of miR-24 was detected in 78.38 % of patients with CRC, and the detection rates for miR-320a and miR-423-5p were even higher (> 90 %). Moreover, all of the PPVs were higher than 70 % and the diagnostic efficiencies were higher than 80 %. Consistently, the AUCs for CEA and CA19-9 were both lower than those of the three microRNAs, whether for the diagnosis of CRC overall or of early-stage CRC.
Clinical data show that the 5-year survival rate of early-stage CRC patients after surgery is higher than 90 %. However, because early CRC often causes no obvious symptoms and sensitive methods for early diagnosis are lacking, the majority of CRC patients are diagnosed at advanced stages and miss the optimal window for treatment; for such patients, the post-operative 5-year survival rate is lower than 20 %. Colonoscopy can detect only 18 %-35 % of early CRCs [38][39][40]. The two most commonly used biomarkers for CRC, CEA and CA19-9, detect only 10 % and 15 % of stage I CRC, respectively [41]. Therefore, highly sensitive biomarkers for early CRC detection are urgently needed. Measuring the plasma levels of the three microRNAs significantly improved the detection rate of early-stage CRC (miR-24: 77.78 %; miR-320a: 90.74 %; miR-423-5p: 88.89 %). In conclusion, compared with CEA and CA19-9, the performance of the three plasma microRNAs indicates that they have great clinical value for CRC detection.
The plasma levels of miR-24, miR-320a and miR-423-5p in patients with benign lesions (adenoma and polyps) were between those of normal controls and CRC patients. Furthermore, the levels of miR-320a and miR-423-5p were inversely correlated with disease progression, indicating that the plasma levels of miR-320a and miR-423-5p can be used to assess disease status. Unlike in patients with benign lesions (adenoma and polyps), the plasma levels of the three microRNAs were increased in patients with IBD. These findings support the promise of using the plasma levels of the three microRNAs as biomarkers to distinguish early CRC, benign colorectal diseases, and IBD. Furthermore, the data also showed that increased circulating levels of the three microRNAs were associated with the outcome of surgical treatment, suggesting that the plasma levels of the three microRNAs have potential prognostic value for CRC progression after surgery.
Conclusions
In summary, the plasma levels of miR-24, miR-320a, and miR-423-5p were reduced in patients with CRC and inversely correlated with the stage of progression. Furthermore, changes in the plasma levels of the three microRNAs predicted the risk of post-surgery metastasis. These results suggest that the plasma levels of miR-24, miR-320a, and miR-423-5p can serve as novel biomarkers for CRC diagnosis and prognosis.
| 2018-04-03T04:31:42.045Z | 2015-08-22T00:00:00.000 |
{
"year": 2015,
"sha1": "472dd68d2a860489da6cfcb110cab3e855765979",
"oa_license": "CCBY",
"oa_url": "https://jeccr.biomedcentral.com/track/pdf/10.1186/s13046-015-0198-6",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8f3df2a77c675c26bbe544b38dcb32898b739f86",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
| 139229007 | pes2o/s2orc | v3-fos-license |
Feasibility evaluations of three-dimensional-printed high-gain reflectarray antenna for W-band applications
An evaluation reveals the feasibility of a high-gain and low-fabrication-cost reflectarray antenna for W-band millimeter-wave radar applications. This reflectarray antenna offers the advantage of a simple, flat structure, and can be fabricated at low cost by three-dimensional (3D) printing. This paper describes the fabrication of a 3D-printed reflectarray antenna whose gain is more than 30 dBi in the W-band. Firstly, the reflection-signal phases of dielectric plates of different thicknesses are evaluated using finite-difference time-domain (FDTD) analysis to obtain the characteristics for the reflectarray design. Then, an eight-zone Fresnel reflectarray, in which adjacent zones are separated by a phase difference of 45°, is analyzed and fabricated. The results of the FDTD analysis show a 31.6 dBi antenna gain at 78.5 GHz. Finally, the designed reflectarray is fabricated using a 3D printer. Measurements indicate that the achieved 30.4 dBi antenna gain nearly equals the FDTD analysis value of 31.6 dBi at 78.5 GHz.
Introduction
Reflectarray antennas offer the advantages of a simple, small-volume, and flat structure [1,2,3]. We have been developing a millimeter-wave radar system for civil aviation uses, including helicopter collision-avoidance onboard radar [4]. A reflectarray antenna with a quasioptical approach is one of the most important options for supporting high-performance radar systems usable for millimeter-wave radar applications [3]. In addition, we are interested in low-cost fabrication methods that maintain high performance [5]. Three-dimensional (3D) printers can reduce the cost of fabrication [2]. However, as far as we know, it has been difficult for reflectarray antennas produced in this fashion to demonstrate high-gain characteristics above 30 dBi in the W-band. This paper discusses the feasibility evaluations of a high-gain reflectarray antenna for W-band millimeter-wave radar applications. Our discussion confirms that 3D-printed fabrication produces a reflectarray antenna with gains exceeding 30 dBi. Firstly, the reflection-signal phases of dielectric plates of varying thicknesses are investigated numerically in the W-band using the finite-difference time-domain (FDTD) method. Next, an eight-zone reflectarray is designed to operate at 78.5 GHz and then analyzed to evaluate the antenna characteristics. Finally, the reflectarray antenna is fabricated with a 3D printer, and its characteristics are evaluated in an anechoic chamber.
Analysis of the reflection-signal phase of dielectric plates
The reflection-signal phase must be determined to finalize the design of a reflectarray antenna. The thickness of the dielectric plate is the major design parameter for a 3D-printed reflectarray antenna. An FDTD analysis of the dielectric plate is carried out to obtain the quantitative phase shift of the dielectric material. Fig. 1(a) shows the FDTD analysis model used to obtain the reflection-signal phases. The reflection-signal phases of the plane incident wave are analyzed for different dielectric-plate thicknesses. To model an infinite periodic space, the top and bottom walls each consist of a perfect electric conductor (PEC), and each side wall consists of a perfect magnetic conductor (PMC). In addition, a PEC is attached to the back of the dielectric plate. Commercially available FDTD analysis software (SEMCAD X, Schmid & Partner Engineering AG, Zürich, Switzerland) is employed. Fig. 1(b) shows the FDTD analysis parameters. Fabrication of the reflectarray with a 3D printer uses the dielectric constants of acrylonitrile butadiene styrene (ABS) plastic: the relative permittivity and loss tangent are 2.3 and 0.1, respectively. Fig. 1(c) shows the analyzed reflection-signal relative phase shift for different dielectric-plate thicknesses at 73.5 GHz, 78.5 GHz, and 83.5 GHz. Here the phase shifts are relative values compared with the case without any dielectric material. Thicknesses are analyzed in 0.1 mm steps. The results confirm that the phase shift is proportional to the frequency. Because the center frequency of the reflectarray is 78.5 GHz, a 360° phase shift is obtained at a dielectric-plate thickness of approximately 3.3 mm.
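As a sanity check on the thickness-to-phase mapping, the reflection phase of a grounded dielectric slab can also be estimated with a simple transmission-line model at normal incidence. This is a rough analytical stand-in for the paper's FDTD analysis, not a reproduction of it; the material values match the ones quoted above, and everything else is a standard textbook formula.

```python
# Rough transmission-line estimate (not the paper's FDTD model) of the
# reflection phase of a PEC-backed ABS slab at normal incidence.
import numpy as np

c0, eta0 = 299_792_458.0, 376.73  # speed of light [m/s], free-space impedance [ohm]
f = 78.5e9                        # design frequency [Hz]
eps_r, tan_d = 2.3, 0.1           # ABS values quoted in the paper

eps_c = eps_r * (1 - 1j * tan_d)                   # complex permittivity
eta_d = eta0 / np.sqrt(eps_c)                      # wave impedance in the dielectric
gamma = 1j * 2 * np.pi * f / c0 * np.sqrt(eps_c)   # propagation constant

def reflection_phase_deg(d_mm: float) -> float:
    """Phase of the reflection coefficient, relative to a bare PEC plane."""
    z_in = eta_d * np.tanh(gamma * d_mm * 1e-3)    # short-circuited line section
    refl = (z_in - eta0) / (z_in + eta0)
    bare_pec = -1.0                                # bare PEC reflects with 180 deg
    return float(np.degrees(np.angle(refl / bare_pec)))

for d in np.arange(0.0, 3.6, 0.5):
    print(f"thickness {d:3.1f} mm -> relative phase {reflection_phase_deg(d):7.1f} deg")
```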
Design and analysis of the reflectarray antenna
The W-band reflectarray antenna is designed using the analysis results of the dielectric-plate reflection phase. The reflectarray design is derived from the Fresnel zone equation, where r_n and f are the nth zone radius and the focal length, respectively, and P and λ are the number of Fresnel zones and the wavelength of the incident wave, respectively. Following the conventional reflectarray fabricated with metallic patches on a dielectric substrate [3], the number of Fresnel zones P and the focal length f were set to 8 and 75 mm, respectively. The outer diameter of the reflectarray was then 154 mm, and the designed center frequency was 78.5 GHz. The FDTD analysis results described in the previous section support determining the thickness of each zone to configure an 8-zone reflectarray in which the phase shifts are 0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315° at 78.5 GHz. The required thickness of the dielectric plate is obtained from Fig. 1(c). For example, the dielectric thicknesses for phase shifts of 0°, 90°, and 180° are 3.3 mm, 2.8 mm, and 1.7 mm, respectively. The dimensions of the reflectarray are 154 mm × 154 mm × 3.3 mm, and the parameters of the reflectarray dielectric material are the same as in Fig. 1(b). In addition, a PEC is attached behind the reflectarray. Fig. 2(b) and 2(c) show the analyzed yz-plane electric-field strength and radiation patterns of the reflectarray antenna at 78.5 GHz. The radiation behavior at the reflectarray surface is confirmed by the electric-field strength. In addition, the maximum antenna gain is found to be 31.6 dBi. The half-power beamwidths (HPBW) for azimuth and elevation are 1.8° and 1.6°, respectively.
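The zone layout follows from that Fresnel relation. The sketch below assumes the standard phase-correcting zone-plate form r_n = sqrt(2nfλ/P + (nλ/P)²) — an assumption, since the paper's displayed equation did not survive extraction — together with the stated f = 75 mm, P = 8, and 78.5 GHz, and checks how many 45° subzones fit inside the 154 mm aperture.

```python
# Zone radii for a phase-correcting Fresnel reflectarray (assumed standard
# formula; f, P, frequency, and aperture come from the paper).
import math

f_len = 0.075                 # focal length [m]
P = 8                         # phase levels per 360-degree Fresnel zone
freq = 78.5e9
lam = 299_792_458.0 / freq    # wavelength, ~3.82 mm
aperture_radius = 0.154 / 2   # 154 mm outer diameter

def radius(k: int) -> float:
    """Outer radius of the k-th 45-degree subzone (assumed formula)."""
    return math.sqrt(2 * k * f_len * lam / P + (k * lam / P) ** 2)

n = 0
while radius(n + 1) <= aperture_radius:
    n += 1
print(f"wavelength = {lam * 1e3:.2f} mm")
print(f"{n} subzones of 45 deg fit inside the aperture "
      f"(~{n / P:.1f} full 360-deg Fresnel zones)")
for k in (1, P, 2 * P):  # sample radii: first subzone, first and second full zones
    print(f"r_{k} = {radius(k) * 1e3:.1f} mm")
```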
Fabrication and measurement of the reflectarray antenna
The designed reflectarray antenna was fabricated using a commercially available 3D printer (Afinia H800 3D Printer, Afinia 3D, Chanhassen, MN). Fig. 3(a) shows the printing process of the eight-zone reflectarray. The printing resolution along the vertical axis is 0.1 mm using an ABS plastic filament, and the total printing time is 9 hours for the designed 3.3-mm-thick reflectarray. Because ABS filament is one of the most common materials for 3D printing, the cost of the filament used is less than a few US dollars. Fig. 3(b) shows the measurement setup of the fabricated reflectarray antenna for the W-band. The surface of the reflectarray did not receive any processing, such as smoothing or other post-printing corrections. A 0.1-mm-thick aluminum tape is attached to the back of the reflectarray. The primary source of the antenna is a WR-10 open-ended waveguide, the same as used in the analysis. In addition, the focal length of the antenna can be adjusted at the micrometer level to clarify the sensitivity to the feed position. Fig. 3(c) shows the analyzed and measured radiation patterns in the azimuth plane at 78.5 GHz. The measured radiation characteristics at focal lengths of 75 mm and 77 mm are shown; the focal length of the original design is 75 mm, while the 77 mm focal length produces the maximum measured antenna gain. The measured antenna gains for different focal lengths appear in Fig. 3(d). The antenna gain increased as the focal length increased, with the maximum obtained at a 77 mm focal length. The FDTD analysis shows a 31.6 dBi antenna gain; the measured antenna gains at the 75 mm and 77 mm focal lengths are 27.9 dBi and 30.4 dBi, respectively. This 2 mm difference in focal length is attributable mainly to differences in the material constants across the reflectarray: the 2.3 dielectric constant of the ABS plastic assumed in the analysis is slightly higher than the dielectric constant of the ABS filament employed in the fabrication. The HPBW of the measured azimuth radiation patterns is 1.7° for both the 75 mm and 77 mm focal lengths, while the analyzed HPBW was 1.6°. The demonstrated antenna characteristics of the fabricated reflectarray antenna agree well with the FDTD analysis results, and the effectiveness and validity of the analysis are confirmed by the measurement.
A reflectarray antenna based on a printed substrate showed a maximum gain of 33 dBi at 72 GHz, shifted from the design frequency of 78.5 GHz [3]. The proposed 3D-printed reflectarray antenna also achieved a maximum antenna gain greater than 30 dBi. These measurements confirm the feasibility of a 3D-printed millimeter-wave antenna exhibiting both high gain and low cost.
Conclusion
3D printing of a W-band reflectarray antenna was proposed and demonstrated to be a practical method for producing a high-gain and low-cost millimeter-wave antenna that could be used in radar systems.
Firstly, the reflection-signal phase shifts attributable to different dielectric-plate thicknesses were investigated using FDTD analysis that assumed the use of ABS plastic. Secondly, a 154-mm-diameter eight-zone reflectarray antenna was designed using the analysis results for the reflection phase. The analysis of the reflectarray antenna showed an antenna gain of more than 30 dBi at 78.5 GHz. Then, the reflectarray antenna was fabricated using a commercially available 3D printer and common ABS plastic filament. Finally, the measured antenna characteristics showed a 30.4 dBi antenna gain at 78.5 GHz. The feasibility of producing high-gain performance above 30 dBi with low-cost fabrication was confirmed by these results.
A conformal antenna, produced by a 3D printer at 0.1 mm resolution, will be investigated next to obtain additional high-gain characteristics.
| 2019-04-30T13:07:14.810Z | 2018-04-06T00:00:00.000 |
{
"year": 2018,
"sha1": "68099571c03c9d511278ea233cbbd030f7253150",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/comex/7/6/7_2018XBL0038/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "087c83fbd5326a26d9fb5c0b82c54d2f0c3bb542",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
}
| 119735234 | pes2o/s2orc | v3-fos-license |
Path-by-path uniqueness of infinite-dimensional stochastic differential equations
Consider the stochastic differential equation $\mathrm dX_t = -A X_t \,\mathrm dt + f(t, X_t) \,\mathrm dt + \mathrm dB_t$ in a (possibly infinite-dimensional) separable Hilbert space, where $B$ is a cylindrical Brownian motion and $f$ is a merely measurable, bounded function. If the components of $f$ decay to 0 in a faster-than-exponential way, we establish path-by-path uniqueness for mild solutions of this stochastic differential equation. This extends A. M. Davie's result from $\mathbb R^d$ to Hilbert space-valued stochastic differential equations.
Framework & Main result
Let us consider the stochastic differential equation (SDE) above. For fixed $\omega$, (SDE) can be considered as an ordinary integral equation (IE) that is perturbed by an Ornstein-Uhlenbeck path $Z^A(\omega)$ and whose solution is a deterministic function $x$. If such a (unique) function can be found for almost all $\omega \in \Omega$, the map $\omega \longmapsto x$ is called a (unique) path-by-path solution to the equation (SDE). Naturally, this notion of uniqueness is much stronger than the usual pathwise uniqueness considered in the theory of SDEs.
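The displayed equations of this passage did not survive extraction. As a hedged reconstruction of the standard Davie-type reduction for mild solutions (the notation $Z^A$, $x_0$, and (IE)$_\omega$ follows the surrounding text; the precise displays in the original may differ):

```latex
% Hedged reconstruction (standard mild-solution reduction), not the verbatim display:
\mathrm dX_t = -AX_t\,\mathrm dt + f(t,X_t)\,\mathrm dt + \mathrm dB_t,
  \qquad X_0 = x_0 \in H. \tag{SDE}
% With the Ornstein--Uhlenbeck path Z^A_t(\omega) := \int_0^t e^{-(t-s)A}\,\mathrm dB_s(\omega),
% a mild solution for fixed \omega solves the deterministic integral equation
x_t = e^{-tA}x_0 + \int_0^t e^{-(t-s)A} f(s, x_s)\,\mathrm ds + Z^A_t(\omega),
  \qquad t \in [0,T]. \tag{IE$_\omega$}
```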
The main result of this article states that on every filtered stochastic basis as above there exists a unique mild solution to the equation (SDE) in the path-by-path sense. Although many papers have been written about path-by-path uniqueness in the finite-dimensional setting (see for example [Dav07], [Sha14], [BFGM14], [Pri15]), to the best of our knowledge this is the first result in a general infinite-dimensional Hilbert space setting. However, for the special case where $H = L^2([0, 1], \mathbb R)$ and $A = \Delta$, path-by-path uniqueness has been shown recently in [BM16] for space-time white noise.
Let us now state the assumptions on the drift f and the main result. Remark 1.5 Set $\Omega := L^2([0, T], H)$ and let P be such that the projection $\pi_t(\omega) := \omega(t)$ is a cylindrical Brownian motion. As in the introduction, consider the map $\omega \longmapsto x$. If f is as in Assumption 1.2, then path-by-path uniqueness holds for the SDE.
Structure of the article & Roadmap for the proof
The structure of this article is the following. In Section 2 we introduce approximation lattices and the notion of the effective dimension of an (infinite-dimensional) set. This is reminiscent of the Kolmogorov ε-entropy, which was used in the proof of A. V. Shaposhnikov (see [Sha14]) for the finite-dimensional case. In the third section we prove two regularization-by-noise estimates for the map $\varphi_{n,k}(x, y)$ defined in Section 3, which are based on the estimates previously obtained by the author in [Wre16]. We show that for every δ > 0 $$|\varphi_{n,k}(x, y)|_H \le C_\delta \sqrt{n}\, 2^{-n/6} |x - y|_H + \varepsilon_n$$ with $\varepsilon_n \to 0$ as $n \to \infty$, for all ω ∈ Ω outside a set of mass δ. Here, x and y are in an approximation lattice of a suitable subset Q of H which includes the image of f. For fixed ω ∈ Ω the map $\varphi_{n,k}$ is therefore "close to" being Lipschitz continuous. This estimate acts as a replacement for the lack of regularity of the non-linearity f in equation (SDE).
In the fourth section we extend these estimates: for sequences of functions $(h_m)_{m\in\mathbb N}$ converging to h we prove, despite the lack of continuity of f, that $\varphi_{n,k}(h_m(\cdot)) \to \varphi_{n,k}(h(\cdot))$. This approximation theorem (Theorem 4.6) implies that the above map $\varphi_{n,k}$ is continuous and therefore enables us to extend the estimates of the previous section from an approximation lattice to all x, y of Q (Corollary 4.7). The result obtained in this section is also necessary to justify the limiting argument in the proof of Theorem 6.2.
It turns out that in the proof of the main result (Theorem 1.3) we have to consider terms of the type $$\sum_{q=1}^{N} |\varphi_{n,k+q}(x_{q+1}, x_q)|_H \tag{1.2.2}$$ for a sequence of points $\{x_q \in Q \mid q = 1, ..., N\}$. Using just the estimates of Section 3 for each term under the sum of (1.2.2) is, unfortunately, insufficient to prove the main result (Theorem 1.3), as this would merely give us an estimate of order $O(2^{-(1/2-\varepsilon)n} N)$.
To overcome this, in Section 5 we use the fact that the above points $x_q \in Q$ are values of a solution of an integral equation and hence can be well approximated by a one-step Euler approximation. This enables us to prove much stronger estimates for expression (1.2.2), namely bounds of order $O(2^{-n} N)$ (Theorem 5.4).
Section 6 contains the proof of the main result (Theorem 1.3). As a first step of the proof of the main theorem we reduce the problem via Girsanov's Theorem to the following Proposition.
Proposition 1.7 If the only function u satisfying equation (1.7.1) for all t ∈ [0, T] is the trivial solution u ≡ 0, then the assertion of Proposition 1.4 holds with $\Omega_0 := \Omega'_{\tilde Z^A}$, where $\tilde Z^A_t := X_t - e^{-tA} x_0$ with X being a solution of (SDE). Recall that X is an Ornstein-Uhlenbeck process under a measure $\tilde P$ obtained via Girsanov transformation.
The set of "good omegas" Ω 0 of the main result 1.3 therefore depends solely on the strong solution X, the initial condition x 0 and the drift f . A proof of this proposition will be given in this section below. Now, let u be a function solving equation (1.7.1) and let us write ϕ n,k (x) := ϕ n,k (x, 0). To show that every solution to (1.7.1) is trivial we use a discrete logarithmic Gronwall inequality of the form In Section 6 we first show that Subsequently, we construct functions u ℓ ℓ→∞ −→ u, which are constant on the dyadic intervals [k2 −ℓ , (k + 1)2 −ℓ [. Using the equation (1.2.1) mentioned above this can be rewritten as lim ℓ→∞ |ϕ n,k (u ℓ (·))| H ≤ |ϕ n,k (u n (·))| H + ∞ ℓ=n |ϕ n,k (u ℓ+1 (·)), u ℓ (·))| H .
Splitting the integrals and using that $u_\ell$ is constant on dyadic intervals of size $2^{-\ell}$, we can bring this into a somewhat more complicated form. Using the estimates for $\varphi_{n,k}$ and for expression (1.2.2) developed in the previous sections, we ultimately obtain an estimate in which we have to impose the somewhat technical condition that $0 < |u(k2^{-n})|_H < 1$. We therefore obtain a discrete log-type Gronwall inequality which, similar to the standard Gronwall inequality, implies that u has to be trivial (Corollary 6.3), so that the condition of Proposition 1.7 is fulfilled, completing the proof.
Proof (of Proposition 1.7)
Let $(X_t)_{t\in[0,T]}$ be a solution of (SDE). We set $\tilde Z^A_t := X_t - e^{-tA} x_0$, so that $\tilde Z^A$ is an Ornstein-Uhlenbeck process with drift term A starting in 0 under a measure $\tilde P \approx P$ obtained by Girsanov's Theorem, as mentioned in Remark 1.5. Let $\omega \in \Omega'_{\tilde Z^A}$ and let $x \in C([0, T], H)$ be a solution of (IE)$_\omega$.
By plugging in the definition of $\tilde Z^A$ and setting $u := x - X(\omega)$, we can rewrite the above equation in terms of u. Since $\tilde Z^A$ is an Ornstein-Uhlenbeck process under $\tilde P$ starting at zero and $\omega \in \Omega'_{\tilde Z^A}$, we conclude that $u \equiv 0$ and hence $x_t = X_t(\omega)$. Analogously, we obtain for any other solution x′ that $x'_t = X_t(\omega) = x_t$, so that all solutions of (IE)$_\omega$ coincide on $\Omega'_{\tilde Z^A}$ and are therefore unique.
Approximation Lattices
In this section we define the set Q in which the function u (see equation (1.7.1) of Proposition 1.7) takes its values. Additionally, we define the so-called effective dimension of a set, which is a variant of the Kolmogorov ε-entropy for lattices. At the end of this section we estimate the effective dimension of our set Q.
Definition 2.1 (The set Q) We define where γ is the constant from Assumption 1.2. Additionally, for r ∈ N we set where the components x n of x can be written as with certain k n ∈ Z for every n ∈ N.
I.e. given any point (x n ) n∈N in the set B ∩ 2 −m Z N , all components x n are zero for n ≥ d m and d m is the smallest integer with this property.
We define the effective dimension of a set $B \subseteq \mathbb R^{\mathbb N}$ by $\operatorname{ed}(B)_m := d_m$, with $d_m$ as above. Let $|\cdot|_1$ and $|\cdot|_2$ be two norms on B. $|\cdot|_1$ and $|\cdot|_2$ are called effectively equivalent if for every $m \in \mathbb N$ they are equivalent on the restricted domain $B \cap 2^{-m}\mathbb Z^{\mathbb N}$, i.e. for every $m \in \mathbb N$ there exist constants $c_m, C_m \in \mathbb R$ such that $c_m |x|_1 \le |x|_2 \le C_m |x|_1$ holds for all $x \in B \cap 2^{-m}\mathbb Z^{\mathbb N}$.
Proposition 2.3
Let $B \subseteq \mathbb R^{\mathbb N}$ with 0 ∈ B be an effectively finite-dimensional set. Then the norm $|\cdot|_2$ and the maximum norm $|\cdot|_\infty$ are effectively equivalent. More precisely, we have $|x|_\infty \le |x|_2 \le \operatorname{ed}(B)_m |x|_\infty$ for all $x \in B \cap 2^{-m}\mathbb Z^{\mathbb N}$.
Note that this implies that Q r is effectively finite-dimensional for every r ∈ N.
Proof Let x ∈ Q r ∩ 2 −m Z N . Observe that every component x n is of the form x n = k n 2 −m with k n ∈ {−2 · 2 m−r , ..., 2 · 2 m−r }.
We are going to show that k n = 0 holds for every n ≥ d m .
Theorem 2.5
Let r ∈ N and m ∈ N. The number of points in the m-lattice of Q r can be estimated as follows
Proof
Let m ∈ N and x ∈ Q r ∩ 2 −m Z N and note that, as in the last proof, every component x n is of the form x n = k n 2 −m with k n ∈ {−2 · 2 m−r , ..., 2 · 2 m−r }.
$k_n$ can take at most $4 \cdot 2^{m-r} + 1$ different values in the dimensions $1 \le n < \operatorname{ed}(Q_r)_m$, so that the total number of points $x \in Q_r \cap 2^{-m}\mathbb Z^{\mathbb N}$ can be estimated by the product of these counts; note that $k_n = 0$ for $n \ge \operatorname{ed}(Q_r)_m$. The second part of the assertion follows analogously.
Corollary 2.6
Let r ∈ N. For every m ∈ N there exists a map $\pi^{(r)}_m \colon Q_r \to Q_r \cap 2^{-m}\mathbb Z^{\mathbb N}$ such that $|x - \pi^{(r)}_m(x)|_\infty \le 2^{-m}$ holds for all x ∈ Q_r, m ∈ N and r ∈ N.
Proof
Let r, m ∈ N. By Theorem 2.5 and Lemma 2.4, $Q_r \cap 2^{-m}\mathbb Z^{\mathbb N}$ is a finite set; hence we can write it as $\{y_1, ..., y_N\}$, where N ∈ N is some number depending on both r and m. For every $x \in Q_r$ we define $\pi^{(r)}_m(x)$ as an element of this set closest to x. Observe that the map $\pi^{(r)}_m$ fulfills all the required properties.
Definition 2.7 (Dyadic point)
We set $D := \bigcup_{m\in\mathbb N} 2^{-m}\mathbb Z^{\mathbb N}$ and say that $x \in \mathbb R^{\mathbb N}$ is a dyadic point if x ∈ D.
Regularization by Noise
In this section we are going to prove various estimates for the map $\varphi_{n,k}$ defined below. Surprisingly, although we do not assume any regularity of b, $\varphi_{n,k}$ is "close to" being Lipschitz continuous in space. This is due to the noise, which improves the situation significantly. From this point onwards let $(Z^A_t)_{t\in[0,\infty)}$ be an Ornstein-Uhlenbeck process on a probability space $(\Omega, \mathcal F, (\mathcal G_t)_{t\in[0,\infty)}, P)$ with drift term A and filtration $(\mathcal G_t)_{t\in[0,\infty)}$ as defined in the introduction.
Usually we drop the b and ω and just write ϕ n,k (x) instead of ϕ n,k (b; x, ω). Additionally, we set
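The defining displays here did not survive extraction. As a hedged reconstruction based on the two-argument usage throughout this article and on the analogous object in [Wre16] and Davie-type proofs, the central map presumably takes the following shape; this is an assumption, not the verbatim display:

```latex
% Presumed shape of the definition (assumption based on [Wre16]/Davie-type arguments):
\varphi_{n,k}(b;\, x,\, y,\, \omega)
  := \int_{k2^{-n}}^{(k+1)2^{-n}}
     \big[\, b(s,\, Z^A_s(\omega) + x) - b(s,\, Z^A_s(\omega) + y) \,\big]\,\mathrm ds,
\qquad
\varphi_{n,k}(x) := \varphi_{n,k}(x, 0).
```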
Remark 3.5 Note that the constant C ε depends on ε and γ from Assumption 1.2, but not on b. Conversely, the set of "good omegas" A c ε,b,n,k depends on ε, b, n and k.
Since x, y ∈ 2Q r ∩ 2 −m Z N and | · | ∞ , | · | 2 are effectively equivalent norms i.e. | · | 2 ≤ ed(2Q r ) m | · | ∞ (see Proposition 2.3) the above expression is smaller than Due to Corollary [Wre16, Corollary 3.1] this probability is smaller than Using that η ε ≥ 1 and ed(2Q r ) m ≥ 1 the above is bounded from above by In order to get a uniform bound we calculate Invoking Theorem 2.5 results in Hence, we can bound the above probability by Note that the last sum converges since ed(2Q r ) m ≥ 1 and because of the above is smaller than Plugging in Definition (3.5.1) of η ε the above is smaller than ε 3 e −n . In conclusion there exists a measurable set for n ≥ 1, k ∈ {0, ..., 2 n − 1}, r ∈ {0, ..., 2 n }, m ≥ r and x, y ∈ 2Q r ∩ 2 −m Z N .
Step 3: Hence we have the corresponding estimate. Additionally, we have $r \in \{-2, ..., 2^n\}$ and $x \in 2Q_r$, so that we can apply Claim (3.5.3) of Step 2. Step 4: Conversely to Step 3, for fixed $n \in \mathbb N$ let $x \in 2Q \cap D$ be such that $|x|_\infty \le 2^{-2^n}$. Then $x \in Q_r$ with $r = 2^n$, so that by invoking Step 2 (i.e. inequality (3.5.3)) we obtain the claimed bound. This concludes the proof.
Theorem 3.6 For every 0 < ε < 1 there exists $C_\varepsilon \in \mathbb R$ such that for every Borel measurable function b the estimate below holds. Remark 3.7 Note that the constant $C_\varepsilon$ depends on ε and γ, but not on b. Conversely, the set of "good omegas" $A^c_{\varepsilon,b}$ depends on both ε and b.
Proof
Step 1: Let $m \in \mathbb N$ and $x, y \in Q \cap 2^{-m}\mathbb Z^{\mathbb N}$. For 0 < ε < 1 we make the corresponding definition. Analogously to the previous proof we estimate the relevant probability. Due to [Wre16, Corollary 3.1] this expression is bounded, and since $\eta_\varepsilon \ge 1$ as well as $\operatorname{ed}(Q)_m \ge 1$, the above expression can be estimated from above. Using this, we estimate the following probability. By invoking Theorem 2.5 for r = 0, we can bound the above probability. Note that the last sum converges since $\operatorname{ed}(Q)_m \ge 1$; hence the above is bounded from above, so that, in conclusion, we have estimated the above probability and therefore obtain (3.7.1). Step 2: Claim: For all points $x, y \in Q \cap D$ with $|x - y|_\infty \le 1$, n ≥ 1 and $k \in \{0, ..., 2^n - 1\}$, the estimate (3.7.2) holds. Note that this implies that m ≥ 0. Using Corollary 2.6, for every $r \in \mathbb N$ with r ≥ m we define $x_r$ and $y_r$ via the maps of Corollary 2.6. Note that both sums on the right-hand side are actually finite sums, because x and y are dyadic points. Also note that $x_r, x_{r+1}, y_r, y_{r+1} \in 2^{-(r+1)}\mathbb Z^{\mathbb N}$, so that by using inequality (3.7.1) the above expression is bounded from above, and by the definition this can be further estimated from above. Invoking Lemma 2.4 yields that $\operatorname{ed}(Q)_r \le (\ln(r + 1))^{1/\gamma}$, where γ > 0 is the constant from Assumption 1.2. Using this we can further estimate the above expression, and by performing an index shift it can be rewritten. We use $\sqrt{r + m + 1} \le \sqrt{r + 1} + \sqrt{m}$ and invoke Lemma 3.3 to estimate this further from above. Expanding the terms, plugging in $(\ln(m+1))^{1/\gamma} \le 2^{m/2}$ and evaluating the sum term by term leads us to the upper bound (3.7.3). We are going to estimate this further using the following claim: Claim: For every $n, m \in \mathbb N$, inequality (3.7.4) holds.
Proof of Claim (3.7.4): This ends the proof of Claim (3.7.4). Using inequality (3.7.3) and (3.7.4) we conclude that Recall that 2 −m−1 ≤ |x − y| ∞ so that the above is smaller than which finishes the proof of Claim (3.7.2) and hence the assertion.
Continuity of the map ϕ n,k
In this section we will prove that for almost all ω ∈ Ω the map $x \longmapsto \varphi_{n,k}(x)$ is continuous. Furthermore, we will show, on a suitable class of Lipschitz functions h and their dyadic piecewise approximations, that the map $h \longmapsto \varphi_{n,k}(h(\cdot))$ is continuous w.r.t. the maximum norm.
Definition 4.1
We make the following definition, where $(\lambda_n)_{n\in\mathbb N}$ are the eigenvalues of the operator A of our Ornstein-Uhlenbeck process $Z^A$.
Definition 4.2
We define
Remark 4.3
Note that elements in Φ are continuous, since functions in Φ are Lipschitz continuous (with Lipschitz constant at most 2). Φ n will be used to approximate elements in Φ. Also note that Φ and Φ n are separable w.r.t. the maximum norm and hence Φ * is separable.
Observe that the above spaces are constructed in such a way that the assumptions we impose on b (see Assumption 1.2) imply that the function u from Proposition 1.7 is in the space Φ, i.e. the difference of two solutions of (1.1.1) always lives in the space Φ due to Assumption 1.2.
Lemma 4.4
Let h ∈ Φ * and n ∈ N. We then have
Proof
Let h ∈ Φ * and n ∈ N be as in the assertion. If h ∈ Φ the inequality follows immediately from the Lipschitz continuity of h. Let h ∈ Φ m for some m ∈ N.
Case 1: m ≥ n + 1. Using the assumption that $h \in \Phi_m$, by the definition of $\Phi_m$ the above expression is bounded from above. Case 2: m < n + 1. Since $h \in \Phi_m$ is constant on all intervals of the form $[k2^{-m}, (k + 1)2^{-m})$, the sum simplifies, and using the definition of $\Phi_m$ the above sum is bounded as claimed. Lemma 4.5 The estimate below holds on $\Omega_{\varepsilon,U}$ uniformly for any h ∈ Φ*.
Set y := 2Az; then y ∈ H, hence [Bog98, Corollary 2.4.3] is applicable, which implies that the measures $N(0, \frac12 A^{-1})$ and $(Z^A + z)(P)$ are equivalent. We make this definition for all $z \in D(A)$. By the Radon-Nikodym theorem there exist densities $\rho_z$. Furthermore, the family $\{\rho_z \mid z \in N_m\}$ is uniformly integrable, since $N_m$ is finite. Hence, there exists δ > 0 such that the corresponding estimate holds for every n ≥ 1, $k \in \{0, ..., 2^n - 1\}$ and $x, y \in Q \cap D$ on $A^c_{\varepsilon,U}$. Furthermore, we define the events $B_{\varepsilon,U}$ as below. Since $\mu[U] \le \delta$, using inequality (4.5.2) the above is bounded from above. In conclusion, we proved that $P[B_{\varepsilon,U}] \le \frac{\varepsilon}{2}$ and therefore obtained the claimed bound, where ⌊·⌋ denotes the componentwise floor function.
so that $h_n(t)$ is a dyadic number for all t ∈ [0, 1]. Also note that $h_n$ converges to h as n → ∞.
We are going to prove the claim below. Since $\omega \in A^c_{\varepsilon,U}$, we obtain the corresponding estimate for n ≥ m. Note that since $h_n$ is constant on intervals of the form $[k2^{-n}, (k+1)2^{-n})$, we have $h_n((k/2)2^{-n}) = h_n(\lfloor k/2 \rfloor 2^{-n})$, so that the two expressions are equal. Plugging in Definition (4.5.3) yields a rewritten form of the above expression. Since $k = 2\lfloor k/2 \rfloor$ in case k is even, the sum can be restricted to k of the form $k = 2k' + 1$ for $k' \in \{0, ..., 2^n - 1\}$. With the help of (4.5.1) the above is bounded, and using Lemma 4.4 we can further estimate the above sum by 1, so that in conclusion we obtain the claim. Therefore, as long as $\omega \in A^c_{\varepsilon,U} \cap B^c_{\varepsilon,U}$, we have, by Lebesgue's dominated convergence theorem, the lower semi-continuity of $\mathbb 1_U$ and the above calculation, the required convergence. In conclusion we have proven that $A^c_{\varepsilon,U} \cap B^c_{\varepsilon,U} \subseteq E_{\varepsilon,U}$ and hence $P[E_{\varepsilon,U}] \ge 1 - \varepsilon$, which completes the proof. on Ω′.
Proof
Let b be as in the assertion. For ℓ ∈ N let $\varepsilon_\ell := 2^{-\ell}$. By Lemma 4.5, for every $\varepsilon_\ell$ there exists a $\delta_\ell$ such that for the pair $(\varepsilon_\ell, \delta_\ell)$ the conclusion of Lemma 4.5 holds. Applying Lusin's Theorem to the pair $(b, \delta_\ell)$ yields for every ℓ ∈ N a closed set $K_\ell$ on which $\tilde b_\ell$ is continuous. Then, by invoking Lemma 4.5 for $(\varepsilon_\ell, \delta_\ell, K^c_\ell)$, we obtain for every ℓ ∈ N a measurable set $\Omega_\ell$. Define Ω′ accordingly, and let ω ∈ Ω′ be fixed. Then there is an N(ω) ∈ N such that for all ℓ > N(ω) we have $\omega \in \Omega_\ell$, and therefore for all m ∈ N we obtain (4.6.1). Note that inequality (4.6.1) also holds if we replace $h_m$ by h, since h ∈ Φ* by assumption.
The assertion now follows easily by the following calculation ≤ε ℓ by (4.6.1) .
In conclusion we have
Using the above calculation this is bounded from above by Since $\tilde b_\ell$ is continuous and $h_m$ converges pointwise to h this is the same as where the last inequality follows by invoking inequality (4.6.1) with $h_m$ replaced by h. Taking the limit ℓ → ∞ completes the proof of the assertion, since the left-hand side is independent of ℓ.
Using the above approximation theorem we can now extend the estimates obtained in Section 3 to the whole space Q, as the following corollary shows.
Proof
The first inequality follows from Theorem 3.4 for all points x ∈ 2Q ∩ D. For general points x ∈ 2Q this follows by approximating 2Q ∩ D ∋ x n −→ x and using Theorem 4.6.
The second inequality follows in the same way by combining Theorem 3.6 and Theorem 4.6. Note that the estimate can be trivially extended from points x, y ∈ Q with |x − y| ∞ ≤ 1 to x, y ∈ 2Q by changing the constant C ε and using that ϕ n,k is a seminorm.
Observe that one can choose ($C_\varepsilon$ / $A_{\varepsilon,b}$) so that the conclusions of Theorem 3.4 and 3.6 hold (with the same constant / on the same set).
Long-time Regularization by Noise via Euler Approximation
In this section we will prove estimates for terms of the type $\sum_{q=1}^{N} |\varphi_{n,k+q}(x_{q+1}, x_q)|_H$.
We will first prove a concentration-of-measure result for the above term in Lemma 5.3. Using this, we prove a P-a.s. version of this estimate in Theorem 5.4. However, this estimate only holds for medium-sized N. By splitting the sum and using Theorem 5.4 repeatedly, we conclude the full estimate in Corollary 5.5.
Note that applying Corollary 4.7 to every term under the sum would result in an estimate of order $O(\sqrt{n}\, 2^{-n/6} N)$. Since N will later be chosen to be of order $2^n$, this is of no use. The technique to overcome this is two-fold: on the one hand, the $\varphi_{n,k+q}$ terms have to "work together" to achieve an expression of order O(N). However, since $\{\varphi_{n,k+q}(x_q) \mid q = 1, ..., N\}$ are "sufficiently uncorrelated", the law of large numbers tells us to expect on average an estimate of order $O(\sqrt{N})$.
On the other hand, in later applications $x_q$ will be values of the solution of the integral equation (IE)$_\omega$, so that it is reasonable to assume that $|x_{q+1} - x_q|_H \approx |\varphi_{n,k+q}(x_q)|_H$. Exploiting this enables us to use both of our previously established estimates for every $|\varphi_{n,k+q}(x_{q+1}, x_q)|_H$ term.
Using both techniques we end up with an estimate of order $O(2^{-n} N)$ (see Corollary 5.5).
Theorem 5.1 (Burkholder-Davis-Gundy Inequality)
Let $(M_n, \mathcal F_n)_{n\in\mathbb N}$ be a real-valued martingale. For $2 \le p < \infty$ we have $$\mathbb E\, |M_N|^p \le C_p^p\, \mathbb E\, [M]_N^{p/2}. \tag{5.1.1}$$ In the celebrated paper [Dav76, Section 3] it is shown that the optimal constant in our case is the largest positive root of the Hermite polynomial of order 2k. We refer to the appendix of [Ose12] for a discussion of the asymptotics of the largest positive root. See also [Kho14, Appendix B], where a self-contained proof of the Burkholder-Davis-Gundy inequality with asymptotically optimal constant can be found for the one-dimensional case.
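The reason the growth of $C_p$ matters here is the exponential moment bound of Lemma 5.2: the largest positive zero of the Hermite polynomial of order $2k$ grows like $\sqrt{2k}$ up to constants, so $C_p = O(\sqrt{p})$, which combined with Stirling's formula gives Gaussian-type moment growth. A hedged sketch of the standard computation (our own constants $c$, $K$, not the paper's):

```latex
% Sketch: C_p = O(\sqrt p) plus Stirling yields exponential square-integrability,
% assuming the quadratic variation is bounded, [M]_N \le K almost surely.
\mathbb E\,|M_N|^{2p} \le C_{2p}^{2p}\,\mathbb E\,[M]_N^{p} \le (c\,p)^{p} K^{p}
\;\Longrightarrow\;
\mathbb E \exp\!\big(\lambda |M_N|^{2}\big)
  = \sum_{p\ge 0} \frac{\lambda^{p}\,\mathbb E\,|M_N|^{2p}}{p!}
  \le \sum_{p\ge 0} (\lambda c K e)^{p} < \infty
\quad\text{for } \lambda < \tfrac{1}{ecK},
% using p^p/p! \le e^p in the last step.
```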
Lemma 5.2
Let $(M_n)_{n\in\mathbb N}$ be a martingale of the form given below; then the stated exponential moment bound holds for all r ∈ N.
Proof
Let $(M_n)_{n\in\mathbb N}$ be a martingale. Using the Burkholder-Davis-Gundy inequality (5.1.1), for every $r, p \in \mathbb N$ with $p \ge 2$ we obtain the corresponding moment bound. Using Stirling's formula, $\sqrt{2\pi p}\, p^p e^{-p} \le p!$ for $p \ge 1$, and the above calculation, we finally obtain the claimed estimate.
(5.3.1) on $A^c_{\varepsilon,b_q,n,k+q}$. Note that x is allowed to be a random variable, and we have used that $|\cdot|_\infty \le |\cdot|_H$. We now set the following. Let, as in the assertion, $n \in \mathbb N$ with $n \ge N_\varepsilon$, $r \le 2^{n/4}$, $k \in \{0, ..., 2^n - r - 1\}$ and $x_0 \in Q$. Additionally, let $x_{q+1} := x_q + \varphi_{n,k+q}(b_q; x_q)$ be the Euler approximation defined for $q \in \{0, ..., r - 1\}$. We write $x_q = (x^{(i)}_q)_{i\in\mathbb N}$ for the components of $x_q$, and for $q \in \{1, ..., r\}$ we calculate the corresponding bound. Via induction on q we deduce the claim, and since both $x_q \in Q$ and, by Assumption 1.2, b takes values in Q, we conclude that $x_q \in 2Q$ for all $q \in \{1, ..., r\}$. Note that $x_q$ is $\mathcal G_{(k+q)2^{-n}}$-measurable. Due to the fact that inequality (5.3.1) only holds on $A^c_{\varepsilon,b_q,n,k} \subseteq \Omega$, we modify $x_q$ in the following way: $\hat x_{q+1} := \hat x_q + \mathbb 1_{A^c_{\varepsilon,b_q,n,k+q}}\, \varphi_{n,k+q}(\hat x_q)$.
For the next step we define the process $M_\tau$ with $\tau \in \mathbb N$. Note that $M_\tau$ is a $\mathcal G_{(k+\tau+1)2^{-n}}$-martingale with $M_0 = 0$. Furthermore, for every $p \in \mathbb N$ we have the following bound on the increments of M. Using [Wre16, Corollary 3.2] and inequality (5.3.2), this is bounded by
Using Corollary [Wre16, Corollary 3.2] again this is bounded by
Note that x 0 is deterministic. Using this bound we invoke Lemma 5.2 with C := 18β A 2 −n |x 0 | H + 2 −2 n and hence we obtain the following bound for the Martingale (M τ ) τ ∈N E exp 1 8 In a similar way as (X q , Y q , Z q , M τ ) we define Observe that M ′ τ is a G (k+τ )2 −n -Martingale and in a completely analogous way as above we obtain E exp 1 8 Let us now consider the term V q Using Corollary [Wre16, Corollary 3.2] for p = 1 and inequality (5.3.2) this is bounded by Invoking Corollary [Wre16, Corollary 3.2] again this can be further bounded from above by This leads us to For notational ease we set C ′ := 18β A . Finally, starting from the left-hand side of the assertion and using Y q = X q + W q + V q we get for every η > 0 By applying the increasing function x → exp(x 1/2 ) to both sides and using Chebyshev's Inequality this can be bounded from above by Using inequality (5.3.4) and (5.3.5) we can conclude that which completes the proof.
Proof
We set $r := \lfloor 2^{n/24} \rfloor$. For the sake of notational ease we set $x_{q'} = 0$ whenever $q' > N$. In order to estimate the left-hand side of the assertion we will use Theorem 5.4. To this end we split the sum into s pieces of size r, and choose $i \in \{0, ..., r - 1\}$ accordingly. Starting with the left-hand side of the assertion, we split the sum into three parts. The first part contains the terms $x_q$ for q = 0 to q = i; since $i \le r \le 2^{n/24}$, this can be handled by applying Theorem 5.4 directly. The second part contains s sums of size r; here Theorem 5.4 is applicable to every term of the outer sum running over t. The last part can be handled, in the same way as the first part, by directly applying Theorem 5.4. This strategy yields the stated bound, and estimating further using inequality (5.5.1) and $r \le 2^{n/24}$ yields the final bound, which completes the proof.
Proof of the main result
In this section we are going to formulate a log-type Gronwall inequality of the form below for $j \in \{0, ..., 2^n\}$.
In Lemma 6.1 we prove this implication in an abstract setting. Using all our previous considerations we show in Theorem 6.2 that our function u from Proposition 1.7 satisfies such a Gronwall inequality and hence has to coincide with the zero function (Corollary 6.3).
By assumption we have the recursion below. Using the inequality $\ln(1 + x) \le x$, the above, and hence $\gamma_{j+1}$, is smaller than the next expression. By induction on $j \in \{0, ..., 2^m\}$ we obtain the resulting estimate. Since m is "sufficiently big", the term inside the brackets is in the interval [0, 1], so that $\gamma_j$ is bounded from below. Plugging in the definition of $\gamma_j$ and isolating $\beta_j$ yields $\beta_j \le \exp\!\big(\log_2(\beta_0)\, e^{-2K-1}\big)$.
For n ≥ m we define $\psi_n$ by (6.2.3). By splitting the sum in (6.2.3) into two sums, one where k is even and one where k is odd, we can estimate $\psi_n$ by $\psi_{n-1}$. To this end let $n \in \{m + 1, ..., N\}$. Since k − 1 is even whenever k is odd, rewriting the term $|u((k - 1)2^{-n})|_H$ yields an equal expression. Hence, since $n \in \{m + 1, ..., N\}$, using inequality (6.2.2) and the definition of $\psi_n$ (equation (6.2.3)) we have the corresponding bound. We use $|u(j2^{-m})|_H \le \beta$ to bound the above, and furthermore, using inequality (6.2.1), i.e. $N \le 7 \cdot 2^{2m/3}$, we bound it once more. In conclusion we obtain $$\psi_n \le 8 \cdot 2^{n-m} \beta + \alpha 2^{-m/3}, \qquad \forall n \in \{m, ..., N\}.$$
For ℓ ≥ n we make the definition below. From the reversed triangle inequality we deduce (6.2.5). The idea of the proof is the following: we will obtain estimates for the two sums on the right-hand side of inequality (6.2.5). For the first sum we simply use Theorem 3.4 (in the form of Corollary 4.7) to obtain estimate (6.2.6). We will split the second sum into the cases ℓ < N and N ≤ ℓ. In the first case we use Corollary 5.5, which leads us to inequality (6.2.9). For the second case we have to do a more direct computation, which relies heavily on the fact that u is Lipschitz continuous (inequality (6.2.7)).
Combining all of this will result in the final bound (6.2.10). Using the already established estimate (6.2.5) and the definition of α (6.2.2), we will be able to estimate α in terms of β (inequality (6.2.11)). Feeding this back into inequality (6.2.2) for n = m completes the proof.
We will now estimate the two sums on the right-hand side, starting with the $\varphi_{n,k}$ sum. We apply Corollary 4.7 to obtain the estimate below. Again, using that $n \in \{m, ..., N\}$ and the definition of $\psi_n$ (equation (6.2.3)), this can be written as $$2 C_\varepsilon\, n^{\frac12 + \frac1\gamma}\, 2^{-n/2} \big( 2^{n-m} 2^{-2^m} + \psi_n \big).$$
Proof
Step 1: Let 0 < ε < 1/40 and let Ω_{ε,f} be the set of Theorem 6.2. Fix ω ∈ Ω_{ε,f} and let u, as stated in the assertion, be a solution to the above equation. Since ‖f‖_∞ ≤ 1, the function u is Lipschitz continuous with Lipschitz constant at most 2. Furthermore, Assumption 1.2 on f implies that u is Q- as well as Q_A-valued.
Weighted Sum-Rate Optimization for Intelligent Reflecting Surface Enhanced Wireless Networks
Intelligent reflecting surface (IRS) is a promising solution to build a programmable wireless environment for future communication systems. In practice, an IRS consists of massive low-cost elements, which can steer the incident signal in fully customizable ways by passive beamforming. In this paper, we consider an IRS-aided multiuser multiple-input single-output (MISO) downlink communication system. In particular, the weighted sum-rate of all users is maximized by jointly optimizing the active beamforming at the base station (BS) and the passive beamforming at the IRS. In addition, we consider a practical IRS assumption, in which the passive elements can only shift the incident signal to discrete phase levels. This non-convex problem is first decoupled via the Lagrangian dual transform, and then the active and passive beamforming can be optimized alternately. The active beamforming at the BS is optimized based on the fractional programming method. Then, three efficient algorithms with closed-form expressions are proposed for the passive beamforming at the IRS. Simulation results have verified the effectiveness of the proposed algorithms as compared to different benchmark schemes.
I. INTRODUCTION
Intelligent reflecting surface (IRS), also known as large intelligent surface (LIS), is an artificial passive radio structure which reflects the incident radio-frequency (RF) waves into specified directions with low power consumption [1]-[3]. While an IRS resembles a full-duplex amplify-and-forward (AF) relay [4], it forwards the RF signals using passive reflection beamforming, so the power consumption of the IRS is much lower than that of the AF relay, and there is nearly no additional thermal noise added during reflection. Therefore, the IRS has recently been considered as a key enabler for the smart radio environment, which can greatly enhance the performance of wireless systems [5]-[8].
In this paper, we investigate an IRS-aided multiple-input single-output (MISO) multiuser downlink communication system as shown in Fig. 1, in which a multi-antenna base station (BS) serves multiple single-antenna mobile users. In the system, the direct links between the BS and the mobile users may suffer from deep fading and shadowing, and the IRS is deployed on a surrounding building's facade to assist the BS in overcoming the unfavorable propagation conditions by providing high-quality virtual links from the BS to the users. The objective of this paper is to maximize the weighted sum-rate (WSR) of the mobile users by jointly optimizing the active beamforming at the BS and the passive beamforming at the IRS.
A. Related Works
The IRS relays source signals from the BS by passive beamforming, so the conventional relay beamforming algorithms are not applicable here. Moreover, each reflection element is subject to a stringent instantaneous power constraint, which makes the passive beamforming more challenging. It is worth noting that the joint beamforming problem is quite different from the hybrid digital/analog processing [9]-[11] and the constant-envelope precoding [12]-[14] in massive multiple-input multiple-output (MIMO) systems. Specifically, those designs are restricted to the transceiver sides, while the IRS aims to control and optimize the behavior of the wireless environment.
On the other hand, most existing works on IRS assume that each element is a continuous phase shifter, and then the passive beamforming is equivalent to adjusting the phase-shift matrix. In [15] and [16], the authors first presented the joint active and passive beamforming problem, where the transmit power of the BS is minimized based on the semidefinite relaxation (SDR) technique. In [17] and [18], the authors focused on the maximization of sum-rate and energy efficiency, while employing zero-forcing beamforming at the BS. In [19], the authors made some practical modifications to the channel model of [18], and then the minimum signal-to-interference-plus-noise ratio (SINR) of the mobile users is maximized via joint active and passive beamforming. To the best of our knowledge, the joint beamforming for maximizing the WSR of users has not been addressed before.
In practice, the reflection element may only shift the incident signal with discrete reflection coefficient (RC) values due to hardware limitations. In [20], the authors proposed to quantize the solution of the continuous phase shifter obtained by the function fmincon in MATLAB into the discrete feasible set to maximize the energy efficiency. However, both the numerical optimization and the quantization operation are heuristic with unpredictable performance loss. In [21], the authors proposed an alternating optimization algorithm to find a local optimal discrete phase-shift solution for the transmit power minimization problem in the single-user MISO system. However, this method cannot be directly applied to the WSR maximization problem in the multi-user system.
Another key challenge is the computational complexity. In practice, the elements on IRS can be massive, thanks to the low cost and low power consumption of the passive components. Therefore, low-complexity algorithm for passive beamforming is preferred. In [16], the passive beamforming was formulated as a non-convex quadratically constrained quadratic program (QCQP), and the SDR technique was employed to solve this problem in polynomial complexity. However, SDR is not scalable to large-scale IRS as the number of involved variables is quadratic in the number of reflection elements. In addition, extracting a rank-one component from the optimum solution to the SDR problem is NP-hard in general. In [19], instead of SDR, the authors solved the QCQP with low complexity by exploiting the rank-one assumption of the channel between BS and IRS. However, it is not applicable to the general channel model in [16].
B. Contributions
In this paper, we study the joint active and passive beamforming problem to maximize the WSR of the IRS-aided multiuser downlink MISO system. This problem is non-convex due to the multiuser interference, and the optimal solution is unknown. We try to design an iterative algorithm to find a suboptimal solution with low computational complexity. Specifically, we first decouple the active beamforming at the BS and the passive beamforming at the IRS based on the Lagrangian dual transform proposed in [22]. Then, the active beamforming is solved with closed-form solutions based on the multi-ratio quadratic transform proposed in [22], and the passive beamforming is reformulated as the QCQP which is the same as that in [16] and [19].
In contrast to [16] and [19], we attempt to design a unified algorithm for the passive beamforming subproblem, which is applicable to both continuous and discrete phase-shift setups. To this end, we relax the RC constraint to an ideal convex set, where both the phase and the amplitude of the RC can be adjusted. Then, low-complexity algorithms with closed-form expressions are designed to find the optimal solution of the convex QCQP. We further show that these algorithms can be extended to the non-convex phase-shift cases with a small modification. It is worth noting that the joint active and passive beamforming solution under the ideal RC assumption not only reveals the ultimate performance limits of the proposed algorithms for the IRS-aided system, but also provides a reasonable initial point for the joint beamforming under non-convex phase-shift assumptions.
The main contributions of this work are summarized as follows:
• Firstly, this paper is one of the early attempts to study the WSR maximization problem for the IRS-aided multiuser downlink MISO system. An iterative algorithm with closed-form expressions is proposed to alternately optimize the active beamforming at the BS and the passive beamforming at the IRS.
• Secondly, we design three low-complexity algorithms for the passive beamforming at the IRS. All these algorithms are applicable to the ideal RC assumption, the continuous phase-shift assumption, as well as the discrete phase-shift assumption.
• Finally, simulation results have verified that the proposed algorithms achieve significant capacity gains against benchmark schemes. Moreover, the continuous phase shifter may achieve nearly the same performance as that in the ideal case, and the 2-bit phase shifter may work well with only a small performance degradation.
It should be noted that another important application of the passive radio is ambient backscatter communications [6], [23]-[26] or the symbiotic radio network [27]-[29], which are used to support low-power communications in Internet of Things (IoT) applications. In particular, the data of the IoT devices are embedded into the reflected signal from the environment rather than emitting a new radio carrier, resulting in high spectral and energy efficiency.
The rest of the paper is organized as follows. Section II outlines the system model. The algorithm framework for joint active and passive beamforming is presented in Section III. In Section IV, three low-complexity algorithms are proposed for the RC adjustment subproblems. Simulation results are provided in Section V to verify the effectiveness of the proposed algorithms, and Section VI concludes the paper.
The notations used in this paper are listed as follows. E(·) denotes statistical expectation, 1(·) is the indicator function, and Pj_F(·) indicates the projection operation onto the set F. CN(µ, σ^2) denotes the circularly symmetric complex Gaussian (CSCG) distribution with mean µ and variance σ^2. I_M denotes the M × M identity matrix. For any general matrix G, g_{i,j} is the element in the i-th row and j-th column. G^T and G^H denote the transpose and conjugate transpose of G, respectively. For any vector w (all vectors in this paper are column vectors), w_i is the i-th element, and ‖w‖ denotes the Euclidean norm. The quantities max(x, y) and min(x, y) denote the maximum and minimum of two real numbers x and y, respectively. |x| denotes the absolute value (modulus) of a complex number x, x^* denotes its conjugate, and Re{x} and Im{x} denote its real and imaginary parts, respectively.
A. Channel Model
This paper investigates an IRS-aided multiuser MISO communication system as shown in Fig. 1, which consists of one BS equipped with M antennas, one IRS with N reflection elements, and K single-antenna users. The baseband equivalent channels from the BS to user k, from the BS to the IRS, and from the IRS to user k are denoted by h_{d,k} ∈ C^{M×1}, G ∈ C^{N×M}, and h_{r,k} ∈ C^{N×1}, respectively, with k = 1, ..., K. For simplicity, we assume that all the channels experience quasi-static flat fading. In addition, we assume that the channel state information (CSI) of all channels involved is perfectly known at the BS and the IRS, which is the same as [15]-[19].
It should be emphasized that the availability of perfect CSI is an idealistic assumption. Nevertheless, the algorithms proposed under this assumption are still useful as a reference point for studying the theoretical performance gain brought by the IRS, as well as for providing training labels for machine learning based joint beamforming designs, e.g., [30] and [31]. How to obtain CSI at the IRS is a difficult task. Some early attempts can be found in [31] and [32], in which a channel construction approach is proposed to obtain the full CSI with low training overhead based on compressive sensing tools.
The IRS-aided link (i.e., the BS-IRS-user link) is modeled as a concatenation of three components: the BS-IRS link G, the IRS phase-shift matrix (i.e., passive beamforming), and the IRS-user link h_{r,k}. Denote by θ_n ∈ F the RC of the n-th reflection element, where F is the feasible set of RCs. The reflection operation on the IRS element resembles multiplying the incident signal with θ_n and then forwarding this composite signal as if from a point source, which is the main difference from the active reflection surface [33]-[35]. It is known that the power of signals decreases drastically during reflection (the power loss of the reflection operation is generally larger than 10 dB due to the "double-fading" effect [36], which will be quantified in the link budget in Section V-A). Thus the phase-shift matrix of the IRS is approximately denoted by a diagonal matrix Θ = √η diag(θ_1, ..., θ_n, ..., θ_N), where η ≤ 1 indicates the reflection efficiency, since the power of signals reflected two or more times is much smaller than that of the signal reflected only once. Besides, we consider the following three assumptions for the feasible set of RCs in this paper (a small projection sketch for each set is given after this list):
• Ideal RC: Under this assumption, we only restrict the RC to be peak-power constrained: F_1 = {θ_n : |θ_n|^2 ≤ 1}. It is shown in [37] that the amplitude and phase of θ_n can be controlled independently via control over the resistance and the capacitance of the integrated circuits in the IRS element, respectively. Under this assumption, the theoretical performance upper bound of passive beamforming can be obtained.
• Continuous Phase Shifter: In [15]-[19], it is assumed that the strength of the reflected signal from each reflection element is maximized, thus |θ_n|^2 = 1. Then, the reflection element only adjusts the phase of the incident signal, and we have θ_n = e^{jφ_n}. Since θ_n can be adjusted to any desired phase, F_2 = {θ_n : θ_n = e^{jφ_n}, φ_n ∈ [0, 2π)}. (2)
• Discrete Phase Shifter: In practice, the reflection element only has finite reflection levels. As in [20] and [21], we assume that θ_n only takes τ discrete values which are equally spaced on the circle, i.e., F_3 = {θ_n : θ_n = e^{jφ_n}, φ_n ∈ {0, 2π/τ, ..., 2π(τ−1)/τ}}. (3)
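The following minimal numpy sketch projects an arbitrary complex RC vector onto each of the three feasible sets; the function name, the default quantization level tau, and the numerical guard are illustrative assumptions, not part of the paper.

```python
import numpy as np

def project_rc(theta, feasible_set="F1", tau=4):
    """Project a complex RC vector onto F1, F2, or F3 (illustrative only).

    F1: |theta_n| <= 1 (ideal RC); F2: unit modulus (continuous phase);
    F3: tau equally spaced unit-modulus phase levels (discrete phase shifter).
    """
    theta = np.asarray(theta, dtype=complex)
    if feasible_set == "F1":
        mag = np.abs(theta)
        # Shrink only the entries whose magnitude exceeds the peak-power limit.
        return np.where(mag > 1.0, theta / np.maximum(mag, 1e-12), theta)
    if feasible_set == "F2":
        return np.exp(1j * np.angle(theta))   # keep the phase, unit amplitude
    if feasible_set == "F3":
        step = 2.0 * np.pi / tau
        # Quantize each phase to the nearest of the tau discrete levels.
        return np.exp(1j * step * np.round(np.angle(theta) / step))
    raise ValueError("feasible_set must be 'F1', 'F2', or 'F3'")

theta = np.array([0.3 + 0.4j, 1.2 - 0.5j])
print(project_rc(theta, "F3", tau=4))
```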
B. Received Signal at User k
Denote the transmit data symbol to user k by s_k. It is assumed that s_k (k = 1, ..., K) are independent random variables with zero mean and unit variance. Then, the transmitted signal at the BS can be expressed as x = Σ_{k=1}^{K} w_k s_k, where w_k ∈ C^{M×1} is the corresponding transmit beamforming vector.
The signal received at user k is expressed as y_k = (h_{d,k}^H + h_{r,k}^H Θ G) x + u_k, where u_k ~ CN(0, σ_0^2) denotes the additive white Gaussian noise (AWGN) at the k-th user receiver.
C. Problem Formulation
The k-th user treats all the signals from the other users (i.e., s_1, ..., s_{k-1}, s_{k+1}, ..., s_K) as interference. Hence, the decoding SINR of s_k at user k is
$$\gamma_k = \frac{\left|\left(h_{d,k}^H + h_{r,k}^H \Theta G\right) w_k\right|^2}{\sum_{j \ne k} \left|\left(h_{d,k}^H + h_{r,k}^H \Theta G\right) w_j\right|^2 + \sigma_0^2}. \qquad (6)$$
The transmit power constraint of the BS is Σ_{k=1}^{K} ‖w_k‖^2 ≤ P_T. (7) Let W = [w_1, w_2, ..., w_K] ∈ C^{M×K}. In this paper, our objective is to maximize the WSR of all the users by jointly designing the transmit beamforming matrix W at the BS and the RC matrix Θ at the IRS, subject to the transmit power constraint in (7). The WSR maximization problem is thus formulated as
$$\text{(P1)}\qquad \max_{W,\,\Theta}\; f_1(W, \Theta) = \sum_{k=1}^{K} \omega_k \log_2(1 + \gamma_k) \quad \text{s.t.}\ \sum_{k=1}^{K}\|w_k\|^2 \le P_T,\ \ \theta_n \in F,$$
where F ∈ {F_1, F_2, F_3}, and the weight ω_k is used to represent the priority of user k.
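To make the signal model concrete, here is a minimal numpy sketch that evaluates (6) and the WSR objective of (P1) for given W and θ. The channel-combination convention h_k = h_{d,k} + G^H Θ^H h_{r,k}, the dimensions, and the default noise power are assumptions consistent with the text, not quotations from it.

```python
import numpy as np

def weighted_sum_rate(H_d, G, H_r, W, theta, eta=1.0, sigma2=1.0, weights=None):
    """Per-user SINRs and the WSR for an IRS-aided MISO downlink (sketch).

    H_d: (M, K) direct BS-user channels; G: (N, M) BS-IRS channel;
    H_r: (N, K) IRS-user channels; W: (M, K) beamformers; theta: (N,) RCs.
    """
    Theta = np.sqrt(eta) * np.diag(theta)
    # Combined effective channel of user k: h_k = h_{d,k} + G^H Theta^H h_{r,k}.
    H = H_d + G.conj().T @ Theta.conj().T @ H_r           # (M, K)
    P = np.abs(H.conj().T @ W) ** 2                       # P[k, j] = |h_k^H w_j|^2
    signal = np.diag(P)
    interference = P.sum(axis=1) - signal
    sinr = signal / (interference + sigma2)
    w = np.ones(W.shape[1]) if weights is None else np.asarray(weights)
    return sinr, float(np.sum(w * np.log2(1.0 + sinr)))
```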
Despite the conciseness of (P1), it is generally much more difficult than the power minimization problem in [16], due to the non-convex objective function f_1(W, Θ) and the non-convex constraint sets F_2 and F_3. (Mathematically, the problem of minimizing the transmit power given individual rate requirements of users is equivalent to the problem of maximizing the minimum SINR of users given a transmit power constraint.) In this paper, we try to find a suboptimal solution for (P1) with low computational complexity. To be specific, we need to address two technical challenges:
• First, we need to decouple the optimization variables in f_1(W, Θ) to make (P1) tractable.
• Second, the complexity of the RC adjustment algorithm should be scalable for cases with large N.
III. WSR MAXIMIZATION FOR DOWNLINK TRANSMISSION
In this section, we address the first challenge to decouple the optimization of the transmit beamforming W and the RC matrix Θ into several tractable subproblems.
A. Lagrangian Dual Transform
To tackle the logarithm in the objective function of (P1), we apply the Lagrangian dual transform proposed in [22]. Then, (P1) can be equivalently written as (P1′), where α refers to [α_1, ..., α_k, ..., α_K]^T, α_k is an auxiliary variable for the decoding SINR γ_k, and the new objective function f_{1a} is defined accordingly. In (P1′), when W and Θ are held fixed, the optimal α_k is α_k° = γ_k. Then, for a fixed α, optimizing W and Θ reduces to (P1′′), which is a sum of multiple-ratio FP problems; the non-convexity introduced by the ratio operations can be handled via the recently proposed fractional programming technique [22]. In the next two subsections, we investigate how to solve for W with Θ fixed and for Θ with W fixed, respectively. Then, the original problem (P1′) can be solved in an iterative manner by applying alternating optimization, as illustrated in Fig. 2. In particular, in each iteration, we first update the nominal SINR α, and then better solutions for W and Θ are obtained, respectively. The process is repeated until no further improvement is obtained.
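For reference, the Lagrangian dual transform of [22] replaces each logarithmic rate term as sketched below. This shape is quoted from the general FP framework and is believed to match the lost display defining f_{1a}, but the exact weighting should be checked against [22]; the natural logarithm is used here (a 1/ln 2 factor converts to bps/Hz).

```latex
% Lagrangian dual transform (shape per the FP framework of [22]; assumed form):
\[
  f_{1a}(W,\Theta,\alpha)
  = \sum_{k=1}^{K} \omega_k \Bigl( \ln(1+\alpha_k) - \alpha_k
    + \frac{(1+\alpha_k)\,\gamma_k}{1+\gamma_k} \Bigr).
\]
% Setting the derivative w.r.t. alpha_k to zero recovers alpha_k = gamma_k.
```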
B. Transmit Beamforming
In this subsection, we investigate how to find a better beamforming matrix W given fixed Θ for (P1′′). Denote the combined channel for user k by h_k = h_{d,k} + G^H Θ^H h_{r,k}. (11) Then, the SINR γ_k in (6) becomes
$$\gamma_k = \frac{|h_k^H w_k|^2}{\sum_{j \ne k} |h_k^H w_j|^2 + \sigma_0^2}. \qquad (12)$$
Using γ_k in (12), the objective function of (P1′′) is written as a function of W: f_2(W). Thus, given α and Θ, optimizing W becomes (P2). It is known that (P2) is a multiple-ratio fractional programming problem. Using the quadratic transform proposed in [22], f_2(W) is reformulated as f_{2a}(W, β), where β = [β_1, ..., β_K]^T and β_k ∈ C is an auxiliary variable. Then, based on [22], solving problem (P2) over W is equivalent to solving the following problem over W and β: (P2a). (P2a) is a biconvex optimization problem, and a common practice for solving it (which does not guarantee global optimality of the solution) is to alternately update W and β by fixing one of them and solving the corresponding convex optimization problem [38]. Lemma 1: The optimal β_k for a given W is given in closed form by (15). Then, fixing β, the optimal w_k is given by (16), where λ_0 is the dual variable introduced for the power constraint, which is optimally determined by (17). Proof: β_k• in (15) and w_k• in (16) can be obtained by setting ∂f_{2a}/∂β_k and ∂f_{2a}/∂w_k to zero, respectively.
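The closed forms (15)-(17) did not survive extraction, so the sketch below uses the standard quadratic-transform updates from the FP framework of [22]: a matched-ratio β update and a regularized MMSE-style w update, with bisection on the power dual λ_0. The exact scalings, the feasible starting point, and the bisection upper bound are assumptions, not the paper's formulas.

```python
import numpy as np

def update_beamformers(H, alpha, weights, P_T, sigma2, iters=50):
    """One quadratic-transform pass for W given alpha (illustrative FP update).

    H: (M, K) combined channels h_k as columns; returns W: (M, K).
    """
    M, K = H.shape
    W = np.sqrt(P_T / K) * H / np.linalg.norm(H, axis=0)   # feasible start
    HW = H.conj().T @ W                                    # HW[k, j] = h_k^H w_j
    denom = (np.abs(HW) ** 2).sum(axis=1) + sigma2
    # Assumed beta update: weighted matched ratio of signal to total power.
    beta = np.sqrt(weights * (1.0 + alpha)) * np.diag(HW) / denom
    # Assumed w update: w_k prop. to (lam*I + sum_j |beta_j|^2 h_j h_j^H)^{-1} h_k,
    # with lam >= 0 found by bisection so that sum_k ||w_k||^2 <= P_T is tight.
    A = (H * (np.abs(beta) ** 2)) @ H.conj().T             # sum_j |b_j|^2 h_j h_j^H
    lo, hi = 0.0, 1e6                                      # arbitrary upper bound
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        Hsolved = np.linalg.solve(A + lam * np.eye(M), H)  # columns (.)^{-1} h_k
        W = Hsolved * (np.sqrt(weights * (1.0 + alpha)) * beta.conj())
        if np.sum(np.abs(W) ** 2) > P_T:
            lo = lam                                       # need more damping
        else:
            hi = lam
    return W
```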
C. Optimizing Reflection Response Matrix Θ
Finally, we optimize Θ in (P1′′) given fixed α and W. Using γ_k defined in (6), the objective function of (P1′′) is expressed as a function of Θ: f_{3u}(Θ) in (18), with the auxiliary quantities in (19) defined for all i and k. Using (19), f_{3u}(Θ) in (18) is equivalently transformed to a new function of θ: f_3(θ) in (20). Finally, optimizing Θ is translated to optimizing θ, which is represented as (P3). (P3) is also a multiple-ratio fractional programming problem, and can be translated into the following problem based on the quadratic transform proposed in [22], where the new objective function f_{3a}(θ, ε) is given in (21) and ε refers to the auxiliary variable vector [ε_1, ..., ε_K]^T. Similarly, we optimize θ and ε alternately. The optimal ε_k for a given θ can be obtained by setting ∂f_{3a}/∂ε_k to zero, i.e., (22). Then the remaining problem is optimizing θ for a given ε. It is known that |b_{i,k} + θ^H a_{i,k}|^2 in (21) can be further expanded as in (23). Substituting (22) and (23) into (21), the optimization problem for θ is represented as (P4), with a quadratic objective of the form f_4(θ) = −θ^H U θ + 2 Re{θ^H ν} + const., where U and ν are defined in (26). Since the matrices a_{i,k} a_{i,k}^H are positive semidefinite for all i and k, U is positive semidefinite, and f_4(θ) is a quadratic concave function of θ. Therefore, the passive beamforming subproblem (P4) is a QCQP which is the same as that in [16] and [19], and the non-convexity of (P4) is only introduced by the constraint in (24). We will investigate algorithms to solve (P4) in the next section.
D. Algorithm Development
We summarize the proposed alternating optimization method in Algorithm 1. To be specific, the algorithm starts with certain feasible values of W^(0) and Θ^(0). Next, given a fixed solution {W^(i), Θ^(i)} in the i-th iteration, we first update the nominal SINR α^(i+1), and then the transmit beamforming W^(i+1) and the RC values of the IRS Θ^(i+1) are updated based on the fractional programming techniques, respectively, for the (i+1)-th iteration. The convergence of the whole algorithm is discussed in the following proposition.
Proposition 1: Algorithm 1 is guaranteed to converge if the RC vector θ obtained by solving (P4) in the i-th iteration satisfies
$$f_4\big(\theta^{(i)}\big) \ge f_4\big(\theta^{(i-1)}\big). \qquad (29)$$
Proof: It can be verified that, when (29) is satisfied, the objective function is monotonically nondecreasing after each iteration. Therefore, Algorithm 1 is guaranteed to converge. (The loop in Algorithm 1 terminates when the value of the function f_{1a} in (9) converges.)
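Putting the pieces together, a driver loop for Algorithm 1 could look as follows. It reuses the weighted_sum_rate and update_beamformers sketches above; update_theta stands in for any of the (P4) solvers of Section IV, and the tolerance, iteration cap, and the use of the true WSR as a convergence proxy for f_{1a} are invented simplifications.

```python
import numpy as np

def algorithm1(H_d, G, H_r, W0, theta0, P_T, sigma2, weights,
               update_theta, tol=1e-4, max_iter=100):
    """Alternating-optimization skeleton for Algorithm 1 (illustrative).

    update_theta must not decrease f_4, cf. Proposition 1 / condition (29).
    """
    W, theta, prev = W0, theta0, -np.inf
    for _ in range(max_iter):
        # Step 3.0: the optimal auxiliary variable alpha_k equals the SINR.
        alpha, _ = weighted_sum_rate(H_d, G, H_r, W, theta,
                                     sigma2=sigma2, weights=weights)
        # Step 3.1: active beamforming at the BS for fixed alpha and theta.
        H = H_d + G.conj().T @ np.diag(theta).conj() @ H_r
        W = update_beamformers(H, alpha, weights, P_T, sigma2)
        # Step 3.2: passive beamforming at the IRS for fixed alpha and W.
        theta = update_theta(H_d, G, H_r, W, theta)
        _, obj = weighted_sum_rate(H_d, G, H_r, W, theta,
                                   sigma2=sigma2, weights=weights)
        if obj - prev < tol:    # mirrors "until f_1a converges" in Algorithm 1
            break
        prev = obj
    return W, theta
```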
IV. REFLECTION COEFFICIENTS ADJUSTMENT FOR (P4)
In Section III, the WSR maximization problem is decoupled, and an iterative algorithm for joint active and passive beamforming is proposed. We have derived closed-form solutions for every step in Algorithm 1 except Step 3.2, i.e., optimizing the RC by solving (P4). Thus we address (P4) in this section.
After dropping irrelevant constant terms, (P4) is equivalently translated into (P4a), with the objective f_{4a}(θ) in (31) and the RC constraints in (30). Note that f_{4a}(θ) is a concave quadratic function. When θ_n ∈ F_1, the constraints in (30) are also convex, hence (P4a) is convex in this case. However, when θ_n ∈ F_2 (or θ_n ∈ F_3), (P4a) is non-convex, and finding the optimal θ is a challenging task.
In [16], the authors applied SDR to solve this problem and then used Gaussian randomization to construct a rank-one solution. In general, the computational complexity of SDR is on the order of O(N^6) [39], which is not scalable for cases with large N. Therefore, in this section, we aim to design low-complexity and scalable algorithms to solve (P4). In particular, we first propose algorithms with closed-form solutions for the convex case (i.e., θ_n ∈ F_1). Then, the algorithms proposed for the convex problem are extended to tackle the non-convex cases.
A. Nearest Point Projection
In [20], the authors suggested solving the RC adjustment problem for F_2 first, and then projecting the solution onto F_3. The performance of the projected solution is highly related to that of the solution of the original problem. However, since F_2 is non-convex, the optimal solution under this feasible set is difficult to obtain. To overcome this drawback, we make a small modification to that two-step method:
• Firstly, we solve the convex problem by assuming θ_n ∈ F_1, and derive the optimal θ.
• Secondly, θ is projected to the nearest feasible point in the non-convex set F_2 or F_3 to obtain a suboptimal solution for the non-convex problem.
We call this method the nearest point projection (NPP) method. The projected solution in our method may achieve better performance, since in the first step we obtain the optimal solution instead of a suboptimal one carried out by numerical optimization.
However, f(θ) = |θ|^2 is not a complex analytic function. Thus, we rewrite the constraint as θ^H e_n e_n^H θ ≤ 1, ∀n = 1, ..., N, where e_n ∈ R^{N×1} is an elementary vector with a one at the n-th position. Then, (P4a) (with F = F_1) is represented as maximizing f_{4a}(θ) subject to θ^H e_n e_n^H θ ≤ 1, ∀n = 1, ..., N. (33) This problem is convex, and it can be equivalently transformed into the dual problem via Lagrange dual decomposition (LDD), where λ = [λ_1, ..., λ_N], λ_n is the dual variable for the constraint θ^H e_n e_n^H θ ≤ 1, and G(θ, λ) denotes the dual objective function. G(θ, λ) is a concave function with respect to θ. It can be verified that Slater's condition is satisfied and thus the duality gap is indeed zero [40]. Then we have the following lemma. Lemma 2: The optimal θ for a given λ is given in closed form by (35). The optimal dual variable vector λ• can be determined according to the constraints in (33) via the ellipsoid method. Proof: θ• in (35) can be obtained by setting ∂G/∂θ to zero.
2) Projection Step: Denote the optimal RC for the cases θ_n ∈ F_2 and θ_n ∈ F_3 by θ_n° = Pj_F(θ_n•), where Pj_F(·) indicates the projection operation onto F:
• When θ_n ∈ F_2, the angle of θ_n° is ∠θ_n° = ∠θ_n•, i.e., the phase is kept and the amplitude is scaled to one.
• When θ_n ∈ F_3, the angle of θ_n° is the discrete phase level nearest to ∠θ_n•.
3) Discussion: Note that the θ° obtained by projection is not a local optimal solution of the original non-convex problem. Thus, we only update θ° when the constraint (29) in Proposition 1 is satisfied, to guarantee the convergence of Algorithm 1.
In addition, another drawback of the method proposed above is the complexity. In each iteration step of the LDD method, the highest-complexity operation is to find θ• in (35); taking the iterations of the ellipsoid method into account, the overall complexity is on the order of O(N^6), which is the same as that of the SDR technique in [16]. Therefore, it is worth designing a low-complexity method to replace the conventional LDD in the NPP method.
B. Iterative Reflection Coefficient Updating
In [21], for the single-user case, the authors proposed an alternating optimization algorithm which iteratively optimizes one of the N RCs in θ while keeping the others fixed. In this subsection, we extend this method to the multi-user system. In contrast to the NPP method, the complexity of the algorithm proposed in this subsection is very low and, moreover, a local optimum can be found for (P4a).
1) Subproblem Formulation for Optimizing θ_n: Denote the element in the i-th row and j-th column of U by u_{i,j}, and the i-th element of ν by ν_i. Then, θ^H ν can be written out elementwise as in (39), and similarly θ^H U θ can be expanded as in (40). From the definition of U in (26), U is a Hermitian matrix. Substituting u_{i,j} = u_{j,i}^* into (40) gives (41). Substituting (39) and (41) into (31), f_{4a}(θ) can be translated into a function of θ_n. After dropping all the irrelevant constant terms, we have
$$f_5(\theta_n) = -A_{1,n}|\theta_n|^2 + 2\,\mathrm{Re}\{\theta_n^* A_{2,n}\}, \qquad (42)$$
where A_{1,n} and A_{2,n} are given in (43). Then, the subproblem for optimizing θ_n, given all the other RCs, is (P5).
2) Optimal Solutions: f_5(θ_n) in (42) is a concave quadratic function of θ_n, and the closed-form solution for θ_n can be derived for both the convex case and the non-convex cases:
• θ_n ∈ F_1: In this case, |θ_n|^2 ≤ 1. Thus, the optimal RC can be found by maximizing the quadratic objective and then projecting back into the unit ball: θ_n• = A_{2,n}/A_{1,n} if |A_{2,n}| ≤ A_{1,n}, and θ_n• = A_{2,n}/|A_{2,n}| otherwise. (44)
• θ_n ∈ F_2: In this case, |θ_n|^2 = 1. Hence, f_5(θ_n) = −A_{1,n} + 2 Re{θ_n^* A_{2,n}}, which is linear in θ_n. Then, the optimal RC for (P5) is θ_n• = A_{2,n}/|A_{2,n}|, i.e., ∠θ_n• = ∠A_{2,n}. (45)
• θ_n ∈ F_3: In this case, we also have |θ_n|^2 = 1. The optimal RC is the discrete phase level nearest to ∠A_{2,n}.
Finally, all the reflection coefficients can be optimized based on (P5) in order from n = 1 to n = N, repeatedly. This method is named the iterative reflection coefficient updating (ICU).
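A minimal sketch of one ICU pass is given below. Because the display (43) was lost, the coefficients A_{1,n} and A_{2,n} are re-derived here from the quadratic form f(θ) = −θ^H U θ + 2 Re{θ^H ν}, so their exact expressions are reconstructions rather than quotations; the three closed-form updates follow the case analysis above.

```python
import numpy as np

def icu_sweep(U, nu, theta, feasible_set="F2", tau=4, sweeps=10):
    """Iterative RC updating for f(theta) = -theta^H U theta + 2 Re{theta^H nu}.

    Coordinate-wise closed-form updates; A1/A2 are the (reconstructed)
    quadratic and linear coefficients of f_5(theta_n).
    """
    theta = np.array(theta, dtype=complex)
    N = theta.size
    for _ in range(sweeps):
        for n in range(N):
            A1 = np.real(U[n, n])
            # Linear coefficient: nu_n minus the cross terms from row n of U.
            A2 = nu[n] - (U[n, :] @ theta - U[n, n] * theta[n])
            if feasible_set == "F1":
                cand = A2 / max(A1, 1e-12)          # unconstrained maximizer
                if np.abs(cand) > 1.0:              # project into the unit ball
                    cand = A2 / np.abs(A2)
                theta[n] = cand
            elif feasible_set == "F2":
                theta[n] = A2 / max(np.abs(A2), 1e-12)   # align phase with A2
            else:                                    # F3: nearest discrete phase
                step = 2.0 * np.pi / tau
                theta[n] = np.exp(1j * step * np.round(np.angle(A2) / step))
    return theta
```

Sweeping n = 1, ..., N repeatedly implements ICU; initializing the sweep with θ^(i-1) preserves the monotonicity required by Proposition 1.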
3) Discussion: θ_n• provided above is the optimal solution of (P5), which means that we always find the optimal θ_n• while fixing the other RC values. As a result, the ICU algorithm will converge to a local optimum of (P4a) under all three RC assumptions. In particular, when θ_n ∈ F_1 for all n, this local optimum is the global optimal solution, since (P5) is convex in this case. Therefore, in the i-th iteration of Algorithm 1, if we initialize the ICU with θ^(i-1), the updated θ^(i) always satisfies the constraint (29) in Proposition 1, and thus Algorithm 1 will converge. From (44), the complexity of solving (P5) is O(N). In every iteration step of ICU, we need to solve (P5) N times, for all the N RC values in θ. Thus the complexity of the ICU is O(N^2). Since ICU can also find the optimal solution in the θ_n ∈ F_1 case, it can be used to replace the LDD in the NPP method by applying the F_1 solution in (44). Through this operation, the complexity of NPP is reduced to O(N^2).
C. Alternating Direction Method of Multipliers
One drawback of the ICU algorithm is that the RC values in θ must be optimized one by one. Although the complexity is low, it may still take a long time when N is large. In this section, we propose an algorithm which can optimize θ in parallel, while keeping the complexity much lower than that of the NPP method.
1) Problem Transform: Roughly speaking, when θ_n ∈ F_2 (or θ_n ∈ F_3), (P4a) is a convex optimization problem with some additional non-convex constraints. Recently, a heuristic method has been proposed for this kind of problem by employing the alternating direction method of multipliers (ADMM) [41], [42]. Although this heuristic non-convex ADMM may not find an optimal point, it can be dramatically fast at finding a "good" solution [42].
Let us introduce an auxiliary vector q for θ, and a penalty term for q = θ. Then, (P4a) is equivalently represented as (P4c), where µ > 0 is the penalty parameter. We need two Lagrange variable vectors λ_R = [λ_{R,1}, ..., λ_{R,N}]^T and λ_I = [λ_{I,1}, ..., λ_{I,N}]^T for Re{q − θ} = 0 and Im{q − θ} = 0, respectively, since the constraint in (49a) is a complex equation. Then, the Lagrangian of (P4c) is formed accordingly, where 1_F(·) is the indicator function of the set F (i.e., 1_F(θ_n) = 0 if θ_n ∈ F, and equals infinity otherwise). Thus the dual problem is formulated as (P6). It can be verified that Slater's condition [40] is satisfied when F = F_1, and thus the duality gap is indeed zero, i.e., (P6) is equivalent to (P4c). However, when F = F_2 or F = F_3, (P4c) is non-convex and a duality gap exists. In these cases, the optimal objective value of (P6) only serves as an upper bound for the primal problem (P4c). The benefit of this transformation is that solving (P6) is relatively simpler than solving the primal problem (P4c).
2) ADMM for (P6): In this part, we solve (P6) via ADMM, which has the iterative form (51)-(53), where λ̄ = λ_R + jλ_I and t is the iteration index.
To be specific, in (51), θ^{t+1} is optimized given q^t and λ̄^t. The optimal θ is given by the projection in (54). The projection operation in (54) is:
• When θ_n ∈ F_2: ∠θ_n^{t+1} = ∠θ_n;
• When θ_n ∈ F_3: ∠θ_n^{t+1} is the discrete phase level nearest to ∠θ_n.
Then, in (52), q is optimized given θ^{t+1} and λ̄^t, which yields the closed form in (58). In the end, the Lagrange variable λ̄^{t+1} is updated in (53). Note that, when F = F_1, the ADMM algorithm will converge to the global optimum [43]. However, this is not necessarily true when F = F_2 or F = F_3, since (P4c) is a non-convex optimization problem. For these cases, a convergence condition for the ADMM algorithm is presented in the following lemma.
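A compact sketch of the ADMM loop described above follows. The θ-update is the projection (54) (via the project_rc sketch from Section II), the q-update solves the quadratic step with its matrix inverse precomputed once, and the dual step enforces q = θ. Since the displays (51)-(58) were lost, the concrete update formulas, written here in scaled-dual form in the spirit of [41], [42], are assumptions.

```python
import numpy as np

def admm_p4(U, nu, theta0, mu, feasible_set="F2", tau=4, iters=200):
    """Heuristic ADMM for max -theta^H U theta + 2 Re{theta^H nu}, theta in F.

    Scaled-dual form; the (2U + mu*I)^{-1} factor is precomputed once, so
    each iteration costs one matrix-vector product (update formulas assumed).
    """
    N = theta0.size
    Minv = np.linalg.inv(2.0 * U + mu * np.eye(N))   # precomputed once
    q = np.array(theta0, dtype=complex)
    lam = np.zeros(N, dtype=complex)                 # scaled dual variable
    for _ in range(iters):
        # theta-update: projection of q^t + lam onto the feasible set F.
        theta = project_rc(q + lam, feasible_set, tau)
        # q-update: maximizer of the quadratic augmented Lagrangian.
        q = Minv @ (2.0 * nu + mu * (theta - lam))
        # dual update for the consensus constraint q = theta.
        lam = lam + q - theta
    return theta
```

Per Lemma 3 below, a safe penalty is µ = ι‖U‖_2 with ι ≥ 3, which makes (µ/2)I_N − U positive definite.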
Lemma 3: When θ_n belongs to the set F_2 or F_3, the ADMM algorithm presented above is guaranteed to converge if the penalty parameter µ satisfies
$$\frac{\mu}{2} I_N - U \succ 0. \qquad (59)$$
Proof: Please see the detailed proof in Appendix A. In this paper, we choose µ = ι‖U‖_2, where ι ≥ 1 is the minimum integer which satisfies (59).
3) Discussion: From Lemma 3, we only know that the ADMM algorithm converges, but not necessarily to a global or even a local optimum. Thus, we need to check whether the output θ satisfies the constraint (29) to guarantee the convergence of Algorithm 1.
It is seen that the highest-complexity operation in the ADMM algorithm is the update of q^{t+1} in (58). In contrast to the LDD, the matrix inverse term in (58) is constant over iterations, and thus it can be pre-computed in the initialization step. Therefore, the complexity of the ADMM algorithm is O(N^3), which is much lower than that of the LDD in Section IV-A and the SDR technique in [16]. In addition, although the complexity of ADMM is slightly higher than that of the ICU in Section IV-B, the RC vector θ is updated in parallel instead of the serial operation in ICU. Hence, the ADMM algorithm may converge faster via parallel computing, especially when N is large. Finally, we summarize the comparison of the three RC optimization algorithms in TABLE I. Note that both the ICU and ADMM algorithms can be employed to replace the LDD in the NPP method, reducing the complexity of NPP to O(N^2) and O(N^3), respectively.
V. SIMULATION RESULTS

A. Simulation Scenario
In this section, numerical examples are provided to validate the effectiveness of the proposed algorithms. We consider the IRS-aided femtocell network illustrated in Fig. 3. The large-scale fading of the direct link is modeled as C_p ς_B ς_k d_{D,k}^{-ρ_D}, (60) where d_{D,k} denotes the link distance between the BS and the k-th user, ρ_D = 3.5 is the path loss exponent, C_p is a constant with respect to the wavelength, and ς_B and ς_k denote the antenna gains of the BS and the k-th user, respectively. We assume that C_p ς_B ς_k = −30 dB, which is the path loss at the reference distance (d_{D,k} = 1 m). Besides, we assume that the IRS-aided link experiences free-space path loss, since the location of the IRS is usually carefully chosen. Then, the large-scale fading coefficients with respect to the power of G and h_{r,k} are denoted by κ_G = C_p ς_B ς_I d_G^{-ρ_I} and κ_{r,k} = C_p ς_I ς_k d_{r,k}^{-ρ_I}, respectively, where ρ_I = 2 is the path loss exponent, d_G and d_{r,k} are the distances from the BS to the IRS and from the IRS to the k-th user, respectively, and ς_I is the reflection gain of the IRS element. Since the IRS-aided link is the concatenation of G and h_{r,k}, the total path loss is the product κ_G κ_{r,k}. (61) Comparing (61) with (60), one can see that the signal reflected by the IRS suffers from the "double-fading" effect [36]. Nevertheless, the reflection gain of the IRS elements is usually much higher than the antenna gain of the mobile station, thanks to recent advances in meta-materials. Denote the relative reflection gain by ξ = ς_I/√(ς_B ς_k). Then, in the simulation scenario shown in Fig. 3, the direct-link path loss from the BS to (200 m, 0) is about −111 dB. If ξ = 10 dB, the path loss of the IRS-aided link is about −122 dB, and in this case the IRS may potentially double the average receive power at the user side using N = 10 elements.
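The two quoted path-loss numbers can be checked with a few lines. The coordinates (BS at the origin, IRS at (100 m, 50 m), user at (200 m, 0)) follow the scenario description, and ξ enters twice in the IRS-aided budget per (61); the geometry is an assumption consistent with Fig. 3.

```python
import numpy as np

# Direct link: -30 dB reference loss plus 10 * 3.5 * log10(d) distance loss.
d_direct = 200.0
pl_direct = -30.0 - 10.0 * 3.5 * np.log10(d_direct)            # about -111 dB

# IRS-aided link: two free-space legs plus the relative gain xi, counted twice.
bs, irs, user = np.array([0, 0]), np.array([100, 50]), np.array([200, 0])
d_G = np.linalg.norm(irs - bs)
d_r = np.linalg.norm(user - irs)
xi_dB = 10.0
pl_irs = 2 * (-30.0) + 2 * xi_dB - 20.0 * np.log10(d_G * d_r)  # about -122 dB

print(round(pl_direct, 1), round(pl_irs, 1))                   # -110.5 -121.9
```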
For simplicity, we assume the Rayleigh fading model to account for small-scale fading. The weights ω_k are set to be equal in all the simulations. All the simulation results are obtained by averaging over 10^4 channel realizations. Specifically, we first generate 100 snapshots, in which the locations of the mobile users are randomly chosen. Then, for each snapshot, we further generate 100 channel realizations with independent small-scale fading.
B. Benchmarks and Initialization
We compare the performance of the proposed algorithms with the following two baselines:
• Baseline 1 (Without the aid of IRS): Let N = 0; the active beamforming is then optimized via the WMMSE in [44]. In particular, this baseline can be obtained by skipping Steps 3.1 and 3.2 of Algorithm 1 [45].
• Baseline 2 (Random passive beamforming): The phase-shift matrix of the IRS is not optimized; the RCs are chosen as random values from the set F_2. Then, WMMSE is adopted to optimize the active beamforming at the BS.
Besides, the WSR maximization problem (P1) investigated in this paper is non-convex, and the proposed Algorithm 1 only finds a suboptimal solution. Therefore, the performance of Algorithm 1 is sensitive to the initialization of W and Θ. In this paper, for the three different assumptions on the RC values, we employ different initializations:
• θ ∈ F_1: In this case, θ in Θ is initialized with random values in F_2, and W is initialized by zero-forcing beamforming.
• θ ∈ F_2 or θ ∈ F_3: In this case, the constraint set of θ is non-convex, and the proposed algorithm is more likely to be trapped in a local optimum. Hence, we initialize W and Θ with the solutions of the θ ∈ F_1 (convex constraint set) case, in which both the active and passive beamforming have already found good directions.

C. Sum Rate versus Transmit Power P_T

Fig. 4(a) illustrates the average sum rate of different schemes with respect to the transmit power P_T, when N = 10 and ξ = 10 dB. The average sum rate over different channel realizations is denoted by R. We set L_I = 100 m, and thus the IRS is deployed at (100 m, 50 m). It is seen that the performance gain of the random passive beamforming scheme (Baseline 2) is very small, since most reflected signals cannot reach the receivers of the mobile users. On the other hand, significant performance gains are achieved by joint active and passive beamforming optimization, and all three proposed algorithms have almost the same performance. In particular, the joint beamforming schemes achieve about a 3 dB gain compared with Baseline 1, as expected. The performance loss is negligible when the ideal RC constraint is reduced to θ ∈ F_2. This reveals that, although we relax the constraint on θ from |θ| = 1 to |θ| ≤ 1, the amplitude of the optimal θ is still very close to 1. In addition, we also observe that the "1-bit" phase shifter still achieves about a 1.5 dB gain, and the "2-bit" phase shifter obtains almost the full beamforming gain compared with the continuous phase shifter case.
Next, in Fig. 4(b), we fix the transmit power P_T = 0 dBm and plot the cumulative distribution function (CDF) of the sum rate over different snapshots. It is seen that the performance gains of all the proposed schemes under different RC assumptions are stable over the CDF curves, and also keep consistent with their counterparts in Fig. 4(a). Therefore, we conclude that, with high probability, the performance of the proposed algorithms will be good irrespective of user location.

D. IRS Size and Material

Fig. 5(a) compares the average sum rate with the size N of the IRS, while the transmit power of the BS is fixed to 0 dBm and the relative reflection gain of the IRS is ξ = 10 dB. It is observed that the performance of all the schemes with the aid of the IRS increases with N, since the sum power of the signals reflected by the IRS becomes stronger. However, the quantization loss of the discrete phase shifter also increases as N increases. Hence, we prefer high-order quantization for cases with large N. In addition, we observe that, to achieve R = 20 bps/Hz, the size should be increased from 10 to 40 (for the continuous-phase-shifter cases), i.e., 6 dB. On the other hand, R = 20 bps/Hz can also be achieved by increasing P_T from 0 dBm to 5 dBm (only 5 dB), as shown in Fig. 4(a). This is because, when N increases, only the IRS-assisted link is enhanced, while when the transmit power of the BS increases, both the direct link and the IRS-assisted link benefit.
Then, we investigate the impact of the relative reflection gain ξ on the average sum rate. It is known from [37] that ξ is dominated by the resistance of the integrated circuits in the IRS element, and recent research has shown that ξ can be greatly improved by exploiting negative resistance materials [46] (such materials generally comprise active components, so in this case the IRS becomes semi-passive). Fig. 5(b) illustrates the average sum rate of different schemes with respect to ξ, when P_T = −5 dBm and N = 10. One can observe that the IRS-aided system with a continuous phase shifter may achieve about R = 20 bps/Hz by increasing ξ from 10 dB to 15 dB. Comparing with Fig. 4(a) and Fig. 5(a), we conclude that increasing ξ is much more effective than increasing P_T or N for improving R. This is because the reflection gain of the IRS is counted twice during the reflecting operation, according to (61). Moreover, when ξ is large, even the random passive beamforming scheme (Baseline 2) achieves a significant rate gain. Therefore, it is very attractive to investigate how to improve ξ for the IRS elements with new reflection materials.
E. Deployment Location
Finally, we discuss the impact of the IRS deployment location. We move the IRS from L_I = 50 m to L_I = 200 m, and plot the average sum rate of different schemes with respect to L_I in Fig. 6, while setting P_T = 0 dBm, N = 10, and ξ = 10 dB. It can be seen that the performance gain of the IRS-aided system increases when the IRS is deployed closer to the BS or to the cluster of users, and deploying the IRS at the center (L_I = 100 m) is the worst case. This conclusion can also be inferred from the double-fading path-loss model in (61). However, when the IRS is deployed too close to the BS or the users, the propagation condition may become as bad as that of the direct link. Thus there exists a trade-off between the propagation condition and the double-fading effect. In practice, for the convenience of control signaling between the BS and the IRS, the IRS is preferably deployed close to the BS, while guaranteeing high-quality BS-IRS-user links at the same time.
VI. CONCLUSION

In this paper, we investigate the IRS-aided multiuser downlink MISO system. Specifically, a joint active and passive beamforming problem is formulated to maximize the WSR under the BS transmit power constraint. To tackle this non-convex problem, an iterative method has been developed by utilizing the recently proposed fractional programming technique. In addition, three low-complexity algorithms are proposed to solve the passive beamforming problem with closed-form solutions. All three algorithms are applicable not only to the continuous phase-shift IRS but also to the discrete phase-shift IRS. Extensive simulation results demonstrate that the proposed joint beamforming scheme achieves significant capacity gains compared with the conventional system without the IRS and the IRS-aided system employing random passive beamforming. Moreover, it is also shown that the IRS with a 2-bit quantizer may achieve sufficient capacity gain with only a small performance degradation.
APPENDIX A
PROOF OF LEMMA 3

From (58), we have equation (62). Substituting (62) into (53), we have (63). Therefore, λ̄ can be replaced by a function of q, as in (64). By substituting (64) into G(q, θ, λ_R, λ_I), we obtain a new function V(q, θ). It is easy to verify that, when (µ/2) I_N − U ≻ 0 in (59) is satisfied, the ADMM iteration from (51) to (53) is equivalent to a coordinate ascent iteration on V(q, θ). Hence, the ADMM algorithm is guaranteed to converge.
Miscue Analysis: A Glimpse into the Reading Process
This paper aims to analyse Form One students' ability in reading prose. A qualitative research method was carried out involving 6 average-ability students. The prose "Fair's Fair" by Narinder Dhami was used as an instrument to gauge students' ability in oral reading. The assessment carried out on the reading is miscue analysis, a tool to measure oral reading accuracy at the word level by identifying when, and the ways in which, a student deviates from the text while reading aloud. The miscues analysed are insertion, hesitation, omission, repetition and substitution. Miscues that maintain the meaning of the sentences are the participants' strengths, while miscues which disrupt the meaning of the sentences are the participants' weaknesses. The data collected are analysed using descriptive statistics. The findings show that the percentage of strengths outweighed the percentage of weaknesses for all the participants on the occurrences of miscues. The students' reading behaviour has provided insights into their language cueing system and the strategies they use during the reading process to comprehend a text.
INTRODUCTION
Reading is a thinking process that involves recognizing words, and it allows students to use their prior knowledge to make meaning of a text. In this process, miscue occurrences explain the strategies students use to overcome difficulties while reading. A miscue is an unexpected response that occurs when the reader's knowledge of language and concepts of the world may not match up with the text (Goodman, 1996). Miscues are defined as instances in oral reading when a reader reads a text in a way that the person listening would not expect. By analysing student miscues, teachers have a glimpse into the reading process (Almazroui, 2007; Goodman & Marek, 1996; Kabuto, 2009; Moore & Brantingham, 2003; Moore & Gilles, 2005; Wilson, 2005). Miscue analysis is an effective technique for examining and evaluating the development of control in the reading process of students. It is an analytical procedure for assessing students' meaning construction from print, and it demonstrates the knowledge that a student brings to the text (Goodman, 1996). Additionally, it helps students become aware that they are better readers than they think they are. Goodman (1996) believes that readers who revalue themselves become more confident and are willing to take risks.
Among the miscues, the substitution miscue provides information on three cueing systems: grapho-phonic, syntactic, and semantic (Rhodes, 1993). The three-cueing-system model is adapted from Turbill (2002). Figure 1 shows the interaction among the three cueing systems.

Figure 1. Intersection of the three cueing systems in reading (adapted from Turbill, 2002).
a) Grapho-phonic cueing system: This cueing system is derived from the relationship between the written forms of letters and letter combinations and their sounds. If the substituted word is acceptable in the aspects of phonics and graphics, it is considered a good miscue. Otherwise, it is a bad miscue.
b) Syntactic cueing system: This cueing system is based on grammar. If the words substituted maintain the meaning in the sentence structures and in the paragraph as a whole, the miscue is acceptable.
c) Semantic cueing system: This cueing system is based on the meaning of words, phrases and sentences. If the substituted word maintains the meaning in the aspects of words, phrases and sentences, it is acceptable and considered a good miscue.
LITERATURE REVIEW
Various definitions of the reading process are provided by scholars of reading. From the traditional view, reading is defined as decoding words and symbols from print to construct meaning (Gough, 1972). This notion is characterized as data-driven or text-driven because the focus is on the surface of the text (Shapiro & Riley, 1989). Two aspects are focused on: i) pronunciation, and ii) identification of words and their meaning.
Similarly, LaBerge and Samuels (1974) state that reading is a process of mastering small units of printed data before integrating them into larger units. These definitions emphasize the data on the page rather than the meaning of the text. Smith (1982) contends that readers bring concepts to written material in order to understand it. This means that readers utilize their prior knowledge to comprehend sentence structures or words. This view has been labelled concept-driven. In this perspective, Smith (1982) proposed the idea that reading is not passive but purposeful and rational, dependent on the prior knowledge and expectations of the reader.
Additionally, Rosenblatt (2004) proposed that the act of reading is a transaction between the reader and the text that occurs within a certain context. It appears that meaning does not reside in the text but in the reader's interaction with it. Each reader may transact with a text differently based on his or her prior knowledge. Goodman (1994) defines reading as a socio-psycholinguistic process, and in this view he highlights the idea of context. The context that Goodman (1994) refers to comprises the cues from three linguistic systems (grapho-phonics, syntax, and semantics) used to make meaning; hence his focus is on readers' background knowledge involving these cues.
Students' Ability in Reading
Students' ability in reading can be gauged through an analysis of the reading process. Miscue patterns which lead to meaning loss in sentences indicate students' weaknesses; otherwise, they are considered students' strengths. The numbers of miscues indicating students' weaknesses and strengths are counted and expressed as percentages. A high percentage of strengths indicates that students are proficient readers, and a low percentage of strengths reveals that students are less proficient readers. By identifying strengths and weaknesses, teachers can help students revalue reading and gain confidence in their ability to read (Moore & Brantingham, 2003). In addition, this could measure the effectiveness of an intervention and guide staff development (Davenport, 2002).
Research Design
A qualitative approach was used to collect data. The analysis of miscues provided information on the elements that help or hinder students' text comprehension and the reading strategies they use to comprehend texts. The In-Depth Procedure by Goodman et al. (2005) was used to analyse the miscues, as it is able to identify how participants make use of miscues to construct meaning during the reading process.
Research Question
The research question addressed in the study is: What are students' strengths and weaknesses in reading and understanding prose?
Participants
Six Form One students were selected in this study based on purposive sampling procedures. They scored grade 'B' in their mid-year examination and comprise both genders from different social backgrounds. The participants are all Malay.
Research Instrument
The reading text used as the research instrument is chapter three (consisting of 56 lines and 7 pages: pp. 26-32) of the short story "Fair's Fair" by Narinder Dhami, prescribed for Form One secondary school students. This short story is taught in schools. It serves as an instrument to analyse the miscue systems, which provides insights into how students integrate the language cueing systems during the reading process to construct meaning.
Data Collection Procedures
Data collected were analysed based on the In-Depth Procedure as outlined by Goodman et al. (2005). This procedure allowed the exploration of the miscues in relation to other miscues produced by the readers within the sentence or the entire story (Goodman et al., 2005). Firstly, the passage was type-written and two copies were made. One copy was for the participant and the other was for the researchers, to be used as a code sheet. Each line in the code sheet was numbered so that miscues were identified exactly where they occurred.
Next, participants' readings were audio-recorded without any aid from the researchers. Before the recording session began, light conversation was made to reduce the anxiety of the participant and to put him or her at ease. The participant was informed that he or she would not be graded on the reading. As students read, the researchers coded for miscue categories in every line of the passage, if there were any. The coding system, adapted from Argyle (1989) for miscue patterns such as substitution, insertion, omission, self-correction, repetition and hesitation, is shown in Table 1.
The details of the types of miscue patterns are as follows (Goodman et al., 2005):
a) Substitution miscue: a substitution miscue happens when a reader substitutes incorrect words or phrases for the correct text.
b) Insertion miscue: an insertion miscue is when the reader reads words that are not in the text.
c) Omission miscue: an omission miscue is when a reader does not read words that are in the text.
d) Correction miscue: correcting and replacing words to their original form in the text is known as a correction miscue.
e) Repetition miscue: readers reread the words or phrases in the text.
f) Hesitation miscue: while reading, some readers pause before words in the text.
Among the miscues, the substitution miscue provides more information about the reader compared with the other miscues (Davenport, 2002; Goodman et al., 2005). A substitution miscue is assessed on three aspects (Goodman et al., 2005):
a) Do the substituted words look like the text words? (grapho-phonically acceptable)
b) Do the substituted words fit grammatically into the sentence? (syntactically acceptable)
c) Do the substituted words make sense within the whole passage? (semantically acceptable)
The grapho-phonic cueing system is also known as the phonic cueing system or the phonological cueing system. The prefix 'grapho' means writing. The word 'phonic' relates to sound. Grapho-phonic analysis refers to letter-sound relationships within a word. The sounds often hint towards a certain meaning as readers read a text (Goodman et al., 2005).
Semantic cues are associated with the overall meaning of the text, covering both the words and the sentences in a text. According to Goodman et al. (2005), systematic syntactic relations include word order, tense, number and gender, whereas the semantic cueing system is based upon meaning within context. Semantic understanding is determined by the reader's vocabulary or lexicon (Hynds, 1990, as cited in Goodman, 1996). The advantage of using the In-Depth Procedure in miscue analysis is that it allows researchers to identify how participants make use of the three cueing systems when substitution miscue patterns occur during the reading process (Goodman et al., 2005).
The miscues that did not change the meaning semantically, syntactically or in the grapho-phonic cueing system indicate the participants' strengths, while the miscues which were unacceptable and changed the meaning in the language system were coded as the participants' weaknesses. Descriptive statistical analysis was used to analyse the coded miscue patterns from the coding sheet in the form of frequency counts and percentages.
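As a toy illustration of this tallying step, the sketch below turns a list of coded miscues into frequency counts and strength/weakness percentages; the record format, field names, and example data are invented for illustration.

```python
from collections import Counter

# Each coded miscue: (type, preserves_meaning) -- format invented for illustration.
coded = [("substitution", True), ("omission", False), ("repetition", True),
         ("substitution", False), ("insertion", True), ("hesitation", True)]

by_type = Counter(m_type for m_type, _ in coded)        # frequency counts
strengths = sum(1 for _, ok in coded if ok)             # meaning preserved
weaknesses = len(coded) - strengths                     # meaning disrupted

print("Frequency by miscue type:", dict(by_type))
print(f"Strengths:  {100.0 * strengths / len(coded):.1f}%")
print(f"Weaknesses: {100.0 * weaknesses / len(coded):.1f}%")
```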
Technique of Data Analysis
The analysis provides information on: i) occurrences of participants' miscue patterns, ii) miscues on an individual basis, iii) miscues in percentages, and iv) the percentage of strengths and weaknesses on the miscues made by each participant. This information helps identify proficient readers among the participants in reading and understanding prose. Proficient readers, in this context, are those whose miscues maintain the meaning and grammar of the print, as defined by Goodman (1973).
Substitution Miscues
For Participant 1, in line 8, "And my dad won't give us any more", the contraction 'won't' was substituted with another contraction, 'don't', the same part of speech, a negative form. In this example, the substitution of 'don't' for 'won't' shows that the participant used the middle letter 'o' as well as the ending blend, possibly recognizing the 'on't' pattern in the word 'don't'. In the graphic cueing system, the word substituted showed some graphic similarity because both words look alike in the ending pattern. Phonically, the word as pronounced corresponds somewhat to the written text, especially at the ending, so there is some phonic similarity. This partially reflects the participant's strength.
Grammatically, the word substituted changes the meaning of the phrase syntactically. The word 'won't' in the sentence means the dad had been giving money but, from a certain point, stopped giving it. The substituted word 'don't' for 'won't' means he has not been giving money at all. This shows the participant's weakness. Semantically, the word substituted does not change the meaning within the whole passage. This reflects the participant's strength.
In line 12, 'Something fell out of her bag', the preposition 'of' was substituted with the preposition 'from', the same part of speech. The participant had made use of the consonant letter 'f' (final in 'of', initial in 'from'), possibly to recognize the word 'from' for 'of'. The pronunciation of the word substituted is unacceptable in the phonic cueing system because there is no phonic similarity and, graphically, there is no similarity between the word substituted and the word in print. Therefore, it exhibits the participant's weakness. Grammatically, the word substituted fits correctly into the sentence and it makes sense within the whole passage. The meaning is not changed either syntactically or semantically. Hence, the miscue is coded as the participant's strength.
In line 18, "It must be that woman's purse," said Sam.The pronoun 'it' is used to describe a thing in the sentence but was substituted with a personal pronoun 'I'.In this example, there is a high possibility that the participant made use of the vowel letter 'i' in the beginning blend and omits the consonant letter 't'.In the graphic cueing system, there is some similarity but no phonic similarity.Therefore, the strategy used is the participant's weakness.In the aspect of grammar, the substituted word does not fit in the sentence nor does it make sense within the whole passage.There is change in the meaning and the sentence is ungrammatical.The miscue is coded as a weakness.
In line 29, "Wow!There's a lot of money in her".In this example, the article 'a' was substituted with the verb-to-be 'are'.Most probably the participant made use of the vowel letter 'a' at the beginning blend to come up with the word' are', a verb-to-be.There is some similarity graphically and no phonic similarity.The pronunciation does not match with the written word.Hence, it is the participant's weakness.Grammatically, the word substituted does not fit in the phrase as there will be two verbs-to-be, 'is' and 'are' in the sentence.As such, syntactically and semantically the substituted word is unaccepted.The miscue shows the participant's weakness.
In line 32, "You could keep it," said Raj, Participant 1 substituted the modal verb 'could' with another modal verb, 'should', the same part of speech. The participant used the middle letters 'ou' and the ending blend 'ld' to recognize the word 'should'. As such, there is high graphic similarity and some similarity in the phonic cueing system; this is coded as the participant's strength. The substituted word is acceptable and fits in the sentence grammatically, although the degree of certainty between the two modal verbs differs. The word changes the meaning neither in the sentence nor within the whole passage. The miscue is coded as a strength.
Participant 3 made two substitution miscues. In line 14, "She walked on up the street", the preposition 'up' [ʌp] was substituted with 'ap' [æp], which is not a word. There is some phonic similarity and some graphic similarity; probably, the participant used the ending consonant letter 'p' in the word 'up' to recognize the non-word 'ap' [æp]. This reflects the participant's weakness. The substitution of a non-word makes no sense either in the sentence or within the whole passage and is unacceptable syntactically. Therefore, the miscue is coded as a weakness.
In line 48, "…gave me five pounds! Now I can", the word 'gave' was substituted with the word 'give'. The participant made use of the beginning consonant letter 'g' and the ending pattern 've' to recognize the word 'give'. There is high graphic and phonic similarity between the substituted word and the word in print, which exhibits the participant's strength. The substituted word does not change the meaning of the phrase and is accepted syntactically; this is the participant's strength. However, 'give' is the present tense form of the verb and does not fit grammatically in the phrase, because the story is in the past tense, and the participant made no attempt to correct the miscue. Semantically, the miscue is coded as a weakness.
Participant 4 made four substitution miscues. In line 29, "Wow! There's a lot of money in here", the substitution of 'there' for 'here' shows that the participant used the word 'here' to recognize the word 'there'. There is high graphic and phonic similarity in the ending pattern, so the miscue is coded as the participant's strength. In this sentence, the word 'there' fits grammatically and within the whole passage; it makes sense because the meaning is not distorted. As such, it is coded as the participant's strength.
In line 37, "He saw the woman standing at the bus stop", the preposition 'at' was substituted with the preposition 'of'. The words are of the same part of speech, but there is no graphic or phonic similarity between them, so this is coded as the participant's weakness. The substitution changes the meaning of the sentence and is unacceptable both syntactically and semantically; the miscue is coded as a weakness.
In line 54, "And we can eat lots of candy floss", the letter 's' was added to the word 'eat'. In the grapho-phonic cueing system, there is high graphic and phonic similarity, which is coded as the participant's strength. The addition of the phoneme /s/ does not distort the meaning of the sentence. However, it is semantically incorrect because a verb that follows a modal verb must be in the base form, and the participant made no attempt to correct the miscue. Therefore, the miscue exhibits a weakness.
Participant 5 made three substitution miscues. In line 20, "That's what fell out of her bag", the word 'fell' was substituted with the present tense form 'fall'. In this example, the participant made use of the consonant letter 'f' at the beginning as well as the 'll' ending blend, possibly recognizing the 'll' pattern in the word 'fall'. In the grapho-phonic cueing system, there is high graphic similarity and some phonic similarity, and the substituted word does not change the meaning of the sentence, so the miscue is coded as a strength. However, the substituted word is in the present tense, which is unacceptable within the whole passage; semantically, the miscue is coded as the participant's weakness.
Participant 6 made only one substitution miscue. In line 26, "What shall we do?" asked Raj, the personal pronoun 'we' was substituted with 'I', which is also a personal pronoun. In the grapho-phonic cueing system, there is no graphic or phonic similarity. At the sentence level, the meaning is not distorted and the sentence is grammatical; within the whole passage, however, it is unacceptable because there are three characters involved, not one. So, the miscue is the participant's weakness.
A similar substitution miscue occurred among Participants 1, 4 and 5. In line 46, "Lee ran back to Raj and Sam", the past tense verb 'ran' was substituted with the present tense form 'run'. Probably, these participants made use of the consonant letter 'r' at the beginning and 'n' at the ending to recognize the word 'run'. In the grapho-phonic cueing system, there is high graphic similarity and some phonic similarity, a strength for the participants. At the sentence level, the substituted word did not distort the meaning and is accepted syntactically, also a strength. Within the whole passage, however, the substituted word is unacceptable because the passage is in the past tense; semantically, it is ungrammatical and is coded as the participants' weakness.
Insertion Miscues
Insertion miscues occurred among Participants 1, 3 and 5. An insertion miscue is made when a word is added between two words. For Participant 1, in line 12, "Something fell out of her ^ bag", the preposition 'of' inserted in the sentence did not distort the meaning, but semantically it is unacceptable because the doubled preposition is ungrammatical. The strategy used showed the participant's weakness.
For Participant 3, in line 50, "We can go on ^ the ghost train", the preposition 'in' was added between the words. The addition of 'in' makes the sentence ungrammatical, as it is redundant, and is coded as the participant's weakness. For Participant 5, in line 39, "You dropped ^ your purse," said Lee, the word 'out' was added between the words; this is unnecessary and forms an ungrammatical sentence. In sum, the insertion strategy used by the participants affected the meaning of all the sentences, and this miscue is coded as the participants' weakness.
Omission Miscues
For Participant 1, in line 6, "My mum hasn't got any jobs for us", the verb 'got' was omitted, leading to an incorrect sentence. For Participant 3, in line 33, "Then you could have a really good…", the word 'really' was omitted from the phrase. Neither omission affects the meaning syntactically. Words tend to be omitted when reading is done too quickly (Goodman, 1973).
However, Goodman (1973), as cited in Wixson (1979), pointed out that as readers become more proficient, they tend to omit known words that are unnecessary for understanding rather than unknown words. In this study, as observed by the researchers, the omission miscues occurred because of the participants' fast reading, which supports Goodman's (1973) findings on omission miscues. Nevertheless, the strategy used by the participants did not change the meaning of the sentences; hence, it reflects the participants' strengths.
Repetition Miscues
Repetition was common among Participants 2, 3, 4, 5 and 6. The words repeated by Participant 2 were 'purse' in line 17 and 'Thank you' in line 41. Participant 3 repeated the word 'but' in line 13 and 'Raj' in line 16; the phrase 'asked Raj' in line 26, "You could keep it" in line 32 and the word 'dropped' in line 39 were also repeated.
Participant 4 repeated the word 'asked' in line 4 and the phrase 'a good boy' in line 42. Participant 5 repeated the phrase 'hasn't got' in line 6. Participant 6 repeated the word 'money' in line 1 and the phrase 'on up' in line 14. According to Wixson (1979), repetition takes place when the reader is confirming the meaning of a word or struggling with it. Readers repeat words when they are uncertain of them and want to make sense of the passage. As such, these miscues are coded as the participants' strength.
Hesitation Miscues
Hesitation miscues were also common, occurring among Participants 1, 3, 4, 5 and 6, who paused between words. Participant 1 paused in line 22, "They all looked at the / purse". Participant 3 paused in line 6, "My mum / hasn't got any jobs for us", and twice in line 12, "Something fell out / of / her bag". Participant 4 paused twice in line 12, "Raj, Sam and Lee went / to / look", and once in line 50, "We can go / on the ghost train". Participant 5 paused in line 22, "They all / looked at the purse". Finally, Participant 6 paused once in line 2, "He had to find / some more", and once in line 28, "Lee picked up the / purse". Hesitation is believed to occur when readers are trying to decode the word that follows the pause (Huszti, 2008). This strategy exhibits the participants' strengths.
Inversion Miscues
An inversion miscue may indicate fluent reading, where the reader adapts what is written into a form close to familiar speech (Huszti, 2008). Only one inversion miscue occurred among the Form One participants: Participant 1 made the miscue in line 40, "Here it is", where the words 'it is' were reversed to 'is it'. The inversion did not drastically change the meaning; hence, it is regarded as the participant's strength.
Analysis of Participants' Miscue Patterns
Participants' miscues were analysed. Table 2 shows the number of substitution miscues accepted and not accepted in the three aspects (the grapho-phonic, syntactic and semantic cueing systems) and their total, the number of other miscues produced by each participant, and the overall total of miscues.
Substitution miscues were analysed within three aspects: the grapho-phonic, syntactic and semantic cueing systems. In the grapho-phonic cueing system, the graphic character of a word describes how much the miscue looks like what was expected in print, whereas the phonic character of a word denotes the sound made by combining the letters (Goodman, 1973).
By attending to the graphic and phonic features of a word, the degree to which participants used the grapho-phonic system is indicated by interpreting whether there is high graphic similarity, some graphic similarity or no graphic similarity. The phonic cueing data are interpreted in the same way: high phonic similarity, some phonic similarity or no phonic similarity. If there is some graphic and phonic similarity in the substituted words, the grapho-phonic cueing system is accepted and reflected as the participants' strength.
Grammatical function is addressed through the substitution miscues. Miscues that were acceptable with no meaning change syntactically (words that fit into the sentence) and semantically (words that fit into the whole passage) are reflected as the participants' strength. Miscues that changed the meaning syntactically and semantically indicate a loss of meaning in construction (Ebersole, 2005) and are reflected as the participants' weakness.
Substitution miscue analysis revealed 17 substitution miscue occurrences in the participants' oral reading: 12 acceptances of the grapho-phonic cueing system, 12 acceptances of syntactic cueing, and 4 acceptances of semantic cueing. Table 2 uses the following notation (Goodman, 1996): GA, acceptance of the grapho-phonic cueing system; SA, acceptance of syntactic cueing (substituted words fit grammatically into the sentence); SA, acceptance of semantic cueing (substituted words make sense within the whole passage); (/), accepted; (x), not accepted. Omission miscues showed two occurrences, one each from Participants 1 and 3. The data indicated eight occurrences of hesitation miscues: Participants 1, 3 and 5 each produced one, Participant 4 produced three, and Participant 6 produced two. As for insertion miscues, the data showed three occurrences: Participants 1, 3 and 5 each made one.
The total number of repetition miscue occurrences is 13. All the participants except Participant 1 produced this miscue: Participant 2 produced two, Participant 3 produced six, Participant 4 produced two, Participant 5 produced one and Participant 6 produced two. Finally, there was one inversion miscue, for which Participant 1 was responsible.
Percentage of Miscues
The percentage for each miscue is calculated by dividing the total number of that miscue by the overall total number of miscues and multiplying by 100 (percentage of miscue pattern = total of each miscue pattern ÷ overall total miscues × 100%).
Additionally, the percentage for each of the three aspects (the grapho-phonic, syntactic and semantic cueing systems) explaining the substitution miscues was calculated. The total number of substitution miscues is 17. Of these, 71% (n=12) were acceptable in the grapho-phonic cueing system, 71% (n=12) were acceptable syntactically, and 24% (n=4) were acceptable semantically. Similarly, the percentages of unaccepted substitution miscues in the three aspects were calculated: 29% (n=5) were unaccepted grapho-phonically, 29% (n=5) syntactically, and 76% (n=13) semantically.
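As a worked check of the figures above, the short sketch below recomputes the reported percentages from the counts given in the text; the Python representation (the dictionary of counts and all variable names) is illustrative and not taken from the study itself.

```python
# Miscue counts taken from the analysis above; the code is a hypothetical
# sketch, not part of the original study.
miscue_counts = {
    "substitution": 17,
    "omission": 2,
    "hesitation": 8,
    "insertion": 3,
    "repetition": 13,
    "inversion": 1,
}

overall_total = sum(miscue_counts.values())

# Percentage of each miscue pattern = total of that pattern / overall total x 100.
for pattern, count in miscue_counts.items():
    print(f"{pattern}: {count / overall_total * 100:.0f}% (n={count})")

# Acceptance rates within the 17 substitution miscues, per cueing system.
substitutions = miscue_counts["substitution"]
for system, accepted in [("grapho-phonic", 12), ("syntactic", 12), ("semantic", 4)]:
    pct = accepted / substitutions * 100
    print(f"{system}: accepted {pct:.0f}% (n={accepted}), "
          f"not accepted {100 - pct:.0f}% (n={substitutions - accepted})")
```

Running this reproduces the percentages reported above (71%, 71% and 24% accepted; 29%, 29% and 76% not accepted), confirming that the figures are rounded proportions of the 17 substitution miscues.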
DISCUSSION
The pattern of miscues in oral reading can suggest the participants' strengths as well as their weaknesses (Goodman, 1969). Miscues provide a glimpse of, and insight into, how errors are made, in order to understand what is really going on in the reader's mind when a text is read (Goodman, 1969). Individual strengths and weaknesses are calculated from the occurrences of the participants' miscues.
In the analysis of the participants' strengths and weaknesses across the occurrences of miscues, the percentage of strengths outweighed the percentage of weaknesses for all participants. The results suggest that Participant 2 is a proficient reader, because this participant made only two repetition miscues, in which the meaning and grammar were maintained. Participant 6 ranks second, with the lowest percentage of weaknesses (2%) among the remaining participants. Participants 3, 4 and 5 are medium-ability readers, because the types of miscues they produced indicate a lower percentage of weaknesses. Participant 1 is less proficient than the others: this participant's percentage of weaknesses, 14%, is the highest among all the participants.
CONCLUSION
Miscue analysis helps students better understand the reading process and become more confident readers. Students are made aware of the many strategies and thinking processes that occur during reading; by increasing their awareness, they are able to monitor their own comprehension while reading and become proficient readers. Teachers are able to systematically examine students' reading behaviours, which indicate their reading strengths and weaknesses, in a focused and manageable way (Argyle, 1989). This helps them make decisions about upcoming reading instruction. Insights gained from miscue analysis can thus help both students and teachers achieve success.
Beneficial Effects of Resveratrol-Mediated Inhibition of the mTOR Pathway in Spinal Cord Injury
Spinal cord injury (SCI) causes a high rate of morbidity and disability. The clinical features of SCI are divided into acute, subacute, and chronic phases according to its pathophysiological events. The mammalian target of rapamycin (mTOR) signaling pathway plays an important role in cell death and inflammation in the acute phase and in neuroregeneration in the subacute/chronic phases. Resveratrol has the potential to regulate cell growth, proliferation, metabolism, and angiogenesis through the mTOR signaling pathway. Herein, we explicate the role of resveratrol in the repair of SCI through the inhibition of the mTOR signaling pathway. The inhibition of the mTOR pathway by resveratrol has the potential to serve as a neuronal restorative mechanism following SCI.
Introduction
Spinal cord injury (SCI) causes a high rate of morbidity and disability. Presently, more than 2.5 million people suffer from SCI, with about 12,000 new cases reported annually in the United States [1,2]. SCI is classified into primary and secondary phases. The primary phase of SCI begins when a sharp penetrating force lacerates or macerates the spinal cord; a blunt force contusing or compressing the spinal cord also falls under the primary phase. After primary injury, the injured area of the spinal cord increases gradually, and this is when secondary injury takes place. Secondary injury includes events such as vascular disorder, glutamate excitotoxicity, apoptosis, and inflammation. Trauma engenders mechanical damage to sensitive capillaries and causes bleeding, and ischemia has been correlated with hemorrhages [3]. Disruption of the blood-spinal cord barrier leads to inflammatory cell infiltration. Vascular disorder triggers microglia to produce proinflammatory cytokines, followed by progressive cellular necrosis and the release of ATP, DNA, and K+ [4]. SCI creates a cytotoxic postinjury environment and activates microglia to recruit phagocytes. Neutrophils, monocytes, and microglia are the inflammatory cells involved in SCI [5]. Following SCI, neutrophils tend to increase in the primary lesion, producing oxidative and proteolytic enzymes, while macrophages release proinflammatory cytokines, nitric oxide, and proteases. Furthermore, glutamate excitotoxicity, resulting from the release of glutamate by neurons and astrocytes, leads to further neuronal cell death [6]. Additionally, after SCI, groups of neurons die, as do microglia, oligodendrocytes, and astrocytes. In white matter tracts, oligodendrocyte death lasts several weeks after SCI [7] and contributes to postinjury demyelination. Secondary injury lasts for several weeks, which provides a therapeutic time window: treatment is effective insofar as it reduces the destruction of neural tissue by mitigating the processes above. SCI treatment mainly consists of surgical treatment during the early stage and drug interventions during the middle and late stages, all of which possess certain therapeutic effects. In our precursory study, curcumin was found to have the capability of serving as a future therapeutic for SCI [8]. It is worth noting that one of the chief targets in SCI treatment is to improve the microenvironment and promote regeneration at the injured site. Thus, in another study, we also deduced that, subsequent to SCI, treatment with an olfactory ensheathing cell- (OEC-) seeded poly(lactic-co-glycolic acid) (PLGA) complex could not only ameliorate the microenvironment but also promote cell differentiation [9]. In recent years, coupled with in-depth knowledge of the various signaling pathways, SCI treatment has been improved by intervening in signaling pathways such as nuclear factor-kappa B (NF-κB), mTOR, and mitogen-activated protein kinase (MAPK) [10][11][12]. The mTOR signaling pathway plays a very important role in the progression of cell death, inflammation, neuroregeneration, and regulation of glial scar following SCI.
Relationship between the mTOR Signaling Pathway and SCI
As a member of the phosphatidylinositol-3-kinase-related kinase superfamily, mTOR is a serine/threonine protein kinase. The mTOR complex 1 (mTORC1) and mTOR complex 2 (mTORC2) protein complexes together form the mTOR signaling pathway. mTORC1 phosphorylates downstream effectors such as p70 ribosomal S6 protein kinase (p70S6K) and further regulates mRNA translation; thus, mTORC1 is important in stimulating protein synthesis. mTORC2, on the other hand, has been demonstrated to phosphorylate members of the AGC kinase family, including protein kinase B (Akt), which is linked to several pathological conditions; mTORC2 is regarded as a regulator of the actin cytoskeleton [13]. Phosphatidylinositol-3-kinase/protein kinase B (PI3K/Akt) is one of the major pathways that activate mTOR. mTOR plays an important role in several physiological functions in the CNS, including the regulation of neuronal cell growth and survival and the development of axons and dendrites [14,15]. Additionally, a number of pathophysiological conditions, such as neurodegenerative and cardiovascular diseases and renal cancer, have been correlated with the regulation of the mTOR pathway [16][17][18][19]. The reported function of mTOR in SCI hinges on the time phase following injury [20].
Relationship between the mTOR Signaling Pathway and Acute/Subacute Stages of SCI
Regarding the acute phase in the wake of SCI, the mTOR signaling pathway participates in the regulation of cell death, activation of macrophages/microglia, and inflammation [20].
mTOR Signaling Pathway Participates in Cell Death.
Neuronal death includes neuronal apoptosis, autophagy, and necrosis. As an mTOR inhibitor, rapamycin can block the activation of the Akt/mTOR pathway, thereby preventing apoptosis of nerve cells [21]. Akt can elevate cyclin D1 by inactivating glycogen synthase kinase-3β (GSK-3β) and reducing the kinase inhibitor protein p27; the inhibition of Akt phosphorylation leads to G1 arrest, which in turn induces apoptosis [22]. Also, the inhibition of the PI3K/Akt/mTOR signaling pathway can reduce apoptosis-related proteins through the mitochondrial pathway after SCI [23]. mTORC1 inhibition can hamper protein translation, decreasing the Bcl-2 family and instigating apoptosis [24]. Moreover, the inhibition of the PI3K/Akt/mTOR pathway can increase the expression of Beclin-1 [25]; one study found that augmented Beclin-1 expression increased Bcl-2 and decreased Bax expression, eventually culminating in abated levels of apoptosis [26]. Autophagy precedes apoptosis and has a significant effect in regulating cell death [27]. Autophagy, a significant moderator, participates in pathological changes after SCI [28] and is important to the repair of secondary injury; the disruption of autophagy after SCI aggravates endoplasmic reticulum (ER) stress and causes cell death [29]. Inhibition of the mTOR signaling pathway can trigger autophagy [30]; rapamycin induces autophagy by inhibiting the mTOR signaling pathway. Studies have shown that rapamycin curtails the phosphorylation of the p70S6K protein while augmenting the expression levels of LC3 and Beclin-1 [31]. Autophagy, as an intracellular catabolic mechanism, increases the cell survival rate by degrading and recycling damaged organelles and dysfunctional proteins to provide ATP and amino acids [32,33]. However, autophagy leads to cell death in certain pathological situations. Activating autophagy destroys injured cells and prevents neuronal loss; studies found that the activation of autophagy through the autophagosomal-lysosomal pathway induces neuroprotective effects [34]. Autophagy starts with the formation of autophagosomes, which fuse with lysosomes to allow lysosomal hydrolases to degrade their contents [35]. Autophagy flux is the overall process, including sequestration, autophagosome delivery, and degradation in lysosomes [36]. Again, when neurons are injured, astrocytes are activated. Studies have reported the occurrence of apoptosis in astrocytes, which culminates in cell death during the early period of injury and subsequently curtails the release of injurious factors [34]; this, ultimately, is adverse to neuronal regeneration. Additionally, there are reports suggesting a strong biochemical crosstalk mechanism between apoptosis and autophagy [37]. Both autophagy and apoptosis are potentially influenced by several of the same signaling pathways, such as p53, Ser/Thr kinases, and Bcl-2-homology-3-only proteins [38], and their cross-regulation is demonstrated by the suppression of each by the other. It is vital to point out that mitophagy (autophagy of mitochondria) is key to the inhibition of apoptosis by autophagy in several maladies [39]. Mitochondrial dysfunction triggers hypoxia or ATP depletion, which in turn causes cytochrome c release, the activation of caspase-9, and, eventually, apoptosis.
mTOR Signaling Pathway Regulates Inflammation
Trauma to the spinal cord induces a local inflammatory response through activated microglia, infiltrating neutrophils, and macrophages, and has been reported to upregulate the expression of associated proinflammatory cytokines [40]. mTOR regulates the maturation of antigen-presenting cells such as T and B cells [41]. In one study, the mTOR pathway was found to improve the survival of EOC2 microglia under oxygen-glucose deprivation while enhancing the expression of nitric oxide synthase 2 (NOS2) under hypoxia in the BV2 microglial cell line. This suggests that mTOR participates in microglial proinflammatory activation, activating mTOR/MEK1/ERK1/2/IKKβ/IκB-α/NF-κB and resulting in inflammation [42,43]. mTOR inhibition can improve anti-inflammation through the regulation of T cells [44]. Again, studies have shown that mTOR inhibition controls the activation of macrophages/microglia and curtails neuroinflammation [45], and it can also reduce the proinflammatory cytokines produced by macrophages [46]. Furthermore, mTOR inhibition can bring about anti-inflammation via autophagy [47]: autophagy can be instigated by the impediment of mTOR and can, in turn, degrade inflammasome components and remove the endogenous signals of inflammasome activation, thereby hindering inflammation. In addition, studies have shown that rapamycin and temsirolimus dramatically abridge the expression of inducible NO synthase (iNOS), cyclooxygenase 2 (COX2), and glial fibrillary acidic protein (GFAP) and reestablish nNOS levels. Other research has also demonstrated that mTOR inhibitors can moderate the neuroinflammation in SCI [48]. Thus, inhibition of mTOR can decrease the inflammation process after SCI.
Relationship between the mTOR Signaling Pathway and the Chronic Phase of SCI
During the chronic phase following SCI, mTOR participates in regulating neuroregeneration and glial scar formation.
mTOR Signaling Pathway Participates in Regulating Neuroregeneration
Injured neurons of the CNS undergo normal cell apoptosis rather than regeneration. The chief causes of central nerve regeneration failure are inhibitory factors in the myelin, the formation of glial scar, and the weak growth capability of mature neurons [49]. There is evidence that the mRNA and ribosomes of axons take part in synthesizing cytoskeletal proteins [50,51], and research has shown that local protein synthesis in the axon might participate in axonal regeneration [52]. During the chronic phase, the regeneration of damaged neural tissue is regulated by the mTOR signaling pathway. Immunohistochemistry results have shown the mTOR signaling pathway to be present in neurons of nociceptive-specific C-fibers at the level of the dorsal root ganglion and in spinal cord neurons of inner lamina II [53]. Also, mTOR inhibition is instrumental in axonal regeneration: in the wake of SCI, the inhibition of mTOR by rapamycin can promote axonal regeneration via the suppression of new protein synthesis and cell proliferation. Improvements in CNS myelination and oligodendrocyte differentiation have also been connected to mTOR [20]. S6K1 (ribosomal protein S6 kinase 1) is an important downstream protein of mTOR [54]; studies demonstrated that, after SCI, hindering S6K1 promoted regeneration of the corticospinal tract and increased axon counts at 8 weeks [55]. Additionally, phosphatase and tensin homolog on chromosome 10 (PTEN), a lipid phosphatase, converts PIP3 to PIP2, subsequently inhibiting the activation of PI3K's downstream effectors [56]. Deletion of PTEN activates the PI3K/mTOR pathway and regulates cell growth and proliferation by initiating cap-dependent protein translation [57,58]. In addition, PTEN deletion was found to be a contributory factor in corticospinal tract regeneration, giving the impression that the PTEN/Akt/mTOR pathway regulates axonal growth [59]. In another study, the knocking out of tuberous sclerosis complex 1 (TSC1) was demonstrated to reactivate mTOR, which in turn promoted axonal regeneration [60]. However, TSC deficiency caused insulin resistance, resulted in the unfolded protein response (UPR), and regulated endoplasmic reticulum (ER) stress [61]. Interestingly, the UPR initiates apoptotic pathways when cells are unable to adapt to a perturbed condition [62]. Beyond cytoskeletal protein synthesis, microtubule assembly is instrumental in axonal regeneration and has been confirmed to be key to the instigation of the growth cone. One study showed autophagy to be vital to both microtubule stability and axonal regeneration following CNS injury [63].
mTOR Signaling Pathway Participates in Regulating Glial Scar
Glial scar is another major barrier to regeneration; thus, overcoming this barrier would be significant for axonal regeneration [64]. Glial scar consists of reactive astrocytes and connective tissue, and the main component of its extracellular matrix is chondroitin sulfate proteoglycan. More specifically, astrocytes become hypertrophic and proliferative and form an astrocyte-rich border that produces glial scar after SCI [65]. Glial fibrillary acidic protein (GFAP), vimentin, and nestin cause glial hypertrophy. Thus, limiting astrocytic responses provides a potential therapeutic regimen for enhancing functional recovery after SCI. Moreover, mTOR can participate in astrogliosis by increasing cascaded downstream proteins and activating astrocytes [66]; regulating mTOR is therefore key to curtailing scar formation. Research has shown astrocytes to be upregulated by epidermal growth factor (EGF), which can phosphorylate Akt, causing the activation of mTOR as an important pathway of astrocyte physiology [67]. Several upstream regulators of mTOR are vital to astrocytes. Luan et al. demonstrated that the downregulation of PI3K/Akt/mTOR expression can inhibit the formation of glial scar, and PI3K/Akt/mTOR inhibition can attenuate its formation [68]. Also, PTEN can negatively regulate the PI3K/Akt/mTOR pathway, thus playing a great role in attenuating glial scar formation [69]. Pharmacological inhibition using the mTOR-selective drug rapamycin was found to decrease astrogliosis and reduce GFAP expression at the injured site [70]. Autogenous hypertrophy and reentry into the cell cycle engender reactive astrogliosis, so the regulation of the cell cycle is important in curtailing scar formation. Rapamycin can modulate the cell cycle to inhibit astrocyte proliferation; additionally, a study found that rapamycin restrained the proliferation of astrocytes by decreasing the number of cells in the S phase [71]. These findings have clinical implications for potential SCI therapeutic applications involving the inhibition of the mTOR signaling pathway.
Resveratrol Repairs SCI by Inhibiting the mTOR Signaling Pathway
Each herb consists of numerous chemical constituents from different categories, and different active ingredients show therapeutic functions in the treatment of a number of diseases. In recent years, traditional Chinese medicine (TCM) has been drawing attention in SCI treatment [72]. Resveratrol is a natural polyphenol antioxidant used in TCM; it is found in Polygonum cuspidatum, red grape skins, red wine, blueberries, and some nuts. Resveratrol has a number of biological activities and pharmacological actions, including anti-inflammation, antioxidation, inhibition of platelet aggregation, and improvement of microcirculation [73][74][75]. Following neuronal injury, resveratrol exhibits its neuroprotective effects by regulating autophagy and apoptosis [76]. In the last few years, resveratrol has been described as a potential therapeutic in SCI treatment, and its therapeutic effect has been confirmed through behavioral scores [77]. Resveratrol can inhibit mTOR through several mechanisms, such as PI3K and Akt [78,79] (Figure 1); both PI3K and Akt are upstream activators of mTOR. Furthermore, resveratrol, in high concentration, has been shown to inhibit mitochondrial function, decrease cellular ATP levels, and activate AMPK [80], and the mTOR pathway can be inhibited by the activation of AMPK, as depicted in Figure 1. In one study, the expressions of AMPK and mTOR were increased and decreased, respectively, following resveratrol treatment [81]. Interestingly, resveratrol has also been shown to be involved in apoptosis, autophagy, and inflammation, as well as in scar tissue improvement subsequent to SCI, through the mTOR pathway; thus, resveratrol ameliorates SCI. Additionally, in the wake of SCI, resveratrol treatment improved the Bcl-2/Bax ratio and decreased the expression level of caspase-3, and studies have also revealed the antiapoptotic effect of resveratrol [82]. The apoptotic effect of resveratrol was attributed to the inhibition of PI3K, Akt, and mTOR phosphorylation [83]. Zhao et al. indicated an augmentation in autophagy expression following resveratrol treatment in SCI [84], and Park et al. posited that resveratrol might instigate autophagy through the suppression of mTOR activity [85]. The detailed mechanism was through the inhibition of the mTOR-ULK1 pathway: mTOR inhibition relieves the inhibitory phosphorylation of unc-51-like autophagy activating kinase 1 (ULK1) and induces autophagy. As noted above, autophagy is involved in neuroprotection [86]. Again, resveratrol has been shown to inhibit the proliferation of pathological scar fibroblasts by decreasing the expression of mTOR and its downstream molecule p70S6K [87]. Resveratrol also has an effect on inflammation: treatment with resveratrol after SCI reduced the expression of inflammatory cytokines such as IL-1β, IL-10, and TNF-α [82]. Resveratrol has also been suggested to suppress NF-κB activity. The mTOR and NF-κB pathways are tied together in many respects; for instance, the activation of mTOR by Akt culminates in the activation of NF-κB, which is associated with the inhibitor of NF-κB kinase (IKK) and the mTORC1 complex's Raptor [80] (Figure 1). In sum, resveratrol inhibits NF-κB and inflammatory molecules through the mTOR pathway [88].
Conclusion
mTOR is involved in the regulation of several diseases, cellular functions, and trauma in the CNS, and the mTOR signaling pathway plays an important role at different times after injury. As such, a clear understanding of how the mTOR signaling pathway works in the process of neural protection is of great significance in SCI. TCM is an important supplementary treatment for SCI and may offer therapeutic and reparative benefits; it might replace the application of nonsteroidal anti-inflammatory drugs, neurotrophic factors, or even methylprednisolone. Resveratrol participates in apoptosis, the formation of pathological scar, and the proliferation of fibroblasts, as well as in anti-inflammation, via the inhibition of mTOR in repairing SCI. Resveratrol, through the mTOR pathway, has the potential to serve as an SCI therapeutic. However, the effects and specific molecular mechanisms in the different phases of SCI still need to be elucidated through further research.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Fish Logistic System Using Value and Cold Chain Approaches
The purpose of this study was to address the poverty prevalent in local fishers' households in the Biak Archipelago, Indonesia. The Value Chain and Cold Chain approaches were used to assess the Government's mandate under Regulation of the Minister of Marine Affairs and Fisheries Number 5 of 2014 on the National Fish Logistics System. This regulation is supported by Presidential Instruction Number 7 of 2016 on accelerating the development of the fishing industry, though implementation in the Biak Islands has been limited to socialization. A case study research approach and normative analysis were applied and tested against Porter's value chain theory. The results showed that the logistics system for the upstream and downstream flows of goods and information was not implemented. Furthermore, the Government has focused on appointing third parties to operationalize the Cold Chain, and the competence of human resources to organize and manage the fisheries sector remains low.
INTRODUCTION
The Value Chain and Cold Chain approaches are relevant here because many local fishers' households in the Biak Islands live in poverty. In this regard, Indonesia's Government decided to build a logistics system that integrates producers and consumers. The management of the marine resource potential of the Biak Archipelago is not optimal, because it is controlled by capital owners while residents only catch fish for their own consumption.
Geographical constraints, together with underdeveloped or mismanaged logistics systems that fail to integrate upstream and downstream flows, have increased the prices of some necessities. These commodities have great potential to improve people's welfare because of high market demand in several countries, yet their local management is poor. Therefore, the Government issued Regulation Number 5 of 2014 (Minister of Maritime Affairs and Fisheries of the Republic of Indonesia, 2014) concerning the national fish logistics system to solve the problem of costly upstream and downstream fish management.
The two main elements in a logistics system are the flow of goods and the flow of information. Furthermore, Setijadi stated that the logistics system consists of several main subsystems, including inventory, warehousing, transportation, and information. Together they form a logistics system that needs efficient and effective management for an economical flow of goods and information (Yolanda, 2019).
Porter's value chain approach determines the extent to which these economic activities are performed by various interrelated and coordinated business actors rather than by a single business actor (value chain operator). The value chain therefore involves relationships and coordination among business actors, including suppliers of raw materials, primary producers, processing industries, and distributors or traders, as shown by Porter (1985).
The low productivity of marine fisheries prompted the Government to take part in the logistics system's value chain by issuing Presidential Instruction Number 7 of 2016 (Government of the Republic of Indonesia, 2016) concerning the acceleration of fishing industry development to improve fishers' welfare. This is accomplished by improving processing, marketing, and employment, thereby increasing foreign exchange earnings from the marine fisheries sector.
The worldwide problem of poverty, including in Indonesia, has led many countries to schedule poverty reduction as one of the Sustainable Development Goals (SDGs), especially under the zero poverty goal. Poverty alleviation is a national agenda in every work plan and government policy in most development planning documents. National statistics showed a poverty rate of 9.66% in September 2018; however, in Papua Province in eastern Indonesia, the poverty rate was high at 27.74%. Therefore, it is necessary to analyze the logistics system to reduce poverty in traditional fishing households in the Biak Archipelago.
Several aspects of this poverty surfaced, including isolation (topographical conditions), vulnerability (difficulty in accessing basic services), and powerlessness (economic conditions and employment). Other aspects were physical weakness (low quality of human resources), material poverty (low investment inflows), and disaster. Papua Province has a high poverty rate, especially in the Biak archipelago, where accessibility is difficult due to a poor logistics system.
METHODS
This research used case study and normative approaches that combine qualitative methods through several phases. A qualitative approach is used to examine trends, predict the impact of events, and estimate potential problems in order to help consider alternative plans.
The qualitative approach is a research and forecasting process based on a methodology that investigates a social phenomenon and human problem. Focus Group Discussion (FGD) methods were applied within the qualitative approach to obtain more in-depth information on the relationships between variables. This method stimulates new ideas and concepts based on findings from the qualitative model. Furthermore, FGDs help interpret evaluation results better and examine the behavior and desires of the people being assessed. They were conducted in a non-strict, informal manner to obtain more comprehensive, in-depth, and open information.
Institutional Management
The field scan results showed that coastal management in the Biak Archipelago, Indonesia, is still weak. This is due to the low level of collaborative, participatory planning rooted in the Biak Customary Furnace, the community's seafaring ancestral tradition. Therefore, an institutional analysis was used to diagnose the coastal community institutional problems as understood by traditional fishers in the Biak Archipelago.
The field scan found that the main problem was the fishers' low purchasing power due to their inability to access information provided by the Regional Government. Moreover, stakeholders in coastal community development operate independently, without coordination between technical agencies. Therefore, coastal management development requires the support of all parties committed to improving the community's economy, as well as a partnership between the local government, the private sector, fishers, non-governmental organizations, and the higher education community.
The Value Chain Approach
Observations of the fishing community in the Indonesian Biak Archipelago showed that collector traders buy the fishers' catch at boat landing sites and sell it to retailers; they thus play a role in collecting fish and distributing them to advanced traders. Furthermore, field observations showed that female fishers (fishers' wives) collect the catch at the boat landing site and sell it to diluent collectors at the district level, who in turn sell to wholesalers at the Fish Market.
Diluent traders are actors in the trading system who deal directly with consumers. The key informants in this research were diluent traders, comprising non-local collectors who buy fish at the landing site. There are two forms of trading system: the first is conducted by women fishers, who collect catches to be sold to retailers, and the second is conducted by retailers, who sell the purchases from the first collectors to wholesalers in the Main Market.
The flow pattern of fresh fish in the Biak Islands involves fishers, collectors, and traders. The distribution chain in Figure 2 shows that fishers sell their catch directly at the boat landing site to local buyers, to collectors (fishers' wives), or to buyers in the local market; alternatively, they sell to retailers or collectors who then sell to local consumers. Moreover, the field observations show that the main players selling fresh fish in the Biak Islands are female fishers acting as local collectors: they sort the catch at the boat landing site and sell it to local markets, to retailers, or from house to house.
The Cold Chain Approach
The field scans showed that the sea fish landing points used by traditional fishers are centered on several agreed locations. The landing points are places where goods move from fishers, as producers, to intermediate district or village collectors. However, district collectors have not added value, because there are no supporting facilities, such as shelters and transportation equipment, for fishers to maintain the quality of the catch.
The Cold Chain analysis showed that cold handling was not treated as a main requirement by fishers in maintaining catch quality. This is due to limited resources, which reduce the catch quality and selling price, as theoretically presented by experts who previously wrote about the Cold Chain. For instance, Lestari (2019) examined the role of logistics services in the fisheries sector and found complexities between the distribution of production and consumption in the Indonesian marine and fisheries sector at the macro level. Furthermore, there are Illegal, Unreported, and Unregulated Fishing activities, coupled with small fishing vessels. The research also found limited facilities and infrastructure and an upstream-downstream production system that has not been integrated.
Lestari (2019) examined macro data on Indonesian capture fisheries and found that perishable fish commodities result in around 35% losses or wastage, of which about 10% is contributed by damage in the distribution process. Moreover, most fishers' skills still need improvement, because they cannot keep the fish in good condition, and the delivery facilities and infrastructure in several areas are also constrained.
Based on the Regulation of the Minister of Maritime Affairs and Fisheries (2014), Number 5 of 2014 concerning the National Fish Logistics System, Article 1 point 4 explains that the National Fish Logistics System (SLIN) is a supply chain management system for fish and fishery products, materials, and production equipment. SLIN also provides information on procurement, storage, and distribution. It is an integral part of policies to increase capacity, stabilize upstream-downstream fisheries production systems, and control price disparities to meet domestic consumption needs.
There are complexities in marine and fisheries issues, in the distribution and consumption of production, and in Illegal, Unreported, and Unregulated Fishing activities. Therefore, the Government issued the Minister of Maritime Affairs and Fisheries Regulation Number 5 of 2014 (Minister of Maritime Affairs and Fisheries of the Republic of Indonesia, 2014) concerning the National Fish Logistics System, with several objectives. The first objective is to increase the capacity and stability of the national fishery production and marketing system. The second is to strengthen and expand connectivity between upstream and downstream production and marketing efficiently. The third is to improve the efficiency of the management of the fish supply chain, production materials, tools, and information from upstream to downstream.
The Government has appointed logistics experts who are members of Supply Chain Indonesia (SCI) to ensure that the mandate of Permen-KP/5/2014 runs optimally at the level of upstream-downstream fisheries production stakeholders.
This research draws on the recommendations of Indonesian logistics experts to describe the various complexities of marine fisheries production conducted by traditional fishers. Field scans showed that fishing is performed traditionally, with limited catches destined for local consumption, because the Regional Government has not fully implemented the upstream-downstream logistics system. Furthermore, local communities, such as the Bosnik market and people from several villages in the East Biak region of Indonesia, need good landing infrastructure.
Potential Risks
The analysis is used to determine the potential risks faced by fish value chain actors. The most probable risks traditional fishers face in the value chain concern limited fishing gear, labor, and selling prices.
This analysis focuses only on several risk agents in the fish value chain with various levels of occurrence that result in significant losses. For instance, risk agents such as climate change and erratic weather cause decreased catches. Field scans identified several points that produce risk, necessitating the following priority steps:
a. A low level of human resource (HR) development causes risks such as low catch quality, limited fishing gear, remote markets, and the operation of traditional boats.
b. Lack of capital causes smaller catches, low fish quality, labor shortages, traditional fishing gear, and limited storage space.
c. Long shipping distances reduce catches when the weather changes and make shipments to market late.
Fish Logistics System in the Biak Islands, Indonesia
Given the distance between the boats' landing area and the fish auction place, the Regional Government should build a "Hub-and-Spoke" distribution system. Hub-and-spoke distribution emerged in the 1970s-1980s, developed with a focus on the origin and destination of goods. In this approach, goods are delivered through a point that serves as a hub, enhancing efficiency; fleet utilization on long-distance routes improves, and fleet capacity on a route can be adjusted according to its volume.
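To illustrate why hub-and-spoke distribution improves efficiency, the sketch below compares the number of routes that must be operated under direct point-to-point shipping with the number needed when every landing point connects only to a central hub; the number of sites used is a hypothetical figure, not data from this study.

```python
# A minimal sketch (hypothetical figures) contrasting route counts in a
# point-to-point network with those in a hub-and-spoke network.

def point_to_point_links(n_sites: int) -> int:
    """Every pair of sites is connected directly: n * (n - 1) / 2 routes."""
    return n_sites * (n_sites - 1) // 2

def hub_and_spoke_links(n_sites: int) -> int:
    """Each site is connected only to a central hub: n - 1 routes."""
    return n_sites - 1

# Suppose the Biak Islands had 10 landing points and markets (assumed figure).
n = 10
print(f"point-to-point: {point_to_point_links(n)} routes")  # 45 routes
print(f"hub-and-spoke:  {hub_and_spoke_links(n)} routes")   # 9 routes
```

With fewer routes to operate, larger shipment volumes can be consolidated on each hub route, which is the efficiency gain the hub-and-spoke design is meant to capture.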
The Regional Government should give maximal support to improving the community's economy in order to resolve the problem of the upstream-downstream distribution system, because the marine fisheries sector is the main potential based on local community values. Moreover, judging from regional developments, coastal governance has not sufficiently increased the community's welfare, especially that of fishing families.
The Government has sought to resolve welfare problems by issuing many policies, including improving basic infrastructure and building land, sea, and air connectivity. These aim to integrate Indonesia's logistics system from west to east and harmonize prices for basic commodities; additionally, the manufacturers, distributors, and retailers in the logistics chain would receive fair economic benefits.
One of the five priority government programs is Human Resources, which encourages the improvement of community welfare. The human resources developed should therefore be able to create and manage the upstream-downstream value chain correctly. A well-managed upstream-downstream logistics system would significantly contribute to the competitiveness of marine fishery production on the Biak Archipelago coast, as well as within and outside Papua, Indonesia.
The Chairman of Supply Chain Indonesia (SCI), Setijadi, in an opinion published in Republika, highlighted the inadequacy of human resources in Indonesia's logistics management. According to the World Bank's 2018 Logistics Performance Index (LPI), Indonesia ranks 44th in logistics quality and competence. The World Bank's LPI publications show Indonesia's ranking above the Philippines (69th), Brunei Darussalam (77th), Laos (83rd), Cambodia (111th), and Myanmar (128th), but still below Singapore (3rd), Thailand (32nd), Vietnam (33rd), and Malaysia (36th). Setijadi stated that human resources in the various ministries and institutions need to master logistics competence; for instance, understanding the supply chain is important for policy-making in the management of basic and essential goods and in increasing exports. He also stated that the Regional Government should optimally prepare human resources who understand the supply chain, to ensure proper use of the fish storage and refrigerating facility (Cold Storage) with a capacity of 200 tons built by the Government at a cost of approximately Rp 19 billion. This would improve the welfare of fishers' families and of the parties in the supply chain. Initial observations were based on information from fishery extension employees. According to research informants, the fish management facility has not been operated because the Government is waiting for invited private parties to cooperate in storage management, and the fish storage process as operated contradicts the parties' expectations. The 200-ton capacity of the Cold Storage also exceeds what the available fishing gear can supply; therefore, the Regional Government should add a large-capacity fishing fleet to fill the 200 tons within five to ten years (Yolanda, 2019).
CONCLUSION
The field problem diagnosis showed that the upstream-downstream logistics supply chain is not operating optimally. It is not in line with the National Fish Logistics System (SLIN) and its goal of improving the welfare of fishers' households in the Biak Islands, Indonesia.
Local communities and traditional fishers participate only in the socialization of the benefits of Cold Storage. Although this looks reasonable, it is concerning, because local governments are considered weak in supporting traditional fishers: they cannot improve fishing gear or strengthen fishers based on the hub-and-spoke system.
Estimation of Antenna Pose in the Earth Frame Using Camera and IMU Data from Mobile Phones
The poses of base station antennas play an important role in cellular network optimization. Existing methods of pose estimation are based on physical measurements performed either by tower climbers or using additional sensors attached to antennas. In this paper, we present a novel non-contact method of antenna pose measurement based on multi-view images of the antenna and inertial measurement unit (IMU) data captured by a mobile phone. Given a known 3D model of the antenna, we first estimate the antenna pose relative to the phone camera from the multi-view images and then employ the corresponding IMU data to transform the pose from the camera coordinate frame into the Earth coordinate frame. To enhance the resulting accuracy, we improve existing camera-IMU calibration models by introducing additional degrees of freedom between the IMU sensors and defining a new error metric based on both the downtilt and azimuth angles, instead of a unified rotational error metric, to refine the calibration. In comparison with existing camera-IMU calibration methods, our method achieves an improvement in azimuth accuracy of approximately 1.0 degree on average while maintaining the same level of downtilt accuracy. For the pose estimation in the camera coordinate frame, we propose an automatic method of initializing the optimization solver and generating bounding constraints on the resulting pose to achieve better accuracy. With this initialization, state-of-the-art visual pose estimation methods yield satisfactory results in more than 75% of cases when plugged into our pipeline, and our solution, which takes advantage of the constraints, achieves even lower estimation errors on the downtilt and azimuth angles, both on average (0.13 and 0.3 degrees lower, respectively) and in the worst case (0.15 and 7.3 degrees lower, respectively), according to an evaluation conducted on a dataset consisting of 65 groups of data. We show that both of our enhancements contribute to the performance improvement offered by the proposed estimation pipeline, which achieves downtilt and azimuth accuracies of respectively 0.47 and 5.6 degrees on average and 1.38 and 12.0 degrees in the worst case, thereby satisfying the accuracy requirements for network optimization in the telecommunication industry.
Introduction
Antenna pose has always played an important role in cellular network planning and optimization, from the era of the 2G network [1] to the present day (e.g., [2,3]). It directly affects signal coverage, soft handover and interference between cells [4] and indirectly affects other network performance indicators, such as quality of service [5], and network configuration parameters, such as transmission power [6]. Thus, determining the pose of an antenna during installation and monitoring its subsequent changes in pose are important tasks.
The pose of an antenna is typically parameterized in terms of its downtilt (or elevation) and azimuth angles (e.g., in [4]), which are formally defined with respect to the direction of the main lobe [7]. At present, two approaches to measuring the antenna pose are popular in the industry. The first is to measure the downtilt and azimuth angles manually using an inclinometer and a compass; the second is to employ specialized sensors, such as the Antenna WASP [8] from 3Z Telecom™, or portable measurement devices equipped with internal sensors, such as the antenna alignment tool (AAT) [9] from Sunlight™, to facilitate the measurement process.
However, both methods have their limitations. For manual measurement, because numerous antennas are mounted on towers that are high off the ground and electrically powered, reaching these antennas requires considerable effort and poses a high risk to workers. Moreover, it is difficult to guarantee the accuracy and precision of such manual measurements because of individual differences among workers. As for the sensor-based solution, on the one hand, since a single sensor unit like the WASP costs a few tens of U.S. dollars, the gross overhead becomes enormous when the total number of antennas in a large mobile network is considered; on the other hand, portable measurement devices like the AAT are usually expensive, and workers still must physically access the antennas to use them.
The recent proliferation of mobile phones with various types of built-in sensors, especially cameras and inertial measurement units (IMUs), has given rise to a wide range of interesting new applications and algorithms [10] that rely on the fusion of visual and inertial information for use in many fields: for example, object recognition [11], 3D reconstruction [12,13], tracking [13,14] and pose estimation [15]. These studies have inspired us to propose a novel non-contact solution to the measurement problem of the antenna pose in the Earth frame using the camera and IMU data from mobile phones.
Two technical challenges arise when designing this non-contact approach. First, antennas are only sparsely textured and usually have simple shapes with smooth surfaces (see Figure 1a), providing few of the distinctive features and stable matches (see Figure 1b,c) that are usually required by existing feature-based pose estimation methods. Second, the IMU sensors in mobile phones are usually ultra-low-cost (consumer-level) microelectromechanical system (MEMS) sensors with poor accuracy [16][17][18]; however, accuracy is of key importance for industrial applications such as network optimization [19].
To address these challenges, we design and develop our solution with careful consideration of the characteristics of antennas and of the sensors in mobile phones. First, we introduce a 3D antenna model and cast the visual pose estimation problem for an antenna as a direct 2D-3D matching problem based on the outer contours of the antenna, thereby sidestepping the antenna's lack of distinctive geometric and textural features. This approach requires prior knowledge of the antenna's 3D geometry, but this is not an excessive requirement given the limited number of different antenna products currently in use. Second, to improve the accuracy of pose estimation, we develop a coarse-to-fine strategy in which we first find an approximate pose automatically by exploiting the shape characteristics of the antenna, reduce the original unconstrained candidate pose search space to a constrained one, and then seek an optimal solution in this reduced space using global optimization techniques. Moreover, to reduce the visual-inertial fusion error of mobile phones, we also propose a new camera-IMU calibration method for accurate calculation of the relative poses between the relevant sensors.
Having addressed these challenges, we are able to build a non-contact antenna pose estimation pipeline consisting of three major steps: first, we capture antenna photographs remotely using a mobile phone whose IMU includes a magnetometer (strictly speaking, an IMU comprises only inertial sensors, i.e., accelerometers and gyroscopes, but we follow the common loose usage); then, we estimate the pose of the antenna relative to the camera from the images with our coarse-to-fine visual pose estimation method, and we estimate the orientation of the IMU relative to the Earth from the IMU outputs; and finally, the downtilt and azimuth angles of the antenna are calculated by concatenating the two poses with the refined camera-IMU transformation produced by our camera-IMU calibration method. Accordingly, our major technical contributions include the following:
1. We present an accurate solution to the downtilt and azimuth estimation problem for antennas based on multi-view antenna images and IMU data captured by a mobile phone. To the best of our knowledge, this is the first report of a non-contact method of measuring the pose of an antenna using a mobile phone.
2. We enhance existing camera-IMU calibration models by introducing additional degrees of freedom (DoFs) between the accelerometer and magnetometer, and we define a new error metric based on both the downtilt and azimuth errors instead of a single unified rotational error. This enables us to propose a new camera-IMU calibration method that simultaneously improves the estimation accuracy for both the downtilt and azimuth angles, making it suitable for tasks in which both types of error are crucial.
3. We propose an automatic method of determining an approximate pose from multi-view antenna contours for visual pose estimation, and we also provide bounds on the search space for pose refinement, thereby converting the underlying unconstrained optimization problem into a constrained one whose solutions can be obtained with better accuracy.
The paper proceeds with a review of related works in Section 2. A formal formulation of the problem and an overview of our estimation approach are presented in Section 3, and the details of the implementation are given in Sections 4 and 5. Section 6 describes evaluations of the proposed approach using both synthetic and real-world datasets. Section 7 discusses and concludes the paper with indications of our future work.
Related Work
There is a vast amount of literature related to pose estimation problems, and the most important and most closely related studies are those concerning visual pose estimation and visual-inertial fusion. We will focus on techniques that specifically address visual pose estimation of rigid objects with known geometries and sparse textures, as well as techniques for camera-IMU calibration, which is a key component of visual-inertial fusion.
Visual Pose Estimation
The general problem of visual pose estimation has been a long-standing topic in computer vision (see [20] for an early review). By adopting an antenna geometry model, we formulate the problem as one of 2D-3D matching in which "3D objects are observed in 2D images" [20], the goal of which is "to estimate the relative position and orientation of a 3D object to a reference camera system" [20].
There are two major paradigms for approaching this problem, distinguished by how correspondences are established between the model and the imagery. One is the feature-based approach, in which an image is abstracted into a small number of key-point features. The other is the direct approach, in which image intensities are used directly to determine the desired quantities.
The feature-based approach is typically the most popular solution. The core underlying idea is to compute a set of correspondences between 3D points and their 2D projections, from which the relative position and orientation between the camera and target can then be estimated using various algorithms, such as those for solving the perspective-n-point (PnP) problem [21]. Consequently, the performance of this approach hinges on whether enough features can be detected and correctly matched. Although numerous feature detection and tracking schemes [22][23][24][25] have been developed, these methods are unsuitable for textureless objects. Recently, line features, such as the bunch of lines descriptor (BOLD) [26], have been proposed for handling textureless objects, but they are still prone to failure on very simple shapes with few line segments and little informative content. Furthermore, the question of how to build stable 2D-3D correspondences is still under investigation.
The direct pose estimation approach attempts to avoid the issues of feature tracking and matching by matching model projections to 2D images as a whole. A large class of such methods is based on template matching. Hinterstoisser et al. proposed a series of template-matching-based methods using inputs based on the distance transform [27], dominant gradient orientations [28] and the recently developed concept of gradient response maps (GRM) [29]; Liu et al. [30] used edge images and included edge orientations in templates in their fast directional chamfer matching (FDCM). GRM and FDCM are state-of-the-art template matching methods. Once an object is registered using a pre-built template, a refinement process, usually based on the iterative closest point (ICP) algorithm [31], is performed to refine the object's pose, as in FDCM. In 2015, Imperoli and Pretto proposed direct directional chamfer optimization (D2CO) [32] for pose estimation, in which a non-linear optimization procedure (the Levenberg-Marquardt algorithm) is applied in the refinement stage instead of an ICP-based method; in a comparison with four ICP-based refinement methods (including FDCM), D2CO demonstrated an advantage in terms of the correct model registration rate. The idea of optimizing the pose parameters has also been pursued in tracking [33] and simultaneous localization and mapping (SLAM) applications [34]. As an alternative to the template matching framework, Prisacariu and Reid [35] introduced a level-set-based modeling method built on a cost function describing the fitness between the estimated pose and the foreground/background models, and they solved the optimization problem using a simple gradient descent approach given an initial pose. Their pixel-wise posteriors for 3D tracking and segmentation (PWP3D) method has been widely used in tasks of simultaneous segmentation and pose estimation, and as a subsequent improvement, Zhao et al. [36] added a boundary term to PWP3D (BPWP3D), which offers finer boundary constraints for more challenging detection environments. However, these (local) optimization-based methods depend on the initial parameters and may become trapped in local optima.
Our pose estimation method predominantly belongs to the second category. By exploiting a shape prior for an antenna and matching its geometric features, we automatically find an initial pose to avoid potential human interaction and any overhead incurred for the building and matching of templates. Moreover, in the subsequent pose refinement step, we construct bounds on the pose search space to transform the original unconstrained optimization problem into a bounded one, which is then solved using global optimization techniques.
Recently, depth cameras have begun to be used for 3D pose estimation. However, current consumer-level depth cameras are not capable of detecting objects at long distances. For example, the maximum detection distance for a Kinect v2 is 4.5 m. Therefore, such approaches have limited applicability to our problem.
Camera-IMU Calibration
To relate measurements in the camera frame to the Earth frame, the relative pose between the camera and the IMU (i.e., the rigid transformation between the two frames) should be known. The process of determining this transformation is usually referred to as camera-IMU calibration [37].
Fleps et al. [38] classified the existing approaches into two categories: approaches that require specialized measurement setups and facilitate closed-form solutions and filter-based approaches with approximate solutions. Mair et al. [39] categorized the approaches into three classes: methods with closed-form solutions, Kalman-filter-based methods and methods that make use of optimization techniques. Here, we offer a review from another perspective, based on the hardware configurations used in the various calibration methods, leading to two groups.
Methods in the first group rely on the gyroscope in an IMU. A prevalent practice is a filter-based approach in which the calibration parameters are integrated into the state vector of an IMU motion filter (e.g., [40,41]; see [42] for an overview) and are solved simultaneously with the other motion states. However, as noted by Maxudov et al. [43], a long state vector naturally imposes certain limitations on accuracy. Moreover, the filter-based framework is unnecessary for offline calibration; based on this insight, Fleps et al. [38] formulated the calibration problem in a non-linear optimization framework by modeling the sensors' trajectory. In these methods, the camera is in constant motion, and oversimplifying the model of a mobile phone camera by using a global-shutter model instead of a rolling-shutter model may cause problems, as revealed in more recent works [44,45].
The methods belonging to the other group considered here are also suitable for use with gyroscope-free IMUs. These methods are closely related to hand-eye calibration, or, more concretely, eye-in-hand calibration, an approach used in the robotics community in which the relative pose between the camera and a rigid rig is sought. Since it was first proposed by Shiu and Ahmad in 1989 [46], hand-eye calibration has been largely considered a solved problem (see [47] for a review), and recent research has mainly focused on the development of more powerful solvers [48]. In camera-IMU calibration, the role of the "hand" is played by an accelerometer or an accelerometer-magnetometer pair. In the first complete camera-IMU calibration procedure, proposed by Lobo and colleagues [37], the rigid rotation between the camera and accelerometer is estimated as a standalone step by having both sensors observe the vertical direction in several poses: the camera relies on an ideally vertically placed checkerboard and the accelerometer on gravity to obtain a vertical reference. Their work was released as a toolbox [49] and is widely used. In Vandeportael's work on a camera that knows its orientation (ORIENT-CAM) [50], a similar idea was applied; however, since the IMU used in ORIENT-CAM consists of an accelerometer and a magnetometer, the relative rotation is estimated by aligning observations of the Earth frame from the IMU and the camera by means of a checkerboard that is ideally laid out such that it is both perfectly horizontal and perfectly northward-oriented. This method requires a carefully placed reference, as in [37], and any error during setup directly introduces bias into the calibration results. Under the assumption of negligible camera translations during the calibration process, in their work on ego-motion [51], Domke and Aloimonos solved for the rotation between the camera and accelerometer by relating gravity observations in IMU frames to the motion of the camera. By considering the relative rotations between different camera frames, they avoided the need for artificial references requiring a rigorous setup.
Our calibration method belongs to the second category. Unlike existing approaches, we consider the difference in precision between the two IMU sensors in a mobile phone and use a finer-grained error metric consisting of two terms, instead of a unified one (as in [50,51]), to reflect the resulting effect. Moreover, we do not assume perfect accelerometer-magnetometer alignment during the assembly of the sensor hardware and thus are able to decouple the accelerometer-related error and the magnetometer-related error. The reasons that we do not adopt a method of the first category are as follows: (1) the dynamic features of the gyroscope and the moving camera are nonessential to our measuring problem, in which instantaneous sensor outputs are employed; and (2) a calibration method that is independent of the gyroscope is applicable to a wider range of devices.
Problem Formulation and Method Overview
Our goal is to estimate the antenna pose in the Earth frame from multi-view data, which consist of multi-view images of the antenna and IMU (accelerometer and magnetometer) measurements recorded at the exact same instant as each image capture. Below, we first formally define the problem and then present an overview of our solution.
Notation and Problem Formulation
As described in the Introduction, the number of different antenna types in use is quite limited, and therefore, it is reasonable to assume a known 3D antenna geometry once we have identified the antenna type from the acquired images. Let this geometry be denoted by M, and let us assume that the bounding box of the model is centered at the origin point of the object frame (OF) and that its three axes are aligned with the axes of OF, without loss of generality.
In each of the multi-view images, the antenna (treated as the foreground) is represented by a contour expressed as a list of connected points, denoted by Φ i , i = 1, 2, ..., P, where P is the number of viewpoints. Such contours can be the outputs of a procedure based on image segmentation, shape detection or human interaction during image capture; we do not discuss this procedure here. This representation discards any textural information and interior shape information for an antenna, making it generally impossible to obtain a unique pose solution from a single viewpoint. Nevertheless, we opt to simply ignore these two kinds of information because of their instability, as demonstrated in Figure 1. Instead, contours captured from multiple viewpoints enable the determination of a unique solution.
The 3D mesh M and the 2D images are related by camera projections. We model the phone camera as a pin-hole camera, which maps M first from OF into the camera frame (CF) via a rigid (extrinsic) transformation and then into the image plane via a perspective projection. The projective function is determined by a set of intrinsic camera parameters, denoted by K, which are taken to be known constants for a pre-calibrated camera.
In addition to the images, the other important half of the multi-view data consists of the IMU measurements, which encode the orientations of the IMU in the Earth frame (EF) when the images were captured. The directions of gravity and magnetic north at a given point on Earth define EF at that location, and the accelerometer and magnetometer sensors of the IMU respond to the gravitational force and magnetic flux, yielding their projections onto the sensor axes. We let $S_i = (a_i, m_i)$, $i = 1, 2, \ldots, P$, denote the overall IMU measurements, where $a_i$ denotes an accelerometer measurement and $m_i$ denotes a magnetometer measurement.
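For concreteness, the minimal sketch below shows how a single pair $(a_i, m_i)$ determines the sensor orientation in a North-East-Down (NED) Earth frame via a standard e-compass construction. The axis and sign conventions are assumptions that may need flipping for a particular device, and this is not necessarily the exact formulation used in the paper's references.

```python
import numpy as np

def imu_orientation_ned(a, m):
    """Rotation taking sensor-frame vectors into a North-East-Down Earth
    frame, built from one accelerometer sample a and one magnetometer
    sample m (both 3-vectors in the sensor frame, device held still)."""
    a = np.asarray(a, float)
    m = np.asarray(m, float)
    down = -a / np.linalg.norm(a)      # a static accelerometer reads +g "up"
    east = np.cross(down, m)           # magnetic field has north+down parts
    east /= np.linalg.norm(east)
    north = np.cross(east, down)
    # Rows are the Earth (NED) axes expressed in the sensor frame, so
    # R @ v_sensor expresses a sensor-frame vector in the Earth frame.
    return np.vstack([north, east, down])
```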
To ensure an accurate formulation of the problem, there are two small misalignments that we should consider. First, the accelerometer frame, denoted by A, should ideally coincide with the magnetometer frame, denoted by M, such that the orientation of the IMU in EF can be determined from the outputs of these two sensors (e.g., [52]). However, because the two sensors usually reside on different chipsets in a mobile phone, a small rotation may exist between the two sensor frames. Moreover, other environmental factors (especially magnetic factors) may affect the rotation of each sensor frame, thereby worsening the misalignment. Let this unknown rotation be denoted by $R^A_M$, which we can use to re-align A and M once it is known. For convenience, we also regard A as the overall IMU frame (IF) when doing so will not cause confusion. Second, the frames of the camera module and IMU module in a mobile phone should also ideally be perfectly aligned, differing only by exchanges of axis directions, which is also generally not true in reality. We describe the true relation as a rigid transformation $(R^A_C, T^A_C)$ (or $(R^I_C, T^I_C)$), and although it is easy to obtain an approximation of the relative pose between the camera and IMU frames from the mobile phone API, finding the precise transformation requires greater effort. For the pose estimation problem, we temporarily take these two misalignments as priors.
Our final goal in the estimation problem is to determine the antenna's downtilt and azimuth angles in the Earth frame, which together represent the rotation of the antenna relative to EF, denoted by $R^E_O$, where O represents the object coordinate frame of the antenna.
We summarize these quantities and their relationships in the graph model shown in Figure 2. A straightforward interpretation of the graph model yields an estimation problem with a conditional cost function given all priors and observations (Equation (1)), in which $R^E_O$ is the pose to be estimated. The two main sources of input are the camera projection process and the IMU sensing process, so we re-express Equation (1) as a sum of two terms (Equation (2a)): $g_1$, the projection-related error, and $g_2$, the sensing-related error. The accompanying constraint (Equation (2b)) models the relation between CF and IF and thus couples the two error terms. Note that Equation (2a) is a generic formulation of our pose estimation problem in the Earth frame, and different solutions may arise depending on the choices of $g_1$ and $g_2$.
Method Overview
A direct optimization-based solution to Equation (2a) is impractical because of its high dimensionality; therefore, we will break the problem down into smaller parts to solve it.
Referring to the original graph model presented in Figure 2, we find that the first item in Equation (2a), which corresponds to the red-outlined region in the upper right of the figure, describes P model-based visual pose estimation problems, and similarly, the second item, corresponding to the blue-outlined region in the lower left of Figure 2, describes the IMU orientation estimation problem seeking the rotation of IF in EF, denoted by $R^E_{I_i}$, for which effective solutions exist (e.g., [52]). The per-viewpoint estimates are fused by averaging to yield $R^E_O$, from which we can calculate the downtilt and azimuth angles of the antenna. In addition, we note that the priors $R^A_C$ and $R^A_M$ (i.e., the relative poses between the camera, accelerometer and magnetometer) are inherent to each specific mobile phone; thus, they need to be calculated only once and can then be stored for later use. To acquire the exact values of these rotations, we employ a dedicated offline camera-IMU calibration process, which will be described in Section 4.
To summarize, our solution for antenna pose estimation in the Earth frame consists of the following four main steps (a minimal sketch of the final concatenation is given after this list):
1. For a given phone, we compute the relative poses between its camera and IMU sensors using an offline camera-IMU calibration procedure. Once calculated, these relative poses do not change during the antenna pose estimation process.
2. Using the antenna model and images obtained from calibrated viewpoints, we estimate the relative pose between the antenna frame and the camera frame for each viewpoint.
3. We correct the IMU data using the relative rotation between the accelerometer and magnetometer from step 1, and we calculate the rotation of the IMU in the Earth frame for each viewpoint using existing IMU orientation estimation techniques.
4. We concatenate the antenna pose in the camera frame and the IMU orientation for each viewpoint with the relative camera-IMU rotation from step 1 to obtain the antenna pose in the Earth frame.
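As a hedged illustration of step 4, the sketch below chains the per-viewpoint rotations and reads off the downtilt and azimuth angles in a North-East-Down Earth frame. The boresight direction `v_obj` and the NED sign conventions are assumptions, not the paper's stated conventions.

```python
import numpy as np

def antenna_downtilt_azimuth(R_e_i, R_i_c, R_c_o, v_obj=(0.0, 0.0, 1.0)):
    """Downtilt/azimuth (degrees) of the antenna in a North-East-Down
    Earth frame, obtained by chaining one viewpoint's rotations:
        object -> camera -> IMU -> Earth."""
    R_e_o = R_e_i @ R_i_c @ R_c_o          # antenna rotation in the Earth frame
    d = R_e_o @ np.asarray(v_obj, float)   # boresight direction in NED
    downtilt = np.degrees(np.arcsin(d[2]))             # positive = below horizon
    azimuth = np.degrees(np.arctan2(d[1], d[0])) % 360.0
    return downtilt, azimuth
```

In practice, the per-viewpoint results would then be fused (e.g., by averaging), matching the multi-view fusion described above.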
Relative Poses between the Camera and IMU Sensors
In this section, our aim is to accurately determine the relative rotations between the camera, accelerometer and magnetometer to improve the accuracies of downtilt and azimuth estimation for a remote target.
We use a checkerboard to capture the data we need for calibration. The board is placed in several orientations, and for each placement, we measure the downtilt and azimuth angles and capture multi-view data in the same manner used for capturing data from an antenna. This checkerboard pose measurement step replaces the careful checkerboard setup required in [37,50]. Multiple groups of data are captured to provide sufficient constraints for the calibration.
Suppose that we have a group of calibration data that consists of Q checkerboard placements with measured downtilt and azimuth angles $(t_i, a_i)$, $i = 1, 2, \ldots, Q$, and that multi-view data have been captured from $Q_i$ viewpoints for the $i$-th checkerboard placement. Then, we can model the calibration using a graph model similar to that presented for the antenna pose estimation problem, as shown in Figure 4.
Unlike in the case of the antenna pose estimation problem, because of the maturity of camera calibration techniques (e.g., [53]), the relative pose between the camera and the checkerboard is considered to be known, and the downtilt and azimuth angles of the checkerboard are regarded as the ground truth. Thus, we can transform the graph model into an optimization problem (Equation (3)), in which $\delta_R$ is a distance function or metric for rotations, which we will explain in detail later, and $R^E_{I_i^j}$ is the IMU orientation in EF calculated from $S_i^j$, the $j$-th frame of sensor data in the $i$-th group of calibration data, using existing methods such as [52], after the accelerometer and magnetometer measurements have been aligned via $R^G_M$. The symbol B represents the coordinate frame of the checkerboard, and the functions $h_t$ and $h_a$ calculate the downtilt and azimuth angles of the checkerboard, respectively. For a vector $V$ in EF, the tilt angle $t$ is the angle between $V$ and the horizontal plane, and the azimuth angle $a$ is the heading of the horizontal projection of $V$; both are expressed in terms of the components $V^{(i)}$ of $V$. For a checkerboard, we can use its edge directions, its surface normal direction or a combination thereof to describe its tilt and azimuth angles; and since most antennas point downwards, we prefer the term downtilt to tilt, the two being opposite in sign. To be specific, suppose that we choose a direction $v$ on the checkerboard to define the downtilt angle and that the rotation of the checkerboard relative to EF is $R^E_B$; then, the downtilt angle of the checkerboard in EF is defined by $R^E_B \cdot v$. For convenience, we denote this process by the function $h_t(R^E_B)$, in which we omit any reference to the predefined $v$. The azimuth angle of the checkerboard is defined analogously and encoded as $h_a(R^E_B)$. An illustration is presented in Figure 5. In Equation (3), the introduction of $R^G_M$, i.e., the relative rotation between the accelerometer and magnetometer, is a key element that differentiates our method from previous camera-IMU calibration methods. We have explained our motivations for this in Section 1, and further evidence supporting our approach is provided by the contrasting behaviors of the downtilt and azimuth error curves with and without the additional DoFs, as shown in Figure 6, which illustrates that it is difficult to keep both the downtilt and azimuth errors simultaneously low when $R^G_M$ is ignored.
To complete our definition of Equation (3), we design $\delta_R$ as a rotation metric defined in terms of the downtilt and azimuth angles: for two orientations $R_1$ and $R_2$, $\delta_R(R_1, R_2)$ combines a downtilt term $\delta_t$ and an azimuth term $\delta_a$, where $\delta_t$ and $\delta_a$ are two special functions that compute the minimal differences in the downtilt and azimuth angles with respect to their periodicity, and a weighting parameter $w$, explained below, balances the two terms (Equation (4)). A simple choice for $\delta_t$ and $\delta_a$ is the Euclidean distance after transformation of the angles into the same phase.
Note that our metric is defined based on the downtilt and azimuth angles and thus has only two DoFs, meaning that it is an incomplete representation of a rotation. Although it would be easy to add another DoF to the definition, we choose not to do so to decrease the number of measurements needed during data capture. We include the weight parameter w in the final expansion in Equation (4) to reduce the effect of the azimuth-related error on the overall cost. As is known from [52], the downtilt reading of an IMU relies solely on the accelerometer output, whereas the heading (azimuth) measurement predominantly depends on the magnetometer output. However, the precision of the accelerometer in a mobile phone is typically much higher than that of the magnetometer, and the magnetic environment is highly unstable compared to the gravitational environment in practice. Hence, the scales of the errors on the two components of δ R are likely to be unbalanced, which may lead to non-optimal solutions for the overall calibration; by restricting w to a value less than 0.5, we can re-balance the two types of errors.
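The following sketch illustrates one plausible realization of this two-term metric; the exact way the weight $w$ combines with the two terms is an assumption consistent with the text, not the paper's published formula.

```python
import numpy as np

def ang_diff(x, y, period=360.0):
    """Minimal signed difference between two angles, respecting periodicity."""
    d = (x - y) % period
    return d - period if d > period / 2.0 else d

def delta_R(tilt1, az1, tilt2, az2, w=0.2):
    """Two-DoF rotation discrepancy built from downtilt and azimuth errors.
    Restricting w < 0.5 down-weights the noisier azimuth term, as the
    surrounding discussion recommends."""
    dt = ang_diff(tilt1, tilt2)
    da = ang_diff(az1, az2)
    return (1.0 - w) * dt**2 + w * da**2
```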
Although we cannot determine w analytically, we can show that the calibration accuracy is insensitive to w when its value is sufficiently low, as seen from the experimental results presented in Figure 6. Empirically, we recommend keeping this value in the range [0.1, 0.3]. Combining Equations (3) and (4), we obtain Equation (5).
Equation (5) is written in a standard least-mean-square form, and it can be effectively solved using the Levenberg-Marquardt algorithm [54].
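A hedged sketch of this solve, using SciPy's Levenberg-Marquardt implementation, is given below. The residual-producing function `residual_terms` and the rotation-vector parameterization of the two unknown rotations are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as Rot

def calibrate(residual_terms, w=0.2):
    """Refine the camera-IMU rotation and the accelerometer-magnetometer
    rotation by stacking weighted downtilt/azimuth residuals over all
    checkerboard placements. `residual_terms(R_ic, R_gm)` is a hypothetical
    user-supplied function returning per-placement (downtilt_error,
    azimuth_error) pairs in degrees; there must be at least 6 residuals
    for the Levenberg-Marquardt method."""
    def residuals(x):
        R_ic = Rot.from_rotvec(x[:3]).as_matrix()   # camera-IMU rotation
        R_gm = Rot.from_rotvec(x[3:]).as_matrix()   # accel-magnetometer rotation
        errs = []
        for dt, da in residual_terms(R_ic, R_gm):
            errs.extend([np.sqrt(1.0 - w) * dt, np.sqrt(w) * da])
        return np.asarray(errs)

    sol = least_squares(residuals, x0=np.zeros(6), method="lm")
    return (Rot.from_rotvec(sol.x[:3]).as_matrix(),
            Rot.from_rotvec(sol.x[3:]).as_matrix())
```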
Antenna Poses Estimated from Captured Images
Considering that the scene containing the antenna is static from one viewpoint to another, if we insert a camera calibration object (e.g., a checkerboard) into the scene and employ a suitable extrinsic camera calibration technique (e.g., [53]), or apply a structure-from-motion (SfM) technique (e.g., [55]) to the background, we can obtain the poses of the cameras at all viewpoints relative to a common visual reference frame, meaning that the task can be formulated as a visual pose estimation problem using data from P viewpoints. Let the viewing reference frame (VF) be denoted by V; let the camera frames (CFs) be denoted by $C_i$, $i = 1, 2, \ldots, P$; and let the relative poses be denoted by $(R^{C_i}_V, T^{C_i}_V)$, $i = 1, 2, \ldots, P$. Then, we need to find only the relative pose between O and V instead of the original 3P unknowns. This process is expressed in Equation (6), where $g_1$ describes the visual pose estimation error based on the images acquired from the P viewpoints.
To complete our definition of Equation (6), we define $g_1$ as a contour-based distance function between the projections of the 3D antenna model and the antenna foregrounds in the real images (Equation (7)): in each view, the projected model contour is compared with the observed antenna contour through a contour distance $d_C$. Our approach does not rely on any assumption regarding the form of $d_C$. Without loss of generality, we define $d_C$ based on a point-to-contour distance: for two contours $\Phi'$ and $\Phi''$ to be matched, $d_C$ accumulates, over the points of $\Phi'$, the operator $d_{c \to p}$, which yields the shortest Euclidean distance between a point and all points on $\Phi''$. Another way to interpret $d_{c \to p}$ is to treat it as an embedding function of the level set underlying a contour; for details, we refer the reader to [56]. An efficient algorithm to compute $d_{c \to p}$ is given in [57]. To solve Equation (6), we adopt a coarse-to-fine strategy. First, we exploit the fact that most antennas are approximately cuboid in shape to recover an approximate pose, by aligning the 3D principal axis of the model with the 2D principal axes in the multi-view images and finding a proper rotation around the 3D principal axis. Then, based on this coarsely estimated pose, we construct bounding constraints on the pose search space, thereby allowing us to seek the optimal pose by using global optimization techniques to minimize Equation (6). An overview of our approach is provided in Figure 7.
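As an illustration, the distance-transform computation referenced above ([57]) can be sketched as follows; the one-directional mean aggregation over contour points is an assumption, since the text does not fix how per-point distances are accumulated.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def contour_distance(contour_a, contour_b, shape):
    """Mean shortest Euclidean distance from each point of contour_a to
    contour_b, computed with a single distance transform over the image.
    Contours are (N, 2) integer arrays of (row, col) pixel coordinates;
    `shape` is the image size (height, width)."""
    mask = np.ones(shape, dtype=bool)
    mask[contour_b[:, 0], contour_b[:, 1]] = False   # zeros on contour_b
    dt = distance_transform_edt(mask)                # distance to contour_b
    return dt[contour_a[:, 0], contour_a[:, 1]].mean()
```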
Figure 7. As shown in the leftmost box, the position of the 3D principal axis is recovered from multi-view contours (green), and the 3D antenna model (gray) is aligned with the recovered axis and rotated to find a coarse estimate of the antenna pose. As shown in the left column of the middle box, there are discrepancies between the antenna contours and the projections from the approximate pose (blue), and as shown in the right column, the approximate pose can be globally refined to reduce the contour discrepancy with respect to the resulting refined pose (pink). In the rightmost box, the final overall estimation result is shown as the transformation from the model frame (gray) to the reference frame (pink).
Axial Alignment
The strong axiality of the antenna shape originates from the fact that most directional antennas are approximately cuboid in shape. We use the concept of principal axes to describe the axiality of both the antenna model and the antenna projections in images. We define the 3D principal axis of the model as the 3D line segment that crosses the centroid of the model, is oriented in the direction along which the model extends the farthest and is bounded by the mesh (as illustrated in Figure 8a); similarly, the 2D principal axis of a projection is the 2D line segment that crosses the centroid of the 2D silhouette, is oriented in the direction along which the silhouette extends the farthest and is bounded by the contour (as illustrated in Figure 8a).
The first step of our coarse pose estimation procedure is to find a pose for which the 3D and 2D axes are aligned. First, we detect the 2D/3D principal axes from the images and the 3D model. There are many ways to achieve this, for example, by applying principal component analysis (PCA) or independent component analysis (ICA) to the contour points and the 3D model vertices, or by finding the (rotated) bounding box of the contour/model. Once the 2D axes have been found for all viewpoints, we recover the 3D principal axis in VF from the end points of the 2D axes using triangulation methods (e.g., [58]). Let the recovered 3D axis be denoted by $\overrightarrow{E_0 E_1}$, and let the 3D principal axis of the model be denoted by $\overrightarrow{E_0' E_1'}$; then, we seek the pose $(R_\perp, T')$ that maps the model axis onto the recovered axis (Equation (8)). Equation (8) is also known as the generalized Procrustes problem and can be efficiently solved analytically [59].
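A minimal closed-form sketch of this axis alignment is shown below: it rotates the model axis direction onto the recovered direction (leaving the roll about the axis free, which is resolved in the next subsection) and matches the axis midpoints. This is one valid solution of the segment-to-segment alignment problem, not necessarily the solver of [59].

```python
import numpy as np

def align_axis(model_axis, recovered_axis):
    """Rigid (R, T) taking the model's 3D principal axis onto the axis
    triangulated from the images. Each axis is a pair of 3D endpoints."""
    p0, p1 = np.asarray(model_axis, float)
    q0, q1 = np.asarray(recovered_axis, float)
    u = (p1 - p0) / np.linalg.norm(p1 - p0)
    v = (q1 - q0) / np.linalg.norm(q1 - q0)
    k = np.cross(u, v)
    s, c = np.linalg.norm(k), float(np.dot(u, v))   # sin and cos of the angle
    if s < 1e-12:
        if c > 0:
            R = np.eye(3)                           # already aligned
        else:
            # 180-degree turn about any axis perpendicular to u
            a = np.cross(u, [1.0, 0.0, 0.0])
            if np.linalg.norm(a) < 1e-6:
                a = np.cross(u, [0.0, 1.0, 0.0])
            a /= np.linalg.norm(a)
            R = 2.0 * np.outer(a, a) - np.eye(3)
    else:
        K = np.array([[0, -k[2], k[1]],
                      [k[2], 0, -k[0]],
                      [-k[1], k[0], 0]]) / s        # unit-axis skew matrix
        R = np.eye(3) + s * K + (1 - c) * (K @ K)   # Rodrigues' formula
    T = (q0 + q1) / 2.0 - R @ ((p0 + p1) / 2.0)     # midpoint to midpoint
    return R, T
```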
Circumferential Match
However, the solution to Equation (8) is not unique: from a geometric point of view, $R_\perp$ describes only the yaw and pitch of the antenna. Let these two angles be $\alpha'$ and $\beta'$; we can then write $R_\perp$ as $R_\perp(\alpha', \beta')$. This leaves the roll angle, which describes the rotation around $\overrightarrow{E_0 E_1}$, undetermined. To eliminate the remaining uncertainty, we enumerate discrete rotations of the model around $\overrightarrow{E_0 E_1}$ on top of $(R_\perp, T')$ to find the rotation that minimizes the difference between the widths of the antenna silhouettes at their centers in the real images and in the projections (as indicated by the dashed lines in Figure 8b). Figure 9 shows an example of how the width difference changes with the rotation; two minima are observed because of the symmetry of the antenna model, and we select the correct one based on prior knowledge of which side of the antenna is facing the camera.
Let the best rotation angle determined through enumeration be $\gamma'$, and let the additional rotation it represents be denoted by $R_\parallel(\gamma')$; then, by composing $R_\perp$ with $R_\parallel(\gamma')$, we obtain the overall approximate rotation $R'$.
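The enumeration itself admits a very small sketch, shown below. The renderer `project_widths`, which returns the per-view silhouette widths of the model under a given roll angle, is a hypothetical helper standing in for the projection machinery described above.

```python
import numpy as np

def best_roll(widths_observed, project_widths, step_deg=5.0):
    """Enumerate roll angles about the recovered principal axis and keep
    the one whose projected center widths best match the observed widths.
    Symmetric antennas give two minima; the correct one is chosen from
    prior knowledge of which side faces the camera, as described above."""
    gammas = np.arange(0.0, 360.0, step_deg)
    errors = [np.sum((project_widths(g) - widths_observed) ** 2) for g in gammas]
    return gammas[int(np.argmin(errors))]
```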
Bounds on Pose Parameters
The pose obtained above is inaccurate as a result of three factors: (1) displacements between the detected 2D/3D principal axes and their true positions; (2) potential errors in the triangulation of the 3D principal axis from the images; and (3) imprecision in the determination of the roll angle. Consequently, we wish to further refine this pose.
A simple approach is to treat the approximate pose as an initialization and then iterate until convergence is achieved, as done in PWP3D [35] and D2CO [32]. Although our approximated poses function well as initializations in most cases, they still cannot guarantee the avoidance of local minima. For higher accuracy, we attempt to find bounds on the pose search space that will allow us to use global optimization techniques to solve Equation (6).
We first concentrate on the rotational component of the pose. The first two sources of imprecision are predominantly related to the yaw and pitch of the antenna. We observe that the projection of the 3D principal axis based on the approximate pose never falls outside the area enclosed by the two long side edges of the antenna silhouette in any viewpoint, which means that the recovered 3D principal axis always lies within the double cone enclosing the visual hull of the antenna suggested by the multi-view contours, as indicated in Figure 10a. Let the viewing angle between the two most distant viewpoints be denoted by $\vartheta$, and let the radius of the bounding cylinder of the antenna be denoted by $r$; then, the diameter of the cone is $D = 2r / \sin(\vartheta/2)$ (see Figure 10b), and the opening angle of the cone is $\varepsilon = 2\arctan(D/H)$, where $H$ is the height of the antenna (see Figure 10a). In this way, we obtain bounding constraints on the refinements to the yaw and pitch. In practice, we have found that the approximate rotation is usually much closer to the true value than these bounds would suggest, so we scale the bounds by an empirical factor $w_1$ to further shrink the search space. Regarding the last source of inaccuracy, i.e., that affecting the roll angle, we already have a natural bound, namely the granularity used when enumerating the roll angle in Section 5.1.2.
To summarize, the refined rotation is sought within a bounded search space on $\Delta\alpha$, $\Delta\beta$ and $\Delta\gamma$, which describe the differences between the approximate rotation and the true value: the yaw and pitch refinements are bounded by the scaled cone opening angle, and the roll refinement by the enumeration granularity.
Regarding the translation of the model, it can similarly be observed that the projection of the center of the model always falls within the antenna foreground in the image and is usually not far from its true position. This means that in OF, the true translation is confined to the cylinder formed by the top and bottom of the double cone found above (see Figure 10a), yielding bounds of $(D, H, D)$ on translations in OF based on $T'$. Moreover, for reasons similar to those motivating the scale factor $w_1$, we also introduce an empirical factor $w_2$ for the translation along the principal axis. Finally, in VF, we obtain a bounded translation space around $T'$, where the vector $\Delta T$ describes the difference between the approximate translation and the true value.
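The geometric quantities behind these bounds are simple to compute; the sketch below derives them directly from the cone construction above. The default value of the empirical factor $w_1$ is an assumption, since the paper does not report it.

```python
import numpy as np

def rotation_bounds(r, H, theta_deg, roll_step_deg, w1=0.5):
    """Half-widths of the rotational search box for pose refinement:
    cone diameter D = 2r/sin(theta/2), opening angle eps = 2*arctan(D/H);
    yaw/pitch are bounded by w1*eps and roll by the enumeration step."""
    theta = np.radians(theta_deg)           # viewing angle between extreme views
    D = 2.0 * r / np.sin(theta / 2.0)
    eps = np.degrees(2.0 * np.arctan(D / H))
    return {"yaw_pitch_deg": w1 * eps, "roll_deg": roll_step_deg, "D": D}
```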
Refinement via Constraint-Based Optimization
To summarize, the optimization defined in Equation (7) is now rewritten as the box-constrained problem of Equation (9). The constraints expressed in Equation (9) are simple box-shaped boundary constraints, which enable us to search for the refined pose in a reduced space while seeking global convergence using algorithms such as the dividing rectangles (DIRECT) algorithm [60]. Typically, another round of local optimization (we use constrained optimization by linear approximations (COBYLA) [61]) is then performed to also ensure local optimality.
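A hedged sketch of this global-then-local refinement, using SciPy's DIRECT and COBYLA implementations, is given below; the 6-component pose-vector layout and the iteration budget are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import direct, minimize

def refine_pose(cost, x0, half_widths):
    """Refine a pose inside the box |x - x0| <= half_widths, where x is a
    6-vector (yaw, pitch, roll, tx, ty, tz) and `cost` maps a pose vector
    to the multi-view contour discrepancy (assumed given)."""
    x0 = np.asarray(x0, float)
    half_widths = np.asarray(half_widths, float)
    lo, hi = x0 - half_widths, x0 + half_widths
    # Global search over the bounded box with DIRECT ...
    res = direct(cost, list(zip(lo, hi)), maxfun=2000)
    # ... then a local polish with COBYLA, keeping the box as
    # inequality constraints for local optimality.
    cons = [{"type": "ineq", "fun": lambda x, i=i: x[i] - lo[i]} for i in range(6)]
    cons += [{"type": "ineq", "fun": lambda x, i=i: hi[i] - x[i]} for i in range(6)]
    res = minimize(cost, res.x, method="COBYLA", constraints=cons)
    return res.x
```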
Validation of the Effectiveness of the Bounds Applied for Pose Refinement
In Table 1, we present the statistics of the estimation error with respect to the ground truth based on the refinement results obtained using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm [62] and COBYLA as solvers for Equation (9) on an antenna dataset named AntennaL, which consists of 65 groups of data (described in detail in Section 6.1). Both solvers seek a local optimum, but the latter can take advantage of bounding constraints, whereas the former cannot. A comparison reveals that COBYLA yields far better estimates of both the downtilt error and the azimuth error, although there are cases in which the azimuth error is greater than the maximum tolerance allowed in the industry (15 degrees). Table 1. Lower quartile (Q1), median (Q2) and upper quartile (Q3) of the error distributions of downtilt and azimuth estimation using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm and constrained optimization by linear approximations (COBYLA) on AntennaL; the mean, standard deviation (Std.) and maximum error are given for comparison.
Experimental Results
We experimentally evaluated the overall pipeline of the proposed antenna pose estimation method. The two key steps of camera-IMU calibration and visual pose estimation were evaluated and compared with state-of-the-art methods; the effects of various camera-IMU calibration and camera extrinsic calibration methods on antenna pose estimation were compared; and the accuracy of the overall pipeline was reported on data from working antennas.
Note that a downtilt deviation of 1.5 degrees will drastically affect the performance of an antenna [19], and the empirical azimuth deviation tolerance is approximately 15 degrees. Therefore, the overall downtilt and azimuth estimation errors must be less than these two values for industrial applications such as network optimization.
Setup and Datasets
We captured multiple datasets and organized them into two groups for various evaluations. In the first group, there are three datasets, named BoardF, BoardL and Motion, which are for evaluations of camera-IMU calibration methods; in the second group, there are two datasets, named AntennaL and AntennaS, which are for evaluations of visual pose estimation methods and the overall pipeline.
All of these datasets are captured using a Samsung Galaxy S4 Zoom smartphone with an Android application that we developed ourselves, and a tripod is used to ensure stable sensor data when necessary. Examples are presented in Figure 11, and details are given below. The datasets are publicly available at http://zju-capg.org/antenna/data.
Camera-IMU Calibration/Evaluation Dataset
The dataset of BoardF consists of multi-view checkerboard data, in which the checkerboard is always laid horizontally with the X or Y axis pointing directly north and multi-view data are captured around the checkerboard about every 15 degrees. The specific placement of the checkerboard satisfies the needs of [63], which requires an ideally vertical or horizontal checkerboard, and the needs of [50], which requires a checkerboard aligned to the north. We organized BoardF into two sub-datasets: originally, the data were captured in a hall and a balcony separately and were therefore divided into two subsets named hall and balcony. Either hall or balcony can be used to perform the camera-IMU calibration; however, we prefer to use them together (i.e., the dataset of BoardF) to avoid potential overfitting.
BoardL was collected for calibration using our method. The restrictions on the checkerboard orientation applied in BoardF are removed, and the downtilt and azimuth angles of the checkerboard are treated as the ground truth.
Both BoardF (including its two subsets) and BoardL are further split in half, with one half serving as the calibration set and the other serving as the evaluation set.
Motion is a dedicated set for use with the camera-to-IMU calibration and synchronization toolbox (CRISP) [44], which requires inputs consisting of video data and gyroscope readings. We note that the implementation of CRISP provided by its author is built for fixed-rate gyroscope data; however, because the gyroscope in an Android phone works in an event-based manner, we resampled the gyroscope readings as suggested by the author.
Multi-View Antenna Dataset
We collected two datasets of multi-view antenna data, each serving a different purpose. AntennaL consists of 65 groups of data captured on campus using an antenna whose downtilt angle ranged from approximately 1.5 degrees to approximately 11.5 degrees and whose azimuth angle ranged from −180 to 180 degrees, and each group of data consists of three viewpoints. AntennaS is similar to AntennaL, but was collected based on working antennas; however, because of restrictions imposed by the local telecom company, our access was limited to only two antennas, from which we captured six groups of data consisting of three to six viewpoints for each group. A checkerboard was placed in the scene in both AntennaL and AntennaS to enable the determination of the extrinsic camera parameters. We evaluated the accuracy of our method on AntennaL, which contains many more groups of data and antenna poses, and verified the performance of our method in a real environment using AntennaS.
Camera-IMU Calibration Methods
In this section, we first compare our method (under two parameter configurations) with those presented in [37,50,51] on BoardF. We calibrated each method on the calibration portions of hall, balcony and BoardF and then evaluated it on the corresponding evaluation data. Although we assert that a gyroscope is unnecessary for the estimation task, we also evaluated a recent gyroscope-based camera-IMU calibration method, namely CRISP [44]. Because it requires video and gyroscope input, CRISP was calibrated on Motion.
The calibration methods were evaluated by estimating the downtilt and azimuth angles of the checkerboards by applying the calibration results (i.e., the rotations between the camera and the IMU sensors) to an uncalibrated phone. Statistics are presented in Table 2 as the mean error and standard error with respect to the ground truth. As seen from Table 2, our method always shows the best or very close to the best performance in downtilt estimation. Meanwhile, for azimuth estimation, none of the three existing methods can reduce the error effectively, whereas our method always achieves a large improvement by considering the possible accelerometer-magnetometer misalignment.
Moreover, we also report the results for the accuracy of our calibration method when it is calibrated on a proper dataset, BoardL, in which no restrictions are placed on the checkerboard orientations. We tested configurations of either keeping or removing the variables describing the relative rotation between the accelerometer and magnetometer in our calibration, as well as varying the weighting parameter w from 0.05 to larger values with a fixed step size of 0.01.
In Figure 6, the two solid blue lines (calibration without accelerometer-magnetometer DoFs) for the downtilt and azimuth angles show opposite tendencies: when w is low, the downtilt estimation is improved relative to the baseline in green lines, whereas the azimuth error is very large, and when w is high, the situation is reversed. When the additional DoFs between the accelerometer and magnetometer are added, both types of error are effectively reduced simultaneously when w is below 0.3, as shown by the red lines. This indicates the existence of rotation between the accelerometer and magnetometer.
Furthermore, the curves of the downtilt/azimuth error versus w represented by the red lines show broad plateaus, thereby demonstrating the robustness of the method with respect to w.
Therefore, we find that our method works best with the additional DoFs between the accelerometer and magnetometer and with w set to a low value, approximately in the range [0.1, 0.3]. With these configurations, the downtilt and azimuth accuracies are approximately 0.35 and 3.4 degrees on average, respectively, with standard deviations of 0.25 and 3.4 degrees, which are generally superior to those reported for the state of the art in Table 2.
Visual Pose Estimation Methods
For a comparison of our visual pose estimation method with PWP3D [35] and D2CO [32], each method was plugged into our overall pipeline for the estimation of the antenna downtilt and azimuth angles, and the estimation results were then evaluated against the ground truth. The camera-IMU calibration parameters were obtained from the results of our calibration method on BoardL with w set to 0.1.
To ensure a fair comparison, we slightly modified the two existing methods. For PWP3D, we used the multi-view version and replaced the foreground/background probabilistic model with a deterministic one based on antenna silhouettes to eliminate errors in its segmentation step; for D2CO, we trivially extended it to a multi-view version by accumulating the costs from each view, which is the strategy adopted in PWP3D. To initialize each method, we used our approximate poses, whose projections in each view show extensive overlap with the ground truth, as shown in the first column of Figure 12. In Figure 13, the distributions of the estimation errors are shown as cumulative histograms. We first note that all three methods yield satisfactory results on more than 90% of the data, which demonstrates the success of the initialization using our approximate poses. However, there are several exceptions in which PWP3D and D2CO fail to find the optimal poses, as shown toward the right end of the X axis; the results for three of these cases are shown in Figure 12, together with the initial poses and the results of our method. Table 3 presents the quantitative results for the three methods, where our method shows the lowest mean, standard deviation and maximum errors.
We then evaluated the effects of the various camera-IMU calibration methods on the antenna downtilt and azimuth estimation accuracies using AntennaL. We performed the estimations using the pipeline proposed in this paper, in which we configured the camera-IMU calibration parameters offline using the results of the various calibration methods. For [37,50,51], we used the calibration results obtained from BoardF; for our method, we used the calibration results obtained from BoardL with w set to 0.1. Table 4 compares the performances of all calibration methods in terms of the mean, standard deviation and maximum errors for both downtilt and azimuth estimation. All four methods improve the downtilt estimation accuracy compared with an uncalibrated phone in terms of both the average and standard errors. Among them, the method proposed in [37] and our method yield results very close to the best result, achieved using the method of [51], with mean/standard errors lower than 0.5/0.3. However, in terms of azimuth accuracy, our method not only yields a reduction in the mean/standard error of up to 1.5, but also shows good control over the maximum error, making it the only method to achieve a lower azimuth error than the conventional maximum tolerance of 15 degrees, whereas the other methods show no obvious improvement (compared with the default camera-IMU relation). These findings confirm the superior performance of our method in simultaneously improving both the downtilt and azimuth estimation accuracies.
Extrinsic Camera Calibration from Pattern and SfM
We also evaluated the performance of our method when the camera is extrinsically calibrated using an SfM technique. We substituted the extrinsic camera parameters obtained from the checkerboards in Section 6.2.2 with results obtained from an SfM implementation in OpenMVG [55] while leaving all other details of the configurations unchanged. Figure 14 and Table 5 show the resulting estimation errors. Whereas nine out of 65 (or 13.8%) of the data cases failed to yield an estimate with a downtilt error of no more than 1.5 degrees and an azimuth error of no more than 15 degrees when the SfM technique was used, as seen from Figure 14, the majority of the results (at least 75%) show a precision comparable to that of the results obtained using a calibration pattern, as indicated by the first three quartiles reported in Table 5. The few unsatisfactory estimates are strongly related to the accuracy of the SfM calibrations. To demonstrate this, we first measured the SfM accuracy against the calibration results obtained using the checkerboards by comparing the corresponding camera rotations relative to the first viewpoint for both calibration results for each data group, where the largest rotation angle was treated as the SfM accuracy. The histogram of the rotation angles is shown in Figure 15, in which seven of the nine data groups with the largest errors (larger than 1.5 degrees) also appear among the nine instances of unsatisfactory estimates, demonstrating the strong connection between the estimation error and the SfM accuracy.
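The per-group SfM accuracy metric used above (the largest relative rotation angle between the two sets of calibration results) is easily computed; a small sketch follows.

```python
import numpy as np

def rotation_angle_deg(R_a, R_b):
    """Angle (degrees) of the relative rotation between two estimates of
    the same camera rotation, e.g., one from SfM and one from the
    checkerboard calibration."""
    R = R_a @ R_b.T
    c = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(c))
```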
Moreover, we find that the failures in the SfM experiment can be identified from the results of the coarse visual pose estimation stage: the projection of the center point of the recovered 3D principal axis falls outside of the antenna contour from at least one viewpoint, as demonstrated in Figure 16. This situation arises in all nine of the unsuccessful estimations and can be used to trigger an instruction to the end user to capture more images or to manually select matching features in the SfM procedure to overcome the problem.
Performance on Working Antennas
In this subsection, we report an evaluation of our overall downtilt/azimuth estimation pipeline on working antennas to assess whether it satisfies the minimum accuracy requirements for industrial applications, i.e., to assess whether the errors are lower than 1.5/15 degrees.
The parameter configurations and evaluation method used are the same as those applied in the evaluations presented in Section 6.2.2. As demonstrated in Table 6, the largest errors on the antenna downtilt and azimuth angles estimated using our method are all below the tolerance values, demonstrating the applicability of our method in an industrial environment. Two typical visual pose estimation results are presented in Figure 17, where the red curves represent the projections of the 3D model based on the estimated poses.
Discussion and Conclusions
The focus of our study is the development of a novel non-contact solution for estimating antenna tilt and azimuth angles using a mobile phone as the measuring device. The two key points of our pipeline are the newly proposed camera-IMU calibration method for mobile phones and the coarse-to-fine visual pose estimation method.
The major difference between our camera-IMU calibration method and the state-of-the-art [37,50,51] is the inclusion of additional DoFs between the accelerometer frame and the magnetometer frame, which allows for decoupling of the accelerometer-related error and the magnetometer-related error and therefore leads to good performance on both tilt and azimuth estimation tasks simultaneously.
The crucial distinction between our visual pose estimation method and existing ones is the coarse-to-fine strategy we adopt. With this strategy, we avoid any manual pose initialization and, more importantly, are able to refine the approximate pose as a constrained optimization problem, yielding higher accuracy than the state of the art [32,35]. In addition, our method is based on multi-view contours instead of stable visual features, which makes it well suited to pose estimation for textureless, simple-shaped antennas.
The major limitation of our work is the excessive computational resource consumption of the global optimization step of the pose refinement procedure. In the future, we will attempt to alleviate this problem by adding simple user interactions and/or developing more heuristic strategies for search space reduction.
We are also aware of the influence of hand shake on accelerometer outputs when no tripod is used. In our experience, a simple mean filter applied to the accelerometer data can effectively reduce the impact provided the shaking is slight; nevertheless, we intend to exploit methods used in image stabilization to fundamentally address the issue. Another related problem is the simple strategy for fusing pose measurements from multiple viewpoints: though the present method of averaging works well in most cases, it may fail to generate optimal results when outliers exist, as indicated by the relatively large error in the last row of Table 4. To overcome this problem, we have two working directions for the future: one is to adopt more powerful fusion methods, and the other is to integrate information from more sensors to obtain an effective quality metric for pose measurements.
Finally, we note that, aside from mobile telecommunications, our method can also be useful in areas such as the space field [64], indoor navigation [65], unmanned aerial vehicles [66], and so on.

Author Contributions: Weidong Geng and Zhen Wang conceived of the idea. Zhen Wang and Bingwen Jin performed the experiments and analyzed the data. Zhen Wang wrote the paper. Weidong Geng and Bingwen Jin assisted in revising and proofreading the paper.
Conflicts of Interest:
The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses or interpretation of data; in the writing of the manuscript; nor in the decision to publish the results.
One Pot Use of Combilipases for Full Modification of Oils and Fats: Multifunctional and Heterogeneous Substrates
Lipases are among the most utilized enzymes in biocatalysis. In many instances, the main reason for their use is their high specificity or selectivity. However, when full modification of a multifunctional and heterogeneous substrate is pursued, enzyme selectivity and specificity become a problem. This is the case of the hydrolysis of oils and fats to produce free fatty acids or their alcoholysis to produce biodiesel, which can be considered cascade reactions. In these cases, the presence of intermediate products, such as diglycerides or monoglycerides, adds to the original heterogeneity of the substrate. With these heterogeneous substrates, enzyme specificity can mean that some substrates (initial substrates or intermediate products) are not recognized as such by the enzyme (in the worst case scenario, they may act as inhibitors), causing yields and reaction rates to drop. To solve this situation, a mixture of lipases with different specificity and selectivity, and differently affected by the reaction conditions, can offer much better results than the use of a single lipase exhibiting a very high initial activity or even the best global reaction course. This mixture of lipases from different sources has been called "combilipases" and is becoming increasingly popular. Combilipases include the use of liquid lipase formulations or immobilized lipases; in some instances, the lipases have been coimmobilized. Some discussion is offered regarding the problems that this coimmobilization may give rise to, and strategies to solve some of these problems are proposed. The use of combilipases may in the future be extended to other processes and enzymes.
Enzymatic Biocatalysis
Enzymes are extremely precise biocatalysts, exhibiting this precision in a chemo-, regio- and stereoselective manner towards their products when applied in biotransformations at lab or industrial scale, so that their use has been gaining prominence in recent years [1][2][3][4][5][6][7][8][9][10]. This precision may be coupled with high substrate specificity (e.g., stereospecificity) [11][12][13][14][15][16][17][18][19][20][21][22]. Additionally, the sustainability upgrade upon switching from chemical catalysis to biocatalysis is another aspect to be taken into account, since biocatalysis and green chemistry share many common features [23][24][25]. In fact, considering the type of catalyst used, enzymes are obtained from easily accessible renewable sources, are biodegradable, and are fundamentally innocuous and harmless, and their use generally avoids the need for toxic and expensive metals. From the point of view of the biocatalyzed process, reaction conditions are usually very mild (atmospheric pressure, room temperature), and many protection-deprotection steps can be avoided, leading to more economical synthetic routes and generating less waste than conventional processes [16,18].
Although enzymes are very precise in performing their catalytic activity, in many cases it is necessary to increase their activity towards industrially relevant substrates (in some instances far from the physiological ones) and/or their stability, to make them compatible with operational conditions, mainly at the industrial level [26]. For this aim, there are several accepted strategies. One simple strategy is to exploit Nature to obtain the enzyme that best fits the specific process under development, an approach greatly advanced by metagenomics tools [27][28][29][30][31].
One very interesting paper shows the integrated use of diverse techniques to obtain an enzyme with new properties. It describes the creation of an enzyme bearing an ex novo-created active center, generating the so-called plurizymes, using protein modelling and site-directed mutagenesis [88]. The same research group, in a further paper, exemplified how the coupled utilization of several tools may lead to results beyond expectations. In a second step, using dynamic simulation, protein modelling and directed mutagenesis, the activity of the plurizyme's second active center was improved [89]. Then, an irreversible covalent inhibitor bearing a catalytic metal complex was designed. This was attached to just one of the Ser residues located in the active centers, enabling a fully directed chemical modification of the plurizyme, and finally the artificial semimetal plurizyme was used in a cascade reaction involving both the enzymatic active center and the metal catalyst [89].
Modification of Monofunctional Substrates
When using monofunctional substrates, the enzyme must be selected to recognize the substrate and perform the reaction in an optimal way [90][91][92]. The situation is apparently simple: only one substrate and one product may exist in the reaction medium. However, even using monofunctional substrates, some changes in the reaction conditions may occur that can significantly affect the enzyme performance. For example, if the reaction is an ester hydrolysis performed without pH control, it is likely that the pH of the reaction medium decreases during the reaction, and the intensity of this pH decrease will be related to the concentration of the substrate (Figure 1). The effect of pH on enzyme properties, including activity, should be considered before selecting the optimal enzyme for the process [92][93][94][95][96]. A similar pH decrease is found in the hydrolysis of some amides, e.g., β-lactamic antibiotics, where the amino group has a significantly low pK value (under pH 5) [97][98][99][100]. If that is the case, it is also likely that the optimal enzyme under the initial conditions may not be the optimal one under the final reaction conditions. In this situation, it is sensible to think that the selection of the "best" enzyme to catalyze this reaction may be more complex than it initially seems. The use of initial rates under initial conditions will provide incomplete information that can lead to erroneous conclusions when considering the full reaction course (Figure 1). Full reaction courses using the target substrate concentrations should be considered to actually identify the best enzyme for a specific process. In some cases, the use of several enzymes may be a more convenient strategy: one with optimal activity under the initial reaction conditions, another with optimal activity under the final reaction conditions. The amount and proportion of both enzymes should be optimized in each specific case (kinetics of the enzymes with the substrate, concentration of the substrate, initial pH value, etc.). However, we have been unable to find any example of this use of enzyme mixtures. The situation will be different when using a high concentration of buffer that can maintain the pH throughout the reaction (but this can complicate the final downstream processing of the product) or if the pH value is controlled by continuous titration (but the titrating agent may affect the enzyme, substrate or product stabilities) [60,61] (Figure 1).
Figure 1.
Effect of the change in the pH value during the reaction on the selection of the optimal enzyme. The figure represents two theoretical enzymes, one with high activity at the optimal pH (initial pH value) but a strong dependence on the pH, and the other with a lower activity but active at acidic pH values (a). Panel (b) shows the theoretical reaction courses when the reaction is performed under a controlled pH value (solid line) or when the pH decreases along the reaction course (dashed line).
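To make the magnitude of such a pH drop concrete, the short sketch below estimates the pH of an unbuffered medium as a weak-acid product accumulates (illustrative values only: a 0.5 M ester substrate and a pKa of 4.8, typical of carboxylic acids; water autoionization and any buffering by the substrate are neglected):

```python
import numpy as np

def ph_of_weak_acid(c_total, pka=4.8):
    """pH of an unbuffered solution containing c_total mol/L of a monoprotic
    weak acid, from the equilibrium [H+]^2 + Ka*[H+] - Ka*c_total = 0."""
    ka = 10.0 ** (-pka)
    h = (-ka + np.sqrt(ka**2 + 4.0 * ka * c_total)) / 2.0
    return -np.log10(h)

# acid released at 10%, 50% and 100% conversion of a 0.5 M ester substrate
for conversion in (0.1, 0.5, 1.0):
    print(f"{conversion:>4.0%} conversion -> pH {ph_of_weak_acid(0.5 * conversion):.2f}")
```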
One condition that will always change during the process is the ratio between the concentrations of substrate and reaction product. Furthermore, in many instances, the enzymes may be inhibited by the reaction product [101][102][103][104][105][106][107][108]. That way, a specific enzyme may exhibit an optimal performance in the absence of product, or when the concentration of substrate is much higher than that of the product, but suffer a strong inhibition as the product accumulates, stopping the reaction long before total modification of the substrate is reached, even when this is thermodynamically feasible [109] (Figure 2). Therefore, under industrial conditions the optimal enzyme may not be the best enzyme under initial conditions; instead, a compromise enzyme offering good activity (but lower initial activity than the "optimal" enzyme) and low product inhibition, giving more linear reaction courses (Figure 2), may be desirable. In fact, the situation may be so complex that the best overall enzyme may differ depending on the initial substrate concentration utilized, the exact reaction conditions, etc. Again, it is possible that optimal reaction courses (more linear and reaching higher yields) may be obtained by mixing diverse enzymes presenting different kinetic features. Once again, we have not been able to find any paper using enzyme mixtures clearly aimed at solving this problem.
Figure 2.
Effect of product inhibition on the reaction courses of two hypothetical enzymes, one with a very high initial activity but showing a strong inhibition by the product, and the other with a lower activity but without product inhibition.
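The qualitative behavior in Figure 2 can be reproduced with a toy kinetic model. The sketch below (all parameters hypothetical) integrates substrate depletion for two enzymes obeying Michaelis-Menten kinetics, where competitive product inhibition enters as the (1 + P/Ki) factor in the denominator:

```python
import numpy as np
from scipy.integrate import solve_ivp

def course(vmax, km, ki, s0=100.0, t_end=50.0):
    """Substrate concentration over time for v = Vmax*S / (Km*(1+P/Ki) + S),
    with product P = S0 - S; ki=np.inf disables product inhibition."""
    def rhs(t, y):
        s = max(y[0], 0.0)
        p = s0 - s
        return [-vmax * s / (km * (1.0 + p / ki) + s)]
    return solve_ivp(rhs, (0.0, t_end), [s0])

fast_but_inhibited = course(vmax=10.0, km=5.0, ki=2.0)     # slows as P builds up
slow_but_robust = course(vmax=4.0, km=5.0, ki=np.inf)      # near-linear course
for sol, name in ((fast_but_inhibited, "fast/inhibited"),
                  (slow_but_robust, "slow/robust")):
    print(f"{name}: {100.0 * (1.0 - sol.y[0, -1] / 100.0):.1f}% conversion at t=50")
```

With these invented parameters, the faster but inhibited enzyme slows strongly as product accumulates, while the slower, non-inhibited enzyme keeps a nearly linear course and reaches higher final conversion, mirroring Figure 2.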
It should also be considered that several changes in the reaction conditions may occur simultaneously, making it even more complex to find a truly definitive optimal enzyme. However, these changes along the reaction course are usually not considered in the selection of a biocatalyst for a specific process. In high-throughput screening, for example, which is normally used in directed evolution [110][111][112], analyzing the whole reaction course adds difficulties to a screening that by definition must be very rapid [113][114][115][116]. That way, the selection of an optimal enzyme as catalyst for a specific process may not be as simple as it looks, and in some cases there may be no real "optimal" enzyme.
Figure 3.
Schematic representation of enzymatic recognition capability for polyfunctional substrates: (a) regioselectivity (usually denoted as site selectivity) in transforming certain functional groups (FG, in blue) into a product (Pr in red) without altering others; (b) regioselectivity upon addition of an asymmetric reagent (R-H) to an asymmetric double bond; (c) prochiral discrimination by transforming only a functional group adjacent to the stereogenic center; (d) prochiral discrimination by transforming only a functional group directly attached to the pro-stereogenic center.
For these homo-multifunctional substrates, the term regioselectivity describes the preference for reaction of a particular atom or group in a molecule that contains at least one other atom or group of the same type (Figure 3a). This type of regioselectivity is often referred to as site selectivity, in order to distinguish it from the capability of preferentially forming one regioisomer over the other upon addition to a multiple bond (Figure 3b). The capability of recognizing prochirality, by converting only one of the groups adjacent to the stereogenic center (Figure 3c) or only one of the functional groups attached to a pro-stereogenic center (Figure 3d), is also noteworthy, in each case leading mainly to one enantiomer over the other. In this case, together with a good enzyme activity, it is necessary for the enzyme to present the desired selectivity to give the target product, and the desired specificity, that is, the ability to recognize the initial substrate but not the first reaction product, stopping the reaction at this point [146,147,[153][154][155]]. An enzyme with full regio- and enantioselectivity towards the desired product and unable to recognize this product as substrate (or as inhibitor) will be the one that gives the maximum yield of the target product with minimal contamination by byproducts (other reaction products formed by further modification of the target product, or remaining initial substrate) (Figure 3).
If the full modification of the multifunctional substrate is the objective of the process, enzyme specificity becomes a problem, as it can limit the recognition of some of the partially modified substrates or intermediate products (Figure 4); that way, the enzyme will be unable to provide the desired full modification of the substrate, giving only a partial yield. The problems in the selection of the best enzyme to catalyze the reaction that have been discussed above for the monofunctional substrates remain in this instance, but now it is also necessary to consider how the enzymes recognize the different intermediate products (mono-modified, di-modified, etc., and all in different positions) [146,147,[153][154][155]] (Figure 4). This may be very complex if the number of possible intermediate products is large. Moreover, in many instances the modification will not be random, and each enzyme may follow a different route in the full modification of the substrate, depending on the enzyme selectivity and specificity (Figure 5). The difficulty in selecting an optimal biocatalyst may increase further, as it is possible that some enzymes that are not very suitable for the first modification of the starting substrate may be more active with progressively more extensively modified intermediate products [156][157][158][159]. The complexity of the catalyst selection may increase even more if some of the intermediate products are chiral, as some enantiomers may not be recognized as substrates by some of the enzymes. That way, once again, the use of only the initial reaction rates provides incomplete information to select the optimal enzyme for the process, making it necessary to evaluate full reaction courses to really determine the best enzyme. In these instances, the combined use of several enzymes may be the best solution, as it makes it possible to use enzymes able to optimally hydrolyze each of the likely intermediate products, permitting the reaction to always reach 100% conversion yield while maintaining high reaction rates [160,161] (Figure 7).

Figure 6.

Solving the problem of product inhibition by using mixtures of several enzymes, one with high activity and high product inhibition, and the other with lower activity but not inhibited by the product.
This ability of lipases is perhaps based on their mechanism of action, called interfacial activation, which makes the lipase active center very flexible [180,181]. The active center of most lipases is covered by a polypeptide chain called the "lid". The internal side of the lid is hydrophobic, interacting with the hydrophobic area surrounding the active center and isolating it from the aqueous medium.
In the presence of a hydrophobic surface, such as a drop of oil, the lid opens, exposing the active center and its hydrophobic neighborhood, and the lipase becomes adsorbed and stabilized on that surface [182]. Induction of interfacial activation is not limited to oil drops: lipases can be adsorbed via their open form on any hydrophobic surface, such as hydrophobic proteins, other lipases and hydrophobic supports [81].
Oils and Fats as Heterogeneous Substrates
Oils and fats are mainly composed of triglycerides, with some free fatty acids and very small amounts of mono- and diglycerides. In this context, triglycerides may be considered trifunctional substrates that are racemic (when presenting different acyl substituents) or prochiral (when the acyl substituents in positions 1 and 3 are identical) (Figures 8 and 9) [192][193][194][195]. There are three ester bonds between glycerin and the fatty acids. If the substituents in positions 1 and 3 are the same, the triglycerides are prochiral substrates, and after hydrolyzing position 1 or 3, one enantiomer of the diglyceride will be produced (Figure 8). If the substituents in these positions are different, they are already chiral substrates, very likely in racemic form (Figure 9). That is, the lipase-catalyzed hydrolysis of a "pure" triglyceride may initially be a complex problem because the substrate may be a racemic mixture. Moreover, after the first modification, several diglycerides may be produced, differing in enantiomers, regioisomers, and even free fatty acid composition (Figures 8 and 9). The final intermediate product, the monoglyceride, will have a similar diversity in its composition (Figure 9).
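To appreciate how quickly this diversity grows, the following sketch (acyl groups A, B and C are placeholders; stereochemistry is tracked only implicitly through the sn-position labels) enumerates the glyceride species reachable by stepwise hydrolysis of a single triglyceride:

```python
from itertools import combinations

def intermediates(triglyceride):
    """Enumerate all glyceride species reachable by stepwise hydrolysis of a
    triglyceride given as {'sn-1': 'A', 'sn-2': 'B', 'sn-3': 'C'}; each
    species is the frozen set of (position, acyl) pairs still esterified."""
    positions = list(triglyceride.items())
    species = {}
    for n_remaining in (3, 2, 1, 0):
        level = {frozenset(c) for c in combinations(positions, n_remaining)}
        name = {3: 'triglyceride', 2: 'diglycerides',
                1: 'monoglycerides', 0: 'glycerol'}[n_remaining]
        species[name] = level
    return species

for name, level in intermediates({'sn-1': 'A', 'sn-2': 'B', 'sn-3': 'C'}).items():
    print(f"{name}: {len(level)} species ->",
          [sorted(dict(s).keys()) for s in level])
```

Even this single triglyceride yields three distinct diglycerides and three monoglycerides; for a natural oil containing dozens of different triglycerides, the pool of intermediates an enzyme must handle becomes far larger, as discussed below.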
In some instances, the goal of the research is not the full hydrolysis of the triglyceride, but to release only specific fatty acids to produce enriched solutions of the remaining glycerides or the free fatty acids [153][154][155][196][197][198]. This is the case when the nutritional value of triacylglycerols is the key point, as this property depends not only on the fatty acid composition, but also on their positional distribution. Therefore, lipases can be very useful for the preparation of novel structured lipids possessing improved dietary or functional properties, such as low-caloric triglycerides [199], or for the enrichment of triglycerides with ω-3 poly-unsaturated fatty acids, such as eicosapentaenoic acid or docosahexaenoic acid [198,[200][201][202]].
In this case, enzyme specificity is the key to an optimal result, and the selection of the optimal enzyme will aim at the highest and fastest accumulation of the target free fatty acid or the target glyceride [153][154][155][174][203][204][205].
However, a highly specific lipase becomes a serious problem when the objective is the full modification of the triglyceride, as the "substrate", even just the main substrate at each reaction stage (triglyceride, diglyceride or monoglyceride) (Figure 10), will be different; additionally, if it is a hydrolytic process, the pH may be changing during the process (Figure 1). That way, selecting an optimal lipase for the full modification of even a pure triglyceride becomes very problematic. Full reaction courses using the target concentration of the substrate should be studied to select the most adequate enzyme for the reaction. In this respect, the lipase most active on the original triglyceride under the initial reaction conditions may be fully unsuitable to modify some of the final monoglycerides at a more acidic pH value. As stated above, the situation is more complex if some of the intermediate products are not good substrates for the enzyme but are good inhibitors: both reaction rates and reaction yields will be decreased (Figure 6). Internal acyl migration is undoubtedly the main side reaction found when the regiospecific synthesis of structured triglycerides is intended. In fact, this process introduces serious complications for obtaining pure regioisomers, either diglycerides [206] or monoglycerides [207], through different acyl-transfer processes, mainly trans- or interesterifications. It is known that the acyl migration rate depends directly on the reaction temperature (the lower the temperature, the longer the reaction time required) [208], the pH value [153,209], the water activity (probably affecting the activation energy of the reaction by modifying the charge distribution of the transition state [210][211][212]) and the type of solvent used (generally, polar solvents are described to reduce acyl migration [213]).
The mechanism of these acyl migrations has remained controversial, but recently Mao et al. [214] published a very interesting study applying quantum chemical models based on density functional theory at the molecular level. With this computational technique, these authors compared two possible situations, non-catalyzed and lipase-catalyzed acyl migration. In the first case, they considered three different pathways (concerted, stepwise, or stepwise including a water molecule, as shown in Figure 11), concluding that the last one, a stepwise pathway with the aid of water, shows the lowest activation energy for the rate-limiting step (31.7 kcal/mol for path (c) versus 41.8 kcal/mol for (b)), although in any case non-catalyzed migration will proceed extremely slowly. Interestingly, they observed that the lipase-catalyzed migration, depicted in Figure 12, was much faster than any of the non-catalyzed migration pathways: the rate-limiting step (the last one), implicating a water molecule, shows an activation energy of 18.8 kcal/mol, very similar to the experimentally measured value (17.8 kcal/mol) [210]. Acyl migration is a problem when a regioselective reaction is intended, but if a full modification of a triglyceride is pursued, it becomes an advantage [215][216][217][218][219] (Figure 13). It may somehow mitigate the effects of enzyme specificity, as it allows the enantio- or regioisomers present in the reaction to evolve spontaneously into isomers that may be good substrates for the enzyme, enabling strictly 1,3-regioselective lipases to fully modify the triglycerides [215,220,221].

Figure 13.

Full modification of triglycerides using a strict 1,3 lipase thanks to acyl migration. The figure shows the diversity of reaction products in a 1,3-specific lipase-catalyzed hydrolysis of a model triglyceride; the first step will produce a complex mixture of regio- and stereoisomers of diglycerides, while the second hydrolytic step will furnish a prochiral 2-acylglycerol. The latter, after a lipase-catalyzed acyl migration, will lead to a racemic mixture of chiral monoacylglycerols, which eventually can be hydrolyzed to glycerol.
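As a back-of-the-envelope comparison of the barriers quoted above, the sketch below applies the Eyring equation (transmission coefficient taken as 1, T = 298.15 K) to the activation energies reported by Mao et al.:

```python
import math

KB, H, R = 1.380649e-23, 6.62607015e-34, 8.314462618  # SI constants
T = 298.15     # K
KCAL = 4184.0  # J per kcal

def eyring_rate(dg_kcal):
    """First-order rate constant k = (kB*T/h) * exp(-dG/(R*T))."""
    return (KB * T / H) * math.exp(-dg_kcal * KCAL / (R * T))

barriers = {'non-catalyzed, stepwise (b)': 41.8,
            'non-catalyzed, water-assisted (c)': 31.7,
            'lipase-catalyzed, water-assisted': 18.8}
for label, dg in barriers.items():
    print(f"{label}: k = {eyring_rate(dg):.2e} s^-1")
```

The roughly nine orders of magnitude separating the best non-catalyzed pathway from the lipase-catalyzed one is consistent with the authors' conclusion that spontaneous migration is extremely slow, while enzyme-mediated migration proceeds on practical time scales.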
Moreover, neither natural oils nor fats are composed of a single triglyceride; they present many different fatty acids, in different positions and giving different enantiomers [222][223][224][225] (Figure 14). Being a collection of many different triglycerides, any oil or fat is in fact a very complex and heterogeneous substrate. If choosing a single optimal lipase to modify just a pure triglyceride was already complex, the fact that a natural oil may present dozens or hundreds of different triglycerides makes the situation very difficult [222][223][224][225]. The best lipase for the main components of the oil may be strongly inhibited by other triglycerides, or by some of the produced diglycerides or monoglycerides. Moreover, this lipase may not recognize some of the fatty acids attached to the glycerin, preventing it from reaching full oil conversion. That way, the lipase that gives the best initial rates may not reach full oil modification, or may slow down the reaction in the final stages (Figures 4 and 6). Moreover, in oil hydrolysis reactions, the control of the pH using a titrating reagent is not possible [226][227][228], as the addition of a titration agent can promote the formation of soaps. That way, a decrease in the pH value is expected during the hydrolysis reaction (Figure 1). Again, the selection of the best lipase should consider the full reaction course, and it may be very hard to find a single enzyme that has the best properties over the whole process. There are two cases where the full modification of all the glycerides contained in an oil or fat is desired: the hydrolysis of the substrates to transform all glycerides into free fatty acids [226,228] and the alcoholysis of the substrates to produce biodiesel [229][230][231][232][233][234][235].
Lipase Production of Free Fatty Acids via Hydrolysis of Oils and Fats
In oleochemistry, the main application of lipases is in the hydrolysis of vegetable oils to produce fatty acids [226,236,237]. Free fatty acids present a wide range of uses, such as in the food industry, soap manufacturing or surfactants, and some biomedical applications [226,236,237]. The main chemical method of hydrolysis of fats and oils to produce fatty acids and glycerol involves high temperature and pressure and presents a high yield. However, under these extreme conditions, oil and fatty acid polymerization and formation of byproducts occur, resulting in dark fatty acids and colored aqueous glycerol solutions [238]. Instead, using lipases for this process results in energy savings and minimization of the thermal degradation of substrates and products, and the unsaturated fatty acids can be produced without oxidation [239].
However, enzymatic hydrolysis presents the disadvantage of enzyme specificity compared to chemical hydrolysis. Conventional chemical processes produce the full hydrolysis of the triglycerides, while using the enzymatic technology the final yield is limited by the regioselectivity or the substrate specificity of the lipase used. For example, Candida rugosa lipase produced by submerged fermentation was used for the hydrolysis of sunflower oil, resulting in a highest yield of 39.5% hydrolysis of the original oil [240]. A fungal lipase from Aspergillus niger has been tested for castor oil hydrolysis, and the best performance achieved was around 60% in 72 h [241].
Other problems that hinder the full hydrolysis of oil by lipases are the production of mono- and diglycerides and the inhibition caused by some of the fatty acids (Figures 2 and 6). The intermediate glycerides cannot be easily recognized by some lipases, while the accumulation of fatty acids in the reaction medium can produce product inhibition. Finally, during enzymatic oil hydrolysis, the reaction pH is generally left uncontrolled to avoid saponification and problems in the purification steps, so the final pH is much more acidic than the initial one. Thus, the reaction conditions will be heterogeneous and will change along the reaction course. It could therefore be assumed that the full hydrolysis of complex substrates such as vegetable oils could be better performed using a mixture of biocatalysts made up of different enzymes, with different specificities and activities [242] (Figure 6).
Transesterification
Transesterification is the reaction between triglycerides (oils and fats) and alcohols to produce fatty acid alkyl esters and glycerol. When short-chain alcohols are used, like methanol or ethanol, the resulting ester mixture is called biodiesel [173,243]. Currently, the main synthetic approaches used for biodiesel production are alkaline-catalyzed and acid-catalyzed transesterification (with simultaneous esterification of free fatty acids) [244,245]. The technical issues associated with chemical transesterification, such as high energy requirements, difficult recovery of the catalyst and glycerol, and environmental pollution, have attracted interest towards the enzymatic process, using lipases as catalysts [246,247].
In the transesterification reaction catalyzed by lipases, several factors affect the final reaction yield. First, the lipase source: fortunately, there are lipases from very diverse origins, such as animal, plant and microbial sources, and many of them have been tested in biodiesel production [246,248]. The main problem for each specific lipase is associated with the second factor in biodiesel production, which is the triglyceride source [249]. As mentioned before, oils and fats are very heterogeneous substrates (Figure 14). Therefore, lipase specificity will affect the enzyme activity towards each substrate and, thus, the final reaction yield. For example, comparing soybean, sunflower and rice bran oils with the three most used commercial immobilized lipases from Novozymes, Novozym 435 (an immobilized lipase B from Candida antarctica [250]), Lipozyme TL IM (an immobilized lipase from Thermomyces lanuginosus [251]) and Lipozyme RM IM (an immobilized lipase from Rhizomucor miehei [252,253]), the final yield changed for each lipase and each oil, from 50% using Novozym 435 to 5% using Lipozyme RM IM for sunflower oil [254].
Other aspects that change the final yield of lipase-catalyzed transesterification reactions are the alcohol source and the alcohol:oil molar ratio. As stated above, methanol and ethanol are the most used alcohols due to their low cost and the final properties of the produced esters as fuel. Nevertheless, some lipases are inhibited/inactivated by methanol or ethanol; moreover, although the stoichiometric alcohol:oil molar ratio is 3:1, some excess of alcohol may be needed to displace the reaction towards synthesis, as glycerin will remain in the medium as a competitor.
In this case, the choice of alcohol, as well as the lipase source, can affect the achieved yield and reaction course [254][255][256][257].
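To translate these molar ratios into practical amounts, the following sketch computes the methanol charge for a given oil mass (assuming an average triglyceride molar mass of 880 g/mol, a typical value for common vegetable oils; the 6:1 ratio is only an example of the excess often applied):

```python
MW_METHANOL = 32.04   # g/mol
MW_TAG = 880.0        # g/mol, assumed average triglyceride molar mass

def methanol_charge(oil_mass_g, molar_ratio=6.0):
    """Grams of methanol for a given alcohol:oil molar ratio
    (the stoichiometry of transesterification is 3:1)."""
    mol_oil = oil_mass_g / MW_TAG
    return molar_ratio * mol_oil * MW_METHANOL

for ratio in (3.0, 6.0):
    print(f"{ratio}:1 ratio -> {methanol_charge(1000.0, ratio):.0f} g "
          f"methanol per kg oil")
```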
The combination of different lipases in the reaction, as will be discussed later, is an interesting way to reduce the reaction time in enzymatic transesterification and increase the final yield, allowing the full alcoholysis of the triglyceride (Figure 6).
Hydroesterification
Another possibility to produce biodiesel by an enzymatic route is a strategy called hydroesterification. This process involves a two-step mechanism where, firstly, the tri-, di- and monoglycerides are hydrolyzed to produce free fatty acids and glycerol, and in the second step, the purified free fatty acids are esterified using an alcohol, such as methanol or ethanol [258][259][260][261][262][263].
Hydroesterification allows the use of any fatty raw material (e.g., vegetable and waste cooking oils, animal fat, acid waste from vegetable oil production) independently of its acidity and water content [249]. This is an advantage over the single-step transesterification process, which inevitably generates soaps in the presence of fatty acids, inactivating the catalyst, and makes it difficult to separate the biodiesel from the glycerol; in hydroesterification, the glycerol is recovered at a high level of purity because of the absence of alcohol and salts in the aqueous phase.
Although many studies have used immobilized lipases for hydroesterification, recently the use of liquid lipases has gained popularity [235,[264][265][266]]. The use of a liquid lipase instead of an immobilized lipase implies the presence of water in the process, which is compatible with the first, hydrolytic step. Moreover, water dilutes the alcohol in the medium, reducing its denaturing effect on the enzyme, and leads to the formation of a second liquid phase in the reaction, creating a hydrophobic interface that is known to activate many lipases [235,[264][265][266]].
Advantages of the Simultaneous Use of Several Lipases to Fully Modify Multifunctional Substrates: The Concept of Combilipases
In Section 2 of this review, the difficulties in finding a single lipase with optimal properties over the whole reaction course of multifunctional and heterogeneous substrates [222][223][224][225], such as oils or fats, have been outlined; enzyme specificity, a feature that is in many instances critical for preferring biocatalysis over conventional catalysis, becomes a problem here. Conversion yields may remain well below the thermodynamic limit of the process if the enzyme is unable to modify some of the initial glycerides or intermediate products (Figures 4 and 6). Moreover, it is very likely that the reaction rate will be much lower than expected in the final steps of the reaction if the enzyme activity towards some of the remaining glycerides in the reaction mixture is inadequate, or if these glycerides are strong inhibitors of the enzyme (Figures 2, 4 and 6). Some authors, utilizing the lipase regioselectivity to obtain just a partial modification of the triglycerides, propose the enzymatic production of a biodiesel-like product, called ecodiesel, containing esters of the free fatty acids and monoglycerides [267][268][269][270][271][272].
Considering the problems in the full modification of oils, researchers have proposed as a solution the use of a mixture of several enzymes to catalyze these reactions [160,161]. These lipase mixtures have been named combilipases [242]. Although combilipases may also have advantages in the modification of monofunctional substrates (as previously discussed) (Figures 1, 2, 4 and 6), we have found very few studies using combilipases apart from oil modification. However, there are many examples of combilipases used in the full hydrolysis [242] or alcoholysis [273] of fats or oils. The ways combilipases have been utilized include mixtures of free enzymes, mixtures of individually immobilized lipases, and coimmobilized lipases. Next, some examples will be detailed.
Use of Lipases from Microorganisms that Produce Several Lipases
Some microorganisms naturally produce several lipase forms, or produce some lipase modifications (e.g., glycosylation) that can alter the final lipase properties in a heterogeneous way. It should be stressed that lipase features may be easily altered by very small modifications. In many instances, the enzymes are commercialized in the form of these lipase mixtures. Examples of lipase sources producing diverse lipases are Candida rugosa [274][275][276], Geotrichum candidum [277,278], Staphylococcus warneri [279], Penicillium simplissicimum [280], and Aspergillus niger [281]. Porcine pancreatic extract also presents several lipase forms [282]. These lipase extracts have been used in some instances to produce biodiesel or free fatty acids, employing the mixture of the different lipases [280,283]. In some instances, this may be done on purpose; in other cases the situation may arise just by chance, due to the lack of knowledge of the existence of different lipase forms [284,285] (in some cases, some of the lipases may be present in trace amounts and remain unknown to the researcher) [286]. In these cases, the study took advantage of mixing enzymes with different features, but the amount and proportion of each of them may not be easy to alter, and it would be fortuitous if the enzyme ratio were the optimal one for the studied process. The situation may be somewhat similar to the use of commercial enzyme cocktails in cascade reactions, like glycosidases [287][288][289][290][291][292]: the researcher cannot easily alter their composition.
One option to control this phenomenon is to fractionate all the lipase components. However, this may be too tedious and complex, although in some instances lipase fractionation may be achieved by the successive adsorption of the lipase extract on hydrophobic supports of different hydrophobicity [284,[293][294][295]]. In any case, it may be simpler to use lipases from different sources already in an isolated form than to purify and remix the different components of a crude lipase extract, as there is no guarantee that the mixture of the lipase fractions will suit the specific process under study. It is better to select lipases with the desired specificity properties than to use the natural mixture of lipases in a crude extract. In most instances, even the researchers using these enzymes do not try to explain the results by the existence of several lipase forms.
Another possibility may be the control of the expression of one lipase and not the others, like in the case of G. candidum, which produces several lipases as a function of the free fatty acids used as lipase inducers [296].
Nevertheless, there are no systematic studies on how changes in the ratio of the lipase forms alter the results when these enzymes are used in the production of free fatty acids or biodiesel. That way, after drawing attention to this possibility, we will not review this uncontrolled combilipase type further.
Use of Lipases Mixtures in Liquid Formulations
In biodiesel production, there is growing interest in using free enzymes as catalysts [235,265,266,[297][298][299][300][301][302][303][304][305]]. This interest is based on the low prices of the lipases commercialized for this use and on the launch of new lipases specially commercialized for biodiesel production in liquid formulation (e.g., Eversa Transform, launched by Novozymes) [306][307][308]. Moreover, in many instances the use of unsuitable supports raises problems that may be avoided using liquid formulations. In the synthesis of biodiesel, for example, Marty and coworkers have shown in many instances that glycerin (and water) can accumulate in the support, producing enzyme inhibition or inactivation [309][310][311][312]. Although this can be solved using very hydrophobic supports [309,[313][314][315]] or ultrasound, which can stir the biocatalyst particle from the inside and prevent the formation of the glycerin/water phase [316][317][318], some authors prefer to fully avoid the use of immobilized enzymes. In many instances, due to the non-aqueous nature of the medium, the enzymes will be used as aggregates [319][320][321][322] (that is, the problem of the water/glycerin phase in the biocatalyst particle is not fully avoided, and enzyme aggregates may be hard to reproduce).
In the hydrolysis of oils, the use of inadequate immobilization systems may also be problematic. This reaction produces fatty acids, monoglycerides and diglycerides, with detergent-like and/or anionic properties. These products can promote the release of the enzyme from the supports when physical immobilization, the simplest strategy to produce an immobilized enzyme biocatalyst [323][324][325], is employed. This may be solved using intermolecular crosslinking strategies or heterofunctional supports [81,326,327].
In aqueous media, lipases may have a tendency to form lipase-lipase aggregates [328][329][330][331], or to be adsorbed on any hydrophobic molecule of the crude protein solution [295,332]; however, the presence of oil drops and of all the detergent-like products should greatly reduce the risks of this enzyme aggregation (Figure 15). In any case, immobilization will be advantageous for simpler enzyme reuse and a general improvement of the enzyme features [60][61][62][63]. Nevertheless, there are many examples of using combilipases in liquid form for both applications.

Figure 15.

The tendency of lipases to give lipase-lipase dimers may be avoided by the addition of detergents or by interfacial activation on drops of biodiesel.
Hydrolysis of Oils and Fats Using Combilipases in Liquid Form
In a first example, lipase D from Rhizopus delemar, lipase N from Rhizopus niveus and lipase G from Penicillium sp. were used in the hydrolysis of soybean oil. These enzymes, used individually, gave free fatty acid production yields of 44%, 42% and 7.2%, respectively, after 10 h [333]. The authors showed that the use of combilipases formed by lipases G and N or lipases G and D made it possible to reach a hydrolysis yield of 95%-98% under similar conditions. This occurred even though lipase G was the least effective enzyme in the process [333], suggesting that it was able to eliminate some glyceride that reduced the reaction rate of the more efficient lipases D and N.
More recently, lipases from different A. niger strains (named A, B, C and D) were used in the hydrolysis of soybean oil [334]. This way, different forms of the lipase produced by the fungus could be obtained and assayed in this reaction. After optimization by a three-factor mixture design and triangular surface analysis, a combilipase using 31.2% of lipase B and 68.8% of lipase D was found to exhibit the optimal properties in this reaction, indicating a synergistic effect of the combilipase. This was attributed to the different fatty acid specificities of the two lipases (Figures 4 and 6), although different pH activity/stability profiles could also be relevant [334] (Figure 1). Again, the initial reaction rates of lipase B were significantly lower than those of lipase D. Curiously, the mixture of lipases A and B gave lower reaction rates than the individual enzymes, suggesting a mutual inhibition by some of the reaction products of each enzyme [334].
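As an illustration of how such a binary blend optimum can be located, the sketch below fits a Scheffé-type quadratic blending model to hypothetical yield data (all numbers invented for illustration; the cited study used a three-factor mixture design with real assays) and reports the predicted best lipase B/D ratio:

```python
import numpy as np

# hypothetical hydrolysis yields (%) measured at different lipase D fractions
x_d = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # fraction of lipase D in the blend
yields = np.array([55.0, 78.0, 88.0, 92.0, 80.0])

# Scheffé quadratic for two components: y = b1*x1 + b2*x2 + b12*x1*x2,
# with x1 = 1 - x_d (lipase B) and x2 = x_d (lipase D)
X = np.column_stack([1.0 - x_d, x_d, (1.0 - x_d) * x_d])
coef, *_ = np.linalg.lstsq(X, yields, rcond=None)

grid = np.linspace(0.0, 1.0, 1001)
pred = coef[0] * (1 - grid) + coef[1] * grid + coef[2] * (1 - grid) * grid
best = grid[np.argmax(pred)]
print(f"predicted optimum: {100*(1-best):.1f}% lipase B / {100*best:.1f}% lipase D")
```

In real work, the fitted optimum would of course be validated experimentally with the actual lipase preparations.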
In an earlier paper, lard was hydrolyzed as a source of free fatty acids using the lipases from R. miehei and Penicillium cyclopium [335]. Using the lipases individually enabled a hydrolysis yield of 39.9% with the lipase from R. miehei, while the lipase from P. cyclopium gave a yield of only 8.5%. The use of a combilipase of both enzymes made it possible to reach a yield of 78.1%; when this was assisted with a 5 min ultrasound treatment before the reaction started, the yield became 97% [335]. This exemplifies again how the use of two lipases, even if one of them is much less effective in the overall reaction than the other, may be a very interesting way to improve the reaction performance, based on the combination of enzyme specificities.
Use of Combilipases in Liquid Form in Biodiesel Production
In a first report, lipases from R. miehei and P. cyclopium (expressed in Pichia pastoris) were utilized to catalyze the methanolysis of soybean oil in aqueous medium [336]. The lipase from R. miehei yielded a 68.5% biodiesel conversion, but when supplemented with the lipase from P. cyclopium, the yields were above 95%. Again, this effect was explained by the use of lipases with different specificities. In another paper, a very "complex" combilipase was used to produce biodiesel from nonedible oils, adding methanol in a stepwise way to prevent enzyme inactivation [337]. The authors mixed lipases from Candida rugosa, Pseudomonas cepacia, Rhizopus oryzae and porcine pancreas type II, together with Novozym 435. This made it possible to reach conversion yields of 93%, which were improved to 97% by adding 10 wt% of silica gel to eliminate water from the system. However, they did not compare the results with individual lipases or less complex combilipases. In another paper, rapeseed oil deodorizer distillates were used as raw material to produce biodiesel using lipases from C. rugosa and R. oryzae, giving 92.63% after 30 h and 94.36% after 9 h of reaction, respectively [338]. The use of a mixture of both enzymes increased the biodiesel yield to 98.16% in 6 h (after optimization via response surface methodology). Another study shows how an oil rich in phospholipids and free fatty acids from Chlamydomonas sp. JSC4 (a microalga) was used to produce biodiesel [339]. A combilipase mixing lipases from Candida cylindracea and T. lanuginosus gave a high yield (over 95%). Yields were similar to those obtained using a lipase from Fusarium heterosporum expressed in Aspergillus oryzae and used as a whole-cell biocatalyst, but the reusability of the latter was higher.
Phospholipids may be a problem in biodiesel production; for this reason, a degumming step using a phospholipase is employed in many instances. In a first example, the reactions were performed in two steps, degumming and transesterification, using crude canola oil [340]. In the first step, degumming was performed using phospholipase A2, reducing the phospholipid content 60-fold. In the second step, lipases of R. oryzae and C. rugosa were utilized to produce biodiesel. Using the individual enzymes, the yields were 68.56% and 70.15%, respectively; using a 1:1 mixture of these enzymes, the yield increased to 84.25% [340]. In another paper, both degumming and biodiesel production were performed in just one pot, using a combilipase composed of lipase and phospholipase. The researchers utilized crude soybean oil, which also requires an additional pretreatment for gum removal if it is utilized for biodiesel production [341]. The authors proposed combilipases mixing a lipase (Callera Trans L), two phospholipases (phospholipase A1 Lecitase Ultra and phospholipase C Purifine) and a lyso-phospholipase, achieving the degumming and transesterification in a single pot. The yield of biodiesel was higher than 95%, avoiding the inhibition caused by the phospholipids and converting part of the phospholipids into biodiesel, and the phosphorus content was lowered from 900 ppm to <5 ppm [341]. In another paper, lipase AY from C. rugosa was selected among six lipases for its high activity and utilized in biodiesel production, but it gave only a 21.1% biodiesel yield from an oil containing phospholipids [342]. The combination of this lipase with some of the other lipases was assayed, and the best solution was the combination with Callera Trans L. Optimizing the stepwise methanol addition yielded more than 95% biodiesel in 6 h.
Other Uses of Combilipases in Liquid Formulations
In some instances, the objective of using combilipases is not the production of biodiesel or free fatty acids, but the production of special triglycerides. During such a process, the nucleophilic substrate will change: at the start it will be glycerin, then monoglycerides, and later diglycerides, until the target product, the triglyceride, is finally obtained. That way, the production of these complex triglycerides may be carried out using combilipases.
For example, triglycerides were produced via esterification of glycerin and a concentrate enriched in conjugated linoleic acid using the lipases from Alcaligenes sp., Penicillium camembertii and R. miehei [343]. Using only the lipase from R. miehei, yields were just under 65%. The lipases from Alcaligenes sp. and P. camembertii alone gave yields of only 3%. However, the combined use of the lipases from R. miehei and P. camembertii made it possible to obtain a yield of 83%. Moreover, the combined use of the lipase from R. miehei and the lipase from Alcaligenes sp. gave a yield of 82%, and the reactions proceeded three times faster than when using the lipase from R. miehei alone [343].
When the aim of the process is to stop the reaction at an intermediate state, using mixtures of lipases may be risky. However, considering the heterogeneity of oils, even in this situation the use of a mixture of lipases may be advantageous and could enable a higher yield if the enzyme activities towards the target intermediate product are low. For example, a study aimed to produce monoglycerides via glycerolysis of beef tallow or palm oil [344]. Many commercial enzymes were assayed, but in the context of this review we will remark that a mixture of lipases from P. camembertii and Humicola lanuginosa gave a yield of approximately 70 wt% monoglyceride, more than either enzyme alone. However, the mixtures of the lipase from P. camembertii with those from Ps. fluorescens or R. miehei gave values similar to the ones obtained employing the lipases from Ps. fluorescens or R. miehei individually [344].
The use of mixtures of lipases to obtain specific structured lipids may not be a good idea, but to achieve a general interchange of acyl groups between two oils, combining different specificities may again be beneficial. In one example of this research, lipases from Rhizopus sp. and Lipozyme TL IM were utilized individually or as a mixture in the interesterification between Amazonian patauá oil and palm stearin [345]. This reaction is quite complex, as it involves the hydrolysis of both oils and the further esterification of the released fatty acids to the glycerides, expecting the interchange of the free fatty acids in the final triglycerides [346][347][348][349]. The lipase from Rhizopus sp. modified the sn-1,3 positions of the triacylglycerol, yielding an oil richer in saturated fatty acids in the sn-2 position. The lipase from T. lanuginosus showed no regioselectivity in this reaction: there was no alteration in the distribution of unsaturated and saturated fatty acids in the triacylglycerol, only a replacement of fatty acids at the same position in both oils. The use of both enzymes showed a combination of both situations, but no synergistic effects were detected [345,349].
Use of Individually Immobilized Lipases
In most industrial applications, enzymes must be immobilized to facilitate their recovery and reuse [350,351]. This was the first objective of enzyme heterogenization, as enzymes were initially quite expensive biocatalysts. The decrease in the price of enzymes (very clear in the area of lipases) makes this initial objective not so necessary at present. In fact, Novozymes has launched some new lipase products recommending their use in non-immobilized form to save the enzyme immobilization costs, for example the Eversa Transform catalyst for biodiesel production [306][307][308]. Other authors have remarked on the possibilities of using free lipases in these reactions [235,265,266,[297][298][299][300][301][302][303][304][305]]. However, a proper immobilization may have more advantages than just facilitating enzyme recovery, even more so in the case of lipases [60][61][62][63][352][353][354][355][356]. A proper immobilization may increase enzyme stability by rigidifying the enzyme structure, by partitioning deleterious compounds away from the enzyme environment, or by stabilizing a more favorable conformation of the lipase. Moreover, lipase immobilization may improve enzyme activity for different reasons, e.g., by producing more active lipase forms, by avoiding enzyme distortion under harsh conditions if the immobilization has provided some structural rigidification, or by reducing enzyme inhibition [62] (Figure 16). Immobilization may also alter lipase selectivity or specificity, or produce enzyme purification [60][61][62][63][352][353][354][355][356] (Figures 17 and 18). That is, enzyme immobilization may be advantageous for very different reasons. Moreover, even if enzyme disposal may be economically feasible after just one reaction cycle, free lipases, as interfacially active molecules, can give rise to some problems in the purification steps, e.g., stabilizing emulsions of hydrophobic substances [357] (Figure 15). That is, even if the costs of the enzyme loss may be economically acceptable, the advantages of a proper enzyme immobilization may be too relevant to discard this powerful tool to improve the enzyme. This great potential of enzyme immobilization has promoted the continuous growth in the number of scientific publications on this apparently old-fashioned topic [64,358]. Figures 17 and 18 illustrate this tuning of lipase activity, selectivity, specificity and stability.
For most enzymes, an intense multipoint covalent attachment is the best way to improve enzyme stability [62]. This process may not be simple, requiring a suitable reactive group on the support and a proper immobilization protocol (which in many instances may have several steps) [359]. In the case of lipases, all these advantages may be obtained by a very simple immobilization strategy: the best protocol to obtain an improved biocatalyst is just the physical adsorption of the enzyme on the support via the interfacial activation of the lipase on hydrophobic surfaces [360]. This protocol produces the one-step immobilization/purification/stabilization/hyperactivation of the lipases, as immobilization involves the stabilization of the open form of the lipase [81] (Figure 18). This stabilized open form of the lipase is very stable [361][362][363], even more so than lipases immobilized via multipoint covalent attachment [364,365]. The method has other additional advantages, such as its simplicity, the high immobilization rate and the high stability of the supports, which can be stored for long times without any risk of alteration. Novozym 435, the most used commercial lipase biocatalyst, is prepared using this immobilization strategy [250]. Although lipase immobilization on hydrophobic supports may be achieved under a wide range of conditions, it has recently been shown that the conditions of the immobilization medium may greatly alter the properties of the immobilized lipases, at least for some lipases [366][367][368]. This may be considered an advantage, as it permits the modulation of the enzyme properties using a single immobilization support [61], or a problem, as it means that changes in the immobilization medium composition may produce biocatalysts with different catalytic properties (activity, specificity, stability), which may in some instances be hard to control (Figure 19).

Figure 19.

Effect of the immobilization conditions on hydrophobic supports on the conformation of the immobilized enzyme, tuning enzyme activity, selectivity, specificity and stability.
Lipase immobilization via interfacial activation is reversible [81,325]. That way, it permits the reuse of the support after enzyme inactivation, but it also raises the main problem of this lipase immobilization strategy: the lipase may be released during operation, under drastic conditions (high temperatures, presence of organic cosolvents) or in the presence of detergent-like substrates or products [154,323]. This enzyme release may be avoided using heterofunctional supports [155,[369][370][371][372][373][374][375][376][377]].
Hydrolysis of Oils and Fats Using Individually Immobilized Combilipases
Commercial immobilized lipases Lipozyme TL IM and Lipozyme RM IM were used in the hydrolysis of soybean oil, comparing the use of individual enzymes and combilipases [160]. Optimal results were obtained utilizing a mixture of 65% Lipozyme TL IM and 35% Lipozyme RM IM, with higher reaction rates and yields (95%) than using the individual enzymes. Later on, this research group used a mixture of three commercial immobilized lipases, adding the biocatalyst Novozym 435 to the previous ones [242]. Although Lipozyme TL IM was the most active biocatalyst and Novozym 435 was the least active one, the combination of 80% Lipozyme RM IM and 20% Novozym 435 gave better activity and yields than the use of just Lipozyme TL IM.
Use of Individually Immobilized Combilipases in Biodiesel Production
The use of immobilized lipases in biodiesel production is perhaps the first application of combilipases. S. W. Kim and co-workers were pioneers of this concept and a very active group in this area. In a first paper, they showed that using immobilized lipases from R. oryzae or C. rugosa in the production of biodiesel from soybean oil, the yields were 70% (after 18 h) or 20% (after 30 h), respectively [161]. Using a mixture of both immobilized lipases, yields became 99% after 21 h of reaction. Later, they optimized the process, using 75% (mass) of the immobilized lipase from R. oryzae and reaching a biodiesel yield of 98% in only 4 h [381]. In further research, they studied this process in batch or continuous mode (stepwise addition of methanol was used in the batch process) [382]. The batch process gave 98.33% after 4 h. The continuous process design required considering mass transfer problems; after optimization, a biodiesel conversion yield of 97.98% after 3 h was achieved [382]. Later, they analyzed the use of these combilipases in supercritical carbon dioxide [383]. After optimization, the batch process gave a biodiesel conversion yield of 99.13% after 3 h, which was improved to 99.99% after 2 h when 90 mmol of methanol was used in a stepwise reaction. Later, other research groups also used this concept. For example, a combilipase composed of a mixture of Novozym 435 and Lipozyme TL IM was utilized as catalyst in the production of biodiesel from methanol and stillingia oil [384]. The objective of this research was to analyze the effect of the presence of some solvents to improve the solubility of methanol and of the produced glycerol. The authors used 1.96% Novozym 435 and 2.04% Lipozyme TL IM relative to the oil weight. Optimal results were obtained using a mixture of 60% acetonitrile and 40% t-butanol (v/v) as the reaction medium. After optimization, a biodiesel yield of more than 95% was obtained [384]. Later, this research group utilized Novozym 435 and Lipozyme TL IM to produce biodiesel from methanol and lard, optimizing the reaction by response surface methodology [385]. The best results were obtained using a combilipase formed by 49/51 (Novozym 435/Lipozyme TL IM) of total lipases (w/w); after 20 h of reaction, a biodiesel yield of 97.2% was obtained [385]. Combilipases composed of Novozym 435 and Lipozyme TL IM were also utilized to produce biodiesel from methanol and waste cooking oil using tert-butanol as solvent [386]. After optimization, the biodiesel yield was up to 83.5%. The authors later studied the possibility of using ionic liquids as the reaction medium [387]. They selected 1-ethyl-3-methylimidazolium trifluoromethanesulfonate, and the combilipase produced a biodiesel yield of 99% under these conditions. The combilipase was more active in this ionic liquid medium than in solvent-free medium or in solvents such as tert-butanol or isooctane [387].
In another paper, five immobilized lipases were employed to produce biodiesel using ethanol and palm oil in a solvent-free system [388]. Among the individual enzymes, the best results were obtained using Lipase AK from Ps. fluorescens, but they were improved using a combilipase of this immobilized enzyme and Lipase AY from C. rugosa. Using a continuous packed-bed reactor, biodiesel yields over 67% were obtained [388]. In another paper, Lipozyme TL IM and Lipozyme RM IM were utilized in biodiesel production using ethanol and soybean oil [160]. The reaction was optimized using a central composite design and response surface methodology. The best results were obtained using 80% Lipozyme TL IM and 20% Lipozyme RM IM, reaching a yield of 90% (more than double the result using only Lipozyme RM IM and 15% higher than that employing only Lipozyme TL IM) [160]. In another paper, different lipases were utilized as catalysts in the synthesis of biodiesel from the crude oil extracted from spent coffee grounds, and the biocatalyst with the best performance was Novozym 435 (conversion of 60% in 4 h) [389]. After optimization (including oil purification), the conversion yield was improved to 88% in 24 h. Mixing Novozym 435 with Lipozyme RM IM, yields were further improved and reaction rates enhanced [389]. In another paper, olive and palm oils were utilized to produce biodiesel using ethanol as alcohol and Novozym 435, Lipozyme TL IM and Lipozyme RM IM as catalysts [273]. Optimization showed that the best results were reached using combilipases, and that the optimal composition of the combilipase depended on the substrate. Using olive oil, the optimal combilipase was composed of 58.5% Novozym 435, 29.0% Lipozyme TL IM and 12.5% Lipozyme RM IM. This permitted reaching a 95% biodiesel conversion in 18 h of reaction, while the best individually immobilized lipase (Novozym 435) gave only 50%. The composition of the optimal combilipase was very different when the oil changed: using palm oil, the optimal combilipase did not contain Novozym 435, but 52.5% Lipozyme TL IM and 47.5% Lipozyme RM IM. This gave an 80% biodiesel conversion in 18 h, while the best individual enzyme for this oil, Lipozyme TL IM, gave a biodiesel yield of only 44% [273]. One contribution used a combilipase composed of used and discarded immobilized lipases from C. rugosa, Ps. cepacia, R. oryzae, lipase from porcine pancreas type II, and Novozym 435 as catalyst of biodiesel production from nonedible oils [337]. Stepwise addition of 6 mmol of methanol per 1 mmol of oil permitted reaching a 93% biodiesel yield, and the addition of silica gel increased the yield to 97%. In a further research effort, lipase B from C. antarctica and lipase from R. miehei were covalently immobilized onto epoxy-functionalized silica and utilized to produce biodiesel from methanol and waste cooking oil [390]. The combilipase formed by both immobilized lipases was used, and response surface methodology and a central composite rotatable design were utilized to optimize the process. The best combilipase was composed of 75% immobilized lipase B from C. antarctica and 25% immobilized lipase from R. miehei; the best results were obtained using t-butanol (10 wt% relative to oil) and silica gel (yields were 91.5%).
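Several of the studies above optimize the combilipase composition by response surface methodology. As a purely illustrative, hypothetical sketch (not taken from any of the cited papers, and with invented yield data), the code below fits a quadratic response surface to biodiesel yields measured at a few compositions of a binary combilipase and locates the predicted optimum; a real central composite design would vary several factors (enzyme ratio, temperature, alcohol:oil ratio) simultaneously:

```python
# Minimal one-factor response-surface sketch with invented data.
import numpy as np

# Fraction of lipase A in the A/B combilipase and the measured yields (%)
x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
y = np.array([44.0, 70.0, 86.0, 92.0, 78.0])

# Least-squares fit of the quadratic model y = b0 + b1*x + b2*x^2
b2, b1, b0 = np.polyfit(x, y, deg=2)

# Vertex of the fitted parabola gives the predicted optimal composition
x_opt = -b1 / (2.0 * b2)            # valid optimum only when b2 < 0
y_opt = b0 + b1 * x_opt + b2 * x_opt**2
print(f"optimal fraction of lipase A: {x_opt:.2f}, predicted yield: {y_opt:.1f}%")
```

The same fit-then-locate-the-stationary-point logic underlies the multi-factor optimizations reported in the cited works, only with more model terms and experimental runs.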
The use of a plug-flow, packed-bed continuous reactor and tert-butanol as solvent was analyzed utilizing a combilipase composed of commercial immobilized lipases and two different oils [391]. The optimal combilipase varied depending on the oil. When employing waste cooking oil, the combilipase was formed by Novozym 435 (35%), Lipozyme TL IM (40%) and Lipozyme RM IM (25%). When using soybean oil, the combilipase was formed by Novozym 435 (50%), Lipozyme TL IM (22.5%) and Lipozyme RM IM (27.5%). The presence of glass beads facilitated the flow of the viscous substrate solution through the mixture of the different biocatalysts, which were prepared using different supports [391].
In another paper, homemade biocatalysts were prepared to produce biodiesel from ethanol and macauba pulp oil [392]. To reach this goal, lipases from Burkholderia cepacia and T. lanuginosus were covalently immobilized on desilicated and thiol-modified ZSM-5. The highest yields (just under 95%, after 48 h of reaction) and reaction rates were obtained using the immobilized combilipases [392].
The use of ultrasound in biodiesel production using combilipases has also been studied. For this purpose, methanol and waste frying oil or soybean oil were used as substrates, and Novozym 435, Lipozyme TL IM and Lipozyme RM IM were used as biocatalysts [393]. The best combilipase composition was determined by a three-factor statistical design. Ultrasound stirring and these optimized combilipases permitted reaching a biodiesel yield of about 90% when using soybean oil and 70% when using the waste oil after 18 h of reaction [393]. The same group used ultrasound stirring in biodiesel production from methanol and soybean oil catalyzed by the individual immobilized enzymes or by the combilipase composed of the mixture of these immobilized enzymes, studying the effects of pulse conditions and ultrasonic amplitude [394]. The best results were obtained using an optimal combilipase formed by 10% Lipozyme TL IM, 15% Lipozyme RM IM and 75% Novozym 435, a pulse time of 15 s, a duty cycle of 50% and an ultrasonic amplitude of 30%. The presence of tert-butanol did not improve the yields under ultrasound stirring, while it did under mechanical stirring, suggesting that ultrasonic technology was enough to eliminate the diffusional problems. Under optimal conditions, the proposed combilipase produced 75% ethyl esters in 5 h, while the best individual lipase gave only 55% [394].
In another paper, oil from Isochrysis galbana was used to produce biodiesel utilizing commercial lipase from Ps. cepacia and commercial lipase B from C. antarctica [395]. The enzymes were immobilized on aminated SBA-15 mesoporous silica. Using wet extracted oil, the individually immobilized biocatalysts gave an 85.5% yield using lipase B from C. antarctica and 87% using the immobilized lipase from Ps. cepacia (commercial Novozym 435 gave just under 70%). The use of a combilipase composed of immobilized lipase B from C. antarctica and immobilized lipase from Ps. cepacia (in a 1:3 ratio) permitted reaching a yield of 97.2% [395].
Ethanolysis of soybean oil was attempted using free lipases from T. lanuginosus and porcine pancreas, with very poor results using the individual enzymes [396]. Mixing equal activity proportions of both enzymes, the yields increased 5- to 100-fold, but were still quite low (around 20 wt%). The lipases were then immobilized using the cross-linked enzyme aggregate (CLEA) technique [70,397-399] (Figure 21), yielding biocatalysts with 119% (lipase from T. lanuginosus) and 89% (lipase from porcine pancreas) of expressed activity. The combilipase formed by similar activities of both CLEAs permitted reaching a yield of 90.4 wt%, while the individual CLEAs gave yields of 84.7 wt% (porcine pancreas lipase CLEA) or 75.6 wt% (T. lanuginosus CLEA) [396]. Display of enzymes on the surface of cells is an immobilization technique of growing popularity nowadays [400-403]. This technique for producing immobilized enzymes has been employed to separately express and display lipase B from C. antarctica and the lipase from R. miehei on P. pastoris [404]. These biocatalysts were employed to produce biodiesel in tert-butanol and isooctane cosolvent media using statistical optimization. The use of a combilipase with the two displayed lipases (on different yeast cells) gave an ester yield higher than 90% in 12 h, higher than the use of the individual biocatalysts [404].
The synergy between the immobilized lipase from R. oryzae and Novozym 435 in biodiesel production was shown in another study, increasing the yield by 30% compared to the results obtained using the immobilized lipase from R. oryzae alone [405]. After optimization, a biodiesel yield of 98.3% in 21 h was achieved. The authors also showed that the combination of Novozym 435 with other lipases having a regioselectivity similar to that of the lipase from R. oryzae showed similar synergies [405]. Later, the same group used rapeseed oil deodorizer distillate as raw material to produce biodiesel. This is a complex mixture of glycerides and free fatty acids, rich in phytosterols [406,407]. This makes the situation even more complex, as now the lipase must be selected to efficiently catalyze both the esterification and transesterification reactions, making the concept of combilipases even more interesting. Thus, biodiesel was produced from this substrate and methanol, achieving the one-pot esterification of the free fatty acids and the transesterification of the glycerides in a solvent-free system [408]. As catalysts, Novozym 435 and immobilized lipase from Ps. cepacia G63 were employed. The use of combilipases composed of both immobilized enzymes gave better results than the individual enzymes, with an ester yield over 95% under optimal conditions. The process did not affect the phytosterols [408]. In a somewhat similar situation, soybean oils with acid values ranging from 8.5 to 90 and ethanol were used to produce biodiesel using Novozym 435, Lipozyme TL IM and Lipozyme RM IM [409]. Although Novozym 435 and Lipozyme RM IM were efficient in decreasing the oil acidity, a synergistic effect occurred when a combilipase of Novozym 435 and Lipozyme TL IM was used, doubling the ester production with an oil with an acid value of 90. As in the case of the free enzymes, interesterification is one of the reactions where immobilized combilipases have been utilized. For example, the enzymatic interesterification of coconut oil and palm stearin was performed using Novozym 435, Lipozyme TL IM and Lipozyme RM IM [410]. Some dual combilipases, such as mixtures of equivalent amounts of Novozym 435 and Lipozyme TL IM or of Novozym 435 and Lipozyme RM IM, presented a significant synergistic effect as well as an enhanced degree of interesterification. The authors showed that the carrier material may play an important role. Combilipases formed by immobilized lipases and the non-immobilized lipase from Ps. fluorescens enhanced the activity of the free enzyme. A combilipase formed by 70% free Lipase AK mixed with 30% of any of the immobilized lipases more than doubled the theoretical activity. Coimmobilization of the free lipase on the support was proposed to explain this effect, and it was supported by a reaction catalyzed by free Lipase AK and an immobilized but inactivated lipase preparation [410]. However, this experiment could also be explained by some role of the support where the lipase is immobilized in the reaction (e.g., perhaps facilitating acyl migration), not necessarily by the immobilization of the enzyme, which may be hard to achieve during the reaction in such a complex medium (Figure 15). In another paper, extra virgin olive oil, tripalmitin, arachidonic acid and docosahexaenoic acid were utilized to produce structured lipids with high palmitic acid content at the sn-2 position, enriched with arachidonic acid and docosahexaenoic acid [411].
This means that interesterification and acidolysis occurred, and the researchers, among other possibilities, used a combilipase formed by Novozym 435 and Lipozyme TL IM to analyze whether some synergistic effect could be found. In parallel, Novozym 435 was used to catalyze the interesterification reaction in a first step and Lipozyme TL IM was utilized to catalyze the acidolysis in a second step (using a sequential design). All products presented more than 50 mol% palmitic acid at the sn-2 position, but the one-pot approach using the combilipase made the reactions faster [411].
Other Uses of Immobilized Combilipases
Novozym 435 and Lipozyme RM IM were used in the enzymatic synthesis of kojic ester via esterification of kojic acid and oleic acid [412]. This is a reaction where just a single modification of each substrate is intended, as the phenol hydroxyl group of kojic acid is not very reactive. After optimization, the best results were found using equal amounts of both immobilized lipases (ester yields were 70%). This is one of the few examples of the use of combibiocatalysts with monofunctional substrates, and it may be a consequence of some of the changes in the reaction conditions explained in Section 1.2.1 [412,413].
In another example, isosorbide diester plasticizer was synthesized using immobilized lipase from Yarrowia lipolytica Lip 2, Lipozyme RM IM or Novozym 435 [414]. The most efficient enzymes, immobilized lipase from Y. lipolytica or Lipozyme RM IM, did not produce the S-isomer. To avoid this limitation, the researchers used a combilipase mixing one of those immobilized enzymes with Novozym 435, greatly increasing the ester yields.
Use of Mixtures of the Same Lipase Immobilized Following Different Protocols: A Special Combilipase
Lipases, perhaps due to the flexibility of their active center and their mechanism of action, are among the enzymes whose properties may be most easily modulated via different strategies [415-420] as well as via immobilization [421-424] (Figure 17). It has been widely shown that changes in the immobilization protocol or the physical or chemical modification of the immobilized lipases may greatly alter the enzyme features [61]. Using the same immobilization mechanism, e.g., the interfacial activation of the lipase on hydrophobic support surfaces [81,360], it has been shown that changing the support features greatly affects the final enzyme specificity, activity and stability [425-427]; even immobilization of the same enzyme on the same hydrophobic support, just changing the immobilization conditions, gives very different enzyme properties [366-368] (Figure 19). In fact, it has recently been shown that the lipase from T. lanuginosus immobilized on a hydrophobic support under certain conditions was a strict 1,3-selective enzyme, being unable to hydrolyze 2-monoglycerides, while the enzyme immobilized under other conditions could hydrolyze 2-monoglycerides; this also depended on the immobilization support [428,429].
In this context, Godoy and coworkers immobilized several lipases on Lewatit® VPOC1600 and Purolite® ECR1604 and used the biocatalysts to produce biodiesel from ethanol and palm olein [430].
Immobilizing the same lipase on these two supports, they found that the support affected the lipase performance. For example, using the lipase from T. lanuginosus, yields went from 78.2% (using Lewatit® VPOC1600) to 70.3% (using Purolite® ECR1604) [430]. This showed that the immobilization support affected the properties of the lipase as a catalyst of biodiesel production, an already known fact [165,431,432] (Figure 17). The mixture of the individually immobilized biocatalysts produced better results than the use of each independently immobilized enzyme. Moreover, very interestingly, the authors showed that using the mixture of both biocatalysts of the same enzyme, the yields were better than using the best individual biocatalyst, increasing to 86.1% [430]. Thus, immobilization following different protocols should produce enzymes with different catalytic properties, and we can also call these mixtures "combilipases". We have not found any other paper showing this fact.
Coimmobilization of Lipases: Advantages, Problems and Proposed Solutions
Enzyme coimmobilization means the immobilization of different enzymes on the same particle (Figure 22). These coimmobilized enzymes are frequently used in cascade reactions, mainly because they provide a kinetic advantage: the in situ production of the intermediate products allows expressing the activity of the intermediate enzymes from the beginning of the reaction, saving the lag time usually found in these cascade reactions [433-441] (Figures 22 and 23). The full modification (hydrolysis or alcoholysis) of oils and fats may be considered a cascade reaction, as it involves three consecutive modifications of the triglyceride (Figures 4, 6 and 8-10). If the use of mixtures of lipases improves the reaction course, this means that some of the glycerides (substrate or intermediate products) are not good substrates for the lipase that best modifies the main substrate components (Figures 4 and 6). The fact that all the substrate modifications may perhaps be catalyzed by a single biocatalyst is not enough to discard the possibility of some advantage of lipase coimmobilization, as some of the glycerides (initial substrate or intermediate products) may behave as inhibitors of the main enzyme, and their rapid elimination by another lipase will permit the expression of the maximum activity of the main enzyme.
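To make this kinetic argument concrete, the following purely illustrative sketch (not from the cited literature; all rate constants are invented) simulates the stepwise modification TG → DG → MG → glycerol with two hypothetical lipases that have complementary per-step rate constants, and compares the ester release of each enzyme alone with a 50/50 combilipase:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical per-step rate constants (1/h): lipase A is fast on the
# triglyceride, lipase B is fast on the partial glycerides.
kA = np.array([1.0, 0.1, 0.05])
kB = np.array([0.2, 0.8, 0.6])

def rhs(t, y, fA):
    # Effective rate constants of a mixture with fraction fA of lipase A
    k = fA * kA + (1.0 - fA) * kB
    tg, dg, mg, ester = y
    return [-k[0] * tg,
            k[0] * tg - k[1] * dg,
            k[1] * dg - k[2] * mg,
            k[0] * tg + k[1] * dg + k[2] * mg]  # one ester released per step

for fA in (1.0, 0.0, 0.5):
    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0, 0.0], args=(fA,))
    print(f"fraction of lipase A = {fA}: esters after 10 h = {sol.y[3, -1]:.2f} (max 3.0)")
```

With these invented constants, the 50/50 mixture releases more esters after 10 h than either enzyme alone, mirroring the synergies reported throughout this section; inhibition by accumulated intermediates, discussed above, would further favor the mixture but is not modeled here.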
However, coimmobilization of enzymes, including lipases, has some problems [60]. The first one is the necessity of immobilizing all enzymes following the same protocol. Fortunately, for lipases, interfacial activation on hydrophobic supports is an almost universal and very good immobilization protocol that may be applied to most lipases [81]. The second problem is the possibility that the different lipases may present very different stabilities. This makes it necessary to discard all immobilized enzymes when just one has been inactivated [60,442] (Figure 24). Using reversible immobilization methods, such as interfacial activation, the support may be recovered, but only in a fortunate case will the most stable enzyme remain on the support when the inactivated one is released from it. However, at least this will permit the reuse of the support (Figure 25). This means that coimmobilization of several enzymes should consider not only the advantages, but also the problems of coimmobilization [287]. Recently, some solutions have been designed to solve this latter problem, making possible the reuse of the most stable lipases after inactivation of the least stable ones. We will briefly present these strategies at the end of this section.
Use of Coimmobilized Combilipases in Biodiesel Production
We have found uses of coimmobilized combilipases only in biodiesel production. As in the case of combilipases formed by mixing immobilized enzymes, S. W. Kim's group initiated and spearheaded the use of coimmobilized combilipases in the production of biodiesel. In a first paper, continuing previous work where the use of non-immobilized lipases from R. oryzae and C. rugosa, with degumming by the action of phospholipase A2, had given good results in the production of biodiesel from crude canola oil (which had 100-300 ppm of phospholipids), the process was presented using coimmobilized enzymes. To this goal, the enzymes were coimmobilized on silica gel [340]. After optimization, the ester yields reached almost 90%. Next, they performed a study comparing a combilipase of individually immobilized enzymes and a coimmobilized combilipase in the transesterification of soybean oil and methanol at two different pressures [443]. At atmospheric pressure, the initial reaction rates of both combilipases decreased when the methanol concentration increased. However, under supercritical fluid conditions, the initial reaction rate of both combilipases (individually immobilized or coimmobilized) increased until the methanol concentration became double the concentration of oil. The results pointed out that the coimmobilized combilipase had a higher initial reaction rate, but the negative effects of methanol on enzyme stability were also higher than when using the mixture of immobilized lipases [443]. Later on, after optimizing the coimmobilization process, the coimmobilized combilipase was used in two different reactors [444]: a continuous packed-bed reactor and a batch system with stepwise methanol feeding. In the latter system, around 99% yield was obtained after 3 h of reaction and remained over 90% after 30 reuses. In a last paper from this group, the same lipases were coimmobilized on activated carbon modified with aminopropyltriethoxysilane and glutaraldehyde [445]. After optimization of the coimmobilization, the coimmobilized combilipase was used to produce biodiesel with very high yields using algal oil (93.8%), waste cooking oil (95.7%) and soybean oil (98.5%) after only 4 h of reaction.
Other research groups have also used coimmobilized combilipases in the production of biodiesel. For example, lipase B from C. antarctica and lipase from R. miehei were coimmobilized on epoxy-functionalized silica gel using different enzyme ratios [446]. The transesterification of palm oil with methanol to produce fatty acid methyl esters catalyzed by these biocatalysts was optimized by response surface methodology and a central composite rotatable design. The best ratio between both enzymes was 2.5:1 (lipase B from C. antarctica:lipase from R. miehei), giving an ester yield of 78.5% [446]. In a continuation of the work discussed in Section 6.2, lipases from T. lanuginosus and R. miehei were coimmobilized via interfacial activation on Lewatit® VPOC1600 and Purolite® ECR1604 [430]. The biocatalysts were used in ethyl ester production using palm olein as substrate. The authors described that coimmobilization improved the results obtained with mixtures of independently immobilized lipases (see Section 6.2). The support also had a great effect; the results obtained using Lewatit® VPOC1600 were better than those using Purolite® ECR1604. The best results were obtained with the lipases coimmobilized on Lewatit® VPOC1600, where the biodiesel yield increased from 81.8% to 89.5% compared to the respective mixture of individually immobilized enzymes [430].
Protein-coated microcrystals are not a widely utilized immobilization strategy [389]. This immobilization technique consists in the use of water-soluble, micron-sized crystalline particles coated with the target enzyme. The biocatalysts are prepared in a one-step rapid dehydration process [447-449]. This strategy was used to prepare coimmobilized combilipase-coated microcrystals including lipase B from C. antarctica and lipase from R. miehei, using K2SO4 as the core of the particles, giving results similar to the best ones obtained using the commercial immobilized enzymes (83% conversion in 48 h) [389].
Finally, the display of enzymes on the surface of cells [400-403] has been employed to co-express and co-display lipase B from C. antarctica and lipase from T. lanuginosus on the surface of P. pastoris cells as a biocatalyst for biodiesel production [450]. This permitted a 95.4% ester yield and a good operational stability.
Preparation of Coimmobilized Combilipases to Reuse the Most Stable Enzymes
As explained in Section 7.1, enzyme coimmobilization has some drawbacks (Figures 24 and 25). These problems have not been considered in any of the above uses of combilipases. If they are not considered in the preparation of a biocatalyst, coimmobilization may bring more problems than advantages [60,287]. One of the points is that coimmobilization is only reasonable when it has some clear advantage over individual enzyme immobilization, e.g., in cascade reactions [60,287] (Figures 22-25). However, in some instances, the problems are ignored and several enzymes are coimmobilized, even stating that the intention is not to produce a biocatalyst to catalyze cascade reactions in one pot, but to produce so-called "multipurpose biocatalysts" [32,451].
This is the case of the preparation of a combiCLEA containing lipase, α-amylase, and phospholipase A2 [452]. The CLEA immobilization strategy is simple, but even using this strategy the optimal precipitant, the nature and concentration of the crosslinking agent, the protein feed, etc., may be different for each specific enzyme [397-399] (Figure 21). Furthermore, since such a preparation still has the problems of coimmobilization and none of the gains, the recommendation should be to immobilize each enzyme individually under optimal conditions [60,287].
This reuse of the most stable enzyme is usually ignored; in very few papers is the stability of the different coimmobilized enzymes even presented. However, the problem was clearly exemplified when lipase B from C. antarctica immobilized on octyl agarose was coated with polyethylenimine (PEI) and the lactase from A. niger was coimmobilized on it via ion exchange [442] (Figure 26). The lipase was much more stable than the lactase, in such a way that it remained fully active when the lactase was almost fully inactivated. However, thanks to the different immobilization strategies employed for each enzyme, the inactivated lactase could be released to the medium after its inactivation without affecting the activity of the immobilized lipase. Just by incubation at high ionic strength to release the inactivated lactase, the immobilized lipase could be reused for many cycles involving lactase inactivation/lactase desorption/PEI recoating of the immobilized lipase/immobilization of a new batch of lactase [442] (Figure 26). The reuse of a support may not compensate for the costs of the recycling process, but in this case, by just an incubation at high ionic strength, the immobilized lipase could be reused for many cycles, and this may have a higher economic interest. The lipase coating with PEI produced an increase in lipase activity and stability [378,380], and even the treatment with glutaraldehyde to prevent PEI release during lactase desorption produced some positive effects [379], making this coimmobilization strategy very suitable [453]. When the researchers analyzed the stabilities of some of the most used lipases, they found a great variety in lipase stabilities; in fact, the stabilities of lipases immobilized on octyl agarose via interfacial activation [360] differed greatly [454,455]. Thus, the utilization of different strategies to prepare a coimmobilized combilipase biocatalyst that permits reusing the most stable lipases makes sense. First of all, it was shown that the use of PEI as coupling agent permitted the preparation of multilayers of the same lipase [456,457] or of different lipases [458] (Figures 27 and 28). A problem found with some lipases was that the enzyme already immobilized on enzyme-PEI composites was released when treated with PEI to immobilize a new lipase layer, requiring treatment of the biocatalyst with glutaraldehyde to prevent enzyme release via covalent enzyme-polymer crosslinking [457]. Apparently, this may not look like a coimmobilized combilipase when using the same lipase. However, the strategy finally presented three different immobilized lipase forms. The lipase in the bottom layer was immobilized via interfacial activation on octyl agarose and modified with PEI and glutaraldehyde; the second lipase layer (and all intermediate lipase layers) was immobilized via ion exchange and modified with glutaraldehyde and PEI; while the last layer, if desired, could be immobilized via ion exchange without any further modification [457,458] (Figure 27). As explained above, the use of mixtures of the same lipase immobilized following different protocols gave better results than the use of single immobilized catalysts [430], and we can call this a "combilipase", as we have different lipase forms [61].
This enzyme layer-by-layer strategy, when immobilizing different enzymes, permitted controlling the spatial distribution of the different enzymes; up to five different lipases were immobilized using different spatial distributions, with very different impacts on the final biocatalyst activity versus different substrates [458] (Figure 28).
Figure 28. Coimmobilization of several enzymes with controlled spatial distribution using a multilayer strategy.
Following the same strategy utilized to coimmobilize lipases and lactase [442] (Figure 26), several very stable lipases (lipases A and B from C. antarctica and lipase from T. lanuginosus) were immobilized on octyl-divinylsulfone agarose, treated with PEI and coimmobilized with several less stable enzymes (lipase from R. miehei and Lecitase Ultra) via ion exchange [459]. The most stable lipases could be reused for several cycles of stress inactivation of the least stable lipases/release of these inactivated enzymes/recoating of the immobilized stable enzymes with PEI/immobilization of a new batch of the non-stable lipases [459] (Figure 26).
Another strategy that permitted the reuse of the most stable lipase [454], this time taking advantage of lipase immobilization via interfacial activation for all lipases [81], is based on the use of heterofunctional supports [359]: supports bearing hydrophobic acyl chains to achieve the interfacial activation of the lipase [360] together with reactive groups able to give covalent lipase immobilization [369]. The event that permits the first immobilization of the lipase on these supports is the lipase interfacial activation; later, some enzyme-support covalent bonds may be formed [369] (Figure 20). The most stable lipase is immobilized on the heterofunctional support; after some enzyme-support covalent bonds are obtained, the remaining reactive groups on the support are destroyed, and the least stable enzymes may then be immobilized just via interfacial activation [454,455]. This permitted the release of the least stable lipases after their inactivation by incubation in the presence of detergents [81], and enabled the reuse of the most stable lipases, which are covalently immobilized. Several lipases with similar stabilities may be immobilized using the same immobilization strategy. The main problem of this strategy is the release of all detergent molecules from the immobilized lipase biocatalysts [454].
However, these biocatalysts have not been assayed in the production of biodiesel or free fatty acids, and none of the coimmobilized combibiocatalysts utilized so far has considered these problems derived from the different stabilities of the component enzymes.
Conclusions
This review shows how combilipases may have great potential in the development of modifications of heterogeneous substrates. In fact, they look like an obvious solution to optimize this kind of process. The improvements in free fatty acid or biodiesel production have been clearly illustrated, because it is unlikely that a single lipase can modify all the different triglycerides in an oil, and even less so considering the partial glycerides produced during the reaction. Although we have been able to find only one case, the combilipase concept may also extend to monofunctional substrates, as the reaction conditions always change during the reaction, and this can be beneficial for some enzymes and detrimental for others.
As expected, the use of immobilized enzymes offers some advantages compared to the use of the free enzymes, as recycling is simpler, the enzymes become more stable, and diverse reactor configurations may be utilized. Nevertheless, immobilization must be carefully designed: only if we can really improve most enzyme features, and not just facilitate enzyme recycling, will the advantages of immobilization outweigh its costs and derived problems. Moreover, although this has been shown in only one example, the preparation of combilipases using just one lipase immobilized following different protocols may become an easy way to exploit the advantages of this idea. Our expectation is that this idea will be disseminated rapidly and many examples will become available in the future.
A different point is the use of coimmobilized lipases. Given the problems derived from coimmobilization, it is necessary to evaluate whether the gains are higher than the losses. Fortunately, lipases have been the model enzymes used to design some strategies that solve the problem of enzymes with different stabilities. There are already strategies that permit reusing the most stable enzymes after inactivation of the least stable ones. In fact, it has been shown that a combilipase may be built by immobilizing different layers of the same enzyme submitted to different modifications.
The authors of this review foresee that the use of combilipases, and combienzymes in general, will become more widespread in the near future, as the number of problems they can solve in a reaction is large, although the use of several enzymes may complicate the design of the processes. Very likely, even some relatively simple processes will benefit from the use of mixtures of enzymes with different responses to changes in the medium conditions, inhibition, specificity or selectivity. This is already a reality in the case of oil and fat modification and combilipases, but we are convinced that the concept of the optimal enzyme for a given process may be replaced in the near future by the concept of the optimal combienzyme. These combienzymes may be formulated as free, immobilized or coimmobilized enzymes, but also as combinations of different formulations.
Author Contributions: All authors searched the literature and participated in the writing of the first manuscript version and its editing. RFL and RCR designed the structure and the concept. All authors have read and agreed to the published version of the manuscript.
Cardiac autonomic control in adolescents with primary hypertension
Background Impairment in cardiovascular autonomic regulation participates in the onset and maintenance of primary hypertension. Objective The aim of the present study was to evaluate cardiac autonomic control using long-term heart rate variability (HRV) analysis in adolescents with primary hypertension. Subjects and methods Twenty-two adolescent patients with primary hypertension (5 girls/17 boys) aged 14-19 years and 22 healthy subjects matched for age and gender were enrolled. Two periods from the 24-hour ECG recording were evaluated by HRV analysis: awake state and sleep. HRV analysis included spectral power in the low frequency band (LF), spectral power in the high frequency band (HF), and the LF/HF ratio. Results In the awake state, adolescents with primary hypertension had lower HF and higher LF and LF/HF ratio. During sleep, HF was lower and the LF/HF ratio was higher in patients with primary hypertension. Conclusions A combination of sympathetic predominance and reduced vagal activity might represent a potential link between psychosocial factors and primary hypertension, associated with increased cardiovascular morbidity.
INTRODUCTION
Hypertension is one of the most serious human health problems. The prevalence of hypertension in childhood is increasing. Primary hypertension is closely linked to psychosocial characteristics; therefore, it is classified among psychosomatic diseases [1]. In addition, adolescent primary hypertension is associated with a higher risk of coronary artery disease and cardiovascular mortality in adult life. The autonomic nervous system is known to play a major role in the interaction of the cardiovascular and central nervous systems, and in other regulatory systems. It is expected that progressive impairment in autonomic regulation participates in the initiation and maintenance of primary hypertension, in particular in children and adolescents. Cardiac function is sensitive to autonomic inputs; thus, a study of cardiac autonomic control can be used for the assessment of primary hypertension.
Heart rate variability (HRV), i.e., the oscillation of heart rate around its mean value, is caused by variations in the input to the sinus node from the autonomic nervous system. Multiple mechanisms underlie HRV: parasympathetic activity in the high frequency (HF) band reflects mainly respiratory sinus arrhythmia (RSA), and sympathetic activity in the low frequency (LF) band reflects mainly baroreceptor activity. Long-term HRV analysis can provide information about the dependence of cardiac autonomic activity on the day/night rhythm in primary hypertension. Guzzetti et al. [2], using 24-hour HRV analysis, concluded that primary hypertension is characterized by higher sympathetic activity (greater LF). On the other hand, other studies emphasize a contribution of parasympathetic activity to the development of primary hypertension [3]. Thus, the information on long-term heart rate variability is limited and somewhat controversial.
In the present study, we set out to examine the cardiac autonomic control during awake state and sleep using a long-term HRV analysis in adolescents with primary hypertension.
SUBJECTS AND METHODS
The study was approved by the Ethics Committee of Jessenius Medical Faculty, Comenius University in Martin, Slovakia. Twenty-two subjects, 5 girls and 17 boys (aged 14-19 years, mean age 16 ± 2 years), with untreated primary hypertension were enrolled in the study. Primary hypertension was defined according to the data recommended for hypertension diagnosis in childhood using 24-hour ambulatory blood pressure monitoring (ABPM) (Click Holter Recorder, Cardioline, Italy), with mean systolic and/or diastolic blood pressure ≥95th percentile adjusted for height [4]. The diagnosis of primary hypertension was based on examinations excluding secondary etiologies of hypertension (e.g., renal, vascular, endocrine diseases, etc.).
All subjects with primary hypertension underwent 24-hour ABPM, and a nocturnal dip (more than 10% of the daytime blood pressure) was present in all of them. The control group consisted of adolescents matched for age and gender (5 girls and 17 boys, mean age 17 ± 1 years). All probands were non-smokers, were not taking drugs or substances influencing the cardiovascular system (e.g., caffeine, alcohol), and had no evidence of mental or other diseases.
The 24-h continuous ECG monitoring started in the morning (8 a.m.). All subjects carried out normal daily activities: school lessons, afternoon rest, preparation for the next day's school lessons, moderate physical activity, and sleep at night (after 10 p.m.). The ECG monitoring was finished the next morning at 8 a.m.
HEART RATE VARIABILITY (HRV) ANALYSIS
Artefacts in the 24-h ECG recording were eliminated using a recognition algorithm and also manually. A Hanning window was used to minimize spectral leakage. The HRV spectral analysis was performed using a fast Fourier transform algorithm. Two periods from the 24-h ECG recording were selected for HRV analysis: the awake state, lasting from 9 a.m. to 1 p.m. and reflecting daily activities, and sleep, lasting from midnight to 4 a.m. The following parameters were evaluated:

LF - spectral power in the low frequency band (0.04-0.15 Hz)
HF - spectral power in the high frequency band (0.15-0.4 Hz)
LF/HF - ratio of low to high frequency powers

Spectral powers were expressed in normalized units (NU), which represent the relative value of each power component in proportion to the total power minus the very low frequency (VLF) component (e.g., LF (NU) = LF × 100/(total power − VLF)). The representation of LF and HF in normalized units emphasizes the controlled and balanced behavior of the two branches of the autonomic nervous system. Moreover, the normalization tends to minimize the effects of changes in total power on the values of the LF and HF components. The HF power is determined mainly by parasympathetic activity (respiratory sinus arrhythmia), and the LF power reflects both sympathetic and parasympathetic activities. Some studies suggest that LF, when expressed in normalized units, is a quantitative marker of sympathetic modulation. Moreover, the LF/HF ratio is considered an index of sympathovagal balance [5].
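As a minimal, illustrative sketch (not the authors' software), the following Python code computes LF, HF, the normalized units defined above and the LF/HF ratio from an RR-interval series. Welch's method is used here for the power spectral density, whereas the study applied a plain FFT; the band limits follow the definitions in the text, and the VLF lower bound is an assumption:

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def hrv_spectral_indices(rr_ms, fs=4.0):
    """Compute LF/HF indices from RR intervals (ms), resampled at fs Hz."""
    t = np.cumsum(rr_ms) / 1000.0                     # beat times (s)
    grid = np.arange(t[0], t[-1], 1.0 / fs)           # uniform time grid
    rr_even = interp1d(t, rr_ms, kind="cubic")(grid)  # evenly sampled tachogram
    rr_even = rr_even - rr_even.mean()                # remove the DC component
    f, psd = welch(rr_even, fs=fs, nperseg=1024)      # Hann-windowed PSD

    def band_power(lo, hi):
        mask = (f >= lo) & (f < hi)
        return np.trapz(psd[mask], f[mask])

    vlf = band_power(0.003, 0.04)   # VLF lower bound of 0.003 Hz is assumed
    lf = band_power(0.04, 0.15)     # LF band as defined in the text
    hf = band_power(0.15, 0.40)     # HF band as defined in the text
    total = vlf + lf + hf           # total power approximated by the band sum
    lf_nu = lf * 100.0 / (total - vlf)   # normalized units, per the formula above
    hf_nu = hf * 100.0 / (total - vlf)
    return lf_nu, hf_nu, lf / hf
```

Feeding the function the artifact-free RR series from the 9 a.m. to 1 p.m. and midnight to 4 a.m. windows would yield the awake and sleep indices of the kind compared in Table 1.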
STATISTICAL ANALYSIS
The data are expressed as means ± SE. The Lilliefors test was used to determine Gaussian/non-Gaussian distribution. The Mann-Whitney U test was used for between-group comparisons (hypertension vs. control) and the Wilcoxon test for within-group comparisons (awake state vs. sleep). P ≤ 0.05 was considered significant.
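A minimal sketch of these two comparisons, assuming per-subject HF values are available as arrays (the data below are invented for illustration):

```python
import numpy as np
from scipy.stats import mannwhitneyu, wilcoxon

rng = np.random.default_rng(0)
# Invented per-subject HF powers in normalized units; n = 22 per group
hf_hyper_awake = rng.normal(35, 8, 22)
hf_ctrl_awake = rng.normal(45, 8, 22)
hf_hyper_sleep = hf_hyper_awake + rng.normal(10, 4, 22)  # HF rises in sleep

# Between-group comparison (independent samples): hypertension vs. control
_, p_between = mannwhitneyu(hf_hyper_awake, hf_ctrl_awake, alternative="two-sided")
# Within-group comparison (paired samples): awake vs. sleep, same subjects
_, p_within = wilcoxon(hf_hyper_awake, hf_hyper_sleep)

print(f"HF, hypertension vs. control (awake): p = {p_between:.4f}")
print(f"HF, awake vs. sleep (hypertension):   p = {p_within:.4f}")
```

The non-parametric tests are appropriate here because the Lilliefors check does not guarantee Gaussian distributions in samples of this size.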
PRIMARY HYPERTENSION VS. CONTROL
The HF was significantly lower in adolescents with primary hypertension in both the awake and sleep states, compared with controls (P=0.018 and P=0.004, respectively). The LF was higher in the awake state (P=0.031), but did not change appreciably during sleep, in primary hypertension compared with controls. The LF/HF ratios were significantly higher in adolescents with primary hypertension in both the awake and sleep states, compared with controls (P=0.031 and P=0.002, respectively) (Table 1).
AWAKE STATE VS. SLEEP
The HF was significantly higher and the LF/HF ratio was significantly lower during sleep in both groups (P=0.001). There were no significant differences in LF between the awake and sleep states in either the primary hypertension group or the control group (Table 1).
DISCUSSION
The basic finding of the present study was a decrease in parasympathetic HF activity during the awake state and sleep, and an increase in sympathetic LF activity during the awake state in adolescents with primary hypertension. These results indicate defective vagal regulation of the sinoatrial node, associated with a shift of the sympathovagal balance toward sympathetic dominance (higher LF/HF) during day and night.
Studies concerning cardiovascular dysregulation in primary hypertension emphasize the role of the sympathetic branch of the autonomic nervous system. Enhanced sympathetic activity is widely accepted as one of the fundamental mechanisms leading to primary hypertension and may already be present in prehypertensive subjects [6]. In addition, a close relationship between psychosocial characteristics and primary hypertension is presumed. Some authors assume that excessive cardiovascular reactivity to psychological stress may have a causal mechanistic role in primary hypertension [7]. Thus, stress may affect major regulatory systems, in particular the autonomic nervous system, leading to inappropriately elevated sympathetic drive contributing to higher blood pressure. Some children with primary hypertension are characterized by typical psychological features, e.g., perfectionism or anxiety leading to chronic stress, which, in consequence, affects cardiac autonomic regulation [7]. Kamada et al. [8] found a shift of the sympathovagal balance (higher LF/HF) using HRV analysis in type A subjects (competitive, ambitious, intensive life-style, faster breathing) compared with type B subjects (relaxed behaviour, deep breathing) [8]. The mechanisms by which psychosocial factors increase the risk of cardiovascular diseases, including hypertension, are numerous and complex; however, sympathetic overactivity seems to play a pivotal role [9,10]. Our results are in accordance with these studies, indicating sympathetic overactivity in primary hypertension. Importantly, some authors emphasize the contribution of parasympathetic modulation in primary hypertension [9,10], but its role is less clear. Some studies revealed lower vagal activity in young people with primary hypertension [6,11]. Others revealed a significant decrease in parameters representing vagal tone during 5-min periods not only immediately preceding or following blood pressure elevations, but also 10 and 20 min before these episodes. Moreover, the low frequency component of HRV was significantly lowered 10 min before and immediately after the recording of a blood pressure elevation. These results suggest that among the various pathogenic mechanisms of spontaneous blood pressure elevations, sudden vagal withdrawal should be taken into account [12]. Our results confirm a decrease of parasympathetic activity in primary hypertension.
The cardiac autonomic dysregulation in primary hypertension could be a result of abnormal central activity and of alterations in the effectors (heart, vessels) [3], but the exact mechanisms are still unclear. Moreover, patients with early primary hypertension may have a hyperkinetic pattern of hemodynamics, characterized by higher cardiac output and higher heart rate and blood pressure [13]. It seems that a combination of sympathetic predominance and lower vagal activity can be a result of the multifactorial pathomechanisms leading to chronic elevation of blood pressure in adolescents, including the above-mentioned psychosocial factors. Thus, stress management could exert an advantageous influence on hypertension prevention. It seems worth emphasizing the importance of non-pharmacological (e.g., relaxation or physical activity) and psychosocial (e.g., decreasing anxiety or hyperreactivity to stressors) treatments of primary hypertension, which may lessen cardiac autonomic dysregulation [10,14].
In conclusion, our study has revealed a combination of sympathetic predominance and vagal withdrawal indicating impaired cardiac autonomic regulation during day and night in adolescents with primary hypertension. Importantly, autonomic imbalance may be the final common pathway of numerous diseases and conditions associated with increased morbidity and mortality of hypertension [15].
Testing Cessation Messages for Cigarette Package Inserts: Findings from a Best/Worst Discrete Choice Experiment
This study assessed smokers’ responses to different smoking cessation topics and imagery for cigarette package inserts. Adult smokers from Canada (n = 1000) participated in three discrete choice experiments (DCEs): DCE 1 assessed five cessation benefit topics and five imagery types; DCE 2 assessed five messages with tips to improve cessation success and five imagery types; DCE 3 assessed four reproductive health benefits of cessation topics and four imagery types. In each DCE, participants evaluated four or five sets of four inserts, selecting the most and least motivating (DCEs 1 & 3) or helpful (DCE 2) for quitting. Linear mixed models regressed choices on insert and smoker characteristics. For DCE 1, the most motivating messages involved novel disease topics and imagery of younger women. For DCE 2, topics of social support, stress reduction and nicotine replacement therapy were selected as most helpful, with no differences by imagery type. For DCE 3, imagery influenced choices more than topic, with imagery of a family or a mom and baby selected as most motivating. Statistically significant interactions for all three experiments indicated that the influence of imagery type on choices depended on the message topic. Messages to promote smoking cessation through cigarette pack inserts should consider specific combinations of message topic and imagery.
Introduction
In 2000 Canada was the first country to mandate pictorial health warnings on cigarette packs; since then, over 100 countries have implemented them [1]. Pictorial warnings generally illustrate the consequences of tobacco use in order to prevent tobacco use in nonsmokers and to motivate smokers to quit. Evidence from across the globe to date indicates that prominent, pictorial warnings with fear-arousing content effectively promote smoking cessation intentions and behaviors [2,3]. However, the Extended Parallel Process Model (EPPM) asserts that the impact of these fear-arousing messages could be further enhanced by including complementary messages that enhance response efficacy and self-efficacy [4,5]. Response efficacy is comprised of beliefs that some actions (such as quitting smoking) might avert the threat of diseases caused by smoking. Messages communicating response efficacy typically focus on benefits of quitting smoking. Self-efficacy comprises beliefs that one is capable of carrying out these responses (i.e., one can quit smoking). Messages aimed at raising self-efficacy frequently communicate tips about how to quit smoking.
A lesser known characteristic of the Canadian warning label policy is its requirement for "inserts", which are small, printed leaflets inside of cigarette packs that contain messages about the benefits of quitting (e.g., response efficacy) and behavioral recommendations to increase successful smoking cessation (e.g., self-efficacy) (see: www.tobaccolabels.ca/countries/canada). Observational studies of Canadian smokers indicate that those who read inserts are more likely to have stronger downstream self-efficacy to quit and are more likely to subsequently make a quit attempt, including attempts that last for at least a month [6,7]. Prior research on pictorial warnings about smoking-related harms has identified some types of imagery as more effective than other types, such as graphic illustrations of bodily harm (e.g., open heart surgery) compared to symbolic representations of risk (e.g., a bomb to represent a pending heart attack) [8-11]. To our knowledge, however, no research has assessed the effectiveness of pictorial complements for efficacy-enhancing messages, whether for promoting response efficacy or self-efficacy to quit.
Given the theoretical and empirical support for the Canadian labeling policy of complementing pictorial warnings with inserts containing efficacy messages, there is a need for research on how to best capitalize on the expanded space for public health messages that inserts provide. The current experimental study assessed smokers' responses to different imagery and efficacy message topics that could be used for inserts that aim to promote smoking cessation, including which smoker characteristics influenced the perceived effectiveness of messages.
Topics for Efficacy Messages about Smoking Cessation
Smokers' responses to different types of messages that target efficacy beliefs around smoking cessation have rarely been studied. Observational research among Canadian smokers has assessed attention to and effects of inserts [6,7,12]; however, these studies did not distinguish the effects of different insert content, which as of 2012 included four inserts with messages about cessation benefits and six with quitting tips. An experimental study with US smokers on insert content [13] found that smokers perceived cessation benefit messages as more effective than quitting tips messages. However, no study of which we are aware has systematically assessed the effectiveness of specific topics within these two types of message categories.
Pictorial Imagery and Smoking Cessation Messages
Pictorial imagery is a potentially important characteristic of messages, as it can enhance message recall [14,15] and persuasiveness [16]. Studies have consistently demonstrated that cigarette package warnings with images are more effective than text-only warnings in promoting knowledge of tobacco-related risks and encouraging smoking cessation, whether in experimental studies [3], observational studies [2], or randomized behavioral trials [17-19]. These studies have primarily assessed imagery that illustrates smoking-related harms, where negative emotional responses can play an important role in mediating the effects of warnings on smoking cessation behaviors [18,20,21]. For warnings that emphasize the negative consequences of smoking (i.e., "loss frame" messages), imagery that graphically illustrates the harms of smoking or that shows personal suffering from these harms is consistently rated by smokers and youth as more effective than imagery that symbolically represents risk (e.g., a bomb to represent a pending heart attack) [8-11,22]. Such studies have provided the foundation for WHO policy recommendations about warning label content [23].
Research on smokers' responses to imagery with efficacy messages should inform recommendations for the most effective inserts to promote smoking cessation. Consistent with research on cigarette warnings, experimental research finds that smokers perceive both cessation benefit messages and cessation tips as more effective when they include pictorial imagery than when they do not [13]. However, to our knowledge, no published research has systematically assessed specific types of imagery to accompany such messages. Compared to loss-frame warnings about smoking-related harms, these messages are more positive: cessation benefit messages focus on positive outcomes associated with cessation (i.e., "gain frame"), and quitting tips promote specific cessation strategies to enhance smokers' self-efficacy to quit. Some pictorial warning studies have found that smokers and youth rate messages about cessation as less effective than messages about smoking-related risks [9,10]. This may be because the type of imagery that "fits" cessation messages involves less negative emotional arousal. Because efficacy messages are less likely to work through the channel of negative affect, research should determine which types of pictorial imagery will best enhance efficacy message effects.
Concordance and Message Effects
Message effects can depend on the extent to which message receiver characteristics are concordant with message attributes. Indeed, messages are believed to be more effective when recipients perceive them as personally relevant [24,25]. Enhancing message relevance is critical to targeted and tailored communication approaches, which frequently match visible characteristics of actors displayed in graphics (e.g., race, sex) with message recipient characteristics [26]. Compared with non-tailored print materials, tailored materials are generally better remembered, read, perceived as relevant and/or credible, and more effective in promoting behavior change [27]. Hence, the visible characteristics of people portrayed in insert messages may influence smokers' responses to those messages.
Concordance between a message and its recipient may also be due to the textual content of the message. For example, some research has found that cigarette warnings about pregnancy-related harms from smoking are rated as more effective by women of reproductive age than by other groups [28]. Efficacy messages about cessation benefits and quitting tips may be more relevant and effective for smokers who intend to quit or have recently tried to quit, as found for pictorial warnings that generally use loss-framed messages [29-32]. Since public health messages that use cigarette packs as communication vehicles reach all smokers, it is important to determine which message characteristics are generally effective across all types of smokers, as well as within particular segments, such as those intending to quit.
Objective
The current study used a series of discrete choice experiments (DCEs) to assess Canadian smokers' preferences for different message content and imagery (adapted from those considered by Health Canada for implementation in 2020) that could be used on newly adopted inserts. Three separate DCEs were conducted in order to assess: (1) cessation benefit messages that targeted a general audience of adult smokers (DCE 1); (2) quitting tips messages that targeted a general audience of adult smokers, but particularly those interested in quitting (DCE 2); and (3) messages about pregnancy-related cessation benefits that targeted smokers of reproductive age (DCE 3). We also assessed the test-retest reliability of DCE 1 and DCE 2 in a subsample of participants who were followed up four weeks later.
Sample
Adult smokers from Canada were recruited through GMI-Lightspeed's online consumer panel. Invitation emails with a link to the online survey were sent to panel members, who were given a brief study description before providing consent. Eligible participants were 18 to 64 years old, had smoked at least 100 cigarettes in their lifetime, and had smoked cigarettes at least once in the prior 30 days. The first wave of the experiment was conducted between 22 and 31 August 2017. Sample quotas were established to recruit a minimum of 1000 participants, 50% of whom intended to quit smoking within the next 6 months. Four weeks after first participation in the experiments (20-30 September 2017), we recontacted participants to re-administer the experiments. GMI-Lightspeed has several quality control measures in place to ensure participants are engaged and provide thoughtful responses, including removal of panelists who complete the questionnaire too quickly (i.e., within 2/5ths of the median time) and regular screening of panelists with short batteries of questions and associated algorithms to identify and eliminate from the panel those who do not provide truthful, engaged responses. Participants were provided compensation that is standard for GMI-Lightspeed (i.e., baseline range = $0.30-$0.65; follow-up range = $1.00-$3.00). The study protocol received ethics approval from the IRB at the University of South Carolina (Pro00054788).
Experimental Protocol
We used discrete choice experiments (DCEs), which have been used extensively in transportation studies, environmental economics, and marketing [33][34][35]. DCE protocols use full factorial or fractional factorial designs to create sets of alternatives from which participants choose. Their key strength is the ability to simultaneously assess the effects of specific stimulus characteristics on decision-making independent of other characteristics that are manipulated, while also providing an indication of the relative impact of each characteristic on choices [36]. The tobacco industry uses DCEs in premarket research [37][38][39], and, in international litigation, tobacco industry experts highlight how DCEs are less biased (e.g., reduced demand effects) than the other methods that public health researchers use to study tobacco packaging and labeling [40]. A growing number of tobacco research studies have used DCE methods to assess the effects of different characteristics of cigarette pack design elements, cigarette branding, cigarette sticks, and health warnings [41][42][43][44]. However, none of these studies has assessed the reliability of DCE methods. Assessing the test-retest reliability of DCEs in the context of pre-market testing of inserts may be particularly important given that smokers are repeatedly exposed to inserts and the effectiveness of some message characteristics may change over time. Our approach involved "best-worst" scaling, which asks participants to choose the stimulus configurations they prefer most and least, thereby increasing the precision of estimates and statistical power for assessing the reliability of DCE data [45].
Participants evaluated three blocks of material that corresponded to three DCEs. In each DCE, participants evaluated four or five sets of four inserts (i.e., "choice sets"), selecting the most and least motivating (DCEs 1 & 3) or helpful (DCE 2) for quitting. DCE 1 on cessation benefit messages unrelated to reproductive health involved a 5 × 5 within-subjects design (with a between-subjects element due to random assignment to different blocks of choice sets): five distinct topics (i.e., avoiding diabetes; avoiding arthritis, osteoporosis, and weakened immune system [new disease]; improving lung health; enhancing wellbeing; financial benefits of quitting) and five types of accompanying imagery (i.e., older male, younger male, younger female, older female, and non-human symbolic representation) were tested (see Figure 1). DCE 2 on quitting tips messages also involved a 5 × 5 within-subjects design (and between-subjects from random assignment to blocks): five messages about strategies to quit (i.e., stress reduction, physical activity, social support, nicotine replacement therapy, list of cessation strategies) and the same five imagery types as in DCE 1 were assessed (see Figure 2). In DCE 3 there were four different messages that addressed benefits of quitting before or during pregnancy (i.e., health of mom and baby; health of mom and baby, with cessation tips; health of mom, dad and baby; fertility of mom and dad, as well as healthy pregnancy and baby) and four image types (i.e., pregnant mom, mom with baby, mom and dad with baby, symbolic figure of pregnant mom) (see Figure 3). The use of multiple messages on different topics aimed to increase the generalizability of our findings beyond the single messages typically used in experimental studies, which is recommended for media effects research [46]. Insert message topics were selected and adapted from those developed by Health Canada as part of the process of selecting message content for inserts that will be implemented in 2020. Health Canada and the research team worked with a graphic designer to identify a range of possible images that fit each message and the typologies used (e.g., males and females that appeared younger or older than 40 for DCEs 1 & 2; symbolic images for messages in all DCEs).
For all participants, DCE 1 was followed by DCE 2 and DCE 3. For both DCE 1 and DCE 2, all 25 message combinations were used. Fifty different "choice sets" were used, each with four contrasting inserts, such that the alternatives were pairwise independent of each other across choice sets (see Figure 4 for an example choice set). To reduce response burden in both DCEs 1 and 2, participants were randomized to evaluate one of ten blocks, each of which included five choice sets. For DCE 3, all meaningful combinations of topics and images were used. Whereas DCEs 1 and 2 used different images for each topic, DCE 3 used the same images across topics. However, there was only one message for which all four images matched the topic. We could meaningfully match the other three messages with three of the four possible images. This resulted in 13 distinct topic and image combinations, and 13 different choice sets with four contrasting inserts. The alternatives were pairwise independent of each other across choice sets. Participants were randomized to evaluate one of three blocks that contained either 4 or 5 choice sets. Hence, at baseline, each participant evaluated 14 or 15 choice sets (i.e., 5 for DCE 1; 5 for DCE 2; and 4 or 5 from DCE 3), with the choice sets presented in random order within each experiment. The final insert configurations, choice sets, and blocks of choice sets can be found in Supplementary Tables S1-S3.
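To make the blocked factorial design concrete, the sketch below enumerates the 25 topic-by-imagery configurations used in DCEs 1 and 2, draws choice sets of four inserts, and splits them into ten blocks of five. This is an illustrative Python sketch of our own (the study's analyses were run in Stata), with shorthand level names; the random sampling here is a stand-in for the study's actual construction, which additionally enforced pairwise independence of alternatives across choice sets.

```python
from itertools import product
import random

# A minimal sketch (ours, not the authors' design code): enumerate the 5 x 5
# factorial of topics and imagery used in DCEs 1 and 2, draw choice sets of
# four inserts, and split them into ten blocks of five choice sets each.
TOPICS = ["lung_health", "diabetes", "new_diseases", "wellbeing", "financial"]
IMAGERY = ["older_male", "younger_male", "younger_female", "older_female", "symbolic"]

def build_inserts():
    """All 25 topic-by-imagery insert configurations."""
    return list(product(TOPICS, IMAGERY))

def build_choice_sets(inserts, n_sets=50, set_size=4, seed=1):
    """Randomly drawn choice sets; the actual study design additionally
    enforced pairwise independence of alternatives across choice sets."""
    rng = random.Random(seed)
    return [rng.sample(inserts, set_size) for _ in range(n_sets)]

def assign_blocks(choice_sets, n_blocks=10):
    """Each participant is randomized to one block of five choice sets."""
    size = len(choice_sets) // n_blocks
    return [choice_sets[i * size:(i + 1) * size] for i in range(n_blocks)]

blocks = assign_blocks(build_choice_sets(build_inserts()))
print(len(blocks), "blocks of", len(blocks[0]), "choice sets")  # 10 blocks of 5
```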
Participants who were successfully re-contacted (58%, n = 582) were assigned to the same blocks of material that they evaluated at baseline in DCEs 1 and 2. Responses to DCE 3 stimuli were not assessed due to concerns about survey length and because of expectations of a substantially smaller analytic sample, both due to attrition and due to exclusion of participants who found no messages in DCE 3 to be motivating (because of the narrow focus on reproductive health topics).
Measures
Before beginning the experimental protocol, all participants were told that they would be shown health information that could appear inside cigarette packages, either on small paper leaflets or printed on the inside of packages. Both ways of delivering insert information are used in Canada, depending on whether the package is a "flip top" or "slider pack" (i.e., pack opens like a drawer), respectively.
Dependent Variables
For each choice set, participants were presented with four inserts, and for DCEs 1 and 3 they were asked "Which insert would MOST motivate you and LEAST motivate you to quit smoking?", with participants choosing one insert as "most motivating" and one as "least motivating", with mutually exclusive options allowed for each choice set (see Figure 4 and Supplementary Figures S1 and S2). Afterwards, participants were asked "Do you actually think that: (a) None would be motivating if you decided to quit, or (b) At least one would be motivating if you decided to quit?" For DCE 2, participants were asked "Which insert would be MOST helpful and which would be LEAST helpful for you if you decided to quit smoking?", with mutually exclusive choices allowed for each insert. Then, participants were asked "Thinking about these inserts, do you actually think that: (a) None would be helpful if you decided to quit, or (b) At least one would be helpful if you decided to quit?" Participants could view each choice set for as long as they wished. For each choice set, the insert selected as most helpful/motivating to quit smoking was assigned a value of 1, and the least helpful/motivating was assigned a value of −1. The remaining inserts in that set were assigned a value of 0. If the participant indicated that none would be helpful/motivating, all inserts in that choice set were assigned a value of 0.
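This scoring rule maps directly onto a small function. The following Python sketch (our own illustration; the function and variable names are hypothetical) applies the coding to a single choice set.

```python
def score_choice_set(alternatives, most, least, none_endorsed):
    """Apply the paper's coding rule to one choice set: the insert chosen as
    most helpful/motivating scores 1, the least scores -1, the rest score 0;
    if the participant says none would help/motivate, every insert scores 0."""
    if none_endorsed:
        return {alt: 0 for alt in alternatives}
    scores = {alt: 0 for alt in alternatives}
    scores[most], scores[least] = 1, -1
    return scores

# Example: insert "B" chosen as most motivating and "D" as least motivating.
print(score_choice_set(["A", "B", "C", "D"], most="B", least="D", none_endorsed=False))
# -> {'A': 0, 'B': 1, 'C': 0, 'D': -1}
```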
Independent Variables
Insert characteristics (i.e., message topics, imagery types) were effects coded such that coefficients reflected deviations of the group from the grand mean. Participant sociodemographics included age group in years (18-29, 30-39, 40-49, 50-64), sex (male, female), education (high school or less, college or some university, completed university or higher), and income (under $30,000, $30,000-59,999, $60,000-99,999, $100,000 and over). Smoking-related variables included: smoking frequency (every day; some days); nicotine dependence as determined by the heaviness of smoking index (HSI, a function of average cigarettes per day and time to first cigarette after waking) [47]; intention to quit, with responses dichotomized into quit intention in the next 6 months or not [48]; and at least one quit attempt in the prior four months (yes, no).
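Effects coding differs from the more familiar dummy coding in that the omitted level is scored −1 rather than 0, so each coefficient is read as a deviation from the grand mean rather than from a baseline level. Below is a minimal sketch of this coding, assuming pandas; the helper and level names are ours, and the study itself used Stata.

```python
import pandas as pd

def effects_code(series: pd.Series, reference: str) -> pd.DataFrame:
    """Effects (deviation) coding: k - 1 columns for a k-level factor. Each
    level gets its own column (1 for that level, 0 otherwise), except the
    reference level, which is coded -1 in every column; fitted coefficients
    are then deviations from the grand mean, as described in the paper."""
    levels = [lvl for lvl in series.unique() if lvl != reference]
    coded = pd.DataFrame(index=series.index)
    for lvl in levels:
        coded[f"{series.name}[{lvl}]"] = (
            (series == lvl).astype(int) - (series == reference).astype(int)
        )
    return coded

topic = pd.Series(["diabetes", "wellbeing", "financial", "diabetes"], name="topic")
print(effects_code(topic, reference="financial"))
```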
Data Analysis
Within each DCE, participants who indicated that none of the inserts would be helpful/motivating for all choice sets they evaluated were excluded from the analysis for that DCE. This exclusion was due to their not contributing any meaningful information for assessing the specific insert characteristics that influence choice. Using chi-square tests, we compared the demographic and smoking-related characteristics of participants who found no insert message to be motivating/useful (excluded) with those who found at least one insert message to be motivating/useful (analytic sample). Omnibus chi-square tests were used to assess whether participant characteristics differed across the blocks of stimuli to which they were randomized.
We used mixed linear regression to control for repeated measures when analyzing each DCE's analytic sample [45]. Dependent variables reflected the choice of an insert as motivating or helpful to quit, depending on the DCE. Independent variables included insert characteristics (topic, imagery type), controlling for block assignment, sociodemographics, and smoking-related participant characteristics. The relative impact of each insert characteristic on choice was calculated using a utility range (i.e., the difference between each characteristic's highest and lowest estimated part-worth utility or estimated effect on choices), divided by the sum of all the characteristics' utility ranges for a given outcome. We also tested for concordance effects by assessing interactions between message imagery type and participant characteristics (i.e., sex, age). Age groups were dichotomized (18-39; 40-64) because the young vs. old contrast in the stimuli was based on whether the person portrayed in the image clearly appeared younger or older than 40. Because being younger than 40 generally reflects female reproductive age, this contrast was also meaningful for DCE 3. Finally, we assessed interactions between message topic and imagery type.
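The relative-importance calculation described here is simple enough to spell out in code. In the sketch below, the topic part-worths echo the DCE 1 coefficients reported in the Results, while the lung-health topic and all imagery values are invented placeholders; the function itself implements the utility-range formula above.

```python
def relative_importance(part_worths):
    """Relative impact of each attribute on choice: the attribute's utility
    range (max minus min part-worth across its levels) divided by the summed
    ranges of all attributes, as described in the Data Analysis section."""
    ranges = {a: max(u.values()) - min(u.values()) for a, u in part_worths.items()}
    total = sum(ranges.values())
    return {a: round(r / total, 2) for a, r in ranges.items()}

# Topic values echo the DCE 1 coefficients reported in the Results; the
# lung-health topic and every imagery value are hypothetical placeholders.
utilities = {
    "topic": {"diabetes": 0.014, "new_diseases": 0.015, "lung_health": 0.0,
              "wellbeing": -0.013, "financial": -0.016},
    "imagery": {"older_male": -0.004, "younger_male": -0.003,
                "younger_female": 0.008, "older_female": -0.002, "symbolic": 0.001},
}
print(relative_importance(utilities))  # e.g., {'topic': 0.72, 'imagery': 0.28}
```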
We conducted these analyses for the entire baseline sample, as well as for the subsample that was followed up and repeated DCEs 1 and 2. For the sample that was followed up, the consistency of choices in DCEs 1 and 2 was assessed using Cohen's kappa. All data analyses were conducted using Stata v. 13.1 (StataCorp LLC, College Station, TX, USA).
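For the test-retest analysis, percent agreement and Cohen's kappa can be computed directly from paired choices. The sketch below uses toy data, not the study's; note how the skewed marginals produced by this coding (most inserts score 0) can hold kappa well below the raw agreement, the "kappa paradox" revisited in the Discussion.

```python
def percent_agreement_and_kappa(pairs):
    """Test-retest agreement for paired categorical choices: raw percent
    agreement plus Cohen's kappa, which discounts the chance agreement
    implied by the two marginal distributions."""
    n = len(pairs)
    p_obs = sum(a == b for a, b in pairs) / n
    categories = {c for pair in pairs for c in pair}
    p_exp = sum(
        (sum(a == c for a, _ in pairs) / n) * (sum(b == c for _, b in pairs) / n)
        for c in categories
    )
    return p_obs, (p_obs - p_exp) / (1 - p_exp)

# Toy data: each pair is the (baseline, follow-up) coding of one insert (1/0/-1).
toy = [(1, 1), (0, 0), (0, 0), (0, 0), (-1, 0), (1, 1)]
agreement, kappa = percent_agreement_and_kappa(toy)
print(round(agreement, 2), round(kappa, 2))  # 0.83 0.7
```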
Sample Characteristics
In the baseline sample (n = 1000; see Table 1), most participants were 40 or older (59%), female (58%), and daily smokers (78%). For all three DCEs, no statistically significant differences were found in participant characteristics across the blocks to which participants were randomized (results not shown). The analytic samples differed for each DCE (see the explanations in the results for each DCE below).
DCE 1-Cessation Benefit Messages for a General Audience
The proportion of participants who indicated that at least one of the DCE 1 messages motivated them to quit was 80% (n = 804). Those who opted out of all choice sets they evaluated were less likely to intend to quit (37% vs. 54%; p < 0.001) or to have tried to quit recently (32% vs. 49%; p < 0.001) compared to those who selected at least one insert.
Effects of Specific Cessation Benefit Message Topics and Imagery on Choice
The relative importance of message topic on choice (see Analysis section) was much higher than for imagery (68% vs. 32%; see Figure 5). Correspondingly, the overall effects of message topics on message selection were statistically significant (p < 0.001), whereas the overall effects of imagery were not (p = 0.15; see Table 2). Message topics of diabetes (b = 0.014) and new diseases (b = 0.015) were significantly more likely to be selected as motivating than the average of all messages, whereas topics on wellbeing (b = −0.013) and financial benefits from cessation (b = −0.016) were identified as less motivating. No statistically significant influences of imagery type were found, although imagery of younger women was only marginally non-significant (b = 0.008, p = 0.076). The interaction between participants' sex and imagery was marginally non-significant (p = 0.052). Males selected messages with imagery of older women to be less motivating (b = −0.015) and symbolic imagery to be more motivating (b = 0.016). No imagery was selected as superior for females.
The overall interaction between participants' age and imagery type was marginally non-significant (p = 0.056). However, when examining specific coefficients, older participants selected messages with imagery of younger women to be more motivating (b = 0.013) and imagery of older men to be less motivating (b = −0.014). The overall interaction between topic and imagery was statistically significant (p < 0.001), indicating that the effectiveness of any particular image type depended on the topic (see Supplementary Figure S3).
DCE 2-Quitting Tips Messages for a General Audience
More than three-quarters (78%; n = 778) of the sample found at least one message helpful. Compared to those who chose at least one insert as helpful, those who found no messages helpful were more likely to have lower education (i.e., 21% vs. 34% with a university degree or more; p < 0.001), to be more nicotine dependent (HSI: 3.28 vs. 3.01, p = 0.008), and to be less likely to intend to quit (35% vs. 55%; p < 0.001) or to have tried to quit recently (32% vs. 49%; p < 0.001).
Effects of Specific Quitting Tips Message Topics and Imagery on Choice
The relative impact of message topic on choice was substantially higher than for image type (85% vs. 15%; see Figure 5). Message topic effects on selection of messages were statistically significant (p < 0.001; see Table 3). The topic of social support was selected as significantly more helpful (b = 0.017), and the topic that listed cessation strategies was, on average, selected as less helpful (b = −0.046) than other messages. Imagery type did not have a significant effect on choice, whether assessed overall or individually. Interactions between imagery type and participant sex and age were not significant (p = 0.584 and p = 0.447, respectively), although on average older participants selected messages with imagery of the young man to be less helpful (b = −0.014). The overall interaction between message topic and imagery was statistically significant (p < 0.001), indicating that the effectiveness of imagery was contingent on message topic (see Supplementary Figure S4).
DCE 3-Reproductive Health Benefit Messages for Smokers of Reproductive Age
Effects of Specific Reproductive Health Benefit Message Topics and Imagery on Choice
Imagery type had a larger influence on choice than message topic (62% vs. 38%; see Figure 5); nevertheless, both message topic and imagery type had statistically significant overall effects on choices (p < 0.001 for both; see Table 4). The message topics of healthy mom and baby (b = 0.093) and of mom, dad and baby (b = 0.031) were selected as significantly more motivating than the average, whereas the topic of fertility (b = −0.118) was significantly less motivating (see Table 1). Messages with imagery of real people, whether of a mom and baby (b = 0.107) or a mom, dad and baby (b = 0.122), were selected as significantly more motivating than the average, whereas the symbolic figure of a pregnant woman (b = −0.252) was selected as significantly less motivating (see Table 4).
The overall interaction between participant sex and message imagery was statistically significant (p < 0.001). However, the pattern of responses was similar for males and females (see Table 1), except that males had particularly strong responses to messages with imagery that included a dad (b = 0.223 and 0.045, respectively), and females selected messages with imagery of a pregnant woman (b = 0.045) while males did not.
Interactions between age and imagery were found to be significant (p < 0.001). However, the primary difference between younger and older groups concerned the somewhat stronger effects of imagery on message selection amongst older compared to younger participants. No differences in direction of effect or statistical significance were found (see Table 1). Finally, the impact of imagery on message selection depended on message topic, as evinced by the statistically significant interaction between topic and imagery (p < 0.001; see Supplementary Figure S5).
Reliability of DCE
Four weeks after the baseline experiments, 58% of participants were successfully re-contacted (n = 582) and re-evaluated the same blocks of inserts they evaluated in DCEs 1 and 2 at baseline. Compared to the baseline sample, those who were followed up were more likely to be older than 40 (67% vs. 59%; p = 0.002) and less likely to intend to quit (43% vs. 50%; p = 0.001). Results were similar with regard to differences between the analytic and excluded samples due to opting out (results not shown). For both DCEs 1 and 2, the percent agreement in choice was 94%, with a kappa statistic of 0.38 for DCE 1 and 0.37 for DCE 2, which indicates a fair level of agreement.
Discussion
This study assessed the cessation benefit and quitting tips message topics and imagery most likely to be effective for the innovative Canadian policy of using cigarette package inserts to complement pictorial warnings that graphically illustrate smoking-related harms. We found that the vast majority of smokers identified at least one general cessation benefit message to be motivating (80%) or a quitting tips message to be helpful (78%). As expected, these smokers were more likely to intend to quit or to have recently tried to quit when compared to those who found no message motivating or helpful. It is noteworthy, however, that most smokers who did not intend to quit also found these messages to be effective (DCE 1 = 71%; DCE 2 = 66%). While theories of health behavior, such as the theory of planned behavior [49,50], highlight the role of intention as the gateway to behavior change, observational studies have found that the association between insert exposure and subsequent cessation behavior appears at least partly independent of quit intentions [6,7]; hence, attention to insert information may become useful at the moment when one finally decides to quit, which may change in unpredictable ways over time [51].
Finding cessation benefit or quitting tips messages to be helpful was inversely associated with SES indicators (income and education, respectively). This may reflect SES-related barriers (e.g., lower support, lower access, higher stress) that can cause communication campaigns to exacerbate health inequalities [52]. To better determine the health equity impact of these kinds of messages, it will be important to assess their effectiveness in the context of fear-arousing warning labels, as studies have found responses to fear appeals among relatively lower SES smokers ranging from minimal to stronger than average [8,22][53][54][55][56][57][58].
Smokers' evaluations of messages about the reproductive health benefits of cessation exhibited a similar pattern: smokers with cessation intentions, recent quit attempts, and higher SES were more likely to select at least one message as motivating. The percentage of smokers in this group was lower than for the aforementioned messages that targeted a broader audience, yet more than half of smokers still found at least one of these reproductive health messages to be effective (58%). As expected, younger smokers were more likely to be in this group than to be unaffected, likely because of message relevance for people of reproductive age; however, females and males were equally likely to be affected, inconsistent with results for loss-framed messages on reproductive health that showed stronger responses among females [28]. This may be because men were mentioned in at least some, although not all, of the messages evaluated, therefore providing them with relevant content that is often not present in warning labels.
When assessing cessation benefit and quitting tips messages with topics that targeted the general audience of smokers (i.e., DCE 1 and DCE 2), message topics explained more variation in choices than imagery types, whereas the opposite was found for reproductive health messages (DCE 3). The contrasting results between the first two DCEs and the third are likely due, in part, to the broader range of topics used for the more general cessation messages relative to the narrower range of reproductive health topics. Furthermore, different images were used within any particular image type category (e.g., young men) across general efficacy topics for the first two DCEs, to make the imagery "fit" general efficacy message topics; this contrasts with the fixed images within each imagery type category across the reproductive health topics (see Figures 1-3). Hence, while any particular image of a young woman may work better or worse than another image within that category in DCE 1 and DCE 2, tighter control and assessment of the particular image in DCE 3 likely contributed to the stronger effect of imagery on choice. Indeed, we found statistically significant interactions between topic and image, indicating that the most effective image type depended on the topic (see Supplementary Figures S3-S5). For DCE 2, no image type worked better than the others, and for DCE 1 only one, relatively weak general tendency emerged, with a particular imagery type (i.e., the young woman) working better than others. Significant interactions indicate a better fit of image types for some topics than for others, consistent with other research suggesting that the congruence between topic and imagery in pictorial warnings influences attention and recall [59].
We found mixed support for the importance of image congruence with smokers' sex and age. For general cessation messages (DCEs 1 & 2), females were relatively uninfluenced by any particular imagery type, whereas males had weaker responses to messages that portrayed an older woman. For reproductive health benefit messages (DCE 3), image congruence mattered more, with females having stronger responses to pregnancy imagery than males and males having stronger responses than females to family imagery that included a father. Smoker age mattered when considering responses to general cessation benefit messages (DCE 1; older smokers found younger women more motivating and older men less motivating), but not for quitting tips messages (DCE 2). For the reproductive health benefit messages, no striking differences were found by age, except that imagery appeared to have a somewhat stronger effect amongst older than younger participants. Participants across all age groups agreed that the symbolic figure of the pregnant woman was less motivating than the imagery of real people. Contrary to expectations, however, symbolic imagery was quite effective for some general efficacy topics (see Supplementary Figures S3-S5), which contrasts with the weak effects of symbolic imagery found in more typical, "loss-framed" pictorial warning messages [8][9][10][11][22]. Hence, symbolic representations of cessation messages may be relatively more effective, perhaps due to the more abstract nature of cessation topics and the relative difficulty of pairing these topics with personified imagery. In the end, we found relatively weak evidence for any particular type of imagery being most effective, whether assessed overall or within specific age or sex strata of smokers. However, this conclusion should be evaluated alongside our results indicating that the type of image that is most effective appears to depend on the message topic.
Our study generally supported the reliability of DCE results over time. The factors associated with indicating that at least one insert was motivating/helpful and the variance explained by each message attribute were consistent across the baseline and follow-up DCEs (Figure 5). Although the kappa statistic indicated only a fair level of agreement (K = 0.37-0.38), such scores are not uncommon given the paradoxical nature of the kappa statistic [60]; indeed, the observed percent agreement across DCEs was high (94%), indicating that choices are relatively stable over time. Qualitative comparison of the statistical significance and the direction of coefficients also found that most coefficients (49/60 = 82%) led to the same conclusions across the baseline and follow-up assessments. Some discrepancies (n = 2) appeared to be due to reduced statistical power in the follow-up study resulting from attrition. The remaining discrepancies (n = 9) may also be due to statistical power issues (i.e., topics that elicited weaker responses became non-significant in the follow-up DCE 2, whereas the stronger topics did not), as well as random fluctuation in responses, perhaps due to the hypothetical nature of the task performed. Another explanation, however, concerns repeated exposure. Learning of insert content may make inserts less effective over time; for example, the topic of new diseases became statistically non-significant at follow-up after being motivating to quit at baseline. Future research should more squarely explore the impact of learning, since naturalistic insert exposure involves repetition. Indeed, some prior research has found that attention to inserts with cessation messages increases over time, even while attention to warnings wears out, suggesting that some insert message characteristics may become more salient over time [7].
This study has a number of limitations, including its reliance on self-reported choice to evaluate message effectiveness. While DCEs may do a better job of obscuring study intent and reducing demand effects than other study designs, our use of a relatively limited set of message attributes (topic and imagery) may have lessened this advantage. Social desirability bias may have led to overestimation of the percentage of smokers who would be influenced by these messages. Furthermore, the images we used to represent each category were not the same for each topic and therefore did not involve tight experimental control; however, imposing this would have resulted in highly variable "fit" of imagery across topics. Even so, our results indicated that perceived message effectiveness depended on the specific combination of imagery and topic. Our stimuli were presented via a relatively unrealistic online modality, although some evidence indicates that online experiment results are similar to those found when presenting warnings in person and on mock cigarette packs [8][9][10][11], as well as when assessing responses after implementation [61]. Our study went some way towards assessing naturalistic repeated exposure through its assessment of reliability and potential differences in responses to messages over time; nevertheless, experimental studies that involve repeated exposure under more naturalistic conditions may be necessary to determine the real effects of message characteristics. Still, randomized clinical trials with pictorial warnings [17][18][19] have generally found results that are consistent with those from experimental [3] and observational studies [2]. We did not collect information on some participant characteristics, such as the number of failed quit attempts or health complications from smoking, and these may provide a focus for future research on messages that explicitly address those issues. Finally, future research should also be conducted with general populations of smokers in order to determine the consistency of effects across key populations, particularly more disadvantaged groups that are underrepresented in online consumer panels. Special consideration should be given to smokers with mental health conditions and multiple addictions, given the higher prevalence of smoking in these populations and evidence that they respond differently to labeling messages than other groups [62].
Conclusions
In sum, our study found support for the contention that efficacy messages, whether focused on cessation benefits or quitting tips, on cigarette package inserts may help promote smoking cessation, even amongst Canadian smokers who have been exposed to similar messages through inserts for over a decade. Some message topics appeared generally more motivating and helpful than others, although future research in other countries should help determine whether other topics may be more powerful in settings where efficacy messages have not been a critical component of communication campaigns and labeling policies. Furthermore, implementing multiple inserts may allow for targeting of messages to specific groups, particularly vulnerable groups that suffer tobacco-related disparities.
In the end, specific combinations of message topic and imagery worked best within each message domain we assessed, indicating the need to pretest specific combinations in order to select the message characteristics most likely to maximize effectiveness for smoking cessation.
Supplementary Materials:
The following are available online at www.mdpi.com/1660-4601/15/2/282/s1, Figure S1: Example choice set for DCE 2, self-efficacy messages that target general audiences of smokers, Figure S2: Example choice set for DCE 3, reproductive health response efficacy messages that target general audiences of smokers, Figure S3: Interactions between message topic and image type; Results from DCE 1, general response efficacy messages (p < 0.05 for contrast with the grand mean), Figure S4: Interactions between message topic and image type; Results from DCE 2, general self-efficacy messages (p < 0.05 for contrast with the grand mean), Figure S5: Interactions between message topic and image type; Results from DCE 3, reproductive health response efficacy messages (* p < 0.05 for contrast with the grand mean), Table S1: Choice sets and blocks for DCE 1, Response Efficacy, Table S2: Choice sets and blocks for DCE 2, Self-Efficacy, Table S3: Choice sets and blocks for DCE 3, Reproductive health response efficacy.
Figure 1. Topic and imagery for cessation benefit messages in DCE 1.
Figure 2. Topic and imagery for quitting tips messages in DCE 2.
Figure 3. Topics and imagery for reproductive health benefit messages in DCE 3.
Figure 4. Example choice set. Participants were asked "Which message would MOST motivate you and which one would LEAST motivate you to quit smoking?" (selecting one message as most motivating and one as least motivating), and then whether none of the messages, or at least one, would motivate them to quit.
Figure 5. Relative importance* of attributes on choice of insert. (* Relative importance was calculated using a utility range (i.e., the difference between each characteristic's highest and lowest estimated part-worth utility or estimated effect on choices), divided by the sum of all the characteristics' utility ranges for a given outcome. DCE 3 was conducted only at baseline.)
Table 1. Characteristics of study participants. * At least one message was motivating to quit; therefore data could be used in analysis. Data from the follow-up sample show results from baseline and follow-up protocol administrations. The DCE 3 protocol was not included at follow-up. ** At least one message was helpful for quitting; therefore data could be used in analysis. Data from the follow-up sample show results from baseline and follow-up protocol administrations. HSI-Heaviness of smoking index; DCE-Discrete choice experiment.
Table 2. Effects of cessation benefit message characteristics on quit motivation.
Table 3. Effects of quitting tips message characteristics on helpfulness for quitting.
Table 4. Effects of reproductive health benefit message characteristics on quit motivation.
Time-Restricted Eating: Benefits, Mechanisms, and Challenges in Translation
Eating out of phase with daily circadian rhythms induces metabolic desynchrony in peripheral metabolic organs and may increase chronic disease risk. Time-restricted eating (TRE) is a dietary approach that consolidates all calorie intake to 6- to 10-h periods during the active phase of the day, without necessarily altering diet quality and quantity. TRE reduces body weight, improves glucose tolerance, protects from hepatosteatosis, increases metabolic flexibility, reduces atherogenic lipids and blood pressure, and improves gut function and cardiometabolic health in preclinical studies. This review discusses the importance of meal timing on the circadian system, the metabolic health benefits of TRE in preclinical models and humans, the possible mechanisms of action, the challenges we face in implementing TRE in humans, and the possible consequences of delaying initiation of TRE.
INTRODUCTION
Lifestyle-induced metabolic diseases, such as type 2 diabetes (T2D) and cardiovascular disease, are often associated with obesity, reductions in physical activity, and increased consumption of energy-dense foods. Accumulating evidence suggests that when we eat may be another contributing factor to chronic disease progression (Andrzejczak et al., 2011). Lengthened daily eating patterns, in excess of 14 h/day, were evident in studies conducted in the USA and India, with less than 25% of caloric intake occurring prior to 1 pm (Gupta et al., 2017). Time-restricted eating (TRE, also known as time-restricted feeding, TRF) is a novel dietary tool that recommends individuals shorten the duration of the daily eating window, without altering calorie intake or diet quality. TRE restores circadian rhythms and imparts pleiotropic metabolic benefits in animal models (Delahaye et al., 2018; Hatori et al., 2012; Olsen et al., 2017; Villanueva et al., 2019; Wang et al., 2018; Woodie et al., 2018). TRE also reduces body weight and fat mass, improves glucose tolerance, and reduces blood pressure in humans, particularly in those with overweight or obesity (Figure 1) (Gabel et al., 2018; Hutchison et al., 2019; Sutton et al., 2018; Wilkinson et al., 2019). The studies to date in humans are limited in size and duration, and the effectiveness and acceptability of TRE in the general population remain unclear. The majority of TRE studies have also initiated the eating window early in the active phase, presumably to maximize the metabolic benefits. This review will discuss the metabolic benefits of TRE in preclinical models and the possible mechanisms of action. We also discuss the likely challenges of implementing TRE in humans and the possible consequences of delaying initiation of TRE.
REGULATION OF CENTRAL AND PERIPHERAL CLOCK MACHINERY
Circadian rhythms are ubiquitous periodic oscillations in internal biological processes that direct behavior and metabolism, such as hormonal signaling, body temperature, nutrient absorption, and metabolism (Dongen, 2017; Espelund et al., 2005; Panda et al., 2002; Reppert and Weaver, 2002). At the molecular level, circadian rhythms arise from a tightly controlled, autonomous, interlocked genetic transcriptional feedback loop that involves circadian locomotor output cycles kaput (clock) and brain and muscle ARNT-like protein 1 (bmal1) as positive transcription factors for the period (per1, per2, per3) and cryptochrome (cry1, cry2) genes (extensively reviewed in Hastings et al., 2018). The translation products of per and cry dimerize and act as negative regulators by inhibiting clock and bmal1. An additional feedback loop involves the transcriptional regulation of bmal1 by the retinoic acid-related orphan receptor (rorα) and nuclear receptor subfamily 1, group D, member 1 (rev-erbα). One cycle of this feedback loop takes ~24 h and is the basis of circadian rhythms in many organisms. The suprachiasmatic nucleus (SCN) is considered the master regulator of circadian rhythms and is primarily entrained by the light-dark cycle. This feedback loop also operates in peripheral tissues. Peripheral clocks are exquisitely sensitive to the fasting-feeding cycle and, as discussed in the next section, can be uncoupled from the central clock through modifications in meal delivery (Damiola et al., 2000). At the molecular level, fasting increases the AMP/ATP ratio, activating 5′ AMP-activated protein kinase (AMPK). This in turn phosphorylates serine 71 of cry1, reducing its stability (Lamia et al., 2009). AMPK also regulates the activity of casein kinase I epsilon via its phosphorylation at serine 389, which is a critical regulator of per phosphorylation and stability (Meng et al., 2008). Nicotinamide adenine dinucleotide (NAD+) is a cofactor of several key pathophysiological enzymes and an absolute requirement of sirtuin 1 (SIRT1, a NAD+-dependent histone deacetylase). The majority of cellular NAD+ comes from its salvage pathway, in which nicotinamide phosphoribosyltransferase (NAMPT) is the rate-limiting enzyme (Poljsak, 2018; Zhang et al., 2017). Fasting activates NAMPT, thus increasing the cellular availability of NAD+ and activating SIRT1. Activated SIRT1 has been shown to directly bind clock:bmal1 and repress the transcription of per2 (Ramsey et al., 2009). Thus, fasting also reduces both the transcription and stability of per and cry, which de-represses clock:bmal1 targets and increases their amplitude (Lamia et al., 2009; Um et al., 2007). SIRT1 has also been described to regulate the acetyltransferase activity of clock (Doi et al., 2006; Nakahata et al., 2008). In contrast, the mechanistic target of rapamycin (mTOR), a nutrient-activated serine/threonine protein kinase, is activated during the fed state. This post-transcriptionally induces cry1 through an unknown mechanism (Ramanathan et al., 2018). Feeding also suppressed NAMPT function and reduced cellular NAD+, inactivating SIRT1. This abrogated SIRT1-mediated suppression of clock:bmal1 and increased per2 transcription (Ramsey et al., 2009). Hence, fasting increases the positive limb of the circadian clock (clock and bmal1), whereas feeding increases the negative limb of the circadian clock (cry and per).
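To build intuition for how a delayed negative feedback loop of this kind can generate sustained, roughly daily oscillations, the toy simulation below implements a Goodwin-type three-stage loop (mRNA, protein, nuclear repressor). It is a deliberately minimal sketch: the variable names, rate constants, and Hill coefficient are all illustrative choices made to produce oscillation with a period in the vicinity of a day, not measurements of any real clock component.

```python
import numpy as np

DT = 0.01  # integration step, hours

def goodwin(n_hours=480, n_hill=10.0, k=1.0, d=0.15, K=1.0):
    """Toy three-stage negative feedback loop (Goodwin-type): mRNA (x) makes
    protein (y), which makes a nuclear repressor (z) that shuts off mRNA
    production -- loosely analogous to clock:bmal1 driving per/cry, whose
    products feed back to inhibit clock:bmal1. All parameters are illustrative
    only; they are chosen to give sustained oscillation, not fitted to data."""
    steps = int(n_hours / DT)
    x, y, z = 0.1, 0.1, 0.1
    trace = np.empty(steps)
    for i in range(steps):
        dx = k * K**n_hill / (K**n_hill + z**n_hill) - d * x  # repressible transcription
        dy = k * x - d * y                                    # translation
        dz = k * y - d * z                                    # repressor maturation
        x, y, z = x + dx * DT, y + dy * DT, z + dz * DT       # forward Euler step
        trace[i] = x
    return trace

trace = goodwin()
# Estimate the free-running period from successive peaks in the mRNA trace;
# with these toy rates it comes out in the vicinity of a day.
peaks = np.flatnonzero((trace[1:-1] > trace[:-2]) & (trace[1:-1] > trace[2:])) + 1
if len(peaks) > 1:
    print("approximate period (h):", round(float(np.mean(np.diff(peaks))) * DT, 1))
```

Sharp (high Hill coefficient) repression and the delay introduced by the intermediate steps are what destabilize the steady state; real clocks achieve the same effect with additional components and post-translational regulation, such as the AMPK- and SIRT1-dependent mechanisms described above.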
MEALTIME IS A STRONG ENTRAINING CUE OF PERIPHERAL CLOCKS AND SUBSEQUENTLY IMPACTS METABOLISM AND RISK FACTORS FOR CHRONIC DISEASE
Disrupted feeding therefore has a marked effect on the expression of the molecular clock in peripheral tissues (Jiang and Turek, 2017) and uncouples this from the SCN (Damiola et al., 2000). For example, restricting food access solely to the light phase in mice, when this nocturnal animal normally sleeps, completely reversed the phase of the circadian clock in liver, stomach, intestine, heart, pancreas, and kidney, without affecting the phase in the SCN (Damiola et al., 2000; Davidson et al., 2003). Simply delaying meals by 4 h also resulted in a phase shift of similar length in the circadian clock in mouse liver (Shimizu et al., 2018). Delaying a single breakfast meal by 5 h also delayed expression of genes under per control in human adipose tissue (Wehrens et al., 2017). Conversely, studies have shown that rhythmic feeding was sufficient to maintain circadian rhythms of clock genes in peripheral tissues during constant light or darkness or following lesion of the SCN (Hamaguchi et al., 2015; Kolbe et al., 2019; Novakova et al., 2011). These findings show that cues from the fasting-feeding cycle are more powerful entraining cues for peripheral clocks than the light-dark cycle (Figure 2) (Damiola et al., 2000; Wang et al., 2017).
Disrupting molecular clocks by altering feeding behaviors has subsequent impacts on metabolism in animal models. Reversing the phase of clock genes in peripheral organs by daytime restricted feeding was associated with weight gain, dyslipidemia, and fatty liver as compared with animals that were pair-fed to an equivalent calorie level solely during the active phase (Bray et al., 2013; Yasumoto et al., 2016). This also reversed the phase of several genes involved in glucose homeostasis, such as glut2, pyruvate kinase, glucokinase, and glycogen synthase in liver, and several genes involved in lipid homeostasis, such as acetyl-CoA carboxylase, diacylglycerol-O-acyltransferase, and medium-chain acyl-CoA dehydrogenase in liver, muscle, and epididymal fat (Bray et al., 2013; Yasumoto et al., 2016). Daytime restricted feeding also reversed the phase of insulin, leptin, and ghrelin in plasma compared with mice fed only during nighttime (Yasumoto et al., 2016).
Similarly, a rotating light cycle (a mimic of shift work) altered the phase and reduced the oscillation of clock genes in liver and caused higher weight gain, increased hepatosteatosis, and reduced β cell function and glucose-stimulated insulin secretion (Christie et al., 2018; Gale et al., 2011; Zhong et al., 2019). Chow-fed mice under a rotating light cycle also had an altered phase of insulin and corticosterone in plasma and of the transcription factors FOXO1, PPARα, and PPARγ in liver (Zhong et al., 2019), whereas high-fat diet (HFD)-fed mice under a rotating light cycle completely lost the rhythmic expression of the lipogenic gene acetyl-CoA carboxylase in liver (Christie et al., 2018).
Furthermore, delaying the feeding phase by just 4 h shifted peripheral clocks and increased weight gain in rats exposed to HFD (Shimizu et al., 2018). Meal delay also delayed the peak of several genes involved in glucose homeostasis, such as glucokinase, glucose 6-phosphatase, and phosphoenolpyruvate carboxykinase, and several genes and transcription factors involved in lipid homeostasis, such as SREBP, PPARα, fatty acid synthase, carnitine palmitoyl transferase, and malic enzyme, in liver. Likewise, meal delay also delayed the peak time of insulin, free fatty acids, and bile acids in plasma and the circadian rise in body temperature (Shimizu et al., 2018).
TRE Induces Pleiotropic Metabolic Health Benefits in Animal Models of Obesity and Aging
TRE, defined here as the provision of food for up to 12 h during the active phase, is commonly known as TRF in animal studies to denote that the eating window, or food availability, is externally controlled. TRE limited weight and fat gain and protected nocturnal mice and diurnal flies from the metabolic consequences of HFD (Delahaye et al., 2018; Hatori et al., 2012; Olsen et al., 2017; Sundaram and Yan, 2016; Villanueva et al., 2019; Woodie et al., 2018). This included protection from inflammation (Sherman et al., 2011) and immune responses (Cisse et al., 2018) and enhanced bile acid synthesis, facilitating cholesterol excretion and reducing cholesterol levels (Delahaye et al., 2018). TRE also prevented age- and HFD-induced reductions in cardiac contractile function (Tsai et al., 2013) in mice and flies and restored HFD-induced loss of gastric vagal afferent mechanosensitivity. TRE restored HFD-induced dampening of the circadian rhythms in the gut microbiome (Hu et al., 2019; Zarrinpar et al., 2014) and circadian rhythms in fatty acid oxidation (Chaix et al., 2019; Hatori et al., 2012). Thus, TRE has pleiotropic metabolic benefits that protect against chronic disease in mice and flies and, importantly, was able to reverse the consequences of obesity and aging (Duncan et al., 2016). The beneficial effects were also evident when TRE was implemented 5 days per week and food access was allowed ad libitum during weekends (Olsen et al., 2017) in HFD-fed mice.
At the molecular level, TRE increased the amplitude of expression of AMPK and mTOR (Hatori et al., 2012; Sherman et al., 2012) (Figure 2) and of NAMPT in the liver of mice that were fed HFD (Chaix et al., 2019). TRE also increased the amplitude of ribosomal protein phospho-S6 in skeletal muscle during the active phase, suggesting increased mTOR activation during feeding. TRE increased the amplitude of cry1 and per1 in the liver of mice fed chow (Greenwell et al., 2019) and restored the amplitude of bmal1, cry1, per2, and rev-erbα in mice that were fed HFD (Hatori et al., 2012). In liver, TRE reduced the amplitude of pyruvate carboxylase and glucose 6-phosphatase and increased glucokinase during the active phase (Hatori et al., 2012), potentially underpinning reductions in hepatic glucose production and increased glucose utilization. TRE also reduced the amplitude of the genes encoding fatty acid synthase, stearoyl-CoA desaturase, and fatty acid elongase during the active phase and increased the amplitude of hepatic triglyceride lipase during the inactive phase, which was associated with reduced lipid storage and increased triglyceride hydrolysis.
Importantly, Chaix et al. examined the effects of TRE in mice that were deficient in cry1 and cry2 at the whole-body level or deficient in bmal1 or rev-erbα and β only in liver. In this study, TRE was effective in restoring robust rhythms in genes involved in energy metabolism and nutrient utilization in the liver of all knockouts, as well as in nutrient signaling pathways, with higher AMPK and mTOR function (higher pS6 levels during feeding) in the fasting and fed states, respectively. TRE also protected knockouts from HFD-induced weight gain, glucose intolerance, hepatic steatosis, and dyslipidemia (Chaix et al., 2019). This study demonstrates that sustaining daily rhythms in the fasting and feeding cycle is sufficient to maintain metabolic homeostasis, independently of circadian clocks (Chaix et al., 2019).
From the studies to date, it is difficult to determine whether TRE improves health independently of changes in calorie intake. Certainly, some studies have suggested this (Chaix et al., 2019; Hatori et al., 2012). However, food intake is difficult to measure accurately, and other studies have shown lower calorie consumption in TRE mice that are fed a HFD (Delahaye et al., 2018; Sundaram and Yan, 2016) and marked initial weight loss in response to TRE (Sundaram and Yan, 2016). Thus, some of the metabolic benefits of TRE may well be mediated by calorie restriction and weight loss. However, a recent study showed that TRE improved glucose tolerance and reduced HOMA-IR in mice following a high-fat, high-sugar diet without any weight loss (Woodie et al., 2018). A recent human study also supported the notion that TRE imparts metabolic benefits independently of changes in body weight (Sutton et al., 2018).
TRE Improves Metabolic Health Outcomes in Humans
Several TRE protocols with daily meal intakes prescribed from 4 to 13 h have been trialed in people of normal weight and overweight (summarized in Table 1), although all of these trials are short-term (4 days to 16 weeks) and conducted in small numbers of participants. The majority of studies report modest reductions in body weight and fat mass (Hutchison et al., 2019; Jamshed et al., 2019), although this was not universally observed (Wilkinson et al., 2019). TRE for 12 weeks did not alter the gut microbiome in humans (Gabel et al., 2020).
Most of the studies performed in humans have observed a reduction in self-reported energy intake (Gabel et al., 2018; Wilkinson et al., 2019), which may account for some of the beneficial weight and health effects. However, in a highly controlled, cross-over feeding trial, 5 weeks of early TRE (eTRE; dinner before 3 pm) increased insulin sensitivity and β cell responsiveness and reduced oxidative stress as compared with the control condition, in the absence of energy restriction and weight loss (Sutton et al., 2018), and increased fasting triglyceride as a physiological response to the increased fasting duration. Four days of eTRE (8 am-2 pm) also reduced fasting and postprandial glucose and increased daytime energy expenditure and the expression of SIRT1, clock genes, and genes involved in autophagy in blood (Jamshed et al., 2019; Ravussin, 2019). Five days of TRE (10 am-5 pm) also reduced night-time glucose in participants who were overweight (Parr et al., 2020). These studies suggest that TRE could be a promising tool for the improvement of metabolic outcomes in the general population.
CHALLENGES IN TRANSLATING TRE TO HUMANS
TRE is a simple approach that could be highly beneficial in primary practice, since it does not require extensive nutrition knowledge or a significant time commitment to convey to the patient in need, unlike current dietary practice guidelines (Australian Dietary Guideline, 2013). However, the majority of TRE interventions in animal models and in humans have been initiated early in the active phase (Gabel et al., 2018; Hatori et al., 2012; Jamshed et al., 2019; Olsen et al., 2017; Ravussin, 2019; Sutton et al., 2018). Here, we discuss possible challenges with the translation of eTRE in the general population and the possible outcomes of delayed TRE (dTRE, i.e., allowing food consumption for identical time lengths late in the day).
Early morning is likely to be the optimal time to initiate TRE to maximize the metabolic benefits. For example, insulin sensitivity and glucose uptake are higher at the beginning of the active phase in nocturnal mice (Basse et al., 2018; Rudic et al., 2004) and diurnal humans (Sonnier et al., 2014). Similarly, lipid absorption in the intestine (Douris et al., 2011) and de novo lipogenesis in the liver are higher during the active phase in mice (Gilardi et al., 2014). Cholesterol and bile acid synthesis are also elevated early during the active phase. Furthermore, findings from observational and epidemiological studies suggest that breakfast skippers are more likely to be overweight, have poorer glucose control, and develop T2D than people who identify as breakfast consumers (Bi et al., 2015), although other observational studies have reported that skipping breakfast, without eating late, is not linked to obesity, suboptimal glycemic control, or poorer metabolic health (Azami et al., 2019; Nakajima and Suwa, 2015; Okada et al., 2019). In one study, women who were overweight were randomized to high-calorie breakfasts versus high-calorie dinners; the high-calorie breakfast group lost more body weight and had greater reductions in waist circumference, fasting glucose, and fasting insulin (Jakubowicz et al., 2013). In another study, individuals with T2D were provided with a three-meal-per-day diet (light dinner before 8 pm) or an isocaloric six-meal diet (heavy dinner and snacks continued until 11 pm) for 12 weeks. The three-meal diet reduced body weight, glycosylated hemoglobin, and therapeutic insulin dose and significantly lowered hyperglycemic episodes by continuous glucose monitoring. Clock gene expression in blood samples also showed higher oscillation on the three-meal diet (Jakubowicz et al., 2019). Eating breakfast and lunch only also reduced body weight, plasma glucose, and hepatic fat more than eating six meals spread throughout the day (Kahleova et al., 2014). Together, these studies show that consuming meals earlier in the day is optimal for weight control and improvements in glycemic profile under isocaloric conditions.

Implementing TRE early in the morning may be challenging in the general population both biologically and socially. There is large endogenous circadian variation in hunger, with a peak in the evening and a nadir in the morning (Qian et al., 2019; Scheer et al., 2013). This is because ghrelin, a hormone secreted by the stomach that increases feelings of hunger, is under circadian regulation and is at its biological nadir in the morning (Espelund et al., 2005) and peaks in the afternoon. Furthermore, family and communal get-togethers are essential factors in increasing social bonding, feelings of physical and mental well-being, and overall happiness in humans. Group eating and food sharing are considered the easiest way to strengthen family and community bonds capable of providing social and emotional support (Dunbar, 2017). However, many social events are typically geared toward the evening. Delaying the start time of TRE may overcome both of these issues, but the metabolic consequences of dTRE are not clear. As described earlier, fasting and feeding are known regulators of peripheral molecular clocks. Thus, it is likely that delaying TRE will delay peripheral clocks in metabolic organs.
This was seen in recent human studies where skipping breakfast delayed per rhythms in adipose tissue (Loboda et al., 2009; Wehrens et al., 2017). However, whether there is a net consequence of a short phase delay in clocks on metabolic health in humans is currently unknown.
In athletes undertaking resistance training, dTRE (12-8 pm) reduced fat mass without altering fat-free mass and improved muscle performance (Tinsley et al., 2019). Blood glucose, insulin, total testosterone, and IGF-1 were also reduced in this study. However, severe time restriction, whereby all food intake was limited to one large daily meal eaten between 5 and 9 pm, impaired glucose tolerance the following morning (Carlson et al., 2007). dTRE also failed to show any benefits in glycemic profile or body weight reduction when meal intake was limited to 4 h in the evening (anytime between 4 pm and midnight) for 4 days per week (Tinsley et al., 2017); however, the irregular patterning of meals in that study could also have contributed to this result (Farshchi et al., 2004).
Only two animal studies have directly compared eTRE with dTRE. In one study, mice underwent 6 h of TRE with a high-fat diet (HFD) given either during the first half (ZT12-18) or the second half (ZT18-24) of the night for 8 weeks.
Body weight gain and insulin resistance as measured by HOMA-IR were higher in dTRE than in eTRE. However, both TREs equally improved glucose tolerance compared with ad libitum feeding (Delahaye et al., 2018). The fasting length prior to the glucose assessment was not standardized in that study (7-16 h depending on intervention), which may have contributed to the results, as glucose tolerance is higher after 18 h of fasting versus 6 h of fasting (Andrikopoulos et al., 2008). In another study, rats were fed for 12 h during the night, at either ZT12-24 (eTRE) or ZT16-4 (dTRE), for 2 weeks. Despite similar caloric consumption, body weight gain was higher in the dTRE group, and dTRE also delayed the phase of clock, bmal1, per1, cry2, and rev-erba by 2 h, and that of cry1 by 4 h, in the liver (Shimizu et al., 2018). The amplitudes of those genes were also lower in dTRE. However, the study was of short duration and did not include an ad libitum fed group. We conducted a preliminary study comparing dTRE with eTRE in men with obesity. This study showed that dTRE produced similar improvements in glucose tolerance to eTRE (Hutchison et al., 2019). However, when glycemic measurements were made by continuous glucose monitoring, only eTRE significantly reduced fasting glucose versus baseline. This reduction was at trend level for dTRE versus baseline, and there was no statistical difference in this improvement between the TRE groups (Hutchison et al., 2019). The impact on clock genes was not examined. Larger trials comparing the effects of eTRE versus dTRE are warranted.
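Since HOMA-IR is cited above as the insulin-resistance readout, a minimal worked example of the index may help; the formula below is the conventional HOMA-IR definition, which is not stated in the text and should be treated as an assumption, and the input values are hypothetical.

```python
def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_uU_ml: float) -> float:
    # Standard HOMA-IR: (fasting glucose [mmol/L] * fasting insulin [uU/mL]) / 22.5.
    # Assumption: this is the conventional formula, not taken from the cited studies.
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

# Hypothetical example: glucose 5.5 mmol/L and insulin 10 uU/mL give HOMA-IR ~ 2.44.
print(round(homa_ir(5.5, 10.0), 2))
```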
Studies in mice have shown that the beneficial effects of TRE are dose dependent, with greater reductions in body weight and fat mass and greater improvement in glucose tolerance when a 9-h protocol was implemented versus 12- and 15-h protocols (Sundaram and Yan, 2016). The optimal TRE time frame to recommend for people has not been tested. Clear improvements have been noted after 6-, 8-, 9-, and 10-h protocols (Gabel et al., 2018; Hutchison et al., 2019; Sutton et al., 2018; Wilkinson et al., 2019). It is likely that greater time restriction would result in greater weight loss, which may maximize the metabolic benefits. However, very short feeding windows could also reduce adherence or result in poorer food choices if the individual feels under too great a time pressure. Extending the eating window beyond 12 h is unlikely to have major beneficial metabolic effects (LeCheminant et al., 2013).
CONCLUSION AND FUTURE DIRECTIONS
TRE initiated early in the active phase shows pleiotropic metabolic benefits in animal models of diet-induced obesity and aging. Short-term TRE trials in humans have shown modest reductions in body weight and improved cardio-metabolic health in people who are overweight or obese, suggesting that TRE may be a promising therapeutic tool. However, these studies are limited in number, sample size, and study duration. The feasibility of implementing early TRE in the general population on a daily basis is unclear, and the effects of delaying TRE to increase the potential translatability and acceptability of this dietary approach are unknown. Large-scale, long-term trials are warranted to determine whether TRE is a viable alternative to current dietary guidelines.
|
2020-05-21T09:06:45.884Z
|
2020-05-15T00:00:00.000
|
{
"year": 2020,
"sha1": "e9b77a866cee9f88095b15fd3fb5b020a5d2b8f7",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cell.com/article/S2589004220303461/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1d75ef8304a645950962befa9105aaabfb345cae",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
23807299
|
pes2o/s2orc
|
v3-fos-license
|
Splenic Infarct and Pulmonary Embolism as a Rare Manifestation of Cytomegalovirus Infection
Cytomegalovirus (CMV) is a herpesvirus infection with the characteristic feature of maintaining lifelong latency within the host cell. CMV manifestations can cover a broad spectrum, from fever to conditions as severe as pancytopenia, hepatitis, retinitis, meningoencephalitis, Guillain-Barre syndrome, pneumonia, and thrombosis. Multiple case reports of thrombosis associated with CMV have been published. Deep vein thrombosis or pulmonary embolism is more common in immunocompetent patients, while splenic infarct is more common in immunocompromised patients. Here, however, we report a female patient on low-dose methotrexate for rheumatoid arthritis who presented with both pulmonary embolism and splenic infarct.
Introduction
Cytomegalovirus (CMV) is a herpesvirus similar to herpes simplex, herpes zoster, and Epstein-Barr virus (EBV). It has the characteristic feature of maintaining lifelong latency within the host cell. Initial infection may cause few symptoms, and some individuals may shed the virus intermittently and asymptomatically, whereas in immunocompromised individuals reactivation may lead to symptomatic disease resembling a mononucleosis-like syndrome with prolonged fever and hepatitis [1]. Thrombosis associated with acute CMV infection has been reported in numerous case reports since the 1980s [2]. Here, however, we report a rare case involving both venous and arterial thrombosis in two different organ systems, namely the lung as pulmonary embolism and the spleen as splenic infarct, respectively. To the best of our knowledge, a PubMed search with the MeSH terms ("cytomegalovirus" OR "CMV") AND "thrombosis" did not reveal any case report of a patient who suffered from both splenic infarct and pulmonary embolism at the same time. Also, most cases of thrombosis are reported in patients in an immunocompromised state secondary to organ transplantation or human immunodeficiency virus infection. We report a case of thrombosis in a rheumatoid arthritis patient on low-dose methotrexate.
Case Report
A 62-year-old female with a past medical history of rheumatoid arthritis for the last ten years on low-dose methotrexate (2.5 mg/week) was admitted with complaints of severe left upper quadrant abdominal pain. The intensity was rated as eight out of ten on a pain rating scale and was sharp and stabbing in nature. It was associated with nausea, but there was no vomiting. The patient also complained of diarrhea of 3-5 episodes per day over the last two months. The stool was watery, nonfoul smelling, large volume, not blood tinged, and devoid of mucus. The other concomitant medications included tablet folic acid 1 mg daily and tablet meloxicam 3.75 mg daily as needed for pain. She had no other comorbidities.
On clinical examination, there was no pallor, icterus, or lymphadenopathy. The temperature was 103°F with slight tachycardia. Her abdominal examination revealed moderate tenderness in the left upper quadrant with no guarding or rebound tenderness. Cardiac examination revealed no murmurs, and respiratory system examination was unremarkable.
At this juncture, a working diagnosis of acute abdomen was considered, and the investigations were directed at diagnosing common surgical causes. Her complete blood profile revealed a total white blood cell count of 6,200 cells/mm³, with a differential count of 4,340 neutrophils/mm³ (70%) and 930 lymphocytes/mm³ (15%). Platelets were 335,000/mm³, and the hemoglobin level was 12.2 g/dl. Liver function tests revealed an ALT level of 73 IU/l, an AST level of 57 IU/l, an alkaline phosphatase level of 242 IU/l, and an LDH level of 298 IU/l. The C-reactive protein level was 49 mg/l. This was followed by computed tomography (CT) of the abdomen at admission, performed using intravenous and oral contrast. It showed a high-density fluid-filled large defect in the superior aspect of the spleen consistent with splenic infarct (see Figure 1) and also a small pulmonary embolism in the right lower lobe. A high-resolution CT scan of the thorax with intravenous contrast confirmed the right lower lobe pulmonary embolism (see Figure 2). Lower extremity Doppler ultrasound showed no deep vein thrombosis. Echocardiography revealed no valvular abnormalities.
The patient was started on anticoagulation therapy with rivaroxaban 15 mg per os (PO) twice daily. She was pan-cultured, and all the cultures, including urine and blood, were negative. Additional workup for chronic diarrhea, including stool for Clostridium difficile and comprehensive stool cultures, was also negative. Hematology-oncology was consulted for the hypercoagulable state. Protein C and S activity, antithrombin III, beta-2 glycoprotein and anticardiolipin antibodies, and factor V Leiden mutation were all negative. Her liver function tests (LFTs) were abnormal. In light of the elevated LFTs associated with fever and diarrhea, hepatitis virus, EBV, CMV, and Lyme titers were ordered. The hepatitis panel, EBV, and Lyme tests were negative. However, CMV immunoglobulin M (IgM) was 145 AU/ml (reference value: 0-29.9) and CMV immunoglobulin G (IgG) was 1.90 IU/L (reference value: 0.00-0.59). The CMV quantitative PCR level was 43,470 IU/ml (reference value: 200-2,000,000 IU/mL). The CMV IgG avidity index was 0.46 (reference value ≥ 0.60). The patient was also tested for human immunodeficiency virus and was negative. She was started on valganciclovir with a loading dose of 900 mg twice daily for 14 days followed by a maintenance dose of 900 mg once daily for three months. Her diarrhea and abdominal pain slowly improved, and she had no more fevers. She was discharged on rivaroxaban for six months and valganciclovir. Repeat testing at 14 days showed a CMV IgM level of 56.4 and a CMV IgG level of 2.3. At 1 month, the CMV IgM level was <30, CMV IgG was 5.01, and the CMV quantitative PCR level was <200. Valganciclovir was discontinued after three months of treatment. At 3 months, the CMV IgM level was <30, CMV IgG was 7.90, and the CMV quantitative PCR level was <200. At 9 months, the CMV IgM level was <30, the CMV IgG level was >10, and the CMV quantitative PCR level was <200.
Discussion
CMV manifestations can cover a broad spectrum, from a mild fever to conditions as severe as pancytopenia, hepatitis, retinitis, meningoencephalitis, Guillain-Barre syndrome, pneumonia, and thrombosis [3]. CMV-associated thrombosis has been commonly reported in the literature and is independent of the other risk factors for thrombosis. In a retrospective study by Atzmony et al. among 140 patients with acute CMV infection, the incidence of thrombosis was 6.4% (n = 9). Arterial thrombosis manifested as splenic infarcts (n = 4) and renal infarct (n = 1). Venous thrombosis presented as pulmonary embolism (n = 1), lower limb deep vein thrombosis (n = 1), upper limb deep vein thrombosis (n = 1), and jugular vein thrombosis (n = 1) [4]. However, a recent meta-analysis of 97 case reports of CMV infection-associated thrombosis summarized that the majority (53.6%) of thrombosis cases occur as DVT/PE, followed by 25.8% in splanchnic veins and 12.4% as splenic infarcts. Also, DVT/PE is more common in immunocompetent patients, while splenic infarct is more common in immunocompromised patients [5]. Other venous thromboses reported in the literature so far include internal jugular vein thrombosis, intracranial cerebral vein thrombosis, ovarian vein thrombosis, extrahepatic vein thrombosis, brachial vein thrombosis, and azygos vein thrombosis. Other arterial thromboses rarely include myocardial ischemia and digital ischemia [5]. Thus, it is interesting to note that our patient suffered both PE and splenic infarct.
Multiple theories have been proposed regarding the role of CMV in causing thrombosis. The first is activation of factor X through enhanced adhesion of platelets and leukocytes to infected endothelial cells. Another hypothesis is that CMV increases circulating levels of factor VIII. However, the most accepted theory is that CMV transiently induces the production of antiphospholipid antibodies (APLAs), which has been seen in several in vivo studies [6]. The proposed pathophysiology of splenic infarcts includes either CMV mononucleosis-associated arterial insufficiency, leading to a rapidly enlarging spleen and resultant infarct, or an arterial embolism [7].
The management of these patients includes ruling out thrombosis at other sites, followed by initiation of antiviral agents such as valganciclovir or ganciclovir and the use of an anticoagulation agent. So far, there is no consensus on the choice of anticoagulation agent or its duration, which is based mostly on the clinical judgment of the treating physician [7,8]. In our patient, anticoagulation therapy was initiated because she had venous thrombosis, and we decided to continue rivaroxaban at a dose of 15 mg PO twice daily for the initial 15 days, followed by 20 mg PO once daily for six months.
Conclusion
Evaluation of CMV titers should be added to the diagnostic workup in the presence of a febrile splenic infarction or thrombosis, especially when it is associated with a mononucleosis type of reaction. More research is warranted in this area before routine CMV testing in the diagnostic workup of thrombosis can be advised. Thus, physicians should be vigilant for complications of venous or arterial thrombosis in patients with acute CMV infection.
|
2018-04-03T06:21:03.172Z
|
2017-10-11T00:00:00.000
|
{
"year": 2017,
"sha1": "a824b0472efd2f5a61e08a08374628275627e06a",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/crihem/2017/1850821.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9ed5ab43bba95c5189de94ac9b3f75d298f70980",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
248724472
|
pes2o/s2orc
|
v3-fos-license
|
Nutritional Risk Index as a Prognostic Factor Predicts the Clinical Outcomes in Patients With Stage III Gastric Cancer
Objective This study aimed to determine the potential prognostic significance of the nutritional risk index (NRI) in patients with stage III gastric cancer. Methods A total of 202 patients with stage III gastric cancer were enrolled in this study. NRI is an index based on ideal body weight, present body weight, and serum albumin level. All patients were divided into two groups by receiver operating characteristic curve analysis: a low NRI group (NRI<99) and a high NRI group (NRI≥99). The relationship between NRI and clinicopathologic characteristics was evaluated by the Chi-square test. Clinical survival outcomes were analyzed by the Kaplan-Meier method and compared using the log-rank test. Univariate and multivariate analyses were used to detect potential prognostic factors. A nomogram was constructed for individualized assessment of disease-free survival (DFS) and overall survival (OS). The calibration curve was used to compare the nomogram-predicted and actual probabilities of survival. Decision curve analysis was performed to assess the clinical utility of the nomogram by quantifying the net benefits at different threshold probabilities. Results The results indicated that NRI had prognostic significance at an optimal cutoff value of 99. With regard to clinicopathologic characteristics, NRI showed a significant relationship with age, weight, body mass index, total protein, albumin, albumin/globulin, prealbumin, glucose, white blood cell, neutrophils, lymphocyte, hemoglobin, red blood cell, hematocrit, total lymph nodes, and human epidermal growth factor receptor 2 (P<0.05). Through the univariate and multivariate analyses, NRI, total lymph nodes, and tumor size were identified as independent factors predicting DFS and OS. The nomogram was used to predict the 1-, 3-, and 5-year survival probabilities, and the calibration curve showed that the prediction line matched the reference line well for 1-, 3-, and 5-year DFS and OS. Furthermore, the decision curve analysis also showed that the nomogram model yielded the best net benefit across the range of threshold probabilities for 1-, 3-, and 5-year DFS and OS. Conclusions NRI is a potential prognostic factor for patients with stage III gastric cancer and can be used to predict survival and prognosis.
INTRODUCTION
Gastric cancer is a deadly disease with poor prognosis and remains an unsolved major clinical problem with more than one million new cases throughout the world (1). Gastric cancer is the sixth leading cause of cancer-related morbidity and the third leading cause of cancer-related death worldwide, and the majority of newly diagnosed gastric cancer occurs mainly in Eastern Asia (2). Although early detection and recent improvements in surgery and chemotherapy have improved the clinical outcome, the mortality is still high in patients with advanced gastric cancer and recurrent disease (3). Most cases are diagnosed in the late stage of the disease, resulting in overall poor outcomes, including high intratumor heterogeneity, metastases, and chemotherapeutic resistance (4). In addition to the difference in disease status, nutritional status also plays an important role in influencing the patients' prognosis, treatment effect, and clinical outcome.
Previous studies have indicated that malnutrition might lead to a poor response to anti-tumor treatment, increase the incidence of postoperative complications, and result in an unsatisfactory survival prognosis (5). As a result of the imbalance between intake of nutrients and requirements, malnutrition is a common risk factor for postoperative complications and poor prognosis in patients with gastric cancer (6). Cachexia is a complex multifactorial syndrome that affects 50%-80% of cancer patients and is associated with 20%-40% of cancer deaths (7). Early assessment and management of nutrition for gastric cancer patients can improve clinical outcomes. Currently known indicators reflecting the nutritional status of patients include Nutritional Risk Screening (NRS), the malnutrition screening tool (MST), the Naples Prognostic Score (NPS), the prognostic nutritional index (PNI), the patient-generated subjective global assessment (PG-SGA), and body mass index (BMI) (8-13). These indexes are common screening tools, and each possesses benefits when screening patients for malnutrition. Recently, an increasing number of studies report that the nutritional risk index (NRI), which is established based on the patient's ideal body weight, present body weight, and serum albumin level, represents a new nutrition-related prognostic scoring system (14). Researchers have shown that NRI has prognostic value for breast cancer, esophageal cancer, and oral cancer (15-17). This emerging indicator takes into account the effects of nutritional status and systemic inflammation on cancer prognosis. Hence, NRI is superior to other single nutritional or inflammatory markers. Several studies have also indicated that NRI is related to gastric cancer. Oh et al. found that NRI was a predictor of postoperative wound complications after gastrectomy, with malnutrition immediately after surgery playing an important role in the development of wound complications (18). Another study showed that the Geriatric Nutritional Risk Index (GNRI) was useful in predicting postoperative complications in elderly patients with GC undergoing gastrectomy and emerged as an independent predictor of postoperative complications (19). A further study investigated whether the GNRI was affected by the number of remaining teeth, occlusal support status, and denture use in gastric cancer patients, and the result showed that GNRI was associated with the occlusal support level but not with denture use (20). However, data on this indicator remain limited for patients with stage III gastric cancer. As a result, the present retrospective cohort study aims to determine the prognostic significance of NRI in patients with stage III gastric cancer and to investigate the correlation between NRI and clinicopathological characteristics.
Study Population
The retrospective study included patients diagnosed with stage III gastric cancer from November 2014 to December 2017 at Harbin Medical University Cancer Hospital. Detailed clinicopathological data were obtained from the patient's medical records. The studies involving human participants were reviewed and approved by the Ethics Review Committee of Harbin Medical University Cancer Hospital (the ethics number: KY2021-09), and it adhered to the standards of the Declaration of Helsinki and its later amendments. The patients provided their written informed consent to participate in this study.
Participants were considered eligible if they were gastric cancer patients who: 1) were histologically diagnosed with stage III gastric cancer; 2) received primary tumor resection; 3) had no infection or inflammatory disorder; 4) had routine blood tests performed within a week before treatment; and 5) had complete clinical records and follow-up data. The exclusion criteria were as follows: 1) malignant tumor at another site or multiple primary malignant tumors; 2) anti-tumor therapy received before surgery, including chemotherapy or targeted therapy; 3) liver or kidney dysfunction such that surgery could not be tolerated; 4) chronic inflammatory disease or autoimmune disease; and 5) blood product transfusion received within one month before surgery.
Nutritional Risk Index (NRI)
The NRI comprised three factors: the patient's ideal body weight, present body weight (before surgery), and serum albumin level. The NRI was calculated as follows: NRI = 1.519 × serum albumin level (g/L) + 41.7 × (present body weight/ideal body weight). The ideal body weight (WLo) was calculated using the following formula: height − 100 − [(height − 150)/2.5].
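As a concrete illustration of the two formulas above, the following minimal Python sketch computes NRI from serum albumin, present weight, and height; the numeric inputs are hypothetical, and height is assumed to be in centimeters.

```python
def ideal_body_weight(height_cm: float) -> float:
    # Ideal body weight (WLo) as defined above: height - 100 - [(height - 150)/2.5].
    # Assumption: height is in centimeters (the paper does not state the unit).
    return height_cm - 100 - (height_cm - 150) / 2.5

def nutritional_risk_index(albumin_g_l: float, weight_kg: float, height_cm: float) -> float:
    # NRI = 1.519 x serum albumin (g/L) + 41.7 x (present weight / ideal weight).
    return 1.519 * albumin_g_l + 41.7 * (weight_kg / ideal_body_weight(height_cm))

# Hypothetical patient: albumin 40 g/L, 60 kg, 165 cm -> ideal weight 59 kg, NRI ~ 103.2,
# which would fall in the high NRI group (NRI >= 99).
print(round(nutritional_risk_index(40.0, 60.0, 165.0), 1))
```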
Follow-Up
In the current study, disease-free survival (DFS) was defined as the time between the date of surgery and progression (recurrence or distant metastases), all-cause death, or the last follow-up. Overall survival (OS) was defined as the time between the date of surgery and all-cause death or the last follow-up. The last follow-up was in December 2021. Survival data were obtained through telephone interviews.
Statistical Analysis
The Chi-square test or Fisher's exact test was used to analyze categorical variables, and t-tests were used to analyze continuous variables. Survival curves, including DFS and OS, were plotted by the Kaplan-Meier method, and the log-rank test was utilized to analyze the differences. The significant variables were identified from univariate and multivariate Cox proportional hazards regression models. The 95% confidence intervals (CIs) and hazard ratios (HRs) were used to evaluate the association between patients' NRI and prognosis. Nomograms for DFS and OS were established on the basis of the multivariate analyses. Data were analyzed using SPSS 22.0 (SPSS Inc., Chicago, IL, USA) and R (version 3.6.0; Vienna, Austria. URL: http://www.R-project.org/). Each test was two-sided, and P < 0.05 was considered statistically significant.
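To illustrate the survival workflow described above (Kaplan-Meier curves, log-rank comparison, and Cox regression), here is a minimal sketch using the Python lifelines package with a small hypothetical data set; it is not the authors' SPSS/R code.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical data: follow-up time in months, event flag (1 = event, 0 = censored),
# and NRI group (1 if NRI >= 99).
df = pd.DataFrame({
    "time": [12.0, 35.7, 43.4, 60.0, 8.5, 50.2, 24.3, 55.0],
    "event": [1, 1, 0, 0, 1, 0, 1, 0],
    "high_nri": [0, 0, 1, 1, 0, 1, 0, 1],
})
low, high = df[df.high_nri == 0], df[df.high_nri == 1]

# Kaplan-Meier estimate for one group (plotting omitted in this sketch).
km = KaplanMeierFitter()
km.fit(high["time"], high["event"], label="NRI >= 99")

# Log-rank test comparing the two groups.
result = logrank_test(low["time"], high["time"],
                      event_observed_A=low["event"], event_observed_B=high["event"])
print(f"log-rank p = {result.p_value:.4f}")

# Cox proportional hazards model gives the HR and 95% CI for the NRI group.
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()
```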
Patient Characteristics
In total, 235 patients with stage III gastric cancer were treated at Harbin Medical University Cancer Hospital between November 2014 and December 2017. Through the inclusion and exclusion criteria, 202 patients were eventually enrolled, while the remaining 33 patients were excluded (Figure 1). There were 132 (65.3%) males and 70 (34.7%) females. The median age at the time of surgery was 61 years (range, 28-83 years). The receiver operating characteristic (ROC) curve was used to determine the optimal cutoff value of NRI, and the value was 99. According to the optimal cutoff value of NRI, all patients were divided into two groups: a low NRI group (NRI<99) and a high NRI group (NRI≥99). The patient characteristics are shown in Table 1. With regard to patient characteristics, NRI showed a significant relationship with age, weight, and body mass index (BMI) (P<0.05).
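The paper does not state which criterion was applied to the ROC curve to obtain the cutoff of 99; a common choice is Youden's J statistic, sketched below with hypothetical NRI values and outcome labels. Because a higher NRI is protective, the score is negated so that larger values indicate higher risk, as scikit-learn's roc_curve expects.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical data: preoperative NRI values and outcome labels (1 = event).
nri = np.array([92.0, 97.5, 101.2, 104.8, 95.3, 110.1, 98.6, 93.4])
event = np.array([1, 1, 0, 0, 1, 0, 0, 1])

# Negate NRI so that a larger score means higher risk of the event.
fpr, tpr, thresholds = roc_curve(event, -nri)

# Youden's J = sensitivity + specificity - 1 = tpr - fpr; pick its maximum.
j = tpr - fpr
optimal_cutoff = -thresholds[np.argmax(j)]  # undo the sign flip
print(f"optimal NRI cutoff ~ {optimal_cutoff:.1f}")
```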
Univariate and Multivariate Analyses on the Prognostic Predictors in Patients With Stage III Gastric Cancer
In univariate Cox regression analysis, NRI, A/G, PALB, FIB, Borrmann type, TLN, tumor size, S-100, and postoperative chemotherapy were related to the prognosis of gastric cancer patients for DFS; however, only NRI, FIB, Borrmann type, TLN, and tumor size were identified as independent factors predicting DFS upon multivariate analysis. In univariate Cox regression analysis, NRI, age, A/G, PALB, FIB, radical resection, type of surgery, Borrmann type, TLN, tumor size, CD56, S-100, and postoperative chemotherapy were associated with the prognosis of gastric cancer patients for OS; however, only NRI, type of surgery, TLN, tumor size, and CD56 were identified as independent factors predicting OS upon multivariate analysis. These results are shown in Table 4.
Survival Analysis and Prognostic Value of NRI
Through the univariate and multivariate Cox regression analyses, the results indicated that high NRI was related to prolonged DFS (P=0.014, HR: 0.591, 95% CI: 0.389-0.899 and P=0.038, HR: 0.637, 95% CI: 0.385-0.955) and OS (P=0.006, HR: 0.557, 95% CI: 0.366-0.847 and P=0.009, HR: 0.510, 95% CI: 0.308-0.843). The median DFS and OS in the low NRI group were 35.70 months and 43.40 months, respectively. The median DFS and OS in the high NRI group were not reached. Moreover, the median DFS and OS in the low NRI group were significantly shorter than those in the high NRI group (P=0.013 and P=0.0006, respectively) (Figure 2).
We constructed a nomogram for individualized assessment of DFS and OS after surgery. The nomogram for DFS integrated NRI, FIB, Borrmann type, TLN, and tumor size from the multivariate analysis. The nomogram for OS integrated NRI, type of surgery, TLN, and tumor size from the multivariate analysis. The nomograms for DFS and OS are shown in Figure 3. Moreover, we used the calibration curve to compare the nomogram-predicted and actual probabilities of survival. The prediction line matched the reference line well for postoperative 1-, 3-, and 5-year DFS and OS (Figure 4). Furthermore, decision curve analysis (DCA) was performed to assess the clinical utility of the nomograms (based on the multivariate analysis) and of NRI alone by quantifying the net benefits at different threshold probabilities. Compared with NRI alone, the nomogram model yielded the best net benefit across the range of threshold probabilities for 1-, 3-, and 5-year DFS and OS, indicating that its ability to support clinical decision-making was better than that of NRI alone (Figure 5).
DISCUSSION
Gastrectomy as a curative treatment for gastric cancer can lead to sustained weight loss and malnutrition, and thus a decline in quality of life (21). Emerging evidence suggests that the prognosis of gastric cancer is associated not only with tumor indicators but also with systemic inflammation, the patient's condition, and nutritional status (22-24). Nowadays, owing to the heterogeneity and complexity of gastric cancer, even patients with the same TNM stage under the AJCC TNM staging system may have prognoses that differ and vary greatly (25). As a result, it is necessary to develop an accurate prognostic risk stratification system to predict treatment outcomes.
Although some systemic inflammation or nutritional status indicators are used to assess cancer prognosis, a single inflammation- or nutrition-related marker may be misleading when its threshold is arbitrarily determined. Of late, a growing number of studies report that NRI, which is established based on serum albumin levels, present body weight, and ideal body weight, represents a novel nutrition-related prognostic scoring system. Researchers have also shown that NRI has prognostic value for primary liver cancer, allogeneic hematopoietic cell transplantation (allo-HSCT), esophageal squamous cell carcinoma, and colorectal cancer (26-29). Moreover, NRI is more accurate than other prognostic factors in predicting survival. For example, NRI was an independent prognostic factor for patients' OS in a retrospective study of 143 patients with localized esophageal cancer (30). Another study indicated that NRI<100 at baseline was significantly related to decreased OS in esophageal cancer patients of the SCOPE1 clinical trial (31). Furthermore, another study showed that GNRI was significantly associated with OS and cancer-specific survival (CSS) in elderly gastric cancer patients and was an independent predictor of OS; it is a simple, cost-effective, and promising nutritional index for predicting OS in elderly gastric cancer patients (32). A systematic review and meta-analysis showed that GNRI was a valuable predictor of complications and long-term outcomes in patients with gastrointestinal malignancy (33). However, there is little research on the role of NRI in predicting the prognosis of gastric cancer patients.
NRI is based on three factors: serum albumin level, present body weight, and ideal body weight. Nevertheless, little is known about the association between NRI, treatment, and survival in patients with stage III gastric cancer. The current study was the first to evaluate the relationship between NRI, clinicopathological factors, and prognosis. Our results showed that a high level of NRI was significantly related to age, weight, body mass index, TP, ALB, A/G, PALB, Glu, W, N, L, Hb, R, Hct, TLN, and HER2, respectively. Moreover, NRI was a potential prognostic factor for DFS and OS in the univariate and multivariate Cox regression survival analyses, and patients in the high NRI group had longer median DFS and OS than those in the low NRI group by the log-rank method. We also constructed a prognostic nomogram to predict the 1-, 3-, and 5-year survival probabilities, and the calibration curve showed that the prediction line matched the reference line well for 1-, 3-, and 5-year DFS and OS. Furthermore, the decision curve analysis also showed that the nomogram model yielded the best net benefit across the range of threshold probabilities for 1-, 3-, and 5-year DFS and OS compared with NRI alone, indicating that this model had better predictive ability for clinical decision-making. There are several plausible mechanisms to explain the relationship between NRI and the prognosis of gastric cancer. ALB is thought to reflect systemic inflammation affecting hepatocyte catabolism and anabolism (34). ALB is also one of the most common factors for determining nutritional and immunological status (35). Patients with low ALB levels have poor hepatic functional reserve, which affects their tolerance of surgery and leads to worse survival (36). BMI, defined as body mass in kilograms divided by the square of height in meters (kg/m²), is the most widely used anthropometric measure to approximate overall body fatness for the purposes of classifying and reporting overweight and obesity (37). Weight loss is common in advanced gastric cancer, and maintaining weight and adequate nutrition during systemic treatment is important (38). Moreover, weight loss is usually caused by insufficient calorie intake as a result of tumor-related anorexia, malabsorption, hypermetabolism, and gastrointestinal obstruction (39).
Certain limitations should be noted in the current study. First, this was a single-center, retrospective study with a limited number of patients. To further enrich the literature, multicenter studies with larger research populations should be conducted. Second, owing to the retrospective nature of the study, selection bias was inevitable, although the enrolled patients were selected in line with the inclusion and exclusion criteria. Third, NRI is a nonspecific tumor marker, and the relationship between NRI, therapeutic effect, and prognosis should be further studied prospectively.
CONCLUSION
NRI is a potential prognostic factor for patients with stage III gastric cancer and can be used to predict survival and prognosis. This convenient, noninvasive, and reproducible factor can be applied to guide treatment, evaluate efficacy, and estimate the prognosis of gastric cancer.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
This study was reviewed and approved by the ethics committee of Harbin Medical University Cancer Hospital. The patients/ participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
HBS, HKS, and LY contributed to the study conception and design. YC, CY, and HX performed the collection of data. HG conducted the data interpretation. HBS prepared the manuscript. LL provided funding acquisition and project administration. All authors read and approved the final manuscript.
|
2022-05-13T13:24:54.873Z
|
2022-05-13T00:00:00.000
|
{
"year": 2022,
"sha1": "10952800df901a48f4cf6831eb9f68cc89e21180",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "10952800df901a48f4cf6831eb9f68cc89e21180",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
231784476
|
pes2o/s2orc
|
v3-fos-license
|
Effectiveness of Topical Insulin Dressing in Healing of Diabetic Foot Ulcer among Diabetic Patients
Diabetes was previously known as a disease of the rich, but it now affects rich and poor alike and has become the third leading cause of death. Diabetes Mellitus (DM) is a metabolic disorder characterized by chronic hyperglycemia; it is a common and potentially chronic disease. The aim of the present study was to assess the effectiveness of insulin dressing of diabetic foot ulcers among diabetic patients. A quasi-experimental research design with a purposive sampling technique was adopted to conduct a study among 30 diabetic foot ulcer patients. Demographic data were collected, the wound was measured, and insulin dressing was done. After one week the wound was measured again. Confidentiality was maintained throughout the procedure. The collected data were analyzed using descriptive and inferential statistics. Among the 30 samples, the pretest mean score of wound healing among patients with diabetic foot ulcer receiving topical insulin dressing was 2.67±0.66, and the posttest mean score was 1.43±0.57. The calculated paired 't' test value of t = 15.703 was statistically highly significant at the p<0.001 level. This finding clearly indicates that topical insulin dressing for patients with diabetic foot ulcer had a significant effect, resulting in improvement in the level of wound healing among patients with diabetic foot ulcer.
INTRODUCTION
Diabetes was previously known as a disease of the rich, but it now affects rich and poor alike and has become the third leading cause of death. Diabetes Mellitus (DM) is a metabolic disorder characterized by chronic hyperglycemia; it is a common and potentially lifelong disease (Karthikeyaan et al., 2019). The condition currently affects 194 million individuals worldwide and is expected to increase to 333 million by 2025 owing to longer life expectancy, sedentary lifestyles, and changing dietary patterns (Iraj et al., 2013). Many of the world's poorest nations are in sub-Saharan Africa, and this region will experience a rise in the burden of diabetes over the next 20 years (Shahbazian et al., 2013). Diabetes mellitus is a condition in which the individual is either unable to produce insulin or the body cannot use the insulin that is present. If untreated, diabetes can cause numerous complications, for example, acute diabetic ketoacidosis, nonketotic hyperosmolar state, diabetic foot ulcer, and coma. The diabetic foot is one of the main problems in diabetes (Cecyli and Thamupriyadharshini, 2020). It is described as a foot ulceration associated with neuropathy and decreased blood supply to the lower limbs of diabetic patients. It is estimated that about 5% of all patients with diabetes present with a foot ulceration (Bhittani et al., 2020). The lifetime risk of a diabetic patient developing this complication is 15%. It has been found that 40-70% of all non-traumatic amputations of the lower limbs occur in patients with diabetes (Ramarao and Ramu, 2017). India is known as the diabetes capital of the world, with in excess of 40 million individuals with diabetes. Diabetes mellitus is a multifaceted disease, and foot ulceration is one of its most common complications (Praveen and Kumar, 2017). The frequency of foot ulcers among individuals with diabetes ranges from 8% to 17%. Foot ulcers can cause serious problems and hospitalization for patients and impose an economic burden on families and the health system. About 85% of diabetes-related amputations are preceded by foot ulcers, and more than half of non-traumatic lower limb amputations occur in people with diabetes. People who have developed foot ulcers have a decreased quality of life (Ghatage et al., 2017). In the South East Asian region, 78 million people are affected by diabetes mellitus, and this number is expected to reach 140 million by 2040. WHO reports that India had 69.2 million people living with diabetes (8.7%) according to 2015 data, of whom more than 36 million remained undiagnosed. The number of diabetics in India is expected to increase to 109 million cases out of a total estimated population of 1.5 billion by 2035 (Athavale et al., 2014). Neuropathy, mechanical pressure, and angiopathy are the major etiopathological factors in the development of foot ulcers in individuals with diabetes (Martínez-Jiménez et al., 2018). Diabetic peripheral neuropathy is a heterogeneous disorder that includes mononeuropathies, polyneuropathies, plexopathies, and radiculopathies (Prasad et al., 2018). As diabetic neuropathy often leads to foot ulcers, it is recommended to screen all people with diabetes annually (Swaminathan, 2014). The basic causes of diabetic foot ulcer development are injury, neuropathy, and deformity. Moreover, the use of inappropriate footwear, for example chappals, which have a rubber sole supported by a strap in the first interdigital space but no back strap, exposes the feet to injury.
(Kamat and Sunil, 2019) The current study was undertaken to assess the effectiveness of insulin dressing on diabetic foot ulcers among diabetic patients.
MATERIALS AND METHODS
A quasi-experimental research design was utilized to evaluate the effectiveness of insulin dressing in the healing of diabetic foot ulcers among diabetic patients at Saveetha Medical College and Hospital. A purposive sampling technique was adopted to conduct the study among 30 diabetic foot ulcer patients who fulfilled the inclusion criteria. The inclusion criteria were: diagnosed within the past year, willing to participate, of either gender, and able to speak Tamil or English. The exclusion criteria were allergy to insulin and unwillingness to participate. Demographic data were collected, the wound was measured, and insulin dressing was done. After one week the wound was measured again. Confidentiality was maintained throughout the procedure. The collected data were analyzed using descriptive and inferential statistics. The study was approved by the Ethics Committee of the Institution.
Table 1 shows that the pretest mean score of wound healing among patients with diabetic foot ulcer receiving topical insulin dressing was 2.67±0.66. This finding is supported by Sanjay et al. (2018), who compared the efficacy of topical insulin dressings versus normal saline dressings on diabetic foot ulcers. In their study, the mean ulcer area at the time of admission was 4.8±0.6 cm² in group A and 5.35±0.6 cm² in group B, and the mean depth of the ulcer at the time of admission was 8.6±0.9 mm in group A and 8.4±0.7 mm in group B.
Comparison of the effectiveness of insulin dressing between pretest and posttest among diabetic patients with foot ulcers
The present study shows that the pretest mean score of wound healing among patients with diabetic foot ulcer receiving insulin dressing was 2.67±0.66, and the 't' test value of t = 0.197 was not statistically significant. The analysis revealed that the posttest mean score of wound healing among patients with diabetic foot ulcer receiving insulin dressing was 1.43±0.57, and the 't' test value of t = 5.931 was statistically significant at the p<0.001 level. This finding clearly indicates that insulin dressing was effective in improving the level of wound healing among diabetic patients.
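The pre/post comparison above is a paired t-test; the minimal sketch below shows the computation in Python, using the reported means and SDs to simulate hypothetical scores for 30 patients, so the resulting statistic will not reproduce the reported value exactly.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical wound-healing scores for the same 30 patients, simulated from the
# reported pretest (2.67 +/- 0.66) and posttest (1.43 +/- 0.57) means and SDs.
pre = rng.normal(2.67, 0.66, size=30)
post = rng.normal(1.43, 0.57, size=30)

# Paired t-test on the within-patient differences.
t_stat, p_value = stats.ttest_rel(pre, post)
print(f"paired t = {t_stat:.3f}, p = {p_value:.3g}")
```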
This outcome is supported by a comparable investigation by Shafaatullah et al. (2019) on the effectiveness of topical insulin in the management of diabetic foot ulcers, which aimed to determine the benefits of topical insulin in this setting. That observational study was conducted by the Plastic Surgery and General Surgery Departments of Baqai Medical University, Karachi, and included a total of 65 individuals with diabetic foot ulcers, of whom 52 were male and 13 were female. There was improvement after treatment of the diabetic foot ulcers with topical insulin spray: wound size and depth decreased. This treatment showed more encouraging results than conventional treatment for diabetic foot ulcers.
Table 2 shows that the demographic variables sex and occupation had a statistically significant association with the posttest level of wound healing among patients with diabetic foot ulcer at the p<0.05 level, whereas the other demographic variables showed no statistically significant association with the posttest level of wound healing in the topical insulin dressing group. This is supported by Praveen et al. (2017), who studied the topical application of insulin on diabetic foot ulcers in comparison with normal saline dressing. There was no difference in age or duration of diabetes between the two study groups. A large proportion of patients in both the insulin and saline groups were aged 56-65 years (40%). In that study, the right foot was more affected than the left foot in both the insulin and saline groups. A positive family history of diabetes was more common in the insulin group than in the saline group. Among other diabetic complications, nearly all subjects in both groups were affected by peripheral neuropathy, while hypertension was the second most common comorbidity. Statistical significance was observed in healed cases in both groups. (Praveen and Kumar, 2017)
CONCLUSIONS
The study found that topical insulin dressing for patients with diabetic foot ulcer had a significant effect, resulting in improvement in the degree of wound healing among patients with diabetic foot ulcer.
|
2021-01-07T09:06:53.524Z
|
2020-12-20T00:00:00.000
|
{
"year": 2020,
"sha1": "48a02177accb6d038ca13dd0152bfe407e9de602",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.26452/ijrps.v11ispl4.3761",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "dc63ea63dc89f3f43f22d716ce53315562d2d65e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
10193355
|
pes2o/s2orc
|
v3-fos-license
|
In Quest of the Antique: The Bazaar, Exchange and Mart and the Democratization of Collecting, 1926–42
The popularization of antique collecting is typically located in the second half of the twentieth century, with the rise of ‘retrochic’ and the emergence of new markets and online trading websites for anonymously exchanging second-hand goods. Close study of the printed literature connected with the inter-war secondhand trade, however, challenges conventional chronologies in the history of consumer culture, and can provide a new perspective on the role of collecting in British social and cultural life. This article examines the period, after the late 1920s, during which The Bazaar, Exchange and Mart reinvented itself as a forum for antique and decorative art enthusiasts. It argues that, in speaking to and publishing contributions from so-called ‘small collectors’, this ‘Popular Weekly for Collectors and Connoisseurs’ helped shape a modern and democratic culture of art appreciation in which ordinary people were actively invited to participate. The private correspondence archive of a Buckinghamshire subscriber who used the Exchange and Mart to sell his collection of ‘Egyptian, Greek, and Roman Antiquities’ to readers across the country during the 1930s reveals an intimate portrait of the desires, fantasies, and pleasures associated with the popular experience of collecting in pre-war Britain.
supplement, helped anonymous individuals to buy and sell everything 'from an Autograph to an Orchid, a Toy to a Typewriter, a Mail Cart to a Motor Car'. Now a website primarily for the sale of second-hand cars, before the Second World War the twice-weekly Exchange and Mart was 'an unequalled journal for the Amateur and Collector'. 1 Its innovative function as an exchange allowing anonymous readers to 'barter' their unwanted goods and services without cash or credit, along with its encyclopaedic range of articles, saw the hybrid 'magazine-paper' enter the popular consciousness and shape perceptions of collectors and their obsessions in modern British society. During the inter-war period, in the words of George Orwell, the defining feature of this Victorian hobbyist's periodical was one that had also captured the market for middlebrow novels, 'detective stories', and 'collections of curiosities'; that is, 'the charm of useless knowledge'. This audience, he noted in 1940, was composed of a very particular sort of particular person: those who took 'a pleasure in dates, lists, catalogues, concrete details, descriptions of processes, junk-shop windows and back numbers of the Exchange and Mart'. 2 This article focuses on the period, after the summer of 1926, in which the periodical's Saturday issue (known simply as the Bazaar) became a dedicated art and curio 'Collector's Issue', recovering its central role in the popularization of antiques. 3 It examines the new, egalitarian and participatory, collecting identities which Bazaar journalists and contributors helped foster, and then explores the collecting habits of a group of the periodical's subscribers in the 1930s through a study of the private correspondence archive of an Exchange and Mart seller, revealing new facets of the relationship between collecting, mass culture, and democratization in modern Britain.
The Exchange and Mart did not introduce the concept of an 'exchange and mart' to the Victorian market. The periodical emerged directly out of an exchange column for women in Edward William Cox's society journal Queen. The idea was said to have been Cox's wife's: a disabled entomologist who spent much of her time indoors, she published a notice in the magazine asking if readers would be willing to swap their duplicates with her. 4 Cox established The Exchange and Mart as a separate periodical to help 'collectors', using pseudonyms, trade collectables and possessions including natural history specimens, autographs, photographs, jewellery, china, clothing, and even pets from the comfort of their own homes. Aimed squarely at middle- and lower-middle-class readers, the new paper was priced at twopence and issued two or three times a week. 5 Its anonymous 'deposit system' claimed to have removed the risks involved in independent trading and thereby revolutionized the market for second-hand goods, as purchases would be held at the Exchange and Mart offices until payment was received, rather than sent directly to the seller. Awed late-Victorian observers compared the magazine to a paper department store, so numerous were the objects private advertisers offered for sale or exchange. For Henry Wellcome and his professional collecting agents, as one historian of the Wellcome Collection of medical artefacts has noted, the Exchange and Mart was 'as much a part of ''the field'' as were the jungles of Borneo or the African interior'. 6 At the same time, the Exchange and Mart quickly became synonymous with the disposal or acquisition of stolen, adulterated, or faked goods. Readers of any daily newspaper during the early twentieth century would have found the details of an array of fraudulent transactions, outright deceptions, and elaborate confidence tricks occurring up and down the country and at virtually all levels of society via the advertising supplement. Although a 'Black List' of untrustworthy readers was included in most issues, the deposit system could easily be circumvented by creative readers. 7 Unrecorded in published sources are the transactions in which purchasers had not been swindled outright, but rather were unsatisfied with what they had ordered or exchanged, at which point staff sought to arbitrate disputes themselves. 8
5 'The Creation of a New Market: One of the Romances of the London Press', Review of Reviews, 1 (January 1904), 88; Charlotte C. Watkins, 'Edward William Cox and the Rise of ''Class Journalism''', Victorian Periodicals Review, 15 (1982), 87-93.
6 Ghislaine Skinner, 'Sir Henry Wellcome's Museum for the Science of History', Medical History, 30 (1986), 404.
7 The majority of court cases appeared to involve readers who had declined to use the Exchange and Mart deposit system, and had written to the seller to arrange the exchange of goods and payment personally.
8 'The Creation of a New Market', 87-90. Gill explained to the Review of Reviews in 1904: 'A man buys what the seller represents as a genuine Sèvres vase, and discovers, when he gets it, that it is lacking in some points which seem to him essential to a genuine Sèvres. In those cases [. . .] [w]e do not hold court and ask the witnesses to come before us. We decide the case solely on an examination of the article itself in conjunction with the correspondence of the disputants.'
On the whole, however, in this 'culture of duplicity' caveat emptor applied: the paper could not be held liable for the indiscretions
of its readers. 9 The cryptic nature of anonymously placed public notices could, moreover, comfortably conceal a variety of coded messages. 10 The Exchange and Mart can be seen as the progenitor of the latetwentieth-century free-advertising paper, which as anthropologist Daniel Miller noted 'operate[d] simultaneously as a safe, logical derivative of the community-based classified pages of local newspapers and an anarchic, potentially subversive and ambiguous means of laundering goods and services'. 11 Interpreting the meaning of private traders' advertisements for personal gain became a demanding skill in its own right, involving elements of risk and calculation, as well as excitement. 12 The periodical was not simply a functional tool for uniting collectables with purchasers, unwitting or otherwise, however: it also helped create and sustain a community of like-minded collectors. During the inter-war period the twice-weekly paper's advertisements must be seen in the context of its expanding range of editorial articles, serialized columns, and illustrated features, which spawned a miniature empire of inexpensive advice manuals for amateur hobbyists. 13 As a middlebrow 'Popular Weekly for Collectors and Connoisseurs', the Bazaar expanded upon the remit of earlier populist but locally based collecting magazines in aiming to capture a much broader audience than traditional art journals, demonstrating that particularism did not have to be synonymous with parochialism. 14 During its heyday in 1929, at which point it was reduced from sixpence an issue back to twopence, the collectors' Bazaar could boast a weekly circulation of a hundred thousand copies. 15 The magazine's columns provided a home for decorative art enthusiasts of all descriptions: from gentlemanly members of the London art, auction, and museum world to Mrs Annie Lee, the impecunious single mother of Laurie Lee, who wrote from the Slad Valley in Gloucestershire in 9 16 During this period, the paper would certainly tap into a form of depoliticized, domestic, and distinctively middle-class sociability. 17 But the figurative exchanges staged in its columns and the diversity of its audience reveal that it also helped foster modes of self-expression which transcended class and gender divides. The Bazaar published a range of columns by respected experts and connoisseurs while actively involving amateurs, or what were appreciatively termed 'small collectors', in debates. Contributors made a concerted effort to introduce the novice to the world of high culture and the history of decoration without prejudice or snobbery, and lobbied for antique shops and local museums to make themselves more accessible to those of modest means. Readers, whatever their social position, were actively encouraged to engage in forms of art criticism and historical research through the viewing and purchasing of ordinary second-hand collectables, informed by the transformative notion that every man or woman had the potential to become a 'connoisseur'. The Bazaar's rise to prominence in this period stands as evidence that the winds of 'democratization' present in other, more commercial, areas of society and culture were also blowing through rarefied corners of the world of art and antiques, unsettling traditional hierarchies of taste. 18 The paper capitalized upon a form of 'inclusive, pluralist participation' in the everyday politics of collecting, in common with forms of contemporary associational life and voluntary organization. 
19 The Bazaar was envisaged as a form of reciprocal communication, and solicited readers' contributions in the form of both letters and short, anonymous articles. Columns such as 'The Curiosity Shop'-a regular feature which promised readers a weekly armchair 'chat' on 'curios, coins, old furniture, pictures and all the gossip of the world of connoisseurs'-contained the views of both official and unofficial correspondents in varying proportions. Meanwhile, the Exchange and Mart's 'Special Service Department' performed a function akin to that of Notes and Queries, providing answers to readers' written enquiries and offering to identify and value readers' possessions, including antiques, which could then be sold through the supplement. 20 Rather than a straightforward reflection of the popular collecting world of the late 1920s and early 1930s, then, the Bazaar can be seen as an 'arena' in which practices and cultures of collecting could be rehearsed and reformed. 21 Its columns gave untrained experts a creative outlet and a voice in discussions regarding the uses of art, history, and cultural property in everyday life. 22 Reading the inter-war Bazaar demonstrated that it was possible to unearth beautiful things from the detritus of a London street market stall, to see stateliness in a small set of china, or even to derive valuable historical insight from antiquarian printed ephemera. Dispensing with conventional categories of value in favour of the individualistic tastes of the 'small collector', it aimed to show that self-improvement and a stake in high culture could be bought at the cost of a few shillings in a second-hand shop or, indeed, via an advertising supplement (Figs 1 and 2).
'Collecting in a Small Way': Antiques in the Inter-War Bazaar
In the humble opinion of George Whiteman, a Bazaar editor, there were 'two classes of collectors', and one of them 'did not get anything like the attention it deserves'. He referred to 'those lovers of old and beautiful things, many of them with quite moderate incomes, who do not confine themselves to one branch of antiques, but buy here a chair, there a picture, elsewhere a piece of china or pewter, in order to beautify their homes or because those particular pieces appeal to them'. 23 Rather than speaking to a refined audience of scholars, curators, or dealers, Bazaar contributors chose instead to focus on the amateur and the novice, or, in the parlance of the magazine, the 'small collector'. If, as Deborah Cohen has argued, '[t]o cherish antiques was to proclaim a taste that required cultivation beyond the means of the vast majority', what the Bazaar represents is the moment at which the 'vast majority' of amateur collectors could be taken as seriously as the gentlemanly connoisseur. 24 The Exchange and Mart assumed that 'small' antique and decorative art enthusiasts would not schedule their collecting lives according to the pages of contemporary metropolitan periodicals such as the Connoisseur, Burlington Magazine, or Old Furniture, purchasing art objects and watching auction sales with the sole aim of making a profit. Instead, they bought inexpensive collectables 'on impulse', and because they loved them, not because they corresponded with a sense of what was fashionable or even particularly valuable. 25 Overlooked by historians of collecting and inter-war consumer culture, small collections and the individuals who created them provide an important insight into the desires, fantasies, and sentiments which made up the everyday interaction with the material world. The archetypal Bazaar collection was self-made; only rarely was it inherited. 'Small Collector' J. J. Elliot of southeast London claimed with pride in July 1928 to have made a collection of 'fifteen pieces of old English china'.
I earn but £110 a year and can only afford £10 a year for my hobby, but it is a joy to me to know that every few months, when I have saved up a pound or two, I can buy some beautiful little treasure which I have been watching for some time. I make a point of going round the second-hand dealers' shops regularly, and, as I never look at the things in the front windows, but go right into the back rooms and ask to turn things over, I can find among the miscellaneous collections little works of art and curios worth buying and keeping. I know, too, that they are not likely to be sold until I have saved up enough to buy them. 26 Elliot's tastes and painstaking approach to finding and buying collectables may not have been shared by fellow second-hand shoppers, but nor were they out of the reach of the majority of consumers. Revealingly, the only collecting grandees to be profiled in the Bazaar during its heyday as a smart collectors' magazine-Lord Iveagh and Lord Leverhulme-were those whose art collections had recently been opened to the public. The foundation of the Lady Lever Art Gallery was attributed by the then editor, theatre critic and curio collector Maitland Davidson, to the most casual kind of collecting: the type that involved 'seeing some bit of old stuff in a shop window and taking it home'. In this case, the 'nucleus' of Lever's 'great collection' was said to be the pair of 'two little Derby biscuit ornaments' which he had purchased, while a grocer, to adorn his parlour mantelpiece in Wigan. 'As a matter of fact', Davidson added, 'so great did [Lever's] wealth become [. . .] that he acquired a certain amount of pieces that were not quite up to the standard of his best purchases'. 27 That this modest and unpretentious conception of the art of collecting had an influence on readers is vividly demonstrated in the ways in which Bazaar correspondents used the figure of the 'small collector' to describe their own hobby. In the spring of 1930, the magazine hosted a competition asking readers to write in with the story of the origin of their own collections, believing that 'the experiences which led its readers to take up what, to the man-in-the-street, must seem a strange obsession, are worth recording. The more readily it can be seen how converts to collecting are made the easier will newcomers to the ranks be induced to join'. 28 Published entrants all confessed that their own successes and pleasures in collecting had arisen in mundane circumstances. Collectors were born simply from being in the right place at the right time: 'dragged into' a London auction sale by a friend and 'carried away by the novelty'; while staying in an 'old house'; or having seen an interesting object in a museum. 29 One reader had ducked into an antique shop in Ilford during a rain shower and had ended up buying a miniature of a lady for ten shillings, which he later donated to the National Portrait Gallery. 30 Cyril Nicholls of Whitchurch, Shropshire, had bought a set of six prints after reading about them in the Bazaar, only to be told that they were in fact worthless reprints by the magazine's Special Service Department. 'Many people, after this', he admitted, 'would be writing on ''How I Stopped Collecting,'' but it was this that really started me'. 31 The Bazaar promoted the idea that, as a small collector, it almost mattered less what was in one's collection than the experience of collecting it.
The expertise provided by the paper's contributors was of a heterogeneous character; the individual 'quest' for unusual objects was encouraged above following trends in the decorative art market that might turn out to be lucrative. For instance, those who found that 'ordinary' English vernacular cottage furnishings were becoming more popular and thus valuable towards the end of the 1920s were recommended to abandon that avenue and to 'go in exclusively' for 'local' furniture particular to one county, to 'make their original contributions to a comprehensive collection of provincial furniture'. 32 An 'amateur collector' from Cumberland advised readers to take up his unusual pastime: restoring old musical instruments, as due to the lack of interest in the field, 'a great deal of material can be ''picked up,'' often for a few shillings'. 33 During this period, the Bazaar was an early enthusiast of the much-maligned, and typically inexpensive, Victorian collectables such as prints, coral jewellery, and Berlin wool work. During the 1930s, Dr John Kirk used the Exchange and Mart to purchase objects for his folk life collection, which later became the foundation of the Victorian street at the Castle Museum in York, one of the earliest examples of this type of social history display. 34 For small collectors, the pleasure of their hobby clearly lay not in the mainstream cultural significance of the objects they accumulated, but in the challenges associated with 'picking up' itself. The easy and the 'ordinary' held little appeal.
The 'quest of the antique' was, therefore, not wholly connected to the pursuit of the profitable. In September 1928, the Bazaar editor had proposed the establishment of a 'National Museum of ''Fakes''' to show novice collectors the deceptions which produced 'pseudo-antiques'. 35 His idea was universally approved in the Bazaar letters page, but met with scepticism elsewhere. 'Would there not be a great deal less happiness in the world if we were all experts?', the Manchester Guardian asked in response to the Bazaar debate. '[M]any a poorer man asks only not to be disabused as to the authenticity of his two or three pieces of ''Chippendale'' or ''Sheraton'' and to be left to think that this piece of china or that would ''fetch a lot if I ever wanted to sell it-but of course, I [never would]'''. 36 It was undeniable that antiques were things possessing codes to be deciphered. Yet in the Bazaar they became riddles with solutions which could be puzzled over and discussed with just a little rudimentary knowledge of local history or art and design. They were not treated as anxiety-inducing investments or markers of social status. Accordingly, readers were provided with a weekly diet of antiques-based quizzes, 'curiograms' (anagrams with a decorative arts theme), and the 'world's first illustrated crossword for collectors and art lovers'. 37 Whatever their market values, the type of collecting the paper promoted held an attraction which cannot be reduced to the acquisition of capital. An important part of the appeal of antiques was the 'modern craving for the magical' which, as Michael Saler has pointed out, animated many different forms of cultural practice after the fin-de-siècle, helping to reconcile 'the central tenets of modernity: rationalism, secularism, urbanism, mass consumerism' with a deeply felt need for enchantment, glamour, and poetry in everyday life. In the pages of the Bazaar, promoting the search for hidden meanings in the material world carried with it a 'democratic message': that the 'occult significance' of things long sensed by the most dedicated antiquarians and connoisseurs could also become 'accessible to the common individual'. 39 The 'quest of the antique' was thus intimately related to what it meant to be modern in inter-war Britain. Pewter expert and collector Howard Cotterell wrote in to the Bazaar in December 1928 to express his appreciation of the new direction which the Saturday issue had taken: '[i]t is so essentially human and touches upon subjects for which one looks in vain elsewhere, and subjects which are for the everyday man, as opposed to the millionaire'. 40 As James Hinton has argued, a 'taste for high culture' in pre-war Britain did not necessarily represent 'an obstacle to a democratic modernity', or a cul-de-sac of class-bound conformity. Rather, it could fuel the 'creative energies of pretty ordinary people who were not prepared to settle for being nothing but ordinary', thereby giving rise to new cultural formations and forms of collective identity that cut across older social hierarchies. 41 Writing in March 1929 to thank the magazine for publishing a recent article on a particular area of their own interest, old pamphlets, a Camberwell reader confessed simply that they collected 'without knowing it-from necessity and not from choice'. 42 Readers may have found it difficult to explain this 'strange obsession' to the 'man-in-the-street', but they understood-and the editors understood that they understood-their habits to be completely unremarkable.
In praising the value of individual expression in pursuit of the old and beautiful, it had created a space where like-minded collectors could celebrate their sense of difference under mass culture together. 39
'The Quest of the Antique': Antiquarianism and Modernity in the Bazaar
If the small collector's search for the 'old and beautiful' in clothing, home furnishing, and decoration was motivated by a response to the homogenizing forces of mass manufacturing and consumerism, their quest for alternative ways of living was not one that simply took refuge in nostalgic forms of escapism. 43 The magazine suggests ways in which the 'creative engagement with mass culture', which as Matt Houlbrook has shown shaped new forms of inter-war selfhood, could be played out through visual and material as well as literary practices. 44 The Bazaar aimed to portray antique shopping as an adventurous and artistic quest for the improvement of self and society in the present: exercising collectors' natural tendency to seek out the 'curious' across the country would prompt diversification and improvement in local cultures of retailing and the curation of collections in public ownership. 45 In contrast to the largely metropolitan focus of contemporary art collecting journals, it devoted a large proportion of its editorial columns to the market for old furniture, curios, and decorative art outside of the capital, so that the Bazaar gradually came to play a key role in shaping a landscape of beauty that was as much national as it was 'provincial'. 46 It had been a Glasgow reader who had suggested the idea of a gazetteer for antiques enthusiasts and tourists in February 1927, pointing out that '[a]ll collectors and all dealers are not in London'. 47 The Saturday issue duly began to print maps, directories, and articles classifying independent antique dealers and the objects they sold, acting as a portable shop window for the casual collector, the weekend motorist, and the otherwise uninitiated. As contributors actively promoted the regions' most progressive antiques retailers, they were also calling for further modernization in standards of access to museum collections, to cultivate a broader collecting public interested in raising aesthetic standards in everyday life. In this way, the pursuit was linked to a dynamic modern commercial culture, as well as with antiquarianism. A central feature in the magazine between 1926 and 1931 was hobbyist writer Leo Forbes Outram's 'The Quest of the Antique', an illustrated column describing the author's journeys by motor car around Britain, calling at notable antique shops and nearby sites of historic interest. Along with the serialization of the Hampshire antique dealer Thomas Rohan's three popular Mills & Boon memoirs, which occupied a full page in nearly every issue of the Bazaar during this period, 'The Quest of the Antique' did much to demystify the 'secret' rituals involved in purchasing antiques for young collectors and holidaymakers. 48 Taking its title from the book of the same name published by collector and journalist Agnes Willoughby Hodgson in 1924, the column expanded the format of a traditional shopping advertorial by including snippets of local history and informal interviews with knowledgeable dealers about their own antiquarian researches or the things they had collected. 49 True to the Bazaar's progressive outlook and national remit, Outram's 'quest' was staged not in the unspoilt country village of 'Deep England', but in modernized shops and 'antique galleries' in towns and cities like Bath, Oxford, Brighton, Nottingham, Cardiff, Manchester, Liverpool, and Newcastle, as well as in London. 
50 In helping the small collector efficiently manage his or her forays into the otherwise bewildering world of antiques, the Bazaar was helping reshape the provincial amateur's relationship to connoisseurial authority and high cultural expertise. 51 Outram did home in on the things that had always interested curious collectors: bargain investments, hidden drawers, historic architecture, and unusual rarities. At the same time as he uncovered the quaint and the old-fashioned, however, a common thread running throughout all his journeys was the mission to introduce readers to the most forward-thinking and well-organized dealers he could find. 'Mrs Wilkinson, who trades as E. F. Wilkinson' of Paignton was recommended as being 'in the vanguard of women dealers in the South', for example. 52 It was now a 'self-evident fact', Outram claimed in October 1927, that 'antique shops [. . .] have become ordered showrooms. The modern collector has not the patience to forage in a mass of rubbish for what he hardly expects to find, and for which he certainly can ill spare the time'. 53 Particularly noteworthy were dealers who had made a special effort to display their stock in 'museum' settings, especially if they could suggest ideas for interior decoration in small homes. 54 An 'antique gallery' in a converted cotton factory in Preston was, Outram remarked upon visiting it, 'astonishing in its size and housing capacity'; spread over some 'five floors packed with well-chosen [. . .] furniture, glass, china, [. . .] and some very fine oil paintings', it convinced him that the Lancashire dealers stood a good chance of weathering the 'depression'. 55 In addition, the column diligently reported on provincial businesses which held free temporary art exhibitions: in October 1928, for instance, Outram recommended Bazaar readers pay a visit to the free exhibition of paintings, drawings, and prints by Laura Knight on show at John Gibbins's avant-garde Ruskin Galleries in Birmingham. 56 The 'quest of the antique', like Outram's contemporary H. V. Morton's In Search of England, was 'an adventure, not an elegy'. 57 During this period, Bazaar contributors were conscious that the trade was attracting growing numbers of amateurs and merchants who were collectors first and foremost, and noted that, while this could be a boon to customers, standards were slipping as a result. 58 'Furniture, pictures, porcelain, pottery, glass, lace, needlework, bronzes, silver, all these, and more, require study', a 'Woman Dealer' pointedly reminded aspiring antique shop-owners in 1929. 59 Readers used the magazine to air their frustrations with dishonest or difficult dealers, and lobbied for improvements to the accessibility of collections for the benefit of other small collectors. In September 1929 a 'Poor Collector' from Leeds implored old furniture retailers to accept weekly hire purchase payments: '[t]he up-to-date dealer of almost everything under the sun conducts thriving business by this method', they noted; 'so why does the antique dealer stand out?' 60 The editor called on decorative art dealers to follow other independent retailers by introducing clear price labels to discourage dishonest dealing and help 'nervously hesitating' collectors. 61 In the antique trade, he declared, 'whoever is standing still is moving backwards'. 62 Furthermore, Bazaar writers and collectors argued that it was possible to adapt antique collections, alongside contemporary design, to suit a variety of homes, budgets, and lifestyles.
Local historian William Whiteman hoped the Saturday issue for collectors would help 'enrol' readers as 'active evangelists of beauty'. 'Every crusader against Ugliness', he suggested in 1930, 'will have modern ornaments, which are not mass-produced, or harmonious antiques, in accordance with his purse'. 63 In the Bazaar, 'antiques' could easily be accommodated in the homes of those with very limited budgets. Addressing a column on this subject to 'young married couples' with a combined income of £5 a week, Alice Jeanes made the case that there was 'no need to furnish the house completely at once': '[i]n a new home much is forgiven, and it is ''rather a lark'' to pick up one's chairs, tables and cupboards one at a time, and filling out temporarily with makeshifts [sic]'. 64 In this way, taking on the perspective of a patient, questing small collector became a common-sense adaptation to the straitened situations in which many young families would find themselves in the early 1930s. Likewise, H. Hurford Janes, a reader who wrote in 'for the benefit of the sceptical' to describe how he had decorated his one-bedroom home with a collection of old furniture for '£30' over a period of 3 years, showed that 'antiques' allowed one to cut corners, adapt, and economize without sacrificing individual expression. His 'finds' included 'a corner chair of Chippendale design', which he had 'discovered as a leather-backed office chair in Shepherd's Bush Market', and 'a Grandfather clock case used inappropriately, but conveniently, for books, shelves having been fitted inside', purchased for a few shillings from a second-hand shop. 65 The home, the Bazaar endlessly reiterated, should be furnished above all by 'beauty and the expression of personality'. 66 Individuality took precedence over a slavish devotion to established signifiers of good taste or cultural distinction. 67 As important in the 'quest' of the small collector as locating friendly dealers and inspiring curios were visits to the collections of fine and decorative art objects exhibited in public museums. ' [T]here is one thing the Victoria and Albert Museum can do better than almost any other public place I know of', writer Frank Bingham declared in 1927: '[i]t can teach you how to buy antiques'. 68 Indeed, according to the Bazaar, examples of decorative art objects akin to those in even the largest public collections could often still be picked up in street market stalls. For instance, readers who had admired the Thomas Sutton collection of eighteenth-century tea caddies in the V&A, donated in 1919, were told in 1926 that it was still 'possible for the collector to pick up tea-caddies nearly as good on the barrows in the City'. 69 During this period, the Bazaar was positioning itself as an intermediary between the amateur collector and his or her education in art appreciation and design criticism. Readers were supplied with a plethora of advice on gallery visiting, such as 'Ten Rules for Enjoying Old Masters', and printed summaries of L.C.C. lectures on the history of furniture. 70 In the summer of 1928, the paper ran a competition in which readers were invited to arrange a set of ten pictures on a blank wall, thereby 'put[ting] [them]selves in the position of a hanging committee of the Royal Academy'. 71 If gallery visiting made one a more informed antiques buyer, then practising the art of second-hand shopping could better equip small collectors to interpret museum collections. 
In August 1929 Louise Gordon-Stables noted that 'the Caledonian Market and the second-hand clothes shops' had lately been inundated with 'amazing bargains' in unfashionable handmade and antique lace, 'so that now is the time for the lace-lover to form the nucleus of a collection'. 72 She pointed amateurs in the direction of the collection of English lace on show at the V&A, and then to the sixteenth-century portraits in the National Portrait Gallery, where, armed with this new knowledge, they would be able to examine 'in minute detail collars, cuffs, ruffs and ruffles'. 73 It was for this reason that, just as the Bazaar could not condone backwardness in the British antiques trade, nor did it encourage lovers of the 'old and beautiful' to become complacent with current standards of display and access to public collections. In 1929, editor George Whiteman proposed an expansion in the numbers of 'small, specialised museums, properly equipped and organised' in provincial centres, focused more particularly on the interests of antique collectors, in order that the latest techniques in museum practice might be distributed more equitably around the regions. 74 The Bazaar was not in favour of prioritizing the desires of influential 'connoisseurs' above those of ordinary art-loving museum visitors, however. Two months later, Whiteman could be found arguing for a cull of worthless donations in the smallest local museums. The senseless deification of the private collector, without regard to the needs of the vast majority, had to be stopped for the sake of the collecting world at large: Almost anyone can recall at once several small museums, that are merely jumble heaps of worthless oddments without interest or instructive value. We have ourselves seen such cuckoos in the nest as [. . .] some highly suspect flint arrowheads and scrapers found by a town councillor with a taste for archaeology. 75 By contrast, the Bazaar promoted newly opened collections such as those at Kenwood House, where 'excellent', 'home-made' cakes were served in a tea room 'decorated by a promising young artist', or Dr Johnson's House in Gough Square, where the famous attic 'will be available for social purposes'. These institutions had done much to prevent a 'dry museum atmosphere' settling over the historic architecture and objects on display. 76 The Bazaar had taken account of the concerns of provincial readers who themselves had expressed a wish to sweep away the cobwebs from regional collections to make them more accessible to a wider public. One reader urged the owners of historic houses filled with antiques to 'allow on certain days in the year serious art-lovers, not mere sight-seers, access to these houses with a view to enriching their store of knowledge', and to donate the 'money thus obtained for admission' to a centralized funding body such as the National Art-Collections Fund. Owners of 'priceless heirlooms', he believed, should 'be prepared to share their privileges with the man in the street for at least one or two days in the year'. 77 In the Bazaar, therefore, patiently searching for 'finds' and 'picking up' knowledge on old furnishings, social customs, and decoration from well-organized antique galleries and public museum exhibits was far from backward-looking. The relationship between the public collection and the private consumer of 'art' was portrayed as symbiotic: each could bestow value and meaning upon the other.
However eclectic or fragmentary the small collector's education, they had become entitled to play a role in urgent discussions surrounding the accessibility of art, history, and beauty in everyday life. The chance survival of one Exchange and Mart seller's private correspondence archive demonstrates that, in giving readers access to collectables in their own homes, the periodical continued to play an important part in the democratization of culture throughout the 1930s.
'Dispersing Small Museum': Selling Antiquities in the Exchange and Mart
Harold Clements was a devoted Bazaar, Exchange and Mart subscriber and small collector, having advertised curios for sale in the newspaper since its redesign as a collectors' magazine in the late 1920s. 78 Upon becoming manager of the Fir Tree Hotel, a public house and bed-and-breakfast in the picturesque Buckinghamshire village of Woburn Sands, he transformed its function room into what he called 'The Fir Tree Hotel Museum of Egyptian, Greek, and Roman Antiquities'. After 1929, as its self-styled 'Curator', Clements placed notices in the paper advertising the 'dispersal' of his 'small museum' every few months, selling an assortment of statues, jewellery, vases, and 'rare' curios including coins, amulets, and seals. By 1942 he had received at least 369 letters from 112 individuals across Britain via the paper. 79 The correspondence which survives is that addressed to Clements, so his own sales patter and 'arousing descriptions' have been lost; but the responses of the Bazaar readers to whom he showed and sold 'antiquities' furnish a rich picture of the private culture of collecting the periodical fostered before the Second World War (Fig. 3). 80 The publican's timing was auspicious. Alongside the popular spiritualist revival during the inter-war period, the discovery of Tutankhamen's tomb in 1922 had given rise to a widespread fascination with Egyptian culture and design, particularly small, everyday artefacts. 81 Reflecting both these enthusiasms and the Bazaar's eclectic range and readership, Clements's traceable correspondents had a diverse array of occupations. Through the Exchange and Mart, the publican received inquiries from and sold 'antiquities' to a fellow hotelier and two antique dealers, as well as a doctor, two dentists, a manufacturing chemist, an architect, an ironmonger, a gentlemen's tailor, the owner of a Fenchurch Street firm of general merchants, the manager of a Balham theatre company, a porter at Claridge's, a long-distance driver, a nurse-turned-housekeeper, and, after the outbreak of war, a dockworker and a factory hand. Several of his most frequent correspondents were retired, and a number were children. Others wrote to the Museum from their beds, including a terminally ill 73-year-old spiritualist in Worthing putting together a 'cabinet' of curios for two 'young friends', who informed Clements cheerily that his 'homeopathic Dr.' was doing 'wonders' for him. 82 Only one correspondent claimed to be a professional Egyptologist, although several told Clements that they had visited Egypt, or had seen ancient artefacts in London museums. The publican also sold two 'Egyptian' objects to the Director of the Science Museum. 83 A small number of collectors arranged a visit to see the 'Museum' and its 'Curator' in person, such as H. V. Morton, who motored up to Woburn Sands from his flat in Grosvenor Place in November 1935. 84 Typically, sales of Clements's ancient artefacts were conducted purely by post. Only rarely did the publican and his correspondents make use of the Exchange and Mart central office's deposit system to facilitate transactions; in the majority of cases, both parties wrote directly to each other and were evidently willing to make the exchange of goods and cash on their own terms.
Clements described his curios as 'fine guaranteed antiquities', and in some instances gave a description linking them to named collectors or excavations: for example, among the objects which remained unsold by 1942 was a 3-foot-long 'Ancient Egyptian Necklace, of Cylindrical Beads, intersected with small discs, in faience. B.C. 900. Excavated at Gurob by Professor Flinders Petrie'. 85 A number of correspondents asked directly for the publican's promised 'guarantee', along with any information about the objects and their history which he might be able to give them, some withholding their payment until he had supplied it or negotiating with him over price. A Westbourne Park collector informed Clements that his prices were higher than those of G. F. Lawrence of Wandsworth, the antiquities dealer who had been involved in the sale of the Cheapside Hoard to the London Museum in 1912. 86 The majority were more confident. In June 1934 Miss Doris Cogger of Tottenham sent a postal order for seven shillings and sixpence directly to cover the cost of an Egyptian 'Sacred Amulet' and a certain 'Ancient Egyptian Necklace' excavated by Flinders Petrie, adding matter-of-factly: 'Useless without [guarantee]'. 87 In fact, many expressed their gratitude and surprise that Clements was willing to send them things sight unseen, mindful perhaps of the lingering reputation of Exchange and Mart readers for forgoing payment. George Pike declared that it was 'jolly decent' of Clements to trust him and his wife with the 'ancient jewellery', and supplied a telephone number where they might be reached in East Dulwich; another correspondent gave, uninvited, the details of 'Mr. H. T. Mead, Curator of the Canterbury Royal Museum and Beaney Institute' as his 'reference', adding that he had 'scanned the Advertisement columns of the ''Bazaar'' for a long time in the hope of seeing some such offer as yours'. 88 Most transactions ended after one purchase or inquiry, but a significant number requested Clements's full 'catalogue', or accepted the dealer's standard offer to post them a portion of his collection 'on approval'. After having secured an Egyptian 'Sacred Charm' for two shillings, which he planned to wear on a 'chain', Middlesbrough paper merchant Arthur Leader told Clements: 'I think I would like a few decent sized objects now, anyway I leave it to you-wrap anything you think will do for me'. 89 Clements's repeat customers received much pleasure from having the dealer select and present antiquities to them, without obligation to purchase. Many correspondents called themselves 'amateur' or 'small' collectors, and had evidently bought things through the magazine in this way before. Mrs Cobley of Harlesden was one such repeat customer. The Barlows, meanwhile, were collecting books and antiques to decorate their 'isolated' old cottage at the head of a creek on the Roseland Peninsula in southwest Cornwall. 98 Benjamin confessed that all the antiquities would be kept on display, and hoped they would not 'deteriorate too much': We like to have our own specimens about the house as far as possible [. . .] and we have not room for very much more. I suppose it is really better to keep them in a Cabinet, but to me it makes them seem too remote, and we do like to feel that they are part and parcel of our own household Gods.
99 The indifference of the local villagers to Mabel's paintings meant that the winter of 1938 found them living in reduced circumstances, but despite being only 'small buyers' they kept almost everything that Clements sent them, writing to the dealer separately to surprise the other with special birthday and Christmas gifts. 100 Unusually, the publican also sent the Barlows presents; unannounced, he gave Mabel a 'Specimen of Coptic Embroidery', uncannily anticipating her interest in old textiles as an 'accomplished Lace Maker'. 101 The Bazaar seller's 'little Collection' seemed to have been 'magically fitted' for the Barlows' cottage and interior lives: for Benjamin's study of 'Religion, Philosophy, (especially Oriental), and Mythology', and Mabel's drawings. 102 The couple's friends were 'really superstitious about Egyptian antiquities', but as the Barlows told Clements: 'You will I am sure be able to choose something that will fascinate us'. 103 Were Clements's correspondents taken in by the antiquities and the fictions spun by their 'Curator'? To place them in relation to the modern sensibility of enchantment identified by Saler as a key characteristic of inter-war mass culture: had these collectors been 'naïvely' duped by the publican? Or could they instead be said to be '''ironic'' believers', having 'immersed' themselves in Clements's 'imaginary world [. . .] without mistaking [it] for reality', aware that antiques in the pages of the Exchange and Mart were not always as advertised? 104 Only 5 of the 112 correspondents represented in this archive expressed doubt about the things they had purchased, one having noticed that the 'Statuette' he had purchased had been broken and glued back together. 105 The only 'Egyptologist' with whom Clements corresponded explained in a curt letter that even curios sold in Egypt were now being made 'in Birmingham', though he conceded: 'Some of the things I bought from your Museum are quite good'. 106 Birmingham collector A. J. Newton professed to be 'quite in love with' the vases and 3-foot-long Egyptian blue bead necklace from Flinders Petrie's excavation which he had purchased in 1932, but offered tentatively: '[a]t the same time, I cannot help thinking, and saying, that they are only replicas'. 107 Meanwhile, some of Clements's customers had undoubtedly invested in the idea that the curios were not mass-produced copies, but 'genuine antiquities'. Arthur Leader, for example, sent a panicked but apologetic note to ask whether the 'Sacred Charm' he was wearing had an 'EVIL repute' or could be 'considered unlucky', hoping Clements would not think him 'a bit potty'. 108 In March 1933 self-described 'decorative' collector Mr Lendon Bowers requested the publican send back a 'small iridescent glass bottle-Roman (Jerusalem, 1st Cent.)' which he had previously seen on approval and declined. 109 Three months passed before he wrote from Kilmarnock to Woburn Sands enclosing his payment of six shillings, and to express his 'pleasure' in his serendipitous new purchase.
[A]ltho' it is not the one I meant which was included in the specimens sent & which had more iridescent glaze thereon, I am pleased with it. It did not occur to me that you might have more than one of same place & period. This one is certainly perfect in design. 110 Some of Clements's happy customers had undoubtedly been wholly taken in by the seller's self-proclaimed status as a museum-keeper; perhaps, for some, this itself was sufficient guarantee of the special authenticity of the objects he sold, in contrast to purveyors of mass-produced consumer goods.
It might be equally suggested, however, that the real naïfs were those who expressed their disappointment upon realizing that the 'ancient' artefacts they had purchased from a cabinet in the function room of a pub in Buckinghamshire were fake. The 'buffering role' played by the modern 'ironic imagination' which, as Saler writes, prevented 'complete acceptance or acquiescence into any particular cultural construct', would have helped smooth anonymous and unregulated Exchange and Mart transactions. 111 The Bazaar had, after all, helped spread the message that its readers could have a hand in determining the meaning and value of old collectables: for the small collector, an antique did not have to be one-of-a-kind for it to be 'perfect in design'. The satisfaction of the correspondents who had allowed themselves to become enchanted by the publican's fictions can be seen as a rational response to a material world in which nothing was ever as it seemed. By its very nature, the quest of the antique required a leap of faith. 112 Assuming the identity of either an amateur 'Curator' of antiquities or a 'collector' of antiques and decorative art during this period was a whimsical indulgence, though one which could have profound meaning in everyday life. As Benjamin Barlow put it in 1938, finding Clements and his antiquities via the Bazaar had been a 'privilege and a pleasure': 'You have sent a Collection which has fulfilled our dreams in a most subtle way'. 113
Conclusion
'Papers like the Exchange and Mart', Orwell declared in 1939, 'only exist because there is a definite demand for them, and they reflect the minds of their readers as a great national daily with a circulation of millions cannot possibly do'. 114 In their quest to find enchantment in everyday life, Bazaar, Exchange and Mart subscribers did not simply constitute a collective of narrow-minded or nostalgic eccentrics. They were staunchly individual but firmly of their own time, having been admitted into a club of thousands of responsive readers to whom no desire was too far-fetched; no problem too troublesome; no hobby or obsession too trivial. As an effortless means of purchasing, exchanging, and disposing of used or unwanted goods from the comfort of one's own home, the paper undoubtedly also played a part in sustaining, as well as entertaining, many people in times of economic uncertainty and domestic crisis. In the Exchange and Mart, second-hand goods were never second best. Even as it was redesigned as a stylish art periodical in 1926, the Bazaar's 'Popular Weekly for Collectors' would continue to speak to these modest yet modern concerns: the eclectic range of articles, debates, and viewpoints it published traversed traditional connoisseurial, class and gender hierarchies and in so doing won a
Interaction of carbohydrate binding module 20 with starch substrates
CBM20s are starch-binding domains found in many amylolytic enzymes, including glucoamylase, alpha-amylase, beta-amylase, and a new family of starch-active polysaccharide monooxygenases (AA13 PMOs). Previous studies of the CBM20-substrate interaction only concerned relatively small or soluble amylose molecules, while amylolytic enzymes often work on extended chains of insoluble starch molecules. In this study, we utilized molecular simulation techniques to gain further insight into the interaction of CBM20 with substrates of various sizes via its two separate binding sites, termed BdS1 and BdS2. The results show that substrate binding at BdS1, which involves two conserved tryptophan residues, is about 2-4 kcal mol−1 stronger than that at BdS2. CBM20 exhibits about two-fold higher affinity for helical substrates than for amylose random coils. The affinity for individual amylose double helices does not depend on the helices' length. At least three parallel double helices are required for optimal binding. The binding affinity for a substrate containing 3 or more double helices is ∼−15 kcal mol−1, which is 2-3 kcal mol−1 larger in magnitude than that for individual double helices. 100 ns molecular dynamics simulations were carried out for the binding of CBM20 to an extended substrate containing 3 layers of 9 60-unit double helices (A3L). A stable conformation of CBM20-A3L was found at BdS1. However, when CBM20 binds A3L via BdS2, it moves across the surface of the substrate and does not form a stable complex. MD simulations also show that small amylose helices are quickly disrupted upon binding to CBM20. Our results provide important molecular insights into the interactions of CBM20 with starch substrates, which will serve as the basis for further studies of CBM20-containing enzymes, including AA13 PMOs.
Introduction
Starch is one of the most abundant natural polymers on earth and plays important roles in human society, in both the food and non-food sectors. 1 Starch consists of ~20-30% amylose and ~70-80% amylopectin. 2 Amylose contains linear polymers of several to thousands of α(1→4)-linked D-glucose units. Amylopectin also contains α(1→6) linkages at about every ~30 units along the α(1→4)-linked chain. In biology, starch is metabolized via hydrolysis to oligosaccharides by amylolytic enzymes. Amylose often exists in nano/microcrystalline forms of helices that are resistant to hydrolysis by amylolytic enzymes. In contrast, amylopectin is highly branched and has less ordered structures that are more amenable to hydrolysis.
Amylolytic enzymes, including alpha-amylase, beta-amylase, and gamma-amylase (glucoamylase), often contain one or more carbohydrate binding modules (CBMs), such as CBM20. CBMs are ubiquitous, with 84 families spread across all kingdoms of life. 3 They are often thought of as supporting modules that help carbohydrate-active enzymes bind to their target substrates. However, accumulated data suggest that the roles of CBMs are more diverse. Pre-incubation of amylose with a stand-alone CBM20 was found to significantly enhance its hydrolysis by a truncated glucoamylase lacking any CBM. 4 It was proposed that CBM20 disrupted the helical structure of amylose, making it more amenable to hydrolysis by amylase. Two starch binding sites were revealed in CBM20. 5 Binding site 1 (BdS1) contains two critical tryptophan residues. Binding site 2 (BdS2) does not contain any tryptophan residues, but contains two tyrosine residues. It was also later shown on the basis of Atomic Force Microscopy (AFM) that CBM20 disrupted the structure of soluble amylose, presumably via binding at these two sites. 6,7 Due to the low resolution of AFM, the atomic detail of the CBM20-amylose interaction was not clear.
In the last decade, extensive studies of carbohydrate-active enzymes in both academic and industrial sectors were driven by the enormous demand for next-generation biofuels. Among the large number of new enzymes discovered, polysaccharide monooxygenases (PMOs), or lytic PMOs (LPMOs), exhibit an unprecedented oxidative mechanism of glycosidic bond cleavage. [8][9][10][11][12] Some PMOs can boost the activity of glycoside hydrolases, [13][14][15][16] and thus could play a key role in reducing the cost of biomass conversion to fermentable sugar, the bottleneck step in cellulosic biofuel production. During this period, interest in the role of CBMs in substrate binding and catalysis by carbohydrate-active enzymes was also reignited. 17-21 CBM20 was found as a C-terminal domain in the majority of starch-active PMOs (AA13), [22][23][24] the only PMO family that oxidatively cleaves the α-glycosidic bond in starch. CBM20 appears to have significant roles in the activity of AA13 PMOs on various types of starch.
Understanding the nature of binding between CBMs and starch substrates is of great fundamental and practical interest, but has proved almost infeasible to probe experimentally at the atomic level. Previous studies using AFM 6,7 or NMR 5 only revealed the interaction of CBM20 with individual molecules of soluble amylose or starch analogues, respectively. Although these studies revealed the binding sites (NMR) or how CBM20 would disrupt small amylose molecules, they did not reflect the true interaction of CBM20 with the insoluble substrates relevant to industry. In the present work, we used molecular docking and molecular dynamics simulations to obtain atomic-level insights into the interaction between CBM20 and various starch substrates, including a large bundle of extended amylose double helices. Our study provides new insights into the interactions of CBM20 with starch substrates and some important implications for the biochemistry of CBM20 and CBM20-containing enzymes.
Materials and methods
Input structure of CBM20
The NMR structure of a complex of CBM20 with β-cyclodextrin (PDB ID: 1AC0) reveals two binding sites (Fig. 1A). 5 Binding site 1 (BdS1) contains two critical tryptophan residues, W543 and W590. Binding site 2 (BdS2) does not contain any tryptophan residues, but contains two tyrosine residues, Y527 and Y556. The structure of this CBM20 and its binding sites was used for the molecular docking and molecular dynamics simulations in this study.
The structures of the amylose substrates, including 3 layers of 9 60-unit amylose double helices (A3L) (Fig. 1C), were generated using PyMOL 1.3 and the Carbohydrate Builder at the GLYCAM server. 26 A 60-unit amylose double helix (ADH60) was first generated using the Carbohydrate Builder with the ψ and φ angles obtained from a crystal structure of A-amylose (A-amylose_2009-popov_expanded.pdb) 27 available at POLYSAC3DB. 28 Crystal packing parameters were also obtained from this structure and used to generate A3L with PyMOL.
Molecular docking
CBM20 and the substrates were parameterized using AutoDockTools 1.5.6. 29 The ligands were docked to the receptor using AutoDock Vina version 1.0 (ref. 30) with optimization by the Broyden-Fletcher-Goldfarb-Shanno (BFGS) scheme. 31 The exhaustiveness was set to 40. The CBM20 molecule was treated as a rigid body during docking. CBM20 was used as the ligand in the docking studies with A3L and as the receptor in all other cases. In the docking experiment with A3L, a grid of 4.0 × 4.0 × 6.0 nm, centered at the center of A3L, was used for the CBM20-A3L complex. The substrates containing 1-9 parallel ADH60 molecules (nADH60, n = 1-9), resembling fragments of one layer of A3L, were docked to CBM20 using a grid of 2.0 × 30 × 30 nm. The ADH10, ASH10, and ASC10 molecules were docked to CBM20 with a grid size of 4.0 × 4.0 × 6.0 nm. During docking, the helical substrates were treated as rigid molecules while ASC10 was fully flexible. The best docked model was taken as the binding pose with the lowest (most negative) binding energy.
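To make the docking setup concrete, the snippet below drives a single AutoDock Vina run from Python with the exhaustiveness used here. It is a minimal sketch rather than the authors' actual workflow: the file names and the grid center are assumptions, and Vina takes its box dimensions in ångströms, so the 4.0 × 4.0 × 6.0 nm grid becomes 40 × 40 × 60 Å.

```python
import subprocess

def dock(receptor_pdbqt, ligand_pdbqt, center, size, out_pdbqt):
    """Run one AutoDock Vina job; center and size are in angstroms."""
    cx, cy, cz = center
    sx, sy, sz = size
    subprocess.run([
        "vina",
        "--receptor", receptor_pdbqt,
        "--ligand", ligand_pdbqt,
        "--center_x", str(cx), "--center_y", str(cy), "--center_z", str(cz),
        "--size_x", str(sx), "--size_y", str(sy), "--size_z", str(sz),
        "--exhaustiveness", "40",  # value used in this study
        "--out", out_pdbqt,
    ], check=True)

# CBM20 is the ligand when docking to the large A3L substrate; the
# 4.0 x 4.0 x 6.0 nm grid (40 x 40 x 60 A) is centered on A3L, here
# assumed to sit at the origin. File names are hypothetical.
dock("a3l.pdbqt", "cbm20.pdbqt",
     center=(0.0, 0.0, 0.0), size=(40.0, 40.0, 60.0),
     out_pdbqt="cbm20_a3l_poses.pdbqt")
```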
Molecular dynamics simulation
The CBM20 molecule was parameterized with the Amber99SB-ildn force field. 32 All substrates were parameterized using the GLYCAM06j-1 force field. 26 The CBM20-starch complex structures were generated via molecular docking. These complexes were solvated using the TIP3P water model. 33 The CBM20 + A3L complex was placed in a ~1921 nm³ rectangular box, resulting in a system containing more than 194 300 atoms. The CBM20 + ADH10/ASH10 complexes were inserted into a 585 nm³ dodecahedron box, forming systems consisting of more than 59 000 atoms.
The solvated systems were energy-minimized with the steepest descent scheme. After that, the CBM20-substrate complexes were relaxed for 500 ps in the NVT ensemble with a harmonic positional restraint applied to them, and then relaxed for 500 ps in the NPT ensemble. The last snapshot of the NPT simulation was used as the starting conformation for the production MD simulation. The MD simulation lengths range from 50 to 100 ns. The MD simulations were performed on GPUs using GROMACS 5.1.3 (ref. 34) with MD parameters derived from our previous works. 35,36 The simulation temperature was set at 300 K. All bonds were constrained using the LINCS method. 37 The non-bonded cutoff was set at 0.9 nm. Particle mesh Ewald was used to treat the electrostatic interactions; van der Waals interactions were computed with a 0.9 nm cutoff. The regions of A3L forming contacts with CBM20 were left unrestrained, while the other regions and atoms were restrained with a small harmonic potential during the MD simulations (Fig. S1†).
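For orientation, the production settings above map onto a handful of GROMACS .mdp parameters. The sketch below writes such a file and launches the run from Python (used here for consistency with the other examples); the v-rescale thermostat, the absence of pressure coupling, and all file names are assumptions not stated in the text.

```python
import subprocess
from textwrap import dedent

# 100 ns at a 2 fs time step; 300 K; LINCS on all bonds; 0.9 nm
# non-bonded cutoffs; PME electrostatics, as described in the text.
mdp = dedent("""\
    integrator           = md
    dt                   = 0.002
    nsteps               = 50000000     ; 100 ns
    constraints          = all-bonds
    constraint-algorithm = lincs
    cutoff-scheme        = Verlet
    rlist                = 0.9
    rcoulomb             = 0.9
    rvdw                 = 0.9
    coulombtype          = PME
    tcoupl               = v-rescale    ; thermostat choice assumed
    tc-grps              = System
    tau-t                = 0.1
    ref-t                = 300
    """)

with open("md.mdp", "w") as f:
    f.write(mdp)

# Hypothetical input files from the preceding NPT relaxation step.
subprocess.run(["gmx", "grompp", "-f", "md.mdp", "-c", "npt.gro",
                "-p", "topol.top", "-o", "md.tpr"], check=True)
subprocess.run(["gmx", "mdrun", "-deffnm", "md"], check=True)
```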
Data analysis
Structural analysis. A hydrogen bond (HB) is counted when the distance between donor and acceptor is smaller than 0.35 nm and the acceptor-hydrogen-donor angle is larger than 135°. Intermolecular side-chain contacts (SCs) between non-hydrogen atoms of individual CBM20 residues and the A3L substrate were counted when the spacing between two atoms was smaller than 0.45 nm. Polar contacts between charged groups of the two molecules were predicted using the ligand sites scheme of the PyMOL package. IMPACT was used to predict the collision cross section (CCS) of the protein. 38 The end-to-end distance of an amylose chain is taken as the distance between C1 of the first glucose unit and C4 of the last glucose unit. Clustering 39 was performed to search for MD-refined structures of the system using the GROMACS tool "gmx cluster". 40 The secondary structure of CBM20 was assigned using the DSSP protocol. 41 The collective-variable free energy landscape (FEL) was constructed using the GROMACS tool "gmx sham". 40 The number of SCs between the two molecules and the CCS of CBM20 were selected as the coordinates for the FEL.
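The two geometric criteria are simple enough to spell out in code. Below is a minimal numpy sketch of both, assuming coordinates are given in nm; since the text does not say whether contacts are counted per atom pair or per residue, the pair count shown here is one plausible reading.

```python
import numpy as np

def is_hbond(donor, hydrogen, acceptor, d_cut=0.35, ang_cut=135.0):
    """H-bond criterion from the text: donor-acceptor distance < 0.35 nm
    and acceptor-hydrogen-donor angle > 135 degrees. Inputs are (3,)
    arrays of coordinates in nm."""
    if np.linalg.norm(acceptor - donor) >= d_cut:
        return False
    v1 = donor - hydrogen
    v2 = acceptor - hydrogen
    cos_ang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0))) > ang_cut

def count_contacts(side_chain_xyz, substrate_xyz, cutoff=0.45):
    """Count heavy-atom pairs closer than 0.45 nm between one residue's
    side chain (N,3) and the substrate (M,3)."""
    d = np.linalg.norm(side_chain_xyz[:, None, :] - substrate_xyz[None, :, :],
                       axis=-1)
    return int((d < cutoff).sum())
```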
Pull-down assays
NCU08746 was expressed and purified as previously described. 23 Cornstarch (S4126) and corn amylopectin (10120) were purchased from Sigma-Aldrich. 100 mg of each substrate was washed three times with 1 mL of 10 mM sodium acetate buffer, pH 5.0 (buffer A), using centrifugation. The final pellet was re-suspended in 1 mL buffer A. NCU08746 was added to each substrate suspension to a final concentration of 5 µM. The assays were incubated at room temperature with gentle rotation for 30 minutes, and then centrifuged to separate the pellets from the supernatants. The pellets were washed three times with 1 mL buffer A, and the final pellet was re-suspended in 1 mL buffer A. 50 µL aliquots of the supernatants collected after the initial incubation step and the 3 washing steps, as well as the final suspension, were mixed with 10 µL of 6× SDS-PAGE sample buffer. SDS-PAGE was then carried out using Protean TGX precast gels (Bio-Rad) as instructed by the manufacturer. The gels were analyzed using a ChemiDoc Imaging system (Bio-Rad).
Results and discussion
Binding affinity of CBM20 for starch substrates derived from molecular docking
Molecular docking of CBM20 to the A3L substrate. Molecular docking is an efficient method to study the binding affinity and binding pose between two biomolecules. 42 Extended amylose helix bundles have not been studied previously because of the difficulty of determining their structure with experimental methods. Here we generated a model of an amylose bundle consisting of 3 layers of 9 60-unit double helices based on the crystal structure of small amylose double helices, 25 which provides unprecedented insights into the CBM20-amylose interaction.
Because the CBM20-A3L complex is very large, we chose a 4.0 × 4.0 × 6.0 nm docking grid, large enough to provide space for CBM20 to sample various binding positions on A3L. Two docking poses of CBM20 on the A3L substrate were observed (Fig. 2). These poses are consistent with the binding sites of CBM20 observed with β-cyclodextrin reported previously. 43 The binding affinity at BdS1 (−15 kcal mol−1) is significantly stronger than that at BdS2 (−12.5 kcal mol−1), in good agreement with the numbers of H-bonds between CBM20 and A3L found at the two binding sites. At BdS1, residues K578, S592, and E589 form 4 H-bonds to A3L, while at BdS2 only two H-bonds are formed, by T557 and R616. Moreover, the critical aromatic residues W543 and W590 at BdS1 appear to contribute significantly more to the binding of CBM20 to A3L than do Y527 and Y556 at BdS2. While the aromatic rings of Y527 and Y556 at BdS2 do not align well on a surface, the W543 and W590 side chains are positioned close to one another in BdS1 and form a relatively flat surface, as found in many other CBMs. 44
Molecular docking of CBM20 to small amylose substrates. The majority of starch exists as a mixture of short helices and flexible coils. 2 Thus, we studied the interaction of CBM20 with an amylose double helix (ADH10), an amylose single helix (ASH10), and a random coil (ASC10), each of which contains 10 glucose units. To retain the helical structures, ADH10 and ASH10 were set as rigid molecules, while ASC10 was fully flexible. The binding affinity values obtained for CBM20 with ADH10, ASH10, and ASC10 are shown in Fig. 3. These values are similar to those obtained experimentally for CBM20 complexes with β-cyclodextrin and some oligosaccharides. 18,45 For all three substrates, the binding affinity at BdS1 is significantly stronger (by 1.8-3.5 kcal mol−1) than that at BdS2. The average binding affinities over both binding sites for ADH10, ASH10, and ASC10 are −10.9 ± 0.5, −10.2 ± 0.3, and −5.6 ± 0.3 kcal mol−1, respectively. This result clearly indicates that CBM20 strongly prefers helical amylose over the random coil.
Effect of substrate size on binding affinity to CBM20. The binding affinities of CBM20 for the 10-unit substrates are about 2-3 kcal mol−1 weaker than that for A3L described above. To further examine the effect of substrate size on the binding affinity, we performed additional docking experiments of CBM20 with substrates containing 1 to 9 parallel 60-unit double helices (nADH60, where n = 1-9) (Fig. 4). The binding affinity of CBM20 for 1ADH60 (~−11.5 kcal mol−1) is similar to that for ADH10, indicating that the length of the helix does not have any clear effect on the binding affinity. In the nADH60 series, the binding affinity increases as n increases from 1 to 3, and then plateaus at ca. −15 kcal mol−1 for n ≥ 3, similar to the value for A3L described above. This result indicates that CBM20 requires at least 3 parallel double helices for optimal binding, which is consistent with the optimized structure of the CBM20-A3L complex shown in Fig. 2 and with the MD results vide infra. The binding affinity remains essentially the same when the substrate has more helices.
Molecular dynamics of the CBM20-A3L complex
Molecular dynamics simulation. Although molecular docking provides useful detail on CBM20-A3L binding, it does not take the dynamics of the system into account. MD simulations were therefore carried out for the CBM20-A3L complex, in comparison with CBM20 alone, to elucidate the effects of dynamics on the binding of the complexes. To better describe the interaction between CBM20 and A3L, the amylose helices forming contacts with CBM20 were not constrained during the MD simulations.
The CBM20-A3L complex at BdS2 did not reach an equilibrium state after 100 ns of MD simulation. The protein moved across the surface of the A3L substrate during the entire MD trajectory (Fig. S2 and S3†). Analysis of several snapshots of the CBM20-A3L complex at BdS2 reveals a few polar contacts involving mainly T522, T614, Y527, and Y566. Further analyses of the CBM20-A3L binding process at BdS2 were not performed.
The binding at BdS1 reached an equilibrium state after 40 ns (Fig. S4†), and analyses were performed for this system over the last 60 ns of MD simulation. Overall, CBM20 forms 14.78 ± 1.46 side-chain contacts (SCs) and ~6.09 ± 1.19 hydrogen bonds (HBs) with the substrate (Fig. S5†). The probabilities of intermolecular contacts between individual CBM20 residues and the A3L substrate are shown in Fig. 5. Critical residues involved in the binding are D542, W543, E544, E576, K578, D585, D586, S587 and W590. A previous NMR study revealed that W543, K578, and W590 are key residues in the binding of CBM20 to β-cyclodextrin. 46 Residues 541-545, 576, 578, 585-590, 592, and 595 form intermolecular SC contacts with the substrate in at least half of the MD simulation time (Fig. 5). In particular, residues 542, 543, 578, 585-588, and 590 maintain SC contacts with A3L in all equilibrium snapshots (Fig. 5). Moreover, residues 542-544, 576, 578, 585-587, and 590 form HBs with the substrate in more than 35% of the equilibrium snapshots. In addition, four independent trajectories with the same starting structure but different randomly generated starting velocities were also carried out (Fig. S6†). The superposition of the CBM20-A3L complex at BdS1 from the different independent MD trajectories helps validate our results (Fig. S7†). CBM20 undergoes small structural changes upon binding to A3L at BdS1. There are slight changes in the β-content (from 39.44 ± 1.92% to 38.90 ± 2.40%) and coil content (from 48.68 ± 3.11% to 46.82 ± 2.71%) of CBM20 when it binds to A3L (Fig. S8†). Larger changes are observed for the helical content (from 13.75 ± 1.92% to 1.33 ± 1.76%) and turn content (from 0 to 11.09 ± 2.59%). The collision cross section, which represents the overall size of CBM20, decreased from 14.89 ± 0.16 to 14.70 ± 0.15 nm² (Fig. S9†). In addition, the region of A3L that binds CBM20 appears to be slightly stabilized compared with that in free A3L, as indicated by a slightly lower RMSD throughout the 100 ns MD simulation (Fig. S10†).
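As a pointer for readers who want to reproduce the secondary-structure bookkeeping, the sketch below computes per-frame β and coil fractions with MDTraj's DSSP wrapper rather than the original DSSP binary; the file names are assumptions, and note that the simplified DSSP scheme folds turns into the coil class, whereas the analysis above reports turns separately.

```python
import mdtraj as md

# Hypothetical file names; simplified DSSP codes are H (helix),
# E (beta strand), and C (coil, which here also absorbs turns).
traj = md.load("md_bds1.xtc", top="cbm20_a3l.pdb")
protein = traj.atom_slice(traj.topology.select("protein"))

dssp = md.compute_dssp(protein, simplified=True)  # (n_frames, n_residues)
beta = (dssp == "E").mean(axis=1) * 100           # percent beta per frame
coil = (dssp == "C").mean(axis=1) * 100

print(f"beta content: {beta.mean():.2f} +/- {beta.std():.2f} %")
print(f"coil content: {coil.mean():.2f} +/- {coil.std():.2f} %")
```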
The relative binding free energy between the CBM20 protein and the A3L substrate was evaluated using the free energy perturbation method. The results indicate that the electrostatic free interaction energy dominates over the van der Waals free interaction energy in the binding of CBM20 to the A3L substrate (Fig. S11†).
Optimized structure of the CBM20-A3L complex at BdS1. The free energy landscape of the CBM20 + A3L complex was constructed with two collective variables: the CCS of CBM20 and the number of SC contacts between CBM20 and the A3L substrate. The result is shown in Fig. 6. One optimized structure of the complex was observed at minimum A (Fig. 7). Residues W543, E544, E576, K578, D585, D586, and W590 form 11 polar contacts with the substrate (Fig. 7), consistent with the docking and whole-trajectory analyses described above.
Fig. 6 Free energy landscape of the CBM20-A3L complex at BdS1, constructed for all equilibrium snapshots using the number of SCs between the two molecules and the CCS of CBM20 as the coordinates. Minimum A is found at (~14.73 nm²; ~19.5).
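Underneath "gmx sham" is a Boltzmann inversion of the two-dimensional histogram of the chosen collective variables, F = −kT ln(P/Pmax). The sketch below reproduces that construction in numpy, assuming one CCS value and one SC count have been extracted per equilibrium snapshot; it is an illustration of the method, not the tool itself.

```python
import numpy as np

KT = 0.596  # kB*T in kcal/mol at 300 K

def free_energy_landscape(cv1, cv2, bins=50):
    """Boltzmann-invert a 2D histogram of two collective variables:
    F = -kT * ln(P / P_max), so the deepest minimum sits at 0 kcal/mol.
    Empty bins come out as +inf (unsampled regions)."""
    counts, xedges, yedges = np.histogram2d(cv1, cv2, bins=bins)
    prob = counts / counts.sum()
    with np.errstate(divide="ignore"):
        fel = -KT * np.log(prob / prob.max())
    return fel, xedges, yedges

# cv1 = CCS of CBM20 per snapshot (nm^2); cv2 = number of SCs per
# snapshot. Minimum A of the landscape lies near (14.73 nm^2, 19.5).
# fel, xe, ye = free_energy_landscape(ccs_values, sc_counts)
```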
CBM20 disrupts the helical structure of small substrates
MD simulations without any restraints were carried out to gain insight into the interactions of CBM20 with the small amylose substrates, starting from the CBM20-ADH10 and CBM20-ASH10 complexes obtained from docking. The structural changes of these soluble complexes were monitored over the course of the MD simulations. Interestingly, we observed that the amylose helices were rapidly disrupted (Fig. 8). Simulations were therefore carried out for only 50 ns for both complexes.
The structural changes of ADH10 during the simulations under the influence of CBM20 were measured through the end-to-end distances of the amylose chains. At BdS1, the end-to-end distances of the two chains in ADH10 decreased from 4.1 nm to ~3 nm and ~1.5 nm after 50 ns of MD simulation (Fig. 8A). At BdS2, the end-to-end distances also decreased, to ~3.5 and ~3.0 nm (Fig. 8B). The curves representing these distances over time are clearly different from one another. Thus, it is evident that the double helix was rapidly disrupted in the presence of CBM20 at both binding sites, consistent with the structures of the double helix taken at various snapshots (Fig. 8A and B). Likewise, the single helix was also disrupted at both binding sites of CBM20 within 50 ns of MD simulation (Fig. 8C and D). It is worth noting that isolated ADH10 and ASH10 in solution were stable over 50 ns of MD simulation (Fig. S12 and S13†). This result is consistent with previous experimental results on small soluble amylose. 4,6 Several isothermal titration calorimetry studies revealed 5-8 kcal mol−1 binding affinities for oligosaccharides binding the CBM20s of the glucoamylase from Aspergillus niger 45 and the starch-active polysaccharide monooxygenases (AA13) from Magnaporthe oryzae and Aspergillus terreus. 18 Moderate affinity for starch granules was also demonstrated for these CBM20s. Given the moderate affinity of CBM20 for starch, we have been able to use CBM20 as an affinity tag for facile purification of NCU08746, an AA13 polysaccharide monooxygenase from Neurospora crassa (NcAA13), using an amylose resin column.
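The end-to-end measurement itself reduces to one distance per frame. The sketch below tracks one chain of ADH10 with MDAnalysis; the file names, the assumption that the chain spans residues 1-10, and the selection strings are illustrative, not taken from the paper.

```python
import numpy as np
import MDAnalysis as mda

# Hypothetical trajectory of the CBM20-ADH10 complex; C1 of the first
# and C4 of the last glucose unit define the end-to-end distance.
u = mda.Universe("cbm20_adh10.gro", "cbm20_adh10.xtc")
first_c1 = u.select_atoms("resid 1 and name C1")
last_c4 = u.select_atoms("resid 10 and name C4")

end_to_end = np.array([
    np.linalg.norm(first_c1.positions[0] - last_c4.positions[0]) / 10.0
    for _ in u.trajectory  # MDAnalysis positions are in A; /10 gives nm
])
print(f"final end-to-end distance: {end_to_end[-1]:.2f} nm")
```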
We performed pull-down assays of NCU08746 with starch granules and amylopectin from corn to compare the binding affinity between these substrates. SDS-PAGE analysis indicates that, for both starch granules and amylopectin, a significant amount of protein remained in the supernatant (Fig. 9). A fraction of the protein was found to bind to the pellet and was gradually released into solution in each washing step. This result is consistent with previous studies showing that CBM20 has moderate affinity for starch substrates. Moreover, the density of the bands corresponding to NCU08746 on the gels indicates that NCU08746 has higher affinity for amylopectin than for starch granules. Both the amylopectin and the starch granules used in this study are from corn. It is likely that the separation of amylopectin from starch granules made the helical regions more accessible to CBM20, resulting in higher affinity. This result is consistent with our docking studies showing that CBM20 has higher affinity for helical amylose than for flexible coil amylose.
Previous activity4 and AFM6,7 studies provided evidence that CBM20 disrupts starch structure. These studies were carried out on individual, relatively short-chain amylose molecules that are soluble. Our computational studies on short-chain amylose are consistent with these experimental studies and provide a dynamic picture of the amylose disruption process. However, for extended insoluble amylose chains in a large bundle, our MD simulations suggest that the disrupting effect of CBM20 might not be as strong as it is for short-chain amylose.
Moreover, our computational study indicates that CBM20 has higher affinity for ADH10 and ASH10 than for ASC10. It is possible that CBM20 first binds to the helical region of starch and then disrupts this structure, resulting in weaker affinity. This could explain the moderate affinity of CBM20 for starch, which allows for subsequent dissociation of CBM20 to make the substrate available for hydrolysis by the catalytic domains of amylases.
Conclusions
Our molecular docking, molecular dynamics simulations, and pull-down assays provide new insights into the interactions of CBM20 with various starch substrates. First, CBM20 has two binding sites, namely BdS1 and BdS2, that exhibit different binding affinities for starch substrates. Binding at BdS1, which involves two conserved tryptophan residues, is 2-4 kcal mol⁻¹ stronger than that at BdS2, which involves two conserved tyrosine residues. Second, CBM20 has higher affinity for helical amylose molecules than for random coil starch molecules. The binding affinity for the double helices does not depend on the length of the helices. CBM20 requires three parallel helices for optimal binding, and the binding affinity does not change when the substrate has more than three helices. Finally, CBM20 quickly disrupts the helical structure of short substrates, but does not disrupt the helices in the extended substrate during 100 ns MD simulations. On the extended substrate, CBM20 forms a stable complex at BdS1, but moves along the substrate surface when interacting via BdS2.
These insights, which were not observed in previous studies with small soluble amylose and cyclodextrins, are helpful for future studies on the interactions of CBM20-containing starch-active enzymes with industrially relevant starch substrates. Recently, we found that CBM20-containing AA13 polysaccharide monooxygenases (AA13 PMOs) have about twofold higher activity than the corresponding catalytic domains alone.47 In addition, AA13 PMOs are likely the first PMO family, and the first oxidative enzyme family, to have processivity-like activity. AA13 PMOs may slide on the amylose helices and cleave the glycosidic linkages separated by multiples of a helical turn, generating major products with degrees of polymerization (DP) of 6n (n = 1, 2, 3, ...). CBM20 appears to help the enzymes stay longer on extended amylose helices and generate more products at higher DP. The understanding of the interaction of CBM20 with starch substrates obtained in this work will serve as the basis for computational studies of CBM20-containing AA13 PMOs with starch substrates, which will provide further insight into the processivity-like activity of these enzymes.
Author contributions
The manuscript was written through contributions of all authors.
Conflicts of interest
The authors declare no conflicts of interest.
Role of carboxylates in the phase determination of metal sulfide nanoparticles †
Techniques are well established for the control of nanoparticle shape and size in colloidal synthesis, but very little is understood about precursor interactions and their effects on the resultant crystalline phase. Here we show that oleate, a surface stabilizing ligand that is ubiquitous in nanocrystal synthesis, plays a large role in the mechanism of phase selection of various metal sulfide nanoparticles when thiourea is used as the sulfur source. Gas- and solid-phase FTIR, ¹³C NMR, and ¹H NMR studies revealed that oleate and thiourea interact to produce oleamide, which promotes the isomerization of thiourea into ammonium thiocyanate, a less reactive sulfur reagent. Because of these sulfur-sequestering reactions, sulfur-deficient and metastable nanoparticles are produced, a trend seen across four different metals: copper, iron, nickel, and cobalt. At low carboxylate concentrations, powder XRD indicated that the following phases formed: covellite (CuS); vaesite (NiS2); smythite (FeS1.3), greigite (FeS1.3), marcasite (FeS2) and pyrite (FeS2); and cattierite (CoS2). At high sodium oleate concentration, these phases formed: digenite (CuS0.55), nickel sulfide (NiS), pyrrhotite (FeS1.1), and jaipurite (CoS).
Introduction
For many applications of solid-state and nanocrystalline materials, the identity and purity of the crystalline phase are essential to function. The phase space is highly complex. For example, in the geological record there are nine iron sulfides, four cobalt sulfides, seven nickel sulfides, and eight copper sulfides, of differing stoichiometries and polytypes, each with its own physical properties.
The current and potential applications of the metal sulfides are highly varied even within each metal class. As examples, while pyrite (FeS2) is a paramagnetic iron sulfide and is useful in various environmental oxidative processes,1 greigite (FeS1.3) shows superparamagnetic behavior at small particle sizes, which is potentially useful for treatment of cancer through magnetically induced hyperthermia.2 In the copper sulfide family, many of the copper sulfides, including digenite (CuS0.55) and covellite (CuS), possess localized surface plasmon resonances (LSPR) in the near IR and so can be used as plasmonic semiconductors in optoelectronic devices.3 Covellite (CuS) has also been used as a catalytic glucose oxidizer for glucose detection.4 In the cobalt sulfide family, cattierite (CoS2) has been used in lithium-sulfur battery cathodes to accelerate the redox reactions of polysulfides,5 while jaipurite (Co9S8) has been used as a supercapacitor.6 In the nickel sulfide family, vaesite (NiS2) is an electrocatalyst for the hydrogen evolution reaction (HER),7,8 while nickel sulfide (NiS) can be used as a supercapacitor.9

Rationally synthesizing one phase over another can be challenging when dealing with transition metal sulfides because of the multiple phases of varying stoichiometry and symmetry. There are many one-off reports of colloidal syntheses of these metal sulfides, but the reasons behind phase selection under certain conditions remain occluded. Common sulfur precursors include thiourea,10,11 elemental sulfur,12 sodium sulfide,13 thioacetamides,14 carbon disulfide,15 oleylamine-sulfur (thioamides),16 dithiocarbamates,17,18 thiobiurets,19 thiols,20 and thioesters,21 among many others.22 Even with such a vast library of sulfur reagents, studies of their decomposition pathways rarely elucidate complete mechanisms. Rhodes et al. were able to select for specific iron sulfide phases based on the strength of the C-S bonds of the chosen thiols, thioethers, and disulfides. Stronger C-S bonds yielded sulfur-poor pyrrhotite (FeS) while weaker C-S bonds yielded sulfur-rich pyrite (FeS2). While the general trend was straightforward, it was found that the unique decomposition mechanism of diallyl disulfide, facilitated by the oleylamine solvent, was essential to the formation of pyrite (FeS2).21 The Hogarth group has also contributed to our understanding of precursor decomposition pathways and their effect on the synthesized phase.23,24 Most notably, they were able to achieve four phases of the nickel sulfide family: α-NiS, β-NiS, Ni3S4 and NiS2. Starting out with a series of bis(dithiocarbamate) complexes, [Ni(S2CNR2)2], Hollingsworth et al. and Roffey et al. were able to link decomposition pathways of their precursor to the synthesized phase through temperature and concentration studies. Very few other decomposition routes have been studied.25,26

Thiourea has become one of the more popular sulfur reagents because it is inexpensive, solid at room temperature, has a low vapor pressure compared to other sulfur reagents, and readily reacts at temperatures as low as 150 °C with transition metal cations. In addition, Hendricks et al. showed that changing the N-substitution on a library of thioureas can vary the conversion rate over more than five orders of magnitude, which impacted the nucleation rate of lead sulfide (PbS),10 zinc sulfide (ZnS),27 and cadmium sulfide (CdS) nanoparticles.28
In many of the aforementioned studies, metal carboxylates, whether added directly or formed in situ, are common metal precursors, as the carboxylate ligands solubilize the metal ion in a high-boiling organic solvent and act as surface stabilizing ligands for the product nanocrystals.29,30 Carboxylates can affect particle nucleation and growth in contradictory ways depending on the synthetic environment. Demortière et al.
showed that increasing the ratio of oleic acid (ligand) to iron-oleate complex (metal precursor) increased the size of the synthesized iron oxide nanoparticles when using di-n-octyl ether as the solvent.31 Baaziz et al. saw the same pattern when using these conditions, but noted that changing the solvent also has an effect on the size of the nanoparticles.32 In contrast to di-n-octyl ether, using octadecane as a solvent resulted in a decrease in particle size with an increase in oleic acid concentration. The unique solvent environment of each synthesis likely influences the decomposition of the metal precursor complex and the effect of oleic acid. Solvent and precursor choice are among the many factors that may change the way the metal precursor decomposes, influencing the timing of the nucleation process.
Since carboxylate influences nucleation and growth, it comes as no surprise that carboxylate ions are also known to affect phase formation, for example in directing polytype selection in CdSe.[33][34][35] This particular metal selenide has only two polytypes. In contrast, the phase diagrams of the mid transition metal sulfides are far more diverse, with several polytypes and differing stoichiometries. One may hypothesize that oleate, as a strong ligand for first-row transition metal ions, may slow the release of metal precursors for particle formation and yield metal-poor phases. Yet at the high temperatures of nanocrystal synthesis, nucleophilic carboxylate could reasonably be expected to react with some sulfur reagents, especially in the presence of Lewis acidic metal centers. It remains a mystery how oleate will influence phase in transition metal sulfide nanocrystal formation.
Here we carefully examine the role of carboxylate in the synthesis of iron, cobalt, nickel, and copper sulfides with thiourea as the sulfur reagent. High concentrations of carboxylates cause the formation of sulfur-poor phases, indicating that carboxylates parasitically react with thiourea. We will provide evidence that under low carboxylate concentration the active sulfur source is thiourea, whereas at high carboxylate concentration the active sulfur source changes to a mixture of carbon disulfide and thiocyanate.
Experimental
All nanoparticle synthesis reactions were performed in oven-dried three-neck round-bottom flasks using standard Schlenk techniques under an argon atmosphere. A thermocouple was used to monitor the internal temperature of the reaction.
Synthesis of copper(II) oleate precursor
The synthesis is adapted from Tappan et al.29 A mixture of sodium oleate (9.85 mmol) and anhydrous copper(II) chloride (4.93 mmol) was added into a 100 mL three-neck round-bottom flask. A solvent mixture of ethanol (10 mL), deionized water (8 mL), and hexanes (17 mL) was then added into the flask.
The solution was heated to 70 °C for 25 min, after which an additional portion of hexanes (10 mL) was added. The solution was reheated to 70 °C and kept at that temperature for 4 h. After the reaction was cooled to room temperature, the solution was washed three times with deionized water in a separatory funnel. After the separation process, the product was dried under vacuum to give a teal powder.
Synthesis of transition metal sulfide nanoparticles
A solution of metal precursor (0.50 mmol) in octadecene (ODE) (10 mL) was added to a 25 mL three-neck round-bottom flask (Fig. S16, ESI†). Thiourea (3.0 mmol) and ODE (5 mL) were added to an addition funnel connected to the round-bottom flask. The apparatus was placed under vacuum while the three-neck flask was heated at 100 °C for 30 min. After refilling with nitrogen, the three-neck flask was heated to 170 °C for 1 h. The flask was then heated to either 210 °C (for nickel and cobalt) or 220 °C (for copper and iron). The addition funnel was warmed with a heat gun to approximately 170 °C (∼5 min) to allow the thiourea to dissolve in the ODE, and then the contents were added swiftly to the round-bottom flask (Scheme 1). The solution was continuously stirred at 1100 rpm and kept in the reaction vessel for 60 min, with aliquots taken at 5, 30, and 60 min. Nanoparticle products were isolated by precipitation with ethanol (10-25 mL), centrifugation (8000-8700 rpm), and resuspension in chloroform (3-10 mL) three times. Higher volumes of washing solvents were used for high-oleate reactions.
Synthesis of transition metal sulfide nanoparticles (NMR scale)
In a nitrogen-filled glovebox, an NMR tube was loaded with metal carboxylate (0.0798 mmol) and thiourea (0.136 mmol) at a 1 : 1.7 ratio, plus additional sodium oleate to obtain final metal : carboxylate ratios of 1 : 2, 1 : 3, and 1 : 4. The tube was capped with a septum and removed from the glovebox. A nitrogen-filled balloon with a needle was attached to the NMR tube to allow for safe gas expansion during heating (Fig. S17, ESI†). The tube was held in an oil bath for either 10 or 60 min at 150, 170, 200, or 220 °C. After cooling, DMSO-d6 (0.6 mL) was added for NMR analysis. In DMSO-d6, the organics dissolved, but the particles remained at the bottom of the NMR tube. In control reactions, the same procedure was employed but with a combination of oleic acid (0.32 mmol) and thiourea (0.32 mmol).
Preparation of samples for gas FTIR analysis
A 25 mL three-neck round-bottom flask was loaded with (a) thiourea (13.1 mmol) and ODE (10 mL); (b) nickel stearate (0.5 mmol), sodium oleate (3.0 mmol), and ODE (10 mL); (c) thiourea (13 mmol), oleic acid (13 mmol), and ODE (10 mL); (d) nickel stearate (0.5 mmol), thiourea (3 mmol), and ODE (15 mL); or (e) nickel stearate (0.5 mmol), thiourea (3 mmol), sodium oleate (3.0 mmol), and ODE (15 mL). Two gas adapters were attached to the round-bottom flask to allow for the flow of nitrogen gas into and gases out of the flask. The flask was then placed under vacuum at 100 °C for 30 min before refilling with N2. The gas IR cell outlet was attached to a bubbler and was flushed with nitrogen to eliminate any atmospheric gases. The IR cell inlet was attached to the flask through a gas adapter. The whole system was flushed once more with nitrogen until the spectrum reached a steady state, and then the nitrogen flow rate was reduced to ∼2 bubbles per second. The flask was then heated, and the IR spectra of the outflowing gases were collected approximately every 10 °C from 100 °C to 220 °C.
Characterization
Nanoparticles were characterized with powder X-ray diffraction (pXRD) using a Rigaku SmartLab diffractometer with a Cu Kα X-ray (λ = 0.154 nm) radiation source set to 40 kV and 44 mA. Before analysis, the sample was dispersed in chloroform and drop-cast on a low-background pXRD plate. The patterns were matched to the corresponding phase using the ICSD database.
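As a reminder of the arithmetic behind phase identification, peak positions relate to lattice spacings through Bragg's law. With the Cu Kα wavelength quoted above, a hypothetical reflection at 2θ = 30° corresponds to

$$d = \frac{\lambda}{2\sin\theta} = \frac{0.154\ \mathrm{nm}}{2\sin 15^{\circ}} \approx 0.298\ \mathrm{nm},$$

and matching the full set of such spacings against the ICSD reference patterns assigns the phase.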
For decomposition product analysis, nuclear magnetic resonance (NMR) spectroscopy was used. ¹H NMR and ¹³C NMR spectra were collected on a Bruker Avance HD 400 MHz spectrometer.
Gas-phase transmission measurements of gas products from the studied chemical reactions were performed using a Bruker Vertex 70v Fourier transform infrared (FTIR) spectrometer along with a Pike short-path gas cell with KRS-5 windows. The IR source was a globar in the FTIR bench, and the detector was a liquid-nitrogen-cooled HgCdTe (MCT) detector. The gas cell was placed in the sample compartment of the FTIR bench, which was under a constant N2 purge. The remainder of the FTIR bench was also under a constant N2 purge. The Pike short-path gas cell is equipped with four ports: two external ports for N2 purging of the areas outside of the gas cell windows and two internal (inlet and outlet) ports for supplying gas to the gas cell (Fig. S18, ESI†). The gas cell was initially purged with N2 using the inlet and outlet ports, and a background transmission measurement was recorded under N2 purge.
Scheme 1 General synthesis of transition metal (Mn+) sulfide nanoparticles to give either sulfur-rich or sulfur-poor phases.
Solid-phase measurements of solid products from the studied chemical reactions were performed by attenuated total reflectance Fourier transform infrared spectroscopy (ATR-FTIR) using a ThermoFisher Scientific Nicolet iS5 FTIR equipped with an iD7 ATR accessory. One drop of the sample was drop-cast on the monolithic diamond crystal window. After complete drying, the solid sample was secured with a sample presser. All samples were collected at room temperature and normalized to a background scan taken before collecting experimental data.
Results and discussion
Long-chain carboxylic acids and carboxylate ions, especially oleic acid and oleate, are ubiquitous in nanocrystal synthesis. For many years now, they have been a go-to surface stabilizing ligand, and many studies have used oleate to control both shape and size of nanocrystals, especially metal chalcogenides. This is especially important due to the high solubility of metal carboxylates in oily, high-boiling solvents like octadecene. Traditionally, a metal reagent such as a metal halide or a metal oxide is first heated with an oleate or a stearate to form the metal carboxylate complex that is later used as a precursor for nanoparticle synthesis.29,36,37

The relationship between size and oleate concentration can be complicated. Increasing oleate concentration allows for lowering of surface energy and stabilization of smaller particles. However, in some cases, oleate is known to bind precursors, slowing reaction rates, causing fewer nuclei to form and resulting in fewer, larger particles. What has not been investigated well is how oleic acid might influence crystalline phase in systems where multiple phases of different stoichiometries are possible. Again, oleate may affect the thermodynamics of the growing nuclei and it might influence the precursor kinetics, both of which may impact phase composition.
The role of oleate is well studied in the canon of CdS and CdSe quantum dot synthesis.33 In addition to being an X-type ligand to surface cations, Cd(oleate)2 acts as a Z-type ligand, terminating surface anions. Oleate has been shown to influence polytypism in CdSe. As an X-type ligand, it especially stabilizes the zinc blende phases, because these present eight charged [111] surfaces, whereas hexagonal wurtzite phases only present two charged [001] facets. How might oleate influence phase when both polytypism and phases of multiple stoichiometries are possible? Will oleate only affect particle size, or crystalline phase as well? Will behaviors transcend across several metals, each with their own unique chemistries and d-electron counts?
To first examine whether oleate influences phase in the synthesis of transition metal sulfides, metal sulfides of Fe, Co, Ni, and Cu were synthesized according to a modified one-pot hot-injection method reported by Joo et al.38 In this synthesis, a solution of thiourea solubilized in ODE was injected into a solution of metal carboxylate via an addition funnel at 210-220 °C in ODE for 1 h. Sodium oleate was included to achieve overall oleate : metal ratios of 1 : 2, 3, 4, and 8. Our initial hypothesis was that sodium oleate would shift the equilibrium of oleate/stearate dissociation from the metal complex and cause the formation of metal-poor nanocrystalline phases; however, to our amazement, the very opposite trend was observed.
Characterization with pXRD showed that as the amount of additional sodium oleate was increased, the product became increasingly sulfur-poor (Fig. 1 and Table S1, ESI†). For copper, at low concentrations of oleate, covellite (CuS) formed, and as the concentration of oleate was increased, an increasing proportion of digenite (Cu1.8S) resulted. For nickel and cobalt, the resultant phase changed from MS2 to MS. For iron, low concentrations of oleate gave mostly the two polymorphs of FeS2 (pyrite and marcasite) and the spinel polymorphs greigite and smythite (FeS1.3). High concentrations yielded the low-sulfur-content pyrrhotite (FeS1.1).
All of the families of metal sulfides studied contain structures with approximately cubic close packed (CCP) or hexagonal close packed (HCP) stacking of S²⁻ or S₂²⁻ anions. There was no visible trend in CCP vs. HCP structures with changing oleate concentration. For example, the copper series saw a shift from hexagonal covellite (CuS) to cubic digenite (Cu1.8S), while the nickel series saw the opposite trend, from cubic vaesite (NiS2) to hexagonal NiS (Table S1, ESI†), with increasing oleate concentration.
The decrease in sulfur content with increased oleate concentration disproved the initial hypothesis that oleate would bind the metal centers, decreasing metal reactivity.Therefore, it was instead hypothesized that the oleate was parasitically interacting with the sulfur precursor, thiourea.
The thermal decomposition of thiourea has been examined in earlier work.[40][41] Thiourea undergoes two main thermal processes when heated to temperatures higher than 170 °C.39 The first process, which can happen between 171.2 and 187.5 °C, is the isomerization of thiourea into ammonium thiocyanate (Scheme 2). In a fully equilibrated melt, the thiocyanate concentration triples that of thiourea.42 Therefore, it was important in the experimental design that thiourea was only briefly and consistently heated before addition to the heated metal solution. Inconsistent preheating may be a source of irreproducibility in literature nanocrystal preparations.
The second process in the decomposition of thiourea occurs between 187.5 and 246.2 °C and results in a loss of 80% of the total weight of thiourea. The gaseous decomposition products of this step include carbon disulfide (CS2) and ammonia (NH3). Other studies suggest that at temperatures above 500 °C, cyanamide (H2NCN) and hydrogen cyanide (HCN) are also products.40 Cyanamide has also been detected as one of the decomposition products starting at 200 °C, but it likely trimerizes into melamine, so it is never seen in the gas phase at lower temperatures.43

Nuclear magnetic resonance (NMR) was used to analyze the decomposition products of the copper-based reactions in the hope of identifying a parasitic side reaction. Copper was chosen for deeper study because it often produces diamagnetic products, which are easier to study by NMR (Fig. 2). Reactions were performed on NMR scale (without the solvent ODE) at varying temperature (150, 170, 200, and 220 °C), metal : oleate ratio (1 : 2, 1 : 3, 1 : 4), and time (10 and 60 min). Thiourea was added at a ratio of 1.7 : 1 thiourea : metal for the metal studies and at a ratio of 1 : 1 thiourea : oleic acid for the NMR studies without metal present. Immediately after cooling, DMSO-d6 was added to dissolve the product mixture for ¹H and ¹³C NMR (Fig. S1-S5, ESI†).
At low temperatures and short times (150 °C), the reaction of copper oleate with thiourea showed only minor changes from the starting material. The broad signal at δ = 7.25-7.75 ppm can be ascribed to the protons of thiourea complexed to the copper oleate (2), since it is shifted downfield from a thiourea control (6.75-7.25 ppm, Fig. S7, ESI†). As temperatures and times increased, the thiourea signal decreased and was replaced by a new product with a singlet at 6.00 ppm. This was assigned to melamine (4), which is a known product of thiourea decomposition.39,43 In addition, two new sharp singlets at 6.66 ppm and 7.21 ppm were observed. These protons were identified as the terminal NH2 protons of a new product, oleamide (1). In control experiments without copper present, this reaction required temperatures of 220 °C for 60 min to go to completion (Fig. S3, ESI†), whereas with copper present, the reaction started at 150 °C within 10 min and could go nearly to completion at 200 °C in 10 min (Fig. S1, ESI†). Therefore, the Lewis acidic copper promotes the transformation of oleate to oleamide. Furthermore, the amount of amide that formed was linearly proportional to the amount of oleate added (Fig. 2(D)). To check that the formation of oleamide is not exclusive to copper, similar temperature and time studies were performed for nickel (Fig. S6, ESI†). The formation of oleamide was apparent for the sample heated to 150 °C for 60 minutes, although most of the NMR signal was disrupted by magnetic nickel sulfide nanoparticles.
The high-temperature reaction of thiourea and carboxylates to give amides is known.44 It is proposed that thiourea undergoes a transformation into ammonium thiocyanate around 170 °C, which reacts with the carboxylate group of sodium oleate to form oleamide (Scheme 3).44 Thiocyanate signals are also present in the NMR spectra and occur at around 7.0 ppm in ¹H NMR (NH4⁺) and 130.7 ppm in ¹³C NMR (SCN⁻)45 (Fig. S1, S2 and S11, ESI†). In ¹H NMR, the peak shifts position between 6.9 and 7 ppm depending on the sample, which we suggest arises from solvation effects and differing degrees of coordination to carboxylate. Despite the identification of the melamine and amide byproducts, we are left with the question: where is all the sulfur going, if not into the synthesis of the nanoparticles?
In their study of amide formation, Mittal et al. suggested that as thiourea reacts with a carboxylate to produce an amide, carbonyl sulfide (OCS) gas, hydrogen sulfide (H2S), and ammonium polysulfides are possible sulfur-based byproducts. It was then hypothesized that in the nanocrystal syntheses sulfur may be escaping in the form of a gaseous sulfide (OCS or H2S) or as a polysulfide, thereby lowering the amount of sulfur available for the nanoparticle synthesis.
Scheme 3 Reaction of thiocyanic acid with the carboxylate ion to produce an amide and carbonyl sulfide gas.
Fig. 3 Gas FTIR of thermal decomposition products of (A) thiourea (13.1 mmol) in ODE at 206 °C, (B) nickel stearate (0.5 mmol) and sodium oleate
The gaseous products of the reactions of thiourea in ODE between 100 and 220 °C were monitored using in situ gas-cell IR spectroscopy (Fig. 3). In all spectra, the water signal between 1300 and 2000 cm⁻¹ appears due to contamination either from air or from one of the precursors.
Control studies showed that the main gases produced from the thermal decomposition of thiourea (Fig. 3(A)) are carbon disulfide (CS2) and ammonia (NH3), consistent with previous reports.39 No H2S was identified. Nickel stearate heated in the presence of sodium oleate yielded CO2 from decarboxylation (Fig. 3(B)). Thiourea heated in the presence of oleic acid produces substantial amounts of OCS, as Mittal et al. had predicted.44 CS2 and CO2 arise from direct decomposition of thiourea and carboxylate. Ammonia is also present from the aforementioned thermal decomposition of thiourea (Fig. 3(C)). OCS is known to disproportionate to CS2 and CO2 based on thermodynamic calculations (Gibbs energy and equilibrium constants) at temperatures above 800 °C; thus it is not a likely path here, but it may occur in a nanocrystal synthesis if the transformation can be catalyzed by metal ions.46

In nanocrystal syntheses with nickel present, it was found that increasing the oleate concentration did not cause more sulfur-based gases to evolve; instead, the opposite was true. When thiourea and nickel oleate were heated together (thiourea : oleate 3 : 1), NH3, CO2, CS2, and OCS gases were produced (Fig. 3(D)). However, when additional sodium oleate was added (thiourea : oleate 3 : 4), the evolution of CS2 gas was eliminated and the amount of OCS gas decreased (Fig. 3(E)). Therefore, increasing oleate concentration does not yield sulfur-poor phases because of a side reaction that produces sulfur-based gaseous species that escape. Moving forward, the alternative explanation should also answer why CS2 does not evolve when high oleate concentrations are used. We propose, therefore, that under high oleate concentrations, CS2 is the active sulfur source in the formation of metal sulfides, rather than thiourea.
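For reference, the disproportionation referred to above is the equilibrium below; whether it contributes depends on its standard Gibbs energy at the reaction temperature through the usual relation:

$$2\,\mathrm{OCS} \;\rightleftharpoons\; \mathrm{CO_2} + \mathrm{CS_2}, \qquad K_{\mathrm{eq}} = \exp\!\left(-\frac{\Delta G^{\circ}}{RT}\right).$$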
Previously, it was mentioned that thiourea isomerizes to [NH4⁺][SCN⁻]. At room temperature, the NMR studies showed predominantly thiourea, but heating and the addition of sodium oleate push the equilibrium towards ammonium thiocyanate. The shift to thiocyanate is rationalized by the production of oleamide, and because oleate (pKa ≈ 5) is a stronger base than thiocyanate (pKa ≈ 1.1) and will preferentially coordinate the ammonium under the anhydrous conditions. To test this hypothesis, copper sulfide nanoparticles were synthesized at low and high oleate concentrations employing identical amounts of washing solvents before centrifugation. The supernatant solutions were studied by ATR-FTIR (Fig. 4). In both cases, stretches from the amide byproduct (3353 cm⁻¹ and 3180 cm⁻¹ for N-H and 1658 cm⁻¹ for C=O) could be identified along with the thiocyanate ion (2060 cm⁻¹). Unreacted carboxylate (C=O, 1556 cm⁻¹) was present when high oleate concentrations were employed. Most importantly, an increase in sodium oleate concentration in the reaction resulted in a much stronger thiocyanate stretch in the washings. The increase in thiocyanate indicates that the transformation of thiourea to ammonium thiocyanate was promoted by carboxylates.
Since oleate forces the transformation of thiourea to thiocyanate, we tested whether thiocyanate is an active sulfur precursor for metal sulfide formation. Thiocyanate is known as a sulfur source in nanocrystal reactions. It has been previously reported that when copper thiocyanate (CuSCN) is heated between 180 °C and 280 °C in oleylamine, sulfur-poor djurleite (CuS0.52) is formed.47 In our laboratory, under conditions analogous to the above experiments, ammonium thiocyanate was reacted with copper oleate (ammonium thiocyanate : oleate : metal 3 : 1 : 0.5) in ODE. The result was sulfur-poor metastable digenite (CuS0.55) nanoparticles (Fig. S13, ESI†), similar to when high-oleate conditions are used with thiourea (Fig. 1 and Table S1, ESI†). While it is most likely that thiocyanate is the sulfur source, it is also possible that CS2 is an active sulfur source, since CS2 can be released from the thermal decomposition of thiourea (Fig. S14, ESI†). Further evidence for a metal thiocyanate intermediate at high oleate concentrations comes from the aforementioned reaction of copper(II) oleate with thiourea in the presence of 8 : 1 oleate : metal. Copper thiocyanate was identified as an impurity (Fig. 1) in the sulfur-poor CuS0.55 product. Regardless of the path, thiocyanate appears to be a more reluctant sulfur source than thiourea, resulting in sulfur-poor metal sulfide phases throughout the different metals studied.
Conclusions
In summary (Scheme 4), our understanding of the system is as follows. On its own, thiourea can thermally decompose into CS2 and NH3 gases, as seen in gas-phase IR (I). Under low oleate conditions, thiourea coordinates to metal centers starting at temperatures as low as 150 °C (II). Thiourea decomposition is promoted by the metal center, and thiourea becomes the preferred and uninhibited sulfur source for metal sulfide formation, yielding sulfur-rich phases (III) of Fe, Co, Ni, and Cu.
Under high oleate conditions, thiourea isomerizes to ammonium thiocyanate (IV), driven by the coordination of ammonium to oleate. Ammonium thiocyanate and carboxylates produce amides and OCS gas via a reaction that is promoted by the metal centers (V). Ammonium thiocyanate becomes the sulfur source in the formation of metal sulfides (VI, VII). Ammonium thiocyanate is a reluctant sulfur source compared to thiourea, and sulfur-poor metal sulfides of Fe, Co, Ni, and Cu result.
These results ran contrary to our initial hypothesis that increased oleate concentration would slow metal reactivity, and they highlight the importance of deep dives into the molecular transformations that occur in colloidal synthesis.
While these studies have resulted in an explanation for stoichiometric phase control, one detail that has not yet been explained is why these particular reaction conditions had a tendency to produce some rare metastable polymorphs (similar or identical stoichiometry, but different crystal packing) (Table S2, ESI†). Smythite (FeS1.3) and marcasite (FeS2) formed in addition to their more stable counterparts greigite (FeS1.3) and pyrite (FeS2). Jaipurite (CoS) selectively formed over the more stable cobalt pentlandite (CoS0.89). Nickel sulfide (NiS) formed selectively over the more common and stable millerite (NiS). Cubic digenite (CuS0.55) formed over the more stable hexagonal polymorphs chalcocite (CuS0.50) and djurleite (CuS0.52). Polymorphic phase control is a complex field that we are currently studying.
Scheme 2 Isomerization of thiourea to thiocyanic acid at temperatures over 170 °C.
Fig. 2 NMR studies of the copper series: (A) ¹H NMR taken on a 400 MHz instrument of thiourea and copper oleate heated to 220 °C for one hour (DMSO-d6 was injected post-cooling); (B) ¹H NMR taken on a 400 MHz instrument of thiourea and copper oleate heated to 150 °C for ten minutes (DMSO-d6 was injected post-cooling); (C) structures of oleamide (1), metal-coupled thiourea (2), oleic acid (3), and melamine (4); (D) relationship between the amount of sodium oleate and the amount of oleamide produced (see ESI for the calculation method).
Fig. 4 ATR-FTIR of the nanoparticle solution after centrifugation isolated at low oleate (black) and high oleate (blue).
Design, synthesis, in vitro and in vivo evaluation of tacrine–cinnamic acid hybrids as multi-target acetyl- and butyrylcholinesterase inhibitors against Alzheimer's disease
Introduction
Alzheimer's disease (AD) is the most prevalent form of late-life mental failure in humans and affects about 6% of the population aged over 65.1 It is estimated that more than 18 million people presently suffer from AD, and the number is predicted to increase sharply to 70 million by 2050.2 The cardinal features of AD include progressive memory impairment, disordered cognitive function, altered behavior such as depression, hallucination, delusion, and agitation, and a progressive decline in language function.3 So far, it is well accepted that AD is a multifactorial syndrome deriving from a complex array of neurochemical factors. During the process of AD, cholinergic neurons and synapses of the basal forebrain are selectively lost, causing cognitive impairment.4 These findings inspired several theories about AD pathogenesis, including cholinergic dysfunction,5 the amyloid cascade,6 hyperphosphorylation of tau protein,7 the cell cycle hypothesis,8 and the brain-derived neurotrophic factor hypothesis.9 Additionally, oxidative stress,10 free radical formation,11 metal dyshomeostasis,12 and mitochondrial dysfunction13 are also reported to be tightly correlated with the development of AD by supplying an inflammatory microenvironmental condition. These theories increase understanding of the basic mechanisms of AD and also depict a more complex AD scenario.
The cholinergic hypothesis of the pathogenesis of AD asserts that dysfunction of the cholinergic system, mainly a decline in acetylcholine (ACh) levels, results in the cognitive and memory deficits. Therefore, recovering cholinergic function is believed to be beneficial for the treatment of AD.14 Generally, ACh can be hydrolyzed by two types of cholinesterases (ChEs), namely acetylcholinesterase (AChE) and butyrylcholinesterase (BuChE). Although the elucidation of the pathophysiology of AD provides multiple potential drug targets for designing effective drugs, acetylcholinesterase inhibitors (AChEIs) still serve as the main therapeutic agents applied clinically for AD.
The enzymatic site of human AChE is a narrow gorge with a length of approximately 20 Å, which contains two binding sites: the catalytic active site (CAS) at the bottom and the peripheral anionic site (PAS) near the entrance of the gorge.15,16 The CAS is in charge of the hydrolysis of ACh and consists of key residues including Ser203, Glu334, and His447, which are referred to as the catalytic triad.17 The PAS is composed of several aromatic residues such as Trp86 and Trp286.18 It has been proved that the PAS is closely related to both the hydrolysis of ACh and the neurotoxic cascade of AD through AChE-induced β-amyloid (Aβ) aggregation.19 Under normal conditions, AChE is more active and can hydrolyze about 80% of the ACh in human brains.20 However, both the level and the activity of AChE in AD patients are found to be remarkably reduced, leading to the compensatory upregulation of BuChE, which further modulates ACh levels.21 Therefore, inhibitors of both AChE and BuChE, such as tacrine and rivastigmine, are expected to exert potent therapeutic effects on AD. Unfortunately, instead of curing or preventing the neurodegeneration, these drugs can only enable palliative treatment.22 Considering the multifactorial nature of AD, traditional agents designed by the one-molecule one-target approach are insufficient to provide enough benefit. Thus, designing compounds that can simultaneously regulate multiple significant targets in the development of AD has emerged as a new strategy. These compounds, which are referred to as multi-target-directed ligands (MTDLs),23 are considered to offer additional properties beyond cholinesterase inhibition. Substantial studies have been performed to achieve different types of MTDLs, many of which have been proved to show promising pharmacological effects on AD.24-39 These results encourage medicinal chemists to continue this work. In recent years, designing MTDLs based on tacrine has attracted the attention of medicinal chemists throughout the world, and numerous publications describe the efforts in this field. Compared to other AChEIs, tacrine is a good scaffold for the design of MTDLs due to its simple structure and high ligand efficiency (LE), which means tacrine can potently inhibit AChE with a small number of non-hydrogen atoms. Moreover, tacrine has good endurance against substantial structural modification while retaining its target-based activity, which further provides a sound basis for the design of MTDLs. However, no small-molecule agent has been newly approved for the treatment of AD in recent years, and most MTDLs remain at the stage of preclinical study. Therefore, it is still urgently necessary to design new MTDLs and to fully understand their structural requirements through detailed structure-activity relationship (SAR) studies.
Our group has been dedicated to the discovery of new MTDLs for nearly a decade. Previously, Fang L. et al.40 and Chen Y. et al.41 disclosed a series of tacrine-ferulic acid hybrids as multifunctional, potent ChE inhibitors, most of which effectively inhibited ChEs in vitro in the nanomolar range. These compounds were also proved to exert multiple functions, including antioxidant activities, vasorelaxation effects, and NO-donating behavior. In vivo studies using the scopolamine-induced cognition impairment mouse model confirmed that these compounds can ameliorate cognitive impairment and reduce hepatotoxicity compared to the reference compound tacrine. These studies provided us with promising lead compounds for further research. However, structural modification, especially of the ferulic acid moiety, is still limited and needs to be further elucidated. To deepen the understanding of the structural requirements for tacrine-ferulic acid hybrids, here we report the structural modification of the ferulic acid moiety, which is replaced by cinnamic acid with different substitutions. The target compounds were synthesized and evaluated for their in vitro and in vivo activities related to the treatment of AD, including in vitro assays for cholinesterase catalytic activity, Aβ1-42 self-aggregation, cytoprotective effects against hydrogen peroxide, and antiproliferative activity in PC-12 cells. Additionally, we also report the in vivo behavioral and hepatotoxicity evaluations for the optimal compound selected from the in vitro assays. Based on these results, we hope to supply more useful structure-activity relationship (SAR) information that can guide the further discovery of new MTDLs against AD.
Compound design and chemistry
Compound CY-1, which was previously reported by our group, was used as the lead compound for structural modification.41 The ferulic acid moiety of CY-1 was replaced by cinnamic acid with various substitutions at different positions of the phenyl ring (Fig. 1).
Cholinesterase inhibitory activity and SAR analysis
The inhibitory effects of the synthesized compounds against AChE from Electrophorus electricus (eeAChE) and BuChE from equine serum (eqBuChE) were determined following Ellman's method.43 The data are expressed as IC50 values (Table 1). Most of the compounds proved to be potent inhibitors of ChEs, with IC50 values lower than 100 nM. The AChE IC50 value of 8, an analog without any substitution on the cinnamic acid moiety, was higher than those of most of the substituted ones. Methyl substitution (9-11) led to an increase in AChE inhibitory activity. The position of the methyl group was also considered, showing that the activity followed the order para- > meta- > ortho-. Interestingly, 11 showed much improved activity on AChE (IC50 = 34.3 ± 1.8 nM), while its inhibitory effect on BuChE was remarkably reduced (IC50 = 86.9 ± 6.6 nM). When the methyl was replaced by a 4-methyl carbonate substitution (32), the compound exhibited higher selectivity for AChE (AChE IC50 = 71.2 ± 2.4 nM, BuChE IC50 = 342.0 ± 61.5 nM). The results suggested that bulky functional groups are well tolerated by AChE but are restricted by BuChE. Therefore, bulky groups at the para-position can be a good choice to enhance target selectivity for AChE. Next, we designed a series of methoxy analogs to evaluate the impact of this group on activity. For the mono-substituted compounds (12-19), the activity on AChE was para- > meta- > ortho-, the same as for the methyl analogs. Substitution of a methoxy group at the meta-position (13) seemed to have no impact on target selectivity (AChE IC50 = 47.4 ± 2.0 nM, BuChE IC50 = 57.4 ± 5.6 nM). It was noticeable that when the methoxy group was at the ortho-position (12), it showed a 7.99-fold selectivity for BuChE (AChE IC50 = 123.8 ± 7.6 nM, BuChE IC50 = 15.5 ± 1.5 nM). Similar results were also observed for 9 with 2-methyl substitution (AChE IC50 = 80.6 ± 9.8 nM, BuChE IC50 = 37.3 ± 3.3 nM). We inferred that the difference in binding-site shape between AChE and BuChE led to this phenomenon. The binding site of AChE is narrow and long, while that of BuChE is broad and short. As a result, substitution at the ortho-position can lead to steric hindrance in the narrow binding site of AChE but is tolerated by BuChE. Oppositely, para-substitution results in the elongation of the molecular shape of the inhibitors and is more suitable for the binding site of AChE. For the multi-substituted compounds (15-19), we could also observe a trend that ortho-substitution (15 and 16) was preferred by BuChE, while para-substitution was better for AChE (19). Interestingly, for the tri-OCH3 analogs, 17 was very potent on both AChE and BuChE (IC50 = 17.3 ± 0.6 and 23.3 ± 2.3 nM, respectively), while 18 showed considerably high selectivity for AChE.
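As an aside on how IC50 values such as those in Table 1 are typically extracted, the fractional activity from an Ellman-type dose-response series is fitted to a four-parameter logistic (Hill) curve. A minimal Python sketch follows; the concentrations and activities below are made-up illustrative data, not this paper's measurements:

```python
# Minimal sketch: IC50 estimation from dose-response data by fitting a
# four-parameter logistic (Hill) curve. The data are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def logistic4(conc, bottom, top, ic50, hill):
    """Fractional enzyme activity as a function of inhibitor concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([1, 3, 10, 30, 100, 300, 1000])                   # nM, hypothetical
activity = np.array([0.97, 0.90, 0.75, 0.52, 0.28, 0.12, 0.05])   # fraction of control

popt, _ = curve_fit(logistic4, conc, activity, p0=[0.0, 1.0, 30.0, 1.0])
print(f"IC50 = {popt[2]:.1f} nM, Hill slope = {popt[3]:.2f}")
```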
Next, we evaluated the impact of halogen atoms on ChE activity. When substituted by Cl (20-22), the activity on AChE was para- > meta- > ortho-, while the opposite order was observed for BuChE. These results were in accordance with those from methyl and methoxy substitution. When substituted by different halogen atoms, the activity on AChE was -Cl (22) ≈ -Br (24) > -F (23). Meanwhile, the three compounds with para-substitution were more selective for AChE than BuChE, in a similar manner as mentioned above. Then -NO2 (27-29), -CF3 (25 and 26), and -OCF3 (31), three electron-withdrawing groups, were introduced as the R group. It was noteworthy that analogs with -NO2 substitution were the most active compounds on AChE among all the derivatives, with IC50 values in the single-digit nanomolar range. Meanwhile, they all exhibited remarkable selectivity toward AChE over BuChE (SI = 0.04-0.15). The impact of -CF3 and -OCF3 was lower than that of -NO2; however, they also followed the same target-selectivity rule as the other groups mentioned above.
We subsequently introduced amino groups (37-39) as hydrogen-bond donating groups. We found that such groups resulted in remarkably reduced activity on AChE (IC50 = 54.7 ± 8.8 to 173.3 ± 41.2 nM), except for 37, which was active toward both AChE and BuChE (IC50 = 28.7 ± 2.7 and 18.7 ± 2.2 nM, respectively). Considering the hydrophobic nature of the binding site, especially for AChE, introduction of polar substituents may lead to improper intermolecular recognition, thus reducing the activity. Inspired by this, we replaced the hydroxyl with benzyloxy. The ortho-substitution (33) exhibited high selectivity for BuChE, while meta- (34) and para-substitution (35 and 36) were preferable for AChE. These results further point to the steric hindrance of the groups as a determinant of target selectivity.
To further validate the inhibitory activities of the synthesized compounds on human ChEs, the representative compounds 35 and 36 were selected for determination.
Kinetic study of AChE inhibition
To further analyze the binding manner of the synthesized compounds with huAChE, the potent inhibitor 36 was selected as the representative compound; the kinetic results are summarized in Table 2.
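For reference, a mixed-type inhibitor (the binding manner indicated for 36 by the kinetic and docking analyses) follows the standard steady-state rate law, in which α and α′ describe binding to the free enzyme and to the enzyme-substrate complex, respectively:

$$v = \frac{V_{\max}[S]}{\alpha K_m + \alpha'[S]}, \qquad \alpha = 1 + \frac{[I]}{K_i}, \qquad \alpha' = 1 + \frac{[I]}{K_i'}.$$

In Lineweaver-Burk coordinates this produces lines that intersect away from the axes as [I] varies, the usual graphical signature of mixed-type inhibition.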
Molecular modeling studies
To investigate the binding pattern of the synthesized compounds with huAChE, molecular docking studies were performed using Discovery Studio (DS). Compounds 35 and 36 were selected as representative compounds. As shown in Fig. 3A and B, both compounds bound to AChE in a dual-site manner by occupying both the CAS and the PAS. The 1,2,3,4-tetrahydroacridine moiety of the two compounds inserted into the CAS. The tetrahydroacridine moiety formed multiple π-π stacking contacts with the aromatic side chains of Trp86 and Tyr124. These hydrophobic contacts provided the driving force for the binding of the compounds to the CAS of huAChE. The substituted phenyl ring of the cinnamic acid moiety was located at the PAS of the huAChE binding groove.44 It formed π-π stacking contacts with the side chains of Trp286 and Tyr341. It was noticeable that the methoxy group at the meta-position of the phenyl ring (36) formed an additional π-alkyl interaction with the side chain of Trp286. The binding difference may explain the slightly better huAChE inhibitory activity of 35 over 36.
In summary, the binding modes of the selected compounds supported the mixed-type binding manner revealed by the kinetic study.
Inhibition of self-induced Aβ1-42 aggregation
All compounds were evaluated for their inhibitory capacity on self-induced Aβ1-42 aggregation using a thioflavin T-based fluorometric assay. Curcumin, a natural product that is known to inhibit Aβ1-42 self-aggregation, was used as the reference compound. Most of the analogs showed only poor or moderate inhibition of Aβ1-42 self-aggregation (ranging from 4.2 ± 0.1 to 37.8 ± 3.9%, Table 3). Two compounds, 35 and 36, exhibited inhibitory rates over 40% (40.7 ± 1.9 and 42.2 ± 2.6%, respectively). Interestingly, 36 was potent against both AChE and self-induced Aβ1-42 aggregation, indicating its potential as a multi-target compound. It seemed that a methyl group was preferred for inhibiting Aβ1-42 self-aggregation, especially when substituted at the para-position. Larger groups such as t-Bu led to a greater than 2-fold decrease in activity. Mono-substitution with a methoxy group led to the complete loss of the inhibitory effect; however, di- or tri-substitution with methoxy groups (16, 18, 19) supplied moderate activity. Electron-withdrawing groups, such as -NO2 and -CF3, remarkably reduced the activity. Substituent groups such as halogen, hydroxyl or amino groups had no impact on the inhibition of Aβ1-42 self-aggregation, no matter what position they occupied. It was noticeable that the benzyl-substituted compounds (33-36) exhibited the best activity among all the derivatives, suggesting that this benzyl group is not only important for AChE inhibition but also a preferred moiety for designing new MTDLs.
Fig. 3 Compounds are shown in stick mode colored in yellow; key residues are labeled in thin stick mode colored in white; intermolecular interactions are shown as dotted lines with different colors according to the type of interaction: light green, hydrophobic contact; pink, π-π stacking; purple, π-alkyl contact.
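Returning to the aggregation assay: the inhibition percentages in Table 3 follow the usual thioflavin T convention, referencing the fluorescence measured with inhibitor to that of Aβ1-42 incubated alone; the exact blank treatment is an assumption, as it is not stated here:

$$\%\,\text{inhibition} = \left(1 - \frac{F_{\mathrm{A\beta + inhibitor}}}{F_{\mathrm{A\beta}}}\right) \times 100,$$

where both fluorescence intensities are taken to be blank-corrected.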
Cell toxicity and cytoprotective effects of the compounds in PC-12 neuroblastoma cells
We next focused on the cell toxicities of the synthesized compounds. They were evaluated for antiproliferative effects against the neuroblastoma PC-12 cell line. Most of the synthesized compounds exhibited IC50 values above 30 μM (Table 3). 35 and 36 showed the best safety in PC-12 cells (IC50 = 92.2 ± 8.8 and 84.6 ± 7.3 μM, respectively). Considering their good inhibitory activity on ChEs and self-induced Aβ1-42 aggregation, especially that of 36, they were selected for further in vivo evaluations. It was noteworthy that the nitro-substituted compounds 27, 28, and 30 showed much stronger antiproliferative activity against PC-12 cells than most of the derivatives (IC50 = 34.5 ± 2.5, 23.7 ± 1.2, and 13.5 ± 1.7 μM, respectively), indicating their potential cytotoxicity. Although these nitro-substituted compounds showed the best inhibitory effects on AChE, they exerted only poor activity on self-induced Aβ1-42 aggregation. Taken together, these compounds were not investigated for in vivo activity. Besides, although strong electron-withdrawing groups such as -CF3 were preferred for AChE inhibition, they appeared prone to cause strong cytotoxicity. A similar behavior was observed for the 3-OCF3 substituted analog 31 (IC50 = 10.0 ± 1.0 μM). Highly toxic compounds also included 18, 22, and 33 (IC50 = 18.7 ± 1.4, 20.1 ± 3.0, and 14.5 ± 1.9 μM, respectively).
Then, we evaluated the cytoprotective effects of 35 and 36 against H2O2-induced cell damage. Treatment with 500 μM H2O2 for 24 h caused an over 60% death rate of PC-12 cells compared with the control group (Fig. 4). When the cells were pretreated with 35 or 36 for 24 h, the mortality caused by H2O2 was significantly attenuated. This protective effect was dose-dependent for both compounds. 36 showed a better cytoprotective effect than 35; it increased the cell viability to 63.1 ± 2.1% and 72.5 ± 2.1% at concentrations of 20 and 40 μM, respectively. These results indicated that 36 has potential for antagonizing oxidative stress.
Behavioral studies
Improvement of cognitive ability is the most significant requirement of anti-AD agents. Based on the multiple evaluations mentioned above, compound 36, with the best multipotent activity profile, was selected for an in vivo behavioral study using the Morris water maze test. The animal model was built on the basis of scopolamine-induced cognition-impaired adult ICR mice and was applied to assess the cognition-improving effects of 36. Compound 35 was also evaluated, with the aim of understanding the importance of the methoxy group. Tacrine (20 μmol kg⁻¹ body weight) was used as the positive control. 35, 36, and tacrine were orally administered to the ICR mice 30 min before intraperitoneal (ip) administration of scopolamine (1 mg kg⁻¹) or saline solution for 10 consecutive days to adapt them to the apparatus. The test included 5 days of learning and memory training and a probe trial on the sixth day. The mean escape latency values of all the groups on the sixth day are shown in Table 4 and Fig. 5A. Compared to the control group, scopolamine led to a remarkable delay in the latency to target (8.9 ± 4.0 s vs. 45.2 ± 11.6 s), indicating that the cognitive impairment mouse model was successfully built. Treatment with tacrine ameliorated the impairment, and the latency to target was reduced to 36.3 ± 11.6 s (*p < 0.05). 35 exhibited activity comparable to tacrine (36.4 ± 14.3 s, *p < 0.05). Compared to tacrine and 35, 36 significantly reduced the latency to target (13.2 ± 7.6 s, ***p < 0.001), indicating that 36 considerably ameliorated the cognitive impairment of the treated mice and was much better than tacrine. The results also suggested the critical role of the methoxy group of the ferulic acid moiety of compound 36; removal of this group led to a marked decrease in in vivo activity, as seen with 35. We infer there may be two reasons for these results: (1) the methoxy group may enhance the ability of 36 to penetrate the blood-brain barrier (BBB) and target the central nervous system (CNS); (2) the methoxy group may prevent metabolism at the meta-position of the phenyl ring, thus enhancing the concentration of the compound reaching the CNS.
The distance to target (Table 4 and Fig. 6B) and the trajectories of the mice in each group were also analyzed. Compared to the control group, administration of scopolamine markedly extended the distance to target (1637.3 ± 517.1 cm vs. 292.8 ± 206.4 cm, ****p < 0.0001). Tacrine and 35 reduced the distance to target (1125.3 ± 367.1 cm and 1274.9 ± 452.6 cm, respectively). When treated with 36, the distance to target was significantly shortened (469.5 ± 278.8 cm, ****p < 0.0001). These results were supported by the trajectory analysis. As shown in Fig. 6B, the trajectory of the mice in the scopolamine model group was very long and disordered, while the tacrine and 35 groups (Fig. 6C and E) showed shortened distances, though still much longer than the control group (Fig. 6A). Mice treated with 36 almost recovered normal cognition (Fig. 6D), with an orientation and distance similar to those of the normal mice. Taken together, these results supported that 36 remarkably ameliorated the cognition impairment caused by scopolamine.
Hepatotoxicity studies
Given that the serious hepatotoxicity of tacrine has been the primary limitation for its clinical use, to ensure the safety of 35 and 36 for further development we next investigated possible drug-induced hepatotoxicity by comparing their toxicity profiles to that of tacrine. The results were also in accordance with those from the behavioral studies. Therefore, it is important to introduce proper groups at this position in order to avoid undesired metabolism.
To further analyze the hepatotoxicity of 35 and 36, morphological studies by immunohistochemical staining were performed. Treatment with tacrine (Fig. 8B), 35 (Fig. 8C), or 36 (Fig. 8D) did not result in remarkable morphological changes in the liver compared to the control group (Fig. 8A). Taken together, 36 exhibited the highest safety among all the test compounds, supporting its further development.
Conclusions
CY-1 is a tacrine-ferulic acid hybrid reported previously by our group. Guided by this compound, in the present study a series of tacrine-cinnamic acid hybrids was designed and synthesized to identify the optimal substitution on the phenyl ring of the cinnamic acid moiety. Although there are several publications on tacrine-ferulic acid hybrids, to the best of our knowledge this is the first medicinal chemistry study on the ferulic acid moiety. In vitro assays proved that most of the compounds effectively inhibited ChEs in the nanomolar range. Additionally, useful information was summarized from the SAR study that can guide the further optimization of this series of compounds. 36 was one of the most potent analogs, being about 4-fold more active than the parent compound CY-1 against AChE. Kinetic studies and molecular docking indicated that 36 inhibits AChE in a mixed-type manner by simultaneously binding to the CAS and PAS of AChE. This compound effectively inhibited self-induced Aβ1-42 aggregation and exhibited cytoprotective effects against H2O2-induced cell damage. Meanwhile, it proved non-toxic to PC-12 cells at the concentrations at which it exerted its biological functions, indicating good safety. The in vitro assays confirmed the multifunctional potency of 36 as a potential anti-AD agent. It was therefore subjected to in vivo evaluation, including the Morris water maze test and hepatotoxicity studies. 36 remarkably reduced the scopolamine-induced cognitive impairment in the animal model and showed very low hepatotoxicity at the therapeutic concentration. Altogether, 36 can be considered a promising lead compound for the further identification of new anti-AD agents.
Experimental sections
Chemistry

General experimental. Melting points were determined on a Mel-TEMP II melting point apparatus and are uncorrected. 1H NMR spectra were recorded on a Bruker Avance 300 MHz spectrometer at 300 K, using TMS as an internal standard. MS spectra were recorded on a Shimadzu GC-MS 2010 (EI), a Mariner mass spectrometer (ESI), or an LC/MSD TOF HR-MS spectrometer. All compounds were routinely checked by TLC and 1H NMR. TLC and preparative thin-layer chromatography were performed on silica gel GF/UV 254 supported on glass plates, and column chromatography was performed on silica gel (200-300 mesh), visualized under UV light at 254 and 365 nm. The purity of the final compounds was greater than 95%, as measured by HPLC on an Agilent Technologies 1260 Infinity C18 4.60 mm × 150 mm column using methanol/water or acetonitrile/water mixtures at a flow rate of 0.5 mL min⁻¹ with peak detection at 254 nm under UV. All solvents were reagent grade and, when necessary, were purified and dried by standard methods. Concentration of solutions after reactions and extractions involved the use of a rotary evaporator operating at a reduced pressure of ca. 20 Torr. Organic solutions were dried over anhydrous sodium sulfate. Analytical results were within ±0.40% of the theoretical values.
In vitro inhibitory evaluations on AChE and BuChE
The inhibitory effects of the test compounds were investigated following the method of Ellman et al., using a Shimadzu 160 spectrophotometer. AChE (EC 3.1.1.7, Type VI-S, from electric eel, C3389; from human, C1682) and BuChE (EC 3.1.1.8, from equine serum, C0663; from human, B4186), 5,5′-dithiobis(2-nitrobenzoic acid) (DTNB, D218200), acetylthiocholine iodide (ATC, A5751), and butyrylthiocholine iodide (BTC, B3253) were purchased from Sigma-Aldrich (St. Louis, MO, USA). AChE/BuChE stock solutions were diluted before use to give 2.5 units per mL (for eeAChE, eqBuChE, and huAChE) or 0.5 units per mL (for huBuChE). ATC/BTC iodide solution (0.075 M) was prepared in deionized water. DTNB solution (0.01 M) was prepared in water containing 0.15% (w/v) sodium bicarbonate. For buffer preparation, potassium dihydrogen phosphate (1.36 g, 10 mmol) was dissolved in 100 mL of water and the pH of the solution was adjusted to 8.0 ± 0.1 with KOH. Stock solutions of the test compounds were prepared in ethanol to give a final concentration of 10⁻⁴ M when diluted to the final volume of 3.32 mL. For each compound, a dilution series of at least five different concentrations (normally 10⁻⁵ to 10⁻⁹ M) was prepared.
For each measurement, a cuvette was charged with 3.0 mL of phosphate buffer, 100 μL of AChE or BuChE, 100 μL of DTNB, and 100 μL of the test compound solution. The reaction was initiated by the addition of 20 μL of ATC or BTC, and the solution was mixed immediately. Two minutes (eeAChE and eqBuChE) or fifteen minutes (huAChE and huBuChE) after substrate addition, the absorption was determined at 25 °C (eeAChE and eqBuChE) or 37 °C (huAChE and huBuChE) at 412 nm. For the reference value, 100 μL of water replaced the test compound solution; for the blank value, an additional 100 μL of water replaced the enzyme solution. Each concentration was measured in triplicate. The inhibition curve was fitted by plotting percentage enzyme activity (100% for the reference) versus the logarithm of the test compound concentration. IC50 values were calculated with GraphPad Prism 5, and the data are shown as mean ± SEM.
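The curve fitting described above is not specific to Prism; any four-parameter logistic fit of percentage activity against the logarithm of concentration yields the same IC50. Below is a minimal sketch in Python, assuming a Hill-type logistic model; the dilution series matches the range stated above, but the activity values are illustrative placeholders, not data from this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(log_c, bottom, top, log_ic50, hill):
    """Four-parameter logistic: % enzyme activity vs. log10(concentration)."""
    return bottom + (top - bottom) / (1.0 + 10.0 ** ((log_c - log_ic50) * hill))

# Illustrative dilution series (M) and measured % activity (100% = reference)
conc = np.array([1e-5, 1e-6, 1e-7, 1e-8, 1e-9])
activity = np.array([8.0, 25.0, 55.0, 85.0, 97.0])

popt, _ = curve_fit(logistic4, np.log10(conc), activity,
                    p0=[0.0, 100.0, -7.0, 1.0])
ic50 = 10.0 ** popt[2]
print(f"IC50 ≈ {ic50:.2e} M")
```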
Kinetic studies of AChE inhibition
Kinetic studies were performed in the same manner as the determination of ChE inhibition, except that the substrate (ATC/BTC) was used at concentrations of 25, 50, 90, 150, 226, and 452 μM. The concentrations of 36 were set to 0, 20, 60, 100, and 200 nM. The enzymatic reaction was extended to 4 min (eeAChE and eqBuChE) or 20 min (huAChE and huBuChE) before the determination of the absorption. Vmax and Km values of the Michaelis-Menten kinetics were calculated by nonlinear regression from substrate-velocity curves using GraphPad Prism 5. Linear regression was used for fitting the Lineweaver-Burk plots.
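For the kinetic analysis, the Vmax and Km estimates from nonlinear regression can be cross-checked against the Lineweaver-Burk line. A minimal sketch follows; the substrate concentrations are the ones listed above, while the velocities are illustrative placeholders, not measured data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

def michaelis_menten(s, vmax, km):
    """Michaelis-Menten rate law: v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

s = np.array([25.0, 50.0, 90.0, 150.0, 226.0, 452.0])   # substrate, uM
v = np.array([0.021, 0.035, 0.048, 0.058, 0.065, 0.072])  # illustrative rates

(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=[0.08, 100.0])

# Lineweaver-Burk: 1/v = (Km/Vmax) * (1/[S]) + 1/Vmax
slope, intercept, *_ = linregress(1.0 / s, 1.0 / v)
print(f"nonlinear fit:       Vmax = {vmax:.3f}, Km = {km:.1f} uM")
print(f"double-reciprocal:   Vmax = {1/intercept:.3f}, Km = {slope/intercept:.1f} uM")
```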
Molecular modeling studies
The docking study was performed with the CDOCKER module implemented in Discovery Studio (version 3.0, BIOVIA, USA). CDOCKER is a grid-based molecular docking method that employs CHARMm.46 Random ligand conformations are generated from the initial ligand structure through high-temperature molecular dynamics, followed by random rotations. The random conformations are refined by grid-based simulated annealing and a final grid-based or full force-field minimization. The solutions are then clustered according to position and conformation and ranked by energy.
The co-crystal structure of huAChE bound with donepezil was selected for molecular docking. The structure was downloaded from the Protein Data Bank (PDB ID: 4EY7) and prepared with the "Prepare Protein" module in DS for further docking: missing side chains were added, water molecules were removed, and the protein was protonated at pH 7.4. The test compounds to be docked into huAChE were first sketched in DS and then prepared using the "Prepare Ligands" module with protonation at pH 7.4. The resulting molecules were minimized with the "Minimize Ligands" module, using the "Smart Minimizer" algorithm with max steps set to 2000 and RMS gradient set to 0.01; other parameters were set as default.47 A sphere (10 Å radius) around donepezil was defined as the binding site, including both the CAS and PAS of huAChE. For the simulated annealing, the heating and cooling steps were set to 2000 and 5000, respectively, while the heating and cooling temperatures were set to 700 and 310, respectively. Other parameters were kept as default. After docking, the ten top-ranked conformations were retained for analysis. Binding patterns of the docked molecules were depicted with DS Visualizer.48

Inhibition of self-induced Aβ1-42 aggregation

Inhibition of self-induced Aβ1-42 aggregation was measured using a thioflavin T (ThT; T3516, Sigma-Aldrich, St. Louis, MO, USA) binding assay as previously described.49 Aliquots of 2.0 μL of Aβ1-42 (AS-64129-05, Anaspec Inc.), lyophilized from 2 mg mL⁻¹ HFIP (1,1,1,3,3,3-hexafluoro-2-propanol; 52517, Sigma-Aldrich, St. Louis, MO, USA) and dissolved in DMSO, were incubated for 24 h at room temperature in 0.215 M sodium phosphate buffer (pH 8.0) at a final concentration of 500 μM. Test compounds were dissolved in DMSO and then diluted with buffer to a final concentration of 20 μM. After incubation, the samples were diluted to a final volume of 150 μL with 50 mM glycine-NaOH buffer (pH 8.5) containing 5 μM thioflavin T. The fluorescence signal was determined (excitation wavelength 450 nm, emission wavelength 485 nm) on a SpectraMax Paradigm Multimode Reader (Molecular Devices, USA).
The inhibition rate of Aβ1-42 aggregation was calculated according to the following equation: inhibition (%) = (1 − IFi/IFc) × 100%, where IFi and IFc are the fluorescence intensities obtained in the presence and absence of inhibitor, respectively, after subtracting the background fluorescence of the 5 μM thioflavin T solution. Each sample was measured in triplicate, and the inhibition rate of the test compound is shown as mean ± SD.
Cyto-protection and cell toxicity against PC-12 neuroblastoma cells
Cytotoxicity was determined using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. The PC-12 cell line was purchased from the Cell Culture Center at the Institute of Basic Medical Sciences, Chinese Academy of Medical Sciences. MTT was purchased from Sigma (M2128, St. Louis, MO), dissolved in phosphate-buffered saline (PBS) to a stock concentration of 5 mg mL⁻¹, and stored at −20 °C. PC-12 cells were plated in 96-well plates at 1 × 10⁴ cells per well and incubated overnight. After the cells were treated with a concentration gradient of test compounds or DMSO for 24 h at 37 °C, or treated with 500 μM H2O2 for another 12 h, 20.0 μL of MTT solution was added to each well and incubated for 4 h. The solution was then removed and 150.0 μL of DMSO was added to each well to dissolve the MTT-formazan crystals. DMSO was used as the negative control. The absorbance (OD) values were read at 570 nm on an ELx800 Absorbance Microplate Reader (BioTek, Vermont, USA). The inhibition rate at each concentration of the test compound was calculated as inhibition (%) = [1 − (OD_test − OD_blank)/(OD_control − OD_blank)] × 100%, where OD_test, OD_blank, and OD_control stand for the OD values of the test compound, the background, and the DMSO control, respectively. IC50 values were calculated with GraphPad Prism 5, and the data are shown as mean ± SEM.
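Because the inhibition-rate formula above is simple arithmetic, it can be expressed directly; the sketch below encodes the background-corrected calculation with made-up OD readings for illustration.

```python
def inhibition_rate(od_test: float, od_control: float, od_blank: float) -> float:
    """Percent inhibition from MTT absorbance readings (570 nm),
    background-corrected against the cell-free blank well."""
    return (1.0 - (od_test - od_blank) / (od_control - od_blank)) * 100.0

# Illustrative readings: treated well, DMSO control well, blank well
print(f"{inhibition_rate(0.45, 0.90, 0.05):.1f}% inhibition")  # -> 52.9%
```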
Behavioral studies
Behavioral studies were performed using adult male ICR mice (8-10 weeks old, weighing 20-25 g) purchased from the Yangzhou University Medical Center (Yangzhou, China). Scopolamine hydrobromide was supplied by Aladdin Reagents (H1507073, Shanghai, China). Tacrine was synthesized in our laboratory with >95% purity as determined by HPLC.
The mice were separated into five groups as follows: (i) vehicle as blank control, (ii) scopolamine as model group, (iii) tacrine plus scopolamine as positive control, (iv) compound 36 plus scopolamine as test group, and (v) compound 35 plus scopolamine as test group. Tacrine, 36, and 35 (20 μmol kg⁻¹ body weight) were orally administered to mice in groups (iii), (iv), and (v), respectively, 30 min before the ip administration of scopolamine (1 mg kg⁻¹) or saline, for 10 consecutive days.
Cognitive function was evaluated with the Morris water maze analysis-management system (Panlab SMART 3.0, USA), according to the method previously described.37 The maze was placed in a lit room with visual cues at 25 °C. An escape platform (10 cm diameter) was located in the center of one quadrant of the circular pool (120 cm diameter, 60 cm height) filled with water to a depth of 40 cm. The behavioral study of each mouse included 5 days of learning and memory training and a probe trial on day 6. The starting positions of the animals faced the pool wall and were pseudorandomized for each trial. For the cognitive evaluation, each mouse was individually tested on both the visible-platform (days 1-2) and hidden-platform (days 3-5) versions of the water maze. All mice received nonspatial pretraining during the first two training days, which prepared them for the subsequent spatial learning test. During these two days, mice were trained to find the platform, which was labeled with a small flag (5 cm tall). The hidden-platform version was used to determine the retention of memory for finding the platform; during the hidden-platform training trials, the escape platform was placed 1 cm below the surface of the water. On each day, the animal was subjected to two trials, each lasting 90 s. The time for the mouse to find the platform (a successful escape) was recorded. If a mouse failed to reach the platform within 90 s, the trial was terminated and the animal was gently guided to the platform by hand. Whether a mouse succeeded or failed to reach the platform within 90 s, it was kept on the platform for 30 s. On the last day (day 6), the platform was removed from its location and the animals were given a probe trial in which they had 90 s to search for it. The time taken to reach the missing platform's location and the number of times the animals crossed the platform location were recorded.
Data for the escape latency, the trajectory traveled, and the number of platform-location crossings were recorded by Panlab SMART 3.0 and processed with GraphPad Prism 5.
Hepatotoxicity studies
Hepatotoxicity was evaluated according to the method previously described32 using adult male ICR mice (8-10 weeks old, weighing 20-25 g) obtained from the Yangzhou University Medical Center (Yangzhou, China). Tacrine and the test compounds were dissolved in a sodium carboxymethyl cellulose (CMC-Na) solution (0.5 g CMC-Na in 100 mL distilled water). Tacrine was administered intragastrically (ig) at a dose of 3 mg per 100 g body weight, corresponding to 151.5 μmol kg⁻¹ body weight; equimolar doses of the test compounds were administered ig. At 8, 22, and 36 h after administration, heparinized blood was collected from the retrobulbar plexus and subjected to hepatotoxicity evaluation. The activities of aspartate aminotransferase (AST) and alanine aminotransferase (ALT), two indicators of liver damage, were determined using the corresponding assay kits (EF551 and EF550 for ALT, EH027 and EF548 for AST; Wako, Japan). The data were processed with a biochemical analyzer (HITACHI 7020, Japan).
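The tacrine dose conversion quoted above can be verified with a short calculation, assuming the molecular weight of the tacrine free base (C13H14N2, ≈ 198.3 g/mol), which is not stated in the text; a slightly different value (e.g., for a salt form) would shift the figure marginally.

```latex
\frac{3~\text{mg}}{100~\text{g body wt}} = 30~\text{mg kg}^{-1},
\qquad
\frac{30~\text{mg kg}^{-1}}{198.3~\text{g mol}^{-1}}
\approx 0.151~\text{mmol kg}^{-1}
\approx 151~\mu\text{mol kg}^{-1}.
```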
One hour after the collection of retrobulbar blood, the mice were sacrificed and their livers were harvested for morphological studies by immunohistochemistry. Two 3 mm slices of each liver, extending from the hilus to the margin of the left lateral lobe, were cut with an Ultra-Thin Semiautomatic Microtome (Leica RM2245, Germany), immediately placed in 10% buffered formaldehyde, fixed for two days, and embedded together in one paraffin block using a Paraffin Embedding Station (Leica EG1150H, Germany). Subsequently, 5 μm sections were prepared from these paraffin blocks, deparaffinized, and stained with hematoxylin and eosin or by means of the periodic acid-Schiff procedure for glycogen.
Random walks on complex networks under node-dependent stochastic resetting
In the present work, we study random walks on complex networks subject to stochastic resetting when the resetting probability is node-dependent. Using a renewal approach, we derive exact expressions for the stationary occupation probabilities of the walker on each node and for the mean first passage time between two arbitrary nodes. Finally, we demonstrate our theoretical results on three networks with two different resetting protocols, validated by numerical simulations as well. We find that, under a suitable setting, a node-dependent resetting probability can be advantageous for optimizing the efficiency of a global search on such networks.
I. INTRODUCTION
First passage underlies a wide variety of stochastic phenomena across diverse fields [1][2][3][4]. Indeed, chemical and biochemical reactions [5], foraging strategies of animals [6], and the spread of diseases on social networks or of viruses through the world wide web [7] are often controlled by first encounter events.
In the last decade, there has been increasing interest in first passage under resetting (see [8] for a recent review). Resetting refers to a sudden interruption of a stochastic process followed by its starting anew. Interestingly, for a one-dimensional Brownian motion subject to stochastic resetting [9], the occupation probability at stationarity is strongly altered: the mean time to reach a given target for the first time can become finite and can be minimized with respect to the resetting rate. Some other interesting features of resetting Brownian motions or random walks have also been unveiled. The mean perimeter and the mean area of the convex hull of a two-dimensional resetting Brownian motion were computed exactly, showing that the two quantities grow much more slowly with time than in the case without resetting [10]. For random walks on a d-dimensional hypercubic lattice under resetting [11], the average number of distinct sites visited by the walker grows extremely slowly with the number of time steps, and the so-called recurrence-transience transition at d = 2 for standard random walks (without resetting) disappears in the presence of resetting. In a finite one-dimensional domain, the distribution of the number of distinct sites visited by a random walker before hitting a target site, with and without resetting, was deduced, and the distribution can be expressed simply in terms of splitting probabilities only [12]. Moreover, different types of resetting protocols and Brownian motions have been considered, such as temporally or spatially dependent resetting rates [13][14][15][16], motion in the presence of an external potential [17][18][19], run-and-tumble particles [20][21][22], active particles [23,24], and so on [25]. These studies have triggered enormous recent activity in the field, including statistical physics [26][27][28][29][30][31][32][33][34], stochastic thermodynamics [35][36][37], chemical and biological processes [38,39], optimal control theory [40], and single-particle experiments [41,42].
Random walks on complex networks not only underlie many important stochastic dynamical processes on networked systems [43][44][45][46], such as the transmission of viruses or rumors [7,47,48], population extinction [49,50], neuronal firing [51], and consensus formation [52], but also find a broad range of applications, such as community detection [53][54][55], human mobility [56][57][58], and ranking and searching on the web [43,[59][60][61][62]. However, the impact of resetting on random walks in networked systems has received only a small amount of attention [63][64][65][66][67][68]. Random walks on networks under resetting have many applications in computer science and physics. For instance, label propagation in machine-learning algorithms [69], or the famous PageRank [70], can be interpreted as a random walker with a uniform resetting probability to all the nodes of the network. Human and animal mobility consists of a mixture of short-range moves with intermittent long-range moves, where an agent relocates to a new place and then resumes local moves [6,71,72]. Recently, Riascos et al. studied the impact of stochastic resetting with a constant probability on random walks on arbitrary networks [73]. They established the relationships between the random walk dynamics and the spectral representation of the transition matrix in the absence of resetting. Furthermore, they discussed the condition under which resetting becomes advantageous for reducing the mean first passage time (MFPT) [74]. Subsequently, the result was generalized to the case where multiple resetting nodes exist [75,76].
In the present work, we aim to generalize the previous study to the case where the resetting probability at each node is not a constant but node-dependent. This natural generalization not only brings new challenges from a theoretical point of view, but may also find practical perspectives in technical applications. In search processes on networks, if a searcher has partial information about its present position, such as the node's degree, can one design a node-dependent resetting strategy to enhance the search efficiency? This may be important for heterogeneous networks, which are encountered in most empirical systems [46,77]. In the standard teleportation scheme of PageRank, one teleports to nodes uniformly at random, i.e., the probability of landing on each node is the same. An alternative choice is a "personalized PageRank," in which the landing probability is localized around one node or a small number of nodes [78]. Such a choice has been shown to be beneficial in reducing the effect of teleportation [79], and it also finds applications in community detection [80]. Taking advantage of the renewal structure of Markovian processes, we derive the occupation probability of the walker at each node at stationarity and the MFPT between two arbitrary nodes. We find that the two quantities can be calculated from the matrix defined in Eq. (12). We then apply our theoretical results to three concrete networks, and consider two different settings of node-dependent resetting probability, i.e., depending on the shortest path length to the resetting node or on the node's degree. We observe that both settings can further optimize the efficiency of a global search compared with the case where the resetting probability is constant.
II. MODEL
First of all, we define the standard discrete-time random walk on an undirected and unweighted network of size N [43]. Assuming a particle is located at node i at time t, at the next time step t + 1 it hops to one of the neighboring nodes of i with equal probability. Thus, the transition matrix W among nodes can be written as W = D⁻¹A, where A is the adjacency matrix of the underlying network, and D = diag{d_1, · · · , d_N} is a diagonal matrix with d_i = Σ_{j=1}^{N} A_ij the degree of node i.
We now incorporate stochastic resetting with a node-dependent resetting probability into the standard random-walk model. We first choose a node, labelled r, as the only resetting node. Then, at each time step, the particle either performs a standard random-walk step with probability 1 − γ_i or is reset to the resetting node r with probability γ_i, where i is the currently occupied node. The resetting probability γ_i depends on some attribute of node i. In the following, we consider γ_i to be a function of the degree of node i or of the shortest path length between node i and the resetting node r, although our derivation is general and can also be applied to other types of functions.
III. STATIONARY OCCUPATION PROBABILITY
Let us denote by P_ij(t) the probability that node j is visited at time t, given that the particle started from node i at t = 0. It satisfies a first renewal equation [14,16,31],

P_ij(t) = P^nores_ij(t) + Σ_{t'=1}^{t} Σ_k γ_k P^nores_ik(t'−1) P_rj(t−t'),   (1)

where P^nores_ij(t) denotes the probability over all possible trajectories in which the particle starts from node i at t = 0 and ends at node j at time t without undergoing any reset event during the time interval [0, t]. The first term in Eq. (1) accounts for the particle never being reset up to time t, while the second term accounts for the particle being reset for the first time at time t', after which the process starts anew from the resetting node for the remaining time t − t'. P^nores_ij(t) can be calculated as

P^nores_ij(t) = ⟨i| W̃^t |j⟩,   (2)

with

W̃ = (I − Y)W.   (3)

Here |i⟩ denotes the canonical basis vector whose components are all 0 except for the ith one, which equals 1; I and W are, respectively, the identity matrix and the transition matrix without resetting; and Y = diag{γ_1, · · · , γ_N} is a diagonal matrix. It can be proved that W̃ admits the spectral decomposition (see Appendix A for details) W̃ = Σ_{ℓ=1}^{N} λ_ℓ |ψ_ℓ⟩⟨ψ_ℓ|, where λ_ℓ is the ℓth eigenvalue of W̃, and the corresponding left and right eigenvectors are ⟨ψ_ℓ| and |ψ_ℓ⟩, satisfying ⟨ψ_ℓ|ψ_m⟩ = δ_ℓm and Σ_{ℓ=1}^{N} |ψ_ℓ⟩⟨ψ_ℓ| = I. Thus, Eq. (2) can be rewritten as

P^nores_ij(t) = Σ_{ℓ=1}^{N} λ_ℓ^t ⟨i|ψ_ℓ⟩⟨ψ_ℓ|j⟩.   (4)

Performing the Laplace transform of Eq. (1) expresses P̃_ij(s) in terms of P̃^nores_ij(s), which follows from Eq. (4) (Eq. (5)). Letting i = r in Eq. (5) yields P̃_rj(s) in closed form (Eqs. (6) and (7)), and substituting Eq. (7) back into Eq. (5) gives the general solution for P̃_ij(s) (Eq. (8)). Inverting Eq. (8) is difficult; however, we can instead calculate the stationary occupation probability by evaluating the limit P^∞_j = lim_{t→∞} P_ij(t) from the s → 0 behavior of Eq. (8) (Eq. (9)). Substituting Eq. (8) into Eq. (9), and after some tedious calculations (see Appendix B for details), we obtain the stationary occupation probability (Eq. (10)), which can be rewritten in matrix form as

P^∞_j = Z_rj / Σ_k Z_rk,   (11)

where we have defined the matrix

Z = (I − W̃)⁻¹.   (12)

The entry Z_rj denotes the average time spent on node j before the particle is reset, having started from the resetting node r.
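Because Eqs. (11) and (12) reduce the stationary occupation probability to a single matrix inversion, they are straightforward to evaluate numerically. The following is a minimal sketch under the definitions above (W = D⁻¹A, W̃ = (I − Y)W, Z = (I − W̃)⁻¹); the small example network and resetting probabilities are illustrative, not taken from the paper.

```python
import numpy as np

def stationary_occupation(A, gamma, r):
    """Stationary occupation probabilities of a random walk with
    node-dependent resetting probabilities gamma and resetting node r.
    Uses W = D^-1 A, W~ = (I - Y) W, Z = (I - W~)^-1, Eq. (11)."""
    N = A.shape[0]
    W = A / A.sum(axis=1, keepdims=True)        # transition matrix D^-1 A
    W_tilde = (1.0 - gamma)[:, None] * W        # survive reset, then hop
    Z = np.linalg.inv(np.eye(N) - W_tilde)      # Z_rj: mean time at j before reset
    return Z[r] / Z[r].sum()                    # renewal-reward normalization

# Illustrative 4-node ring with the resetting node at 0
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
gamma = np.array([0.1, 0.2, 0.3, 0.2])
print(stationary_occupation(A, gamma, r=0))     # components sum to 1
```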
IV. MEAN FIRST-PASSAGE TIME
Let us suppose that a trap is located at node j: once the particle arrives at the trap, it is absorbed immediately. Let us denote by F_ij(t) the probability that the particle visits node j for the first time at time t, given that it started from node i at t = 0. The first passage probability F_ij(t) and the occupation probability P_ij(t) satisfy the following renewal equation [43]:

P_ij(t) = δ_t0 δ_ij + Σ_{t'=0}^{t} F_ij(t') P_jj(t − t').   (13)

In the Laplace domain, Eq. (13) becomes

F̃_ij(s) = [P̃_ij(s) − δ_ij] / P̃_jj(s).   (14)

Furthermore, let us define Q_ij(t) as the survival probability of the particle up to time t, given that it started from node i at t = 0; obviously, Q_ij(t) = 1 − Σ_{t'=0}^{t} F_ij(t'). The MFPT from node i to node j is calculated as T_ij = lim_{s→0} Q̃_ij(s), which (see Appendix C for details) can be written in closed matrix form in terms of the matrix Z (Eqs. (16) and (17)). It is also useful to quantify the ability of a process to explore the whole network. For this purpose, we define the global MFPT (GMFPT) to a target node j [81,82] as the MFPT averaged over all starting nodes i except node j,

T(j) = (1/(N−1)) Σ_{i≠j} T_ij.   (18)

Furthermore, one can average the GMFPT over all nodes and obtain a property of the whole network, introduced as the graph MFPT (GrMFPT) [83],

⟨T⟩ = (1/N) Σ_{j=1}^{N} T(j).   (19)
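Since the closed-form MFPT expressions (Eqs. (16) and (17)) are not reproduced in full above, note that T_ij, and hence the GMFPT and GrMFPT, can always be checked directly against the process definition of Sec. II. The sketch below is a plain Monte Carlo estimator; the run count mirrors the 2 × 10³ averages used later for the simulations, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def mfpt_mc(A, gamma, r, i, j, n_runs=2000):
    """Monte Carlo estimate of T_ij: at each step the walker resets to r
    with probability gamma[node], otherwise hops to a uniform neighbor;
    the walk stops on the first arrival at the target j."""
    neighbors = [np.flatnonzero(A[k]) for k in range(A.shape[0])]
    total = 0
    for _ in range(n_runs):
        node, t = i, 0
        while node != j:
            if rng.random() < gamma[node]:
                node = r                             # resetting move
            else:
                node = rng.choice(neighbors[node])   # random-walk move
            t += 1
        total += t
    return total / n_runs

def grmfpt_mc(A, gamma, r):
    """GrMFPT, Eq. (19): average of T_ij over all ordered pairs i != j."""
    N = A.shape[0]
    pairs = [(i, j) for i in range(N) for j in range(N) if i != j]
    return np.mean([mfpt_mc(A, gamma, r, i, j) for i, j in pairs])
```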
V. NODE-INDEPENDENT RESETTING PROBABILITY
For node-independent resetting probabilities, γ_i ≡ γ for each i, Eq. (3) reduces to W̃ = (1 − γ)W. Therefore, the eigenvalues λ_ℓ of W̃ and the eigenvalues ξ_ℓ of W obey the simple relation λ_ℓ = (1 − γ)ξ_ℓ; meanwhile, W̃ and W share the same eigenvectors. Since W is a stochastic matrix, with each row summing to one, its maximal eigenvalue equals one. Without loss of generality, we let ξ_1 = 1; the absolute values of all other eigenvalues are less than one. The right eigenvector corresponding to ξ_1 = 1 is given simply by |ψ_1⟩ = (1, · · · , 1)^⊤. In this case, Eq. (10) can be rewritten in a simplified form (Eq. (20)); in the second line of Eq. (20) we have utilized the facts |ψ_1⟩ = Σ_{k=1}^{N} |k⟩ and ⟨ψ_ℓ|ψ_1⟩ = δ_ℓ1. The first term in Eq. (20) is the stationary occupation probability in the absence of resetting [43], and the second term is a nonequilibrium contribution due to the resetting process.
In the case of a constant resetting probability, Eq. (16) can be rewritten in a simplified form as well (Eq. (21)). Eqs. (20) and (21) recover the results of Ref. [73].
VI. NODE-DEPENDENT RESETTING PROBABILITY
As shown in Secs. III and IV, we have derived exact results for the stationary occupation probability and the MFPT under a general node-dependent resetting probability. We now turn to specific forms of the resetting probability. To this end, we assume that the resetting probability is a function of an attribute of the nodes, such as the node's degree or the shortest path length to the resetting node. On the one hand, this assumption is simple enough that we can conveniently validate our theory; on the other hand, it may be reasonable from a practical point of view. For example, in a search process on a network the searcher may collect some local information about its present position, such as the node's degree, and adjust its resetting probability in terms of this local information. Since the resetting probability γ_i on each node is bounded between 0 and 1, we take γ_i as a power function of an attribute f_i of node i, subject to an upper limit γ_max:

γ_i = min{μ f_i^α, γ_max},   (22)

where α is a parameter that controls the dependence of the resetting probability on the node attribute, and μ is used to adjust the average value of the resetting probabilities. γ_max is a cutoff value of the resetting probability and is set to γ_max = 1 unless otherwise specified. In particular, α = 0 corresponds to the case of resetting with constant probability [73].
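To sweep the average resetting probability γ̄, as done in the figures below, μ must be tuned so that the network-averaged γ_i hits a prescribed target. The following is a minimal sketch of that calibration; the bisection scheme and tolerance are implementation choices, not from the paper, and the attribute values f_i are assumed positive with the target average attainable.

```python
import numpy as np

def resetting_probs(f, alpha, mu, gamma_max=1.0):
    """Eq. (22): gamma_i = min(mu * f_i**alpha, gamma_max)."""
    return np.minimum(mu * np.asarray(f, dtype=float) ** alpha, gamma_max)

def tune_mu(f, alpha, gamma_bar, gamma_max=1.0, tol=1e-10):
    """Bisection on mu so that mean(gamma_i) matches the target gamma_bar.
    Assumes 0 < gamma_bar <= gamma_max and all f_i > 0."""
    lo, hi = 0.0, 1.0
    while resetting_probs(f, alpha, hi, gamma_max).mean() < gamma_bar:
        hi *= 2.0                                  # grow until target is bracketed
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if resetting_probs(f, alpha, mid, gamma_max).mean() < gamma_bar:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```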
We first consider f_i = d(i, r), where d(i, r) denotes the shortest path length between node i and the resetting node r. In Fig. 1, we show the results on a ring network of size N = 50 (see the inset of Fig. 1(a)), from which we choose one of the nodes as the only resetting node. In Fig. 1(a), we plot the GrMFPT as a function of the average resetting probability, γ̄ = N⁻¹ Σ_{i=1}^{N} γ_i, for three different values of α. We compare the analytical results (solid lines in Fig. 1(a)) against those obtained from direct numerical simulations (symbols in Fig. 1(a)). In all simulations, we used 2 × 10³ averages to estimate the MFPT between any two nodes. The results are in excellent agreement between theory and simulations. The GrMFPT, T, shows a nonmonotonic dependence on γ̄: there exists an optimal value γ̄ = γ̄_opt at which T admits a minimum, T = T_min. Compared to the case without resetting (see the horizontal dashed line in Fig. 1(a)), there is a wide range γ̄ ∈ (0, γ̄_c) over which T can be decreased, in the sense that resetting is able to improve the efficiency of the search process. Obviously, the larger the value of γ̄_c, the wider the range over which the GrMFPT improves on the no-resetting case; γ̄_c is thus a measure of the optimization scope afforded by resetting.
To investigate the impact of the node-dependent protocol on γ̄_c, we calculate γ̄_c as a function of α, as shown in Fig. 1(b). In the inset of Fig. 1(b), we also show γ̄_opt and T_min as functions of α. We find that all three quantities vary nonmonotonically with α. Noticeably, γ̄_c and γ̄_opt attain their maxima at α = −1.6, although T_min attains its minimum at α = 0 (corresponding to resetting with a constant probability). This implies that an appropriate negative correlation between the resetting probability at a node and its distance to the resetting node can expand the scope of optimization of the GrMFPT on ring networks. This result is counterintuitive, as one might naturally expect it to be more beneficial for resetting to occur more frequently in regions far from the resetting node.
In Fig. 2, we show the results on a finite Cayley tree with coordination number z = 3 composed of n = 5 shells (see the inset of Fig. 2(a)). The nodes in the outermost shell have degree 1, whereas the other nodes have degree z. The root node is set as the only resetting node. In Fig. 2(a), we again observe that the GrMFPT exhibits a minimum at an optimal value γ̄_opt. Compared with the case without resetting (see the horizontal dashed line), the search can be accelerated (the GrMFPT reduced) in the range 0 < γ̄ < γ̄_c. γ̄_c shows a monotonic increase with α, as shown in Fig. 2(b). This indicates that when the resetting probabilities at the outer nodes are larger than those at the inner nodes, the scope of optimization of the GrMFPT becomes wider. Furthermore, as α increases, γ̄_opt shifts to a larger value and T_min decreases gradually, as shown in the inset of Fig. 2(b).
Finally, we consider the case when the resetting probability depends on the node's degree, i.e., f_i = d_i, where d_i is the degree of node i. In Fig. 3, we present the results on a Barabási-Albert (BA) network [84] of size N = 50 and average degree k = 2 (see the inset of Fig. 3(a)). We choose one node as the only resetting node (red triangle). In Fig. 3(a), we again see that the GrMFPT changes nonmonotonically with γ̄. For 0 < γ̄ < γ̄_c, the GrMFPT is less than that in the absence of resetting (see the dashed line in Fig. 3(a)). When the resetting probabilities at nodes with larger degrees exceed those at nodes with smaller degrees, the optimization region shrinks (see, e.g., α = 0.5 in Fig. 3(a)); conversely, the optimization region expands (see, e.g., α = −0.5 in Fig. 3(a)). In Fig. 3(b), we plot γ̄_c as a function of α; γ̄_c increases monotonically with α. In addition, as α increases, γ̄_opt decreases monotonically and T_min increases slowly, as shown in the inset of Fig. 3(b).
VII. CONCLUSIONS
To conclude, we have explored the impact of stochastic resetting on the diffusion and first-passage properties of discrete-time random walks on networks when the resetting probability is node-dependent. We have derived exact expressions for the stationary occupation probability of the walker on each node and for the MFPT between two arbitrary nodes. The two quantities (see Eq. (11) and Eq. (17)) can be calculated from the matrix Z defined in Eq. (12). Our derivation is general and can be applied to any protocol of node-dependent resetting probability. For concreteness, we have considered two different resetting protocols on three types of networks: in the first, the resetting probability is a function of the distance between a node and the resetting node; in the other, it depends on the node's degree. To quantify the efficiency of global searching, we have paid attention to the so-called GrMFPT, i.e., the MFPT averaged over all pairs of distinct nodes. The results show that the GrMFPT changes nonmonotonically with the mean resetting probability γ̄. There exists a wide range γ̄ ∈ (0, γ̄_c) over which the GrMFPT is lower than in the absence of resetting. Compared with the case of constant resetting probability, the scope for optimizing the GrMFPT can be further expanded under certain parameter settings, demonstrating the advantage of the node-dependent resetting probability.
There are still many open questions concerning the resetting paradigm. In this work we focused on a simple random walk model, but one could generalize to other types of random walks, such as biased random walks [44,85], maximum-entropy random walks [86], and so on. Moreover, it would be interesting to consider the effect of resetting costs on search processes. In this context, how to find an optimal trade-off between minimizing the GrMFPT and the resetting costs is a challenging issue, although some important progress has been made recently in continuous systems [40].

Appendix A: Spectral decomposition of W̃

The matrix W̃ = (I − Y)W = (I − Y)D⁻¹A can be symmetrized as W̃ = UÃU⁻¹, where U = [(I − Y)D⁻¹]^(1/2) and Ã = UAU is a real-valued symmetric matrix that can be expressed in terms of the spectral decomposition Ã = Σ_{ℓ=1}^{N} λ_ℓ |φ_ℓ⟩⟨φ_ℓ| (A1), where λ_ℓ is the ℓth eigenvalue of Ã, and the corresponding left and right eigenvectors are ⟨φ_ℓ| and |φ_ℓ⟩, satisfying ⟨φ_ℓ|φ_m⟩ = δ_ℓm and Σ_{ℓ=1}^{N} |φ_ℓ⟩⟨φ_ℓ| = I. In terms of Eq. (A1), we obtain the spectral decomposition of W̃, whose eigenvalues are the same as those of Ã and whose eigenvectors are given by |ψ_ℓ⟩ = U|φ_ℓ⟩ and ⟨ψ_ℓ| = ⟨φ_ℓ|U⁻¹. All eigenvalues of W̃ are less than one in modulus for max{γ_1, · · · , γ_N} > 0.

Appendix B: Derivation of the stationary occupation probability

We turn to evaluate the quantity Σ_k γ_k P̃^nores_rk(0). As mentioned before, all the eigenvalues of W̃ are less than one in the presence of resetting, and thus I − W̃ is nonsingular. Using Eq. (6), the limit in Eq. (B1) takes the form 0/0; applying L'Hôpital's rule to calculate the limit then leads to the stationary occupation probability, Eq. (10).
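The symmetrization argument of Appendix A can be sanity-checked numerically; the snippet below verifies, on a small illustrative network, that W̃ shares its (real) spectrum with Ã = UAU under the U given above. The explicit form of U is an assumption reconstructed from the relations quoted in this appendix, so treat this as a consistency check rather than the paper's own derivation.

```python
import numpy as np

# Small illustrative network (path graph on 4 nodes) and resetting probabilities
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
gamma = np.array([0.1, 0.2, 0.15, 0.05])

D_inv = np.diag(1.0 / A.sum(axis=1))
W_tilde = np.diag(1.0 - gamma) @ D_inv @ A      # W~ = (I - Y) D^-1 A
U = np.sqrt(np.diag(1.0 - gamma) @ D_inv)       # diagonal matrix, entrywise sqrt
A_sym = U @ A @ U                               # Ã = U A U, real and symmetric

# W~ = U Ã U^-1, so the two matrices share the same (real) spectrum
assert np.allclose(np.sort(np.linalg.eigvals(W_tilde).real),
                   np.sort(np.linalg.eigvalsh(A_sym)))
print("max |eigenvalue| of W~:", np.abs(np.linalg.eigvals(W_tilde)).max())  # < 1
```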
Nasal and perirectal colonization of vancomycin sensitive and resistant enterococci in patients of paediatrics ICU (PICU) of tertiary health care facilities
Background Enterococci normally inhabit the intestinal tract of humans and are also potential pathogens causing nosocomial infections. The increase in antibiotic resistance and the transfer of antibiotic-resistance genes to Staphylococcus aureus (S. aureus) due to co-colonization have increased their importance in research. The aim of the study was to evaluate the local epidemiology of nasal and rectal colonization with Enterococcus faecalis (E. faecalis) and Enterococcus faecium (E. faecium) in patients of Paediatric Intensive Care Units (PICUs) and its correlation with clinical and socioeconomic factors. Methods Nasal and perirectal swab samples were collected from 110 patients admitted to the PICUs of three tertiary care hospitals of Rawalpindi Medical College, Pakistan. Identification of enterococci was done by biochemical tests and by PCR for the ddl, vanA, and vanB genes. Antibiotic susceptibility testing was performed by disc diffusion, and MICs were determined for vancomycin, tetracycline, ciprofloxacin, and oxacillin only. Results Out of 220 nasal and perirectal samples, 09 vancomycin-resistant enterococci (VRE) and 76 vancomycin-susceptible enterococci (VSE), consisting of 40 E. faecalis and 45 E. faecium, were isolated. PCR successfully identified both species with the ddl primers and VRE with the vanA primer. With the disc diffusion method, all isolates were resistant to most of the antibiotics tested except linezolid, quinupristin/dalfopristin, teicoplanin, and vancomycin. VRE showed resistance to both teicoplanin and vancomycin, and none were resistant to linezolid or quinupristin/dalfopristin. Generally, E. faecium isolates were more resistant than E. faecalis. MICs of vancomycin for nasal and perirectal VRE were 512 mg/L and 64 to 512 mg/L, respectively. VRE were more frequent in patients with prolonged hospitalization, from urban localities, and with pneumonia. Conclusion The present study reveals high colonization and antibiotic resistance in enterococcal isolates from the nasal and perirectal areas. Nasal colonization by enterococci in the PICU is especially alarming, as VRE may cause infection and can transfer the resistance gene to other microorganisms such as S. aureus.
Background
Enterococci normally inhabit the intestinal tract of humans and animals and may colonize the human oral cavity, vagina, hepatobiliary tract and skin of healthy individuals [1]. Two species, E. faecalis and E. faecium, have been more frequently isolated from clinical samples than other species of enterococci, accounting for 80 to 90% and 5 to 10%, respectively [2].
Compared with other Gram-positive organisms, enterococci have relatively low virulence, but in recent years they have emerged as the nosocomial pathogens of the 1990s [3][4][5][6]. Several factors, including their ubiquitous distribution as intestinal flora and the widespread use of broad-spectrum antibiotics and invasive devices, have contributed to the emergence of enterococci as important pathogens [7]; perhaps most important is their extensive resistance to a wide range of antimicrobial agents. These properties allow the organism to survive and multiply with a selective advantage over other fecal flora in a hospital environment where antimicrobial agents are heavily used. The aim of the study was to evaluate the local epidemiology of nasal and rectal colonization with E. faecalis and E. faecium in PICU patients and its correlation with clinical and socioeconomic factors.
S. aureus can persistently or intermittently colonize the nasal cavity and perirectal area of healthy humans, and transfer of the vancomycin-resistance gene due to co-colonization with enterococci has been reported in some previous studies; nasal samples were therefore also collected under the hypothesis that enterococci (E. faecalis and E. faecium) can co-exist with S. aureus and may transfer resistance genes [8][9][10][11].
The study also included risk factors like age of the patient, reason for admission, stay of patient in the hospital at the time of sampling, residential (rural, urban) and socioeconomic status (high, middle, low) of the patients.
Methods
This prospective microbiological surveillance study was carried out at the Microbiology Laboratory, Holy Family Hospital, Rawalpindi, Pakistan, and the Microbiology Research Laboratory, Quaid-I-Azam University, Islamabad, Pakistan, from March to September 2010. After ethical approval of the study, granted by the Ethical Committee of RMC and Allied Hospitals, Rawalpindi, Pakistan (No. EC/1721-22/RMC, dated 03/03/2010), written consent was obtained from the parents and guardians of the children before sampling. Samples of the anterior nares and the perirectal area were obtained from every patient admitted to the PICUs of the Allied Hospitals of Rawalpindi Medical College, Rawalpindi, Pakistan, whether newly admitted or transferred from other units of the same or different hospitals. Patients with duplicate admissions during the study period were excluded.
Samples were processed within two hours of collection. The swabs were inoculated onto Bile Aesculin Agar (BAA) (Oxoid, UK) plates, which were incubated at 45°C for 24 to 72 hours. Characteristic pinpoint colonies and colonies with a black zone around them were subcultured on Mueller-Hinton Agar (MHA) with 6% NaCl (Oxoid, UK) at 45°C for confirmation. Further identification of these isolates was based on pink or red colonies on KF Streptococci Agar (KFSA) (Oxoid, UK), negative catalase and coagulase tests, and gamma-hemolysis on Sheep Blood Agar (SBA) (Oxoid, UK) after overnight growth at 37°C. All confirmed enterococcal isolates were preserved in 16% v/v glycerol broth and in Microbank tubes (Pro-Lab Diagnostics, US) at -70°C.
Molecular identification
Species identification of the enterococci (E. faecalis and E. faecium) was done by PCR targeting the ddl E. faecalis and ddl E. faecium genes. The vancomycin-resistance gene was identified with the vanA and vanB primers. All primer sequences had been used in previous studies [12] and were obtained from Sigma Genosys (Sigma Aldrich, USA), Alpha DNA (Alpha DNA, Germany), and e-Oligo (Gene Link, USA) (Table 1).
DNA was extracted with the Wizard® Genomic DNA Purification Kit (Promega Corporation, USA) according to the manufacturer's instructions. DNA was also extracted manually with Triton X lysis buffer by the method used in a previous study for DNA extraction from bacterial colonies [13]. The isolated DNA was stored at 2 to 8°C.
Amplification was carried out in Biometra T1 Thermocycler (Biometra, Germany) with initial denaturation at 95°C for 4 min, then 30 cycles of denaturation at 95°C for 30 sec., annealing at 52°C for 1 min and extension at 72°C for 2 min followed by final extension at 72°C for 7 min. The final PCR product was held at 4°C until removed.
The PCR product was analysed on 1% agarose gel. DNA ladder (O'Gene Ruler) of 100 bp and 1 kb was used to compare the size of PCR amplified fragments. Electrophoresis was done at 100 V for one hour and gel was viewed under Molecular Imager Gel Doc XR + System, Bio-Rad Laboratories, US.
Patient's clinical data
Patients' data, including age (<12 years), gender, residential and socioeconomic status, clinical diagnosis, history of vancomycin intake, surgical interventions, invasive procedures and devices, and current medication profile, were collected from the hospital records.
The Statistical Package for the Social Sciences (SPSS) version 13.0 was used for statistical analysis, including averages ± standard deviation, the chi-square test (cross-tabulation), and the t-test. A p-value ≤ 0.05 was considered statistically significant.
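The association tests reported below (chi-square on cross-tabulated counts) can be reproduced outside SPSS; here is a minimal sketch using SciPy, with a made-up 2 × 2 contingency table of colonization counts rather than the study's raw data.

```python
from scipy.stats import chi2_contingency

# Hypothetical cross-tabulation: rows = urban/rural, columns = VRE/VSE counts
table = [[8, 18],
         [1, 8]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}, dof = {dof}")
# p > 0.05 would indicate no statistically significant association
```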
Antimicrobial susceptibility testing
With the disc diffusion method, all nasal and perirectal isolates were 66 to 100% resistant to cephalexin, cefoxitin, cephalothin, cephradine, ciprofloxacin, erythromycin, gentamicin, methicillin, and oxacillin, whereas nasal isolates were 36 to 55% resistant to penicillin G, ampicillin, amoxicillin/clavulanic acid, imipenem, levofloxacin, and tetracycline, while perirectal isolates showed variable resistance to these antibiotics. Both nasal and perirectal isolates were 05 to 18% resistant to teicoplanin and vancomycin and were susceptible to linezolid and quinupristin/dalfopristin (Table 3). Perirectal E. faecium isolates showed higher resistance than E. faecalis in the disc diffusion test.
All E. faecalis and E. faecium isolates were resistant to ciprofloxacin, with MICs from 04 to 512 mg/L (t-test, P < 0.05). Oxacillin MICs ranged from 8 to 512 mg/L for both species, and all were resistant (t-test, P < 0.05) (Table 5).
Patient's clinical data
In this study, the average patient stay in the PICU at the time of sampling was 5.42 (SD ± 5.79) days. Statistically, no significant association was found between the rate of enterococcal isolation and the duration of stay in the PICU (chi-square test: nasal 20.505, perirectal 19.481; P > 0.05). However, VRE were isolated more often, from both the nasal and the perirectal area, from patients with longer hospital stays: 06/09 VRE isolates were from patients who had stayed more than two days in the unit (Table 6).
Frequency of enterococci in rural and urban patients
Isolation of both E. faecalis and E. faecium was higher from urban than from rural patients (Table 6). There were 08/09 (88.9%) VRE from urban patients and 01/09 (11.1%) from rural patients. Among the 29 nasal isolates, all 03/03 VRE (02 E. faecalis and 01 E. faecium) were from urban patients. VSE from urban and rural patients numbered 18/26 (26.9% E. faecalis and 42.3% E. faecium) and 08/26 (7.7% E. faecalis and 23.1% E. faecium), respectively. However, no statistically significant association of residential status with nasal isolates was present (chi-square test: 5.569, P > 0.05).
Association between clinical diagnosis and enterococcal colonization
The admitted patients were categorized by disease condition into seven groups: aspiration pneumonia, meningitis, pneumonia, renal failure, tetanus, tuberculosis, and a miscellaneous group. Diseases appearing in more than five patients were given a separate group, while diseases occurring in fewer than five patients were combined into the "miscellaneous group" (Table 6). However, no significant association was found with nasal isolates (chi-square test: 9.350, P > 0.05). Similarly, among the perirectal VSE, 29/50 (30% E. faecalis and 28% E. faecium) were isolated from pneumonia patients, followed by the next leading group, the "miscellaneous group," with 16/50 (14% E. faecalis and 18% E. faecium); the remaining 04/50 isolates (6% E. faecalis and 2% E. faecium) were from patients with meningitis. No significant association was found with perirectal isolates (chi-square test: 22.958, P > 0.05). Twelve patients suffering from pneumonia harbored enterococci in both nasal and perirectal samples. There was no significant correlation with the other parameters of the clinical data.
Discussion
E. faecalis and E. faecium are potentially good focal species for a microbiological surveillance study, as they account for 80 to 90% of human enterococcal infections [17]. These two species were the focus of the present study also because they are common nosocomial agents and normal inhabitants of the human intestinal tract and female genital tract, and less commonly of the oral cavity [18][19][20]. E. faecalis is the most frequently occurring species of enterococci [21], but in the present study the isolation rates of E. faecalis and E. faecium were almost equal. Isolation of VRE is highly significant, both for causing infection in the colonized individuals themselves and for transmitting vancomycin resistance to staphylococci. In the present study, the isolation of VRE and the colonization by VSE in nasal samples are alarming, although the carriage rates of nasal and perirectal VRE are not very high. In a study by Karimi et al. [22], 16.9% VRE were isolated from stool samples of hospitalized children, an isolation rate much higher than in the present study. Burger & Muller [23] reported the carriage rate of glycopeptide-resistant enterococci (GRE) from different body sites, concluding that among 20 patients GRE were isolated most frequently from stool samples (95%), whereas recovery from other sites, including mouth, nose, throat, rectum, and perineum, was low (25%). The VRE isolation rate is very low in the present study, but as a whole there is a high perirectal carriage rate of enterococci, which is usual. The nasal carriage rate is comparatively high in the present study, although the main areas of colonization and isolation for enterococci are stool, the rectum, and the perirectal area [24]. Only a few patients were positive for both perirectal and nasal VRE. VRE were low in frequency but comparatively high in nasal samples. The higher nasal VSE colonization may be due to poor hygiene of the patients.
Out of the several different genes mediating vancomycin resistance, the vanA and vanB genes were targeted for identification, as these gene clusters can be acquired and are often transferable [25]. PCR analysis successfully identified E. faecium and E. faecalis, along with nasal and perirectal VRE, using the ddl and the vanA and vanB primers, respectively (Table 2), as in the study of Dutka-Malen et al. [26].
High antimicrobial resistance is characteristic of the enterococci, although some species, such as E. faecium, are intrinsically more resistant than others [27]. In the present study, both the nasal and perirectal E. faecium isolates were more resistant than the E. faecalis isolates. None of the nasal and perirectal isolates, including VRE, were resistant to linezolid and quinupristin/dalfopristin, the drugs of choice for these isolates. All isolates showed high resistance to gentamicin, in line with a previous study [28]. In an Iranian study [29], MICs of vancomycin for VRE isolates ranged from 32 to 512 μg/ml, similar to the results of our study. MICs of tetracycline for nasal and perirectal enterococci were 2 to 256 mg/L and 0.5 to 256 mg/L, respectively, corresponding with another report [30]. Resistance to ciprofloxacin was higher than in other reports [31].
Urban patients were more often colonized with VRE than rural patients. This difference might be due to the irrational use of antibiotics in the urban community. In a study by Oberoi & Aggarwal [32], a high frequency of E. faecium in urban hospitalized patients was observed, which could be due to the chronicity of cases or the wider use of broad-spectrum antibiotics.
Berk & Verghese [33] reported that some Gram-positive cocci, including enterococci, are significant in nosocomial respiratory infection, and there are chances of enterococcal pneumonia occurring in patients receiving broad-spectrum antibiotics [34,35]. In the present study, there was greater isolation of enterococci, both E. faecalis and E. faecium, from patients with pneumonia; this might have some correlation with nosocomial respiratory infection, and a more detailed prospective study in this regard is under consideration. The isolation rate of VRE was low, and it is not possible to correlate it with vancomycin use, as it has been reported that treatment with vancomycin is not a risk factor for VRE colonization and infection [36,37]. The present study was only a microbiological surveillance study, not a study of interventions to decrease the colonization rate or an analysis of nosocomial infections caused by VRE.
Conclusion
The high nasal and perirectal colonization rates of E. faecalis and E. faecium in children in PICUs, in particular the 2.7% VRE and 23.6% VSE nasal colonization, are alarming, as the anterior nares are not a common niche for these organisms. Further studies are required to elaborate the transfer of vancomycin-resistance genes in staphylococcal nasal carriers co-colonized with VRE.
Plasticity in Insect Olfaction: To Smell or not to Smell?
In insects, olfaction plays a crucial role in many behavioral contexts, such as locating food, sexual partners, and oviposition sites. To successfully perform such behaviors, insects must respond to chemical stimuli at the right moment. Insects modulate their olfactory system according to their physiological state upon interaction with their environment. Here, we review the plasticity of behavioral responses to different odor types according to age, feeding state, circadian rhythm, and mating status. We also summarize what is known about the underlying neural and endocrinological mechanisms, from peripheral detection to central nervous integration, and cover neuromodulation from the molecular to the behavioral level. We describe forms of olfactory plasticity that have contributed to the evolutionary success of insects and have provided them with remarkable tools to adapt to their ever-changing environment.
INTRODUCTION
Insects rely on olfaction to locate mating partners, food sources, habitats, and oviposition sites, and to escape predators. Insects may encounter odorants emitted from individuals belonging to the same species, such as mating partners (e.g., sex pheromones in moths) and nestmates (e.g., alarm pheromones in ants), or belonging to different phases (e.g., aggregation pheromones in locusts) (130). Insects also exploit odor signals emitted from organisms such as enemies, potential hosts, and food sources, and from other sources of natural or anthropogenic origin, for example, that signify potential oviposition sites (14, 99).
To detect these olfactory signals and cues, insects have developed a sophisticated sensory system consisting of olfactory receptor neurons (ORNs) situated in sensilla on the antennae and mouthparts (64). Odor molecules penetrate through the cuticular pores and are then transported by odorant binding proteins (OBPs) to the ORN membrane, where they interact with receptors, ultimately leading to the generation of action potentials (59, 127). The signal is then transmitted through the ORN axon to the primary olfactory center of the brain, the antennal lobe (AL) (6). There, ORNs make synaptic contact with intrinsic neurons, the local interneurons, and with output neurons, the projection neurons, which transfer information to higher brain centers such as the mushroom bodies (MBs) and the lateral protocerebrum (30). Centrifugal neurons, which have a modulatory role, send axon branches to the AL (6, 56). The AL consists of a species-specific number of globular neuropils, the glomeruli (105), whose activation is odor specific and reproducible within a given species (46). Individual glomeruli can become enlarged, such as the macroglomerular complex (MGC) in male moths, in which a large number of ORNs equally tuned to the sex pheromone are present (54).
Insect responses to biologically active chemical stimuli may vary not only according to biotic and abiotic environmental factors and/or previous experience, but also as a function of the physiological state. For development and reproduction to occur, insects must respond to relevant chemical stimuli at the right time. For example, responses to sexual signals should occur at reproductive maturity under environmental conditions suitable for mate finding, mating, and producing offspring. Also, responses to food odors should depend on the state of satiety. Insects must thus respond to odorants in coordination with their own physiological state.
To cope with these variable conditions, insects modify their olfactory systems by neuronal plasticity. Two types of behavioral olfactory plasticity and their neuronal basis have been studied so far: plasticity induced by (a) physiological changes and (b) environment- (e.g., biotic and abiotic factors) or experience-induced changes (e.g., different forms of learning). Here, we focus on the first type of plasticity in adult insects. Adaptations to physiological changes, either as short-term modifications (e.g., via modulation) leading to changes in neural activity or as long-term modifications (e.g., via life-history traits) leading to changes in gene expression and neural structure, are common in animals.
We summarize the recent literature on olfactory plasticity in insects, including economically important insect groups such as pollinators, agricultural pests, and disease vectors. Experience-dependent olfactory plasticity has been well summarized elsewhere (4, 24, 34). Here, we focus on modulatory effects of the physiological state that occur gradually, such as age-dependent, feeding state-dependent, and circadian rhythm-dependent effects, or immediately, such as mating-dependent effects. We describe both behavioral modifications and, where known, the neural mechanisms underlying modulation, from peripheral detection to central nervous integration.
AGE-DEPENDENT PLASTICITY
The lifespan of adult insects generally varies between a few days and a few weeks; in exceptional cases, such as honey bee or termite queens, adults can live for many years. Depending on its lifespan, an insect's olfactory responses change on correspondingly different timescales. In males and females, age-dependent olfactory plasticity is linked to maturation of reproductive organs (sexual maturation). This plasticity is, however, not restricted to responses to odors from conspecifics; it also affects responses to hosts, food, or oviposition cues. The best-studied insects in this context are moths and hematophagous mosquitoes and bugs, but we also provide examples from other insects, including flies, wasps, and bees.
Age Modulation of Pheromone-Guided Behavior
For insects in which one sex produces a sex pheromone that attracts the opposite sex, many cases of maturation in early adult life have been reported. In some noctuid moths, for example, males are more apt to respond to the female-produced sex pheromone over the first days following emergence (Figure 1) (45, 115, 121), in parallel with the maturation of the male sex accessory glands (SAGs) (36). In tephritid flies, the behavioral response of females attracted to male-emitted sex pheromones depends on ovarian development (41, 73).
In male moths, biogenic amines and hormones are involved in age-dependent behavioral sensitivity to sex pheromones. In Agrotis ipsilon, juvenile hormone (JH) biosynthetic activity increases concomitantly with age and pheromone response (36). By manipulating the JH level, researchers could decrease and increase the behavioral response (i.e., the percentage of males flying upwind toward a pheromone source) of sexually mature and young immature males, respectively (45, 62). Similarly, injections of 20-OH ecdysone (20E) increased the behavioral response of young A. ipsilon males to pheromone, whereas injections of cucurbitacin, an antagonist of 20E receptors, decreased the responses of sexually mature males (37). Biogenic amines such as octopamine and dopamine also influenced age-dependent behavioral olfactory plasticity (2, 62). In A. ipsilon, the modulatory action of 20E, octopamine, and dopamine on pheromone-guided behavior seems to occur via their receptors, as the injection of their antagonists or their knockdown strongly decreased the behavioral response of mature males (1, 2, 35, 37). Age can also influence the response of insects to other types of pheromones. In the locust Schistocerca gregaria, the response to the main aggregation pheromone component, phenylacetonitrile, is age dependent. Young males and females with low levels of JH display aggregation behavior and are attracted to phenylacetonitrile, whereas older adults with high JH levels no longer respond to this component (58). In the honey bee, Apis mellifera, responses of workers to the queen mandibular pheromone decrease with age (97, 123). Octopamine increased responses of young bees to the brood pheromone, an activator of age- and JH-mediated foraging behavior, and decreased the negative effect on foraging caused by the presence of old bees (8). It is now important to study how the different neuromodulators interact in the different model systems and whether there is a hierarchy in their influence on age-dependent olfactory-guided behavior.
Age Modulation of Behavioral Responses to Nonpheromonal Odors
Behavioral responses to plant or animal host odors, as well as to nonhost volatiles, also undergo age-dependent changes in different insects. Food odor cues are often most attractive in early adult life, and oviposition-site cues become more attractive later. Female mosquitoes begin host searching and blood feeding only 24 to 72 h after adult emergence (65). Before this time, young female mosquitoes search for sugar-rich resources and are not attracted to vertebrate odors. Older Aedes aegypti females are more responsive to CO2 than younger ones are (19, 47). Also, in the obligatory hematophagous triatomine bugs, which have similar host-seeking behavior throughout development, attraction to the host cue CO2 is age dependent. In Rhodnius prolixus, recently molted nymphs become highly attracted to CO2 beginning 7 days after ecdysis, when they are anatomically (e.g., mouthpart sclerotization) and physiologically (e.g., biosynthesis of blood digestive enzymes) mature enough to ingest blood (Figure 2) (17).
Figure 2. Plasticity in triatomine bug olfactory-guided behavior as a function of age, circadian rhythm, and starvation. Bugs are attracted to hosts during the night. After feeding and following ecdysis, they are transiently no longer attracted to their hosts. Attraction resumes with starvation and with time after ecdysis. Red crosses indicate level of attraction.

In females of different herbivorous insects, including moths and beetles, both positive responses to host-plant odors and negative responses to nonhost-plant volatiles can increase with age (7, 87). By contrast, the avoidance response to the fruit-related odor benzaldehyde decreased with age in Drosophila melanogaster (32). Also, the specificity and the range of olfactory cues, alone or in combination with other sensory cues, to which insects respond can change with age. In the pepper weevil, Anthonomus eugenii, specificity to host-plant odors increases with age and maturation (3). In the ant Pheidole dentata, the range of olfactory cues to which it responds increases with age, in coordination with the number and types of tasks performed in the colony, and this change is accompanied by increasing levels of serotonin and dopamine but not octopamine (113). In the tephritid fly Neoceratitis cyanescens, sexually mature females use a combination of visual and olfactory host-plant cues, whereas immature females and males orient only toward olfactory host-plant cues and largely ignore visual cues (22).
In contrast to increasing olfactory sensitivity during maturation in early adult life, senescence or aging can negatively affect olfactory responses. For example, D. melanogaster showed an age-dependent aversion to or a decrease in attraction to innate odors from approximately 14 days post eclosion (52). Investigations into the potential aging effects on longer-lived insects would be valuable.
Age Modulation in the Peripheral Olfactory System
Sensitivity of insect antennae to pheromones, as measured by electroantennographic (EAG) or single sensillum recordings, can either increase with age or be independent of age. In a few moth species (Ostrinia nubilalis, Spodoptera littoralis, and Pseudaletia unipuncta), the pheromone-detecting sex showed increasing EAG or ORN responses with age (33, 84, 112), whereas in A. ipsilon no effects of age were found in EAG responses to sex pheromone (45). In S. littoralis males, EAG responses increasing with age were correlated with changes in the expression of certain ecdysone receptors in the antennae: Whereas expression of SlEcR was constant throughout adulthood, expression of both SlUSP and SlE75 increased steadily (15). Also, age-related changes in behavioral responses to aggregation pheromones in locusts and to the queen mandibular pheromone in honey bees were not reflected by changes in EAG responses (58, 97). The decreasing behavioral attraction of honey bee workers to the queen mandibular pheromone correlated with changes in the expression of biogenic amine receptors in the antennae. The expression of the dopamine Amdop2, the octopamine Amoa1, and the tyramine Amtyr1 receptors increased with age, whereas expression of Amdop3 decreased (86, 123).
EAG responses to nonpheromonal odors increased with age, e.g., S. littoralis females to plant odors (84) and female Phormia regina blow flies to odors from oviposition substrates (26). In the mosquito Ae. aegypti, the sensory neurons on the antennae and maxillary palps respond increasingly to lactic acid and CO2, respectively, with age, in correlation with increasing behavioral responses to these host odors (28, 47). Bohbot et al. (19) correlated an increased response of octenol-sensitive ORNs on the antennae of Ae. aegypti with an increase in odorant receptor gene expression from day 1 to day 6 post emergence. Investigations using molecular genetics tools with D. melanogaster need to confirm whether observed correlations between gene expression and antennal sensitivity indeed have a functional connection.
Age Modulation in the Central Olfactory System
Clear correlations between changes in the central nervous system and age-dependent modulation of olfactory-guided behavior (odor-guided behavior: locomotion elicited by an odorant) have been found in some insect species. Most of the described effects occur at the AL level, but age-dependent changes within the MBs have also been observed. A. ipsilon males demonstrate prominent changes in the sensitivity of sex pheromone-responding AL neurons in accordance with age and hormone levels: AL neurons become increasingly sensitive to the sex pheromone as the insect ages, and high levels of JH, 20E, octopamine, and their receptors allow this increase in sensitivity (2, 5, 43, 62). Central processing of plant odors, by contrast, is age independent (48). An anatomical correlate of olfactory maturation has been found within the AL of the sphingid moth Manduca sexta. The relative size of the pheromone-processing MGC glomeruli increases during the first days of adult life (57). Contrary to findings on sex pheromone responses in male moths, AL sensitivity to aggregation pheromones in the locust S. gregaria decreases with age and JH level, consistent with observed behavioral changes (58).
In social insects such as honey bees and ants, but also in D. melanogaster, correlates of age-dependent behavioral changes have been identified within both the AL and the calyces of the MBs. Owing to an increase in synaptic density in honey bees and D. melanogaster, certain glomeruli of the AL increase in size with age (32, 128). Also, in vivo optical imaging experiments have shown that odor responsiveness increases within the AL glomeruli of honey bee workers during the first days of adult life (125). At the MB level, the volume of the calyces increases with age while the density of synaptic microglomeruli decreases (49). Within the calyces, the membrane surface area of projection neuron synaptic boutons increases and the number of postsynaptic partners (Kenyon cells) decreases with age (50). In the carpenter ant Camponotus floridanus, both the AL and the MBs increase in size and contribute to increasing brain volume with age, correlated with an increase in the complexity of worker tasks (51). Whether the neuromodulators and their receptors involved in age-dependent physiological changes also play a role in the anatomical modifications would benefit from further investigation.
FEEDING STATE-DEPENDENT PLASTICITY
For many insects the attractiveness of food/host odors is dependent on the delay after the last food intake. Generally, food odors become more attractive as starvation is prolonged. This form of plasticity, however, has been studied predominantly in blood-sucking insects, as many of them transmit infectious diseases to humans. In addition, after a blood meal, insects engage in different activities related to their biology, such as mate finding, searching for oviposition sites, or returning to refuges.
Effects of Feeding State on Behavioral Responses to Odor
The nutritional state of the insect influences odor-guided behavior in both blood and nonblood feeders. Sugar feeding influenced parasitoid wasps' choice between host and food cues (77, 82), and in D. melanogaster, starvation increased the attractiveness of food-odor sources (38). In mosquitoes, a blood meal distends the abdomen, and the subsequent ovarian development suppresses host-seeking behavior, which in Ae. aegypti (69, 70) and Anopheles gambiae (119) is generally restored 24 h after oviposition. Once a blood meal large enough to initiate ovarian development has been obtained, mosquitoes are attracted to olfactory cues associated with suitable oviposition sites (68).
In the case of triatomine bugs, the response of all developmental stages to CO2 and other odors depends on their feeding status (Figure 2). R. prolixus starved for a short duration is attracted to host-related cues and repelled by aggregation pheromone, whereas insects starved for a long duration are attracted to both (102). Unfed R. prolixus is highly attracted to CO2, whereas 48 h after a blood meal, the bug becomes unresponsive to CO2, which then becomes repellent after 72 h and remains repellent or neutral for at least 20 days (16, 18). Moreover, insects fed saline solution or even starved bugs injected with hemolymph of fed insects are also unresponsive to or repelled by host cues such as CO2 and heat (18). Postfeeding behavioral aversion to CO2 seems to be induced by a mechanical distension of the abdomen and by an unidentified factor in the hemolymph that modulates olfactory responses (18). Blood feeding triggers physiological processes such as molting in nymphs or egg-laying in adult females and modifies host attraction behavior, with different time courses for different insects, according to their life-history traits. A high sensitivity to host odors when blood feeders are engaged in other relevant tasks would be a waste of energy and even life-threatening, as hosts often display defensive behavior.
Effects of Feeding State on Peripheral Detection and Central Processing of Odors
In blood-feeding insects, both down- and upregulation of antennal sensitivity, to host odors and to oviposition-related odors respectively, have been found following a blood meal. In Ae. aegypti, a humoral factor downregulates the sensitivity of lactic acid receptor neurons situated in grooved peg sensilla, which are used during host localization (23, 29), whereas ORNs in trichoid sensilla increase their sensitivity to oviposition site-emitted compounds 72 h after a blood meal (117). In An. gambiae, there is a complex change of ORN responses after blood feeding: Depending on the neuron types, both up- and downregulation of responses to different odors were found (98). By contrast, ORNs of starved tsetse flies, Stomoxys calcitrans stable flies, and triatomine bugs were more sensitive to host-related odors (93, 101, 126). Proteins involved in peripheral odor detection, such as odor receptors (ORs) and OBPs, are suggested to modulate the peripheral system. In An. gambiae, the putative odorant receptor AgOr1, which is expressed only in female antennae, is downregulated 12 h after blood feeding (42), during which olfactory responses to human odorants are substantially reduced (119). Likewise, down- and upregulation of genes, such as those encoding ORs and OBPs in Ae. aegypti and An. gambiae, have been documented as changes in transcript accumulation induced by a blood meal (20, 83, 103). Similarly, the expression of antennal OBPs in Glossina morsitans morsitans and the transcript levels of the antennal olfactory co-receptor genes in R. prolixus vary as a function of the nutritional state (75, 80).
Neuromodulators, such as peptides and serotonin, regulate numerous feeding functions (91, 114). Physiological analyses of D. melanogaster indicate that increased attraction to a food odor after starvation might originate from rich temporal dynamics of gene expression and modulation by insulin and the short neuropeptide F at the first synapse within the olfactory system (39, 104). Insulin signaling regulates the expression of the short neuropeptide F, leading to increased sensitivity of presynaptic neurons within the AL and therefore to more robust food-searching behavior in starved flies (104). In R. prolixus, the level of serotonin circulating in hemolymph increases after blood feeding (74). Furthermore, serotonin-immunoreactive neurons innervating the AL of mosquitoes display volumetric changes in their varicosities in response to blood feeding, indicating serotonin release at the synapses (116). Whether serotonin influences olfactory responses related to host-seeking behavior needs to be confirmed. In Ae. aegypti, the head peptide Aea-HP-I, released from neurosecretory cells in the brain and midgut, was suggested to inhibit host seeking after a blood meal (23). Despite these findings, we are only beginning to discover how neuropeptides modulate the olfactory system.
RHYTHM-DEPENDENT PLASTICITY
Olfactory-guided behavior, similar to many other activities, varies according to the time of day. Circadian rhythms help animals remain tuned to their environment, allowing them to anticipate the arrival of near-future cyclic conditions (a zeitgeber is a sensory cue present in the environment that helps synchronize the internal circadian rhythm). Oscillations are recurrent approximately every 24 h and persist under constant conditions. Although an endogenous rhythm is defined as self-sustainable, an animal's internal clock is sensitive to external cues such as light, temperature, humidity, food-related cues, and social interactions, which serve as zeitgebers to maintain a rhythm.
Rhythm-Dependent Behavioral Responses to Odors
A circadian rhythm of olfactory sex communication occurs in cockroaches (76, 132) and moths (Figure 1) (25, 55, 78, 79, 118). This rhythm is maintained in constant darkness at least for some time, and other zeitgebers such as pheromone exposure can replace cyclic light conditions (118). In the diurnal gypsy moth, Lymantria dispar, octopamine injected prior to the onset of scotophase increased the percentage of contacts with pheromone sources during both photophase and scotophase, whereas an injection in early photophase did not have any effect (79). Studies with moths have demonstrated that synchronizing female calling with male orientation responses during the photoperiod minimizes metabolic costs (25, 55, 118).
Circadian rhythms of behavioral responses to olfactory stimuli have also been reported for blood feeders (Figure 2). Nocturnal triatomine bugs seek a blood meal mainly at dusk (81) and are guided by CO2 (among other cues) released by sleeping hosts (12). Correspondingly, during this time the bugs are maximally attracted to CO2 (12, 13, 16) and do not orient toward aggregation pheromones released by conspecifics around shelters (16). Conversely, at dawn, after feeding, they return to their shelters and are highly responsive to aggregation pheromones (16). Experiments under constant darkness reveal that the responsiveness to CO2 is controlled by a circadian clock, whereas the response to the aggregation pheromone is not (12, 16). Circadian rhythms are thus important for the odor-guided behavior of insects with different life styles, helping them elicit appropriate responses at the appropriate time, but more case studies are needed to expand general concepts.
Rhythm-Dependent Antennal Function
In many insect species, the sensitivity of ORNs to odors, measured by EAG and single sensillum recordings, appears to be under circadian control. In the cockroach Leucophaea maderae and in D. melanogaster, a circadian clock regulates antennal responses to food-related odors (71, 94). In blood-feeding insects such as triatomine bugs, tsetse flies, and mosquitoes, antennae respond to host-related stimuli in synchrony with their behavioral rhythm of response: Antennae are more sensitive when the insects seek a host (21, 101, 106, 122). In moths, peripheral pheromone detection is generally independent of the circadian rhythm (95, 129); however, in S. littoralis males pheromone sensitivity decreases significantly at the end of scotophase, which correlates with rhythm-dependent expression of an odorant-degrading enzyme gene in the antenna (89). Also in mosquitoes, olfactory-related genes, including OBPs, sensory appendage proteins, and the olfactory co-receptor Orco, underlie a circadian rhythm of expression (27, 106, 108). Quantitative proteome analysis revealed that the expression of OBP transcripts (from genomic analysis) corresponds with a peak in protein abundance at the same time of the night (i.e., dusk) and with EAG olfactory sensitivity to host odorants (106, 107). Altogether, these results show that the olfactory machinery of mosquitoes is tuned for host odor detection and location as a function of their activity period.
The existence of peripheral oscillators necessary to mediate rhythmic olfactory responses was first shown in the antennae of D. melanogaster (120). Mutant flies lacking the clock genes period and timeless lost their ability to respond to odors in a rhythmic fashion, further confirming control by a circadian clock (71). Abolishing the clock by molecular targeting of transcriptional regulators of the core clock mechanism showed that the antennal ORNs, but not central neurons, can function as autonomous pacemakers (120). Rhythmic expression of clock gene products has since been observed in the antennae of many insects, suggesting that an antennal oscillator modulating olfactory sensitivity is a common feature (88, 89, 111).
MATING-DEPENDENT PLASTICITY
Many insects undergo significant physiological changes during mating. These modifications often induce drastic changes in male and female responses to odors involved in sex recognition, such as sex pheromones, or in host attraction, such as host-plant odors in herbivores or animal host odors in blood-feeding insects; the female response to oviposition-site cues undergoes changes as well. Generally, responses to sex attractants are switched off after mating, whereas responses to host odors or oviposition-site cues are switched on (i.e., postmating switches: behavioral changes in the odor responsiveness of males and females following mating). In most species these effects are reversible, and after a species-specific time interval the original state is resumed.
Mating-Induced Changes in Behavioral Responses to Sex Pheromone
Mating-dependent olfactory plasticity has been studied in detail in the male noctuid moth A. ipsilon (Figure 1). In this species, the olfactory switch-off occurs very rapidly after the onset of copulation and lasts throughout scotophase (44, 124). This behavioral switch-off seems to be independent of JH, 20E, and the biogenic amines octopamine and serotonin (11, 36, 124) and originates from the SAGs (124). Moreover, this inhibition is restricted to sex pheromone, as newly mated males still respond to plant odors (9). Interestingly, the addition of sex pheromone inhibited the response of mated males even to flower odors, but enhanced the response to sex pheromone in virgin males (9). Similarly, mating decreased responses of Plutella xylostella males to sex pheromone or mixtures of pheromone and host-plant odors, even though the addition of plant odors strongly increased the response of virgin males to sex pheromone in the field (100). In S. littoralis, newly mated males ceased to respond not only to sex pheromone but also to host-plant odors (cotton leaves), whereas they still responded to food odors such as lilac flowers (72) (Figure 1).
Similar effects occur in species in which males produce the sex pheromone to attract females. In several true fruit flies and the parasitic wasp Nasonia vitripennis, mated females cease to be attracted to the male-emitted pheromone, and this inhibition can last up to four weeks, depending on the species (41, 60, 61, 63, 109). Females begin instead to be attracted to fruit odors after mating, as shown in the Mediterranean fruit fly, Ceratitis capitata (60, 61). The lack of behavioral responses to sex-related cues after mating is thus a common phenomenon in insects, but different neuromodulators seem to be involved, leading to different time courses for the switches in behavior.
Mating-Induced Changes in Behavioral Responses to Oviposition and Host Cues
Behavioral responses to plant odors may also vary in female herbivores according to mating status. Behavioral responses to host-plant odors are often enhanced after mating, as females must find a suitable oviposition site (Figure 1). Indeed, mated female moths are more attracted to host-plant volatiles than virgin females are (85, 110).
In female blood-feeding insects, mating inhibits host search but elicits responses to oviposition-site cues. Only mated female mosquitoes are highly attracted to oviposition-site stimuli (68). Secretions of the male SAGs induced virgin females to engage in oviposition site-seeking behavior (131), and transplanting conspecific male SAGs with their major peptide component, Aea-HP-I, into virgin Ae. aegypti females significantly reduced their host-seeking behavior (40, 90). After oviposition, Ae. aegypti females gradually recover their behavioral and physiological responses to host cues and are no longer behaviorally attracted to oviposition cues (66). These mechanisms, however, cannot be generalized to all mosquito species: Male An. gambiae SAG content neither initiates refractory mating behavior nor stimulates oviposition (67). Different mechanisms in different insect species thus seem to modulate behavioral responses to oviposition and host cues.
Mating-Induced Modulation Within the Olfactory Pathway
As for other types of modulation, sensitivity changes occur at different levels of the olfactory system after mating, depending on the insect species. In S. littoralis males, EAGs, single sensillum recordings, and in vivo calcium imaging revealed that antennal neurons were less sensitive to the sex pheromone and host-plant odors after mating, whereas neuronal responses to flower odors were not modified (72). In S. littoralis females, the behavioral postmating olfactory switch from food odor to host-plant odor also originates from modulation in the peripheral olfactory system (84, 110). In A. ipsilon males, on the other hand, no difference in antennal sensitivity to the sex pheromone was observed, and responses to flower odor were enhanced after mating only at high stimulus doses (10, 44). Also, in other moth species such as Vitacea polistiformes and Cydia pomonella, EAG responses to the sex pheromone did not differ between virgin and mated males (31, 96). In A. ipsilon, modifications to odor sensitivity have been found within the AL: Neurons are less sensitive to the sex pheromone by several orders of magnitude after mating, whereas AL responses to a flower odor are not modified (9, 10, 44). Nevertheless, when the sex pheromone and the flower odor are presented simultaneously, flower-odor-responding neurons within ordinary glomeruli of the AL in virgin males show synergistic responses to the mixture, but high doses of the sex pheromone inhibit mated males' responses to flower odor (9). Neither octopamine nor serotonin seems to be involved in mating-induced sensitivity changes in the AL (11). The mechanisms by which peripheral and central neurons change their sensitivity after mating remain to be investigated.
CONCLUDING REMARKS AND PERSPECTIVES
We have reviewed the plasticity of insect olfaction as a function of the physiological state. This plasticity of sensory systems, together with experience-induced plasticity, is an important evolutionary strategy that optimizes vital resources for survival and reproduction. High sensitivity has metabolic costs (92) and should therefore only be present when a resulting behavioral output leads to an increase in fitness. In addition, locomotor activity in response to a sensory cue would be a waste of energy if the organism's physiology were not ready for the final behavioral output. Most studies reviewed here were performed under laboratory conditions and did not take into consideration the metabolic costs. Future investigations should confirm in a more natural context the findings from laboratory studies in conjunction with metabolic costs. Behavioral changes in accordance with age, feeding status, circadian rhythm, and mating status have been investigated primarily in insects with well-described olfactory communication systems, such as cockroaches, bees, locusts, and moths, and in blood-feeding species, such as mosquitoes and triatomine bugs. These organisms have enabled researchers to describe changes in sensitivity along the olfactory pathway. However, very little is known about the role of higher brain centers in physiological state-dependent forms of plasticity. In some cases the role of hormones and neuromodulators such as biogenic amines and peptides has been described (Figure 3), but synaptic plasticity at the anatomical, physiological, and molecular levels should be studied. With the molecular, biochemical, and genetic tools emerging for an increasing number of species, it might now be possible to identify genes, and the corresponding proteins and neurons in which they are expressed, that play a role in physiological state-dependent plasticity not only in model organisms but also in nonmodel insects. From a socioeconomic point of view, investigations into disease vectors, agriculturally important insects, and their natural enemies will provide scientists new opportunities to develop alternative control strategies by exploiting the knowledge on naturally plastic behavior and the underlying neural mechanisms in species that rely heavily on their sense of olfaction to reproduce.
SUMMARY POINTS
1. The physiological state of insects influences olfactory-guided behavior by modulating peripheral detection and central nervous processing of odors.
2. Odor-guided behavior in response to intraspecific (i.e., pheromone) and interspecific (i.e., host odors) cues is influenced by age and adult development. Such modulation occurs at the peripheral and central nervous system levels.
3. Responses to food odors depend on the development of the feeding organs and the degree of starvation or satiety, signaled, for example, via stretch receptors in the abdomen and factors circulating in the hemolymph. So far, evidence has been found primarily for modulation of peripheral sensitivity.
4. Circadian rhythms coordinate insect communication and adapt behavior to the ecology of an insect species. Clock genes in the antennae allow autonomous rhythmicity of odor detection.
5. Mating switches the odor responses of both male and female insects. Whereas responses to sex pheromones and animal host odors are usually switched off after mating, responses to host-plant and oviposition-site cues are switched on.
Figure 1. Plasticity in moth olfactory-guided behavior as a function of age, circadian rhythm, and mating. Females emit increasing amounts of sex pheromone, and males respond increasingly to this pheromone, with age. Both sexes are active only at night. After mating, males transiently stop responding to the sex pheromone until the next night, and females begin responding to host-plant odors in search of an oviposition site. Red crosses indicate level of attraction.
Aea-HP-I (Aedes aegypti head peptide I): peptide suggested to inhibit host seeking in Aedes aegypti mosquitoes.
Figure 3. Actors of olfactory plasticity at different levels of the olfactory pathway. Biogenic amines, hormones, and neuropeptides modulate the peripheral and central nervous systems, leading to changes in sensitivity to behaviorally active odors and thus to changes in behavior. Abbreviations: Aea-HP-I, Aedes aegypti head peptide I; JH, juvenile hormone; LN, local interneuron; ORN, olfactory receptor neuron; OBP, odorant binding protein; ODE, odorant degrading enzyme; OR, olfactory receptor; PN, projection neuron; sNPF, short neuropeptide F.
A curriculum focused on informed empathy improves attitudes toward persons with disabilities
Empathy is an important component of the provider-patient relationship. In the United States one in five persons has a disability. Persons with disabilities perceive gaps in health care providers' understanding of their health care preferences and needs. The purpose of this study was to use valid and reliable assessment methods to investigate the association between empathy and attitudes toward persons with disabilities and advocacy. An educational module was developed to enhance health care students' capacity for informed empathy. Pre- and post-assessment measures included the Attitude toward Disabled Persons scale (ATDP), the Attitude toward Microsocial Advocacy scale (AMIA), and the Interpersonal Reactivity Index (IRI). ATDP (t(94) = −5.95, p < .001) and AMIA (t(92) = −5.99, p < .001) scores increased significantly after the education module. Correlations between the pre- or post-module ATDP or AMIA scores and the IRI scores were not significant. Empathy in general may not be sufficient to ensure optimal attitudes toward persons with disabilities or advocacy in pre-health care professionals. However, a curriculum based on informed empathy and focused on the experiences of persons with disabilities can result in more positive attitudes toward and advocacy for people with disabilities.
Introduction
The Centers for Disease Control and Prevention estimates the prevalence of disability to be 20 % [1]. Considering the prevalence of disability in the United States increased between 2002 and 2005 and increases as individuals age [1,2], health care providers are likely to care for patients with disabilities and therefore can benefit from an increased awareness of the needs of their patients with disabilities. The Institute of Medicine (IOM) identifies patient centredness as a core component of quality health care, and defines patient centredness as health care that establishes a partnership between practitioners, patients, and their families to ensure that decisions respect patients' wants, needs, and preferences [3]. Patient-centred care is supported by good provider-patient communication so that patients' needs and wants are understood and addressed [4]. However, having a disability has been found to negatively affect provider-patient communication [5][6][7]. Patients with disabilities report faulty communication, and express the need for better communication with health care providers [8,9]. Individuals with disabilities want to be treated as equals in the patient-provider relationship and argue that a lack of education regarding disabilities is a major cause of miscommunication [8].
Compassion and empathy are additional components of patient-centred care. Empathy is considered a vital and important aspect of any professional helping and healing relationship, a core component of humanistic health care [10][11][12][13][14]. Even though there is general agreement that empathy is a critical component of any health care provider-patient relationship, it is difficult to define [15,16]. Empathy is often considered a multidimensional construct [11,15,17]. Davis defined four components of empathy: (a) perspective taking (PT), the ability of the respondent to adopt the perspective or point of view of others; (b) empathic concern (EC), the tendency for the respondent to experience feelings of warmth, compassion, and concern for others undergoing a negative experience; (c) personal distress, the tendency of the respondent to experience feelings of discomfort and anxiety when witnessing the negative experiences of others; and (d) fantasy, the tendency of the respondent to identify strongly with fictitious characters in books, movies, or plays [17]. Empathy has also been defined as attunement, the process of matching emotional expressions and connectedness between two participants [18]. Larson and Yao [14] state that there should be a skill, or behavioural, dimension to empathy which reflects the interpersonal processes that happen between people, while the cognitive and affective dimensions to empathy are part of an intrapersonal process that happens within a single person.
Cox states that accurate and compassionate empathy is partly contingent on the extent to which the observer has experienced the emotions being imputed to the other [19]. Others view empathy as an attribute that enables health care providers to understand the inner experiences of patients, to communicate this understanding, and to respond in a therapeutic way [20]. Empathy facilitates the development of mutual trust, shared understanding, and optimal communication, allowing patients to feel understood and 'listened to' [10,15,16,19]. The manner in which health care providers express empathy for persons with disabilities may contribute to their perception that their situation is not fully appreciated [21]. Ultimately, it is imperative that health care professionals learn how to adequately convey empathy because it has been linked to positive outcomes, such as reduced physiological distress, improved self-concept, reduced anxiety, and increased satisfaction with treatment [12,18,21]. However, because health care professionals cannot have all of the same experiences as their patients, they need other ways to gain the empathy required to provide quality care.
Most people find it easier to be empathic toward people like themselves, in part because personal experiences shape and define one's empathic understanding [14,16]. Consequently, a training programme that captures and conveys the perspectives of specific groups, in this case persons with disabilities, may be effective in developing informed empathic care. For the purposes of this study, informed empathy refers to knowledge about the impairments, activity limitations, and participation restrictions that can be associated with having a disability, blended with an appreciation of the personal impact these issues can have on individuals, their families, and those who provide their care [22,23].
Persons with disabilities report environmental and attitudinal barriers when trying to access health care [8,24,25]. Manifestations of attitudinal barriers are negative stereotypes, condescending or patronizing remarks, and the inability of others to see beyond the individual's main impairment [26]. The attitudinal barriers perceived by persons with disabilities may contribute to inadequate communication from health care providers, resulting in an incomplete understanding of medical histories and a lack of thoroughness [24,27,28], potentially contributing to suboptimal care and health inequities for persons with disabilities.
This study involved pre-health professional students and evaluated the impact of an innovative curriculum that focused on patient-centred care for persons with disability, an area of the curriculum not historically featured in medical education. The assessment measures sought to investigate the relationship between empathy in general and attitudes toward persons with disabilities and attitudes toward advocating for patients with disabilities. It is hypothesized that: (a) a curriculum focused on informed empathy would be an effective teaching method, and (b) higher empathy scores, especially PT, would be associated with more positive attitudes toward and advocacy for people with disabilities.
Methods
This study was approved by the medical school's institutional review board.
Curriculum development
The curriculum was designed to evoke reflections about attitudes, empathy, and the role of advocacy for health care professionals. To ground the educational experience in authentic representation of patients' experience, I developed a DVD specifically for this curriculum that consisted of narratives by and about persons with disabilities. A total of 11 men and seven women with various types of disabilities were recruited from the university's office for students with disabilities, physical medicine and rehabilitation clinics, and an association for the visually impaired and blind. I met with and obtained informed consent from each participant and explained that the DVD was being created as an educational tool. Participants were asked to share experiences or other information that they wanted current or future health care professionals to know. They were encouraged to provide an artistic interpretation, for example, a drawing, a poem, or photographs of their experiences. Everyone received a pen, notebook, disposable camera, micro-cassette recorder, and a bag to carry all of these items. Additional artistic supplies were made available when requested. Each person provided written or recorded narratives about their life and health care experiences. After reviewing their narratives, follow-up conversations were held with many of the participants to clarify their material. From the information provided, a 60-min DVD was created. It contains an oral summary of 18 narratives, each linked to one or more images. In some cases, the image is a photo, drawing, or collage that was provided by the individual. If the individual chose not to provide an image, the principal researcher and a colleague, whose formal background includes medical education and the fine arts, selected paintings from gettyimages.com. These images were selected based on the initial emotions perceived from the narratives rather than using a systematic method. Individual music compositions were recorded for each narrative and image pairing to enhance the feelings conveyed.
Student participants
The student participants were enrolled in health-related courses and were recruited to participate in the study through web-based course sites. Informed consent was obtained online at the link to the surveys, which were administered pre- and post-module. Students who completed all pre- and post-module surveys were entered in a random draw for a $100 Visa gift card. One gift card per course was awarded. The students had approximately 2 weeks to complete the pre-module surveys and 1 week to complete the post-module surveys. Ninety-five students across seven courses completed the pre-IRI, the pre- and post-ATDP scale, and the pre- and post-AMIA. Most of the participants were white females without a disability who were planning to enter a health profession (Table 1).
Curriculum implementation
The curriculum was taught in health-related undergraduate courses at a large Midwestern university and a local community college. For most of the courses, this intervention was the only curriculum content about the psychosocial aspects of disability; however, one course for dental hygienists was specifically about patients with special needs.
The time spent teaching the curriculum ranged from 1 to 3 h. At the request of the course instructors, the principal researcher taught the curriculum in each course. The sessions began with definitions, including disability, health, patient centredness, and advocacy. It was stressed that disability is an umbrella term, making a narrow, specific definition difficult. The sessions also included discussions to engage the students about their experiences with persons with disabilities and advocacy. The students were given background information about how the DVD was made, including the fact that the narratives are in the speakers' own words and address the following major themes: the fear and desperation they felt when their disability was diagnosed, others' perception of disability, the desire for independence and acceptance, family support and struggles, and their experiences with medical professionals. In each session, participants spent 20-25 min viewing the DVD and then discussed its content and their impressions. The discussion was initiated by having students answer core questions such as: Which reaction/response did you understand the most or least? Which accommodations are reasonable and how much is enough?
Measures
Assessment measures included the Attitude toward Disabled Persons (ATDP), the Attitude toward Microsocial Advocacy (AMIA), and the Interpersonal Reactivity Index (IRI). The ATDP provides an objective and reliable measure of attitudes toward persons with physical disabilities (α = .80) [29]. It was created to measure attitudes toward persons with disabilities in general, rather than toward persons with specific types of disabilities. The ATDP, developed in 1960, continues to be one of the most widely used and tested instruments to measure attitudes toward persons with disabilities [30]. The ATDP has been found to be a reliable measure across different populations, and it is sensitive to changes following instruction. It measures the attitudes of persons with and without disabilities, and validation and replication studies have identified differences in responses by gender [29]. Responses of persons without disabilities are assumed to reflect either acceptance of persons with disabilities or rejection/prejudice, depending on whether they perceive people with disabilities as similar or different and inferior. The responses of persons with disabilities are based on the assumption that most people with disabilities will respond to the questions on the ATDP by using themselves as a frame of reference, which provides information about their self-perception and perception of others with disabilities [31]. The ATDP is a self-report 20-item survey in which respondents use a six-point Likert scale, from (−3) I disagree very much to (+3) I agree very much, to indicate the extent of their agreement or disagreement with each item. There is no neutral point. Scores range from 0 to 120, with higher scores indicating a more favourable attitude. Individual item responses on the ATDP cannot be interpreted; only total ATDP scores are meaningful. In addition, since the ATDP uses a Likert scale, absolute interpretation of raw scores is not possible because the degree of the attitude expressed by each item is not known [31].
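Because only the total ATDP score is interpretable, a small scoring routine makes the arithmetic concrete. The sketch below is hypothetical: the set of reverse-keyed items and the shift-by-60 step are assumptions chosen so that 20 items rated on the −3 to +3 scale (with no neutral point) map onto the 0-120 range described above; the published scoring manual specifies the actual procedure.

```python
# Hypothetical ATDP scoring sketch. The reverse-keyed item set and the
# shift-by-60 step are assumptions for illustration only; the published
# scoring manual specifies the official procedure.

REVERSE_KEYED = {2, 5, 6, 11, 12}  # hypothetical reverse-keyed items

def score_atdp(responses: dict[int, int]) -> int:
    """responses maps item number (1-20) to a rating in
    {-3, -2, -1, +1, +2, +3}; there is no neutral (0) point."""
    if set(responses) != set(range(1, 21)):
        raise ValueError("expected exactly items 1-20")
    total = 0
    for item, rating in responses.items():
        if rating not in (-3, -2, -1, 1, 2, 3):
            raise ValueError(f"item {item}: invalid rating {rating}")
        # Flip the sign of reverse-keyed items so that a higher
        # contribution always means a more favourable attitude.
        total += -rating if item in REVERSE_KEYED else rating
    # The raw sum lies in [-60, +60]; shifting by 60 yields the 0-120
    # range described in the text.
    return total + 60
```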
The Attitude toward Patient Advocacy scale was developed to evaluate nurses' attitudes toward patient advocacy. For this scale, patient advocacy is conceptualised as a process or strategy consisting of a series of specific actions for preserving, representing, or safeguarding patients' rights, best interests, and values. Based on this conceptual framework, patient advocacy includes safeguarding patients' autonomy, acting on behalf of patients, and championing social justice. This scale has two subscales, the Attitude toward Macrosocial Advocacy (AMAA) and the AMIA; however, since the curriculum focuses on microsocial advocacy, only the AMIA subscale was used in the current study. The AMIA contains 45 items and responses are scored on a 6-point Likert scale ranging from (1) strongly disagree to (6) strongly agree, with a high score reflecting strong support for advocacy. In the original validity and reliability studies, the mean for the AMIA (45 items) was 244.67 (SD = 18.17) (α = .92) [32], with scores ranging from 45 to 270. For this study, the AMIA wording was modified to address patients with disabilities and two questions were combined, reducing the total number of items to 44, with scores ranging from 44 to 264.
The IRI was developed to assess the multidimensional nature of empathy. It was designed to capture individual variations in cognitive, PT tendencies as well as differences in the types of emotional reactions experienced [17]. The IRI has been found to be one of the most reliable and valid measures of self-assessed empathy [33]. It has been used with many different groups, including medical professionals. The IRI is a 28-item, self-report questionnaire consisting of four 7-item subscales, each tapping into some aspect of the global concept of empathy. IRI subscale scores range from 0 to 28, with higher scores indicating a stronger manifestation of that dimension of empathy. Respondents indicate for each question how well the item describes them. Responses are scored on a 5-point scale from (0) does not describe me well to (4) describes me very well. The four subscales are: (a) fantasy (FS), which measures the tendency of the respondent to identify strongly with fictitious characters in books, movies, or plays, for example; (b) PT, which measures the ability of the respondent to adopt the point of view of other people; (c) EC, which measures the tendency of the respondent to experience feelings of warmth, compassion, and concern for others undergoing negative experiences; and (d) personal distress (PD), which measures the tendency of the respondent to experience feelings of discomfort and anxiety when witnessing the negative experiences of others.
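A parallel sketch for the IRI illustrates how four 7-item subscale scores, each ranging from 0 to 28, are derived from the 28 items. The item-to-subscale assignment below is invented for illustration only; the published IRI (Davis) defines the actual assignment and any reverse-keyed items.

```python
# Hypothetical IRI subscale scoring sketch. The item-to-subscale
# assignment is invented for illustration; the published IRI (Davis)
# defines the actual assignment and its reverse-keyed items.

IRI_SUBSCALES = {
    "FS": range(1, 8),    # fantasy             (hypothetical items 1-7)
    "PT": range(8, 15),   # perspective taking  (hypothetical items 8-14)
    "EC": range(15, 22),  # empathic concern    (hypothetical items 15-21)
    "PD": range(22, 29),  # personal distress   (hypothetical items 22-28)
}

def score_iri(responses: dict[int, int]) -> dict[str, int]:
    """responses maps item number (1-28) to a 0-4 rating; each 7-item
    subscale therefore sums to 0-28, and no total score is computed."""
    return {name: sum(responses[i] for i in items)
            for name, items in IRI_SUBSCALES.items()}
```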
Significant differences between males and females on all subscales have been identified, with females having higher scores; Davis' normative data report mean scores for each IRI subscale (FS, PT, EC, and PD) by gender [17]. Only scores for the individual subscales are meaningful. The IRI was not developed to provide a summation or a total score.
Analysis
Paired t tests were performed to evaluate the extent of change in students' performance on the pre- and post-module ATDP scores and AMIA scores. The IRI was only administered pre-module because the aspects of empathy measured by the IRI were not a focus of the curriculum and thus were not expected to change. Pearson correlations were performed to evaluate the magnitude of association between (a) the IRI subscales and pre- and post-ATDP scores, and (b) the IRI subscales and pre- and post-AMIA scores. This resulted in 16 different correlation tests; therefore Bonferroni's correction for multiple tests was calculated.
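For readers who want to reproduce this style of analysis, the following sketch (in Python with SciPy; variable names are placeholders, not the study's data) shows paired t tests on pre/post scores and the 16 Pearson correlations judged against a Bonferroni-corrected threshold of 0.05 / 16 = 0.003125.

```python
# A minimal sketch of the analysis described above, using SciPy.
# Array/variable names are placeholders; the study's data are not
# reproduced here.
import numpy as np
from scipy import stats

def paired_change(pre: np.ndarray, post: np.ndarray):
    """Paired t test for pre- vs. post-module scores (ATDP or AMIA);
    a positive t statistic indicates an increase after the module."""
    return stats.ttest_rel(post, pre)

def bonferroni_correlations(iri_subscales: dict[str, np.ndarray],
                            outcomes: dict[str, np.ndarray],
                            alpha: float = 0.05):
    """Pearson correlation of each IRI subscale with each pre/post
    outcome score, judged against a Bonferroni-corrected threshold.
    With 4 subscales x 4 outcomes = 16 tests: 0.05 / 16 = 0.003125."""
    threshold = alpha / (len(iri_subscales) * len(outcomes))
    results = {}
    for sub, x in iri_subscales.items():
        for out, y in outcomes.items():
            r, p = stats.pearsonr(x, y)
            results[(sub, out)] = (r, p, p < threshold)
    return results
```

Here, outcomes would hold the pre- and post-module ATDP and AMIA score vectors for the matched participants.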
Results
Prior to instruction, there were no statistically significant differences across the courses on the students' ATDP scores, AMIA scores, or empathy scores (Table 2). This provided empirical justification for aggregating students across courses into one group.
Paired t tests showed a statistically significant increase in the ATDP and AMIA scores following the educational module (Table 3).
The means for the pre-module IRI empathy subscales (FS, PT, EC, and PD) were equivalent to Davis' normative data [17].
Discussion
This study established the feasibility of education involving authentic representation of persons with disabilities and student self-reflection. The active engagement of students encouraged them to self-reflect and consider the challenges people with disabilities face in general and when obtaining health care. This innovative educational module involved patients in the curriculum design and curricular material, an example of authentic patient-centred education. The curriculum resulted in a significant increase in ATDP and AMIA scores, well-established assessment measures, possibly through the process of gaining informed empathy.
The DVD was a powerful contributor to the effectiveness of the curriculum. This likely reflects the students' recognition that the DVD authentically portrayed the experiences of persons with disabilities. Further, the DVD included a diversity of characteristics, which contributed to the likelihood that students were able to identify with some aspect of the narratives, an initial step in developing informed empathy for persons with disabilities. For example, it may have been the age, ethnicity, or type of disability of a DVD participant; it may have been a reference to an area, or restaurant that the student goes to or is familiar with; it may have been an experience or activity the DVD participant was unable to access or do that the student does regularly without difficulty, such as using public transportation. The results, however, did not support the hypothesis that higher empathy scores, specifically PT, would correlate with higher ATDP or AMIA scores. Compared with the means reported by Davis, the students' scores on the empathy subscales were equivalent to the normative data [17]. The scores for the IRI empathy subscales were not correlated with the pre-or post-module ATDP or AMIA scores. This may be because the IRI is not sensitive to the issues included on the ATDP or AMIA. Measuring different dimensions of empathy or a global measure of empathy may be better associated with attitudes toward and advocacy for persons with disabilities.
There was already good evidence of the reliability of the ATDP, AMIA, and IRI; however, I sought to explore their associations with a very relevant conceptual framework developed outside the domain of medicine. The narratives in the DVD are not limited to medical scenarios; of the five major themes identified in the DVD narratives through qualitative analysis, only one was related to experiences with medical professionals. Consequently, class discussions were not limited to the interactions that a person with a disability may have with medical personnel or a health system. The students were encouraged to consider and discuss interactions (experienced or observed) with individuals with disabilities and the attitudes expressed, reactions witnessed, and barriers and opportunities identified. This is important since health is influenced by more than a diagnosis, disability, medical professional, or hospital. In an effort to capture these possible aspects of influence, most of the assessment tools are not specific to medicine (e.g., ATDP and IRI). And although the AMIA is specific to health care, the classroom discussions about advocacy extended beyond medicine. This created an opportune setting to teach about advocacy, which has been identified as a component of professionalism [34,35] and is receiving increased attention in medical education. A patient-centred approach toward advocacy education allowed the students to discern examples of advocacy that may be especially pertinent to individuals with disabilities.
When creating the DVD there was a focus on eliciting participants' experiences about the health care they received, and on any life experiences they felt were important for current or future health care professionals to know. Participants were encouraged to tell their stories in their own words. This allowed them to emphasize the actions, attitudes, and feelings that were important to them. By explaining the methods and reasoning used to create the DVD, I modelled for the students a method of helping persons with disabilities feel 'listened to'. This innovative educational module allowed the speakers to be regarded as individuals with unique concerns, not merely a disability or illness to be 'fixed'.
The curriculum utilizes stories, art, paintings, images, guided discussions about shared experiences and feelings, and self-reflection to help students understand people who may be very different from themselves. Course instructors have observed that this unique curriculum 'got students to open their minds when considering the barriers to caring for their patients.' Students have commented that 'class participation is both enriching and thought provoking.' This study demonstrates, as other studies have, that literature, film, and art are effective in developing and enhancing informed empathy [13,14].
Future studies could include transcribing class discussions and using qualitative analysis to better understand the process that contributes to the improved attitude scores. Another consideration is the possible longitudinal effects of the curriculum, especially on clinical practice. Providers who, as students, were trained using this curriculum may develop better communication with individuals with disabilities, resulting in a more therapeutic relationship and improved satisfaction with care for both the patient and the provider. Using different tools to assess attitudes and empathy may provide additional information on the effectiveness of the curriculum. In addition, since the expression of empathy by providers and the empathic needs of patients can vary based on situation, gender, ethnicity, or age, exploring how these areas intersect and influence attitudes could help health care providers to better understand their own reactions, responses, and biases. Studies involving a more balanced number of males and females may determine the effect, if any, that gender has on pre- and post-assessment scores. Lastly, using direct observation to assess attitudes toward real patients with disabilities could provide information about the effectiveness of the curriculum in improving/maximizing attitudes and communication in the clinical setting.
Strengths and limitations of study
Strengths of this study are (a) participants were students from a variety of pre-health courses, (b) the use of well-established assessment measures, and (c) matched pre- and post-education comparisons. This study is limited by the relatively small number of participants, its cross-sectional methodology, and the use of questionnaires, which may have resulted in socially desirable answers. Additionally, students were recruited from only two sites. Most participants were white females without disabilities; therefore, the results may not be generalizable to other populations. This study did not assess the long-term influence of the educational module.
Essentials
• Patient-centred education is an effective teaching method.
• Persons with disabilities are effective, compelling narrators.
• Attitudes toward and advocacy for individuals with disabilities can be enhanced through informed empathy.
• Informed empathy can be tailored toward specific groups.
Developing processes for manufacturing metal aviation technology components using powder bed fusion methods
Favorable parameters for selective melting by electron beam and laser radiation have been established to obtain the required geometric, physical and mechanical characteristics of thin-walled aviation parts from H18N9T (analogue of AISI 321) and Ti6Al4V alloys. Parts were manufactured and tested on a test stand. It has been shown that the technological processes developed using the SLM and SEBM methods can be recommended for the manufacture of thin-walled parts operating under rapidly changing deformations.
Introduction
Success in the development of aviation depends on a reduction in the mass of aircraft structures [1]. The use of monolithic structures instead of prefabricated ones gives, depending on the type of structure, a weight savings of 5 to 10 % [2]. The service life of monolithic structures is also longer than that of prefabricated ones. Stiffeners, reinforcements, connecting elements and sheathing in a composite structure are made separately and then joined into a monolithic system, where they form an organic whole [3]. The assembly of many parts, each of which may be well designed in itself and as part of the whole, introduces a number of sources of defects. Splitting a design into a large number of small individual parts during design and manufacture adversely affects the weight, cost and accuracy of manufacture, as well as the quality of the outer surface. All this creates the prerequisites for a wider adoption of monolithic structures in aircraft construction practice. However, when switching to monolithic structures, one must increasingly accept a general rise in production cost associated with the technological complications needed to reduce weight and increase the service life and reliability of structures. When replacing prefabricated structures with a single monolithic part, the question of semi-finished products and manufacturing methods arises, given the continuing growth in the size and complexity of such structures.
Recently, methods of selective melting of metal powder by electron beam (SEBM) and laser beam (SLM), which belong to the category of powder bed fusion additive manufacturing processes, have become widespread in production [4]. Powder bed fusion is an additive manufacturing process in which thermal energy selectively fuses regions of a powder bed (ISO/ASTM 52900:2015 [5]). The electron beam and laser processes differ in technical implementation but are close to each other in geometric and energy parameters. It is therefore advisable to consider the basic physical phenomena of concentrated energy flows from a single "energy" standpoint [6]. Parts obtained by selective melting have the following advantages over parts manufactured by other methods: the possibility of obtaining complex curved surfaces, internal cavities, and protrusions located in different planes, with minimal subsequent machining [7]. This is especially important in the manufacture of parts from difficult-to-machine materials. For thin monolithic structures obtained by selective melting, high metal powder utilization rates have been achieved.
The objective of this work is to establish preferred parameters for selective melting by electron and laser radiation to obtain the required geometric, physical and mechanical characteristics of H18N9T and Ti6Al4V alloys used for the manufacture of protective aircraft air intake lattice modules (Fig. 1a).
Selective laser melting of the materials EOS StainlessSteel PH1 (Germany) and PRH18N9T was carried out on an industrial EOS M 280 machine (manufacturer EOS GmbH, Germany) and on an experimental SLM setup (ALAM) with the following technical characteristics: wavelength 1070 nm; beam divergence 0.2°; continuous radiation; maximum power 200 W. Experiments to determine the influence of the main technological parameters of electron beam melting were carried out on an Arcam A2 machine (Sweden) [11]. The machine is equipped with an electron beam gun with a tungsten thermionic emission cathode, an accelerating voltage of U = 60 kV, a maximum power of 3500 W, and a working vacuum chamber with a maximum product size of 350 × 350 × 250 mm. The beam diameter varies continuously, depending on the power setting, from 200 to 1000 microns. The surface of the experimental samples (thin walls) was studied, and geometric measurements were made, using a VEGA 3 LMH scanning electron microscope from TESCAN (Czech Republic). Microstructures and pole density were analysed on a JSM-6610 LV scanning electron microscope from JEOL (Japan) with an attachment for texture analysis. The phase and structural state of the alloys was studied using light, scanning electron, and transmission microscopy. The geometric parameters of the lattice modules were measured on a Werth ScopeCheck multi-sensor coordinate measuring machine for high-precision shop-floor measurements, manufactured by Werth Messtechnik (Germany), and roughness measurements were performed on a Hommel Tester T8000 profilograph-profiler manufactured by Hommelwerke GmbH (Germany). Block strength was assessed experimentally by fatigue vibration testing on a V850-440L electrodynamic vibration bench.
Research results and discussion
According to the granulometric composition tests, the total content of particles outside the size of the main fraction is 20.97 % for PRH18N9T and 8.71 % for Ti6Al4V Arcam Titanium. The arithmetic mean diameter calculated from the sampled data is x k,0 = 52.74 μm for PRH18N9T and x k,0 = 71.54 μm for Ti6Al4V Arcam Titanium. Ti6Al4V Arcam Titanium was found to have a narrow size distribution and a large number of spherical particles, which yields high flowability and reduces the roughness of products obtained by selective electron beam melting. Spherical particles have a small surface area and are therefore easier to spread over the working zone in selective electron beam melting machines. According to the chemical composition tests, PRH18N9T particles have a high degree of homogeneity, because before spraying the melt was heated until the hereditary solid-state structure of the alloy components was completely destroyed, and the dispersed melt droplets crystallized at cooling rates of up to tens of thousands of degrees per second. PRH18N9T particles are characterized by a spherical shape, high flowability, a microcrystalline structure, equiaxed morphology, and a small number of satellites.
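As a quick illustration of the cited particle statistics, the arithmetic mean diameter x k,0 is simply the mean of individually measured particle diameters. A minimal Python sketch follows; the sample diameters are invented for illustration, and only the two reported means come from the paper:

```python
# Minimal sketch: the arithmetic mean particle diameter x_k,0 is the
# plain mean of measured diameters. The sample values below are invented
# for illustration; the paper reports x_k,0 = 52.74 um for PRH18N9T and
# x_k,0 = 71.54 um for Ti6Al4V Arcam Titanium.
from statistics import mean

measured_diameters_um = [48.1, 55.3, 50.7, 57.9, 51.6]  # hypothetical sample
print(f"x_k,0 = {mean(measured_diameters_um):.2f} um")
```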
In order to determine the influence of the SLM process parameters on the structure and properties of the products, experiments were carried out manufacturing rollers from PRH18N9T, varying the laser power, scanning speed, and powder layer thickness. Rollers of PRH18N9T were fabricated on the substrate with laser power from 80 to 120 W and scanning speeds from 80 to 180 mm/s. The thickness of the powder layer applied to the substrate was 50 μm. The length of the roller (melt pool track) was 15 mm in all experiments. For a layer thickness of 50 μm and a laser power of 100 W, stable single rollers of PRH18N9T can be formed at scanning speeds vs = 80-180 mm/s (Fig. 1c). The protective medium was compressed gaseous argon.
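For orientation, the linear energy input delivered to the track is the ratio of laser power to scanning speed, so the stated SLM window spans roughly 0.44 to 1.5 J/mm. A minimal Python sketch follows; the helper name is ours, not the paper's:

```python
# Minimal sketch: linear energy input E_l = P / v for the SLM roller
# experiments (80-120 W, 80-180 mm/s, 50 um layer thickness).
# The function name is illustrative and not taken from the paper.

def linear_energy_j_per_mm(power_w: float, speed_mm_s: float) -> float:
    """Linear energy input in J/mm (laser power divided by scan speed)."""
    return power_w / speed_mm_s

for power_w in (80.0, 100.0, 120.0):
    for speed_mm_s in (80.0, 180.0):
        e_l = linear_energy_j_per_mm(power_w, speed_mm_s)
        print(f"P = {power_w:5.1f} W, v = {speed_mm_s:5.1f} mm/s -> E_l = {e_l:.2f} J/mm")
```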
To study the physical and mechanical characteristics under the selected preferred modes, samples of various shapes were made; their properties are shown in Table 1. To determine rational parameters for the electron beam treatment of Ti6Al4V Arcam Titanium powder, cubic samples were made whose model consisted of five thin walls 100 μm wide (Fig. 1d, 1e). This wall width was chosen so that each wall could be manufactured in a single pass of the electron beam. The electron beam power was varied from 100 to 400 W, and the travel speed was chosen so that the linear energy delivered to the thin wall ranged from 0.25 to 1.25 J/mm. From measurements of the resulting wall widths, an approximating experimental relationship was constructed between wall width and the technological parameters of the process (beam power and velocity). The measured wall widths correlated with the electron beam diameter in accordance with the linear energy Q [J/mm] used. It was found that wall width increases in direct proportion to linear energy, although the walls obtained at linear energies of 0.50 and 0.75 J/mm had almost equal widths. The most homogeneous microstructure was observed in samples obtained at a linear energy of Q = 0.75 J/mm. Further studies were therefore carried out at this linear energy over an extended power range.
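The same relationship can be read the other way around for the SEBM experiments: at a fixed target linear energy Q, the required beam travel speed is v = P/Q. The sketch below also includes a hypothetical linear wall-width model, w = w0 + k·Q, to illustrate the reported proportionality; the coefficients w0 and k are our assumptions, not the paper's measured values:

```python
# Minimal sketch of the SEBM parameter relationship: for a target linear
# energy Q [J/mm], the beam travel speed follows v = P / Q.
# The wall-width model below is a hypothetical linear fit illustrating
# the reported proportionality; w0 and k are assumed coefficients.

def beam_velocity_mm_s(power_w: float, q_j_per_mm: float) -> float:
    """Beam travel speed (mm/s) needed to deliver linear energy q."""
    return power_w / q_j_per_mm

def wall_width_um(q_j_per_mm: float,
                  w0_um: float = 100.0,
                  k_um_mm_per_j: float = 400.0) -> float:
    """Illustrative wall-width model w = w0 + k * Q (coefficients assumed)."""
    return w0_um + k_um_mm_per_j * q_j_per_mm

for q in (0.25, 0.50, 0.75, 1.00, 1.25):
    v = beam_velocity_mm_s(300.0, q)  # e.g. at a beam power of 300 W
    print(f"Q = {q:.2f} J/mm -> v = {v:6.1f} mm/s, w ~ {wall_width_um(q):.0f} um")
```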
Samples for studying the mechanical properties (Table 2) of Ti6Al4V Arcam Titanium alloy powder were made on the Arcam A2 selective electron beam melting machine: powder layers about 50 μm thick were successively deposited onto a corrosion-resistant steel platform pre-heated to 750 °C. The electron beam velocity varied from 240 mm/s to 1200 mm/s. Preferred modes were selected with the task at hand in mind: minimum wall thickness. The ultimate goal of this study was to improve the existing manufacturing technology for protective airplane air intake grilles using additive manufacturing methods. A study of problems in aircraft operation showed a need to reduce the mass of the protective air intake grille while simultaneously increasing its corrosion resistance. When a jet engine is operating, vortex bundles and small tornadoes appear at the entrance to the air intake, and foreign objects are sucked into the channel from the surface of the runway. Foreign objects entering the engine may damage it. The air intake protection grille module is the element responsible for protecting the air intake channel from the ingress of foreign objects, while also presenting an obstruction to the air entering the engine. Currently, protective grille modules are manufactured from corrosion-resistant H18N10T strip; the main technological process is the formation of reliable, strong and tight joints between frame elements and stiffeners by resistance spot welding and soldering. Manufacture of the existing modules is highly labor-intensive: cutting, bending, manual assembly of almost seventy parts, welding, soldering, and the application of metallic and non-metallic coatings, with inspection after each operation. In addition, corrosion and solder-joint fatigue defects are found in service. To fulfill the requirements (Table 3), protective grille modules were designed and manufactured from EOS PH1 StainlessSteel and PRH18N9T powders by selective laser melting, and from Ti6Al4V Arcam Titanium powder by selective electron beam melting (Fig. 1f). A prototype lattice module was made from EOS PH1 StainlessSteel powder to assess the feasibility of its manufacture by selective laser melting. Modules made of PRH18N9T and Ti6Al4V Arcam Titanium powders passed fatigue vibration tests and tests for resistance to external factors according to State Standard RV 20.39.304. To study rapidly changing deformations, the modules were mounted on a bench and exposed to vibrators (pulsators) that reproduced the working loads on the modules; strains were recorded with an oscilloscope. The tests were carried out on an electrodynamic vibration stand and in a salt fog chamber according to State Standard RV 20.57.306. The modules were fixed to the bench table and fitted with vibration sensors that monitored the loading parameters and the state of the module. The results showed that the lattice modules are not subject to destruction during aircraft operation. Vibration tests demonstrated the ability of the modules to provide the specified strength and durability. After a test cycle in the salt fog chamber, no traces of corrosion damage were detected on the modules. The minimum thickness of the inclined walls of the module frame was 0.3 mm.
Conclusions and Recommendations
Arcam Titanium Ti6Al4V and PRH18N9T powder materials were selected and studied. Preferred SLM regimes were determined for PRH18N9T, and the relationships between SLM parameters, structure, and the physical and mechanical properties of the samples were established. Selective electron beam melting is considered the most promising method for the additive production of titanium and titanium alloy products in aircraft manufacturing. The effects of the main technological parameters on thin-wall width in selective electron beam melting were determined. Differences in the microstructure and surface of the experimental samples obtained at different linear beam energies were established, and the optimal linear beam energy was found. We consider it advisable to conduct further research over an extended power range.
Based on the test results, both protective grille modules can be used in the manufacture of aircraft. The transition of the protective air intake grille module from an assembled unit to a monolithic part, while replacing the steel alloy with titanium, provides the following advantages:
1. Weight reduction while maintaining the necessary strength and resistance to high-frequency vibrations;
2. High corrosion resistance in a marine environment;
3. A reduction in manufacturing complexity due to the elimination of soldering, welding, fitting, assembly and subsequent straightening work;
4. All parts are exactly the same, which ensures interchangeability.
CROSSING THE BORDER. MIGRATION FLOWS IN THE MEDITERRANEAN SEA
Alessandro Dal Lago wrote in 1999 that the equality of all human beings and their right to move freely around the world are obvious principles, although they lack clear legal codification. Nevertheless, «human kind is divided into majorities of national citizens holding rights and with formal guarantees, and minorities of illegitimate foreigners (neither citizens nor fellow country men and women) who are legally and factually denied these guarantees» (Dal Lago, 1999: 9). Today, policies and representations of migrants in Italy and Europe confirm the increasing disintegration of values such as humanity and rationality in the face of a renewed (not only cultural) racism. The current economic crisis and the return of nationalism in the wake of ongoing globalization are the framework within which a zero-tolerance policy toward these masses is reiterated. In the age of global mobility the discussion is being reopened, as if we were suddenly back in the darkest Middle Ages, on people's freedom to move in a shared space in search of their own life and future plans. The acknowledgement of a ius migrandi and the related debate, together with a new possible epistemology of frontiers (Rodríguez Ortiz 2011, Trigo 1997), continue to be themes for reflection, no longer and not only for social science.
The following writing is the result of a project carried out by a group of researchers at the University of Salento called "H.O.S.T. - Hospitality, Otherness, Society and Theatre", which tried to describe migratory experiences by overcoming barriers between different subjects and by interacting with the artistic environment linked to the Astragali city theatre in Lecce.
Our aim was to consider the migration theme while keeping two perspectives intertwined: sociological research, and artistic research and practice. We chose to follow a path, both symbolic and physical, through a number of key places, the stops of contemporary migration in Europe: Salento, Paris, Cyprus, Cadiz, Nicosia, Patras and Zakynthos. Indeed, these today embody the so-called Mediterranean model of migrations (Perrone 2007): a model that developed slowly but steadily from the second half of the 1970s when, following the oil shock, the decades-long demand for labour from north-central European countries was interrupted. This caused migratory flows to shift to the countries of the northern Mediterranean shore which, at that time, lacked a coherent system of immigration rules.
In these places of border and crossing, sociological research and artistic practice substantiated each other by giving a voice to the experiences of migrants, artists and researchers. The stop in Salento was particularly meaningful for us researchers because we questioned again our own status as migrants from our very birth in a borderland. Moreover, by choosing for the research in Salento a particularly complex category of migrants, that of asylum seekers, we tried to share, over a number of weeks, moments of everyday life with the people we decided to interview, in the facilities where they were lodged through the SPRAR project (Protection System for Asylum Seekers and Refugees). Sharing everyday life experiences helped us reconstruct migration stories that are partly distinct from the life stories that asylum seekers are often forced to tell, in the order required by the legal and administrative procedures for the recognition of refugee status.
The following is a short theoretical reflection on migrations and on the concept of the border that has accompanied us since the beginning of this project.
Borders in the age of global mobility
Marking borders is a way of exercising control over a population, a technology of power based today on a renewed governmentality in the Foucauldian sense, able to modify the balance between security and freedom by enlarging the surveillance space inside and outside states, starting from their boundaries and the power relationships underlying them. The favourite targets of this modern governmentality of the border are immigrants: a new experiment in eliminating the human surplus, added to the criminalization of those individuals traditionally considered at the margins of society because they are marginal, such as drug addicts, the poor, and deviants. The war on different people, on strangers and foreigners, is now a total social fact because it penetrates current neoliberal societies through communication practices founded on the power asymmetry between individuals enjoying rights and individuals denied rights, between people and non-people (cf. Palidda 2009 and Dal Lago 1999). Security narratives construct new threats and dangers linked to the indiscriminate access of external enemies who would strike at the sovereignty and security of national communities. The migration flows that have involved most European countries over the last few years have given rise to a form of "apartheid" for immigrants from "non-member countries": a supposedly dangerous people in conflict with the need to patrol state facilities, who are put under surveillance, deprived of basic human rights, and forced to live permanently on the border, «neither absolutely inside nor totally outside, […]. It would be naïve to think that the development of such an institutional racism in Europe has no relation to the ongoing […]». The management of migration, at a European level and beyond, has increasingly turned into a security-sovereignty issue supported by new transnational actors acting as "security bureaucrats beyond the state" (Bigo 2000) who, through the use of institutional knowledge and disciplinary technologies, have not only defined new risks but also transferred a strengthened permanent control from borders to city centres. Over the last few years a new European regime of border control has been created, aimed not at strengthening a fortress wall but rather, according to Mezzadra, «at managing a process of differential inclusion of immigrants.
[…] Even though the control policies of the European external borders have rhetorically tried to stop refugees' and migrants' movements, they have not had the effect of sealing off borders. No fortress wall has been built but rather a "barrage" system, a "filtering" mechanism, a system to control mobility» (Mezzadra, 2007: 31-41). A number of authors state that the current de-bordering process would make global borders fluid and porous, thus enabling an increasingly free circulation of people and goods. Actually, the processes we are witnessing are merely an economic liberalization of borders and their security rearrangement: a border regime, according to Sassen, aimed at the management of differential mobility processes regarding different categories of people and goods (Sassen 2009, Sassen 2008). This is a police narrative and practice that legitimates states in protecting their citizens through the systematic use of violence on frontiers, which are now becoming permanent territories of exception. It is a filtering system more than a blockade against undesirable minorities: a management of the mobility of migratory flows which aims at quick border-crossing for "good faith travellers" and at discouraging the access of those who intend to elude immigration laws. This two-fold objective perfectly embodies the paradox of international policy on migration issues: a border reconfiguration policy resulting in both security and insecurity (Rea, Jacobs, 2011). This policy, through visible means such as border check operations and administrative detention centres, and invisible means such as administrative (and often arbitrary) procedures established to filter categories of immigrants and to gather information on them, puts all kinds of human mobility under surveillance, presenting each immigrant as a "migratory risk" (Rea 2013, in press, Processes of bordering in the age of mobility). The result of these coercive and preventive measures is the creation of categories of "desirable" and "undesirable" people, even before they arrive at the frontiers (Bigo 2010). Are borders only something marking separation, inclusion through exclusion, or rather a space for hybridization, mixing, and an opportunity to approach the Other? Considering the border as a wider category, without excluding its violent character, led us to add autobiographical traces starting from our physical and geographical position on the border. This writing was drawn up on the outskirts of Europe: southern Italy, Salento. However, our work began in Brussels, the capital of Europe, where we moved for bibliographical reasons, and ended in Salento. Thus, it is a reflection that started on the border, at the margin, moved to the centre during its theoretical working out, and then returned to the border. A reflection that, in a regime of geographical and physical immobility, would not contain such remarks. Beginning a journey while carrying out a study on migrations induced in us a feeling of disorientation in both a physical and a symbolic sense, an openness to multiple experiences that took us a little away from the main subject and brought us closer to feeling as though we were strangers in a country with different roads, language and nuances. A journey which forcefully changed both our temporal and our spatial location. A different temporality at a different speed: the meridian rhythm of the start became the syncopated rhythm of the North, our destination.
This shared feeling of initial disorientation was the start of an attempt to imagine the conditions of those who travel for reasons other than ours. It is indeed true that «[…] in the current world situation, according to social science, everybody has some of the immigrant's features. We are living on multiple borders. There are many kinds of migrants and many ways to be a migrant» (Floriani 2004, from Jedlowski: 6).
Writing while travelling, we felt, albeit in a marginal and temporary way, the tiredness, estrangement and nostalgic loneliness felt by those "vagabonds"/migrants, as Bauman calls them, on their arrival in a foreign land. It was a way to reconsider a number of categories taken for granted, from a different point of observation (Bauman 1999). This is also because travellers who, according to Matera, «feel the distinctive "disorientation", do not recognize usual places and forms, must twist their conceptual tools to understand and their linguistic tools to describe. The travel report is based on the ethnographic monograph, but reflects first of all the reawakening of senses now filled with the perception of otherness, which is then used to construct a thinking and writing system aimed at interpreting it» (Matera, 1986: 83). Such a thinking system, which develops through the acknowledgement of one's new placing, is able to face any movement or change, tends to shift any intellectual sedimentation away from the vision of the elsewhere, and builds new spaces of speech and new representations. Garofalo observes that studying migrations is a «journey without any definitive destination: when the journey starts we must be ready to question our safe categories and concepts from both the point of view of personal perception and a theoretical analytical point of view» (Garofalo, 2012). Our vision does not claim to be the right one but, moving from one place to another, it is at least privileged to be at the margins and in the centre at the same time, and to avoid the non-vision we would have if we stood still. Rethinking ourselves while on the move has made the method more difficult but has certainly preserved the interpretative fluency and openness which are fundamental to an analysis of migratory flows. We have therefore tried to gain access to the dialectics of living on borders, of which migrants are the exemplary figures, the border being considered as a place where differences touch and test their limits through each other (Mezzadra 2011). We have considered ourselves closely linked to the idea of the border, starting from our birth in a borderland, which means, as Carmelo Bene put it, addressing "the real/imaginary" (Carmelo Bene 1983).
A liquid border
A Mediterranean outskirt, Salento is a borderland whose people are forced to stay on the border. Here the border is the Mediterranean Sea, which is not only the starting point but also the landing point. As Braudel suggests (Braudel 2003), the Mediterranean is to be understood as space/movement, where nothing is immobile but everything is transformed, contaminated, hybridized, leaving visible and invisible traces on the surface and on the bottom: «The Mediterranean is a multitude of maritime and land routes, linked amongst themselves, hence cities, from the most humble to medium and large, all holding each other by the hand. Roads, more roads becoming a system of circulation. Through this system we can fully understand the Mediterranean Sea, which can be defined as space-movement in its broadest sense. The landscape and the sea (the basis of its everyday life) are added to the gifts of movement. The faster the movement, the larger the amount of gifts, which take shape in visible consequences» (Braudel 1992: 51). The Mediterranean Sea is the limes which urges us toward a kind of interaction able to catch the sense of differences, the heterogeneity of parts related to each other; a kind of interaction through differentiation experienced especially by younger generations, who are more permeable to cultural contamination (Cusumano 2010).
Speaking, writing, telling and listening to migrations meant speaking, writing, telling and listening to ourselves and the Mediterranean. Cassano wrote in his Pensiero meridiano: «today Mediterranean means to put the border in the centre, the line of contact and division between people and civilizations […] this sea is at the same time internal and external, inhabited and crossed; this sea-border interrupts the rule of identity and aims at forcefully hosting division» (Cassano, 2007: 23). A division which, in this place, is the awareness of being mixed from birth. In Salento, on days when the tramontana blows, we can see the Albanian mountains, Radio Tirana frequencies break into our car radios along the Adriatic coastline, and we receive welcome messages from Greece on our cell phones. It is also a linguistic Babel, when we hear the sweetness and musicality of the griko of our parents in some villages of the "Grecìa Salentina" 2. Our somatic features often conflict with one another, as do our landscapes: blond hair, light eyes, very dark skin, owing to Turkish, Norman, Spanish and Greek dominations. Living in a land like Salento means living in a land antithetical to any purity and fundamentalism, where a monolithic and integral "we" does not exist because our "we" is filled with "others" (Cassano 2007). This does not mean making an apology for our territory but rather telling an experience of marginality, or better, telling from an experience of marginality. Marginality is here chosen as a place of residence, which becomes «a space of radical openness and opportunities for the production of a counter-hegemonic discourse […] present not only in words but also in the ways of living and being» (Bell Hooks, 1998: 68-72).
A borderland whose people are forced to stay on the border is «a vague and undetermined place created by the emotional residue of an unnatural boundary. It is in a constant state of transition»; as the Chicana scholar Gloria Anzaldúa stated, the people living on the border are los atravesados: «the squint-eyed, the perverse, the queer, the troublesome, the mongrel, the mulatto, the half-breed, the half dead; in short, those who cross over, pass over, or go through the confines of the "normal"» (Anzaldúa, 2006: 29). Salento is a borderland filled with passageways, contradictions and conflicts, but it is also a land of crossings and meetings, just like any other borderland. Living on the border means giving a new meaning to one's identity, and our identity was crossed and fecundated by migrants. Furthermore, ours is a border placed in the South, where, as happens in other southern parts of the world, the resulting thought is a meridian thought in the sense suggested by Cassano, a thought one can feel inside: «where the sea begins, the shore breaks the earth's fundamentalism (first of all that of economy and development), when it is clear that borders are not places where the world ends, but rather where different people meet and a new challenge for the relationship with the other becomes true and difficult» (Cassano, 2007: 7). Migrations have thus been a pretext to tell ourselves, to listen to ourselves again: an education in listening, to find out that we are migrants in migrants' stories, and "I" becomes "You", because no identity exists without otherness inside and outside ourselves. Our aim was not to tell and build migratory identities, but rather to give a new voice to migrants. We tried to stay silent in order to listen to their voice, which produced a "surprising representation of ourselves" (Cassano, 2007: 34). Considering migrants without studying, judging and analysing them through external categories has meant returning to them the dignity of autonomously thinking individuals, thus interrupting a long tradition in which they are thought by others. As a consequence, the forbidden speech of the infamous has also been our speech, and their story has become a narrative practice of shared resistance. Nevertheless, we may well have produced further stereotypes and clichés while translating and choosing one interpretation of those stories and voices; in any case, we could not avoid an autobiographical, narrative and subjective approach to migrations. As the Cameroonian writer and anthropologist Geneviève Makaping states (Makaping, 2001: 53), the title of her book contains a question posing a doubt (and also a challenge) that was the main subject of our work: "what if the others were you?" Yes, what if we were the others? We have tried to be constantly moving in order to change our point of observation, both internally and externally, while trying to listen to the countless voices of the Mediterranean. The Mediterranean Sea is not a borderless ocean but a sea between lands; unlike other seas, it poses the problem of the relationship between multiple identities and of the difficulty of necessarily living together. The Mediterranean is a border sea, and its position could turn it into a privileged place for intercultural dialogue. Remaining on the Mediterranean, at the margins, has brought back the problem of the relationship between different identities and cultures that have to live together in such a difficult but necessary way.
This is not a question of indulging the romantic temptation to imagine a mythic place, a utopian Mediterranean landscape. Instead, it is a question of thematizing a place, real and symbolic at the same time, that can be an alternative to the oceanic drifts of globalization (Cassano, Zolo, 2008); a place able to grasp and question emerging themes such as immigration policies, the relationship between Islam and modernity, and the Mediterranean roots of a Europe that has difficulty finding its own legitimization, autonomy and identity.
Narration beyond borders
Our attention focuses on the actors of immigration, starting from the autobiographical approach of a sociological classic: The Polish Peasant in Europe and America by Thomas and Znaniecki. Moreover, the young Weber's works on the conditions of agricultural workers in the East Prussian provinces offer a detailed study of the migrations of German peasants 3. What is striking in Weber's work is his consideration of migrants' subjective reasons, revealing aspects that social science would study only in the 1920s, with the analyses published by the researchers of the Chicago School. In his in-depth study of the migration dynamics then developing in the Prussian countryside, Weber highlighted the subjective point of view of the young German migrants, identifying their will to escape the authoritarian and paternalistic oppression of landowners as their main reason for abandoning their land and seeking freedom.
In his work Diritto di fuga, Mezzadra reminds us that we owe to Weber the intuition of the «origin of migration as an individual refusal, a claim of a right to secession and escape from the patriarchal organization in force in the East Prussian territories, which becomes a social process to the same extent that it appears standardized» (Mezzadra, 2006: 48). Furthermore, Mezzadra underlines the need to highlight the subjective features of migrations, which question «the migrant figure of a weak subject, hollowed by hunger and misery and needing care and help, which has been widely diffused, particularly in Italy, over the last few years» (Mezzadra, 2006: 11). Discussing the subjective character of migrations, and their natural unpredictability or turbulence (Papastergiadis, 2000), means setting aside the prevailing interpretation of migrations as a systemic event exclusively linked to objective causes. This does not mean removing such causes, but rather trying to return a personal history to subjects too often considered "without history" (Mezzadra, 2006: 52). The "hydraulic" macro models, by privileging the push factors in the country of departure and the pull factors in the countries of arrival, linked to an economic determinism, are useful for explaining the general features of the migration event and some of its objective reasons, but are unable to read the deep subjective nature of a migrant, who is not only a pawn, mere workforce in the hands of a system, a world subject to market rules. Choosing the subjective option also means acknowledging and legitimizing the exercise of the right to escape described by Mezzadra, which reveals «the irreducible singularity of a migrant able to make subjective choices, highlighting the exemplarity of the migratory experience as a limit of the modern political experience. This limit forces us to re-think the overall reference framework so as to strengthen the ongoing reflections for a political analysis of contemporary migrations» (Mezzadra, 2006: 52 and following). The material used for this research project consists solely of the stories that migrants tell. They tell us their stories, but we are neither deaf nor aphonic: we let these stories involve us, breaking the limit, and these stories become ours as well. We chose biographical interviews because we thought their use could open an alternative path to knowledge and sociological research, able to interrupt a prevailing belief about migrations. The autobiographical method and its underlying retrospective reflection can activate the discovery of new, unexpected identities, communities and links. As a matter of fact, the constituent narration of biographical interviews is a cognitive practice for both the interviewed and the interviewer because, as Jedlowski suggests, there are always two people narrating and sharing a story, one narrator and one listener (Jedlowski 2009). It is a reflective practice in which everybody recognizes themselves, in themselves or in the others, since the description of the other by oneself is always a definition of the self. In autobiographical narration this generates an attempt to recognise and redefine the self as a dynamic instance that can give birth to a manifold, wandering, nomadic identity lacking stability and able to present various dimensions (Di Stefano, educatt.unicatt.it).
Indeed, narration triggers a new process of sense construction and re-construction that is not confined to one's biography but can be generalized, because autobiographical narration is composed of memory production and identity. In this sense the story, the autobiography, can have many individual and social meanings; it can become a means of memory transmission and of re-definition of personal and collective identity. «In a certain sense an individual», as Pecchinenda suggests, «has got no story, but is the story» (Pecchinenda, 1999: 176). Stories are not only produced through language, which can lead to different versions; narration soon becomes fundamental for social interactions: «We constantly construct and reconstruct ourselves to meet the needs of the situations we encounter, with the guidance of our memories of the past and our hopes and fears for the future.
[…] There is now evidence that if we lacked the capacity to make stories about ourselves, there would be no such thing as selfhood.
[…] The construction of selfhood, it seems, cannot proceed without a capacity to narrate. Once we are equipped with that capacity, we can produce a selfhood that joins us with others, that permits us to hark back selectively to our past, while shaping ourselves for the possibilities of an imagined future». (Bruner, 2002:86-87) For the collection of life stories we deliberately tried to tell and listen to a particularly delicate category of migrants: asylum seekers. I will not discuss the reasons for such a choice since for us it was a "natural" fact. A natural sociological and human curiosity to understand women and men who arrive at our border neither for economic reasons nor for an improvement of their life standards, but rather to protect life itself. People who, in the countries of arrival, are often subject to further humiliations imposed by a widespread tragic common sense according to which asylum seekers are nothing but beggars resorting to expedients to obtain the access to "hosting" territories, also by testifying violence and persecutions perpetrated against them (but never happened), or dead bodies that, according to the police, have never existed (Vassallo 2010). So far very few, at least in our region, has been told on this category. Still less has their voice been listened to. Moreover, it is a category we chose because it is a challenge for narration. Their and our narration.
It has been a challenge for us because the stories of asylum seekers are often reticent, sometimes because telling pain is so terrible that one remains dumb and inaudible at the same time; «sometimes stories are inconsistent and actually false owing to migrants' desire to adapt their stories to the provisions ruling the right to asylum» (Jedlowski 2012). Indeed, on several occasions we noticed in migrants' voices not the expression of their subjectivity but the construction of a speech order that is abstract and forced by our bureaucratic procedures for obtaining refugee status. A series of pretenses most often followed by «false statements on their arrival in the attempt to hide their identity or to highlight the ethnic or identity-making features considered reliable in view of the right to asylum, […] and a series of progressive shifts of migrants to forms of alteration of the self which sometimes are irreversible and cause interior fractures as well as renewed grounds for expulsion» (Triulzi, 2007: 10). We chose to hear their stories because the right to asylum seems to have become a matter for abstract and invisible officers of national and international institutions, who evaluate applications for asylum on the basis of a weak legal regime, summing subjective opinions that often reveal a discretionary character resulting from «technocratic power-wielders that are subject to political and ideological bias» (Valluy, 2009: 45).
The use of stories has assumed an extraordinary value of resistance, and life stories have become counter-narratives, antenarratives: «or better, narrative fragments without any (current or future) consistency and organization of true stories, but with the expression of the possibility of telling in ways other than the current.
[…] they continue to be potential repertoires of alternative visions of facts that, in case of conflicts, can turn into embryos of stark counter-narratives, resources for the construction of alternative communities» (Jedlowski, 2009: 36).
Alternative communities resulting from "narrative communities" as defined by Jedlowski, or better, «a group of individuals who accept to exchange the roles of narrators and listeners».
A narrative community is perhaps what we have tried to construct by opening a space, a friendly environment where every story, every word has had an equal right to exist and resist, without interdiction, without any kind of exclusion imposed by the prevailing rhetoric. We believe that only by attempting to return the right of speech to those so far deprived of it can new links be generated, and thus new forms of community able to find a sense of the future together.
Post-scriptum
At the conclusion of our study on migrations we have realized to what extent this macro theme can question the very foundations of democracy, selfhood and citizenship. Migrants cause us to radically re-think these themes by placing them back in the public field in a new form, proposing new challenges, new pictures, new languages and new practices. Their mobility itself is a strategic resource for starting processes of change, not only in the countries of arrival but also in the places of origin 4.
Far from being a category of revolutionaries, migrants can trigger processes of subjectivity construction that introduce new legal, political and social issues. They can be political factors of deep transformation, starting from the urgent need to rebuild the relationship between rights and citizenship, which lies at the basis of the balance between universalism and particularism in citizenship issues; a balance that has to secure for migrants «those political, civil and social rights that enable us to participate in all aspects of common life as full members of society and that would promote their sense of belonging and help soften possible conflicts» (Mezzadra, 2006: 87-88). Therefore, migratory experiences, as an opportunity to enlarge the democratic and epistemological standards of the cultural, racial and distributive pluralization of democracy, can make room for and guarantee the plausibility of anti-hegemonic democratic practices and concepts, outlining new emancipatory horizons.
Adulthood Cruelty: A Psycho-analysis under PBC Syndrome
Introduction
Today's world is far more competitive than earlier forms of socialization, living and livelihood. Modern, rapidly changing socialization creates many difficulties: people constantly face different kinds of pressure within relationships, whether personal (professional, marital, survival-related) or interpersonal, in their texture of thinking, feeling and reciprocal behavior. Adulthood cruelty arises and accumulates under streaming anxiety and stress, where compromises with the surrounding environment install retaliation in the mind's setup. Developmental psychology can open up the scientific study of a person's attitudes, allowing psychological anticipation to unveil a particular human character, behavior and probable deviation toward endangerment. When a particular person changes over the course of his or her life within a short but sustained time span (a few days, weeks or months), he or she requires positive mental support, without any gesture against his or her usual self-regard, because he or she may carry PBC Syndrome in the unconscious mind.
Objective
The prime objective of this study is to identify the causes and factors behind extreme cruelty in adult personalities. Cruelty is usually attributed to socially deviant persons, to those mentally corrupted by their surrounding environment, to those driven by a desperate struggle for living and livelihood, or to those who willingly choose shortcuts to living and livelihood. But how do apparently good, self-established personalities become victims of PBC Syndrome? And why do post-marital personalities emerge as sadistic killers of extreme cruelty? Identifying the causes behind these questions is a further objective of this study.
Background
The background of this article is experimental knowledge of the psychological management of educated (secondary and higher) adult personalities of different ages and socio-economic statuses who were involved in acts of extreme cruelty. The personalities and behavior of four married, educated, self-established participants (three male, one female), aged between 30 and 45 years, were anticipated and engaged through interaction. The cases were examined as a non-clinical experiment. The first participant was convicted of the homicide of his girlfriend. The second and third participants were accused of the brutal homicides of, respectively, their own daughter and sister, who had entered inter-caste marriages against family consensus. The only female participant was convicted of the homicide of her own husband over a cross-sexual friendship.
All the participants were selected on the basis of their present and past personality and behavior and their cooperative motivation, in the presence or absence of personality makeover and anxiety exposure.
Analysis
Personality leads a person's behavior. This psychological study draws on a range of fields within the humanities, such as personality development (cognitive personality) and personality makeover in adulthood or post-marital adulthood within socialization. From the participants' personality and behavior, especially their past and present veiled experiences and their interpretation of cruelty toward others, a pattern emerges that we identify as adulthood "point blank cruelty", or PBC.
Why, then, do adults come to exhibit point blank cruelty (PBC)?
Psychological observation of the participants suggests that when a personality makes a self-committed decision to overcome another personality, or to remove another personality from the path of its own asserted self-regard, the decision-making personality steers its mind toward sadistic thinking at the point of dark brutality within the Mind Circulation Flows (MCF), and the resulting reciprocal behavior becomes point blank cruelty, or PBC.
When does PBC transform into a syndrome?
Cruelty is a natural instinct of the adult character; it becomes part of the personality soon after the hormonal secretions of sexual maturation stir sexual desire toward coition. From another angle, one may say that sexual arousal and physical intimacy are never fulfilled without cruelty and its trusted application to a partner during the process of personality development.
Remarkably, sex is a magical feeling for the adult personality under cooperative coition within true appreciation. It can be a biological panacea under high levels of stress, or it can tip into unorthodox cruelty.
Ironically, this cruelty surfaces when the willingness to "take responsibility" for a trusted partner in a relationship (the sexual norm) is diverted toward multiple relationships (more than one sexual partner) through reluctance, avoidance or negligence in the post-sexual or adult relationship. Such behavioral approaches induce a demand for extra care in the personality's desires and committed responsibilities. If the personality's motivation becomes involved in sexual resuscitation through a secondary "take-over responsibility" and the seeking of trust, then impairing changes in mood, energy, thinking and behavior (as in bipolar disorder) stir the emerging relationship under silent anxiety attacks. Most of the time the person is trapped by this dual "take-over responsibility", where commitments to the agreed (spousal) responsibility of the post-marital relationship create an extra orientation of demands under humanity and socialization that silently engulfs the original personality. In repercussion, the motivation in thinking, feeling and decision-making substantially accumulates anxiety and stress in the Mind Circulation Flows, creating single-track thinking at the point of dark brutality under "Pressure Circulation Flows", and PBC Syndrome arises.
What are the symptoms of PBC syndrome?
This psychological study of PBC syndrome identifies a few specific symptoms through the observation of personality behavior and motivation. Note that a person does not become a victim of PBC syndrome unless at least five of the listed symptoms are present together in his or her behavior and motivation; more specifically, only when at least five of these symptoms accumulate in one personality does that person become a victim of PBC syndrome.
Symptoms
Moreover, people with calm, gentle, and innocent-looking personalities carry a larger probability of falling into PBC syndrome. This happens because human beings change over the course of mature adulthood and of a marriage: the commitment made at marriage begins with the trust to take on any responsibility, whether sexual availability, responsive activity, or romantic love-making. The partner (most often the female partner) may make the crucial mistake of misjudging the adult relationship in its post-marital form, proceed on the basis of her own motivation and involvement, and thereby lay the basic platform for PBC syndrome.
How does PBC syndrome progress?
Analysis of personality motivation and behavior shows that the "committed responsibility" of adulthood or of the post-marital relationship generates extra demands, and the "take-over responsibility" is gradually perceived as monotonous next to the partner's more appreciated intra-personal and interpersonal (family) relationships, in which the spouse largely reclaims a "take-away responsibility". The contrast in personality begins here: the willingness to take responsibility, the committed responsibility, and the emotional trust slide away from the person's emotional and physical need for sex and gratification, buried beneath the adjustments demanded by socialization and by personal and spousal relationships. The balance of personality motivation is then subordinated to family planning, and motivation narrows into single-track thinking, which marks the preliminary stage of PBC syndrome.
It also appears that persons in adulthood or in a post-marital environment who regard the marital relationship as completed in the sense of family planning (parenthood) mostly lose interest in the relationship as a source of sexual and emotional fulfilment. They become motivated toward a sheltered, trusted sexual revival elsewhere, and the personality gradually installs multiple anxieties and stresses in the mind-set: the mind circulation flows accumulate single-track thinking within the immediate environment, and the personality finally discharges extreme cruelty on the trusted partner.
Once a person experiences anything like PBC syndrome, his or her behavior becomes more introverted in carrying the "trusted responsibility" until it discharges in an extreme outburst. The most surprising finding of the behavioral analysis is that the person returns to an almost normal state of feeling after the PBC outburst, that is, after the exposure of cruelty.
What factors lie behind PBC syndrome?
The study of PBC syndrome seeks the factors behind the outbreak of extreme cruelty in otherwise appreciated, trusted personalities within humanity and socialization. The most important finding of this study of adult personality behavior and motivation in cruelty exposure is that most of the participants disclosed that they had experienced sex, or sexual desire, with a personality outside the committed relationship and had become involved in a take-over commitment to carry responsibility for that person. Their committed (family) responsibility then came to discriminate against the reciprocal trust, scrutinized in the light of the cross-friendship sexual relationship.
The adult personality emerges as an early victim of PBC syndrome through fear of the exposure of this cross-sexual experience. The person's motivation is trusting and possessive within the relationship, and claims the same from the counterpart; but when the expectation of a trusted relationship is met instead with what feels like a breach of trust, the person silently installs anxiety in the mind-set while still trying to "take responsibility" for the commitment. The accumulating stress gradually distorts or misinterprets the real perception of emotional need or sexual revival in the post-marital innings or in mature adulthood. Fear of such exposure shapes assumptions in the MCF, and the personality covers itself under patterns resembling multiple personality disorder (such as ETD, bipolar disorder, or schizophrenia-like states), fouling the real personality behavior. As a result, the person can find no second option for the distorted thinking or paralysed feeling that would alleviate the mounting stress, and becomes a victim of PBC syndrome.
Management
The study of adult personality motivation and post-marital psychological behavior toward the trusted surroundings exposes the hidden need and desire for a physical and emotional sexual relationship, and the repercussions of its neglect, or of sexual revival through a secondary trusted commitment. These are analysed in the light of the background information, comparing past and present mental health with personality development and cognitive formation.
In the study of adult human personality and behavior within social relationships, it is a genuinely difficult task to identify the real cause behind an adult's outbreak of extreme cruelty in his or her surroundings, across the different cognitive foundations through which personality development accumulates the motivations that penetrate the well-built relationship between spouses and their hidden obligations, responsibilities, and commitments.
How is PBC syndrome diagnosed?
PBC syndrome is fundamentally a psychological issue, although the influence of anatomy on personality development and mental health care cannot be denied. Its management emphasises "real value diagnosis" through insightful observation of personality behavior. In the early stage, when the personality is going through multiple behavioral changes in its close relationships and submerged anxiety is disturbing motivation, the person should be taken up under a preventive diagnostic process. The diagnosis of PBC syndrome may be divided into three steps. First, medical examination: the suspected person should be examined by a doctor and, where needed, a specialist, to confirm the absence of any severe anatomical disorder.
Second, psychological counselling: the person should then be treated under the joint coordination of a psychiatrist and a psychologist. The psychologist counsels on mental health care and assesses whether the person carries PBC syndrome. If so, the person is best hospitalised under close monitoring and medication, so that the complications can be treated and cured early.
Third, therapy: once a person has been identified as suspect and examined by a doctor, psychological counselling is needed within a few days to prevent larger-scale anxiety from installing itself in the mind-set. Supportive care and therapy are the most important part of the treatment of PBC syndrome, helping the affected person regain the confidence to confront issues and problems as they arise.
Conclusion
In conclusion, this study of adult cruelty broadly exposes the human tendency toward sexual revival under a secondary "take-over responsibility", in which the agreed "committed responsibility", strained by demand orientation, emotional negligence, and the frustration of unhappiness, becomes associated with the early stage of PBC syndrome.
Adult maturation is a line of compromise in which the personality independently makes decisions about living and livelihood, drawing on the varied cognition installed along the journey from infancy to adolescence that we call personality development. Developmental psychology examines the influences of nature and nurture along this path.
This study observes that a personality carrying inconsistent sexuality easily falls into a high level of anxiety and stress under the committed responsibility, as the spouse reclaims thoughts, feelings, habits, likes, and dislikes, and thereby accumulates the early stage of PBC syndrome.
Etiologically, PBC syndrome has an early or acute stage and a recurrent stage of mental illness within the immediate environment. At the early or acute stage, the possibility of prevention depends on how behavioral motivation is managed under close assessment while the relationship passes through the demand-oriented phase of personality behavior. PBC syndrome also arises through fluctuating motivation within the family relationship, where the demand factor becomes a stimulating agent for the accumulation of anxiety and stress.
Finally, this study of adult cruelty, spanning sex, sexual desire, and sexual revival in mature adulthood and the post-marital relationship under PBC syndrome, finds that aggressive personalities carry a lower probability of becoming victims of this psychological disorder.
Unmet need in rheumatology: reports from the Targeted Therapies meeting 2019
Objectives To detail the greatest areas of unmet scientific and clinical needs in rheumatology. Methods The 21st annual international Advances in Targeted Therapies meeting brought together more than 100 leading basic scientists and clinical researchers in rheumatology, immunology, epidemiology, molecular biology and other specialties. During the meeting, breakout sessions were convened, consisting of 5 disease-specific groups with 20–30 experts assigned to each group based on expertise. Specific groups included: rheumatoid arthritis, psoriatic arthritis, axial spondyloarthritis, systemic lupus erythematosus and other systemic autoimmune rheumatic diseases. In each group, experts were asked to identify unmet clinical and translational research needs in general and then to prioritise and detail the most important specific needs within each disease area. Results Overarching themes across all disease states included the need to innovate clinical trial design with emphasis on studying patients with refractory disease, the development of trials that take into account disease endotypes and patients with overlapping inflammatory diseases, the need to better understand the prevalence and incidence of inflammatory diseases in developing regions of the world and ultimately to develop therapies that can cure inflammatory autoimmune diseases. Conclusions Unmet needs for new therapies and trial designs, particularly for those with treatment refractory disease, remain a top priority in rheumatology.
Background
The Advances in Targeted Therapies meeting (ATT) has met annually for 21 years, bringing together clinical scientists and immunology and molecular biology experts from around the world. The meeting focuses on clinical and translational research in immune-mediated inflammatory diseases (IMIDs) and stimulates collaboration between basic scientists and clinicians. The meeting's objective is to update participants on the latest insights into disease mechanism(s) and pathophysiology and on recent developments with both existing and novel targeted therapies in the field of IMIDs, with a focus on rheumatological diseases. Previously, a consensus document describing the recommended use of targeted therapies within rheumatology was produced from this meeting. 1 However, with the expansion of targeted therapies and the recent clinical recommendations published by both the American College of Rheumatology and the European League Against Rheumatism, 2-4 a document covering all targeted therapies across all disease indications became too complex and voluminous as a single manuscript. Accordingly, the annual meeting's output was modified to discuss key unmet needs within the field, consistent with the meeting's underlying objective of promoting innovation and collaboration. 5 With the 2019 meeting, we conducted a similar process to review and update these unmet needs, but in this case, prioritise and highlight the most important needs in the field.
Key messages
What is already known about this subject? ► Key unmet needs in field of rheumatology clinical and basic science research have been highlighted previously, but vary over time as the field progresses.
What does this study add? ► The Advances in Targeted Therapies meeting (ATT) focuses on clinical and translational research in immune-mediated inflammatory diseases (IMIDs) and stimulates collaboration between basic scientists and clinicians. With the 2019 meeting, we reviewed, updated and prioritised the unmet research needs in the field. ► This effort highlighted several overarching themes: the need to innovate clinical trial design with emphasis on studying patients with refractory disease, the development of trials that take into account disease endotypes and patients with overlapping inflammatory diseases, and the need to better understand the prevalence and incidence of inflammatory diseases in developing regions of the world.
How might this impact on clinical practice? ► The prioritisation and highlighting of research needs, particularly in aspects of clinical trial design, will ultimately result in improvements in therapy and potentially the better targeting of therapies toward patients with specific disease sub-types.
Methods
We assigned conference participants to disease-specific breakout groups which included psoriatic arthritis (PsA), rheumatoid arthritis (RA), axial spondyloarthritis (axSpA), systemic lupus erythematosus (SLE) and other systemic autoimmune rheumatic diseases including vasculitis. Experts in each group were tasked with identifying unmet needs in three categorical areas: clinical care, clinical science and therapeutic development, and basic/translational science. A 'facilitator' and 'rapporteur' led each group's discussion and summarised their results, and the groups were asked to highlight notable progress made towards previously identified needs as well as identify new areas of need. This year, each group was then asked to prioritise their discussion and detail the top several needs within each disease-specific area.
Results
Rheumatoid arthritis
There was broad agreement that management of patients with RA who are refractory to available treatments ('refractory' or 'treatment resistant' RA) is arguably the greatest unmet need in RA (at least in the developed world). However, a careful clinical definition of the refractory state is needed, so that we are not confounding true treatment-refractory disease with patients with RA who are undertreated, non-adherent to treatment or who have comorbid fibromyalgia or other sources of non-inflammatory pain. Once a clinical definition of 'refractory' RA is achieved, a molecular definition of the refractory state should follow and should be differentiated from molecular definitions of early RA, established RA, RA in flare and RA in remission. Single cell analysis of synovial and/or circulating cells (including gene expression) may enable us to phenotype RA into subgroups or states of disease. 6 Molecular characteristics at single cell level should be compared with whole synovial tissue molecular profiling with the aim of identifying peripheral blood surrogates of tissue pathology (liquid biopsy) and treatment response. The definitions of molecular subgroups could eventually lead to a personalised approach to treatment. For example, data generated may suggest that a combination or sequence of biologics may be efficacious in some individuals. Alternatively, molecular subgrouping may identify novel targets proximal in the disease process-that is, in the early adaptive immune response-that could be targeted for drug development and clinical trials. Importantly, patients who have received multiple biologics/ small molecules should not be excluded from clinical trials since they have the greatest unmet need. Novel targeted therapies should be studied in refractory patients, as should novel combinations or sequences of existing therapies, similar to the way oncologists use checkpoint inhibitors. In particular, we should carefully move forward with combination therapy studies in refractory patients, with a commitment to resolving issues of cost, safety (eg, infection and malignancy) and the reluctance of manufacturers to combine each other's agents. Efforts to identify optimal dosing and levels of our currently available therapies, as a single treatment or in combination, are also essential to optimise treatment of refractory patients. Finally, it is important to recognise that despite many successful therapies for RA, less than half of patients with RA are in remission, 10%-15% are refractory, and there is still no cure for this disease. [7][8][9] Continued commitment on the part of our funding agencies, pharmaceutical partners and scientific investigators is essential to advance research and discovery efforts to understanding the heterogeneity of RA pathogenesis and effective sustainable treatments.
Psoriatic arthritis
In the last few years, there have been an increasing number of medications with different mechanisms of action which have shown benefit in PsA in randomised clinical trials and have been approved by regulatory agencies, including an IL-12/23 inhibitor (ustekinumab), two IL-17A inhibitors (secukinumab and ixekizumab), an oral PDE4 inhibitor (apremilast), an oral JAK inhibitor (tofacitinib) and abatacept. 10 11 While very gratifying, the homogeneity imposed by clinical trial design may exclude important patient subgroups. For example, the great majority of patients have polyarticular involvement (entry criteria: ≥3-5 inflamed joints) with few studies examining oligoarticular disease (<5 inflamed joints); thus, the common oligoarticular PsA represents an unmet need in PsA trials. Although the varied clinical domains of PsA (eg, enthesitis, dactylitis, spondylitis) can show response to treatment, only a subset of patients demonstrate these domains and thus the measured response may not achieve statistical significance if the subset is too small. Furthermore, a domain such as PsA spondylitis, with symptomatic inflammatory back pain in about 15% and asymptomatic sacroiliitis in about 30% of patients, 12 is not measured by the standards of axSpA trials, including centrally read MRI. The best way to measure oligoarticular disease in trials remains an unmet need, and since the oligoarticular phenotype is a common presentation in clinical practice, we cannot accurately extrapolate results from trials to clinical practice. For treatment of the spondylitis component of PsA, we rely on data from axSpA trials, which may also not extrapolate accurately. Trials of the IL-12/23 inhibitor ustekinumab and the IL-23 inhibitor risankizumab have failed in ankylosing spondylitis. 13 Even though these agents have demonstrated benefit and been approved for PsA, their ability to benefit the spinal component of PsA remains unproven and needs to be tested.
Phase IIIB or IV trials which specifically enrich the patient population for the domain or subtype in question are needed. Enrolment criteria could require oligoarticular disease, spondylitis or enthesitis, for example, although measurement techniques for these disease aspects still need to be developed. Specific ultrasound or MRI outcome measures (eg, axial clinical and imaging measures for a spondylitis-specific trial, entheseal-specific measures and imaging for an enthesitis trial) are needed. It is not clear how the results of these trials could be incorporated into regulatory labelling for the medication, but these would provide important clinical data helpful for clinical decision-making.
A second area of major unmet need in PsA is management of therapy-refractory patients who have 'tried everything'. Emergence of new approved therapies will partially address this need, as would rational 'combination' studies. Clinicians are more frequently trying unapproved combination approaches, for example combining a biological medication (TNFi, IL-17i and so on) with an oral agent such as a PDE4i or JAKi. Combination therapy trials are urgently needed, although the safety of such combination approaches is unknown, particularly with regard to infection, where a greater risk has been suggested in some combination trials for RA. 14 A third major area of unmet need is better understanding of, and accounting for, the role of central sensitisation (CSS) (chronic widespread pain, fibromyalgia) in amplifying symptom severity. Recent studies have demonstrated that 15%-40% of patients with PsA and other rheumatic, chronic pain and inflammatory conditions may have concomitant CSS. When CSS is concomitantly present with PsA, disease activity measures which include patient-reported outcomes (eg, pain, patient global) are nearly twice as severe when compared with a similar PsA cohort that lacks CSS. [15][16][17] Patients with PsA with concomitant CSS are less likely or unable to achieve targets of treatment such as minimal disease activity. [16][17][18] Højgaard et al demonstrated in this population a lack of correlation between tender entheseal examination and objective evidence of inflammation by ultrasound. 17 18 While patients with CSS are historically excluded from PsA trials, it is difficult to exclude all such patients. Several measures have been developed to ascertain the presence of CSS/fibromyalgia; 17 however, there remains a need for more objective biomarkers which are more feasible to use in clinical and trial settings. In this respect it is noteworthy that the treat-to-target recommendations for PsA explicitly state that 'The choice of the target and of the disease activity measure should take comorbidities, patient factors and drug-related risks into account' (recommendation #8); 19 this simply means that an index developed for measuring disease activity in PsA should not be used to score a comorbid condition, alternatives will then have to be used. Similarly, a prerequisite for application of classification criteria for RA is that a patient has no other diagnosis, such as SLE. 20
Axial spondyloarthritis
In 2018, the spondyloarthritis discussion group identified a variety of unmet needs which included: understanding the relationship of peripheral disease to axial disease; early recognition and diagnosis of disease; understanding the causes/relationship of extra-articular disease including bowel and eye disease to the joint disease; improved imaging technologies and interpretation; development of biomarkers for prognosis and choice of therapy; a wider choice of biological therapies; an ability to improve prognosis (disease modifying treatment); direct comparison among TNF inhibitors with regard to efficacy and safety; more frequent disease remission; improved referral to a rheumatologist and international collaboration. 21 Although this list is comprehensive, additional themes were identified as most important. First, the need to better understand the microbiome is paramount. While it is highly likely that the gut microbiome is contributing to the disease, we do not know which bacteria are most important, which portion of the bowel is most important, the mechanism by which the bacteria affect the disease, the role of non-gut microbiota, the role of nonbacterial microbiota or how best to therapeutically alter the gut microbiome, such as by diet or faecal transplant. Second, the failure to establish IL-23 as an effective therapeutic target in ankylosing spondylitis means that we need to understand more completely the IL-23-IL-17 axis and the role of IL-23 and additional cytokines in the molecular pathogenesis of this disease. [22][23][24][25] This effort should include a more complete understanding of the relative function of all members of the IL-17 family, including IL-17F, and further understanding of which cells secrete IL-17 and why this does not seem to be under the control of IL-23 in this disease. 26 We also need a better understanding as to how the disease results in both new bone formation and osteoporosis. 27 Unfortunately, it still takes many years in daily clinical practice before a diagnosis of axial SpA is made. 28 29 Therefore, approaches for referral in primary care and for early diagnosis have to be further developed and implemented. Last, there is still further need for international agreement (and implementation) on nomenclature of axial SpA. 30 31
Systemic lupus erythematosus
Recent failures of clinical trials in SLE demonstrate weaknesses in current methodology and opportunities for improvement in multiple areas. [32][33][34][35][36][37] The theme of improving clinical trial design, including limiting disease heterogeneity, was prioritised in discussion. Specifically, learning from already available data was deemed essential. Analysis of the primary data from completed clinical trials, especially combining those from several studies, can provide essential insights that can guide decisions for new studies. 38 Comparing the characteristics of the patients that participated in the trials with the data that are available from independent patient registries could be helpful to identify a bias in trial patient selection that might help to better understand trial outcomes. Issues that may confound clinical trials, including which patients should, or perhaps more importantly, should not be enrolled can be addressed using this type of analysis. Furthermore, evaluation of potential outcome measures [39][40][41] and the effects of background therapy or comorbidities that impact relative response to the study drug can be determined. This type of analysis has limitations related to which patients were actually enrolled in the trials to be analysed. Here, an appropriate serological test to identify autoantibody positive patients based on sound technology is paramount. 42 43 Other datasets that may inform clinical trial design in different ways include patient registry studies, electronic medical record cohorts and administrative datasets, although issues of data quality, completeness and timeliness must be considered. [44][45][46][47] Lupus trials are typically conducted with background therapy 32 35 48 49 and there is little agreement on how this should be controlled during the conduct and analysis of a study. 43 In fact, the 'standard of care' medication in SLE in general has not been defined. 43 50 There are important ongoing issues surrounding the disease heterogeneity that also affect clinical trial design. 43 With respect to inclusion criteria, targeting a single organ or specific subgroup could lead to more definitive conclusions regarding a study drug. 51 The marked variability in disease severity of enrolled participants could also impact the ability to draw conclusions. 52 For example, including participants with low disease activity could introduce floor effects that limit the ability to separate placebo from active treatment. On the other hand, patients with the greatest need of novel treatment approaches, namely with life threatening disease, 53 are usually excluded from clinical trials. The impact of disease duration and previous treatment on the study population may also influence the effect of a study drug. The selected outcome measures can substantially influence whether a clinical trial meets its intended endpoint. New potential outcome measures have been proposed, such as the SLE-disease activity score, 54 intended as a continuous variable and the Lupus Low Disease Activity State. 55 Another outcome measure, LuMOS, was developed from analysis of the belimumab trials and shows superior ability to detect change compared with the standard SRI-4. 38 Other potentially novel outcome variables for this heterogeneous disease might include hierarchical outcomes. Using biomarkers either for inclusion or outcome may solve issues surrounding disease heterogeneity.
Novel trial designs that could be used for SLE include adaptive designs currently used in oncology. 56 Drug withdrawal trials 57 or trials that use flare for inclusion or outcome could also be considered as they allow the participation of patients with more severe disease. Novel designs might focus on reducing the impact of placebo response, including placebo response related to pretrial non-compliance. 58 59 In considering targets of treatment, it is tempting to focus on autoimmune inflammatory manifestations where exciting new discoveries provide novel targets. 60 However, it is essential to include patient-focused unmet needs. 61 62 These include symptoms that impact quality of life such as pain, fatigue and cognitive dysfunction ('lupus fog') which are typically resistant to immune-focused therapies. Treatments that could improve medication adherence, especially in socially deprived populations and by approaches which require less frequent dosing, or that can mitigate the important concern of reproductive issues, are needed. Overall, there are abundant opportunities for clinical scientists, pharmaceutical companies and regulatory bodies to collaborate towards improved methodology to provide better patient outcomes.
Other systemic autoimmune rheumatic diseases
This group highlighted the unmet needs primarily within systemic sclerosis this year, and similar to other groups, identified the issue of improving clinical trials as of utmost importance. Recent and current clinical trials have failed to demonstrate efficacy for a variety of agents in the treatment of this disease, although the results suggest that some disease manifestations may actually be improved by certain agents. 63 One difficulty in designing clinical trials to date has been the heterogeneity of disease manifestations. It might be appropriate to design trials for a specific manifestation (eg, lung disease).
Alternatively, a sensitive, specific and quantitative combined outcome measure that would be acceptable to regulatory agencies could speed the design and development of trials for registration of new therapeutic agents. 64 A dearth of predictive biomarkers also makes it difficult to target drug trials to those with the greatest potential for benefit from specific therapeutic interventions. 65 66 Finally, inclusion of patient-reported outcomes of specific manifestations (eg, calcinosis) could allay patients' concerns about entering trials. 67
Summary
The convening of the 21st ATT afforded the possibility to discuss and articulate major unmet needs in the field of rheumatology, and across domains there were several overarching perceived unmet needs (table 1). It was generally understood that there has not been sufficient emphasis on trial designs which concentrated on well-defined disease subtypes. Many diseases have multiple subtypes (eg, axial and peripheral PsA or limited/diffuse systemic sclerosis with multiple serological subtypes) and trial designs which mix those subtypes could obscure the success of treatments in specific subgroups. Likewise, trial designs which are able to dissect (or include) overlapping diseases are also needed. While there has been some success in treating moderate to severe patients with various inflammatory rheumatic diseases and even inclusion of some patients with disease-modifying anti-rheumatic drug (DMARD)-refractory disease in RA, this remains a top unmet need in RA that has been even less carefully examined in patients with other diseases. For example, patients with PsA are often included in trials only if they have been naïve to conventional synthetic DMARDs (csDMARDs) or biologic DMARDs (bDMARDs); more attention needs to be paid to patients who are more 'difficult-to-treat' across all conditions, as well as those who have multiple complications or comorbidities or those who have failed other csDMARDs or bDMARDs.
Last, while progress has been made in treating patients who used to have unmet need within countries and regions such as Australia, Japan, North America and the European Union, it was highlighted that more emphasis needed to be placed on understanding unmet needs in other countries and continents such as Africa, multiple areas in Asia and Central and South America.
The Dependence of the Peak Velocity of High-Speed Solar Wind Streams as Measured in the Ecliptic by ACE and the STEREO satellites on the Area and Co-Latitude of their Solar Source Coronal Holes
We study the properties of 115 coronal holes in the time range from 2010/08 to 2017/03, the peak velocities of the corresponding high-speed streams as measured in the ecliptic at 1 AU, and the corresponding changes of the Kp index as a marker of their geo-effectiveness. We find that the peak velocities of high-speed streams depend strongly on both the areas and the co-latitudes of their solar source coronal holes with regard to the heliospheric latitude of the satellites. Therefore, the co-latitude of their source coronal hole is an important parameter for the prediction of the high-speed stream properties near the Earth. We derive the largest solar wind peak velocities normalized to the coronal hole areas for coronal holes located near the solar equator, and find that they linearly decrease with increasing latitudes of the coronal holes. For coronal holes located at latitudes ≳60°, they statistically turn to zero, indicating that the associated high-speed streams have a high chance to miss the Earth. Similarly, the Kp index per coronal hole area is highest for coronal holes located near the solar equator and strongly decreases with increasing latitudes of the coronal holes. We interpret these results as an effect of the three-dimensional propagation of high-speed streams in the heliosphere: high-speed streams arising from coronal holes near the solar equator propagate toward and directly hit the Earth, whereas solar wind streams arising from coronal holes at higher solar latitudes only graze or even miss the Earth.
Introduction
Since the 1970s, it has been well known that solar coronal holes, that is, coronal regions with a reduced density and temperature as compared to the ambient corona and an open magnetic field topology, are the source of high-speed solar wind streams, that is, supersonic plasma streams traversing our solar system (Nolte et al., 1976). The supersonic plasma streams propagate radially away from the rotating Sun and form a branch of the Parker spiral (Parker, 1958). Thereby, they compress the preceding plasma of the slow solar wind and form a shock region, known as a stream interaction region (SIR). Whenever high-speed solar wind streams and the associated SIRs hit the Earth, they compress the Earth's magnetosphere and may cause geomagnetic storms. Since high-speed solar wind streams are the major cause of minor and medium geomagnetic storms at Earth in the declining phase of the solar cycle (Richardson et al., 2000), and since high-speed solar wind streams are thought to precondition interplanetary space and the state of the Earth's magnetosphere for subsequent stronger events like coronal mass ejections (CMEs; Gonzalez et al., 1996), the forecast of the properties of high-speed solar wind streams is of high interest.
The current real-time forecast models for the velocity of high-speed solar wind streams near the Earth are based on simulations of the heliosphere and of the solar corona (e.g., ENLIL, Odstrcil, 2003), on an empirical relationship to the flux tube expansion factor of the magnetic field evaluated between the bottom of coronal holes and the source surface at about 2.5 R ⊙ (e.g., the Wang-Sheeley-Arge model, Arge & Pizzo, 2000;Arge et al., 2003), and/or on a statistical relationship to the area of coronal holes (e.g., the Empirical Solar Wind Forecast, Reiss et al., 2016;Rotter et al., 2012). Note that the empirical and statistical forecast models are related to each other: Fainshtein and Kaigorodov (1994) showed that the area, the flux tube expansion factor, and the photospheric mean magnetic field density below the bottom of coronal holes depend on each other.
In the following, we focus on the statistical relationship between the area of coronal holes and the velocity of high-speed streams as measured in the ecliptic at 1 AU. In 1976, Nolte et al. (1976) showed that the areas of three low-latitude coronal holes, which crossed the central meridian in total 15 times between May 1973 and February 1974, correlate with the peak velocities of the corresponding high-speed solar wind streams with a Pearson correlation coefficient cc = 0.96. Abramenko et al. (2009) extended the study to 44 single low-latitude coronal holes observed between 2001 and 2006 and found that their area correlates with the peak velocity of the corresponding high-speed streams at L1 with cc = 0.75. Karachik and Pevtsov (2011) revealed that the projected areas of 108 single coronal holes observed between 1998 and 2008 also correlate with the peak velocity at L1 with cc = 0.41-0.65, with the highest correlation at medium solar activity, that is, at the rising and declining phases of the solar cycle. Further, Wang and Sheeley (1990) studied the 3 month averages of the total area covered by coronal holes on the Sun's disk between 1967 and 1988 and found that they correlate with the 3 month averages of the velocity of high-speed streams measured near the Earth. Vršnak et al. (2007a) showed that the area of coronal holes within a meridional slice of [−10°, 10°] also correlates well with the peak velocity of high-speed streams measured at L1, with cc = 0.62, studying a period of 100 days in 2005. Both Wang and Sheeley (1990) and Vršnak et al. (2007b) reported that the correlations degrade when polar coronal holes are excluded.
Further, Vršnak et al. (2007b) showed that the total area coronal holes cover within a meridional slice of [−10°, 10°] also correlates with the drop of the geomagnetic Dst index induced by the impacting high-speed solar wind streams. The Pearson correlation coefficient between the areas and the Dst index is cc = 0.31. The correlation increases to cc = 0.86 when taking into account the Russell-McPherron effect (Russell & McPherron, 1973). Again, the correlation decreases if the polar coronal holes are excluded from the analysis.
Note that these results are statistical relationships between the coronal hole area, the peak velocities of high-speed solar wind streams at L1, and the strength of geomagnetic storms. They neglect the three-dimensional propagation of high-speed solar wind streams in the heliosphere and thus the three-dimensional nature of the relation between coronal hole areas, high-speed solar wind stream peak velocities at L1, and strengths of geomagnetic storms.
The three-dimensional distribution of the solar wind in the inner heliosphere was first investigated by the satellite Ulysses (Marsden, 2001). Investigations based on data from Ulysses showed that at solar minimum, the heliospheric distribution of the solar wind is dominated by high-speed solar wind streams from medium to high heliospheric latitudes arising from large polar coronal holes and by the slow solar wind streams near the ecliptic (McComas et al., 2000). In contrast, at solar maximum, both slow and high-speed solar wind streams are apparent from the ecliptic up to high latitudes (McComas et al., 2001). Further, Ulysses sampled the heliospheric distribution of a high-speed solar wind stream arising from a stable circumpolar coronal hole at solar maximum. These measurements showed that the heliospheric velocity distribution strongly depended on the boundary of the polar coronal hole and that the corresponding high-speed stream expanded down to ≈55°-70° heliospheric latitude (McComas, 2003).
However, it is not clear (1) whether the area of single coronal holes or the total area coronal holes cover on the solar disk is the better predictor for the high-speed solar wind stream peak velocity at L1, (2) how the presence of large polar coronal holes contributes to the speeds of high-speed solar wind streams arising from single low-latitude and midlatitude coronal holes, (3) how the morphology of coronal holes affects the speeds of high-speed streams, (4) how the relationship between high-speed stream peak velocities at L1 and coronal hole areas is affected by the three-dimensional propagation of high-speed solar wind streams in the heliosphere, and (5) how the relationship between high-speed stream peak velocities and coronal hole areas changes over the solar cycle.
In this paper, we analyze the properties of 115 coronal holes observed by the satellites Solar Dynamics Observatory (SDO), Solar Terrestrial Relations Observatory (STEREO) A, and STEREO B distributed over all latitudes between 2010 and 2017, their relationship to the peak velocity of their related high-speed solar wind streams measured by the satellites Advanced Composition Explorer (ACE), STEREO A, and STEREO B, and their relationship to the strength of geomagnetic storms induced by the high-speed solar wind streams for a subset of the 52 Earth-directed high-speed solar wind streams. Besides the well-known relationship between the high-speed solar wind stream peak velocities and the areas of their source coronal holes, we find a distinct relationship to the co-latitude of their solar source coronal holes and interpret this result as a consequence of the three-dimensional propagation and expansion of high-speed solar wind streams in the heliosphere.
The paper is structured as follows: section 2 briefly describes the data sets used and the data reduction performed; section 3 describes the analysis. Section 4 presents the results: section 4.1 shows the dependency of the peak velocities of high-speed streams as measured in the ecliptic at 1 AU on the area and latitude of their source coronal holes, and section 4.2 the dependency of the Kp index on the area and latitude of the source coronal holes. In section 5 we discuss the results.
Data Sets and Data Reduction
To determine the properties of the coronal holes, we use extreme ultraviolet (EUV) 193 Å filtergrams recorded by the Atmospheric Imaging Assembly (AIA) on board SDO (Lemen et al., 2012) and provided by the Joint Science Operations Center (http://jsoc.stanford.edu/), and EUV 195 Å filtergrams recorded by the extreme ultraviolet imagers (EUVIs) on board the twin satellites STEREO A and STEREO B (STA and STB; Howard et al., 2008) and provided by the Virtual Solar Observatory (https://sdac.virtualsolar.org/cgi/search). The AIA 193 Å filtergrams show the emission from Fe XII ions in the coronal plasma at a temperature of 1.6 MK (peak response), and the EUVI 195 Å filtergrams the emission from Fe XII ions at a temperature of 1.4 MK (peak response). All images were normalized to an exposure time of 1 s, rotated to solar north, and rescaled to a spatial resolution of 2.4 arcsec/pixel considering the conservation of flux.
To analyze the velocities of high-speed solar wind streams, we use in situ solar wind bulk velocity measurements from the Solar Wind Electron, Proton, and Alpha Monitor (SWEPAM; McComas, Bame, et al., 1998) on board ACE and provided by Caltech (http://www.srl.caltech.edu/ACE/ASC/level2/index.html), and in situ solar wind bulk velocity measurements from the PLasma And Supra-Thermal Ion Composition investigation (PLASTIC; Galvin et al., 2008) instrument on board STA and STB and provided by the PLASTIC consortium (http://aten.igpp.ucla.edu/forms/stereo/level2_plasma_and_magnetic_field.html). The SWEPAM-Ion instrument is a spherical section electrostatic energy per charge analyzer, measuring the energy of solar wind ions from 0.26 to 35 keV, which is dominated by solar wind protons (McComas, Bame, et al., 1998). Based on these data, Caltech provides a level 2 data set containing the absolute values of the hourly averaged bulk solar wind speed. The PLASTIC instrument is an electrostatic energy per charge and time-of-flight analyzer, measuring the solar wind proton bulk parameters in an energy range from 0.3 to 10.6 keV.
For the study of the geomagnetic consequences of the Earth-directed high-speed streams, we use the geomagnetic Kp index. The Kp index is a measure of the disturbances of the horizontal component of the Earth's magnetic field strength averaged over 13 observatories located between 44° and 60° latitude (Bartels, 1949; Menvielle & Berthelier, 1991). Here we use the Kp index as listed in the OMNI database (https://omniweb.gsfc.nasa.gov/form/dx1.html).
Methods
We manually selected 115 solar coronal holes and the corresponding high-speed solar wind streams in the time range from August 2010 to March 2017. Note that we define every solar wind stream arising from a coronal hole which produces a SIR as a high-speed solar wind stream. The criteria for choosing the events were the following:
1. The coronal holes show a well-defined boundary as seen in the SDO/AIA-193, STA/EUVI-195, and STB/EUVI-195 filtergrams.
2. The coronal holes are isolated, that is, no other significant coronal holes were in their surroundings (Figures A1-A3).
3. Only one distinct peak appeared in the in situ bulk solar wind velocity measurements within 1.5 to 7 days after the center of mass of the coronal hole crossed the central meridian.
Since for each event only one significant coronal hole was at the solar central meridian and only one distinct peak appeared in the velocity measurements in the time afterward, these coronal holes could be undoubtedly related to the in situ measured high-speed solar wind streams. Each of the events was rechecked manually by inspecting the solar wind bulk velocity, proton density, proton temperature, and magnetic field vector time lines, and further against the interplanetary coronal mass ejection (ICME) lists of Richardson and Cane (2004) and Jian et al. (2013) in order to exclude ICME events; the continuously updated ICME lists can be found at http://www.srl.caltech.edu/ACE/ASC/DATA/level3/icmetable2.htm and http://www-ssc.igpp.ucla.edu/~jlan/STEREO/Level3/STEREO_Level3_ICME.pdf. Further, we rechecked for each event whether the magnetic polarity of the high-speed solar wind stream matches the magnetic polarity of its source coronal hole (Neugebauer et al., 2002), whereby we presumed that the magnetic polarity of a coronal hole does not change during its lifetime. In total, this data set covers 115 of the 594 high-speed streams measured at ACE, STEREO A, and STEREO B during the time range of interest.
We extracted the borders of the 115 coronal holes under study by applying an intensity-based thresholding technique, with visually adjusted thresholds, on the AIA-193 and EUVI-195 EUV images, based on Rotter et al. (2012). First, we corrected the EUV images for the EUV limb brightening due to the increased optical depth by applying the annulus limb correction (Verbeeck et al., 2014). Then, we extracted the coronal holes by the thresholding technique. Finally, a morphological operator with a median kernel of 9 pixels was applied (Figure 1a; Rotter et al., 2012). For each coronal hole, we derived its projection-corrected area A_CH and the latitude of its projection-corrected center of mass, φ_CH.
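As a minimal illustration of this extraction step, the following Python sketch applies a threshold and a 9-pixel median filter to a synthetic EUV image and computes a projection-corrected area. The threshold, image geometry, and test image are hypothetical placeholders, and the annulus limb correction is assumed to have been applied beforehand:

```python
# Minimal sketch of the coronal hole extraction; threshold, geometry, and the
# synthetic test image are placeholders, not the values used in the study.
import numpy as np
from scipy.ndimage import median_filter

def extract_coronal_hole(img, threshold, rsun_px, px_size_km):
    """Return a coronal hole mask and its projection-corrected area [km^2]."""
    ny, nx = img.shape
    y, x = np.mgrid[0:ny, 0:nx]
    r = np.hypot(x - nx / 2, y - ny / 2) / rsun_px  # distance in units of R_sun

    # intensity thresholding, then a median filter with a 9-pixel kernel
    # (as in Rotter et al., 2012) to remove salt-and-pepper artefacts
    dark = (img < threshold).astype(np.uint8)
    mask = median_filter(dark, size=9).astype(bool) & (r < 1.0)

    # de-project pixel areas: divide by cos(theta) = sqrt(1 - r^2) on the sphere
    cos_theta = np.sqrt(np.clip(1.0 - r**2, 1e-3, None))
    area_km2 = np.sum((px_size_km**2 / cos_theta)[mask])
    return mask, area_km2

# toy example: a dark patch on an otherwise bright synthetic solar disk
img = np.full((512, 512), 200.0)
img[220:280, 240:300] = 40.0                        # hypothetical coronal hole
mask, a_ch = extract_coronal_hole(img, threshold=100.0,
                                  rsun_px=240, px_size_km=1740.0)
print(f"projection-corrected CH area: {a_ch:.3e} km^2")
```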
For each coronal hole selected, we manually assigned the peak in the hourly averaged solar wind bulk velocity data in the time range of 1.5 to 7 days after the center of mass of the coronal hole crossed the solar central meridian and denote it as the peak velocity of the corresponding high-speed solar wind stream, v_p (Figure 1b). For the analysis of the strength of geomagnetic storms, we assigned the peaks of the Kp index in the time range of −1.5 to 1 day around the times of the peak velocities of the high-speed streams (Figure 1d). Note that the SIRs created by the interaction of the high-speed solar wind streams with the preceding slow solar wind are located in front of the high-speed streams (Figure 1c) (Belcher & Davis, 1971). Therefore, the peaks in the Kp indices can already appear when the corresponding SIR sweeps over the Earth, and thus earlier than the peaks in the velocity time lines of the high-speed streams.
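The peak assignment can be sketched in a few lines of pandas; the time series and the central-meridian crossing time below are synthetic placeholders, and only the two windows (1.5-7 days after the crossing for v_p, −1.5 to +1 day around the v_p peak for Kp) follow the procedure described above:

```python
# Sketch of the peak assignment with synthetic placeholder data.
import numpy as np
import pandas as pd

rng = pd.date_range("2014-01-01", periods=24 * 12, freq="h")  # 12 days, hourly
bump = 150 * np.exp(-((np.arange(len(rng)) - 100) / 24.0) ** 2)
v_bulk = pd.Series(400 + bump, index=rng)                     # bulk speed [km/s]
kp = pd.Series(np.random.default_rng(0).integers(0, 7, len(rng)), index=rng)

t_cm = pd.Timestamp("2014-01-01 12:00")  # CM crossing of the CH center of mass

# peak velocity: maximum bulk speed 1.5 to 7 days after the CM crossing
win = v_bulk[t_cm + pd.Timedelta(days=1.5): t_cm + pd.Timedelta(days=7)]
t_peak, v_p = win.idxmax(), win.max()

# Kp peak: -1.5 to +1 day around the velocity peak, since the SIR (and hence
# the geomagnetic disturbance) arrives before the HSS velocity peak
kp_peak = kp[t_peak - pd.Timedelta(days=1.5): t_peak + pd.Timedelta(days=1)].max()
print(t_peak, round(v_p), kp_peak)
```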
The distributions of the solar coronal hole areas and high-speed stream peak velocities at 1 AU versus the solar latitudes of the coronal holes are given in Figure 2. The EUV images of all coronal holes selected are shown in Figures A1-A3, and the properties of the coronal holes, high-speed solar wind streams, and geomagnetic storms analyzed are listed in Table A1.
The Dependency of the High-Speed Solar Wind Stream Peak Velocities as Measured in the Ecliptic at 1 AU on the Areas and Co-Latitudes of Their Solar Source Coronal Holes
In this section, we show that the peak velocities of the high-speed solar wind streams depend on the areas and the co-latitudes of their source coronal holes. We define the co-latitude of the source coronal hole as the heliospheric latitudinal angle between the position of the coronal hole and the position of the measuring satellite, φ_co = φ_CH − φ_sat. Since all the measuring satellites are in the ecliptic, φ_sat varies in the range of ≈ ±7°. Figure 3a shows the scatterplot of the peak velocities of the high-speed streams v_p versus the areas of the coronal holes A_CH; the co-latitudes of the coronal holes are color coded. The well-known widely scattered dependency between the coronal hole areas and the high-speed stream peak velocities is visible; the Spearman's correlation coefficient is r_S = 0.50. However, the color coding points to a further dependence on the co-latitude of the source coronal hole.
In Figures 3b-3e, we replot the coronal hole areas versus the high-speed stream peak velocities separately for coronal holes located at co-latitudes of 0°-15°, 15°-30°, 30°-45°, and >45°. In each of the panels, the peak velocities of the high-speed streams increase with increasing areas of their source coronal holes. In addition, the regression line of the A_CH-v_p relationship is significantly steeper for coronal holes with small co-latitudes, that is, for coronal holes located near the solar equator, than for coronal holes with large co-latitudes, that is, located at medium to high latitudes. This means that the peak velocities of high-speed solar wind streams as observed in the ecliptic at 1 AU depend not only on the area but also on the co-latitude of their source coronal holes.
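The binned regression behind Figures 3b-3e can be reproduced schematically as follows; the synthetic arrays merely stand in for the 115 measured events, so the printed slopes illustrate the procedure, not the published values:

```python
# Sketch of the per-co-latitude-bin A_CH vs v_p regression; synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
a_ch = rng.uniform(1e10, 3e11, 115)              # CH areas [km^2]
phi_co = rng.uniform(-60, 60, 115)               # co-latitudes [deg]
v_p = (400 + 2e-9 * a_ch * np.clip(1 - np.abs(phi_co) / 60, 0, None)
       + rng.normal(0, 40, 115))                 # toy peak velocities [km/s]

for lo, hi in [(0, 15), (15, 30), (30, 45), (45, 90)]:
    sel = (np.abs(phi_co) >= lo) & (np.abs(phi_co) < hi)
    slope, intercept = np.polyfit(a_ch[sel], v_p[sel], 1)
    print(f"|phi_co| in [{lo:2d}, {hi:2d}) deg: slope = {slope:.2e} (km/s)/km^2")
```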
Next, we evaluate the relationship of the relative velocity increase per coronal hole area, (v_p − v_offset)/A_CH, as a function of the co-latitude φ_co of the source coronal holes. First, we presume an offset velocity v_offset of 350 km s⁻¹ and plot the relative velocity increase per area versus the absolute co-latitudes of the coronal holes (Figure 4a). It is clearly visible that the relative velocity increase per coronal hole area depends on the co-latitude of the source coronal hole; the corresponding Spearman's correlation coefficient is r_S = −0.67. The highest relative velocity increase per area is obtained for coronal holes with small co-latitudes, that is, located near the solar equator, and it statistically turns to zero at an absolute co-latitude of ≈60°. This means that a coronal hole of a given area causes the highest high-speed stream peak velocity v_p in the ecliptic at 1 AU when it is located at the solar equator, that the measured peak velocity decreases linearly with increasing co-latitude of the source coronal hole, and that (v_p − v_offset)/A_CH even statistically turns to zero if the coronal hole is located at co-latitudes ≳60°.
In order to exclude that these results depend on the presumed offset velocity and on our manually selected data set, we vary the presumed offset velocity v_offset from 300 to 500 km s⁻¹ and calculate the corresponding Spearman's correlation coefficients and their 0.95 confidence intervals by resampling the data set 10^5 times with bootstrapping (Figure 4b). For offset velocities <375 km s⁻¹, the Spearman's correlation coefficient stays at a high level of ≈ −0.67 at a confidence interval of [−0.55, −0.77] and decreases down to −0.48 at a confidence interval of [−0.28, −0.60] for an offset velocity of 500 km s⁻¹. The decrease of the correlation coefficient for high offset velocities is mainly due to the small coronal holes in the data set: when we exclude the smallest coronal holes with A_CH < 3 ⋅ 10^10 km², we get a correlation coefficient of −0.61 at a confidence interval of [−0.46, −0.73] for an offset velocity of 500 km s⁻¹. Note that for all offset velocities chosen, the relative velocity increase per area turns to zero at an absolute co-latitude of ≈60° (not shown here).
As a further test, we examine the confidence level with regard to whether the relative velocity increase per area really depends on the co-latitude of the source coronal hole and not on its solar latitude. To do so, for each offset velocity, we resampled the data set with bootstrapping 10^5 times. For each sample, we calculated the Spearman's correlation coefficient between (1) the relative velocity increase per coronal hole area and the absolute co-latitude of the coronal hole, (2) the relative velocity increase per area and the absolute solar latitude of the coronal hole, and (3) the difference of the absolute values of these two correlation coefficients. The confidence level is then given by the relative number of samples in which the difference of the absolute correlation coefficients is positive, that is, in which the correlation with the co-latitude is higher than with the solar latitude. The confidence level and the mean Spearman's correlation coefficients for each offset velocity are plotted in Figure 5. The red line shows that in 75% to 95% of the samples drawn, the absolute co-latitude yields a better correlation with the relative velocity increase per area than the absolute solar latitude. The black lines show that the mean difference in the Spearman's correlation coefficients is about 0.04. Therefore, it is the co-latitude that affects the peak velocity of high-speed streams measured in the ecliptic at 1 AU, and not the solar latitude of the source coronal hole.
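A sketch of this bootstrap comparison, assuming arrays of the measured quantities are available; the synthetic placeholders below follow the same toy model as above, and only the resampling logic (comparing |r_S| for |φ_co| against |φ_CH| in each resample) follows the text:

```python
# Bootstrap confidence level that (v_p - v_offset)/A_CH correlates better with
# |phi_co| than with |phi_CH|; inputs below are synthetic placeholders.
import numpy as np
from scipy.stats import spearmanr

def confidence_level(v_p, a_ch, phi_co, phi_ch, v_offset, n_boot=10_000, seed=2):
    rng = np.random.default_rng(seed)
    dv_per_area = (v_p - v_offset) / a_ch
    n, wins = len(v_p), 0
    for _ in range(n_boot):                      # resample with replacement
        i = rng.integers(0, n, n)
        r_co = spearmanr(dv_per_area[i], np.abs(phi_co[i]))[0]
        r_lat = spearmanr(dv_per_area[i], np.abs(phi_ch[i]))[0]
        wins += abs(r_co) > abs(r_lat)
    return wins / n_boot

rng = np.random.default_rng(3)
a_ch = rng.uniform(1e10, 3e11, 115)
phi_ch = rng.uniform(-60, 60, 115)               # solar latitude of the CH
phi_co = phi_ch - rng.uniform(-7, 7, 115)        # satellite latitude within +-7 deg
v_p = (478 + 2e-9 * a_ch * np.clip(1 - np.abs(phi_co) / 61.4, 0, None)
       + rng.normal(0, 40, 115))
print(confidence_level(v_p, a_ch, phi_co, phi_ch, v_offset=350))
```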
Finally, in order to quantify the peak velocity-area-co-latitude dependency, we fit the data by least squares using the approach

v_p = v_offset + c_A · A_CH · (1 − |φ_co| / φ_0).   (1)

The first term gives us the offset velocity, the second term the relative velocity increase per coronal hole area, and the third factor a correction depending on the co-latitude of the source coronal hole. In this fit, the offset velocity should be seen as a statistical best-fit parameter without clear physical meaning: it does not correspond to the velocity of the slow solar wind, and it does not mean that high-speed streams always have a minimum peak velocity of 478 km s⁻¹. Further, note that, corresponding to the fit, the relative velocity increase per coronal hole area turns to zero when the co-latitude of the source coronal hole is φ_0 = 61.4° with respect to the measuring satellite. In Figure 6a, we show the peak velocities calculated by equation (1) versus the peak velocities observed; the dashed line marks the one-to-one correspondence. The data are well distributed around the one-to-one correspondence at a medium scatter. The Pearson's correlation coefficient of the calculated to the measured peak velocities is cc = 0.70, the Spearman's correlation coefficient is r_S = 0.72, the mean absolute error is 57 km s⁻¹, and the root mean square error of the calculated to the measured peak velocities is 70 km s⁻¹.
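Such a three-parameter fit can be reproduced along the following lines; the data arrays below are illustrative placeholders (the actual event list is given in Appendix A), and the fitted c_A is the relative velocity increase per coronal hole area.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(X, v_off, c_a, phi0):
    """v_p = v_off + c_a * A_CH * (1 - |phi_co| / phi0), cf. equation (1)."""
    area, colat = X
    return v_off + c_a * area * (1.0 - np.abs(colat) / phi0)

# Illustrative placeholder data: areas in 10^10 km^2, co-latitudes in degrees
area  = np.array([2.0, 5.0, 8.0, 12.0, 4.0])
colat = np.array([5.0, 20.0, 35.0, 10.0, 50.0])
v_p   = np.array([520.0, 600.0, 640.0, 750.0, 460.0])

popt, _ = curve_fit(model, (area, colat), v_p, p0=(400.0, 20.0, 60.0))
v_off_fit, c_a_fit, phi0_fit = popt  # text reports v_off = 478 km/s, phi0 = 61.4 deg
```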
Figure 5. Dependence of the Spearman's correlation coefficients r_S of the data sets (dashed black lines) on v_offset. The red line gives the confidence level that (v_p − v_offset)/A_CH is better correlated with |φ_co| than with |λ_CH|, dependent on v_offset.
The Dependency of the Kp Index on the Area and Co-latitude of Coronal Holes
In this section, we show that the strength of geomagnetic storms induced by high-speed solar wind streams depends on the areas and co-latitudes of the source coronal holes. Note that here we can naturally use only events recorded by SDO and ACE; thus, the dataset decreases to 52 events. The Kp indices per coronal hole area are highest when the source coronal hole is located near the solar equator and get smaller with higher co-latitude of the source coronal hole. This means that the geomagnetic storm caused by a high-speed solar wind stream arising from a coronal hole with a given area is statistically stronger when the coronal hole is located near the solar equator than at medium latitudes, and usually weak if the coronal hole is located at higher latitudes.
Discussion and Conclusions
We investigated the dependence of the peak velocity of high-speed solar wind streams, as measured in situ in the ecliptic at 1 AU by the satellites ACE, STEREO A, and STEREO B, and of the strength of their induced geomagnetic storms, on the properties of their solar source coronal holes observed by the satellites SDO, STEREO A, and STEREO B from August 2010 to March 2017. From a set of 115 solar coronal holes and corresponding high-speed solar wind streams, and from a subset of 52 geomagnetic events for the Earth-directed high-speed solar wind streams, we found the following:

1. The peak velocity of high-speed solar wind streams as measured in the ecliptic at 1 AU depends linearly on both the co-latitude and the area of the solar source coronal hole, as quantified by equation (1).
2. High-speed solar wind streams arising from solar coronal holes located near the ecliptic result in the highest solar wind peak velocities per coronal hole area in the ecliptic.
3. The high-speed stream velocity increase per coronal hole area statistically turns to zero for coronal holes located at co-latitudes ≳61.4°.
4. The Spearman's correlation coefficients between the high-speed stream velocity increases per coronal hole area and the co-latitudes of the source coronal holes are higher than those between the velocity increases per coronal hole area and the solar latitudes of the source coronal holes.
5. The Kp indices, that is, the strength of geomagnetic storms induced by high-speed solar wind streams, depend similarly on the areas and co-latitudes of the source coronal holes.
Earlier studies derived a linear relationship between the coronal hole areas and the peak velocities at a Pearson correlation coefficient of 0.72. Further, our results agree well with the results of Karachik and Pevtsov (2011), who investigated the dependency between the properties of 108 coronal holes distributed over all latitudes and the solar wind peak velocities at L1. They found that the Pearson's correlation coefficient between the area of coronal holes as measured in the image plane, A_ip, and the peak velocities is slightly higher (cc = 0.55) than the correlation coefficient between the projection-corrected areas and the peak velocities (cc = 0.50). However, they did not explicitly use a dependency on the solar latitude of the source coronal holes. Since the coronal hole areas as measured in the image plane are related to the projection-corrected areas roughly in the form

A_ip ≈ A_CH · cos(λ_CH),

their results implicitly yield a dependency of the peak velocities of high-speed streams on the solar latitude of the source coronal holes. Our results also agree with Robbins et al. (2006), who divided the solar disk into segments of 14° longitude and 30° latitude. They calculated for each segment the fractional area covered by coronal holes and correlated the fractional areas of the segments within meridional slices with the solar wind speeds at L1 by a multilinear fit. They found that the weight of segments near the solar equator is higher than the weight of segments at higher latitudes, that is, that a coronal hole with a given area results in a faster high-speed solar wind stream at L1 when the coronal hole is located near the solar equator than at higher latitudes. Due to the large size of their segments, they were not able to determine the functional relationship to the solar latitude.
Our findings confirm the well-known relationship between the areas of coronal holes and the peak velocities of high-speed solar wind streams, but additionally quantify the dependence on the co-latitudes of the measuring satellites relative to the positions of the source coronal holes on the Sun. Certainly, the solar wind speed is affected by various further parameters, such as the morphology of the coronal hole, which were not part of this study. In the following, we give an interpretation of the dependency of the peak velocity of high-speed streams as measured in the ecliptic at 1 AU, and of the geomagnetic Kp index, on the co-latitudes of the source coronal holes.
The dependence on the co-latitudes of the source coronal holes we found may be related to the three-dimensional propagation of high-speed solar wind streams in the heliosphere, which was studied by the Ulysses satellite. During Ulysses' fast-latitude scans, McComas (2003) derived the latitudinal velocity profile of two high-speed solar wind streams arising from polar coronal holes. Thereby, it was shown that the velocity increased sharply from the low-latitudinal slow solar wind to the polar high-speed solar wind stream, that is, across the SIR, by ≈190 km s⁻¹ over only ≈6° latitude. However, the velocity further increased by ≈170 km s⁻¹ over a distance of ≈30° latitude inside the high-speed stream toward its center. This means that the latitudinal velocity profile in the front of the high-speed solar wind stream, that is, the two-dimensional plane in the high-speed stream parallel to the high-speed stream-SIR interface, is not flat. Further, McComas (2003) showed that the polar high-speed solar wind stream extended down to heliospheric latitudes of only ≈55-70°, strongly depending on the boundary of the polar coronal hole; that is, it missed the Earth.
First, let us presume that in general high-speed streams propagate radially away from the Sun in three dimensions and thereby expand. In general, we expect the highest high-speed stream velocities in the center of the high-speed stream front, and lower velocities in the flanks of the high-speed stream. Thus, when the source coronal hole is located in the ecliptic, the center of the corresponding high-speed stream will also propagate in the ecliptic toward our measuring satellite, and we will measure the peak velocity in the center of the high-speed stream, that is, its maximum velocity. However, when the source coronal hole is located at medium solar latitudes, the center of the corresponding high-speed stream will propagate radially away from the Sun toward medium heliospheric latitudes, and in the ecliptic we will only measure the flank of the high-speed stream, resulting in lower peak velocities. The exact high-speed stream peak velocity we measure is therefore determined by the exact latitudinal position of the satellites within the high-speed stream front, which is given by the angle between the satellite and the radial propagation direction of the center of the high-speed stream. This angle equals the co-latitude we defined, that is, the latitudinal angle between the measuring satellite and the solar source coronal hole. Note that this interpretation is supported by the Ulysses results of McComas (2003) described above. It is further supported by the fact that the peak velocities per coronal hole area are always correlated better with the co-latitude of the source coronal holes than with their solar latitudes, which means that the heliospheric latitudinal distance of the measuring satellite to the source coronal hole is the relevant parameter.
If our interpretation is correct, the functional dependence of the high-speed stream peak velocities on the co-latitudes of the source coronal holes should apply to all empirical relationships between high-speed stream peak velocities and coronal hole parameters, in particular to relationships regarding the coronal hole area, the coronal hole brightness (e.g., Obridko et al., 2009), and the inverse flux tube expansion factor (e.g., Wang & Sheeley, 1990).
The same interpretation is valid for the dependence of the Kp index on the co-latitude and area of the coronal hole. The co-latitude determines the position of the Earth in the high-speed stream front, and thus also its geomagnetic consequence. When a coronal hole of a given area is located at the ecliptic, directly facing the Earth, we can expect the Earth to be directly hit by the high-speed stream, with stronger geomagnetic consequences; when it is located at higher latitudes, the Earth will be farther out in the flanks of the high-speed stream and only be grazed. When a coronal hole is located at high co-latitudes ≳61.4°, the corresponding high-speed solar wind stream may not even expand down to the ecliptic near Earth and will thus miss the Earth.
Appendix A: Data Set
In this appendix, the complete data set used is given. Figures A1-A3 show the Solar Dynamics Observatory/Atmospheric Imaging Assembly 193 Å filtergrams at the times when the coronal holes were at the central meridian of the Sun. The coronal holes used are marked in light blue. Table A1 contains the dates at which the coronal holes were located near the central meridian of the Sun, the corresponding remote sensing satellite, the heliospheric latitudes of the satellites, the areas and solar latitudes of the coronal holes, the peak velocities of the corresponding high-speed streams, and the Kp index for Earth-directed events.
Efficient Computation of the Nonlinear Schrödinger Equation with Time-Dependent Coefficients
Motivated by the limited work performed on the development of computational techniques for solving the nonlinear Schrödinger equation with time-dependent coefficients, we develop a modified Runge–Kutta pair with improved periodicity and stability characteristics. Additionally, we develop a modified step size control algorithm, which increases the efficiency of our pair and all other pairs included in the numerical experiments. The numerical results on the nonlinear Schrödinger equation with a periodic solution verified the superiority of the new algorithm in terms of efficiency. The new method also presents a good behaviour of the maximum absolute error and the global norm in time, even after a high number of oscillations.
Introduction
We consider the following dimensionless form of the nonlinear Schrödinger (NLS) equation:

i ∂ψ/∂t + a(t) ∂²ψ/∂x² + b(t) |ψ|² ψ = 0.   (1)

Equation (1) represents atomic Bose-Einstein condensates (BECs), where ψ represents the mean-field function of the matter wave, t is the time and x the longitudinal coordinate. Furthermore, Equation (1) can also be applied in the context of nonlinear optics [1], for the study of optical beams, in which case ψ represents the complex electric field envelope, t is the propagation distance and x is the transverse coordinate [2,3]. The varying coefficients a(t) and b(t) denote the dispersion and nonlinearity, respectively.
The solution ψ(x, t) of Equation (1) satisfies the global norm conservation law [4]:

E[ψ(x, t)] = ∫ |ψ(x, t)|² dx = E[ψ(x, 0)].   (2)

The analytical solution of the NLS with varying coefficients has attracted great interest in the recent past (e.g., see [5][6][7] or more recent works in [8][9][10][11] and references therein). Additionally, the computation of the NLS is a critical part of the verification process of the analytical theories. This has been achieved in the case of non-varying coefficients, with success for a large number of comparative numerical algorithms [12][13][14][15][16].
Other work on the computation of the NLS with varying coefficients is limited, especially when the coefficients have a periodic/oscillatory behaviour in time. In the work of Serkin and Hasegawa [17], a set of new soliton solutions of the NLS is found, which allows the investigation of special cases. Some of these introduce solitary solutions of periodic-oscillatory nature and are investigated by Hong and Liu in [4], along with a conservation law for a varying-coefficient NLS. On the other hand, Tang et al. [18] create a predictor-corrector method for the NLS with varying coefficients.
In general, for initial/boundary value problems that present a periodic/oscillatory behaviour, a very popular and efficient computational approach is the method of lines, where the numerical algorithm used for the time integration is specifically designed for this behaviour. Among these time steppers, those with variable coefficients are very widespread, such as phase-fitted and/or amplification-fitted and trigonometrically-fitted methods (e.g., [19][20][21][22][23][24][25]). This approach requires a well-defined dominant frequency of the oscillations along the propagation coordinate and, when this is not the case, such methods under-perform. A different approach is to construct optimised numerical methods that have constant coefficients and do not rely on knowledge of the frequencies of the problem (e.g., [26][27][28][29][30][31]). Following these techniques, no fitting is performed. Instead, properties such as the phase-lag, commonly also referred to as dispersion error, and the amplification error, commonly also referred to as dissipation error, are optimised.
Another methodology that is used along with single-step methods is the step size control, which allows the method to automatically adjust the step size and thus reduce the computation effort [19][20][21][22][23]26,29,[31][32][33][34][35][36]. For some types of methods, the local error estimation can be performed using an embedded estimator, which has many advantages over other techniques, such as extrapolation [32]. For Runge-Kutta (RK) methods, this estimation can be very cheap computationally, since a second approximation is performed using the same internal stages of the method, with negligible additional cost.
In this article, we construct an explicit 6(4)-order, eight-stage Runge-Kutta pair with constant coefficients that has increased phase-lag and amplification-error orders and enlarged stability intervals. The values of the coefficients have been deliberately chosen to have similar orders of magnitude, to minimise the round-off error. Additionally, the low-order method is developed with similar stability characteristics to the high-order one, to improve the local error estimation for extreme step sizes. For a complete list of the criteria satisfied, see Section 3.
Additionally, we develop a modified step size control algorithm, specifically designed for the NLS equation, which increases the efficiency of our pair and all other pairs included in the numerical experiments. For its development, see Section 4.
The structure of this paper is as follows:
• in Section 2, the required basic theory concepts are reported;
• Section 3 shows the construction and the analysis of the new RK pair;
• in Section 4, we present the new modified step size control algorithm;
• in Section 5, we present the numerical experiments on the NLS;
• in Section 6, we discuss conclusions.
Explicit Runge-Kutta Pairs
A Runge-Kutta method is considered an extension of the Euler method that provides a more accurate approximation by the use of additional derivative evaluations. For the numerical solution of the initial value problem y'(x) = f(x, y(x)), y(x₀) = y₀, an embedded Runge-Kutta pair consists of two methods of different orders: the high-order (c, A, b) and the low-order (c, A, b̂) of orders q and p, respectively, with p < q:

y_{n+1} = y_n + h Σ_{i=1}^{s} b_i k_i,
ŷ_{n+1} = y_n + h Σ_{i=1}^{s} b̂_i k_i,   (3)

where

k_i = f(x_n + c_i h, y_n + h Σ_{j=1}^{i−1} a_{ij} k_j), i = 1, ..., s.

The values of y_{n+1} and ŷ_{n+1} are yielded by the high- and low-order methods, respectively; ŷ only contributes to the local truncation error estimation (the step size control is discussed in Section 4).
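As an illustration, a single step of an embedded pair in this notation could look as follows; this is a minimal sketch assuming explicit methods (strictly lower-triangular A) and NumPy arrays for the tableau, not the actual implementation used here.

```python
import numpy as np

def embedded_rk_step(f, x, y, h, A, b, b_hat, c):
    """One step of an explicit embedded Runge-Kutta pair (c, A, b) / (c, A, b_hat).
    Returns the high-order solution, the low-order solution, and the
    standard local error estimate EST = ||y - y_hat||."""
    s = len(c)
    k = np.zeros((s,) + np.shape(y), dtype=complex)
    for i in range(s):
        yi = y + h * sum(A[i, j] * k[j] for j in range(i))  # explicit: j < i
        k[i] = f(x + c[i] * h, yi)
    y_high = y + h * np.tensordot(b, k, axes=1)
    y_low = y + h * np.tensordot(b_hat, k, axes=1)
    est = np.linalg.norm(y_high - y_low)
    return y_high, y_low, est
```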
According to rooted tree analysis [37], there are 37 equations (collectively referred to as Equation (4)) that must be satisfied to obtain a Runge-Kutta method of sixth algebraic order. Additionally, the row-sum conditions

c_i = Σ_{j=1}^{s} a_{ij}, i = 1, ..., s,

should hold.
Phase-Lag and Stability
When dealing with initial value problems of oscillatory/periodic nature, it is critical to optimise the numerical method for these exact problems. This, however, is often very complicated or even impossible due to the complexity of the problem, and it additionally results in a very problem-specific algorithm with limited applications. An efficient way to produce methods that are optimised for periodic problems is to deal with a test problem that has similar behaviour but at the same time is simple enough to produce useful results. One test problem that possesses such properties is

y'(t) = i ω y(t), y(0) = y₀,   (5)

with exact solution y(t) = y₀ e^{iωt}, which represents the circular orbit on the complex plane, with ω its frequency. The phase-lag measures the error of the angle and the amplification error measures the error of the radius, as the numerical method approximates the circular orbit of Equation (5).
Definition 2 ([26]). For the Runge-Kutta method of Equation (3), if |R(i v)| < 1 for every v ∈ I_I and |R(i (v_I + ε))| > 1 for every suitably small positive ε, then the imaginary stability interval is I_I = (0, v_I).

Definition 3. For the Runge-Kutta method of Equation (3), if |R(v)| < 1 for every v ∈ I_R and |R(v_R − ε)| > 1 for every suitably small positive ε, then the real stability interval is I_R = (v_R, 0).
Definition 4. The stability region is defined as the set S = {z ∈ ℂ : |R(z)| < 1}.
Construction and Analysis
For the construction of the new pair, we satisfy the following criteria:

1. sixth algebraic order of the high-order method, which implies the 37 equations of Equation (4);
2. fourth algebraic order of the low-order method;
3. maximised phase-lag and amplification-error orders;
4. a maximised real stability interval of the high-order method;
5. coefficients with similar orders of magnitude, to minimise the round-off error;
6. similar stability characteristics of the low-order method to those of the high-order method, to improve the local error estimation for extreme step sizes.

From a number of solutions that satisfy the restrictions of criteria (1) and (2), we select the one that maximises the order of phase-lag and amplification-error, i.e., criterion (3), maximises the real stability interval, i.e., criterion (4), and finally selects the values of the coefficients to have similar orders of magnitude, i.e., criterion (5), in this order. Additionally, we chose the coefficients b̂ of the low-order method, as presented in the last row of Table 1, to satisfy criterion (6).
The stability regions of both the high-order and low-order methods of the new pair, based on Definition 4, are presented in Figure 2. The real stability intervals of the high-order and low-order methods are (−4.31, 0) and (−4.25, 0), respectively, which satisfies criterion (6). The phase-lag orders of the new pair, based on Definition 1, are 8(4), while the amplification-error orders are 9(5). The coefficients of the new RK pair are presented in Table 1.
A Modified Step Size Control Algorithm
There are various algorithms for the selection of the step size when a local error estimation is known (see, e.g., [32] and references therein). The most well-known is the following: provided a step size h_n, each subsequent step size h_{n+1} of the pair in Equation (3) is given by

h_{n+1} = 0.9 h_n (TOL / EST)^{1/(p+1)},   (6)

with the local error estimation

EST = ‖y_{n+1} − ŷ_{n+1}‖,   (7)

where TOL represents the tolerance or maximum allowed local error. If EST < TOL, then the step is accepted and the solution advances; otherwise, the step is rejected and then repeated with a new step size given by Equation (6).
Here, we introduce a modification of the above algorithm, using the difference of the squares as the local error estimation. For the pair of Equation (3), since q ≥ p + 1, we have

y_{n+1} = y(x_{n+1}) + O(h^{p+2})   (8)

and

ŷ_{n+1} = y(x_{n+1}) + C h^{p+1} + O(h^{p+2}).   (9)

The nature of the NLS, in which the physically relevant quantity is |ψ|², suggests the use of

EST = ‖ |y_{n+1}|² − |ŷ_{n+1}|² ‖.   (10)

This means that, to leading order, EST = ‖2 Re(C h^{p+1} ȳ_{n+1})‖ + O(h^{p+2}), so the modified estimator retains the order h^{p+1} of the standard estimation of Equation (7) while directly controlling the error of the squared modulus. The efficiency of this modification is presented in Section 5.1.
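Putting the pieces together, the accept/reject loop with the squared-modulus estimator of Equation (10) could be sketched as follows; the safety factor 0.9 and the floor on EST are conventional choices assumed here, and rk_step is any wrapper returning the high- and low-order solutions of an embedded pair (such as the sketch after Equation (3)).

```python
import numpy as np

def adaptive_step(f, x, y, h, tol, rk_step, p=4):
    """Advance one accepted step using the modified estimator
    EST = || |y|^2 - |y_hat|^2 ||  (cf. Equation (10))."""
    while True:
        y_high, y_low, _ = rk_step(f, x, y, h)   # embedded pair step
        est = np.linalg.norm(np.abs(y_high) ** 2 - np.abs(y_low) ** 2)
        h_new = 0.9 * h * (tol / max(est, 1e-16)) ** (1.0 / (p + 1))
        if est < tol:                            # accept and propose next step
            return x + h, y_high, h_new
        h = h_new                                # reject and retry with smaller h
```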
A periodic solitary solution to the aforementioned problem exists, as proved in [17]. For the computation of this problem, we chose the method of lines. For the semi-discretisation of the term ∂²ψ/∂x², we chose a 20th-order symmetric central finite difference scheme to diminish the effects of numerical dispersion. In order to further minimise the error due to the semi-discretisation, we also chose a wide lattice x ∈ [−150, 150] with Δx = 0.1. The numerical solution of |ψ(x, t)|² versus the coordinates x and t, when solved by the method of Table 1, is presented in Figure 3.
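To make the method-of-lines setup concrete, a simplified semi-discretisation could be written as below; for brevity this sketch uses a second-order central difference for ∂²ψ/∂x² (the computation above uses a 20th-order stencil), and the coefficient functions a(t), b(t) are placeholders to be supplied.

```python
import numpy as np

L, dx = 150.0, 0.1
x = np.arange(-L, L + dx, dx)

def nls_rhs(t, psi, a, b):
    """Method-of-lines right-hand side of i psi_t + a(t) psi_xx + b(t)|psi|^2 psi = 0,
    i.e. psi_t = 1j * (a(t) psi_xx + b(t) |psi|^2 psi).
    Second-order central difference with zero boundary values (wide lattice)."""
    psi_xx = np.zeros_like(psi)
    psi_xx[1:-1] = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dx**2
    return 1j * (a(t) * psi_xx + b(t) * np.abs(psi)**2 * psi)
```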
Modified Step Size Control
The advantage of the modified step size control of Equation (10) compared to the widely used Equation (7) can be observed in Figure 4 for t ∈ [0, 30π]. We observe an increase in efficiency for all three pairs compared (solid versus dashed line) when using the modified instead of the original step size control algorithm. Additionally, the new pair is still the most efficient among all pairs when the modified step size control algorithm is used.
Results
The compared methods and their properties are presented in Table 2. The efficiency of all compared methods in terms of the maximum absolute global norm error versus the number of function evaluations is presented in Figure 5 for t ∈ [0, 30π]. For each algorithm, the most efficient step size control is used, if applicable; for all embedded pairs this is the modified algorithm presented in Equation (10). Otherwise, a constant step is used.
Furthermore, the maximum absolute error of the solution and of its square versus t are presented in Figures 6 and 7, respectively, for t ∈ [0, 300π]. The maximum absolute error of the solution versus x is shown in Figure 8. Finally, the error of the global norm is presented in Figure 9; it is measured by applying the Simpson rule to the conservation law, i.e., Equation (2), and evaluating the difference E[ψ(x, t)] − E[ψ(x, 0)].
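The global norm error of Equation (2) can be evaluated, for instance, as in the following sketch; the grid and solution arrays are assumed to come from a method-of-lines computation such as the one outlined above.

```python
import numpy as np
from scipy.integrate import simpson

def global_norm_error(psi_t, psi_0, x):
    """Error of the conserved global norm E[psi] = int |psi|^2 dx,
    approximated with the composite Simpson rule (cf. Equation (2))."""
    E_t = simpson(np.abs(psi_t) ** 2, x=x)
    E_0 = simpson(np.abs(psi_0) ** 2, x=x)
    return abs(E_t - E_0)
```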
Conclusions
We carefully selected a set of criteria for the construction of a new Runge-Kutta pair for the efficient computation of the nonlinear Schrödinger equation with varying coefficients. Then, from a plethora of solutions that satisfy the restrictions, we chose the one that optimises a set of properties following the procedure of Section 3. Thus, we have developed an explicit 6(4)-order, eight-stage Runge-Kutta pair with constant coefficients that has maximised phase-lag and amplification-error orders, for improved behaviour when solving the NLS with periodic/oscillatory solutions; constant coefficients were chosen since the problem does not exhibit a dominant frequency. The stability characteristics were chosen so that the high-order method has a maximised real stability interval and the low-order method has similar stability characteristics to the high-order one, to improve the local error estimation for extreme step sizes. Furthermore, the values of the RK coefficients have been deliberately chosen to have similar orders of magnitude, to minimise the round-off error.
We compared the new RK method to other methods of the literature on a case of the NLS with a periodic-oscillatory solution. The numerical results verified the superiority of the new algorithm in terms of efficiency. The new method also presented a good behaviour of the maximum absolute error and the global norm in time, even after a high number of oscillations.
Additionally, we have developed a modified step size control algorithm, which was applied to the new pair as well as to other pairs of the literature. The results showed increased efficiency of all pairs included in the numerical experiments, compared to what the original step size control algorithm produced. The new RK pair was still the most efficient one among all pairs when using the modified step size control algorithm.
Privacy and Transparency in Graph Machine Learning: A Unified Perspective
Graph Machine Learning (GraphML), whereby classical machine learning is generalized to irregular graph domains, has enjoyed a recent renaissance, leading to a dizzying array of models and their applications in several domains. With its growing applicability to sensitive domains and regulations by governmental agencies for trustworthy AI systems, researchers have started looking into the issues of transparency and privacy of graph learning. However, these topics have been mainly investigated independently. In this position paper, we provide a unified perspective on the interplay of privacy and transparency in GraphML. In particular, we describe the challenges and possible research directions for a formal investigation of privacy-transparency tradeoffs in GraphML.
Introduction
Graphs are a highly informative, flexible, and natural way to represent data. Graph-based machine learning (GraphML), whereby classical machine learning is generalized to irregular graph domains, has enjoyed a recent renaissance, leading to a dizzying array of models and their applications in several fields [1,2,3,4,5]. GraphML models have achieved great success due to their ability to flexibly learn from the complex interplay of graph structure and node attributes/features. Such ability comes with a compromise in privacy and transparency, two indispensable ingredients for trustworthy ML [6].
Deep models trained on graph data are inherently black-box, and their decisions are difficult for humans to understand and interpret. The growing application of these models in sensitive domains like healthcare and finance, and the regulations by various AI governance frameworks, necessitate transparency in their decision-making process. Meanwhile, recent research [7,8,9,10] has highlighted the privacy risks of deploying models trained on graph data. It has been suggested that these models are even more vulnerable to privacy leakage than models trained on non-graph data due to the additional encoding of relational structure in the model itself [7].
Consequently, an increasing number of works are focussing on explaining the decisions of black-box GraphML models in a post-hoc manner [11,12,13,14], designing interpretable models [15,16,17], and developing privacy-preserving techniques for real-world deployments of graph models [18,19,20].
Despite the growing research interest, the current state of the art considers privacy and transparency in GraphML independently. While transparency provides insight into the model's working, privacy aims to preserve sensitive information about the training data. The seemingly conflicting goals of privacy and transparency call for a joint investigation. To date, any gain in privacy or transparency is usually compared only to the corresponding drop in model performance. However, questions like "what effect would releasing post-hoc explanations have on the privacy of the training data?" or "how well can we interpret the decisions of privacy-preserving graph models?" have so far received little attention [21,22].
In this position paper, we provide a unified perspective on the inextricable link between privacy and transparency for GraphML. Besides, we sketch possible research directions towards formally exploring privacy-transparency tradeoffs in GraphML.
Graph Machine Learning
The key idea in graph machine learning is to encode the discrete graph structure into low-dimensional continuous vector representations using non-linear dimensionality reduction techniques. Popular classes of GraphML methods include random walk based strategies [23,24], which encode structural similarity of the nodes exposed by their co-occurrence in random walks; matrix-factorization based methods [25], which rely on a low-rank factorization of some node similarity matrix; and the most popular graph neural networks (GNNs) [26,27], which learn node representations by recursive aggregation and transformation of neighborhood features. These methods are usually non-transparent and have been shown to be prone to privacy leakage risks.
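To make the aggregation idea concrete, a toy mean-aggregation layer is sketched below; the weight matrix, activation, and normalization are illustrative simplifications rather than any specific GNN from the cited literature.

```python
import numpy as np

def gnn_layer(adj, H, W):
    """One simplified GNN layer: mean-aggregate neighbor features (with
    self-loops), apply a linear transformation W, then a ReLU nonlinearity.
    adj: (n, n) adjacency matrix; H: (n, d_in) features; W: (d_in, d_out)."""
    A_hat = adj + np.eye(adj.shape[0])       # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)   # node degrees
    H_agg = (A_hat @ H) / deg                # mean aggregation over neighbors
    return np.maximum(0.0, H_agg @ W)        # ReLU(H_agg W)

# Tiny example: 3 nodes in a path graph, 2-dim features
adj = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
H = np.array([[1., 0.], [0., 1.], [1., 1.]])
print(gnn_layer(adj, H, np.eye(2)))
```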
Towards improving the adoption of these methods in sensitive applications like healthcare and medicine, the community has started paying attention to the aspects of transparency and privacy. However, these aspects have so far been studied independently (see Figure 1 for an illustration). A formal investigation into the linked roles of transparency and privacy in achieving trustworthy GraphML is missing.
Transparency for GraphML Models
Transparency for deep models, as in GraphML, is usually achieved by providing explanations corresponding to decisions of an already trained model or by building interpretable by design or self-explaining models. Numerous approaches have been proposed in the literature for explaining general machine learning models [28,29,30,31]; however, models learned over graph-structured data have some unique challenges.
Specifically, predictions on graphs are induced by a complex combination of nodes and paths of edges between them in addition to the node features. A trivial application of existing explainability methods to graph models cannot account for the role of graph structure in the model decision. Consequently several graph specific explainability approaches have been recently developed which focus primarily on explaining graph neural networks' decisions for node and graph classification [32,33].
Explanations usually include the importance scores for nodes/edges in a subgraph (or node's neighborhood in case of node-level task) and the node features [11,12,13]. Figure 2 depicts an example of an explanation over graph data. Depending on the explanation method, the importance scores could be either continuous (soft masks) or binary (hard masks). A few works have also been proposed to explain dense unsupervised node representations [34,35]. In terms of methodologies, several techniques based on input perturbations [11,12,13], input gradients [36,37], causal techniques [34,38,33] as well as utilizing simpler surrogate models [14] have been explored.
Another methodology to provide transparency is to develop interpretable by design models [15,16,39]. Such models usually contain a self-explanatory module trained jointly with the learner module. Explanations are thus, by design, faithful to the model.
A few other works also focus on unifying diverse notions of evaluation strategies [40,37] necessary for effectively assessing the quality and utility of explanations.

Figure 2: An example explanation in terms of feature and node attribution over a social network in which a node represents a user and edges represent friendship relations. Node features correspond to demographic attributes of the user. Neighboring nodes with high importance scores are marked green.
Despite the progress in improving the transparency of GraphML techniques, its effect on data privacy has escaped attention. While transparency could increase the utility of the model, for sensitive applications any unaddressed concerns about privacy can hinder the full adoption of the models and further dissuade participants from sharing their data.
Privacy in GraphML
Deep learning models, in general, are known to leak private information about the employed training data. Recent works showed that models trained on graph data can leak sensitive information about the training data (see Figure 3), such as node membership [7,8], certain dataset properties [41], and the connectivity structure of the nodes [9]. Figure 3 illustrates the different privacy attacks that are possible given access to a trained GraphML model. Compared to general deep learning models, GraphML models are more vulnerable to privacy risks as they incorporate not only the node features/labels but also the graph structure [7]. Privacy-preserving techniques for graph models are mainly based on differential privacy [42,7,19,20] and adversarial training frameworks [43,44,45]. The key idea of differential privacy [46] is to conceal the presence of a single individual in the dataset: if we query a dataset containing individuals, the query's result will be probabilistically indistinguishable from the result of querying a neighboring dataset with one less or one more individual. For machine learning models, such probabilistic indistinguishability is achieved by adding appropriate levels of noise at different stages of model development. For instance, [42] employs an objective perturbation mechanism to develop differentially private network embeddings. Olatunji et al. [7] combine the knowledge-distillation framework with two noise mechanisms, random subsampling and noisy labeling, to release graph neural networks under differential privacy guarantees. In particular, they use only a random sample of the private data to train teacher models corresponding to nodes in an unlabelled public dataset; the final model, which is later released, is trained on the public data using the noisy labels generated by the teacher models. Other works [20,19] do not build a separate public model but achieve differential privacy by adding noise directly to the aggregation module of GNNs. An adversarial defence against privacy attacks on GNNs is proposed in [43], in which the predictability of private labels is destroyed while the utility of the perturbed graphs is maintained. An adversarial learning approach based on a mini-max game between the desired graph feature encoder and the worst-case attacker is proposed in [44] to address attribute inference attacks on GNNs.
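To give a flavor of the noisy-labeling idea described above, the following sketch adds Laplace noise to teacher vote counts before releasing the argmax label; the noise scale and vote counts are illustrative, and this is a simplified caricature of such mechanisms, not the cited method.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_label(votes, epsilon):
    """Return a class label from teacher vote counts with Laplace noise,
    so the released label is differentially private with respect to any
    single teacher's vote (sensitivity 1 per class count)."""
    noisy = votes + rng.laplace(scale=1.0 / epsilon, size=votes.shape)
    return int(np.argmax(noisy))

# Example: 10 teachers voting over 3 classes
label = noisy_label(np.array([6.0, 3.0, 1.0]), epsilon=0.5)
```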
Despite the growing number of works on improving privacy in GraphML, the effect of privacy mechanisms on the transparency of these models has not been studied at all. The complex mechanisms employed to ensure privacy further hurt model transparency. Consequently, it is not clear whether existing explainers can be used to explain the decision-making process of privacy-preserving models.
A Unified Perspective
Graphs are powerful abstractions that facilitate leveraging data interconnection to represent, predict, and explain real-world phenomena. Exploiting such explicit or latent data interconnections, on the one hand, makes GraphML more powerful but, on the other hand, brings in additional challenges, further exacerbating the need for a joint investigation of privacy and transparency.

Figure 3: Given access to a model or embeddings trained on graph data, an adversary can launch several attacks to infer membership (e.g., was Bob part of the training data?), relations (e.g., who are the friends of Bob?), or attributes of a node.

In the following, we discuss the key issues arising due to the independent treatment of privacy and transparency for GraphML.
Diverse explanation types and methods
Model explanations for graph data are usually in the form of feature and neighborhood (subgraph) attributions. In particular, importance scores for node features and for a node's neighboring nodes/edges are released as explanations. Neighborhood attributions or structure explanations are a more direct form of information leakage. They can, for example, be leveraged to identify nodes in the training set or to infer hidden attributes of sensitive nodes using the attributes of their neighbors. Besides, the data points (nodes) in graph data are correlated, thus violating the usual i.i.d. assumption over data distributions. Consequently, the decisions and explanations over correlated nodes might themselves be correlated. Such correlations among released explanations might be exploited to reconstruct sensitive information about the training data. For example, the similarity in feature explanations for recommendations to two connected users might reveal the sensitive link information they want to hide. Towards this, [22] show that the link structure of the training graph can be reconstructed with a high success rate even if only the feature explanations are available.
Transparency of private models
Moreover, due to the correlated nature of the graph data, privacy-preserving mechanisms on graph models need to focus on several aspects such as node privacy, edge privacy, and attribute privacy [20]. This leads to more complex privacy-preserving mechanisms, which results in a further loss of transparency. To understand the issue, consider a simple differential privacy-based mechanism in which randomized noise is added to the model's output. Such noise could alter the final decision but not the decision process that an explanation (according to its current definition) is usually expected to reveal. Model agnostic approaches for explainability, which only assume black-box access to the trained model, might be misguided by such alteration in the final decision.
The curse of overfitting
In traditional machine learning, we can randomly divide the data into two parts to obtain training and test sets. This is more tricky in graphs, where the data points are connected and random data sampling may result in non-i.i.d. train and test sets. Even for the task of graph classification, where the graphs constitute the data points instead of the nodes, distributional changes between train and test splits are common [47] due to varying graph structure and size. Specifically, the train set may contain spurious correlations which are not representative of the entire dataset. This puts GraphML models at a higher risk of overfitting to sample-specific correlations rather than learning the desired general patterns [48]. Existing privacy attacks have leveraged overfitting to reveal sensitive information about the training sample [49]. Exploiting the associated explanations, which in principle should reveal learned spurious correlations, can further aid privacy leakage.
Research Directions
Based on the issues and challenges described in the previous section, we recommend the following research directions towards a formal investigation of privacy-transparency tradeoffs.
1. New Threat Models. A first step is to quantify the privacy risks of releasing post-hoc explanations. Towards that, we need to design new threat models and structure-aware privacy attacks in the presence of post-hoc model explanations. Care should be taken to formulate realistic assumptions on the adversary's background knowledge. For example, in highly homophilic graphs, an adversary might already be able to approximate the link structure of the graph well if only the node features/labels are available. What additional information could explanations leak in such a setting?
2. Risk-utility assessment of different explanation types and methods. Model explanations for GraphML can be in the form of feature or node/edge importance scores. Besides, existing explanation methods are based on different methodologies and might be discovering different aspects of the model's decision process. Depending on the dataset and application, certain explanation methods and types of explanation (feature or structural) might be preferred over others. A dataset- and application-specific risk-utility assessment might reveal more favorable explanations for minimizing privacy loss. For instance, [22] find that gradient-based feature explanations have the least predictive power (faithfulness to the model) for the task of node classification but leak the most information about the private structure of the training graph. In such cases, one can decide not to reveal such an explanation, as it has little utility for the user.
3. Transparency of privacy-preserving models. Besides evaluating the privacy risks of releasing explanations, it is essential to analyze the transparency of privacy-preserving techniques. It is not clear whether existing explanation strategies can faithfully explain privacy-preserving models' decisions. Questions such as "what should be the properties of explanations of such models?" and "what constitutes a faithful explanation?" need to be investigated. Consequently, new techniques to explain privacy-preserving models need to be developed.
4. Reducing overfitting.
Overfitting is usually considered a common enemy of both model effectiveness on unseen data and privacy. Recently, a few works have proposed interpretable-by-design models, for example using stochastic attention mechanisms [39] and graph sparsification strategies [16]. These methods are claimed to remove spurious correlations in the training phase, leading to a reduction in overfitting. A possible research direction is to further exploit such transparency strategies to minimize privacy leakage.
Conclusion
There has been an unprecedented rise in the popularity of graph machine learning in recent years. With its growing applications in sensitive areas, several works focus independently on its transparency and privacy aspects.
We provide a unified perspective on the need for a joint investigation of privacy and transparency in GraphML. We hope to start a discussion and foster future research in quantifying and resolving the privacy-transparency tradeoffs in GraphML. Resolution of such tradeoffs would make GraphML more accessible to stakeholders currently tied down by regulatory concerns and lack of trust in the solutions.
Analysis of the Behavior of Carbon Nanotubes on Cementitious Composites
Nanotechnology has brought significant innovations to science and engineering. The carbon nanotube has been considered a new and outstanding material in the nanoscience field, with great potential for application in the construction industry. The main objective of this study is to analyze the behavior of cementitious materials produced with the insertion of multiwalled carbon nanotubes in different concentrations and to compare their physico-mechanical properties with plain mortar. This research covers the examination of nanoscale cement products and the use of carbon nanotubes to increase the strength and durability of cementitious composites. Three different ratios of carbon nanotubes have been studied: 0.20, 0.40, and 0.60%. To evaluate the mechanical properties of the samples, destructive and nondestructive tests were carried out to obtain the compressive strength, the tensile strength by diametrical compression, and the dynamic modulus of elasticity, as well as to determine their deformation properties. Instrumentation methods such as scanning electron microscopy and porosity measurements were also used in the analysis of the microstructure of the materials. The study presents graphs, tables, and figures describing the behavior of CNT added to mortar samples, allowing a better understanding of the use of this new material in the construction industry.
Introduction
The construction industry is a branch of engineering of great importance, providing the development of various activities for the benefit of civilization and performing significant influence on the organization of society. To cite some examples that prove this relationship between the society and the construction industry, we may think of the infrastructures of water, sewer, and transportation of a country. The activities involved in construction modify the organization of a city over time due to different architectural styles which are formed randomly as well as the distribution and use of those buildings [1].
Cement is a construction material commonly used due to its low cost and high compressive strength, and the enhancement of its performance has been a concern of the research community. The remarkable effects achieved by nanotechnology have allowed the development of cementitious products of low cost, high performance, and long duration, which may lead to unprecedented uses of these materials in the construction industry [2,3].
The mechanical behavior of cementitious materials depends on structural elements and phenomena occurring at the micro- and nanoscale. As a result, nanotechnology may modify the molecular structure of the material, which leads to improved properties of the bulk material. Nanotechnology can also improve the mechanical performance, volumetric stability, durability, and sustainability of structures [2,4].
One of the most desired properties of nanomaterials in the construction sector is their capability to confer mechanical reinforcement to the structure of cement-based materials. Carbon nanotubes (CNT) have superlative mechanical properties and therefore have a promising future when combined with ordinary Portland cement, forming a nanocomposite [5]. CNTs are macromolecules of carbon atoms in a periodic hexagonal arrangement with a cylindrical shell shape, and are categorized as single-walled nanotubes (SWCNTs) and multiwalled nanotubes (MWCNTs). Wrapping a graphene sheet into a seamless cylinder shape conceptualizes the structure of a SWCNT, while a MWCNT consists of multiple graphene sheets rolled in on themselves to form a tube shape. CNTs can have diameters ranging from 1 to 100 nm and lengths up to millimeters [6][7][8][9].
The study presented in this paper aims at assessing how carbon nanotubes can affect cement composites in terms of microstructure and physico-mechanical properties [2,3]. The experimental procedures took place in the laboratories of the Universitat Politècnica de Catalunya (UPC), Barcelona, Spain, performing tests such as destructive tests, scanning electron microscopy, and porosity measurements [10].
The tests were carried out with mortar samples of cylindrical shape with dimensions 4.4 cm × 8.0 cm and with carbon nanotube contents of 0.20%, 0.40%, and 0.60% by weight of cement, to compare their physico-mechanical properties with the equivalent properties of plain mortar without nanotubes.
The tests were made with the samples at the ages of 3 days, 7 days, and 28 days, using a replication factor equal to 3 for each blend. The presentation and analysis of the test results were based on the theory of "Design of Experiments (DOE)", with the data processing performed using the commercial MINITAB program [11].
Experimental Work
The study was oriented to essentially obtain comparative results between mortar samples with and without carbon nanotubes, with no concern for maximization or process improvement. Therefore, commercial materials frequently applied in construction were used in the experiment, always maintaining the relationship between the components in the preparation of the mortar mixtures and following the same methodology for sample manufacturing and testing [12].
Tests Carried Out in Laboratories
The well-known nondestructive testing (NDT) methods have been the object of several research efforts in laboratories of excellence worldwide for a long time. Good reasons for adopting these methods may be observed, given that they do not affect the appearance or performance of the structure being analyzed. In addition, these tests can be performed in the same place or in a place very close to it and enable a constant monitoring of the structure, allowing the evaluation of possible variations over time [13].
Among the available NDT methods, the ultrasound technique may be considered the most promising one for the evaluation of concrete structures, given that it allows performing a homogeneity test of the material. It is possible to perform a total control of the structure, including eventual changes of its parameters over time. For example, by analyzing the variation in the propagation speed of an ultrasonic wave, one can obtain the degree of compactness of the structure or even detect heterogeneous regions inside the material [14,15].
The ultrasound test was performed according to what is established in the Spanish standard UNE-EN 14579. The test of each specimen consisted in passing through the material a wave with a frequency of 55 kHz, produced by an ultrasound generator, and measuring the corresponding propagation time in microseconds (μs). Figure 1 shows the test.
Using the propagation time, the value of the dynamic modulus of elasticity may be obtained by the following equation:

MOE = ρ · (L/t)² · (1 + μ)(1 − 2μ) / (1 − μ),

where MOE is the dynamic modulus of elasticity, ρ is the density, μ = 0.16 is the Poisson coefficient, L is the length of the specimen, and t is the propagation time.
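A direct implementation of this relation is sketched below; the numerical inputs are placeholders, and the formula is the standard pulse-velocity expression reconstructed above.

```python
def dynamic_modulus(rho, L, t, mu=0.16):
    """Dynamic modulus of elasticity from ultrasonic pulse velocity v = L/t.
    rho in kg/m^3, L in m, t in s; returns MOE in Pa."""
    v = L / t
    return rho * v**2 * (1.0 + mu) * (1.0 - 2.0 * mu) / (1.0 - mu)

# Example: 8.0 cm specimen, 20 us transit time, density 2100 kg/m^3
moe = dynamic_modulus(rho=2100.0, L=0.08, t=20e-6)  # in Pa; divide by 1e9 for GPa
```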
Destructive Tests. The destructive tests were performed to evaluate the compressive strength and the tensile strength of the cylindrical samples of 4.4 cm × 8.0 cm, in order to compare the behavior of the mixtures with CNT with that of the reference mortar (CN0) without nanotubes [6].
The tests of compressive strength and tensile strength by diametric compression were performed at the Laboratory of the Department of Materials Science and Metallurgical Engineering, University of Barcelona (UB), at the ages of 3, 7, and 28 days after manufacturing the test pieces, in accordance with European Standard UNE-EN 196-1 (2005). A computer-controlled hydraulic press of brand INCOTECNIC PA/MPC-2, with a capacity of 20 tons, was used in this test, as illustrated in Figure 2.
For each blend and age, 3 samples (replication factor) were used in the test. The reading provided by the press at the time of rupture of the sample corresponds to the compressive/tensile force measured in kilonewtons (kN). The compressive stress and tensile stress in megapascals (MPa) were obtained, respectively, with the application of (2) and (3):

σ_cp = F_c / S,   (2)

σ_tr = 2 F / (π d L),   (3)

where F_c is the compressive force at the time of rupture of the sample, F is the tensile force by diametric compression at the time of rupture of the sample, and S, d, and L correspond, respectively, to the cross-sectional area, diameter, and height of each sample. Additional tests were made to obtain the flexure curves of the samples. These tests were performed, for reasons of feasibility and objectiveness, on prismatic specimens of dimensions 4.0 cm × 4.0 cm × 16 cm (Figure 3) and only for the blends CN0 and CN4 at the age of 28 days.
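The conversion of press readings into stresses according to (2) and (3) can be sketched as follows; the function names and example forces are illustrative.

```python
import math

def compressive_stress(F_c_kN, d_cm):
    """Compressive stress in MPa from force in kN on a cylinder of diameter d (cm)."""
    S_mm2 = math.pi * (d_cm * 10.0 / 2.0) ** 2   # cross-section in mm^2
    return F_c_kN * 1000.0 / S_mm2               # N / mm^2 == MPa

def splitting_tensile_stress(F_kN, d_cm, L_cm):
    """Brazilian splitting tensile stress in MPa: 2F / (pi d L)."""
    return 2.0 * F_kN * 1000.0 / (math.pi * d_cm * 10.0 * L_cm * 10.0)

# Example for the 4.4 cm x 8.0 cm cylinders
print(compressive_stress(30.0, 4.4), splitting_tensile_stress(10.0, 4.4, 8.0))
```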
In the press, the specimens were positioned, supported at two points on the surface opposite to the application of force, so that the load was applied at the center of the body, as described by the European Standard NP EN 1992-1-1.
Scanning Electron Microscopy (SEM) Test. Microscopy is a technique used to characterize the microstructure of a material. With the SEM used in this test, the interaction of a thin electron beam focused on the analyzed microarea or volume generated a series of signals that could be used to characterize properties of the sample such as its composition, morphology, and crystallography.
The samples used for this test were obtained by removing material from the disrupted specimens at the age of 28 days and drying it in an oven at 100 °C for 48 hours. The samples were mounted on a support and coated with a paint based on silver and carbon, to facilitate the detection of the beam produced by the electron microscope. Figure 4 shows the sample set CN0, CN2, CN4, and CN6 mounted for the test.
Trial to Evaluate the Porosity of Samples.
The test was carried out on specimens of the mortars CN0, CN2, CN4, and CN6, at the age of 28 days, using a porosimeter located in the Materials Laboratory of the UPC, in accordance with the European Standard UNE-83-309-90 (1990). The samples were obtained by removing material from the specimens disrupted at the age of 28 days and drying them in an oven at 100 °C for 72 hours. For each mixture, two samples (replication factor equal to two) were used to achieve more significant results.
The procedure consisted of weighing the dried samples (W_sec) every 24 hours until the weight change became less than 1%. Then the samples were transferred to the vacuum chamber of the porosimeter, where they were submerged in water for about 8 hours. After this, the samples were again weighed on a scale to obtain the saturated weight (W_sat), and then the specimens were weighed suspended in an aqueous solution to determine the hydrostatic weight (W_hid). The value of the porosity of the samples is determined by

P (%) = (W_sat − W_sec) / (W_sat − W_hid) × 100.   (4)

In addition to the porosity of the samples, the test also provided values for the relative density and the apparent density of the cementitious composites, determined, respectively, by

ρ_rel = W_sec / (W_sec − W_hid)   (5)

and

ρ_ap = W_sec / (W_sat − W_hid).   (6)

The carbon nanotubes were produced and supplied by the Nanomaterials Laboratory of the Physics Department of the Universidade Federal de Minas Gerais (UFMG), Brazil. They were produced by the method of chemical deposition in vapor phase, are identified as MWCNT HP2627, and have the following characteristics: type, multiwalled carbon nanotubes (MWCNT); weight, 60 g; purity, >93%; other carbon structures, <2%; contaminants, <5% of catalyst powder of type MgO-Co-Fe; and dimensions, 99% of the CNTs with external diameter between 5 nm and 60 nm and a length estimated between 5 μm and 30 μm.
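These weighing relations translate directly into code; the following is a minimal sketch under the reconstruction above, with illustrative example weights.

```python
def porosity_and_densities(w_sec, w_sat, w_hid):
    """Open porosity (%) plus relative and apparent densities from the
    dry (w_sec), saturated (w_sat), and hydrostatic (w_hid) weights."""
    porosity = (w_sat - w_sec) / (w_sat - w_hid) * 100.0
    rho_rel = w_sec / (w_sec - w_hid)
    rho_ap = w_sec / (w_sat - w_hid)
    return porosity, rho_rel, rho_ap

# Example weighings in grams
print(porosity_and_densities(w_sec=250.0, w_sat=270.0, w_hid=150.0))
```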
The European standard EN 196-1 (2005) establishes the standard composition of the components for the manufacture of mortar in the following ratios: cement/sand, 1 : 3, and cement/water, 1 : 0.5. Based on these ratios, plus the CNT and additive, the blends were prepared for manufacturing the cylindrical test specimens with dimensions of 4.4 cm × 8.0 cm. As with the sand and water, the amounts of additive and carbon nanotubes in the mixture were dosed based on the weight of cement.
The specimens had to be manufactured in batches of six due to the small capacity of the mixer for the preparation of mortars, associated with the plan of testing at the ages of 3, 7, and 28 days and the shared use of the laboratory of the UPC. The production of the mortar and the molding of the specimens were performed in the materials laboratory of the "Escola Politècnica Superior d'Edificació" of UPC, following the methodology outlined below.
(1) Initially, cement and sand were weighed on a precision balance of brand Gram, model ST-4000, with a maximum capacity of 4,000 g and an accuracy of ±0.1 g. Then the two materials were mixed manually until they reached a homogeneous appearance.
(2) Then, water and additive were weighed on the same balance.
(3) The additive and water were mixed manually in a plastic container for about 5 minutes.
(4) Next, the nanotube was weighed on the same balance for the blends that include this material.
(5) The nanotube was added to the water with the additive and mixed by hand for 5 minutes. Then this whole mixture was submitted to sonication for 60 minutes to obtain a dispersion of the nanotubes and a better homogenization of the mixture. For this purpose, the ultrasonic equipment P2000 clining qteck Gmbh was used.
(6) The cement and sand (previously mixed) were placed in the mixer, brand Matest, model E93, with a maximum capacity of 3 kg, together with the mixture of water, additive, and CNT.
(7) After a mixing time of 15 minutes, the mortar was removed from the mixer for molding the samples. The densification of the samples was made in two layers using a manual vibrating platform. The surface finish of the samples was performed with the aid of a spatula.
(8) After molding, the specimens were kept for 24 hours in a chamber at a temperature of 21.4°C and 99% relative humidity.
(9) After 24 hours, the specimens were demoulded and returned to the chamber, where they were kept until the test date. Figure 5 shows a batch of six samples ready for testing.
Test Results and Data Analysis
The following presents the results of the tests performed on mortar samples with and without carbon nanotubes, as well as a comparative statistical analysis of the measurements using the technique of Design of Experiments (DOE) [11]. Due to the limitations of the tests, the experiments could not be performed in random order. The solution adopted was to follow the order of disruption of the samples at their different ages of three, seven, and twenty-eight days. Therefore, the DOE model used to treat the data of this study was the non-randomized "Comparative Design".
The statistical treatment model applied to the test results and used in MINITAB is summarized in Table 1. Table 2, obtained by running MINITAB, shows the sequence of events with the combination of the variables (percentage of nanotubes and age of the samples) and the results of the destructive and nondestructive tests, where f_cp (MPa) represents the values of the compressive strength, f_tr (MPa) represents the values of the tensile strength by diametrical compression, and MOE_d (GPa) represents the values of the dynamic modulus of elasticity.
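To illustrate the kind of two-factor comparative treatment described here, the sketch below runs a two-way ANOVA on a small, invented table of f_cp readings; neither the data nor the exact MINITAB model specification comes from the paper, and pandas/statsmodels stand in for MINITAB:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical compressive-strength readings (MPa): factors are CNT content and age.
data = pd.DataFrame({
    "cnt":  ["CN0"] * 3 + ["CN4"] * 3 + ["CN0"] * 3 + ["CN4"] * 3,
    "age":  [3] * 6 + [28] * 6,
    "f_cp": [24.1, 23.8, 24.5, 27.9, 28.3, 27.5,
             31.6, 32.0, 31.2, 39.5, 40.1, 39.9],
})

# Two-way ANOVA with interaction: does strength depend on CNT %, age, and their combination?
model = ols("f_cp ~ C(cnt) * C(age)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```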
Assessment of the Resistance of Materials to Compression and Tension Forces.
The ability to withstand compression and tension forces is an important indicator when evaluating the mechanical strength of cementitious materials. Although the performance of these materials depends on several other factors, it provides a good indication of product quality. Low values of resistance indicate that the mortar or concrete has problems in its structure. These problems may range from the use of unsuitable materials to a poor formation of the internal structure, due, for example, to a lack of densification or the absence of proper curing. Tables 3 and 4 summarize the statistical parameters resulting from the experiment performed to evaluate the compressive strength and the tensile strength by diametrical compression of the samples, respectively. The average values of f_cp and f_tr were generated directly by MINITAB, whereas the standard deviation (Dp) and the variance (Var) were calculated, respectively, as

Dp = sqrt( Σ (x_i − x̄)² / (n − 1) ) and Var = Dp²,

where x_i represents the value of the readings obtained during the test, x̄ is the average of the readings, and n is the number of samples used in the test. In Table 3, the percentage of gain or reduction was taken in relation to the reference sample CN0.
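A minimal sketch of these two statistics for one treatment, with hypothetical readings:

```python
import math

def dp_and_var(readings):
    """Sample standard deviation (Dp) and variance (Var) of a set of test readings."""
    n = len(readings)
    mean = sum(readings) / n
    var = sum((x - mean) ** 2 for x in readings) / (n - 1)  # sample variance
    return math.sqrt(var), var

dp, var = dp_and_var([24.1, 23.8, 24.5])  # hypothetical f_cp readings (MPa)
print(f"Dp = {dp:.3f} MPa, Var = {var:.3f} MPa^2")
```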
With the values of f_cp and f_tr inserted in Table 2, the MINITAB program, using the technique of regression analysis, generated the statistical model profiles for the compressive and tensile tests, represented, respectively, by the graphics in Figures 6 and 7.
The validation of a model is possibly the most important step in the sequence of statistical model building. The residual plots are used to ensure that the assumptions associated with the ANOVA model are not violated. The ANOVA model assumes that the random errors are independent and normally distributed with the same variance for each treatment.
The normal probability plot is a graphical technique for testing a data distribution model of an experiment. It is estimated, thereby, whether a data set presents or not a Gaussian distribution profile.
It is possible to see from graphics (a), in both Figures 6 and 7, that the values of f_cp and f_tr are concentrated almost entirely along the straight line, which leads us to conclude that the model produced by the test has the profile of a normal distribution. This fact is confirmed by the histograms (graphics (c)): they exhibit a bell-shaped symmetry, evenly distributed around zero, showing that the assumption of normality is likely to be true.
Graphics (b) and (d), in both Figures 6 and 7, depict an experiment with a consistent methodology, since the residuals are randomly distributed around zero within a band of ±2%, meaning that there is no deviation in the process. The divergences found in these graphs correspond to the CN0 samples tested at the age of 3 days, whose f_cp and f_tr results lie far from the average. This may be due to an error in the reading process or to the poor quality of the specimens. Figures 8 and 9 show the variation of the average value of the compressive strength and of the tensile strength by diametrical compression, respectively, with the percentage of carbon nanotubes and the age of the mortar samples.
It can be observed that the composite with 0.4% of nanotubes had the best performance at all ages, with a substantial increase in compressive and tensile strength compared to the reference sample CN0. The compressive strength had its highest gain, about 42%, at the age of 7 days, while the tensile strength reached its peak, an increase of approximately 31%, at the age of 28 days.
The insertion of carbon nanotubes in the composites CN2 and CN6 did not significantly affect the compressive strength, and the CN6 samples even had their values reduced at the ages of 7 and 28 days.
However, it is noteworthy that all samples with carbon nanotubes showed increases in tensile strength at the ages of 7 and 28 days. The CN2 sample reached a gain of 12.85% at the age of 28 days in relation to the reference sample CN0. This result corroborates the good performance of the nanotubes in improving the tensile strength, acting in a significant way on the weak point of concrete, which is its tensile strength [16].
The fact that the sample CN4 had the best performance may indicate that there is a range considered "optimal" for the insertion of nanotubes into cement matrices, and this band should be close to the values quoted. It was also noticed that, outside that band, there is no significant gain, and even a loss of resistance can occur.
The increase obtained in the compressive strength is also related to the dispersion of the carbon nanotubes in the matrix. When the dispersion is well performed, the nanotubes are distributed homogeneously in the cement paste, making interconnections with the hydrated calcium silicate grains of the mixture with no occurrence of localized agglomeration. This leads to a denser matrix, which contributes to obtaining a new, tougher material [17].
The graphics in Figures 8 and 9 also provide a more general statistical view of what happens in the experiment due to the interference of the variables in the process. The variation of the average values of compressive and tensile strength can be seen for the combination of each level of one variable with the levels of the other. The compression test shows that there is interaction only between the curves representing the samples with carbon nanotube percentages of 0.0% and 0.6%, at ages between 3 and 7 days, when the average value of f_cp of the composites reaches a level around 24 MPa. On the other hand, the tensile test depicts an interaction among the curves CN0, CN2, and CN6 at ages between 3 and 7 days, when the average tensile strength of the mixture reaches a level of around 2.90 MPa. Figures 10 and 11 show a three-dimensional view of the results of the tests performed on cement samples with and without carbon nanotubes for the comparative evaluation of compression and traction resistance. This type of chart is often used to identify the conditions for parameter optimization in an experiment.
It can be seen clearly from the graphic in Figure 10 that the peak of f_cp occurs for the composite CN4 at the age of 28 days, when it reaches the value of 39.90 MPa, and from the graphic in Figure 11 that the peak of f_tr occurs for the same composite CN4 at the age of 28 days, when it reaches the value of 4.18 MPa. Figure 12, the "main effects plot", shows the evolution of the averages of f_cp and f_tr with each value of each variable, combining the effect of the other variables as if they were independent.
In Figure 12(a), it can be seen, for example, that the average of f_cp is around 25 MPa at the age of 3 days, 29 MPa at the age of 7 days, and 32 MPa at the age of 28 days. Figure 12(b) shows the evolution of the average tensile strength of the samples. It is noted that the average value of f_tr is around 2.93 MPa at the age of 3 days, increasing to 3.15 MPa at 7 days and reaching 3.56 MPa at 28 days of age. From the point of view of the variable percentage of nanotubes, the average of f_tr is 3.0 MPa for CN0, passing to 3.17 MPa for CN2, peaking at 3.71 MPa for the CN4 mixture, and falling to 3.0 MPa with the CN6 composite. Table 5 summarizes the statistical parameters resulting from the ultrasound test conducted to obtain the dynamic modulus of elasticity of the samples. With the values of MOE_d inserted in Table 2, the MINITAB program, using the technique of regression analysis, generated the statistical profile model for the nondestructive test, represented by the graphics in Figure 13.
It is possible to see from Figure 13(a) that the values of MOE_d are concentrated almost entirely along the straight line, which leads us to conclude that the model produced by the test has the profile of a normal distribution. This fact is confirmed by the histogram (Figure 13(c)), which is configured with a bell-shaped symmetry, evenly distributed around zero, showing that the assumption of normality is likely to be true. Figures 13(b) and 13(d) depict an experiment with a consistent methodology, since the residuals are randomly distributed around zero within a band of ±5%, meaning that there is no deviation in the process. The divergences found in these graphs are the values corresponding to entries 10 and 34 of Table 2, reported by MINITAB as "Unusual Observations" for MOE_d. These values correspond to the two CN6 samples tested at the age of 3 days, whose MOE_d results lie far from the average. This may be due to an error in the reading process or to the poor quality of the specimens.
Evaluation of the Resistance of Materials to Bending Stresses.
The results reported by the INCOTECNI press during the bending test of the samples CN0 and CN4 are shown in Table 6, and the corresponding flexion curves, in the form of load versus deformation, are displayed in Figure 14.
The flexion curves obtained from the test show that the composite CN4 gave the best performance, with a breaking point of 2.99 kN, exceeding by 15% the breaking load supported by the mixture without nanotubes (CN0). This result corroborates an important characteristic of carbon nanotubes: they significantly improve the well-known weakness of concrete, its low resistance to bending. Figure 14 also shows that, from the beginning of the load application to the rupture point, the sample CN4 withstood a deformation of 0.90 mm, whereas for the sample CN0 the deformation was 0.81 mm, indicating that the insertion of nanotubes made the composite more malleable, that is, gave it a greater deformation capacity, so that it can withstand more loading.
As a reference, we mention that Li et al. [18] performed the flexion test on prismatic samples with dimensions of 4 × 4 × 16 cm, achieving a 25% increase in flexural strength for mortars with a content of 0.50% of treated carbon nanotubes. Batiston [19], however, achieved lower results, with a largest value of around 5% for the same test and samples of the same dimensions as those used by Li et al. Batiston attributes the difference in the results to the form factor of the carbon nanotubes used, a fact that may have influenced the test results.
In general, there are factors that may contribute to differences in the results, for example, the type of additive used, the form factor of the CNT, the methodology used in the manufacture of the samples, the curing process of the material, and so on. But in all reported cases, the insertion of CNT in cementitious composites resulted in an improvement of their mechanical properties [20].
Assessment of the Microstructure of the Samples.
The porosity tests and the evaluation by SEM were performed on the different blends at the age of 28 days. The results obtained for the percentage of pores and for the apparent and relative densities of the samples are shown in Table 7. The apparent density considers the effect of the pores and, hence, the amount of water contained in the sample volume, while the relative density excludes the influence of the water on the measuring process and is therefore higher than the apparent density [21].
For a comparative view of the magnitudes of the average values obtained from the porosity test, Figures 15 and 16 show, respectively, the variation of the percentage of pores and of the densities with the addition of different amounts of carbon nanotubes to the mortar samples.
The values of the relative density and the apparent density of the four samples were very close; however, the sample CN4 presented the largest values, indicating a denser structure, probably caused by the filling of the pores and by the better interconnection between the grains of the mix due to the presence of the nanotube percentage that showed the best results.
In the evaluation of the porosity of the samples, the CN4 blend also showed the best performance, with a reduction of approximately 15% in pores compared to the reference mixture CN0. This result leads us to understand that the composite with 0.4% of carbon nanotubes developed a denser structure, not only by filling the pores but also by forming more hydration products, producing pores with smaller diameters. It is also observed that the CN2 samples showed no significant improvement in their microstructure, and the CN6 mixture even worsened with the insertion of carbon nanotubes [22].
The decrease in the percentage of pores and the increase in the density of the mixture with the insertion of nanotubes are positive factors for the durability of concrete structures, since, with such a microstructure, the movement of aggressive agents within the material becomes more difficult. Using the SEM technique, it was possible to visualize, through images, the structure of the samples, the material present, and even certain details of the byproducts formed by the hydration of the cement. The SEM images obtained for the mortar samples CN0, CN2, CN4, and CN6, magnified as indicated in each set of images, are shown in Figures 17 and 18. The images of the sample CN0 agree with the theory reported in the literature regarding the formation of acicular crystals, single or in clusters, which represent one of the products of cement hydration: ettringite. In addition, it is possible to note the formation of some plates, which indicate the presence of calcium hydroxide.
It may be noted in the CN2 blend that the incorporation of nanotubes in the matrix changed its morphology. Comparing the images of CN2 and CN0 magnified 10,000 times, for example, it is observed that CN2 contains acicular crystals smaller than those formed in the sample without nanotubes. The presence of carbon nanotubes in CN2 is also evident in the amplifications of 20,000 and 100,000 times.
In the images of the blend CN4, one can identify a greater concentration of carbon nanotubes arranged as a web interlacing the hydrated cement compounds. Another important difference between the samples is the more homogeneous hydration observed in the CN4 compound, considering that this blend contains many acicular crystals evenly distributed, whereas in the sample without nanotubes these crystals are found at random points.
The sample CN6 apparently also showed a morphology with better hydration than the reference sample CN0, since, as shown in the images, the acicular crystals are better dispersed in the matrix of CN6. The presence of carbon nanotubes also provided a better connection of the cement clinker compared to the reference sample.
Conclusions and Final Remarks
Traditional materials in the construction industry, such as concrete, steel, asphalt coatings, and glass, are used on a large scale and produced in large quantities. To give just one example, cementitious materials have existed for over 2,000 years, and currently more than two tons of concrete per person, on average, are used annually worldwide. Historically, the evaluation of the properties of these materials has only been possible at the macroscale.
The understanding of the nanoscale behavior of the cement matrix and its interaction with other components used in the built environment can provide a powerful approach to develop superior concrete with better properties and more effective control of the degradation process.
Carbon nanotubes, when incorporated into cementitious materials, present a remarkable characteristic: they produce the best results at low addition levels, with a Gaussian-like dose-response behavior; that is, there is a range considered "optimal" for the insertion of nanotubes, and outside this range the properties of the composites worsen.
In the experiment reported in this study, we found that the mixture with 0.4% of carbon nanotubes (CN4) was the mortar that showed the best performance in relation to the reference sample, achieving an increase of approximately 40% in compressive strength, 30% in tensile strength, 15% in flexural strength, and about 25% in the dynamic modulus of elasticity, as well as a higher degree of structural compaction. The composites with additions of 0.2% and 0.6% of CNT did not show significant results for these same characteristics.
Initially, a better performance was expected for the tensile strength, since nanotubes perform well for this characteristic. Although the samples showed significant gains in tensile strength, better results were found for the compressive strength of the materials with the addition of the carbon nanotubes.
The mechanical properties investigated are the most commonly used in the field of cement-based materials and serve as a measure of the quality of mortars or concretes. The SEM images and porosity tests were performed in order to obtain additional information about the microstructure of the new composites with nanotubes.
It is noteworthy that this experiment had a purely exploratory, comparative character, in which the characteristics of the composites with nanotubes were compared with the equivalent characteristics of the plain mortar. Different results could be obtained for the same phenomena investigated if we changed, for example, the properties of the materials, such as the nanotube type, the type of additive, and the granulometry of the sand. But we believe that, in qualitative terms, the results would be the same.
The results derived from this research and from other studies in the literature leave no doubt about the benefits of inserting carbon nanotubes in cementitious products. Besides the improvements obtained in the microstructure and mechanical properties of the composites, the use of nanomaterials in construction can represent savings and greater profitability for enterprises, as well as a positive step towards preserving the environment by using Portland cement in a more efficient and durable way in concrete structures.
An analytical model for the illuminance distribution of a power LED
Light-emitting diodes (LEDs) will play a major role in future indoor illumination systems. In general, the generalized Lambertian pattern is widely used as the radiation pattern of a single LED. In this letter, we show that the illuminance distribution due to this Lambertian pattern, when projected onto a horizontal surface such as a floor, can be well approximated by a Gaussian function.
Introduction
Due to the rapid development of solid state lighting technologies, light-emitting diodes (LEDs) will largely replace incandescent and fluorescent lamps in future indoor illumination systems. An LED-based illumination system may consist of a large number, e.g., hundreds or even thousands, of spatially distributed LEDs with narrow beams. The reason for this large number mainly lies in the fact that a single state-of-the-art LED [1], which can produce a luminous flux of 200 lumen, still cannot provide sufficient illumination for an indoor environment, where an illuminance of about 400-1000 lux (lumen per m²) is normally needed. An appealing feature of such a system with narrow beam LEDs is that it can provide localized, colorful, and dynamic lighting effects, especially because the intensity level of each LED can be easily changed.
For such an illumination system, in order to optimize the intensity levels of all the LEDs to achieve certain desired lighting effects, it is essential to have an accurate model for the illuminance distribution of a single LED. In particular, in this letter, we consider the lighting effect rendered on a flat surface, e.g., the floor, by a single LED, assuming the symmetry axis of the LED's radiation pattern to be perpendicular to the floor. More specifically, a two-dimensional (2D) model for the illuminance distribution is proposed.
In the literature, e.g., [2,3,4,5], as well as in the datasheets of actual LED products, e.g., [6], various radiation patterns of LEDs are provided as functions of the observation angle with respect to the LEDs. One of the most widely used patterns is the generalized Lambertian pattern [2]. In this letter, by contrast, we provide a 2D analytical model, as a function of the location on the floor, for the lighting pattern due to a single LED. More specifically, based on the generalized Lambertian pattern, we provide a simple yet accurate analytical model of the lighting effect on the floor due to a single LED.
The emitted light from an LED propagates through free space, illuminating a target location, e.g., the floor. The free space optical channel in principle consists of the line of sight (LOS) path and diffuse reflections. In this paper, we focus on the modeling of the illuminance distribution due to the LOS path. The optical power from diffuse reflections is known, by a good approximation, to be uniformly distributed and to be much smaller than that from the LOS path [7], and is therefore neglected in this paper.
The proposed model for the illuminance distribution is presented in Section 2. Section 3 concludes this letter.
Illuminance distribution
Figure 1 depicts the geometry of an LED and an illuminated location on a flat surface, where r is the distance between the LED and the illuminated location, the projection of r onto the flat surface has length d, and h denotes the vertical distance between the LED and the flat surface. The polar angle of the location with respect to the LED is denoted by θ, and the angle of light incidence on the location is clearly equal to θ. Thus, from the generalized Lambertian pattern, the illuminance, i.e., the optical power per unit area, at the location is a function of d or, equivalently, of θ. For convenience in describing the illuminance distribution on a flat surface at a distance h, we write it as a function of d, denoted by f_L(d):

f_L(d) = f_0 (m+1)/(2π) · h^(m+1) / (d² + h²)^((m+3)/2),   (1)

where f_0 is the total illuminance, m is the Lambertian mode number, and m > 0. The mode number is a measure of the directivity of the light beam and is related to the semiangle of the light beam at half power, denoted by Φ_1/2, by m = −ln(2)/ln(cos(Φ_1/2)) [2]. Therefore, a larger m corresponds to a narrower beam. Commercially available LED lenses can shape the beam of Lambertian-type LEDs into narrow beams with, e.g., Φ_1/2 = 10° or Φ_1/2 = 5° [8,9,10], which correspond to m = 45 and m = 181, respectively. Hence, for the sake of convenience, in this paper we focus on the range from m = 25 to m = 200.
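A minimal numeric sketch of Eq. (1) and of the mode-number relation, useful for sanity-checking the beam widths quoted above; the geometry in the example is arbitrary:

```python
import math

def lambertian_mode(half_angle_deg):
    """Lambertian mode number m from the half-power semiangle (degrees)."""
    return -math.log(2) / math.log(math.cos(math.radians(half_angle_deg)))

def illuminance(d, h, m, f0=1.0):
    """LOS illuminance f_L(d) on a floor at vertical distance h, Eq. (1)."""
    return f0 * (m + 1) / (2 * math.pi) * h ** (m + 1) / (d * d + h * h) ** ((m + 3) / 2)

print(lambertian_mode(10.0), lambertian_mode(5.0))  # roughly 45 and 181, the two beams above
print(illuminance(d=0.5, h=3.0, m=45))              # arbitrary geometry: d = 0.5 m, h = 3 m
```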
Gaussian approximation
For the sake of analytical convenience and tractability when discussing the illumination effects of multiple LEDs, we would like to use an approximate model for the actual f_L(d).
For instance, in [11], the two-dimensional (2D) Fourier transform is used, defined as

F_L(u, v) = ∫∫ f_L(x, y) e^(−j2π(ux+vy)) dx dy.

The analytical form of F_L(u, v) for an integer m can be obtained in closed form (Eq. (2)) in terms of the double factorial m!! and the modified Bessel function of the second kind K_0(·). We can see that it is cumbersome to evaluate the values of F_L(u, v) for a large integer m. Moreover, to the best of our knowledge, there is in general no analytical expression of F_L(u, v) for a non-integer m. Therefore, we are particularly interested in approximation models that can potentially bring convenience to the analysis of the illumination effects of multiple LEDs. More particularly, in this paper, we propose a Gaussian approximation of Eq. (1). It can be observed from Eq. (1) that the value of f_L(d) at d = 0 is the largest and decreases as d increases. Moreover, for an illumination effect, the human visual system tends to focus on the bright region rather than the background. Hence, we start from d = 0 and approximate the rate of decrease in f_L(d).
We take the derivative of Eq. (1) with respect to d and get

f_L′(d) = −(m+3) d / (d² + h²) · f_L(d).   (3)

When d is small compared to h, i.e., d² + h² ≈ h², this becomes

f_L′(d) ≈ −(m+3) d / h² · f_L(d),   (4)

which is a property that defines the Gaussian function. This motivates us to approximate f_L(d) by a Gaussian function. The approximation error in f_L(d) can then be obtained as the difference between f_L(d) and its Gaussian approximation (Eq. (5)). From Eq. (5), the approximation error remains small even when d gets larger, since f_L(d) decreases quickly with increasing d, especially when m is large (see Fig. 2).
Next, we derive the key parameters of the Gaussian approximation of f_L(d), denoted by

f_g(d) = c/(πσ²) · exp(−d²/σ²),

where σ² is the variance and c is a normalization factor. Thus, the derivative of f_g(d) with respect to d is

f_g′(d) = −2d/σ² · f_g(d).   (6)

Comparing Eq. (4) and Eq. (6), we get σ² = 2h²/(m+3). Further, letting f_L(0) = f_g(0), we get c = f_0 (m+1)/(m+3). Thus we have

f_g(d) = f_0 (m+1)/(2πh²) · exp(−(m+3) d²/(2h²)).   (7)

As an example, the comparison between f_L(d) and f_g(d) is illustrated in Fig. 2 for the case h = 3 meter and for different m. The illuminance at every d is normalized by the value at d = 0, i.e., the curves shown in Fig. 2 are actually 10 log₁₀(f_L(d)/f_L(0)) and 10 log₁₀(f_g(d)/f_g(0)). Here, we look at the numerical data on a logarithmic scale, since human eyes perceive brightness logarithmically, a property known as Weber's law. Further, the range of relative illuminance considered is between 0 and −20 dB. This range is taken because illuminance levels below −20 dB are no longer visible to human eyes [12] when one is focused on the center part of the light pattern. It can be seen that the Gaussian approximation is very accurate when m is large, i.e., when the light from the LED is quite focused. The difference between f_L(d) and f_g(d) is slightly larger for a smaller m, e.g., there is a 1 dB difference for m = 50 at d = 1.2 m.
The difference between f_L(d) and f_g(d) can be explained as follows. Comparing Eq. (1) and Eq. (7), we observe that the approximation we make is, on the logarithmic scale,

−(m+3)/2 · ln(1 + d²/h²) ≈ −(m+3)/2 · d²/h².

Through a Taylor expansion, we know that

ln(1 + d²/h²) = d²/h² − (1/2)(d²/h²)² + (1/3)(d²/h²)³ − ⋯   (8)

Hence, in the above Gaussian approximation, we take only the first term in Eq. (8). Moreover, from Fig. 2, the range of d of interest is 0 ≤ d < h. In this range, −(m+3)/2 · ln(1 + d²/h²) > −(m+3)/2 · d²/h², since ln(1 + d²/h²) < d²/h² (the second term in Eq. (8) is negative). Therefore we get f_L(d) > f_g(d), as shown in Fig. 2. The difference between f_L(d) and f_g(d), as can be seen from Eq. (8) as well as Fig. 2, increases with d, resulting in a larger mismatch in the tail of the illuminance distribution. Now, in order to compensate for this difference, we propose another Gaussian approximation, denoted by f̃_g(d), with a slightly larger variance σ̃² = 2h²/m, i.e.,
f̃_g(d) = f_0 (m+1)/(2πh²) · exp(−m d²/(2h²)),   (9)

which is also depicted in Fig. 2. It can be seen that, in general, f̃_g(d) provides a better fit of f_L(d), and yet has the benefit of a simpler expression than f_g(d). Equivalently, from the Taylor expansion in Eq. (8), the approximation error is now compensated by the term 3d²/(2h²) on the logarithmic scale. Note that here we only propose a simple yet effective compensation for the Gaussian model. The discussion of the optimum compensation for f_g(d), which might exist for a given range of d and a certain criterion of optimality, is beyond the scope of this paper.
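The dB-scale comparison behind Fig. 2 can be reproduced with a few lines of numpy; the sketch below evaluates Eqs. (1), (7), and (9) as reconstructed above, so it inherits their assumptions:

```python
import numpy as np

h, m, f0 = 3.0, 50.0, 1.0            # mounting height (m), mode number, total illuminance
d = np.linspace(0.0, 2.0, 201)       # horizontal distance on the floor (m)

f_L = f0 * (m + 1) / (2 * np.pi) * h ** (m + 1) / (d**2 + h**2) ** ((m + 3) / 2)
f_g = f0 * (m + 1) / (2 * np.pi * h**2) * np.exp(-(m + 3) * d**2 / (2 * h**2))
f_gt = f0 * (m + 1) / (2 * np.pi * h**2) * np.exp(-m * d**2 / (2 * h**2))  # modified variance

to_db = lambda f: 10 * np.log10(f / f[0])   # normalize each curve to its peak at d = 0
i = np.argmin(np.abs(d - 1.2))
print(to_db(f_L)[i] - to_db(f_g)[i])        # mismatch on the order of 1 dB at d = 1.2 m, m = 50
print(to_db(f_L)[i] - to_db(f_gt)[i])       # the modified Gaussian sits closer to f_L
```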
As introduced at the beginning of this section, the Gaussian approximation is proposed in this paper for analytical convenience when computing the 2D Fourier transform. The illuminance distribution functions considered in this paper, namely f_L(d), f_g(d), and f̃_g(d), are circularly symmetric. Therefore, we can easily obtain the equivalent expressions of these functions as f_L(x, y), f_g(x, y), and f̃_g(x, y) in the 2D Cartesian coordinate system. Henceforth, the 2D Fourier transform can be applied to these functions, resulting in F_L(u, v), F_g(u, v), and F̃_g(u, v), respectively. For the Gaussian approximations, F_g(u, v) and F̃_g(u, v), we can get the analytical expressions

F_g(u, v) = c · exp(−π²σ²(u² + v²)) and F̃_g(u, v) = c̃ · exp(−π²σ̃²(u² + v²))

for any m > 0, regardless of whether m is an integer or not. In order to evaluate the performance of the Gaussian approximations in terms of the Fourier transform, we present some numerical results in Fig. 3. Here, we again look at the numerical data on a logarithmic scale by evaluating 10 log₁₀(F_L(u,v)/F_L(0,0)), 10 log₁₀(F_g(u,v)/F_g(0,0)), and 10 log₁₀(F̃_g(u,v)/F̃_g(0,0)), respectively. Moreover, we focus on an integer m such that we can numerically compute F_L(u, v) using Eq. (2). Furthermore, due to the symmetry between u and v, and for the sake of convenience, we only show the values of the Fourier transform as a function of u at v = 0. It can be seen that both F_g(u, v) and F̃_g(u, v) give good approximations of F_L(u, v). The accuracy of the approximations is higher for a larger m, i.e., when the light beam is narrow. Furthermore, F̃_g(u, v) is closer to F_L(u, v) when F_L(u, v) is large, e.g., where 10 log₁₀(F_L(u,v)/F_L(0,0)) > −10 dB, which is where the major part of the signal energy lies.
Impact of diffuse light
In the above discussions, we focus on the LOS path. In practice, light also propagates through one or more diffuse reflections to arrive at a location. Due to the nature of diffuse reflections, the light contribution from these non-LOS paths is almost uniformly distributed over the area of a room. A min-to-max variation in the illuminance of less than 3 dB is observed in the literature [13]. Moreover, the total received power from diffuse reflections is much smaller than that from the LOS path. In [7], a 10-20 dB difference is observed between the power from the diffuse paths and that from the LOS path at the center of the radiation beam. Since we focus on the illuminance distribution due to LEDs with narrow beams, diffuse light mostly has to undergo at least two reflections before arriving at the location, unless the LED is located very close to a wall or other objects. Therefore, the path loss is even higher, and we can treat the effect of diffuse light reflections on illumination rendering as negligible.
Concluding remarks
In this letter, we have shown that the illuminance distribution on a flat surface due to a single LED with a generalized Lambertian radiation pattern can be well approximated by a Gaussian function. The approximation error is negligible for an LED with a narrow beam width, e.g., Φ_1/2 = 10° to 5°. In addition to the analytical Gaussian model obtained, we also provide a modified Gaussian model that gives a better fit of the actual illuminance distribution. An application of this Gaussian model is that we can efficiently analyze the illuminance distributions of an illumination system consisting of a large number of LEDs.
Fig. 1. LOS path geometry between an LED and a flat surface.
Fig. 2. The illuminance distribution at h = 3 meter due to a single LED.
Screening and functional validation of lipid metabolism-related lncRNA-46546 based on the transcriptome analysis of early embryonic muscle tissue in chicken
Objective The study was conducted to screen differentially expressed long noncoding RNA (lncRNA) in chickens by high-throughput sequencing and explore its mechanism of action on intramuscular fat deposition. Methods Herein, Rose-crown and Cobb broiler chicken embryo breast and leg muscle lncRNA and mRNA expression profiles were constructed by RNA sequencing. A total of 96 and 42 differentially expressed lncRNAs were obtained in Rose-crown vs Cobb broiler chicken breast and leg muscle, respectively. lncRNA-ENSGALT00000046546, with high interspecific variability and a potential regulatory role in lipid metabolism, and its predicted downstream target gene 1-acylglycerol-3-phosphate-O-acyltransferase 2 (AGPAT2), were selected for further study on the preadipocytes. Results lncRNA-46546 overexpression in chicken preadipocyte 2 cells significantly increased (p<0.01) the expression levels of AGPAT2 and its downstream genes diacylglycerol acyltransferase 1 and diacylglycerol acyltransferase 2 and those of the fat metabolism-related genes peroxisome proliferator-activated receptor γ, CCAAT/enhancer binding protein α, fatty acid synthase, sterol regulatory element-binding transcription factor 1, and fatty acid binding protein 4. The lipid droplet concentration was higher in the overexpression group than in the control cells, and the triglyceride content in cells and medium was also significantly increased (p<0.01). Conclusion This study preliminarily concludes that lncRNA-46546 may promote intramuscular fat deposition in chickens, laying a foundation for the study of lncRNAs in chicken early embryonic development and fat deposition.
INTRODUCTION
Rose-crown (RC) chickens are a cultivated Chinese chicken breed that is well known for its very large comb (the RC breed is introduced in Supplementary Figure S1). Cobb broiler (CB) chickens are a large, fast-growing commercial broiler breed whose weight can reach 3 kg at the age of 6 weeks [1]. Compared with CB chickens, RC chickens exhibit a slower growth cycle but have more delicious meat and are thus more favored by consumers. Meat quality is influenced by multiple factors, such as genetic, nutritional, and environmental factors, among which genetic differences play the main role [2]. Therefore, it is of great significance and commercial value to study methods for improving chicken meat quality on the basis of genetic differences.
Intramuscular fat (IMF) is an important factor affecting meat quality that is distributed in muscle and muscle fiber tissues [3]. IMF deposition can simultaneously promote the separation of muscle fiber bundles and improve muscle tenderness by loosening the cross-links among muscle fibers, fat, and connective tissue [4]. Increased fat content contributes to better meat flavor while improving tenderness and juiciness, particularly when it occurs as IMF at levels higher than 2.5% [5]. Previous studies have confirmed that the IMF content of slow-growing chickens is significantly higher than that of fast-growing broilers in the late growth period [6]. The number of fat cells in animal muscle is determined during the embryonic and early developmental stages. In the late growth stage, fat deposition occurs only through increases in fat cell volume [7]. Some studies of chicken embryos have shown that from embryonic day 17 (E17) to postnatal day 1, IMF is rapidly deposited in muscles but that the amount of IMF in muscle decreases sharply during later development [8]. Thus, embryonic muscle development and IMF deposition have critical effects on meat quality and meat production.
Long noncoding RNAs (lncRNAs) are transcripts longer than 200 nucleotides that lack a protein-coding ability, although some can encode small peptides [9]. Previous studies focused on fat metabolism revealed that lncRNAs potentially regulate multiple biological processes, such as preadipocyte differentiation, fat cell differentiation, and IMF deposition [10]. The intramuscular fat-associated long noncoding RNA (lncRNA IMFNCR) acts as a molecular sponge that binds miR-128-3p and miR-27b-3p, thereby increasing the expression of peroxisome proliferator-activated receptor γ (PPARγ) and promoting fat cell differentiation and IMF deposition in chicken muscle [11]. Additionally, lncAD has been shown to inhibit thioredoxin reductase 1 (TXNRD1) expression in a cis-regulatory manner, decreasing intramuscular preadipocyte adipogenic differentiation and promoting cell proliferation [12]. However, little is known about the expression profiles of lncRNAs related to IMF deposition in early embryonic development.
In this study, lipid droplets in the embryonic muscle tissue of two chicken breeds with different genetic backgrounds were stained with oil red O at embryonic days 7 and 8, and the triglyceride contents in muscle at 42 days after hatching were compared. Differentially expressed lncRNAs and mRNAs were identified from the breast and leg muscles of the two chicken embryos. We then analyzed the effect of lncRNA-ENSGALT00000046546 (lncRNA-46546) on fat metabolism and proliferation in immortalized chicken preadipocyte 2 (ICP2) cells (College of Animal Science and Technology, Northeast Agricultural University, China) [13]. This paper provides a valuable basis for further studies on the molecular mechanism underlying chicken IMF deposition, leading to a better understanding of this biological process.
Ethical statement
All experimental animals were handled according to a protocol approved by the Medical Ethics Committee of the First Affiliated Hospital, Medical College, Shihezi University (A2016-095, 9 March 2016). All animal experiments were in line with the Guide for the Care and Use of Laboratory Animals by international committees.
Sample preparation
Two chicken breeds, RC and CB, were subjected to high-throughput sequencing. One hundred fertilized RC and CB chicken eggs were incubated at 37°C under 60% humidity, and E7 leg muscles (L) and E8 breast muscles (B) were surgically collected on a clean bench, immediately placed in liquid nitrogen, and then stored at -80°C. The embryonic brain was collected for sex identification by referencing the protocol of Vucicevi et al [14]. Male samples were divided into four groups, rose-crown chicken leg muscles (RCL), Cobb broiler chicken leg muscles (CBL), rose-crown chicken breast muscles (RCB), and Cobb broiler chicken breast muscles (CBB), for high-throughput sequencing, with 3 replicates of each group. At 42 days after hatching, breast and leg muscles were sampled from the two chicken breeds and stored at -80°C for the determination of IMF and triglyceride (TG) contents in 3 replicates.
Determination of tissue intramuscular fat content and oil red O staining
Intramuscular fat determination was performed on 10 g samples of breast and leg muscle. After the samples were thawed at 4°C for 48 h, adherent adipose and connective tissue were removed from the muscle, which was then freeze-dried overnight. Petroleum ether fat extraction from the resultant dried product was conducted for 8 h using a Soxtec Extraction System, and the extracted fat was then dried for 1 h at 105°C. The IMF content was expressed on a freeze-dried basis.
Fresh and equally sized breast (E8) and leg (E7) muscle tissues were collected for compression slices.After slicing and fixing, compression slices were stained with oil red O and hematoxylin.The staining results were observed with a light microscope.
RNA library construction and transcriptome assembly
Total RNA was extracted using TRIzol Reagent (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's instructions. The purity, concentration, and integrity of the total RNA were checked using a NanoPhotometer spectrophotometer (IMPLEN, Munich, Germany), a Qubit 2.0 Fluorometer (Life Technologies, Carlsbad, CA, USA), and an RNA Nano 6000 Assay Kit with a Bioanalyzer 2100 system (Agilent Technologies, Santa Clara, CA, USA), respectively. The sequencing libraries were generated from rRNA-depleted RNA with the NEBNext Ultra Directional RNA Library Prep Kit for Illumina (NEB, Boston, MA, USA). After cluster generation with a TruSeq PE Cluster Kit v3-cBot-HS (Illumina, USA), the libraries were sequenced on the Illumina HiSeq 4000 platform at the Novogene Bioinformatics Institute (Beijing, China), and 150 bp paired-end reads were generated. Clean reads were obtained by removing reads containing adapters, reads containing poly-N sequences, and low-quality reads from the raw data. The chicken reference genome and gene model annotation files were downloaded from the Ensembl genome browser (Ensembl Release 90, Gallus gallus 5.0, http://asia.ensembl.org/Gallus_gallus/Info/Index). The reference genome index was built using HISAT2-build (v2.0.4), and paired-end clean reads were aligned to the reference genome using HISAT2 (v2.0.4) with "--rna-strandness RF" and other parameters set to the defaults [15]. The mapped reads of each sample were assembled by using StringTie (v1.3.1) [16].
LncRNA identification
The identification of lncRNAs from the transcriptome splicing results was based on the structural characteristics of lncRNAs and their lack of protein-coding capacity. First, we used Cuffmerge software to merge the transcripts and removed the transcripts whose strand direction was uncertain. The identified transcripts were then screened via the following 5-step process, and the selected lncRNAs were used as the final candidate lncRNA set for subsequent analysis: i) transcripts with ≥2 exons were selected; ii) transcripts with a length >200 bp were selected; iii) Cuffcompare (v2.1.1) software was used to screen out transcripts that overlapped with the exon regions of the database annotation, which were used for lncRNA annotation in the subsequent analysis; iv) the expression level of each transcript was calculated with Cuffquant software, and transcripts with ≥0.5 fragments per kilobase of transcript per million mapped reads (FPKM) were selected; and finally, v) three software programs were used for coding-potential analysis to identify lncRNAs: Pfam-scan (E-value<0.001, v1.3), CPC2 (score<0, v0.1), and CNCI (score<0, v2). The intersection of the results of the three programs was used as the lncRNA dataset predicted by this analysis.
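The five screening steps above amount to a simple filter over the assembled transcripts. Below is a minimal sketch of that logic; the record fields are hypothetical stand-ins, and the coding-potential set stands in for the external Pfam-scan/CPC2/CNCI runs:

```python
def is_candidate_lncrna(t, noncoding_by_all_three):
    """Apply the 5-step screen to one assembled transcript record.

    t: dict with hypothetical keys 'id', 'exons', 'length', 'annotated_overlap', 'fpkm'.
    noncoding_by_all_three: set of transcript ids called noncoding by
    Pfam-scan, CPC2, and CNCI alike (intersection of the three tools).
    """
    return (t["exons"] >= 2                         # (i)  at least two exons
            and t["length"] > 200                   # (ii) longer than 200 bp
            and not t["annotated_overlap"]          # (iii) no overlap with annotated exons
            and t["fpkm"] >= 0.5                    # (iv) minimum expression level
            and t["id"] in noncoding_by_all_three)  # (v) no coding potential

transcripts = [{"id": "TCONS_1", "exons": 3, "length": 850,
                "annotated_overlap": False, "fpkm": 1.2}]
candidates = [t for t in transcripts if is_candidate_lncrna(t, {"TCONS_1"})]
print(candidates)
```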
Differential expression and conservation analyses
Differential expression was determined from the digital transcript or gene expression data using a model based on the negative binomial distribution [17]. In the RCB vs CBB and RCL vs CBL comparisons, transcripts with a Q-adjusted value <0.05 were identified as differentially expressed. To calculate the sequence conservation of transcripts, two programs in Phast (v1.3) were used: phyloFit and phastCons [18]. phyloFit was used to compute phylogenetic models for conserved and nonconserved regions among species and was run with the parameter --tree "(mm10, (galGal4, hg19))"; then, the model and hidden Markov model transition parameters were used to compute a set of conservation scores for the lncRNAs and coding genes with phastCons.
Target gene prediction and functional enrichment analyses
To explore the function of the lncRNAs, we first predicted their cis and trans target genes. Cis activity refers to lncRNAs acting on neighboring target genes; the coding genes located 10 kb-100 kb upstream and downstream of each lncRNA were searched. Trans target genes were predicted from co-expression, with a Pearson correlation coefficient between lncRNA and mRNA expression levels of |r|>0.95 taken as the final criterion. Gene ontology (GO) enrichment analyses of differentially expressed genes or lncRNA target genes were implemented with the GOseq R package (Release 2.12) [19]. Differentially expressed lncRNA genes were tested for statistical enrichment in Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways by using KOBAS (v2.0) software [20]. GO terms and KEGG pathways with corrected p-values <0.05 were considered significantly enriched in differentially expressed genes.
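A minimal sketch of the trans-target criterion described above, assuming expression vectors across the sequenced libraries (the FPKM values are invented, and scipy.stats.pearsonr is a standard call):

```python
from scipy.stats import pearsonr

def trans_targets(lncrna_expr, mrna_expr_by_gene, r_cutoff=0.95):
    """Flag mRNAs whose expression correlates with the lncRNA across samples."""
    hits = {}
    for gene, expr in mrna_expr_by_gene.items():
        r, _ = pearsonr(lncrna_expr, expr)
        if abs(r) > r_cutoff:
            hits[gene] = r
    return hits

# Hypothetical FPKM values across six libraries
lnc = [2.1, 2.4, 0.8, 0.9, 3.0, 2.8]
print(trans_targets(lnc, {"AGPAT2": [5.0, 5.6, 2.0, 2.1, 7.1, 6.5]}))
```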
Rapid amplification of cDNA ends
Rapid amplification of cDNA ends (RACE) polymerase chain reaction (PCR) was performed to obtain the full-length sequence of lncRNA-46546. Total RNA from breast muscle tissue was employed as the template for nested PCR using a SMARTer RACE cDNA Amplification Kit (Takara, Tokyo, Japan) following the manufacturer's instructions. The RACE PCR products were cloned into the pUC19 vector (Takara, Japan) and sequenced by Sangon Biotech (Shanghai, China).
Primers and small interfering RNAs
Primers were designed using Premier Primer 5.0 software (Premier Biosoft International, Palo Alto, CA, USA) and synthesized by Sangon Biotech (China). The U6 small nuclear RNA and glyceraldehyde-3-phosphate dehydrogenase genes were selected as reference genes. The quantitative real-time PCR (qRT-PCR) primer information is provided in the supplemental material (Supplementary File S1). The primers used for cloning the full-length chicken lncRNA-46546 are shown in Table 1. Among these primers, lncRNA-46546-5′RACE-outer and lncRNA-46546-5′RACE-inner were used to clone the lncRNA 5′ sequence, lncRNA-46546-3′RACE-outer and lncRNA-46546-3′RACE-inner were used to clone the lncRNA 3′ sequence, and lncRNA-46546-5 and lncRNA-46546-3 were used to amplify the full-length sequence. The PCR products were subsequently excised with the KpnI and XhoI restriction endonucleases and ligated into the pcDNA3.1(+) plasmid vector. The overexpression vector was named pcDNA3.1(+)-46546. The small interfering RNAs (siRNAs) used for the specific knockdown of lncRNA-46546 were designed and synthesized by GenePharma (Shanghai, China) and are listed in Table 2.
Cell oil red O staining and triglyceride content determination
Cells were washed with phosphate-buffered saline (PBS) and stained using an Oil Red O Stain Kit (Solarbio, Beijing, China) following the manufacturer's instructions. The cells were then observed and photographed with an inverted fluorescence microscope.
After harvesting the cells, an appropriate amount of PBS was added to resuspend them, and the suspension was subjected to ultrasonic cell disruption at 130 W power 10 times, for 8 to 10 s each time at 15 s intervals, followed by centrifugation at 4°C and 12,000 rpm for 5 min. The supernatant was collected as the protein sample, and the protein content was determined using the bicinchoninic acid method. TG assay working fluid (Nanjing Jiancheng, Nanjing, China) was mixed with the protein sample, or with ddH2O as a blank control, followed by incubation at 37°C for 10 min, and the optical density (OD) was then determined at a 510 nm wavelength. Moreover, cell culture medium was collected to measure TG contents using the TG content assay kit of Beijing Boxbix Science & Technology Co., Ltd. (Beijing, China). Standard and blank wells were included to measure the OD at a 420 nm wavelength.
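Such a TG readout typically reduces to a single-point standard-curve calculation. A minimal sketch follows; the OD values and the standard concentration are hypothetical, and the exact formula should be taken from the kit insert:

```python
def tg_concentration(od_sample, od_blank, od_standard, c_standard):
    """Triglyceride concentration from absorbance, by single-point standard curve."""
    return (od_sample - od_blank) / (od_standard - od_blank) * c_standard

# Hypothetical readings at 510 nm against a hypothetical 2.26 mmol/L standard
print(tg_concentration(od_sample=0.42, od_blank=0.05, od_standard=0.38, c_standard=2.26))
```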
CCK-8 assay
After counting, the cells were seeded in a 96-well plate. When the cells reached a density of 50% to 60%, they were transfected with pcDNA3.1(+)-46546, pcDNA3.1(+), siRNA, or siRNA-NC, and a blank control was also set up, each with 3 replicates. The proliferation of the cells was monitored at 24, 48, 72, and 96 h using a Cell Counting Kit-8 (Dojindo, Kyushu, Japan). Every 24 h, CCK-8 solution was added to the medium, and the OD at 450 nm was determined using an enzyme-labeled instrument after incubation for 1 h.
qRT-PCR verification
Total RNA was reverse transcribed using the PrimeScript RT Reagent Kit with gDNA Eraser (TaKaRa, Japan), and qRT-PCR was performed using a LightCycler 96 (Roche, Basel, Switzerland). Each 20 μL reaction contained 10 μL of LightCycler 480 SYBR Green I Master Mix (Roche, Switzerland), 7 μL of ddH2O, 1 μL of cDNA, 1 μL of a specific forward primer (10 pmol/μL), and 1 μL of a specific reverse primer (10 pmol/μL). The reaction conditions were as follows: preincubation at 95°C for 5 min, followed by 45 cycles of denaturation at 95°C for 10 s, annealing at the optimal temperature for 20 s, and extension at 72°C for 10 s, and then a final incubation at 95°C for 5 s, 65°C for 1 min, and 97°C for 1 s for melting curve analysis. Gene expression levels were normalized using the 2^(−ΔΔCT) method. Differential expression analysis was performed via one-way analysis of variance with SPSS 22.0 software, and p<0.05 was defined as indicative of a significant difference.
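For reference, the 2^(−ΔΔCT) normalization can be written as a short function; the Ct values below are invented for illustration:

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene by the 2^(-ddCt) method.

    dCt = Ct(target) - Ct(reference), computed for treated and control samples;
    ddCt = dCt(treated) - dCt(control); fold change = 2 ** (-ddCt).
    """
    d_ct_treated = ct_target - ct_ref
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    return 2 ** -(d_ct_treated - d_ct_control)

# Hypothetical Ct values: target vs reference gene in treated and control cells
print(relative_expression(22.0, 18.0, 25.5, 18.2))  # about 9.8-fold up
```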
Statistical analysis
The experimental data were collated and analyzed with Excel.
Oil red O staining and IMF and TG content determination
Lipid deposition in the breast muscles of RC embryos was greater than that in CB embryos on E8, whereas lipid deposition in the leg muscles of RC embryos differed less from that in CB embryos on E7 (Figure 1A). At 42 days after hatching, the IMF content of RC breast muscle was significantly higher than that of CB breast muscle; the IMF content of RC leg muscle was also higher than that of CB leg muscle, but not significantly so (Figure 1B). The results of TG content determination showed that the contents of RC breast and leg muscles were significantly higher than those of CB muscles (Figure 1C).
Sequencing results and identification of lncRNAs and mRNAs
According to the analysis of the RNA-seq results of RCB, CBB, RCL, and CBL (3 replicates in each group), the number of raw reads ranged from 91,374,036 to 128,934,028. The Q20 values ranged from 96.27% to 98.18%, the Q30 values ranged from 90.6% to 95.12%, and the GC content ranged from 45.25% to 50.17%. These results showed that the sequencing data from all 12 samples met the requirements for subsequent analysis. A total of 355,066 assembled transcripts were obtained by splicing and merging the total mapped transcripts (Figure 2A). Through comparison with the Ensembl database, a total of 1,100 annotated lncRNAs were identified, including 1,095 long intergenic noncoding RNAs (lincRNAs) and 5 miscellaneous RNAs (miscRNAs). A total of 13,180 noncoding transcripts (Figure 2B) were identified with the three screening software programs (Pfam-scan, CPC, and CNCI), comprising 5,867 lincRNAs (44.5%), 5,606 intronic lncRNAs (42.5%), and 1,707 antisense lncRNAs (13.0%) (Figure 2C). In addition, 30,252 annotated mRNAs were identified.
Comparison of lncRNA and mRNA characteristics
By comparison, we found that the lncRNAs had fewer exons (Figure 2D) and shorter open reading frames than the mRNAs (Figure 2E). The annotated lncRNAs and novel lncRNAs were shorter than the mRNAs (Figure 2F). The lncRNAs in different libraries showed similar expression levels (Figure 2G), and the expression levels of the mRNAs were higher than those of the lncRNAs (Figure 2H). The results of the conservation analysis showed that the exons of the lncRNAs were more conserved, and their introns and promoters were similarly conserved to those of the mRNAs (Figure 2I). These feature comparisons are consistent with the results of previous studies and confirm the accuracy of our lncRNA screening results [21].
The correlation coefficient can represent the degree of similarity among samples. We performed a Pearson correlation analysis on all samples, and the results showed that the correlations among samples were greater than 0.9, indicating high correlation (Figure 3G). Cluster analysis results showed that all samples presented repeatable results except for CBB3 and CBL3, potentially because of individual differences.
qRT-PCR verification
To confirm the accuracy of the RNA-seq results, we randomly selected 8 differentially expressed lncRNAs (ENSGALT000 ) for verification, each with 3 replicates. The results showed that all lncRNA and mRNA expression trends were consistent with the RNA-seq results (Supplementary Figure S2), indicating that our RNA-seq results were reliable.
Enrichment analysis of differentially expressed mRNAs
GO analysis revealed the functions of genes in stage-specific modules, and pathway analysis revealed the essential pathways and metabolic networks of the genes. In this study, a total of 1,016 GO terms were significantly enriched (p<0.05) in the comparison between the RCB and CBB groups, and a total of 471 GO terms were significantly enriched (p<0.05) in the comparison between the RCL and CBL groups. The significantly enriched GO terms of the mRNAs were related primarily to organelle and cell biological processes (Figure 4A, 4B). KEGG pathway analysis identified 15 significantly enriched pathways (p<0.05), including the ECM-receptor interaction, adherens junction, biosynthesis of amino acids, RNA degradation and transport, protein processing in endoplasmic reticulum, and mammalian target of rapamycin (mTOR) signaling pathways (Figure 4C, 4D).
RNA-seq expression level and genome location of lncRNA-46546
According to our RNA-seq results, lncRNA-46546 was differentially expressed only between the RCB and CBB groups. The expression levels of lncRNA-46546 and the AGPAT2 gene are shown in Table 3. On chicken chromosome 17, lncRNA-46546 is located 11.2 kb upstream of the AGPAT2 gene (Supplementary Figure S3A), suggesting that AGPAT2 may be a potential cis target gene of lncRNA-46546. lncRNA-46546 has 5 exons, and its predicted sequence length is 1,337 bp. Its accession number in NCBI is LOC100858649.
Amplification the full-length sequence of lncRNA-46546
The amplified portion of the 5′ end of the lncRNA was 1,222 bp in length, of which 919 bp was completely consistent with the known sequence of lncRNA-46546 (Supplementary Figure S3B; Supplementary File S4A). The amplified portion of the 3′ end of the lncRNA was 1,100 bp in length, of which 924 bp coincided with the known sequence of lncRNA-46546, and there was a poly-A tail structure at the 3′ end (Supplementary Figure S3C; Supplementary File S4B). The full-length sequence of lncRNA-46546 was obtained by touchdown PCR and ligated into the pcDNA3.1(+) plasmid vector.
Detection of overexpression and interference efficiency
ICP2 cells were transfected with the recombinant plasmid pcDNA3.1(+)-46546, the empty plasmid pcDNA3.1(+), the siRNAs, or siRNA-NC, with ddH2O serving as a blank control, and the expression of lncRNA-46546 was detected by qRT-PCR. Each group included three replicates. The results showed that the expression of lncRNA-46546 increased significantly after transfection with pcDNA3.1(+)-46546, reaching a level 10.96 times higher than that following pcDNA3.1(+) transfection and 12.49 times higher than that in the blank control treatment (Figure 6A). Compared with the results of the siRNA-NC treatment, the interference efficiencies of the 3 siRNAs were as follows: siRNA-1, 41.7%; siRNA-2, 67.3%; and siRNA-3, 79.4%. Therefore, siRNA-3 presented the best interference efficiency and was selected for subsequent experiments (Figure 6B).
lncRNA-46546 promotes lipid deposition and triglyceride synthesis in ICP2 cells
We first examined lipid deposition and TG synthesis after the overexpression and knockdown of lncRNA-46546 in ICP2 cells. Each group included three replicates. The results are shown in Figure 6C. Relative to the pcDNA3.1(+) treatment, the deposition of lipids following pcDNA3.1(+)-46546 treatment was greater and denser, and the deposition of lipids following siRNA-3 and siRNA-NC treatment was relatively low. The results of TG content determination showed that after transfection with pcDNA3.1(+)-46546, the TG content of ICP2 cells was significantly increased relative to that in the other treatment groups, while the TG content of ICP2 cells was significantly decreased after transfection with siRNA-3. Moreover, the analysis of the TG content of the cell culture medium also showed that the TG content for ICP2 cells transfected with pcDNA3.1(+)-46546 was significantly higher than that in the other treatment groups and that for cells transfected with siRNA-3 was significantly lower, very similar to our findings within the cells (Figure 6D). These results indicate that lncRNA-46546 can promote the formation of lipids in ICP2 cells (Figure 6E).
lncRNA-46546 promotes the expression of AGPAT2 and several lipid metabolism genes
To explore the effect of lncRNA-46546 on mRNA expression in ICP2 cells, mRNA expression was detected by qRT-PCR after the different treatments, each with 3 replicates. The results showed that the expression levels of lncRNA-46546 and the AGPAT2 gene were significantly increased after the overexpression of lncRNA-46546. Conversely, after knockdown of lncRNA-46546, the expression of lncRNA-46546 was significantly decreased, and the expression of the AGPAT2 gene was decreased but did not reach a significant level (Figure 7A, 7B). According to these results, lncRNA-46546 can promote rather than inhibit the expression of the AGPAT2 gene, and we preliminarily identified AGPAT2 as the cis target gene of lncRNA-46546. We then detected the expression of several classical genes closely related to lipid metabolism, and the results are shown in Figure 7. The expression levels of the PPARγ, CCAAT enhancer binding protein alpha (C/EBPα), fas cell surface death receptor (FAS), sterol regulatory element binding transcription factor 1 (SREBP1), and fatty acid binding protein 4 (FABP4) genes were significantly increased after the overexpression of lncRNA-46546. After the knockdown of lncRNA-46546, the expression levels of the PPARγ and SREBP1 genes were significantly decreased, but the changes in the expression of the C/EBPα, FAS, and FABP4 genes were not significant. The changes in lipoprotein lipase (LPL) gene expression did not differ significantly between the treatments. We also examined the changes in the expression levels of diacylglycerol acyltransferase 1 (DGAT1), diacylglycerol acyltransferase 2 (DGAT2), and lipid phosphate phosphohydrolase 1 (LPIN1), which act downstream of AGPAT2. The expression levels of the DGAT1 and DGAT2 genes were significantly increased after the overexpression of lncRNA-46546 (Figure 8A, 8B). The expression level of the LPIN1 gene did not change significantly in the overexpression group, but after the knockdown of lncRNA-46546, its expression was significantly reduced (Figure 8C). These data demonstrate that lncRNA-46546 is involved in the lipid metabolism of ICP2 cells.
lncRNA-46546 inhibits the proliferation of ICP2 cells
To further investigate the function of lncRNA-46546, we used the CCK-8 assay to measure the proliferation of ICP2 cells at 24, 48, 72, and 96 h after the overexpression and knockdown of lncRNA-46546, with ddH2O serving as a blank control. Each group included three replicates. The results showed that the proliferation of ICP2 cells was significantly inhibited after transfection with pcDNA3.1(+)-46546 and decreased gradually beginning at 48 h (Figure 8D). After the knockdown of lncRNA-46546, the proliferation of ICP2 cells was not significantly affected, although it was lower than that in the blank control group (Figure 8E). In summary, these data indicate that lncRNA-46546 has an inhibitory effect on cell proliferation.
DISCUSSION
In recent years, studies have shown that lncRNAs are widely distributed in animals; lncRNAs present higher spatiotemporal and tissue specificity than coding genes and are less conserved among species [22]. These characteristics increase the difficulty of lncRNA research, but the use of RNA-seq facilitates it. RNA-seq performed in humans has revealed the molecular regulatory mechanisms of fat accumulation between groups of samples with different genetic backgrounds [23]. In this study, 96 and 42 differentially expressed lncRNAs were identified in the RCB vs CBB and RCL vs CBL comparisons, respectively. Interestingly, only 6 differentially expressed lncRNAs were shared between the two comparisons, possibly because lncRNAs show strong tissue specificity, consistent with the results of previous studies [24].
Numerous studies have shown that lncRNAs can regulate not only the expression of neighboring protein-coding genes through cis-acting mechanisms but also the expression of genes located on other chromosomes through trans-acting mechanisms [25]. In this study, 566 cis and 42 trans candidate target genes were predicted from 132 differentially expressed lncRNAs. Some of the candidate target genes were significantly enriched in GO terms associated with fat metabolism and have been previously investigated and reported. For example, inhibiting the expression of the NOTCH1 gene increases fatty acid oxidation in hepatocytes and reduces IMF deposition in the liver [26], whereas activation of NOTCH1 expression promotes the proliferation of preadipocytes [27]. The overexpression of Id3 inhibits adiponectin and the differentiation of preadipocytes [28]. ACSBG2 and AGPAT2 have been reported to be involved in the lipid metabolism of chickens [29]. KEGG pathway analysis revealed the enrichment of NOTCH1 in the NOTCH signaling pathway, Id3 in the transforming growth factor β signaling pathway, ACSBG2 in the PPAR signaling pathway, and AGPAT2 in the glycerolipid metabolism, glycerophospholipid metabolism and metabolic pathways; some of these pathways are considered classic lipid metabolism signaling pathways.
According to the above results, we selected lncRNA-46546 from among the many differentially expressed lncRNAs for further functional research. Its candidate target gene AGPAT2 encodes a key rate-limiting enzyme in TG biosynthesis in adipocytes [30] and belongs to the glycerol-3-phosphate pathway of de novo TG biosynthesis. In the TG synthesis pathway, glycerol-3-phosphate acyltransferase, mitochondrial (GPAM), AGPAT2, LPINs and other genes are involved in the successive enzymatic reactions that generate lysophosphatidic acid (LPA), phosphatidic acid (PA), and diacylglycerol (DG) from glycerol-3-phosphate (G3P) [31]. These are important genes that regulate TG synthesis, and the last step of TG synthesis is catalyzed solely by the diacylglycerol acyltransferases (DGATs) [32]. A large number of studies show that congenital generalized lipodystrophy is caused by a lack of the AGPAT2 gene, indicating that the AGPAT2 gene is closely related to fat metabolism [33]. Studies have shown that LPINs and DGATs are also involved in fat metabolism and play important roles in the synthesis of TGs [34]. The results of this study showed that overexpression of lncRNA-46546 promoted the expression of its potential downstream cis target gene AGPAT2. The expression levels of the DGATs and lipid phosphate phosphohydrolases (LPINs) were also correlated with AGPAT2. The expression of AGPAT2 affects the generation of LPA and PA; therefore, the expression levels of the downstream LPINs and DGATs change correspondingly, promoting the transformation of LPA and PA to DG and TG. We speculate that the expression of the AGPAT2 gene and its downstream genes in the pathway is also promoted by other factors (such as HIF-1 and seipin proteins). HIF-1 directly regulates the expression of the AGPAT2 gene [35], and seipin and AGPAT2 can interact during early adipogenesis and potentiate the activity of adipogenic enzymes [36]. These factors need to be further verified.
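For quick reference, the pathway just described can be written out step by step. The sketch below is only a restatement of the preceding paragraph in code form; the step-to-enzyme mapping follows the text, not any external source.

```python
# De novo TG synthesis (glycerol-3-phosphate pathway), as described above:
# substrate --(enzyme)--> product
TG_PATHWAY = [
    ("G3P", "GPAM",    "LPA"),  # glycerol-3-phosphate -> lysophosphatidic acid
    ("LPA", "AGPAT2",  "PA"),   # -> phosphatidic acid (rate-limiting step discussed here)
    ("PA",  "LPINs",   "DG"),   # -> diacylglycerol
    ("DG",  "DGAT1/2", "TG"),   # -> triglyceride (final step)
]

for substrate, enzyme, product in TG_PATHWAY:
    print(f"{substrate} --{enzyme}--> {product}")
```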
According to our results, lipid deposition and TG synthesis were promoted after the overexpression of lncRNA-46546 in ICP2 cells. The variation trend of the intracellular and extracellular TG contents was consistent with that of the expression levels of AGPAT2 and its downstream DGAT genes: the increased expression of lncRNA-46546 raised the expression of its target gene AGPAT2, affecting the generation of LPA, and the TG content in cells ultimately changed accordingly. In further experiments, we found that after the overexpression of lncRNA-46546, the expression of PPARγ, C/EBPα, FAS, SREBP1, and FABP4 was also significantly increased, and after knockdown, the expression levels of the PPARγ and SREBP1 genes were significantly decreased. These results further confirm that lncRNA-46546 affects lipid metabolism and TG production. PPARγ and C/EBPα play important roles in early adipocyte differentiation [37]. PPARγ is a key regulator of adipogenesis, being both necessary and sufficient for adipogenesis; so far, no factor has been found that can induce fat formation in the absence of PPARγ [38]. Recent studies have revealed that PPARγ and C/EBPα target genes are co-localized, and the PPARγ-C/EBPα positive feedback pathway enables pluripotent cells to differentiate into adipocytes [39,40]. Ramanathan et al [33] found that when the expression of the AGPAT2 gene was knocked down, the expression of the PPARγ and C/EBPα genes was inhibited, and cellular TG synthesis was also inhibited; Subauste et al [41] confirmed these results. Thus, an association clearly exists between the expression of the AGPAT2, PPARγ, and C/EBPα genes, but not with the FAS, SREBP1, and FABP4 genes; our results also support this finding. Finally, we examined the effect of lncRNA-46546 on the proliferation of ICP2 cells. Overexpression of lncRNA-46546 inhibited cell proliferation; in contrast, its knockdown had no significant effect on proliferation, although the number of cells was lower than that in the blank control group. Knocking down the expression of the AGPAT2 gene has been reported to lead to the accumulation of lysophosphatidic acid in cells, and excessive accumulation of lysophosphatidic acid can promote the proliferation of preadipocytes [42]. In this study, overexpression of lncRNA-46546 led to up-regulation of AGPAT2 expression, resulting in decreased accumulation of intracellular lysophosphatidic acid and thus inhibiting the proliferation of preadipocytes.
In summary, our study identified a highly differentially expressed lncRNA, lncRNA-46546, in chickens with two different genetic backgrounds. Functional verification at the ICP2 cell level showed that lncRNA-46546 promotes cellular lipid deposition and TG synthesis by regulating its potential cis target gene AGPAT2 and can inhibit the proliferation of ICP2 cells. This study identified the molecular function of a lncRNA and provides useful leads for further exploring IMF deposition.
Figure 1. Oil red O staining and determination of IMF and TG contents. (A) Oil red O staining of compression slices of RC and CB breast (E8) and leg muscle (E7) tissue. Lipids were dyed red by oil red O, and nuclei were dyed blue by hematoxylin. Magnification: 40×. (B) Comparison of IMF contents in breast and leg muscles between the two chicken breeds. ** denotes p<0.01. (C) Comparison of TG contents in breast and leg muscles between the two chicken breeds. ** denotes p<0.01; * denotes p<0.05. IMF, intramuscular fat; TG, triglyceride; RC, rose-crown; CB, Cobb broiler.
Figure 2. Screening and characteristics of lncRNAs and mRNAs. (A) A total of 355,066 transcripts were assembled using Cufflinks with a stringent filtering pipeline to discard transcripts lacking the characteristics of lncRNAs. (B) Identification of lncRNAs using CPC, CNCI, and PFAM. A total of 13,180 transcripts were identified by the three software programs, and both protein-coding transcripts and putative protein-coding transcripts were removed. (C) Classification of novel lncRNAs. (D) Exon numbers and (E) ORF length distributions of lncRNAs, with ORF sequences predicted by EMBOSS: getorf. (F) Lengths of mRNAs, annotated lncRNAs and novel lncRNAs. (G) Box plot of the expression levels of lncRNAs in the four libraries (shown as log10 (FPKM+1)). (H) Violin plot of the expression levels of mRNAs and lncRNAs (shown as log10 (FPKM+1)). (I) Conservation scores of mRNAs, annotated lncRNAs and novel lncRNAs. ORF, open reading frame; FPKM, fragments per kb of transcript per million mapped reads.
Figure 3. Analysis of differentially expressed lncRNAs and mRNAs. (A), (B), (D), and (E) Volcano plots of differentially expressed lncRNAs and mRNAs from the two comparisons (RCB vs CBB and RCL vs CBL). (C) and (F) Venn diagrams of differentially expressed lncRNAs and mRNAs from the two comparisons (RCB vs CBB and RCL vs CBL). (G) Correlation map between all samples; the color range from blue to white indicates high to low correlations. RCB, rose-crown chicken breast muscles; CBB, Cobb broiler chicken breast muscles; RCL, rose-crown chicken leg muscles; CBL, Cobb broiler chicken leg muscles.
Figure 4. GO and KEGG analysis of differentially expressed mRNAs. (A) and (B) Histograms of the GO enrichment of differentially expressed mRNAs. Red denotes biological processes, green denotes molecular functions, and blue denotes cellular components; the top 30 terms identified in the analysis are displayed. (C) and (D) Scatter plots of the KEGG enrichment of differentially expressed mRNAs; the top 20 terms identified in the analysis are displayed. GO, gene ontology; KEGG, Kyoto encyclopedia of genes and genomes.
Figure 5. GO analysis of differentially expressed lncRNAs. (A) GO analysis of cis target genes in RCB vs CBB. (B) GO analysis of trans target genes in RCB vs CBB. (C) GO analysis of cis target genes in RCL vs CBL. (D) GO analysis of trans target genes in RCL vs CBL. Red denotes biological processes, green denotes molecular functions, and blue denotes cellular components; the top 30 terms identified in the analysis are displayed. GO, gene ontology; RCB, rose-crown chicken breast muscles; CBB, Cobb broiler chicken breast muscles; RCL, rose-crown chicken leg muscles; CBL, Cobb broiler chicken leg muscles.
Table 1. Primers used for rapid amplification of cDNA ends polymerase chain reaction and vector construction.
Table 2. siRNA sequences used for RNA interference.

Fluorescence quantitative PCR results were calculated by the 2^-ΔΔCt method. SPSS 22.0 software was used for one-way analysis of variance or independent-samples t-tests, with Duncan's method for significance testing and the Pearson method for correlation analysis. Results are expressed as "mean ± standard deviation".
Table 3. Information on lncRNA-46546 and AGPAT2 expression from RNA-seq analysis.
IMPACT OF MACROECONOMIC INDICATORS ON DEVELOPMENT PATTERNS: CASE OF TOURISM INDUSTRY IN ASEAN REGION
The purpose of this research is to examine the impact of macroeconomic indicators on tourism revenue in five states of the ASEAN region. To address this objective, secondary data were collected over the last 18 years, from 2001-2017, with annual observations. The macroeconomic indicators include inflation, oil prices, industrial growth, the exchange rate, the stock market index, and gross domestic product over time. The method of the study is based on OLS regression estimation with robust standard errors. The empirical findings indicate that the key determinants of the change in tourism revenue in the selected countries are the exchange rate, the stock market index, inflation, and industrial growth. The impact of GDP on tourism revenue is also significant for Malaysia, Indonesia, and Brunei. The findings can be highly beneficial for present decision-making regarding growth of the tourism industry in the ASEAN region. Limitations of the study include a time span of less than 20 years, the omission of microeconomic determinants of tourism revenue, and the absence of cross-sectional analysis. Future studies can address these limitations for better understanding and practical implications.
Introduction and Background of the Study
Tourism is one of the key drivers of world trade and prosperity (World Tourism Organization, 2019; Hossain et al, 2018; Islam et al, 2018; Kabir et al, 2018; Chkalova et al., 2019; Shevyakova et al., 2019). Importantly, the ASEAN tourism industry has grown over time. The World Travel and Tourism Council (2017) reported that travel and tourism generated overall revenue of US$7.6 trillion, which is 10.2% of overall global GDP. In addition, ASEAN countries are promoting all of their regional destinations equally in order to develop more sustainably. Tourism and travel are therefore considered key contributors to revenue generation, and the tourism industry has a significant impact on the economy's future growth. The true extent of the effects of macroeconomic variables on tourism revenue, however, remains to be established: ordinary least squares regression and quantile regression have not yet been used to estimate the effect of macroeconomic variables on the tourism revenue of ASEAN countries. All member countries of the ASEAN community are given equal attention in identifying the relationship with macroeconomic variables. Additionally, growth of the tourism industry is found in all regions that provide greater safety and better health facilities to tourists (Mahrinasari et al. 2019).
The ASEAN association was established in 1967 in Thailand with five initial member countries: Malaysia, Singapore, Indonesia, the Philippines, and Thailand. Thereafter, additional countries joined the ASEAN community, namely Vietnam, Brunei Darussalam, Myanmar, Laos, and Cambodia. The association adopted the motto "One vision, one identity and one community". These joint communities aim to generate quality opportunities through collaboration built on three basic pillars: the Economic Community, the Political-Security Community, and the Socio-Cultural Community. The ASEAN Tourism Agreement (2015) reported 13 articles regarding sustainability goals in terms of social progress, cultural development, training for enhancing skills, and collaboration for better living standards, providing organizational cooperation across the international region.
Overview of Tourism Industry in ASEAN
In the context of the tourism industry, a strategic plan was developed by ASEAN members for the period 2016 to 2025. The tourism industry in the ASEAN member states is seen as playing a vital role in economic growth and financial progress. For this purpose, all member states have consolidated their efforts in terms of the quality of services offered to tourists, marketing, human resource development, investment in mega projects, participation from the local community, sustainable development, and attracting more tourists to the local market. Under this objective for 2025, the promotion and marketing component covers the enhancement of the ASEAN tourism and statistical framework. To offer diversified tourism products, activities under the heading of complete and ongoing identification of new product development with marketing efforts have been defined. To attract tourism investment, a plan has been developed that coordinates convergence and investment in tourism infrastructure. To raise capability and capacity in the field of tourism, mutual recognition is conducted for professional development, while for the enhancement of facilities, an overall agreement between ASEAN countries has been developed under "Article 2 of the ASEAN Tourism Agreement 2002".
The following are the core highlights of the tourism plan for 2025:
- The member states will increase the contribution of tourism to GDP from 12 to 15%.
- The share of employment in the tourism industry could increase from 3 to 7%.
- Per capita spending by those visiting the member states is expected to increase from US$877 to approximately US$1,500.
- The number of community-based tourism activities could increase from 43 to more than 300.
A review of the tourism industry in ASEAN shows that the total number of international tourists increased from 42 million in 2001 to 105 million in 2014. This significant increase indicates that a huge amount of revenue is flowing towards these countries, providing more growth opportunities and financial benefits. An in-depth analysis of these arrivals indicates that 12 percent came from Europe, more than 30 percent from Asia, and 46 percent from within ASEAN itself, while 4 percent came from the Americas and 4 percent from Oceania; the remaining 4 percent of total tourists are not well specified. Figure 2 provides a comprehensive view of this trend during 2014 (Secretariat, 2016; Keho, 2017; Khan, 2018; Lari et al, 2017). Forecasts of international arrivals to ASEAN are also presented in the findings below, covering three regions: the world, Asia & Pacific, and Southeast Asia. Over 2010-2020, arrivals are forecast to grow by 3.8 percent for the world economy, 5.7 percent for Asia & Pacific, and 5.8 percent for Southeast Asia (Secretariat, 2015) (Figure 1). Based on the above overview, this study examines tourism revenue in ASEAN in the context of macroeconomic indicators. A detailed review of the literature shows that earlier studies are missing the ASEAN context, with the majority of the focus on Europe and other developed countries with significant tourism markets; the focus on ASEAN, specifically in terms of macroeconomic determinants and the tourism industry, is missing. In this context, this study reasonably covers the literature gap, both theoretically and practically. The rest of the study is organized as follows. The next section provides a critical review of the literature on the tourism industry and macroeconomic indicators in different regions. Section three explains the variables and their operational significance with literature evidence. Section four describes the methods applied in the study. Section five presents the results and a discussion of the associations between variables. The last section gives conclusions, limitations, and some future directions.
Related Literature
In developed countries, the factors affecting international tourist arrivals have been studied extensively, and the tourist arrivals reports for 2010-15 show a continuously increasing five-year trend in arrivals. Nevertheless, such studies have not yet been carried out to explore the relationship between macroeconomic indicators and tourism revenue among ASEAN countries. More importantly, previous studies have abandoned seasonal analysis based on each year. This study is therefore more extensive in nature, exploring the influential macroeconomic factors.
Various studies have argued that tourist arrivals are better explained by income because it reflects the detailed earnings of the tourism sector (Croes & Vanegas Sr, 2005; Jang, Bai, Hong, & O'Leary, 2004; Wang, 2009; Wattanakuljarus & Coxhead, 2008). It is further noted that higher-income tourists are bigger spenders than lower-income ones. Jang et al. (2004) argued that young, educated Japanese travelers are more interested in visiting the USA than other developed countries. Algieri (2006) explained key indicators of Russian tourism; for instance, gross domestic product (GDP) was found to have a positive, significant long-run cointegrating association. He further explained that the strong demand of foreign travelers, who are more concerned with luxurious and good services, tends to increase the income level of tourism. By contrast, Dritsakis (2004) used a vector error correction approach to determine the relationship of macroeconomic factors with the Greek tourism industry.
That study concluded that higher incomes in Great Britain and Germany are driving arrivals, owing to attractive macroeconomic factors. In another study, Lim, Min, and McAleer (2008) used an ARIX model to express the influence of income changes in Japan on arrivals to New Zealand, arguing that the income level of the origin country is associated with tourist arrival factors. The present study uses the industrial production index (IPI) owing to the lack of monthly GDP data; few previous studies have noted the importance of the industrial production index for tourism demand. For Jeju Island, demand from Singapore, Thailand, and the Philippines was statistically shown to be correlated using a VEC model with industrial production (Seo, Park, & Yu, 2009). Another important macroeconomic factor is the exchange rate, and there is a rich literature on how the exchange rate influences tourism demand (Box & Cox, 1964; Croes & Vanegas Sr, 2005; Algieri, 2006; Hiemstra & Wong, 2002; Saayman & Saayman, 2008; Wang, 2009; Hussain et al., 2018). In previous studies, inflation has not been explored much as a true determinant of tourism revenue. Chen (2007a, 2007b) argued that hotel stock returns in China are associated with macro and non-macro factors. The stock index and crude oil are proposed here as influences on tourist demand in ASEAN countries because, besides the higher-income group, lower- and middle-income groups also visit tourist destinations; higher crude oil prices therefore influence tourist demand. Saayman and Saayman (2008) used travel cost as a proxy for the crude oil price to determine tourist demand, but this study extends that approach by using the variation in the crude oil price to more appropriately capture the demand generated by local and foreign travelers.
In addition, some other studies have explored the relationship between macroeconomic factors and the tourism industry. For instance, Meo, Chowdhury, Shaikh, Ali, and Masood Sheikh (2018) examined the impact of changes in oil prices and the exchange rate on demand in the tourism industry of Pakistan, applying an ARDL approach with a cointegration method. Their findings reveal a significant effect of CO2 emissions on tourism demand in Pakistan, while other factors such as institutional quality show a long-run association with oil prices, inflation, and exchange rate changes for tourism demand. Others, such as Buckley, Gretzel, Scott, Weaver, and Becken (2015), explored the relationship between local infrastructure, in the form of transportation, and the value of the tourism industry, explaining that increases in oil prices have a significant adverse influence on tourism demand in the local market. Katircioglu, Katircioglu, and Altun (2018) explained that changes in the oil price are associated with economic growth and with both demand and supply forces in the economy: the supply side is directly affected through higher production costs, while the demand side is affected as well. It is also believed that the tourism industry can create income inequality and can affect the Kuznets curve hypothesis (Alam & Paramati, 2016). Becken (2011) explained the relationship between oil and the global tourism industry, arguing that the oil price is linked to the world economy and, in a similar way, to tourism. To analyze this, the author explored the economy of New Zealand under a scenario of dwindling global oil, in a study based on four different research phases; the findings indicate a significant association between the tourism industry and the oil price in the world economy. Blomberg, Hess, and Jackson (2009) also considered the oil price and its association with tourism. Gunter (2018; see also Lechner et al, 2018; Habib & Mucha Sr, 2018; Madhusudhanan, 2018) examined conditional forecasting of tourism exports and tourism export prices in the EU-15 member states using a global vector autoregression (GVAR) method over the period 2013 to 2017. The GVAR findings show that global tourism income is closely associated with prices, and the practical implication of the study is that competition for global market share is rising.
A detailed investigation and critical review of the present literature shows that regional economic indicators are directly or indirectly affecting the tourism industry in different economies. However, studies in the context of South Asian and ASEAN members are very limited; notable work can be viewed in the studies of Hall (1992), Hall and Page (2012), Hitchcock, King, and Parnwell (2018), Richter (1989), Wong, Mistilis, and Dwyer (2011), and Moussa (2018). The focus of these studies is on reviewing the tourism sector through intergovernmental collaboration, supranational alliances, economic impacts, economic development, and the preconditions and framework for policy development; in this regard, intergovernmental coordination is found to be very significant and persistent. To the best of the authors' knowledge, this study is the very first attempt to explore tourism revenue in the targeted economies through regional economic indicators. (From the study's variables table: Industrial Growth (IGR) is the annual growth rate of a country's overall industry (Barnett, 2016; Oakey & Rothwell, 2018).)
Research Methodology
This study uses secondary data from five economies in the ASEAN region. For this purpose, data on macroeconomic indicators and tourism revenue in the selected countries were collected over the period 2001-2017, consistent with the 18-year span described above. After data collection, a regression analysis approach was applied using separate econometric equations covering the causal relationship between the independent and dependent variables of the study. For a better understanding of the applied methodology, the regression equation below summarizes the predicted relationship between the variables (reconstructed from the variables described above):

TRev_t = β0 + β1 INF_t + β2 OIL_t + β3 EXR_t + β4 SMI_t + β5 IGR_t + β6 GDP_t + ε_t

where TRev denotes tourism revenue, INF inflation, OIL the oil price, EXR the exchange rate, SMI the stock market index, IGR industrial growth, and GDP gross domestic product. The relationships under these equations were tested using STATA (version 14), with the standard errors of the coefficients adjusted through the robust command.
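The authors report running the estimation in STATA; purely as an illustration, an equivalent OLS estimation with heteroskedasticity-robust standard errors can be sketched in Python with statsmodels. The column names and the CSV file below are hypothetical placeholders, not artifacts of the study.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical annual series for one country, 2001-2017 (column names assumed)
df = pd.read_csv("malaysia_macro.csv")  # placeholder file name

X = sm.add_constant(df[["inflation", "oil_price", "exchange_rate",
                        "stock_index", "industrial_growth", "gdp"]])
y = df["tourism_revenue"]

# OLS with robust (HC1) standard errors, mirroring STATA's `regress ..., robust`
model = sm.OLS(y, X).fit(cov_type="HC1")
print(model.summary())
```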
Results and Discussion
Table 2 provides statistical findings for the effect of macroeconomic indicators on tourism revenue in Malaysia. For the regional economic factors, inflation, the price of oil, the exchange rate, the stock market index, the industrial growth rate, and gross domestic product are included in the model. Inflation has a significant negative influence on tourism revenue in Malaysia, meaning that increases in the prices of goods and services at the local level adversely affect revenue from the tourism industry; this effect is significant at the 1 percent level of significance. The effect of oil prices in the local market on the tourism revenue of Malaysia is negative but insignificant. The exchange rate is found to be a significant determinant of tourism revenue, with a coefficient of -.058 and a standard error of .164. The stock market index and industrial growth both show an insignificant association with the level of tourism revenue, but the effect of GDP is positively significant, with a coefficient of .227, a standard error of .042, and a t-value of 10.31. The F-test shows that the model is significant, while the R-square indicates an explanatory power of .463 for Malaysia. Table 3 reports the regression findings for tourism revenue and its macroeconomic determinants in Indonesia. The effect of the oil price on tourism revenue is found to be significant and negative at a 1 percent chance of error, meaning that higher oil prices in Indonesia adversely influence revenue from tourism. The effect of the exchange rate on tourism revenue is found to be positively significant at a 1 percent chance of error.
The stock market index shows no influence on tourism revenue. For GDP, a coefficient of .1024 indicates that an increasing gross domestic product positively influences tourism revenue in Indonesia. The determination of tourism revenue by the macroeconomic variables is significant overall, as the F-test has a score of 94.648 with a p-value of .000, while the explained variation in the dependent variable of .812 shows that the targeted macroeconomic indicators account for a good share of the variation in tourism revenue. Table 4 reports the regression findings for tourism revenue and its macroeconomic determinants in Thailand. The effect of the oil price on tourism revenue is found to be significant and positive at a 5 percent chance of error, meaning that higher oil prices in Thailand have a direct impact on revenue from tourism. The effect of the exchange rate on tourism revenue is negative but insignificant at the 1, 5, and 10 percent levels, while the stock market index shows a positive and significant influence on tourism revenue with a coefficient of .585. For GDP, a coefficient of .029 indicates that an increasing gross domestic product has a positive but insignificant influence on tourism revenue in Thailand. In addition, the F-test has a score of 9.233 with a p-value of .000, significant at 1 percent, while the explained variation in the dependent variable is .787 across all macroeconomic regional indicators. Table 5 shows the effect of the macroeconomic variables on tourism revenue in Singapore. Inflation, the oil price, and the exchange rate have no direct influence on tourism receipts, while the stock market index and industrial growth show a positive and significant impact, with coefficients of 1.458 and .081, meaning that higher tourism receipts are associated with the stock market index and overall industrial growth in the economy. The effect of GDP on tourism revenue in Singapore is found to be insignificant. For tourism revenue in Brunei, Table 6 reports the effect of the macroeconomic indicators. Inflation and the oil price are found to be insignificant determinants of tourism revenue, while the exchange rate shows a negative and significant effect of -.012, meaning that greater volatility in the exchange rate has an opposite effect on tourism income. The effect of the stock market index is significantly positive, with a coefficient of .154 and a standard error of .018. Additionally, both industrial growth and GDP are found to significantly affect tourism receipts.
Conclusions and Future Directions
This study examined the effect of macroeconomic factors on tourism revenue in five ASEAN countries, applying an OLS regression analysis technique to all five states. In Malaysia, regional economic indicators such as inflation, exchange rate changes, and gross domestic product had a significant impact on income from the tourism industry over the 18 years of the study. In Indonesia, the key macroeconomic determinants of tourism revenue were found to be oil prices, the exchange rate, industrial growth, and gross domestic product. The regression findings for Thailand showed that the key factors affecting tourism income are the oil price, the stock market index, and industrial growth, with good explanatory power of the model. The empirical findings for Singapore suggest a significant need to focus on factors such as the stock market index and industrial growth, which directly influence the value of tourism revenue in the country. In Brunei, the key determinants were found to be the exchange rate, industrial growth, and gross domestic product, along with the stock market index. Based on these findings, this study is highly recommended to researchers, academics, and policy makers at the regional level who are analyzing the relationship between the tourism industry and its key macro-level determinants. The findings can be highly beneficial for present decision-making regarding growth of the tourism industry in the ASEAN region, and students in the fields of economics and business management can review the causal relationships between the key variables of this study. However, this study has several limitations. First, it focuses only on macroeconomic factors, ignoring micro indicators. Second, the sample period is less than 20 years, which is not ideal for long-run analysis. Third, cross-sectional and comparative analysis of the selected states is missing. Future work should address these limitations for better findings and more appropriate managerial implications.
MITO39: Efficacy and Tolerability of Pegylated Liposomal Doxorubicin (PLD)–Trabectedin in the Treatment of Relapsed Ovarian Cancer after Maintenance Therapy with PARP Inhibitors—A Multicenter Italian Trial in Ovarian Cancer Observational Case-Control Study
Simple Summary: This multicenter, retrospective analysis had the objective of comparing the efficacy of PLD-Trabectedin in patients who had already been treated with a PARP inhibitor (cases) vs. PARPi-naïve patients (controls). Data from 166 patients were collected, comprising 109 cases and 57 controls. In total, 135 patients were included in our analyses, comprising 46 controls and 89 cases. We found a median PFS of 11 months (95% CI 10-12) in the control group vs. 8 months (95% CI 6-9) in the case group (p value 0.0017). The clinical benefit rate was evaluated, with an HR for progression of 2.55 (1.28-5.06) for the case group (p value 0.008), persisting when adjusted for BRCA mutation. The study showed a statistically significant difference in terms of PFS, suggesting that previous exposure to PARPi might inhibit the efficacy of PLD-Trabectedin. Regarding tolerability, no remarkable disparity was noted. Abstract: Objective: While PLD-Trabectedin is an approved treatment for relapsed platinum-sensitive ovarian cancer, its efficacy and tolerability have so far not been tested extensively in patients who progress after poly ADP-ribose polymerase inhibitor (PARPi) treatment. Methodology: This multicenter, retrospective analysis had the objective of comparing patients receiving PLD-Trabectedin after being treated with a PARP inhibitor (cases) with PARPi-naïve patients (controls). Descriptive and survival analyses were performed for each group. Results: Data from 166 patients were collected, comprising 109 cases and 57 controls. In total, 135 patients were included in our analyses, comprising 46 controls and 89 cases. The median PFS was 11 months (95% CI 10-12) in the control group vs. 8 months (95% CI 6-9) in the case group (p value 0.0017). The clinical benefit rate was evaluated, with an HR for progression of 2.55 (1.28-5.06) for the case group (p value 0.008), persisting when adjusted for BRCA and line of treatment. We compared hematological toxicity, gastro-intestinal toxicity, hand-foot syndrome (HFS), fatigue, and liver toxicity, and no statistically significant disparity was noted, except for HFS with a p value of 0.006. The distribution of G3 and G4 toxicities was also equally represented. Conclusion: The MITO39 study showed a statistically significant difference in terms of PFS, suggesting that previous exposure to PARPi might inhibit the efficacy of PLD-Trabectedin. Regarding tolerability, no remarkable disparity was noted; PLD-Trabectedin was confirmed to be a well-tolerated scheme in both groups. To our knowledge, these are the first data regarding this topic, which we deem to be of great relevance in the current landscape.
Introduction
Ovarian cancer (OC) is a formidable adversary in the realm of women's health, ranking as the seventh most common neoplasia among women. What makes it even more concerning is that it is the fourth most lethal cancer, owing to its tendency to elude early detection. In its nascent stages, ovarian cancer often remains asymptomatic, leading to a bleak 5-year overall survival rate of just 30% [1].
This characteristic alone underscores the dire need for more effective management strategies, particularly those that enhance early detection and treatment.
The prevailing standard of care for newly diagnosed high-grade serous ovarian cancer (HGSOC) typically involves primary or interval cytoreduction, followed by platinum-based chemotherapy. In recent years, the landscape of ovarian cancer treatment has undergone a significant transformation, catalyzed by the introduction of PARP (poly ADP-ribose polymerase) inhibitors. Initially, these inhibitors were reserved for relapsed cases, but following the groundbreaking results of trials such as SOLO1 [2], PRIMA [3], and PAOLA1 [4], they have been integrated into first-line therapy regimens. PARP inhibitors have ushered in a new era of ovarian cancer treatment.
However, this advancement has brought about a fresh set of challenges. While PARP inhibitors have extended progression-free survival (PFS), the reality is that a significant proportion of patients will eventually experience disease relapse. This has given rise to a growing population of individuals for whom treatment strategies remain uncertain. Decisions regarding subsequent lines of therapy, including whether to continue with PARP inhibitors or explore alternative approaches, pose complex dilemmas for both patients and oncologists.
One intriguing avenue in the quest to address these challenges involves investigating the efficacy and tolerability of non-platinum-based chemotherapy regimens in the population of patients who have previously been exposed to PARP inhibitors. Currently, there is no definitive guidance on whether patients treated with PARP inhibitors should follow different treatment paths than those who have not received such therapy. The debate around the optimal sequence of treatments is a topic of active research and discussion.
More specifically, the efficacy and tolerability of non-platinum-based chemotherapy regimens have not been described in this population.
The combination of pegylated liposomal Doxorubicin (PLD) and Trabectedin is currently approved for individuals with relapsed OC who have experienced a platinum-free interval of at least 6 months. The approval is rooted in the compelling results of phase III trials such as OVA-301 [5] and the Monk et al. [6] study. These findings were subsequently reinforced by INOVATYON [7] and the real-world phase IV NIMES-ROC study [8], which emphasized that the benefits of this therapy persisted regardless of prior exposure to bevacizumab, another commonly used drug in the management of ovarian cancer.
To further elucidate the potential benefits of PLD-Trabectedin therapy, the TRAMANT-01 study, identified by its EUDRACT number 2017-000987-14, is currently underway. This trial seeks to determine whether maintenance therapy with PLD-Trabectedin, in comparison to Trabectedin alone, could be advantageous for patients who achieve at least a partial response after completing six cycles of combination therapy.
The trial results will be important: maintenance therapy with Trabectedin on its own, if proven statistically significant, would provide a possible maintenance option for patients who have already been treated with bevacizumab and PARPi and might not have an alternative. These investigations into alternative therapies are a testament to the dynamic and ever-evolving nature of ovarian cancer treatment.
In response to the urgency of generating more data and insights in this domain, the Multicenter Italian Trials in Ovarian Cancer (MITO) group initiated the MITO39 study. This retrospective analysis was designed to bridge the knowledge gap regarding the use of pegylated liposomal Doxorubicin (PLD) and Trabectedin, both platinum-free treatment options, in patients previously exposed to PARP inhibitors as opposed to those who had not been exposed, the population included in the registrative trials. By comparing the efficacy and tolerability of these treatments in the two patient populations, the MITO39 study aims to generate a hypothesis in this still unclear scenario that can help tailor treatment strategies for individual patients and ultimately improve their chances of survival and quality of life.
Materials and Methods
The MITO39 study was conducted across multiple MITO centers, ensuring that patients were treated by experienced gynecological oncologists adhering to the current standards of care. The cohorts were drawn from cases treated at participating centers between 2009 and 2022, following the established inclusion and exclusion criteria.
This study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Clinical Research Council of IRCCS Candiolo, Italy (date of approval 21 February 2021), as well as the Ethical Committee of Ospedale Mauriziano (date of approval 15 September 2022, code of approval 382/2022). Data were retrospectively collected from 166 patients: 109 cases (patients treated with PLD-Trabectedin after PARP inhibitors) and 57 controls (patients treated with PLD-Trabectedin without ever being treated with PARP inhibitors).
The number of patients accrued was based on the maximal effort of the participating centers, which gathered all patients treated at their facilities between 2009 and 2022 who met our previously established inclusion and exclusion criteria. The criteria called for patients with a confirmed diagnosis of advanced epithelial ovarian cancer and known BRCA status who had been treated, according to standard practice, with PLD-Trabectedin, with or without previous exposure to PARP inhibitors.
The primary focus of this study was to compare the efficacy and tolerability of PLD-Trabectedin between the two patient groups. Additionally, we collected data on various types of toxicity, including hematological, gastro-intestinal, hand-foot syndrome, fatigue, and liver toxicity. Follow-up data were tracked until November 2022 to provide a comprehensive overview of patient outcomes. Demographic and clinical data were retrospectively retrieved from dedicated databases at each institution. The database elements included: patient characteristics at initial diagnosis (demographics, tumor stage according to the International Federation of Gynecology and Obstetrics [FIGO] criteria, histology, year of diagnosis), BRCA status, surgical, chemotherapy and PARPi regimen details, the number and type of chemotherapy regimens performed after PARPi, PLD-Trabectedin toxicity details, and PFS on PLD-Trabectedin. Statistically significant differences between the groups were present for BRCA status as well as for stage at diagnosis; we deemed the difference in BRCA status relevant to our study and therefore carried out a multivariate analysis, but we did not think that the difference in initial staging would influence the response to chemotherapy in a multi-treated metastatic population.
Patients were divided into the two study groups, and descriptive and survival analyses were performed for each. Owing to missing information, 135 patients were included in our analyses, comprising 46 controls and 89 cases. The populations were compared for categorical variables (age, stage at diagnosis, type of surgery at diagnosis, histology), and no statistical difference was found (Table 1). Clinical response to therapy was evaluated using the modified Response Evaluation Criteria in Solid Tumors (RECIST), version 1.1.
Progression-free survival (PFS) was defined as the time between the date that therapy started and the date of disease progression, death, or last contact. At the time of the data cut-off, all but one patient had experienced progression of disease (PD).
The results are presented as median and interquartile range [IQR] for continuous variables and as number and percentage for categorical ones. A comparison of variables between the patients treated with PLD-Trabectedin and the control group was performed using the Mann-Whitney or Fisher's exact test, as appropriate.
PFS was estimated using the Kaplan-Meier method and compared with the log-rank test. Based on clinical assumptions, a multivariate Cox model was fitted to describe the treatment risk factor associated with PFS adjusted for BRCA and age; the model was stratified by line of treatment in order to account for different baseline risks. The proportional hazards assumption was checked using the Schoenfeld residuals. A p-value of less than 0.05 was considered statistically significant. All statistical analyses were performed using R version 4.2.1. Based on clinical assumptions, a multivariate analysis using the Cox model was carried out on PARPi and BRCA data to identify the risk of progression.
We also decided to stratify our population based on the line at which PLD-Trabectedin was administered, distinguishing between 1st, 2nd, 3rd, and 4th or later line, as we deemed the timing of PLD-Trabectedin a potential confounding element.
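The analysis was run in R, as stated above. Purely as an illustration of the same workflow (Kaplan-Meier estimation, log-rank comparison, and a Cox model stratified by treatment line with a Schoenfeld residual check), a Python sketch using the lifelines package is given below; the data frame columns and file name are hypothetical placeholders.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical columns: pfs_months, progressed (1/0), prior_parpi (1/0),
# brca_mutated (1/0), age, line (line of therapy at which PLD-Trabectedin began)
df = pd.read_csv("mito39_like_data.csv")  # placeholder file name
cases = df[df.prior_parpi == 1]
controls = df[df.prior_parpi == 0]

# Kaplan-Meier estimate of median PFS in one group
kmf = KaplanMeierFitter()
kmf.fit(cases.pfs_months, cases.progressed, label="prior PARPi")
print("median PFS (cases):", kmf.median_survival_time_)

# Log-rank comparison of the two PFS curves
res = logrank_test(cases.pfs_months, controls.pfs_months,
                   event_observed_A=cases.progressed,
                   event_observed_B=controls.progressed)
print(f"log-rank p = {res.p_value:.4f}")

# Cox model adjusted for BRCA and age, stratified by line of treatment
model_df = df[["pfs_months", "progressed", "prior_parpi",
               "brca_mutated", "age", "line"]]
cph = CoxPHFitter()
cph.fit(model_df, duration_col="pfs_months", event_col="progressed",
        strata=["line"])
cph.print_summary()
cph.check_assumptions(model_df, p_value_threshold=0.05)  # Schoenfeld residuals
```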
Details of the PARPi regimens administered are outlined in Table 2.
Results
The median PFS was 11 months (95% CI 10-12) in the control group vs. 8 months (95% CI 6-9) in the case group (p value 0.0017). The clinical benefit rate (CBR) was evaluated as well, with an HR for progression of 2.55 (1.28-5.06, p value 0.008) for the case group, calculated with stratification by line of treatment. The result also persisted when adjusted for the statistically different prevalence of BRCA mutations in the two groups.
We also looked at the distribution of G3 and G4 toxicities, which were equally represented.
Discussion
PARP inhibitors for platinum-sensitive disease have revolutionized the treatment of OC: in clinical practice, most patients who achieve at least a partial response to platinum therapy receive them as maintenance therapy.
Although the results are astounding, even in the best-case scenario of a BRCA-mutated patient receiving Olaparib as first-line therapy, the updated 5-year analyses of the SOLO1 trial reported a PFS rate of 48% (95% CI 41-55), meaning that 52% of patients had relapsed and needed further treatment [2].
Thus, exposure to PARP inhibitors has selected a population with fundamental differences from those represented in the registrative trials of the chemotherapy lines currently approved for OC. Although this has now changed and all current trials account for PARP-inhibitor-treated patients, the question remains whether the chemotherapy regimens we have used previously have the same efficacy and tolerability in tumors whose microenvironment has been changed by the interaction with PARP inhibitors, which has been shown to have a great impact [9-11].
Currently, there are a variety of possible approaches for a patient who progresses on or after PARP inhibitors [12]. Although a solid therapeutic algorithm is still missing, the use of loco-regional treatments such as surgical intervention or radiotherapy (RT) and the continuation of PARP inhibitor therapy is widespread in oligo-metastatic progression, with data so far coming from retrospective evidence alone [13,14]; therefore, validation of this approach through a randomized prospective trial is needed.
In the case of a progression not amenable to surgical or radiation treatment, a further line of chemotherapy is recommended based on the platinum-free interval (PFI). In the post-PARP-inhibitor era, several concerns have been raised regarding the efficacy of platinum-based therapies in these patients [15], as newly emerging data point towards platinum resistance due to a cross-resistance mechanism [16,17]. While many efforts are ongoing at the translational level to better understand this [18-23], clinical evidence is starting to accumulate [24,25]. In a post hoc analysis of the SOLO2 trial, the time to second progression (TTSP) was significantly longer in the placebo cohort than in the Olaparib cohort, with a TTSP of 14.3 vs. 7.0 months (HR 2.89, 95% CI 1.73-4.82) for platinum salts, while for non-platinum-based therapies it was 8.3 vs. 6.0 months (HR 1.58, 95% CI 0.86-2.90) [26].
Similarly, an Italian retrospective real-life study of Olaparib [27] highlighted an ORR of only 22.2% to platinum in patients with a PFI of more than 12 months who had progressed after receiving Olaparib.
MITO39 is the second and largest study to focus on the efficacy of non-platinum-based chemotherapy in platinum-eligible patients who progress after PARP inhibitor therapy.
With this retrospective study, we were able to show a statistically significant difference in PFS between patients previously treated with PARP inhibitors and PARP-inhibitor-naïve patients treated with PLD-Trabectedin, suggesting that previous exposure to PARP inhibitors might hinder the efficacy of the regimen. Interestingly, however, the performance of PLD-Trabectedin in PARP-inhibitor-exposed patients was similar to that observed in other studies where patients were PARP-naïve.
Regarding tolerability, the comparison did not yield a remarkable disparity; PLD-Trabectedin was confirmed to be a well-tolerated and manageable scheme in both groups.
The present study has both limitations and strengths. The limitations include those associated with its retrospective nature and the intrinsic risk of confounding due to the absence of randomization, the relatively small number of patients included, and the significant difference in BRCA-mutated patients between the two groups. Its strengths are the homogeneity of the cases enrolled, in terms of clinical features and treatment administered, and the sound statistical analyses conducted.
To our knowledge, these are the first data regarding the activity of a non-platinum agent in patients treated with PARP inhibitors compared to PARP-inhibitor-naïve controls, which we deem to be of great relevance in the current era; PARP inhibitors have been a standard of care for years, prompting the need to further explore whether exposure to this targeted therapy has any impact on later lines. This investigation aims to determine whether such exposure could change the perspective when selecting the appropriate treatment for a patient who relapses after maintenance therapy.
While PLD-Trabectedin still stands as an option for platinum-sensitive relapsed disease, the evidence gathered by this study suggests that the activity of a non-platinum-based regimen might be lower than expected in patients pre-treated with PARP inhibitors, which will need to be factored in when deciding the next line of treatment for such a patient. Many different characteristics need to be considered when deciding on a new strategy, and these include previous PARPi exposure.
The observational nature of our study limits the significance of our findings, which will need to be validated further; it would also be beneficial to understand at the molecular level whether there are drug-specific resistance mechanisms that might explain our data.
Interestingly, previous data have shown that PLD-Trabectedin after PARP inhibitors achieves oncological outcomes similar to those of a platinum-based regimen [28]. Therefore, in a sequencing strategy, PLD-Trabectedin remains a reasonable choice following progression on PARP inhibitors, sparing platinum compounds for a subsequent recurrence.
THE INTERNATIONAL JOURNAL OF HUMANITIES & SOCIAL STUDIES Gendered Differences in the Teaching of Sex Education among Secondary School Teachers in Auta Balefi, Nigeria
Abstract: Sex education in schools has continued to gain support from various stakeholders, with students and parents alike identifying the school as a viable source of information on adolescent reproductive health. Evidence from the literature, on the other hand, ties the success of school-based programmes to the relationship (or lack thereof) between the socio-demographic characteristics of teachers and their teaching of sex education. The study was aimed, among other things, at investigating the relationship between socio-demographic factors and the teaching of sex education. Eleven secondary schools with a total of 219 teachers were included in the study. Due to the relatively small population size, the study made use of the entire population. Two hypotheses on gender differences and educational qualification in teaching sex education were formulated and tested at the 0.05 (95%) level of significance. The results indicated that there is no significant relationship between the sex of teachers and their attitude towards sex education, and no significant relationship between the educational status of teachers and their attitude towards sex education. This is because the majority of the teachers have some form of training on sex education, which neutralizes personal differences such as sex and educational status. Further findings revealed that 90.5% of the respondents have a positive attitude towards sex education and were willing to teach sex education in their various schools. The authors recommend, among other things, that continuing encouragement might be necessary to support teachers' interest in teaching sex education.

With regard to this positive attitude, female teachers had a higher percentage of teachers holding it (Ngoloba, 2008). Finally, in a study of the same nature in Botswana, the authors discovered that the majority of the teachers were willing to teach sex education in their schools, as they were of the view that it would be a step towards reducing the numerous challenges adolescents face with regard to their reproductive and sexual health. Despite this high level of enthusiasm, barriers placed by culture and a lack of training affected the teaching of the subject (Kasonde, 2013). It is therefore against this backdrop that this study, conducted in Auta Balefi, seeks to investigate such relationships between the socio-demographic characteristics of teachers and their teaching of sex education. In doing so, it is necessary to state that the majority of the schools located in the study area were found to be privately owned rather than state or public owned. This had implications for the results, as such schools tend to have access to better resources. Also, the study was narrowed down specifically to the relationship between two of the various socio-demographic characteristics of the respondents: gender and educational qualification. This was due to the aforementioned privately owned nature of the schools, which skewed the results on other factors, such as religion, to one side. The major objective of the study was to explore the relationship between the gender and educational qualification of teachers and their teaching of sex education.
Methodology
The research was carried out in Auta Balefi, a small sub-urban town located in Karu Local Government Area of Nasarawa State, about 26 kilometres from Abuja, the Federal Capital City of Nigeria. The study population consisted of all secondary school teachers in Auta Balefi, which has eleven (10 private and 1 public) secondary schools with a total of two hundred and nineteen (219) teachers. Due to the relatively small population size and the accessibility of the study population, it was not necessary to draw samples of teachers from each school. Therefore, all the teachers in the 11 schools were included in the study.
A structured questionnaire with both open- and closed-ended questions was designed to elicit information from the respondents. The questionnaire was self-administered to the teachers and comprised questions on the socio-demographic characteristics of respondents, the content of sex education, and the attitude of respondents towards sex education. Questions were designed in such a way that teachers with positive responses towards sex education were seen as having a positive attitude towards the teaching of sex education, and those with negative responses were considered as having a negative attitude towards the teaching of sex education in schools.
The data gathered from the field of study were analysed using descriptive statistics, such as simple percentage tables, as well as the Chi-Square test to test the hypotheses. The data were analysed using the Statistical Package for the Social Sciences (SPSS) and presented in tables as percentage and frequency distributions. Out of the 219 copies of the questionnaire administered, 210 were returned and adjudged usable for analysis. This constitutes a 96% response rate.
Results
This section covers a description of the socio-demographic characteristics of respondents, the tests of the hypotheses, and a discussion of findings. From Table 1, it can be observed that 54.8% of the respondents are male while 45.2% are female. Also, 41.4% of the respondents fall within the age range of 30-34 years, 24.3% fall within the age range of 25-29 years (as do those aged 35 years and above), and 10.0% fall within the age range of 20-24 years. With regard to educational qualification, the modal frequency is a tie: 36.2% of respondents hold a National Certificate of Education (NCE) and another 36.2% a Bachelor's degree, while 9.5% are Diploma holders (OND/HND), 6.7% have a Post Graduate Diploma in Education, and 5.7% each have a Senior School Certificate or a Master's degree. The majority of the respondents are either married (55.7%) or single (43.8%). Finally, 97.6% are Christians while 2.4% are Muslims. This reveals that the majority of the teachers across the 11 secondary schools in Auta Balefi are Christians, which reflects the fact that most of the schools are privately owned.

From Table 2, the calculated chi-square value (0.001) is less than the critical value (3.841). Therefore, the null hypothesis is accepted and the alternate hypothesis is rejected. This implies that there is no significant relationship between the gender of teachers and their attitude towards sex education: male and female teachers alike seem likely to have a positive attitude towards sex education. This finding could be a result of the fact that the majority (75.7%) of the teachers have had some form of training on sex education; the training has fostered a positive attitude towards sex education among teachers irrespective of their sex and has therefore neutralized any individual difference that could influence teachers' attitudes towards sex education.

From Table 5, the calculated chi-square value (3.223) is less than the critical value (11.070). Therefore, the null hypothesis is accepted and the alternate hypothesis is rejected. This implies that there is no significant relationship between the educational status of teachers and their attitude towards sex education. The attitude of teachers towards sex education is not influenced by their educational qualification or status; teachers within the different categories of educational qualification seem to hold the same attitude irrespective of their status. This finding is likewise attributable to the fact that the majority (75.7%) of teachers have had training on sex education, which has neutralized any individual difference that could influence teachers' attitudes towards sex education.
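For readers who wish to reproduce this kind of decision rule outside SPSS, the sketch below (in Python, with hypothetical cell counts, since the underlying contingency tables are not reproduced here) compares a calculated chi-square statistic against the critical values used above (3.841 at df = 1 and 11.070 at df = 5, both at the 0.05 level):

    # Chi-square test of independence; the 2x2 table below is hypothetical,
    # standing in for the gender-by-attitude contingency table (not shown here).
    from scipy.stats import chi2, chi2_contingency

    observed = [[58, 57],   # male:   positive / negative attitude (illustrative)
                [53, 42]]   # female: positive / negative attitude (illustrative)

    stat, p_value, dof, expected = chi2_contingency(observed)
    critical = chi2.ppf(0.95, df=dof)  # 3.841 when dof == 1, 11.070 when dof == 5

    # Decision rule used in the paper: accept H0 when the statistic
    # falls below the critical value at the 0.05 significance level.
    if stat < critical:
        print(f"chi2 = {stat:.3f} < {critical:.3f}: no significant relationship")
    else:
        print(f"chi2 = {stat:.3f} >= {critical:.3f}: significant relationship")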
Discussion
The results indicated that there is no significant relationship between the sex of teachers and their attitude towards sex education, and also no significant relationship between the educational status of teachers and their attitude towards sex education. This is due to the fact that the majority of the teachers have some form of training on sex education, which neutralizes personal differences such as sex and educational status. This study also found that the majority of the teachers in Auta Balefi are in support of the introduction of sex education in schools, and as such have a positive attitude towards it, and that teachers are of the view that sex education should cover a wide range of topics. This finding confirms the view averred by Mkumbo (2012), whose study revealed that an overwhelming majority of teachers in both rural and urban districts supported the teaching of sex education in schools and the inclusion of a wide range of sex education topics in the curriculum. Ngoloba (2008) also revealed that the attitudes of teachers towards the inclusion of sex education in the curriculum were positive and that their attitudes impacted positively on their teaching of the subject. However, the findings by Ngoloba (2008) also revealed that female teachers hold more positive attitudes towards sex education than male teachers. This is in contrast with the findings of this study, which showed that male and female teachers alike have a similar attitude towards sex education.
Furthermore, 48.6% of the respondents are of the view that the responsibility for teaching sex education lies with the parents and, as such, see the home (51.0%) rather than the school (30.5%) as the best place to teach sex education. This finding confirms that of Onwuezobe and Ekanem (2009), who discovered that a great number of teachers view the teaching of sex education as the responsibility of the parents (46.1%) and, as such, see the home (43.7%) as an even better place than the school (38.6%) to impart such knowledge. Also from their findings, most of the teachers (55.6%) considered ages 10-14 years, or the Junior Secondary School level, as the appropriate period for introducing sex education. This is confirmed by the present study, as 55.2% of the respondents indicated Junior Secondary School as the appropriate stage to introduce sex education to students. With regard to the benefits and perceived risks of sex education, about 52% of the respondents perceived sex education as mostly beneficial in the area of promoting abstinence. This confirms the finding by Kasonde (2013) that 92% of the respondents in his study agreed that sex education delays sexual debut, but differs from the finding by Onwuezobe and Ekanem (2009) that about 68% of the teachers perceived sex education as mostly beneficial in preventing unplanned pregnancy. Their findings also revealed that 56% of their respondents were of the opinion that it would promote early exposure to sexual relationships, which is confirmed by this study, as 55.7% of the respondents were of the same opinion. Finally, whereas Kasonde (2013) showed that lack of training and culture can serve as barriers to sex education, in this study the majority of the teachers (75.7%) indicated that they have had some form of training on sex education. This is why the sex and educational qualifications of respondents were not significantly related to their attitudes towards sex education.
Conclusion
The study was aimed at finding out the attitude of teachers towards sex education in Auta Balefi and at determining whether certain factors, such as sex and educational qualification, had any significant effect on teachers' attitudes towards sex education.
Based on the findings of this study, it can be concluded that the majority of teachers in Auta Balefi have a positive attitude towards sex education in schools and are willing to teach it in their various schools. There is no significant relationship between the sex of teachers and their attitude towards sex education, and there is also no significant relationship between the educational qualification of teachers and their attitude towards sex education. In other words, gender and educational qualification do not have any significant effect on teachers' attitudes towards sex education. Training on sex education was the factor that led to this finding, as it neutralized individual differences such as sex and educational qualification.
Raptor Binds the SAIN (Shc and IRS-1 NPXY Binding) Domain of Insulin Receptor Substrate-1 (IRS-1) and Regulates the Phosphorylation of IRS-1 at Ser-636/639 by mTOR*
In normal physiological states mTOR phosphorylates and activates Akt. However, under diabetic-mimicking conditions mTOR inhibits phosphatidylinositol (PI) 3-kinase/Akt signaling by phosphorylating insulin receptor substrate-1 (IRS-1) at Ser-636/639. The molecular basis for the differential effect of mTOR signaling on Akt is poorly understood. Here, it has been shown that knockdown of mTOR, Raptor, and mLST8, but not Rictor and mSin1, suppresses insulin-stimulated phosphorylation of IRS-1 at Ser-636/639 and stabilizes IRS-1 after long term insulin stimulation. This phosphorylation depends on the PI 3-kinase/PDK1 axis but is Akt-independent. At the molecular level, Raptor binds the SAIN (Shc and IRS-1 NPXY binding) domain of IRS-1 and regulates the phosphorylation of IRS-1 at Ser-636/639 by mTOR. IRS-1 lacking the SAIN domain does not interact with Raptor, is not phosphorylated at Ser-636/639, and favorably interacts with PI 3-kinase. Overall, these data provide new insights in the molecular mechanisms by which mTORC1 inhibits PI 3-kinase/Akt signaling at the level of IRS-1 and suggest that mTOR signaling toward Akt is scaffold-dependent.
The IRS-1 and IRS-2 proteins are closely related and represent substrates of the insulin and insulin-like growth factor receptors (9,10). They consist of several structurally and functionally distinctive domains. At their amino terminus lies a pleckstrin homology domain that is involved in coupling IRS proteins to the insulin receptor. Next to the pleckstrin homology domain are the phosphotyrosine binding domain and SAIN (Shc and IRS-1 NPXY binding) domain (9,11). The phosphotyrosine binding domain directly interacts with the phosphorylated NPXY motif of the β-subunit of the insulin and insulin-like growth factor receptors (12). The role of the SAIN domain in the physiology of IRS proteins is currently unknown. The carboxyl-terminal tail of IRS-1 contains a number of phosphotyrosine motifs that serve as docking sites for SH2-containing proteins, including enzymes, such as the PI 3-kinase, and adapter molecules, such as Grb-2. mTOR bound to Rictor phosphorylates and activates Akt (4). However, diabetic-mimicking conditions (hyperglycemia and hyperinsulinemia) inhibit PI 3-kinase/Akt signaling in a Raptor-dependent manner via phosphorylation of IRS-1 at Ser-636/639 by mTOR (13). These serine residues lie next to the 632YMPM motif that binds the PI 3-kinase upon insulin stimulation, and their phosphorylation inhibits the PI 3-kinase activity associated with IRS-1 (13). mTORC1 regulates additional inhibitory phosphorylations on IRS-1, such as Ser-307 and Ser-312, which interfere with the ability of IRS-1 to interact with the insulin receptor (13)(14)(15). Therefore, under diabetic-related conditions mTORC1 may inhibit in vivo glucose disposal by suppressing the PI 3-kinase activity associated with IRS-1. The physiological relevance of these findings is supported by recent studies that showed (a) the phosphorylation of IRS-1 at Ser-636/639 is increased in non-insulin-dependent diabetes mellitus subjects with a concomitant reduction in Akt activity (16), (b) rapamycin-mediated mTORC1 inhibition reduces in vivo the phosphorylation of IRS-1 at Ser-636/639 and stimulates insulin-mediated glucose uptake in skeletal muscle of human subjects (16,17), and (c) adipose-specific knock-out of raptor results in lean mice that are resistant to diet-induced obesity and exhibit insulin hypersensitivity with improved glucose tolerance (18).
The finding that Akt is regulated by mTOR through both positive (directly via mTORC2 (4)) and negative (indirectly via mTORC1 at the level of IRS-1 (13)) signals raises the question of signaling specificity. Therefore, it is important to determine (a) the role of individual mTORC1 and mTORC2 components in the phosphorylation of IRS-1 at Ser-636/639, (b) the role of the PI 3-kinase/PDK1/Akt axis in the feedback inhibition of IRS-1 by mTORC1, and (c) the molecular mechanisms by which mTORC1 phosphorylates IRS-1. In this report I systematically knocked down all the known components of mTORC1 and mTORC2, and I demonstrate that the selective ability of mTORC1, but not mTORC2, to suppress the IRS-1-associated PI 3-kinase/Akt signaling depends on Raptor. Knockdown of mTOR, Raptor, and mLST8 abolishes serum- and insulin-induced phosphorylation of IRS-1 at Ser-636/639 and stabilizes IRS-1 after exposure to long term insulin stimulation. Knockdown of Rictor and mSin1 did not affect the aforementioned phosphorylations. Knockdown of PI 3-kinase and PDK1, but not Akt1 or Akt2, suppresses the phosphorylation of IRS-1 at Ser-636/639 triggered by insulin. Mechanistically, Raptor, but not any other component of mTORC1 or mTORC2, interacts with the SAIN domain of IRS-1 and presents IRS-1 to mTOR. IRS-1 lacking the SAIN domain is not phosphorylated at Ser-636/639, and it is resistant to mTORC1-mediated inhibition of PI 3-kinase/Akt signaling. Further studies described herein also showed that the SAIN domain of IRS-2 interacts with Raptor, suggesting a common molecular mechanism by which mTORC1 regulates the phosphorylation of IRS proteins.
Cell Culture-MCF-7, HEK293, HepG2, and C2C12 myoblasts were from the American Type Culture Collection. The HEK293 cell line engineered to stably express wild type IRS-1 has been previously described (13,19). HEK293 cells stably expressing Raptor were generated by infecting HEK293 cells with a pMSCV-TAP-Raptor retrovirus (6) and selected for 1 week with 2 μg/ml puromycin. All cell lines were grown in high glucose (25 mM) Dulbecco's modified Eagle's medium (#11995-065; Invitrogen) supplemented with 10% fetal bovine serum and antibiotics.
Cell Lysis, Immunoprecipitation, and Western Blotting-Cells were washed in ice-cold phosphate-buffered saline and solubilized in lysis buffer (50 mM Tris, pH 7.5, 200 mM NaCl, 1% Triton X-100, 10 mM Na3VO4, 50 mM NaF, 1 mM β-glycerophosphate, 1 mM sodium pyrophosphate, 1 mM EDTA, 1 mM EGTA, 1 mM phenylmethylsulfonyl fluoride, 50 nM okadaic acid, supplemented with a mixture of protease inhibitors). The lysates were cleared by centrifugation for 10 min at 14,000 × g at 4°C. For immunoprecipitation, 0.5-1 mg of a given lysate was mixed overnight via gentle agitation with 1-2 μg of specific or nonspecific antibodies coupled to protein A- or G-Sepharose beads. After extensive washing with lysis buffer, beads were resuspended in SDS sample buffer, supplemented with 5% β-mercaptoethanol, boiled for 5 min, and subjected to Western blot analysis using standard Western blotting protocols.
Subcellular Fractionation-Cells were washed twice in ice-cold phosphate-buffered saline and homogenized in buffer A (0.3% CHAPS, 20 mM HEPES, pH 7.4, 1 mM EDTA, 2 mM MgCl2, 1 mM phenylmethylsulfonyl fluoride, 1 mM Na3VO4, supplemented with a mixture of protease and phosphatase inhibitors, Roche Applied Science). The homogenate was centrifuged at 4,000 × g for 10 min at 4°C, and the supernatant was layered on the top of a linear 10-40% or 15-35% iodixanol gradient (Optiprep D1556) in 0.25 M sucrose, 1 mM EDTA, 20 mM HEPES and centrifuged to equilibrium at 30,000 rpm for more than 20 h at 4°C in a Beckman type SWTi41 rotor. Fractions were collected from the top and analyzed by Western blotting.
GST Pulldown Experiments-BL21 cells transformed with different GST-fused IRS-1/2 protein fragments were induced with 100 μM isopropyl 1-thio-β-D-galactopyranoside overnight at room temperature. BL21 cells were lysed in 50 mM Tris (pH 8.0), 200 mM NaCl, 1% Triton X-100, 2 mM dithiothreitol, 1 μg/ml lysozyme, 0.1 mM phenylmethylsulfonyl fluoride, and 1× Complete protease inhibitor mix (Roche Applied Science). The lysates were cleared by centrifugation at 10,000 × g for 20 min at 4°C, and the proteins of interest were pulled down by incubating with 50 μl of equilibrated glutathione beads for 2 h at 4°C. After extensive washing with lysis buffer, bound proteins were eluted in Laemmli buffer and analyzed by SDS-PAGE electrophoresis.
Lentivirus-mediated Short Hairpin RNA Silencing and Short Interference RNA-The oligonucleotides encoding the short hairpin RNA expression cassettes targeting human mTOR (pLKO1-shmTOR), human Raptor (pLKO1-shRaptor), and human Rictor (pLKO1-shRictor) transcripts have been previously described (4). Additional oligonucleotides encoding short hairpin RNA expression cassettes targeting different components of the mTORC1 and mTORC2 were as follows:
The oligonucleotides were annealed and subcloned into the pLKO.1 vector (#8453, Addgene) following the manufacturer's instructions. The pLKO.1-shGFP construct was used as the control (#12273, Addgene). For lentivirus production, HEK293T cells were co-transfected with the pLKO.1-based plasmid expressing the appropriate short hairpin RNA cassette and the packaging plasmids pCMV-dR8.2dvpr and pCMV-VSV-G with the calcium phosphate method. Virus-containing supernatants were collected at 48 h after transfection and used to infect different cell lines in the presence of 8 μg/ml Polybrene (Sigma). Infected cells were selected for 2 days with 1-2 μg/ml puromycin (Sigma) and analyzed 4-6 days after infection. siRNA against the human mLST8 (#S100425460 and #S100425467) and mSin1 (#S100287861) were from Qiagen, and they were transfected using Lipofectamine 2000 (Invitrogen). Transfections were carried out using a final siRNA concentration of 80 nM. Transfection efficiency, measured with the use of a fluorescein-conjugated control nonspecific siRNA (Cell Signaling #6201), was higher than 80%. Cells were grown for 48-60 h before each experiment.
Plasmid Construction-The pCMV-Myc-IRS-1 construct has been previously described (13). The amino acid numbering is based on the human IRS-1. To generate the IRS-1 constructs described in this study, the following primers were used. (i) For the pCMV-Myc-IRS-1 Δ920-1236 construct we used 5′-TTG GGG GAT CCC AAG GCA AGC-3′ (forward) and 5′-AAG CTT CTA AAC TGA AGG GGA GCT ACG GGA AGT-3′ (reverse). The PCR product was subcloned into the BamHI and HindIII restriction enzyme sites in pCMV-Myc-IRS-1. (ii) For the pCMV-Myc-IRS-1 ΔTOS (942MKMDLG947 mutated to 942ANAAAG947) construct we used 5′-GAA GAG TAC GCG AAC GCG GCC GCG GGG CCA-3′ (forward) and 5′-TGG CCC CGC GGC CGC GTT CGC GTA CTC TTC-3′ (reverse). (iii) Statistical Analysis-Results are expressed as the means ± S.E. Differences between two groups were assessed using the two-tailed Student's t test. Western blot band densitometry was done with the Igor Pro software (Wavemetrics, Inc).
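As an illustration of the statistical comparison described above (a minimal sketch with made-up densitometry values, not the study's data), a two-tailed Student's t test between two groups can be run as follows:

    # Two-tailed Student's t test on hypothetical Western blot densitometry
    # values (arbitrary units); the numbers are illustrative, not the study's data.
    from scipy import stats

    control = [1.00, 0.92, 1.08, 0.97]    # e.g., shGFP-infected cells
    knockdown = [0.41, 0.35, 0.50, 0.44]  # e.g., shRaptor-infected cells

    t_stat, p_value = stats.ttest_ind(control, knockdown)  # two-tailed by default
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant difference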
RESULTS AND DISCUSSION
Components of mTORC1 and mTORC2 Co-fractionate with IRS-1 in Equilibrium Density Gradients-Raptor interacts with IRS-1 and regulates the phosphorylation of IRS-1 at Ser-636/639 by mTOR (13). Since our previous report, studies by other groups showed that mTOR partitions between two distinct complexes (5,6,8,20), mTORC1 (mTOR, Raptor, mLST8, and PRAS40) and mTORC2 (mTOR, Rictor, mLST8, and mSin1). These findings raised the following questions: 1) whether additional components of mTORC1 are involved in the phosphorylation of IRS-1 and 2) whether mTORC2 participates in the regulation of IRS-1 phosphorylation by either a direct or an indirect mechanism, e.g. by regulating the activity of Akt. To address this question, the sedimentation profile of mTORC1 and mTORC2 components has been analyzed in equilibrium density gradients derived from HepG2 cells solubilized in the presence of 0.3% CHAPS to preserve the integrity of mTOR-associated complexes. In HepG2 cells growing in nutrient-rich conditions (e.g. in the presence of serum, glucose, and amino acids) the sedimentation profile of IRS-1 peaks in fractions 4-5 and 8, suggesting that it exists in two distinct subcellular compartments (Fig. 1A). The majority of Raptor and mTOR co-fractionate with IRS-1 in fraction 8, consistent with the fact that IRS-1 interacts with Raptor (13). Although a significant amount of IRS-1 is present in fraction 8, the majority of the protein is found in fractions 4 and 5. Fraction 8 also contains lower amounts of p85α than fractions 4 and 5. This is consistent with the inhibitory role of mTORC1 in the assembly of IRS-1 with PI 3-kinase (p85α/p110α). In these experiments it has also been observed that additional components of mTORC1 (mLST8 and PRAS40) and mTORC2 (Rictor, mSin1, and mLST8) co-fractionate with IRS-1, suggesting that these proteins may be involved in the inhibition of IRS-1 by mTORC1 (Fig. 1A). Interestingly, under the same conditions only a tiny fraction of PRAS40 co-fractionates with IRS-1/mTORC1 at fraction 8, presumably because of Akt-mediated phosphorylation of PRAS40, which leads to its dissociation from mTORC1. Last, only a tiny fraction of S6K1 co-fractionates with IRS-1/mTORC1 at fraction 8, whereas the vast majority of S6K1 co-fractionates with IRS-1 at fractions 4 and 5, which lack mTOR. Because both mTORC1 and S6K1 directly phosphorylate IRS-1, this finding may suggest that they target different pools of IRS-1.
mTORC1, but Not mTORC2, Regulates the Phosphorylation and Stability of IRS-1-To study the relative role of mTOR in the context of mTORC1 and/or mTORC2 in the regulation of IRS-1-associated PI 3-kinase/Akt signaling, the endogenous mTOR, Raptor, and Rictor were knocked down in different cell lines. Fig. 1B shows that in HepG2 cells Raptor knockdown attenuates the serum- and insulin-induced phosphorylation of IRS-1 at Ser-636/639 and stabilizes IRS-1. Decreased phosphorylation of IRS-1 at Ser-636/639 inversely correlates with the phosphorylation status of Akt. On the contrary, Rictor knockdown did not affect the phosphorylation of IRS-1 at Ser-636/639, although it decreased the phosphorylation of Akt at Ser-473 (4-6). Similarly, in MCF-7 cells knockdown of Raptor reduced the phosphorylation of IRS-1 at Ser-636/639 and induced a marked up-regulation in the phosphorylation of Akt (Fig. 1C). Knockdown of Rictor did not affect the phosphorylation of IRS-1 at Ser-636/639 and caused a marked reduction in the phosphorylation of Akt. Knockdown of mTOR affected the viability of MCF-7 cells and caused a dramatic decrease in the levels of a number of proteins, including IRS-1. This resulted in a reduction in the phosphorylation of Akt, mainly at Ser-473 (4-6), and a marked up-regulation in the total levels of Akt. Because knockdown of mTORC1 and mTORC2 affected the levels of endogenous IRS-1 in different cell lines (Fig. 1, B and C, and data not shown), HEK293 cells were engineered to stably express IRS-1 to measure the stoichiometry of IRS-1 phosphorylated at Ser-636/639 upon mTOR, Raptor, and Rictor knockdown. Fig. 1D shows that knockdown of mTOR and Raptor attenuated the insulin-induced phosphorylation of IRS-1 at Ser-636/639, whereas knockdown of Rictor was without an effect. Last, Fig. 1E shows that Raptor knockdown did not alter the total levels of tyrosine phosphorylation of IRS-1. This finding suggests that mTORC1 inhibits the PI 3-kinase/Akt signaling associated with IRS-1 by phosphorylating Ser/Thr residues, such as Ser-636/639, rather than by inhibiting the ability of the insulin receptor to tyrosine phosphorylate IRS-1. Along the same lines, Raptor knockdown specifically enhanced the phosphorylation of Akt after insulin stimulation without affecting the phosphorylation of Erk (extracellular signal-regulated kinase) (Fig. 1E).
Long term insulin stimulation destabilizes IRS-1 and induces insulin resistance. This effect can be reversed by rapamycin, suggesting that inhibition of mTORC1 may stabilize IRS-1 (10,(21)(22)(23)). However, it has been shown that long term rapamycin treatment also inhibits mTORC2 (24). Therefore, to address whether mTORC1 and/or mTORC2 regulate the stability of IRS-1 upon long term insulin stimulation, Raptor and Rictor were knocked down in HepG2 cells and C2C12 myoblasts. After stimulation with insulin for 16 h, it has been found that Raptor, but not Rictor, knockdown stabilized IRS-1 in both cell lines (Fig. 2A). Further experiments in HepG2 cells confirmed that Raptor knockdown increased IRS-1 half-life from 4 to >10 h compared with control cells (Fig. 2B). Taken together, the data presented in Figs. 1 and 2 suggest that mTORC1, but not mTORC2, regulates the phosphorylation and stability of IRS-1.

mLST8, but Not mSin1 and S6K1, Regulates Serum- and Insulin-stimulated Phosphorylation of IRS-1 at Ser-636/639-mLST8 binds the kinase domain of mTOR and participates in the formation of both the mTORC1 and mTORC2 complexes (5,20). mLST8 is required for mTORC1 and mTORC2 activity toward S6K1 (20) and Akt (5), respectively. mLST8 co-fractionates with mTORC1, mTORC2, and IRS-1 (Fig. 1A). This raises the question of whether mLST8 participates in the mTORC1-dependent phosphorylation of IRS-1. To address this question, the endogenous mLST8 has been knocked down in HEK293 cells stably expressing IRS-1, and it has been found that mLST8 is required for both insulin- and serum-induced phosphorylation of IRS-1 at Ser-636/639 (Fig. 3A). Knockdown of mLST8 in HEK293 cells also suppressed serum- and insulin-dependent phosphorylations of both S6K1 and Akt. Therefore, mLST8 positively regulates mTOR kinase activity toward its substrates (IRS-1, S6K1, and Akt). Interestingly, mLST8 mRNA and protein levels are up-regulated in 3T3-L1 adipocytes after long term insulin stimulation in a concentration-dependent manner (25), suggesting that mLST8 may participate in the inhibitory effect of mTORC1 on IRS-1 signaling. mSin1 interacts with Rictor and positively regulates mTORC2-mediated Akt phosphorylation at Ser-473 (5). mSin1 co-fractionates with IRS-1, suggesting that it may be involved in the phosphorylation of IRS-1 (Fig. 1A). To address this question, mSin1 has been knocked down, and it has been found that although mSin1 is required for the phosphorylation of Akt at Ser-473, it did not affect the serum- and insulin-stimulated phosphorylation of IRS-1 at Ser-636/639 (Fig. 3A). S6K1, a downstream target of mTORC1, phosphorylates IRS-1 at multiple inhibitory serine residues (15,26). S6K1 directly phosphorylates IRS-1 at Ser-307 and inhibits its interaction with the insulin receptor (14). S6K1 knockout mice are deficient in phosphorylating IRS-1 at Ser-636/639 (26). However, by using a rapamycin-resistant mutant of S6K1, we have shown that rapamycin is still capable of suppressing phosphorylation of IRS-1 at Ser-636/639, suggesting that mTOR/Raptor per se, rather than S6K1, are important for the aforementioned phosphorylation. Consistently, S6K1 does not phosphorylate IRS-1 at Ser-636/639 in vitro (15), whereas mTORC1 does (13). Moreover, Fig. 3B shows that knockdown of S6K1 in HEK293 cells stably expressing IRS-1 did not affect the phosphorylation of IRS-1 at Ser-636/639 upon serum or insulin stimulation. Therefore, S6K1 is not directly involved in the phosphorylation of IRS-1 at Ser-636/639.
However, S6K1 participates in the negative feedback regulation of PI 3-kinase/Akt signaling by phosphorylating other rapamycin-sensitive serine residues, such as Ser-307 (13)(14)(15).
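For context on the half-life estimate quoted above (4 h vs. >10 h, Fig. 2B), a protein half-life of this kind is conventionally obtained by fitting densitometry values from a time course to a single exponential decay; a minimal sketch with made-up numbers (not the study's data) follows:

    # Fit a single exponential decay N(t) = N0 * exp(-k * t) to hypothetical
    # IRS-1 densitometry values and report the half-life t1/2 = ln(2) / k.
    import numpy as np
    from scipy.optimize import curve_fit

    def decay(t, n0, k):
        return n0 * np.exp(-k * t)

    t_hours = np.array([0, 2, 4, 8, 16])              # chase time points
    signal = np.array([1.0, 0.72, 0.49, 0.26, 0.06])  # illustrative band intensities

    (n0, k), _ = curve_fit(decay, t_hours, signal, p0=(1.0, 0.2))
    print(f"half-life = {np.log(2) / k:.1f} h")       # ~4 h for these made-up values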
mTORC1 Phosphorylation of IRS-1 at Ser-636/639 Is Akt-independent-Next, it has been addressed whether the PI 3-kinase/PDK1/Akt axis is involved in the regulation of IRS-1 phosphorylation at Ser-636/639. Two commonly used PI 3-kinase inhibitors, LY294002 and wortmannin, inhibited, in a dose-dependent manner, the insulin-stimulated phosphorylation of IRS-1 at Ser-636/639 in C2C12 myoblasts in a manner that parallels mTORC1 activity (Fig. 3C). However, LY294002 and wortmannin inhibit both mTOR and PI 3-kinase with similar IC50 values (27). Therefore, the observed effect in Fig. 3C may reflect the inhibition of mTORC1 per se rather than the inhibition of PI 3-kinase. To address this issue, the PI 3-kinase and PDK1 were knocked down in HEK293 cells, and it has been found that both of them are required for the activation of the mTORC1 pathway as well as for the phosphorylation of IRS-1 at Ser-636/639 upon serum and insulin stimulation (Fig. 3D).
The PI 3-kinase/PDK1 axis and mTORC2 are involved in the activation of Akt. Moreover, Akt phosphorylates TSC2 (tuberous sclerosis complex) (28) and PRAS40 and activates mTORC1 (7,8). Therefore, it is likely that Akt may regulate its own attenuation after insulin stimulation by inducing the activation of mTORC1. To address this question, cells were treated with Akt inhibitors IV and VIII after insulin stimulation. Fig. 3E shows that both inhibitors failed to inhibit the insulin-stimulated phosphorylation of IRS-1 at Ser-636/639. Similarly, knockdown of either Akt1 or Akt2 in HEK293 cells did not interfere with the ability of mTORC1 to phosphorylate IRS-1 at Ser-636/639 (Fig. 3F). These findings suggest that (a) Akt1 and Akt2 have similar signaling abilities regarding the phosphorylation of IRS-1 by mTORC1 and (b) Akt1 and Akt2 have a minor role in the mTORC1-mediated feedback inhibition of the PI 3-kinase signaling associated with IRS-1.

Raptor Interacts with IRS-1 in a Growth Factor- and Glucose-dependent Manner-The finding that mLST8 (current study) and PRAS40 (7) regulate the phosphorylation of IRS-1 at Ser-636/639 and the observation that IRS-1 co-fractionates with components of both mTORC1 and mTORC2 in equilibrium density gradients raised the question as to whether those proteins interact with IRS-1. To address this question, we transfected HEK293 cells engineered to stably express IRS-1 with either equal amounts (Fig. 4A) or equal molarities (Fig. 4B) of Myc-tagged components of mTORC1 and mTORC2. Fig. 4, A, B, and C, show that Raptor is the only protein that interacts with IRS-1, suggesting that mLST8 and PRAS40 affect IRS-1 phosphorylation by regulating the kinase activity of mTOR. Interestingly, we found that the interaction between Raptor and IRS-1 is dynamic and is regulated both by growth factors and glucose (Fig. 4D), as is the case with the phosphorylation of IRS-1 at Ser-636/639 (13). Consistently, in serum-free media Raptor did not co-fractionate with IRS-1 (Fig. 4E, upper panel), whereas insulin stimulation induced the phosphorylation of IRS-1 at Ser-636/639 and its translocation to denser compartments along with mTOR and Raptor (fractions 5-8 in Fig. 4E, lower panel).
The SAIN Domain of IRS-1 Interacts with Raptor and Regulates the Phosphorylation of IRS-1 at Ser-636/639-The domain of IRS-1 that interacts with Raptor is currently unknown. Raptor binds both S6K1 and 4E-BP1 proteins via their TOR-signaling motifs (TOS) (29-34). 4E-BP1 has a TOS motif (FEMDI) in its carboxyl terminus, whereas S6K1 has a TOS motif (FDIDL) in its amino terminus (29-34). A point mutation of the conserved Phe residue to Ala abolishes 4E-BP1 and S6K1 binding to Raptor and impairs their phosphorylation by mTOR. Sequence analysis revealed that IRS-1 has a putative TOS motif (937GTEEYMKMDL946) that may interact with Raptor (30,32). However, it lacks the critical Phe residue (instead it has a methionine). Fig. 5, A and B, show that deletion of the carboxyl terminus of IRS-1 (IRS-1 Δ920-1236) as well as mutation of the putative TOS motif did not affect the phosphorylation of IRS-1 at Ser-636/639. Surprisingly, in the same experiment it has been found that deletion of the SAIN domain (amino acids 250-584) of IRS-1 attenuated the phosphorylation of IRS-1 at Ser-636/639 and increased IRS-1 binding to PI 3-kinase (Fig. 5, B and C). This is consistent with the inhibitory role of those phosphorylations in the interaction between IRS-1 and PI 3-kinase upon insulin stimulation (13). Last, the IRS-1 Δ495-1236 mutant, which lacks the two SH2 domains, did not interact with PI 3-kinase and served as control.
To further characterize the importance of the SAIN domain, a series of IRS-1 constructs fused to GST protein were generated (Fig. 5D). These constructs were used in pulldown experiments using cell extracts obtained from HEK293 cells stably expressing Raptor. Fig. 5, D and E, show that the GST-IRS-1 260-380 fragment efficiently pulled down endogenous as well as exogenous Raptor, suggesting that this domain of IRS-1 directly interacts with Raptor. On the contrary, the GST-IRS-1 380-500 fragment failed to pull down Raptor. Interestingly, we found that the GST-IRS-1 260-500 and GST-IRS-1 260-700 fragments that harbor the whole SAIN domain of IRS-1 exhibited a higher ability to interact with Raptor in vitro, suggesting that additional elements of the SAIN domain are crucial for this interaction. Importantly, none of the GST-fused IRS-1 fragments pulled down mTOR or mLST8.
To assess in vivo the role of the SAIN domain in the mTORC1-mediated phosphorylation of IRS-1 at Ser-636/639, a series of IRS-1 mutants was generated that lacked different parts of the SAIN domain (Fig. 6A). Fig. 6B shows that deletion of the SAIN domain completely abolished the phosphorylation of IRS-1 at Ser-636/639. Interestingly, although in vitro a GST-IRS-1 260-380 fragment is sufficient for the interaction with Raptor, deletion of the SAIN domain at either its amino or carboxyl terminus decreased phosphorylation of IRS-1 at Ser-636/639, suggesting that additional elements of the SAIN domain are required in vivo for the efficient phosphorylation of IRS-1 by mTORC1. This is consistent with the in vitro finding that the full-length SAIN domain significantly enhances the interaction with Raptor (Fig. 5E). Indeed, IRS-1 lacking the SAIN domain (IRS-1 Δ250-500) failed to co-immunoprecipitate with Raptor (Fig. 6C). Last, point mutations of Ser-307/312 to alanine, two well established negative phosphorylations mediated by S6K1 and JNK (c-Jun NH2-terminal kinase) (14,15,(35)(36)(37)(38)(39)), respectively, did not affect the phosphorylation status of IRS-1 at Ser-636/639, suggesting that those phosphorylation events occur independently (Fig. 6B). Overall, the data presented in Figs. 5 and 6 suggest that the domain of IRS-1 from amino acids 260 to 380 is required for the interaction between IRS-1 and Raptor, whereas the full-length SAIN domain (amino acids 250-584) dramatically enhances in vitro and in vivo the interaction between IRS-1 and Raptor and the phosphorylation of IRS-1 at Ser-636/639 by mTORC1.
IRS-1 and IRS-2 proteins exhibit significant homology in their SAIN domains (9,11). Moreover, insulin triggers the phosphorylation of IRS-2 on serine residues (recognized by the phosphoserine 14-3-3 binding motif (4E2) monoclonal antibody, Cell Signaling #9606) and promotes the binding of p85α/PI 3-kinase to IRS-2 in a rapamycin-dependent manner (Fig. 6D). This suggests that IRS-2 may also interact with Raptor via its SAIN domain. To address this hypothesis, a GST-IRS-2 SAIN (amino acids 249-534) fusion protein has been generated, and its ability to interact in vitro with Raptor has been examined. Fig. 6E indeed shows that the SAIN domain of IRS-2 interacts with Raptor, suggesting a common molecular mechanism by which mTORC1 regulates the phosphorylation of IRS proteins.
Overall, this report shows that 1) Raptor defines the selective ability of mTORC1 to interact with and to regulate the phosphorylation of IRS-1 at Ser-636/639, 2) mLST8 in the context of mTORC1 is required for the phosphorylation of IRS-1 at Ser-636/639, 3) Raptor knockdown stabilizes IRS-1 under diabetic-mimicking conditions, and 4) the SAIN domain of IRS-1 binds to Raptor and allosterically regulates the phosphorylation of IRS-1 at Ser-636/639 by mTOR (Fig. 6F).
Allogenic Synovia-Derived Mesenchymal Stem Cells for Treatment of Equine Tendinopathies and Desmopathies—Proof of Concept
Simple Summary: Horses are high-level athletes prone to musculoskeletal injuries. Tendon/ligament injuries are the most frequent type of injury and are very difficult to treat. Instead of tissue regeneration, fibrous scar tissue usually develops, which decreases the functionality of the injured area and threatens the sport careers of the affected horses. The aim of regenerative medicine is to find a treatment that promotes tissue regeneration and allows the equine patient to return to the same level of athletic performance in the shortest possible time. In this study, we developed a solution of equine synovial membrane stem cells and autologous serum, to be injected at the lesion site to promote tissue regeneration. We describe the processes of tissue collection, preparation, isolation of synovial stem cells, expansion, culture, cryopreservation, and subsequent preparation with autologous serum. The solution was tested in 16 tendons and ligaments of equines. After treatment, all equine patients underwent a physical rehabilitation program and were monitored with physical and ultrasonographic exams. The results were very promising and thus support the use of equine synovial stem cells and autologous serum in the treatment of tendonitis and desmitis.

Abstract: Tendon and ligament injuries are frequent in sport horses and humans, and such injuries represent a significant therapeutic challenge. Tissue regeneration and function recovery are the paramount goals of tendon and ligament lesion management. Nowadays, several regenerative treatments are being developed, based on the use of stem cells and stem cell-based therapies. In the present study, the preparation of equine synovial membrane mesenchymal stem cells (eSM-MSCs) for clinical use is described, covering collection, transport, isolation, differentiation, characterization, and application. These cells are fibroblast-like and grow in clusters. They retain osteogenic, chondrogenic, and adipogenic differentiation potential. We present 16 clinical cases of tendonitis and desmitis treated with allogenic eSM-MSCs and autologous serum, and we also include their evaluation, treatment, and follow-up. The advantages associated with the use of autologous serum as a vehicle relate to a reduced immunogenic response after the administration of this therapeutic combination, as well as the pro-regenerative effects of the growth factors and immunoglobulins that are part of its constitution. Most of the cases (14/16) healed in 30 days and presented good outcomes. Treatment of tendon and ligament lesions with a mixture of eSM-MSCs and autologous serum appears to be a promising clinical option for this category of lesions in equine patients.
Introduction
Tendonitis and desmitis are challenging clinical conditions in equine patients: they require long recovery periods, and ineffective tendon repair can affect a horse's sport career. Tendons operate near their functional limits during maximal exercise, and their ability to adapt to stress and self-repair is limited. A controlled exercise program, alone or in combination with a variety of conservative treatments such as corrective shoeing and nonsteroidal anti-inflammatory drugs (NSAIDs), is still the gold-standard therapy for equine tendon disease [1]. Current treatments often do not fully repair or regenerate the injured or affected tendon, nor lead to its total functional recovery [1,2].
The aim of tendinopathy treatment is to achieve tissue regeneration and a return to complete organ function and performance. Recently, tissue engineering approaches have attracted attention for tissue repair. Among these approaches, the use of mesenchymal stem cell-based therapy has increased, since it is a promising approach for tissue repair and regeneration, including in tendinopathy and desmitis [1,[3][4][5][6]].
Mesenchymal stem cells (MSCs) can be isolated from several tissue sources such as bone marrow, peripheral blood, dental pulp, umbilical cord, and amniotic fluid [7]. MSC characteristics have been defined by the Mesenchymal and Tissue Stem Cell Committee of the International Society for Cellular Therapy (ISCT) and include plastic adherence when maintained in standard culture conditions; expression of clusters of differentiation (CDs), such as CD44, CD90, and CD105; and no expression of major histocompatibility complex (MHC)-class II markers or of hematopoietic-related markers (CD45 and CD34) [8]. Finally, MSCs must be able to differentiate in vitro into, at least, osteoblasts, adipocytes, and chondroblasts, in the presence of adequate differentiation culture media [8].
Synovial membrane mesenchymal stem cells (SM-MSCs) were initially isolated in 2001 by De Bari et al. [9] from human knee joints and showed significant proliferative ability in culture, even after Passage 10 (P10), as well as multilineage differentiation potential in vitro [9]. These cells represent a good source of MSCs and a promising therapeutic tool, mostly for musculoskeletal pathologies [10]. Sakaguchi et al. compared the properties of different sources of human stem cells, i.e., bone marrow, synovium, periosteum, skeletal muscle, and adipose tissue, and observed the superiority of synovium as a source of MSCs for the treatment of musculoskeletal pathologies, as synovium-derived cells showed a greater capacity for chondrogenesis: their pellets were larger and stained more intensely for chondrogenic differentiation [11].
SM-MSCs have a higher chondrogenic capacity than other studied sources of MSCs, such as bone marrow (BM-MSCs) [12,13]. Cartilage pellets from SM-MSCs have been reported to be significantly larger than those from BM-MSCs [12]. SM-MSCs have a higher production of uridine diphosphate glucose dehydrogenase (UDPGD) [13], an enzyme that converts UDP-glucose into UDP-glucuronate, one of the two substrates required by hyaluronan synthase for hyaluronan polymer assembly. In addition, Sox-9, collagen type II (Col-II), and aggrecan, specific markers of chondrogenesis, as well as cartilage-specific molecules such as cartilage oligomeric matrix protein (COMP), have been found in high amounts in equine synovial fluid-derived MSCs and the extracellular matrix, respectively, by reverse transcription polymerase chain reaction (RT-PCR) [13].
In a recent study, using a rabbit model, Bami et al. highlighted the superiority of SM-MSCs in terms of chondrogenesis, osteogenesis, myogenesis, and tenogenesis [14]. A study of xenogenic implantation of SM-MSCs in equine articular defects also confirmed better healing of the cartilage of affected knees as well as a higher expression of collagen type II, indicating the presence of hyaline cartilage in the healed defect [15].
SM-MSCs have been defined as MSCs due to their phenotypic profile and differentiation potential. Even though there are no specific antibody markers to identify these MSCs, there is general agreement that MSCs should be negative for the hematopoietic markers CD34 and CD45 and positive for CD44, CD73, CD90, and CD105 [16]. Mochizuki et al. found that SM-MSCs maintained their proliferative ability regardless of which region of the synovium they were collected from [17].
In 2003, Fickert et al. reported that the markers CD9, CD44, CD54, CD90, and CD166 could be used to identify MSCs isolated from the synovium of human patients with osteoarthritis (OA), and they also confirmed that CD9/CD90/CD166 triple-positive cell subgroups had obvious chondrogenic and osteogenic differentiation abilities [18].
Nevertheless, the immunophenotypic characterization of equine MSCs (eMSCs), as in other veterinary species, has not yet been completely established [19]. This is a major challenge, since the expression of certain adult stem cell markers may differ between species. For that reason, there is a need to define a set of CD markers that can be uniformly applied to the identification of eMSCs [8,20].
Horses are high-performance athletes prone to musculoskeletal diseases, i.e., osteoarticular conditions as well as tendon/ligament lesions and fractures of various degrees due to sport- and age-related injuries. These pathologies resemble human musculoskeletal conditions, and therefore, horses are a valuable animal model for assessing stem cell and cell-based therapies prior to the translation of results into humans [21]. The use of a therapy that can regenerate these structures and restore their complete functionality, instead of producing ordinary scar healing, is the aim of our study and of equine practitioners throughout the world.
Recent studies have suggested that MSCs can self-renew, migrate to injury sites (homing), perform multilineage differentiation, and secrete bioactive factors, thus increasing the proliferation and migration of tendon stem/progenitor cells via paracrine signaling and increasing the regeneration ability of tissues with poor aptitude [1,[3][4][5]22,23].
In fact, the knowledge of the importance of this paracrine action has opened doors to cell-free therapeutic strategies in regenerative medicine. The soluble factors (cytokines, chemokines, and growth factors) and nonsoluble factors (extracellular vesicles and exosomes) released into the extracellular space by MSCs, commonly known as the secretome, have become the focus of novel therapeutic approaches due to their key role in cell-to-cell communication, their active influence on immune modulation, and their pro-regenerative capacity both in vitro and in vivo [23]. Therefore, in this study, the secretome was also analyzed with the prospect of being used therapeutically, in the future, in similar clinical cases.
In the present study, equines used as show jumping and dressage athletes, as well as leisure horses, with acute and chronic lesions were treated with intralesional administration of the considered combination, i.e., autologous serum and eSM-MSCs. The treatment consisted of two injections, 15 days apart. Pre- and post-treatment evaluations consisted of clinical, orthopedic, and tendon/ligament ultrasound exams. None of the selected equine patients had previously received any other regenerative treatment.
Study Design and Horse Selection
This prospective longitudinal study was performed in Portugal between February 2016 and January 2019. Sixteen horses, from 5 to 22 years old, with acute or chronic signs of lameness were enrolled in this study (11 males and 5 mares); their sport activities were distributed over show jumping (14), dressage (1), and leisure (1). The horses were all outpatients from an equine ambulatory clinic. This study included the treatment of 16 tendons, i.e., 14 superficial digital flexor tendons and 2 deep digital flexor tendons, and 4 suspensory ligaments.
Lameness was scored based on the American Association of Equine Practitioners (AAEP) scale (Table 1) and confirmed by using a positive regional nerve block. Flexion and pain-to-pressure tests were also evaluated [24].

Table 1. Score systems used by the veterinary surgeon to assess lameness, and responses to flexion and pain-to-pressure tests [25,26].
AAEP grading: 0 = no lameness; 1 = lameness not consistent; 2 = lameness consistent under certain circumstances; 3 = lameness consistently observable on a straight line; 4 = obvious lameness at walk (marked nodding or shortened stride); 5 = minimal weight-bearing lameness in motion or at rest.
Flexion test: 0 = no flexion response; 1 = mild flexion response; 2 = moderate flexion response; 3 = severe flexion response.
Pain to pressure: 0 = no pain to pressure; 1 = mild pain to pressure; 2 = moderate pain to pressure; 3 = severe pain to pressure.
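A convenient way to tabulate exam results against this scale is a simple lookup structure; the sketch below (a hypothetical helper, not part of the study's workflow) encodes the three score systems from Table 1:

    # Hypothetical lookup tables encoding the Table 1 score systems,
    # useful for tabulating and validating exam records.
    AAEP_GRADES = {
        0: "No lameness",
        1: "Lameness not consistent",
        2: "Lameness consistent under certain circumstances",
        3: "Lameness consistently observable on a straight line",
        4: "Obvious lameness at walk: marked nodding or shortened stride",
        5: "Minimal weight-bearing lameness in motion or at rest",
    }
    RESPONSE_SCALE = {0: "None", 1: "Mild", 2: "Moderate", 3: "Severe"}

    def describe_exam(aaep: int, flexion: int, pressure: int) -> str:
        """Render one equine patient's scores as a readable summary."""
        return (f"AAEP {aaep} ({AAEP_GRADES[aaep]}); "
                f"flexion response: {RESPONSE_SCALE[flexion]}; "
                f"pain to pressure: {RESPONSE_SCALE[pressure]}")

    print(describe_exam(3, 2, 1))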
Inclusion and Exclusion Criteria
In this study, horses with acute or chronic lameness (Table 2), with diagnosed tendonitis and/or desmitis, and with no signs of systemic disease met the inclusion criteria. Injured horses were treated in the acute stage of disease, except for two equine patients (Patients 3 and 6). Patient 3 had sustained an injury the year before this treatment, and laser therapy had been performed without a complete recovery; after a re-injury, this treatment was suggested. Patient 6 was referred by another clinician who had tried, unsuccessfully, to treat the patient; Patient 6 was turned out to the field for one year and then re-evaluated, and at that time, as its trainer wanted to improve its quality of life, this treatment was proposed by its clinician. The lameness grade of each equine patient is specified in Table 2. Considering the established exclusion criteria, the selected equine patients should not have been under any other medical treatment (including nonsteroidal anti-inflammatory drugs, intra-articular corticosteroids, hyaluronan, glycosaminoglycans, platelet-rich plasma (PRP), and other MSC preparations) for at least 2 months before the allogenic eSM-MSC treatment and did not receive any additional medical treatment (except for that described in the treatment plan) for at least 2 months after the cell-based treatment.

Table 2. Equine patient and lesion characterization. The left column characterizes the equine patients: sex, male (M) or female (F); age measured in years old (yo); sports modality (SM), show jumping (SJ), dressage (Dre), and leisure (Lsr); lameness score (AAEP score) pre-treatment. The right column characterizes the lesions: structure affected, superficial digital flexor tendon (SDFT), deep digital flexor tendon (DDFT), and suspensory ligament (SL) left branch (LB); affected limb, right forelimb (RF), right hindlimb (RH), left forelimb (LF), and left hindlimb (LH).
Ethics and Regulation
This study was carried out in accordance with the recommendations and authorization of the Organismo Responsável pelo Bem-Estar Animal (ORBEA) of ICBAS-UP, project number P289/ORBEA/2018. Treatments were performed with the permission and signed informed consent of each equine patient's legal trainer, following a thorough explanation of the procedure itself and its possible risks and associated effects, in accordance with national regulations and project approval from the competent authorities. In addition, no animals were euthanized for this study.
Donor Selection and SM Collection
The eSM-MSC donor was a young, healthy foal, 7 months old, which died accidentally while running in the arena. The trainer authorized synovial membrane collection from the hocks, knees, and fetlocks. The synovial membrane was evaluated: its appearance was transparent, bright, and smooth, and the presence of viscous, transparent synovial fluid confirmed its soundness. The skin covering the incisional field was surgically cleaned with chlorhexidine and alcohol. The skin and subcutaneous tissue were incised and debrided, the articular capsule was opened, and the synovial membrane was isolated and extracted into a Dulbecco's phosphate-buffered saline (DPBS) container. The samples were transported to the laboratory with ice packs under refrigerated conditions. Figure 1a presents the fresh tissue on arrival and Figure 1b shows the preparation at the laboratory. Figure 2 shows a schematic representation of the process from eSM-MSC collection to the administration of the combination, i.e., eSM-MSCs and autologous serum (1 × 10^6 cells/mL and 1 mL of autologous serum in a total volume of 2 mL). Figure 2. Schematic representation of the event sequence from the collection of synovial membrane to the administration of the therapeutic combination. After collection, the synovial membrane is transported to the laboratory, where it is separated from the whole tissue, decontaminated, incubated, and digested. Cells are then cultured, expanded, and finally cryopreserved in a cell bank. When needed for treatment, cells are prepared with autologous serum and applied in the selected equine patient.
eSM-MSC Isolation
After collection, the equine synovial membrane was prepared at the Laboratory of Veterinary Cell-based Therapies from ICBAS-UP. The isolation protocol for eSM-MSCs was developed under patented proprietary technology Regenera® (PCT/IB2019/052006, WO2019175773, Compositions in use for the treatment of musculoskeletal conditions and methods for producing the same leveraging of the synergistic activity of two different types of mesenchymal stromal/stem cells, Regenera®). Fresh tissue was transported to the laboratory facilities in a hermetically sealed sterile container in transport medium (supplemented with 3% (v/v) penicillin-streptomycin (Gibco®, Waltham, MA, USA) and 3% amphotericin B (Gibco®)) and processed within a period of up to 48 h. The synovial tissue was digested using collagenase, and the isolated cells were incubated in a static monolayer culture using standard MSC basal medium supplemented with 10% fetal bovine serum (FBS) and maintained under standard culture conditions (37 °C, 5% CO2, and humidified atmosphere) until they reached confluence. Cells from confluent cultures were cryopreserved in 10% dimethylsulphoxide (DMSO) in FBS, at a concentration of 3 × 10^6 cells/mL, using a controlled-rate freezer (Sy-Lab Cryobiology, SY-LAB Geräte GmbH, Purkersdorf, Austria). For expansion optimization, cells were cryopreserved at passages (P) between P2 and P3 to generate suitable master cell banks (MCBs). Expansion thereafter was analyzed during a maximum of 20 cumulative population doublings (cCPDs). The chosen cCPD range allowed enough expansion to maximize the number of cells in the working cell banks (WCBs) while keeping the cCPDs within the genomic-stability range.
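To make the expansion bookkeeping concrete, the sketch below shows how cumulative population doublings can be computed from seeding and harvest counts; the per-passage counts are purely illustrative, not the study's data.

```python
from math import log2

def population_doublings(n_seeded: float, n_harvested: float) -> float:
    """Population doublings for one passage: PD = log2(N_harvested / N_seeded)."""
    return log2(n_harvested / n_seeded)

# Illustrative (seeded, harvested) cell numbers per passage — not the study's data.
passages = [(2.0e5, 1.6e6), (2.0e5, 1.2e6), (2.0e5, 9.0e5)]

cumulative = 0.0
for i, (seeded, harvested) in enumerate(passages, start=1):
    pd = population_doublings(seeded, harvested)
    cumulative += pd
    print(f"P{i}: PD = {pd:.2f}, cCPD = {cumulative:.2f}")

# A working cell bank would be kept only while cCPD stays within the
# genomic-stability range (a maximum of 20 cCPD in this protocol).
```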
Tri-Lineage Differentiation Protocols
For all the differentiation protocols, cells in P4 were used after thawing.
Adipogenic Differentiation and Oil Red O Staining
For the adipogenic differentiation protocol, 1 × 10^4 cells/cm^2 were seeded in the wells of a 12-well plate (cell culture plates, 12-well, VWR®, Suwanee, Atlanta, GA, USA), with the addition of the standard culture medium. The plate was incubated under standard conditions for 4 days. After this period, the culture medium of 10 wells was replaced by complete adipogenesis differentiation medium (StemPro® Adipogenesis Differentiation Kit, Gibco®, Waltham, MA, USA); 2 wells were used as controls and maintained with the standard culture medium. Following the manufacturer's instructions, the media were replaced every 3-4 days and the cells maintained in differentiation for 14 days. At the end of this period, the oil red O staining protocol was performed using a solution prepared in-house. The culture differentiation medium was removed, and the wells were gently washed with PBS. Cells were fixed with 4% formaldehyde (3.7-4% buffered to pH 7, reference# 252931.1315, Panreac AppliChem®, Darmstadt, Germany) for 10 min at room temperature, and the wells were washed 3 additional times with phosphate-buffered saline (PBS). Oil red O solution was added to each well and the plate incubated for 10-20 min at room temperature. The oil red O was then discarded, and any excess dye was removed by several washes with PBS. PBS was added to each well for visualization. The aim of this assay was the identification of rounded cells with intracytoplasmic lipid vacuoles, stained red due to exposure to the oil red O solution.
Chondrogenic Differentiation and Alcian Blue Staining
Thawed eSM-MSCs were automatically counted, and cell viability was determined (%). The cells were then centrifuged, the supernatant removed, and the pellet resuspended in culture medium to generate a cell suspension with 1.6 × 10^7 viable cells/mL. To generate micro-mass cultures, 5 µL droplets of the cell suspension were placed in the center of 10 wells of a 96-well plate (cell culture plates, 96-well, VWR®, Suwanee, Atlanta, GA, USA) to induce chondrogenic differentiation. The plate was maintained under standard conditions for 2 h. After this time, chondrogenic differentiation medium (StemPro® Chondrogenesis Differentiation Kit, Gibco®, Waltham, MA, USA) was added to 8 wells; the other 2 wells were used as controls and received the standard culture medium. Following the manufacturer's instructions, the media were replaced every 3-4 days and the cells maintained in differentiation for 14 days. At the end of this period, the Alcian blue staining protocol, pH 2.5, was performed (Alcian Blue 8GX, Sigma-Aldrich®, St. Louis, MO, USA). The culture differentiation medium was removed, and the wells were gently washed with PBS. The cells were fixed with 4% formaldehyde for 20 min at room temperature, and the wells were washed 3 additional times with PBS. Alcian blue solution was added to each well and the plate incubated for 30 min at room temperature. The Alcian blue was then discarded, and the wells were rinsed 3 times with 3% acetic acid (v/v). For neutralization of acidity and for visualization by inverted phase-contrast microscopy, distilled water was added to all wells. The aim of this assay was the identification of chondrogenic aggregates, stained blue due to exposure to the Alcian blue solution.
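As a quick sanity check of the micro-mass seeding described above, the cell number per droplet follows directly from the suspension concentration and the droplet volume; a minimal sketch:

```python
# Cells per micro-mass droplet: concentration (cells/mL) x droplet volume (mL).
concentration = 1.6e7        # viable cells/mL, as prepared above
droplet_volume_ml = 5e-3     # 5 uL droplet

cells_per_droplet = concentration * droplet_volume_ml
print(f"{cells_per_droplet:.0f} cells per droplet")  # 80000 cells
```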
Osteogenic Differentiation and Alizarin Red Staining
For osteogenic differentiation, 8 × 10^3 cells/cm^2 were seeded into the wells of a 12-well plate. The plate was maintained under standard conditions for 4 days. After this period, the culture medium of 10 wells was replaced by complete osteogenic differentiation medium (StemPro® Osteogenic Differentiation Kit, Gibco®, Waltham, MA, USA), and 2 wells were used as controls and maintained with the standard culture medium. Following the manufacturer's instructions, the media were replaced every 3-4 days and the cells maintained in differentiation for 21 days. At the end of this period, the alizarin red S staining protocol was performed using a commercial solution (alizarin red staining solution, Millipore®, Burlington, MA, USA). The culture differentiation medium was removed, and the wells were gently washed with PBS. The cells were fixed with 4% formaldehyde for 30 min at room temperature, and the wells were washed twice with distilled water. One mL of 40 mM alizarin red solution was added to each well and the plate incubated for 30 min. The alizarin red solution was then discarded, and the wells were rinsed 3 times with distilled water until the supernatant became clear. For visualization by inverted phase-contrast microscopy, PBS was added to all the wells. The aim of this assay was to identify calcium-containing osteocytes stained red after exposure to the alizarin red solution.
Karyotype Analysis
The eSM-MSCs at two different passages (P4 and P7) were submitted to cytogenetic analysis to determine their genetic stability in terms of chromosome number and the occurrence of neoplastic changes. For both passages, 70-80% confluence was reached. The culture medium was then changed and supplemented with 10 µg/mL colcemid solution (KaryoMAX® Colcemid™ Solution, Gibco®, Waltham, MA, USA). After 4 h, the eSM-MSCs were collected and resuspended in 8 mL of 0.075 M KCl solution, followed by incubation under standard conditions for 15 min. After centrifugation (1700 rpm), 8 mL of ice-cold fixative comprising methanol and glacial acetic acid at a proportion of 3:1 was added and mixed. Afterwards, the cells were centrifuged again. Three fixation rounds were carried out. After the last centrifugation, the suspension of eSM-MSCs was spread over glass slides. Karyotype analysis was performed by one scorer on Giemsa-stained cells. For the different passages, a specific number of cells in metaphase were evaluated depending on the number of cells with a normal karyotype identified, guaranteeing a better representation of the population under study.
Secretome Cell Conditioned Medium (CM) Analysis
The eSM-MSCs were harvested from equine synovial membrane and maintained in culture, as previously described. The cells in P4 were subjected to an analysis of their conditioned medium (CM) to identify cytokines and chemokines secreted after conditioning. When in culture, after reaching a confluence of around 70-80%, the culture medium was removed, and the culture flasks were gently washed with DPBS two to three times.
Then, the culture flasks were further washed two to three times with the basal culture medium of each cell type, without any supplementation. To begin the conditioning, nonsupplemented DMEM/F12 GlutaMAX™ (10565018, Gibco ® , Thermo Fisher Scientific ® , Waltham, MA, USA) culture medium was added to the culture flasks, which were then incubated under standard conditions. The culture medium rich in factors secreted by the cells (CM) was collected after 48 h. The collected CM was then concentrated five times. After collection, it was centrifuged for 10 min at 1600 rpm, and its supernatant collected and filtered with a 0.2 µm syringe filter (Filtropur S ® , PES, Sarstedt, Nümbrecht, Germany). For the concentration procedure, Pierce™ Protein Concentrator, 3k MWCO, 5-20 mL tubes (88525, Thermo Scientific ® , Waltham, MA, USA) were used. Initially, the concentrators were sterilized following the manufacturer's instructions. Briefly, the upper compartment of each concentrator tube was filled with 70% ethanol (v/v) and centrifuged at 300× g for 10 min. At the end of the centrifugation, the ethanol was discarded, and the same procedure was carried out with DPBS. Each concentrator tube was subjected to two such centrifugation cycles, followed by a 10 min period in the laminar flow hood to complete drying. Finally, the upper compartment of the concentrator tubes was filled with plain CM (1 × concentration) and subjected to new centrifugation cycles, under the conditions described above, for the number of cycles necessary to obtain the desired CM concentration (5×). The concentrated CM was stored at −20 • C and subsequently subjected to a Multiplexing LASER Bead analysis (Eve Technologies, Calgary, AB, Canada) to identify a set of biomarkers present in the Equine Cytokine 8-Plex Assay (EQCYT-08-501). The list of searched biomarkers includes basic fibroblast growth factor (FGF-2), granulocyte colony-stimulating factor (G-CSF), granulocyte macrophage colony-stimulating factor (GM-CSF), monocyte chemoattractant protein-1 (MCP-1), interleukins (IL) IL-6, IL-8, IL-17A, and human growth-regulated oncogene/keratinocyte chemoattractant (GRO/KC). All samples were analyzed in duplicate.
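The 5× concentration step described above reduces the CM volume fivefold while retaining proteins above the 3 kDa cutoff. The volume bookkeeping is simple; a minimal sketch (the starting volume is illustrative):

```python
def target_final_volume(initial_volume_ml: float, factor: float) -> float:
    """Final retentate volume needed to concentrate CM by the given factor."""
    return initial_volume_ml / factor

initial_cm_ml = 20.0  # illustrative starting volume of plain (1x) CM
print(f"Concentrate to {target_final_volume(initial_cm_ml, 5):.1f} mL for 5x CM")  # 4.0 mL
```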
Immunohistochemistry
Early passages of eSM-MSCs (P0 and P3) were maintained in culture until a confluence of 70-80% was reached, and enzymatic detachment was then performed with 0.25% trypsin-EDTA solution. A cytoblock was prepared by fixing the cells with Sure Thin® (StatLab®, Columbia, MD, USA). Consecutive sections were cut at 2 µm, deparaffinized, hydrated, and submitted to immunohistochemical analysis using the Novolink™ Polymer Detection Systems kit (Leica Biosystems®, Vista, CA, USA), according to the manufacturer's instructions. Information regarding the primary antibodies and antigen retrieval methods used in this study is summarized in Table 3.
The antibodies were selected to confirm the pluripotent and mesenchymal origin of the eSM-MSCs (octamer-binding transcription factor 4 (OCT4), homeobox protein NANOG, and proto-oncogene receptor tyrosine kinase or stem cell factor receptor (c-kit)), their synovial origin (lysozyme), and their non-epithelial histogenesis (vimentin). Additionally, pan-cytokeratin (AE1 and AE3), synaptophysin, CD31, and glial fibrillary acidic protein (GFAP) were used to confirm the absence of epithelial, neuroendocrine, vascular, and neuronal origins of the cells, respectively. For each antibody, appropriate negative and positive controls were included, and all primary antibodies were incubated overnight.
The final step consisted of microscopic cell observation, evaluation, and photography using an Eclipse E600 microscope (Nikon®, Tokyo, Japan) and the NIS-Elements F Ver4.30.01 imaging software (Laboratory Imaging®, Prague, Czech Republic). A semi-quantitative score was used for immunoexpression evaluation, consisting of the percentage of labeled cells (<5%, 5-80%, and >80%) and labeling intensity (0, negative; +, weak; ++, moderate; and +++, strong). Immunoreactivity was considered positive when distinct nuclear and cytoplasmic staining was recognized in at least 5% of the cells.
eSM-MSC Solution Preparation
The eSM-MSC solution for local clinical application in the 16 equine patients was a combination of allogenic eSM-MSCs suspended in autologous serum. Prior to preparation of the final therapeutic combination, autologous serum was isolated from whole blood. Samples of 10 mL of whole blood were collected into dry blood collection tubes and, after clotting, were centrifuged at 2300 rpm for 10 min; the supernatant (serum) was collected and transferred to a 15 mL Falcon tube. The serum was then heat-inactivated in a water bath at 56 °C for 20 min, followed by cooling on ice. Finally, the serum was centrifuged, filtered using a 0.22 µm syringe filter, and stored at −20 °C until further use. Cryopreserved P3 eSM-MSC batches were thawed in a 37 °C water bath, and the content was transferred to a 10 mL tube with autologous serum and slowly diluted, followed by the addition of sterile DPBS until reaching 10 mL. The mixture was then centrifuged at 1600 rpm for 10 min. The supernatant was discarded, and the cell pellet was resuspended in autologous serum at a ratio of 0.8:1. Cell counting and viability were determined by the trypan blue exclusion dye assay (Invitrogen™, Waltham, MA, USA) using an automatic counter (Countess II FL Automated Cell Counter, Thermo Fisher Scientific®, Waltham, MA, USA). The cell number was adjusted to 5 × 10^6 cells/mL, and 2 mL of the solution of eSM-MSCs suspended in autologous serum was transferred to a perforable capped vial and preserved on ice until the time of administration.
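The dose adjustment above is viability-adjusted dilution arithmetic. A minimal sketch is given below; the function name and the example counts are ours and purely illustrative, not the study's records.

```python
def dose_preparation(counted_cells_per_ml: float,
                     viability_fraction: float,
                     target_cells_per_ml: float = 5e6,
                     dose_volume_ml: float = 2.0) -> dict:
    """Volumes needed to deliver a fixed viable-cell dose after thawing.

    counted_cells_per_ml: automated count of the thawed, washed suspension.
    viability_fraction:   trypan-blue viability (0-1).
    """
    viable_per_ml = counted_cells_per_ml * viability_fraction
    viable_needed = target_cells_per_ml * dose_volume_ml        # 1e7 viable cells
    suspension_needed_ml = viable_needed / viable_per_ml        # volume to pellet
    return {
        "viable_cells_needed": viable_needed,
        "suspension_volume_ml": round(suspension_needed_ml, 2),
        "resuspension_volume_ml": dose_volume_ml,               # pellet taken up in serum to 2 mL
    }

# Illustrative numbers, not the study's counts:
print(dose_preparation(counted_cells_per_ml=6.0e6, viability_fraction=0.92))
```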
Treatment Protocol
Twenty structures, tendons and ligaments, were treated with a mixture of allogenic eSM-MSCs and autologous serum. The same treatment protocol was used in every case. All equine patients were submitted to identification, anamnesis, physical examination (cardiac and respiratory frequency, body temperature, mucous membrane examination, inspection of the whole body, and palpation), orthopedic examination (evaluation of the limbs, gait inspection and movements (walk, trot and gallop), and flexion test of the main joints for 60 s followed by trot). Lameness was evaluated at a walk and a trot on hard surface and scored on a scale from 0 to 5, according to the AAEP parameters. Complementary diagnostic exams included regional nerve blocks (to identify the pain area), radiographs, and ultrasound image as reported in other studies [21,24,25,[27][28][29][30][31][32].
Following the assumptions of the exclusion criteria, the horses did not receive any treatment before or after the administration of the therapy protocol. In the case of adverse events, such as inflammatory/anaphylactic reactions or infections, the horses were to be immediately evaluated and treated with anti-inflammatories or antibiotics, in accordance with their clinical status. The equine patients were monitored in the 48 h after treatment and any occurrences were registered. Following the treatment, the equine patients were assessed periodically to follow each patient's healing evolution and to provide valid comparative data among equine patients within the same study group. Table 4 presents the lesion type casuistic. Selected horses were sedated with detomidine (0.02 mg/kg) and trichotomized; a regional nerve block was performed with 2% lidocaine (20 mg/mL, 2 mL per point), and the skin was surgically disinfected with chlorhexidine and alcohol. The therapeutic combination was aspirated into a 2 mL syringe and homogenized, ultrasound was used to identify the lesion site, and an ultrasound-guided injection was performed at the lesion over three different points. Finally, a bandage was applied to the limb. All equine patients were injected with phenylbutazone (2.2 mg/kg, IV, SID) at the end of the treatment. The established protocol included a second eSM-MSC administration 15 days after the first treatment, using the same protocol.
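Since the drug doses in this protocol are weight-based, a small helper makes the arithmetic explicit; a minimal sketch (the 500 kg body weight is illustrative, and only the two per-kg doses stated in the protocol are encoded):

```python
DOSES_MG_PER_KG = {
    "detomidine": 0.02,       # sedation, per protocol above
    "phenylbutazone": 2.2,    # anti-inflammatory, IV, SID, per protocol above
}

def dose_mg(drug: str, body_weight_kg: float) -> float:
    """Weight-based dose in mg for the protocol drugs above."""
    return DOSES_MG_PER_KG[drug] * body_weight_kg

for drug in DOSES_MG_PER_KG:
    print(f"{drug}: {dose_mg(drug, 500):.1f} mg for a 500 kg horse")
```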
Clinical Evaluation-Serial Evaluations
Tissue regeneration was estimated through lameness evaluation, the pain-to-pressure test, limb inflammation, sensitivity, and the ultrasound image (reduction of the hypoechoic area and fiber alignment). Lesion ultrasonographic evaluations were performed using a 7.5 MHz linear transducer probe (Sonoscape A5®, Shenzhen New Industries Biomedical Engineering Co., Ltd., Shenzhen, China). For each assessment, a complete examination of the structure was conducted by means of longitudinal and transverse scans. The obtained images were evaluated at each examination for two parameters: lesion echogenicity and lesion longitudinal fiber alignment (FA). The contralateral healthy limb was used for comparison. The evaluation was performed on the treatment day (Day 1) as well as on Days 15, 30, and 45 post-treatment, as presented in Figure 3. According to the classification proposed by Guest et al., this is a short-term study [33].
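One way to keep the serial evaluations comparable across horses is to record each examination as a structured entry. The sketch below is ours, not the study's software; the 0-3 grading of echogenicity and fiber alignment is an assumed convention modeled on common ultrasound scoring scales.

```python
from dataclasses import dataclass

@dataclass
class SerialEvaluation:
    """One follow-up examination (Day 1, 15, 30, or 45)."""
    day: int
    aaep_lameness: int          # 0-5, AAEP scale (Table 1)
    pain_to_pressure: int       # 0-3 (Table 1)
    echogenicity_grade: int     # assumed 0 (normal) - 3 (anechoic), vs contralateral limb
    fiber_alignment_grade: int  # assumed 0 (parallel fibers) - 3 (no alignment)

# Illustrative follow-up for one hypothetical lesion:
follow_up = [
    SerialEvaluation(day=1,  aaep_lameness=4, pain_to_pressure=2,
                     echogenicity_grade=3, fiber_alignment_grade=3),
    SerialEvaluation(day=30, aaep_lameness=0, pain_to_pressure=0,
                     echogenicity_grade=1, fiber_alignment_grade=1),
]
improved = follow_up[-1].echogenicity_grade < follow_up[0].echogenicity_grade
print("Echogenicity improved:", improved)
```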
The rehabilitation program consisted of an exercise-controlled program with stall confinement and a progressive increase in exercise time. Early mobilization included weight-bearing activities, strengthening, and flexibility, and stall rest alone was used as infrequently as possible, as presented in Table 5 [34][35][36][37][38]. Regular ultrasound evaluations were also performed.
Figure 3. Timeline of the eSM-MSC treatment protocol and rehabilitation program. The day before the first treatment (T0), blood from the equine patient was collected to prepare autologous serum. At T0, the mixture of autologous serum and eSM-MSCs was injected intralesionally after a clinical and ultrasound examination. After 15 days, the same procedure was repeated. At Day 30 (T2), a clinical and ultrasound examination was performed and, if a favorable outcome was considered, the horse progressed to a physical rehabilitation program, during which the equine patient was also re-evaluated at Days 60 and 90.
Table 5. Physical rehabilitation program. After eSM-MSC treatment, all horses were submitted to a rehabilitation program consisting of two days of box rest followed by 13 days of 10 min of hand walking. The bandage applied on treatment day was removed 24 h after treatment. At Day 15, the second treatment was performed, followed by another 15 days of rehabilitation, until Day 30. Between Days 30 and 45, the work consisted of 20 min of hand walking; between Days 45 and 60, 30 min of hand walking; between Days 60 and 75, 30 min of hand walking plus 5 min of trotting; and finally, between Days 75 and 90, 30 min of hand walking plus 10 min of trotting. After this, the horses could return to full work.
eSM-MSC Isolation
eSM-MSCs were successfully isolated from the donor's equine synovial membrane samples and expanded; the average total number of cells isolated was 1.2 × 10^5 and 5.6 × 10^5 at Days 6 and 11, respectively. Cells were observed radiating from the explants, and those identified in culture showed clear plastic adherence and a mostly fibroblast-like morphology, an essential feature for characterizing cells as MSCs (Figure 4a,b).
Adipogenic Differentiation-Oil Red O Staining
Adipogenic differentiation was confirmed by the presence of large, red-stained lipid vacuoles in the cytoplasm after exposure to oil red O staining.
Chondrogenic Differentiation-Alcian Blue Staining
Chondrogenic differentiation was confirmed by marked deposition of proteoglycans in the extracellular matrix, which stained blue, confirming the presence of chondrogenic aggregates.
Osteogenic Differentiation-Alizarin Red Staining
Osteogenic differentiation was demonstrated by the presence of extracellular calcium deposits stained red by the alizarin red solution, whose dye forms chelate complexes with calcium.
Karyotype Analysis
The cytogenetic analysis revealed 36% normal cells in P4 and 32% normal cells in P7. Tetraploidy was present in 4% of P4 cells and 8% of P7 cells. Aneuploidy represented 60% of the cells in both passages, with hypoploidy being the most representative (56%), as shown in Table 6 and Figure 6.
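The percentages above follow directly from the metaphase tallies; a minimal sketch of the bookkeeping (the counts are illustrative and merely chosen to reproduce the P4 percentages, since the actual number of metaphases scored is not given here):

```python
# Illustrative metaphase counts for one passage (chosen to match the P4 percentages).
counts = {"normal": 18, "tetraploid": 2, "aneuploid": 30}  # 50 metaphases scored
total = sum(counts.values())

for category, n in counts.items():
    print(f"{category}: {100 * n / total:.0f}%")  # normal 36%, tetraploid 4%, aneuploid 60%
```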
Immunohistochemistry
The eSM-MSCs showed strong expression of OCT4/NANOG, vimentin, and lysozyme, confirming their stem, non-epithelial, and synovial cell nature, respectively; weak expression of GFAP; and no expression of CD31, synaptophysin, or pan-cytokeratin (Figure 8), confirming the absence of vascular, neuronal, and epithelial origins. Except for GFAP, for which a smaller number of cells exhibited weaker cytoplasmic immunolabeling in P3 than in P0, immunoexpression of all the antibodies was preserved between passages P0 and P3. The combination of positive and negative expression of these different markers confirmed the expected mesenchymal origin of the cells. Figure 8 presents the immunolabeling of the eSM-MSCs.
Treatment Results
No horse had any adverse event requiring study cessation, unplanned procedures, or additional treatments. None of the intra-tendinous injections or follow-up procedures produced adverse reactions (inflammation, infection, deterioration of the lesion, or increased lameness), in line with the findings of Godwin et al. (2012) [39]. No horse had abnormalities identified in the weeks following the injection.
Tendon/ligament regeneration occurred within 30 days in 80% of the cases and between 30 and 90 days in 20% of the cases. In this study, eight horses had a lesion on the right forelimb, six on the left forelimb, and two on the right hindlimb. There were 14 acute cases and two chronic cases; the chronic cases had been diagnosed 6 months before our approach.
After Day 90, having completed the proposed physical rehabilitation program, the horses started cantering and began returning to their usual work plan. By Day 120 after the first treatment, 87.5% of the horses were back to full work; the remaining 12.5% needed another 30 days to return to full work.
All horses returned to the same level of sport activity they had before injury. Tables 2 and 7 summarize the recovery progress, with the respective ultrasound images in Figures 9 and 10. At Day 30, the group that fully recovered demonstrated both a filled-in ultrasound cross-sectional area and good fiber alignment, with no evidence of pain or lameness. Transversal and longitudinal ultrasound images of four cases on Day 1 and Day 30 are presented below. After the eSM-MSC treatment, all horses were submitted to a rehabilitation program, as explained in Table 5.
Radiographic exams were performed to rule out other associated pathologies, and regional nerve blocks were performed to better localize the injured region causing the pain.
Ultrasound images at Day 1 and at Day 30 clearly illustrate the evolution of tendon regeneration. Changes in echogenicity, fiber alignment, and cross-sectional area are evident, as seen in Figure 10.
Table 7. Longitudinal fiber alignment and cross-sectional area echogenicity loss [27].
Discussion
Recently, eSM-MSCs have become an interesting subject for those studying cellular and cell-based therapies, due to their promising ability to promote tissue regeneration, with a high capacity for regeneration of articular structures, tendons, and ligaments. With the collection, isolation, expansion, freezing, and thawing protocols used in this clinical trial, it was possible to use these cells in equine tendon regenerative treatments. The full characterization of eSM-MSCs presents a significant challenge, since eSM-MSCs are not as well studied as MSCs from other species, namely human MSCs. However, in this study, their stemness and origin were confirmed through different processes: tri-lineage differentiation, karyotype, secretome, and immunohistochemistry. All the eSM-MSC cultures presented monolayer growth, plastic adherence, and a fibroblast-like shape [40][41][42][43], fulfilling some of the minimal criteria defined by the ISCT. Successful osteogenic, chondrogenic, and adipogenic differentiation was also demonstrated. De Bari et al. [9] were the first group of researchers to isolate MSCs from synovial tissues.
The karyotype presented some genomic variations as the number of passages increased, consistent with studies on genomic variation along cell passages [44][45][46][47][48]. DNA replication is a critical event for timely genome duplication, and errors in replication lead to genomic instability across evolution [49]. Prieto Gonzalez et al. considered that genomic instability incurred during stem cell isolation, culture expansion, and reprogramming might be the most critical point for a stem cell-based therapeutic approach to remain a viable option from the clinical perspective [50]. Peterson et al. highlighted that there is very little evidence linking genomic abnormalities, for example in human pluripotent stem cells (hPSCs), with tumorigenesis [44]. The frequency of detected variations has increased with the development of ever more sensitive methods for identifying genomic variation [45].
As reported by Neri, the interpretation of genetic instability and senescence of cultured MSCs is controversial, but the increasing incidence of genetic alterations at advanced culture times clearly indicates that fewer culture passages correspond to a reduced chance of harboring dangerous alterations. Therefore, prudent behavior is desirable, with culture times reduced as much as possible to avoid safety concerns [51]. More studies must be performed in this area.
During the last decade, it has been shown that the therapeutic effectiveness of MSCs is due mainly to the release of paracrine factors, collected as CM, composed of soluble factors (cytokines, chemokines, and growth factors) and nonsoluble factors (extracellular vesicles) that are primarily secreted into the extracellular space by stem cells [52]. The paracrine signaling of the CM can be considered the primary mechanism by which MSCs contribute to the healing process, and its study has therefore become an interesting subject [53,54].
In our study, eSM-MSCs revealed a CM with high levels of KC/GRO, MCP-1, IL-6, FGF-2, G-CSF, GM-CSF, and IL-8. This highlights the intense activity of fibroblasts producing KC/GRO, which is chemotactic for neutrophils during inflammation. MCP-1 is essential for reperfusion and the successful repair of musculoskeletal tissue after an ischemic injury [55]. Macrophages are tissue-resident cells involved in tissue regeneration along with inflammatory and infection responses [56]. IL-6 is a proinflammatory and angiogenic interleukin capable of increasing the expression of growth factors; reactivating, for example, intrinsic growth programs of neurons; promoting axonal regrowth; and creating a link between inflammation and tissue regeneration [57,58]. FGF-2 is a recognized growth factor responsible for the proliferation of tenogenic stem cells. FGF-2 signaling has been reported to produce a tendon progenitor population expressing scleraxis during somite development [59]. FGF-2 plays a crucial role in cell proliferation and collagen production, making it a useful growth factor for tissue regeneration by promoting stem cell proliferation [60]. G-CSF is a cytokine that mobilizes bone marrow-derived cells (BM-DCs) to peripheral blood. One study suggested that injection of G-CSF to promote BM-DC release in the target area, i.e., the rotator cuff, effectively enhanced rotator cuff healing by promoting tenocyte and cartilage matrix production [61]. Wright et al. presented a study confirming that skeletal muscle damage, including damage following strenuous exercise, induces an elevation in plasma G-CSF, implicating it as a potential mediator of skeletal muscle repair [62]. Recent human trials have shown the benefits of G-CSF administration as a treatment for neuromuscular diseases, considering that G-CSF affects skeletal muscle and leads to functional improvements [63][64][65][66][67][68]. GM-CSF is a hematopoietic growth factor with proinflammatory functions [69]. Major sources of GM-CSF are T and B cells, monocyte/macrophage and endothelial cells, and fibroblasts. Neutrophils, eosinophils, epithelial cells, mesothelial cells, Paneth cells, chondrocytes, and tumor cells can also produce GM-CSF [70]. Paredes et al. showed that elevated levels of proinflammatory factors, such as those found in the CM of these cells (GM-CSF, G-CSF, IL-6, IL-8, and IL-17), are implicated in the activation of resident tendon cells for effective healing, stimulating tendon cell proliferation [71,72]. IL-8 is one of the major mediators of the inflammatory response and is a potent angiogenic factor, similar to IL-6 but with a longer half-life [73].
A recent study highlighted that a hematopoietic factor promoted tendon healing in aged mouse tendons; histochemical results demonstrated that vascularization of the injury site was significantly elevated. It was concluded that vascular endothelial growth factor (VEGF) plays an important role in decreasing adipocyte accumulation and improving vascularization of the tendon during aged tendon healing. Active regulation of VEGF may improve the treatment of age-related tendon diseases and tendon injuries [74].
Studies with human BM-MSCs using a human-specific proteome profiler array identified angiogenic factors such as VEGF-A, IL-6, IL-8, platelet-derived growth factor A (PDGF-A), endothelin-1 (ET1), and urokinase plasminogen activator (uPA), which had not previously been reported in the CM of human MSCs; these were also identified in an equine array, confirming what we found in this study [75]. This factor has been proposed as a modulator of the different neovascularization stages through enhancement of VEGF gene promoter activity [75,76]. Schokry et al. [77] reported that BM-MSC therapies have recovery times of 3-6 months, whereas conservative therapeutic methods allow recovery in 12-18 months without regeneration but with formation of fibrous scar tissue. Retrospectively, no tendon re-injuries occurred in horses treated with this new approach during the study time frame. In the literature [78], Smith et al. referred to a low re-injury rate of 27% for SDFT tendonitis treated with bone marrow stem cells. Horses returned to "full function" as defined by Cook et al. and modified by Guest et al. [33,79].
A study using a murine osteoarthritis (OA) model demonstrated that injection of MSC CM, similarly to injection of MSCs, resulted in early pain reduction and had a protective effect against the development of cartilage damage, exploiting the regenerative capacity of MSC-secreted factors [80].
Interestingly, the results accumulated so far provide evidence that veterinary patients affected by naturally occurring diseases may yield more reliable outcomes of cell therapy than laboratory animals, thus allowing translation of potential therapies to the human field. More recently, a cell-free therapy based on MSC CM has been proposed. Even though there are very few clinical reports in veterinary medicine, recent findings suggest that MSC-derived products may have major advantages compared with the cells themselves; for example, they are considered safer and less immunogenic [52]. As evidenced above, eSM-MSC CM factors are able to promote tendon healing by reducing inflammation and fatty infiltration and by stimulating cell proliferation and tenogenic differentiation [81].
In this study, we used a cell-based therapy rather than the CM itself, but we were aware of the effect and potential of the CM within cell-based therapies; its advantages and therapeutic effects were the reason this study was performed.
To better characterize the cells under study, we performed immunohistochemistry assays. The choice of markers was based on previous work [8] and included several of the criteria used for humans, as determined by the ISCT. The results of our study demonstrated the presence of the embryonic stem cell markers OCT4 and NANOG. Detection of these markers was previously described by Beltrami et al. in multipotent adult stem cells (HMASC) from human bone marrow [82], as well as by Riekstina et al., who also demonstrated the presence of these markers in HMASC derived from bone marrow, adipose tissue, heart, and dermis [83]. Greco et al. also evidenced elevated expression of OCT4 in P3 MSCs and hypothesized that OCT4 expression could be an indicator of MSC differentiation potential in clinical diagnostics [84]. In their characterization of equine synovial fluid- and membrane-derived MSCs, Prado et al. also evidenced the presence of the NANOG and OCT4 markers [19]. In contrast, Fulber et al. had no positive results for these two markers in equine mesenchymal stem cells from synovial tissues [43]. Vimentin, a mesenchymal marker, was also detected, supporting the mesenchymal origin of the cells. The presence of lysozyme confirmed the synovial origin of the cells, as stated by Fulber et al. [43].
The immunohistochemistry analysis showed the absence of CD31, synaptophysin, and pan-cytokeratin expression, confirming no vascular, neuronal, or epithelial origins of the cells. GFAP was weakly expressed, being less expressed in P3 than in P0 cells. CD31 was used to investigate the presence of hematopoietic cells in eSM-MSCs. The expression of VEGF was not found, these results being similar to those of Fulber et al. and other authors who evidenced the absence of hematopoietic markers [43,85]. The absence of neuronal and dermal markers was also consistent with other studies [19,43].
In our clinical trial, we treated mainly early acute lesions; 87.5% of the cases were acute lesions of tendons or ligaments. Therefore, we created a master cell bank of allogenic eSM-MSCs suitable for treatment in the early acute phase, in contrast to treatment with autologous cells, where the time for tissue collection, preparation, and cell culture must be considered. Furthermore, cell harvesting for autologous treatment is an invasive procedure, which is unnecessary with this new eSM-MSC solution. The availability of a master cell bank enables faster healing of the organ and a quicker return to sport life: horses spend less time in recovery and end up with regenerated tissue instead of fibrotic tissue. These are some of the advantages of the eSM-MSC solution. Another consideration is that the early stage of the lesion involves an inflammatory phase; however, the paracrine factors released by eSM-MSCs also have anti-inflammatory action, reducing inflammation.
Chronic cases represented 12.5% of the cases, involving four structures. Three of the horses recovered in 30 days and one of the horses had a delayed recovery time.
The delayed recovery time in 20% of the structures, corresponding to 12.5% of the horses, was due, in Case 6, to an increased number of involved structures (more than one tendon or ligament) and a foot conformation abnormality: the horse had a fetlock hyperextension that was impairing correct tendon healing. This was corrected with special shoeing. An inappropriate rehabilitation program (Case 7) was another cause of delayed recovery. As soon as the corrective shoeing was performed, ligament regeneration started.
We could also conclude that lameness grade was not directly correlated with lesion cross-sectional area. Horses with ultrasonographic cross-sectional grade 1, 2, and 3 lesions presented lameness grade 4/5, observed in 9 of 16 patients. Lameness grade 3/5 was present in 4 of 16 equine patients with ultrasonographic cross-sectional grade 1 and 2 lesions. Lameness grade 2/5 was present in 3 of 16 equine patients with ultrasonographic cross-sectional grade 1 lesions. Kamm et al. (2021) concluded that, based on the evidence to date, tendons appear to heal better when treated with allogeneic MSCs, and the use of these treatments in equine tendon and ligament lesions is warranted [86]. Colbath et al. (2020) claimed that some of the advantages of using allogenic stem cells include the ability to bank cells, reduce treatment time, collect MSCs from younger donor animals, and manipulate banked cells prior to administration [87]. Some of the disadvantages center on the risk of immunological reactions. However, several studies in horses are currently accumulating evidence that allogeneic MSCs may be a safe alternative to autologous MSCs [87]. Nevertheless, the donor's health must always be taken into consideration, as well as the donor's age [88].
Conclusions
To sum up, this study meets the criteria for reporting veterinary and animal medicine research on MSCs in orthopedic applications [33] and the ISCT perspective on immune assays for MSC criteria for advanced-phase clinical trials [89], confirmed by plastic adherence, tri-lineage differentiation, synovial membrane origin, spindle-shaped cells, and proliferative and immunomodulatory capacity proven by immunohistochemistry and CM.
From a clinical point of view, the idea of having an allogenic eSM-MSC cell bank is very interesting. The possibility of having a universal donor who can provide a large number of eSM-MSCs, allowing non-immunogenic cells to be cultured and preserved with immediate availability, and enabling a quick and effective therapeutic response in acute stages of musculoskeletal lesions, is a paramount goal of orthopedic medicine.
From a "one-health" perspective, equines play an important role as a model for human musculoskeletal disorders; the high-level analogy between human and equine structures may have great translational value for both species in future clinical applications [28,90]. There are significant resemblances between the equine SDFT and the human Achilles tendon with respect to the size of the anatomical structure, load, function (energy storage), pathophysiology of tendon injury, and the healing response after activity or traumatic rupture, compared with other species [90]. Moreover, considering that tendinopathy in equine species reflects the conditions encountered in humans, the horse is accepted as an appropriate model in this area by the research community and by authorities such as the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA).
Based on the clinical, ultrasonographic, and performance outcomes identified in the present study, the use of eSM-MSCs together with an autologous serum solution has proven its efficacy for tendon and ligament repair and contributes to reducing the recovery period, allowing a subsequent rapid return to athletic activity. The therapy was demonstrated to be safe and had no adverse findings. The clinical results and athletic outcomes of the horses were very positive. Comparing our study with others, using for example BM-MSCs, our new approach seems to yield shorter recovery times and fewer re-injuries [39,77]. These results encourage the use of eSM-MSCs and autologous serum for the treatment of tendonitis and desmitis, since they can regenerate tendon and ligament tissue and restore organ function, enhancing the return to competition in excellent time frames.
Informed Consent Statement: Not applicable.
Data Availability Statement:
The data that support the findings of this study are available from the corresponding author on request.
Acknowledgments:
The authors greatly appreciate the animals' proprietaries and caretakers for accepting to participate in the present study.
Conflicts of Interest:
The authors declare no conflict of interest.
Pathogenic Genome Signatures That Damage Motor Neurons in Amyotrophic Lateral Sclerosis
Amyotrophic lateral sclerosis (ALS) is the most frequent motor neuron disease and a neurodegenerative disorder, affecting the upper and/or lower motor neurons. Notably, it invariably leads to death within a few years of onset. Although most ALS cases are sporadic, familial amyotrophic lateral sclerosis (fALS) forms 10% of the cases. In 1993, the first causative gene (SOD1) of fALS was identified. With rapid advances in genetics, over fifty potentially causative or disease-modifying genes have been found in ALS so far. Accordingly, routine diagnostic tests should encompass the oldest and most frequently mutated ALS genes as well as several new important genetic variants in ALS. Herein, we discuss current literatures on the four newly identified ALS-associated genes (CYLD, S1R, GLT8D1, and KIF5A) and the previously well-known ALS genes including SOD1, TARDBP, FUS, and C9orf72. Moreover, we review the pathogenic implications and disease mechanisms of these genes. Elucidation of the cellular and molecular functions of the mutated genes will bring substantial insights for the development of therapeutic approaches to treat ALS.
Introduction
Amyotrophic lateral sclerosis (ALS) is a progressive neurodegenerative disease characterized by both upper and lower motor neuron degeneration, paralysis, and ultimately limiting survival from two to five years after onset [1]. Disease onset typically occurs in late middle-life with the mean age being 65 years. It results in relentless progressive muscle atrophy and weakness, ending with respiratory failure [2,3]. In addition, this neurodegenerative disorder has an estimated worldwide mortality rate of 30,000 patients per year [4]. ALS cases are estimated to occur in 2-3 per 100,000 individuals in Europe, and less than one in Asia [5], thus categorizing it as a rare disease. Furthermore, up to 10% of ALS-affected individuals have an affected family member or members with familial ALS (fALS), in which most have inherited the disease in an autosomal dominant manner [6]. The remaining ALS patients, with no clear genetic linkage, are called sporadic ALS (sALS) [7]. At present, mutations in over 50 genes have been shown to contribute to the ALS pathogenesis [8,9]. Some of them such as SOD1, C9orf72, FUS, and TARDBP were shown to present deleterious mutations, while other variants mostly found by association studies rarely occur in the less frequent genes [8][9][10]. Several studies have identified oxidative stress, glutamate excitotoxicity, apoptosis, neurofilament dysfunction, protein misfolding and aggregation, impairment of RNA processing, disrupted axonal transport, endosomal trafficking Timeline of ALS gene discoveries and researches for SOD1, TARDBP, FUS, and C9orf72. The Y-axis shows the number of publications in PubMed that include the terms "ALS" and "the gene name" by year until November 2020. Several novel ALS-associated genes have been proposed by the researchers over the past two years. Farhan et al. identified DNAJC7 as a novel gene for ALS that encodes a member of the heat-shock protein family, HSP70, and has a key role in protein function such as protein folding and stabilization. Alteration of HSP70 and DNAJC7 gene expressions causes protein aggregation in ALS model [17]. Another gene, WDR7, was proposed by Course et al., in which the human-specific 69 bp variable number tandem repeat in the last intron of this gene may be associated with ALS. WDR7 repeat expansions may act similar to the specific range of CAG repeat expansion numbers at ATXN2, which are enriched in ALS cases. It was shown that WDR7 repeat expansion could form microRNAs, RNA aggregates, and lead to RNA editing [18]. Further study has shown that ATXN1 overexpression disturbs TDP-43 nucleocytoplasmic transport, which leads to a decrease in the nucleocytoplasmic ratio of TDP-43. Hence, mislocalization and aggregation of TDP-43 can be considered the hallmark of ALS [19]. Finally, ACSL5, a neurotoxic A1 astrocyte-related gene, is upregulated in ALS cases. ACSL5 can induce A1 astrocytes, leading to motor neuron death and ALS progression. Overexpression of ACSL5, similar to the previously discovered gene GPX3, is associated with rapid weight loss in humans [8,20]. Although the mentioned genes above have been identified as ALS associated genes, the study on their contribution to ALS pathogenesis is still limited.
Dobson-Stone et al. identified a novel missense variant in the CYLD gene as the genetic cause of ALS in a large European Australian family [16]. Previously, they had found a disease locus on chromosome 16 by genome-wide linkage analysis [56]. The missense variant in CYLD leads to alteration of CYLD immunoreactivity in brain tissue [16] and shows two ALS-associated pathological phenotypes: an elevation of the cytoplasmic TDP-43 level [57] and an impairment of autophagy function [58]. Overexpression of CYLD inhibited transport of TDP-43 from the cytoplasm into the nucleus. In the nucleus, TDP-43 controls the expression of ATG7, which mediates the fusion of lysosomes with autophagosomes; decreased expression of ATG7 resulted in a loss of autolysosome formation [57][58][59]. Notably, overexpression of CYLD leads to failure of autophagosome-lysosome fusion, causing malfunction of autophagy [16,60] (Figure 2A).
Sigma-1 receptor (S1R), another potential therapeutic target gene in ALS discovered by Couly et al., regulates mitochondrial respiration and controls cellular defense against endoplasmic reticulum (ER) and oxidative stress [15]. S1R is mainly located in a special compartment of the ER called the mitochondria-associated membrane (MAM) and has a role in ATP production [61]. S1R also protects against TDP-43-induced toxicity by rescuing ATP production. ATP binds to the N-terminal domain of TDP-43 to enhance its oligomerization and prevents the aggregation of TDP-43 into its toxic form [62]. Mutant S1R (mS1R) leads to ATP depletion and perturbs mitochondrial dynamics and respiration [15]. Therefore, overexpression of mS1R is neurotoxic, leading to mitochondrial dysfunction, which highlights the role of S1R in ALS therapy [15] (Figure 2B). A recent study demonstrated that mS1R leads to alterations in Drosophila photoreceptor organization and spontaneous walking behavior [15]. Moreover, motor performance in S1R knockout mice diminished, including muscle weakness, axonal degeneration, and loss of motor neurons [63]. Finally, the protective role of S1R has been demonstrated in the ALS (G93A) mouse model with S1R knockout, by behavioral and longevity experiments [64].
Cooper-Knock et al. proposed variants in the GLT8D1 gene as causal in ALS [14,65]. GLT8D1, expressed within the Golgi, is a member of glycosyltransferase family 8, involved in catalyzing the transfer of glycosyl groups. Gangliosides are synthesized in the ER and are remodeled to maturity from the cis-Golgi to the trans-Golgi network via glycosylation by GLT8D1 [66]. The mature gangliosides, which are moved to the cell membrane, are involved in cell signaling [67] and produce neuroinflammation in motor neurons. ALS-associated mutations in GLT8D1 prevent the normal activity of the glycosyltransferase enzyme and negatively impact ganglioside signaling. The overexpression of mutant GLT8D1 increases ganglioside signaling, which leads to the transit of mature gangliosides to the cell membrane, where they disrupt cell signaling. In contrast, knockdown of GLT8D1 impairs its glycosyltransferase activity in the Golgi and diminishes ganglioside signaling. A previous study demonstrated that both knockdown and overexpression of mutant GLT8D1 induce motor neuron dysfunction and produce cytotoxicity in zebrafish, consistent with ALS. In previous studies, genetic variants in this gene were associated with a significant increase in disease severity and cytotoxicity in ALS patients [14,68] (Figure 2C).
Thus far, several studies have described the clinical evolution and the genetic findings of the KIF5A gene in sALS and fALS [12,13,69,70]. A genome-wide association study comparing 20,806 ALS cases and 59,804 controls identified KIF5A as a novel gene associated with ALS [13]. Independently, a rare-variant analysis conducted on 426 patients with fALS and 6137 control subjects identified enriched KIF5A splice-site variants in the cases. The ALS-associated genetic variants are located in the C-terminal cargo-binding tail domain of the KIF5A gene, which is also expressed in neurons. Considering the contribution of axonal transport deficits to the pathogenesis of motor neuron degeneration [71][72][73], variants in KIF5A disrupt axonal transport and cause amyloid precursor protein (APP) depletion in the synapse, leading to neurodegeneration. Therefore, a lack of KIF5A expression, which transports cargo by binding to distinct adaptor proteins, has been associated with the accumulation of phosphorylated neurofilaments and APP in the neuronal cell, which leads to cytoskeletal defects [13,74]. Moreover, previous studies confirmed the involvement of intracellular transport processes and strengthened the role of cytoskeletal defects of mutated KIF5A in ALS pathogenesis (Figure 2D). The first genotype-phenotype relationship showed that ALS patients with KIF5A loss-of-function mutations presented disease onset at an earlier age and longer survival [13]. The second was proposed by Brenner et al., in which adult onset, rapid progression, and early death were shown in patients with KIF5A splice-site mutations [12].
Figure 3 (caption, partial): ... function, causes synaptic defects and dysfunctions. (D) C9orf72: (1) Nuclear RNA foci are generated by the aggregation of repeat-containing C9orf72 RNAs in the nucleus and cause neurotoxicity. (2) Sequestration of RanGAP by G4C2 RNA disrupts nucleocytoplasmic transport; the loss of nuclear Ran depletes nuclear TDP-43 levels and elevates cytoplasmic TDP-43 levels. (3) Dipeptide repeats (DPRs) imported into the nucleus associate with nucleolar proteins and cause nucleolar stress. (Inset) The loss of C9orf72 function in endosomal trafficking regulation, through interactions with nuclear pore complex proteins, eventually increases cytoplasmic TDP-43 inclusions. Furthermore, reduction of C9orf72 expression inhibits Shiga toxin transportation from the plasma membrane to the Golgi apparatus and alters the ratio of LC3, an autophagosome marker, leading to autophagy dysregulation.
SOD1
The SOD1 gene (encoding superoxide dismutase 1 (Cu/Zn)), which maps to chromosome 21q22.1, was the first gene identified in fALS [75]. According to a recent genome-wide meta-analysis, approximately 15-30% of fALS and less than 2% of sALS cases carry pathogenic variants of SOD1 [24]. Currently, 180 genetic variants have been discovered to affect the functional domains of the SOD1 gene, including D90A, which is the most frequent missense variant (Figure 4). Recently, the SOD1 homozygous truncating variant, c.335dupG, with total absence of SOD1 activity, was identified in ALS-affected patients [76]. Depending on the genetic variant, different molecular morphological changes can result, and patients with SOD1-related ALS who harbor particular variants have distinct clinical features [10,77]. For example, patients with the A4V, H43R, L84V, G85R, N86S, and G93A variants show rapid disease progression and shorter survival times, while patients carrying the G93C, D90A, or H46R variants show a longer life expectancy [78]. SOD1 is an antioxidant homodimeric protein of 153 amino acids, containing one copper and one zinc atom [22,80]. It can be localized from the nucleus to the cytosol or the mitochondrial intermembrane space. The function of SOD1 is to protect cells from reactive oxygen species toxicity. Copper and zinc play specific roles in SOD1 activity and structural stability, respectively, and are directly involved in the deactivation of toxic superoxide radicals [22,81,82]. Previous studies have supported that SOD1-ALS is caused by a gain of function that increases the production of free radicals [83]. Mutant SOD1 modifies the oxidative activity, which causes accumulation of toxic hydroxyl radicals [77,84]. Accumulation of free radicals in the intermembrane space of the mitochondria leads to mitochondrial damage and disrupted protein folding, significantly affecting the distal axons of motor neurons [82,85,86]. The ER is a cellular compartment containing chaperone proteins that assist protein folding. Mutant SOD1 activates ER stress, which leads to activation of the unfolded protein response (UPR) and ER-associated degradation (ERAD), resulting in the refolding of misfolded proteins and the export of misfolded proteins from the ER to the ubiquitin proteasome system (UPS) for degradation, respectively. Prolonged ER stress can have pro-apoptotic consequences [87]. On the other hand, mutations in SOD1 impair axonal transport. Consequently, misfolded SOD1 cannot be transported across the mitochondrial membranes and accumulates in the outer mitochondrial membrane, triggering the mitochondria-dependent cell apoptosis program [88] (Figure 3A).
Pansarasa et al. identified a discrepancy between protein expression and mRNA levels of wild-type SOD1 in sALS patients, and then tested their hypothesis that the missing SOD1 is translocated and re-localized to the nucleus. Furthermore, they found that higher amounts of soluble SOD1 in the nucleus are positively correlated with a longer duration of disease, indicating a possible protective role of SOD1 [89]. Therefore, SOD1 can be considered a good target for ALS therapy. In this perspective, riluzole, a sodium channel blocker and glutamate release inhibitor, has been applied to improve ALS symptoms and is approved by the FDA for treating ALS [90]. shRNA, miRNA, and RNAi approaches have been evaluated for mediating SOD1 silencing in transgenic mice and are under investigation for ALS treatment [91]. Moreover, SOD1-ALS patients show some features and clinical characteristics that differ slightly from those of other ALS patients, such as an earlier age of onset, a longer duration of disease, and motor symptoms that begin more often in the lower limbs [92,93].
TARDBP
In the 1990s, Leigh et al. reported that neuronal cytoplasmic ubiquitinated inclusions were found in spinal cord samples from ALS patients [94]. In 2006, TAR DNA-binding protein 43 (TDP-43) was discovered as the main constituent of the protein aggregation in sALS cases [57,95]. Later, in 2008, several studies identified genetic variants in the TARDBP gene and the deviation of TDP-43 as a primary cause of ALS and neurodegeneration [26,96-99]. To date, over 40 variants have been identified in various ethnic groups, in around 5% of fALS and up to 2% of sALS cases [100]. The majority are missense variants located in the glycine-rich region at the carboxy-terminal of the transcript, which interacts with other heterogeneous ribonucleoproteins (Figure 5). The carboxy-terminal region is also involved in pre-mRNA splicing regulation [57,101]. Further studies on both fALS and sALS patients have shown the existence of TDP-43 in cytoplasmic aggregates of those without pathogenic variants in the TARDBP gene and carrying C9orf72 hexanucleotide repeat expansions [102-104]. TARDBP/TDP-43 is a DNA/RNA-binding protein composed of 414 amino acids encoded by the TARDBP/TDP43 gene. TDP-43 belongs to the ribonucleoprotein family and has various functions, such as gene transcription, microRNA processing, RNA splicing and stabilization, and mRNA transport [105]. TDP-43 has both nuclear localization and export signals and continuously shuttles between the nucleus and cytoplasm [106,107]. Any interference in the normal intracellular trafficking of TDP-43 between the cytoplasm and nucleus can result in cytoplasmic aggregation and the loss of nuclear TDP-43 function in regulating transcription, splicing, and mRNA stability [108-110]. The challenging question about the role of TDP-43 in ALS is whether a toxic gain of function of cytoplasmic aggregates or a loss of its normal function in the nucleus is responsible for the disease. During cellular stress, TDP-43 plays a significant role in controlling mRNA stability, translation, and nucleocytoplasmic transport [111]. In the case of ALS, while the loss of nuclear TDP-43 function leads to dysmorphic nuclear shape, deregulation of the cell cycle, and apoptosis [112], overexpression of TDP-43 leads to abnormal mRNA accumulation in the nucleus, cytoplasmic accumulation [113,114], and a loss of normal function in the nucleus. Furthermore, upregulated cytoplasmic TDP-43 forms inclusion bodies with the capacity to propagate among cells as a "prion-like" protein, a significant contributor to the neurodegeneration observed in motor neurons [115]. Several studies using cultured cells, animal models, and patient autopsies have demonstrated that cytoplasmic TDP-43 aggregates play an important role in motor neuron death and the neurodegeneration observed in ALS patients [115,116]. Melamed et al. proposed that ALS is associated with loss of nuclear TDP-43 [117]. Their results confirm that reduction of nuclear TDP-43 inhibits regeneration of motor axons, as a consequence of the reduction in stathmin-2 (STMN2), an essential protein for axonal growth and maintenance (Figure 3B) [117,118].
The discovery of TARDBP/TDP-43 and FUS, two RNA-binding proteins, highlights the importance of RNA processing in ALS pathogenesis. Previous studies have focused on understanding how exactly these mutant proteins disrupt RNA transcription and modification [119]. As expected, upregulation of TARDBP/TDP-43 in motor neurons increased ALS risk by altering RNA splicing and stability [120-122]. On the other hand, a lack of TARDBP/TDP-43 in the forebrains of mice resulted in age-dependent brain atrophy, with downregulation of the protein Tbc1d1 in skeletal muscles, leading to compromised neuronal function [121,122]. In addition, Tsao et al. developed a Tardbp knockout mouse model and showed a decrease in TARDBP/TDP-43 levels as well as a loss of body weight caused by an increase in fat oxidation and acceleration of fat loss in adipocytes [121]. Another study in Tardbp knock-in ALS mice indicated that mutant TDP-43 causes early-stage, dose-dependent motor neuron degeneration [123]. A recent study demonstrated that an ALS patient-derived TARDBP/TDP-43 mutation in the carboxy-terminal domain (M337V) causes splicing deregulation without motor neuron degeneration in mice [124].
FUS
The next discovered gene, which may rival the impact of TDP-43 on ALS research, is the FUS gene (also known as TLS), which maps to the ALS6 locus on chromosome 16p11.2. Variants in the FUS gene are known to be causal for fALS [41,125]. The disease onset of ALS patients with ALS6 variants spans a wide range of ages (from 26 to 80 years old), with a mean disease duration of around 33 months [80]. Over 70 variants in FUS have been identified in ALS patients, some of which have been proved to be causal (Figure 6). ALS patients with FUS variants have a shorter life span, although extensive intrafamilial variability has been observed [126]. Waibel et al. reported two truncating FUS variants associated with consistently early onset and an aggressive disease course [127]. FUS variants have been reported to account for 4% of fALS and 1% of sALS cases overall [128]. The most recently described FUS variants were reported as the most frequent cause of early-onset ALS (at ages less than 35 years) in German fALS patients, with a frequency of 8.7% [127,129]. FUS is an RNA-binding protein encoded by a ubiquitously expressed gene and composed of 526 amino acids. FUS, EWSR1, and TAF15 belong to the FET family, which is involved in transcription and alternative splicing through interactions with the transcription pre-initiation complex and various splicing factors. Under normal physiological conditions, FUS is mostly localized in the nucleus, but it shuttles to the cytoplasm and functions in nucleocytoplasmic transport [130]. Mutant FUS (mFUS) disrupts nucleocytoplasmic transport, leads to depletion of FUS in the nucleus, aggregates in the cytoplasm, and causes neurotoxicity [131]. Mutant FUS binds to mature mRNAs in the cytoplasm, unlike WT FUS, which binds to precursor mRNAs. This abnormal binding of mFUS affects mRNA expression, although the expression changes are modest. Mutant FUS suppresses not only global but also local protein translation, impairing the dendrites and axon terminals. Considering that FUS is part of RNA transport granules and has a role in activated synapses [132,133], defects in synaptic homeostasis and dysfunction are expected in cells carrying FUS mutations [134,135]. Depletion of the proteins required for synaptic maintenance and function may lead to an ALS phenotype [136]. Furthermore, the gene binding profile is altered in mFUS, which leads to neurotoxicity and mitochondrial size reduction due to disrupted translation of transcripts associated with mitochondrial function [137] (Figure 3C).
In a study from 2015, homozygous FUS knockout mice survived into adulthood but had phenotypes related to neuropsychiatric and neurodegenerative conditions different from ALS [138]. In a further study, a model in which FUS was conditionally removed from the motor neurons of mice showed no significant effect on motor neuron survival or function, suggesting that a loss of FUS is not a sufficient cause of ALS [139]. However, Sasayama et al. added another layer of results when they used Drosophila FUS knockdown models [140]. They showed that decreasing the expression of the Drosophila ortholog of FUS plays an important role in the degeneration of motor neurons and locomotive disability in the absence of abnormal cytoplasmic aggregates. This suggests that the pathogenic mechanism of FUS-ALS can be considered a loss of physiological FUS function in the nucleus rather than cytoplasmic FUS aggregate toxicity [140]. Moreover, mFUS transgenic rats developed progressive paralysis due to a loss of neurons in the cortex and hippocampus [141]. Later, Chen et al. showed age-dependent, progressive neuronal damage when WT, R524S, or P525L mFUS were over-expressed in photoreceptors [142]. Recently, it was demonstrated that mFUS causes accumulation of NEAT1 isoforms and paraspeckles, which contribute to degenerating spinal motor neurons [143].
Chromosome 9 Open Reading Frame 72 (C9orf72)
The hexanucleotide repeat expansion (HRE) GGGGCC (G4C2) in the non-coding region of the C9orf72 gene was identified in 2011 as the most common inherited cause of ALS in European cohorts [45,144]. Renton et al. showed that the G4C2 HRE in the first intron on the affected haplotype in ALS patients is larger than in healthy subjects (who carry fewer than 30 repeats) [144,145] (Figure 7). Larger repeat expansions are thought to be pathogenic, but the exact cut-off between normal alleles and pathogenic expanded alleles is still unclear. In European cohorts, this HRE occurs in approximately 40% of fALS and 7% of sALS cases, but it is less frequent in Asian cohorts [24].
Figure 7. C9orf72 gene structure, transcript variants, and protein isoforms. The C9orf72 gene consists of 11 coding exons (green) and non-coding exons (orange). The G4C2 or C4G2 HRE is located in the first intron of variants 1 and 3 and within the promoter region of variant 2. This figure was adapted from Balendra and Isaacs's study [146].
The known mechanisms by which HREs cause the disease can be categorized into two primary classes [147]. The first comprises gain-of-function mechanisms, in which RNAs containing G4C2 and C4G2 expanded repeats are bi-directionally transcribed and then aggregate in the cell nucleus. Dipeptide repeat proteins (DPRs) can be generated from repeat-containing RNAs that leave the nucleus; DPRs imported back into the nucleus can bind to nucleolar proteins and cause nucleolar stress [147-150]. Alternatively, repeat-expansion RNAs can be retained in the nucleus and generate RNA foci, which sequester RNA-binding proteins, alter RNA processing, impair nucleocytoplasmic transport, and cause cellular toxicity. C9orf72 repeat RNA can sequester nucleolar proteins, bind to nuclear pore complex proteins, and disrupt nucleocytoplasmic transport. In addition, sequestration of RanGAP by the G4C2 RNA causes a reduction in the nuclear-cytoplasmic (N/C) distribution of Ran GTPase (Ran) and disrupts functional nucleocytoplasmic transport [131,151]. Impaired nucleocytoplasmic trafficking correlates with mislocalization of TDP-43 in the cytoplasm. The second class comprises C9orf72 loss-of-function mechanisms, in which the HRE leads to disrupted transcription, downregulation of C9orf72, and a loss of function. Previous studies showed the involvement of C9ORF72 in endosomal trafficking regulation, as well as an inverse correlation between C9-isoform interactions with the nuclear pore complex and TDP-43 cytoplasmic inclusion levels. In addition, a strong reduction of C9orf72 expression inhibited Shiga toxin transportation from the plasma membrane to the Golgi apparatus and altered the ratio of the autophagosome marker LC3 [152,153]. Accordingly, neurons from ALS patients with variants in C9orf72 have increased sensitivity to autophagy inhibition, suggesting that reduced gene expression can lead to cellular distress [146] (Figure 3D).
In C9orf72-ALS patients, the age of onset is mostly between 30 and 70 years, and bulbar onset has been observed more frequently [154]. Not only ALS and FTD but also parkinsonism and psychotic symptoms can be caused by C9orf72 expansions [155,156]. Considering the decreased levels of C9orf72 mRNA and protein in ALS patients with repeat expansions [157,158], Koppers et al. tested this hypothesis by developing a C9orf72 conditional knockout mouse model; however, no evidence of motor neuron degeneration or motor deficits was observed [159]. A recent case report highlights the phenotypic variability, including age of onset, within a family carrying the C9orf72 repeat expansion [160].
Conclusions
Although we have come a long way since SOD1 was proposed as the first ALS gene 27 years ago, the causative pathogenic mechanisms in ALS remain obscure. The functions of mutant CYLD, S1R, GLT8D1, and KIF5A, the most recently discovered genes involved in ALS pathogenesis, are illustrated in Figure 2. In addition, Figure 3 summarizes previous findings on the effects of mutant SOD1, TARDBP, FUS, and C9orf72 and the fundamental mechanisms of ALS pathogenesis. According to the genes and genetic functions reviewed in this paper, multiple factors are involved in ALS development and progression, among which the most commonly proposed pathogenic mechanisms concern RNA metabolism and protein metabolism. Furthermore, genetic and phenotypic variability between patients makes it difficult to draw general conclusions on ALS pathogenesis and to predict future outcomes in ALS research. However, further research on finding novel genes, gene modifiers, and their molecular pathways might improve our understanding of this neurodegenerative disorder, which is still fatal. In addition, the identification of novel ALS causal genes can enable new therapeutic strategies, either through the discovery of shared disease pathways or through therapies targeting known genetic variants. As of now, riluzole is the only successfully established therapy for ALS, exerting transient effects on cortical and axonal hyperexcitability [173], whereas several drug trials targeting glutamatergic neurotransmission have been unsuccessful [174]. Given the irreversibility of genetic variants, developing therapeutic approaches is difficult, and specific drugs for treating ALS are quite limited [175]. Considering how genetic variants result in epigenetic modifications and how epigenetic alterations are reversibly modulated in various neurodegenerative disorders such as Alzheimer's disease and Huntington's disease, it would be reasonable to design therapeutic approaches that target epigenetic components to ameliorate the onset and symptoms of ALS [176-178]. Therefore, in future studies, a combination of treatments that modulate multiple epigenetic targets could be a more effective therapeutic strategy for treating ALS.
In Vitro and In Vivo Study of the Short-Term Vasomotor Response during Epileptic Seizures
Epilepsy remains one of the most common brain disorders, and the different types of epilepsy encompass a wide variety of physiological manifestations. Clinical and preclinical findings indicate that cerebral blood flow is usually focally increased at seizure onset, shortly after the beginning of ictal events. Nevertheless, many questions remain about the relationship between vasomotor changes in the epileptic foci and the epileptic behavior of neurons and astrocytes. To study this relationship, we performed a series of in vitro and in vivo experiments using the 4-aminopyridine model of epileptic seizures. It was found that, in vitro, pathological synchronization of neurons and depolarization of astrocytes are accompanied by rapid short-term vasoconstriction, while in vivo vasodilation prevails during the seizure. We suggest that vasomotor activity during epileptic seizures is a correlate of the complex, self-sustained response that includes neuronal and astrocytic oscillations, and that underlies the clinical presentation of epilepsy.
Introduction
About 2% of the population experiences an unprovoked epileptic seizure at least once in their lives, and epilepsy research has a long history of in vitro and in vivo experimentation [1]. These events are clearly recognizable on electroencephalogram recordings or by psychosomatic manifestation, but the underlying biological mechanisms are not yet fully understood [2-4]. Human epilepsy is most often defined as a manifestation of periodic self-sustaining paroxysmal dysfunction of the brain, characterized by excessive synchronized firing of neurons united in a common network. Unlike normal neurovascular coupling, epileptic seizures place supranormal demands on the brain's regulatory mechanisms as a result of a pathological increase in the rate of oxygen consumption following both local and ictal events [4]. An early hypothesis proposed that neuronal damage following severe epileptic seizures was caused by local cerebral hypoxia, but this theory was refuted by later studies [5-7]. Epileptic seizures include dynamic changes of intracellular and extracellular ionic concentrations as well as changes in the processes of neurovascular coupling, which can be studied experimentally or by using complex mathematical modeling [8-10].
It was found that epileptic seizures induce long-term increases rather than decreases in local cerebral oxygenation as well as increases in local blood circulation, which is a reliable marker of an underlying epileptic discharge [11]. Massive firing of neurons in the epileptic foci increases energy consumption by local brain cells. This energy is produced by cellular metabolism from oxygen and glucose supplied by blood through the capillary network. Thus, in response to transient local seizures, nearby capillaries need to increase local blood circulation. This mechanism, known as neurovascular coupling, is defined as the physiological linkage between transient neural activity and the regulation of cerebral blood circulation, and it operates during normal functioning of the brain [12,13]. Functional magnetic resonance imaging (fMRI), intrinsic optical imaging (IOS), and near-infrared spectroscopy (NIRS) can measure blood oxygenation variations associated with transient neural activity [14-16]. Optical methods in animal models are currently used to measure blood circulation intensity, flow changes, and local oxygenation in the cerebral cortex and are routinely interpreted as changes in neuronal activity [10,17-22]. However, the relationship between epileptic seizures and local neurovascular coupling processes is much more complex. Despite the numerous findings obtained in recent years, the potential mechanisms underlying the relationship between changes in local blood vessels and pathologically synchronized neurons remain unclear.
Using modern brain imaging methods, it was previously demonstrated that increases in cerebral blood flow (CBF) occur after the onset of epileptic seizures [23,24]. It was also shown that the metabolic rate of oxygen consumption and CBF have no direct relationship during an ictal event [23]. Another study reported a fast decrease in local oxygenation that preceded the increase in CBF in the seizure area. This phenomenon, known as the "initial dip", although controversial, proves that, for a brief period of time after ictal onset, neurons experience oxygen deficiency until cerebrovascular regulation dilates vessels to augment blood circulation [22].
It should be noted that the relationship between vasodilation and an ictal event is perhaps not so simple. It cannot be stated unequivocally that seizure onset begins completely independently of local vasodilation and other changes in neurovascular coupling [22]. Moreover, the reactions of blood vessels and the adjacent astrocytic syncytium possibly contribute to triggering and supporting epileptic seizures, at least in some cases [11,22].
In this study, we hypothesized that cerebral autoregulation may be impaired in the zone of formation of epileptic seizures. As is well known, tonic-clonic seizure-like events can be induced by elevation of K+ or lowering of Ca2+ or Mg2+ in the extracellular space [22,25]. As a non-selective potassium channel blocker, 4-aminopyridine (4-AP) inhibits the K+ outward current, which, in turn, prolongs action potentials and increases the excitability of both inhibitory and excitatory neurons, the former of which is pivotal in the development of epileptic seizures. The 4-AP model generates epileptic seizures lasting from a few tens up to a few hundreds of seconds, with periods between seizures lasting minutes [4,10].
Our task was to study the reactions of neurons, blood vessels, and astrocytes in the area of epileptic activity. We performed experimental studies both in vitro and in vivo, which allowed us to obtain uniquely comprehensive results. Although in vitro experiments on living brain slices using the 4-AP model are common [26], we used slices containing a fragment of a pressurized blood vessel, a preparation that is rarely used because it requires special experimental dexterity. Unlike simulating blood flow with chemical preconstriction, which might evoke a myogenic response, this experimental method provides more natural conditions for blood vessels during the experiment [27,28].
The presence of a small fragment of blood vessel in a living slice of brain tissue allows not only investigation of neuronal activity by electrophysiological and optical methods but also monitoring of the reactions of pressurized blood vessels during the seizure. Combined use of the 4-AP model of epileptic seizures in vitro and in vivo has provided interesting results that may shed light on neurovascular coupling in the epileptic seizure area.
Materials and Methods
In total, 18 adult rats (male, 250-400 g, 3-5 months) were used for the slice and in vivo experiments. Ten male Wistar rats, originally obtained from the Animal Resource Center, Universidad Central del Caribe (Bayamon, Puerto Rico) and maintained in the Universidad Central del Caribe animal facilities, were used for the in vitro experiments. Eight male Wistar rats used for the in vivo experiments were obtained from the Rappolovo Nursery, Russian Academy of Medical Sciences (St. Petersburg, Russia) and maintained in the Saint Petersburg University animal facility. All procedures involving rodents were conducted in accordance with the National Institutes of Health (NIH) regulations concerning the use and care of experimental animals and were approved by the UCC Institutional Animal Care and Use Committee (IACUC, for in vitro experiments, approval #10-XI-00) and the Ethical Committee for Animal Research of Saint Petersburg State University (for in vivo experiments, approval #131-03-4). Surgical procedures were performed using sterile/aseptic techniques in accordance with institutional and NIH guidelines. To minimize discomfort, the animals were anesthetized in all procedures involving surgery and before euthanasia.
Brain Slice Preparation and Patch-Clamp
In total, 10 rats between 30 and 60 days of age were rapidly decapitated. Hippocampal slices (400 µm) were prepared using a vibratome (VT1000S, Leica Microsystems GmbH, Wetzlar, Germany) in ice-cold artificial cerebrospinal fluid (ACSF) containing (in mM) 127 NaCl, 2.5 KCl, 1.25 NaH2PO4, 25 NaHCO3, 2 CaCl2, 1 MgCl2, and 25 d-glucose, saturated with a 95% O2/5% CO2 gas mixture at pH 7.4. In total, 16 slices from 10 different animals were used. Slices were perfused (0.1 mL/s) with the same ACSF at room temperature. For whole-cell recordings, membrane currents and voltages were measured with the single-electrode patch-clamp technique. Cells were visualized using an Olympus infrared microscope fixed on an X-Y stage (Narishige Int. Group, Japan) and equipped with differential interference contrast (model BX51WI, Olympus, Japan). Two piezoelectric micromanipulators (MX7500 with MC-1000 drive, Siskiyou, Inc., Grants Pass, OR, USA) were used for voltage-clamp and current-clamp recording. An additional two MN4 manipulators (Narishige Int. Group, Japan) were used for a local field potential (LFP) electrode and a pressurizing micropipette. All manipulators and microscopes were separately fixed to an anti-vibration table (VH-AM, Newport Corporation, CA, USA). A MultiClamp 700A patch-clamp amplifier with a DigiData 1322A interface (Molecular Devices, Inc., Sunnyvale, CA, USA) was used for recording and stimulation. The pClamp-10 software package (Molecular Devices, Inc., CA, USA) was used for data acquisition and analysis. Borosilicate glass pipettes (O.D., 1.5 mm; I.D., 1.0 mm; World Precision Instruments, Sarasota, FL, USA) were pulled in four steps to a final resistance of 8-10 MΩ for astrocyte recordings using a P-97 puller (Sutter Instrument Co., Novato, CA, USA). Electrodes were filled with the following solution (in mM): 130 K-gluconate, 10 Na-gluconate, 4 NaCl, 4 phosphocreatine, 0.3 GTP-Na2, 4 Mg-ATP, and 10 HEPES, with the pH adjusted to 7.2 with KOH. Astrocyte recordings were considered only if the membrane potential (MP) was negative, up to −80 mV, and the input resistance was low (<20 MΩ). Experiments with the brain slice and electrodes were performed under an infrared video monitoring system (Figure 1C). Constant video monitoring allowed us to verify that the slice as a whole did not move during ictal events and that no large swelling occurred during the experiments.
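As a side note, converting the millimolar ACSF recipe above into weigh-out masses is simple arithmetic. The sketch below illustrates the calculation for a 1 L batch; the molar masses are approximate anhydrous values (hydrated salts would need adjusted values), so treat the output as illustrative rather than a validated protocol.

```python
# Illustrative conversion of the ACSF recipe (in mM) to grams per litre.
# Molar masses are approximate anhydrous values (g/mol); adjust for hydrates.
molar_mass = {
    "NaCl": 58.44, "KCl": 74.55, "NaH2PO4": 119.98, "NaHCO3": 84.01,
    "CaCl2": 110.98, "MgCl2": 95.21, "d-glucose": 180.16,
}
recipe_mM = {
    "NaCl": 127, "KCl": 2.5, "NaH2PO4": 1.25, "NaHCO3": 25,
    "CaCl2": 2, "MgCl2": 1, "d-glucose": 25,
}
volume_L = 1.0
for compound, conc_mM in recipe_mM.items():
    grams = conc_mM / 1000.0 * molar_mass[compound] * volume_L
    print(f"{compound}: {grams:.3f} g per {volume_L:.0f} L")
```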
Pressurizing Blood Vessels in the Slice
A glass electrode with a tip of ~20 µm in diameter was filled with ACSF and fixed in a standard patch-clamp holder, and the holder was connected to a pressure control and management system. We used relatively large blood vessels, 100-150 µm in diameter, lying mainly in the plane of the slice. We picked the cut end of a vessel visible on the slice surface. The slice was oriented so that the tip of the glass electrode rested against the cut end of the vessel. The tip of the pressure electrode was then moved inside the vessel using a micromanipulator. Next, a pressure of 0-50 torr was applied. The smaller derivative vessels, 10-20 µm in diameter, were inspected to see whether they changed in diameter at different pressures, and the limits of vessel diameter (maximum and minimum diameter) were determined (Figure 1A,B). A mean pressure of 20-30 torr was sustained continuously throughout the experiment to keep the diameter in the middle of this range, thus preserving the vessel's ability to dilate.
Piezo-electrode: To make a mechanosensitive electrode, borosilicate glass electrodes were pulled so that the tip of the electrode became ~1 µm (for intracellular recording) and then additionally heated in a micro-forge for 10 min. After extensive heating, the electrode became mechanosensitive and generated a piezo potential of ~0.1 mV/µm upon tip bending, due to ceramic formation [29]. This electrode was then installed in a micromanipulator with a standard patch-clamp holder, connected to an isolated voltage amplifier (DP-301, Warner Instruments, Holliston, MA, USA), positioned against the vessel wall, and calibrated using a microscope to monitor vasomotor activity. The second channel of the amplifier was used to connect a standard low-resistance electrode to record the LFP (filtered at 0.1 Hz with a high-pass filter) from the slice.
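Given the ~0.1 mV/µm calibration factor reported above, converting the recorded piezo voltage into wall displacement is a single scaling step. A minimal sketch, assuming a calibrated sensitivity and a placeholder voltage trace (both hypothetical values, not the original recordings):

```python
import numpy as np

# Convert a piezo-electrode voltage trace (mV) into vessel-wall
# displacement (um) using the calibration factor from the text.
sensitivity_mV_per_um = 0.1          # ~0.1 mV per um of tip bending
voltage_mV = np.array([0.00, 0.05, 0.12, 0.30, 0.18])  # placeholder samples
displacement_um = voltage_mV / sensitivity_mV_per_um
print(displacement_um)               # -> [0.  0.5 1.2 3.  1.8] um of wall motion
```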
Two glass microcapillaries were used for simultaneous patch-clamp recording from a closely neighboring neuron and astrocyte, or from two astrocytes, simultaneously with the LFP and vasomotor activity recordings in our experiments.
Figure 1. Photomicrograph of a blood vessel in the living brain slice under normal conditions (A) and when pressure is applied inside the vessel (B). (C) Blood vessel with a microcapillary inserted inside, and the neuron and astrocyte with attached microelectrodes. A glass electrode was inserted inside the blood vessel in a 400 µm brain slice preparation, and 20-30 torr pressure (see text) was applied.
In Vivo Imaging of the Epileptic Seizures
For the in vivo experiments, 8 adult Wistar rats were anesthetized with an i.p. injection of a mixture of ketamine (90 mg/kg) and xylazine (12 mg/kg) and fixed onto a stereotaxic frame. On the dorsolateral part of the skull, a cranial window of ~5-7 mm² was made. After craniotomy, an intracortical injection of 0.3 µL of a 25 mM solution of 4-AP in artificial cerebrospinal fluid (ACSF) was performed with a Hamilton syringe at 0.5 mm below the surface to induce local epileptic seizures. To suppress cortical tissue motion induced by breathing and heart rhythm, the region of interest was covered with mineral oil and a cover glass. For LFP recording, a metal high-impedance microelectrode (glass-coated tungsten, R ~1 MΩ) was positioned in the region of interest at about 0.5 mm below the cortical surface. The reference electrode was a silver plate (1-2 mm²) implanted over the cerebellum. Neural activity was fed into a multichannel amplifier (USF-8; Beta Telecom), band-pass filtered at 0.1-200 Hz, and digitized. A 12-bit CCD camera (QuickCam, 640 × 480 pixels) with a built-in objective was focused 0.5 mm below the cortical surface around the 4-AP injection site, and images were acquired at 3 fps. We used an algorithm comparing frames obtained in the ictal and interictal periods; the corresponding frames were selected during off-line analysis based on the LFP data.
Illumination was provided with an LED at 630 nm (red light), which was homogeneously projected onto the region of interest. All images were analyzed off-line. At each time point, the averaged light reflection intensity of the pixels within the region of interest was quantified and normalized to the mean baseline value. Pseudocolor images were obtained by digitally amplifying the difference with a zero-image using the Metamorph program (Universal Imaging Corp., Downingtown, PA, USA) and the histological Photoshop plugin. We compared the width of vessels during spike-wave discharges (SWD) to the width of the vessel in the interictal period, which was taken as 100%.
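A minimal sketch of the off-line analysis described above, assuming `frames` is a (time, height, width) image stack, `roi_mask` a boolean region-of-interest mask, and the baseline/ictal frame indices are taken from the LFP record; all of these names and the synthetic data are placeholders, not the original analysis code:

```python
import numpy as np

def normalized_reflectance(frames, roi_mask, baseline_idx):
    """Mean ROI reflectance per frame, normalized to the mean baseline value."""
    roi_trace = frames[:, roi_mask].mean(axis=1)   # one averaged value per frame
    return roi_trace / roi_trace[baseline_idx].mean()

def difference_image(frames, baseline_idx, ictal_idx, gain=10.0):
    """Digitally amplified difference between ictal frames and a zero-image."""
    zero_image = frames[baseline_idx].mean(axis=0)
    return gain * (frames[ictal_idx].mean(axis=0) - zero_image)

# Example with synthetic data:
frames = np.random.rand(100, 64, 64)
roi_mask = np.zeros((64, 64), dtype=bool)
roi_mask[20:40, 20:40] = True
trace = normalized_reflectance(frames, roi_mask, baseline_idx=np.arange(10))
```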
Chemicals and Materials
All chemicals and materials not specifically mentioned were purchased from Sigma-Aldrich (St. Louis, MO, USA).
Statistics and Measurements
GraphPad Prism 7.03 (GraphPad Software, Inc., La Jolla, CA, USA) was used to perform the Kolmogorov-Smirnov normality test, ordinary t-tests, and one-way ANOVA to determine statistical differences, as indicated for each experiment. Values were considered significantly different if the p-value was <0.05.
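For readers who prefer an open-source route, the same tests are available in SciPy. The sketch below shows how each test named above could be reproduced on synthetic placeholder arrays (the real measurements are not reproduced here); note that Prism's post-test for linear trend applies contrast weights to the group means, which SciPy does not provide directly, so only the omnibus ANOVA is shown.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
before = rng.normal(-86.5, 3.0, 15)   # placeholder membrane potentials (mV)
after = rng.normal(-82.0, 3.0, 15)

# Kolmogorov-Smirnov normality check on standardized values
print(stats.kstest((before - before.mean()) / before.std(ddof=1), "norm"))

# Paired t-test, as used for the MP change after 4-AP
print(stats.ttest_rel(before, after))

# One-way ANOVA across time-binned groups (omnibus test only)
groups = [rng.normal(mean, 1.0, 8) for mean in (0.0, 1.0, 2.0, 3.0)]
print(stats.f_oneway(*groups))
```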
In Vitro Experiments
The potassium channel blocker 4-AP (100 µM) in the ACSF perfusion solution was applied to the hippocampal slices; in total, 16 slices from 10 different animals were used. Patch-clamp recordings were performed on astrocytes and neurons near a pressurized vessel using voltage-clamp and current-clamp modes, respectively. Patch-clamp of a pair of nearby astrocytes in voltage-clamp mode (Figure 2A) showed that a powerful inward current (corresponding to temporary astrocyte depolarization) in the astrocyte membrane occurred simultaneously with the onset of seizure-like events and usually started 2-5 min after the application of 4-AP (Figure 2B). These depolarization events occurred randomly throughout the recording, without a regular period, and were observed in all astrocytes that we were able to study. Upon washout, these depolarization events, which at first lasted a few minutes, became shorter over time, shortening to 10-20 s duration and finally disappearing 30-40 min after the application of 4-AP. The membrane potential (MP) of astrocytes was significantly reduced (Figure 2D) from −86.5 ± 1.0 mV to −82.0 ± 0.8 mV about 10 min after the application of 4-AP (paired t-test: p = 0.0017, t = 3.9, df = 14, n = 15). The one-way ANOVA post-test for linear slope showed a statistically significant linear trend for MP over 10 min (F(1, 70) = 76.9, p < 0.0001).
The patch-clamp electrodes, which recorded the astrocyte MP, also recorded relatively high-frequency, low-amplitude spike-like oscillations, corresponding in shape and frequency to the known electrical activity of epileptic seizure-like events (M = 0.43 Hz, SD = 0.06). The duration of this seizure-like activity was 0.5-3 min (average, 75 s), and these events were recorded both by the extracellular electrode and by patch-clamp of the nearest astrocytes. In addition to the powerful depolarization of the astrocyte membrane, 4-AP-induced high-frequency, low-amplitude oscillations were recorded in all astrocytes during epileptic seizure periods. These oscillations started almost simultaneously with, and persisted along with, a deep inward current (depolarization, Figure 2B) and had not ended after 30-40 min of 4-AP washout. Activity in the nearby astrocyte pair occurred synchronously (Figure 2B), with a very clear positive correlation (Pearson correlation coefficient = 0.97). Analysis of cross-correlation lag values showed a zero lag time (+0.98 at lag time = 0, calculated with the standard Clampfit V10.2-012 cross-correlation function; Figure 2C). Thus, we clearly observed that nearby astrocytes generated their low-amplitude, high-frequency discharges with a high degree of synchronization.
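The zero-lag synchrony reported here can be checked by locating the peak of the normalized cross-correlation between the two traces. A minimal NumPy sketch with synthetic signals standing in for the recordings (the 0.43 Hz test frequency echoes the oscillation frequency reported above):

```python
import numpy as np

def peak_xcorr(x, y):
    """Return (peak correlation, lag in samples) for two equal-length traces."""
    x = (x - x.mean()) / (x.std() * len(x))
    y = (y - y.mean()) / y.std()
    cc = np.correlate(x, y, mode="full")     # normalized cross-correlation
    lag = int(np.argmax(cc)) - (len(y) - 1)  # 0 means synchronous signals
    return float(cc.max()), lag

t = np.linspace(0, 10, 1000)
a = np.sin(2 * np.pi * 0.43 * t)             # ~0.43 Hz, as in the recordings
b = a + 0.05 * np.random.randn(t.size)       # noisy copy of the same signal
print(peak_xcorr(a, b))                      # correlation near 1, lag near 0
```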
Simultaneous recording from neurons (CA1 zone, hippocampus) and nearby astrocytes enabled an understanding of how their activities after 4-AP application are related (Figure 3A, upper and lower traces). The upper trace is from an astrocyte recorded in voltage-clamp mode, while the lower trace is from the neuron (current-clamp mode). As in the previous experiments, 100 µM 4-AP in the ACSF perfusion solution was applied to the hippocampal slice over a period of 4 min, followed by washout [30]. This led to seizure-like activity in the slice, corresponding to occasional bursts of synchronized spike activity in neurons (Figure 3A, lower trace). Simultaneous recordings show that the neuronal bursts corresponded to a high-amplitude, 15 s inward current (corresponding to temporary depolarization) in the astrocyte (Figure 3A, upper trace). After this event, fast spike-like oscillations appeared in the astrocytes, which corresponded to spikes or giant EPSPs in the neuron (Figure 3A, colored insert). The cross-correlation between neuronal activity and the high-frequency, low-amplitude activity of astrocytes (Figure 3B) was also high (CCF = +0.88, Clampfit cross-correlation), with a stable delay that differed for each neuron-astrocyte pair and can be described as a constant phase shift.
A blood vessel (diameter 100-150 µm) was perfused with ACSF at constant pressure through the inserted micropipette, while the smaller derivative blood vessels were also pressurized. The electrode positioning is shown in Figure 4A. A mechanosensor was positioned on the pressurized blood vessel, and movements of the blood vessel wall were recorded during the seizure, which corresponded to large inward currents in the astrocytes (Figure 4B). Application of 4-AP caused epileptic activity in the slice, and this activity was clearly visible in the LFP as a high-frequency, high-amplitude event (Figure 4B). The mechanosensor recorded a distinct vasoconstriction response, accompanied by electrophysiological correlates. The vasoconstriction caused by seizure-like events corresponded only to slow inward currents (producing temporary depolarization) in astrocytes, but later high-frequency spike-like oscillations in astrocytes corresponding to seizure-like events (according to the LFP) were also observed (Figure 4B). This effect was stable and was observed in all cases in which seizure-like activity occurred while pressure was applied inside the vessel. No vasomotor activity was observed between seizure-like events, regardless of whether pressure was applied inside the vessel. We also failed to find correlations between the intensity of vasoconstriction and the pressure applied to the vessel. Vasoconstriction started at the same time as the epileptic seizure and ended when the long-lasting inward current (depolarization) event in the astrocyte ended. Seizure-like events recorded in the LFP in most cases began with a single high-amplitude discharge, and toward the end of the recording the amplitude of the spikes decreased. The vasomotor reactions detected by the mechanosensor are inextricably linked to changes in neuronal activity as well as changes in inward currents in the astrocytes in the epileptic seizure zone (Figure 4B).
In Vivo Experiments
In vivo experiments were performed on the whole brain using intrinsic optical imaging in combination with LFP recording. Seizure-induced optical responses were monitored by IOS imaging within the cortical region. Intracortical injections of 4-AP induced recurrent seizures, typically lasting a few seconds each (17.6 ± 3.8 s; range 6.9 to 32.5 s), with events recurring for about 2 h after the injection [31,32].
A seizure was defined as a series of discrete spikes, with an onset consisting of high-frequency discharges (10.59 ± 0.21 Hz), followed by evolving rhythmic, high-amplitude activity (3.04 ± 0.05 Hz) with a distinct termination (Figure 5A). In most cases, epileptic seizures began with a single high-amplitude spike, and the frequency of spikes increased over the course of the seizure. During the IOS session, we recorded an optical signal from the region of interest, with a frame duration of 0.1 s and a frequency of 1 frame every 10 s. The beginning and end of each frame were marked on the LFP recording with specific double-spike artifacts (Figure 5B). In all cases, decreased light reflectance was observed in the epileptic foci during the epileptic seizures compared with the interictal period. Data were collected before, during, and for up to two hours after epileptic seizure induction, and the acquired images were stored digitally. In all experiments, IOS changes showed a statistically significant linear trend only after 4-AP injection (F(1, 81) = 31.9, p < 0.0001) (Figure 5C); before injection, there was no statistically significant change in this parameter.
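Estimating discharge frequency from an LFP trace of the kind described above can be sketched with simple threshold crossing; the sampling rate, threshold, and synthetic trace below are placeholder assumptions, not the parameters of the original analysis.

```python
import numpy as np

def spike_rate(lfp, fs, thresh):
    """Estimate discharge frequency (Hz) from upward threshold crossings."""
    above = lfp > thresh
    onsets = np.flatnonzero(~above[:-1] & above[1:])  # upward crossings
    if onsets.size < 2:
        return 0.0
    return fs / np.diff(onsets).mean()                # 1 / mean inter-spike interval

fs = 1000.0                                    # assumed sampling rate (Hz)
t = np.arange(0.0, 5.0, 1.0 / fs)
lfp = np.sin(2 * np.pi * 3.0 * t) ** 7         # sharp ~3 Hz synthetic discharges
print(round(spike_rate(lfp, fs, thresh=0.5), 2))  # -> approx. 3.0
```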
As is well known, in the part of the spectrum that we used (~630 nm), a decrease in reflection is associated with an increase in hematocrit. In the case of relatively large blood vessels, an increase in hematocrit is almost exclusively associated with vasodilation. Through comparison of the simultaneously acquired LFP and IOS data, we observed vasodilation corresponding to epileptic seizures (Figure 5D). We analyzed changes in the lumen of vessels outside and during spike-wave discharges (SWD), taking the width of the vessel in the interictal period as 100%. Vasodilation during a seizure was significant, averaging 110.3 ± 0.5% (one-sample t-test, H0: mean equals 100%; n = 90, t = 23.3, df = 89; p < 0.0001).
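The comparison against the 100% interictal reference is a one-sample t-test; a sketch with synthetic width measurements (the placeholder array merely mimics the reported mean, it is not the study data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
widths_pct = rng.normal(110.3, 4.5, 90)        # placeholder SWD vessel widths (%)
t_stat, p_val = stats.ttest_1samp(widths_pct, popmean=100.0)
print(f"t = {t_stat:.1f}, p = {p_val:.2g}")    # H0: mean width equals 100%
```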
These hemodynamic changes confirmed an increase in local microcirculation before the seizures, and the elapsed time between vasodilation and seizure onset depended on the distance from the site of the 4-AP injection. Without exception, in all experiments there was a significant increase in the diameter of blood vessels, correlating with the duration of the epileptic seizures. The onset and termination of this vasodilation lagged the electrophysiologically recorded epileptic seizures by no more than 1 s. The vessels remained dilated for the entire duration of the epileptic seizures and narrowed simultaneously with the end of the seizures. As recorded with the LFP electrode, pre-seizure activity was relatively low, and a strong IOS was observed only during electrographic seizures.
Discussion
Epileptic seizures are a complex phenomenon that, through different developmental pathways, can trigger both vasoconstriction and vasodilation. Vasodilation and vasoconstriction caused by epileptic seizures have been repeatedly described in the literature. Using different methodological approaches, it has been shown that epileptic seizures are accompanied by local vasodilation and that this phenomenon is associated not only with pathological synchronization of neurons but also with slow depolarization of the astrocyte membrane [33]. It should be noted that the same authors showed that electroconvulsive seizures caused a rapid elevation in astrocyte endfoot Ca2+ that was confined to the seizure period, while vascular smooth muscle cells showed a significant elevation in Ca2+ both during and following seizures [34]. Additionally, they found a biphasic reaction: arterioles dilated in response to the seizure, with the amount of dilation decreasing with increasing distance from the 4-AP injection site. The biphasic reaction (vasoconstriction in the preictal period and vasodilation during a later stage of the seizure) was only evident in the remote area [35].
Thus, increases in Ca2+ in astrocyte endfeet have been shown to correlate with vasoconstriction at seizure onset and with vasodilation during the latter part of the seizure [36]. It has also been shown that pericytes are involved in the control of capillary vasomotion and that depolarization of pericytes in the postictal phase can lead to vasoconstriction [12,37].
Usually, epileptic activity initiates a local increase in cerebral metabolism and CBF, but decreases in CBF have been demonstrated surrounding the epileptic focal area [35]. An imbalance of inhibition and excitation causes neural network hyperexcitability and eventually leads to seizures. The network activity of neural and glial cells is an important factor that regulates the multidimensional response of the vascular system, including the interaction between interconnected blood vessels [2]. Early epileptic studies suggested that vasospasms caused by seizures led to local ischemia, but later studies found hyperemia, the opposite of ischemia, in the epileptic seizure area [38,39]. Seizures induce reversible vasodilation and increases in local blood flow, resulting in an overshooting supply of oxyhemoglobin [40]. Using several models of epileptic mice, it was demonstrated that vasospasms are more likely to occur in the ictal-zone capillaries of epileptic mice than in control animals [40].
The 4-AP model of epileptic seizures allows us to investigate pathologically synchronized neural activity both in vitro and in vivo [41]. We used a unique technique of applying artificial pressure to the interior of a vessel fragment located in a living brain slice. This technique makes it possible to simultaneously register vasomotor activity and the cellular activity of neurons and astrocytes. In our experiments, we found that 4-AP-induced synchronized neural activity led to significant potassium release (Figure 3, lower trace). Simultaneously, there was an MP change in astrocytes due to a high-intensity inward current, as the astrocytes tried to absorb the potassium (Figure 3, upper trace), with this phase probably corresponding to the beginning of the ictal period. These synchronized neurons then produced simultaneous spikes, reflected as "high-frequency" signals in astrocytes (Figures 2 and 3). Each individual neuronal spike corresponded to a fast spike-like oscillation in the astrocyte with a stable delay, as illustrated in Figure 3. All astrocytes near these synchronized neurons had their high-frequency signals synchronized with zero lag time (Figure 2).
If we compare the LFP signal recorded by an extracellular electrode in vitro and filtered through a high-pass filter (Figure 4), we can see that it resembles an EEG signal recorded during in vivo studies (Figure 5) within the ictal period (Figure 5A). By contrast, large-diameter arteries in the in vivo experiments were clearly dilated at the beginning of the ictal period (Figure 5C,D), while our in vitro experiments on brain slices showed clear vasoconstriction in small-diameter arterioles accompanying astrocyte activity (Figure 4). Our in vivo data obtained using the 4-AP model of epileptic seizures correspond with other reports of ictal events rapidly accompanied by local vasodilation of relatively large (≥50 µm) vessels in the zone of the epileptic seizure [36]. Meanwhile, optical methods have shown that low oxygenation (probably corresponding to vasoconstriction) is recorded in the focal location of the ictal zone, while CBF becomes reduced in the peripheral blood vessels [42].
The results obtained in our study relating astrocyte membrane currents to vasoconstriction during seizures are consistent with other recent studies using epilepsy models [35]. Normally, astrocytes are able to remove a large amount of K+ from the extracellular space, since it can be spatially buffered via redistribution into the syncytial network of gap junction-coupled astrocytes [43,44]. Therefore, any pathological change in gap junction coupling could impact astrocytic functions and may contribute to seizure occurrence. In the 4-AP model, gap junctions function normally, but extracellular K+ rises due to neuronal hyperactivity.
Our results demonstrate that in an in vitro model of epileptic seizure, event onset is accompanied by vasoconstriction in small blood vessels, while during the seizure large vessels exhibit vasodilation, as was found previously [35]. Three-dimensional seizure events are complicated, and the phenomenon of vasoconstriction in the epileptic foci has been challenging to study in vitro. This may be why it has been suggested that various forms of epileptic activity only increase local cerebral blood circulation. For example, it was demonstrated using the bicuculline seizure model that CBF increases dramatically with seizure onset, reaching a maximum after 15-60 s [45].
In summary, mechanosensor recordings allowed us to describe properties of synchronous epileptiform discharges and vasomotor activity induced by 4-AP in an in vitro living cortico-hippocampal slice. We observed that neuronal oscillations during an epileptic seizure precede fast spike-like events in astrocytes. Since astrocytes are combined into an astrocytic syncytium by gap junctions, they start to remove potassium from the extracellular space locally, but the resulting current spreads throughout the syncytium. This, in turn, supports the existence of pathological neural depolarization. This process is accompanied by local vasomotor activity, which is closely related to neuronal and astrocytic activity, but its biological role is not entirely clear.
Conclusions
Our results obtained in vitro and in vivo reveal a relationship between ictal events in neurons and astrocytes and vasomotor events. There remains little doubt that astrocyte properties contribute to epileptic seizure onset, spread, and termination, which can be attributed to their synchronous depolarization that follows neuronal oscillations. Moreover, using a combination of in vitro vasomotor activity recordings and IOS imaging in vivo, we observed local constriction of small blood vessels and dilation of relatively large blood vessels at different time points before and during the seizure.
Iranian Bazaars and the Social Sustainability of Modern Commercial Spaces in Iranian Cities
Judged by factors such as participation, interaction, identity, and security, Iran's traditional bazaars are good examples of social sustainability. In fact, bazaars are considered not merely an economic environment but also an environment for many social activities, owing to their status and location in the important environments and centres of the city, and to the significant role and social status of the bazaar's businessmen in the city. However, in the modern industrial era and with the appearance of new urban elements, many circulation spaces and traditional urban environments have taken on important social-cultural functions. Under these circumstances, this research uses the descriptive-analytical method to evaluate the environments of enduring traditional social business centres, in order to achieve comparable persistence in modern social business centres, by evaluating and studying the historical background of business centres, urban services, and the traditional elements that form them.
INTRODUCTION
Public spaces have always been important parts of cities, having much to do with basic routines in a city's life (Cybriwsky, 1999). They are spatially and mentally important parts of cities and play numerous roles in cities and the inhabitants' lives (Nouria, Rafieian and Ghasemi, 2019). Public space comprises all parts of the built and natural environment that are openly and freely accessible to and usable by all (Neal, 2010; Carmona, Magalhães and Hammond, 2008; Madanipour, 2003; Nissen, 2008; Parkinson, 2009), including those predominantly used for residential, commercial or community/civic purposes (Carmona, Magalhães and Hammond, 2008) and intended for social interaction, relaxation or passage between buildings (Cybriwsky, 1999).
Bazaars have long been among the most vital and important public places due to their various economic and social roles. In addition to shaping the general urban structure and the formation of neighbourhoods, they are known as important symbols of valuable and identifiable public architecture, and they have always been one of the most important arenas of common interaction in civil life. Nowadays, considering recent transformations and technological progress, online shopping and business, and the construction of small and large business centres as well as new, wide streets in cities, part of the business of traditional bazaars has moved to online bazaars, streets and business centres. This, in turn, has led traditional bazaars to gradually lose their status as centres of dynamic economy in many cities and to turn into places that are important more for their cultural and historical aspects. In fact, bazaars can no longer continue their past functions in cities, and this causes them to lose their relations with other urban elements and with people's social life. This loss of relation is, in itself, the end of a bazaar's life and will eventually lead to the gradual death of some bazaars. Places called "passages" have replaced these elements but, due to business problems, have not been effective in gathering local residents. Therefore, it is fitting to take into consideration, when designing and planning new business centres, the variables that will increase the level of social wealth in these centres (Ghasemi, Hamzenejad and Meshkini, 2019).
Many studies have been conducted on bazaars, especially traditional bazaars, in different fields and with different approaches, such as architectural, social, geographical and economic. Studies focusing on defining and evaluating Iranian bazaars include the bazaar in Iranian cities from the collection of cities in Iran (Kiani, 1985), Iranian bazaars (Sultanzadeh, 2014), the Grand Bazaar of Isfahan (Shafaghi, 2006; Khalili and Fallah, 2018), the historical district of Rasht Great Bazaar (Pourzakarya and Bahramjerdi, 2019), bazaar morphology (Rajabi, 2006), developments in Islamic Iranian bazaars (Saraei, 2010), and the role of social capital in the economic situation of traditional bazaars in Iran (PourJafar and PourJafar, 2011); among the studies on modern bazaars and commercial complexes, references for planning and designing shopping centres (Taghvaei and Baygloo, 2008) and commercial complexes (Talebian, Atashi and Nabizadeh, 2013) can be mentioned.
Relying on the enduring social patterns of Iranian traditional business environments, and drawing on previous sources and research through an exploratory method, this research tries to define an environment appropriate to the economic and social needs of modern business environments.
LITERATURE REVIEW
The word "bazaar" is an Old Persian word that is now an integral part of Iranian culture (Kermani and Luiten, 2009). According to the available information in historical sources, since the beginning of the first century AH, permanent bazaars with purpose-built spaces have existed in many new cities and almost all the old cities. Since the Seljuk era, bazaars grew and blossomed in urban areas, and in the Safavid era, owing to the high level of security, the development of trade relations and the expansion of business exchange reached their peak (Sultanzadeh, 2014).
From an economic perspective, the term "bazaar" refers to the places in which supply and demand meet and reach equilibrium, in a direct or indirect way (Biglari, 1956). It is a place for trade, for buying and selling goods, or a concourse of buyers and sellers (Department of Housing and Urban Development, 2019). However, the concept of the bazaar in many Islamic countries, especially Iran, comprises more extensive meanings than mere trading. This idea is mainly inspired by the existence of numerous mosques, schools, hussainias, tekyehs, saqqakhanas and various religious centres in Iranian bazaars. In essence, Iranian bazaars have always been considered the socio-economic and cultural centres of a city, concentrating all public activities.
Most of the activities and movement throughout the city took place in bazaars. Bazaars were the most important communication channels between citizens: in addition to the exchange of goods and money, most information and news was transferred there, or given to the people by the government. Another social function of bazaars was the public welcoming of important persons and dignified or royal guests. After entering the city, guests would pass through the bazaar's main lanes and be welcomed by the people. During national and religious celebrations, bazaars were the most important venues and were decorated for the occasion. In addition, sports competitions, such as wrestling and championship contests, and zourkhaneh (traditional Iranian gymnasium) rituals took place in many bazaars. Another important social function of the bazaars was the mourning rituals in Muharram (the first month of the Islamic calendar). Usually each business guild held its own rituals, and guilds often competed with each other in holding the best rituals. Mourning rituals were also held on other occasions, such as the passing of great religious missionaries and other notable persons (Sultanzadeh, 2014). Therefore, the bazaar was the most important element in the political-economic, social-cultural and economic-religious spheres, and the outcome of its needs and goals formed the image of the city.
Some scholars considered bazaars a fundamental and core part of an Islamic city (Birshak, 1971), while to others it was mosques and bazaars (Von Grunebaum, 1961). Another group, according to ancient and geographical texts, considered the city to be divided into three main parts: citadel, congregational mosque and bazaar (Ashraf, 1978); finally, some considered mosques, bazaars, the government citadel and the core of residential neighbourhoods, walls and bulwarks, and gates to be the important elements in the city's construction (Alsayyad, 1991; Hourani, 1970; Meshkini and Ghasemi, 2018).
However, the unity of the congregational mosque and the bazaar, and their importance, is visible in the fundamental structure of Islamic cities, and the socio-economic, and to some extent religious and political, life and function of these cities has persisted. Some people still consider bazaars to be the centre of all urban activities. In Islamic cities, bazaars are physically dependent on congregational mosques and their functions are closely related. Owing to the higher priority of religious duties, congregational mosques were built in appropriate places in the centre of the city, and bazaars, as the centre of people's livelihood and guild activities, were usually built next to the congregational mosques (as shown in Figure 1).
Although bazaars were at first built for economic reasons, their physical features and architecture have turned them into a world of activities, social interactions and urban events (Rezaei and Oskouei, 2010). Iranian bazaars are under the influence of Islamic beliefs, worship ceremonies, local and cultural traditions, geographical features, economic functions, government constitutions and people's social behaviours. Each of these creates a different image, leading to diversity in bazaar environments and showing the adaptability and significance of these environments (Rajabi and Sefahan, 2009). Also, since bazaars were fundamental centres of urban activity, they were considered city centres from the economic, social, cultural and political points of view. Furthermore, due to their growing role in defining a city's destiny, the centrality of bazaars, as much as their sociality, in shaping citizens' lifestyles expanded over time.
Figure 1. Aerial Photo of the Qaisariya Bazaar in Lar in Fars Province
Iranian bazaars have consisted of elements with structural identities, each of which, as a semantic element with visual diversity as well as an environment, has contributed to the bazaar's sustainability throughout the centuries (as shown in Table 1). These elements include: 1. Rasteh (a covered lane, usually lined with shops): Bazaars, as mercantile and service complexes, were divided into rows of shops called rasteh. Bazaars are often linear and constructed alongside the most important urban roads. Thus, the most important and fundamental element of a bazaar is its main rasteh, along which the different guilds are located, each occupying a part of it. In some larger cities there were two or more main parallel or transverse rasteh, and in medium and large cities there were also adjunct rasteh, parallel or perpendicular to the main one, which resulted from the bazaar expanding into adjacent alleys (Sultanzadeh, 2014).
2. Charsoo (crossroads): The intersection of two main rasteh. In some historical eras, the term charsogh, derived from the Arabic word sogh (market), was used instead.
3. Maidan (square): Next to or along some of the important bazaars in great cities there is an urban square or open area, because the bazaar ran along the most important road in the city and, in most cases, connected to an urban square (Sultanzadeh, 2014).
4. Hojreh or dukkan (small shop): Shops were the simplest and smallest elements in the bazaar environment (as shown in Figure 2). In fact, the fundamental element of the bazaar environment was the shop, which is the epitome of the social aspect of the bazaar (Raymond, 2005).
5. Dalan (hall, corridor): A dalan is a kind of connector, often linear in architectural spaces, that links the outer and inner space, or only the inner space, of a building; shops are usually located on either side of it (Sultanzadeh, 2014).
6. Caravanserai: A caravanserai is a place of residence, embarkation and disembarkation for local and international merchants. Caravanserais have existed in Iran for some 2,500 years, in cities and bazaars and outside cities along merchants' and travellers' routes, playing the role of hotels and motels and providing space to keep goods and baggage. Caravanserais are inward-facing spaces with a central courtyard, with shops built on one or two floors around its four sides (Pirnia, 1969). The courtyard was used for disembarkation, and the whole space was allocated to supplying particular goods and items. 7. Tim and timche: The difference between tim and timche is that in a timche only one type of business takes place, while in a tim many types of business can take place (Rajabi, 2006).
8. Qeysarrieh: Places that, in their architectural features, resemble an adjunct rasteh, dalan or timche or, in a few instances, a caravanserai, but whose function was often the supply of valuable or luxurious items, especially expensive clothes. Qeysarriyeh spaces therefore had one or more entries that were closed at night (Pirnia, 1969).
9. Jelokhan (mini square): A jelokhan is an urban space consisting of a connector in the form of a small square, surrounded on three or four sides by built space. It was used as an entry space, lobby or gathering area (Sultanzadeh, 2014).
14. Doors and gates (to increase security in different areas) (Rajabi, 2006).
The following five principles of traditional Iranian architecture underpin the sustainability of the bazaar:
Human scale: observing the human scale in a desirable and popular way. The shape and appearance of the bazaar has been spatially organised in accordance with human needs and conditions; this spatial organisation combines the role of production and supply of goods with the religious, social and cultural roles derived from the spirit of Islamic thought, so that the bazaar acts as a living organism and has become an independent entity through stages of shaping, evolution and transformation.
Self-sufficiency (khudbasandagī): maximising the use of existing and available facilities without compromising resources and future needs. 1. The traditional bazaar is precisely consistent with self-sufficiency; in a variety of locations, built with on-site and particularly indigenous materials, it represents a complete form of traditional architecture consistent with its environment. 2. The lack of repetition and imitation of imported foreign patterns, styles and functions in the Iranian bazaar, and the consistency of its spatial features with human characteristics, satisfy many of the social needs in the bazaar. 3. The traditional bazaar stands as a symbol of the authority of ancient traditions against the urban spatial reflections of modern thought, such as westernised modern streets and squares of high-rise buildings.
Inward-looking: protecting the inner spaces from external conditions and organising interior architectural spaces. 1. In the buildings of traditional bazaars, such as saras, timches, mosques, tekyehs, religious schools, husseiniyahs and caravanserais, the most crucial part of the open space is the interior, which draws human movement inward and creates a sense of stability, security and proximity in the individual or buyer. 3. There are hierarchical spaces of pause and attention to privacy in the Iranian bazaar.
Avoiding nonessentials (parhīz az bīhudagī): architectural purposefulness and avoiding nonessential works. The Iranian bazaar is an example of the use of space with the least distraction and the maximum use; even the decorations are suitable and functional, avoiding the futility of space and unnecessary embellishment.
Structural rigidity (niyārish) and homogeneous proportion (paymūn): resistance and stability of the building and determination of the proportions between its components. Niyārish and paymūn give symmetry and durability to these monolithic and interconnected bazaars, so that it is impossible to dissociate them; they create a unified and permanent form whose semantic unity is itself a factor in the stability and indelibility of this element.
MODERN IRANIAN BAZAARS
"Passage" in English means a place to pass through; a space that connects two buildings to each other. In French, routes, on either side of which there are business environments, are also called "passages" (Sultanzadeh, 2014). At the end of the 18th and the beginning of the 19th centuries, in most great cities, passages were gradually constructed (Taghvaei and Baygloo, 2008) and in many countries, progress in industry, mass production and consumerism led to big bazaars constructed to meet people's needs in shorter time (Zadeh, 2009). In Iran, big passages and chain bazaars started to appear during the Pahlavi era (Rajabi, 2006). In the 20th century, with daily increase in urban population and vehicles, and suburban population, construction and business companies turned to constructing new buildings and shopping centres, in order to provide comfort for people and customers for shopping and receive better profits. The formation of passages along streets and urban caravanserai along market lanes can be partially considered alike since passages were usually built in those streets and areas of the city, where on the one hand, economy had boomed and on the other hand, expanding shops on the edge of the sidewalks and streets would not be easy. Initial passages, looked a lot like urban caravanserais, but gradually they turned into new forms. Some of which were allocated to offices. Buildings that in some fields look like passages are practically different from the initial passages (Sultanzadeh, 2014). As a result, dominant forms of business spaces in Tehran today are in the form of streets and passages that function competitively (as shown in Figure 3).
Figure 3. An example of Modern Bazaar in Iran
Today, in addition to traditional bazaars, passages and big chain bazaars, there are various shopping centres constructed in recent years, and some are in the planning and construction phase. In the past few years, the purely business function of bazaars has changed into business-entertainment, and places for leisure time and entertainment in such environments (as shown in Figure 4), in modern forms, are now common in Iran (Ahour et al., 2013).
Figure 4. Leisure spaces in modern shopping centres, including a library and cafés and restaurants.
The most important elements of the new business centres are as follows: 1. A lobby or entrance hall that plays a defining, grandiose role in the whole complex but has no social role and is at best a waiting space.
2. Large and small halls with glamorous display showcases, greatly different from the traditional bazaar, that attract large numbers of visitors.
3. Corridors that play the stimulating role of the traditional bazaar's lanes, eliminating the qualities of natural light and heightening the excitement of artificial lighting.
4. A large space, usually visible from all parts of the shopping complex, with a coffee shop and a dining room.
5. Restaurants, usually located on the highest floors, with beautiful views and the high attractiveness of entertainment and dining centres.
6. Computer game centres and gyms for children and the youth, which are located on the lower or upper floors.
7. Cinemas: In some commercial centres, specialised cultural venues have been added that attract educated customers. Nevertheless, these audiences do not interact with one another; they participate only within an independent circle of friends or family, consuming a cultural package.
RESULTS
Traditional bazaars had a wider range of social functions than modern bazaars. Their functions in the past can be divided into four categories: economic, political, social-cultural and religious. Modern and traditional bazaars share only the economic function; the other functions of modern bazaars are either completely gone or much less important. The leisure-entertainment function is seen only in modern bazaars; traditional bazaars did not perform it.
These functions (as shown in Table 2) will be discussed further.
Table 2. Functions of traditional and modern bazaars.
Functions of traditional bazaars:
Religious: locating the bazaar next to a religious monument, or building religious places such as mosques and various religious centres within the bazaar.
Social-cultural: 1. A place to meet and to transfer customs, traditions, ideas, news and manners of social behaviour. 2. A place for mourning, national and religious celebrations and so on.
Recreational/leisure: this function was less important in traditional bazaars.
Functions of modern bazaars:
Economic: response to economic needs.
Recreational/leisure: 1. Building places for children's play and entertainment for other people. 2. Building places such as coffee shops and restaurants for leisure and relaxation in shopping malls.
Economic Function
This function is known as the main function of the bazaar as an economic organisation, since bazaars have always been the main business centre of the city: they were places for buying and exchanging life's necessary and unnecessary goods, and people in the past met most of their livelihood needs in bazaars. This can be considered the function shared by both traditional and modern bazaars.
Social-Cultural Function
This function has two aspects, conceptual and visual. The visual aspect concerns the bazaar's structural architecture, which represents the rich culture of the past and through which the image of past society and culture is formed. The structure of the bazaar includes different parts: the rasteh (bazaar street), to which all adjunct lanes connect perpendicularly; hallways, which resemble caravanserais or shopping centres and are connected to the main and adjunct lanes; and other parts, such as the hammam (public bathhouse), zourkhaneh, qahuwakhana (coffeehouse), madrasa (school) and masjid (mosque). All of these form the background of the conceptual aspect of this function and represent a special kind of communicative space between the people and those working in the bazaars, and generally among the many circles of people who interacted in these places every day and helped develop unity among the people of the society. In fact, this function is the same as what Habermas interprets as the public sphere: in the 19th century in Europe, different people, from intellectuals to ordinary people, gathered in public places such as cafés and canteens to discuss the important events of the day and to debate rationally the good or ill of society (Outhwaite, 2009).
Political Function
Government has always played an important role in bazaars in different ways. Whether in harmony with religion or not, bazaars were in many cases constructed and expanded next to government citadels and alongside their entry gates. Throughout history, bazaars and their services have held such power that they have always mattered to the rulers and governments of the time. Therefore, in almost all of the old cities of Iran, government and political centres were constructed near bazaars (Pourahmad, 1997).
Religious Function
Since its formation, the bazaar has had such strong relations with religion that among the most important places in the bazaar were mosques, including the Shah (Imam) Mosque, seminary schools, and venues where religious rituals such as celebrations and mourning were held. Moreover, the businessmen had strong relations with the clergy: businessmen who wanted to benefit more from their economic activities needed high status, both religious and social. First, they had to have halal income and, to this end, had to pay khums (the one-fifth tax) and zakat, so that the ulama (the clergy) would confirm that their property was halal. Second, through significant socio-religious activities, such as constructing mosques and schools and going on pilgrimage to Mecca, they would present themselves as devout persons. Marriage between the families of businessmen and clergy would further strengthen these relations. All of this resulted in a two-way relation: reputable businessmen gained legitimacy from the clergy and their businesses boomed, while the ulama could meet their financial needs through the businessmen (Kamali, 2018).
The Recreation-Leisure Function
This function is specific to modern bazaars and business complexes; in traditional bazaars this role was not as important. In the architecture of modern complexes, the construction of places for children to play and of cafés and restaurants shows that bazaars, apart from economic needs, now partially meet customers' leisure needs too. This helps customers relax, get away from the pressures of the working day, and spend pleasant, happy hours with their families. Moreover, the concept of shopping is nowadays itself of a leisure nature and helps pass free time: in the past most people went to bazaars to meet their needs, whereas today many people buy more than they need. However, modern bazaars, as new public arenas, have diminished the prominent role of the traditional bazaar and brought new relationships along with a new lifestyle. The role of the traditional bazaar in giving cities their identity has also diminished. If, in the recent past, every city was known for its traditional bazaar, whose architecture gave the city a distinct identity, now dozens of shopping malls with modern architecture have symbolically replaced the traditional and local architecture of the bazaars. The traditional relations that dominated the bazaar and spread throughout the city have also changed: with most of its activities changing, the bazaar has undergone a transfer from a specific location to locations that change over time. As a result of these developments, the bazaars of Iranian cities have had different destinies: 1. Some of these bazaars, such as the Semnan Bazaar, were worn out by isolation and lack of adequate access and gradually moved out to the sidewalks; the erosion of the empty spaces destroyed them.
2. Some of them, such as the Ardabil Bazaar, were cut through by new streets, and the remaining parts changed their activities according to the new situation in the city.
3. Some bazaars, such as the Isfahan Bazaar, have retained their original identity thanks to the region's richness in domestic industrial production and have to some extent been protected from the influence of the new situation.
Today, like old neighbourhoods, alleys and urban spaces, bazaars cannot meet today's consumerist and fashion-oriented needs. Moreover, extensive physical changes, such as the addition of parks, restaurants, stadiums, museums and cultural centres, have also occurred in shopping malls, and the changes in the fabric of contemporary shopping malls can be read as changes in lifestyle and social relations.
In fact, fundamental changes took place in the new era in the system of values and the coherence of lifestyle, which made shopping an independent form of entertainment and turned its location from a mediator of the important religious and political centres into a recreational centre and a means of following fashion and satisfying the need to be up-to-date and global. This is, on the one hand, the result of growth in social welfare and, on the other hand, a shortcoming in social solidarity.
CONCLUSION
Economic and social transformations in today's Iranian urban life have affected the general situation of bazaars. These transformations, resulting from population growth, street expansion and new street shops, have removed the traditional bazaar's status as the only business centre in cities; nevertheless, they have never been an obstacle to the bazaar's growth. During the past two decades, the physical expansion of Iranian cities, on the one hand, and social and political growth in cities, accompanied by new ideas in urban engineering and urbanisation, on the other, have led to the formation of new roles and functions at all urban, local and regional levels, with particular parts of the city assigned to business functions. These functions took shape through the construction of passages and big modern business complexes, which gradually displaced traditional bazaars as the main business centres by providing numerous luxury goods to meet people's needs. Experts cite different reasons for the traditional bazaars' relative decline; the main ones are held to be the change in urban lifestyles, the expansion of chain business complexes and online shopping.
Analysing Iranian bazaars as public environments, where most Iranian social events took place, reveals many undeniable visual, conceptual and functional features. These social features imposed a special discipline on bazaars, which changed people's view of bazaars as merely a business environment and turned them into great, vast environments for social activities. According to this perspective, the bazaar and its participants (people, businessmen) form a united entity and share a social life. Apart from the fact that the core and main parts of bazaars play an important role in the formation of this social life, the physical appearance of the bazaar inevitably conforms to the Iranian social lifestyle and activities.
Since bazaars in Iran are known as one of the most important urban factors in social interaction and one of the most important and fundamental places to gather and enjoy, increasing social interaction in modern business centres, as the new generation of bazaars, is one of the most important current social problems and needs further discussion. Thus, five important principles of traditional Iranian architecture, including human-scale architecture, self-sufficiency, avoiding non-essentials, inward-looking design and structural rigidity, are discussed as follows: 1. Physical patterns of the bazaar in the past, and the feasibility of applying these physical patterns in modern business centres: though traditional bazaars had different lanes and areas for each guild, the businessmen were encouraged to coexist and maintain unity rather than engage in false competition in business.
2. The harmony between supply and demand reduced the possibility of false competition.
3. The higher religious organisations in the bazaar were involved in encouraging unity among the businessmen and, on special religious or political occasions, directed people in a single procession.
4. To encourage ethics, a set of rules was defined for each field by its veterans, rather than by forums and syndicates, and everyone followed the rules; in case of a complaint, the dispute was settled by a mediator.
5. This controlled, friendly and public environment led to relative security and hindered abuse or robbery, whether by friends or strangers. This was achieved through the inward-looking physical form and the cohesive arrangement of lanes combined with religious, educational and even residential centres. Today, the great separation between living and business centres, and the divorce of these centres from other needs, has created an unhealthy competition among businessmen, which has replaced that friendly environment with tension.
It seems that breaking passages and big bazaars into smaller regional bazaars, together with the boom of cyberspace alongside real life, would not only reduce harmful transportation in the city but also compensate for the decrease in people's direct interaction caused by cyberspace. It could thus provide opportunities for high-quality, low-risk interactions near residential areas, combining cultural, entertainment and religious activities.
Long-term exposure to polypharmacy impairs cognitive functions in young adult female mice
The potential harmful effects of polypharmacy (concurrent use of 5 or more drugs) are difficult to investigate in an experimental design in humans. Moreover, there is a lack of knowledge on sex-specific differences on the outcomes of multiple-drug use. The present study aims to investigate the effects of an eight-week exposure to a regimen of five different medications (metoprolol, paracetamol, aspirin, simvastatin and citalopram) in young adult female mice. Polypharmacy-treated animals showed significant impairment in object recognition and fear associated contextual memory, together with a significant reduction of certain hippocampal proteins involved in pathways necessary for the consolidation of these types of memories, compared to animals with standard diet. The impairments in explorative behavior and spatial memory that we reported previously in young adult male mice administered the same polypharmacy regimen were not observed in females in the current study. Therefore, the same combination of medications induced different negative outcomes in young adult male and female mice, causing a significant deficit in non-spatial memory in female animals. Overall, this study strongly supports the importance of considering sex-specific differences in designing safer and targeted multiple-drug therapies.
Nevertheless, there is little experimental data about the potentially negative effects caused by polypharmacy and about the mechanisms behind these effects [11]. Drug safety studies often exclude older patients and are limited to monotherapies. Another poorly investigated aspect is the influence of sex on drug use and response [12]. This is of particular importance in older adults, since they often have altered pharmacokinetics, pharmacodynamics, efficacy, and toxicity [13,14], which have been shown to differ between men and women [15-18]. Therefore, sex represents a relevant factor to take into account when investigating adverse events related to polypharmacy.
We recently performed a study to explore the effects of long-term concomitant administration of five different medications on locomotion, anxiety, and cognition in mice [19]. The drugs included in the polypharmacy treatment were those most frequently used by older adults in Sweden [20] and are among the most frequently used drug classes in other European countries as well [21-24]. Importantly, we observed that polypharmacy impaired exploration and cognitive functions in young adult wildtype male mice [19].
In this study, female mice were administered the same polypharmacy regimen, containing aspirin, paracetamol, simvastatin, metoprolol and citalopram, with the aim of investigating the effects of multiple medications in female animals and allowing comparison with our previous study in young adult male mice [19]. Animals were fed the polypharmacy diet and then assessed for locomotor function and coordination, cognition, and anxiety-like behavior. Hippocampal tissues were analyzed to measure changes in protein markers that could be related to the behavioral outcomes observed in polypharmacy mice. The following parameters were monitored as basic health indices: food/water intake, body weight (BW), and serum creatinine and alanine aminotransferase (ALT) levels.
Treatment tolerance and health parameters
The treatment was well tolerated by the animals, and no increase in mortality was observed in polypharmacy-fed mice compared to controls: all mice reached the end of the study in good health. Polypharmacy-fed mice showed a significant BW gain during the study period, while controls did not (week 1 vs week 8: control group BW = 26 ± 1.2 g vs 28 ± 1.3 g, p = 0.09; polypharmacy group BW = 26 ± 0.7 g vs 30 ± 1.1 g, p = 0.001, two-way ANOVA repeated measurements; Figure 1A). No significant differences in mean food or water intake (FI, WI) were found between the two groups over the study period (Figure 1B, 1C), nor in the weekly averages (Figure 1D, 1E). However, both control and treated animals showed a significant reduction of FI during the last 4 weeks (control group FI, week 3: 4.5 ± 0.2 g/day/mouse vs week 8: 2.5 ± 0.1 g/day/mouse, p = 0.02; polypharmacy group FI, week 3: 3.7 ± 0.1 g/day/mouse vs week 8: 2.4 ± 0.1 g/day/mouse, p = 0.02, two-way ANOVA repeated measurements; Figure 1D). The average FI was very close to the estimated one; therefore, the drug concentrations taken by polypharmacy animals corresponded to the expected ones. Only in the last week was the registered FI (2.4 ± 0.1 g, polypharmacy group, Figure 1D) about 20% less than anticipated, meaning that the final drug dosage consumed was 80 mg/kg/day metoprolol, 80 mg/kg/day paracetamol, 16 mg/kg/day aspirin, 8 mg/kg/day simvastatin and 8 mg/kg/day citalopram, which is within the human therapeutic dose range for these medications [19].
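As a back-of-envelope check of how dose scales with food intake, the sketch below converts measured FI and BW into mg/kg/day per drug, assuming the drugs are mixed into the chow at fixed concentrations; the concentration values are hypothetical, back-calculated from the doses reported above rather than taken from the study protocol.

```python
# Achieved drug dose from measured food intake, assuming each drug is mixed
# into the chow at a fixed concentration (mg per g of diet). Concentrations
# are hypothetical, back-calculated from the reported doses.

DIET_MG_PER_G = {
    "metoprolol": 1.0,
    "paracetamol": 1.0,
    "aspirin": 0.2,
    "simvastatin": 0.1,
    "citalopram": 0.1,
}

def achieved_dose_mg_per_kg(food_intake_g_per_day: float,
                            body_weight_g: float) -> dict:
    """Dose actually ingested, in mg/kg/day, for each drug in the diet."""
    bw_kg = body_weight_g / 1000.0
    return {drug: food_intake_g_per_day * conc / bw_kg
            for drug, conc in DIET_MG_PER_G.items()}

# Week-8 values from the text: FI ~2.4 g/day/mouse, BW ~30 g.
print(achieved_dose_mg_per_kg(2.4, 30.0))
# -> metoprolol/paracetamol ~80, aspirin ~16, simvastatin/citalopram ~8 mg/kg/day
```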
As markers of renal and hepatic health status, we measured serum levels of creatinine and ALT at the end of the treatment. The dot histograms in Figure 1F illustrate that there were no significant differences in the two markers between control and multiple-drug-administered mice.
Polypharmacy diet did not affect locomotor activity and anxiety-like behavior
We used open field (OF) locomotor cages to study general locomotor activity over a 30-min free exploration trial. Horizontal and vertical activity were analyzed over the total test duration and in 10-min intervals in order to monitor the habituation phase and the subsequent exploratory patterns. The treatment altered neither the horizontal nor the rearing activity analyzed per time interval (Figure 2A), nor the total locomotion (horizontal activity: 241.6 ± 24 m vs 182.7 ± 13 m; rearing: 40.9 ± 6 s vs 32.4 ± 13 s, control vs polypharmacy group respectively, data not shown). The map in Figure 2B illustrates that control and polypharmacy animals showed a similar pattern of movement in the locomotor cages.
Motor coordination and forelimb strength were assessed through the Rotarod and grip strength tasks. The analysis of latency to fall over the three Rotarod test trials showed that control mice significantly improved their performance on the rotor in trial 3 compared to trial 1, while polypharmacy mice did not (Figure 2C). Despite this, there were no significant differences between the two groups. The outcomes of the grip strength test did not highlight relevant differences between control and treated animals in the forelimb force measured during grid pulling (Figure 2D).
To explore whether the multiple-drug regimen could affect anxiety-like behavior, we performed the Dark/Light Box (DLB) and Elevated Plus Maze (EPM) tests. The results of the DLB experiment showed that polypharmacy mice spent a similar time in, and showed a similar latency to enter, the lit compartment as controls (Figure 2E). Likewise, the EPM task did not reveal significant differences in the time the animals spent exploring the closed arms of the maze (about 80% of the total trial duration: dot histogram in Figure 2F and heatmaps in Figure 2G).
Polypharmacy regimen impaired object recognition and fear associated contextual memory
Mice underwent cognitive tasks to investigate the effect of the polypharmacy treatment on different types of memory and learning. To study spatial working memory, we ran the Y Maze test. Animals from both groups performed a similar number of arm entries and a percentage of possible alternations above 50 on average (Figure 3A), suggesting that the polypharmacy regimen in young adult female mice does not affect spatial working memory.
[Figure 1 legend, continued: (D, E) weekly average of food and water intake over the eight weeks of treatment; (F) serum creatinine and ALT levels. Ctrl = control, Poly = polypharmacy, n = 10 per group; mean ± SEM.]
Non-spatial memory was investigated via the Novel Object Recognition (NOR) test. On day 3, control mice exhibited a clear preference for exploring the novel object over the familiar one. Conversely, polypharmacy animals did not discriminate between the familiar and the novel object, as they spent a similar time exploring both (Figure 3B, right panel). This was confirmed by the calculation of the discrimination index, which was significantly higher for controls than for treated mice, the latter presenting an index close to 0 on average (Figure 3B, left dot plot). The heatmaps in Figure 3C show by color that control animals spent more time on the novel object (in red), while polypharmacy mice stayed similarly on both. The outcomes of the NOR test suggest that the multiple-medication regimen impaired non-spatial object recognition memory.
[Figure 3 legend: (B) NOR test, day 3: discrimination index (a score above 0 indicates more exploration of the novel object; *p<0.001, t-Student test) and average time spent exploring the two objects (***p<0.001, two-way ANOVA repeated measurements); control mice spent about double the time on the new object, whereas treated animals did not differentiate between the two. (C) Heatmaps of the area explored around the objects on day 3 (most visited areas in red); Fam and Nov = familiar and novel object. (D) Contextual and cue FC test: freezing percentage on day 1 (habituation) vs day 2 (context test), and before vs during the cue (sound); *p<0.05, **p<0.01, ***p<0.001, two-way ANOVA repeated measurements. Ctrl = control, Poly = polypharmacy, n = 10 per group; mean ± SEM.]
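For illustration, the sketch below computes the two behavioral indices discussed above; both formulas are common conventions assumed here, not definitions quoted from the study beyond the note that a discrimination index above 0 indicates preference for the novel object.

```python
# Spontaneous alternation % for the Y Maze and discrimination index for the
# NOR test; both formulas are conventional assumptions, not the study's code.

def alternation_percent(arm_entries: list[str]) -> float:
    """% of possible alternations: triplets of three different arms."""
    triplets = [set(arm_entries[i:i + 3]) for i in range(len(arm_entries) - 2)]
    alternations = sum(len(t) == 3 for t in triplets)
    return 100.0 * alternations / (len(arm_entries) - 2)

def discrimination_index(t_novel_s: float, t_familiar_s: float) -> float:
    """(novel - familiar) / (novel + familiar); ~0 means no discrimination."""
    return (t_novel_s - t_familiar_s) / (t_novel_s + t_familiar_s)

print(alternation_percent(list("ABCACBABCA")))   # chance level is ~50 %
print(discrimination_index(20.0, 10.0))           # control-like: ~0.33
print(discrimination_index(12.0, 11.5))           # polypharmacy-like: ~0.02
```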
The fear conditioning (FC) test was performed to assess fear-associated memory and learning. Mice were subjected to an auditory stimulus (cue) paired with a foot shock on day 1 and then tested for context and cue memory on days 2 and 3, respectively. The freezing % recorded during the habituation phase of day 1 (as a measure of baseline freezing) was compared to the freezing % of day 2 to evaluate context memory. To assess cue memory, we measured the freezing % on day 3 before and during the sound stimulus. During the context test on day 2, both control and treated animals showed significantly increased freezing behavior compared to day 1 (Figure 3D, left graph). However, control mice responded to a greater extent to context recognition, showing a significantly higher freezing % than the polypharmacy mice (Figure 3D, left plot: **p=0.01, two-way ANOVA repeated measurements), indicating that the multi-medication treatment may affect FC contextual memory in young adult female mice. On day 3, we measured the freezing % before and during delivery of the acoustic stimulus; mice from both the control and polypharmacy groups expressed significantly stronger freezing behavior during cue application than before it (Figure 3D, right graph), suggesting that both groups were able to associate the auditory cue with the adverse stimulus (the foot shock).
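As an illustration of how a freezing percentage of this kind can be derived from tracking data, the sketch below scores immobility bouts from a motion-index trace; the sampling rate, motion threshold, and minimum bout duration are illustrative assumptions, not the parameters of the scoring system used in the study.

```python
# Score freezing as the percent of trial time spent in immobility bouts of
# at least a minimum duration; threshold and rates are illustrative.
import numpy as np

def freezing_percent(motion: np.ndarray, fs_hz: float,
                     threshold: float, min_bout_s: float = 1.0) -> float:
    """Percent of the trial spent in freezing bouts of at least min_bout_s."""
    frozen = motion < threshold          # per-sample immobility
    min_len = int(min_bout_s * fs_hz)
    total_frozen = 0
    run = 0
    for f in frozen:
        run = run + 1 if f else 0
        if run == min_len:               # bout just became long enough:
            total_frozen += min_len      # count the whole run so far
        elif run > min_len:
            total_frozen += 1            # then one sample at a time
    return 100.0 * total_frozen / len(frozen)

# Fake 5-min trial at 10 Hz with one 50-s freezing bout.
trace = np.ones(3000)
trace[1000:1500] = 0.0
print(freezing_percent(trace, fs_hz=10.0, threshold=0.5))   # -> ~16.7
```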
Polypharmacy reduced the levels of memory-related proteins in hippocampus
Western blotting experiments were performed to investigate whether the treatment could lead to changes in the levels of hippocampal proteins involved in regulating synaptic plasticity and memory formation. We first analyzed the expression of synaptic N-methyl-D-aspartate (NMDA) receptors (subunits NMDAR1 and phospho-NMDAR2A) and postsynaptic density protein 95 (PSD95), which are known to play a key role in synaptic transmission and potentiation and were found to be downregulated in our previous study on male mice [19]. Interestingly, we did not observe changes in the hippocampal levels of these markers between control and polypharmacy animals, as illustrated by the immunoblots and histograms in Figure 4A, 4B. Since the data from the NOR test indicated a clear memory impairment in treated mice, we explored markers of specific signaling pathways implicated in recognition memory. In the hippocampi of multiple-medication-fed animals we found a decrease of total cAMP response element-binding protein (CREB) levels compared to controls, although the ratio of phospho/total CREB remained unchanged between the two groups (Figure 4C, 4D). The analysis of Ca2+/calmodulin-dependent protein kinase II (CaMKII) revealed a significant reduction of phosphorylated CaMKII in polypharmacy mice, as shown by the ratio of phospho/total protein (Figure 4E, 4F). The brain-derived neurotrophic factor (BDNF)-tyrosine kinase B (TrkB) signaling pathway is another important system that regulates synaptic plasticity and is involved in recognition memory consolidation [25] and fear conditioning learning [26]. We quantified the levels of TrkB and pro-BDNF and observed a significant downregulation of TrkB receptor expression in treated animals compared to controls (Figure 4G, 4H, left histogram). The levels of hippocampal pro-BDNF in polypharmacy mice were on average 25% lower than control levels (right plot of Figure 4H), albeit not significantly (p=0.12, t-Student test).
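For readers unfamiliar with how such ratios are derived, the sketch below shows a typical densitometry normalization: a loading-normalized fold change for total protein, and a phospho/total ratio computed within the same lane; the band values and the choice of loading control are hypothetical, not taken from our protocol.

```python
# Typical western blot densitometry: total protein is normalized to a
# loading control and expressed as fold change vs controls, while
# phosphorylation is expressed as a phospho/total ratio within one lane.

def fold_change(band: float, loading: float,
                ctrl_band: float, ctrl_loading: float) -> float:
    """Loading-normalized band intensity relative to the control group."""
    return (band / loading) / (ctrl_band / ctrl_loading)

def phospho_ratio(p_band: float, total_band: float) -> float:
    """Phosphorylated over total protein in the same lane (loading cancels)."""
    return p_band / total_band

# e.g. TrkB in a polypharmacy lane vs a control lane (arbitrary units)
print(fold_change(band=700.0, loading=1000.0,
                  ctrl_band=1000.0, ctrl_loading=1000.0))   # ~0.7 -> lower
print(phospho_ratio(850.0, 1200.0))                          # p-CaMKII / CaMKII
```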
DISCUSSION
In this study we performed a preclinical investigation on the adverse events related to polypharmacy on locomotion, anxiety, and cognition in female animals. Previous studies reporting negative outcomes associated with multiple-drug use on animal models, including our recent study [19], were performed primarily in males [27,28], except for one recently published study on physical functions in C57BL/6 male and female mice [29]. In the elderly population, women are more frequently exposed to polypharmacy and observational studies have reported a higher risk of receiving potentially inappropriate prescriptions in women compared to men [12,30,31].
In the current study we found that the polypharmacy treatment significantly impaired object recognition and affected fear-associated contextual memory, together with a significant decrease of several hippocampal proteins involved in pathways regulating the formation and consolidation of these types of memories. Notably, we did not observe the impairments in explorative behavior and spatial memory that we previously reported in young adult male mice administered the same polypharmacy diet. We believe that the results from this study give interesting insights into possible sex-specific adverse effects of multiple-drug use and support the need for more targeted multi-medication therapies that consider sex-related differences.
Animals were administered the polypharmacy regimen for eight weeks and tested in behavioral experiments during the last four weeks of treatment. The diet was well tolerated: serum markers of hepatic and renal function did not differ between control and treated animals, nor did we observe signs of illness among the mice. The decrease in FI observed during the last four weeks in both the control and treated groups coincides with the behavioral assessment period, which may induce stress in mice, as we have previously observed [19]. The average FI was similar between the two groups over the study period, except for a lower FI baseline during the first week in the polypharmacy animals compared to controls. Despite this, we observed a significant increase of BW in the polypharmacy-fed mice compared to controls. This BW gain might be due to a metabolic effect caused by one or more of the specific drugs contained in the polypharmacy diet. Mild weight gain can manifest as a side effect of some beta blockers, including metoprolol [32], and of antidepressants like citalopram [28,33]. It might also be due to the presence of simvastatin in the drug combination: use of statins in adults has been associated with an increase in body mass index in comparison with statin nonusers [34]. Interestingly, this increase in BW was not reported in male mice administered the same multi-medication therapy [19]. In this regard, it is relevant to note that side effects of statin use, like muscle pain, have been found to affect women with a higher prevalence than men, together with a lower efficacy of the lipid-lowering action in women than in men [35,36]. This supports the fact that drug outcomes may vary with sex.
When behavior was assessed, no differences between treatment groups were found in the locomotor and exploratory patterns recorded in OF cages, indicating that polypharmacy administration in adult female mice did not affect exploration or total locomotor activity. This finding differs from our data in young adult male mice [19], indicating that female mice could be more resilient to these effects than males treated with the same polypharmacy combination at young adult age. Huizer-Pajkos et al. reported that a shorter treatment of 4 weeks in young male mice did not lead to impairments in the OF [27], while data from Mach et al. report a reduction of distance traveled in the OF after 12 weeks of polypharmacy treatment in middle-aged male mice, as well as after 12 months of low drug burden index and high drug burden index polypharmacy treatments in aging male mice [28]. Interestingly, functional outcomes for motor coordination and balance on the Rotarod did not differ between control and polypharmacy female mice. However, while the control group improved significantly over the 3 trials of the Rotarod, polypharmacy mice showed no improvement and unchanged latencies to fall across trials, suggesting that multi-medication in female mice could begin to affect coordination and balance at young adult age. Previous studies in aging male mice reported that performance in the Rotarod test was negatively affected by polypharmacy treatment [27]. Moreover, the observed lack of improvement during the Rotarod task may also be caused by decreased motor learning in the polypharmacy group rather than by a coordination deficit alone [37]. The OF and Rotarod tests yielded different outcomes in adult males and females treated with our selected drug combination, supporting possible sex-specific adverse effects of multi-medication therapies on locomotor functions. A recent study in young and old male and female C57BL/6 mice found no significant difference between young males and females in baseline grip strength, motor coordination, gait speed, distance travelled in the open field, anxiety or nesting [29], suggesting that the sex-specific outcomes we observe here are not related to baseline differences in behavioral performance between sexes.
Our previous study was the first to investigate the effects of polypharmacy on cognitive functions in mice; we reported that a combination of different medications had a negative effect on spatial working memory in the Y Maze and reduced hippocampal postsynaptic proteins already at young age [19]. Interestingly, when the Y Maze test was performed in female mice, no differences were found between groups. These results were further confirmed by western blot analyses of proteins mainly involved in the formation and consolidation of spatial memory, such as NMDA receptors and PSD95: female mice administered polypharmacy did not show a reduction of these markers in the hippocampus compared to controls. It is important to mention that NMDAR1 and NMDAR2A/B expression has been reported to be higher in the hippocampus and in postsynaptic density fractions of adult female mice than in those of males [38]. This aspect may influence the sex-specific effect of polypharmacy on postsynaptic protein levels observed in our studies.
Notably, the present study shows that multi-medication therapy in female mice impaired object recognition memory, measured by the ability to remember a previously encountered object and therefore to distinguish a novel object from a familiar one in the NOR test. This type of memory was not affected by the same treatment in male mice [19]. Several studies have reported that a functional hippocampus is essential for the formation of recognition memory in rodents [39,40]. Within the hippocampus, several signaling cascades have been shown to be critical for the consolidation of this type of memory. Specifically, CREB inactivation in CA1, and CaMKII inactivation in mutant mice, have been shown to impair long-term object recognition memory [41,42]. In the hippocampi of polypharmacy female mice, we found a decrease in the total levels of CREB as well as a decrease in the phosphorylation of CaMKII (p-CaMKII). These results are consistent with the behavioral findings in the NOR test.
In addition to the Y Maze and NOR, we performed FC in order to assess fear-associated memory. In this test, control and polypharmacy mice learned to associate both the context and the cue with the adverse event of the foot shock. However, it must be pointed out that during the context test polypharmacy mice showed a significantly lower freezing time than control animals. This result shows that polypharmacy mice performed worse in associating the foot shock with the context, suggesting that the multi-medication treatment affected the consolidation of fear-associated contextual memory in female mice at young age. The FC deficits observed in treated mice are not as consistent as those in the NOR test, and we may hypothesize that aging would lead to greater deficits in fear-associated memories caused by the current combination of multiple drugs in female mice. This hypothesis is supported by western blot analyses revealing a significant reduction of TrkB levels in the hippocampus of treated females compared to controls. The BDNF-TrkB pathway is a ligand-receptor system that underlies synaptic plasticity and has been shown to be necessary for the acquisition and consolidation of fear conditioning in different brain regions, including the hippocampus [43-46]. While we observed a decrease in TrkB receptors, we did not observe a significant reduction in BDNF levels in the hippocampus of treated mice; it is possible that a longer multi-medication treatment, as well as aging, would eventually lead to a greater reduction of BDNF in female mice, followed by a greater deficit in the FC test. Additionally, phosphorylation of CREB mediated by CaMKII may also affect BDNF levels [47]. In this context, it is important to mention that a large body of evidence indicates that CaMKII-CREB signaling participates in estrogen receptor signaling in the brain [48,49]. Brain estrogen signaling has a neuroprotective role and is essential for synaptic function. The multi-medication therapy proposed in this study could affect estrogen signaling, further supporting the sex differences in outcomes such as the different types of memory affected.
To our knowledge, the individual drugs composing our polypharmacy regimen have not been reported to induce toxic effects in mice [27,50-54]. This suggests that it is the combination of the different medications used in this study that causes the negative outcomes observed. However, only one out of the six pre-clinical studies cited above was conducted in both male and female mice, while the rest used only male animals. This makes the data on the effects of polypharmacy in female mice more difficult to interpret. For instance, previous research on monotherapies in rodents did report different results between male and female animals: a recent study on treatments for post-traumatic stress disorder found differential effects of citalopram on fear-associated memory in female mice compared to males [55]; similarly, different outcomes were found after administration of metoprolol: this beta-blocker impaired performance in the Morris Water Maze and FC tests in males of the APP Alzheimer's disease mouse model and in wild-types, but not in females [56]. Aspirin was reported to increase the lifespan of male mice but not of females [57], which was attributed to different drug metabolism and disposition between the sexes. These observations support the idea that more targeted research is necessary to refine appropriate therapies taking sex-specificity into account.
There are some limitations to consider in this study. The female and male experiments were not conducted simultaneously, which did not allow a statistical comparison between male and female groups. While we replicated the laboratory conditions, there may have been experimental differences that affected the behavioral outcomes. The study was conducted at young adult age. Although there is some evidence of multiple-drug use in young and adult subjects [58], and its prevalence has been increasing in younger age groups over time [59], polypharmacy is more frequent in old age. Therefore, the use of aged mice would be of great interest for assessing the effects of polypharmacy in the older population. To do so, an optimization of the experimental design will be necessary for future studies in old animals. A recent study on the effects of a different polypharmacy regimen on physical function in young and old male and female mice demonstrated a marked increase in susceptibility to functional impairment in old age and a greater impact on grip strength in males than in females [29]. The investigation of the impact of age and sex on susceptibility to the effects of polypharmacy on cognitive function can be the subject of future studies.
Taken together, this study highlights the importance of investigating the possible adverse effects of multiple-medication treatments in female mouse models. This is one of the first reports of the effects of polypharmacy in female mice and the first to study its cognitive effects. The fact that polypharmacy induces strong impairments in different types of memory and decreases synaptic proteins already at a young age is significant and supports the importance of further exploring the adverse effects of the multiple-drug regimen in old mice. The results from this study will therefore be useful for designing and interpreting future studies in aging animals. The same combination of medications, including simvastatin, metoprolol, aspirin, paracetamol, and citalopram, induced clearly distinct effects in young adult male and female mice, findings that may translate to humans. In sum, this study strongly supports the importance of considering sex-specific differences in designing safer and targeted multiple-drug therapies for older adults.
Animals
In this study we used wild-type C57BL/6J female mice, purchased from Janvier Labs (France) at the age of 8 weeks and then housed in groups of five per cage in our animal facility (Karolinska Institutet, Solna, Sweden), with a 12-h light/dark cycle, ad libitum access to food and water, and standard enrichment (cardboard tunnels, wooden sticks, and tissue paper). A control and a polypharmacy group of 10 animals each were randomly constituted (5 mice per cage) when the mice were 5.5 months old. The control group was fed a standard rodent diet (control diet): 18.5% protein, 5.5% oils and fats, 4.5% fiber (Teklad 2918 diet, Research Diet Inc., NJ, USA), while the polypharmacy group received the same diet supplemented with the drugs (polypharmacy diet).
Polypharmacy treatment and study plan
The drugs for the polypharmacy regimen were chosen from among the most frequently used medications in the older population in Sweden [20]: metoprolol (100 mg/kg/day; Sigma-Aldrich, USA) [60], paracetamol (acetaminophen, 100 mg/kg/day; Sigma-Aldrich, USA) [61], aspirin (acetylsalicylic acid, 20 mg/kg/day; Sigma-Aldrich, USA) [54], simvastatin (10 mg/kg/day; Selleck Chemicals, USA) [62] and citalopram (10 mg/kg/day; Selleck Chemicals, USA) [63]. Paracetamol was selected as the analgesic, as it is the second most frequently prescribed drug to older adults with polypharmacy in Sweden [20]. Many older adults have chronic pain, and paracetamol is considered first-line treatment for acute and chronic pain in older people, having a more favorable safety profile than nonsteroidal anti-inflammatory drugs (NSAIDs) and opioids [64]. Aspirin was included in the regimen for its antiplatelet properties, used for the prevention of cardiovascular and cerebrovascular disease; low-dose aspirin is among the three most commonly used drugs in older adults in Sweden [20].
Compound dosages per kg body weight (BW) were selected by translating the human therapeutic range into the mouse-equivalent range and according to previous studies in which they did not show toxicity in rodents, as explained in detail in our polypharmacy pilot study in young wild-type male mice [19]. Taking into account some variability between the estimated food intake (FI) and the real one, we decided to keep the drug concentrations towards the higher end of the therapeutic dose range, with the exception of drugs with potential dose-dependent toxicity in rodents (i.e. paracetamol [65,66]). Drug concentrations per kg of diet were calculated based on an average FI of 0.1 ± 0.2 g food/g mouse/day, as previously observed for the C57BL/6J mouse strain in our animal facility and in the literature [19,67].
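As a rough illustration of this dose-to-chow translation, the sketch below converts the target doses into diet concentrations using the stated average FI; the helper function and printed values are ours and only mirror the arithmetic described above, not the study protocol.

```python
# Sketch of the dose-to-chow translation described above (our own helper,
# not part of the study protocol). Uses the stated mean food intake for
# C57BL/6J mice; 0.1 g food/g BW/day equals 0.1 kg food/kg BW/day, so the
# division below yields mg of drug per kg of diet.

FOOD_INTAKE = 0.1  # g food / g mouse / day, average reported in [19, 67]

def chow_concentration(dose_mg_per_kg_bw: float,
                       food_intake: float = FOOD_INTAKE) -> float:
    """Convert a target dose (mg/kg BW/day) into a chow concentration (mg/kg diet)."""
    return dose_mg_per_kg_bw / food_intake

# Doses used in the polypharmacy regimen (mg/kg BW/day)
regimen = {"metoprolol": 100, "paracetamol": 100, "aspirin": 20,
           "simvastatin": 10, "citalopram": 10}

for drug, dose in regimen.items():
    print(f"{drug}: {chow_concentration(dose):.0f} mg per kg of diet")
```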
Following our pilot study design [19], the animals were assessed in behavioral tests after four weeks of treatment, at 6.5 months of age, while continuing the polypharmacy regimen for another four weeks, for a total duration of eight weeks. Over the study period we monitored the following parameters weekly: BW, FI (g food/mouse/day) and water intake (WI; ml water/mouse/day). Every week the chow was replaced with fresh food. At the end of the two-month treatment period the animals were sacrificed by cervical dislocation and trunk blood was collected. After brain dissection, tissues were collected, immediately snap frozen on dry ice, and stored at -80 °C until further use.
Ethical statement
All behavioral experiments were run in accordance with national animal care guidelines and were approved by the local committee of Karolinska Institutet and the Swedish Board of Agriculture (ethical permit ID 827). All possible efforts were made to reduce any suffering or distress to the animals.
Behavioral tests
Mice were evaluated with the following behavioral tests after four weeks of treatment, at 6.5 months of age: Open Field (OF), Rotarod, Grip Strength, Elevated Plus Maze (EPM), Dark/Light Box (DLB), Y Maze, Novel Object Recognition (NOR) and Fear Conditioning (FC). All the experiments were run between 9:00 and 14:00 by a female researcher (FE), with breaks of one to six days after the tests considered more stressful or physically demanding, to allow the animals to recover. The order of the tasks was chosen according to the level of stress caused by each protocol, starting from the least stressful test: OF, EPM, DLB, Y Maze, NOR, Grip Strength, Rotarod, FC [68]. Mice were allowed to acclimatize to the experimental room for 45 minutes prior to starting each test. The experiments were performed under white light. All the apparatuses were cleaned with a 70% ethanol solution before starting each test and between animals.
The OF activity in locomotor cages, Rotarod, EPM, DLB, Y Maze and NOR test protocols were run as recently described in detail [19]. Data from the EPM, Y Maze and NOR experiments were acquired with a camera installed above the apparatus/boxes, connected to the video-tracking software EthoVision XT 15 (Noldus Information Technology, The Netherlands). The OF and DLB tests were performed using 45 × 45 cm activity cages in which the animal movements were automatically detected as infrared beam interruptions by the TSE ActiMot software (TSE Systems GmbH, Germany). Horizontal and vertical activity in the OF, as well as the latency and the time spent in the light compartment in the DLB test, were analyzed with the same software.
Grip strength test
This test was used to evaluate the forelimb grip strength of the animals. The apparatus consisted of a grid attached to a force transducer which measured the force (in grams) applied by the mouse pulling the grid (Bioseb Instruments) [69,70]. During the pull the mouse was held by the tail by the experimenter, and only pulls using both forepaws were considered. The animals performed three series of three pulls each, with a short resting period (2 minutes) between series. The final grip strength was calculated as the average of the 9 measurements collected over the three series, normalized to BW.
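A minimal sketch of this calculation is shown below, assuming the nine pull forces are available as a list in grams; the function name and example values are illustrative, not output of the Bioseb software.

```python
from statistics import mean

def normalized_grip_strength(pull_forces_g: list[float], body_weight_g: float) -> float:
    """Average of the nine pulls (3 series x 3 pulls, in grams), normalized to BW."""
    assert len(pull_forces_g) == 9, "expected three series of three pulls"
    return mean(pull_forces_g) / body_weight_g

# Example with illustrative values for one mouse (forces in g, BW in g)
print(f"{normalized_grip_strength([92, 88, 95, 90, 87, 93, 91, 89, 94], 24.5):.2f}")
```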
Contextual and cue FC test
This experiment was performed in transparent-walled chambers with a stainless-steel grid floor, enclosed in a soundproof apparatus (TSE Multi Conditioning System, TSE Systems GmbH, Germany). On day 1, mice were allowed to freely explore the context (a 20 × 20 × 40 cm square-based chamber) for 2 minutes (habituation phase) and were subsequently exposed to a conditioned stimulus (55 dB sound at 5000 Hz, 30 s duration) followed by a mild foot shock (0.3 mA, 2 s duration). The sound-shock pairing was repeated three times in total, with a 50-s interval between pairings. On day 2 (after 24 h), mice were returned to the same chamber for a period of 3 minutes to assess contextual fear memory; no sound or shock was given in this session. On day 3, the context was altered to evaluate the animals for cue memory [71]: the square chamber was replaced with a round one (20 cm diameter × 40 cm high) and the grid floor was covered with a black smooth surface. To modify the odor, we cleaned the chamber with hypochlorous water instead of 70% ethanol. The animals were placed in this "new" context and, after 2 minutes of free exploration, they received the sound stimulus (same as on day 1: 55 dB at 5000 Hz) continuously for a further 2 minutes. Freezing behavior (defined as a complete absence of mobility within the same area for >2 seconds) was measured with the TSE Multi Conditioning software. The freezing % recorded during the habituation phase of day 1 (as a measure of baseline freezing) was compared to the freezing % on day 2 to evaluate context memory. To assess cue memory, we measured the freezing % on day 3 before and during the sound stimulus.
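The freezing measure itself was computed by the TSE software; the sketch below only illustrates the definition used above (complete immobility for more than 2 seconds), assuming per-sample immobility flags at a known sampling rate.

```python
def freezing_percent(immobile: list[bool], sample_rate_hz: float,
                     min_bout_s: float = 2.0) -> float:
    """Percentage of session time spent in immobility bouts longer than min_bout_s."""
    min_samples = int(min_bout_s * sample_rate_hz)
    frozen = run = 0
    for flag in immobile + [False]:  # trailing False closes an open run
        if flag:
            run += 1
        else:
            if run > min_samples:    # only bouts strictly longer than 2 s count
                frozen += run
            run = 0
    return 100.0 * frozen / len(immobile)
```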
Blood analysis
Trunk blood was collected right after sacrifice and allowed to clot for 30 min at room temperature, followed by centrifugation at 5000 × g for 10 minutes at 4 °C to collect the serum fraction [19,73]. Serum creatinine and ALT levels were measured using the following assay kits, respectively: DICT-500 (BioAssay Systems) and MAK052 (Sigma-Aldrich). Assays were performed according to the manufacturers' instructions.
Statistical analysis
All data are displayed as mean ± standard error of the mean (SEM), with n indicating the number of animals. We used GraphPad Prism 9 (San Diego, CA, USA) to perform the statistical analyses. Student's t-test or the Mann-Whitney test was used when comparing the averages of two groups, for parametric and nonparametric data, respectively. Data distribution was evaluated with the Shapiro-Wilk test. When two independent variables were present, two-way repeated-measures ANOVA followed by Tukey's multiple comparison test was used to analyze the data. A P value ≤ 0.05 was considered significant.
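The analyses were run in GraphPad Prism; the sketch below reproduces the two-group decision logic in SciPy under the same alpha level, with made-up data, for readers who want a scriptable equivalent.

```python
from scipy import stats

def compare_groups(control, treated, alpha=0.05):
    """Shapiro-Wilk normality check on both groups, then Student's t-test
    (parametric) or Mann-Whitney U (nonparametric), as described above."""
    normal = (stats.shapiro(control).pvalue > alpha and
              stats.shapiro(treated).pvalue > alpha)
    if normal:
        name, result = "Student's t", stats.ttest_ind(control, treated)
    else:
        name, result = "Mann-Whitney U", stats.mannwhitneyu(control, treated)
    return name, result.pvalue

# Example with illustrative freezing percentages
name, p = compare_groups([42.1, 38.5, 45.0, 40.2, 39.7],
                         [30.3, 28.1, 33.6, 29.9, 31.2])
print(name, f"p = {p:.3f}")
```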
First molecular identification of mosquito vectors of Dirofilaria immitis in continental Portugal
Background Canine dirofilariasis due to Dirofilaria immitis is known to be endemic in continental Portugal. However, information about the transmitting mosquito species is still scarce, with only Culex theileri identified to date, albeit with L1-2 larvae, through dissection. This study was carried out to investigate the potential vectors of Dirofilaria spp. in continental Portugal. Methods Mosquitoes were collected in three distinct seasons (Summer, Autumn and Spring), 2011–2013, in three districts. CDC traps and indoor resting collections were carried out in the vicinity of kennels. Mosquitoes were kept under controlled conditions for 7 days to allow the development of larval stages of Dirofilaria spp. DNA extraction was performed separately for both head+thorax and abdomen in order to differentiate infective and infected specimens, respectively, in pools grouped according to species and collection site (1–40 specimen parts/pool), and examined by PCR using pan-filarial specific primers. Mosquito densities were compared using non-parametric tests. Dirofilaria development units (DDU) were estimated. Results In total, 9156 female mosquitoes, from 11 different species, were captured. Mosquito densities varied among the 3 districts, according to capture method, and were generally higher in the second year of collections. Of the 5866 specimens screened by PCR, 23 head+thorax and 41 abdomen pools, corresponding to 54 mosquitoes, were found positive for D. immitis DNA. These belonged to 5 species: Culex (Cux) theileri (estimated rate of infection (ERI)=0.71%), Cx. (Cux) pipiens f. pipiens and f. molestus (ERI=0.5%), Anopheles (Ano) maculipennis s.l. (ERI=3.12%), including An. (Ano) atroparvus, Aedes (Och) caspius (ERI=3.73%) and Ae. (Och) detritus s.l. (ERI=4.39%). All but Cx. pipiens had at least one infective specimen. No D. repens-infected specimens were found. Infection rates were 3.21% in Coimbra, 1.22% in Setúbal and 0.54% in Santarém. DDU were at least 117/year in the study period. Conclusions Culex theileri, Cx. pipiens, An. maculipennis s.l., An. atroparvus, Ae. caspius and Ae. detritus s.l. were identified as potential vectors of D. immitis in three districts of Portugal, from Spring to Autumn, in 5 of the 6 collection dates in 2011–2013. Implications for transmission, in the context of climate change, and the need for prophylactic measures are discussed.
Although the natural hosts of Dirofilaria spp. are dogs and wild members of the genus Canis, canine dirofilariasis (CD) infections may occur in a variety of species, including cats, other wild mammals and humans [2,3]. Human dirofilariasis (HD) was previously considered a rare disease, but a recent increase in the number of CD and HD cases, particularly after 2000, has resulted in it being classified as an emerging zoonosis [4,5]. Recent accounts of autochthonous cases of CD have come from Slovakia [6], Hungary [7] and Poland [8], and of HD from Hungary [9], Poland [10] and Ukraine [11]; seroreactivity prevalences ranging from 5% to 27% among humans have been recorded in Serbia [12].
Dirofilaria spp. are transmitted by several mosquito species belonging to a wide range of genera in different parts of the world, such as Culex, Aedes and Anopheles [5]. Vectors ingest microfilariae while feeding on an infected host; the microfilariae then cross the midgut wall and migrate to the Malpighian tubules (MT), where they develop from first- to third-stage larvae. Later, the L3 (infective larvae) migrate to the proboscis, from which they emerge while the mosquito is feeding on another host; in this new host the parasites become sexually mature within six months in the main pulmonary arteries and right ventricle [1]. Transmission of dirofilariasis is dependent upon the presence of sufficient numbers of infected, microfilaremic dogs, susceptible mosquitoes, and a suitable climate to allow extrinsic incubation of the parasite in the mosquito vector [13,14]. Environmental factors, namely climatic and ecological, may affect the life cycle parameters of both the mosquito vector and the filarial parasites.
Canine dirofilariasis due to D. immitis is known to be endemic in continental Portugal. In 2009-2010 the overall sero-prevalence in Northern and Central Portugal was 2.1% for CD [23]. A recent survey, 2011-2013, in three districts of Centre-South, has revealed an overall parasitological prevalence rate of 15.1%, the highest in Setúbal (24.8%), followed by Coimbra (13.8%) and Santarém (13.2%) [24].
Despite these high prevalences, information about the transmitting mosquito species was still scarce in continental Portugal, with Cx. theileri as the only likely vector of Dirofilaria spp. [25]. In addition, high densities of mosquito populations, namely Cx. theileri, Cx. pipiens s.l., An. maculipennis s.l. and Ae. caspius, were recorded in the above-mentioned areas [26]. Hence, the purpose of this study was to identify potential vectors of Dirofilaria spp. by applying a polymerase chain reaction (PCR) with species-specific primers to mosquito populations from those three districts of continental Portugal (Coimbra, Santarém and Setúbal), collected in the vicinity of kennels being surveyed for CD within a multidisciplinary project, over a period of two consecutive years.
Sampling area
The research was concentrated on three districts of Portugal: Coimbra (Centre), Santarém and Setúbal (Centre-South), located in the basins of the rivers Mondego, Tejo and Sado, respectively (Figure 1). These districts present different prevalences of dog infection and differ in ecology and overall soil use, although they have in common the presence of the main rice-growing areas of the country. The number of localities surveyed was four in Coimbra, five in Santarém and four in Setúbal.
Daily temperature data for 2011, 2012 and 2013 from stations operating close to the collection sites were obtained from the "Instituto Português do Mar e da Atmosfera" [27]. Average minimum and maximum monthly temperatures and rainfall values for the study period are shown in Table 1.
Mosquito collection and identification
Mosquitoes were collected with CDC light traps baited with dry ice, operating between 5.00 p.m. and 7.00 a.m., for active adult mosquitoes, and with mechanical aspirators in the early morning targeting indoor resting (IR) mosquitoes. Collections were carried out in kennels (whose identities are confidential) or their vicinity, but also in suburban or rural areas of those districts. Collections were carried out from 2011 to 2013, in July, October-November and April-May, corresponding to the Summer, Autumn and Spring seasons. Mosquitoes were kept in the insectary under controlled conditions of temperature and humidity (25±2°C, 70±5% RH), with a photoperiod of 12 h:12 h (light:dark), and fed a 10% sucrose solution for 7 days to allow bloodmeal digestion and possible parasite development to the infective L3 stage [1], as done in other studies [22]. After this period, those specimens still alive were frozen until species identification was carried out according to the keys of Ribeiro & Ramos [28]. Mosquitoes that were dead at the time of trap collection, or that died during the following seven-day period, were also frozen, identified and screened for filarial infection.
Set of biological material for PCR analysis
Mosquito females were dissected into head+thorax and abdomen to discriminate between Dirofilaria spp. infective and infected status, respectively [29]. Specimens belonging to the same collection and species, at the same gonotrophic stage, and that had spent the same number of days in the insectary were joined in pools of these body parts, ranging from 1 to 40 specimen parts. Specimens of Cx. pipiens s.l. were analyzed individually, due to the sympatric existence in Portugal of the two biological forms, pipiens and molestus, of the sensu stricto species [30].
DNA isolation
Genomic DNA was extracted from samples using the CTAB (cetyltrimethylammonium bromide) method, adapted from Stothard et al. [31], by grinding the mosquito samples in a buffer (100 mM Tris, 1.4 M NaCl, 20 mM EDTA, 2% hexadecyltrimethylammonium bromide (CTAB), 0.2% mercaptoethanol) and incubating with proteinase K (Bioline) at 55°C for 90 min with agitation. Phenol/chloroform/isoamyl alcohol was used for further DNA purification. The DNA was ethanol-precipitated and the pellet resuspended in TE buffer (pH 7.0).
For all PCR reactions described above, amplified products were separated by electrophoresis on 1.5% agarose gels and observed under UV light.
Sensitivity test of PCR
In order to determine the sensitivity of the PCR assay, two procedures were devised. Assays were carried out with DNA extracted from Cx. theileri female mosquitoes from the IHMT colony, also separated into head+thorax and abdomen: i) to determine the minimum amount of parasite DNA detectable by the PCR assay, defined amounts of parasite DNA (10 ng, 5 ng, 1 ng, 0.1 ng, 10 pg and 1 pg) were mixed with 80 ng of mosquito DNA, and the PCR reaction was performed under the same conditions as described above; this showed that the assay was able to detect as little as 10 pg of parasite DNA in 80 ng of mosquito DNA, from either head+thorax or abdomen; ii) the sensitivity cut-off for detecting one infected mosquito in a pool of 40 mosquitoes was also determined. After the first individual specimen of Cx. theileri positive for D. immitis was detected, a sample from this pool with 80 ng/μl of total DNA was diluted in uninfected Cx. theileri DNA at the same 80 ng/μl concentration.
DNA sequencing and analysis
Products from the first PCR described (pan-filarial) were purified with the QIAquick PCR Purification Kit (Qiagen) and sequenced by Macrogen. Sequences were edited and aligned using BioEdit [35] and compared to similar sequences available in GenBank, as identified through BLAST [36].
Calculation of the infection rate of mosquitoes
The infection rate of mosquitoes was estimated by: i) the minimum infection rate (MIR), i.e. the number of positive mosquito pools/total number of mosquitoes in the pools tested × 1000, and ii) the estimated rate of infection (ERI), which is adjusted for pooled samples, given by the formula ERI = 1 − (1 − x/m)^(1/k), where x is the number of positive pools, m the number of examined pools and k the average number of specimens per pool [37].
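Both estimators are straightforward to compute; the sketch below implements the two formulas above, with illustrative pool counts rather than the study's data.

```python
def mir(positive_pools: int, total_mosquitoes: int) -> float:
    """Minimum infection rate per 1000 mosquitoes tested."""
    return 1000 * positive_pools / total_mosquitoes

def eri(positive_pools: int, examined_pools: int, mean_pool_size: float) -> float:
    """Estimated rate of infection adjusted for pooling:
    ERI = 1 - (1 - x/m)**(1/k), per [37]."""
    return 1 - (1 - positive_pools / examined_pools) ** (1 / mean_pool_size)

print(f"MIR = {mir(5, 700):.2f} per 1000")  # 7.14
print(f"ERI = {eri(5, 200, 3.5):.4%}")      # ~0.72%
```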
Ethical considerations
The study was approved by the Commission on Ethics of the Instituto de Higiene e Medicina Tropical, Universidade Nova de Lisboa with reference 21-2013-TM, and all procedures were performed according to national and European legislation.
Mosquito data and statistical analysis
Mosquito densities are presented as the number of mosquitoes captured per trap-night for CDC collections, or as the number of mosquitoes collected per collector-hour for IR collections. The arithmetic mean and the standard deviation of the densities were calculated per district for all collections of each type and date. However, the median and interquartile interval (Q1-Q3) proved to be more appropriate for these data.
Statistical analysis was carried out using the SPSS package version 20.0 for Windows [38]. Kolmogorov-Smirnov (Lilliefors modification) and Shapiro-Wilk tests were used to analyse data for normality, while Levene's test was used to test for homogeneity of variance. Due to the lack of normality of the data, large standard deviations and lack of homogeneity of variance, non-parametric tests were used to analyse mosquito densities [39]. Mann-Whitney (MW) and Kruskal-Wallis (KW) tests were used for comparing, respectively, mosquito densities between the two years, and mosquito densities among the three districts. In the latter case, whenever significant differences were found, multiple comparisons were performed using the Dunn-Bonferroni (DB) pairwise comparisons.
Differences in mosquito rates of infection among species and locations were compared using Chi-squared test and Fisher's exact test.
Estimation of Dirofilaria development units (DDU)
In order to determine the hypothetical period in which there was a risk of heartworm disease transmission in the surveyed areas, Dirofilaria development units (DDU) were calculated. For each day on which the average temperature was above 14°C, the temperature below which there is no extrinsic development of the parasite, the difference between the average temperature and 14°C was calculated (i.e. for Tmean ≥ 15°C, DDU = Tmean − 14) [40]. The sum of DDUs in the 30 days following the first day with an average temperature above 14°C, designated DDU30, was then calculated. When DDU30 is ≥ 130, it is assumed that a mosquito that might have taken a blood meal on a microfilaremic host on that particular day had the possibility of completing the extrinsic cycle, hence becoming infective, assuming an average mosquito life span of 30 days [2,13,40], independently of temperatures lower than 14°C during that period [41]. With these data, a bar graph was plotted depicting the favourable days for the completion of the extrinsic cycle, and hence for the transmission of heartworm, in the areas and time periods studied [42].
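A hedged sketch of this computation is given below, assuming daily mean temperatures are available as a simple list; whether the 30-day window includes the start day itself is an implementation choice here, as the text does not pin it down.

```python
def ddu30_series(daily_mean_temps: list[float]) -> list[float]:
    """DDU30 for each start day: sum of max(Tmean - 14, 0) over a 30-day
    window beginning on that day (start days lacking a full window are skipped).
    Days below 14 degC contribute zero but do not reset the accumulation."""
    ddu = [max(t - 14.0, 0.0) for t in daily_mean_temps]
    return [sum(ddu[i:i + 30]) for i in range(len(ddu) - 29)]

def favourable_start_days(daily_mean_temps: list[float],
                          threshold: float = 130.0) -> list[int]:
    """Indices of days on which a mosquito taking an infected blood meal could
    complete the extrinsic cycle (DDU30 >= 130) within a 30-day lifespan."""
    return [i for i, v in enumerate(ddu30_series(daily_mean_temps))
            if v >= threshold]
```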
Mosquito species captured and relative abundance
In total, 9156 female mosquitoes were caught in the whole sampling period (July/2011-May 2013), representing 11 species from five different genera. Culex (Culex) theileri was the most frequent species (5812, 63. The district of Santarém showed the highest number of mosquitoes captured (7818, 85.4%), followed by Coimbra (679, 7.4%) and Setúbal (659, 7.2%). Relative frequencies of the mosquito species caught in the different districts are depicted in Figure 1. Culex theileri was the most abundant species in Santarém and Coimbra, followed by Cx. pipiens s.l.. In Setúbal, the most frequent species found were Cx. pipiens s.l., Cx. theileri, An. maculipennis s.l. and Ae. caspius.
Average mosquito densities, and the respective relative frequencies, were estimated according to the collecting method (Figure 2, Table 2). For CDC traps, total mosquito densities differed among the 3 districts (KW: 14.231, DF=2, P=0.001) for the joint collections of the sampling period; Santarém exhibited a significantly higher mosquito density only in comparison with Coimbra.
Culex theileri showed different densities in the three districts (KW: 8.548, DF=2, P=0.014), being significantly more abundant only in Santarém compared with Coimbra.
Anopheles maculipennis s.l. collected by CDC traps did not reveal differences among the three surveyed districts.
As for IR collections, densities also differed among the 3 districts for the total collecting period (KW: 9.802, DF=2, P=0.007); mosquito density in Setúbal was significantly higher only in relation to Coimbra.
Culex pipiens s.l. densities also differed among the 3 districts for the total collecting period (KW: 7.230, DF=2, P=0.027), being more abundant only in Setúbal compared with Coimbra.
Anopheles maculipennis s.l., Cx. theileri and Ae. caspius densities did not differ significantly among the three districts in IR collections.
As to mosquito densities in the two survey years, collections in the second year generally yielded higher values (see below).
Molecular detection of D. immitis DNA in mosquitoes
For PCR analysis, we used 5866 adult female mosquitoes. In total, 1815 head+thorax pools and 1529 abdomen pools were screened using the pan-filarial primers. The difference between these numbers arose because the abdomens of blood-fed or semi-gravid females still containing undigested blood were excluded, to avoid contamination with Dirofilaria spp. DNA that might be present in the blood meal and thus prevent the false assumption of an established mosquito infection. Dirofilaria immitis DNA was found in the four most frequent species, but also in Ae. detritus s.l. (Table 3, with the respective values of MIR, ERI and 95% CI). The Cx. pipiens s.l. pools positive for D. immitis were identified as Cx. pipiens s.s.: seven of form pipiens and one of form molestus, the latter from Setúbal.
The distribution of positive mosquitoes over the three sampled districts, and their respective values of MIR, ERI and 95% CI are depicted in Table 4.
Overall, mosquitoes with D. immitis DNA were found on all collection dates except November 2011, usually by both methods and in more than one district (Figure 2, Table 5).
Estimation of transmission risk of Dirofilaria spp. by mosquitoes
The calculation of the DDU30 for the three studied districts showed that there were at least 152 days in 2011, 119 days in 2012 and 117 days in 2013 with suitable conditions for the completion of the extrinsic development of Dirofilaria spp. and, consequently, for its transmission to the vertebrate host (Figure 3). Most of the infected mosquito pools detected in this work (red lines) fall within the determined favourable development periods.
Discussion
To our knowledge, this is the first report of molecular evidence for natural infections of mosquitoes with D. immitis in continental Portugal. Despite the known prevalence of canine dirofilariasis (CD), knowledge of its natural and potential vectors in mainland Portugal was scarce, with a historical study considering Cx. theileri a probable vector of Dirofilaria spp. [25]. We report the finding of An. maculipennis s.l., Ae. caspius, Ae. detritus s.l. and Cx. theileri as likely competent vectors of D. immitis, i.e. with DNA in the head+thorax, and of Cx. pipiens form pipiens and form molestus as likely vectors, i.e. with DNA only in abdomens, but from mosquitoes without any traces of bloodmeal. In this work, only D. immitis was detected, in contrast with recent findings of D. repens in other Southern European countries such as Italy, albeit at much lower rates than D. immitis in Cx. pipiens [16,17], or at similar rates to D. immitis in Cx. pipiens and Ae. albopictus [44]. On the other hand, this is not surprising, as D. repens was not found in parallel animal surveys in the same districts of Portugal [24].
Culex theileri, Cx. pipiens s.l. and An. maculipennis s.l. have already been implicated as vectors in countries such as Spain [15,20], Italy [16,17], Turkey [18] and Iran [21]. Aedes caspius and Ae. detritus s.l. are here implicated, to the best of our knowledge for the first time, as natural competent vectors of D. immitis. Aedes caspius had previously been found positive only as whole mosquitoes [17], and Ae. detritus s.l. only in the abdominal portion [45], hence requiring confirmation. This is also the first study in which both biological forms of Cx. pipiens s.s., form pipiens and form molestus, have been found infected with D. immitis. In Portugal there are, to date, records of Cx. pipiens and Cx. torrentium as members of the Cx. pipiens complex [46]. Culex torrentium is rare and occurs only in northern and mountain areas of the country [46]; therefore, none of the collected specimens could belong to this species. As to Culex quinquefasciatus, although it has not yet been recorded in Portugal, hybrids with Cx. pipiens have recently been found in Greece [47]. For this reason, and considering either the ongoing climatic changes and their consequences for species distribution, or the similar PCR results between form molestus of Cx. pipiens s.s. and Cx. quinquefasciatus, all specimens were treated as Cx. pipiens s.l., with molecular identification being made only for specimens positive for D. immitis. Anopheles maculipennis s.l. has also been previously found infected with D. immitis [16]. However, An. atroparvus is the only member of this complex occurring south of the Montejunto-Estrela mountain range, and even to the north of this range a proportion of nine An. atroparvus to one An. maculipennis s.s. was found [46,48,49]. Thus, we can be confident that the positive An. maculipennis s.l. in the district of Setúbal are in fact An. atroparvus, making this the first vector incrimination for this species.
Aedes detritus s.l. specimens were not differentiated to species level, as the technique available at the time of this study would have precluded the subsequent screening for dirofilarial DNA.
Infection rates were similar whether estimated as MIR or ERI, probably because a large proportion of our pools consisted of a single mosquito specimen. The species with the highest infection rates were Ae. caspius (3.7%), followed by An. maculipennis s.l. (3.1%), Cx. theileri (0.7%) and Cx. pipiens s.l. (0.5%). The highest infection rate was in fact recorded for Ae. detritus s.l. However, infection rates based on small sample sizes, i.e. <1000, may not accurately represent the true infection rate in the population [50]. Whereas in the case of An. maculipennis s.l. and Ae. caspius the sample sizes were 400 and 270, respectively, with somewhat large 95% CIs, so that the infection rates should be interpreted with caution, in the case of Ae. detritus s.l., with a sample of only 23 specimens and a much wider 95% CI, the very high infection rate has reduced significance.
In Portugal, previous detection of Dirofilaria spp. L1 and L2 larvae in the MT of Cx. theileri in the district of Setúbal had yielded an infection rate of 4.76%, again in a small sample (N=42) [25]. In the islands of Macaronesia, Cx. theileri has been found with infection rates of 0.16% in the Canaries [20] and of circa 0.9%-1.13% in Madeira [19], values closer to those found in this study.
In continental Portugal there are 41 identified species of mosquitoes [46]; however, An. maculipennis s.l., Cx. pipiens s.l., Cx. theileri and Ae. caspius are the most abundant and broadly distributed [26]. In the three districts surveyed in this work, these were also the species with the highest densities. Total mosquito densities were lower in Coimbra, the northernmost district, in agreement with previous surveys, namely for Cx. pipiens s.l. by both methods, a sign of similar capture yield for this species [26]. Conversely, CDC trap catches were able to show different densities among districts for Cx. theileri and Ae. caspius, as they are superior for targeting these species [26]. On the other hand, IR catches yielded higher numbers of An. maculipennis s.l., as IR tends to be a more adequate method for capturing this species [51]. Nevertheless, in this study a striking difference was recorded in mosquito abundance in the district of Setúbal, with much lower densities compared to previous works [26,52,53]. The reasons contributing to this may well be: i) the location of the collecting sites close to kennels, as per the experimental design, which in this district were in areas not favourable for mosquito breeding, as opposed to earlier works which included rice fields and wetlands; ii) the relatively low number of collections, with only one set in the peak breeding season; and iii) strong winds registered on some of the collection dates, as noted in the field collection registers.
Collections in the second year yielded higher densities, particularly for Cx. theileri and Cx. pipiens s.l.. This increase may be due to local environmental variables, particularly climatic. Although no significant differences were registered for the average temperatures, precipitation was higher in the three districts in the second sampling year, 2012/2013. Nevertheless, considering there were only three collecting moments per year, there is not enough data to draw conclusions on the seasonal dynamics of mosquitoes.
Infection rates in mosquitoes were not in agreement with prevalence rates of CD found in the same research project [24], despite having targeted mosquito collections to the vicinity of kennels. Highest and lowest infection rates for mosquitoes were registered in Coimbra and Santarém, respectively, which had similar prevalence rates of CD. On the other hand, Setúbal, which had the highest CD prevalence rate, registered an intermediate mosquito infection rate. There are many factors whose influence is still unknown (vector efficiency of each species, overall level of protection in the dog population by preventive therapy, local environmental conditions in studied areas, etc.).
During the two-year survey, infected mosquitoes were found on five of the six collection dates, representing an almost continuous presence of infected vectors, particularly in Santarém. Furthermore, infected mosquitoes were found by both methods on most of the collecting dates and at most sites. It can be argued that infected mosquitoes in IR collections may have acquired their infections from the hosts in the shelters; however, the CDC-trap-collected mosquitoes represent the host-seeking mosquito population and are therefore proof of circulating infected vectors. Coimbra was the only district with all five infected species, as Ae. detritus s.l. was only found infected there, while Santarém and Setúbal each registered four infected species.
The calculation of the DDU30 showed that most of the infective and infected mosquitoes detected were collected during these favourable periods. The few exceptions are probably due to the maintenance of the mosquitoes in the insectary for the seven-day period, which proves highly important in such studies. Although a 7-day period at 25°C may not be enough for the completion of the extrinsic cycle, a compromise had to be made to limit mosquito mortality and filarial DNA degradation while allowing for complete digestion of bloodmeals.
Activity of these mosquitoes, whether infected, infective or neither, was found at the three collection time points, corresponding to Spring, Summer and Autumn. In the context of climate change, particularly in Portugal, where temperature increases have reached 0.5°C/decade since 1970, more than twice the global average rate [54], and with future scenarios that may range between 3 and 5.8°C of warming by 2040-2090, the activity period of mosquitoes, and hence of mosquito-borne diseases, is likely to increase [55]. This is all the more relevant as dirofilariasis is recognized as an expanding zoonosis, particularly in Europe [4,5]. Our results are in agreement with predictions of the occurrence and seasonality of Dirofilaria spp., with peaks of infection in Summer, from June to September, even in countries of Northern Europe [2,13,14,42].
Conclusions
We have confirmed known and reported new mosquito vectors of dirofilariasis in three districts of Portugal with a high prevalence of CD. To our knowledge, the present study is the first PCR screening of mosquitoes for Dirofilaria spp. in continental Portugal. Our results confirm that not only Cx. theileri is capable of becoming infected with D. immitis; Ae. caspius, An. maculipennis s.l., An. atroparvus, Cx. pipiens of both bioforms pipiens and molestus, and Ae. detritus s.l. can also support the development of D. immitis and, with the exception of Cx. pipiens, its development to the L3 infective stage, based on the presence of filarial DNA in the head+thorax. Most of these results were in agreement with the 130-unit DDU30 prediction for the regions surveyed. The finding of infected and infective mosquitoes in the three districts and in the Spring-Autumn interval heightens the need for prophylactic measures to prevent transmission at least during this period. Further studies are necessary to ascertain whether the transmission season is wider than the Spring-Autumn interval.
Characterization of Microstructural Evolution for a Near-α Titanium Alloy with Different Initial Lamellar Microstructures
The effects of initial lamellar thickness on microstructural evolution and deformation behaviors of a near-α Ti-5.4Al-3.7Sn-3.3Zr-0.5Mo-0.4Si alloy were investigated during isothermal compression in α + β phase field. Special attention was paid to microstructural conversion mechanisms for α lamellae with different initial thicknesses. The deformation behaviors, including flow stress, temperature sensitivity, and strain rate sensitivity, and processing maps and their dependence on initial lamellar thickness were discussed. The detailed microstructural characterizations in different domains of the developed processing maps were analyzed. The results showed that the peak efficiency of power dissipation decreased with increasing initial lamellar thickness. The interaction effects with different extents of globularization, elongating, kinking, and phase transformation of lamellar α accounted for the variation in power dissipation. The flow instability region appeared to expand more widely for thicker initial lamellar microstructures during high strain rate deformation due to flow localization and local lamellae kinking. The electron backscatter diffraction (EBSD) analyses revealed that the collaborative mechanism of continuous dynamic recrystallization (CDRX) and discontinuous dynamic recrystallization (DDRX) promoted the rapid globularization behavior for the thinnest acicular initial microstructure, whereas in case of the initial thick lamellar microstructure, CDRX leading to the fragmentation of lamellae was the dominant mechanism throughout the deformation process.
Introduction
Near-α titanium alloys, with an attractive combination of properties, have been extensively applied as advanced structural materials for aeroengine components [1,2]. The Ti-5.4Al-3.7Sn-3.3Zr-0.5Mo-0.4Si alloy discussed in the present work is a near-α high-temperature titanium alloy for advanced gas turbine compressor disk applications, which exhibits excellent thermal capability under a service temperature of 600 °C [3,4]. In general, the mechanical properties linked with the final microstructures of titanium alloys depend on a set of typical hot-working steps involving primary cogging in the β phase field, thermo-mechanical processing below the β-transus, and subsequent heat treatment [5]. Therefore, it is necessary to have in-depth knowledge of the deformation behavior, hot workability, and microstructure development of the material during processing in order to obtain optimum process schedules and achieve the desired microstructure and mechanical properties.
Over the years, the hot deformation behavior of titanium alloys with various initial microstructures has received considerable attention due to its significant influence. For example, Jackson et al. [6] analyzed the flow softening behavior of a near-β Ti-10V-2Fe-3Al alloy with two initial microstructures and revealed that the alloy with Widmanstätten α platelets displayed more flow softening than that with globular α. Lin et al. [7] discussed the hot tensile properties of a Ti-6Al-4V alloy with different initial microstructures, including basket-weave, globular-lamellar, and equiaxed microstructures. Gao et al. [8] studied the effect of an initially nonuniform microstructure on the flow behavior and microstructure evolution of a near-α TA15 alloy; during cooling from the β phase field, lamellar α with diverse morphologies was formed. Semiatin and coauthors systematically investigated the dependence of the flow softening mechanism on various initial microstructural features, including initial lamellar α thickness [9], colony size [10], morphology of lamellar α [11], and texture [12]. These works provide a detailed basis for understanding the deformation behavior and microstructural features of titanium alloys. However, most of them focused mainly on the effect of initial microstructures on globularization kinetics and flow softening behavior. Moreover, the number of studies on the correlation among initial lamellar thickness, processing maps, and microstructure evolution for near-α titanium alloys is insufficient. In particular, little attention has been paid to the distinct evolution mechanisms of lamellar microstructures with different initial thicknesses.
Therefore, the objective of the present paper is to characterize the evolution behavior of lamellar α with different initial thicknesses in the Ti-5.4Al-3.7Sn-3.3Zr-0.5Mo-0.4Si alloy. To this end, the effects of initial lamellar thickness and deformation parameters on the flow behavior and processing maps were determined and analyzed. Meanwhile, the microstructure evolution of the different initial lamellar microstructures in a few particular domains with varying power dissipation values in the processing maps is discussed. Moreover, an interesting point of this work involves evaluating the distinct deformation mechanisms of lamellar α with different initial thicknesses. The results could provide significant technical guidance for controlling the final microstructure and contribute to the development of practical production processes for the Ti-5.4Al-3.7Sn-3.3Zr-0.5Mo-0.4Si alloy.
Materials and Methods
The β-transus temperature of the Ti-5.4Al-3.7Sn-3.3Zr-0.5Mo-0.4Si alloy was measured as approximately 1045 °C via the metallographic method. The samples of the alloy with the three different initial lamellar microstructures used in the present work (shown in Figure 1) were prepared by heating to 1070 °C for 10 min, followed by cooling to room temperature at various rates. Microstructure A (Figure 1a) is a typical fine acicular martensitic microstructure with approximately 1 µm-thick alpha platelets. The colony microstructures with average alpha lamellar thicknesses of 6 µm and 10 µm are referred to as microstructure B (Figure 1b) and microstructure C (Figure 1c), respectively. To investigate the deformation behavior of the alloy in the α + β phase field, cylindrical specimens of 8 mm diameter and 12 mm height were machined for isothermal compression tests, which were performed on a Gleeble-3500 thermo-mechanical simulator (Data Sciences International, Inc., St. Paul, MN, USA) at deformation temperatures of 900 °C, 930 °C, 960 °C, 990 °C, and 1020 °C, with a strain rate range of 0.001-10 s−1. During the tests, the heating rate was 5 °C/s, with a 5-min soak to eliminate thermal gradients. After being compressed to a height reduction of 50% at a given strain rate, the specimens were rapidly cooled to room temperature. The axial sections of the deformed specimens were then prepared for microstructure examination. The surfaces of the compressed specimens were mechanically polished and then chemically etched in a solution of 3% HF, 6% HNO3 and 91% H2O, and subsequently observed on an OLYMPUS GX71 optical microscope (Olympus Corporation, Tokyo, Japan) to obtain the metallographic microstructures. Electron backscatter diffraction (EBSD) samples were electro-polished in a solution of 5% perchloric acid and 95% methanol for about 30 s at approximately 25 °C with a voltage of 30 V. A TESCAN MIRA3 XMU scanning electron microscope equipped with a Nordlys Max EBSD detector (TESCAN, Brno, Czech Republic) was used for the EBSD measurements.
Flow Behavior
The effects of initial lamellar microstructures and deformation parameters on the flow stress of the Ti-5.4Al-3.7Sn-3.3Zr-0.5Mo-0.4Si alloy are shown in Figure 2. The flow stress was significantly dependent on initial lamellar thickness and deformation parameters. A similar variation trend, with a quick increase to peak stress followed by noticeable flow softening, was observed in the stress-strain curves. When deformed to a large strain under some conditions, the flow stress curves may reach a steady state. Under given deformation conditions, the studied alloy with initial microstructure A (initial lamellar thickness: 1 µm) showed obviously higher flow stress and stronger flow softening than that with microstructures B (6 µm) and C (10 µm). It was observed that increasing the lamellar thickness of the α phase could result in a decrease in deformation resistance. The steady-state flow stress (stress at a strain of 0.65 in the present work) as a function of deformation temperature for the alloy with the three initial lamellar microstructures is shown in Figure 3a. It can be seen that the flow stress decreases rapidly with deformation temperature increasing from 900 °C to 960 °C, especially at high strain rates, whereas the flow stress becomes less sensitive to temperature above 990 °C. The work by Wanjara et al. [13] suggested that this transition in flow stress may take place at a temperature between about 70 °C and 40 °C below the β-transus temperature of the material, which concurs with the present result. The temperature sensitivity can be evaluated by the parameter S defined in Equation (1) [2]. Based on the experimental results, the temperature sensitivity parameters of the alloy with different initial lamellar microstructures and strain rates at a given true strain of 0.65 were obtained using Equation (1) and are shown in Figure 3. The temperature sensitivity curves exhibit an almost similar trend with deformation temperature. Regardless of strain rate and initial microstructure, the sensitivity parameter displays a peak value at 960 °C, which indicates that a fine grain structure of the alloy may be obtained in this temperature range [2]. The values of the temperature sensitivity parameter are very small (less than 2) at the lower temperature (900 °C) and higher strain rates (1 s−1, 10 s−1), which implies that the alloy may exhibit unstable flow under these conditions.
The effects of deformation parameters and initial lamellar thicknesses on the strain rate sensitivity (m = ∂ln σ/∂ln ε̇) for the alloy deformed to a strain of 0.65 are shown in Figure 4. The variations in strain rate sensitivity with the processing parameters are closely associated with the microstructure evolution. Figure 4 indicates that the strain rate sensitivity for the studied alloy with different initial microstructures increases with decreasing strain rate, similar to results obtained for other titanium alloys [14-16]. The values of strain rate sensitivity for the present alloy are in the range of 0.1 to 0.4, and the higher values are obtained at lower strain rates for almost all temperatures except 1020 °C.
At the lower strain rates of 0.001 s−1 and 0.01 s−1, the m value increases to a maximum and then decreases with rising temperature. When the alloy is deformed at a higher strain rate, the value of m presents a roughly increasing tendency with rising temperature. The maximum strain rate sensitivities of 0.40, 0.36, and 0.32 were found at a strain rate of 0.001 s−1 and a deformation temperature of 960 °C for the alloy with initial microstructures A, B, and C, respectively. The results show that the strain rate sensitivity values increase with decreasing initial lamellar thickness. Such increments may be associated with the occurrence of grain-boundary sliding and with the different dynamic globularization kinetics, which accelerate as the alpha lamellar thickness decreases [10,17].
Effect of Initial Lamellar Thickness on Processing Maps
The approach of processing maps is an effective method to analyze hot deformation behavior, optimize processing parameters, and control the microstructures of materials. During the hot-working process, based on the dynamic material model, the efficiency of power dissipation (η) is used to evaluate the power dissipation capacity of the material and is expressed as follows [18]:

η = 2m/(m + 1)

where m is the strain rate sensitivity. The occurrence of flow instability is defined by the criterion [19]:

ξ(ε̇) = ∂ln[m/(m + 1)]/∂ln ε̇ + m < 0

The processing maps of the Ti-5.4Al-3.7Sn-3.3Zr-0.5Mo-0.4Si alloy with the three different initial lamellar microstructures at a true strain of 0.65 are shown in Figure 5, in which the iso-contour numbers represent the η values and shaded regions represent the unstable domains. It is obvious that the initial lamellar thickness and processing parameters have a noticeable impact on the efficiency of power dissipation for the studied alloy. The η value followed a variation trend similar to that of the strain rate sensitivity, decreasing with increasing strain rate and, at lower strain rates, with decreasing deformation temperature. As shown in Figure 5a, the peak efficiency domain was distributed over the temperature range of 940 °C to 970 °C and the strain rate range of 0.001 s−1 to 0.003 s−1 under the present experimental conditions, with a distinct maximum value of 0.57 occurring at 960 °C/0.001 s−1. The alloy with initial microstructure A also exhibited high temperature sensitivity (Figure 3) and a high strain rate sensitivity exponent (Figure 4) under the same processing conditions. When the alloy was deformed in the relatively high strain rate regions (above 1.5 s−1), the ξ values were negative and flow instability occurred. For the alloy with microstructure B, the variation in the η value was similar, and the safe region was located in the range of 900-1020 °C when the strain rate was less than 1 s−1, as shown in Figure 5b. The η value in the stable region was greater than 0.30, within which an area with a peak efficiency of approximately 0.53 existed. The flow instability region appeared to expand more widely than that for the alloy with microstructure A. For the alloy with microstructure C, the peak efficiency region occurred at 940-980 °C/0.001-0.01 s−1 with a maximum value of 0.48, which was lower than those of initial microstructures A and B. The flow instability extended to a lower strain rate domain when the alloy was deformed at the low temperature of 900 °C.
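As a hedged numerical sketch of how these map quantities follow from flow stress data, the snippet below estimates m by differentiating ln σ with respect to ln ε̇ at fixed temperature and strain, and then evaluates η and ξ; the stress values are illustrative, not the measured data.

```python
import numpy as np

strain_rates = np.array([0.001, 0.01, 0.1, 1.0, 10.0])     # s^-1
flow_stress = np.array([45.0, 80.0, 140.0, 230.0, 340.0])  # MPa, illustrative

ln_rate, ln_sigma = np.log(strain_rates), np.log(flow_stress)

# Strain rate sensitivity m = d(ln sigma)/d(ln strain rate)
m = np.gradient(ln_sigma, ln_rate)

# Efficiency of power dissipation from the dynamic material model
eta = 2 * m / (m + 1)

# Instability parameter; flow is predicted to be unstable where xi < 0
xi = np.gradient(np.log(m / (m + 1)), ln_rate) + m

for rate, mv, ev, xv in zip(strain_rates, m, eta, xi):
    print(f"rate={rate:g} s^-1  m={mv:.2f}  eta={ev:.2f}  unstable={xv < 0}")
```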
The developed processing maps revealed that the regions with high η values were mostly positioned under moderate-temperature, low-strain-rate deformation conditions. In addition, the peak efficiency values for the three initial lamellar microstructures all occurred at a deformation temperature of 960 °C and a strain rate of 0.001 s−1. The maximum value and the extent of the regions with η higher than 0.5 (η > 0.5) decreased with increasing initial lamellar thickness, while the flow instability domain became noticeably wider. Previous works indicated that dynamic recrystallization [20] or superplasticity [21] could be responsible for the high η values in processing maps. Semiatin [22] suggested that superplasticity generally appears in Ti alloys with an initially fine, two-phase equiaxed microstructure. In the present work, the studied alloy, with its initial lamellar microstructure of large grains, did not satisfy the microstructural requirement for superplasticity [14]. Therefore, the peak power dissipation efficiency is more likely associated with dynamic recrystallization. The differences among the three processing maps may be due to the distinct microstructural evolution of the Ti-5.4Al-3.7Sn-3.3Zr-0.5Mo-0.4Si alloy with the three initial lamellar thicknesses during deformation at elevated temperature. To clarify the underlying deformation mechanisms, the deformation behavior in different domains of the processing maps was characterized through the microstructure observations discussed below.
Microstructural Analysis
In order to investigate the deformation behavior and microstructure evolution, the microstructures in several domains of the processing maps with different efficiency of power dissipation values were characterized and analyzed for the alloy with the three initial lamellar microstructures, as illustrated in Figures 6-8. The selected domains in the processing map at a strain of 0.65 for the alloy with initial microstructure A are marked as follows: A1, 960 °C and 0.001 s−1; A2, 990 °C and 0.001 s−1; A3, 930 °C and 0.1 s−1; and A4, 990 °C and 1 s−1. The corresponding microstructures are shown in Figure 6. Under deformation conditions of 960 °C/0.001 s−1, almost all of the initial lamellar α phase was globularized and the final equiaxed α phase was well distributed, as seen in Figure 6a, which corresponds to the peak η value in the processing map. The volume fraction of α phase changed slightly at deformation temperatures between 900 °C and 960 °C, but decreased sharply as the temperature increased from 960 °C to 990 °C, since the α-to-β phase transformation was significantly enhanced at the low strain rate. Compared with higher strain rates, the globularization of α lamellae was more complete at the strain rate of 0.001 s−1. With the strain rate increasing to 0.1 s−1, the morphology of the α phase changed markedly, tending to be elongated rather than spheroidized (marked with a black arrow in Figure 6d). Thus, the η value at the lower strain rate was higher than that at the higher strain rate in the temperature range of 900 to 990 °C. The deformation characteristics of flow instability regions are usually attributed to adiabatic shear deformation, internal cracks, and grain boundary cavities during the hot deformation of titanium alloys [23-26]. Our previous work (Zhao et al. [4]) showed that microstructures of the present alloy with an acicular microstructure deformed in the flow instability regions at a strain rate of 10 s−1 exhibited flow localization bands.
The selected domains in the processing map for the alloy with initial microstructure B are marked as follows: B1, 960 °C and 0.001 s−1; B2, 990 °C and 0.001 s−1; B3, 930 °C and 0.01 s−1; and B4, 990 °C and 1 s−1. The corresponding microstructures in these domains are shown in Figure 7. In the peak efficiency region, at a temperature of 960 °C and a strain rate of 0.001 s−1, the deformed microstructure consisted of equiaxed α phase and a small amount of short lamellar α, as shown in Figure 7a. Temperature affected the microstructure in a way similar to the alloy with initial microstructure A, changing the volume fraction of the α phase but not obviously its morphology. After deformation at the relatively higher temperature of 990 °C/0.001 s−1, the volume fraction of α phase decreased further and more β-transformed microstructure was observed. The breaking up of the lamellae into equiaxed α was more pronounced at the lower strain rate. At strain rates above 1 s−1, the microstructures exhibited extensive kinking of lamellae, which formed in colonies inclined up to 45° to the compression axis, as shown in Figure 7d.
The selected domains in the processing map for the alloy with initial microstructure C are marked as follows: C1, 960 °C and 0.001 s−1; C2, 930 °C and 0.01 s−1; C3, 990 °C and 1 s−1; and C4, 1020 °C and 0.01 s−1. The corresponding microstructures in these domains are shown in Figure 8. In the peak efficiency dissipation region C1, the lamellae displayed an obvious tendency to rotate toward the direction perpendicular to the compression axis. As labeled in Figure 8a, only limited fragmentation of lamellae and spheroidization of α phase occurred, because the 50% height reduction was insufficient for complete globularization of the alloy with the initial thick lamellar microstructure. With increasing strain rate and decreasing temperature, the fragmentation of lamellar α was restrained and more elongated lamellae were observed. When the alloy was deformed at the near β-transus temperature of 1020 °C, the microstructure was similar to a β-transformed one and the β grain boundaries became serrated. In addition, some dynamically recrystallized grains formed along the elongated grain boundaries. The efficiency of power dissipation decreased at the temperature of 1020 °C owing to the slight increase in β grain size with decreasing strain rate.
Deformation Mechanism of Lamellar Alpha
The EBSD microstructures of the alloy with the initial thin acicular lamellar alpha (microstructure A), taken at two regions of a specimen deformed at a temperature of 960 °C and a strain rate of 0.01 s−1, are depicted in Figure 9. As a result of the different local effective strains in the two regions of the deformed specimen, the microstructures displayed different characteristics. As seen in Figure 9a, a certain number of residual lamellar α remained in region A, which is attributed to insufficient and non-uniform deformation. In addition, some new fine grains with equiaxed morphology can be observed around the lamellar α. The volume fraction of globular α grains was strongly affected by the applied strain, increasing from 65.1% (region A) to 80.8% (region B) with increasing strain. Moreover, the colors in the inverse pole figure (IPF) maps change noticeably and their distribution becomes more uniform in Figure 9b. The strain had a significant influence on the frequencies of low-angle grain boundaries (LABs, misorientation between 2° and 15°) and high-angle grain boundaries (HABs, misorientation over 15°). A greater strain resulted in a decreasing number of LABs and an increasing number of HABs, which can be related to the microstructure conversion from lamellar to equiaxed.
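As an illustration of the boundary statistics used here, the following minimal sketch tallies LAB and HAB fractions from a list of misorientation angles using the same 2°/15° thresholds; the angles are hypothetical, and real EBSD post-processing would work on the full, symmetry-reduced boundary network.

```python
import numpy as np

# Hypothetical grain-boundary misorientation angles (degrees) from an EBSD scan.
misorientations = np.array([3.2, 7.8, 12.1, 21.5, 34.0, 48.7, 55.3, 60.1, 9.4, 27.6])

# Boundaries below 2 degrees are usually treated as noise and excluded.
valid = misorientations[misorientations >= 2.0]

labs = valid[(valid >= 2.0) & (valid < 15.0)]  # low-angle grain boundaries
habs = valid[valid >= 15.0]                    # high-angle grain boundaries

print(f"LAB fraction: {len(labs) / len(valid):.1%}")
print(f"HAB fraction: {len(habs) / len(valid):.1%}")
```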
Figure 10 shows the variation in orientation accumulation detected along the white lines marked in Figure 9. It is noted that orientations along the lamellar α change broadly; the cumulative misorientation of line L1 is lower than 10°, whereas the maximum value for L2 exceeds 30°. According to the point-to-origin profiles, the two lines can be described as a continuous accumulation for L1 and a discontinuous multi-peak orientation distribution for L2, respectively. The small misorientation angles in the point-to-point line of the continuous accumulation profile in L1 indicate a long-range continuous lattice distortion, which can be associated with the formation of subgrains [27,28]. The point-to-point line of L2 displays alternating lattice orientations on the order of 1-10° between neighboring points, leading to the multiple peaks. This may be attributed to the gradual increase of the low misorientation angles to higher values in the lamellar α with increasing strain. These two representative types of orientation accumulation profiles are also observed in other deformed microstructures. The heterogeneous evolution of lamellar α is promoted by different magnitudes of strain gradient in different regions.

The dynamic globularization of lamellar α is generally considered to be a type of dynamic recrystallization (DRX) related to the evolution of grain boundaries [29,30]. To obtain a better understanding of the deformation mechanism, it is necessary to quantitatively investigate the generation and distribution of low-angle grain boundaries (LABs) and high-angle grain boundaries (HABs). The effects of initial lamellar thickness on the distribution of grain boundary misorientation for the Ti-5.4Al-3.7Sn-3.3Zr-0.5Mo-0.4Si alloy are shown in Figure 11. For the alloy with microstructure A deformed at 960 °C and 0.01 s−1, the grain boundaries consisted of approximately 28.7% LABs and 71.3% HABs. The fraction of HABs under the same deformation conditions for microstructure B is about 44.1% (Figure 11d), while this value decreases to approximately 26.2% for microstructure C (Figure 11f). Based on the microstructures discussed above, it is considered that the volume fraction of dynamic globularization is correlated with the ratio of LABs to HABs [31]. As the boundary angle of the produced substructures becomes larger, DRX occurs more easily [32]. More HABs suggest a more intensive dynamic process in lamellar α, which results in an increase in the volume fraction of globularized α phase. Thus, refining the initial lamellar thickness could promote the dynamic recrystallization of the α phase. The effects of deformation parameters on the frequency of LABs for several titanium alloys during hot deformation have been discussed in [33-35], and the results showed that the grain boundary misorientation gradually increased and the ratio of LABs decreased when the alloy was deformed at a lower strain rate with a higher strain. It is generally accepted that this grain boundary evolution coincides with continuous dynamic recrystallization (CDRX), during which dislocation interactions induce the accumulation of LABs, and the formation of HABs leads to the decomposition of lamellar α [5,29,36]. This suggests that the fragmentation of lamellar α is predominantly governed by CDRX.
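The two profile types can be emulated with a simple line-scan calculation, sketched below with scalar stand-in orientation angles; a faithful analysis would use quaternion-based, crystal-symmetry-aware misorientations.

```python
import numpy as np

# Simplified 1-D sketch: orientations along a line scan reduced to scalar angles
# (degrees). Real EBSD data require full, symmetry-reduced misorientation math.
orientation = np.array([0.0, 1.5, 3.2, 4.1, 6.0, 7.4, 8.8, 9.6])

# Point-to-point: misorientation between neighboring measurement points.
point_to_point = np.abs(np.diff(orientation))

# Point-to-origin: cumulative misorientation relative to the first point.
point_to_origin = np.abs(orientation - orientation[0])

print("point-to-point :", point_to_point.round(2))
print("point-to-origin:", point_to_origin.round(2))
```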
The EBSD maps including distribution information about grain boundaries, kernel average misorientation (KAM), and recrystallized fraction (DRX) for the alloy with the initial thin lamellar microstructure are illustrated in Figure 12. The HABs were mainly distributed in the equiaxed α produced by dynamic globularization, whereas the LABs were generated mostly inside the α lamellae, representing features of substructures. As shown in the KAM map, the high local misorientation inside the lamellar variants indicates a general accumulation of dislocation density and stored energy, which accelerated the formation of substructures (yellow regions in the DRX map). In particular, it can be clearly observed that a portion of the new fine equiaxed grains with high-angle boundaries formed around the primary lamellar α. At the interfaces of the α lamellae, the dislocation densities became lower and obvious recrystallized grains were observed. These results therefore indicate that a discontinuous dynamic recrystallization (DDRX) mechanism occurs and plays an important role in dynamic globularization during the deformation of the studied alloy with the initial acicular microstructure. The work by Matsumoto et al. [37] investigated the frequent occurrence of DDRX in the Ti-6Al-4V alloy and found that DDRX becomes the dominant deformation mechanism when the starting microstructure is changed from the (α + β) condition to an acicular α martensite one. He et al. [38] revealed that the recrystallization mechanism changed to DDRX at a higher temperature (850 °C) in the Ti-6Al-2Zr-1Mo-1V alloy. The relatively random distribution of the crystal orientations shown in Figure 9 also validates this deformation feature. However, the DDRX mechanism was not observed in the deformed microstructure during the globularization process of the alloy with the initial thick lamellar microstructure.

The schematic diagrams of the lamellar α evolution behavior for the alloy with initial thin (microstructure A: acicular-platelet) and thick (microstructure C: colony-lamellae) lamellar microstructures are illustrated in Figure 13. At the beginning of deformation, some of the thick lamellae with specific orientations may undergo severe kinking and buckling. In addition, the rest of the colonies are elongated to a certain extent and tend to rotate toward the direction perpendicular to the applied stress. With further imposed strain, the kinking of lamellar α is enhanced, which provides more nucleation sites and contributes to the breakdown of the lamellae. As seen in Figure 13, grooves are generated at the edges of the α lamellae, promoting the fragmentation of long lamellae. The underlying mechanism of this fragmentation process is associated with CDRX. Meanwhile, the lamellae are fully rotated until finally perpendicular to the compression direction. The number of globular α and short α laths then increases with increasing strain until a fully equiaxed microstructure is obtained. Compared with the initial colony lamellar microstructure, the microstructure conversion characteristics of the alloy with the initial acicular microstructure are distinct owing to the different thicknesses of lamellar α. During this evolution process, lamellar kinking was rarely observed, whereas similar elongation and rotation behavior was visible. As pointed out in the above discussion, numerous new fine grains formed at the interfaces of the α lamellae as a result of DDRX. Thus, in the present work, it is suggested that the collaborative mechanism of CDRX and DDRX contributes to the rapid globularization of the acicular microstructure, whereas CDRX dominates the dynamic globularization of the initial thick lamellar microstructure throughout the whole deformation process.
According to the above discussion of the microstructural evolution, it is concluded that the maximum value and the extent of the regions of high power dissipation efficiency at low strain rates decrease with increasing initial lamellar thickness. This may be associated with the larger extent of dynamic globularization of the thin alpha lamellae compared with the thick ones. Moreover, the micrographs clearly reveal that the volume fraction of dynamic globularization increases at higher temperature and lower strain rate. Compared with deformation at the low strain rates of 0.01 s−1 and 0.001 s−1, lamellar α tends to be elongated and kinked with increasing strain rate in the temperature range of 900-990 °C, indicating less complete dynamic globularization. On the other hand, the high efficiency of power dissipation at moderate strain rates and temperatures ranging from 990 °C to 1020 °C is attributed to the significant α-to-β phase transformation and the dynamic recrystallization of β grains. Flow localization and local lamellar kinking may cause the occurrence of flow instability. Moreover, the present work is the first to discuss the diverse deformation mechanisms involved in the collaborative behavior of CDRX and DDRX for the dynamic globularization of lamellar α with different initial thicknesses in the near-α Ti-5.4Al-3.7Sn-3.3Zr-0.5Mo-0.4Si alloy.
Conclusions
In this work, the hot deformation behavior and microstructural evolution of a near-α Ti-5.4Al-3.7Sn-3.3Zr-0.5Mo-0.4Si alloy with various initial lamellar microstructures were characterized by a series of isothermal compression tests at temperatures from 900 °C to 1020 °C and strain rates from 0.001 s−1 to 10 s−1. The main conclusions are drawn as follows:
(1) The flow stress of the Ti-5.4Al-3.7Sn-3.3Zr-0.5Mo-0.4Si alloy is strongly dependent on the initial lamellar thickness and the deformation parameters. The alloy with a thinner initial lamellar thickness shows a higher flow stress, and increasing the deformation temperature or decreasing the strain rate reduces the flow resistance.
(2) The peak efficiency in the processing maps at a strain of 0.65 occurred at 960 °C/0.001 s−1 for all three initial microstructures, with maximum values of 0.57, 0.53, and 0.48 for microstructures A, B, and C, respectively. The flow instability regions expand more widely with increasing initial lamellar thickness when the alloy is deformed at higher strain rates.
(3) The microstructure observations indicate that different extents of globularization, elongation, kinking, and phase transformation of lamellar α are responsible for the variation in power dissipation in the processing maps. The collaborative mechanism of CDRX and DDRX accelerates the globularization of the thin acicular initial microstructure. For the thick initial lamellar microstructure, CDRX, leading to the fragmentation of the lamellae, is the dominant mechanism throughout the deformation process.
Figure 9. Electron backscatter diffraction (EBSD) inverse pole figure (IPF) maps of the alloy with initial microstructure A deformed at 960 °C/0.01 s−1: (a) region A; (b) region B; (c) schematic of the microstructure observation locations; (d) relative frequency of misorientation.
Figure 13. Schematic diagrams of the evolution of the lamellar α phase with various initial thicknesses during hot deformation.
Interim Estimates of 2013–14 Seasonal Influenza Vaccine Effectiveness — United States, February 2014
In the United States, annual vaccination against seasonal influenza is recommended for all persons aged ≥6 months (1). Each season since 2004-05, CDC has estimated the effectiveness of seasonal influenza vaccine to prevent influenza-associated, medically attended acute respiratory illness (ARI). This report uses data from 2,319 children and adults enrolled in the U.S. Influenza Vaccine Effectiveness (Flu VE) Network during December 2, 2013-January 23, 2014, to estimate an interim adjusted effectiveness of seasonal influenza vaccine for preventing laboratory-confirmed influenza virus infection associated with medically attended ARI. During this period, overall vaccine effectiveness (VE) (adjusted for study site, age, sex, race/ethnicity, self-rated health, and days from illness onset to enrollment) against influenza A and B virus infection associated with medically attended ARI was 61%. The influenza A (H1N1)pdm09 (pH1N1) virus that emerged to cause a pandemic in 2009 accounted for 98% of influenza viruses detected. VE was estimated to be 62% against pH1N1 virus infections and was similar across age groups. As of February 8, 2014, influenza activity remained elevated in the United States, the proportion of persons seeing their health-care provider for influenza-like illness was lower than in early January but remained above the national baseline, and activity still might be increasing in some parts of the country (2). CDC and the Advisory Committee on Immunization Practices routinely recommend that annual influenza vaccination efforts continue as long as influenza viruses are circulating (1). Persons aged ≥6 months who have not yet been vaccinated this season should be vaccinated. Antiviral medications are an important second line of defense to treat influenza illness and should be used as recommended (3) among suspected or confirmed influenza patients, regardless of patient vaccination status. Early antiviral treatment is recommended for persons with suspected influenza with severe or progressive illness (e.g., hospitalized persons) and those at high risk for complications from influenza, no matter how severe the illness.
Methods used by the U.S. Flu VE Network have been published previously (4). At five study sites, patients aged ≥6 months seeking outpatient medical care for an ARI with cough, within 7 days of illness onset, were enrolled.* Study enrollment began after laboratory-confirmed cases of influenza were identified through local surveillance for ≥2 consecutive weeks. Trained study staff members reviewed appointment schedules and lists of symptoms to identify patients with ARI and approached eligible patients (or parents/guardians) to complete a brief screening survey. Patients were eligible for enrollment if they 1) were aged ≥6 months on September 1, 2013, and thus were eligible for vaccination; 2) reported an ARI with cough and onset ≤7 days earlier; and 3) had not been treated with influenza antiviral medication (e.g., oseltamivir) during this illness. Consenting participants completed an enrollment interview. Respiratory specimens were collected from each patient using nasal and oropharyngeal swabs, which were placed together in a single cryovial with viral transport medium. Only nasal swabs were collected for patients aged <2 years. Specimens were tested at U.S. Flu VE Network laboratories using CDC's real-time reverse transcription polymerase chain reaction (rRT-PCR) protocol for detection and identification of influenza viruses. Participants were considered vaccinated if they received ≥1 dose of any seasonal influenza vaccine ≥14 days before illness onset, according to medical records and registries (at Wisconsin and Washington sites) or medical records and self-report (at Michigan, Pennsylvania, and Texas sites). VE was estimated as 100% × (1 − odds ratio), comparing the odds of vaccination among influenza-positive versus influenza-negative participants. Estimates were adjusted for study site, age, sex, race/ethnicity, self-rated health, and days from illness onset to enrollment using logistic regression. Interim VE estimates for the 2013-14 season were based on patients enrolled through January 23, 2014.
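As a rough illustration of this estimator (not the network's actual analysis code), the following Python sketch fits a logistic regression on a hypothetical test-negative dataset and converts the adjusted odds ratio to VE; all variable names and data here are invented for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 800

# Hypothetical test-negative dataset: one row per enrolled ARI patient.
# A real analysis would also adjust for sex, race/ethnicity, self-rated
# health, and days from illness onset to enrollment.
df = pd.DataFrame({
    "flu_positive": rng.integers(0, 2, n),
    "vaccinated": rng.integers(0, 2, n),
    "age_group": rng.choice(["6m-8y", "9-17", "18-49", "50-64", "65+"], n),
    "site": rng.choice(["WI", "WA", "MI", "PA", "TX"], n),
})

# Logistic regression of influenza positivity on vaccination status,
# adjusted for age group and study site.
model = smf.logit("flu_positive ~ vaccinated + C(age_group) + C(site)",
                  data=df).fit(disp=0)

odds_ratio = np.exp(model.params["vaccinated"])
ci_low, ci_high = np.exp(model.conf_int().loc["vaccinated"])

# VE = 100% x (1 - adjusted odds ratio); the CI bounds swap when transformed.
ve = 100.0 * (1.0 - odds_ratio)
print(f"adjusted VE = {ve:.0f}% "
      f"(95% CI: {100*(1-ci_high):.0f}% to {100*(1-ci_low):.0f}%)")
```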
Of the 2,319 children and adults with ARI enrolled at the five study sites during December 2, 2013-January 23, 2014, a total of 784 (34%) tested positive for influenza virus by rRT-PCR (Figure); 778 (99%) of these viruses were influenza A, and six (1%) were influenza B (Table 1). Among 755 subtyped influenza A viruses, 742 (98%) were pH1N1 viruses. The proportion of patients with influenza differed by study site, age, race/ethnicity, and interval from onset to enrollment (Table 1). The proportion vaccinated ranged from 38% to 48% across sites and also differed by age, race/ethnicity, and interval from onset to enrollment.
The proportion vaccinated with 2013-14 seasonal influenza vaccine was 29% among influenza cases compared with 50% among influenza-negative controls (Table 2). After adjusting for study site, age, sex, race/ethnicity, self-rated health, and days from illness onset to enrollment, VE against medically attended ARI attributable to influenza was 61% (95% confidence interval [CI] = 52%-68%). The adjusted VE for all ages against medically attended ARI caused by pH1N1 virus infection was 62% (CI = 53%-69%). Similar VE against pH1N1 was observed for all age groups.
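A quick back-of-envelope computation from these two proportions reproduces the order of magnitude of the adjusted estimate:

```python
# Crude (unadjusted) check of the reported figures: 29% of influenza-positive
# cases and 50% of influenza-negative controls were vaccinated.
p_cases, p_controls = 0.29, 0.50

crude_or = (p_cases / (1 - p_cases)) / (p_controls / (1 - p_controls))
crude_ve = 100 * (1 - crude_or)

print(f"crude OR = {crude_or:.2f}")   # ~0.41
print(f"crude VE = {crude_ve:.0f}%")  # ~59%, close to the adjusted 61%
```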
Editorial Note
Interim results for the 2013-14 season indicate that vaccination has reduced the risk for influenza-associated medical visits by approximately 60%, demonstrating the benefits of influenza vaccination during the current season. Influenza activity is likely to continue for several more weeks in the United States. Vaccination efforts should continue as long as influenza viruses are circulating. Persons aged ≥6 months who have not yet received the 2013-14 influenza vaccine should be vaccinated. As of February 8, 2014, approximately 134 million doses of influenza vaccine had been distributed in the United States for the 2013-14 season, out of the approximately 138-145 million doses anticipated to be available for the U.S. market. Because some vaccine providers might have exhausted their vaccine supplies at this time, persons seeking vaccination might need to call more than one provider to locate vaccine.†

These age-adjusted interim VE estimates for the 2013-14 influenza vaccine suggest continued effectiveness in preventing outpatient medical visits associated with pH1N1 virus infection. The 2009 pandemic influenza viruses have continued to circulate each season since the 2009 pandemic, but the 2013-14 influenza season is the first since 2009-10 during which pH1N1 viruses have predominated; as of February 8, 2014, pH1N1 viruses accounted for nearly 96% of subtyped influenza A viruses reported to CDC (2). Interim VE estimates of the 2013-14 influenza vaccine for prevention of pH1N1-associated outpatient ARI visits were similar to VE estimates for monovalent pandemic and seasonal influenza vaccines against outpatient medical visits associated with pH1N1 virus infection during previous influenza seasons (4-7) and are consistent with recent interim estimates from Canada (8). Nationally, more than 99% of pH1N1 viruses tested by CDC this season, including 40 viruses from U.S. […] These interim estimates suggest similar preventive benefits against pH1N1 influenza virus infections across age groups. During the pandemic, young adults, children, pregnant women, and persons with medical conditions (including morbid obesity) that placed them at high risk for influenza-related complications§ experienced high rates of severe illness and influenza-associated hospitalization. Although […] reduced influenza-associated medical visits (9). Final 2013-14 influenza season vaccination coverage estimates will be available after the end of the season.

* Defined as having received ≥1 dose of vaccine ≥14 days before illness onset. According to medical records, 93% of participants to date had been vaccinated with inactivated influenza vaccines. A total of 56 participants who received the vaccine ≤13 days before illness onset were excluded from the study sample.
† The chi-square statistic was used to assess differences between the numbers of persons with influenza-negative and influenza-positive test results, in the distribution of enrolled patient and illness characteristics, and in differences between groups in the percentage vaccinated.
§ Enrollees were categorized into one of four mutually exclusive racial/ethnic populations: white, black, other race, and Hispanic. Persons identified as Hispanic might be of any race. Persons identified as white, black, or other race are non-Hispanic. The overall prevalences calculated included data from all racial/ethnic groups, not just the four included in this analysis. Race/ethnicity data were missing for four enrollees.
¶ Data on self-rated health status were missing for two enrollees.
As of February 8, 2014, influenza activity remained elevated nationally and widespread across most of the country. These VE estimates imply that some vaccinated persons will become infected with influenza. Clinicians should maintain a high index of suspicion for influenza infection among persons with ARI while influenza activity is ongoing. Early antiviral treatment can reduce influenza-associated illness severity and complications (3). Early antiviral treatment is recommended for persons with suspected influenza with severe or progressive illness (e.g., hospitalized persons) and those at high risk for complications from influenza,** no matter how severe the illness. Antiviral medications should be used as recommended for treatment in patients with suspected influenza, regardless of vaccination status. The decision to initiate antiviral treatment should not wait for laboratory confirmation of influenza and should not be dependent on insensitive assays, such as rapid influenza diagnostic tests.
The findings in this report are subject to at least four limitations. First, vaccination status included self-report at three of five sites; dates of vaccination were available only for persons with documented vaccination obtained from medical records or immunization registries. Verification of vaccination status at all sites will be available for end-of-season VE estimates, which might differ from interim estimates. Second, information from medical records and immunization registries is needed to evaluate VE for fully versus partially vaccinated children (certain children aged <9 years require 2 vaccine doses) and by vaccine type (e.g., inactivated compared with live attenuated), as well as to evaluate the effects of prior season vaccination; end-of-season analysis of VE for the two most common vaccine types and effects of partial or prior season vaccination is planned. Third, the observational study design has greater potential for confounding and bias than do randomized clinical trials. However, a recent study found that the study design used by the U.S. Flu VE Network produced unbiased VE estimates when applied to analysis of data from randomized placebo-controlled trials (10). In this interim report, adjustment for age, study site, and potential confounding factors identified in previous studies resulted in adjusted estimates that were similar to crude estimates, although final estimates will adjust for additional potential confounders, such as chronic medical conditions, for which information was not available for interim estimates. Finally, end-of-season VE estimates could change as additional patient data become available or if there is a change in circulating viruses late in the season. Also, the VE estimates in this report are limited to the prevention of outpatient medical visits, rather than more severe illness outcomes, such as hospitalization or death; additional studies to measure VE against more severe outcomes are warranted.
Annual vaccination against circulating influenza viruses remains the best strategy for preventing illness from influenza. This report highlights the value of seasonal influenza vaccination and supports ongoing vaccination efforts for all persons aged ≥6 months. Antiviral medications continue to be an important adjunct in the treatment and control of influenza and should be used as recommended, regardless of patient vaccination status.

** A complete summary of guidance for antiviral use is available at http://www.cdc.gov/flu/professionals/antivirals/summary-clinicians.htm.
Investigating Antibacterial Effects of Latrodectus Dahli Crude Venom on Escherichia coli, Staphylococcus aureus and Bacillus subtilis
Background and Objectives: Nowadays, infections with antibiotic-resistant bacteria are among the most important causes of mortality worldwide. This has attracted the attention of researchers to seek suitable alternatives for antibiotics. The venom of many toxic species such as arthropods has antibacterial properties. In this study, we investigated antibacterial effects of crude venom of Latrodectus dahli on Escherichia coli, Staphylococcus aureus, and Bacillus subtilis. Methods: Lyophilized crude venom of L. dahli was dissolved in 50 mM Tris-HCl buffer. Protein concentration was determined by the Bradford assay. Then, the bacteria were exposed to different concentrations (31.25-250 ng/mL) of the crude venom. Inhibitory activity of the venom against the bacteria was determined by MTT assay and by determining the minimum inhibitory concentration (MIC). Results: Results of the MTT assay showed that the crude venom significantly inhibited the growth of E. coli (31.25 and 62.5 ng/mL), S. aureus (at 250 ng/mL) and B. subtilis (at 125 and 250 ng/mL). In the MIC experiment, the crude venom significantly inhibited the growth of E. coli (at concentrations of 31.25 and 62.5 ng/mL), S. aureus (at concentrations of 31.25-250 ng/mL) and B. subtilis (at concentrations of 31.25-250 ng/mL). Conclusion: The crude venom of L. dahli and its components showed relatively strong antibacterial effects.
INTRODUCTION
Recently, researchers have focused on antimicrobial peptides (AMPs) to combat antibiotic resistance (1,2). These peptides contain deformed amino acids that are not found in polypeptides made by ribosomes. Research has shown that these compounds have great medicinal potential. Selective toxicity is an essential characteristic of an antimicrobial agent. Ideally, such compounds have affinity for one or more microbial determinants that are easily accessible, common to a broad spectrum of microbes and relatively immutable. Nature seems to develop a class of molecules that meet these constraints in the evolution of AMPs, which initially target microbial cells, and thus fulfill the criteria mentioned above for identifying molecular determinants of pathogens. AMPs have amphipathic features that mirror phospholipids, thus allowing them to interact with and exploit vulnerabilities inherent in essential microbial structures such as cell membranes (3). Until now, the antibacterial activity of more than 1000 peptides from different eukaryotic and prokaryotic sources have been investigated to find a suitable antimicrobial alternative for antibiotics (4,5). It has been demonstrated that venom of snakes, scorpions and spiders has strong antibacterial effects (6). Some studies have also shown that AMPs have anticancer effects (7,8). Peptides from arthropods are cationic and amphiphilic and do not contain cysteine residues (9). Spiders are members of arthropods and have more than 60 families and 35000 species. They live in nearly every habitat on earth (10). Latrodectus spider, also known as the black widow, is a member of the Theridiidae family (11). The venom of this spider contains α-latrotoxin, a 130 kDa mammalian neurotoxin that has been used for studying exocytosis in cells (12). In this study, we investigate antibacterial effects of the venom of black widow spider on Escherichia coli, Staphylococcus aureus, and Bacillus subtilis.
MATERIALS AND METHODS
3-(4,5-Dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) powder, flat-bottom 96-well plates and dimethyl sulfoxide (DMSO) were purchased from Sigma-Aldrich, USA. Tris-HCl and crude venom of the black widow spider were purchased from Merck (Germany) and the Razi Institute (Iran), respectively. Lyophilized venom was dissolved in 250 μL of Tris-HCl buffer (50 mM), incubated at 4 °C and then kept in a freezer at -20 °C. The Bradford assay was used for determining protein concentrations. Bovine serum albumin (BSA) was used for plotting the standard curve, and a microplate spectrophotometer (Epoch, BioTek) was used for measuring the absorbance of the samples at 595 nm. E. coli (ATCC 25922), S. aureus (ATCC 25923) and B. subtilis (ATCC 6633) were purchased from the Persian Type Culture Collection (PTCC). In brief, the bacteria were cultured in Mueller Hinton broth media inside a 96-well plate until reaching a turbidity of 0.5 McFarland (Qlab). Minimum inhibitory concentration (MIC) was calculated as the lowest concentration of venom that inhibited bacterial growth (13). The bacteria were exposed to different concentrations of the crude venom (31.25-250 ng/mL). Tetracycline (50 μg/mL) and bacterial suspension were used as positive and negative controls, respectively. In addition, Mueller Hinton broth was used as a blank. The final volume for each well of the 96-well plate was set at 100 μL. The assay was repeated three times for each concentration. After 15 to 18 hours of incubation at the defined conditions, cell density was measured at 605 nm (14), and growth inhibition was calculated as the percentage reduction in cell density relative to the negative control.

For the MTT assay, after 15 to 18 hours, 5 μL of MTT dye (5 mg/mL) were added to all wells and the plate was incubated for one hour in the dark at 37 °C. Then, 100 μL of DMSO was added to each well and the plate was incubated for two more hours in the dark. Absorbance was read at 595 nm (15). All experiments were carried out in triplicate, and cell viability was calculated as the percentage of absorbance of treated wells relative to the untreated control. Results were reported as mean ± standard deviation (SD) and data were analyzed in the GraphPad Prism 6.1 software (GraphPad Software Inc., USA) using ANOVA and the Tukey test. A p-value of less than 0.05 was considered statistically significant.
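Since the paper's own equations were lost in extraction, the sketch below uses the standard MTT viability and percent-inhibition formulas (an assumption) on hypothetical optical-density readings, and mirrors the ANOVA step with SciPy rather than GraphPad Prism.

```python
import numpy as np
from scipy import stats

# Hypothetical OD readings (three replicates each), following the plate layout
# described above; real values come from the microplate reader.
od_blank = np.array([0.08, 0.09, 0.08])     # Mueller Hinton broth only
od_negative = np.array([0.95, 0.97, 0.93])  # untreated bacterial suspension
od_treated = {31.25: np.array([0.90, 0.88, 0.91]),
              62.5:  np.array([0.86, 0.89, 0.87]),
              125:   np.array([0.93, 0.94, 0.92]),
              250:   np.array([0.95, 0.96, 0.94])}

control = od_negative.mean() - od_blank.mean()

for conc, od in od_treated.items():
    signal = od.mean() - od_blank.mean()
    viability = 100 * signal / control         # MTT cell viability (%)
    inhibition = 100 * (1 - signal / control)  # growth inhibition (%)
    print(f"{conc:>6} ng/mL: viability {viability:5.1f}%, "
          f"inhibition {inhibition:5.1f}%")

# One-way ANOVA across treatment groups; Tukey's post-hoc test would follow,
# e.g., with statsmodels' pairwise_tukeyhsd.
f_stat, p_value = stats.f_oneway(*od_treated.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
```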
RESULTS
The results of the MIC assay indicated that 31.25 and 62.5 ng/mL of the crude venom had significant inhibitory effects on the growth of E. coli cells. However, the inhibitory effect of the control antibiotic was much stronger than that of the crude venom. The MIC values of the crude venom against E. coli were 9.3±4.61, 9±3.60, 5.7±4.04 and 3.7±3.05 percent at the concentrations of 31.25, 62.5, 125 and 250 ng/mL, respectively. The MIC analysis of the antibiotic for these cells showed that 99.16±0.40 percent of the cells were inhibited when using 50 μg/mL of the antibiotic (Figure 1A). Results of the MTT assay showed that 31.25 and 62.5 ng/mL of crude venom had significant inhibitory effects on the growth of E. coli compared with the control. However, the antibiotic showed stronger antibacterial activity compared with the crude venom. The viability of E. coli was 85.7±5.09, 90.3±1.15, 93.7±0.87 and 98±3.46 percent when using 31.25, 62.5, 125, and 250 ng/mL of crude venom, respectively, and 4.64±3.67 percent when using 50 μg/mL of the antibiotic (Figure 1B). Concentrations of 31.25-250 ng/mL of crude venom had significant inhibitory effects against S. aureus. The MIC values of the crude venom were 5±1, 3.7±2.08, 3±1, and 2 percent at the concentrations of 31.25, 62.5, 125, and 250 ng/mL, respectively. However, the inhibitory effect of the antibiotic was much stronger than that of the crude venom against S. aureus (Figure 2A). Results from the MTT assay showed that 250 ng/mL of the crude venom had a significant inhibitory effect on the growth of S. aureus; the viability of S. aureus treated with tetracycline (50 μg/mL) was much lower (16.7±5.03 percent) than that of cells treated with venom (Figure 2B).
DISCUSSION
Since the discovery of penicillin in 1940, various antibiotics have been proposed for the treatment of bacterial diseases and the control of bacterial epidemics (16). The emergence of antibiotic-resistant bacteria, along with the side effects of antibiotics such as allergic hypersensitivity and immunosuppression (17,18), has encouraged researchers to seek new generations of antibiotics or alternative antimicrobials (19)(20)(21). Recently, the isolation of antibacterial molecules from animals, such as AMPs, has been considered for the treatment of diseases. The venoms of different species, including snakes, spiders, insects, centipedes and amphibians, are rich sources of biologically active and therapeutic compounds, including AMPs (4,6,(22)(23)(24). The innate immune system of arthropods has evolved a complex arrangement of constitutive and inducible AMPs that immediately destroy a large variety of pathogens. In this complex system, several enzymes, low-molecular-mass compounds, neurotoxins, and antimicrobial and cytolytic peptides interact, resulting in extremely rapid immobilization and/or killing of prey or aggressors (25). Many studies have shown that the venoms of snakes, scorpions and spiders have antibacterial effects (6). As the largest group of venomous animals, with more than 50000 species (26), spiders and their venom have long been used in traditional medicine for the treatment of various diseases (27). In this study, we investigated the inhibitory effects of crude venom from the black widow spider (L. dahli) against a number of bacteria using the MTT assay and MIC determination. Disc diffusion and well diffusion methods had low reproducibility and were not suitable for this study; since the venom was water insoluble, paper discs (as a filter and block) were used in the disc diffusion assay for evaluating the antibacterial activity of the crude venom and its components during culture (28)(29)(30)(31). Some studies have suggested that the antimicrobial effect of venom is promoted by the increased permeability of the bacterial cell membrane induced by antimicrobial proteins present in spider venom (32). In our study, the results of the MTT assay showed that the crude venom could significantly reduce the viability of E. coli (at 31.25 and 62.5 ng/mL) and S. aureus, suggesting that black widow spider venom may be a promising source of antibacterial agents.
(Figure legends: tetracycline (50 µg/mL) and medium without crude venom were used as the positive and negative controls, respectively; ns: non-significant, *: p<0.05, **: p<0.01, ***: p<0.001, ****: p<0.0001.)
Stockholm preterm interaction-based intervention (SPIBI) - study protocol for an RCT of a 12-month parallel-group post-discharge program for extremely preterm infants and their parents
Background Improved neonatal care has resulted in increased survival rates among infants born after only 22 gestational weeks, but extremely preterm children still have an increased risk of neurodevelopmental delays, learning disabilities and reduced cognitive capacity, particularly executive function deficits. Parent-child interaction and parental mental health are associated with infant development, regardless of preterm birth. There is a need for further early interventions directed towards extremely preterm (EPT) children as well as their parents. The purpose of this paper is to describe the Stockholm Preterm Interaction-Based Intervention (SPIBI), the arrangements of the SPIBI trial and the chosen outcome measurements. Methods The SPIBI is a randomized clinical trial that includes EPT infants and their parents upon discharge from four neonatal units in Stockholm, Sweden. Inclusion criteria are EPT infants soon to be discharged from a neonatal intensive care unit (NICU), with parents speaking Swedish or English. Both groups receive three initial visits at the neonatal unit before discharge during the recruitment process, with a strengths-based and development-supportive approach. The intervention group receives ten home visits and two telephone calls during the first year from a trained interventionist from a multi-professional team. The SPIBI intervention is a strengths-based early intervention programme focusing on parental sensitivity to infant cues, enhancing positive parent-child interaction, improving self-regulating skills and supporting the infant’s next small developmental step through a scaffolding process and parent-infant co-regulation. The control group receives standard follow-up and care plus extended assessment. The outcomes of interest are parent-child interaction, child development, parental mental health and preschool teacher evaluation of child participation, with assessments at 3, 12, 24 and 36 months corrected age (CA). The primary outcome is emotional availability at 12 months CA. Discussion If the SPIBI shows positive results, it could be considered for clinical implementation for child-support, ethical and health-economic purposes. Regardless of the outcome, the trial will provide valuable information about extremely preterm children and their parents during infancy and toddlerhood after regional hospital care in Sweden. Trial registration The study was registered in ClinicalTrials.gov in October 2018 (NCT03714633).
Background
Being born extremely preterm, i.e., born before 28 gestation weeks, is a potentially life-threatening circumstance affecting the child [1][2][3][4][5][6], the parents [7][8][9][10] and the interactions among family members [11][12][13]. Swedish health care delivers high quality services to all citizens regardless of family income and offers active and advanced neonatal intensive care, saving 90% of children born extremely preterm [14]. In Sweden, 0.3% of all children are born extremely preterm (EPT), and the Swedish Federation for Preterm Infants (SPF) stresses that surviving EPT children constitute a new group of patients in need of support beyond the intensive care period [15]. Further highlighting the urgency for additional supportive care to families with EPT-born infants, Sweden's frontline neonatal care [16] has resulted in a new EPT population of surviving children born as young as 22 + 0 to 23 + 6 gestational weeks, and the long-term outcomes for this novel population are not yet known.
Swedish data show that approximately 2/3 of the EPT children have no or mild impairment, while 1/3 have moderate to severe neurodevelopmental impairments when entering primary school [4], with signs thereof already in preschool [5]. EPT children are a defined high-risk population, and the occurrence of cognitive impairment [1] increases the earlier in pregnancy the child was born [17]. Working memory [18] and executive functions [3,19,20] seem to be particularly vulnerable in extremely preterm children, and Attention Deficit Hyperactivity Disorder (ADHD) is twice as common among EPT children compared to term peers [21]. Executive function (EF) deficits are of particular developmental interest, since self-regulation is associated with EF [22], and EPT children tend to display early self-regulatory difficulties [23]. Another neuropsychiatric disorder overrepresented among EPT children is autism spectrum disorder (ASD), which is diagnosed in 17% of EPT children. In some studies, up to 29% of EPT children screen positive according to ASD observation protocols [24][25][26]. Moreover, several skills that are important for school success, such as mathematical [27] and linguistic abilities [28,29], are negatively affected by extreme prematurity; preterm-born students in general do not perform at the same level as their term classmates in school [30][31][32]. Additionally, EPT children have an increased risk of mental illness [33,34] and of being bullied throughout school [35]. Given the described outcomes of extreme prematurity, there is a clear need for interventions and treatments that may positively influence the long-term development of EPT children.
Prematurity affects not only the individual child but also the family as a whole. Giving birth unexpectedly early, missing part of the pregnancy, the fear of losing the child, long-term stays at the neonatal intensive care unit and marital challenges are among the strains with which parents of preterm infants must cope. From a longer-term perspective, after discharge, new challenges often occur: the question of how to support the child optimally upon coming home; how to interpret the often more diffuse behavioural communication of the EPT infant compared to that of infants born at term age; how to patiently wait for, identify and support the next developmental step of the child; and how to feel competent as a parent at home. Parental mental health may be negatively affected by a child's preterm birth [8,36], and poor parental mental health is associated with less favourable social, behavioural and functional development of preschool-aged EPT children [37,38]. Hence, adequate discharge planning and transition programmes for the child and family leaving the Neonatal Intensive Care Unit (NICU) are an area in need of further development [39,40], to benefit not only the child but also the parents.
An important concern with regard to a post-discharge programme for the EPT population is the most appropriate content of such an intervention. International efforts have been made to summarize effective qualities of interventions for the general population [41], and in 2015, a Cochrane report was published that charted the post-discharge programmes for preterm-born children [42], concluding that post-discharge programmes should target both motor and cognitive outcomes and that programmes focusing on providing an optimal environment for learning have accumulated more evidence. A later meta-analysis indicated that interventions given both in the home and in the hospital/preschool show the most promising results [43], suggesting that interventions should be delivered in multiple locations. Similar to term children and their parents, EPT children are dependent upon their caretakers throughout their upbringing; therefore, interventions targeting both infants and parents as well as their relationship might be more effective than interventions with unidimensional targets. It is hardly surprising that an EPT birth influences the infant-parent relationship unfavourably [11,44], and parental behaviour should therefore also be targeted. Since parental responsivity seems to be the parental style that influences preterm children's cognitive development the most, and since parental responsivity and warmth seem to affect preterm children's behaviour favourably [45], these should be critical components of any post-discharge intervention aimed at this group. Moreover, the finding that parental rejection negatively affects indicators of preterm children's behaviour [45] supports the idea of a strengths-based approach, focusing on children's abilities more than their difficulties. In addition, attention should be given to the EPT population's challenges concerning executive functions. Since self-regulation is associated with executive development, helping the preterm child to self-regulate should be an essential part of a post-discharge programme. Executive function is, in turn, crucial for the maturation of social skills [46] and academic achievement [47] in all children.
Internationally, different post-discharge programmes have taken different approaches to the abovementioned intervention content, for example, the Infant Behavioral Assessment and Intervention Program (IBAIP), the ToP programme [48][49][50][51], the modified Mother Infant Transaction Program (MITP) [52,53], the Infant Health and Development Program (IHDP) [54] and a Taiwanese home-based intervention programme [55], among others. Different programmes offer intervention visits at different times in the discharge process and with different content. The MITP builds on sensitizing parents to baby cues while introducing them to stimulating activities for their infants. Most of the visits are scheduled during the last week of the hospital stay, with two additional home visits during the first quarter of a year at home [52]; hence, the intervention focus is rather early in the discharge process. The IBAIP is described as a strengths-based intervention, building on both infant and parental qualities and enhancing self-regulatory and co-regulatory behaviour [48]; in the Dutch trial of the IBAIP, it consisted of 6-8 home visits from an infant physiotherapist before 6 months CA. The Dutch research team later developed the ToP intervention, which is now a part of standard care for very preterm children and consists of 12 home visits during the first year at home; hence, the focus of the intervention is slightly later than in the case of the MITP. Other programmes, such as the IHDP, are more extensive and last until 36 months CA, including home visits, an educational child care programme and a bimonthly parental group during the last 2 years of the intervention. The home visits introduced both age-appropriate games for development and family support for parent-identified problems. Post-discharge interventions are not exclusively tested in Western societies; for example, a Taiwanese research group conducted up to 13 visits in the clinic or home environment during the child's first year to teach child developmental skills, provide instruction on health-related topics and feeding and massage procedures, support parents and enhance parent-child interaction, which showed positive results on infants' emotion regulation and stress responses in toddlerhood [55]. Many of the cornerstones of these programmes are also included in the SPIBI, as can be seen in the theory of change of the SPIBI (Fig. 1).
Stockholm County has four NICUs, where all professionals work to individually adapt intensive care in accordance with the Newborn Individualized Developmental Care and Assessment Program (NIDCAP) [56]. However, the NIDCAP, which supports parents' ability to read and adequately respond to baby cues, unfortunately ends at discharge. To contribute to a cohesive chain of care, the SPIBI builds on the same principles as the NIDCAP, e.g., relating to the synactive theory [57].
We designed a randomized clinical trial to evaluate the effect of an interaction-based programme for EPT infants and their caretakers, beginning in the discharge period and lasting until the child is 12 months CA. The aim of the SPIBI is to give this fragile population of children a better start in life by improving the quality of parent-child interaction and by supporting the parents. The programme is in line with the SPF's call for enhanced post-discharge support. The intervention aims to provide treatment for the new group of EPT survivors, which requires specific competences and responsiveness acquired by building on existing knowledge of international post-discharge interventions. The intervention is designed to have the following qualities: to match the unique Swedish context, in which infants are saved from ever-earlier gestation weeks; to include a rather extensive follow-up programme; and to provide outcome data from an EPT population with free healthcare services and 480 days of parental leave per child.
Methods
The study was designed in accordance with the SPIRIT 2013 statement.
Aims
This study aims to examine the effects of Stockholm Preterm Interaction-Based Intervention (SPIBI) in three overall domains: parent-child interaction, child development and parental mental health. The aim of the present paper is to give the rationale, content and trial design of the SPIBI.
Hypotheses
The primary hypothesis of the SPIBI is that the quality of the parent-child interaction will improve, and more specifically, that the emotional availability of both the child and parent will be higher in the intervention dyads than in the control dyads post-intervention.
The secondary hypotheses concern the children and the parents, respectively. The secondary hypotheses concerning the children are that the children in the intervention group will have enhanced development compared to the control children during the intervention, with an enduring effect concerning their general development, executive function, motor development, neurological development and autistic symptoms. Additional secondary hypotheses are that when preschool teachers are asked about their view of their extremely preterm pupil, they will describe the children in the intervention group as more participatory and playful than the control children are.
The secondary hypotheses concerning the parents are that compared to parents of children in the control condition, parents of the children in the intervention condition will be less depressed, less anxious and more resilient to stress, as well as describe themselves as having higher parental self-efficacy.
All hypotheses are formulated as objectives, with their specific outcomes and periods of activity presented in Table 1.
Content of the programme given to the intervention group (IG)
The SPIBI is a manualized, strengths-based home-visit programme focusing on parent-child interaction, skill in reading children's cues and the provision of optimal support for children's next small developmental step (see Table 2), including the elements of verbal praise and special play [41]. The basic idea is to reduce the amount of time children spend in a stressed state, which may be toxic to the infant brain, and to enhance developmentally appropriate parent-child interaction to achieve mutual enjoyment. Increased parental self-efficacy is considered to be a common mediator of family-centred practices in early childhood intervention [58], and the parent's behaviour is a central target. The brief description of the visits below is a condensed version of the 50-page Swedish SPIBI manual specially developed for this trial.
The purpose of the first visit at the neonatal unit or hospital ward where the child is still being treated is to give the parent(s) a chance to get to know the interventionist and show her the environment where the infant has spent his/her first 3-5 months of life. The interventionist initially explains the scope of the home visits and briefly describes the intervention to the parent(s), with a clear definition of what distinguishes the intervention from regular follow-up care in Stockholm for EPT children. The logbook that will be used during the home visits is presented to the parents; this logbook emphasizes playful interaction, striving for reciprocal amusement and intersubjectivity [59][60][61][62] and providing developmental support in the child's zone of proximal development [63]. All formalities are carefully written down, i.e., contact information, the time of the next home visit and the manner in which the home visits will be recorded in the logbook for the parents as well as in the medical records for the healthcare professionals.
Home visits 1-3 and two telephone calls are provided before the child is 3 months corrected age. The focus of these home visits is to observe the child and parent at home, validate the child's strengths and competences and enhance parent-child interaction, building on strengths. The child's strengths and interests will be summarized in the parents' logbook. All feedback to parents, presented orally as well as in written form, is given in a positive and non-judgemental way.

Table 1 Objectives, outcomes and periods of activity
Objective 1: To improve the quality of the interaction between parents and their extremely preterm children. All participants will be filmed during a 20-min free-play interaction, and the videos will be analysed using the Emotional Availability Scales (EAS). Parent-child interactions will be video-filmed at 12 months corrected age (at the end of the intervention) and again at 2 years corrected age (1 year post-intervention).
Objective 2: To improve child development within several areas. Child development will be assessed with a battery of measures (including "Ert Barn Vårt Samspel", the playtime/social time impression scale, the ICF-CY and a semi-structured interview with the preschool teachers). Motor skills assessment will begin at 3 months corrected age at the neonatal follow-up unit; other child outcomes will be measured at 1, 2 and 3 years corrected age, in accordance with the age range for which the assessments and questionnaires are applicable. Preschool teachers' views of their extremely preterm pupils will be collected at 2 or 3 years corrected age, depending on the preschool introduction for each specific child.
Objective 3: To improve the mental health of parents of extremely preterm infants post-discharge. Parental mental health (STAI and HADS) and views of parenthood will be assessed; parental questionnaires will be collected at baseline and when the child has reached 1, 2 and 3 years corrected age.
Objective 4: To collect parental views of the first year at home after NICU discharge with an EPT infant, in both the intervention and control groups, through semi-structured interviews focusing on the first year at home, its strengths and challenges, plus the CSQ from the intervention group, collected post-intervention at 1 year corrected age.

During the three initial home visits,
the focus is to confirm the child's competencies, pointing to the child's capacity to self-regulate, the child's individual temperament, and early communication. Infant behaviour is categorized into one of three levels of stability, each with a different optimal parental response: red-labelled stress behaviour, in which the proper parental response is to offer calmness, comfort and safety; yellow-labelled concentration/coping and calming behaviour, in which the adjusted parental response would be to respect the infant's need for a break or co-regulate him/her; or green-labelled approaching behaviour, in which the child should be offered new information or stimuli to develop further. The interventionist pays extra attention to the child's arousal level, naming the child's current tiredness or alertness with the correct term according to Als' original definitions [57], and upon specific parental inquiry, generally informing parents about preterm children's tendency to be fussy and discontent in an intermediate stage of sleep and awareness, before more distinct and stable levels of arousal are developed. All visits are scheduled according to the manual in chronological time +/− 3 weeks, to individualize support to the family's needs and wishes. During home visits 4-8, the interventionist, step-by-step and always with the utmost respect for the child's level of development, will help the parent find suitable objects/toys at home for the infant to examine with the mouth, hands and body, as well as confirm the infant's abilities and give suggestions for the stimulation of further development in the infant's interaction with the parent. The logbook will now also contain suggestions for supporting the next developmental step, which will be formulated by the interventionist together with the parent.
When the child is 12 months corrected age, the interventionist makes the 9th and final home visit, emphasizing the child's progress during the past year, looking through the logbook with the parent(s), summarizing the past year and talking about the next developmental step for the future. The logbooks are not used for research, but only for individual parental development. Since the intervention is interactive and relation-based, there is an ethical as well as a pragmatic need for a clear finishing phase that encourages parents' future use of the intervention strategies, for which the logbook is a useful tool. Further programme features are provided in Fig. 2.
Control group (CG)
The control group will receive treatment as usual (TAU). TAU in Sweden for EPT-born children consists of home-care nursing visits for as long as the infant is tube-fed or in need of extra oxygen supply. The recommended basic follow-up for high-risk children in Sweden includes a standardised doctor's examination at full term, hearing screening and an ophthalmologist assessment. At 3 months CA, a physiotherapist and paediatrician assess the infant's early motor development and neurological progress. Additional follow-up visits are common during the first year upon clinical indication. At 1 year CA, the child re-visits the paediatrician and physiotherapist for further motor and neurological assessment. Throughout the care chain, the paediatrician may refer the patient to a neurologist, pulmonologist, gastroenterologist or child habilitation centre if indicated. At 2 and 5.5 years of age, the child is assessed by a psychologist, paediatrician and physiotherapist. The psychologist assesses the child's cognitive level and screens for communicative and behavioural problems. The neonatal follow-up team collaborates closely with a speech and language therapist, an occupational therapist and a dietician, all of whom may join the team assessment if necessary. For the SPIBI control group, the recruitment process implies approximately three coordinator visits, four baseline parental questionnaires and one extra child physiotherapy assessment. All study participants, control and intervention children alike, will receive an extended follow-up programme, with additional assessments and questionnaires at term age and at 3, 12, 24 and 36 months corrected age. In addition, all EPT children in Stockholm are offered a standard follow-up programme and will be referred to specialized care when needed.
Procedures to implement the intervention
The six interventionists (see Table 3) have professionally diverse backgrounds with several years of neonatal unit experience and have been carefully selected by the research team. The SPIBI training was conducted 1 day per week from October 2017 to October 2018, consisting of theoretical lectures, practical intervention-focused days, and at least six home visits to four different preterm-born children for every interventionist, including subsequent supervision. Each home visit was video-recorded and was then analysed and discussed during supervision. The theoretical lectures were given by Swedish and international researchers and clinicians specializing in fine and gross motor skills development, the cognitive development of preterm children, brain development, the NIDCAP, attachment, parental perspectives, early interventions, special education, early intervention for children with autism, parental mental health and play.
Trial design
The SPIBI trial is a two-arm randomized trial with four recruiting sites in Stockholm. The intervention group (IG) receives 10 visits and two telephone calls from a specially trained interventionist (see Table 3). The focus of the intervention is providing strengths-based support of the parent-child interaction, sensitizing parents to infant cues, helping the parent give optimal developmental support to the infant and enhancing the infant's self-regulating skills. All extremely preterm children in Stockholm are routinely offered an extensive follow-up test programme, and SPIBI participants are subject to additional assessments at 3, 12 and 24 months corrected age. Additionally, the children's preschool teachers will be interviewed when the children reach 36 months corrected age. Control participants will have an additional meeting with the project coordinator when they are informed that they have been allocated to the control group, at which information will be provided about the discharge process, their child's behavioural cues and the importance of parent-child interaction at home. The trial began on 1 September 2018, and recruitment is anticipated to end on 31 August 2020, or at a later date once the target of 130 participants is reached. The intervention will continue for 1 year after the last participant has been included.
Study setting
The study will be conducted mainly in the participants' home environment, except for the first visit, which is intended to take place in the hospital setting before discharge, where applicable.
Sample size and statistical power
The hospitals in Stockholm treat more than 100 extremely preterm infants every year, but several of them are not residents of Stockholm County, which is a prerequisite for study inclusion. The study team is prepared to recruit 130 participants, half of whom will be randomized to the SPIBI intervention. The sample size is based on feasibility, and the assumption is that the effect size of the intervention on the primary outcome measure, the Emotional Availability Scales (EAS), will be moderate, i.e., Cohen's d = 0.5. This is largely in line with the results of Flierman et al. [50] for the sensitivity scale of the EAS in the previously mentioned Dutch trial. Hence, we aim to recruit 130 participants, which gives us a power of 0.8 given a normal distribution and an alpha value of 0.05.
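As a cross-check, the power calculation described above (Cohen's d = 0.5, alpha = 0.05, power = 0.8) can be reproduced with standard software. The snippet below is an illustrative recomputation assuming a two-sided independent-samples t-test; it is not the protocol's actual code.

```python
from statsmodels.stats.power import TTestIndPower

# Two-sided independent-samples t-test, as implied by a two-arm parallel design
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"required n per group: {n_per_group:.1f}")  # ~63.8 -> 64 per group
print(f"total sample: {2 * round(n_per_group)}")   # 128, consistent with the target of 130
```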
Participant inclusion and exclusion criteria
Parents of all EPT children residing in Stockholm County who meet the inclusion criteria will be approached by the end of their child's hospital stay. Inclusion criteria are that the child was born before 28 gestational weeks (GW), is currently in a stable medical condition, and is therefore close to hospital discharge from one of the four neonatal units in Stockholm, and that the parents speak Swedish or English.
Recruitment and randomization

Recruitment
The PhD student working with the project spends 8-20 h per week as a project coordinator (see Table 3), visiting the four neonatal units and two medical child wards to identify families eligible for recruitment. The standard procedure for recruitment includes three visits. The first visit takes place during GW 32-36 and aims to provide initial information about the discharge process in general, including the neonatal follow-up programme and the SPIBI project in particular. This initial visit is only implemented if the nurse or neonatologist in charge of the child considers the patient to be medically stable. Two days to three weeks later, a second visit takes place, during which parental questions are answered and the intervention programme as well as the conditions for participating in an RCT are explained in detail. It is stressed that research participation is voluntary and that the family may withdraw from the project at any time with no further consequences. Participating parents are given a three-page information sheet and a consent form to sign, as well as four baseline assessment questionnaires. During the third visit, informed consent and baseline questionnaires are collected, and information about the project is repeated if necessary. The participant is then randomized. If assigned to the control group, a fourth visit is needed to explain this circumstance and the fact that the child and parents are now part of an extended follow-up starting at 3 months corrected age, and that at 1 year corrected age the PhD student will see the family again for an assessment and a follow-up interview. If the participant is assigned to the intervention group, the assigned interventionist will visit the family as soon as possible.
Randomization
The Professor Emerita of Psychology (see Table 3), who is not involved in recruitment, has block-randomized 130 participants using an Internet-based random generator (http://www.randomization.com). The instructions given to the random generator are stored separately, and the procedure will be cross-checked when all participants have been randomized. All families agreeing to participate in the SPIBI are assigned a serial number, 1-130, in chronological order from the date on which they signed the informed consent, and this information is stored in a safe locker separate from the baseline questionnaires. An overview of the flow chart of the study, including the recruitment process, may be found in Fig. 3.
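The protocol does not state the block size used by the online generator, so the sketch below illustrates permuted-block randomization generically, assuming blocks of four and 1:1 allocation; the seed and function are hypothetical, and this is not the study's actual allocation list.

```python
import random

def permuted_block_allocation(n_participants, block_size=4, arms=("IG", "CG"), seed=2018):
    """Generate a 1:1 permuted-block allocation list for n_participants."""
    assert block_size % len(arms) == 0
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_participants:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)  # permute arm order within each block
        allocation.extend(block)
    return allocation[:n_participants]

# Serial numbers 1-130 are assigned in order of informed consent
alloc = permuted_block_allocation(130)
print(alloc[:8], "... IG:", alloc.count("IG"), "CG:", alloc.count("CG"))
```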
Fidelity check
Following each home visit, the interventionist completes a fidelity check consisting of seven questions plus a self-evaluation, on a scale from 1 to 10, of her faithfulness to the manual during the home visit.
Outcome evaluation
The contents of the outcome measures are threefold: parent-child interaction concerning emotional availability, the child's development and parental mental health.
Parental satisfaction
Since the Swedish Preterm Federation has expressed a need for a post-discharge programme and thus stimulated the development of the SPIBI, the views of the participating parents on the benefits and weaknesses of the intervention are of particular importance. All parents will be asked to participate in a semi-structured interview concerning parental satisfaction after the intervention, in addition to rating the intervention with the Client Satisfaction Questionnaire (CSQ-8).
Outcome measurements
Since the SPIBI is a multi-professional intervention, the outcome is measured across several domains: outcomes concerning emotional availability in the dyadic relation, outcomes measuring child development and outcomes related to parental mental health and parenthood.
The primary outcome is the Emotional Availability Scales (EAS) [65], which will be used primarily at 12 months corrected age but then again at 24 months corrected age. The primary coder is blind to group allocation, whereas the second coder, who is used for interrater reliability checks in 20% of the cases, is not. The scale has four parental dimensions (sensitivity, structure, non-intrusiveness, and nonhostility) and two child dimensions (child responsiveness and child involvement). Each subscale has a maximum score of 29 and a direct score of 1-7. It is hypothesized that higher scores will be observed in the intervention group. Previous international studies have shown significant effects of parental sensitivity and structuring as well as child involvement [50], which also seem to be the subscales most indicative of maternal anxiety in the NICU [66].
The secondary outcome measurements are listed below. For measuring the cognitive, language and motor development of the children, the Bayley Scales of Infant and Toddler Development, Third Edition (BSID-III) [67], will be used at 24 months corrected age. Composite scores are standardized to a mean (SD) of 100 (15), based on age-matched normative data. The secondary hypothesis is that the mean will be higher in the intervention group.
The child's executive function at 24 and 36 months corrected age will be measured with the Behaviour Rating of Executive Function, Parental Version (BRIEF-P) [68,69]. All 5 subscales (inhibit, shift, emotional control, working memory and plan/organize) will be used, and the hypothesis is that the intervention group will have fewer executive problems reported.
Child's motor development will be measured by the Alberta Infant Motor Scale (AIMS) [70,71] at 3 and 12 months corrected age. The range of the AIMS is 0-58 points, with a hypothesis of higher scores for the intervention group.
Parental depression will be measured by the Hospital Anxiety and Depression Scale (HADS) [72,73] at term age and at 12, 24 and 36 months corrected age. On the HADS, both the depression and anxiety subscales range from 0 to 21. It is hypothesized that lower scores will be observed for the parents of the children in the intervention group.
Parental anxiety will be measured by the State-Trait Anxiety Inventory (STAI) [74] at term age and at 12, 24 and 36 months corrected age. Both the state and trait scales have a maximum score of 80 points. It is hypothesized that lower scores will be observed for the parents of the children in the intervention group.
Parental self-efficacy will be measured by the Parental Self-Efficacy Scale (PSE) [75] at term age and at 12, 24 and 36 months corrected age. The PSE has 24 items for children at term age and at 12 and 24 months corrected age, while it has 48 items for children of older ages. All items are rated on a 0-10 scale. It is hypothesized that higher scores will be observed for the intervention group at 12, 24 and 36 months corrected age.
Parental resilience will be measured by the Resilience Scale (RES) [76,77] at term age and at 12, 24 and 36 months corrected age. The RES is a 25-item scale with a 7-point Likert scale. It is hypothesized that higher scores will be observed in the intervention group.
The other outcome measurements are listed below. The Hammersmith Neonatal Neurological Examination (HNNE) is used at term age as a baseline measurement. Post-discharge neurological development will be assessed with the Hammersmith Infant Neurological Examination (HINE) [78][79][80] at 3, 12 and 24 months corrected age. It is hypothesized that higher scores will be observed for the intervention group. The HINE can be used for infants 2-24 months of age and has optimal scores as well as cut-off values for less favourable future motor outcomes. It includes 26 items assessing posture, movements, muscle tone, and cranial nerve reflexes and reactions, with a score range of 0-3 for each item, yielding a possible total of 0-78 points.
Children's motor development will be measured using the Peabody Developmental Motor Scales (PDMS) [81] at 12 months corrected age. The ranges for each subscale are as follows: stationary, 0-42; locomotion, 0-138; object manipulation, 0-30; grasping, 0-44; and visual-motor integration, 0-113. It is hypothesized that higher scores will be observed for the intervention group.
Children's general development will be measured using the Ages and Stages Questionnaire (ASQ-R) [85][86][87] at 12, 24 and 36 months corrected age. All five subscales (communication, gross motor, fine motor, problem solving, personal-social) will be used, with an overall score ranging from 0 to 300. It is hypothesized that the parents of the children in the intervention group will score their children higher.
Children's strengths and difficulties will be measured using the Strengths and Difficulties Questionnaire (SDQ) [88,89] at 24 and 36 months corrected age. The SDQ consists of 25 items on a 3-point scale: 5 items on prosocial behaviour and 20 items about various difficulties. It is hypothesized that parents in the intervention group will report more strengths and fewer difficulties, i.e., higher scores on prosocial behaviour and lower scores on the problematic subscales.
Child's autistic symptoms will be measured using the Modified Checklist for Autism in Toddlers (M-CHAT) [90] at 24 months corrected age. The scale ranges from 0 to 20 points, and it is hypothesized that lower scores will be observed for the intervention group.
Infant temperament is measured using the Infant Behaviour Questionnaire (IBQ-R) [91,92] at 12 months corrected age. The IBQ-R consists of 37 items on a 7-point scale, and it is hypothesized that less problematic behaviour will be observed in the intervention group, i.e., higher scores on the smiling, laughter and soothability subscales and lower scores on the fear and distress-to-limitations subscales.
Parental satisfaction with the intervention is measured using the Client Satisfaction Questionnaire (CSQ-8) [93] and a semi-structured interview at 12 months corrected age. The CSQ-8 has 8 items and a range of 8-48 points.
Preschool educators' views of the children's engagement in preschool are measured using the Child Engagement Questionnaire (CEQ) [94,95] at 24 and 36 months corrected age. The Swedish version of the CEQ has 29 items rated on a 4-point scale, and the summary score may range from 29 to 116, with higher scores indicating more positive engagement.
Preschool educators' views of the children's interaction in preschool will be measured using the Swedish questionnaire Ert Barn Vårt Samspel (EBVS) [94] at 24 and 36 months corrected age. The questionnaire has 36 items rated on a 5-point scale, and the summary score may range from 36 to 180, with higher scores indicating more interactive behaviour.
Preschool educators' views of the children's playtime in preschool will be measured using the play time/social time teacher impression scale [96,97] at 24 and 36 months corrected age. The teacher impression scale has 16 items rated on a 1-5 Likert scale (overall score range: 16-80), with higher scores indicating more social skills and play behaviour. It is hypothesized that higher scores will be observed for the intervention group.
Preschool educators' views of the children in preschool will be captured using a semi-structured preschool teacher interview at 24 or 36 months corrected age, depending on when the child has entered preschool.
Preschool educators' views of the children's level of functioning in preschool will be captured with the ICF-CY core sets [94] at 24 and 36 months corrected age. The ICF-CY has 12 items on body functions (rated 0-9) and 22 items on activities and participation (rated 0-9); higher scores indicate disability or developmental delay. Twenty items covering environmental factors (between +4 and +1 for facilitators; 0-9 for barriers) are included to identify possible disability and environmental moderators.
Statistical analysis plan
Data will be analysed using intention-to-treat analysis for the primary outcome and separate testing with multiplicity adjustments for the secondary outcomes. Data will be analysed using SPSS version 25 (IBM, New York) and reported according to the CONSORT statement for RCTs. Descriptive statistics will be presented as percentages for categorical variables and as means and SDs for continuous variables such as age, or as centiles if the data are skewed. Comparisons between the two groups will be performed with the Mann-Whitney U test for independent samples. Subgroup analyses will also be used to detect outcome differences according to the total number of home visits, additional medical diagnoses and whether one or two parents participated in the intervention. The level of significance is specified at 0.05. To account for repeated measures, to model within-subject variance and to handle correlated data for continuous variables, a linear mixed model will be used. An interaction term will be introduced in the model to examine heterogeneity of effects. For binary and ordinal outcome variables, Generalized Estimating Equations (GEE) will be employed. For the main analysis, no missing data will be imputed. However, classical multiple imputation methods will be used for an additional sensitivity analysis if any of the included variables has more than 5% missing observations. GEE produces unbiased estimates under the assumption that missing observations are missing completely at random; an amended approach of weighted GEE will be employed if this assumption does not hold. We will perform residual analysis to assess model assumptions and goodness of fit.
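As a rough illustration of the modelling strategy described above, the sketch below fits a random-intercept linear mixed model with a group-by-time interaction, plus a GEE with an exchangeable working correlation for a binary outcome. The data frame and all variable names (subject_id, group, time_months, eas_total, binary_outcome) are synthetic placeholders, not the study's actual variables, and the protocol specifies SPSS rather than Python.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic long-format data: one row per subject per assessment wave
rng = np.random.default_rng(0)
n, waves = 130, [12, 24]
df = pd.DataFrame({
    "subject_id": np.repeat(np.arange(n), len(waves)),
    "group": np.repeat(rng.choice(["IG", "CG"], size=n), len(waves)),
    "time_months": np.tile(waves, n),
})
df["eas_total"] = (70 + 3 * (df["group"] == "IG") + 0.1 * df["time_months"]
                   + rng.normal(0, 5, size=len(df)))
df["binary_outcome"] = rng.integers(0, 2, size=len(df))

# Linear mixed model: random intercept per subject,
# group-by-time interaction to examine heterogeneity of effects
lmm = smf.mixedlm("eas_total ~ group * time_months", df,
                  groups=df["subject_id"]).fit()
print(lmm.summary())

# GEE for a binary outcome with an exchangeable working correlation
gee = smf.gee("binary_outcome ~ group * time_months", "subject_id", df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.summary())
```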
Generalized linear models will be used for non-repeated outcome variables, with regression models specified according to the distributions of the binary and ordinal dependent variables.
The Bonferroni method will be used to appropriately adjust the overall level of significance for multiple comparisons. Count variables will be analysed using the Poisson regression model. Qualitative variables from the semi-structured interviews will be analysed for their thematic content. The Pearson chi-square test will be used to detect associations between categorical variables, and the mean and standard deviation will be presented for normally distributed continuous variables.
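The Bonferroni adjustment itself is simple: the overall alpha is divided by the number of comparisons. A minimal example with made-up p-values:

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical p-values from the secondary outcome comparisons
pvals = [0.004, 0.03, 0.20, 0.01, 0.65]
reject, p_adj, _, alpha_bonf = multipletests(pvals, alpha=0.05, method="bonferroni")
print("per-test alpha:", alpha_bonf)   # 0.05 / 5 = 0.01
print("adjusted p-values:", p_adj.round(3))
print("reject H0:", reject)
```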
Ethics and dissemination
The study was approved by the Regional Ethical Review Board in Stockholm (ref. 2017/1596-31). Since the SPIBI is an RCT, half of the participants will be randomized to the control condition, which entails an extended follow-up programme, as well as inquiries about parental mental health and resilience. To some, such questions may feel intrusive; on the other hand, answering them truthfully may open a possible channel of support and, if needed, a referral to professional help. The same clinically oriented approach will apply to developmental assessments of the children and the parent-child interactions.
In case the intervention does not have any statistically significant effects, it may be argued that all the time spent on the intervention was useless and could have been much more wisely spent on the parent-child dyad. However, the fact that the intervention is strengths-based is an ethical advantage even in the absence of the hypothesized effect. The research group is aware that ethical questions may arise at any time during the project and are prepared to identify and resolve them.
If the intervention has positive effects, there may be an ethical dilemma concerning the children in the control group, who will not benefit from the intervention. Since the intervention is age-specific, a wait-list design is not applicable. However, the control group will receive an extended follow-up programme that is intended to give the participants an extra sense of care and an opportunity for further referrals if needed.
The results will be disseminated through academic journals and presentations at research conferences. Since the research group consists of professionals from different parts of the Stockholm healthcare system, the results will easily reach clinical practitioners of neonatal, physiotherapeutic and child psychiatric care in Stockholm.
Discussion
The SPIBI is an ongoing randomized controlled study, with an anticipated date for the cessation of recruitment of 31 August 2020. This protocol supports transparent research processes that facilitate scrutiny and replication. It is also intended to be shared with different healthcare professionals throughout the EPT care chain, making a unified approach across specialties possible. If we can show that this post-discharge early intervention in the EPT group affects parent-child interactions, child development and/or parental mental health in a positive way, this kind of programme could be introduced at a national or even cross-national level.
The EPT infant is often referred to as the most vulnerable patient in the hospital due to these infants' immature bodies in general and sensitive brains in particular [98]. Brain plasticity continues throughout life, but the brain of the newborn infant is in an unceasing process of development and is utterly sensitive to disturbances. The sensitivity of the newborn brain poses a great potential risk when the EPT infant must live through stressful and painful medical procedures at the beginning of life, but the plasticity of the very same brain may potentially make it possible for these infants to experience the large positive effects of early interventions in the long run. Several researchers have argued that "[t]here is evidence that intervention in the earliest years of life provides the greatest social and economic benefits to the individual, their family and the wider community" [99]. Hence, the first year at home is an optimal time for early intervention in the EPT population.
One strength of the intervention is that its focus is three-sided, as the SPIBI aims to affect parent-child interactions, the individual child and the parents. The importance of reducing parental depression as well as general parental stress to benefit the development of the child cannot be overstated [37,38,100,101]. The severity of prematurity outcomes has been shown to affect maternal well-being during the first year [102] and later in life [103].
A further strength of the trial is its multidisciplinary foundation, in terms of both researchers and interventionists. Since the risks of extremely preterm birth affect several parts of the child's future development, relating to, e.g., cognitive, motor, social, psychiatric and academic areas, a broad approach to intervention makes sense. At a national level and for several years, the Swedish government has published reports with clear demands for increased cooperation among different healthcare professions [104], but such initiatives are still rare. There are several medical, psychological and economic reasons for this international and national focus on multidisciplinary teamwork. Two of the main medical reasons are that the patient is a whole organism, not separable into subsystems, and that there is evidence that the psychological and social circumstances of a preterm child will affect his or her general long-term outcomes [11,45,[105][106][107]. There are constant economic pressures on today's healthcare, and with a growing population and increased survival rates of EPT children, it is no longer sustainable to divide care efforts, which leaves a growing number of families bewildered and insecure when healthcare providers give sometimes contradictory advice. A French review of cost-of-illness studies on prematurity concluded that the cost of extreme prematurity is 100,000 USD per child [108], which may suggest a need for cost-effective early interventions in this group.
Although Sweden has active neonatal care with early interventions initiated during the hospital stay and a world-renowned follow-up assessment programme [98,109], no systematic post-discharge interventions have been implemented thus far. Until recently, the exclusive focus of neonatal care was survival, but with increasing survival rates as well as the national decision to save even more immature infants [16], the need to support development as well as parental mental health can no longer be overlooked. In conclusion, if the SPIBI shows positive effects on parent-child interaction, child development and/or parental mental health, there are child-, family- and society-based arguments for its implementation in clinical practice. However, even non-significant results can be of interest, since the first year at home for preterm children and their parents is an under-researched area in Sweden due to the previous focus on the NICU stay and discharge process. Last visit: 31 August 2021 (anticipated), or 1 year after the last participant has been recruited.
Summary results
No results yet.

IPD sharing statement
Undecided.

Acknowledgements
The authors thank the university for organizing the interventionist education and training program of SPIBI. The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Authors' contributions EB, MWA, KL, ACS, BW and UÅ conceived and designed the intervention as well as the trial and obtained funding for the trial. EB recruited participants and coordinated implementation. EB and KL supervised the implementation of the intervention. EB drafted the manuscript; thereafter, MWA, KL, ACS, BW and UÅ made substantial contributions to the writing of the manuscript and provided major critical revisions. EB, MWA, KL, ACS, BW and UÅ approved the manuscript for submission and have all agreed to be personally responsible for the accuracy and integrity of all parts of this work. None of the funders is in any way involved in the study as such, or in the design of the study or the collection, analysis or interpretation of data. None of the funders is in any way involved in the writing of this manuscript, and none has a commercial interest in the study outcome. The research group, a collaboration between Stockholm University and Karolinska Institutet, will own the data and will be responsible for personal data protection, as well as data accessibility for research purposes, in accordance with the GDPR of the European Union. Open access funding provided by Stockholm University.
Availability of data and materials
The SPIBI research team may share aggregated data upon reasonable request. The manual for the intervention is free for any other research team to use after information and education from the SPIBI research team.
Ethics approval and consent to participate The project was approved by the Regional Ethical Review Board in Stockholm (ref. 2017/1596-31). A three-page information sheet was given to all participants, and three different consent forms were used: one general consent form before randomization, and two different consent forms for parents in the intervention and control groups at 1 year CA concerning the semi-structured interview. Furthermore, a consent form was given to school heads and teachers at the preschool concerning the questionnaires about the children and the semi-structured interview about the children's progress in preschool.
Consent for publication
Not applicable.
Dilemmas in Military Medical Ethics: A Call for Conceptual Clarity
Despite the increase in and evolving nature of armed conflicts, the ethical issues faced by military physicians working in such contexts are still rarely examined in the bioethics literature. Military physicians are members of the military, even if they are non-combatants; and their role is one of healer but also sometimes humanitarian. Some scholars wonder about the moral compatibility of being both a physician and soldier. The ethical conflicts raised in the literature regarding military physicians can be organized into three main perspectives: 1) moral problems in military medicine are particular because of the difficulty of meeting the requirements of traditional bioethical principles; 2) medical codes of ethics and international laws are not well adapted to or are too restrictive for a military context; and 3) physicians are social actors who should either be pacifists, defenders of human rights, politically neutral or promoters of peace. A review of the diverse dilemmas faced by military physicians shows that these differ substantially by level (micro, meso, macro), context and the actors involved, and that they go beyond issues of patient interests. Like medicine in general, military medicine is complex and touches on potentially contested views of the roles and obligations of the physician. Greater conceptual clarity is thus needed in discussions about military medical ethics.
Introduction
Western militaries are now strongly mobilized by threats and conflicts. The tension between the great powers, characteristic of the Cold War, has been replaced by a proliferation of intrastate conflicts and terrorist acts; while the conflicts have changed in nature, security remains at the heart of Western foreign policy. The work of health professionals in such conflicts has garnered significant public and academic interest. For example, in the United States, Miles [1] denounced the involvement of some physicians in hostile interrogations of prisoners, while Allhoff [2] raised questions about the ethical duties and obligations of medical officers.
The literature on military medicine shows diverse points of view regarding potential ethical tensions or dilemmas. For some, military physicians are first and foremost health professionals and so should only consider their patient's welfare in their decision-making; i.e., they should give primacy to traditional bioethics principles (e.g., autonomy, beneficence, non-maleficence, justice) over other values [3,4]. For others, these principles are difficult to apply in a context where collective needs - namely security issues and scarce resources - must be addressed [5][6][7]. Some scholars consider the main ethical challenge to be that military physicians have dual loyalties, having obligations both towards their patients and their employers [2,[8][9][10]. Military physicians are members of two professions, each with its own distinct (and potentially conflicting) ethical codes, norms and values.
Other scholars raise questions about a physician's legitimate role in war, asserting that as medicine is a pacifist and apolitical profession, physicians should only be involved in providing care and should ideally promote peace [11,12]. Finally, some even consider the military profession to be so different from that of healthcare that the two are seen as totally morally incompatible [11].
Military medical ethics is a subject that highlights a series of concepts that are often studied in theoretical and isolated terms, such as double/dual loyalties, medicine as a profession, public health policy, and the responsibility of healthcare professionals towards society. One of the main problems in the literature regarding military medical ethics, I suggest, is that questions and dilemmas are often examined independently - as if they were not in fact interrelated - and as resulting from opposition with a military perspective. For example, dilemmas for physicians in the military are thought to come mainly from the tension between military and patient interests, where such opposition diverts military physicians' focus away from patient interests and towards issues such as military necessity. However, the concept of military necessity - an important principle in international humanitarian law that aims to circumscribe the legal and justified use of force in armed conflict and one that evolves in accordance with human rights and humanitarian values [13] - is rarely defined by scholars and is often treated as synonymous with military interest. Further, stereotypical judgments are often implied with regards to medicine, which tends to be idealized, while the military is presented mostly in negative terms and as having values opposite to those of physicians or with antiwar sentiments [14,15]. Even more problematic is the fact that some of the issues or dilemmas discussed in the literature differ in important ways with regards to the perspective of the actors involved, and thus the level of ethical analysis that is required: individual (micro), institutional (meso), or social/political (macro). So, for example, ethical dilemmas can be examined either through the lens of the physician-patient relationship; or via considerations of deontological issues, such as conflicts between different professional duties and obligations (professional codes of ethics or international humanitarian laws); or through discussion of broader dilemmas concerning the physician's role towards society in general and in war in particular. But to treat all of these issues as if they were problems of the same order or nature - and thus requiring similar ethical considerations - is to misunderstand the complexity of the context in which military physicians operate.
This article presents the main issues discussed in the military medical ethics literature and suggests that they can be grouped under three analytical perspectives that are often discussed in isolation from one another, hence neglecting important elements and concepts. The goal is to provide greater clarity regarding what should be considered when studying ethical issues in a context such as the military institution, where individual and collective interests can create important conflicts. There is clearly more to the work of a military physician than just an opposition or tension between two actors: physicians and the military institution or patient interest and the common good. Military physicians have a diversity of roles and obligations at different levels towards their patients, their employers and to the society that they serve. Military culture and physicians' values, but also professional rules and political and social choices, are all elements that combine to define or constrain ethical choices in a given context.
The analysis of ethical conflicts or dilemmas in military medicine must involve more than a micro-level perspective - i.e., that of the ethical physician - because meso- and macro-level issues and interests shape the practice and work environments of military (and other) physicians. Greater conceptual clarity is thus needed in discussions about military medical ethics in order to recognise the complexity of the relationships between the diverse stakeholders involved (i.e., physicians, patients, military institutions, and society in general). While ethical responsibility for patient care may ultimately lie on the shoulders of physicians, they are not the only or primary ethical agent in military medicine; this responsibility can and should be shared with other key stakeholders (e.g., the military institution) to ensure the practice of an ethical military medicine.
Ethical dilemmas in military medicine
Sidel and Levy [11] conducted a comprehensive review of the ethical tensions that military physicians faced in the US military. More recently, Gross [16] addressed the non-clinical challenges faced by physicians in the military, discussing the participation of physicians in the development of non-lethal weapons, in medical experimentation and research on enhancement technologies, and in humanitarian activities. Gross [16] points to the complexity of issues in military medicine, but also and perhaps more importantly, to the fact that there is a climate of mistrust between the military and the scientific community in general, and with bioethics in particular; it is as if discussing issues about the (in)appropriate involvement of physicians in military activities promoted a "misguided, wrong and dangerous" political agenda [17, p.10].
Without going into a complete description of all the ethical conflicts or dilemmas raised in the literature - each of which could be the subject of a lengthy description and analysis - I suggest a classification to show that most are generally presented from three (distinct) points of view (see Figure 1):

1. Moral problems in military medicine are particular because of the difficulty in giving primacy to traditional bioethics principles;
2. Medical codes of ethics and international laws are too restrictive and not adapted to a military context, and so do not provide the necessary guidance to make ethical decisions;
3. Physicians are social actors working within institutions, but should be pacifists, defenders of human rights, politically neutral or promoters of peace.
Obviously, the presentation of the dilemmas and discussions about the ethics of military medicine are not as sharp as the figure may suggest. But this classification allows us to see where the arguments are generally situated when we are dealing with a particular type of dilemma. Although for analytical purposes we sometimes need to limit our perspective to one element - whether clinical, deontological, social or political - it is still essential to keep in mind that all of these elements constitute the reality of military physicians. This classification shows how dilemmas can be sorted for analysis, but ultimately all of these roles and conflicts are potentially within the same individual, that is, the military physician.
The classification also shows that ethical responsibility is shared between physicians, the military institution, professional associations, and society, which through its government also defines the role and contribution of physicians in the military.
The types of dilemmas - and the different arguments and perspectives of scholars who have reflected on these situations - are situated at different levels, i.e., individual (micro), institutional (meso), or social/political (macro), and focus on different contexts and stakeholders. Moreover, they address the different roles and obligations that physicians have to play as healers, professionals, employees and social actors. Some of the issues are therapeutic in nature (care of patients), while others are more social and political (e.g., participation of physicians in the war effort or identification of who is to be categorized as a combatant). This results in ethical dilemmas often being examined either through the lens of the physician-patient relationship, which opposes the patient's welfare with military interest; or via considerations of legal issues, such as conflicts between different legal and professional duties and obligations (international humanitarian law and professional codes of ethics); or finally through discussion of broader dilemmas concerning the physician's role towards society in general. Treating all of these issues in the same way can put enormous pressure and responsibility on physicians, as if they were responsible not only for their own actions, but also for establishing an ethical climate in the institutions in which they work, and even in society more generally. Physicians certainly have a role to play in this regard, but they obviously cannot and should not bear all these responsibilities alone.
Conflict with traditional bioethics principles
The four principles of autonomy, beneficence, non-maleficence and justice have been for some time at the heart of North American bioethics. Concern for the patient's wellbeing and interests, together with the primacy of doing no harm, are seen as ethical principles that should always guide medical practice [3,4]. According to some authors, in a military context these principles are impossible to respect, making it "morally unacceptable" for a physician to be both a physician and a soldier [11]; an opposition is created between the patient's rights and the common good, with health professionals having to privilege the former over the latter. But other scholars argue that bioethics needs to find a balance between these two objectives [18], hence a move towards focusing more on issues of justice and equality [19].
Gross [5] explains that a soldier's autonomy is always limited in a military context, since the right to life, for example, is voluntarily delegated, at least in part, to the military institution by the soldier-combatant. Unlike a civilian, a soldier cannot refuse treatment or vaccination and so expects to be treated primarily with a view to return to combat. As Bloche [7, p.273] points out, "patients are themselves committed to the social purpose served by medical intervention". During armed conflict - where there are specific rules to follow, a strict hierarchy and an established chain of command - the patient-physician relationship becomes fairly paternalistic [20]. In a context where soldiers are legitimately sent into dangerous and potentially deadly situations, the principle of informed consent, which is a fundamental basis of the physician-patient relationship in Western countries, can also be difficult to apply [5,21]. For example, in the US, the requirement for informed consent can be overridden in certain combat situations by the US Department of Defense invoking Article 23(d) of the Food, Drug and Cosmetic Act [22]. This exception applies to mandatory protections of soldiers, such as vaccination - but not experimentation - and gives the military "the authority to compel service members to take certain drugs to protect themselves" [22, p.863].
Figure 1: Types of Ethical Dilemmas in Military Medicine
In the same way, the principles of privacy and confidentiality within the physician-patient relationship, closely linked to respect for autonomy, also become more problematic in the military context. For example, because of the threat posed by certain patients, mainly detainees, physicians must exercise caution to protect themselves while treating and talking with patients. Hence, military personnel may be present during diagnosis and care, thus potentially breaching patient privacy and confidentiality. In the case of prisoners in the "war on terror", some military physicians even disclosed confidential medical information about prisoner-patients to US intelligence personnel [1,23]. It was also reported that injured (allied or enemy) soldiers have been filmed, without their consent, by the media or the military, either to demonstrate the horrors of war or for reasons of propaganda [24]. As Sidel and Levy note [11], there have also been problems with the management of the medical files of soldiers and non-combatants (see footnote 1). Negligence in medical record keeping was reported for prisoners in Iraq and Afghanistan, where death certificates of detainees were found to be incomplete or even falsified [25].
Such dilemmas also occur in other clinical settings, such as breaches of confidentiality in the case of child abuse or the obligatory declaration of infectious diseases; but many authors argue that the magnitude of these dilemmas is amplified in the military context, in part due to the level of risk, stress and pressure for the key actors to support military objectives [23]. According to Sidel and Levy [11], these types of dilemmas arise primarily because physicians are required to subordinate their patients' best interests and wishes to that of military necessity. Arguably, in such a context, traditional ethical principles do not provide a comprehensive or sufficient practical guide for medical personnel [5,6,26]. Within this perspective, clinical context and patient interest are the main preoccupation, and so a physician's primary role and activity are viewed as those of a healer with strictly therapeutic aims. But limiting the analysis to the clinical role with one patient disregards the fact that military physicians must also maintain a unit's health, deal with important resource limitations, and sometimes contend with national policies that affect their work.
Conflict of duties and ethical norms
Gross [5] highlights the ethical tensions that arise from the World Medical Association (WMA) declaration [27] that medical ethics is the same in peacetime as in war, and the Physician's Oath of the Declaration of Geneva [28] adopted by the WMA, which requires physicians to affirm that "the health of my patient will be my first consideration". In this case, potential tensions relate to codes of conduct that do not take into account outside pressures, both in the patient-physician relationship and between physicians and the institution (i.e., principles of necessity or military interest). From this perspective, conflict in military medicine arises from tensions between professional duties. These dilemmas are closely linked to the first category, i.e., conflict with traditional bioethics principles, and also concern the physician-patient relationship, but are generally presented with an emphasis on the relationship between physicians and their host institutions, and the fact that military physicians may have multiple obligations and ethical codes to which they should refer. Authors such as Sidel and Levy [11], Gross [5], Miles [29] and Hathout [30] argue that for reasons of military necessity, physicians may not be able to provide appropriate care or are required to engage in practices to support military actions (e.g., interrogation of detainees, development of biological weapons, or non-treatment) that are contrary to the principle of non-maleficence.
These sometimes conflicting duties and codes can create dilemmas that may be difficult for military physicians to resolve; described as an issue of 'dual loyalty' [8,9] or mixed agency [11,21], these framings highlight the tension for physicians of having to balance responsibilities to their patients and to the common good [8]. To which institution or profession do physicians owe primary allegiance? Which professional code of ethics should guide their behaviour?

Footnote 1: In 1996, the US Presidential Advisory Committee on Gulf War Veterans' Illnesses criticized the US military for poor record keeping; the Committee identified problems with missing medical records and the absence of or incomplete data on health effects for troops who received the anthrax vaccine during the first Gulf War, which subsequently affected soldiers' medical care when they experienced side effects.
Constraints to the provision of care
Article 12 of the Geneva Convention I [31] states that triage should follow emergency medical needs, whether for one's own soldiers or for enemy soldiers/combatants. Gross [5,32] argues that triage should instead be understood as a function of military necessity and thus the least injured soldiers are those who should be treated first so they can be quickly returned to battle. Some authors argue that it is normal and appropriate for armies to prioritise the treatment of their own soldiers [33]. In the US Army, the decision on how to triage is the responsibility of military commanders, after consulting with physicians [21], and so ultimately it will be the collective interest that takes precedence over individual patient interests [34].
In the Canadian Armed Forces, triage decisions are taken by a medical officer who can more freely apply medical and humanitarian ethics principles. Rawling [35] argues that the view that medical treatment of the wounded in the Canadian Armed Forces is focused primarily on return to combat is more of a cliché than an actual policy, because nothing was ever put in place to implement such a policy. In his historical review of Canadian military medicine, Rawling [35] shows that medical teams usually decided when and how they treated the wounded with little interference from military authorities. For example, of a total of 2,599 trauma patients who were seen in the Role III hospital in Kandahar, Afghanistan, between October 2009 and December 2010, almost half (1,192) were Afghan troops or civilians [36]. Rawling further argues that the practice of amputation procedures and evacuation measures during conflict, which are done to save a soldier's life even if the soldier will become unfit for combat, demonstrates that triage and care are not done strictly for reasons of returning soldiers to combat. More recently, in the context of the Afghanistan war, other authors have argued that triage was not the main ethical issue as much as the difference in standards of care between NATO and Afghan soldiers, who needed to be transferred to local hospitals after being treated for their trauma injuries [37][38][39].
Another example of dilemmas arising from conflicting duties that are discussed in the literature is when, for various reasons (e.g., because of security concerns or rules of engagement), physicians cannot provide care to local populations even if the Geneva Convention (Protocol II, article 7) stipulates that they should [40]. Physicians and other health professionals in the military can also be required to participate in "care caravans" as part of tactical agendas of "winning hearts and minds", but without being able to provide follow-up care [15].
Participation in warfare
According to Annas [4], in the US "war on terror", military physicians have faced three major challenges regarding their medical ethics, being ordered by commanding military officers to 1) assist in intense interrogations of suspected terrorists, 2) participate in the forced feeding of prisoners on hunger strike, and 3) certify, against their own medical judgement, the ability to return to combat of soldiers being redeployed to Iraq or Afghanistan. These are, for Annas, conflicts involving professional duties that are addressed in professional codes of ethics and international humanitarian law (e.g., torture and inhumane treatment, including force feeding of detainees, are deemed unethical behaviour under the WMA Declaration of Tokyo). But as Okie [41] and Gross [16] point out, not all bioethicists agree about force feeding, and some argue that a patient's best interest (i.e., access to life-saving treatment) should take precedence over respect for autonomy (i.e., right to refuse treatment).
Medical knowledge and services can also be used in the war effort by the military institution, thereby "weaponizing medicine" [42]. The WMA considers that physicians ought to be prohibited from participating in the development of lethal weapons, but although contentious, physician involvement in the development of non-lethal weapons that can augment a military's capabilities is occurring [16,43]. Questions about the role of physicians in developing strategic interrogation plans for detainees [44], non-lethal weapons to disorient the enemy [6], or tests of drugs that could genetically enhance soldiers' performance in order to inhibit fear or guilt while also reducing the occurrence of Post Traumatic Stress Disorder [43] all touch on "fidelity and medicine's social purposes" and should thus also be a matter of ethical reflection on the part of the military institution and healthcare professionals.
Limits of or contradiction between ethical codes or laws
Conflicts of obligation can be transformed into ethical conflicts when the physician faces a situation involving human rights. Indeed, in situations of armed conflict, health professionals are involved as non-combatants in caring for wounded soldiers (enemy or allied) and civilians. International humanitarian laws impose these roles and obligations on health professionals and, in return, grant them protection against attack [44]. When military physicians treat detainees who appear to have been the victims of mistreatment, as health professionals they have a duty to report these situations to their superiors and other authorities as cases of human rights abuse; i.e., Article 3 of the WMA Declaration of Tokyo 1979 stipulates that physicians have to report a breach of the Geneva Conventions. This requirement to report human rights abuse can create conflict between the physician and the military institution [40].
The Universal Declaration of Human Rights (UDHR) [45] provides a very broad perspective with regards to health (Article 25) and the right to life and security (Article 3), which creates expectations about the ethical treatment of civilians, soldiers and prisoners of war. Indeed, the UDHR makes life, liberty and security fundamental rights of individuals, along with a right to a minimum standard of living for a person's health and well being. These rights are to be maintained during armed conflict and go hand in hand with other laws regulating conflict. As noted by the United Nations:

"As a result of efforts to ensure effective protection for the rights of all persons in situations of armed conflict, a number of United Nations bodies and organizations, human rights special mechanisms, as well as international and regional courts, have in practice increasingly applied obligations of international human rights law and international humanitarian law in a complementary and mutually reinforcing manner." [46, p.118]

So in some cases, military physicians may refer to international humanitarian law when refusing to obey what they consider to be an illegal order, or one that goes against their professional medical and ethical obligations. Physicians may thus already be equipped to resolve some dilemmas, particularly those with regards to abuse, torture or involvement in developing biological weapons, since these are illegal acts under international law.
Yet, legal rules (codes of ethics, international humanitarian laws, etc.) do not always provide clear answers to physicians, and may even be contradictory. Military physicians can be caught between considerations of military interest, human rights, medical ethics principles and international rules, and a patient's well being. In the case of mass casualty situations, for example, decisions cannot simply be based on the ethical principle of non-discrimination of care as stipulated by medical codes of ethics. Medical resources (personnel and equipment) are limited, so selection may not be based on priority of care but on the need to save the greatest number, which may then involve neglecting the more severely wounded. Any form of triage in such a context requires physicians to exercise their own professional ethics, and not necessarily one that is solely or primarily aligned with their medical code of ethics [47]. Professional ethical rules can then create a type of dilemma, one in which legal aspects (social) conflict with ethical aspects (moral), so that "in the conflict between ethics and law, it is the conscience of each physician who arbitrates and commands certain behaviour" (translation) [48, p.118].
International laws exist to prevent excess or misdeeds during conflicts and so protect civilian populations. But the context of contemporary military medicine has changed, not only in terms of values (protection of human rights vs. sovereignty of nations) but also because the majority of conflict victims (90%) are now civilians [49,50]. As conflicts have become increasingly complex in nature and scope, so too have the laws surrounding these conflicts. In a given situation, hundreds of legal provisions (national and international) may be relevant, only some of which will be directly applicable and able to deliver practical medico-military solutions that respond to both strategic and tactical imperatives while also being consistent with general principles of the laws of war or armed conflict; mere common sense or superficial knowledge of key international texts and conventions will be insufficient [48]. This perspective highlights the different obligations of military healthcare professionals, which include a concern for public health and the common good and thus often favour a utilitarian approach. Physicians' roles must be examined, not only in relation to their patients but also as employees within an organization and as professionals, both medical and military. It also shows the diversity of norms and ethical rules (and their complexity) to which physicians must refer. Within this perspective, we start to see how the military institution as well as professional associations have responsibilities in fostering an ethical climate and facilitating ethical reflection for military physicians (and health professionals more generally).
Conflict with social and political roles
The political philosophies adopted by many democracies set out principles of "just war" that give importance to collective interests, subordinate individual interests and thus create tensions between individual rights and those of society as a whole [5]. The doctrine of just war as a moral foundation or justification for conflict can be divided into two parts: jus ad bellum, which covers the right to resort to war, and jus in bello, which covers justice during war. However, in the ethics literature on military medicine, there is a tendency to conflate these two principles.
For military physicians, tensions could arise between a moral obligation to help in the war effort and the "pacifist nature" of the profession (principle of non-maleficence) [51]. For example, Article VII of the American Medical Association (AMA) Principles of Medical Ethics [52] - the preamble to the AMA Code of Medical Ethics - states that "A physician shall recognize a responsibility to participate in activities contributing to the improvement of the community and the betterment of public health." For several authors [52][53][54], war is an activity that goes against this principle; health professionals should thus act as politically neutral moral agents and refuse to tolerate or participate in wars or conflict, situations that invariably cause significant harm to population health and to the environment. Some scholars even call upon health professionals to promote peace through health activities, because peace and health are seen as motivated by the same values; health professionals should thus work together with peace workers and institutions who have the mandate to negotiate conflict resolution [56]. Specific examples include those proposed by Levy and Sidel in their book War and Public Health [54], where they apply a public health prevention model to war because war constitutes a threat to public health [57,58], and so can and should be prevented and/or treated as are other chronic illnesses [59]. Although marginal, the pacifist school argues for physician activism against war, while other scholars argue for political activism to report human rights abuses [60].
But as Gross [16, p.97] argues, "contributing to war, no less than contributing to peace, may also distance medical workers from their primary obligations." Rascona [61], for his part, argues that wars have taken place for centuries without the assistance of physicians, and their absence did not necessarily help the world to live without war. Like Gross [51], Rascona [61, p.322] questions "the notion that medical ethics may be somehow superior to (all) others." Madden and Carter [14] further argue that health professionals have, as part of the social contract that is at the base of most formal professions, a duty to "give back" to their society.
In this context, the question of physicians' legitimate participation in war arises not from their professional obligations, but rather from their civic obligations. Physicians who choose to work in the military can hardly resist or defy principles that were adopted by their own society with regards to just war and the use of force (which refers to the principle of jus ad bellum, or the right justification to go to war). As health professionals, physicians have a moral duty towards their patients. As citizens and members of the military, they also have a duty and obligation towards the society in which they live. However, their participation in war, as with all soldiers (whether or not they are combatants), is governed by the principles of jus in bello, which provide that "acts of war must be proportionate to the objectives and must respect the immunity of non-combatants, which means sparing the civilian populations" (translated) [62]. As with just war theory, which distinguishes between the right of a country to participate in war and how to act during war, it is thus important to recognise the distinction between a physician's participation in war (political neutrality or activism) and the way he or she contributes (medical neutrality), which are not equivalent [60].
Another problem for physicians relates to political decisions during conflicts. It is increasingly the case that enemy combatants are not the soldiers of an enemy state (e.g., members of a standing army), but instead are members of guerrilla, revolutionary or terrorist groups, and in some cases they are children. Are such combatants to be seen as "enemy soldiers" and thus, when captured, treated as prisoners of war and protected by the Geneva Conventions [63]? Or, following the US policy implemented by former President George W. Bush, should these combatants be treated not as soldiers, but as "unlawful combatants" and deprived of the protection they would normally receive as prisoners of war [10]?
Since Al Qaeda was not a signatory of international conventions, the US government under the Bush administration concluded that the Geneva Conventions did not apply; the principles of the Conventions only need be respected "to the extent appropriate and consistent with military necessity" [64]. While this position was hotly contested by other NATO countries involved in the war in Afghanistan [65], and later changed under the Obama administration, it nonetheless highlights the complex nature of modern conflicts and the challenges facing military physicians in trying to apply the prescriptions of international policies that were written for very different types of warfare. If the decision regarding classification of combatants is made on the advice of "political leadership rather than on objective facts, the potential for undermining the humanitarian impact of International Humanitarian Law increases dramatically" [66, p.158].
Following allegations of prisoner torture by American soldiers and the involvement of medical personnel at Guantanamo Bay, Cuba, and in Iraq and Afghanistan, in 2005 the US Defence Department changed its ethical guidelines for health professionals [1,25,67]. These new guidelines, contrary to international humanitarian laws and international or national codes of medical ethics, opened the door for psychiatrists and physicians not involved in the care of prisoners to participate in interrogations. Their involvement was justified by the absence of a clinical link with the prisoners (i.e., the physician-patient relationship). Groups of psychologists and psychiatrists were created with the explicit mandate to develop integrated strategies for interrogation by manipulating prisoners' emotions and weaknesses so that they would provide information to US intelligence services [25]; and this was justified on the basis that these health professionals were not treating patients since they had no relationship with them, but were merely acting as behavioural scientists [24]. Of course, this participation by health personnel has been denounced by many and raised questions about the participation of physicians in war in its broadest sense. But this situation also demonstrates how the notion of patient (within a physician-patient relationship) can become confused and politically charged in a given context. In addition, the fiduciary or trust nature of the physician-patient relationship can be less clear in the context of a military operation because physicians are assigned to specific units and often have little direct contact with the soldiers they are likely to treat [24].
Finally, military physicians are asked to work in different contexts such as natural disasters, humanitarian crises and security operations, sometimes side-by-side with non-governmental organisations (NGOs) and humanitarian workers. Since the end of the Cold War, conflicts have been mostly intrastate as opposed to interstate. Military operations are increasingly deployed for humanitarian reasons and for the protection of human rights as opposed to, for example, dominance over territory and resources [68]. But the objectives of such aid - and of the partners involved in delivering aid - may be very different. For example, in Afghanistan and Iraq, humanitarian aid was used by the US military to advance military goals [69]. Such situations can create confusion on the part of the public and the health professionals involved in such missions regarding objectives that may, depending on the actor involved, be development, humanitarian and/or military [70][71][72].
In sum, political decisions (from government or from health professionals) in a military context have the potential to affect the work of health professionals to the point where it can redefine their contribution and role both in the military and towards patients.
This perspective focuses on the broader context, i.e., the social and political roles of physicians within a democratic society. Arguments are made as to whether or not physicians should advocate for peace, against war, or if they have a civic duty to treat the wounded. Although international humanitarian laws seek to preserve medical neutrality and impartiality, the political motivations of governments and their interpretation of these laws can have an impact on the work of military physicians. When discussing issues at this level, one can forget the therapeutic aim of medicine, and politically justify non-therapeutic roles of physicians for military purposes. Hence, the relevance of setting the context when discussing ethical dilemmas in military medicine in order to consider the different roles of physicians (healer, employee, citizen), and the other actors involved (patient, military, government).
Conclusion
The above review of dilemmas encountered in military medicine shows that they are very often presented as being the result of real or perceived pressures from military or political interests on physicians to divert their professional duty to their patients in favour of other ethical priorities; these pressures will then inevitably create ethical conflicts and contradictions that are difficult to manage. Thus, the state is opposed to the patient in its prioritising of collective interests over individual patient benefit, and the military institution is pitted against the physician in a conflict between, on the one hand, military interest and institutional pressure to conform, and on the other, medical responsibility and autonomy. Basically, there is a tendency to oppose the physician-patient relationship with the interests of other stakeholders, mainly the military institution.
Frequently, questions and dilemmas regarding military physicians focus on one issue at a time, in parallel and segmented ways, with respect to patients, codes of ethics or political ideology. For example, if the focus is on bioethics principles, the context discussed will often be mainly clinical, patient interest will be considered first, and the physician's main role and activity will be that of a healer with strictly therapeutic aims. If, instead, the focus is on ethical codes and obligations, then the institutional and deontological contexts will be emphasised, with attention to issues of the medical profession and its values, the role of physicians as employees of an organization (i.e., dual loyalties), and often within a utilitarian framework that highlights issues of the common good. In cases where social and democratic principles and policies are put forward, the focus is generally more on the role of physicians as citizens, on collective interests and on social and political aspects of medical practice, often arguing for a liberal or communitarian philosophy of medicine. Hence the way to address ethical conflict in military medicine is often reduced to claims that physicians are virtuous professionals with values that are intrinsically good, and therefore that their profession is simply incompatible with the reality of military operations.
But as Figure 1 shows, dilemmas differ substantially by level, context and the actors involved and this should be acknowledged to better situate ethical analyses of the challenges facing military physicians. In confining the debate to an opposition between only two types of actors, much of the discourse on ethics in military medicine seems to miss more essential questions, such as: What are the common values and ethics in military medicine? What is the actual bargaining power or influence that physicians have as military officers? And what are the responsibilities of other stakeholders, namely professional associations, the military institution and even the government in supporting the military physician to be and act as a moral agent?
Not all conflicts of value or ethical dilemmas are due to pressures imposed by the institution in the name of military necessity or interest. Physicians are not, by definition, in an adversarial relationship with the military. The potential for conflict exists (and may be inevitable) between different principles, beliefs and value systems that are held by physicians, national/international professional associations, and the institutions in which physicians work [73]. But these various potential dilemmas are due to the different roles of military physicians and the fact that during an armed conflict, they act as healers, members of the military and also as citizens. Military physicians are in a position where they must have a professional responsibility both towards individuals and society [8] and thus must also attend to the common good. As such, the context in which they operate is at once clinical, institutional, social and political. In that sense, military medical ethics would benefit from more explicitly taking into account the relationship between all the actors involved, i.e., physician, patient, military institution and society, and recognize that the responsibility for ethical reflection is shared between several actors and stakeholders who are specific but also interdependent. As Gibson and Suh note [15, p.5]:

"Military medicine should be a touchstone for principled management of the convergence of healthcare, foreign affairs, and human rights protection. However, this will only happen if the role of physician moves beyond that of merely a provider of medical services, and if physicians can rise to the challenge and assume the role of providing ethical checks and balances against other military leaders. For their part, military commanders must increasingly understand the unique role that physicians and other healthcare professionals play in their units. Physicians are not riflemen who conveniently possess medical skills. They possess highly developed scientific and humanitarian assets and thus may provide a perspective on military operations that other officers cannot."
As suggested by Bloche [7], the challenge for medical ethics is not to resolve all tensions between patient welfare and collective interests, but rather to reflect on how and to what extent the social expectations or pressures placed on physicians can become part of a coherent medical ethics reflection and judgment.
Military medicine, like medicine in general, involves a diverse group of actors and interests and touches on potentially contested views of the role and obligations of physicians, that is, as a caregiver in the patient-physician relationship, but also as a professional with social and political responsibilities.
Military medicine is but one example of the larger debate in medical ethics about the appropriate roles and responsibilities of physicians in diverse workplaces [21]. As medical practice has changed over the years, so too have social priorities. For example, issues of resource limitation and physician responsibilities towards society are becoming important constraints in many contexts, not just in military medicine. These types of problems are increasingly being discussed in the public health ethics literature, and in bioethics in general. In fact, a review of the literature by Wendler [74] reveals that there are no less than 27 exceptions recognized by scholars and physicians regarding the primacy of patient's best interests with competing considerations in contemporary medical practice.
I have argued that three perspectives - i.e., individual and clinical (micro), organizational and deontological (meso), and social and political (macro) - are necessary to examine the ethical dilemmas discussed in the literature on military medicine, in order to make progress in addressing the complex reality of military physicians. A larger and more conceptually clear discussion and analysis is thus required regarding the physician's role in society in general, and in war in particular.
Recurring Glioblastoma: A Case for Reoperation?
Unlike newly diagnosed glioblastoma, no clear or widely accepted standard of care is available for patients with a recurrence. A purely radiological diagnosis of recurrence or progression can be hampered by flaws induced by pseudoprogression, pseudoresponse, or radionecrosis. Based on parameters like tumor location and volume, patient’s performance status, time from initial diagnosis, and availability of alternative salvage therapies, reoperation can be considered as a treatment option to extend the overall survival and quality of life of the patient. The achieved extent of resection of the relapsed tumor—especially with the intention of having a safe, complete resection of the enhancing tumor—most likely plays a crucial role in the ultimate outcome and prognosis of the patient, regardless of other modes of treatment. Validated scores to predict the prognosis after reoperation of a patient with a recurrent glioblastoma can help to select suitable candidates for surgery. Safety issues and complication avoidance are pivotal to maximally preserve the patient’s quality of life. Besides a possible direct oncological effect, resampling of the recurrent tumor with detailed pathological and molecular analysis might have an impact on the development, testing, and validation of new salvage therapies.
Introduction
Maximal safe debulking surgery is well accepted as the mainstay treatment for newly diagnosed glioblastoma (GBM), and postoperative radiochemotherapy was established in 2005 as the standard of care (SOC) by a pivotal phase 3 randomized trial by the European Organisation for Research and Treatment of Cancer (EORTC) and the National Cancer Institute of Canada Clinical Trials Group (NCIC) (1,2). According to this trial, adult patients, up to the age of 70, with newly diagnosed GBM are treated with 6 weeks of radiotherapy with concomitant temozolomide chemotherapy, followed by six cycles of adjuvant temozolomide. However, despite multimodal therapy, prognosis for GBM patients remains poor, with a median progression-free survival (PFS) of only 6.9 months, median overall survival (OS) of 14.6 months, and a 5-year survival rate of 9.8%. The low PFS is also reflected in the fact that fewer than 50% of patients completed the six cycles of adjuvant temozolomide in the EORTC-NCIC trial.
Notwithstanding intense preclinical research and clinical trials, standard therapy has not changed over the past decade. New agents with promising results in phase 1 and/or phase 2 trials, for example the vascular endothelial growth factor-A (VEGF-A) inhibitor bevacizumab or the integrin inhibitor cilengitide, failed to improve survival in randomized phase 3 trials (3,4). Moreover, in an effort to optimize the current chemotherapy, a dose-dense schedule of adjuvant temozolomide did not lead to improved survival (5). Recurrence, i.e., regrowth of tumor after a period of complete remission or stable disease, is universal. Unlike the well-defined treatment schedule in the newly diagnosed setting, no standard therapy exists for recurrent GBM. Treatment options in the recurrent setting include reoperation, re-irradiation, temozolomide rechallenge or nitrosourea chemotherapy (e.g., lomustine [CCNU]), bevacizumab, or combinations of therapies (6). Given the absence of a SOC, inclusion in clinical trials is an option upon recurrence. Whichever therapy is given, prognosis at recurrence is grim, with median survival in recent years estimated to be about 9 months and only one-third of patients alive after 1 year (7). Eventually, GBM will recur and lead to progressive neurological deterioration and death. Preserving quality of life (QoL) for as long as possible therefore becomes a priority in this palliative oncological setting.
Radiological Diagnosis of a Recurrence in Clinical Practice
During follow-up of GBM patients, most oncologists will perform an MRI scan every 2-3 months, or earlier upon clinical deterioration (8). This regular MRI scan will detect many recurrences in the early phase, often in asymptomatic patients. However, interpretation of these follow-up MRI scans can be challenging in the context of possible appearance of contrast enhancement due to radionecrosis or pseudoprogression in patients treated with radiotherapy and chemotherapy. Pseudoprogression is thought to occur in up to 50% of patients during the first 3-6 months after radiotherapy, whereas radionecrosis can occur up to several years after treatment and does not regress spontaneously (9).
As many as 15% of samples after reoperation showed only radionecrosis but no viable tumor in a series by Azoulay et al. (10). Moreover, bevacizumab, which is often used to treat recurrent GBM, compromises interpretation of follow-up MRIs as it normalizes leaky tumor vasculature and hence decreases T1 gadolinium enhancement and peritumoral edema (11,12), sometimes resulting in only a pseudoresponse. To assess progressive disease, it is therefore recommended to use the recent Response Assessment in Neuro-Oncology (RANO) criteria, which include evaluation of corticosteroid use, T2/FLAIR images, and restricted parameters to determine progressive disease during the first 3 months after radiochemotherapy, instead of the classical Macdonald criteria (13).
When there is a clear relapse or high suspicion of a (symptomatic) recurrence for which new treatment has to be initiated, a neurosurgeon should always be consulted to assess whether the patient is suitable for a repeat surgery. In general, it is estimated that only about 25% of patients can be considered for repeat surgery (6). Certainly, in cases in which clinical symptoms are due to mass effect, surgery remains the only treatment strategy that can drastically and rapidly decrease tumor load and the associated symptoms. This can alleviate symptoms such as headache and (more rapidly) reduce the need for steroids to decrease peritumoral edema (14,15). On the other hand, a reoperation exposes patients to a risk of new temporary or permanent neurological deficits, general surgical and/or anesthesiological risks, and, at least temporarily, exclusion from other second-line treatments. Moreover, the oncological effect remains controversial (16).
Most recurrences appear locally in or close to the resection cavity of the first surgery (14). In a study by Brandes et al. on 79 patients with a recurrent GBM after initial treatment with standard therapy, almost 80% of recurrences occurred inside or at the margin of the radiotherapy field, where radiotherapy was administered to the contrast-enhancing mass with a margin of 2-3 cm (17). Rapp et al. reported on 97 recurrent GBM patients and found pure local recurrences in 79.3%, and combined local and distant recurrences in another 10.3% of patients (18). Obviously, diffuse, multifocal recurrences or deep infiltrative lesions are not surgical indications, unlike a local, well-circumscribed lesion. However, many patients will present with a local but poorly delineated lesion, for which a surgical indication cannot be advocated based on radiology alone.
Inherent Selection Bias Leads to Better Outcome in Surgically Treated Recurrent Patients
No randomized trials exist that randomize patients for surgery in the relapse setting, and most reported surgical series in recurrent GBM are retrospective (15). An overview of selected surgical outcome series is given in Table 1. Several authors have reported better outcome after surgery for recurrent GBM, compared to control nonsurgical populations. However, we have to take into account that these reports inherently suffer from selection bias, as patients who are selected for reoperation usually tend to be younger and have a better Karnofsky Performance Scale (KPS), and hence belong to a more favorable prognostic group (19). Azoulay et al. compared reoperated patients with a matched cohort of nonsurgically treated recurrent GBM patients, based on initial extent of resection (EOR) and subventricular zone involvement (10). Median OS in the surgical subgroup was 9.6 months versus 5.3 months in the nonsurgical group, which was statistically significant. They concluded that reoperation, combined with additional rescue therapies, can induce prolonged survival in recurrent GBM. Chen et al. described 65 recurrent GBM patients, of whom 20 were reoperated. Median OS after recurrence in the surgical group was statistically higher, at 13.5 months versus 5.8 months in the nonsurgical group (20). However, KPS at recurrence was also significantly higher in the surgical group, and 77.8% of the nonsurgical group received only palliative therapy. Tully et al. described 204 GBM patients of whom 24% were reoperated at recurrence, and they found a significantly improved survival of 20.1 months in reoperated patients compared to 9.0 months in recurrent patients who were treated nonsurgically (21). In their series, reoperated patients were younger, had a smaller initial tumor diameter, and were more likely to have an initial EOR of ≥50% at first resection. Moreover, reoperated patients had a significantly higher percentage of completion of adjuvant therapy (79.6% vs. 35.9%). To compensate for this selection bias, patients who were a priori unlikely to be selected for reoperation based on age or performance scale were excluded in a subgroup analysis. A much less significant, though still present, advantage for the surgical group was found at first recurrence, but no longer at second recurrence. Moreover, reoperation was no longer an independent predictor of OS in a multivariate analysis. The authors suggested that the improved OS in the surgical group might be more a reflection of favorable patient characteristics than of surgery itself. Chaichana et al. showed a survival benefit resulting from repeat resections, using a multivariate analysis and case control evaluation to correct for selection bias (22). In their series, median survival was 6.8 months for patients who had one resection versus 26.6 months for patients who underwent four resections. Very often, a more favorable course of disease and pattern of recurrence render these patients eligible for reoperation rather than vice versa (Figure 1).

On the other hand, several authors did not find a survival advantage for surgery. Franceschi et al. reported outcomes of a retrospective study on 232 recurrent GBM patients of whom 102 were treated with reoperation and chemotherapy, and compared these patients with 130 recurrent patients who were treated only with chemotherapy. They did not find a survival advantage in the reoperation group (23). In a large prospective registry database, including >1000 patients treated from 1997 to 2010, Nava et al.
did not find better survival after recurrence in patients who underwent a reoperation. However, this study did not provide data on patient stratification at recurrence or EOR (7).
Karnofsky Performance Scale and Age at Recurrence
The importance of patient characteristics at recurrence cannot be overestimated. Several older surgical outcome series have identified preoperative KPS as an important factor related to survival (24) or prolonged high QoL survival after recurrence (25). Also, KPS at recurrence in many studies turned out to be associated with better OS (19,(26)(27)(28)(29)(30). Patients with a poor performance scale are generally not proposed to undergo repeat surgery. A KPS of ≥70, which means the patient is able to take care of himself or herself but cannot perform normal
Figure 1 A 57-year-old lady was diagnosed with a left occipital glioblastoma (A), for which a total resection was performed (B).
She was treated with standard radiotherapy, temozolomide chemotherapy, and experimental dendritic cell vaccination. An asymptomatic recurrence in the medial wall of the resection cavity was seen in a routine follow-up scan 16 months after the first surgery (C). A second total resection was performed (D), after which combined CCNU and bevacizumab was given in the EORTC 26101 study. A second asymptomatic local recurrence at the lateral side of the resection cavity was seen 14 months later (e), and again a total resection was performed (F). Nine months later she developed a multifocal progression, resistant to temozolomide. She died 42 months after the first surgery.
Age per se appears to be less influential than KPS, and reoperations in selected elderly patients have been reported to be feasible (31).
Scales to Predict Survival after Surgery for Recurrent GBM
Two helpful prognostic scales are available to select patients for surgery at recurrence.
Extent of Resection: Equally Important at Recurrence?
In newly diagnosed GBM, it is generally accepted that a greater EOR is an independent prognostic factor for better outcome. A significant benefit on OS was present when EOR was at least 78%, with further stepwise improvement in the 95-100% range (34). The survival benefit of complete versus incomplete resection was estimated at almost 5 months in a post hoc analysis of patients initially included in the 5-ALA trial by Stummer et al. (35). In recurrent GBM, the importance of maximizing EOR is less universally accepted, with highly variable survival rates in the literature. In recent years, however, several authors have reported better OS when a higher EOR was achieved in the recurrent setting.

McGirt et al. described significantly improved OS after gross total resection (GTR) or near-total resection (NTR) compared to subtotal resection (STR) in a study of 294 reoperated patients. Median survival was 11 and 9 months for GTR and NTR, respectively, versus 5 months for STR (28). Bloch et al. showed in a series of 107 patients undergoing reoperation for recurrent GBM that EOR at reoperation was a significant predictor of OS. Interestingly, EOR at first resection was not a statistically significant factor when EOR at reoperation was included in a Cox proportional hazards model, suggesting that a complete resection at reoperation could overcome an initial STR (19). A large retrospective study by Ringel et al. described outcomes in 503 reoperated patients (30). In this series, EOR at reoperation was also significantly associated with better outcome, and these authors likewise concluded that a complete resection at first recurrence could compensate for an incomplete initial resection. The authors of the two last-mentioned studies favored aggressive surgical resection in recurrent GBM, arguing that the improved survival with higher EOR suggests a real oncological effect rather than merely the selection of younger patients with higher KPS for recurrence surgery. Oppenlander et al. reported on 170 patients reoperated for recurrent GBM and also found EOR to be significantly associated with OS following repeat resection. A threshold of at least 80% EOR was calculated to offer a significant survival benefit, suggesting that repeat surgery is useful even if only an STR can be achieved (36). Perrini et al. likewise found EOR at reoperation to be associated with longer OS in a multivariate analysis of 48 reoperated patients (26).
In a smaller series, however, De Bonis et al. did not find a survival advantage for patients who received a GTR (11 patients) versus a partial resection (22 patients) (27). Suchorska et al. analyzed post hoc the influence of reoperation in patients of the DIRECTOR trial, originally designed to test different dosing schemes of temozolomide administered at recurrence. Patients reoperated before study entry had prognostic factors (age, KPS, MGMT promoter methylation) similar to those of patients who were not reoperated. OS did not differ between the two groups. However, the subgroup of patients with a complete resection had significantly better OS than nonsurgical patients, while patients with an incomplete resection showed a trend toward a worse prognosis than nonsurgical patients. The authors concluded that reoperation improved survival if complete resection of contrast-enhancing tumor (CRET) could be achieved (37).
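As a concrete illustration of the EOR figures discussed above, the following sketch computes EOR from pre- and postoperative contrast-enhancing tumor volumes and checks it against the 80% reoperation threshold reported by Oppenlander et al. (36). The function name and the volumes are hypothetical illustrations of ours, not taken from any of the cited studies.

# Hypothetical sketch: extent of resection (EOR) from tumor volumes.
# The volumes below are placeholders, not study data.
def extent_of_resection(preop_cm3, postop_cm3):
    """Resected fraction of the preoperative volume, in percent."""
    return 100.0 * (preop_cm3 - postop_cm3) / preop_cm3

eor = extent_of_resection(32.0, 4.8)   # placeholder volumes in cm^3
print(f"EOR = {eor:.1f}% -> meets 80% reoperation threshold: {eor >= 80.0}")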
Improving Resection in the Recurrent Setting
Surgery for recurrent GBM can be technically more demanding, as the tumor is usually more invasive and anatomical margins are less well defined than at first surgery due to post-treatment gliosis (14). Given the growing evidence supporting maximal resection in the recurrent setting, surgical adjuncts such as intraoperative navigation, functional mapping, intraoperative ultrasound, and/or intraoperative MRI can be useful. The use of 5-aminolevulinic acid (5-ALA) to maximize EOR has been shown to lead to more complete resection and improved PFS in newly diagnosed GBM (38). In surgery for recurrent GBM, 5-ALA has also been shown to have a high predictive value for the detection of tumor cells and, importantly, did not seem to be affected by prior radiotherapy and/or chemotherapy (39).
Surgical Risks and Complications at Reoperation
In 1987, Ammirati et al. reported an early mortality rate of 1.4% and surgical morbidity of 16% per procedure. In their series of reoperated malignant glioma patients, 46% of patients improved on performance scale after surgery, but 25% worsened. Harsh et al. reported 5.1% mortality and 7.7% morbidity (25). Sipos and Afra found a 3.4% mortality rate in 60 reoperated GBM patients (40); in their series, patients with a lower preoperative KPS were more likely to deteriorate postoperatively. In a series of 20 reoperated GBM patients, Mandl et al. found a mortality of 15% and permanent neurological morbidity of 15% (41); moreover, 40% of patients had a worse KPS postoperatively. More recently, in a series of 503 reoperated patients, Ringel et al. found a nonneurological complication rate of 7.4% (30). New neurological deficits appeared in 16.8% of patients, of which 9.2% were transient and 7.6% permanent. The authors concluded that the complication rate of reoperation is higher than that of primary surgery, but the increase is rather small and the overall rate remained fairly low. D'Amico et al. published a retrospective study of 28 patients aged ≥65 years operated for recurrent GBM (31). In their study, no postoperative mortality was seen after reoperation, and the overall complication rate was 17.9% at first surgery and 25.8% at reoperation. This difference was not statistically significant, and the authors concluded that age by itself should not exclude patients from repeat surgery. In summary, combined mortality and morbidity rates of repeat surgery can be estimated at around 12-30%. This should always be taken into account, as the goal of surgery in recurrent GBM is essentially to prolong survival with good QoL.
Beyond Cytoreduction: Additional Benefits of Surgery

Tissue Diagnosis and Subclassification
Surgery has the advantage over other treatment strategies of providing clinicians with a new tissue diagnosis. This can be important when radiology remains equivocal between possible pseudoprogression, real progression, and radionecrosis. If the diagnosis of recurrence based on radiology, supplemented with nuclear imaging techniques, remains uncertain, surgery provides a unique opportunity for tissue confirmation of tumor regrowth or of viable tumor tissue (42), although no wide consensus exists on resampling pathology as the gold standard to confirm or definitively exclude pseudoprogression. Although not currently part of clinical practice, there is growing interest in the molecular subclassification of GBM to propel (personalized) experimental salvage treatments. Several subtypes of GBM have been described based on gene-expression profiles (43) and DNA methylation patterns (44). These subclassifications are already used to stratify and/or select patients in early clinical trials evaluating new antitumoral agents. For example, the mesenchymal subtype has been shown to correlate with poor radiation response and shorter survival (45), but it may be more immunogenic and respond better to immunotherapy (46). Moreover, Phillips et al. showed that upon recurrence a class switch toward the mesenchymal subclass is frequently seen, indicating that the initial molecular diagnosis cannot simply be extrapolated to the recurrence setting (47). As these molecular genetic data are expected to become part of clinical trials, the possibility of obtaining new tissue at recurrence will be of interest to researchers and neuro-oncologists.
Surgery to Obtain a State of Minimal Residual Disease
Surgery is unique in that it rapidly achieves at least a substantial reduction of the tumor mass. This can result in a (macroscopic) state of minimal residual disease, which can benefit other therapies. Keles et al. published a study of 119 GBM patients treated with temozolomide upon recurrence. They showed that residual tumor volume was a significant predictor of "time to progression" and "survival," even when adjusted for age, KPS, and time from initial diagnosis. They stratified patients by residual tumor volume of <10 cm³, 10-15 cm³, and >15 cm³, which correlated with 6- and 12-month PFS and OS. Although only three patients (3%) were reoperated before the start of chemotherapy in this series, the authors suggest that debulking surgery with the intent of reducing tumor volume to less than 10 cm³ could be considered before chemotherapy is commenced (48). Stummer et al. described that a complete resection not only improves survival by itself but may also enhance the efficacy of adjuvant therapies such as radiochemotherapy and BCNU wafers, based on post hoc analyses of data from three separate randomized phase 3 trials in newly diagnosed GBM (49).
Surgery to Start Local Chemotherapy
After resection of a recurrent GBM, the resection cavity can be implanted with carmustine wafers (Gliadel). The effect was evaluated in a randomized trial: patients with recurrent GBM had a 50% relative increase in survival (56% vs. 36%), without increased complications or toxicity (50). However, in a retrospective study comparing recurrent GBM patients treated with Gliadel to a matched cohort, Subach et al. reported increased complications without a survival benefit (51). Gliadel is currently rarely used in Europe (52), although Quick et al. reported in a recent publication that some form of chemotherapy was used after reoperation in more than 50% of cases overall (52) (Table 1).
Conclusion
No prospective randomized trials directly evaluating the effect of reoperation for recurrent GBM have been published, and almost all available outcome data from surgical series are blurred by the inherent selection bias toward patients with a high performance score and local recurrences. However, the literature provides some evidence for an oncological advantage when a high EOR (or a CRET) can be obtained. This judgment needs to be made by a multidisciplinary oncological team that includes oncological neurosurgeons. Besides its cytoreductive effect, surgery can play an important role in obtaining tissue. Given the expected future importance of subclassification of glioblastoma and/or detection of specific druggable mutations, surgery will probably remain an important treatment strategy in the recurrent setting.
Clinical and diagnostic imaging findings in police working dogs referred for hip osteoarthritis
Background: Osteoarthritis (OA) is the most commonly diagnosed joint disease in veterinary medicine, with at least 80% of cases of lameness and joint disease in companion animals classified as OA. Sporting and working animals are more predisposed to develop OA, since they are exposed to chronic fatigue injuries that lead to bone and muscular tissue damage and failure, resulting in clinical signs. This study aimed to characterize the clinical signs and diagnostic findings of Police working dogs presenting with bilateral hip OA at the time of diagnosis. Fifty animals were evaluated; inclusion criteria were bodyweight ≥ 15 kg, age over two years, and no medication or nutritional supplements for ≥ 6 weeks. Results: Weight distribution, joint range of motion at flexion and extension, thigh girth, digital thermography, and radiographic signs were recorded. Data from different Clinical Metrology Instruments (CMI) were collected: the Canine Brief Pain Inventory, Liverpool Osteoarthritis in Dogs, Canine Orthopaedic Index, and the Hudson Visual Analogue Scale. Results were compared by breed, age, sex, and Orthopaedic Foundation for Animals hip grade with the Independent Samples T-Test, ANOVA followed by a Bonferroni post hoc test, and the Pearson correlation coefficient, with p < 0.05. The sample included 30 males and 20 females, with a mean age of 6.5 ± 2.4 years and a bodyweight of 26.7 ± 5.2 kg. Animals with weight distribution below normal levels showed significant variations in joint extension and function scores; this was the only evaluation not correlated with at least one breed. Animals with a caudolateral curvilinear osteophyte showed a poorer clinical presentation and worse scores in all considered CMIs. Radiographic changes correlated with age and corresponded to worse CMI scores and weight distribution. Dutch Shepherd Dogs showed better CMI scores than the other considered breeds. Conclusions: Police working dogs presented with complaints related to hip OA at an early stage of the disease. Hip scores influenced clinical presentation, with moderate cases showing lower thigh girth and worse pain interference, pain severity, and function scores than mild cases. Patients with severe OA had lower thermographic evaluations than patients with moderate OA. Age was the primary variable influencing the considered CMI scores.
Background
Osteoarthritis (OA) is the most commonly diagnosed joint disease in both human and veterinary medicine, with at least 80% of cases of lameness and joint conditions in companion animals classified as OA [1-3]. Risk factors include breed, neutering, higher body weight, and age over eight years [4]. Police and working animals are at greater risk of developing orthopaedic disease than companion animals, and OA is common amongst them [5]. Hip OA is commonly bilateral and a consequence of canine hip dysplasia, and is influenced by many genes specific to each breed [6-9].
Pelvic radiographs are frequently performed in dogs to screen for hip dysplasia and OA. They have been used for over four decades in several screening schemes worldwide and are also a significant determinant of clinical and experimental outcomes [10-12]. The most common radiographic view is the ventrodorsal hip-extended view. The ventrodorsal flexed view (also called the frog-legged view) enhances the visibility of the cranial and caudal aspects of the femoral head and neck. This helps assess the presence of circumferential femoral head osteophyte (CFHO) and caudolateral curvilinear osteophyte (CCO), two features that represent early radiographic signs predicting the development of the clinical signs of hip OA [9,13-15].
Weight distribution and off-loading or limb favouring at stance is a commonly used subjective assessment during orthopaedic examination [16]. Animals with OA may not be overtly lame at a walk or a trot but exhibit subtle shifts in body weight distribution at a stance due to pain or instability [17,18]. Stance analysis has been reported as sensitive for detecting lameness in dogs, with better results in large breed dogs [19]. Digital thermal imaging is a non-invasive, non-radiating, contact-free, physiologic diagnostic tool that depends on heat resulting from physiologic functions related to skin temperature control [20][21][22]. It has been described as useful in several species, from humans to horses and cats, but its clinical utility has rarely been studied in small animals [21,23,24]. Animals with OA present a variety of clinical signs, which can vary significantly. Muscular atrophy is a consistent finding and is evident within a few weeks of OA onset [8,25]. Restricted range of motion (ROM), including flexion and extension, is usually present [8]. The evaluation of asymmetry, assessment of muscle atrophy level, measurement of static weight-bearing, and ROM measurement have been described as the most valid and sensitive physiotherapeutic evaluation methods [26,27].
Pain and functional ability are also important parameters in the evaluation of OA treatment efficacy [28]. Pain is a multi-dimensional experience with sensory, evaluative, and affective components [29]. Several clinical metrology instruments (CMI) have been developed to measure outcomes across these different dimensions. In dogs, CMIs are typically completed by a proxy. Those developed and validated for dogs include the Canine Brief Pain Inventory (CBPI) and the Liverpool Osteoarthritis in Dogs (LOAD) [30-33]. The CBPI rates a dog's pain and is divided into two sections: a pain severity score (PSS), which assesses the magnitude of the animal's pain, and a pain interference score (PIS), which evaluates the degree to which pain affects daily activities [34]. The Canine Orthopaedic Index (COI) was developed for clinical research in canine orthopaedics and for individual outcomes in four domains: stiffness, gait, function, and quality of life. It has been shown to have excellent reliability and validity [35]. The Hudson Visual Analogue Scale (HVAS) has been deemed repeatable and valid for assessing mild to moderate lameness in dogs, compared with force plate analysis as a criterion-referenced standard [36]. By collecting information from different CMIs, it is possible to characterize the disease in all its dimensions: the patient's level of pain, the degree of lameness, and the ability to enjoy life and perform daily activities. It also allows the effect of a treatment on each of those dimensions to be characterized.
This study aimed to characterize the clinical signs and diagnostic findings of Police working dogs presenting with bilateral hip OA. We hypothesized that differences occur when comparing breeds commonly used as Police working dogs.
Results

Measured values of overall age, body weight, weight distribution, digital thermography, thigh girth, and joint range of motion, overall and divided by breed and sex, are presented in Table 1; the breeds represented were German Shepherd Dogs (GSD), Belgian Malinois (BM), Labrador Retrievers (LR), and Dutch Shepherd Dogs (DSD). Comparing males to females, significant differences were observed in weight and thigh girth (p < 0.01), with male dogs having higher values. Comparing breeds, GSD were significantly heavier than BM (p < 0.01) and LR (p < 0.01), and also had significantly higher thigh girth than BM (p < 0.01), LR (p < 0.01), and DSD (p = 0.02). LR were significantly older and had lower thigh girth than GSD (p < 0.01 for both), BM (p < 0.01 and p = 0.05, respectively), and DSD (p < 0.01 for both). DSD were significantly heavier than BM (p < 0.01). DSD also had higher digital thermography values on the dorsoventral view than GSD (p = 0.02 for both) and on the lateral view than BM (p = 0.04). Thigh girth correlated with breed (r = -0.34, p < 0.01), weight (r = -0.47, p < 0.01), and sex (r = -0.72, p < 0.01). Age correlated with joint extension (r = -0.31, p < 0.01), and the thermographic measurement on the dorsoventral view correlated with breed (r = -0.30, p < 0.01). The weight distribution of both pelvic limbs combined correlated with joint extension (r = -0.36, p < 0.01), with a higher value for the left pelvic limb alone (r = -0.43, p < 0.01). The variables considered in the multiple regression statistically significantly predicted thigh girth, F(5,84) = 26.33, p < 0.001, R² = 0.610, with breed (p < 0.01), bodyweight (p < 0.01), and OFA hip score (p = 0.01) adding statistically significantly to the prediction.
With the cut-off for weight distribution of an individual limb set at 18%, significant variations were observed in joint extension (p = 0.02) and in the frequency of an irregular, misshapen femoral head (p = 0.03). At the 20% cut-off point, besides the differences in joint extension (p < 0.01) and in the frequency of an irregular, misshapen femoral head (p = 0.02), significant variations were observed in joint flexion (p < 0.01) and HVAS (p = 0.03). For both pelvic limbs combined, with the 36% and 40% cut-offs, significant variations were observed in joint extension (p < 0.01), function (p = 0.03), presence of CCO (p = 0.03 at 40%), and presence of a misshapen femoral head (p = 0.02).
Table 1. Mean values (± standard deviation) of overall weight, age, stance analysis (per pelvic limb and for the combination of both), thermography (ventrodorsal and lateral views), thigh girth, and range of motion (extension and flexion) measurements, overall and by breed, sex, and OFA score, of left and right pelvic limbs.

Absolute frequencies and percentages of radiographic findings, overall, by breed, and by sex, in the ventrodorsal and frog-leg views, are outlined in Table 2. Each joint was analyzed individually, for a total of 100 joints. Considering specific radiographic signs, patients with irregular wear on the femoral head were older (p < 0.01), with worse weight distribution (p < 0.01) and CMI scores (p < 0.01). Animals with a flattened or shallow acetabulum with an irregular outline had lower weight distribution values (p = 0.03). Animals with CCO, on both the ventrodorsal and frog-legged views, were older (p < 0.01), had lower weight distribution values (p = 0.04), and had worse CMI scores (for all, p < 0.01). Those with new bone formation on the acetabulum and femoral head and neck were older (p < 0.01) and had worse PSS, function, and quality-of-life scores (p < 0.01), and worse PIS scores without reaching significance (p > 0.05). Animals with a worn-away angle at the cranial effective acetabular rim had lower thigh girth (p < 0.01) and joint flexion (p = 0.04). When CFHO was observable on the ventrodorsal view, animals were heavier (p = 0.04) and had worse stiffness and function (p = 0.02), gait and COI (p < 0.01), and quality-of-life (p = 0.03) scores. The presence of CCO on the ventrodorsal view correlated with its presence on the frog-legged view (r = 0.51, p < 0.01). On the frog-legged view, the presence of CCO correlated with age (r = 0.47, p < 0.01) and joint extension (r = -0.51, p < 0.01).
Overall scores, by breed and sex, of the considered CMIs are presented in Table 3. While no significant differences were observed between male and female animals, the opposite was observed between breeds. GSD had lower function scores than LR (p = 0.04), while DSD had better results than the other breeds on the HVAS (p < 0.01 for GSD and p = 0.02 for LR) and LOAD (p = 0.02 for GSD, p = 0.02 for BM, and p < 0.01 for LR). Comparisons across OFA hip grades are presented in Table 4. Comparing animals at several cut-off points for PSS (scores of 4, 6, and 8), the same significant differences were observed consistently, with animals above the cut-off having worse joint extension (p < 0.01) and a higher frequency of CCO on the ventrodorsal and frog-legged views (p < 0.01). When comparing the same cut-offs for PIS, at the 4 and 6 cut-offs animals had worse joint extension (p < 0.01) and a higher frequency of CCO on the ventrodorsal and frog-legged views (p < 0.01). At the 8 cut-off point, the occurrence of all other radiographic signs was also significantly higher (p < 0.01), and weight distribution on the left pelvic limb and on both limbs was worse (p < 0.01).

Table 2. Overall, by breed, and by sex, absolute frequencies and percentages within group of radiographic findings in the ventrodorsal and frog-leg views of hip joints. For each animal, both joints were considered, representing one hundred joints.
Discussion
Hip OA is very common in large breeds such as German Shepherd Dogs and Labrador Retrievers. In working dogs, it takes a toll on performance and quality of life [37,38]. To our knowledge, this is the first study to describe the clinical presentation of Police working dogs at first diagnosis of hip OA. It presents a wide variety of physical examination results and several diagnostic modalities to provide an in-depth description of affected animals.
Radiographic examination is a staple in OA evaluation. Still, it is well established that radiographic signs develop later than the structural changes associated with OA, and clinical signs do not always correlate with radiographic signs [9,39,40]. CFHO and CCO are considered radiographic predictors of future OA development [9]. Animals presenting with these radiographic signs had a significantly worse clinical presentation, particularly with CCO, showing worse results in all considered CMI scores, from pain to lameness level and functionality. Whether the presence of CCO, or of other radiographic findings, influences response to treatment remains to be determined. Several differences were found between OFA grades, specifically in pain and function scores and thermographic evaluation. The sequence of these differences may track the course of OA: from mild to moderate, structural changes occur and are detected on radiographic examination, specifically CCO, one of the predictive signs of OA development [13-15]. These structural changes are then reflected in clinical signs, such as muscular atrophy and pain, which take a toll on daily activities. With severe OA, a corresponding loss of functional tissue and of the muscle masses surrounding the joint occurs [21,41]. These facts may account for the lower thermographic values observed in severe hip grades compared to moderate hip grades. OFA hip grade was also one of the variables, alongside age, adding statistically significantly to the prediction of PIS scores.
Some of the differences observed during the physical examination were expected, such as GSD being significantly heavier than other breeds (such as BM and LR) and having greater thigh muscle mass; the same applies to male dogs being heavier than females and having higher thigh girth. Multiple regression analysis confirmed the effect of breed and bodyweight in predicting thigh girth. It also showed that OFA hip grade significantly influenced thigh girth, making it a useful measure in evaluating hip OA. These variables combined may lead to a positive correlation [42,43], particularly in dogs with higher body condition scores [4]. All of the animals in this sample had a body condition score of 4 or 5. Still, the fact that male dogs tend to be heavier than females (a tendency confirmed in this study) may place them at greater risk of developing OA and may account for the higher number of males observed. However, the OFA hip score was not predicted by breed, age, sex, or bodyweight, so future studies should clarify these facts. Hip OA, compared with OA in other joints, seems to be better tolerated by animals, mainly because of the larger muscle masses surrounding this joint [8]. The quadriceps muscle group is particularly prone to atrophy secondary to decreased limb function; measuring thigh girth therefore helps in the initial assessment and in tracking patient evolution and treatment outcome [44]. In this study, we described thigh girth measurements of dogs at first diagnosis of hip OA, specifically of the breeds most commonly used as working and sporting dogs; it would nevertheless be of interest to have values from healthy subjects for comparison. The evaluation of joint ROM is a standard measurement, with OA joints usually exhibiting ROM restrictions. In the hip joint specifically, a decrease in ROM, particularly in extension, can be present, though this is not a universal finding [33,39]. ROM correlated with age, which may be attributed to disease progression, since some of the older animals had worse OFA scores. Normal hip ROM has been described for some breeds: in military working GSD, 44° ± 6 at flexion and 155° ± 6 at extension, and in LR, 50° ± 2 at flexion and 162° ± 3 at extension [45-47]. Our study measured lower values in both breeds, which could be expected with OA. Still, it would be interesting to have a group of disease-free dogs to compare these values against and to establish normal values in the other two considered breeds.
The mean age of animals in this sample was 6.5 years, earlier than the commonly considered OA risk threshold of > 8 years [4]. GSD and DSD were on average even younger than 6.5 years, with only LR beyond this point and significantly older than the other breeds. Multiple regression analysis showed that age was the primary variable adding statistically significantly to the prediction of CMI scores. All of the animals in the sample were screened before starting training and active work, so the earlier diagnosis may be attributed to the high demand and stress placed on these animals' musculoskeletal structures and the subsequent toll on performance [48]. Since these animals are active working dogs, it is possible that the disease actually develops, or is simply detected, earlier than in other dogs. The reason for the later diagnosis in LR is not clear. It may be due to breed characteristics, with LR being less explosive and less driven than BM, for example. A less physically demanding mission (most LR were product detection dogs, whereas most of the remaining animals were involved in search and rescue and use-of-force activities) might also be an important factor.
Normal weight distribution on a weight distribution platform is the same as the total pressure index on a pressure-sensitive walkway: 30/30/20/20 (left thoracic limb/right thoracic limb/left pelvic limb/right pelvic limb) [49,50]. For following the evolution of hip OA, bodyweight distribution at a stance may even be superior to vertical impulse (VI) and peak vertical force (PVF), since dogs adopt different standing postures to increase acetabular coverage. Sensitivity and specificity seem to be higher with a cut-off point of 18% for pelvic limbs [8,18,51]. We considered both the 20% and 18% cut-offs, with more differences found at 20%. Mean values were below the 20% value but showed some dispersion. Since the included animals had bilateral disease, it is quite possible that at any given point they were overloading one side to protect the other, leading to very different weight distribution values between contralateral limbs in the same animal. Dogs with pelvic limb lameness tend to redistribute weight side-to-side rather than from pelvic to thoracic limbs [52,53]. For that reason, we also analyzed weight distribution for both pelvic limbs combined, with two different cut-off points. This may be an interesting approach, since it captured significant variations in joint extension, function scores, and CCO, and it should be the subject of further research, mainly since it did not show associated breed variations. It has been described that male dogs naturally tend to carry more weight on the thoracic limbs and may exhibit smaller improvements in response to treatment [17]. We found no significant variation between males and females in weight distribution, but future studies should evaluate this hypothesis.
Canine thermal imaging has been documented only recently. Still, growing interest in this modality has led to an increasing number of studies evaluating its use to assess the canine hip, stifle, elbow, and intervertebral disc [24,54-58]. To our knowledge, this is the first study describing values for dogs with hip OA. The coat's type and color are variables that must be taken into account, and their influence has been documented [55,56,59,60]. Our results seem to confirm this, since DSD showed significantly different values than other breeds, possibly due to their brindle coat, in opposition to the lighter coats of the other breeds. In human OA studies, increased temperatures have been related to even slight degenerative changes, and low temperatures to more severe disease [61]. This effect was not found in our study, possibly because of the coat variation effect. The value of thermography in evaluating response to treatment remains to be determined.
CMIs represent a patient-centred approach that, as in human medicine, has been incorporated into veterinary assessments in different species [62-64]. They may also capture a different dimension of OA, since owners are often more focused on the dog's ability to perform daily activities than on an increase or decrease in ROM or the use of a single limb at a walk or trot [65,66]. While no differences were observed when comparing animals by sex, several differences were observed between breeds and relative to values reported for pet dogs of the same breeds. One reason for this may be the nature of the animals' specific mission: when involved in a more physically challenging task, complaints or limitations are more likely to arise. Another reason may be age (which correlated with several scores), since older animals tend to be more experienced and better able to manage the effort, making them less prone to injury [67]. Also, since these animals are selected for working predisposition, they present high drive, which may mask some complaints and lead, for example, to relatively low PSS. We also evaluated whether different cut-off points of pain scores (measured with the PIS and PSS) presented significant differences. The main finding was that, as could be expected, animals with higher PIS scores had significantly lower weight distribution, but also higher frequencies of all radiographic signs.
This study has some limitations, namely the lack of a control group of non-lame dogs. This limitation is mainly related to the convenience nature of the sample, comprised of dogs specifically presenting for treatment. Some previously reported results of similar evaluations were obtained in the same breeds included in our sample, which still allows comparison. Since data were collected at a single moment, we cannot comment on the value of each finding for the prognosis or treatment monitoring of OA, which should be addressed in future studies.
Conclusions
To our knowledge, this study is the first to describe several clinical and radiographic findings of working dogs of different breeds with hip OA. Police working dogs presented with complaints related to hip OA at an early stage of the disease and at a younger age than non-working dogs. LR were significantly older than the other considered breeds. Hip scores influenced clinical presentation, with moderate cases showing lower thigh girth and worse PIS, PSS, and function scores than mild cases. Patients with severe OA had lower thermographic evaluations than patients with moderate OA. Age was the primary variable influencing the considered CMI scores.
Methods
The sample comprised fifty (N = 50) Police working dogs with bilateral hip OA. It was a convenience sample, composed of patients presented at the Clínica Veterinária de Cães (Portuguese Gendarmerie Canine Clinic) to undergo hip OA treatment after initial diagnosis. Subsequent treatment was randomly assigned, as the animals took part in a study evaluating intra-articular treatments for OA. Patients were active police working dogs of the Guarda Nacional Republicana (Portuguese Gendarmerie Canine Unit). The diagnosis was based on the dog's history, trainer complaints (difficulty rising, jumping, and maintaining obedience positions, stiffness, and decreased overall performance), physical examination (pain during joint mobilization, stiffness, and reduced range of motion), and radiographic findings (OFA hip scores of mild, moderate, or severe) consistent with bilateral hip OA. Inclusion criteria were: bodyweight ≥ 15 kg, age over 2 years, and no medication or nutritional supplements for 6 weeks or more before the beginning of the study. Animals suspected of or presenting any other orthopaedic or concomitant disease (ruled out through physical examination, complete blood count, and serum chemistry profile), or not tolerating data collection, were excluded. All evaluations were performed at the same moment by the same researcher, who had extensive experience in all procedures, to reduce inter-observer variability.
Digital thermography
For the collection of digital thermography images, dogs were allowed to walk around a large, plain-walled room and adjust to room temperature (set at 21°C) in a relaxed way for approximately 30 min before imaging. They were then positioned in an upright standing position, as symmetrically as possible, without the trainer or veterinarian touching the torso. A dorsoventral and two lateral images (one for each limb) were obtained from every animal. Every dorsoventral thermographic image included at a minimum the area from the last lumbar vertebra to the first coccygeal vertebra, at a distance of 60 cm (Fig. 1) [23]. Lateral views had the greater trochanter at the centre of the image, also at a distance of 60 cm. All images were captured with a FLIR ThermaCAM E25® camera and kept when the anatomical landmarks were included and the image was steady enough to determine their location. The free software Tools (FLIR Systems, Inc.) was used to analyse the images, with a rainbow color palette. Temperature boxes of equal size were placed over the anatomical area of the hip joint on both views, and mean and maximal temperatures were determined.
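As a minimal sketch of the ROI measurement just described, assuming the thermogram is available as a 2-D temperature array, the logic reduces to placing an equal-size box and reporting its mean and maximal values. The box coordinates, array shape, and data below are placeholders of ours, not study data or FLIR software calls.

# Minimal sketch: mean and max temperature in an equal-size ROI box.
# The thermogram here is synthetic placeholder data, not a real FLIR frame.
import numpy as np

def roi_stats(thermogram, row, col, size=40):
    """Mean and maximal temperature in a size x size box anchored at (row, col)."""
    box = thermogram[row:row + size, col:col + size]
    return float(box.mean()), float(box.max())

img = np.random.normal(31.5, 0.6, (240, 320))   # placeholder temperature array, degrees C
mean_t, max_t = roi_stats(img, 100, 150)
print(f"mean {mean_t:.1f} C, max {max_t:.1f} C")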
Stance Analysis
Stance analysis was conducted with a weight distribution platform (Companion Stance Analyzer; LiteCure LLC®, Newark, Delaware, United States). According to the manufacturer's guidelines, it was placed in the centre of a room, at least 1 meter from the walls, calibrated at the beginning of each day, and zeroed before each data collection. Animals were encouraged to stand onto the weight distribution platform. The trainer helped ensure that the patients placed one foot on each quadrant of the platform while maintaining a natural stance, with the centre of gravity and stability (measured by the platform) near the platform's middle. Gentle restraint was used to keep the patient's head in a natural, forward-facing position when needed. For all animals, at least 20 measurements were performed and the mean value determined. Normal weight distribution for each pelvic limb was considered 20% of total body weight [18]. Since all included animals had bilateral OA, weight distribution on both pelvic limbs combined was also considered and set at 40% (20% left pelvic limb + 20% right pelvic limb).
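A minimal sketch of the averaging and cut-off logic described above follows; the readings are hypothetical illustrations, not study data or output of the Companion Stance Analyzer software.

# Sketch: average repeated stance readings and apply the pelvic-limb cut-offs.
# Readings are hypothetical percent-of-bodyweight values per quadrant.
import statistics

readings = [  # (left thoracic, right thoracic, left pelvic, right pelvic)
    (31.0, 30.2, 19.5, 19.3),
    (30.4, 31.1, 18.9, 19.6),
    # ... at least 20 measurements in practice
]
lp = statistics.mean(r[2] for r in readings)
rp = statistics.mean(r[3] for r in readings)
print(f"left pelvic {lp:.1f}%, right pelvic {rp:.1f}%, combined {lp + rp:.1f}%")
print("below 18% individual cut-off:", lp < 18 or rp < 18,
      "| below 36% combined cut-off:", lp + rp < 36)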
Clinical Assessment
Thigh girth was determined with a Gullick II measuring tape at 70% of thigh length, measured from the tip of the greater trochanter, with the leg in an extended position while in lateral recumbency and the dog relaxed [44]. With the patient in the same position, hip joint ROM was obtained with a goniometer (Veterinary Instrumentation, United Kingdom) at extension and flexion, with a flexed stifle [68]. These measurements were made in triplicate, and the mean value was calculated.
Radiographic examination
Radiographic studies were conducted under light sedation, using a combination of medetomidine (0.01 mg/kg) and butorphanol (0.1 mg/kg) given intravenously. A ventrodorsal extended-legs view and a frog-legged view were obtained. Hips were graded according to the OFA hip grading scheme [69] by the researcher, blinded to the patient's identification. A mild score corresponded to a partially subluxated femoral head, causing an incongruent and widened joint space, with a shallow acetabulum only partially covering the femoral head; in young dogs (24 to 36 months), OA lesions may not be present. Moderate grades were attributed when significant subluxation was present and the femoral head was barely seated in a shallow acetabulum; secondary remodeling along the femoral neck and head, acetabular osteophytes, and subchondral sclerosis were present. In severe cases, the femoral head was partly or completely out of a shallow acetabulum, with extensive secondary arthritic bone changes along the femoral head and neck, acetabular rim changes, and large amounts of abnormal bone pattern changes. A full description of the OFA hip grading scheme is available online (https://www.ofa.org/diseases/hip-dysplasia/grades). The presence of specific radiographic signs was also recorded: irregular wear on the femoral head, making it misshapen with loss of its rounded appearance; a flattened or shallow acetabulum with an irregular outline; CCO; new bone formation on the acetabulum and femoral head and neck; a worn-away angle at the cranial effective acetabular rim; subchondral bone sclerosis along the cranial acetabular edge; and CFHO [9,39,70,71]. In the frog-legged view, the presence of CCO and CFHO was also recorded.

Fig. 1. A dorsoventral view of a dog with moderate osteoarthritis (left) and another with severe osteoarthritis (right), including the area from the last lumbar vertebra to the first coccygeal vertebra at a minimum, at a distance of 60 cm. Arrowhead indicates cranial direction. Arrow indicates the anatomical location of the hip joint. An area of increased temperature is observed in the patient with moderate OA and of lower temperature in the patient with severe OA.
Clinical metrology instruments
At the evaluation moment, online versions of the HVAS, CBPI, COI, and LOAD, prepared for this purpose, were completed by the trainers. The same trainer completed all CMIs for each dog.
Statistical Analysis
Normality was assessed with a Shapiro-Wilk test. Each measured parameter was compared with an Independent Samples T-Test (when two groups were considered, as for sex) or ANOVA followed by a Bonferroni post hoc test for multiple comparisons (when more than two groups were considered). CMI scores were compared with a Wilcoxon signed-rank test. Different score cut-off points (4, 6, and 8) were analyzed for PIS and PSS. Cut-off points of 20% and 18% [18] of body weight per pelvic limb were considered for weight distribution. Since hip OA is often bilateral, results for the combination of both pelvic limbs were also analyzed, at 36% (18% + 18%) and 40% (20% + 20%). Correlations between parameters were assessed with the Pearson correlation coefficient. Multiple regression was run to predict evaluated parameters from age, sex, breed, body weight, and OFA hip score. All results were analyzed with IBM SPSS Statistics version 20, and significance was set at p < 0.05.
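For illustration, an equivalent analysis could be scripted with SciPy and statsmodels instead of SPSS. The sketch below mirrors the tests named above; the file name, column names, and model formula are assumptions of ours, not the study's actual dataset.

# Sketch of the analysis pipeline above (hypothetical file and column names).
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("hip_oa.csv")                      # hypothetical data file, one row per dog

stats.shapiro(df["thigh_girth"])                    # normality check
stats.ttest_ind(df[df.sex == "M"].thigh_girth,      # two groups: independent-samples t-test
                df[df.sex == "F"].thigh_girth)
stats.f_oneway(*[g.thigh_girth for _, g in df.groupby("breed")])  # >2 groups: one-way ANOVA
# Bonferroni-adjusted pairwise comparisons can follow via statsmodels' MultiComparison.
stats.pearsonr(df["age"], df["extension"])          # correlations

# multiple regression, e.g. predicting thigh girth (significance set at p < 0.05)
model = smf.ols("thigh_girth ~ age + C(sex) + C(breed) + weight + C(ofa)", data=df).fit()
print(model.summary())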
$A_k \bar F$ Chiral Gauge Theories
We study asymptotically free chiral gauge theories with an SU($N$) gauge group and chiral fermions transforming according to the antisymmetric rank-$k$ tensor representation, $A_k \equiv [k]_N$, and the requisite number, $n_{\bar F}$, of copies of fermions in the conjugate fundamental representation, $\bar F \equiv \overline{[1]}_N$, to render the theories anomaly-free. We denote these as $A_k \, \bar F$ theories. We take $N \ge 2k+1$ so that $n_{\bar F} \ge 1$. The $A_2 \, \bar F$ theories form an infinite family with $N \ge 5$, but we show that the $A_3 \, \bar F$ and $A_4 \,\bar F$ theories are only asymptotically free for $N$ in the respective ranges $7 \le N \le 17$ and $9 \le N \le 11$, and that there are no asymptotically free $A_k \, \bar F$ theories with $k \ge 5$. We investigate the types of ultraviolet to infrared evolution for these $A_k \, \bar F$ theories and find that, depending on $k$ and $N$, they may lead to a non-Abelian Coulomb phase, or may involve confinement with massless gauge-singlet composite fermions, bilinear fermion condensation with dynamical gauge and global symmetry breaking, or formation of multifermion condensates that preserve the gauge symmetry. We also show that there are no asymptotically free, anomaly-free SU($N$) $S_k \, \bar F$ chiral gauge theories with $k \ge 3$, where $S_k$ denotes the rank-$k$ symmetric representation.
I. INTRODUCTION
The properties of chiral gauge theories, especially in the strong-coupling regime, remain a challenge for theoretical understanding. One requires that such a theory must be free of any triangle anomaly in gauged currents, since such an anomaly would spoil the renormalizability of the theory. Imposing the additional requirement that such a theory must be asymptotically free guarantees that there is at least one region, namely the deep ultraviolet (UV) at large Euclidean energy/momentum scales µ, where the running gauge coupling g(µ) is small, so that the properties of the theory are reliably calculable using perturbative methods. A chiral gauge theory is said to be irreducibly chiral if it does not contain any vectorlike subsector. In this case, the chiral gauge symmetry precludes any fermion mass terms in the underlying Lagrangian. We shall focus on irreducibly chiral theories here. In an asymptotically free gauge theory, as the reference scale µ decreases toward the infrared (IR) from the ultraviolet, the running gauge coupling grows. One possibility is that the beta function has an infrared zero at a small value of the coupling, which constitutes an infrared fixed point of the renormalization group (RG). In this case, in the full renormalization-group evolution of the theory from the deep UV to the IR, the gauge interaction remains weakly coupled, and one expects that the IR behavior is that of a (deconfined) non-Abelian Coulomb phase. In contrast, if the beta function has an IR zero at a sufficiently large value of the coupling, or if the beta function does not have any IR zero, then the theory becomes strongly coupled in the infrared. In this case, several types of behavior can occur. Since the fermions are massless, the theory is invariant under a global flavor symmetry. If one can construct gauge-singlet fermionic operator products that match the global anomalies of the fundamental fermions, a condition known as 't Hooft global anomaly matching, then the strongly coupled chiral gauge interaction may confine and produce massless gauge-singlet composite spin-1/2 fermions [1]-[13]. Alternatively, the strong gauge interaction may produce fermion condensate(s) that spontaneously break gauge and global chiral symmetries [8,9],[14]-[22]. This latter type of behavior can occur in several stages at different energy scales, resulting in a hierarchy of symmetry-breaking scales. In addition to their intrinsic field-theoretic interest, strongly coupled chiral gauge theories have been applied in efforts to construct (preon) models of composite quarks and leptons and models explaining electroweak symmetry breaking and the structure of fermion generations and masses in the Standard Model. Some work along these lines includes [1]-[21].
In this paper we shall study asymptotically free chiral gauge theories in four spacetime dimensions (at zero temperature) with an SU(N) gauge group and chiral fermions transforming according to the antisymmetric rank-k tensor representation, denoted $A_k \equiv [k]_N$, and the requisite number, $n_{\bar F}$, of copies of fermions in the conjugate fundamental representation, $\bar F \equiv \overline{[1]}_N$, to render the theories anomaly-free [23]. We denote these as $A_k \bar F$ theories. We take $N \ge 2k+1$ so that the theory is chiral and $n_{\bar F} \ge 1$. We extend previous studies of the $A_2 \bar F$ theories [2,8-12,18] with further analysis of fermion condensation channels and sequential symmetry breaking, while the $A_k \bar F$ theories with $k \ge 3$ are, to our knowledge, new here. The $A_2 \bar F$ theories form an infinite family with $N \ge 5$, but we show that the $A_3 \bar F$ and $A_4 \bar F$ theories are asymptotically free only for N in the respective ranges $7 \le N \le 17$ and $9 \le N \le 11$, and that there are no asymptotically free $A_k \bar F$ theories with $k \ge 5$. We investigate the types of ultraviolet to infrared evolution of these $A_k \bar F$ theories and find that, depending on k and N, they may lead to a non-Abelian Coulomb phase, or may involve confinement with massless gauge-singlet composite fermions, or fermion condensation with dynamical gauge and global symmetry breaking. One of the methods that we use for our analysis is the most-attractive-channel criterion for bilinear fermion condensation [2]. We find two cases in each of which two channels are equally attractive, so that the most-attractive-channel criterion cannot determine which is more likely to occur; for these we show how vacuum alignment arguments prefer one channel over the other. We also discuss the possibility that the strongly coupled gauge interaction produces multifermion condensates that preserve the gauge symmetry. Finally, we show that there are no asymptotically free, anomaly-free SU(N) $S_k \bar F$ chiral gauge theories with $k \ge 3$, where $S_k$ denotes the rank-k symmetric representation. We restrict our consideration here to chiral gauge theories with only gauge and fermion fields, but without any scalar fields; the nonperturbative behavior of systems with interacting gauge, fermion, and scalar fields has been studied, e.g., in [24], and some recent work on RG flows in chiral theories with scalar fields includes [25].
This paper is organized as follows. In Sect. II we briefly review the theoretical methods that we use for our work. In Sect. III we discuss the construction of the $A_k \bar F$ chiral gauge theories and determine the constraints from anomaly cancellation and asymptotic freedom. In Sect. IV we ascertain whether or not the maximal scheme-independent information from the beta function indicates that the theory has an infrared zero, and, if so, we calculate the value of $\alpha$ at this zero of the beta function. Sections V and VI contain general discussions of the global flavor symmetry group and the most attractive channel for bilinear fermion condensate formation in an $A_k \bar F$ theory. In Sects. VII, VIII, and IX we present our results on the specific $A_2 \bar F$, $A_3 \bar F$, and $A_4 \bar F$ classes of theories, respectively. In Sect. X we discuss multifermion condensates that can preserve the chiral gauge symmetry. In Sect. XI we prove that there are no asymptotically free $S_k \bar F$ chiral gauge theories with $k \ge 3$, where $S_k$ denotes the symmetric rank-k tensor representation of SU(N). Our conclusions are given in Sect. XII, and some auxiliary formulas are included in two appendices.
II. METHODS OF ANALYSIS
In this section we briefly discuss the methods of analysis that we use for our work. We refer the reader to [12,13,22] for more detailed discussions of these methods. To determine the constraints due to the requirement of asymptotic freedom and, for asymptotically free theories, to study the UV to IR evolution, we calculate the beta function to its maximal scheme-independent order, namely two loops. We denote $\alpha(\mu) = g(\mu)^2/(4\pi)$ and $a(\mu) \equiv g(\mu)^2/(16\pi^2)$. The beta function is $\beta_\alpha = d\alpha/dt$, where $dt = d\ln\mu$, with the series expansion

$\beta_\alpha = \dfrac{d\alpha}{dt} = -2\alpha \sum_{\ell=1}^{\infty} b_\ell \, a^\ell = -2\alpha \sum_{\ell=1}^{\infty} \bar b_\ell \, \alpha^\ell$ .  (2.1)

In Eq. (2.1) we have extracted an overall minus sign, $b_\ell$ is the $\ell$-loop coefficient, and $\bar b_\ell = b_\ell/(4\pi)^\ell$ is the reduced $\ell$-loop coefficient. The $n$-loop beta function, denoted $\beta_{\alpha,n\ell}$, is given by Eq. (2.1) with the upper limit on the $\ell$-loop summation equal to $n$ instead of $\infty$. The requirement of asymptotic freedom means that $\beta_\alpha < 0$ for small $\alpha$, which holds if $b_1 > 0$. General expressions for $b_1$ [26] and $b_2$ [27] are given in Appendix A. Given that $b_1 > 0$, it follows that if $b_2 < 0$, then the two-loop beta function, $\beta_{\alpha,2\ell}$, has an IR zero at $a_{IR,2\ell} = -b_1/b_2$, or equivalently,

$\alpha_{IR,2\ell} = -\dfrac{\bar b_1}{\bar b_2} = -\dfrac{4\pi\, b_1}{b_2}$ .  (2.2)

For sufficiently large fermion content in an asymptotically free theory, $b_2$ may be negative, so that the beta function exhibits such an infrared zero. This was discussed for vectorial gauge theories in [27,28] and is also important for the chiral gauge theories under consideration here. As the fermion content is reduced, $\alpha_{IR,2\ell}$ increases, and for sufficiently small fermion content the beta function may not exhibit any IR zero. Both in the case where the IR zero occurs at a substantial value of $\alpha_{IR,2\ell}$ and in the case where the beta function has no IR zero, the theory becomes strongly coupled in the infrared. Higher-loop calculations of the IR zero for various fermion representations, including rank-2 tensor representations, were presented in [29,30]. Since these higher-loop calculations are scheme-dependent, it was necessary to assess the sensitivity of the IR zero to scheme transformations; this was done in [31]-[33]. For our purposes here, it will suffice to use the maximal scheme-independent information available from the beta function, as encoded in the coefficients up to the two-loop level.
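For concreteness, the two-loop IR zero of Eq. (2.2) is straightforward to evaluate numerically. The helper below is our own illustration under the normalization of Eq. (2.1), not part of the paper's analysis.

# Minimal sketch: two-loop IR zero alpha_IR = -4*pi*b1/b2 (Eq. (2.2)),
# defined when b1 > 0 (asymptotic freedom) and b2 < 0.
import math

def alpha_ir_two_loop(b1, b2):
    """Return alpha_IR,2l, or None if no perturbative two-loop IR zero exists."""
    if b1 <= 0 or b2 >= 0:
        return None
    return -4.0 * math.pi * b1 / b2

# e.g. alpha_ir_two_loop(3.0, -10.0) gives about 3.77 for illustrative coefficients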
In these situations of strong coupling in the infrared, one can apply various methods to analyze the resultant nonperturbative behavior of the chiral gauge theory. First, one may investigate whether the fermion content of the theory satisfies the 't Hooft global anomaly matching conditions [1]. These are necessary but not sufficient conditions for the gauge interaction to confine and produce massless gauge-singlet composite spin-1/2 fermions. If such massless spin-1/2 fermions are actually produced, they would saturate the massless-fermion sector of the theory, since any composite fermions with spins J ≥ 3/2 would be massive [3].
An alternative possibility is that the gauge interaction produces bilinear fermion condensates. In an irreducibly chiral theory (without a vectorlike subsector), these condensates break the gauge symmetry, as well as global flavor symmetries. A common method to identify the most likely channel in which this condensation occurs is the most-attractive-channel (MAC) criterion [2]. Let us consider a fermion condensation channel in which fermions in the representations $R_1$ and $R_2$ of the gauge group G form a (Lorentz-invariant) bilinear fermion condensate that transforms according to the representation $R_c$ of G, denoted as

$R_1 \times R_2 \to R_c$ ,  (2.3)

where the subscript c stands for "condensate channel". An approximate measure of the attractiveness of this condensation channel is

$\Delta C_2 = C_2(R_1) + C_2(R_2) - C_2(R_c)$ ,  (2.4)

where $C_2(R)$ is the quadratic Casimir invariant for the representation R (see Appendix B). In this approach, the most attractive channel for bilinear fermion condensation is the one with the largest (positive) value of $\Delta C_2$, and this is thus the most likely to occur. If two or more such channels have the same value of $\Delta C_2$, then we make use of a vacuum alignment argument [14,15], as follows. Consider the case where two channels have the same $\Delta C_2$ and produce condensates in the representations $R_{c_1}$ and $R_{c_2}$, and assume that these condensates break the initial gauge group G to the respective subgroups $H_{c_1} \subset G$ and $H_{c_2} \subset G$. The channel whose condensate preserves the larger residual gauge symmetry is then preferred. This is based on an energy minimization argument, since the channel that respects the largest residual gauge symmetry minimizes the number of gauge bosons that pick up masses. A rough estimate of the minimal critical strength of the coupling for fermion condensation in a given channel has been obtained from an analysis of the Schwinger-Dyson equation for the fermion propagator and is [34]

$\alpha_{cr,c} \sim \dfrac{2\pi}{3\,\Delta C_2(R_c)}$ .  (2.5)

Owing to the uncertainties inherent in the strong-coupling physics describing this condensation phenomenon, Eq. (2.5) is only a rough estimate. For our purposes, it will be convenient to define the ratio

$\rho_c = \dfrac{\alpha_{IR,2\ell}}{\alpha_{cr,c}}$  (2.6)

for the channel (2.3). If $\rho_c$ is considerably larger (smaller) than unity, then condensation in the channel (2.3) is likely (unlikely). We note that fermion condensation in a strongly coupled gauge theory may, in principle, involve a product of an even number of fermion operators larger than just two [40,41]. A conjecture for a thermally motivated inequality concerning a measure of field degrees of freedom, as evaluated in the UV and in the IR, was proposed and studied for vectorial and chiral gauge theories in [35] and [10] and investigated further in several works, including [11]-[13],[36]. Here our main methods will consist of analyses of beta functions, various channels for fermion condensation, and construction of low-energy effective field theories resulting from self-breaking of chiral gauge theories.
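The MAC bookkeeping of Eqs. (2.4)-(2.5) is easy to script. The sketch below (our own illustration) evaluates $\Delta C_2$ and the rough critical coupling for the example channel $A_2 \times \bar F \to F$ in SU(N), chosen here purely for illustration, using the standard quadratic Casimirs $C_2(F) = (N^2-1)/(2N)$ and $C_2(A_2) = (N-2)(N+1)/N$; since $C_2(\bar F) = C_2(F)$, this channel has $\Delta C_2 = C_2(A_2)$.

# Sketch: Delta C2 (Eq. (2.4)) and rough critical coupling (Eq. (2.5)) for SU(N).
# Example channel A2 x Fbar -> F is illustrative, not a result of the paper.
import math

def c2_fund(N):
    return (N**2 - 1) / (2 * N)       # C2 of the (conjugate) fundamental

def c2_a2(N):
    return (N - 2) * (N + 1) / N      # C2 of the rank-2 antisymmetric tensor

def delta_c2(c2_r1, c2_r2, c2_rc):
    return c2_r1 + c2_r2 - c2_rc      # Eq. (2.4)

def alpha_crit(d_c2):
    return 2 * math.pi / (3 * d_c2)   # rough Schwinger-Dyson estimate, Eq. (2.5)

N = 5
d = delta_c2(c2_a2(N), c2_fund(N), c2_fund(N))   # equals C2(A2) = 3.6 for N = 5
print(f"N={N}: Delta C2 = {d:.3f}, alpha_cr ~ {alpha_crit(d):.3f}")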
III. $A_k\bar F$ THEORIES AND CONSTRAINTS FROM ANOMALY CANCELLATION AND ASYMPTOTIC FREEDOM
The chiral gauge theories that we study here have an SU($N$) gauge group and chiral fermions transforming according to a rank-$k$ antisymmetric tensor representation $A_k \equiv [k]_N$ of this group, and the requisite number of chiral fermions in the conjugate fundamental representation, $\bar F \equiv \overline{[1]}_N$, to render the theories free of any anomaly in gauged currents [23]. Here we determine the constraints on these theories from anomaly cancellation and asymptotic freedom. These theories are irreducibly chiral, i.e., they do not contain any vectorlike subsector. Consequently, the chiral gauge symmetry forbids any fermion mass terms in the underlying Lagrangian. We denote the number of copies (flavors) of $\bar F$ fermions as $n_{\bar F}$. The contribution to the triangle anomaly in gauged currents of a chiral fermion in the $A_k$ representation is [37]

$$A([k]_N) = \frac{(N-3)!\,(N-2k)}{(N-k-1)!\,(k-1)!} \ . \qquad (3.1)$$

The total anomaly in the theory is

$$A = A([k]_N) - n_{\bar F} \ , \qquad (3.2)$$

so $A = 0$, i.e., the theory is free of anomalies in gauged currents, if and only if

$$n_{\bar F} = A([k]_N) \ . \qquad (3.3)$$

If $N$ is even and $k = N/2$, the $[k]_N = [k]_{2k}$ representation is self-conjugate, with zero anomaly, so Eq. (3.3) yields $n_{\bar F} = 0$ and a nonchiral theory. In order to get a chiral theory, with positive $n_{\bar F}$, it is necessary and sufficient that

$$N \ge 2k + 1 \ , \qquad (3.4)$$

so, for a given $k$, we will restrict $N$ to this range. For $N$ in this range, the anomaly $A([k]_N)$ is a positive integer. A member of this set of chiral gauge theories is thus determined by its values of $k$ and $N$ and has the fermion content

$$A_k + n_{\bar F}\,\bar F \ , \qquad (3.5)$$

i.e., $[k]_N + A([k]_N)\,\overline{[1]}_N$. The $A_k\bar F$ theories with $k = 3$ and $k = 4$ have respective upper bounds on $N$ imposed by the requirement of asymptotic freedom.
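The anomaly formula (3.1) and the resulting value of $n_{\bar F}$ in (3.3) are easy to evaluate exactly; a minimal sketch (ours, with illustrative names):

```python
from math import factorial

def anomaly_Ak(N: int, k: int) -> int:
    """A([k]_N) of Eq. (3.1), normalized so that A(fund) = 1."""
    return factorial(N - 3) * (N - 2 * k) // (factorial(N - k - 1) * factorial(k - 1))

def n_Fbar(N: int, k: int) -> int:
    """Number of conjugate-fundamental copies required by Eq. (3.3)."""
    return anomaly_Ak(N, k)

# n_Fbar(N, 2) = N - 4;  [n_Fbar(N, 3) for N in range(7, 12)] -> [2, 5, 9, 14, 20]
```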
To determine the upper bounds on $N$ for these values of $k$, we calculate the one-loop coefficient in the beta function, $b_1$. To indicate explicitly the dependence of the $b_\ell$ coefficients with $\ell = 1, 2$ on $k$, we shall write them as $b_\ell^{(k)}$. We have

$$b_1^{(k)} = \frac{11}{3}N - \frac{2}{3}\Big[ T(A_k) + n_{\bar F}\,T(F) \Big] \ . \qquad (3.6)$$

In Eq. (3.6), both $T(A_k)$ and $n_{\bar F} = A(A_k)$ are polynomials of degree ${\rm max}(1, k-1)$ in $N$, and hence so is $b_1^{(k)}$. With $T(A_k) = \binom{N-2}{k-1}/2$ and $T(F) = 1/2$, the explicit expressions for $k = 2, 3, 4$ are

$$b_1^{(2)} = 3N + 2 \ , \qquad (3.7)$$

$$b_1^{(3)} = \frac{11}{3}N - \frac{1}{3}(N-3)(N-4) \ , \qquad (3.8)$$

and

$$b_1^{(4)} = \frac{11}{3}N - \frac{1}{9}(N-3)(N-4)(N-5) \ . \qquad (3.9)$$

The $A_2\bar F$ theories are thus asymptotically free without any upper bound on $N$. For the $A_3\bar F$ theories, the asymptotic-freedom requirement that $b_1^{(3)} > 0$ holds for $N < 9 + \sqrt{69} = 17.3066$. (For the $A_4\bar F$ theories, the corresponding requirement $b_1^{(4)} > 0$ holds for $N < 11.2291$.) Denoting $N_{max}$ as the maximal value of $N$, for a given $k$, for which an SU($N$) $A_k\bar F$ theory is asymptotically free, we summarize these results as

$$N_{max} = \begin{cases} \infty & {\rm for}\ k = 2 \\ 17 & {\rm for}\ k = 3 \\ 11 & {\rm for}\ k = 4 \end{cases} \ . \qquad (3.10)$$

Combining these results, we explicitly exhibit the asymptotically free, anomaly-free chiral gauge theories of this type with $2 \le k \le 4$, together with the respective allowed ranges of $N$, $N_{min} \le N \le N_{max}$:

$${\rm SU}(N)\ A_2\bar F \ , \quad N \ge 5 \ ; \qquad (3.11)$$

$${\rm SU}(N)\ A_3\bar F \ , \quad 7 \le N \le 17 \ ; \qquad (3.12)$$

$${\rm SU}(N)\ A_4\bar F \ , \quad 9 \le N \le 11 \ . \qquad (3.13)$$

The SU($N$) $A_2\bar F$ theories, which have been studied in several works [2,4,8,10-12], have fermion content

$$k = 2: \quad {\rm fermions:}\ \ [2]_N + (N-4)\,\overline{[1]}_N \ . \qquad (3.14)$$

The SU($N$) $A_k\bar F$ theories with $k = 3$ and $k = 4$ are, to our knowledge, new here. These have the fermion contents

$$k = 3: \quad {\rm fermions:}\ \ [3]_N + \frac{(N-3)(N-6)}{2}\,\overline{[1]}_N \qquad (3.15)$$

and

$$k = 4: \quad {\rm fermions:}\ \ [4]_N + \frac{(N-3)(N-4)(N-8)}{6}\,\overline{[1]}_N \ . \qquad (3.16)$$

For the $A_k\bar F$ theories with $k = 2, 3, 4$, we denote the fermion field in the $A_k = [k]_N$ representation as $\psi^{ab}_L$, $\psi^{abd}_L$, and $\psi^{abde}_L$, respectively, where $a, b, d, e$ are SU($N$) gauge indices (the symbol $c$ is reserved to mean charge conjugation), with $N$ in the respective intervals (3.14)-(3.16), and we denote the $\bar F$ fermions as $\chi_{a,i,L}$, where $i$ is a copy (flavor) index taking values in the respective ranges $1 \le i \le n_{\bar F}$.

We next show that there are no asymptotically free $A_k\bar F$ theories with $k \ge 5$. Consider first the $k = 5$ theory, for which

$$b_1^{(5)} = \frac{11}{3}N - \frac{1}{36}(N-3)(N-4)(N-5)(N-6) \ . \qquad (3.17)$$

With $N$ generalized to a real variable, $b_1^{(5)}$ is positive only for $N$ in the range $0.9585 < N < 10.7379$. But for an $A_k\bar F$ theory, $N$ is bounded below by $2k+1$, which has the value 11 here, so for this $k = 5$ theory there is no value of $N$ that simultaneously satisfies both the lower bound (3.4) and the requirement of asymptotic freedom. We reach the same conclusion in the $k = 6$ case, for which $b_1^{(6)}$ is positive only if $N < 11.098$, while $N$ is required to satisfy $N \ge 13$, which again means that for $k = 6$ there is no value of $N$ that satisfies the lower bound (3.4) and the requirement of asymptotic freedom. Similarly, we find that for the $k = 7$ case, $b_1^{(7)}$ is positive only in the range $1.094 < N < 11.742$, while $N$ must be in the range $N \ge 15$ by (3.4), and so forth for higher $k$. The underlying reason for the non-existence of asymptotically free $A_k\bar F$ chiral gauge theories with these higher values of $k$ is that, as noted above, both $T([k]_N)$ and $A([k]_N)$ are polynomials of degree ${\rm max}(1, k-1)$ in $N$, and they both contribute negatively to $b_1^{(k)}$ for the relevant range $N \ge 2k+1$. Their negative contributions eventually outweigh the positive contribution of the $(11/3)N$ term from the gauge fields.
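A quick numerical scan reproduces the bound (3.10); the sketch below is self-contained (ours, assuming $T(A_k) = \binom{N-2}{k-1}/2$ and $T(F) = 1/2$ as quoted above; the scan cutoff only matters for $k \ge 3$, since $k = 2$ is unbounded):

```python
from math import comb, factorial

def b1_AkF(N: int, k: int) -> float:
    """One-loop coefficient b1^(k) of Eq. (3.6)."""
    T_Ak = comb(N - 2, k - 1) / 2
    nF = factorial(N - 3) * (N - 2 * k) // (factorial(N - k - 1) * factorial(k - 1))
    return 11 * N / 3 - (2 / 3) * (T_Ak + nF / 2)

def N_max(k: int, N_hi: int = 100):
    """Largest N in [2k+1, N_hi) with b1 > 0; None if that range is empty."""
    good = [N for N in range(2 * k + 1, N_hi) if b1_AkF(N, k) > 0]
    return max(good) if good else None

# N_max(3) -> 17, N_max(4) -> 11, N_max(5) -> None, reproducing Eq. (3.10)
```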
In passing, we remark that there are two possible ways that one could expand the fermion content of the $A_k\bar F$ models considered here for certain $k$ and $N$ values, as restricted by the constraint of asymptotic freedom, namely (i) to have $n_{cp}$ replications of the chiral fermion content and (ii) to add vectorlike subsectors. For example, in category (i), the following $k = 3$ theories are asymptotically free: $n_{cp} = 2$ and $7 \le N \le 11$; $n_{cp} = 3$ and $7 \le N \le 9$; $n_{cp} = 4$ and $N = 7, 8$; and $n_{cp} = 5$ and $N = 7$. We have studied different chiral gauge theories with this sort of $n_{cp}$ replication of a minimal irreducible chiral fermion content in [22]. We shall not pursue these expansions here but instead focus on studying the minimal $A_k\bar F$ theories.
IV. BETA FUNCTION ANALYSIS OF $A_k\bar F$ THEORIES
In this section we give a general analysis of the beta function applicable to all of the (anomaly-free) asymptotically free $A_k\bar F$ theories, with $N$ in the respective ranges $N \ge 5$ for $k = 2$ and the finite intervals $7 \le N \le 17$ for $k = 3$ and $9 \le N \le 11$ for $k = 4$, as given in (3.14)-(3.16). In Sect. III we gave the one-loop coefficient for the $A_k\bar F$ theories, which we used to determine the upper bound on $N$ for a given $k$. Here we proceed to give the two-loop coefficient, $b_2^{(k)}$, and use it to analyze the UV to IR evolution. We have (again with $n_{\bar F} = A(A_k)$)

$$b_2^{(k)} = \frac{34}{3}N^2 - \Big[ \frac{10}{3}N + 2C_2(A_k) \Big] T(A_k) - n_{\bar F}\Big[ \frac{10}{3}N + 2C_2(F) \Big] T(F) \ , \qquad (4.1)$$

where the various group invariants are listed in Appendix B. For the three relevant cases, $k = 2, 3, 4$, the explicit expressions are

$$b_2^{(2)} = \frac{13N^3 + 30N^2 + N - 12}{2N} \ , \qquad (4.2)$$

$$b_2^{(3)} = -\frac{16N^4 - 183N^3 + 204N^2 + 27N - 108}{6N} \ , \qquad (4.3)$$

and

$$b_2^{(4)} = -\frac{35N^5 - 429N^4 + 1321N^3 - 2235N^2 - 588N + 1440}{36N} \ . \qquad (4.4)$$

In Table I we list values of the reduced coefficients $\bar b_1$ and $\bar b_2$ for an illustrative set of the $A_2\bar F$ theories and for all of the (asymptotically free) $A_3\bar F$ and $A_4\bar F$ theories. In the cases where $\bar b_2 < 0$, so that the two-loop beta function has a physical IR zero, we have also listed the value of $\alpha_{IR,2\ell}$. The value of the resultant ratio $\rho_c$ for condensation in the most attractive channel for bilinear fermion condensation (discussed further below) gives an estimate of whether the theories are weakly or strongly coupled in the infrared. This is indicated by the abbreviations WC, MC, and SC (weak coupling, moderate coupling, and strong coupling) in Table I.
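Eq. (4.1) can also be evaluated numerically from the group invariants; the following self-contained sketch (ours; it assumes the standard convention in which each Weyl fermion contributes half the Dirac-fermion value to $b_2$) reproduces the signs quoted in Table I:

```python
from math import comb, factorial

def b2_AkF(N: int, k: int) -> float:
    """Two-loop coefficient b2^(k) of Eq. (4.1)."""
    T_Ak = comb(N - 2, k - 1) / 2
    C2_Ak = k * (N - k) * (N + 1) / (2 * N)
    C2_F = (N * N - 1) / (2 * N)
    nF = factorial(N - 3) * (N - 2 * k) // (factorial(N - k - 1) * factorial(k - 1))
    weyl = lambda T, C2: T * (10 * N / 3 + 2 * C2)  # per-Weyl-fermion contribution
    return 34 * N * N / 3 - weyl(T_Ak, C2_Ak) - nF * weyl(0.5, C2_F)

# b2_AkF(5, 2) = 236.8 > 0 (no IR zero), while b2_AkF(9, 4) ~ -93.1 < 0 (IR zero)
```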
V. GLOBAL SYMMETRY OF $A_k\bar F$ THEORIES
Because the $A_k\bar F$ theories are irreducibly chiral, so that the chiral gauge symmetry requires the fermions to be massless, each such theory has a classical global flavor symmetry

$$G_{fl,cl} = {\rm U}(1)_{A_k} \otimes {\rm U}(n_{\bar F}) \qquad (5.1)$$

$$= {\rm U}(1)_{A_k} \otimes {\rm SU}(n_{\bar F}) \otimes {\rm U}(1)_{\bar F} \ . \qquad (5.2)$$

For $n_{\bar F} \ge 2$, the multiplet $(\chi_{a,1,L}, ..., \chi_{a,n_{\bar F},L})$ may be taken to transform as the conjugate fundamental representation of the global flavor group SU($n_{\bar F}$). The U(1)$_{\bar F}$ and U(1)$_{A_k}$ symmetries in (5.2) are both broken by SU($N$) instantons [38]. As in [12], we define a vector whose components are comprised of the instanton-generated contributions to the breaking of these symmetries. In the basis $(A_k, \bar F)$, this vector is

$$v^{(k)} = \big( 2T(A_k),\ 2 n_{\bar F}\,T(F) \big) = \big( 2T(A_k),\ n_{\bar F} \big) \ . \qquad (5.3)$$

We can construct one linear combination of the two original currents that is conserved in the presence of SU($N$) instantons. We denote the corresponding global U(1) flavor symmetry as U(1)$'$ and the fermion charges under this U(1)$'$ as $Q'^{(k)} = (Q'_{A_k}, Q'_{\bar F})$, satisfying

$$Q'^{(k)} \cdot v^{(k)} = 0 \ . \qquad (5.4)$$

This condition only determines the vector $Q'^{(k)}$ up to an overall multiplicative constant. A solution is

$$Q'^{(k)} = \big( N - 2k,\ -(N-2) \big) \ . \qquad (5.5)$$

The actual global chiral flavor symmetry group (preserved in the presence of instantons) is then

$$G_{fl} = {\rm SU}(n_{\bar F}) \otimes {\rm U}(1)' \ . \qquad (5.8)$$

For the three $k$ values relevant here, this is

$$k = 2: \quad G_{fl} = {\rm SU}(N-4) \otimes {\rm U}(1)' \ , \qquad (5.9)$$

with

$$Q'^{(2)} = \big( N-4,\ -(N-2) \big) \ ; \qquad (5.10)$$

$$k = 3: \quad G_{fl} = {\rm SU}\Big( \frac{(N-3)(N-6)}{2} \Big) \otimes {\rm U}(1)' \ , \qquad (5.11)$$

with

$$Q'^{(3)} = \big( N-6,\ -(N-2) \big) \ ; \qquad (5.12)$$

and

$$k = 4: \quad G_{fl} = {\rm SU}\Big( \frac{(N-3)(N-4)(N-8)}{6} \Big) \otimes {\rm U}(1)' \ , \qquad (5.13)$$

with

$$Q'^{(4)} = \big( N-8,\ -(N-2) \big) \ . \qquad (5.14)$$
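One can verify the orthogonality condition (5.4) for the solution (5.5) with exact integer arithmetic; a minimal sketch (ours, with illustrative names):

```python
from math import comb, factorial

def check_Q_prime(N: int, k: int) -> bool:
    """Check Q'^(k) . v^(k) = 0 (Eq. (5.4)) for Q'^(k) = (N - 2k, -(N - 2))."""
    two_T_Ak = comb(N - 2, k - 1)   # 2 T(A_k)
    nF = factorial(N - 3) * (N - 2 * k) // (factorial(N - k - 1) * factorial(k - 1))
    v = (two_T_Ak, nF)              # Eq. (5.3): 2 n_Fbar T(F) = n_Fbar
    Q = (N - 2 * k, -(N - 2))       # Eq. (5.5)
    return Q[0] * v[0] + Q[1] * v[1] == 0

assert all(check_Q_prime(N, 2) for N in range(5, 30))
assert check_Q_prime(7, 3) and check_Q_prime(9, 4)   # Q' = (1, -5) and (1, -7)
```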
VI. BILINEAR FERMION CONDENSATION CHANNELS

A. General

The ultraviolet to infrared evolution of a particular SU($N$) $A_k\bar F$ theory is determined by the values of $N$ and $k$. In the cases where it can lead to the formation of a bilinear fermion condensate, one should then determine the most attractive channel in which this condensate can form. We present this analysis here. Since the $A_k\bar F$ theories that we consider here are irreducibly chiral, a bilinear condensate breaks the gauge symmetry. In Sect. X below, we will discuss the possible formation of multifermion condensates involving more than just two fermions, which can preserve the chiral gauge symmetry.

For the theories that we are discussing here, there are two relevant bilinear fermion condensation channels. First, there is a channel with a condensate that involves the contraction of $2k$ gauge indices of the antisymmetric tensor density $\epsilon_{a_1,...,a_N}$ with the bilinear fermion product $A_k \times A_k$, which transforms like $\bar A_{N-2k}$. This channel can thus be written as

$$A_k \times A_k \to \bar A_{N-2k} \ . \qquad (6.1)$$

This channel has attractiveness measure

$$\Delta C_2 = 2C_2(A_k) - C_2(A_{N-2k}) = \frac{k^2(N+1)}{N} \ . \qquad (6.2)$$

For a given $k$, this $\Delta C_2$ is a monotonically decreasing function of $N$, decreasing gradually from its value at $N = 2k+1$ and approaching the limit $k^2$ for $N \gg k$. Second, there is the channel

$$A_k \times \bar F \to A_{k-1} \ , \qquad (6.4)$$

with attractiveness measure

$$\Delta C_2 = C_2(A_k) + C_2(F) - C_2(A_{k-1}) = \frac{(N-k)(N+1)}{N} \ . \qquad (6.5)$$

For a given $k$, this $\Delta C_2$ is a monotonically increasing function of $N$, increasing from the value $2(k+1)^2/(2k+1)$ at $N = 2k+1$ and approaching a linear growth with $N$ for $N \gg k$. In Table II we list the value of $\Delta C_2$ in Eq. (6.2) for the $A_k \times A_k \to \bar A_{N-2k}$ channel and the value of $\Delta C_2$ in Eq. (6.5) for the $A_k \times \bar F \to A_{k-1}$ channel for an illustrative set of $A_2\bar F$ theories and for the full set of (asymptotically free) $A_3\bar F$ and $A_4\bar F$ theories. The most attractive channel for bilinear fermion condensation is the one among these two channels with the larger value of $\Delta C_2$ (assuming that these two values are unequal; we discuss the cases where they are equal below). For a given value of $k$, we thus determine the MAC as a function of $N$ in its allowed range $N_{min} \le N \le N_{max}$ by examining the difference

$$\Delta C_2(A_k \times A_k \to \bar A_{N-2k}) - \Delta C_2(A_k \times \bar F \to A_{k-1}) = \frac{(N+1)(k^2 + k - N)}{N} \ . \qquad (6.7)$$

For $N = N_{min} = 2k+1$, $\Delta C_2$ is larger for the first channel, $A_k \times A_k \to \bar A_{N-2k} = \bar A_1 = \bar F$, than for the second channel, $A_k \times \bar F \to A_{k-1}$. This is evident analytically from the fact that with $N = 2k+1$, the difference (6.7) is

$$\frac{2(k+1)(k^2 - k - 1)}{2k+1} \ , \qquad (6.8)$$

which is positive for the relevant range $k \ge 2$ considered here. Since $\Delta C_2$ for the first channel decreases monotonically as a function of $N$, while the $\Delta C_2$ for the second channel increases monotonically as a function of $N$, it follows that at some value of $N$, which we denote $N_e$ (where $e$ stands for "equal"), these values are equal, and for $N > N_e$, the $\Delta C_2$ for the second channel is larger than that for the first channel. Setting the two $\Delta C_2$ values equal and solving for $N = N_e$, we find

$$N_e = k(k+1) \ , \qquad (6.9)$$

i.e.,

$$N_e = 6,\ 12,\ 20 \quad {\rm for}\ k = 2, 3, 4 \ . \qquad (6.10)$$

The first two of these values are within the respective allowed ranges for $N$, while the value for $k = 4$ is larger than the upper bound $N_{max} = 11$ for $k = 4$. Consequently, with $N_{min} = 2k+1$ and $N_{max}$ as given in Eqs. (3.4) and (3.10), we find that, for a given $k$,

$${\rm MAC} = \begin{cases} A_k \times A_k \to \bar A_{N-2k} & {\rm if}\ \ 2k+1 \le N < k(k+1) \\ A_k \times \bar F \to A_{k-1} & {\rm if}\ \ k(k+1) < N \le N_{max} \end{cases} \ , \qquad (6.11)$$

with the proviso that the second possibility only applies if $k(k+1) < N_{max}$, and hence only for $k = 2$ and $k = 3$. Thus, in particular, if $N = N_{min} = 2k+1$, then the MAC is the special case of (6.1):

$$A_k \times A_k \to \bar A_1 = \bar F \ . \qquad (6.12)$$

In addition to breaking the original SU($N$) gauge symmetry, these condensates also break both the non-Abelian factor group SU($n_{\bar F}$) (which is present if $n_{\bar F} \ge 2$) and the U(1)$'$ factor group in the global flavor symmetry (5.8).
In particular, the breaking of the U(1)$'$ symmetry is evident from the fact that the respective condensates in these channels have the nonzero U(1)$'$ charges

$$Q'(A_k \times A_k \to \bar A_{N-2k}) = 2Q'_{A_k} \qquad (6.13)$$

and

$$Q'(A_k \times \bar F \to A_{k-1}) = Q'_{A_k} + Q'_{\bar F} \ . \qquad (6.14)$$

The marginal case $N = N_e = k(k+1)$ requires further analysis, since the $\Delta C_2$ values for the $A_k \times A_k \to \bar A_{N-2k}$ and $A_k \times \bar F \to A_{k-1}$ channels are equal, so the procedure of picking the channel with the largest $\Delta C_2$ cannot determine which is more likely to occur. To deal with this marginal case, we use a vacuum alignment argument, which, as applied to possible bilinear fermion condensation channels, favors the one whose condensate respects the larger residual gauge symmetry. To apply the vacuum alignment argument, we must thus determine the residual gauge symmetry group respected by the condensates that occur in these two channels. The resultant bilinear fermion condensate transforms like an $n$-fold antisymmetric tensor representation of SU($N$), where $n = N - 2k$ for the $A_k \times A_k \to \bar A_{N-2k}$ channel and $n = k-1$ for the $A_k \times \bar F \to A_{k-1}$ channel. (The fact that in the first case the condensate transforms like $\bar A_{N-2k}$ rather than $A_{N-2k}$ does not affect how this breaks SU($N$).) From the point of view of the group theory, the problem of determining the residual gauge symmetry is effectively the same as the problem of determining the residual gauge symmetry that results when one has a Higgs field transforming according to the antisymmetric rank-$n$ representation of SU($N$). An analysis of this, within the context of Higgs-induced symmetry breaking, was given in [39], and the results depend, in that context, on the parameters in the Higgs potential, which one has the freedom to choose, subject to the overall constraint that the energy must be bounded below. As emphasized in Ref. [20], the situation is different in dynamical gauge symmetry breaking; in principle, given an initial gauge group and set of fermions, there is a unique answer for how the symmetry breaks; this breaking does not depend on any parameters in a Higgs potential. Despite this basic difference between dynamical and Higgs-induced gauge symmetry breaking, we can make use of the general group-theoretic analysis performed for the Higgs case. The result is that there are, a priori, three possibilities for the gauge symmetries respected by a condensate or Higgs vacuum expectation value transforming as the rank-$n$ antisymmetric tensor representation of SU($N$); with the notation $\kappa \equiv [N/n]$ (the integral part of $N/n$), the possibilities relevant for our analysis below are SU($N-n$) $\otimes$ SU($n$) and, for a rank-2 tensor with $N$ even, the symplectic group Sp($N$). We analyze the respective cases $k = 2, 3, 4$ next.
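The comparison of the two channels can be automated; the sketch below (ours, with illustrative names) implements Eqs. (6.2), (6.5), and (6.9) and selects the MAC as a function of $N$ and $k$:

```python
def dC2_AkAk(N: int, k: int) -> float:
    """Delta C2 for A_k x A_k -> Abar_{N-2k}, Eq. (6.2)."""
    return k * k * (N + 1) / N

def dC2_AkFbar(N: int, k: int) -> float:
    """Delta C2 for A_k x Fbar -> A_{k-1}, Eq. (6.5)."""
    return (N - k) * (N + 1) / N

def MAC(N: int, k: int) -> str:
    d1, d2 = dC2_AkAk(N, k), dC2_AkFbar(N, k)
    if d1 > d2:
        return "A_k x A_k -> Abar_{N-2k}"
    if d2 > d1:
        return "A_k x Fbar -> A_{k-1}"
    return "equal at N_e = k(k+1); use vacuum alignment"   # Eq. (6.9)

# MAC(5, 2), MAC(6, 2), MAC(7, 2) -> first channel, marginal case, second channel
```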
B. Case k = 2
From the special case for $k = 2$ of our general result (6.11) above, we infer that the $A_2 \times A_2 \to \bar A_1 = \bar F$ channel is the most attractive channel for bilinear fermion condensation in the $A_2\bar F$ theories for the lowest value of $N$, namely $N = 5$, while the $A_2 \times \bar F \to F$ channel is the MAC for the infinite interval $N \ge 7$. For the marginal case $k = 2$, $N = 6$, the $A_2 \times A_2 \to \bar A_{N-2k} = \bar A_2$ and $A_2 \times \bar F \to F$ channels have the same value of $\Delta C_2$, namely $\Delta C_2 = 14/3 = 4.667$ (see Table II), so the $\Delta C_2$ attractiveness criterion cannot be used to decide which is more likely to occur. Now the condensate in the $A_2 \times \bar F \to F$ channel leaves invariant an SU(5) subgroup of SU(6), with order 24. To analyze the possible invariance groups of a condensate in the $A_2 \times A_2 \to \bar A_2$ channel, we apply our discussion above with $N = 6$, $n = 2$, and hence $\kappa = [6/2] = 3$, so the a priori possible invariance groups of the condensate are SU(4) $\otimes$ SU(2) with order 18 and Sp(6) with order 21. Neither of these groups has an order as large as that of SU(5), so the vacuum alignment argument predicts that, if a bilinear fermion condensate forms, then this condensate will form in the $A_2 \times \bar F \to F$ channel. Summarizing our results for $k = 2$ and all $N$, we thus find that if bilinear fermion condensation occurs, then

$${\rm MAC}_{A_2\bar F} = \begin{cases} A_2 \times A_2 \to \bar F & {\rm for}\ N = 5 \\ A_2 \times \bar F \to F & {\rm for}\ N \ge 6 \end{cases} \ . \qquad (6.19)$$

As noted above, since this class of (asymptotically free) $A_2\bar F$ chiral gauge theories satisfies the 't Hooft global anomaly matching conditions, there is also the possibility of confinement, yielding massless composite fermions. There is also the possibility of multifermion condensate formation, which we will discuss below. Since the early works such as [1,2,4], for a class of asymptotically free chiral gauge theories such as the $A_2\bar F$ class discussed here, for which the UV to IR evolution leads to strong coupling and hence could lead to confinement with massless composite fermions or to fermion condensation, there has not, to our knowledge, been a rigorous argument presented that actually determines the type of UV to IR evolution in an asymptotically free chiral gauge theory.
C. Case k = 3
From the special case for $k = 3$ of our general result (6.11) above, we infer that the $A_3 \times A_3 \to \bar A_{N-2k} = \bar A_{N-6}$ channel is the most attractive channel for bilinear fermion condensation not only for the minimal value of $N$, namely $N = 7$, but also for the interval of $N$ values up to $N = 11$. We discuss the marginal case of $N = 12$ last. Again substituting $k = 3$ into (6.11), it follows formally that the MAC for $12 \le N \le 17$ is the $A_3 \times \bar F \to A_2$ channel. However, for $N = 13, 14$, the respective values of the IR zero in the beta function are sufficiently close to the rough estimate of the minimal critical value of $\alpha$ for condensate formation in the $A_3 \times \bar F \to A_2$ channel (see Tables I and II) that it is possible that the system could evolve from the UV to a deconfined, non-Abelian Coulomb phase in the IR with no fermion condensate formation or associated spontaneous chiral symmetry breaking.
The $k = 3$, $N = 12$ case is again marginal; the $A_3 \times A_3 \to \bar A_6$ and $A_3 \times \bar F \to A_2$ channels have the same value of $\Delta C_2$, namely $\Delta C_2 = 39/4 = 9.750$. Hence, we use a vacuum alignment argument to decide which of these channels is more likely to occur. For the $A_3 \times A_3 \to \bar A_{N-6} = \bar A_6$ channel, we apply our discussion above with $N = 12$, $n = 6$, and hence $\kappa = [12/6] = 2$, so the invariance group of the $\bar A_6$ condensate is [SU(6)]$^2$, with order 70. For the $A_3 \times \bar F \to A_2$ channel, we have $N = 12$, $n = 2$ and hence $\kappa = [12/2] = 6$, so the a priori possible invariance groups of the $A_2$ condensate are SU(10) $\otimes$ SU(2) with order 102 and Sp(12) with order 78. The vacuum alignment argument thus favors condensation in the $A_3 \times \bar F \to A_2$ channel for this $N = 12$ case. Summarizing these results, we have

$${\rm MAC}_{A_3\bar F} = \begin{cases} A_3 \times A_3 \to \bar A_{N-6} & {\rm for}\ 7 \le N \le 11 \\ A_3 \times \bar F \to A_2 & {\rm for}\ 12 \le N \le 17 \end{cases} \ . \qquad (6.20)$$

However, as mentioned above, for $N = 13, 14$ (and also for $N = 12$), the respective values of $\rho_c$ are sufficiently close to unity that, in view of the intrinsic theoretical uncertainties in the analysis of the strong-coupling physics, it is possible that the UV to IR evolution could lead either to the formation of a fermion condensate or to a non-Abelian Coulomb phase without spontaneous chiral symmetry breaking. If $N$ is in the higher interval $15 \le N \le 17$, then $\rho_c$ is sufficiently small that we definitely expect the evolution to lead to a chirally symmetric non-Abelian Coulomb phase in the IR. Hence, in these cases, the MAC is not directly relevant to the dynamics of the theory.
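The order counting behind these vacuum alignment comparisons is elementary; a minimal sketch (ours), using dim SU($n$) = $n^2 - 1$ and dim Sp($2n$) = $n(2n+1)$:

```python
def dim_SU(n: int) -> int:
    return n * n - 1

def dim_Sp(two_n: int) -> int:
    n = two_n // 2
    return n * (2 * n + 1)

# k = 3, N = 12 marginal case quoted above:
assert 2 * dim_SU(6) == 70               # [SU(6)]^2
assert dim_SU(10) + dim_SU(2) == 102     # SU(10) x SU(2)
assert dim_Sp(12) == 78                  # Sp(12)
# and the k = 2, N = 6 case: SU(5) -> 24, SU(4) x SU(2) -> 18, Sp(6) -> 21
```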
D. Case k = 4
Finally, we discuss the theories with $k = 4$, for which the interval of values of $N$ is $9 \le N \le 11$. Since the value of $N_e$, namely $N_e = 20$, is larger than $N_{max}$, the most attractive channel for bilinear fermion condensation in all of these theories is $A_4 \times A_4 \to \bar A_{N-8}$, i.e., $A_4 \times A_4 \to \bar F$ for $N = 9$, $A_4 \times A_4 \to \bar A_2$ for $N = 10$, and $A_4 \times A_4 \to \bar A_3$ for $N = 11$. In the SU(9) $A_4\bar F$ theory, the IR zero in the two-loop beta function is much larger than $\alpha_{cr}$ for this channel, so it is likely that the SU(9) gauge interaction would produce a condensate in this channel, thereby breaking SU(9) to SU(8). For $N = 10$, $\alpha_{IR,2\ell}/\alpha_{cr} = 1.7$, which is sufficiently close to unity that, taking account of the uncertainties in the strong-coupling estimates, the UV to IR evolution might produce a condensate in the respective most attractive bilinear fermion channel or might lead to a non-Abelian Coulomb phase. For $N = 11$, the IR zero in the two-loop beta function is small compared with the estimated $\alpha_{cr}$ for the $A_4 \times A_4 \to \bar A_3$ condensation channel, so we definitely expect the system to evolve from the UV to a non-Abelian Coulomb phase in the IR.
VII. $A_2\bar F$ THEORIES

A. General
In this section we analyze the UV to IR evolution of some $A_2\bar F$ theories in detail. Recall that the explicit fermion fields are $A_2$: $\psi^{ab}_L$ and $\bar F$: $\chi_{a,i,L}$, where $a, b$ are the SU($N$) gauge indices and $i = 1, ..., N-4$ is a copy (flavor) index. The one-loop and two-loop coefficients were given in Eqs. (3.7) and (4.2). We find that for all $N \ge N_{min} = 5$, the coefficient $b_2^{(2)}$ is positive, so the two-loop beta function of the $A_2\bar F$ theory has no IR zero. Hence, as the Euclidean reference scale $\mu$ decreases from the UV to the IR, the gauge coupling increases until it eventually exceeds the region where it is perturbatively calculable. This IR behavior is thus marked as SC, for strong coupling, in Table I. The global flavor symmetry group for this theory is given in Eq. (5.9) with the U(1)$'$ charge assignments in (5.10). This theory satisfies the 't Hooft global anomaly matching conditions [1,10], so, as it becomes strongly coupled in the infrared, it could confine and produce massless gauge-singlet composite spin-1/2 fermions as well as massive gauge-singlet mesons and also primarily gluonic states. If this happens, then it is a complete description of the UV to IR evolution. The three-fermion operator for the composite gauge-singlet fermion can be written as

$$f_{ij,L} \sim \big( \chi^T_{a,i,L}\, C\, \chi_{b,j,L} \big)\, \psi^{ab}_L \ . \qquad (7.2)$$

Another possibility is that the SU($N$) gauge interaction could produce bilinear fermion condensates, thereby breaking both gauge and global symmetries. The most attractive channel for this fermion condensation was determined, as a function of $N$, in Eq. (6.19). It can also be possible to form multifermion condensates involving more than two fermion fields, which preserve the chiral gauge symmetry. We will discuss this latter possibility in Sect. X. Here we proceed to analyze bilinear fermion condensate formation for various specific theories.
B. SU(5) $A_2\bar F$ Theory
The simplest chiral gauge theory in the $A_2\bar F$ family of theories has the gauge group SU(5), with fermion content given by the $N = 5$ special case of Eq. (3.14), namely $A_2 + \bar F = [2]_5 + \overline{[1]}_5$. Like the other $A_2\bar F$ theories considered here that become strongly coupled in the infrared, this one could confine and produce a massless composite fermion. Alternatively, it could produce fermion condensates. The most attractive channel for bilinear fermion condensation in this theory is $A_2 \times A_2 \to \bar A_1$. If the dynamics is such that this condensate does, indeed, form, then we denote the mass scale at which it is produced as $\Lambda_5$. This condensate breaks the SU(5) gauge symmetry to SU(4). Without loss of generality, we take the gauge index corresponding to the breaking direction to be $a = 5$. The condensate then has the form

$$\langle \epsilon_{abde5}\, \psi^{ab\,T}_L C\, \psi^{de}_L \rangle \ . \qquad (7.3)$$

The fermions involved in this condensate gain dynamical masses of order $\Lambda_5$, as do the nine gauge bosons in the coset SU(5)/SU(4). In addition to breaking the SU(5) gauge symmetry, the condensate has the nonzero value of the U(1)$'$ charge $Q'^{(2)} = -2$ given by the $k = 2$ special case of Eq. (6.14) and hence breaks the global U(1)$'$ symmetry. Since this symmetry is not gauged, this breaking yields one Nambu-Goldstone boson (NGB). To construct the low-energy effective field theory with SU(4) chiral gauge invariance that describes the physics as the scale $\mu$ decreases below $\Lambda_5$, we decompose the fermion representations of SU(5) with respect to the unbroken SU(4) subgroup. It will be useful to give this decomposition more generally for SU($N$) relative to an SU($N-1$) subgroup in our usual notation:

$$[2]_N = [2]_{N-1} \oplus [1]_{N-1} \ , \qquad \overline{[1]}_N = \overline{[1]}_{N-1} \oplus [0]_{N-1} \ , \qquad (7.4)$$

where $[0]_{N-1}$ denotes a singlet. The $[2]_4$ field is comprised of $\psi^{ab}_L$ fermions with $1 \le a, b \le 4$ that gained dynamical masses of order $\Lambda_5$ and were integrated out of the low-energy theory. The other massless SU(4)-nonsinglet fermions are the $[1]_4 = F$ fermion $\psi^{5b}_L$ with $1 \le b \le 4$ and the $\overline{[1]}_4 = \bar F$ fermion $\chi_{a,1,L}$ with $1 \le a \le 4$. Hence, the massless SU(4)-nonsinglet fermion content of this theory consists of $F + \bar F$, so this theory is vectorial. This SU(4) theory also contains the SU(4)-singlet fermion $\chi_{5,1,L}$. The one-loop and two-loop coefficients of the SU(4) beta function have the same sign, so again, this beta function has no IR zero, and therefore the SU(4) gauge coupling inherited from the SU(5) UV theory continues to increase as the reference scale $\mu$ decreases. Rewriting the left-handed $\bar F$ as a right-handed $F$, one sees that this is a vectorial SU(4) gauge theory with a massless $N_f = 1$ Dirac fermion in the fundamental representation. It therefore has a classical global chiral flavor symmetry group U(1)$_F$ $\otimes$ U(1)$_{\bar F}$, or equivalently, U(1)$_V$ $\otimes$ U(1)$_A$ in standard notation. The U(1)$_A$ is broken by SU(4) instantons, so the nonanomalous global flavor symmetry is U(1)$_V$. At a scale $\Lambda_4 \lesssim \Lambda_5$, one expects that the SU(4) gauge interaction produces a bilinear fermion condensate in the most attractive channel, which is $F \times \bar F \to 1$, thus preserving the SU(4) gauge symmetry. The condensate is

$$\langle \psi^{5b\,T}_L C\, \chi_{b,1,L} \rangle \ , \qquad (7.5)$$

with the sum over $1 \le b \le 4$. This condensate respects the U(1)$_V$ global symmetry, and hence does not produce any Nambu-Goldstone bosons. Thus, this SU(4) theory confines and produces gauge-singlet hadrons (with the baryons being bosonic).
In the infrared limit, the only remaining massless particles are the SU(4)-singlet fermion $\chi_{5,1,L}$ and the one Nambu-Goldstone boson resulting from the breaking of the U(1)$'$ global flavor symmetry by the condensate (7.3).
C. SU(6) $A_2\bar F$ Theory
We next consider an SU(6) $A_2\bar F$ theory. The fermion content of this theory is the $N = 6$ special case of (3.14), namely $A_2 + 2\bar F = [2]_6 + 2\,\overline{[1]}_6$. The $A_2$ fermion is denoted $\psi^{ab}_L = -\psi^{ba}_L$, and the two copies of the $\bar F$ fermion are denoted $\chi_{a,i,L}$, where $1 \le a, b \le 6$ are gauge indices and $i = 1, 2$ is the copy index. We consider possible bilinear fermion condensates for this theory. As discussed above, although the bilinear fermion condensation channels $A_2 \times A_2 \to \bar A_2$ and $A_2 \times \bar F \to F$ have the same $\Delta C_2$, a vacuum alignment argument favors the $A_2 \times \bar F \to F$ channel because it leaves a larger residual gauge symmetry, namely SU(5). Assuming that a condensate in this channel does form, we denote the scale at which it is produced as $\Lambda_6$. Again, by convention we take the breaking direction as $a = 6$ and the copy index as $i = 2$ on the $\bar F$ fermion in the condensate, which can thus be written as

$$\langle \psi^{6b\,T}_L C\, \chi_{b,2,L} \rangle \ , \qquad (7.6)$$

with the sum over $1 \le b \le 5$. This condensate also breaks the SU(2)$_{\bar F}$ $\otimes$ U(1)$'$ global flavor symmetry. The $\psi^{6b}_L$ and $\chi_{b,2,L}$ fermions with $1 \le b \le 5$ involved in the condensate (7.6) get dynamical masses of order $\Lambda_6$, as do the 11 gauge bosons in the coset SU(6)/SU(5). These are integrated out of the low-energy effective SU(5)-invariant theory that describes the physics as the scale $\mu$ decreases below $\Lambda_6$.
From the $N = 6$ special case of the general decomposition (7.4), in conjunction with the form of the condensate (7.6), it follows that the massless SU(5)-nonsinglet fermion content of the descendant SU(5) theory is $A_2 + \bar F$, together with the (massless) SU(5)-singlet fermions $\chi_{6,1,L}$ and $\chi_{6,2,L}$. Thus, the SU(5)-nonsinglet fermion content of this theory is the same as that of the SU(5) theory discussed above, and our analysis there applies here. Since this SU(5) theory satisfies the 't Hooft global anomaly matching conditions, when it becomes strongly coupled, it could confine and produce massless SU(5)-singlet composite fermions, as well as massive mesons and primarily gluonic states, or it could self-break via fermion condensate formation. We also discuss below a possible SU(5)-preserving four-fermion condensate that might form. For $N \ge 7$, the most attractive channel for bilinear fermion condensation is $A_2 \times \bar F \to F$, with $\Delta C_2$ given by the $k = 2$ special case of (6.5),

$$\Delta C_2 = \frac{(N-2)(N+1)}{N} \ . \qquad (7.7)$$

The UV to IR evolution of these theories is similar to that of the SU(6) theory. At each stage, owing to the fact that the SU($N$) theory and the various descendant theories satisfy 't Hooft global anomaly matching conditions, as the coupling gets strong in the IR, the gauge interaction may confine and produce massless composite fermions or may produce various fermion condensates. The most attractive channel for bilinear fermion condensation at a given stage is $A_2 \times \bar F \to F$, breaking the theory down to the next descendant low-energy theory. If the theory follows the first type of UV to IR flow, namely confinement with massless composite fermions, this extends all the way to the IR limit, while if the theory follows the second type of flow with condensate formation, then there is, in general, a resultant sequence of low-energy effective theories that describe the physics of the massless dynamical degrees of freedom at lower scales. If all of the stages involve gauge (and global) symmetry breaking by fermion condensates, then the gauge symmetry breaking is of the form

$${\rm SU}(N) \to {\rm SU}(N-1) \to \cdots \to {\rm SU}(5) \to {\rm SU}(4) \ . \qquad (7.8)$$

Here, the last theory, namely the SU(4) theory, is vectorial, while all of the higher-lying theories are chiral gauge theories.
VIII. $A_3\bar F$ THEORIES
The fermion content of the $A_3\bar F$ theories was displayed in Eq. (3.15). The one-loop and two-loop coefficients in the beta function were given in Eqs. (3.8) and (4.3), with numerical results for $\bar b_1$ and $\bar b_2$ displayed in Table I. As is evident in Table I, for $7 \le N \le 10$, the coefficient $\bar b_2$ is positive, so the two-loop beta function has no IR zero, and hence, as the reference scale $\mu$ decreases from large values in the UV toward the IR, the gauge coupling increases until it exceeds the region where it is perturbatively calculable. These theories are thus strongly coupled in the infrared (marked as SC in Table I).
The next step in the analysis of the UV to IR flow in these theories is to determine if one or more of them might satisfy the 't Hooft global anomaly matching conditions. If this were to be the case, then, as in the $A_2\bar F$ theories, one would have a two-fold possibility for the strongly coupled IR physics, namely confinement with gauge-singlet composite fermions but no spontaneous chiral symmetry breaking, or formation of bilinear fermion condensates with associated breaking of gauge and global symmetries. For this purpose, we have examined possible SU($N$) gauge-singlet fermionic operator products to determine if any of them could satisfy these global anomaly matching conditions. The global flavor symmetry group was given in Eq. (5.11) with (5.12). We have not found any such fermionic operator products. As an illustration of our analysis, let us consider the case $N = 7$, which contains a $[3]_7$ fermion $\psi^{abd}_L$ and two copies of the $\overline{[1]}_7$ fermion $\chi_{a,i,L}$, with $Q' = (1, -5)$. A fermionic operator product that is an SU(7) singlet is of the form

$$\epsilon_{abdefgs}\, \psi^{abd\,T}_L C\, \psi^{efg}_L\, (\chi^c)^s_{i} \ , \qquad (8.1)$$

where the $c$ superscript denotes the charge conjugate fermion field. However, this vanishes identically. This can be seen as follows: an interchange (transposition) of $\psi^{abd}_L$ and $\psi^{efg}_L$ entails a minus sign from the switching of an odd number of indices in the antisymmetric SU(7) tensor density, a second minus sign from Fermi statistics, and a third minus sign from the fact that $C^T = -C$ for the Dirac charge conjugation matrix, so the operator is equal to minus itself and hence is zero.
Therefore, when the theory becomes strongly coupled in the infrared, we will focus on the type of UV to IR evolution that leads to fermion condensates, and we consider bilinear fermion condensates here. The most attractive channel for these condensates, as a function of $N$, was given in Eq. (6.20).
As an explicit example of the $A_3\bar F$ class of chiral gauge theories, let us consider the SU(7) theory, which has chiral fermion content given by the $N = 7$ special case of Eq. (3.15), namely

$$A_3 + 2\bar F = [3]_7 + 2\,\overline{[1]}_7 \ . \qquad (8.3)$$

The most attractive channel for this theory is $A_3 \times A_3 \to \bar F$, which breaks the gauge symmetry SU(7) to SU(6) and also breaks the global flavor symmetry group SU(2)$_{\bar F}$ $\otimes$ U(1)$'$. We denote the scale at which this condensate forms as $\Lambda_7$. Without loss of generality, we label the gauge index for the broken direction to be $a = 7$.
The condensate then has the form

$$\langle \epsilon_{abdefg7}\, \psi^{abd\,T}_L C\, \psi^{efg}_L \rangle \ . \qquad (8.4)$$
Of the $\binom{7}{3} = 35$ components of the $A_3$ fermion, denoted generically as $\psi^{abd}_L$, the $\binom{7}{3} - \binom{6}{2} = 20$ components with $1 \le a, b, d \le 6$ that are involved in this condensate gain dynamical masses of order $\Lambda_7$, as do the 13 gauge bosons in the coset SU(7)/SU(6). These are integrated out of the low-energy effective SU(6) chiral gauge theory that describes the physics as the scale decreases below $\Lambda_7$.
The massless SU(6)-nonsinglet fermion content of this SU(6) theory thus consists of $A_2 + 2\bar F = [2]_6 + 2\,\overline{[1]}_6$, comprised of the $\binom{6}{2} = 15$ components $\psi^{ab7}_L$ and the $\chi_{a,i,L}$ with $1 \le a, b \le 6$ and $i = 1, 2$. A theorem proved in [22] states that a low-energy effective theory that arises by dynamical symmetry breaking from an (asymptotically free) anomaly-free chiral gauge theory is also anomaly-free. One sees that the present example is in accord with this general theorem. Indeed, the nonsinglet fermions in this SU(6) descendant theory are precisely those of the SU(6) $A_2\bar F$ theory discussed above, and that analysis applies here for the further UV to IR evolution of the theory. In addition to the SU(6)-nonsinglet fermions, this descendant theory also contains the SU(6)-singlet fermions $\chi_{7,i,L}$ with $i = 1, 2$.
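The component counting in this breaking pattern amounts to a pair of binomial identities; a quick check (ours):

```python
from math import comb

# SU(7) -> SU(6): the A_3 fermion splits as 35 = 20 + 15
assert comb(7, 3) == 35 and comb(6, 3) == 20 and comb(6, 2) == 15
# gauge bosons in the coset SU(7)/SU(6)
assert (7 ** 2 - 1) - (6 ** 2 - 1) == 13
```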
IX. $A_4\bar F$ THEORIES

The fermion content of the $A_4\bar F$ theories was given in Eq. (3.16). The reduced one-loop and two-loop coefficients in the beta function were listed in Eqs. (3.9) and (4.4), with numerical results displayed in Table I. We find that for each of the three relevant values of $N$, namely $N = 9, 10, 11$, the coefficient $\bar b_2$ is negative, so the two-loop beta function has an IR zero. As we noted above, for $N = 11$, this IR zero is at very weak coupling relative to the minimal critical value for bilinear fermion condensation, so we can reliably conclude that the theory evolves from the UV to a (deconfined) non-Abelian Coulomb phase in the infrared. In the $N = 9$ and $N = 10$ theories, the respective IR zeros in the two-loop beta function occur at strong and moderate coupling, so a full analysis is necessary.
We have examined whether there are SU($N$) gauge-singlet composite fermion operators that could satisfy the 't Hooft global anomaly matching conditions, but we have not found any. The global flavor symmetry group was given in Eq. (5.13) with (5.14). As an illustration of our analysis, let us consider the SU(9) $A_4\bar F$ theory, which contains a $[4]_9$ fermion $\psi^{abde}_L$ and the fermions $\chi_{a,i,L}$ with $1 \le i \le 5$, comprising five copies of the $\overline{[1]}_9$ representation of SU(9). The global flavor symmetry group is SU(5)$_{\bar F}$ $\otimes$ U(1)$'$. The $\chi_{a,i,L}$ fermions transform as a $\bar 5$ of the SU(5)$_{\bar F}$ flavor group, and the vector of U(1)$'$ charges is $Q' = (Q'_{A_4}, Q'_{\bar F}) = (1, -7)$. A fermionic operator product that is an SU(9) gauge singlet is

$$f_{i} \sim \epsilon_{abdefghus}\, \psi^{abde\,T}_L C\, \psi^{fghu}_L\, (\chi^c)^s_{i} \ . \qquad (9.1)$$

This transforms as a fundamental ($5$) representation of the global SU(5)$_{\bar F}$ symmetry with U(1)$'$ charge $2Q'_{A_4} - Q'_{\bar F} = 9$. Since this is a right-handed composite fermion, we actually calculate with the charge conjugate $(f^c)_{i,L}$, which is a left-handed fermion that transforms as a $\bar 5$ representation of the global SU(5)$_{\bar F}$ with U(1)$'$ charge $-9$. We find that this composite fermion does not satisfy the global anomaly matching conditions. For example, consider the SU(5)$^3$ anomaly. The fundamental fields make the following contributions: the $A_4$ fermion yields zero, while the $\bar F$ fermions yield $N A(\bar 5) = 9 \times (-1) = -9$. However, the $f^c_L$ fermion yields $A(\bar 5) = -1$, which does not match. Since we have not found composite fermion operators that satisfy the 't Hooft global anomaly matching conditions, we consider fermion condensation in the cases where the beta function has an IR zero at moderate (for $N = 10$) and strong (for $N = 9$) coupling in the infrared.
As an explicit example, we analyze the SU(9) $A_4\bar F$ theory. The fermion content of this theory is given by the $N = 9$ special case of Eq. (3.16), namely

$$A_4 + 5\bar F = [4]_9 + 5\,\overline{[1]}_9 \ . \qquad (9.3)$$

The most attractive channel for bilinear fermion condensation is the $N = 9$ special case of (6.12), namely $A_4 \times A_4 \to \bar F$. Assuming that this condensate forms, it breaks the gauge symmetry SU(9) to SU(8) and also breaks the global flavor symmetry group SU(5)$_{\bar F}$ $\otimes$ U(1)$'$. We denote the scale at which this condensate forms as $\Lambda_9$. Without loss of generality, we label the gauge index for the broken direction to be $a = 9$. The condensate then has the form

$$\langle \epsilon_{abdefghu9}\, \psi^{abde\,T}_L C\, \psi^{fghu}_L \rangle \ . \qquad (9.4)$$

The components of the $A_4$ fermion that are involved in this condensate gain dynamical masses of order $\Lambda_9$, as do the 17 gauge bosons in the coset SU(9)/SU(8). The massless SU(8)-nonsinglet fermion content of the descendant SU(8) theory thus consists of $A_3 + 5\bar F = [3]_8 + 5\,\overline{[1]}_8$, comprised of the components $\psi^{abd9}_L$ and the $\chi_{a,i,L}$ with $1 \le a, b, d \le 8$ and $1 \le i \le 5$. Again, the theorem proved in [22] guarantees that this SU(8) descendant theory is anomaly-free. Indeed, the nonsinglet fermions in this SU(8) descendant theory are precisely those of the SU(8) $A_3\bar F$ theory discussed above, and that analysis applies here for the further UV to IR evolution of the theory. In addition to the SU(8)-nonsinglet fermions, this descendant theory also contains the SU(8)-singlet fermions $\chi_{9,i,L}$ with $1 \le i \le 5$.
X. MULTIFERMION CONDENSATES AND IMPLICATIONS FOR THE PRESERVATION OF CHIRAL GAUGE SYMMETRY
Our discussion above of fermion condensate formation focused on bilinear fermion condensates and resultant dynamical chiral gauge symmetry breaking. However, it is, in principle, possible for a strongly interacting vectorial or chiral gauge theory to produce fermion condensates involving product(s) of more than just two fermion fields [40,41]. Much less attention has been devoted in the literature to such multifermion condensates than to bilinear fermion condensates. This is somewhat analogous to the situation with bound states of (anti)quarks in hadronic physics. For many years the main focus of research was on color-singlet bound states with the minimum number of (anti)quarks, namely $qqq$ for baryons and $q\bar q$ for mesons. (Subsequently, glueballs and mixing between $q\bar q$ mesons and glue to form mass eigenstates were also studied.) However, there is increasing experimental evidence that the hadron spectrum also contains bound states with additional quarks, such as $q\bar q q\bar q$ and $q\bar q Q\bar Q$, where $Q$ means a heavy quark, $c$ or $b$, including charged mesons, and possibly $qqqq\bar q$ and $qqqQ\bar Q$ [42]. In the case of possible condensates involving four or more fermions, we are not aware of a reliable method that can be used to assess the relative likelihood that these would form. The problem of assessing this likelihood is fraught with even more theoretical uncertainty than the uncertainty inherent in the use of the rough MAC criterion to measure the attractiveness of bilinear fermion condensation channels.
Clearly, Lorentz invariance implies that the number of fermion fields in such multifermion condensates must be even. As usual, we denote the charge conjugate of a generic fermion field $\chi$ as $\chi^c \equiv C\bar\chi^T$, where $C$ is the Dirac charge conjugation matrix satisfying $C = -C^T$ and $\bar\chi \equiv \chi^\dagger \gamma^0$; recall also that for a left-handed fermion $\chi_L$, the charge conjugate is $(\chi_L)^c = (\chi^c)_R$.
As an example, consider the SU(5) $A_2\bar F$ theory, with the fields $\psi^{ab}_L$ and $\chi_{a,1,L}$, or equivalently, $\psi^c_{ab,R}$ and $(\chi^c)^a_{1,R}$. When the gauge interaction becomes strong, it could produce several different four-fermion condensates that preserve the SU(5) gauge symmetry. One such condensate that involves all of the fermions is

$$\langle \epsilon_{abdef}\, \big( \psi^{ab\,T}_L C\, \psi^{de}_L \big)\big( \psi^{fs\,T}_L C\, \chi_{s,1,L} \big) \rangle \ , \qquad (10.1)$$

where here $a, b, d, e, f, s$ are SU(5) gauge indices. This condensate has U(1)$'$ charge $3Q'_{A_2} + Q'_{\bar F}$. Using the results from the $N = 5$ special case of Eq. (5.10), namely $Q'_{A_2} = 1$, $Q'_{\bar F} = -3$, we find that this condensate (10.1) has zero U(1)$'$ charge, so it also preserves the global U(1)$'$ symmetry of the SU(5) theory.
In a similar manner, consider the SU(6) $A_2\bar F$ theory, with the fermions $\psi^{ab}_L$ and $\chi_{a,j,L}$ with $j = 1, 2$. As the SU(6) gauge interaction becomes strong in the infrared, it might produce the following four-fermion condensate that is invariant under the SU(6) gauge symmetry:

$$\langle \epsilon_{abdesu}\, \big( \psi^{ab\,T}_L C\, \psi^{de}_L \big)\Big[ (\chi^c)^{s,1\,T}_R C\, (\chi^c)^{u,2}_R - (\chi^c)^{s,2\,T}_R C\, (\chi^c)^{u,1}_R \Big] \rangle \ . \qquad (10.2)$$

Note that because of the contraction of the operator product $[(\chi^c)^{s,1\,T}_R C (\chi^c)^{u,2}_R]$ with the SU(6) $\epsilon_{abdesu}$ tensor, the first term in Eq. (10.2) is automatically antisymmetrized in the flavor indices $j = 1, 2$; we have made this explicit by subtracting the term with these indices interchanged. As shown by this antisymmetrized form of Eq. (10.2), this condensate thus preserves the SU(2)$_{\bar F}$ factor group in the global flavor symmetry $G_{fl}$ for this theory, namely SU(2)$_{\bar F}$ $\otimes$ U(1)$'$. In the $(A_2, \bar F)$ basis, the U(1)$'$ charges are $(2, -4)$, as given by the $N = 6$ special case of Eq. (5.10). Hence, the U(1)$'$ charge of the condensate (10.2) is $-4$, so it breaks the U(1)$'$ part of $G_{fl}$, yielding one Nambu-Goldstone boson.
One can give corresponding discussions of gauge-invariant multifermion condensates for other SU($N$) $A_k\bar F$ theories that become strongly coupled in the infrared. In general, these theories could also produce other types of four-fermion condensates, such as gauge-invariant products of $\psi$ and $\chi$ bilinears of the schematic forms $(\bar\psi\psi)(\bar\chi\chi)$ and $(\bar\chi\chi)(\bar\chi\chi)$, where $1 \le i, j, k, \ell \le n_{\bar F}$ label the flavor indices carried by the $\chi$ fields. There are also multifermion condensates with eight and more fermions that one could consider. Such multifermion condensates merit further study.
XI. $S_k\bar F$ THEORIES

It is natural to carry out an investigation of (anomaly-free) chiral gauge theories with gauge group SU($N$) and chiral fermions transforming according to the rank-$k$ symmetric tensor representation, $S_k$, with $k \ge 3$, and a requisite number of chiral fermions in the $\bar F$ representation so as to render the theories free of an anomaly in gauged currents. We denote such a theory as an SU($N$) $S_k\bar F$ theory. This investigation would be the analogue of the study that we have performed in this paper for $A_k\bar F$ theories with $k \ge 3$ and would generalize the studies that have been carried out in the past on the $S_2\bar F$ theory [4,10-13]. As with the $A_k\bar F$ theories, we require that the theory must be asymptotically free, so that it is perturbatively calculable in at least one regime, namely the deep UV, where the gauge coupling is small.
However, we shall show here that there are no asymptotically free (anomaly-free) $S_k\bar F$ chiral gauge theories with $k \ge 3$. As before, we denote the number of copies of $\bar F$ fermions as $n_{\bar F}$. The contribution to the triangle anomaly in gauged currents of a chiral fermion in the $S_k$ representation is (see Appendix B)

$$A(S_k) = \frac{(N+k)!\,(N+2k)}{(N+2)!\,(k-1)!} \ . \qquad (11.1)$$

The total anomaly in the theory is $A = A(S_k) - n_{\bar F} A(F)$, so the condition of anomaly cancellation is that

$$n_{\bar F} = A(S_k) \ . \qquad (11.2)$$

The first few values of $n_{\bar F}$ are

$$n_{\bar F} = N + 4 \quad {\rm for}\ k = 2 \ , \qquad (11.3)$$

$$n_{\bar F} = \frac{(N+3)(N+6)}{2} \quad {\rm for}\ k = 3 \ , \qquad (11.4)$$

$$n_{\bar F} = \frac{(N+3)(N+4)(N+8)}{6} \quad {\rm for}\ k = 4 \ , \qquad (11.5)$$

and so forth for higher $k$.
To investigate the restrictions due to the requirement of asymptotic freedom, we calculate the one-loop coefficient of the beta function. We find

$$b_{1,S_k\bar F} = \frac{11}{3}N - \frac{2}{3}\Big[ T(S_k) + n_{\bar F}\,T(F) \Big] \ . \qquad (11.6)$$

We exhibit the explicit expressions for $b_{1,S_k\bar F}$ for the first few $k \ge 2$:

$$b_{1,S_2\bar F} = 3N - 2 \ , \qquad (11.7)$$

$$b_{1,S_3\bar F} = \frac{11}{3}N - \frac{1}{3}(N+3)(N+4) \ , \qquad (11.8)$$

$$b_{1,S_4\bar F} = \frac{11}{3}N - \frac{1}{9}(N+3)(N+4)(N+5) \ , \qquad (11.9)$$

$$b_{1,S_5\bar F} = \frac{11}{3}N - \frac{1}{36}(N+3)(N+4)(N+5)(N+6) \ , \qquad (11.10)$$

$$b_{1,S_6\bar F} = \frac{11}{3}N - \frac{1}{180}(N+3)(N+4)(N+5)(N+6)(N+7) \ . \qquad (11.11)$$

The coefficient $b_{1,S_2\bar F}$ is positive for all relevant $N$, and this property was used in past studies of the $S_2\bar F$ theory. However, the coefficient $b_{1,S_3\bar F}$ is negative for all relevant $N \ge 3$. (Recall that an SU(2) theory has only real representations and hence is not chiral.) With $N$ generalized from positive integers to real numbers, $b_{1,S_3\bar F}$ is negative for all $N$, reaching its maximum value of $-8/3$ for $N = 2$. We find that the $b_{1,S_k\bar F}$ coefficients with $k \ge 4$ are negative-definite for all positive $N$ (either real or integer). This is evident from the explicit expressions that we have given for $4 \le k \le 6$. This completes our proof that there are no asymptotically free, anomaly-free $S_k\bar F$ chiral gauge theories with $k \ge 3$.
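The negativity of $b_{1,S_k\bar F}$ for $k \ge 3$ can also be confirmed numerically; a self-contained sketch (ours, assuming $T(S_k) = \binom{N+k-1}{k}\,k(N+k)/[2N(N+1)]$, which reproduces $T(S_2) = (N+2)/2$ and $T(S_3) = (N+2)(N+3)/4$):

```python
from math import comb, factorial, prod

def T_Sk(N: int, k: int) -> float:
    return comb(N + k - 1, k) * k * (N + k) / (2 * N * (N + 1))

def nF_Sk(N: int, k: int) -> int:
    """n_Fbar = A(S_k) = (N + 2k)(N+3)...(N+k) / (k-1)!"""
    return (N + 2 * k) * prod(N + j for j in range(3, k + 1)) // factorial(k - 1)

def b1_SkF(N: int, k: int) -> float:
    return 11 * N / 3 - (2 / 3) * (T_Sk(N, k) + nF_Sk(N, k) / 2)

assert abs(b1_SkF(2, 3) + 8 / 3) < 1e-12       # maximum value -8/3 at N = 2
assert all(b1_SkF(N, k) < 0 for k in (3, 4, 5, 6) for N in range(3, 40))
```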
XII. CONCLUSIONS
In summary, in this paper we have constructed and studied asymptotically free chiral gauge theories with an SU($N$) gauge group and chiral fermions transforming according to the antisymmetric rank-$k$ tensor representation, $A_k$, with $k = 2, 3, 4$, and, for each $k$ and $N$, the requisite number of copies, $n_{\bar F}$, of fermions transforming according to the conjugate fundamental representation, $\bar F$, of this group to render the theory anomaly-free. For a given $k$, to get a theory that is chiral and has $n_{\bar F} \ge 1$, we take $N \ge 2k+1$. We have extended previous studies of the $A_2\bar F$ theories with further analysis of fermion condensation channels and sequential symmetry breaking and have presented a number of new results on the $A_k\bar F$ theories with $k \ge 3$. The $A_2\bar F$ theories form an infinite family with $N \ge 5$, but we have shown that the $A_3\bar F$ and $A_4\bar F$ theories are only asymptotically free for $N$ in the respective ranges $7 \le N \le 17$ and $9 \le N \le 11$, and that there are no asymptotically free $A_k\bar F$ theories with $k \ge 5$. We have investigated the types of ultraviolet to infrared evolution for these $A_k\bar F$ theories and have found that, depending on $k$ and $N$, they may lead in the infrared to a non-Abelian Coulomb phase, or may involve confinement with massless gauge-singlet composite fermions, or bilinear fermion condensation with dynamical gauge and global symmetry breaking. In the two cases $(k, N) = (2, 6)$ and $(3, 12)$, in each of which two bilinear fermion condensation channels are equally attractive, so that the MAC criterion does not prefer one over the other, we have applied vacuum alignment arguments to infer which channel is preferred. We have also discussed multifermion condensates. Finally, we have shown that there are no asymptotically free, anomaly-free SU($N$) $S_k\bar F$ chiral gauge theories with $k \ge 3$, where $S_k$ denotes the rank-$k$ symmetric representation.
APPENDIX B: GROUP INVARIANTS

The trace invariant $T(R)$ and the quadratic Casimir invariant $C_2(R)$ of a representation $R$ of a group $G$ are defined by

$${\rm Tr}_R(T_a T_b) = T(R)\,\delta_{ab} \ , \qquad \sum_a \big[ D_R(T_a)\,D_R(T_a) \big]_{ij} = C_2(R)\,\delta_{ij} \ ,$$

where $T_a$ are the generators of $G$, and $D_R$ is the matrix representation (Darstellung) of $R$.
The anomaly produced by chiral fermions transforming according to the representation $R$ of a group $G$ is defined via

$${\rm Tr}_R\big( \{T_a, T_b\}\,T_c \big) = \frac{1}{2}\,A(R)\,d_{abc} \ ,$$

where the $d_{abc}$ are the totally symmetric structure constants of the corresponding Lie algebra. Thus, $A(F) = 1$ for the fundamental representation of SU($N$). For the symmetric and antisymmetric rank-$k$ tensor representations of SU($N$), the anomaly is, respectively [37],

$$A(S_k) = \frac{(N+k)!\,(N+2k)}{(N+2)!\,(k-1)!} \ , \qquad A(A_k) = \frac{(N-3)!\,(N-2k)}{(N-k-1)!\,(k-1)!} \ .$$

TABLE I: Values of $N$, $n_{\bar F}$, $\bar b_1$, $\bar b_2$, and, for negative $\bar b_2$, $\alpha_{IR,2\ell} = -\bar b_1/\bar b_2$, $\alpha_{cr}$ for the most attractive bilinear fermion condensation channel (2.3) in the SU($N$) theory, and the ratio $\rho_c$. The dash notation $-$ means that the two-loop beta function has no IR zero. The likely IR behavior is indicated in the last column, with the abbreviations SC, MC, WC for the type of coupling in the IR (SC = strong, MC = moderate, WC = weak coupling). In the WC case, the UV to IR evolution is to a non-Abelian Coulomb phase (NACP). The various possibilities for the evolution involving strong and moderately strong coupling are discussed in the text. For $k = 2$, we include illustrative results covering the interval $5 \le N \le 10$; for $k = 3, 4$ we list results for all (asymptotically free) $A_k\bar F$ theories.
AntagomiR‐29b inhibits vascular and valvular calcification and improves heart function in rats
Abstract We aimed to investigate the role of the miR‐29b and its effect on TGF‐β3 pathway in vascular and valvular calcification in a rat model of calcific aortic valve diseases (CAVD). A rat model of CAVD was established by administration of warfarin plus vitamin K. The expression levels of miR‐29b, osteogenic markers and other genes were determined by qRT‐PCR, Western blot and/or immunofluorescence and immunohistochemistry. The calcium content and alkaline phosphatase (ALP) activity were measured. The calcium content, ALP activity and osteogenic markers levels in calcified aorta and aortic valve were augmented compared to controls. The expression of miR‐29b, p‐Smad3, and Wnt3 and β‐catenin was significantly up‐regulated, whereas TGF‐β3 was markedly down‐regulated. However, compared with the CAVD model group, the calcium content and ALP activity in rats treated with antagomiR‐29b were significantly decreased, and antagomiR‐29b administration reversed the effects of CAVD model on the expression of miR‐29b and osteogenic markers. Inhibition of miR‐29b in CAVD rats prevented from vascular and valvular calcification and induced TGF‐β3 expression, suggesting that the miR‐29b/TGF‐β3 axis may play a regulatory role in the pathogenesis of vascular and valvular calcification and could play a significant role in the treatment of CAVD and other cardiovascular diseases.
| INTRODUCTION
Calcification of the aortic arch is an independent risk factor for coronary artery disease and increases mortality and morbidity associated with cardiovascular disease such as calcified aortic stenosis and calcific aortic valve disease (CAVD). CAVD is a common condition of cardiovascular diseases in developed countries, affecting up to 3% of people over the age of 65. 1 The prevalence of CAVD is closely related to risk factors such as age, dyslipidaemia, smoking, diabetes and hypertension. 2,3 Scientists have demonstrated that vascular calcification (VC) is a complex biological process involving the mineralization and transdifferentiation of vascular smooth muscle cells (VSMCs) by chondrogenic and osteoblastogenic pathways. 4,5 Calcification of the aortic valve leads to aortic stiffness and systolic hypertension, which constitute an important risk factor of mortality due to cardiovascular diseases. 6,7 CAVD can be in the asymptomatic phase for a long time and may persist for several years before symptoms appear. 8 Previous studies have shown that CAVD can be diagnosed at the early stage by Doppler echocardiography and cardiac auscultation. 9

MicroRNAs (miRNAs) are small non-coding RNAs (~22 nucleotides) that lead to silencing of genetic information by annealing inexactly to complementary sequences in the 3′-untranslated regions (3′-UTR) of the target mRNA, causing mRNA destabilization and/or translational inhibition. 10 Recent advances have identified miRNAs as key regulators of cancer, and they also play an integral role in the pathogenesis of cardiovascular calcification. 4,11,12 Previous studies have shown that the expression of miR-29b is decreased in cholecalciferol-induced rat calcified arteries. 13 Moreover, miR-29b-3p overexpression significantly inhibits arterial calcification through regulating the expression of matrix metalloproteinase-2 (MMP2) in vivo and in vitro. 13 In the study of Du et al, 14 miR-29 targets the cartilage oligomeric matrix protein-degrading metalloproteinase, a disintegrin and metalloproteinase with thrombospondin motifs-7, to inhibit vascular calcification. Additionally, in our recent study, we have identified miR-29b as an endogenous positive regulator of human aortic valve interstitial cell (VIC) calcification that functions through repressing TGF-β3 expression in vitro. 15 However, whether miR-29b could target TGF-β3 to attenuate VC and cardiovascular diseases such as CAVD in vivo has yet to be explored.
In this study, we investigated the role of the miR-29b/TGF-β3 axis in CAVD in vivo. We found that miR-29b was significantly induced in CAVD and inhibition of miR-29b was followed by decreased expression of osteoblastic differentiation and calcification markers and attenuated vascular and valvular calcification through derepressing the TGF-β3 signalling pathway.
| Rat model of CAVD and experimental groups
A total of 80 male Sprague-Dawley (SD) rats (8 weeks old, weighing 250-280 g), purchased from Vital River Laboratory Animal Technology Co. Ltd, were housed at 22 ± 2°C and 40%-60% humidity with a 12-hour light/dark cycle. The rats had free access to food and water.
The rat model of CAVD was established as described previously with minor modification. 16 In brief, rats were randomly assigned to a control group (n = 20) and a CAVD group (n = 60). Rats in the CAVD group were treated with warfarin (20 mg/kg/d in drinking water) together with subcutaneous injections of vitamin K1.
| In vivo administration of antagomiR-29b
For treatment with antagomiR-29b, after confirming CAVD in rats, 100 µL of antagomiR-29b or NC (diluted in PBS at 2 mg/mL) was injected into rats in these groups, whereas those in the control group and the CAVD group were injected with an equivalent volume of PBS, three times per week by tail vein injection for 4 weeks.
| Measurement of heart function of rats
The rats were weighed and placed in an airtight box with 4% pentobarbital (30 mg/kg) for anaesthesia. The chest of the rats was depilated and the animals were fixed on the test plate, followed by coating with an appropriate amount of ultrasonic coupling agent. Subsequently, echocardiography was performed using the Vevo 2100 system (VisualSonics).
| Tissue analysis
At day 28, rats were anesthetized with 4% pentobarbital (30 mg/kg) to collect blood plasma. After that, rats were sacrificed by cervical decapitation and portions of aortas and hearts were collected.
To collect the aortic roots, rat hearts were fixed with 4% paraformaldehyde in 0.1 mol/L sodium phosphate buffer (pH 7.4) and embedded in paraffin. Then, cross-sectional slices with a thickness of about 7 µm were prepared. The sections of aortic root located at the proximal of the aortic valve area were selected for histological analysis, so there were only a limited number of slices available before entering the ascending aorta. 5-mm sections of aorta and aortic root pieces were stained with haematoxylin and eosin (H&E). Data analyses were performed using the ImagePro Plus 6.1 image analysis software (Media Cybernetics).
| ALP activity assay
ALP activity in plasma, aorta and aortic roots was detected as previously described. 17 Briefly, abdominal aortic blood was harvested and mixed with 50 U/mL of heparin. Subsequently, plasma was isolated from the aortic blood by centrifugation at 1409 g for 15 minutes at 4°C. Aortic tissue was homogenized in pre-cooled physiological saline, followed by centrifugation at 7529 g for 10 minutes to collect the supernatant. ALP activity of plasma and tissue supernatants was determined using the ALP assay kit (Sigma Chemical Co.), and data were normalized to protein content determined by the Bradford method as previously described. 18
| Quantification of calcium content in aortas
The calcium content in aorta and aortic roots was determined using the o-cresolphthalein complexone colorimetric approach. Briefly, the aortas and aortic roots were air-dried and weighed, followed by the addition of more than 10 volumes (w/v) of 100 mL/L formic acid and incubation overnight at 4°C. The supernatant was collected by centrifugation at 3000 rpm for 10 minutes before calcium detection.
The reagent and the supernatant were mixed and incubated at 37°C for 5 minutes according to the manufacturer's instructions. Then, the absorbance at a wavelength of 600 nm was measured using a spectrophotometer (Thermo Scientific Biotech) to calculate the calcium deposition content of each group of aortas and aortic roots. Calcium content was normalized to the dry tissue weight and expressed in mg/g dry tissue.
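As a concrete illustration of this normalization, a minimal sketch (ours; the standard-curve calibration values are hypothetical placeholders, since the kit's calibration is not given here):

```python
def calcium_mg_per_g_dry(abs_sample: float, abs_standard: float,
                         standard_mg_dl: float, extract_vol_ml: float,
                         dry_weight_g: float) -> float:
    """o-CPC colorimetry: absorbance ratio -> concentration -> mg Ca per g dry tissue."""
    conc_mg_dl = abs_sample / abs_standard * standard_mg_dl
    total_mg = conc_mg_dl * extract_vol_ml / 100.0   # 1 dL = 100 mL
    return total_mg / dry_weight_g
```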
| Real-time PCR analysis
Total RNA was extracted from aorta using TRIzol reagent (Invitrogen) and subsequently reverse transcribed into cDNA. Real-time PCR was performed on the cDNA in a 20 µL reaction system using the Mx3000 Multiplex Quantitative PCR System (Stratagene), and the PCR product was assessed using Eva Green fluorescence (Invitrogen). GAPDH was selected as the internal reference gene for mRNA levels, whereas U6 was used as the internal reference for miR-29b levels. The primer sequences are shown in Table 1. The ΔΔCt method was used for computing the relative mRNA expression levels.
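The ΔΔCt computation mentioned above amounts to the following; a minimal sketch (ours; function and argument names are illustrative), with GAPDH as the reference for mRNAs and U6 for miR-29b:

```python
def relative_expression(ct_target: float, ct_ref: float,
                        ct_target_ctrl: float, ct_ref_ctrl: float) -> float:
    """Fold change by the 2^-(DeltaDeltaCt) method."""
    delta_ct = ct_target - ct_ref                  # e.g. CAVD aorta sample
    delta_ct_ctrl = ct_target_ctrl - ct_ref_ctrl   # calibrator, e.g. control rat
    return 2.0 ** -(delta_ct - delta_ct_ctrl)
```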
| Western blot analysis
The aortic tissues were washed 3 times with PBS. The total protein was extracted with RIPA lysate (Sigma-Aldrich). The protein concentration was measured by the BCA method. About 40 µg of protein was separated by SDS-PAGE electrophoresis, and the protein was transferred onto a PVDF membrane (Millipore) by the wet transfer method. Subsequently, the membrane was blocked with 5% skim milk at room temperature for 45 minutes, followed by incubation with primary antibodies, including anti-ALP. Densitometric analysis was performed using ImageJ software (version 1.41o, Java 1.6.0_10, Wayne Rasband, US National Institutes of Health), and the relative content of the target protein was expressed as the grey-value ratio of the target protein to GAPDH.
| Immunohistochemical staining
After dewaxing in three toluene baths of 3 minutes each, the sections were rehydrated by passage through decreasing gradients of alcohol baths. After that, the sections were subjected to antigen unmasking by heating in citrate buffer at pH 6 for 10 minutes.
| Statistical analysis
GraphPad Prism v6.0 (GraphPad Software, Inc) software was used for statistical analysis, and data were expressed as mean ± SD.
Comparisons between groups were performed using one-way or two-way ANOVA, followed by Newman-Keuls multiple comparison test. P < .05 was considered statistically significant, and all experiments were repeated three times.
| Warfarin up-regulated miR-29b in CAVD rat model
In order to examine the expression of miR-29b in the CAVD rat model, qRT-PCR experiments were performed on plasma, aorta and aortic valve samples collected from warfarin-treated and control rats. The results indicated that the miR-29b expression level was significantly increased in the plasma, aorta and aortic valves of the CAVD rat model compared to the control rats (Figure 1A).
| AntagomiR-29b attenuated the mineralization of vascular tissue in CAVD model
The increased expression of miR-29b suggested that this miRNA may play a significant role in CAVD in vivo. Thus, we aimed to investigate its effect on key parameters involved in CAVD. To this end, antagomiR-29b was injected into CAVD rats. The efficiency of antagomiR-29b was assessed by qRT-PCR, and the results showed that the expression levels of miR-29b in plasma, aortas and aortic valves of CAVD rats were significantly decreased (Figure 1A). Next, we evaluated the effect of antagomiR-29b on the mineralization of aortas and aortic valves. The measurement of calcium content indicated no significant difference in the plasma level of calcium (P > .05), but the calcium content in the aorta and aortic valves was significantly higher in the CAVD model group compared to the control group (P < .01, Figure 1B). On the contrary, after the treatment with antagomiR-29b, calcium content was markedly decreased compared to the CAVD model group (P < .01, Figure 1B). In addition, compared with control rats, increased ALP activity in plasma, aorta and aortic valves was observed in the CAVD rat model, whereas antagomiR-29b significantly reverted this effect (P < .01, Figure 1C). Furthermore, H&E staining revealed disordered elastic fibres in aortas of the CAVD group compared with control aortas, but this effect was counteracted by antagomiR-29b (Figure 1D). Similarly, H&E staining of the aortic roots indicated the disorganization of tissue structure in the CAVD rat model compared to the control group, whereas counteracting effects were observed in the antagomiR-29b group (Figure 1E) compared to the CAVD model group.
| AntagomiR-29b promoted TGF-β3 expression and inhibited Wnt3/β-catenin/Smad3 axis and osteogenic factors in the aorta and aortic valves of CAVD model
To gain insight into the potential molecular mechanism of miR-29b in CAVD, we performed gene expression and protein expression analyses using the aorta samples. The qRT-PCR results indicated that the mRNA levels of osteogenic markers such as OCN, OPN, ALP and Runx2 were significantly up-regulated in the CAVD model compared to the control group (P < .01, Figure 2). In addition, the same trends were observed for Wnt3 and β-catenin mRNA levels (P < .01, Figure 2). However, upon antagomiR-29b treatment, the mRNA levels of these genes were significantly decreased compared to the CAVD model group (P < .01, Figure 2). Meanwhile, the mRNA level of TGF-β3 was significantly decreased in the CAVD model compared to the control group, but markedly increased following the antagomiR-29b treatment comparatively to the CAVD model group (P < .01, Figure 2). Besides, there was no significant difference in Smad3 expression among these four groups (P > .05, Figure 2). Analogous analyses of the aortic valves showed that the osteogenic markers and Wnt-pathway genes were likewise up-regulated in the CAVD model (Figure 5). Interestingly, treatment with antagomiR-29b significantly down-regulated the expression of these genes compared to the CAVD model group (Figure 5). In addition, we found that the TGF-β3 mRNA level was significantly down-regulated in the CAVD model relative to the control group, and this trend was reversed by antagomiR-29b administration compared to the CAVD model group (Figure 5). Similar results were observed in Western blotting experiments (Figure 6).
These results suggested that antagomiR-29b can effectively prevent the progression of CAVD by possibly promoting TGF-β3 through repressing the Wnt3/β-catenin/Smad3 axis, which may serve as a potential target for treatment of CAVD.
| AntagomiR-29b antagonizes angiogenesis in the aorta of CAVD rat model
In order to uncover whether miR-29b affects angiogenesis in the aortas of the CAVD rat model, we performed VEGF immunofluorescence on aortas collected from animals in the four groups. As shown in Figure 7, we found that the expression of VEGF was significantly increased in the CAVD model group compared to the control group.
In addition, antagomiR-29b treatment significantly decreased the expression of VEGF in the CAVD rats comparatively to the CAVD model groups (Figure 7). No significant difference was observed between the CAVD model group and NC group (Figure 7). These results indicated that antagomiR-29b antagonizes angiogenesis in the aorta of CAVD rat model.
| AntagomiR-29b improves heart function in CAVD rat model
Figure 2. AntagomiR-29b promoted TGF-β3 expression and inhibited the Wnt3/β-catenin/Smad3 axis and osteogenic factors in the aorta of the CAVD model at the transcription level. The mRNA expression levels of different genes were determined by qRT-PCR. The experiments were performed at least 3 times, and data are expressed as means ± SD. *P < .05, **P < .01 vs control group, ##P < .01 vs CAVD model group, &&P < .01 vs NC model group.

To evaluate heart performance in the CAVD model and the possible effect of antagomiR-29b, ECG and echocardiography analyses were performed. ECG data indicated that, compared with the control group, the heart rate was significantly decreased in the CAVD model (Figure 8A). Interestingly, we found that antagomiR-29b treatment improved the heart rate of the CAVD model (Figure 8A). Echocardiography analysis was performed to examine the cardiac performance in rats from the four groups. The M-mode echocardiograms are depicted in Figure 8B. The results indicated that the pattern of the mitral Doppler flow spectra was consistent and smooth in the control group, whereas the middle space between the inferior and superior flow spectra was regular and stable (Figure 8B). However, significant variations in the middle space distance and sawtooth waves were observed in the CAVD model group, which were reverted after treatment with antagomiR-29b (Figure 8B). The indices of EF% and FS% were significantly decreased in the CAVD model compared to the control group but, interestingly, EF% and FS% were significantly increased upon treatment of CAVD rats with antagomiR-29b (Figure 8B). On the contrary, LVDd and LVDs were significantly increased in the CAVD model compared to the control (P < .01, Figure 8B) but decreased in the antagomiR-29b treatment group (P < .01, Figure 8B). These results indicated that antagomiR-29b improves heart function in the CAVD rat model.
| DISCUSSION
Previous research has shown that vascular calcification may be related to a number of mechanisms, such as calcium metabolism disorders and the osteoblast phenotypic transformation of VSMCs, 15,19 but the exact mechanisms so far remain unclear. Our in vitro study showed that miR-29b is a positive regulator of hAVICs calcification. 15 However, there was no in vivo study addressing the potential involvement of miR-29b in aortic calcification and its impact on heart function. Here, we uncovered that miR-29b was increased in the cardiovascular system of the CAVD rat model and showed that the inhibition of miR-29b mitigated aortic calcification in tissues of CAVD rats.
We also found that the expression of endogenous TGF-β3 was down-regulated in the calcification model in vivo. Meanwhile, antagomiR-29b treatment significantly increased the expression level of TGF-β3 and inhibited vascular and valvular calcification, as shown by a decrease in calcium content and ALP activity, indicating that the endogenous miR-29b/TGF-β3 pathway may be involved in the processes of vascular and valvular calcification. This finding corroborates our previous study showing that miR-29b promoted the calcification of hAVICs via direct targeting of TGF-β3. 15 It is well known that the osteoblast transformation of VSMCs is an important process in calcification. 15,20 Recent studies have shown that during calcification, the expression of smooth muscle lineage markers in VSMCs is reduced, whereas the expression of osteogenic markers is increased. 21-23 In this study, we found that the expression levels of osteogenic markers were increased in the CAVD model in vivo. Nonetheless, antagomiR-29b treatment significantly reverted the expression of these osteogenic markers. These results suggested that antagomiR-29b inhibits aortic and valvular osteoblastogenesis.
These findings corroborate our previous in vitro studies. 15 Other studies have also indicated that miR-29b is intrinsically involved in osteogenesis. 24-26 Moreover, we further investigated the potential mechanism of CAVD inhibition by antagomiR-29b. TGF-β is known to regulate cell differentiation, proliferation and apoptosis through cell surface receptor signal transduction pathways. 27,28 Previous studies have shown that TGF-β1 regulates interstitial cell calcification through an apoptosis mechanism in calcified aortic valves. 29 Moreover, TGF-β can also regulate vascular calcification and the differentiation of VSMCs. 30 TGF-β transmits cytoplasmic signals into the intracellular domain by phosphorylating Smad2 and Smad3 through activation of the type II receptor. 31 The activated, phosphorylated Smad2 and Smad3 regulate the transcription of a series of genes via binding to Smad4. 32 In our study, we found that the expression of TGF-β3 was decreased, whereas its downstream signaling molecule Smad3 was significantly increased in the CAVD model in vivo. We also found that treatment with antagomiR-29b significantly promoted the expression of TGF-β3, while the expression of Smad3 was inhibited. Previous research has shown that Runx2 is an important regulator of vascular calcification. 33 Up-regulation of Runx2 is crucial for the calcification process, which was observed in the vascular calcification of chronic kidney disease patients. 34 In addition, Runx2, a significant target gene of the TGF-β signaling pathway, can inhibit myogenic differentiation of C2C12 cells and induce osteoblast differentiation. 35 Here, we found up-regulation of Runx2 expression in the CAVD model in vivo. Our study also showed that the expression of VEGF in the CAVD model was significantly increased compared to the control group but reverted by antagomiR-29b administration, which is corroborated by its role in the occurrence and development of vascular calcification. 16 As VEGF is an angiogenic marker, we also stipulated that angiogenesis occurs in CAVD and that antagomiR-29b exerts an anti-angiogenic effect in CAVD.
This hypothesis was supported by the previous findings indicating that miR-29b hinders angiogenesis in hepatocellular carcinoma.
Figure 8. AntagomiR-29b improves heart function in the CAVD rat model. A, Representative images of electrocardiograms in different groups. B, Representative echocardiogram images and quantitative analysis of heart function parameters. The experiments were performed at least 3 times, and data are expressed as means ± SD. **P < .01 vs control group, #P < .05 vs CAVD model group, &P < .05 vs NC model group.

Moreover, we found that CAVD negatively impacts heart performance and that this effect is suppressed by miR-29b inhibition. This indicates that miR-29b may be a key therapeutic target in CAVD and cardiovascular diseases such as vascular stenosis and heart failure.
Nevertheless, this study has some limitations. Studies have shown that vascular calcification is associated with decreased vascular compliance and arterial hypertension. 37 Vitamin D3 plus nicotine can cause severe calcification in the rat aorta and significantly increase blood pressure. 38 Our current study indicates that antagomiR-29b treatment can significantly attenuate vascular and valvular calcification in rats, but it did not include blood pressure measurements.
Therefore, we cannot rule out the possibility that blood pressure may also be related to the inhibition of vascular calcification by miR-29b and these problems need to be further clarified.
In summary, this study provides important new insights into the mechanisms of miR-29b in CAVD and opens the door for innovative preventative strategies by providing new targets for small molecule therapies. Specifically, pharmacological inhibitors that can prevent miR-29b up-regulation during early development may represent an intriguing therapeutic approach.
CONFLICT OF INTEREST
The authors declare that they have no conflicts of interest with the contents of this article.
DATA AVAILABILITY STATEMENT
All data generated or analysed were published within the manuscript.
Skyrme tensor force in the collision 16O+40Ca
The role of the tensor force is investigated using the time-dependent Hartree-Fock (TDHF) theory in the collision 16O+40Ca. The full tensor force is incorporated in our TDHF implementation. The calculations are performed in three-dimensional Cartesian coordinates without any symmetry restrictions. We study the effect of the tensor force on the Coulomb barrier, the upper fusion threshold energy, and the energy contributions of the time-odd and tensor terms in the Skyrme energy functional. The Coulomb barrier obtained from the energy functional with the frozen density approximation is compared with the available experimental data. We find that the tensor force may change the upper fusion threshold energy by a few MeV for the collision 16O+40Ca. The tensor force thus has a non-negligible effect in heavy-ion collisions.
TDHF theory was proposed by Dirac in 1930 [44] and was first applied in nuclear physics in 1976 [45]. After the first application, many groups in the late 70s and 80s performed more extensive calculations of nuclear large-amplitude collective motion [46]. However, at that time, limited computer capacity restricted most calculations to many approximations. For instance, the reaction was assumed to take place in an axially symmetric geometry, and a simplified Skyrme force omitting the spin-orbit coupling was used. These approximations turned out to be a hindrance to theoretical development. For example, TDHF calculations predicted that for the 16O+16O reaction at E_c.m. = 34 MeV the partial waves L ≤ 6 do not lead to fusion, and the corresponding deep-inelastic cross section was expected to be 132 mb [47]. Based on this TDHF prediction, an experiment to search for a fusion L window was carried out. However, experimentally there is no evidence for the occurrence of the phenomenon predicted by the TDHF calculations [47]. This conflict between the TDHF prediction and the experimental observations is called the puzzle of the small fusion window, and it promoted further theoretical development.
A few years later, in 1986, Umar et al. included the time-even terms of the spin-orbit force in TDHF calculations [48], and found that the upper fusion threshold energy was increased by more than a factor of two. This indicates that the earlier TDHF calculations without spin-orbit coupling underestimated the energy dissipation from the collective kinetic energy into internal excitations, so that the energy window of fusion reactions was too small in comparison with the experiments. After including the spin-orbit coupling, more intrinsic degrees of freedom become accessible and the dissipation is enhanced. The inclusion of the time-even spin-orbit force in TDHF solved the puzzle of the small fusion window. However, a meaningful collision theory should satisfy Galilean invariance, which guarantees that the calculated results do not depend on the choice of the frame of reference. This invariance is particularly important for reaction dynamics, so that both the time-even and time-odd terms of the spin-orbit force should be included simultaneously. In recent years, TDHF calculations with the full spin-orbit force became possible thanks to the development of computational power. Strong spin excitation was shown in the spin-saturated system 16O+16O [22]. The full spin-orbit force was found to contribute about 40%-65% of the total dissipation, depending on the Skyrme parametrization and bombarding energy [26].
These studies indicate the nucleon-nucleon interaction plays a significant role in heavy-ion collisions. The most obviously missing component of nuclear force is tensor force, which is well known to be crucial to explain the properties of the deuteron. In nuclear structure, the tensor force plays an important role in the shell evolution of exotic nuclei [49], spin-orbit splitting [50,51], deformation [52], rotation [53], Gamow-Teller and spin-dipole excitations [54]. However, in heavy-ion collisions the full tensor force has been neglected in most calculations due to the complexity of collision dynamics. In Ref. [55], the time-even spin-current tensor was shown to become important as the increase of the mass of colliding systems. The role of time-even tensor force in the dissipation dynamics in deep-inelastic collisions has been explored in Ref. [26]. Recently the full tensor force was shown to play a non-negligible effect in 16 O+ 16 O inelastic collisions [9,28].
The article is organized as follows. Section 2 will present the TDHF theory with the full version of Skyrme interaction and energy density functional including the tensor force. In Sec. 3 we illustrate the role of tensor force in the asymmetric reaction 16 O+ 40 Ca. A summary is given in Sec. 4.
Theoretical method
Starting from the time-dependent action and applying the variational principle δS = 0 with respect to the many-body wave function Ψ(r, t), one may obtain the time evolution of the mean field, i.e., the TDHF equation

$$ i\hbar\,\frac{\partial\rho}{\partial t}=\big[\hat h[\rho],\,\rho\big]. $$

In the above TDHF equation, the many-body wave function has been approximated as the Slater determinant composed of the single-particle states φ_λ,

$$ \Psi(\mathbf r_1,\dots,\mathbf r_A;t)=\frac{1}{\sqrt{A!}}\,\det\big|\phi_\lambda(\mathbf r_i,t)\big|. $$

The initial wave functions in the dynamical evolution employ the nuclear ground state obtained from the HF equation

$$ \hat h\,\phi_\lambda=\varepsilon_\lambda\,\phi_\lambda. $$

The time evolution of the single-particle states is expressed as

$$ i\hbar\,\frac{\partial}{\partial t}\,\phi_\lambda(\mathbf r,t)=\hat h\,\phi_\lambda(\mathbf r,t), $$

with ĥ the single-particle Hamiltonian. Here r denotes the three-dimensional Cartesian coordinate and the spin of the nucleon.
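As an illustration of how the single-particle equation above is propagated in practice, the sketch below evolves a wave function with a Taylor-expanded propagator exp(-iĥΔt/ħ), the same scheme used for the 3D calculations described later (sixth order). It is a minimal 1D toy with a fixed harmonic-oscillator Hamiltonian in ħ = m = 1 units, not a self-consistent Skyrme mean field; grid size and time step are illustrative.

```python
import numpy as np

# Toy 1D grid and a fixed single-particle Hamiltonian (harmonic oscillator,
# hbar = m = 1); the real calculation uses a 3D grid and a density-dependent
# Skyrme mean field that is rebuilt at every step.
N, dx = 256, 0.2
x = (np.arange(N) - N // 2) * dx
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
V = 0.5 * x**2

def h_apply(phi):
    """Apply h = -0.5 d^2/dx^2 + V, with the kinetic term done via FFT."""
    kinetic = np.fft.ifft(0.5 * k**2 * np.fft.fft(phi))
    return kinetic + V * phi

def step(phi, dt, order=6):
    """One step of exp(-i h dt) phi, Taylor-expanded up to `order`."""
    out, term = phi.copy(), phi.copy()
    for n in range(1, order + 1):
        term = (-1j * dt / n) * h_apply(term)   # builds (-i dt)^n h^n / n!
        out = out + term
    return out

phi = np.exp(-0.5 * (x - 1.0) ** 2).astype(complex)   # displaced Gaussian
phi /= np.sqrt(np.sum(np.abs(phi) ** 2) * dx)         # normalize
for _ in range(500):
    phi = step(phi, dt=0.01)
print("norm after evolution:", np.sum(np.abs(phi) ** 2) * dx)  # stays near 1
```

Norm and energy conservation of such an explicit scheme degrade if the time step is too large for the grid's maximum kinetic energy, which is why the paper monitors both quantities.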
We employ the Skyrme effective interaction in our TDHF calculation,

$$
\begin{aligned}
v(\mathbf r_1,\mathbf r_2) ={}& t_0\,(1+x_0 P_\sigma)\,\delta(\mathbf r)
+ \tfrac{1}{2}\,t_1(1+x_1 P_\sigma)\big[\mathbf k'^2\,\delta(\mathbf r)+\delta(\mathbf r)\,\mathbf k^2\big]
+ t_2(1+x_2 P_\sigma)\,\mathbf k'\cdot\delta(\mathbf r)\,\mathbf k \\
&+ \tfrac{1}{6}\,t_3(1+x_3 P_\sigma)\,\rho^{\alpha}(\mathbf R)\,\delta(\mathbf r)
+ \mathrm i W_0\,(\hat{\boldsymbol\sigma}_1+\hat{\boldsymbol\sigma}_2)\cdot\big[\mathbf k'\times\delta(\mathbf r)\,\mathbf k\big] \\
&+ \tfrac{1}{2}\,t_e\Big\{\big[3(\hat{\boldsymbol\sigma}_1\!\cdot\!\mathbf k')(\hat{\boldsymbol\sigma}_2\!\cdot\!\mathbf k')-(\hat{\boldsymbol\sigma}_1\!\cdot\!\hat{\boldsymbol\sigma}_2)\,\mathbf k'^2\big]\delta(\mathbf r)
+\delta(\mathbf r)\big[3(\hat{\boldsymbol\sigma}_1\!\cdot\!\mathbf k)(\hat{\boldsymbol\sigma}_2\!\cdot\!\mathbf k)-(\hat{\boldsymbol\sigma}_1\!\cdot\!\hat{\boldsymbol\sigma}_2)\,\mathbf k^2\big]\Big\} \\
&+ t_o\Big[3(\hat{\boldsymbol\sigma}_1\!\cdot\!\mathbf k')\,\delta(\mathbf r)\,(\hat{\boldsymbol\sigma}_2\!\cdot\!\mathbf k)-(\hat{\boldsymbol\sigma}_1\!\cdot\!\hat{\boldsymbol\sigma}_2)\,\mathbf k'\cdot\delta(\mathbf r)\,\mathbf k\Big],
\end{aligned}
$$

with $\mathbf r=\mathbf r_1-\mathbf r_2$ and $\mathbf R=(\mathbf r_1+\mathbf r_2)/2$, where $t_i$, $x_j$ ($i,j=0,\dots,3$), $W_0$, $\alpha$, $t_e$, and $t_o$ are the Skyrme parameters. The last two groups of terms represent the tensor force, in which the coupling constants $t_e$ and $t_o$ denote the strength of the triplet-even and triplet-odd tensor interactions, respectively. The operator $\mathbf k=\frac{1}{2\mathrm i}(\nabla_1-\nabla_2)$ acts on the right, and $\mathbf k'=-\frac{1}{2\mathrm i}(\nabla_1-\nabla_2)$ acts on the left. The spin-exchange operator is $P_\sigma=\frac12(1+\hat{\boldsymbol\sigma}_1\cdot\hat{\boldsymbol\sigma}_2)$. It is natural to express the Skyrme interaction with the energy density functional (EDF)

$$ E=\int\mathrm d^3r\,\mathcal H(\mathbf r). $$

In the above equation, the number density $\rho$, kinetic density $\tau$, current density $\mathbf j$, spin density $\mathbf s$, spin-kinetic density $\mathbf T$, and spin-current pseudotensor density $\mathsf J$ are defined as

$$
\begin{aligned}
\rho_q(\mathbf r)&=\sum_{\lambda\in q,\,\sigma}\big|\phi_\lambda(\mathbf r,\sigma)\big|^2, &
\tau_q(\mathbf r)&=\sum_{\lambda\in q,\,\sigma}\big|\nabla\phi_\lambda(\mathbf r,\sigma)\big|^2, \\
\mathbf j_q(\mathbf r)&=\sum_{\lambda\in q,\,\sigma}\operatorname{Im}\big[\phi_\lambda^*(\mathbf r,\sigma)\,\nabla\phi_\lambda(\mathbf r,\sigma)\big], &
\mathbf s_q(\mathbf r)&=\sum_{\lambda\in q,\,\sigma\sigma'}\phi_\lambda^*(\mathbf r,\sigma)\,\langle\sigma|\hat{\boldsymbol\sigma}|\sigma'\rangle\,\phi_\lambda(\mathbf r,\sigma'), \\
\mathbf T_q(\mathbf r)&=\sum_{\lambda\in q,\,\sigma\sigma'}\nabla\phi_\lambda^*(\mathbf r,\sigma)\cdot\nabla\phi_\lambda(\mathbf r,\sigma')\,\langle\sigma|\hat{\boldsymbol\sigma}|\sigma'\rangle, &
J_{q,\mu\nu}(\mathbf r)&=\sum_{\lambda\in q,\,\sigma\sigma'}\operatorname{Im}\big[\phi_\lambda^*(\mathbf r,\sigma)\,\nabla_\mu\phi_\lambda(\mathbf r,\sigma')\big]\,\langle\sigma|\hat\sigma_\nu|\sigma'\rangle.
\end{aligned}
$$

The tensor-kinetic density $\mathbf F$, which appears in the tensor part of the functional, is defined, e.g., in Refs. [51,57]. Here, $q=n\,(p)$ stands for neutron (proton). From these densities, one can define the isoscalar ($t=0$) and isovector ($t=1$) densities and currents as

$$ \rho_0=\rho_n+\rho_p,\qquad \rho_1=\rho_n-\rho_p, $$

and analogously for $\tau_t$, $\mathbf j_t$, $\mathbf s_t$, $\mathbf T_t$, and $\mathsf J_t$. With the above definitions, the full version of the Skyrme energy functional can be written schematically as

$$
\mathcal H=\mathcal H_0+\sum_{t=0,1}\Big\{C_t^{s}\,\mathbf s_t^2+C_t^{\Delta s}\,\mathbf s_t\cdot\Delta\mathbf s_t+C_t^{T}\Big(\mathbf s_t\cdot\mathbf T_t-\sum_{\mu\nu}J_{t,\mu\nu}^2\Big)+C_t^{F}\Big[\mathbf s_t\cdot\mathbf F_t-\tfrac12\Big(\sum_\mu J_{t,\mu\mu}\Big)^2-\tfrac12\sum_{\mu\nu}J_{t,\mu\nu}J_{t,\nu\mu}\Big]+C_t^{\nabla s}\,(\nabla\cdot\mathbf s_t)^2\Big\},
$$

where $\mathcal H_0$ is the basic Skyrme functional used in the Sky3D code [56] and in most TDHF calculations; it contains the kinetic term, the $t_0$-$t_3$ density terms, the effective-mass and current terms $\rho_t\tau_t-\mathbf j_t^2$, the gradient terms $\rho_t\Delta\rho_t$, and the time-even and time-odd spin-orbit terms $\rho_t\nabla\cdot\mathbf J_t+\mathbf s_t\cdot\nabla\times\mathbf j_t$, as written out explicitly in Ref. [56]. In our code, we incorporated the full version of the Skyrme energy functional as shown in Eq. (10), including all the terms from the central, spin-orbit and tensor forces. These new terms have been clarified to be especially important in studies of nuclear structure. The coupling constants $A$ and $B$ appearing in Eqs. (10) and (11) have been defined in Refs. [51,57]. Some authors use the coupling constants $C$, which are the sums of the parameters $A$ and $B$, $C=A+B$. The terms with the coupling constants $A$ come from the Skyrme central and spin-orbit forces, while those with the $B$ parameters are from the tensor force. In the calculations, we set $C_t^{\nabla s}=C_t^{\Delta s}=0$ because the terms containing the gradient of the spin density may cause spin instabilities both in nuclear structure and in reaction studies, as pointed out in Refs. [28,51].
In the energy functional Eq. (10), the spin-current pseudotensor density $\mathsf J$ is expressed in its Cartesian components $J_{\mu\nu}$. In Ref. [58] the spin-current density has been decomposed into pseudoscalar, (antisymmetric) vector, and (symmetric) traceless pseudotensor components as

$$ J_{\mu\nu}=\tfrac13\,\delta_{\mu\nu}\,J^{(0)}+\tfrac12\,\epsilon_{\mu\nu\kappa}\,J^{(1)}_{\kappa}+J^{(2)}_{\mu\nu}. $$

Here $\delta_{\mu\nu}$ is the Kronecker symbol and $\epsilon_{\mu\nu\kappa}$ is the Levi-Civita tensor. The pseudoscalar $J^{(0)}$, vector $J^{(1)}$, and pseudotensor $J^{(2)}$ components are given in terms of the Cartesian form as

$$ J^{(0)}=\sum_\mu J_{\mu\mu},\qquad J^{(1)}_{\kappa}=\sum_{\mu\nu}\epsilon_{\kappa\mu\nu}J_{\mu\nu},\qquad J^{(2)}_{\mu\nu}=\tfrac12\big(J_{\mu\nu}+J_{\nu\mu}\big)-\tfrac13\,\delta_{\mu\nu}\,J^{(0)}. $$

The vector spin-current density $\mathbf J^{(1)}(\mathbf r)\equiv\mathbf J(\mathbf r)$ is often called the spin-orbit current, as it enters the spin-orbit functional in Eq. (11). The terms of the energy functional involving the spin-current density in Eq. (10) can instead be expressed as

$$ \sum_{\mu\nu}J_{\mu\nu}^2=\tfrac13\big(J^{(0)}\big)^2+\tfrac12\,\mathbf J^2+\sum_{\mu\nu}\big(J^{(2)}_{\mu\nu}\big)^2. $$

To test the accuracy of the numerical calculations, the energy functional involving the spin-current density has been implemented using both of the above approaches in our code. In the next section, we will show that the energy contributions from the two approaches are identical, as they should be. The set of nonlinear TDHF equations has been solved on a three-dimensional Cartesian coordinate grid without any symmetry restrictions. We calculate the ground states of 16O and 40Ca in a numerical box of 24 × 24 × 24 fm³. For the dynamical evolution of the reaction 16O+40Ca, we used a box of 32 × 24 × 32 fm³ with a grid spacing of 1.0 fm. The initial distance between the projectile and target is taken to be 20 fm. From infinity to the initial distance, the nuclei are assumed to move on a pure Rutherford trajectory so that the initial boost is properly treated in the TDHF evolution. We expand the time propagator in a Taylor series up to sixth order and employ a time step Δt = 0.2 fm/c in the dynamical evolution. The choice of these parameters guarantees a good numerical accuracy during the dynamical evolution for all the cases studied here. The total TDHF energy and particle number are well conserved, shifting by less than 0.1 MeV and 0.01, respectively.
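The equivalence of the Cartesian and coupled forms of the J² terms, used below as a check of the numerical implementation, can be verified directly for an arbitrary 3×3 tensor. A minimal NumPy sketch (the random tensor is illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.normal(size=(3, 3))          # a sample Cartesian spin-current tensor

# Pseudoscalar, vector and traceless symmetric pseudotensor parts.
J0 = np.trace(J)
Jvec = np.array([J[1, 2] - J[2, 1], J[2, 0] - J[0, 2], J[0, 1] - J[1, 0]])
J2 = 0.5 * (J + J.T) - (J0 / 3.0) * np.eye(3)

lhs = np.sum(J * J)                                        # Cartesian form
rhs = J0**2 / 3.0 + 0.5 * Jvec @ Jvec + np.sum(J2 * J2)    # coupled form
assert np.isclose(lhs, rhs)
print(lhs, rhs)   # identical up to round-off
```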
Results and discussions
We employ the full version of Skyrme energy functional as shown in Eq. (10) in our calculations. The static properties and reaction dynamics are treated with the same energy density functional and a unified theoretical framework. To examine the accuracy of our code, we compared our results with those obtained by other code in three aspects. First, we have reproduced the upper fusion threshold energy in the collision 16 O+ 16 O reported in Ref. [28] within the accuracy of 1 MeV by our code. Second, we calculated the energy contribution from the new terms as a function of time, and our results reproduced those reported in Ref. [28] within a negligible discrepancy. Third, we implemented the energy functional involving the spin-current density in two approaches as shown in Eqs. (14) and (15) in our code, and found that the results with two approaches are identical as they should be and also reproduced those shown in Ref. [28].
In the present work, we employ the five Skyrme parametrizations SLy5 [60,61], SLy5t [50], and T22, T26 and T44 [51] to study the effect of the tensor force in the collision dynamics of 16O+40Ca. Table 2 lists the energy (in MeV) and radius (in fm) of the Coulomb barrier obtained with the frozen density approximation for the five Skyrme parametrizations, together with the experimental data [59], for the collision 16O+40Ca. Here the Coulomb barrier is obtained with the frozen density (FD) approximation within the EDF theory [62-64]. The interaction potential in the approaching phase can be expressed as

$$ V(R)=E\big[\rho_{P+T}\big](R)-E\big[\rho_P\big]-E\big[\rho_T\big], $$

where $\rho_{P+T}=\rho_P+\rho_T$ is the sum of the ground-state densities of projectile and target at the relative distance R, and $E[\rho_{P+T}](R)$ is the Skyrme EDF as shown in Eq. (10). Note that the Pauli principle and the coupling between the collective motion and intrinsic states have been neglected in the FD approximation. When the overlap of the two densities is small, e.g., at the position of the Coulomb barrier, the EDF with the FD approximation is a good tool to estimate the Coulomb barrier. However, at smaller relative distances, since the Pauli effect is strong, the FD approximation does not properly account for the interaction potential [65]. The energy and radius of the Coulomb barrier with SLy5t are observed to be exactly the same as those with SLy5 for 16O+40Ca, as expected, because the tensor force makes no contribution to the ground-state EDF of the spin-saturated nuclei 16O and 40Ca. There exist some differences among the T22, T26 and T44 parameter sets. These differences, for the spin-saturated system, come from the rearrangement of the other terms of the Skyrme EDF in the fit of the parametrizations. Note that the structure of 16O and 40Ca, for instance the matter and charge radii, differs slightly for the T22, T26 and T44 parameters, which may cause a slight change of the barrier height. The Coulomb barrier with FD-EDF overestimates the experimental data due to its omission of the coupling between the collective motion and the single-particle degrees of freedom [66].
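To make the frozen-density construction concrete, the sketch below scans V(R) = E[ρ_P + ρ_T](R) − E[ρ_P] − E[ρ_T] on a 1D grid with a deliberately schematic functional: point-charge Coulomb repulsion plus an attraction proportional to the density overlap. The Gaussian widths and the attraction strength C_attr are invented for illustration, and the resulting numbers are not expected to reproduce the Skyrme-EDF barriers of Table 2.

```python
import numpy as np

# 1D toy version of the frozen-density potential
#     V(R) = E[rho_P + rho_T](R) - E[rho_P] - E[rho_T]
# with a schematic functional (not the Skyrme EDF).
z = np.linspace(-30.0, 30.0, 1201)                 # fm
dz = z[1] - z[0]

def gauss_density(center, width, A):
    rho = np.exp(-((z - center) / width) ** 2)
    return A * rho / (rho.sum() * dz)              # normalized to mass A

rho_P = gauss_density(0.0, 2.7, A=16)              # 16O, width illustrative
e2, ZP, ZT, C_attr = 1.44, 8, 20, -60.0            # e^2 in MeV fm; toy strength

def V_fd(R):
    rho_T = gauss_density(R, 3.6, A=40)            # 40Ca shifted to distance R
    overlap = np.sum(rho_P * rho_T) * dz
    return ZP * ZT * e2 / R + C_attr * overlap     # Coulomb + toy attraction

R_vals = np.arange(6.0, 16.0, 0.1)
V = np.array([V_fd(R) for R in R_vals])
i = np.argmax(V)
print(f"toy barrier: V_B = {V[i]:.1f} MeV at R_B = {R_vals[i]:.1f} fm")
```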
The upper fusion threshold is quite sensitive to the details of the Skyrme EDF, as reported in Ref. [3]. The perturbative inclusion of the tensor force in SLy5t increased the upper threshold by 3 MeV. SLy5 and T22 have the same isoscalar coupling constants, and the upper fusion threshold is found to decrease with decreasing isovector coupling. The same trend is also observed for T26 and T44. Comparing T22 and T44, which have the same isovector coupling, an increase of the isoscalar coupling increases the upper fusion threshold. Our results for 16O+40Ca are consistent with the findings in Ref. [28] for 16O+16O. The energy contributions from the terms involving time-odd densities and currents, (a) s · F, (b) s · T, (c) s² with the t0 parameter, and (d) s² with the t3 parameter, are examined for 16O+40Ca head-on collisions with the SLy5 and SLy5t parametrizations at E_c.m. = 170 MeV. The results are shown in Fig. 1. These are deep-inelastic collisions, as seen from Table 3. These terms were not included in the Sky3D code [56]. At the initial time, the energy contributions from these terms for both SLy5 and SLy5t are zero, as expected, because these time-odd terms contribute nothing for the ground state of even-even nuclei. Since the term s · F comes from the pure tensor force, its energy with SLy5 remains zero during the time evolution. For SLy5t, because the inclusion of the tensor force changes the time-dependent mean field and hence the densities s and F themselves, the energy contribution remains zero in the early stage and then starts to oscillate. Since the s · T term comes from both the central and tensor forces, its energy with SLy5 evolves with time, while for SLy5t a more pronounced effect appears compared with SLy5. The s² terms from the central force are found to show similar trends.
In order to test the numerical realization of our code and to understand the contribution of the tensor force to the energy functional, we calculate the energy contribution from the J² terms both in the coupled form and in the Cartesian form, as given in Eqs. (14) and (15). The left column in Fig. 2(a-d) shows the energy contributions of the scalar, vector, and tensor components of J², and the summation of these three terms, while the right column in Fig. 2(e-h) shows the contributions from the diagonal, symmetric and antisymmetric components, together with the corresponding summation. The total contributions calculated with the two approaches are exactly identical, as shown in Fig. 2(d) and (h), as they should be.
We also check the final kinetic energy of the fragments for the case of E c.m. =170 MeV, which is 28.1 MeV and 26.3 MeV for SLy5 and SLy5t, respectively. This energy provides the amount of energy transferred from relative motion to internal degrees of freedom, and could be a direct measure of the energy dissipation.
The contribution to the total energy from the J² term is shown in Fig. 3 for the five sets of Skyrme parameters in the collision 16O+40Ca at E_c.m. = 46.5 MeV and b = 5.0 fm. In the early stage of the dynamical evolution, the energy from J² stays constant at its ground-state value. As time evolves, these terms become highly excited in the dynamical process and have an evident effect in heavy-ion collisions of the spin-saturated system 16O+40Ca, while they have a negligible effect in the ground state of spin-saturated nuclei. The perturbative addition of the tensor terms with SLy5t gives an opposite sign with respect to the other four forces. The TIJ forces contribute to the energy up to a few MeV.
Conclusion
We study the role of the tensor force within TDHF theory for the collision 16O+40Ca. The full tensor force is incorporated in our TDHF implementation. The calculations are carried out in three-dimensional, symmetry-unrestricted Cartesian coordinates. We employ the five Skyrme parametrizations SLy5, SLy5t, T22, T26 and T44, spanning a wide range of isoscalar and isovector tensor couplings, to study the role of the tensor force in heavy-ion collision dynamics. The tensor force is found to change the upper fusion threshold energy by a few MeV in the spin-saturated system 16O+40Ca.
Hepatitis C Viral Heterogeneity Based on Core Gene and an Attempt to Design Small Interfering RNA Against Strains Resistant to Interferon in Rawalpindi, Pakistan
Background Global prevalence of Hepatitis C Virus (HCV) infection corresponds to about 130 million HCV-positive patients worldwide. The only drug that effectively reduces the viral load is interferon-α (IFN-α), and currently the combination of IFN and ribavirin is the treatment of choice. Objectives The present study aimed to resolve the genotypes, based on the core gene, that might affect the response to interferon therapy. Furthermore, an attempt was made to propose a powerful therapeutic approach by designing siRNA from sequences of the same patients who remained resistant to IFN in this study. Patients and Methods To achieve these objectives, sequence analysis was performed in five HCV ELISA-positive subjects who had completed IFN treatment. The Neighbor Joining (NJ) method was used to study the evolutionary relationships. Atomic models were predicted using the online software PROCHECK and i-TASSER. Results Two new genotypes were reported for the first time, namely 4a from the suburban region of Rawalpindi and 6e from Pakistan. According to the Ramachandran plots, satisfactory atomic models were considered useful for further studies, i.e., to calculate HCV genotype conservation at the structural level, to find critical binding sites for drug design, or to silence those binding sites using appropriate siRNA. A single siRNA can be used to inhibit HCV RNA synthesis against genotypes 3 and 4, as the predicted siRNAs originated from the same domain of the studied HCV core region in both genotypes. Conclusions We conclude that any change or mutation in the core region might cause HCV strains to resist IFN therapy. Therefore, further understanding of the complex mechanisms involved in disrupting the viral response to therapy would facilitate the development of more effective therapeutic regimens. Additionally, a single designed siRNA can be used as an alternative to current therapy against more than one resistant HCV genotype.
Background
Hepatitis C virus (HCV) is an endemic infection worldwide, and its distribution varies geographically. Prevalence estimates are much higher in developing countries than in developed countries (1). Pakistan carries one of the world's largest burdens of chronic hepatitis disease and death due to liver failure and hepatocellular carcinoma (2). More than 10 million people, up to 6 % of the total population of Pakistan, are suffering from HCV, with high morbidity and mortality (3).
Phylogenetic analyses have identified six major lineages, namely genotypes 1-6. These groups were further subdivided into several subtypes. The creation of quasispecies is possible due to the high mutation rate of hepatitis virus strains, even within a single infected individual. On the basis of genetic similarities, numerous viral strains have been categorized into groups, types, and subtypes. Genotype 1 is the most common lineage in North and South America as well as in Europe (4). However, the distribution of genotypes in the Asia-Pacific region is diverse. In contrast to Japan and China, where the predominant genotype is 1, and the Middle East, where the major genotype is 4, genotype 3 is common in Pakistan and is detected in 67-87 % of cases (5). An interesting fact about genotype 3 is that besides Pakistan, India, and Bangladesh, it is also the major genotype in Australia and New Zealand. Furthermore, in the context of Pakistan, Idrees et al. (3) have reported that genotype 3a is the most common subtype in all provinces except Baluchistan, where the most common subtype is 1a. Reported HCV prevalence in Pakistan is much higher when compared with other countries of the region, such as India (0.9 %), Indonesia (2.1 %), and China (3.2 %) (6). Therapeutic approaches against HCV include antiviral treatment, inhibitors, and RNAi or siRNA. Interferon (IFN) alpha, a naturally occurring cytokine that increases the immune response against HCV, is considered the mainstay of therapy for chronic hepatitis C; injected PEG-IFN is hypothesized to function by mimicking this natural cytokine (7). Another antiviral agent, ribavirin (RBV), is devoid of considerable antiviral activity when used alone in HCV infection (5), but it significantly increases the antiviral effect of IFN when used in combination treatment (8). Various HCV genotypes respond to interferon in different ways (9). HCV genotype 1b (HCV-1b) is resistant to interferon, with a lower response rate of only 10-40 %, compared to other genotypes such as HCV-2a or HCV-2b, which show complete response in 60-90 % of cases (10,11). HCV-1b is the most frequent variant worldwide, with a high occurrence (37-80 %) in Asian, American, and European countries. Patients infected with the HCV-1b genotype suffer from a more active disease and are more prone to liver cirrhosis and hepatocellular carcinoma than patients with other HCV genotypes (12). Amantadine is another antiviral agent that reduces viral replication by interfering with virus uncoating or transcription of viral RNA. Moreover, a major research effort strives to develop 'Specifically Targeted Antiviral Therapy for HCV' (STAT-C). Improved knowledge of the molecular structure of HCV, its protein components, and the diverse stages of the viral replication cycle has led to the development of specific small-molecule inhibitors of viral enzymes. Some new antiviral drugs include telaprevir, boceprevir, danoprevir, nucleoside analogues, nucleotide analogues, non-nucleoside analogues, caspase inhibitors, and cyclophilin inhibitors (13). Drugs under development include small molecules of the protease inhibitor, polymerase inhibitor, and toll-like receptor drug classes. While many of these drugs seem to hold promise as either a primary or an adjunctive treatment for patients with chronic hepatitis C, they are years from market, and their safety and efficacy are uncertain in difficult-to-treat patients (14).
The use of siRNA is particularly valuable, as it binds directly to a specific mRNA and blocks gene expression at the post-transcriptional level, potentially offering a more powerful molecular therapeutic approach than current therapy; HCV is highly liable to RNAi-induced suppression, as inhibition of HCV RNA levels by targeting different genes using RNAi has been reported (15). The HCV core protein is involved in a whole array of host cell functions, including signal transduction and transcriptional regulation of genes in the liver. Many reports have shown that substitutions in the HCV core region result in enhanced insulin resistance, liver steatosis, oxidative stress, and hepatocellular carcinoma (HCC) (16). Current therapies against HCV demonstrate limited efficiency due to the development of viral resistance and the high mutation rate of HCV. The problem of viral mutants could be resolved by using a mixture of siRNAs against different sequences. Several studies have also revealed the feasibility of targeting host cellular and viral factors involved in HCV infection as potential therapeutic targets (17,18).
Objectives
In the present study, the genotypes of five HCV-positive patients unresponsive to IFN therapy were resolved. Further, an attempt was made to design an anti-HCV siRNA from the studied samples that could be employed against any HCV genotype.
Patient and Sample Selection
HCV enzyme-linked immunosorbent assay (ELISA)-positive individuals who had completed IFN treatment were randomly selected from Rawalpindi General Hospital. Blood samples were collected from patients after obtaining informed written consent.
RNA Extraction, cDNA Synthesis, and Amplification of Core Region
Qualitative detection of HCV RNA was performed using reverse transcriptase (RT) PCR. Briefly, 150 μl of patient serum was used to isolate RNA with a commercially available kit (NucleoSpin RNA Virus, Macherey-Nagel) according to the manufacturer's instructions. Complementary DNA (cDNA) of the partial HCV core region was synthesized using 100 units of Moloney murine leukemia virus (MMLV) reverse transcriptase (Fermentas, USA) with 10 μM of the outer antisense primer. Two rounds of PCR amplification were performed (first-round PCR and nested PCR) with five units of Taq DNA polymerase (Fermentas, USA) in a 25 μl reaction mixture. External PCR conditions in the thermal cycler were as follows: an initial denaturation step at 95°C for 3 minutes, followed by 30 cycles of 94°C for 30 seconds, 55°C for 30 seconds, and 72°C for 1 minute, with a final extension at 72°C for 3 minutes. Internal PCR conditions were the same except that a different set of inner primers was used, with an extended 5-minute annealing step in the first cycle of amplification. Nested PCR products were directly sequenced on a Beckman Coulter CEQ 8000 sequencer after purification with a PCR Product Purification Kit (Genomed).
Phylogenetic Analysis
The five obtained sequences were aligned with ClustalW, and the similarity of the sequences with those already reported in the database (http://blast.ncbi.nlm.nih.gov/Blast.cgi) was determined by Nucleotide BLAST (nBLAST) (19). Tajima's test (20) was applied, and the Neighbor Joining (NJ) method was used for phylogenetic tree construction (21). Statistical robustness was assessed with 500 bootstrap replicates, and the whole analysis was carried out in MEGA 5 (22).
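The study built the tree with ClustalW alignments and MEGA 5; a roughly equivalent NJ construction can be sketched with Biopython as below. The alignment file name is hypothetical, and the 500-replicate bootstrapping performed in MEGA is omitted for brevity.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Read a pre-computed multiple alignment (e.g., ClustalW output;
# the file name is hypothetical).
alignment = AlignIO.read("hcv_core_aligned.aln", "clustal")

# Pairwise distance matrix from simple sequence identity, then NJ clustering.
dm = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(dm)

Phylo.draw_ascii(tree)   # quick text rendering of the unrooted NJ tree
```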
Protein Structure and Function Prediction
Protein structure and function were predicted using the i-TASSER server after translating the nucleotide sequences into amino acids. 3D models were built based on multiple-threading alignments and i-TASSER assembly simulations; functional insights were then derived by matching the predicted models with protein function databases (23,24).
Stereo Chemical Evaluation of 3D Protein Models
The 3D structural models built using i-TASSER were evaluated with PROCHECK, which checks the stereochemical quality of a protein structure and produces a number of plots in PostScript format analyzing its overall and residue-by-residue geometry. PDB files of the 3D models were uploaded, and Ramachandran plots were used to evaluate the five models predicted by i-TASSER for each sequence.
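PROCHECK itself computes the Ramachandran statistics; as a rough illustration of what the plot is built from, the sketch below extracts backbone phi/psi angles from a predicted model with Biopython. The "favored" check here is a crude half-plane, not PROCHECK's empirical contour regions, and the PDB file name is hypothetical.

```python
import math
from Bio.PDB import PDBParser, PPBuilder

# 'model1.pdb' is a hypothetical i-TASSER model file.
structure = PDBParser(QUIET=True).get_structure("model", "model1.pdb")

angles = []
for pp in PPBuilder().build_peptides(structure):
    for phi, psi in pp.get_phi_psi_list():
        if phi is not None and psi is not None:   # chain termini have None
            angles.append((math.degrees(phi), math.degrees(psi)))

if angles:
    # Very crude stand-in for the favored region: the broad alpha/beta
    # area with negative phi.
    favored = sum(1 for phi, psi in angles if phi < 0)
    print(f"{100 * favored / len(angles):.1f}% of {len(angles)} residues "
          f"have phi < 0")
```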
siRNA Prediction for HCV
Antiviral siRNA prediction was made for the HCV core region. siRNA sequences were selected based on their degree of conservation, defined as the proportion of viral sequences targeted with complete matches by the corresponding siRNA. siDirect, a highly effective and target-specific online design tool, was employed to predict siRNAs. The five sequenced HCV samples were pasted in FASTA format, and the program was run implementing the Ui-Tei algorithms, which underlie siDirect (25). The output page displays the siRNA sequence and the siRNA position.
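siDirect applies the Ui-Tei rules server-side; as an illustration, the sketch below scans a target for 19-nt duplex cores satisfying those rules as they are commonly summarized in the literature (the exact siDirect implementation may differ, and the target fragment is illustrative only).

```python
def ui_tei_ok(site):
    """Check a 19-nt target site against the four Ui-Tei rules (as commonly
    summarized):
      1. A/U at the 5' end of the antisense strand (= 3' end of the site)
      2. G/C at the 5' end of the sense strand (= 5' end of the site)
      3. >= 4 A/U among the 7 bases pairing with the antisense 5' end
      4. no G/C stretch of 10 nt or more
    """
    site = site.upper().replace("T", "U")
    if site[-1] not in "AU":                    # rule 1
        return False
    if site[0] not in "GC":                     # rule 2
        return False
    if sum(b in "AU" for b in site[-7:]) < 4:   # rule 3
        return False
    run = best = 0
    for b in site:                              # rule 4
        run = run + 1 if b in "GC" else 0
        best = max(best, run)
    return best < 10

# Illustrative HCV core-like fragment (not one of the study sequences).
target = "AUGAGCACGAAUCCUAAACCUCAAAGAAAAACCAAACGUAACACCAACCGUCGCCCACAG"
hits = [(i + 1, target[i:i + 19]) for i in range(len(target) - 18)
        if ui_tei_ok(target[i:i + 19])]
for pos, seq in hits[:5]:
    print(pos, seq)
```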
Phylogenetic Analysis
The sequences aligned with the multiple alignment software ClustalW were further subjected to tree construction using MEGA 5 to find out the relationships between the sequences, novel genotypes, subtypes, and variants. The similarities of the studied samples with reference sequences, and the genotypes thus resolved, are summarized in Table 1.

Table 1. Similarity of the studied samples with reference sequences and the resolved genotypes.

Sample       Accession number of reference sequence   Similarity, %   Genotype
Pak-01-10    AB523124                                 98              3b
Pak-02-10    DQ988076                                 96              4a
Pak-03-10    AB301826                                 98              6e
Pak-04-10    DQ777803                                 93              3b
Pak-05-10    EU81436                                  97              3a

The unrooted NJ tree constructed for the studied samples with reference sequences (Africa, Europe, Asia, North America, and South America) from Los Alamos consisted of two main clades, showing the presence of separate genetic lineages (Figure 1). An active rate of mutation in the HCV core region was shown by the tree topology in all the different geographical regions, i.e., European strains (clade 2, clusters I, II, and III) and North American and South American strains (clade 2, cluster III). Sequences from Asia appeared at distinct positions in the tree, showing a high level of diversity (clade 2, clusters I, II, V, VI) (Figure 1). In clade 1, a single cluster showed a clear independent clustering of the studied sequenced samples. Clade 1 showed an active and early branching pattern of subtype 3b in the Pakistani population, depicting that genotype 3 is actively evolving among IFN-resistant strains of HCV in the inhabitants of Pakistan.
3D Structural Analysis
Models for the studied sequences were first built using i-TASSER, which generates up to five full-length atomic models (ranked based on cluster density) for each of the studied samples with resolved genotypes. Ramachandran plots were then built through PROCHECK to evaluate the 3D structural models predicted by i-TASSER (Figure 2A-E) and to determine which of the five predicted atomic models is most likely to exist. The PROCHECK analysis of a protein model shows whether the amino acid residues lie in the "favored region" or "disallowed region" of the plot. For a good protein model, there must be ≥ 90 % of residues in the most favored region or < 2 % in the disallowed region of the plot. Considering the Ramachandran plots, model 1 of Pak-03-10 was nearly adequate for further analysis (Figure 3C), as close to 90 % (88 %) of the residues were found to be in the most favored regions, while less than 2 % (1.1 %) were in the disallowed region. Analysis of the Ramachandran plots for the remaining four models revealed that only model 1 should be analyzed further. Similarly, model 4 of Pak-01-10 (Figure 3A), model 4 of Pak-02-10 (Figure 3B), model 5 of Pak-04-10 (Figure 3D), and model 3 of Pak-05-10 (Figure 3E) were considered satisfactory (Figure 3B-3G). Detailed scores of the Ramachandran plots for all five sequences are given in Table 2.
Models predicted by i-TASSER with the highest C-score as well as satisfactory plot statistics (< 75 %) can be further used for 3D model analysis, which could help to predict cleavage sites or to recognize phosphorylation sites.
Tajima's Test of Neutrality
Tajima's test of neutrality compares the number of segregating sites per site with the nucleotide diversity. A site is considered segregating if, in a comparison of sequences, there are two or more nucleotides at that site. The Tajima test was calculated using MEGA 5. All positions containing gaps and missing data were eliminated.
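MEGA 5 computes the statistic internally; for reference, the standard Tajima (1989) formula can be re-implemented in a few lines. The toy alignment is illustrative, not the study sequences.

```python
from itertools import combinations

def tajimas_d(seqs):
    """Tajima's (1989) D for aligned sequences; gap columns are skipped."""
    n = len(seqs)
    cols = [c for c in zip(*seqs) if "-" not in c]
    S = sum(len(set(c)) > 1 for c in cols)              # segregating sites
    if S == 0:
        return 0.0
    pairs = n * (n - 1) / 2
    k = sum(sum(a != b for a, b in combinations(c, 2)) for c in cols) / pairs
    a1 = sum(1.0 / i for i in range(1, n))
    a2 = sum(1.0 / i**2 for i in range(1, n))
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n**2 + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    e1, e2 = c1 / a1, c2 / (a1**2 + a2)
    return (k - S / a1) / (e1 * S + e2 * S * (S - 1)) ** 0.5

# Toy aligned fragments, illustrative only (not the study sequences).
seqs = ["ACGTACGTAC", "ACGTACGAAC", "ACGAACGTAC", "ACGTACGTAA", "ACGTTCGTAC"]
print(f"Tajima's D = {tajimas_d(seqs):.4f}")
```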
Prediction of siRNA for HCV Core
Highly effective siRNA sequences were selected using guidelines established through an extensive study of the relationship between siRNA sequences and RNAi activity, implemented in the online software siDirect. siRNAs were predicted against the five HCV samples resolved in the present study (Table 3). The predicted siRNAs mapped to the same domain of the core region, showing that the RNA- and DNA-binding domain is conserved in the core region of genotypes 3 and 4; thus, a single siRNA can be used against both genotypes to inhibit HCV RNA synthesis.
Discussion
Genotype is one of the strongest predictive factors of sustained virological response (26). In the present study, four different genotypes were revealed among five HCV-positive samples resistant to IFN, while two samples remained without recognized genotypes. Genotype 3 is prevalent in Pakistan (27,28), and an increase in the number of genotype 3 (a and b) patients over time in the Pakistani population has been observed by various scientists (29,30). The present study demonstrates that genotypes 3 (a and b), 4a, and 6e do not respond to IFN therapy. Genotype 4a is reported for the first time in this study from suburban areas of Rawalpindi, in sample Pak-02-10. This is the most important and prevalent strain of Egypt (31), North Africa, and the Middle East (32,33). Iqbal et al. (26) and Idrees et al. (3) reported genotype 4 in blood samples of the Pakistani population from other geographical regions of the country. The possible existence of this genotype in Pakistan might be due to the prevalence of genotype 4 in the neighboring country Iran, with its geographical location near Europe and the Middle East (29,30). Genotype 4 has been reported to be frequently coupled with severe cirrhosis and a reduced response to interferon therapy (34,35). Similar to Pakistan, HCV genotype 4 is also infrequent in the United States, and there are few published data regarding the response to therapy in patients with HCV genotype 4 infection in both Pakistan and the United States (36). Genotype 6e is reported for the first time in Rawalpindi, Pakistan, in the sample Pak-03-10. Genotype 6 is frequent amongst patients from Southeast Asia (32,33). An earlier study found that genotype 6 was spread widely through Southeast Asia and was not limited to injection drug users (37). Rare genotypes reported from Pakistan include 1c, 1d, 2c, 2k, 3c, 3k, 4, 5a, 6a, and 6v (3,38). It is quite possible that these two new genotypes entered Pakistan from other countries through local persons who cross borders for jobs and trade. The shift in HCV genotype circulation needs to be paid more consideration. This raises an alarming signal regarding the major steps to be taken to reduce such infection, as this genotype is correlated with severe cirrhosis. It has been reported that failure to type a genotype is caused by mutations (39,40), or possibly by insertions, deletions, inversions, or translocations. HCV does not perform proofreading, and its high mutation rate has made it genetically successful according to the Darwinian theory of natural selection (41). So, the two samples that failed typing in the present study might be either novel variants of existing genotypes or representatives of recombinant forms of mixed genotypes. Earlier studies have reported 14 % and 14.13 % novel genotypes in the Lahore and Quetta cities of Pakistan, respectively. Therefore, on the basis of these facts, it can be concluded that genotype distribution is not even across all areas of Pakistan. The rate of distribution of genotypes and their genetic makeup varies at the subpopulation level within the same area.

Figure 3. Ramachandran plots constructed using PROCHECK for evaluation of the 3D structural models predicted with the help of i-TASSER. These PROCHECK builds of protein models demonstrate whether the amino acid residues exist in the "favored region" or "disallowed region" of the plot.
Phylogenetic Analysis
The NJ tree constructed for the studied samples with reference sequences from the Los Alamos National Laboratory (LANL) database consisted of two main clades, showing the presence of separate genetic lineages. The dendrogram was developed to find associations between the studied samples and reference accessions from other countries. Overall, it revealed that the HCV samples of the present study exhibited long branch lengths, indicating an ancient evolutionary history and a genetically stable genome composition; this might be attributed to suppressed or compromised immune pressure of the host. Previous reports support the evidence that in cases where the immune response is compromised, there is less chance of viral clearance (43,44). Secondly, it was evident that the evolutionary history of the virus is more ancient in Pakistan than in other countries. On the other hand, the accessions from around the world are actively mutating and more divergent; in other words, they are still in an active phase of evolution. Moreover, it is evident from the dendrograms that the disease has been endemic in Pakistan for a longer period than in the reference countries, as it is already established that HCV prevalence in Europe is not homogeneous with reference to the distribution of genotypes (45,46). Pybus et al. (37), particularly with reference to Asia, explained the origin and maintenance of HCV diversity and reported that the Asian model of evolution could be a baseline for investigating HCV spread in other continents. The similarity of Asian strains with all the reference strains might be due to migration events. This is demonstrated by the relationship with European strains; one study described a sizeable community of South Asians, such as Asian labor migrants, settled in European countries. Currently, approximately two million South Asians are living in Africa; some came in the late nineteenth and early twentieth centuries (47). The phylogenetic analysis depicts that the viral genome underwent various significant changes with time at different rates, among which the core region is considered the more diverse (48). A limited migration pattern has been identified among strains of Europe, North America, South America, and Africa, which have shown high diversity in their respective geographical regions; it has been reported previously that in areas of endemicity a highly divergent pattern is observed among the strains, suggesting a long infection duration (37). Some strains showed no branching in clade 2 of the tree, suggesting that probably no change occurred among them for years, as they were under high negative selection pressure due to environmental factors. Clade 2, cluster VII showed some ancient HCV genotype 3 strains that might be circulating in the suburban population of Rawalpindi. Clade 1 of the tree showed a clear independent cluster of the sequenced samples; early branching in the tree at level 1 clearly points to a history of viral evolution that is very ancient in Pakistan compared to other parts of the world (30,49). Pybus et al. (37) explained the origin of HCV diversity and reported that the Asian model of evolution could be a baseline for investigating HCV spread in other continents. Clade 1 showed an active and early branching pattern of subtype 3b in the Pakistani population, depicting that genotype 3 is actively evolving in the suburban population of Pakistan.
Tajima's Test of Neutrality
The test values indicated that the high mutation rate of HCV might be one of the determinants on which natural selection acts, thereby contributing to the divergence of viral species. At the beginning of HCV infection, there is a reduction in the viral population (50). Despite the positive value (D = 0.6568), the test indicates high levels of polymorphism and a reduced population size, thereby mediating a balancing selection process (51).
3D Protein Structure Evaluation
Protein models of the studied sequences were predicted using i-TASSER. These 3D structures would be helpful in predicting cleavage sites or recognizing phosphorylation sites. During HCV replication, after translation is completed, the HCV polyprotein is cleaved into at least ten distinct products. The order in which the cleavages occur, from the N-terminus to the C-terminus, is C-E1-E2-p7-NS2-NS3-NS4A-NS4B-NS5A-NS5B (52). These proteins help the virus to maintain its structural integrity and protect it against its host, and they also confer virulence and pathogenicity on the virus, as in the case of the envelope proteins. Interaction of some phosphorylation sites with kinases might be responsible for HCV resistance to the antiviral effects of IFN, which could be confirmed by analyzing these sites in different HCV genotypes (53). With the help of the predicted protein models, the cleavage and phosphorylation sites in HCV polyproteins can be predicted and further targeted for designing appropriate drugs against resistant strains.
siRNA Prediction for HCV Core
siRNAs were predicted against the five HCV samples resolved in the present study. Different domains of the core protein perform different functions; the siRNAs predicted against all the genotypes (3a, 4a and 3b) are positioned in the N-terminal region, from nucleotides 13-35, which contains the RNA- and DNA-binding domain (54). The N-terminal region of core induces apoptosis and necrosis to a greater extent than the C-terminal region (55). It has been demonstrated that siRNAs directed against the N-terminal and C-terminal domains of the HCV-3a core gene result in specific inhibition of HCV RNA synthesis (60-80 %) (56). siRNAs targeted against HCV structural genes efficiently silence full-length HCV particles and provide an effective therapeutic option against HCV infection (57). The siRNAs predicted against the HCV core originate from the same domain of the core region, showing that the RNA- and DNA-binding domain is conserved in the core region of genotypes 3 and 4; thus, a single siRNA can be used against both genotypes to inhibit HCV RNA synthesis. Multiple genotypes of HCV have been isolated throughout the world. The identification of HCV types and subtypes has major implications for HCV vaccine development. Characterization of these genetic groups is likely to facilitate and contribute to the development of an effective vaccine against HCV infection. Currently, in addition to HCV genotype 3 (3a and 3b), two new genotypes have been reported for the first time: 4a from Rawalpindi and 6e from Pakistan. As high genetic diversity is shown throughout the world by phylogenetic analysis, a universally protective vaccine requires the inclusion of genotype-specific epitopes. Herein, an initial effort has been made to design siRNAs against the resolved genotypes of the study samples.
Obesity enhances the response to neoadjuvant anti‐PD1 therapy in oral tongue squamous cell carcinoma
Abstract Objectives Previous studies have demonstrated that obesity may impact the efficacy of anti‐PD1 therapy, but the underlying mechanism remains unclear. In this study, our objective was to determine the prognostic value of obesity in patients with oral tongue squamous cell carcinoma (OTSCC) treated with pembrolizumab and establish a subtype based on fatty acid metabolism‐related genes (FAMRGs) for immunotherapy. Materials and Methods We enrolled a total of 56 patients with OTSCC who underwent neoadjuvant anti‐PD1 therapy. Univariate and multivariate Cox regression analyses, Kaplan–Meier survival analysis, and immunohistochemistry staining were performed. Additionally, we acquired the gene expression profiles of pan‐cancer samples and conducted GSEA and KEGG pathway analysis. Moreover, data from TCGA, MSigDB, UALCAN, GEPIA and TIMER were utilized to construct the FAMRGs subtype. Results Our findings indicate that high Body Mass Index (BMI) was significantly associated with improved PFS (HR = 0.015; 95% CI, 0.001 to 0.477; p = 0.015), potentially attributed to increased infiltration of PD1 + T cells. A total of 91 differentially expressed FAMRGs were identified between the response and non‐response groups in pan‐cancer patients treated with immunotherapy. Of these, 6 hub FAMRGs (ACSL5, PLA2G2D, PROCA1, IL4I1, UBE2L6 and PSME1) were found to affect PD‐1 expression and T cell infiltration in HNSCC, which may impact the efficacy of anti‐PD1 therapy. Conclusion This study demonstrates that obesity serves as a robust prognostic predictor for patients with OTSCC undergoing neoadjuvant anti‐PD1 therapy. Furthermore, the expression of 6 hub FAMRGs (ACSL5, PLA2G2D, PROCA1, IL4I1, UBE2L6 and PSME1) plays a pivotal role in the context of anti‐PD1 therapy and deserves further investigation.
| INTRODUCTION
The most common head and neck malignancies are thought to originate in the mucosal epithelium of the oral cavity, pharynx, and larynx. 1 According to a statistical study, tongue cancer in the United States in 2024 accounts for 19,360 newly diagnosed cases and 3,320 deaths. 2 One of the most prevalent tongue malignancies is oral tongue squamous cell carcinoma (OTSCC), and in recent years its frequency has increased. 3,4 Presently, OTSCC is treated with surgery, chemotherapy, radiotherapy, immunotherapy, and combination treatment. 4,5 Recently, there have been significant advancements in the development of drugs that target the interaction between the receptor known as programmed death-1 (PD-1) and its ligands, programmed death-ligand 1/2 (PD-L1/L2). 7,8 To date, the utilization of anti-PD1 therapy has exhibited promising outcomes in the treatment of HNSCC patients experiencing tumor progression. 9-12 Afterward, several randomized clinical studies proved that pembrolizumab prolongs the overall survival of patients with progressive HNSCC. 12,13 Additionally, anti-PD1 therapy has been used in the neoadjuvant setting in untreated patients with advanced tumors, with promising results. 6,14,15 Moreover, neoadjuvant anti-PD1 therapy holds potential for achieving tumor reduction while preserving organ function and facial appearance, thereby maximizing patient benefit and enabling more advanced patients to have opportunities for surgical treatment. In spite of the advancements made in research and treatment over the past 10 years, the low response rate among patients with HNSCC poses a significant limitation to the effectiveness of immune checkpoint inhibitor (ICI) treatment. Additionally, both clinics and biomedical science face considerable challenges in dealing with OTSCC. 16,17 Hence, it is crucial to promptly discover potential biomarkers capable of precisely predicting prognosis and forecasting the effectiveness of immunotherapy.
In the past few years, the incidence of obesity has increased significantly, and population data link obesity to the increased incidence of several common cancers. 18,19 Obesity has thus emerged as a pressing global concern. Previous studies have suggested that many of the diseases associated with obesity are linked to dysfunction of the immune system. 20 T lymphocytes play a crucial role in the immune system, regulating key elements of the immune response, 21 and checkpoint blockade therapies aimed at T cell responses are proving effective in the clinical treatment of cancer patients. 22 Obesity can lead to chronic inflammation, which promotes an exhausted T cell phenotype. 23 The obese state, with chronic inflammation and the subsequent generation of exhausted T cells, may enhance tumor progression while concurrently creating an environment conducive to ICIs. 24 Preclinical studies demonstrated that obesity enhanced tumor growth in association with dysfunctional CD8 T cells, 25,26 yet in several tumor models, treatment of obese mice with anti-PD1 slowed tumor growth or even led to complete tumor rejection. 27,28 The first major report of this effect with immune checkpoint blockade (ICB) found that obesity was associated with a significant reduction in disease progression and death risk in patients with metastatic melanoma receiving ICB treatment. 29 A recent study also showed that obese Asian patients with advanced non-small cell lung cancer who received immune checkpoint inhibitors had better overall survival, independent of muscle or fat mass. 30 Given the significant impact of obesity and the potential of checkpoint blockade therapy to cure cancer, a more comprehensive understanding of how obesity affects T cells is imperative to optimize the use of these therapies in the growing number of obese patients.
Obesity can also alter tumor lipid metabolism. 31 As a crucial branch of lipid metabolism, fatty acid metabolism (FAM) plays an indispensable role in numerous biological activities and holds great potential as an immunotherapy target. 32,33 Previous studies have identified a potential correlation between fatty acid metabolism and both the effectiveness of immunotherapy and prognosis in patients with malignancies. 34 For example, fatty acid metabolism-related genes (FAMRGs) are potentially useful for predicting prognosis and immunotherapy response in bladder cancer. 35 The latest research demonstrates that the reprogramming of fatty acid metabolism has a significant impact on the phenotype of immune cells infiltrating the melanoma microenvironment, and that biomarkers defining FAM molecular subtypes can independently predict prognosis and immunotherapy response in melanoma patients. 36,37 However, the prognostic and therapeutic relevance of systemic lipid metabolism abnormalities and FAM-related biomarkers in OTSCC remains unexplored.
Our study found that obese patients have more PD1+ T cell infiltration and may therefore benefit more from immunotherapy. We also identified 6 FAMRGs that were positively associated with the efficacy of immunotherapy and may serve as molecular indicators to predict the efficacy of ICIs. Our analysis workflow is shown in Figure 1.
| Clinical data collection
With the approval of the Institutional Review Board of Sun Yat-sen University Cancer Center (approval number: B2022-221-Y01, approval date: 2023-6-15), this study was granted a waiver of informed consent. The data of 56 OTSCC patients receiving anti-PD1 therapy from October 2013 to April 2022 at Sun Yat-sen University Cancer Center were retrospectively reviewed.
The stringent eligibility criteria were as follows: histologically confirmed squamous cell carcinoma in the resected specimen; primary lesion situated in the tongue; clinical stage cII, cIII, cIVA, or cIVB (NCCN Guidelines, version 1.2022); sufficient organ function; and absence of clinically significant abnormal findings on electrocardiography. Patients with less than 5 months of follow-up were excluded. Baseline BMI, OS, and PFS were obtained for the 56 patients. PFS was defined as the duration between the initiation of anti-PD1 therapy and either disease progression on radiological imaging or death due to disease. OS was defined as the period from the initiation of anti-PD1 therapy until death resulting from the disease, censored at the last follow-up.
| Gene set enrichment analysis (GSEA) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses
We obtained RNA sequencing data and corresponding clinical data of patients who received PD-1 inhibitor treatment from The Cancer Genome Atlas database (TCGA, https://portal.gdc.cancer.gov/) and NCBI GEO DataSets (https://www.ncbi.nlm.nih.gov), comprising 49 response and 42 non-response samples in melanoma, 8 response and 19 non-response samples in non-small cell lung cancer (NSCLC), 4 response and 7 non-response samples in renal cell carcinoma (RCC), and 21 response and 57 non-response samples in stomach adenocarcinoma (STAD).
GSEA was conducted to explore the distinct pathways associated with differential gene expression in cancer patients. The GSEA analysis was conducted on cohorts PRJEB23709, GSE135222, GSE67501, and PRJEB25780 for melanoma, NSCLC, RCC, and STAD, respectively. The selection of HALLMARK gene sets and KEGG gene sets was based on statistical significance, as indicated by the normalized enrichment score (NES) and false discovery rate (FDR).

F I G U R E 1 The workflow of the current work.
The analysis was conducted using the limma package, wherein differentially expressed genes (DEGs) were identified by applying significance thresholds of p-value <0.05 and |log2 FC| > 1 to distinguish between response and non-response samples. Subsequently, the biological functions associated with these DEGs in pan-cancer were systematically investigated using DAVID (https://david.ncifcrf.gov), a tool available for KEGG pathway analyses. A p-value <0.05 was considered statistically significant.
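As a minimal sketch of the thresholding step described above, in Python rather than R, assuming a limma-style results table with columns named logFC and P.Value (the file name and column names here are illustrative, not from the study):

    import pandas as pd

    # Hypothetical limma output: one row per gene, with log2 fold change in
    # "logFC" and the unadjusted p-value in "P.Value" (limma's default names).
    results = pd.read_csv("limma_response_vs_nonresponse.csv", index_col=0)

    # Thresholds used in the text: p-value < 0.05 and |log2 FC| > 1.
    degs = results[(results["P.Value"] < 0.05) & (results["logFC"].abs() > 1)]

    up = degs[degs["logFC"] > 0]    # higher in responders
    down = degs[degs["logFC"] < 0]  # lower in responders
    print(f"{len(degs)} DEGs: {len(up)} up, {len(down)} down")

The gene lists produced this way would then be submitted to DAVID for pathway annotation.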
| GEPIA
The GEPIA database (http://gepia.cancer-pku.cn/) provides RNA sequencing expression data from 9736 tumor and 8587 normal samples sourced from the TCGA and GTEx databases. In the present study, we examined the association between the expression of FAMRGs and PD-1 levels by using the GEPIA database to calculate Spearman's correlation coefficients.
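The correlation step can be illustrated with a short sketch; the expression vectors below are simulated stand-ins for the matched TCGA/GTEx values that GEPIA serves, so only the statistical call reflects the analysis described:

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)

    # Toy log2(TPM + 1) values for one FAMRG and for PDCD1 (PD-1) across
    # 100 hypothetical HNSCC samples, with a positive trend built in.
    famrg = rng.normal(5.0, 1.0, 100)
    pdcd1 = 0.6 * famrg + rng.normal(0.0, 1.0, 100)

    rho, p = spearmanr(famrg, pdcd1)
    print(f"Spearman rho = {rho:.2f}, p = {p:.1e}")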
| Analysis of fatty acid metabolism-associated genes in TIMER
The TIMER database (https://cistrome.shinyapps.io/timer/), 38,39 a comprehensive tool for analyzing tumor-infiltrating immune cells, was employed to examine the correlation between the expression levels of FAMRGs and both tumor purity and tumor-infiltrating immune cells in HNSCC.
| Immunohistochemistry staining
The tissue specimens were fixed in 10% formalin and embedded in paraffin wax, and 5 μm sections were cut from the tissue blocks. The sections were deparaffinized in xylene and passed through a graded alcohol series (75%, 85%, 95%, 100%). Antigen retrieval was performed with EDTA, followed by blocking with 5% goat serum. The tissue sections were then incubated with primary antibodies targeting CD4 (ZA-0519, ZSGB-BIO, China), CD8 (ZA-0508, ZSGB-BIO, China), CD20 (ZM-0039, ZSGB-BIO, China), and PD-1 (ZM-0381, ZSGB-BIO, China), followed by a 2-h incubation at room temperature with secondary antibodies (PV-6000, ZSGB-BIO, China). After incubation, the tissue sections were visualized with DAB. After staining, the sections were digitally scanned using a Leica Biosystems scanner (Germany) and analyzed in QuPath using nuclear and membrane algorithms trained by pathologists. Protein expression was assessed as the density of positive immune cells per square millimeter using QuPath image analysis.
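The density readout can be reproduced from a QuPath detection export along the following lines; the file name, class label, and region area below are assumptions for illustration, not values from the study:

    import pandas as pd

    # Hypothetical QuPath export: one row per detected cell, with the
    # classifier label in a "Class" column.
    cells = pd.read_csv("qupath_detections.csv")
    region_area_mm2 = 2.4  # area of the annotated region, assumed

    pd1_positive = (cells["Class"] == "PD1+").sum()
    print(f"PD1+ density: {pd1_positive / region_area_mm2:.1f} cells/mm^2")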
| Statistical analysis
The statistical analysis was conducted using SPSS software, version 22. Continuous data are described as means with 95% confidence intervals (CI). The Kaplan-Meier method was employed to analyze survival data for each group, with comparisons made using the log-rank and Wilcoxon tests. Comparisons between two groups were made using t tests. Cox multivariate hazard analysis was used to assess the impact of pre-specified prognostic factors. A p-value <0.05 was considered statistically significant. The key raw data have been uploaded to the Research Data Deposit public platform (www.researchdata.org.cn) with approval number RDDA2024932200.

| Baseline characteristics of patients

All patients were assigned to two groups according to the Chinese BMI classification. 41,42 Baseline patient and tumor characteristics of the BMI ≥ 24 and BMI < 24 groups are given in Table 2. At diagnosis, 37 patients were classified as having normal weight, while 19 were overweight. Clinical TNM stage, clinical T stage, and lymph node involvement did not differ significantly between the two BMI groups. Interestingly, overweight patients had a significantly higher pCR frequency than normal-weight patients (p = 0.024). The same trend was observed for hypercholesterolemia (p = 0.003) and metabolic syndrome (p = 0.003) at initial diagnosis. However, there was no significant difference in prognostic nutrition index (PNI), hypertriglyceridemia, or dyslipidemia.
| Survival outcomes of OTSCC patients treated with neoadjuvant anti-PD1 therapy
The duration until the final follow-up or death ranged from 6 to 30 months, with a median of 15 months. Of the 56 patients, 44 (78.6%) showed no evidence of disease at their most recent follow-up, 8 (12.5%) were alive with disease (five with local recurrence, two with distant metastasis, and one with both), and 6 (10.7%) died of disease. Univariate survival analyses were conducted on the distinct BMI groups (Figure 2A,B), and the effects of BMI on PFS were calculated using fully adjusted univariate and multivariate Cox regressions (Tables 3, 4). In univariate analysis, a statistically significant association was observed between high BMI and improved PFS (p = 0.027) (Figure 2A; Table 3). Covariates with p-value <0.1 were entered into the multivariable analysis, including BMI, smoking history, radiotherapy history, radiotherapy, cN, alcohol history, hypertension, hypertriglyceridemia, and PNI. In multivariate analysis, high BMI remained significantly associated with improved PFS (p = 0.005) (Table 4). Smoking history (p = 0.023), radiotherapy history (p = 0.017), and cN (p = 0.025) also emerged as independent predictors of PFS (Table 4); no significant associations were found in the other subgroups (Figure S1). All 6 cancer-related deaths occurred in the BMI < 24 group, with none in the BMI ≥ 24 group. However, the effect of increasing BMI on OS did not reach statistical significance (p = 0.224) (Figure 2B), which may reflect the small sample size and short duration of follow-up; a larger cohort may yield a statistical difference.
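The survival workflow reported here (Kaplan-Meier curves by BMI group, a log-rank comparison, and univariate screening at p < 0.1 followed by a multivariate Cox model) was run in SPSS; a Python sketch of the same workflow with the lifelines package is given below, on a simulated patient table, so all numbers are illustrative only:

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter, KaplanMeierFitter
    from lifelines.statistics import logrank_test

    rng = np.random.default_rng(1)
    n = 56
    # Toy patient table with hypothetical covariates.
    df = pd.DataFrame({
        "pfs_months": (rng.exponential(18, n) + 1).round(1),
        "event": rng.integers(0, 2, n),
        "bmi_high": rng.integers(0, 2, n),
        "smoking": rng.integers(0, 2, n),
        "cN": rng.integers(0, 3, n),
    })

    # Kaplan-Meier fits and log-rank test by BMI group.
    kmf = KaplanMeierFitter()
    for label, grp in df.groupby("bmi_high"):
        kmf.fit(grp["pfs_months"], grp["event"],
                label="BMI >= 24" if label else "BMI < 24")
        # kmf.plot_survival_function()  # optional plot
    lo, hi = df[df.bmi_high == 0], df[df.bmi_high == 1]
    lr = logrank_test(lo.pfs_months, hi.pfs_months,
                      event_observed_A=lo.event, event_observed_B=hi.event)
    print(f"log-rank p = {lr.p_value:.3f}")

    # Univariate screening at p < 0.1, then a multivariate Cox model.
    keep = []
    for cov in ["bmi_high", "smoking", "cN"]:
        cph = CoxPHFitter().fit(df[["pfs_months", "event", cov]],
                                duration_col="pfs_months", event_col="event")
        if cph.summary.loc[cov, "p"] < 0.1:
            keep.append(cov)
    if keep:
        multi = CoxPHFitter().fit(df[["pfs_months", "event"] + keep],
                                  duration_col="pfs_months",
                                  event_col="event")
        print(multi.summary[["exp(coef)", "p"]])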
To investigate the effects of BMI on immune infiltration in OTSCC, we measured the expression of CD4, CD8, CD20, and PD1 in the tumor immune microenvironment (TIM) by IHC in the 27 patients with available specimens (Figure 2C). The expression of PD1+ T cells was significantly elevated in the high-BMI group (Figure 2G), while no notable differences were detected in the overall expression levels of CD4, CD8, and CD20 between the two groups (Figure 2D-F). These findings suggest that obese OTSCC patients may benefit from anti-PD1 therapy through increased infiltration of PD1+ T cells. Obesity is a common cause of chronic inflammation, both systemically and at the tissue level, 43 and chronic inflammation with increased PD-1 expression in obese patients may increase the efficacy of anti-PD1 treatment for OTSCC.
| GSEA and KEGG
As previous studies have described, obesity is defined by an elevated BMI, typically due to excess adipose tissue. 44,45 Obesity leads to a state of meta-inflammation characterized by heightened levels of proinflammatory cytokines, glucose, leptin, and fatty acid metabolites, factors that have been shown to directly influence T cell responses. 46 In this research, we used an extensive range of bioinformatics methods to thoroughly investigate the influence of lipid metabolism on the efficacy of anti-PD-1 therapy. First, the biological role of lipid metabolism-associated genes (LMAGs) was illustrated through GSEA in pan-cancer. Enriching all pathways associated with lipid metabolism, we found that among the HALLMARK terms, the responder group exhibited significantly elevated NES values for the triglyceride metabolic process, lipid storage, regulation of fatty acid metabolic process, and fatty acid metabolism (Figure 3A-D). Second, we identified DEGs between response and non-response samples and performed KEGG pathway enrichment analyses. The responder group exhibited significant enrichment in lipid- and fatty-acid-related pathways, such as central carbon metabolism in cancer, the citrate cycle, insulin resistance, and lipid and atherosclerosis (Figure 3E-H). Collectively, these results indicate that fatty acid metabolism may play a significant role in the anti-PD1 therapy response in pan-cancer.
| Screening of differentially expressed FAMRGs
To investigate how the fatty acid metabolic process could increase the response rate to immunotherapy, we extracted DEGs between responders and non-responders from PRJEB23709, GSE135222, GSE67501, and PRJEB25780, respectively, via the limma analysis online tool. The clustering heatmap (Figure S2) and volcano plots depict the differentially expressed genes (Figure 4A-D). In total, 468 FAMRGs were downloaded from the Molecular Signatures Database (MSigDB) (Table S1). 40 Venn diagram software was then used to identify the differentially expressed FAMRGs in each pan-cancer dataset. Overall, there were 91 differentially expressed FAMRGs (counting genes appearing in more than one dataset), comprising 14 FAMRGs in melanoma, 20 in NSCLC, 29 in RCC, and 28 in STAD samples (Figure 4E,F; Table 5).
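The intersection step amounts to set operations between each cohort's DEG list and the MSigDB fatty-acid gene set; a minimal sketch with made-up gene lists follows (only the six hub genes named in this paper are real):

    # Hypothetical DEG lists per cohort crossed against the FAMRG set.
    famrgs = {"ACSL5", "PLA2G2D", "PROCA1", "IL4I1", "UBE2L6", "PSME1",
              "CPT1A"}
    degs_by_cohort = {
        "melanoma": {"ACSL5", "PLA2G2D", "MITF"},
        "NSCLC": {"IL4I1", "PSME1", "EGFR"},
        "RCC": {"PROCA1", "VHL"},
        "STAD": {"UBE2L6", "PSME1", "CDH1"},
    }

    per_cohort = {c: d & famrgs for c, d in degs_by_cohort.items()}
    pooled = sorted(set().union(*per_cohort.values()))
    for cohort, genes in per_cohort.items():
        print(cohort, sorted(genes))
    print("pooled differentially expressed FAMRGs:", pooled)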
| Identification of hub FAMRGs associated with the effectiveness of anti-PD1 treatment
A total of 91 differentially expressed FAMRGs were identified from the Venn diagrams. To ascertain the correlation between FAMRGs and the therapeutic efficacy of anti-PD1 treatment, we first explored the functions of these genes in fatty acid metabolism (Table S2). Second, we compared FAMRG expression between adjacent normal tissue and 24 tumor types using the UALCAN database; the results showed that ACSL5, PLA2G2D, IL4I1, PROCA1, UBE2L6, and PSME1 were significantly upregulated in most cancer types from TCGA (Figure 5A-F), particularly in HNSCC samples (Figure 5G-L). Third, we used the TIGER database (http://tiger.canceromics.org/) to compare differentially expressed FAMRGs. These results further showed higher ACSL5 and PLA2G2D expression in melanoma responder samples than in non-responder samples; likewise, the expression of IL4I1, PROCA1, UBE2L6, and PSME1 was notably elevated in responder tissues compared with non-responder tissues in NSCLC, RCC, and STAD, respectively (Figure 6A-F).
Additionally, overall survival analysis was conducted with the Kaplan-Meier plotter database (http://kmplot.com/analysis/); the results demonstrated that high ACSL5 expression in patients treated with anti-PD1 therapy was associated with significantly prolonged OS compared with low expression, and the same trend was found for PLA2G2D, IL4I1, PROCA1, UBE2L6, and PSME1 (Figure 6G-L). These 6 FAMRGs were therefore retained as candidate biomarkers for predicting the prognosis and efficacy of anti-PD1 therapy.
To explore whether the FAMRGs affect immune cell infiltration in the tumor microenvironment, we used the TIMER database to investigate the infiltration of 6 distinct immune cell types: B cells, CD8+ T cells, CD4+ T cells, macrophages, neutrophils, and dendritic cells. The findings indicated that ACSL5 expression was positively associated with CD8+ T cell (r = 0.405, p = 3.96e-20) and CD4+ T cell (r = 0.493, p = 8.28e-31) infiltration and negatively correlated with tumor purity (Figure 7A). We evaluated the other hub FAMRGs against the same immune cell types; the levels of PLA2G2D, PROCA1, IL4I1, UBE2L6, and PSME1 also exhibited a positive correlation with CD8+ T cell and CD4+ T cell infiltration (Figure 7B-F).
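TIMER's purity-adjusted associations are partial correlations; one common way to approximate a purity-adjusted Spearman correlation is to correlate the rank residuals after regressing out purity, as sketched below on simulated data (a construction in the same spirit, not TIMER's exact implementation):

    import numpy as np
    from scipy.stats import pearsonr, rankdata

    def partial_spearman(x, y, z):
        # Spearman correlation of x and y controlling for z: Pearson
        # correlation of the rank residuals after regressing out rank(z).
        rx, ry, rz = rankdata(x), rankdata(y), rankdata(z)
        resid = lambda r: r - np.polyval(np.polyfit(rz, r, 1), rz)
        return pearsonr(resid(rx), resid(ry))

    rng = np.random.default_rng(2)
    purity = rng.uniform(0.3, 0.9, 200)          # toy tumor purity
    cd8 = 1 - purity + rng.normal(0, 0.1, 200)   # toy CD8+ T cell level
    gene = 2 * cd8 + rng.normal(0, 0.3, 200)     # toy FAMRG expression

    r, p = partial_spearman(gene, cd8, purity)
    print(f"purity-adjusted rho = {r:.2f}, p = {p:.1e}")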
To determine whether the FAMRGs correlate with the T cell exhaustion biomarker PD1 in HNSCC, we used the GEPIA database to evaluate the correlation between each FAMRG and PD1 in HNSCC samples. ACSL5, PLA2G2D, PROCA1, IL4I1, UBE2L6, and PSME1 were each positively correlated with PD1 (Figure 7G-L). In summary, these results suggest that the fatty acid metabolic process might affect anti-PD1 therapy efficacy by regulating immune cell infiltration, and T cell exhaustion in particular.
| DISCUSSION
In the present study, we show that BMI can independently serve as an indicator for predicting the efficacy of anti-PD1 therapy in patients with OTSCC. Consistently, we confirm that BMI is positively correlated with PD1+ T cells in the tumor microenvironment (TME). Mechanistically, the GSEA and KEGG analyses revealed enrichment of fatty acid metabolism pathways in the responder group. Furthermore, we identified 6 FAMRGs (ACSL5, PLA2G2D, IL4I1, PROCA1, UBE2L6, PSME1) that were highly expressed in anti-PD1 therapy responders and positively correlated with PD1 expression and immune cell infiltration. We believe this study provides the first dataset assessing and validating the prognostic significance of BMI in predicting responses to immunotherapy in tongue cancer. Furthermore, our study highlights the critical role of fatty acid metabolism and T cell exhaustion in shaping the efficacy of immunotherapy.
Obesity is currently defined by an elevated BMI, typically due to excess adipose tissue (AT). In current research, ATs are characterized as heterogeneous organs that have a significant impact on the regulation of metabolism, 47 inflammation, 48 and anti-tumor immune responses. 49 In recent years, however, investigators have identified an "obesity paradox": the prevalence of obesity-associated malignancies is increasing, yet obesity appears to improve the outcome of some available antineoplastic therapies. 50 The response to immune checkpoint blockade (ICB) is generally more favorable among patients with obesity-range BMI than among lean patients in NSCLC, melanoma, and renal cell carcinoma. 51-55 Recent evidence suggested that BMI at clinical diagnosis is an independent predictive factor in patients with recurrent/metastatic head and neck squamous cell carcinoma receiving pembrolizumab, with normal-weight patients showing better prognosis than underweight patients. 56 Our findings accord with these studies: the response rate to immunotherapy in OTSCC patients with high BMI was much higher than in patients with low BMI. Most importantly, to our knowledge, we report for the first time the prognostic contribution of BMI in OTSCC treated with immunotherapy.
In addition, we explored the underlying mechanisms. As described above, analyses of both clinical samples and online datasets showed that obese patients, who have an increased prevalence of PD1+ exhausted CD8 T cells, have a better response to anti-PD1 therapy. Consistent with this, some researchers found that obesity-associated increases in systemic leptin promote CD8 T cell exhaustion, as evidenced by elevated surface expression of PD-1 on CD8 tumor-infiltrating lymphocytes and loss of cytokine secretion and cytolytic activity. 28 Other studies have suggested that the chronically inflamed obese state and the subsequent generation of exhausted T cells may enhance tumor progression while concurrently promoting an environment conducive to ICIs. Additionally, exhausted T cells can be subdivided into multiple populations based on their re-activation potential, and some subpopulations are more responsive to PD-1 blockade than others. 57,58 In summary, although some of these phenotypes may underlie ICI efficacy in obese cancer patients, the proposition that obesity-mediated T cell exhaustion is simply reversed by checkpoint blockade is likely an over-simplification. A larger body of studies is therefore needed to explore these questions further and allow more patients to benefit from immunotherapy.
Obesity is, as is well known, an abnormality of systemic metabolism. Obesity has been shown to increase levels of leptin, which affects anti-tumor immunity by increasing PD-1 expression and promoting T cell exhaustion. 28 While these changes impair immune function, they allow a stronger restoration of T cell activity following anti-PD1 therapy, 59 evidence that systemic metabolism influences the tumor microenvironment (TME) and anti-tumor immunity. Moreover, recent research has demonstrated that CD8+ effector T cells oxidize more fatty acids via the leptin-STAT3 axis, which suppresses anti-tumor immune responses in breast cancer. 60 The aim of this research was therefore to investigate the influence of obesity on the response to anti-PD1 therapy. We did not find notable differences in the infiltration of CD8+ T cells between the two BMI-stratified groups; however, we show that increased PD-1+ T cells were associated with responsiveness to anti-PD1 therapy in the OTSCC clinical data. It is also possible that heightened PD-1 expression on tumor-infiltrating lymphocytes from patients with obesity simply provides an increased number of targets for engagement of anti-PD1 antibodies. 28 Furthermore, we found that local tumor metabolism influences the efficacy of anti-PD1 therapy, and we identified 6 FAMRGs positively associated with PD1 expression in OTSCC. Our study suggests that genes associated with fatty acid metabolism have the potential to serve as biomarkers for predicting and evaluating the effectiveness of immunotherapy in OTSCC.
Based on the FAM molecular subtypes, we identified 6 FAMRGs (ACSL5, PLA2G2D, PROCA1, IL4I1, UBE2L6, PSME1) that could significantly contribute to the response of HNSCC patients undergoing anti-PD1 therapy. Among the 6 hub FAMRGs, prior research has indicated that the protein product of the ACSL5 gene converts free long-chain fatty acids into fatty acyl-coenzyme A and plays a role in both fatty acid uptake and triacylglycerol synthesis. 61-64 In addition, ACSL5 has been identified as a potential prognostic factor and predictor of response to immunotherapy in pancreatic cancer and cutaneous melanoma. 37,65 Analogously, our findings in OTSCC align with previous studies indicating a positive correlation between PLA2G2D expression and immune infiltration, as well as better prognosis, in HNSCC, breast cancer, and cervical squamous cell carcinoma. 66-68 Moreover, previous research revealed that a lipid metabolism-associated prognostic signature that includes PROCA1 has potential as a reliable biomarker for forecasting the effectiveness of chemotherapy and anti-PD-L1 therapy in colorectal carcinoma. 69 The effects of IL4I1, UBE2L6, and PSME1 on immunotherapy have rarely been reported; we identify these three genes as new potential predictors of immunotherapy efficacy. Importantly, our research shows that these 6 hub FAMRGs have the potential to serve as biomarkers for prognosticating the efficacy of immunotherapy in patients with HNSCC.
Although our findings are novel, certain limitations should be mentioned. First, the limited number of clinical studies on neoadjuvant anti-PD1 therapy in OTSCC patients is primarily attributable to the late incorporation of immunotherapy into their treatment protocol; in the future, we aim to recruit more patients who have undergone this treatment for further validation. Second, this study lacks the relevant molecular mechanism, which we will explore in future work. Third, although BMI is widely used as a surrogate for obesity, it does not capture more specific measures or the distribution of adipose tissue; obesity can be defined more precisely by measuring body fat percentage or by medical imaging of fat content, and we will delve deeper into this question in further research.
| CONCLUSION
We identified BMI as a promising prognostic biomarker in OTSCC patients undergoing immunotherapy, and our findings demonstrate that obesity has profound effects on the efficacy of anti-PD1 therapy by regulating PD1+ T cell infiltration. In addition, OTSCC patients with enhanced fatty acid metabolism, as reflected by high expression of the 6 FAMRGs (ACSL5, PLA2G2D, IL4I1, PROCA1, UBE2L6, and PSME1), were more likely to respond to anti-PD1 therapy. Our study thus provides novel and efficient biomarkers for predicting the prognosis and the efficacy of anti-PD1 therapy, guiding effective therapeutic strategies and facilitating personalized immunotherapy in the future.
F I G U R E 2
Obese patients benefit from anti-PD1 therapy through increased infiltration of PD1+ T cells. (A, B) Kaplan-Meier plots of progression-free survival (A) and overall survival (B) according to body mass index (BMI) group (BMI < 24 and BMI ≥ 24) in patients with tongue squamous cell carcinoma. (C) Verification of BMI and immune cell infiltration in OTSCC (n = 27). Immunohistochemical images show immune cell infiltration (CD4+ T cells, CD8+ T cells, B cells, and PD1+ T cells) in OTSCC tissues. (D-G) Quantification of immunohistochemical staining by QuPath.
F I G U R E 3
Fatty acid metabolism impacts anti-PD1 therapy effects in pan-cancer. (A-D) Results of Gene Set Enrichment Analysis (GSEA) between responder and non-responder groups in melanoma (A), non-small cell lung cancer (B), renal cell carcinoma (C), and stomach adenocarcinoma (D). (E-H) Results of Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis between responder and non-responder groups in melanoma (E), non-small cell lung cancer (F), renal cell carcinoma (G), and stomach adenocarcinoma (H).
F I G U R E 4
Differentially expressed FAMRGs between responder and non-responder samples. (A-D) Volcano maps of the differentially expressed genes in pan-cancer; the red, green, and gray dots indicate high, low, and no difference in expression between responder and non-responder samples (p < 0.05 and |log2 FC| > 1). (E-H) Identification of the 91 FAMRGs in the four cancer datasets through Venn diagrams (melanoma, NSCLC, RCC, STAD); upregulated genes are displayed in red and downregulated FAMRGs in blue.
T A B L E 5 All 91 differentially expressed FAMRGs detected from the 4 cancer datasets.

F I G U R E 5 (A-F) Histograms of hub FAMRG expression in 24 types of unpaired tumor and normal tissues from TCGA, compared using the Wilcoxon rank-sum test. (G-L) Histograms of hub FAMRGs in normal versus HNSCC tissues with significant differences, from the UALCAN portal.

F I G U R E 6 Correlation of hub FAMRG expression with anti-PD1 therapy response and prognostic value in cancer samples. (A, B) ACSL5 and PLA2G2D expression levels between responder and non-responder samples in melanoma. (C) IL4I1 expression levels between responder and non-responder samples in non-small cell lung cancer. (D) PROCA1 expression levels between responder and non-responder samples in renal cell carcinoma. (E, F) UBE2L6 and PSME1 expression levels between responder and non-responder samples in stomach adenocarcinoma. (G-L) Kaplan-Meier curves for low and high FAMRG expression in pan-cancer patients treated with anti-PD1 therapy (n = 520).
T A B L E 2 Comparison of baseline information between BMI groups.

T A B L E 3 Univariate analysis of factors associated with progression-free survival.

T A B L E 4 Multivariate analysis of factors associated with progression-free survival.
Irradiation of Fe-Mn Supersaturated Solid Solution with Ions of Various Atomic Masses (Ar + , Xe + ) and Analysis of the Role of Nanosized Dynamic Effects in the Activation Processes of Long-Range Type
A multiple increase in the atom mobility in a metastable supersaturated (quenched from 850 °C) Fe-8.16 at % Mn solid solution is detected at temperatures below 250 °C under irradiation with 5-keV Ar+ and Xe+ ions of different masses. The irradiation-induced atom redistribution throughout the volume of 30-μm-thick foils, at projected Ar+ and Xe+ ion ranges of only 20-30 nm, is found and studied by transmission Mössbauer spectroscopy. Long-range effects at low irradiation doses and anomalously low temperatures are attributed to "radiation shaking" of metastable media by post-cascade solitary waves, in contrast to thermally stimulated radiation-enhanced processes in the narrow nanoscale near-surface layers of the alloy. It is shown that heavier Xe+ ions at higher irradiation doses have a stronger impact on the solid solution than Ar+ ions.
Introduction
In work [1] it was shown that upon irradiation of materials with heavy ions of low and medium energy (from a few tens to several hundreds of keV), the source of defects is only a surface layer less than 1 μm thick, rather than the entire volume of the material, in contrast to severe plastic deformation.
In the case of neutron irradiation, defects in the material accumulate slowly due to the high penetrating ability of neutrons (the neutron interaction cross section is 6-7 orders of magnitude smaller than that of ions). This is why primary recoil atoms, and the cascades of atomic displacements they form, are separated under neutron irradiation by distances much greater than under ion irradiation at the same accumulated fluence (cm^-2).
Nevertheless, low-dose and long-range effects under neutron and ion irradiation are well known. These effects manifest themselves as dramatic changes in the structure and properties of materials at a small number (<0.001) of displacements per atom, with the changes occurring at distances several orders of magnitude greater than the projected range of the implanted ions (or of primary recoil atoms under low-dose neutron irradiation).
These effects seem to represent two sides of the same coin and cannot be explained solely by relatively slow processes such as radiation-enhanced or radiation-induced diffusion, which adequately describe radiation swelling and radiation-induced creep within classical radiation physics. It is easy to estimate, as was done in [1], that the diffusion length of vacancies is certainly not more than 1 μm for a few seconds of irradiation at room temperature in metals with a melting temperature Tmelt > 1000 K. A large number of studies indicate dramatic changes in the structure and properties throughout the volume of materials irradiated with accelerated ions for several seconds. These changes occur without heating or other obvious causes at depths from several tens of micrometers to several millimeters [2-4]. Low-dose and long-range effects in highly defected quenched and heavily deformed materials, with numerous traps and sinks for radiation-induced defects, also cannot be explained by the role of the more mobile interstitial atoms.
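The 1 μm bound follows from the random-walk estimate L = sqrt(Dt); a back-of-the-envelope check in Python, with a diffusivity value that is purely illustrative (room-temperature vacancy diffusivities vary by orders of magnitude between metals):

    import math

    D = 1e-14  # cm^2/s, assumed effective vacancy diffusivity near 300 K
    t = 10.0   # s, a few seconds of irradiation
    L_um = math.sqrt(D * t) * 1e4
    print(f"L ~ {L_um:.4f} um")  # well below 1 um for these values

Even allowing D several orders of magnitude larger, L stays at or below the micrometer scale, which is the point of the estimate in [1].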
It is suggested in [2-4] that nanosized radiation-dynamic effects under cascade-forming corpuscular radiation are significant and, in some cases, crucial. Nanoregions of atomic-displacement cascades are zones of explosive energy release. The temperature of the cascade regions, thermalized within 10^-12 s, is 5000-6000 K and above (the rate of energy release is comparable to that of a nuclear explosion). The sharp increase in temperature and pressure [5] in these regions causes the emission of powerful elastic and shock post-cascade solitary waves.
It is known that a nonideal state of solid solutions, which often manifests itself near the temperature and concentration phase transitions, can have a significant effect on the properties of materials.
The thermodynamics equations imply that the tendency of solid solutions to atomic ordering and concentration segregation can, at certain ratios of the interatomic interaction energies between components, be characterized by an extremely low critical temperature, at which the diffusion mobility of the atoms is extremely small. Nevertheless, there are instances where alloys after long-term service (hundreds and thousands of hours) at relatively low temperatures exhibit changes in properties. Special investigations have shown that this is due to intraphase processes, for example, the concentration separation of solid solutions of substitutional impurities [6]. Short- or long-range atomic ordering formed in the alloys can also change their properties.
New possibilities for controlling the atomic structure of materials at low temperatures (due to a sharp increase in atom mobility throughout the volume of the material [3], far beyond the diffusion length calculated above) are offered by the discovery of the aforementioned radiation-dynamic effects caused by radiation shaking of condensed matter by post-cascade shock waves under ionizing radiation.
As shown in [5,7], the temperature and pressure in the regions of nanosized thermal spikes strongly depend on the mass and energy of the bombarding ions. The density of the energy released in dense cascades of atomic displacements increases as the ion energy decreases and the ion mass increases. So-called metastable media, with increased stored energy in a state corresponding to an intermediate rather than the absolute free-energy minimum, are more strongly affected by external exposure. In particular, this applies to quenched alloys.
In the present study, we investigated the effect of low-energy (~5 keV) Ar+ (39.95 amu) and Xe+ (131.29 amu) ions, which differ in atomic mass, on a quenched iron alloy with 8.16 at % Mn (having a tendency to atomic separation) at a low temperature (250 °C), at which the diffusion of substitutional impurities is almost frozen. At the same time, it has been shown [2-4] that the atomic mobility in such alloys at lower temperatures can be increased many-fold by ion irradiation. This applies to the near-surface layers of irradiated materials, whose extent is several orders of magnitude greater than the projected range Rp of the implanted ions (reaching 10^5 Rp or more [4]). Concentration profiles of the Ar+ and Xe+ ions are shown in Figure 1. As can be seen, the ranges of these ions are extremely small, less than 0.02-0.03 μm, which is about 10^3 times smaller than the thickness of the iron-manganese alloy foils studied under ion irradiation.
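For orientation, an implanted-ion depth profile is often approximated by a Gaussian centered at the projected range Rp with straggling ΔRp; the sketch below uses assumed Rp and ΔRp values of the right order for 5-keV ions in iron, whereas the profiles in Figure 1 would come from a transport calculation such as SRIM:

    import numpy as np

    def range_profile(x_nm, rp_nm, drp_nm, fluence_cm2):
        # Gaussian approximation:
        # N(x) = F / (sqrt(2*pi)*dRp) * exp(-(x-Rp)^2 / (2*dRp^2)).
        drp_cm = drp_nm * 1e-7
        return (fluence_cm2 / (np.sqrt(2 * np.pi) * drp_cm)
                * np.exp(-((x_nm - rp_nm) ** 2) / (2 * drp_nm ** 2)))

    x = np.linspace(0, 100, 201)                # depth, nm
    n_ar = range_profile(x, 25.0, 10.0, 1e16)   # assumed Rp, dRp for Ar+
    n_xe = range_profile(x, 20.0, 8.0, 1e16)    # assumed Rp, dRp for Xe+
    print(f"peak Ar+ concentration ~ {n_ar.max():.2e} cm^-3")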
X-ray diffraction is ineffective for analyzing the processes taking place, owing to the similarity of the electronic scattering factors of Mn and Fe, which are neighbors in the periodic table; these processes include concentration changes, nanosized intraphase separation, and precipitation of the second phase (austenite). Therefore, we used the Mössbauer effect, as in [8], to analyze these processes.
Experimental
The Fe − 8.16 at % Mn alloy under study had, according to chemical analysis, the following impurity content: 0.070 C, 0.020 N, 0.008 S, and 0.010 wt % P. The alloy was melted in an induction furnace in an argon atmosphere. The ingots were forged into rods 6 × 12 mm in cross section and rolled at 1000 °C into sheets 0.5 mm thick. Pretreatment consisted of normalizing (900 °C), annealing (600 °C), and quenching from 850 °C, i.e., from the γ region (corresponding to the equilibrium fcc γ phase), in a 15% solution of NaCl. After quenching, the alloy has a bcc α'-martensite structure with 3-5% retained γ phase (austenite). Thin foils for the Mössbauer studies were mechanically and electrolytically polished in an electrolyte of 12 g Cr2O3 and 100 mL H3PO4. The thickness of the samples was 25 μm. The samples were irradiated in a continuous mode using an ILM-1 ion-beam implanter equipped with a PULSAR-1M ion source based on a low-pressure glow discharge with a hollow cold cathode [9], which can also operate in a pulsed-periodic mode. The samples were irradiated with Ar+ and Xe+ ion beams at an energy of 5 keV and current densities from 150 to 170 μA/cm2 (the current density was continuously varied to achieve the desired temperature). The samples were hung on thin threads of low thermal conductivity, so that heat removal from a sample occurred exclusively by radiation. This made it possible to predict the sample temperature theoretically [10] (using an emissivity factor chosen in advance from experiment). During irradiation, the sample temperature was monitored with a thin chromel-alumel thermocouple connected to an Advantech ADAM-4000 automated digital signal registration system.
The Mössbauer studies were performed using an SM-2201 automated Mössbauer spectrometer under constant-acceleration conditions. A 57Co source in an Rh matrix served as the source of γ quanta. The velocity scale of the Mössbauer spectra was calibrated with respect to pure iron.
Results and discussion
The Mössbauer spectra of the Fe − 8.16 at % Mn alloy change significantly after irradiation with low argon and xenon ion fluences (E = 5 keV, j = 150-170 μA/cm2) (Figure 2). In particular, the intensities of the components of the external peaks vary, corresponding to the presence of l1 = 0 and l1 = 1 Mn atoms near an Fe atom. The l1 = 0 component decreases, whereas l1 = 1 increases (indicated by arrows in Figures 2b, 2c), which clearly indicates the redistribution of atoms in the solid solution [11]. The short-range order parameter α in the first coordination sphere, defined according to Cowley [12] in terms of the pair correlation parameter εab [13] (where a denotes Fe and b denotes Mn), was determined by analyzing the shape of the external peaks of the Mössbauer spectra corresponding to the +1/2 → +3/2 and -1/2 → -3/2 nuclear transitions (since these have the highest intensity and resolution), taking into account electric and magnetic interactions within the radius of the first coordination sphere using the method described in [11,13], where a method for calculating the statistical error of the parameter α from the structural matrix is also described. This approach is justified because, according to [14], the effect of the second, third, and more remote coordination spheres on the hyperfine interaction parameters is relatively weak at the Mn concentration considered. The values of the parameter α obtained from processing the left and right peaks of the Mössbauer spectra were, within the error, zero for all quenched samples.
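For reference, the textbook Warren-Cowley definition can be evaluated directly from the mean number of Mn first neighbors of an Fe atom; the sketch below uses that standard form (alpha_1 = 1 - P(Mn)/c_Mn, with 8 first neighbors on the bcc lattice) and is not the exact spectra-fitting procedure of [11,13]:

    def cowley_alpha(mean_mn_neighbors, c_mn, coordination=8):
        # alpha_1 > 0: Mn depleted around Fe (separation);
        # alpha_1 < 0: Mn enriched around Fe (ordering);
        # alpha_1 = 0: random solid solution.
        p_mn = mean_mn_neighbors / coordination
        return 1.0 - p_mn / c_mn

    c_mn = 0.0816  # Fe - 8.16 at % Mn
    print(cowley_alpha(8 * c_mn, c_mn))         # random solution -> 0.0
    print(round(cowley_alpha(0.543, c_mn), 2))  # ~0.17, as after irradiation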
The intensities of the Mössbauer lines change significantly after irradiation with Ar+ ions already at a fluence of 1·10^16 cm^-2 (an exposure time of only ~10 s). During this time the samples reach a stationary temperature of 250 °C; the samples were treated without holding at this temperature. This means that under Ar+ and Xe+ ion irradiation the alloy acquires short-range atomic separation within a very short period of time. The short-range order parameter α thus becomes positive (α = 0.17-0.18) both for Ar+ and for Xe+ (see Figure 3). A further increase in the irradiation time to 30 s (fluence of 3·10^16 cm^-2) causes a decrease in the short-range order parameter to α = 0.16 for Ar+ ion irradiation and to α = 0.12 for Xe+ ion irradiation. This can be explained by the fact that a solid solution depleted in manganese may, at low temperatures, tend not only to short-range separation but also to the short-range atomic ordering found in [8]. In contrast to separation, this changes the intensities of the spectral components corresponding to l1 = 0 and l1 = 1 impurity atoms in the immediate vicinity of iron atoms.
That is, during the first seconds of irradiation, the solid solution separates into zones depleted and enriched in manganese atoms. Thereafter, atomic ordering can occur in the formed zones, leading to an increase in the effective concentration of manganese in the first coordination sphere of iron atoms. The latter indicates a complex ratio of the pair interaction energies aa, bb, and ab in the iron-manganese system. It may be necessary to refine the model of atomic redistribution in the iron-manganese alloy, and consequently the model approximating the Mössbauer spectra, to describe this process in detail. In particular, this concerns not only the effect of the first coordination sphere, as considered in [8] and in this work, but also more distant coordination spheres (at least the 2nd).
The data of [8] differ somewhat from those of the present study, which may be explained by the different compositions of the alloys used (in particular, the difference in the concentration of interstitial impurities). The alloy used in the present work contains more interstitial impurities than that in [8], which manifests itself in the presence of a small amount of retained austenite in the quenched alloy (the central peak of the Mössbauer spectra, Figure 2). It should be noted that the parameter α decreases more significantly for the heavier Xe+ ions (i.e., the effective concentration of Mn in the nearest neighborhood of Fe increases more), in agreement with the larger energy release in the dense cascades of atomic displacements generated by Xe+ ions compared with Ar+ ions.
Conclusion
Our investigation is indirect proof that nanosized dynamic effects (arising from the formation of thermal spikes) play a significant role in the action of accelerated Ar+ and Xe+ ions on the structural state of the metastable quenched Fe − 8.16 at % Mn alloy. The temperature of the thermal spikes was measured in [5,7]. Indeed, short-range atomic order of the separation type is formed within the first 10 s of irradiation, at projected ion ranges of less than 0.02-0.03 μm and temperatures below 250 °C, which are insufficient for the redistribution of substitutional atoms throughout the target volume (25 μm thick) by thermally and radiation-stimulated diffusion. The degree of order decreases somewhat at higher fluences (substantially more for the heavier Xe+ ions), which appears to be caused by a complicated atomic-ordering structure whose reliable identification requires additional data.
In this connection, it would be interesting to study the regularities of these processes in detail using various types of ions and irradiation modes, together with theoretical models describing the experiments.
Using Behavioural Validity Method to Analyse the Dynamic Model of Smallholder Beef Farming Systems in Indonesia
Smallholder beef farming is a complex system with a wide range of stakeholders whose interests vary. Systems thinking is one approach that can be recommended for studying the complexity of such a system. A model is developed to mimic the farming situation in the real world. A model opens up possibilities for simulating an intervention more easily, less dangerously, and more ethically than experimenting in the real world. However, before a model is used to simulate any intervention strategy, it needs to be validated. This paper describes one validity method used to test a model of smallholder beef farming. A series of surveys was undertaken to harness perspectives, opinions, and data from 2 beef farmer groups in Kabupaten Banjarnegara and Kabupaten Banyumas. The model was developed using iThink software by Ventana®. Behavioural validity was assessed using the extreme condition test with 4 combinations of extreme values: calving interval, share to farmer, purchasing price, and selling price. Results showed that the behavioural validity method using the extreme value test was able to demonstrate the consistency of the logic that constructs the structure of the model.
Introduction
One of the characteristics of smallholders is that the proportion of income from beef farming is usually less than 30% (Kusnadi, 2008). Most smallholders have fewer than four cattle. Farmers collect grass only when they do not have sufficient rice straw, or when rice straw becomes scarce (Hadi et al., 2002). Cut and carry is the most common feeding practice. The animals are kept mostly in housing, which is frequently poorly designed and maintained (Lisson et al., 2010), for the whole year, and feed is carried by hand to the cattle. In some ways, smallholder farmers are systems thinkers, because farmers have to balance many different aspects (Snapp and Pound, 2008). The farmers represent a "living pool of knowledge", and their views and knowledge could be a genuinely valuable input to strategies for reforming the smallholder beef farming sector. Efforts to model smallholder beef farming systems in Indonesia have been undertaken by Setianto et al. (2014a; 2014b; 2014c), who presented a qualitative causal loop diagram model of smallholder farming. A model opens up possibilities for simulating an intervention more easily, less dangerously, and more ethically than experimenting in the real world (Jackson, 2002). Modelling makes it possible to preview whether or not proposed changes in the systems-thinking world can improve the problematic situation in the real world (Rodríguez-Ulloa et al., 2011).
One important step in model development is model validation, which indicates the degree of quality of a model (Schwaninger and Groesser, 2009).
The aim of model validation is to build confidence that the model mimics the real situation well enough for its intended purposes and thus provides a sound basis for decision making (Qudrat-Ullah, 2012; Sterman, 2000). This paper presents the behavioural validity method for analysing a model of smallholder beef farming systems.
Materials and Methods
This study took place in Kabupaten Banjarnegara and Banyumas, with two smallholder beef farmer groups as the pilot study. The study mostly used direct observation, semi-structured interviews, and focus group discussions. Five steps are involved in the SD methodology: (1) structuring the problem; (2) discovering the causal structure; (3) developing the dynamic model; (4) scenario simulation; and (5) implementation and organizational learning (Maani and Cavana, 2007; Sterman, 2003).
The first step was to identify the qualitative causal loop diagram (CLD) of the smallholder beef farming system and its systems archetypes (Setianto et al., 2014b). Both the CLD and the archetypes were then refined in a small group discussion involving representatives of the actors in the system. This was achieved by contrasting the CLD with the real-world situation. Some adjustments and modifications were made to ensure that the loops and linkages made sense and were able to mimic the real farming situation. Once the CLD was regarded as adequately capable of describing the real-world situation, the next step was to transform the CLD into a stock and flow model to generate the dynamic model of smallholder beef farming.
Translating the CLD into a quantitative stock and flow dynamic model requires three steps. The first step was to build the model structure, which was done using iThink software by Ventana® systems. The second step was to parameterize the data. To obtain all the data required for the model, a secondary data study was carried out, and the secondary data were confronted with the model. Any data gap not sufficiently filled by secondary data was filled through primary data collection. Finally, the stock and flow dynamic model was validated.
This study used behavioural validity tests (Barlas, 1989, 1996; Schwaninger and Groesser, 2009) examining two aspects of whether the model behaviour is valid: its ability to mimic the major patterns exhibited by the real system, and whether its structure contains no major errors. For this purpose, this study used the extreme condition test (Sterman, 2000).
Results and Discussion
Stock and flow dynamic model

In system dynamics, a model is described in terms of stock and flow diagrams, which show stocks, flows, auxiliaries, and feedback loops (Sterman, 2000). In principle, model building transforms the flows into levels, rates, and auxiliary variables (Rodriguez-Ulloa and Paucar-Caceres, 2005). The purpose of this stage is to generate a computer-based model that is able to track all the relationships between variables, as well as their dynamic behaviour (Lane and Oliva, 1998).
A stock is symbolized by a rectangle and represents an accumulation, such as inventory, population, or level of knowledge. A stock continues to exist in the system even when no flow exists; stocks visualize the state of the system. Flows are represented by arrow pipes: an arrow pointing into a stock indicates an inflow, while one pointing out of a stock denotes an outflow. A flow describes the change that happens to a stock during a certain period of time. Flows have regulators, known as valves, which control the flow rate. Another important symbol is the cloud, which represents the sources and sinks of a flow.
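The mechanics of such a model reduce to numerically integrating the stocks from their net flows; a minimal sketch in Python (Euler integration, with two hypothetical stocks and made-up rates, not the paper's calibrated iThink model) is shown below:

    def simulate_herd(months=72, dt=1.0, calving_rate=1 / 18,
                      cull_rate=1 / 24, sell_rate=1 / 12,
                      breeding0=20.0, fattening0=10.0):
        # Two stocks (breeding, fattening); flows: births add to fattening,
        # culled breeders move to fattening, fattened cattle are sold.
        breeding, fattening = breeding0, fattening0
        history = []
        for _ in range(int(months / dt)):
            births = calving_rate * breeding
            culls = cull_rate * breeding
            sales = sell_rate * fattening
            breeding += dt * (-culls)
            fattening += dt * (births + culls - sales)
            history.append((breeding, fattening))
        return history

    b, f = simulate_herd()[-1]
    print(f"after 72 months: breeding={b:.1f}, fattening={f:.1f}")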
The stock and flow model was built by translating the qualitative CLD model published previously. The complete translation of the model is presented in Figure 1.
Extreme condition test
As the reference point, the current base situation of the smallholder beef farming system is presented in Figure 2. With the current values, all stocks are decreasing. The low calving rate provoked farmers to shift the breeding operation into fattening. In the first 12 months, the breeding figure increased due to the program regulation that specifically mandated farmers to keep their breeding cattle. However, after 12 months without calving, many breeding cattle were culled into fattening operations, and after two years all other stocks decreased as well. Figures 3 and 4 show the behaviour under low and high calving rates. Both results were as predicted. A low calving rate (Figure 3) provokes farmers to cull their cows directly; as a result, the breeding population vanishes. The only remaining cows in the first 10 months (fewer than 3) were there because the model was equipped with a rule that the breeding portion should be maintained for at least 10 months. Revenue from breeding sales was used to buy more fattening cattle, thus increasing the fattening population during year 1.
A higher calving rate (Figure 4) means more newborn calves per year and thus an increase in the population. Moreover, with a high calving rate, farmers had more interest in maintaining their breeding cattle. Consequently, more fattened cattle were also available, so more were sold, resulting in increased revenue. Figure 4 describes how a calving interval of 1 (1/year) results in an increase in, and maintenance of, the group capital, farmers' income, and the breeding and fattening populations over time. The population has the potential to increase further but is limited by the forage carrying capacity. These outputs were consistent with the logic of the base model. The model was then run using the extreme conditions of the share for farmers. Ten percent of the sales revenue was allocated to the farmers' share, and the remaining 90% was allocated to the group to cover the costs of purchasing replacement cattle and other group expenses. Under the low extreme condition, the model performs a rational simulation. As shown in Figure 5, fewer shares went to the farmers, which meant more shares were available for the group. This would result in the maintenance of farming for a longer period compared with the current base condition. However, over the first 48 months, farmers' incomes were lower than the base. More of the group's share could therefore be used to buy more cattle, so the population was higher than the base before decreasing as the selling price dropped as a result of the import policy after month 48. In contrast, under the high extreme share allocation to farmers (Figure 6), most of the sales revenue went to the farmers' households and less was allocated for reinvestment in the farm. Lastly, the model was run using price extremes, both for purchasing and selling prices. For the purchasing price, the low extreme halved the price from Rp6.5 million to Rp3.25 million/animal; in contrast, the high extreme assumed the price was doubled to Rp13 million/animal. Figure 7 shows that, except for breeding, all stocks were sustained. With lower purchasing prices, farmers managed to yield more profit, shown by the increase in farmers' incomes, the number of fattened cattle, and the group capital over time. After 72 months, the system was in equilibrium: although the group capital was sufficient to buy more cattle, the carrying capacity of at most 44 cattle meant that the cattle population peaked. In contrast, Figure 8 shows that when the model is exposed to a high purchasing price, farmers failed to obtain profit and suffered significant losses. As a result, all stocks decline significantly and essentially vanish after year four, when no capital is left to purchase cattle. These results indicate that the model is able to mimic the real condition. The low selling price was simulated using half of the current selling price, and the high selling price was double the current price. Figure 9 displays how the stocks behave when the selling price is halved: beef farming would be non-existent after the fourth year. However, when the selling price is doubled, farming would be sustainable (Figure 10), although breeder numbers would continue to fall due to the low calving rate. Similar to the case of the low purchasing price, the population would be constrained by the carrying capacity. The next extreme situation is the combination of the selling and purchasing prices.
Firstly, the model was run using low selling and purchasing prices. The purchasing price was halved to Rp3.25 million, whereas the selling price was Rp4.125 million per animal. The difference between the selling price and the purchasing price is Rp875 thousand, far less than the Rp2 million used in the initial basic simulation. The output of the model (Figure 11) shows that with a low margin, all stocks decrease.
When the model used a combination of high selling and purchasing prices, the output showed that all stocks increased. Figure 12 shows the model output when the purchasing and selling prices doubled to Rp13 million and Rp16.5 million, respectively.
Thus, the margin between purchasing and selling increased from Rp2 million to Rp3.5 million. Based on the ability of the model to simulate the situations under the different extreme conditions used, this researcher believes that the model has a sound structure, without any major structural errors.
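An extreme condition test over such a model is then a matter of re-running it with extreme parameter values and checking that the qualitative behaviour matches expectation, as in this sketch built on the toy model above (the expectations encoded in the assert mirror the logic of the tests reported here, not the paper's exact runs):

    def final_population(**kwargs):
        b, f = simulate_herd(**kwargs)[-1]
        return b + f

    base = final_population()
    no_calving = final_population(calving_rate=0.0)       # extreme low
    fast_calving = final_population(calving_rate=1 / 12)  # extreme high

    # Populations should shrink with no calving and grow with fast calving.
    assert no_calving < base < fast_calving, "extreme condition test failed"
    print(no_calving, base, fast_calving)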
Conclusions
The behavioural validity method can be employed to analyse the validity of a stock-and-flow model of the smallholder beef farming system. Based on extreme-condition tests using four different combinations of extreme values, the model consistently and logically mimicked the behaviour of the real smallholder beef farming situation. The validated model can therefore be used for strategy simulation.
Bilingual Character Representation for Efficiently Addressing Out-of-Vocabulary Words in Code-Switching Named Entity Recognition
We propose an LSTM-based model with a hierarchical architecture for named entity recognition on code-switching Twitter data. Our model uses bilingual character representation and transfer learning to address out-of-vocabulary words. In order to mitigate data noise, we propose to use token replacement and normalization. In the 3rd Workshop on Computational Approaches to Linguistic Code-Switching Shared Task, we achieved second place with a 62.76% harmonic mean F1-score for the English-Spanish language pair without using any gazetteer or knowledge-based information.
Introduction
Named Entity Recognition (NER) predicts which word tokens refer to location, people, organization, time, and other entities in a word sequence. Deep neural network models have achieved state-of-the-art performance in NER tasks (Cohen; Chiu and Nichols, 2016; Lample et al., 2016; Shen et al., 2017) using monolingual corpora. However, learning from code-switching tweet data is very challenging for several reasons: (1) words may have different semantics in different contexts and languages; for instance, the word "cola" can be associated with a product or with "queue" in Spanish; (2) data from social media are noisy, with many inconsistencies such as spelling mistakes, repetitions, and informalities, which eventually leads to the Out-of-Vocabulary (OOV) word issue; and (3) entities may appear in a language other than the matrix language, for example "todos los Domingos en Westland Mall", where "Westland Mall" is an English named entity.
Our contributions are two-fold: (1) a bilingual character bidirectional RNN is used to capture character-level information and tackle the OOV word issue; (2) we apply transfer learning from monolingual pre-trained word vectors to adapt the model to different domains in a bilingual setting. In our model, we use LSTMs to capture long-range dependencies of the word sequence, and of the character sequence in the bilingual character RNN. In our experiments, we show the efficiency of our model in handling OOV words and bilingual word context.
Related Work
A Convolutional Neural Network (CNN) was used as a word decoder in the NER task by Collobert et al. (2011), and a few years later Huang et al. (2015) introduced the Bidirectional Long Short-Term Memory network (BiLSTM) (Sundermeyer et al., 2012) for this task. Character-level features extracted with neural architectures were explored and replaced hand-crafted features (Lample et al., 2016; Chiu and Nichols, 2016; Limsopatham and Collier, 2016). Lample et al. (2016) also showed that Conditional Random Field (CRF) (Lafferty et al., 2001) decoders improve the results and used stack memory-based LSTMs for their work on sequence chunking. Aguilar et al. (2017) proposed multi-task learning by combining a part-of-speech tagging task with NER and using gazetteers to provide language-specific knowledge. Character-level embeddings were used to handle the OOV word problem in NLP tasks such as NER (Lample et al., 2016), POS tagging, and language modeling.
Dataset
For our experiment, we use the English-Spanish (ENG-SPA) tweet data from Twitter provided by Aguilar et al. (2018). There are nine different named-entity labels. The labels use the IOB format (Inside, Outside, Beginning), where every token at the beginning of a named entity is labeled B-label, followed by I-label if it is inside the named entity, or O otherwise. For example, "Kendrick Lamar" is represented as B-PER I-PER. Table 2 and Table 3 show the statistics of the dataset. "Person", "Location", and "Product" are the most frequent entities in the dataset, and the least common ones are the "Time", "Event", and "Other" categories. The "Other" category is the least trivial among all because it is not as well clustered as the others.

[Table 1 (OOV word rates): baseline: 62.62%, 16.76%, 19.12%, 3.91%, 54.59%; + FastText (spa): 49.76%, 12.38%, 11.98%, 3.91%, 39.45%; + token replacement: 12.43%, 12.35%, 7.18%, 3.91%, 9.60%; + token normalization: 7.94%, 8.38%, 5.01%, 1.67%, 6.08%]
Feature Representation
In this section, we describe word-level and character-level features used in our model.
Word Representation: Words are encoded into a continuous representation. The vocabulary is built from the training data. The Twitter data are very noisy: there are many spelling mistakes, irregular ways of using words, and repeated characters.
We apply several strategies to overcome these issues. We use 300-dimensional English and Spanish FastText pre-trained word vectors, which comprise a vocabulary of two million words each and are trained using Common Crawl and Wikipedia. To create the shared vocabulary, we concatenate the English and Spanish word vectors.

1. Token replacement: Replace noisy tokens, such as user mentions and URLs, with special tokens.
2. Token normalization: Concatenate the Spanish and English FastText word vector vocabularies. Normalize OOV words by applying one of the following heuristics and checking sequentially whether the resulting word exists in the vocabulary: (a) capitalize the first character; (b) lowercase the word; (c) apply step (b) and remove repeating characters, turning "hellooooo" into "hello" or "lolololol" into "lol"; (d) apply steps (a) and (c) together.

The effectiveness of the preprocessing and transfer learning in handling OOV words was then analyzed. The statistics are shown in Table 1. It is clear that using FastText word vectors reduces the OOV word rate, especially when we concatenate the vocabularies of both languages. Furthermore, the preprocessing strategies dramatically decrease the number of unknown words.
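A sketch of this normalization cascade is shown below; `vocab` stands in for the concatenated English-Spanish FastText vocabulary, and the regular expressions are our approximation of the repeated-character removal, not code from the paper.

```python
import re

def cap_first(w):
    """Heuristic (a): capitalize only the first character."""
    return w[:1].upper() + w[1:]

def squeeze(w):
    """Heuristic (c): remove repeating characters and repeated substrings."""
    w = re.sub(r'(.)\1{2,}', r'\1', w)      # "hellooooo" -> "hello"
    w = re.sub(r'(..+?)\1{2,}', r'\1', w)   # "lolololol" -> "lol"
    return w

def normalize_oov(word, vocab):
    """Try heuristics (a)-(d) in order; return the first form in the vocabulary."""
    candidates = [
        cap_first(word),                    # (a)
        word.lower(),                       # (b)
        squeeze(word.lower()),              # (c)
        cap_first(squeeze(word.lower())),   # (d)
    ]
    for candidate in candidates:
        if candidate in vocab:
            return candidate
    return None

vocab = {"hello", "lol", "Madrid"}          # stand-in for the real vocabulary
print(normalize_oov("hellooooo", vocab))    # -> "hello"
print(normalize_oov("lolololol", vocab))    # -> "lol"
print(normalize_oov("madrid", vocab))       # -> "Madrid"
```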
Character Representation: We concatenate all possible characters for English and Spanish, including numbers and special characters. English and Spanish have most of the characters in common, but Spanish has some additional unique characters. All cases are kept as they are.
Model Description
In this section, we describe our model architecture and hyper-parameter settings.
Bilingual Char-RNN: This is one of the approaches to learn character-level embeddings without the need for any hand-crafted lexical features. We use an RNN for representing the word with character-level information (Lample et al., 2016). Figure 1 shows the model architecture. The inputs are the characters extracted from a word, and every character is embedded with a d-dimensional vector. Then, we use these as the input for a bidirectional LSTM serving as character encoder, wherein at every time step a character is input to the network. Consider a_t as the hidden state at character position t of a word, with t = 1, ..., V,
where V is the character length. The representation of the word is obtained by taking a_V, which is the last hidden state.

Main Architecture: Figure 2 presents the overall architecture of the system. The input layers receive word- and character-level representations from the English and Spanish pre-trained FastText word vectors and the Bilingual Char-RNN. Consider X = (x_1, ..., x_N) as the input sequence, where N is the length of the sequence. We fix the word embedding parameters. Then, we concatenate both vectors to get a richer word representation u_t. Afterwards, we pass these vectors to a bidirectional LSTM.
Here u_t = w_t ⊕ c_t, where ⊕ denotes the concatenation operator and w_t and c_t are the word-level and character-level representations of token t. Dropout is applied to the recurrent layer. At each time step we make a prediction for the entity of the current token: a softmax function is used to calculate the probability distribution over all possible named-entity tags.
Here y_t is the probability distribution over tags for word t and T is the maximum time step. Since sequence lengths vary, we pad the sequences and apply a mask when calculating the cross-entropy loss. Our model does not use any gazetteer or knowledge-based information, and it can easily be adapted to another language pair.
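The hierarchy just described can be sketched in PyTorch as follows. This is our illustrative reconstruction, not the authors' released code: the class and variable names are ours, dimensions follow the setup reported in this paper, and `PAD_TAG` in the masked-loss comment is a hypothetical padding index.

```python
import torch
import torch.nn as nn

class HierarchicalNER(nn.Module):
    """Word-level BiLSTM over [frozen FastText vector ; char-BiLSTM vector]."""

    def __init__(self, word_vectors, n_chars, n_tags,
                 char_dim=150, hidden=200, dropout=0.4):
        super().__init__()
        # Frozen pre-trained (concatenated ENG+SPA FastText) word embeddings.
        self.word_emb = nn.Embedding.from_pretrained(word_vectors, freeze=True)
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_rnn = nn.LSTM(char_dim, char_dim, bidirectional=True,
                                batch_first=True)
        self.word_rnn = nn.LSTM(word_vectors.size(1) + 2 * char_dim, hidden,
                                bidirectional=True, batch_first=True)
        self.dropout = nn.Dropout(dropout)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, words, chars):
        # chars: (batch, seq_len, max_word_len) character ids for every word.
        b, t, v = chars.shape
        _, (h, _) = self.char_rnn(self.char_emb(chars.view(b * t, v)))
        # The last hidden state of each direction represents the word (a_V).
        char_vec = torch.cat([h[0], h[1]], dim=-1).view(b, t, -1)
        u = torch.cat([self.word_emb(words), self.dropout(char_vec)], dim=-1)
        out, _ = self.word_rnn(u)
        return self.out(self.dropout(out))  # tag scores; softmax in the loss

# Padded positions are masked out of the loss, e.g. (PAD_TAG is hypothetical):
# loss = nn.CrossEntropyLoss(ignore_index=PAD_TAG)(
#     model(words, chars).flatten(0, 1), tags.flatten())
```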
Post-processing
We found an issue during prediction where some words are labeled with O in between B-label and I-label tags. Our solution is to insert an I-label tag if the O tag is surrounded by B-label and I-label tags of the same entity category. Another problem we found is that many I-label tags are paired with B-label tags of a different category, so we replace the B-label category with the corresponding I-label category. This step improves the result of the prediction; Figure 3 shows the examples.

[Table 4: Results on the ENG-SPA Dataset (‡ result(s) from the shared task organizer (Aguilar et al., 2018); † without token normalization)]
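A sketch of these two post-processing rules; the function and the example tag sequences are ours.

```python
def fix_iob(tags):
    """Post-process IOB tags: (1) fill an O between a B- and an I- tag of the
    same category; (2) align a B- tag with a following I- tag's category."""
    tags = list(tags)
    for i in range(1, len(tags) - 1):
        prev, nxt = tags[i - 1], tags[i + 1]
        if (tags[i] == 'O' and prev.startswith('B-') and nxt.startswith('I-')
                and prev[2:] == nxt[2:]):
            tags[i] = 'I-' + nxt[2:]
    for i in range(len(tags) - 1):
        if (tags[i].startswith('B-') and tags[i + 1].startswith('I-')
                and tags[i][2:] != tags[i + 1][2:]):
            tags[i] = 'B-' + tags[i + 1][2:]
    return tags

print(fix_iob(['B-PER', 'O', 'I-PER']))   # ['B-PER', 'I-PER', 'I-PER']
print(fix_iob(['B-LOC', 'I-PER']))        # ['B-PER', 'I-PER']
```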
Experimental Setup
We trained our LSTM models with a hidden size of 200 and a batch size of 64. The sentences were sorted by length in descending order. Our embedding size is 300 for words and 150 for characters. Dropout (Srivastava et al., 2014) of 0.4 was applied to all LSTMs. The Adam optimizer was chosen with an initial learning rate of 0.01. We applied time-based learning rate decay with a decay rate of √2 and stopped after two consecutive epochs without improvement. We tuned our model on the development set and evaluated our best model on the test set using the harmonic mean F1-score metric with the script provided by Aguilar et al. (2018).

Results

Table 4 shows the results for ENG-SPA tweets. Adding pre-trained word vectors and character-level features improved the performance. Interestingly, our initial attempts at adding character-level features did not improve the overall performance until we applied dropout to the Char-RNN. The performance of the model improves significantly after transfer learning with FastText word vectors, which also reduces the number of OOV words in the development and test sets. The margin between our model and the first-place model is small, approximately 1%.

We also tried to use sub-word representations from Spanish FastText; however, this does not improve the result, since the OOV words contain many special characters, for example "/IAtrevido/Provocativo" and "Twets/wek", and thus possibly create noisy vectors; moreover, most of them are not entity words.
Conclusion
This paper presents a bidirectional LSTM-based model with a hierarchical architecture using a bilingual character RNN to address the OOV word issue. Moreover, token replacement, token normalization, and transfer learning reduce the OOV word rate even further and significantly improve the performance. The model achieved a 62.76% F1-score for the English-Spanish language pair without using any gazetteer or knowledge-based information.
Validation of the multidimensional sense of humor scale in people with chronic kidney disease
The Multidimensional Sense of Humor Scale (MSHS) was developed by Thorson and Powell and has been validated in Portuguese, but not in people with chronic kidney disease (CKD). This study examined the psychometrics of the MSHS in people with CKD on hemodialysis. A random sample of 171 people with CKD undergoing hemodialysis was selected. Exploratory factor analysis revealed a structure with three factors, "Humor Production and Social Use of Humor", "Adaptive Humor and Appreciation of Humor" and "Attitude Towards Humor", with Cronbach's alpha values of 0.93, 0.90 and 0.83, respectively. The scale revealed stability in both the interview and questionnaire methods. It showed moderate positive correlations with Positive Affect, Subjective Happiness and the Wellbeing Personal Index, and a moderate negative correlation with Negative Affect. Therefore, the MSHS shows evidence of being a valid, reliable and reproducible scale, whether administered by questionnaire or interview.
INTRODUCTION
Chronic kidney disease (CKD) is a general term for heterogeneous disorders disturbing kidney structure and function. Disease and management are classified according to stages of disease severity, which are assessed from glomerular filtration rate and albuminuria, and clinical diagnosis (cause and pathology). [1] Hemodialysis (HD) is the therapy most often used in terminal-stage CKD; it involves the removal of nitrogenized toxic substances from the blood and of liquid excesses retained in the tissues of the body. [2] Patients on HD are thought to be highly susceptible to emotional problems because of the chronic stress related to disease burden, dietary restrictions, functional limitations, associated chronic illnesses, adverse effects of medications, changes in self-perception and fear of death. [3] In this sense, hemodialysis influences psychological, physical and social aspects of life. [4]

Taking into account the high levels of depression, disability and impaired immunity in people with chronic kidney disease, [3,5] humor helps a person cope with kidney disease and can be a key component in the quality of life of people on hemodialysis. In a study conducted in Norway with 52 persons undergoing hemodialysis followed for two years, it was found that higher levels of sense of humor had a negative association with stressors and disease-related mortality. [5]

Humor is a construct that is closely related to well-being and is considered a complex phenomenon, clearly personal in nature. The sense of humor varies from person to person and changes according to mood, personality, the situation, the level of attention and the importance given to the situation, among other things. [6] The main benefits of humor for people's health are promoting physical and psychological well-being and improving perceived health; it helps in addressing chronic disease, reduces pain, stress and anxiety, provides stress relief and strengthens the immune system. [7]

Within the context of CKD, nurses have a fundamental role, working collaboratively with other health professionals to achieve the competent and consistent care required by the complexity of treatment. The Nursing Intervention Classification (NIC) describes humor as a nursing intervention that allows the professional to help patients to understand, appreciate and express funny, entertaining or humorous situations, in order to release anger and tension, facilitate learning and contribute to health promotion and maintenance, therefore helping the patient deal with feelings related to treatment. [7,8] Other authors state that humor develops communication and the relationship between the nurse and the patient, helps manage emotions, decreases tension and improves the experience in the caregiving setting. [9,10] The use of humor as a planned and intentional nursing intervention must take into account a set of considerations related to the nature of humor, namely its individual, personal and paradoxical character and its properties. [7] Within the framework of humor, the nurse must take into consideration the type of resources that the person has, especially whether they like to play, to laugh, to have someone make them laugh, to be with people with a sense of humor, to listen to anecdotes and funny stories, and to read comic books, since these resources influence humor assessment. [11] The Multidimensional Sense of Humor Scale (MSHS) was developed by Thorson and Powell.
[12] In that study, the sense of humor was presented as a multidimensional construct. The 24-item scale showed a Cronbach's α of 0.92, comprising the dimensions "Social production and use of humor", "Adaptive humor", "Appreciation of humor" and "Attitude toward humor". [12] This scale has been validated in several languages and cultures, particularly in the following countries: United States of America (USA), [12-14] Croatia, [15] Australia, [16] Spain, [17] Portugal, [6,18] China [19] and Mexico. [20] Factor analysis has shown some differences from the first study: [12] in a sample of elderly people from the USA, six factors emerged that were not present in young people (four factors); [14] in Croatia [15] the validation study resulted in a scale with five factors, identical to the Portuguese version. [6] Concerning construct validity, some studies conducted Exploratory Factor Analysis (EFA) with principal components analysis and Varimax rotation, [6,12-15,18,19] or the maximum likelihood method with oblique rotation, [16] and Confirmatory Factor Analysis (CFA). [20] In all cultures the reliability assessed by Cronbach's α showed values above 0.70, except for the Mexican version, in which two dimensions, "adaptive humor/coping" and "enjoy life", presented results of 0.53 and 0.56, respectively. Stability and reproducibility were studied only in the sample from Australia, [16] through test-retest. In the study of discriminant validity, the MSHS was able to discriminate by gender [14] and age. [6,14,17,18]

Hereupon, the aim of this study is to verify whether the MSHS keeps its psychometric properties of validity and reproducibility in people with CKD under a hemodialysis program. We also intend to verify the association between sense of humor and well-being. [19] Thus, the concurrent validity study aims to find out whether the MSHS is correlated with psychological well-being measures: Positive Affect and Negative Affect, [21] Subjective Happiness, [22] and Satisfaction with Life in General. [23]

METHOD

Study design and setting

An exploratory and cross-sectional study on the psychometric properties [24] of the MSHS was carried out in two Diaverum clinics located in Lisbon, Portugal, between May and June 2015. The study of reliability was performed with two separate evaluations, with the last evaluation (test-retest) being performed 48 to 96 hours after the first.
Subjects
The population consisted of people with CKD undergoing a hemodialysis program. We established the following inclusion criteria: people diagnosed with CKD, undergoing hemodialysis for at least six months, and above 18 years of age. Exclusion criteria were the following: people with cognitive impairment or active psychiatric disease, because of the difficulty in answering the questionnaire (with some such patients refusing or demonstrating aggressive behavior). This information was retrieved from patients' medical records.
The initial sample consisted of 248 patients who met the inclusion criteria (139 in Clinic 1 and 114 in Clinic 2). A randomized probability technique was used for sampling (random selection without replacement). The sample calculation, with a 95% confidence interval and a 5% sampling error, indicated a necessary sample of 192 patients (103 in Clinic 1 and 89 in Clinic 2). Afterwards, a random selection without replacement was made. Regarding Clinic 1, six patients refused to participate, two were hospitalized and two dropped out. In Clinic 2, five patients refused to participate, two were hospitalized and five dropped out. After this process, a total of 171 patients were included in the study: 93 from Clinic 1 (89%) and 78 from Clinic 2 (88%).
Data collection
Data were collected through self-administered questionnaires and through face-to-face interviews conducted by five trained researchers. The researchers were selected and submitted to specific training, which consisted of an explanation of the objectives of the study and the various criteria; a guide on the correct completion of the questionnaire was also given to each researcher. Both questionnaires and interviews were administered during the hemodialysis session.
For data collection the following instruments were used: the Portuguese version of the Multidimensional Sense of Humor Scale (MSHS), [6] Positive Affect and Negative Affect measured by the Portuguese version of the Positive and Negative Affect Schedule (PANAS), [21] Subjective Happiness by the Portuguese version of the Subjective Happiness Scale (SHS), [22] Satisfaction With Life in General (SWLG), obtained by the Wellbeing Personal Index (WPI), [23] and a tool to identify the sample's demographic and clinical characteristics (age, gender, nationality, educational level, occupation, marital status, duration of dialysis, presence of hypertension and diabetes).

The MSHS is an instrument consisting of 24 items that assess the multidimensional aspects of the sense of humor, considering four dimensions (humor production; coping or adaptive humor; appreciation of humor; and attitudes towards humor and humorous people). [12] It is presented in the form of a 5-point Likert scale, ranging from 1 (strongly agree) to 5 (strongly disagree). The MSHS presented an interpretable factor structure globally consistent with studies conducted in other samples, with satisfactory internal consistency values, and can be considered a valid instrument to characterize individuals with regard to their "humorous state" and to describe the sense of humor in its different dimensions. [6] The internal reliability assessed by Cronbach's α is 0.93 for factor I, "Production and Social Use of Humor"; 0.84 for factor II, "Adaptive Humor"; 0.63 for factor III, "Negation to Use Humor"; 0.74 for factor IV, "Attitude toward Humor"; and 0.71 for factor V, "Appreciation of Humor". [6,18]

The PANAS consists of two subscales, Positive Affect and Negative Affect, each with 10 items, wherein the constructs are evaluated on a 5-point Likert scale. Both the Positive and the Negative Affect dimension can reach a maximum score of 50 points. In the Portuguese version, the study of the psychometric properties of the PANAS in people with CKD (similarly to the original scale) reveals the existence of two factors, with an internal consistency Cronbach's α of 0.86 (in the original, α = 0.88) for the Positive Affect scale and 0.88 (in the original, α = 0.87) for the Negative Affect scale. [21]

The SHS is composed of four items: in two items (two and three), respondents are asked to characterize and compare themselves with others, both in absolute and in relative terms; the two other items correspond to descriptions of happiness and unhappiness. On this scale, respondents are asked to indicate to what extent the statements characterize them, and the answer is given on a seven-point visual analogue scale anchored by two opposing statements that express the level of happiness or its absence. The Portuguese version in people with CKD presents a single factor with an internal reliability Cronbach's α of 0.90. [22]

The SWLG/WPI consists of seven items (satisfaction with standard of living, health, personal development, personal relationships, sense of security, connection to the community, and security for the future) that intend to assess "satisfaction with life in general". For each item, participants are asked to classify their satisfaction on a scale that ranges from 0 (extremely dissatisfied) to 10 (extremely satisfied), where 5 means neutral. The WPI is calculated on a 0-100 scale (percentage of scale maximum).
The exploratory factor analysis of the Portuguese version in people with CKD shows the existence of a single factor, with an internal reliability, assessed by Cronbach's α, of 0.82. [23]
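A minimal sketch of the WPI scoring just described, assuming the usual percentage-of-scale-maximum convention; the item ratings in the example are invented for illustration.

```python
def wpi_score(items):
    """Convert the seven 0-10 WPI item ratings to a 0-100 scale score
    (percentage of scale maximum)."""
    assert len(items) == 7 and all(0 <= x <= 10 for x in items)
    return sum(items) / 70 * 100

# Illustrative ratings for the seven satisfaction items.
print(wpi_score([7, 6, 8, 7, 5, 6, 7]))  # about 65.7
```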
Ethical approval
This study was approved by the Diaverum Ethics Committee (approval No 1/2015). All participants were fully informed and freely signed a consent form to ensure the confidentiality of their data and the right of withdrawal, without repercussions to themselves.
Data analysis
Statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS), version 20.0. In the evaluation of the psychometric properties, the study of reliability was made through Cronbach's α. To evaluate stability, we used the intraclass correlation coefficient (ICC) and the Spearman-Brown correlation coefficient [25] in test-retest (after 48 to 96 hours, for 40 randomly selected people: 26 by questionnaire and 14 by interview). A minimum value of 0.70 was adopted as satisfactory internal consistency. [25] Concerning the validity study, Exploratory Factor Analysis (EFA) was performed through the maximum likelihood method with Varimax rotation. Adequacy was assessed by the Kaiser-Meyer-Olkin (KMO) measure and Bartlett's test of sphericity. Convergent validity was assessed by Pearson correlations between the MSHS, PANAS, SHS and WPI. Sense of humor is encompassed in well-being, which comprises a range of phenomena including emotional responses, satisfaction and global satisfaction with life. The components of subjective well-being (SWB) are pleasant affect (i.e., joy, contentment, pride, affection and happiness); unpleasant affect (i.e., guilt and shame, anxiety, worry, anger, stress and depression); life satisfaction (i.e., desire to change, satisfaction with life, current, past and future); and domain satisfaction (i.e., work, family, leisure, health, finances and self). [26]

To verify the predictive validity of the MSHS dimensions, a hierarchical multiple regression analysis was made with the scores of the three dimensions (Humor Production and Social Use of Humor; Adaptive Humor and Humor Appreciation; and Attitude Towards Humor) as dependent variables. The independent variables age and gender were inserted into the equation in step 1. The scores of the PANAS, SHS and WPI were later introduced into the regression equation (step 2). Categorical variables were expressed as percentages or absolute values; continuous variables as means ± standard deviation. The significance level was set at p < .05.
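As a minimal illustration of the reliability computation, Cronbach's α can be obtained from an item-score matrix as follows; the data shown are invented for the example and are not from this study.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Illustrative data only: 4 respondents x 3 items.
print(cronbach_alpha([[4, 5, 4], [2, 2, 3], [5, 4, 5], [3, 3, 2]]))
```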
A request for the use of the Portuguese versions of the PANAS, SHS and WPI was sent to the author, and permission was therefore granted.
Reproducibility
The sample consisted of 171 patients diagnosed with CKD, with data being collected from 88 interviews (51.5%) and 83 questionnaires (48.5%); the average age was 60.20 (± 14.34) years, and most participants were men (61%). The nationality of the patients was distributed in the following way: most patients were Portuguese (80.1%), followed by Cape Verdean (14%), Sao Tomean (3.5%), Angolan (1.8%) and Guinean (0.6%).

The analysis of the psychometric properties presented the following results for the reproducibility of the MSHS (verified by the Cronbach α coefficient after the exclusion of each item): the dimension "Humor Production and Social Use of Humor" ranged from 0.92 to 0.94, the dimension "Adaptive Humor and Appreciation of Humor" ranged from 0.87 to 0.90, and the dimension "Attitude Towards Humor" ranged from 0.80 to 0.82.
Validity
The exploratory factor analysis (KMO = 0.92; Bartlett's test of sphericity χ²(276) = 2753.047, p < .001) showed a three-dimensional factor solution, which accounted for 63.0% of the explained variance of the construct. All items loaded onto factors with appropriate factor loadings (i.e., > 0.5; see Table 1). The communalities (h²) ranged between 0.28 and 0.80. The Cronbach α coefficient of the overall scale was 0.93.
In the study of convergent validity, Humor Production and Social Use of Humor has a strong positive correlation with Adaptive Humor and Humor Appreciation and moderate positive correlations with Attitude Towards Humor, Positive Affect, Subjective Happiness and Satisfaction with Life in General. Adaptive Humor and Humor Appreciation presents moderate positive correlations with Attitude Towards Humor, Positive Affect, Subjective Happiness and Satisfaction with Life in General, and a moderate negative correlation with Negative Affect. Attitude Towards Humor presents moderate positive correlations with Positive Affect, Subjective Happiness and Satisfaction with Life in General and a moderate negative correlation with Negative Affect (see Table 2).

Table 3 shows the results regarding predictive validity, used to identify whether PA, NA, SH and SWLG predict the MSHS dimensions, using gender and age as variables. According to the results, Humor Production and Social Use of Humor has positive affect and subjective happiness as predictive variables. Adaptive Humor and Humor Appreciation has gender, positive and negative affect, and subjective happiness as predictive variables: it is positively influenced by positive affect and subjective happiness and negatively influenced by negative affect. Attitude Towards Humor has negative affect and subjective happiness as predictive variables: it is negatively influenced by negative affect and positively influenced by subjective happiness. Age and Satisfaction with Life in General do not influence the sense of humor in patients diagnosed with CKD.
Discussion
All items loaded on three factors, with factor loadings above 0.53. These results differ from the original version, which had four factors, [12] as well as from versions in US samples, [13] China [19] and Mexico. [20] Nevertheless, in a US study [14] performed on young and old people, six factors were found in the elderly sample, and in two studies undertaken in Croatia [15] and Portugal [6,18] five factors were found, which shows the multifaceted character of humor. Elderly people are usually not exposed to the panoply of comedy provided through digital tools; humor may have had less emphasis in their life experience and therefore less importance. This may explain the differences both in the production of humor and in the appreciation of humor and humorous people. [14] Such differences may indicate that the sense of humor varies between cultures. [19] However, we found a common aspect in all versions: the main factor combines humor production/creativity and social use of humor. The explained variance of the three factors is greater than 50%, with a KMO greater than 0.70, which shows that the measures are adjusted to the data set. [25] The explained variance is similar to that obtained in the US (61.5%) [12] and Portuguese versions (65.22%) [6,18] and is superior to that found in studies from Croatia (55.9%), [15] Australia (47.75%) [16] and China (53.67%). [19]

Regarding convergent validity, the sense of humor is associated with general well-being. This association is also discussed by the authors of the Chinese version [19] of the MSHS: "Humor Production and Social Use of Humor" is positively associated with Positive Affect, Subjective Happiness and Satisfaction with Life in General; "Adaptive Humor and Humor Appreciation" and "Attitude Towards Humor" are positively associated with Positive Affect, Subjective Happiness and Satisfaction with Life in General and negatively with Negative Affect.

Regarding predictive validity, "Adaptive Humor and Humor Appreciation" obtains higher values in women. In a previous study, [14] differences were also found between men and women, as men had higher values on some items of Production and Social Use of Humor. Age was not predictive of the sense of humor, although clear age differences were found in a US sample. [14] "Humor Production and Social Use of Humor" presents higher values in people with CKD who had higher scores on Subjective Happiness and Positive Affect. "Adaptive Humor and Humor Appreciation" is positively influenced by Positive Affect and Subjective Happiness, and negatively by Negative Affect. Finally, "Attitude Towards Humor" is positively affected by Subjective Happiness and negatively by Negative Affect; that is, people with CKD with higher scores on "Attitude Towards Humor" present higher scores on Subjective Happiness and lower scores on Negative Affect.
Implications for nursing educators and practice
These results show structural differences compared with the original scale [12] and with the European Portuguese version. [6,18] They also suggest that the MSHS is a valid and reliable scale to evaluate multidimensional humor in people with terminal-stage CKD. This scale can provide important contributions to nursing interventions (Humor) [8] related to the evaluation of the sense of humor in patients with terminal-stage CKD during the hemodialysis session, [5] and it can be applied to people with other chronic diseases.
Limitations and venues for future research
Our main limitation was the nature of the study design: as a cross-sectional study, it may have conditioned the results of predictive validity. A longitudinal study is recommended to avoid bias. Also, the lack of financial support limited the number of researchers involved in data collection; therefore, the sample size was conditioned.
This study used a representative sample of individuals with CKD undergoing hemodialysis program. In future research it is important to conduct a confirmatory factor analysis in this specific population with a broader sample, a minimum of 300 people, and it is also important to understand how sense of humor affects patients' quality of life and decreases their stress and anxiety.
CONCLUSION
The Portuguese version of the MSHS scale in patients with CKD has a structure consisting of three dimensions adjusted to this specific population: Humor Production and Social Use of Humor; Adaptive Humor and Appreciation of Humor; and, finally, Attitude Towards Humor.
However, in this study we could not find evidence that supports the original four-factor scale, or even a five-factor scale such as the European Portuguese version.
The MSHS shows evidence of being valid and reproducible when applied, through either questionnaire or interview, to patients with CKD to evaluate multidimensional humor. It can also be used in patients with other chronic diseases.
The measurement of sense of multidimensional humor can be integrated in humor intervention in nursing at the time of the initial assessment and also to monitor responsive gains for nursing care within the area of health and well-being.
Using the coupled wake boundary layer model to evaluate the effect of turbulence intensity on wind farm performance
We use the recently introduced coupled wake boundary layer (CWBL) model to predict the effect of turbulence intensity on the performance of a wind farm. The CWBL model combines a standard wake model with a "top-down" approach to obtain improved predictions for the power output compared to a stand-alone wake model. Here we compare the CWBL model results for different turbulence intensities with the Horns Rev field measurements by Hansen et al., Wind Energy 15, 183-196 (2012). We show that the main trends as a function of the turbulence intensity are captured very well by the model and discuss differences between the field measurements and model results based on comparisons with LES results from Wu and Porté-Agel, Renewable Energy 75, 945-955 (2015).
Introduction
For the design of wind farms it is important to understand the effect of the relative turbine positioning on the overall power output [1]. Two analytical approaches have been used for such evaluations. The first approach is the use of classical wake models [2-8] to estimate the wind farm performance. Such models have been used in several wind farm optimization studies [9-12]. Wake models predict the wake deficits in the entrance region of the wind farm well, but have difficulty predicting the performance further downstream, where many wakes interact [13-15]. "Top-down" models [16-18] can capture the vertical structure of the atmospheric boundary layer and the associated wake-atmosphere interactions in this fully developed regime of the wind farm better than wake models. Therefore "top-down" models have been used for the evaluation of the optimal spacing in very large wind farms [19,20]. However, "top-down" models do not consider the effect of the relative turbine positioning [21,22] on the wind farm performance. Previous works have used a one-way coupling between wake and "top-down" models [6,17,23] to improve the performance.
In this paper we use the recently introduced coupled wake boundary layer (CWBL) model [15,22] to predict the influence of the turbulence intensity on the performance of Horns Rev; in particular, we investigate how well computationally inexpensive analytical models can predict the influence of this flow feature. The CWBL model combines the Calaf et al. [18,24] "top-down" model and a classical wake model [3,4,6] through a two-way coupling. The effect of the relative turbine positioning is captured by the wake model part, and the interaction between the wind farm and the atmospheric boundary layer by the "top-down" part. It has been shown that the CWBL model [15,22] gives improved predictions over either of its constitutive parts for the turbine power production in the fully developed regime of the wind farm. In this proceeding we start with a very short review of the basic concepts of the CWBL model in section 2. In section 3 the CWBL model results for different turbulence intensities are compared with field measurement data for Horns Rev [25] and corresponding LES results by Wu and Porté-Agel [26]. Section 4 concludes the paper.
The generalized CWBL model
In the CWBL model both the wake (section 2.1) and the "top-down" (section 2.2) model have one parameter that needs to be determined. An iterative procedure (section 2.3) is used to calculate these parameters from the complementary part of the CWBL model to make sure that a consistent CWBL system is obtained. For a detailed description of the (generalized) CWBL model we refer the reader to Refs. [15,22].
Wake model part
The classic wake model [2-4,6,7] assumes that wind turbine wakes grow linearly with downstream distance and uses the following expression for the velocity (which is simplified as a top-hat distribution) in the turbine wakes [2,3,5]:

u_w(x) = u_0 [1 - 2a/(1 + k_w x/R)^2].   (1)

Here u_0 is the incoming free stream velocity, k_w is the wake expansion coefficient, R is the rotor radius, C_T = 4a(1 - a) is the thrust coefficient with flow induction factor a, and x is the downstream distance with respect to the turbine, which is modeled as an actuator disk. Wake interactions are accounted for by adding the squared velocity deficits of interacting wakes [4]. The predicted normalized turbine power P_T/P_1 is given by

P_T/P_1 = [(1/N_d) Σ_k u_k/u_0]^3.   (2)

Here the summation over k is over all points (N_d) on the turbine disk, and the velocity ratio u_k/u_0 is obtained by calculating the effect of all wind turbine wakes, including the wake interaction effects, at these locations. We use a uniform inflow, which is common in wake models. Computing the sum over the entire disk, instead of just considering the value at hubheight, ensures that the ground effect modeled by the image turbines sets in incrementally, i.e. there are no discontinuous changes in the wake behavior with increasing streamwise distance. The wake decay parameter at the entrance of the wind farm, k_w,0, can be modeled as the ratio of the friction velocity to the mean velocity at hubheight, which gives

k_w,0 = κ / ln(z_h/z_0,lo),   (3)

where z_0,lo is the roughness length of the ground surface, z_h the turbine hubheight, and κ = 0.4 the von Kármán constant [2,17,27]. To match the wake expansion coefficient to the turbulence intensity, expressed based on the streamwise velocity fluctuations, we use the following logarithmic laws for the mean velocity and the velocity fluctuations in the boundary layer [28-30]:

u(z) = (u_*/κ) ln(z/z_0,lo),   (4)

σ_u^2(z)/u_*^2 = B_1 - A_1 ln(z/δ).   (5)

Equation (4) gives the mean velocity profile in an atmospheric boundary layer, while equation (5) gives the streamwise velocity fluctuations in the boundary layer. Combining both equations gives the turbulence intensity at hubheight as

σ_u(z_h)/u(z_h) = κ √(B_1 - A_1 ln(z_h/δ)) / ln(z_h/z_0,lo).   (6)

The constants A_1 and B_1 are based on those measured in different high Reynolds number turbulent boundary layer experiments; see Marusic et al. [28] for an overview. They concluded that A_1 ≈ 1.25 is a universal constant, while B_1 ≈ 1.5-2.1 depends slightly on the flow geometry. Based on comparisons of our atmospheric boundary layer simulations with experiments, we concluded that B_1 ≈ 1.6 is an appropriate value for this case [30]. According to equation (6), the turbulence intensity at hubheight depends on the roughness length z_0,lo and the boundary layer height δ, while equation (3) states that the wake expansion coefficient at hubheight only depends on z_0,lo. In addition, we use that the atmospheric boundary layer height is given by [31]

δ = C u_*/f,   (7)

where f = 2Ω sin(ψ) with Ω = 2π/(24 × 3600 s) = 7.27 × 10^-5 1/s, ψ = 55° (the latitude of Horns Rev), and C = 0.15, which is typical for neutral atmospheric boundary layers [32-34]. We note that stratification affects the very largest scales the most, i.e. features at a height z = δ, but these effects are much weaker at hubheight and below (i.e. where z_0,lo enters). We verified in the model calculations that small changes in C do not significantly influence the results. Using equations (3), (6), and (7), we obtain for each turbulence intensity the corresponding boundary layer height δ, the roughness length of the ground z_0,lo, and the wake expansion coefficient k_w,0. Figure 1 shows that the results agree very well with the values recommended by Windpro [5], which are based on comparisons with field measurement data.

[Figure 1: (a) The turbulence intensity at hubheight as a function of the wake expansion coefficient at the entrance of the wind farm, k_w,0. The data points give the recommended relationship by Windpro [5], which is based on comparisons with field measurement data. The black line is based on the model equations (3), (6), and (7). The turbulence intensity as a function of (b) the roughness length z_0,lo and (c) the boundary layer height, for the data shown in panel (a).]
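A short sketch implementing equations (1), (3), and (6) as written above; the roughness length z_0,lo = 2 × 10^-4 m is an assumed offshore-like value, and the induction factor corresponds to C_T = 0.80, both used only for illustration.

```python
import numpy as np

KAPPA, A1, B1 = 0.4, 1.25, 1.6   # von Karman constant and log-law constants

def wake_velocity(x, u0, a, kw, R):
    """Eq. (1): top-hat wake velocity a distance x behind an actuator disk."""
    return u0 * (1.0 - 2.0 * a / (1.0 + kw * x / R) ** 2)

def kw0(zh, z0_lo):
    """Eq. (3): wake expansion coefficient at the wind-farm entrance."""
    return KAPPA / np.log(zh / z0_lo)

def turbulence_intensity(zh, z0_lo, delta):
    """Eq. (6): streamwise turbulence intensity at hubheight."""
    return KAPPA * np.sqrt(B1 - A1 * np.log(zh / delta)) / np.log(zh / z0_lo)

# Illustration with Horns Rev-like numbers (z0_lo is an assumed sea roughness).
zh, z0_lo, delta = 70.0, 2e-4, 1000.0
print(f"kw0 = {kw0(zh, z0_lo):.3f}, "
      f"TI = {turbulence_intensity(zh, z0_lo, delta):.1%}")
a = 0.276  # induction factor corresponding to C_T = 4a(1 - a) = 0.80
print(f"u_w/u0 at x = 7D: {wake_velocity(560.0, 1.0, a, kw0(zh, z0_lo), 40.0):.2f}")
```

With these numbers the sketch returns a turbulence intensity of about 7%, consistent with the Horns Rev value quoted below.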
The "top-down" model
In the CWBL model we use the "top-down" approach introduced by Calaf et al. [18]. This model is used to obtain the ratio of the mean velocity in the fully developed regime to the reference incoming velocity at hubheight (equation (8); see Refs. [15,22] for the full expression). In this expression, δ_IBL indicates the height of the internal boundary layer in the fully developed regime of the wind farm, D the turbine diameter, β = ν_w*/(1 + ν_w*) with ν_w* ≈ 28 √(πC_T/(8 s_x s_ye)), and z_0,hi denotes the roughness length of the wind farm, which is defined as [18]

z_0,hi = z_h (1 + D/(2z_h))^β exp( -[ c_ft/(2κ^2) + (ln[ (z_h/z_0,lo)(1 - D/(2z_h))^β ])^(-2) ]^(-1/2) ),   (9)

with c_ft = πC_T/(4 s_x s_ye). Here s_ye is the effective spanwise turbine spacing, which is obtained from the two-way coupling with the wake model part of the CWBL model. Due to the two-way coupling between the wake and the "top-down" model, s_ye depends on parameters such as the streamwise distance between the turbines s_x, the relative positioning of the turbines, and the wake coefficient in the fully developed regime of the wind farm k_w,∞. The reason for changing the effective spanwise spacing s_ye in the "top-down" model is that this model considers a momentum balance averaged over the entire horizontal plane. In that model the horizontally averaged velocity thus depends on the friction velocity, which depends on the stresses that are imposed by the turbines in the wind farm. However, when the spanwise spacing between the turbines is very large, this assumption is no longer valid. This fact can be clearly seen from cases with very small streamwise spacing, s_x, and very large spanwise spacing, s_y, which in the "top-down" model (which only depends on s) would result in limited wake effects. This is obviously unrealistic, as even for a single line of turbines aligned in the wind direction, for which s_y = ∞, a significant power reduction is observed for downstream turbines. In the CWBL model we assume that the momentum analysis in the "top-down" part of the model should be performed over the control area that is representative of the region that is directly influenced by the wind turbine wakes. Since the wakes progress very far in the downstream direction, the streamwise spacing, s_x, is unadjusted in the model, while the wake expansion is limited in the spanwise direction. In the CWBL modeling approach the effective spanwise spacing, s_ye, that should be used is obtained from a two-way coupling procedure with a wake model. For further details about this procedure we refer the reader to section IV of Ref. [15].
Coupling
The wake and "top-down" parts of the CWBL model are coupled by demanding that both models give the same prediction for the turbine velocity in the fully developed regime of the wind farm. An iterative procedure is used to obtain the effective spanwise spacing, s ye , in equation (9) and the wake expansion rate, k w,∞ , in equation (1) in the fully developed regime. The effective spanwise spacing, s ye , is the spanwise dimension of the control volume size that is used to account for the large scale interactions with the atmosphere as explained above. Because in the wake model the turbine velocity in the fully developed regime depends on k w,∞ , while in the "top-down" model this velocity depends on s ye , these values need to be iterated until convergence is reached and this iteration is accomplished through the two-way coupling in the CWBL model. This procedure is described in detail for aligned and staggered wind farms in Stevens et al. [15]. To make sure that the effect of the wind farm geometry is taken into account this two-way coupling should be enforced for each wind direction separately, which is described in detail in Ref. [22]. Here we in addition, iterate the internal boundary later thickness in the fully developed regime as where is obtained from the CWBL model. We limit the internal boundary layer thickness to 500 meters when we compare with the LES data (as this was the boundary layer thickness used in the corresponding simulations by Porté-Agel et al. [35]) and 1000 meters (height of the thermal inversion) for the comparison with the field measurements.
In order to capture the entrance effects the CWBL model assigns a wake coefficient to each turbine in the wind farm by interpolating between k w,0 (the wake expansion coefficient at the entrance of the wind farm) and k w,∞ (the wake expansion coefficient in the fully developed regime, which is found using the iterative procedure). Here ζ = 1 and m is the number of turbine wakes that interact with the turbine of interest. Thus the wake model part dominates in the entrance region of the wind farm, while the wake development further downstream is determined by the two-way coupling between the wake and "top-down" models that comprise the CWBL model. Therefore the CWBL predictions in the fully developed regime of the wind farm depend much less on the quadratic superposition of the wakes [4] than those of the stand alone wake model. In this proceeding we assumed that all turbines operate with the same thrust coefficient C T . In the field measurements [25] and the LES results [26] to which we compare the model predictions the wind speeds considered correspond to turbines operating in region II for which this assumption is reasonable. Because the measurement consider a very narrow range of wind speeds we neglected the variation of the power coefficient C P with wind speed as this effect cancels out when relative powers are considered. Figure 2a shows the layout of the Horns Rev wind farm, which consists of 80 turbines with a hubheight of 70 meters and a turbine diameter of 80 meters, in a rectangular pattern. The streamwise spacing (East-West direction) is 7 turbine diameters and the spanwise spacing (North-South) is 6.95 turbine diameters. Figure 2 shows the power deficits as function of the downstream distance averaged for the wind directions 270 • ±15 • , 222 • ±15 • , and 312 • ±15 • , and a wind speed of 8 ± 0.5 m/s with an average turbulence intensity of 7.0% (6.3% for 222 • ± 15 • ) (see figure 4 of Hansen et al. [25]). The corresponding model results capture these measurements quite accurately. The figure also shows that the CWBL model agrees well with LES results from Wu and Porté-Agel [26].
The study of Hansen et al. [25] also analyzed the effect of turbulence intensity and atmospheric stability on the downstream power development. Hansen et al. [25] defined different stability classes (cL = 3, cL = 2, and cL ≤ 1) based on the Obukhov length; see table 1 of Ref. [25]. In the remainder of this proceeding we will refer to these different cases using the naming convention introduced by Hansen et al. [25], i.e. "very stable" (case label cL = 3), "stable" (case label cL = 2), and "remaining" (case label cL ≤ 1). From figure 7 of their paper we obtain that the average turbulence intensities of the three main stability classes they consider are 7.1%, 5.1%, and 3.9%, respectively, when the wind speed is 8 ± 0.5 m/s.

[Figure 2 caption: the wind speed range is 8 ± 0.5 m/s with an average turbulence intensity of 7.0% [25] for 270° ± 15° and 312° ± 15°, and 6.3% for 222° ± 15°. The corresponding CWBL predictions have been computed using a turbine thrust coefficient C_T = 0.80. For the LES data of Wu and Porté-Agel [26] the turbulence intensity is 7.7% and we used C_T = 0.78 for the corresponding CWBL calculations [22,26]. The squares provide a comparison between the data from Hansen et al. [25] and the CWBL model and the circles between the data from Wu and Porté-Agel [26] and the CWBL model. The data from Hansen et al. [25] and Wu and Porté-Agel [26] are digitally extracted from their figures.]

We use these reported [25] turbulence intensities to distinguish the different atmospheric stability classes. We note that the effect of the atmospheric stability is not included in the generalized CWBL model, but the effect of increasing turbulence intensity is. Figure 3a shows the measured power deficits for these three stability classes as a function of the downstream direction for the 270° ± 15° direction. Panel (b) of that figure shows the model results for the three corresponding turbulence intensities. The lower panels of figure 3 compare the field experiments and the model results for the three cases separately. In agreement with the experimental observations, the model predicts an increased velocity deficit for decreasing turbulence intensity. The reason is that a higher turbulence intensity results in a wake that mixes better with the surrounding flow. The figure shows that the effect of the turbulence intensity on the wind turbine performance is more pronounced further inside the wind farm. Figure 4 shows the same results as figure 3 for the wind directions 222° ± 15° and 312° ± 15°. A comparison with figure 3 reveals that changes in the turbulence intensity result in similar changes in the wind farm performance for the different wind directions. Considering that the effective streamwise distance is similar for these different wind directions (222°, 270°, 312°), i.e. 9.4D, 7.0D, and 10.4D, respectively, and that the overall wind turbine density is the same for these cases, this is reasonable. However, the corresponding experimental data seem to indicate a different trend, specifically a bigger influence of the turbulence intensity on the performance for the 222° ± 15° and 312° ± 15° cases than for 270° ± 15°. We do not know the reason for this discrepancy. However, we note that it is well known in the literature that the comparison of the Horns Rev data with model results tends to be more favorable for some wind directions than for others [36].
Here we also emphasize that, apart from the turbulence intensity that is included in the model, the effect of the atmospheric stability is not accounted for. So, for example, the effect of the atmospheric thermal stability on the wind shear is not taken into account. Generalizations of this approach can be made by including stability correction functions; such a direction has been explored in a recent paper dealing with another topic [37].

[Figure 4: The power deficit for the wind directions 222° ± 15° (top panels) and 312° ± 15° (lower panels) for a wind speed of 8 ± 0.5 m/s. The field measurement results are grouped into the very stable (cL = 3), stable (cL = 2) and remaining (cL ≤ 1) stability classes. The CWBL predictions have been computed for three different turbulence intensities, i.e. 7.1%, 5.1%, and 3.9%, which correspond to the average turbulence intensities of these three stability classes. The left panels show the field measurements by Hansen et al. [25] and the right panels the CWBL predictions. The data from Hansen et al. [25] are digitally extracted.]

Nevertheless, the CWBL model is able to predict some of the main trends observed in the field data, which seems to indicate that the change of the turbulence intensity due to thermal effects is one of the effects that influence the wind farm performance. In the field measurements the effect of the different atmospheric conditions seems stronger for some wind directions.
Summary
This paper demonstrates that the performance trends predicted by the coupled wake boundary layer (CWBL) model as a function of the turbulence intensity compare well with trends observed in field measurement data from Horns Rev [25]. The model has been compared with the field experiments using the reported turbulence intensities for the different atmospheric stability classes. We find that the model captures the trend that an increase in the turbulence intensity leads to decreased wake velocity deficits due to enhanced mixing. The effect of the turbulence intensity is bigger further downstream. In the model predictions, the changes in the wind farm performance due to changes in the turbulence intensity are similar for different wind directions. Considering the similarity between these cases, this is reasonable. However, in the field measurements the effect of the different atmospheric conditions seems to be stronger for some wind directions. Further research will be necessary to clarify this behavior.
Transcatheter valve-in-valve implantation vs. reoperative mitral valve surgery for failing surgical prosthesis
The incidence of degenerated mitral bioprostheses is increasing in clinical practice due to a greater use of biological prostheses for mitral valve replacement compared to mechanical valves and increased life expectancy after cardiac surgery. Similarly, mitral valve repair can result in long-term recurrence of mitral valve disease requiring reintervention. Therefore, the number of failing surgical prostheses or repaired valves will increase over the next years, representing a new challenge in the corrective approach in the case of long-term structural valve degeneration. Those patients were generally managed with reoperative mitral valve surgery, but transcatheter interventional therapies, initially considered an option in patients who are ineligible for redo surgery, have recently been associated with excellent outcomes. The efficacy and safety of transcatheter mitral valve procedures have been reported in both failing ring annuloplasty and degenerated mitral bioprosthesis, but the use of non-dedicated devices remains associated with significant severe complications such as device malpositioning or left ventricular outflow tract obstruction requiring emergent conversion to open surgery. Both the careful selection of patients and pre/intra-procedural scheduling are warranted to maximize benefits and reduce issues. This review focuses on emerging transcatheter mitral valve replacement devices as therapeutic options for degenerated mitral bioprosthesis or failed mitral repair. This paper aims to summarize current interventional techniques and available evidence, comparing outcomes between transcatheter technologies and reoperative mitral valve surgery.
INTRODUCTION
The increase in the number of mitral valve repair operations, with a rise in the number of biological prostheses implanted in younger patients, has resulted in higher incidences of reoperative mitral valve surgery. Bioprosthesis replacement for deterioration or failed repairs and mitral annular calcification (MAC) pose ongoing challenges to the surgical management of patients requiring repeat valve surgery (Re-MVS) [1][2][3][4] .
There is currently evidence that the rate of structural valvular degeneration (SVD) reaches 85% in patients after previous MV surgery [1]. In these patients, a second mitral valve operation may be required in 35% of cases within the first 10 years [5,6]. For patients who have undergone primary mitral valve surgery, the greatest concern is the best choice for the subsequent therapeutic approach: mitral valve reoperation using a standard surgical approach or, alternatively, transcatheter mitral valve implantation. In addition, given the high risk and potentially prohibitive operative mortality of reoperation after mitral valve replacement (Re-MVRpl) or mitral valve repair (Re-MVRp), the search for an alternative to the standard surgical approach is both judicious and desirable.
The literature has reported successful cases of transmitral valve-in-valve replacement (TMViVR) in nonsurgical candidates in whom a mitral valve bioprosthesis or surgical repair failed [7] .
The consolidated experience gained with the Sapien Transcatheter Heart Valve (Edwards Lifesciences, Irvine, CA) has accelerated regulatory endorsement by the United States Food and Drug Administration (FDA), which has authorized the use of transcatheter valve treatment (TVT) for the treatment of the diseased mitral valve [8][9][10][11] .
As a result, it was possible to extend the indication for these devices to Centers for Medicare and Medicaid Services (CMS) patients. The well-defined role of the national coverage determination (NCD) has tasked the heart team of each hospital with contributing to a nationally verified prospective registry.
Patients in whom transcatheter mitral valve-in-valve therapy (TMViV-T) is recommended are closely surveilled as part of post-market monitoring and followed up to assess the quality of the results achieved at the individual level by the centers involved in TMViV-T.
The TMViV/TMViR module is one of the modules of the National Registry, another being transcatheter aortic valve replacement (TAVR); patients who are managed with transcatheter aortic valve-in-valve procedures are included in the TAVR module. The TMViV/TMViR module gathers patients' demographic information and comorbidities. In addition, it contains specific baseline criteria to characterize mitral valve disease, with detailed notes on the procedure adopted as well as results on mortality and morbidity. Specifically, it reports mortality at 30 days and 1 year and provides information on hospital admissions and quality of life. A risk model for the prediction of hospital mortality is also included in the United States National Registry that hosts the TMViV/TMViR module. It is useful to compare the reported results both with national averages and with those obtained from peer groups, with the aim of improving and refining the quality of the procedure. Mechanical intervention with a transcatheter approach for structural degeneration of a mitral valve prosthesis or failure of MV repair can be performed using the TMViV or the TMViR technique [12][13][14][15][16][17][18][19] . This type of approach can be considered a safe alternative to reduce the risk of operative mortality and increase the clinical benefit in patients experiencing heart failure (HF) [12] .
Reoperation for previous mitral repair failure or prosthesis degeneration can be performed via a second sternotomy or through a right thoracotomy approach [20][21][22] . The intraoperative risk of a resternotomy is assessed by thoracic CT and coronary or graft catheterization. Cardiopulmonary bypass (CPB) is established using the femoral vessels and the sternum is reopened using an oscillating saw. The right thoracotomy, instead, is performed by making an incision on the right infra-mammary fold, isolating the right lung and opening the pericardium anterior to the right phrenic nerve, which is isolated and protected so as to allow safe surgical access to the left atrium [20,[23][24][25] .
Despite the lack of established guidelines for choosing the safest therapeutic approach, and the presence of established evidence related to surgical approaches, the recent feasibility results for transcatheter mitral valve-in-valve procedures place the treatment of structural heart disease in a novel, evidence-based perspective [13][14][15][16] .
We undertake this review with the primary goal of summarizing the existing evidence on the use of transcatheter mitral valve procedures and on when repeat reoperative mitral valve surgery is required. We focus on recent results from prospective registry studies, propensity-matched observational series, meta-analyses and unmatched observational series. Our secondary objective is to identify the pool of patients for whom the benefits of the transcatheter valve-in-valve (TMViV) and valve-in-ring (TMViR) procedures are more evident than those of standard redo surgical therapy [5][6][7]12,13] .
With the aim of encouraging a broader use of transcatheter mitral valve-in-valve implantation and providing guidance for health professionals, we herein discuss the current evidence for the choice among the various transcatheter mitral valve-in-valve therapies (TMViV-T). Finally, we develop a useful evidence-based algorithm to guide the choice of the mitral valve prosthesis in case of reoperation.
The emerging new contribution of transcatheter mitral valve-in-valve therapy
TMViV-T is presently considered in the United States and Europe for the management of deteriorated MV (TMViV) or mitral ring (TMViR) prostheses in patients at high risk for standard surgical approach [17,18] .
Unlike MitraClip therapy, for which regulatory endorsement is based on the results of the Endovascular Valve Edge-to-Edge Repair Study (EVEREST) [19] , there is limited evidence on the survival benefit in patients who have undergone TMViV/TMViR for deteriorated bioprostheses compared to those who had redo surgery for SVD of the MV [3,14,15,17,19] .
Both procedures had the same enrollment method for patients considered suitable for the devices; 349 persons from 98 sites were therefore included in the US registry from 2013 to 2015 [26] . Nevertheless, the recent literature shows that the transcatheter procedure using the MitraClip has a slight advantage over TMViV-T, owing to a better understanding of the pathophysiology of mitral regurgitation and the normalization of the geometry [2][3][4]8,9,[27][28][29][30] .
Recipients of transcatheter mitral valve implants (n = 248) were treated for a degenerated mitral valve prosthesis (TMViV, 76.1%) or severe mitral regurgitation occurring after mitral valve repair with an annular ring (TMViR, 23.9%). The population recruited in the TMViV-T group was at high risk for conventional open mitral valve operation, with a median age of 76 years and a female predominance (61%) [18,26] .
TMViV-T can be performed via a transapical, transatrial or transseptal approach. The transapical approach is performed through a small anterolateral thoracotomy in the fifth or sixth intercostal space, which is used to access the pericardial space near the left ventricular apex, where the transcatheter heart valve (THV) delivery system is introduced. The advantage of this approach lies in the anatomy, which allows direct access to the entire mitral apparatus for coaxial retrograde deployment of the transcatheter heart valve in patients presenting with a failed surgical mitral bioprosthesis. The transatrial approach is performed through a small anterolateral thoracotomy that allows antegrade deployment of the transcatheter heart valve device. Finally, the transseptal approach is achieved with initial access through the femoral vein, which allows the catheters and delivery system to advance progressively towards the left atrium; a transseptal puncture is then performed, allowing antegrade deployment of the transcatheter heart valve [18,26] .
The main advantage of the transseptal approach is the avoidance of a thoracic incision, which offers quick post-procedural recovery after mechanical intervention. Moreover, it is particularly suitable in patients with severe chronic lung disease or in those who have undergone multiple sternotomies [8,18,26,28] .
The preferred approach was transapical access (70.1%) over transseptal access (24.4%). It is important to note that in-hospital and 30-day mortality was much lower than predicted by the STS PROM (8.5% vs. 11%), while 1.4% of patients experienced postoperative left ventricular outflow tract obstruction. In addition, considering the frailty of patients treated with this type of mechanical intervention, post-procedural morbidity was low. Postoperative left ventricular outflow tract obstruction occurred in very few patients; similarly, the rates of stroke and need for dialysis were low in recipients of TMViV-T. No patient required repeat mitral valve surgery during hospitalization. Among the 83.7% of patients with echocardiographic follow-up available, moderate to severe mitral regurgitation occurred infrequently and the measured transvalvular mitral gradients were low. Evidence based on early outcomes in this group of high-risk patients, for whom repeat open mitral valve surgery was avoided, is encouraging. Therefore, TMViV-T can be offered as an effective treatment option for this category of patients [18,[26][27][28] .
Clinical evidence and technical evolution using transcatheter mitral valve-in-valve therapy
The international registry for transcatheter mitral valve replacement (TMVR) was established with the aim of treating patients with degenerated mitral bioprostheses and failed annuloplasty rings who are suitable for transcatheter mitral valve-in-valve therapy. The patients subsequently recruited, with a mean age of 72.5 years and a mean STS score of 8.9% ± 6.8% [ Table 1], constitute the TMVR registry, a multicenter observational study. As of November 2015, a total of 25 centers from Europe and North America were working to implement and improve the registry. In total, 248 patients were included to receive transcatheter-based mechanical intervention on the mitral valve. Of these, 176 patients (71%) received TMViV for a degenerated mitral bioprosthetic valve, whereas 72 patients (29%) had TMViR for failed annuloplasty rings [8,19,27,29,30] .
The survival benefit of TMViV-T in a small number of patients with failed bioprosthetic mitral valves was demonstrated in reports by Nappi et al. [32] and Webb et al. [33] nearly 10 years ago. TMViV-T was used for 24 high-risk patients with degenerated bioprosthetic valves (aortic, n = 10; mitral, n = 7; pulmonary, n = 6; tricuspid, n = 1). The first-in-human use of a percutaneous transseptal approach was unsuccessful; by contrast, five subsequent patients were managed successfully using transapical access for TMViV implantation. The procedure was accomplished relatively easily; not only did all patients survive to 30-day follow-up, but there was also no mortality at a median 72-day follow-up [16,18,19,28] .
TMViR has been used since 2010 in a restricted number of patients (n = 17) with failed annuloplasty rings, demonstrating how promising, safe and efficacious this type of mechanical intervention is. The survival rate was 82% (14/17 patients) at 30 days, while at the most recent follow-up (13 ± 5 months) the percentage of surviving patients dropped to 71% (12/17) [34] .
Two years later, Elmariah et al. [10] , using fluoroscopic and echocardiographic guidance, described the successful implantation of a TMViV through the left ventricular apex. The authors implanted a 26-mm Edwards SAPIEN heart valve (Edwards Lifesciences) that was accurately placed within a previously implanted, stenosed 27-mm Carpentier-Edwards Perimount bioprosthesis (Edwards Lifesciences, Irvine, California). Conventional surgery was not recommended because the patient presented numerous comorbidities, including cerebrovascular disease [95% stenosis of the left internal carotid artery (ICA) and moderate stenosis of the right ICA]. He had undergone previous coronary artery bypass grafting (CABG)/MVR, complicated by dehiscence of the sternal wound requiring complex reconstruction, and had received recent percutaneous coronary intervention. Moreover, the patient had been managed with bilateral renal artery stenting.
Simultaneously, Seiffert et al. [35] reported improved outcomes using transapical TMViV implantation in 6 patients. The amelioration of the clinical condition was almost certainly due to the rapid improvement in hemodynamics, as testified by the reduction of the mean transvalvular gradient (from 11.3 ± 5.2 mmHg to 5.5 ± 3 mmHg). A few years later [11] , the first 4 consecutive patients successfully received transcatheter mitral valve replacement for degenerated mitral valve bioprostheses, including both bioprosthetic valves (n = 2) and rings (n = 2), using transseptal access. The cornerstone of this report was to highlight how the innovative transcatheter mitral valve replacement technique could be performed via a transvenous, transseptal route. By demonstrating the feasibility of the procedure with a purely femoral venous approach, the authors reported a reduction of complications and length of hospital stay in TMViV-T recipients compared to those managed with transapical access or redo surgery [5,31] .
The percutaneous transvenous transseptal approach was reported in one study including 48 high-risk patients with a median follow-up of 40 days (range: 1-491 days); the mean Society of Thoracic Surgeons (STS) risk score was 13.2% ± 7.4%, with a mean age of 76 ± 11 years [14,18,27,28] . Patients were managed using an Edwards SAPIEN prosthesis (Edwards Lifesciences, Irvine, California) and received TMViV/R therapy for 33 degenerated mitral bioprostheses, 9 previously failed annuloplasties and 6 cases of severe MAC. Considering the overall group of patients undergoing treatment, it is important to underline that efficacy was achieved in 42 of 48 patients (88%). In detail, the success rate was 94% (31 out of 33) in patients who underwent the mechanical intervention for a failed bioprosthetic mitral valve, whereas it was 73% (11 out of 15 patients) in those with failed annuloplasty rings and MAC [14,15] .
While awaiting the outcomes of large multicenter registry studies, there is currently limited evidence to sustain the use of percutaneous transfemoral antegrade transseptal implantation for mitral prostheses, even though this may provide an encouraging survival benefit over standard Re-MVS. Indeed, more than 7 years ago, a cohort of 23 patients was studied with a median follow-up greater than 2 years, revealing a Kaplan-Meier survival rate of 90.4% [9,20,36,37] .
Only one independent study has confirmed this finding in a larger cohort with the use of transseptal balloon-expandable TMViV-T. The study reported the results of 87 patients, of whom approximately three-quarters had degenerated mitral bioprostheses. For patients who underwent TMViR or valve implantation related to MAC disease, the estimated survival rate was 78% at 30 days, whereas for those who received TMViV for a degenerated bioprosthetic mitral valve it was 95% at 30 days [95% confidence interval (CI): 70%-86% vs. 92%-97%; P = 0.008]. Finally, at 1-year follow-up, the survival rate was 68% in patients who had transcatheter mitral valve ViR/valve-in-MAC therapy compared with 86% in those treated with transcatheter mitral valve therapy for a failed bioprosthetic mitral valve (95%CI: 58%-78% vs. 81%-91%; P = 0.008) [14,15] .
Hu et al. [23] , in a recent systematic literature review, analyzed the outcomes of 245 patients who underwent TMVR (172 TMViV and 73 TMViR) for degenerated bioprostheses or failed annuloplasty rings from 2009 to 2018. The mean age of candidates for TMViV therapy was 73 ± 12 years, with a mean STS score of 15.6% ± 13.5% and a mean LVEF of 46.7% ± 14.1%. In this review, the technical success rate was 93.5% and operative mortality was 5.7%, with no significant difference between the two groups (5.2% TMViV vs. 6.8% TMViR). In the entire study, left ventricular outflow tract obstruction was observed in 4 (1.6%) patients: 0 (0%) in the TMViV group and 4 (5.5%) in the TMViR group.
In a recent review with 5-year follow-up, Cheung et al. [31] reported 23 consecutive patients with degenerated mitral bioprostheses who were successfully treated using TMViV. Patients received Edwards SAPIEN-type balloon-expandable valves via a left ventricular apex approach. Device success was 100% and no cases of valve malposition or embolization occurred [31] .
As demonstrated by the reported data and the cited studies, despite the evidence of low intraoperative and postoperative mortality and a low incidence of major bleeding and stroke, the main concerns with the use of TMViV-T, besides the possible need for reoperation, are the increased risk of complications from LVOT obstruction and death related to congestive heart failure.
Eleid et al. [14,15] showed that LVOT obstruction increased significantly in patients with higher ejection fractions (66% ± 6% vs. 56% ± 12%; P = 0.002). This risk is even higher in TMViR or TMV-in-MAC recipients than in those who undergo TMViV procedures. It is important to point out that the majority of patients with minimal symptoms related to LVOT obstruction were managed conservatively, with transvalvular mitral gradients decreasing over time. Nevertheless, in some cases, patients who received TMViR or TMV-in-MAC developed a critical obstruction related to systolic displacement of the anterior mitral leaflet into the left ventricular outflow tract, resulting in irreversible impairment of cardiac function and death due to the onset of congestive heart failure. However, the incidence of severe LVOT obstruction and other complications can be notably decreased by careful patient screening [24,[33][34][35] .
Another concern relates to the choice of the appropriate implant size, which remains a matter of debate. We mainly adopt an oversizing of 5%-10% relative to the pre-existing prosthesis, taking as reference the internal diameter reported by the manufacturer; for example, for patients with a prosthesis with an internal diameter of less than 21.5 mm, a 23-mm SAPIEN valve is preferred. The advantage offered by oversizing lies in more secure anchoring of the device when it is inserted inside the sewing ring, and this choice has proved very useful in reducing the risk of paravalvular leakage. However, the risks of extreme oversizing must be considered and avoided: a severely under-expanded device can lead to various drawbacks, including an increase in the transvalvular gradient, suboptimal leaflet coaptation and compromised valve durability [14,15] . Ultimately, a considerable contribution has been offered by the standardized use of cardiac computed tomography, which has proved fundamental in assessing the risk of LVOT obstruction and in determining the degree and distribution of calcium deposits in the valvular annulus. CT imaging is useful in guiding the choice of device size, aiming at an oversizing of about 5%-10% of the annulus area. In patients with oversizing beyond that 5%-10% range, a predicted neo-LVOT area below the 250 mm² minimum, or calcification extending for more than 270° (75%) of the annulus, complications such as valve embolization and LVOT obstruction can be anticipated [14][15][16]19] [ Table 2].
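To illustrate how these screening thresholds might be combined in practice, the following minimal sketch (not clinical software; the function, its name and its structure are hypothetical, and only the cutoffs come from the discussion above) flags the anatomic red flags named here:

```python
# Illustrative screening sketch only, not clinical software. The cutoffs
# (5%-10% annulus-area oversizing, 250 mm^2 neo-LVOT area, 270 degree
# calcification arc) are the ones quoted in the text; everything else is
# a hypothetical construct for illustration.
def tmviv_ct_red_flags(oversizing_pct, neo_lvot_area_mm2, calc_arc_deg):
    flags = []
    if not 5.0 <= oversizing_pct <= 10.0:
        flags.append("oversizing outside the 5-10% target range")
    if neo_lvot_area_mm2 < 250.0:
        flags.append("predicted neo-LVOT area below 250 mm^2"
                     " (risk of LVOT obstruction)")
    if calc_arc_deg > 270.0:
        flags.append("calcification arc > 270 degrees"
                     " (risk of valve embolization)")
    return flags

print(tmviv_ct_red_flags(12.0, 230.0, 300.0))  # three red flags
print(tmviv_ct_red_flags(7.5, 320.0, 90.0))    # no red flags -> []
```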
It is important to note that a potential benchmark for the use of TMViV-T was provided by a Harvard study [1] . From 1992 to 2015, at Brigham and Women's Hospital, the authors identified 520 potential candidates for TMViV therapy in whom a previous mitral valve replacement or repair had failed. They had a mean age of 64 ± 12 years and a median left ventricular ejection fraction of 60%. The median STS score was 6.12% ± 6.5%. In total, 319 patients were in NYHA Class III or IV. Of all patients, 273 had undergone previous mitral valve replacement (pMVR) and 247 previous mitral valve repair (pMVr). There were no differences in the risk of permanent stroke between the two groups (5.1% in pMVR and 5.3% in pMVr) or in the need for a further surgical approach (0.7% in pMVR and 0.8% in pMVr), but there was a higher risk of major bleeding in the pMVr group (5.3%) than in the pMVR group (2.9%). Operative mortality was higher in the pMVr group (9.3%) than in the pMVR group (5.1%).
Recently, this topic has been extensively investigated in the cardiovascular literature. Two recent studies showed different results regarding post-procedural residual mitral stenosis (MS) or residual mitral regurgitation (MR) and analyzed how transcatheter mitral valve replacement (TMVR) can be a valid approach to patients with prosthetic degeneration (TMViV) or ring degeneration (TMViR) or in patients with mitral annulus calcification (TMV-in-MAC) [38][39][40] .
In one study, 1079 patients from 90 centers were randomized and treated with TMViV (n = 857) or TMViR (n = 222) therapy. The primary endpoint was patient survival, while secondary endpoints were residual MS (mean gradient ≥ 10 mmHg), residual MR (defined as regurgitation ≥ moderate) and the rate of repeat MV replacement. With a median echocardiographic follow-up of 772.5 days, this study found that post-procedural MS was more common in patients treated with the TMViV procedure, while post-procedural MR was more common in patients treated with TMViR. The analysis reported that residual MS was correlated with a smaller true internal diameter, younger age and larger body mass index, while post-procedural MR was correlated only with the TMViR procedure [38] .
Surgical therapy: sternotomy or right thoracotomy?
Reoperative mitral valve (MV) procedures are increasingly common and represent over 10% of all MV operations in the United States. There is no broad consensus regarding the optimal surgical approach. The decision between a surgical approach via resternotomy or right thoracotomy represents one of the most important steps of the surgical planning [11,24] [Figures 1 and 2].
Anatomical characteristics, concomitant pathologies, risk scores and type of MV degeneration (mitral repair failure or prosthesis degeneration) can be useful for the decision between sternotomy and right thoracotomy. Moreover, recognizing and analyzing the cause and mechanism of primary MV repair failure with reliable intraoperative and predischarge echocardiography is important to improve the outcome of the initial MV repair [23,25] [ Figure 3].
Regardless of the type of repair failure/prosthesis degeneration, the presence of peripheral artery disease, high stroke risk, concomitant or previous coronary artery bypass grafting (CABG) or aortic valve surgery represents one of the first parameters that can guide the choice towards the safest surgical approach. Thoracic CT can show anatomical characteristics, such as proximity of right ventricle or pericardium to the sternum or the position of large vessels or graft used for previous CABG, which are useful to identify patients with high or low risk for intraoperative injury in repeat sternotomy [ Figure 2].
A publication from the Division of Cardiac Surgery of the University of Maryland School of Medicine in Baltimore [20] shows that repeat sternotomy MV operations can be performed with low perioperative mortality (4.6%) and a low re-entry injury rate (1.5%), and, moreover, that repeat sternotomy MV operation is not an independent risk factor for operative mortality or morbidity. Furthermore, a recent review of patients who underwent reoperative MV surgery between 2011 and 2017 at four institutions within the Northwell Health System confirms that reoperative mitral valve surgery via right anterolateral minithoracotomy, when performed in centers with extensive experience in minimally invasive surgery, is safe, reproducible and associated with shorter ventilation times, decreased hospital stay and faster postoperative recovery. Furthermore, the type of MV reoperation (re-repair or re-replacement) is not affected by the surgical approach used. These assessments are essential to adopt the safest and most effective surgical approach for the patient's benefit; we therefore propose a flowchart that can guide the choice of the best surgical approach [ Figure 4]. Concomitant tricuspid valve surgery is not decisive for the choice between sternotomy and right thoracotomy. As shown in a review performed by the Division of Cardiovascular Surgery in Toronto [21] , there are several independent predictors of mortality during redo mitral valve replacement: renal failure (OR = 3.4), previous stroke/TIA (OR = 2.5), left ventricular dysfunction (EF < 40%; OR = 1.6), urgent timing (OR = 1.5) and no subvalvular preservation during the first surgery (OR = 3.4). These data show the importance of careful patient selection and meticulous evaluation of the first surgery [ Table 3]. Another critical question is: should the mitral valve be re-repaired or replaced?
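To make the combined weight of these predictors concrete, the sketch below naively multiplies the published odds ratios for whichever risk factors are present. This assumes the predictors act independently on the odds scale and ignores the baseline risk, so it is an illustration of how odds ratios compose, not a validated risk score:

```python
# Naive illustration only -- not a validated risk model. The odds ratios
# are those quoted above from the Toronto review; independence of the
# predictors on the odds scale is an assumption made for illustration.
REDO_MVR_ODDS_RATIOS = {
    "renal_failure": 3.4,
    "previous_stroke_tia": 2.5,
    "lv_dysfunction_ef_lt_40": 1.6,
    "urgent_timing": 1.5,
    "no_subvalvular_preservation": 3.4,
}

def mortality_odds_multiplier(risk_factors):
    """Product of the odds ratios of the risk factors present."""
    mult = 1.0
    for factor in risk_factors:
        mult *= REDO_MVR_ODDS_RATIOS[factor]
    return mult

# Example: a patient with renal failure operated on urgently.
print(mortality_odds_multiplier({"renal_failure", "urgent_timing"}))  # ~5.1
```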
A 2006 retrospective review of patients undergoing surgical correction (repair or replacement) of recurrent MV regurgitation after primary MV repair for regurgitation caused by degenerative valve prolapse analyzed 145 patients who underwent mitral reoperations for recurrent MR at the Mayo Clinic, Rochester [22] . This review showed no striking differences in operative mortality between the re-repair and replacement groups, and identified three independent factors associated with improved survival: MV re-repair, younger age and a preoperative indication of pure MR.
These data emphasize the importance of a prior evaluation of the patient's characteristics and of the mechanism of the previous mitral repair failure when choosing between a re-repair approach, when feasible and safe, and replacement. The great advantage of a surgical approach is the possibility of carrying out, in young and selected patients, a lasting and probably definitive repair of the mitral valve. Our review shows a better outcome in terms of 5-year survival for patients treated with re-repair surgery (76%) compared to patients treated with replacement (60%).
The literature recommends that, when repair failure is related to progression of valvular disease, replacement should be preferred in order to avoid a third MV surgery [14,22,25,31] [ Table 4]. Aside from the type of surgical approach, we analyze the differences between patients who undergo MV re-repair and MV replacement, using data from different studies to compare operative mortality, stroke risk, major bleeding risk, sternal infection and all-cause mortality at 30 days. A recent review by Patel et al. [21] analyzed the outcomes of 256 patients who underwent reoperative mitral valve surgery between 2011 and 2017. Ninety patients had their redo mitral valve surgery performed via right anterolateral thoracotomy and 166 were approached with a median sternotomy. Thirty-day mortality (8 patients, 4.4% vs. 1 patient, 1.1%), stroke (3, 1.8% vs. 0, 0%), reoperation for bleeding (5, 3.2% vs. 2, 2.2%), deep wound infection (4, 2.4% vs. 0, 0%) and sepsis (6, 3.6% vs. 0, 0%) were all higher in the sternotomy group.
The data reported in Tables 2 and 4, when comparing operative mortality, 30-day mortality and procedural success rates, should be analyzed in light of the total number of patients treated by reoperation or by a transvalvular approach. Considering these numbers, it is easy to understand that the key point of the therapeutic choice remains patient evaluation in order to select the best operative approach. In well-selected patients, the surgical approach can guarantee safe, reproducible and long-lasting results.
DISCUSSION
Reoperation for deteriorated bioprostheses or failed repair is among the highest-risk procedures performed in heart surgery. However, as reported in large surgical series, it is the most effective procedure for several categories of recipients. Indeed, its efficacy has been demonstrated in patients with moderate to severe prosthetic valve dysfunction, as well as structural and non-structural valve degeneration, such as prosthetic valve dehiscence or endocarditis [22,31,45] . Short- and long-term survival is a crucial point for the success of the procedure. However, to date, there are no precise guidelines orienting the selection of patients who might benefit more from TMViV-T; the choice of the mechanical intervention strategy is therefore not supported by solid scientific evidence.
The literature on the subject has shown that in-hospital mortality rates are substantially lower (between 4% and 5%) for patients who have undergone previous mitral valve repair [16] . In contrast, the 30-day mortality rate was significantly higher for patients managed with conventional Re-MVS for degenerated bioprostheses, ranging from 9.3% to 12%. Importantly, the pivotal study by Eleid et al. [14] showed 30-day survival with freedom from secondary cardiac surgery for other complications of 85% in TMViV and TMViR patients, and of 91% in the subgroup of TMViV for failed bioprosthetic mitral valves.
Baseline features probably explain the higher in-hospital mortality of redo mitral valve surgery in the Re-MVRpl group compared with TMViV. The higher operative mortality in patients who underwent reoperative mitral valve replacement was probably related to a proportionately greater number of patients with endocarditis or requiring associated coronary/valve surgery [21][22][44][45] . Transcatheter heart valve therapy has become technologically advanced, with new devices emerging over the past 5 years; however, as stated by international guidelines and professional society recommendations, it should be the preferred option only for critically ill patients presenting with symptoms due to isolated MV stenosis or combined regurgitation and stenosis of the aortic valve. Candidates for the TMViV procedure should be discussed by the heart team and considered to be at high or prohibitive risk of reoperation. This recommendation is categorized as Class IIa with a level of evidence B-NR, assuming a reasonable improvement in hemodynamics [5,6,15,26] .
Currently, the clinical benefits of using TMViV-T are framed by the Mitral Valve Academic Research Consortium (MVARC), which has elaborated the consensus document on modern mitral valve surgery. There is no solid evidence to suggest that the use of transcatheter heart valve therapy is associated with additional benefit in the long-term outcome of patients with a degenerated or failed MV. MVARC has also focused on the pathophysiology, prognosis and criteria for the design of clinical trials in mitral valve disease.
The benefits of TMViV-T apply to high-risk patients, including those with mitral annular calcification, and become evident in the evaluation of primary and secondary mitral regurgitation. A contemporary analysis of the STS Adult Cardiac Surgery Database revealed that, in the United States during 2015, a little over 2% of patients underwent TMViV-T compared with a total of 12,792 surgical mitral valve repairs and 4548 mitral valve replacements [8,25,30,34] .
Between 4% and 10% of patients managed with mitral valve repair will require a second operation, and usually the reoperation is a mitral valve replacement. The rationale for this reduced use of re-repair is complex and multifactorial.
A survey reported in a French study published more than three decades ago found a greater use of mitral valve re-repair than replacement. However, several other studies showed that the percentage of patients in whom repair was feasible ranged from 36% to 85%.
Repeat open mitral valve replacement is preferred in patients who require a second operation for endocarditis, mitral stenosis, bileaflet prolapse or severe degenerative progression of native disease [8,29,30,32] .
In this scenario, short-term quality metrics can be determining factors influencing surgical decision-making because they directly affect the financial situation of institutions and the employer/health-provider relationship. For example, these metrics have driven the choice, in patients receiving mitral valve replacement surgery, of bioprosthetic valves, preferred for the avoidance of anticoagulant therapy despite the risk of structural deterioration.
It is important to note that, to date, the expense for the treatment of thrombotic complications after standard surgery with mechanical prostheses is "not negligible", as evidenced by the United States Centers for Medicare & Medicaid Services.
Nevertheless, a Harvard study reporting the 24-year experience of 520 patients reoperated on after previous MV replacement or MV repair showed that repeat mitral valve replacement increased operative mortality (7.1%) and other major morbidity (18.2%). Moreover, mitral valve repair, instead of replacement, was associated with higher long-term survival in a Cox-adjusted analysis.
It is clear that repeat mitral valve replacement is burdened by higher mortality at over 10-year follow-up, as shown by Suri et al. [22] , Anyanwu et al. [44] , David et al. [46] (6.9%), Borger et al. [36] (9%) and Vohra et al. [5] (12%). Concerns about the 30-day survival benefit relate to: (1) the largely increased technical difficulty of reintervention; (2) the increased frailty of patients undergoing reoperation; and (3) the fact that prosthetic valve endocarditis is a common indication for repeat surgery. In this group, the slightly higher operative mortality and lower long-term survival in patients who received a homograft may affect the long-term results.
The evidence presented above indicates that contemporary mitral valve redo surgery should involve both the standard approach and TMViV/R therapy (the latter in the absence of clinical or anatomic contraindications), and substantial efforts should be made to promote TMViV/R therapy using the transfemoral/transatrial approach in inoperable/high-risk patients.
The increasing use of bioprostheses even in young patients has changed the platform for mitral valve interventions and TMViV/TMViR could be considered similar alternatives in patients with degenerated mitral and ring bioprostheses or MAC. However, concerns remain over the widespread use of TMViV/R because of the lack of solid evidence on the durability of these devices implanted in the mitral position as well as the potential risk of LVOT obstruction. Early outcomes of TMVR indicate that the procedure should be preferred in high-risk cases because it has low periprocedural mortality. In addition, the occurrence of early complications such as embolization, frequent paravalvular leaks with either C-or D-shaped rings and obstruction of the left ventricular outflow tract due to malposition of the device must be taken into account.
Recently, Yoon et al. [24] showed superior midterm outcomes in patients who received TMViV (n = 176) compared with those who underwent TMViR (n = 72). The authors noted that patients receiving the transcatheter mitral procedure for failure of a ring annuloplasty had higher rates of procedural complications than those treated for a failed degenerated bioprosthetic valve, with technical success of 83.3% vs. 96% (P = 0.001), respectively. Medium-term (1-year) mortality was also higher in the TMViR group (28.7% vs. 12.6%; P = 0.01). Additionally, mechanical intervention for failed annuloplasty showed worse outcomes and was independently associated with all-cause mortality on multivariate Cox analysis (HR = 2.7; 95%CI: 1.34-5.43; P = 0.005).
The FDA approved the use of TMVR in prohibitive/high-risk patients, but comparable data on surgical repeat mitral valve repair or replacement outcomes are also needed to assess its safety and indications, especially in lower-risk patients. TMViR has shown poorer results compared with TMViV or MV surgery.
TMViR or valve-in-MAC is probably less competitive in specific anatomic configurations and less appropriate in cases of severe hypertrophic obstructive cardiomyopathy, given the higher risk of anterior mitral leaflet displacement. Alcohol septal ablation after mitral TMViV/R and good prosthetic valve function cannot be considered sufficient to improve the efficacy of the procedure in cases of critical obstruction, because paravalvular leakage is common.
The use of future benchmarking for TMViV/R could be limited by the presence of young patients, in whom the procedure is not indicated for ethical and clinical reasons. Resistance to the use of TMViV/R among surgeons and cardiologists may be motivated, at least in part, by the fact that the clinical advantage of TMViV/R therapy has only been shown in observational reports and has never been confirmed in randomized clinical studies.
The ongoing randomized Mitral Implantation of Transcatheter Valves trial (NCT02370511) can certainly provide further clarification; however, no patients have been enrolled for surgical repeat mitral valve replacement in this registry.
The data available in the literature do not allow meaningful conclusions to be reached, so more in-depth investigations are required.
CONCLUSION
This review analyzes the different outcomes of percutaneous and surgical approaches for degenerated or failed previous mitral valve interventions.
Data extracted from different studies and reviews help us establish a valid protocol of choice for patients who need redo surgery due to degeneration of a mitral biological valve prosthesis or a previously failed mitral valve repair. The choice between the two methods of reintervention involves the analysis of different factors and, as described above, is the result of a careful evaluation of patient characteristics and comorbidities, the presence of concomitant aortic valve pathology or coronary artery disease, the type of previous surgical approach and life expectancy. Recently, different studies have shown, in agreement with our results, that both procedures, TMViV and TMViR, can present different degrees of residual insufficiency or residual valve stenosis. According to these results, the only way to guarantee the best clinical outcome is the careful selection of candidates for TMViV/ViR or for reoperation [38][39][40] .
Mean-field limit versus small-noise limit for some interacting particle systems
In the nonlinear diffusion framework, stochastic processes of McKean-Vlasov type play an important role. In some cases they correspond to processes attracted by their own probability distribution: the so-called self-stabilizing processes. Such diffusions can be obtained by taking the hydrodynamic limit in a huge system of linear diffusions in interaction. In both cases, for the linear and the nonlinear processes, small-noise asymptotics have been emphasized by specific large deviation phenomena. The natural question, therefore, is: is it possible to interchange the mean-field limit with the small-noise limit? The aim here is to consider this question by proving that the rate function of the first particle in a mean-field system converges to the rate function of the hydrodynamic limit as the number of particles becomes large.
Introduction
In the stochastic convergence framework, the large deviation theory plays an essential role in describing the rate at which the probability of certain rare events decays. Each convergence result therefore leads to the search for the associated large deviation rate. In suitable cases, the knowledge of the so-called large deviation principle (LDP) even provides information about the convergence itself (see the central limit theorem [Bry93]).
This paper is concerned with the convergence of continuous stochastic processes defined as small random perturbations of dynamical systems. In the classical diffusion case, the stochastic process converges in the small-noise limit to the deterministic solution of the dynamical system, and the large deviation theory developed by Freidlin and Wentzell [FW98] describes the behaviour of the rare-event probabilities. More recently, Herrmann, Imkeller and Peithmann [HIP08] studied the large deviation phenomenon associated with the McKean-Vlasov process, a particular nonlinear diffusion which is attracted by its own law (the so-called self-stabilizing effect). This process appears, for instance, in the probabilistic interpretation of the granular media equation. They presented the explicit expression of the rate function $J_\infty$ and of the Kramers' rate, which is related to the time needed by the diffusion to exit a given bounded domain.
The aim of this paper is to better understand the link between the large deviation principle of the nonlinear diffusion and the classical theory developed by Freidlin and Wentzell. More precisely, the McKean-Vlasov equation describes the behaviour of one particle in a huge system of particles in interaction, as a result of the hydrodynamic limit in a mean-field system. The natural question, therefore, is to emphasize the link between the rate function (or entropy function) $J_\infty$ of the nonlinear diffusion and the Freidlin-Wentzell rate function $J_N$ associated with one particle in a mean-field system of size $N$. We prefer to use functional analysis tools rather than to develop the probabilistic interpretation of the corresponding equations.
The material is organized as follows: first we discuss and recall different notions associated with the large deviation theory. Secondly we present the model and point out the link between nonlinear diffusion and high dimensional classical diffusions: the so-called mean-field effect. The third section will be devoted to the main result: the convergence of the rate functions J N → J ∞ as N becomes large. Finally, we present some immediate consequences and a generalization result.
A large deviation principle
Let us introduce the large deviation theory using some simple arguments. We consider a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and $(X_k)_{k\in\mathbb{N}^*}$ a sequence of independent and identically distributed random variables. This sequence is concerned with several classical convergence results: the strong law of large numbers points out that the arithmetic average $\bar{X}_n := \frac{1}{n}\sum_{k=1}^n X_k$ converges almost surely to the mean $\mathbb{E}[X_1]$ as $n$ goes to infinity. The central limit theorem goes further, providing the distribution around this limiting value: indeed, the random variable $\sqrt{n}\,(\bar{X}_n - \mathbb{E}[X_1])$ converges in distribution to the centered Gaussian law with variance $\mathrm{Var}(X_1)$. Let us note that we do not specify the hypotheses required for these two results to hold. The idea of large deviations is to go even further by estimating the probability of rare events: typically, the probability for the empirical mean $\bar{X}_n$ to be far away from $\mathbb{E}[X_1]$, or the probability that the empirical measure $\frac{1}{n}\sum_{k=1}^n \delta_{X_k}$ is far from $\mathbb{P}_{X_1}$, the probability distribution of $X_1$.
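Both classical statements are easy to check numerically; the following minimal sketch (an illustration using exponentially distributed variables, an arbitrary choice with mean and variance equal to 1) verifies them by simulation:

```python
import numpy as np

# Quick numerical check of the law of large numbers and the CLT quoted
# above; the unit exponential law is an arbitrary illustrative choice
# (E[X_1] = 1 and Var(X_1) = 1).
rng = np.random.default_rng(0)
n, trials = 10_000, 2_000
X = rng.exponential(scale=1.0, size=(trials, n))

means = X.mean(axis=1)
print(means.mean())                   # law of large numbers: close to 1
z = np.sqrt(n) * (means - 1.0)        # CLT: approximately N(0, Var(X_1))
print(z.mean(), z.std())              # close to 0 and 1
```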
In order to measure how small the probability of a rare event is, it is convenient to describe the distribution of the vector $(X_1, \dots, X_n)$ and to prove that it is concentrated on a small set of typical values with high probability (see [DZ98] for precise statements). Let us illustrate this important feature by an example. We assume that $X_1$ is a finitely valued random variable with $\#X_1(\Omega) = d > 0$. Without loss of generality, we set $X_1(\Omega) = M := \{1, \dots, d\}$ and denote $p_i := \mathbb{P}(X_1 = i)$ for any $1 \le i \le d$. $M^n$ then corresponds to the family of sequences (messages) of length $n$. The main interest in the study of rare events is to define the entropy of the so-called typical messages, and the following surprising remark holds: the probability of being a typical message goes to 1 as $n$ goes to infinity, even though the number of such messages is negligible with respect to the number $d^n = \#M^n$ of all possible sequences.
To make this remark precise, let us define the entropy of the probability distribution $(p_1, \dots, p_d)$ as follows:
$$H(p) := -\sum_{i=1}^{d} p_i \log p_i.$$
For a given positive real $\epsilon$, we introduce the set of typical messages:
$$T^\epsilon_n := \Big\{ (x_1,\dots,x_n) \in M^n \,:\, \Big| -\frac{1}{n} \sum_{i=1}^{n} \log p_{x_i} - H(p) \Big| \le \epsilon \Big\}.$$
Using the law of large numbers applied to the sequence of independent and identically distributed random variables $(\log p_{X_i})_{1 \le i \le n}$, the following properties hold:
$$\lim_{n\to\infty} \mathbb{P}\big((X_1,\dots,X_n) \in T^\epsilon_n\big) = 1 \quad \text{and} \quad \#T^\epsilon_n \le e^{n(H(p)+\epsilon)}.$$
These two results hold for any $\epsilon > 0$; in particular, if $H(p) < \log(d)$, we obtain a set of messages $T^\epsilon_n$ whose probability is close to 1 for large $n$ whereas its size is small compared to the whole space: $\#T^\epsilon_n = o(d^n) = o(\#M^n)$. In other words, the trajectory $(X_1, \dots, X_n)$ has a small probability of being outside a small part of the phase space $M^n$. This discussion is based on the explicit expression of the entropy function, which permits the description of the probability of paths deviating from the typical ones (large deviation phenomenon).
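The concentration phenomenon is also easy to observe numerically. The sketch below (the distribution $p$, the tolerance $\epsilon$ and the message length $n$ are arbitrary illustrative choices) estimates the probability of the typical set by Monte Carlo and compares the per-symbol growth rates of $\#T^\epsilon_n$ and $\#M^n$:

```python
import numpy as np

# Monte Carlo illustration of the typical-set phenomenon: P(T_eps^n) -> 1
# while #T_eps^n <= e^(n(H(p)+eps)) is negligible compared with d^n when
# H(p) < log d. The distribution p, eps and n are illustrative choices.
rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2])            # here H(p) ~ 1.03 < log(3) ~ 1.10
H = -np.sum(p * np.log(p))
eps, n, trials = 0.05, 2000, 1000

hits = 0
for _ in range(trials):
    msg = rng.choice(len(p), size=n, p=p)
    per_symbol = -np.mean(np.log(p[msg]))   # -(1/n) sum_i log p_{x_i}
    hits += abs(per_symbol - H) <= eps
print("P(typical) ~", hits / trials)
print("per-symbol log-size gap:", np.log(len(p)) - (H + eps))
```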
In this paper, the framework concerns continuous-time processes depending on a parameter $\sigma$, and we describe the behaviour of this family in the small-parameter limit. Even if the state space is infinite, the idea is similar to the above discussion: we need to find a rate function (entropy) which describes the probability of a trajectory being far away from typical paths.
Let us consider a family of continuous stochastic processes $X^\sigma := (X^\sigma_t)_{t\in[0,T]}$ with $T < \infty$. In the following, the family of processes $(X^\sigma)_{\sigma>0}$ is said to satisfy a large deviation principle (LDP) with good rate function $I$ if, for any Borel set $\Gamma$ of paths,
$$-\inf_{\mathring{\Gamma}} I \le \liminf_{\sigma\to0} \sigma^2 \log \mathbb{P}(X^\sigma \in \Gamma) \le \limsup_{\sigma\to0} \sigma^2 \log \mathbb{P}(X^\sigma \in \Gamma) \le -\inf_{\bar{\Gamma}} I$$
(see [DZ98]). For the rescaled Brownian motion $X^\sigma = \sigma B$, Schilder's theorem states that the associated good rate function is given by
$$I_0(\varphi) := \frac{1}{2}\int_0^T \|\dot\varphi_t\|^2\,dt$$
if $\varphi$ belongs to the set of absolutely continuous functions starting in $0$, denoted by $H_0$. If $\varphi \notin H_0$, we set $I_0(\varphi) := +\infty$. Here $\|\cdot\|$ stands for the Euclidean norm in $\mathbb{R}^d$. The study elaborated by Schilder permits going further in the description of LDPs for diffusions, as presented by Freidlin and Wentzell. If $X^\sigma$ satisfies the stochastic differential equation
$$dX^\sigma_t = b(t, X^\sigma_t)\,dt + \sigma\,dB_t, \qquad X^\sigma_0 = x,$$
where the drift term $b(t,x)$ is a continuous function with respect to the time variable and locally Lipschitz with respect to the space variable, then the family $(X^\sigma)_{\sigma>0}$ admits a LDP with the good rate function
$$I_b(\varphi) := \frac{1}{2}\int_0^T \big\|\dot\varphi_t - b(t,\varphi_t)\big\|^2\,dt$$
for $\varphi \in H_x$ (the set of absolutely continuous functions starting in $x$). For $\varphi \notin H_x$, $I_b(\varphi) := +\infty$. Let us focus our attention on the typical paths of such a diffusion. In the particular case of a deterministic equation admitting a unique solution, the diffusion $X^\sigma$ starting in $x$ converges in probability towards the deterministic trajectory $\Psi(x)$, solution of $\dot\Psi_t = b(t,\Psi_t)$ with $\Psi_0 = x$, in the small-noise limit. The Freidlin-Wentzell LDP estimates the rate of convergence: introducing, for $\delta > 0$, the set
$$\Delta_\delta := \big\{\varphi \,:\, \|\varphi - \Psi(x)\|_\infty \ge \delta\big\},$$
where $\|\cdot\|_\infty$ stands for the uniform norm, we obtain
$$\limsup_{\sigma\to0} \sigma^2 \log \mathbb{P}\big(\|X^\sigma - \Psi(x)\|_\infty \ge \delta\big) \le -\inf_{\Delta_\delta} I_b.$$
Let us finally note that the precise description of the deviation phenomenon permits dealing with the small-noise asymptotics of exit times $\tau_D$ from a domain of attraction $D$. Namely, if the drift term of the diffusion is in the so-called gradient case, that is $b(t,x) = -\nabla V(x)$, if moreover $V$ reaches a local minimum at $x = a$ and $D$ is a bounded domain of attraction associated with $a$, then a Kramers' type law can be observed. A weak version of this result is the following asymptotic expression:
$$\lim_{\sigma\to0} \sigma^2 \log \mathbb{E}_x[\tau_D] = 2 \inf_{z\in\partial D}\big(V(z) - V(a)\big).$$
In other words, not only is the rate function a key tool for the description of the diffusion's deviation from typical trajectories (linked to a study on a fixed time interval $[0,T]$), but it is also involved in the description of exit times from a domain (a study developed on the whole time axis). The aim of our paper, therefore, is to describe some nice properties of the rate function, not in the classical diffusion case just described above, but for self-stabilizing diffusions of the McKean-Vlasov type, diffusions attracted by their own law. Let us finally note that for other applications of large deviations to communication, optics and biology, we refer the reader to [DZ98].
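As an illustration of how the Freidlin-Wentzell functional can be handled in practice, the following sketch discretizes $I_b$ on a uniform time grid; the scalar Ornstein-Uhlenbeck drift used in the test is an arbitrary choice:

```python
import numpy as np

def fw_action(phi, b, T):
    """Discretized Freidlin-Wentzell functional
    I_b(phi) = (1/2) * integral_0^T ||phi'(t) - b(t, phi(t))||^2 dt
    for a scalar path sampled on a uniform grid (a sketch; assumes phi is
    smooth enough for forward differences to be adequate)."""
    n = len(phi) - 1
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    dphi = np.diff(phi) / dt
    drift = np.array([b(t[k], phi[k]) for k in range(n)])
    return 0.5 * np.sum((dphi - drift) ** 2) * dt

# Test with b(t, x) = -x (arbitrary choice): the deterministic flow
# x0*exp(-t) has zero action, while any other path pays a positive cost.
T, n, x0 = 1.0, 1000, 1.0
t = np.linspace(0.0, T, n + 1)
print(fw_action(x0 * np.exp(-t), lambda s, y: -y, T))        # ~ 0
print(fw_action(x0 * np.ones(n + 1), lambda s, y: -y, T))    # ~ 0.5 > 0
```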
The self-stabilizing model
From now on, we restrict the study to the McKean-Vlasov model: for $x \in \mathbb{R}^d$, the process satisfies the following stochastic differential equation:
$$dX^\sigma_t = \sigma\,dB_t - \nabla V(X^\sigma_t)\,dt - \big(\nabla F * u^\sigma_t\big)(X^\sigma_t)\,dt, \qquad X^\sigma_0 = x. \tag{2}$$
The $*$ symbol stands for the convolution product and $u^\sigma_t$ denotes the density of the probability distribution $\mathbb{P}_{X^\sigma_t}$. Since the own law of the process plays an important role in the structure of the drift term, this equation is nonlinear in the sense of McKean; see for instance [McK67, McK66]. Three terms contribute to the infinitesimal dynamics.
• The first one is the noise generated by the $d$-dimensional Brownian motion $(B_t,\, t \ge 0)$.
• The second force is related to the attraction between one trajectory $t \mapsto X^\sigma_t(\omega_0)$, $\omega_0 \in \Omega$, and the whole set of trajectories. Indeed, we observe
$$\big(\nabla F * u^\sigma_t\big)\big(X^\sigma_t(\omega_0)\big) = \mathbb{E}\Big[\nabla F\big(X^\sigma_t(\omega_0) - X^\sigma_t\big)\Big].$$
Consequently, $F$ is called the interaction potential. The interaction only depends on the difference $X^\sigma_t(\omega_0) - X^\sigma_t(\omega)$ and can therefore be associated with the convolution product. Let us note that other dependences have been studied, namely the quantile case: the drift is then a continuous function of the quantile of the distribution $\mathbb{P}_{X^\sigma_t}$, see [Kol13].
• The last term corresponds to the function $V$, the so-called confining potential. The solution $X^\sigma_t$ roughly represents the motion of a Brownian particle living in a landscape $V$ and whose inertia is characterized by $F$. Therefore it is easy to imagine that the minimizers of the potential $V$ attract the diffusion if $F(0) = 0$.
Let us now present the hypotheses concerning the functions $F$ and $V$. The confining potential $V$ satisfies two conditions, (V1) and (V2); combining (V1) and (V2) ensures the existence of a solution to (2). The interaction function satisfies:
(F1) $F(x) = G(\|x\|)$, that is, $F$ only depends on the norm;
(F2) $G$ is an even polynomial function with $\deg(G) \ge 2$ and $G(0) = 0$.
Let us now complete the description of this McKean-Vlasov model by briefly recalling several already known results concerning (2).
• Probabilistic interpretation of PDEs. The self-stabilizing diffusion corresponds to the probabilistic interpretation of the granular media equation. The probability density function of $X^\sigma_t$, starting at $x$, is represented by $(t,x) \mapsto u^\sigma_t(x)$ and satisfies the following partial differential equation:
$$\frac{\partial u^\sigma_t}{\partial t} = \mathrm{div}\Big[\frac{\sigma^2}{2}\,\nabla u^\sigma_t + u^\sigma_t\,\big(\nabla V + \nabla F * u^\sigma_t\big)\Big]. \tag{3}$$
This equation is strongly nonlinear since it contains a quadratic term of the form $u^\sigma_t\,(\nabla F * u^\sigma_t)$. The link between the granular media equation (3) and the McKean-Vlasov diffusion (2) permits the study of the PDE by probabilistic methods [CGM08, Mal03, Fun84].
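A crude way to visualize the dynamics of (3) in dimension one is an explicit finite-difference scheme. The sketch below uses the toy potentials $V(x) = x^4/4 - x^2/2$ and $F(x) = \alpha x^2/2$ (illustrative choices compatible with the hypotheses above, not taken from the paper), for which the convolution term reduces to $\alpha\,(x - m(t))$ with $m(t)$ the current mean:

```python
import numpy as np

# Explicit finite-difference sketch (illustrative toy setup) of the 1-D
# granular media equation u_t = d/dx[(sigma^2/2) u_x + u (V' + F'*u)],
# with V(x) = x^4/4 - x^2/2 and F(x) = alpha x^2/2, so that
# (F' * u)(x) = alpha (x - m(t)) where m(t) is the mean of u.
sigma, alpha = 0.5, 1.0
L, nx = 4.0, 201
x = np.linspace(-L, L, nx)
dx = x[1] - x[0]
dt, nt = 1e-4, 20000                     # dt chosen below the CFL limit

u = np.exp(-(x - 1.0) ** 2)
u /= np.sum(u) * dx                      # normalize the initial density

Vp = x ** 3 - x                          # V'(x) on the grid
for _ in range(nt):
    m = np.sum(x * u) * dx               # mean of the current density
    flux = 0.5 * sigma ** 2 * np.gradient(u, dx) + u * (Vp + alpha * (x - m))
    u = u + dt * np.gradient(flux, dx)
    u = np.clip(u, 0.0, None)
    u /= np.sum(u) * dx                  # keep u a probability density
print("mass:", np.sum(u) * dx, "mean:", np.sum(x * u) * dx)
```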
• Existence and uniqueness. The existence and uniqueness of a strong solution $X^\sigma$ to (2) defined on $\mathbb{R}_+$ has been proven in [HIP08] (Theorem 2.13). Moreover, the long-time asymptotic behaviour of the probability distribution $\mathbb{P}_{X^\sigma_t}$ has been studied in [CGM08, BRV98] (for convex functions $V$) and in [Tug13a, Tug13b] for the non-convex case. In this second case, the key of the proofs essentially consists in using the results of [HT10a, HT10b, HT12] about the non-uniqueness of the invariant probabilities (which means in particular that there exist several positive stationary solutions of the granular media equation (3) with total mass equal to 1).
• Large deviation principle. The noise intensity appearing in equation (2) is parametrized by $\sigma$. The aim of the large deviation principle is to describe precisely the behaviour of the paths in the small-noise limit. For any $T > 0$, we can prove, see [HIP08], that the family of processes $(X^\sigma)_{\sigma>0}$ satisfies a large deviation principle with the following good rate function:
$$J_\infty(f) := \frac{1}{2}\int_0^T \big\|\dot f_t + \nabla V(f_t) + \nabla F\big(f_t - \Psi^x_\infty(t)\big)\big\|^2\,dt, \qquad f \in H_x. \tag{4}$$
Here the function $\Psi^x_\infty$ is independent of $F$ and satisfies the following ordinary differential equation:
$$\dot\Psi^x_\infty(t) = -\nabla V\big(\Psi^x_\infty(t)\big), \qquad \Psi^x_\infty(0) = x. \tag{5}$$
In other words, the diffusion process $(X^\sigma_t,\, t \ge 0)$ converges exponentially fast towards the deterministic solution $\Psi^x_\infty$ as $\sigma$ tends to 0. The limit function for a classical diffusion $Y^\sigma_t$ defined by
$$dY^\sigma_t = \sigma\,dB_t - \nabla V(Y^\sigma_t)\,dt, \qquad Y^\sigma_0 = x,$$
is exactly the same: the self-stabilizing phenomenon does not change the limit, it only changes the speed of convergence. Indeed the rate function $J_\infty$ clearly depends on $F$. If the function $F$ is convex, the trajectories of the McKean-Vlasov diffusion $X^\sigma$ are closer to $\Psi^x_\infty$ than those of the diffusion $Y^\sigma$.
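For concreteness, the rate function $J_\infty$ can be evaluated numerically by discretizing the integral in (4) together with the ODE (5). In the sketch below the potentials $V(x) = x^4/4 - x^2/2$ and $F(x) = x^2/2$ are again arbitrary illustrative choices:

```python
import numpy as np

# Discretized evaluation of J_infty (a sketch with toy potentials, not
# values from the paper): J(f) = (1/2) int_0^T ||f' + V'(f) + F'(f-Psi)||^2,
# where Psi solves Psi' = -V'(Psi), Psi(0) = x.
def j_infty(f, T, x0, Vp, Fp):
    n = len(f) - 1
    dt = T / n
    psi = np.empty(n + 1)
    psi[0] = x0
    for k in range(n):                        # Euler scheme for the ODE (5)
        psi[k + 1] = psi[k] - Vp(psi[k]) * dt
    df = np.diff(f) / dt
    integrand = df + Vp(f[:-1]) + Fp(f[:-1] - psi[:-1])
    return 0.5 * np.sum(integrand ** 2) * dt

T, n, x0 = 1.0, 1000, 2.0
Vp = lambda y: y ** 3 - y                     # V(x) = x^4/4 - x^2/2
Fp = lambda y: y                              # F(x) = x^2/2
# Standing still at x0 = 2 has positive cost, since the deterministic flow
# moves away from x0 towards the well of V at x = 1.
print(j_infty(np.full(n + 1, x0), T, x0, Vp, Fp))
```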
Since the asymptotic behaviour has been described on a fixed time interval $[0, T]$, the next step is to describe the asymptotic behaviour on the whole time axis, namely to study exit problems: the first time the self-stabilizing diffusion exits from a given bounded domain. This problem has already been solved when both $V$ and $F$ are uniformly strictly convex functions, see [HIP08, Tug12], by the use of large deviation techniques. In [Tug12], the method is based on the exit problem for an associated mean-field system of particles.
An interacting particle system
The McKean-Vlasov diffusion $X^\sigma$, described in the previous section, corresponds to the movement of a particle in a continuous mean-field system in the so-called hydrodynamical limit, that is, as the number of particles tends to infinity. The mean-field system associated with the self-stabilizing process (2) is an $N$-dimensional random dynamical system $(X^{1,N,\sigma}, \dots, X^{N,N,\sigma})$ defined by
$$dX^{i,N,\sigma}_t = \sigma\,dB^i_t - \nabla V\big(X^{i,N,\sigma}_t\big)\,dt - \frac{1}{N}\sum_{j=1}^{N}\nabla F\big(X^{i,N,\sigma}_t - X^{j,N,\sigma}_t\big)\,dt, \qquad X^{i,N,\sigma}_0 = x. \tag{6}$$
Here, $(B^i_t)_{t\in\mathbb{R}_+}$ stands for a family of $N$ independent $d$-dimensional Brownian motions. We also assume $B^1 = B$; in other words, both diffusions $X^{1,N,\sigma}$ and $X^\sigma$ (see (2)) are defined with respect to the same Wiener process (this is possible due to the existence of a strong solution). The propagation of chaos then permits linking (2) and (6). It is essentially based on the following intuitive remark: the larger $N$ is, the less influence a given particle $X^{j,N,\sigma}$ has on the first particle $X^{1,N,\sigma}$. Consequently, it is reasonable to consider that the particles become less and less dependent as the number of particles becomes large. The empirical measure $\frac{1}{N}\sum_{j=1}^N \delta_{X^{j,N,\sigma}_t}$ therefore converges towards a measure $\mu^\sigma_t$ which corresponds to the own distribution of $X^{1,\infty,\sigma}_t$; in fact this law corresponds to $\mathbb{P}_{X^\sigma_t}$. For a rigorous proof of this statement, see [Szn91, Mél96]. It is also possible to adapt a coupling result developed for instance in [BRTV98] in order to obtain the following convergence:
$$\lim_{N\to\infty} \mathbb{E}\Big[\sup_{t\in[0,T]}\big\|X^{1,N,\sigma}_t - X^\sigma_t\big\|^2\Big] = 0. \tag{7}$$

Large deviation principle. For $N$ large, the diffusion $X^\sigma$ defined in (2) is close to the diffusion $X^{1,N,\sigma}$ defined in (6). It is then of particular interest to know whether these two diffusions have the same small-noise asymptotic behaviour. The large deviations associated with (6) are quite classical since the system of particles is a Kolmogorov diffusion of the form
$$dZ_t = \sigma\,dB_t - \nabla \Upsilon_N(Z_t)\,dt$$
with the following potential:
$$\Upsilon_N(z_1,\dots,z_N) := \sum_{i=1}^N V(z_i) + \frac{1}{2}\sum_{i=1}^N \big(F * \mu_N\big)(z_i).$$
Here $\mu_N := \frac{1}{N}\sum_{j=1}^N \delta_{z_j}$. This approach is directly linked to the particular form of the interaction function $F$, which only depends on the norm, see hypotheses (F1) and (F2). The good rate function, associated with the uniform topology, is the functional defined by
$$I_N(f_1,\dots,f_N) := \frac{1}{2}\sum_{i=1}^N \int_0^T \Big\|\dot f_i(t) + \nabla V\big(f_i(t)\big) + \frac{1}{N}\sum_{j=1}^N \nabla F\big(f_i(t) - f_j(t)\big)\Big\|^2\,dt. \tag{8}$$
If one function of the family $(f_i)_{1\le i\le N}$ does not belong to $H_x$, then we set $I_N(f_1,\dots,f_N) := +\infty$. Let us just note that this LDP leads to the description of the exit problem for the McKean-Vlasov system [Tug12]. Since a LDP holds for the whole particle system, a LDP in particular holds for the first particle $X^{1,N,\sigma}$ with the good rate function $J_N$ obtained by projection:
$$J_N(f) := \inf\big\{ I_N(f, f_2, \dots, f_N) \,:\, f_2, \dots, f_N \in H_x \big\}. \tag{9}$$
Since $X^{1,N,\sigma}$ is close to the self-stabilizing process $X^\sigma$, solution of (2), we aim to state that the functional $J_N$ converges towards $J_\infty$, the entropy function of the mean-field diffusion, as $N$ becomes large. In other words, is it possible to interchange the limiting operations concerning the asymptotic small noise $\sigma$ and the asymptotic large number of particles $N$, i.e. the hydrodynamic limit?
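The mean-field system (6) itself is straightforward to simulate with an Euler-Maruyama scheme. In the following sketch (same illustrative toy potentials as in the earlier snippets) the quadratic interaction makes the coupling term collapse to a deviation from the empirical mean:

```python
import numpy as np

# Euler-Maruyama sketch of the mean-field system (6) in dimension one with
# toy potentials V'(y) = y^3 - y and F'(y) = alpha*y (illustrative choices).
# For quadratic F the interaction term (1/N) sum_j F'(X_i - X_j) reduces to
# alpha*(X_i - mean(X)).
rng = np.random.default_rng(1)
N, sigma, alpha = 500, 0.3, 1.0
dt, nt, x0 = 1e-3, 5000, 1.0

X = np.full(N, x0)
for _ in range(nt):
    interaction = alpha * (X - X.mean())
    drift = -(X ** 3 - X) - interaction       # -V'(X_i) - interaction
    X = X + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)

# For small sigma the empirical cloud concentrates around the deterministic
# flow Psi started at x0; propagation of chaos makes the particles nearly iid.
print("mean:", X.mean(), "std:", X.std())
```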
2 Convergence of the rate functions

In this section, we emphasize the main result of this study: we prove that the large deviation rate function $J_N$, associated with the first particle in the huge McKean-Vlasov system of interacting particles, converges to the rate function $J_\infty$ of the self-stabilizing (nonlinear) diffusion as $N$ becomes large.
Step 1. By definition, J_N(f) ≤ I_N(f, f_2, …, f_N) for any f_2, …, f_N ∈ H_x, where I_N is defined by (8). Hence, we can choose f_k := Ψ^x_∞ for all 2 ≤ k ≤ N. Let us remind the reader that Ψ^x_∞ is the solution of (5). Thus we obtain:

J_N(f) ≤ I_N(f, Ψ^x_∞, …, Ψ^x_∞).

By definition of Ψ^x_∞, we have Ψ̇^x_∞ + ∇V(Ψ^x_∞) = 0. Since moreover ∇F(0) = 0 (F depending only on the norm, by (F1)), the previous inequality yields:

J_N(f) ≤ (1/4) ∫_0^T || ḟ(t) + ∇V(f(t)) + ((N−1)/N) ∇F(f(t) − Ψ^x_∞(t)) ||² dt + ((N−1)/(4N²)) ∫_0^T || ∇F(Ψ^x_∞(t) − f(t)) ||² dt.

Taking the limit as N goes to infinity in the previous inequality leads to the announced upper bound (10). Let us just note that, due to the local Lipschitz property of the interaction function ∇F, this convergence is uniform with respect to f on any compact set for the uniform topology.
Step 2. Let us now focus our attention on the lower bound:

lim inf_{N→∞} J_N(f) ≥ J_∞(f).   (11)

Step 2.1. Let us recall that J_N is defined as a minimum, see (9). Since J_N(f) is the minimum, for any ε > 0 there exist f^ε_2, …, f^ε_N belonging to H_x such that

I_N(f, f^ε_2, …, f^ε_N) ≤ J_N(f) + ε.   (12)

Let us consider the set S^x_f ⊂ H^{N−1}_x of functions (g_2, …, g_N) satisfying I_N(f, g_2, …, g_N) ≤ 2 J_∞(f).
By (10), for N large enough and ε small enough, we obtain that (f^ε_2, …, f^ε_N) ∈ S^x_f. Moreover, let us prove that the subset S^x_f is included in a closed ball of H^{N−1}_x. Indeed, the following inequality holds, with the convention g_1 := f:

(1/4) Σ_{i=1}^N ∫_0^T ||ġ_i(t)||² dt ≤ I_N(f, g_2, …, g_N) − (1/2) Σ_{i=1}^N ∫_0^T ⟨ ġ_i(t) | ∇V(g_i(t)) + (1/N) Σ_{j=1}^N ∇F(g_i(t) − g_j(t)) ⟩ dt.   (13)

Here ⟨·|·⟩ stands for the Euclidean scalar product in R^d. We first observe that Hypothesis (F1) leads to

Σ_{i=1}^N ∫_0^T ⟨ ġ_i(t) | (1/N) Σ_{j=1}^N ∇F(g_i(t) − g_j(t)) ⟩ dt = (1/(2N)) Σ_{i,j=1}^N { F(g_i(T) − g_j(T)) − F(0) }.

Due to the hypotheses on the interaction function F, the right-hand side of the previous identity is bounded from below by a finite constant. With similar arguments, we get an analogous lower bound for the terms Σ_i ∫ ⟨ġ_i(t) | ∇V(g_i(t))⟩ dt = Σ_i { V(g_i(T)) − V(x) }. These two bounds, combined with (13) and (12), prove the existence of a constant C(N, x, f), depending only on f, x and N, such that

Σ_{j=2}^N ∫_0^T ||ġ_j(t)||² dt ≤ C(N, x, f)   (14)

for every (g_2, …, g_N) ∈ S^x_f. We immediately deduce that the subset S^x_f is included in a closed ball of H^{N−1}_x. Hence we can extract from the minimizing family a subsequence (f^{ε_n}_2, …, f^{ε_n}_N)_{n≥0} which converges in the weak topology towards a limiting function (f^*_2, …, f^*_N) ∈ H^{N−1}_x. Finally, due to the lower semicontinuity of the functional I_N with respect to this convergence, we deduce that

I_N(f, f^*_2, …, f^*_N) ≤ lim inf_{n→∞} I_N(f, f^{ε_n}_2, …, f^{ε_n}_N) = J_N(f).

The minimum J_N is then reached for (f^*_2, …, f^*_N) ∈ H^{N−1}_x.
Step 2.2. In order to compute J_N, let us point out particular properties of the functions (f^*_2, …, f^*_N). Since the minimum is reached, we are going to compute the derivative of I_N with respect to each coordinate. Since we restrict ourselves to the functional space H_x, we take an absolutely continuous function g ∈ H_0 and we consider the following limit:

D_2 I_N(f, f_2, …, f_N)(g) := lim_{δ→0} (1/δ) { I_N(f, f_2 + δg, f_3, …, f_N) − I_N(f, f_2, …, f_N) }.

This defines the derivative of I_N with respect to the second argument. In a similar way, we can define D_i I_N for any 2 ≤ i ≤ N. Let us compute D_2 I_N explicitly. For any 1 ≤ i ≤ N, we set

ξ_i(t) := ḟ_i(t) + ∇V(f_i(t)) + (1/N) Σ_{j=1}^N ∇F(f_i(t) − f_j(t)),   (15)

with f_1 := f by convention. So we note that

I_N(f, f_2, …, f_N) = (1/4) Σ_{i=1}^N ∫_0^T ||ξ_i(t)||² dt.   (16)

We now observe the derivative of the quantity ξ_i with respect to the second function. In other words, introducing f^δ_2 := f_2 + δg, and defining ξ^{2,δ}_i like ξ_i just by replacing f_2 by f^δ_2 in (15), we get, for i ≠ 2,

(d/dδ) ξ^{2,δ}_i(t) |_{δ=0} = −(1/N) H(F)(f_i(t) − f_2(t)) g(t),

and, for i = 2:

(d/dδ) ξ^{2,δ}_2(t) |_{δ=0} = ġ(t) + H(V)(f_2(t)) g(t) + (1/N) Σ_{j=1}^N H(F)(f_2(t) − f_j(t)) g(t).

Here H(F)(x) represents the Hessian matrix of the function F at the point x ∈ R^d. From now on, we simplify the notation D_2 I_N(f, f_2, …, f_N)(g) and replace it by D_2 I_N. By (16) and the polarization identity, we obtain

D_2 I_N = (1/2) ∫_0^T ⟨ξ_2(t) | ġ(t)⟩ dt + (1/2) ∫_0^T ⟨E^f_2(t) | g(t)⟩ dt,   (17)

with E^f_2 collecting the Hessian terms, see (19) below. Let us assume that ξ_2 is regular (let us say continuously differentiable); then, by an integration by parts and since g(0) = 0, we obtain

D_2 I_N = (1/2) ⟨ξ_2(T) | g(T)⟩ + (1/2) ∫_0^T ⟨E^f_2(t) − ξ̇_2(t) | g(t)⟩ dt,   (18)

where the function E^f_2 is defined by

E^f_2(t) := ( H(V)(f_2(t)) + (1/N) Σ_{j=1}^N H(F)(f_2(t) − f_j(t)) ) ξ_2(t) − (1/N) Σ_{i≠2} H(F)(f_i(t) − f_2(t)) ξ_i(t).   (19)

We proceed in the same way for D_j I_N(f_1, f_2, …, f_N)(g) for any 2 ≤ j ≤ N, just replacing 2 by j in (18) and (19). Since the minimum is reached at (f^*_2, …, f^*_N) ∈ H^{N−1}_x (see Step 2.1), the following expression vanishes: D_j I_N(f, f^*_2, …, f^*_N)(g) = 0 for 2 ≤ j ≤ N and any function g, and therefore (ξ^*_2, …, ξ^*_N) is a solution to the system

ξ̇^*_j(t) = E^{f^*}_j(t)  with  ξ^*_j(T) = 0,  2 ≤ j ≤ N,   (20)

where the functions E^{f^*}_j are defined like E^f_j in (19) (respectively ξ^*_j like ξ_j in (15)); we just need to replace (f, f_2, …, f_N) by (f, f^*_1, …, f^*_N). In fact, we do not know whether the ξ^*_j, 2 ≤ j ≤ N, are regular functions, as was assumed. We therefore deduce only that (ξ^*_2, …, ξ^*_N) is a generalized solution of the system (20), and we will now show that such a generalized solution necessarily vanishes.
Step 2.3. Let us prove this uniqueness property. Let us consider a function h = (h_2, …, h_N) belonging to ⊗^{N−1} C([0, T], R^d); then, using the Cauchy-Lipschitz theorem (see for instance Theorem 3.1 in [Hal69]), there exists a C¹-solution (g_2, …, g_N) of the linear system of equations (21), obtained by requiring that the derivative of (ξ_2, …, ξ_N) around (f^*_2, …, f^*_N) in the direction (g_2, …, g_N) equals (h_2, …, h_N), with the initial condition g_j(0) = 0 for any 2 ≤ j ≤ N. Here f^*_1 stands for f for notational convenience. In particular, g ∈ H^{N−1}_0. Using the results developed in Step 2.2, the function I_N reaches its minimal value at the arguments (f^*_2, …, f^*_N), and consequently:

Σ_{j=2}^N D_j I_N(f, f^*_2, …, f^*_N)(g_j) = 0,

where the g_j are solutions of (21). Combining the expression (17) and (21) leads to

Σ_{j=2}^N ∫_0^T ⟨ξ^*_j(t) | h_j(t)⟩ dt = 0   (22)

for any continuous functions (h_j, 2 ≤ j ≤ N). Since f^* is in the function space H^{N−1}_x and since ξ^*_j is related to f^* by (15), we know that ξ^*_j is a square integrable function. Using Carleson's theorem (see for instance Theorem 1.9 in [Duo01]) and (22), we deduce that ξ^*_j(t) = 0 for a.e. t ∈ [0, T] and for any 2 ≤ j ≤ N.
Step 2.4. Using the definition (15) of ξ^*_j, the previous step permits us to obtain that, for any 2 ≤ j ≤ N,

ḟ^*_j(t) + ∇V(f^*_j(t)) + (1/N) Σ_{k=1}^N ∇F(f^*_j(t) − f^*_k(t)) = 0,   (23)

with the boundary condition f^*_j(0) = x. Applying once again the arguments presented in Step 2.3 leads to the uniqueness of the solutions of (23). Since the system is symmetric, we get the existence of a C¹-function Ψ^f_N satisfying

f^*_2(t) = ⋯ = f^*_N(t) = Ψ^f_N(t).
We just recall that f^*_1 = f for notational convenience. Using the definition of J_N(f), we get

J_N(f) = (1/4) ∫_0^T || ḟ(t) + ∇V(f(t)) + ((N−1)/N) ∇F(f(t) − Ψ^f_N(t)) ||² dt,

where Ψ^f_N is the unique solution of the first order differential equation

Ψ̇^f_N(t) = −∇V(Ψ^f_N(t)) − (1/N) ∇F(Ψ^f_N(t) − f(t)),  Ψ^f_N(0) = x.   (24)

Since the first order differential equation (24) can be associated with a Lipschitz constant which does not depend on the parameter 1/N, the unique solution Ψ^f_N(t) depends continuously on both the parameter 1/N and the time variable (see, for instance, Theorem 3.2 p. 20 in [Hal69]); here we consider that (t, 1/N) belongs to the compact set [0, T] × [0, 1]. In particular, Ψ^f_N converges uniformly on [0, T] towards Ψ^x_∞ as N → ∞, and consequently J_N(f) converges towards J_∞(f), where J_∞ is defined by (4) and (5). The proof of the lower bound (11) is then achieved. It is quite easy to prove that the convergence is uniform with respect to the function f on any compact set for the uniform topology.
We could even provide a precise rate of convergence, since the previous proof gives more than a simple lower bound: indeed, we have obtained the exact expression of J_N(f).
3 Immediate consequences and further results
Theorem 2.1 emphasizes the link between the large deviation rate function of the self-stabilizing diffusion (2) and the rate function associated to the mean-field system (6). This result is of particular interest since one of the diffusions is nonlinear whereas the second one is linear and therefore well understood. In this section, we present a coupling result concerning these two diffusions and extend Theorem 2.1 to a more general nonlinear model.
Let us first recall the large deviation principle already presented in the introduction. A family of continuous stochastic processes (X^σ)_{σ>0} is said to satisfy a large deviation principle for the uniform topology with good rate function I if the level sets {I ≤ α} are compact and if, for any closed set F and any open set G of the path space,

lim sup_{σ→0} σ² log P(X^σ ∈ F) ≤ − inf_F I  and  lim inf_{σ→0} σ² log P(X^σ ∈ G) ≥ − inf_G I.

Consider now, for α > 0, the level set κ_{2α} := {ϕ ∈ H_x : J_∞(ϕ) ≤ 2α}, which is a compact set since J_∞ is a good rate function (see [HIP08]). In order to conclude the proof, it suffices to apply the convergence of J_N towards J_∞ developed in Theorem 2.1, which is in fact uniform with respect to ϕ on any compact subset, in particular on the subset κ_{2α}.
This first corollary concerns the rate functions. Let us now focus our attention on the associated processes. A nice coupling property can be obtained, describing the link between the self-stabilizing diffusion (X^σ_t, t ≥ 0) defined by (2) and the linear diffusion (X^{1,N,σ}_t, t ≥ 0) defined by (6). Since there exists a unique strong solution to each of these two equations, we can construct X^σ and X^{1,N,σ} on the same probability space (Ω, B, P_x).
Corollary 3.2. Under Hypotheses (V1)-(V2) and (F1)-(F3), for any x ∈ R^d, each element of the family (X^{1,N,σ})_N converges in probability towards the diffusion X^σ as σ → 0, uniformly with respect to the parameter N. In particular, let δ > 0; then, for N sufficiently large (resp. σ small), there exists a constant K_δ(T) > 0 such that

P( sup_{0≤t≤T} ||X^{1,N,σ}_t − X^σ_t|| ≥ δ ) ≤ e^{−K_δ(T)/σ²}.   (25)

Let us just note that this result implies the convergence in distribution of the first particle (first coordinate) of the linear mean-field system (6) towards the self-stabilizing process. Combining Corollary 3.1 and Corollary 3.2 leads to the following statement: for any closed set F,

lim sup_{N→∞} lim sup_{σ→0} σ² log P(X^{1,N,σ} ∈ F) ≤ − inf_{ϕ∈F} J_∞(ϕ),   (26)

and a similar statement holds for open sets G, replacing both the limit superior by the limit inferior and the upper bound by a lower one. A second remark: large deviation principles can sometimes be proven directly by the use of coupling bounds. Nevertheless, the coupling bound developed in Corollary 3.2 is not strong enough for such implications. Indeed, the family (X^{1,N,σ})_N is called an exponentially good approximation of X^σ if, for any δ > 0,

lim_{N→∞} lim sup_{σ→0} σ² log P( sup_{0≤t≤T} ||X^{1,N,σ}_t − X^σ_t|| > δ ) = −∞

(such a large deviation notion has been developed for instance in [DZ98], Definition 4.2.14). For such approximations, the rate function of the limiting process (as N tends to ∞) can be obtained as follows:

J_∞(ϕ) = sup_{δ>0} lim inf_{N→∞} inf_{ψ ∈ B(ϕ,δ)} J_N(ψ),   (27)

where B(ϕ, δ) := {z : sup_{0≤t≤T} ||z(t) − ϕ(t)|| < δ}. In practice, (27) is quite difficult to obtain. Such techniques were used in order to prove the Freidlin-Wentzell large deviation result for classical diffusions: the process is approximated by another stochastic process with piecewise constant diffusion and drift terms (see Theorem 5.6.7 in [DZ98]). For the large deviation principle associated with the self-stabilizing diffusion developed in [HIP08], an argument of exponentially good approximation is used, but it does not concern the approximation of the nonlinear process by the first particle of the mean-field linear system and therefore it does not use (27).
Proof. By definition, the family of processes (X^σ)_{σ>0} satisfies a large deviation principle associated with the good rate function J_∞. So, for any closed subset F of H_x, we have on the one hand

lim sup_{σ→0} σ² log P(X^σ ∈ F) ≤ − inf_{ϕ∈F} J_∞(ϕ).

On the other hand, by Corollary 3.1 and for N large enough, we obtain

lim sup_{σ→0} σ² log P(X^{1,N,σ} ∈ F) ≤ − C inf_{ϕ∈F} J_∞(ϕ).

Introducing the particular subset

F := { ϕ ∈ C([0, T], R^d) : sup_{0≤t≤T} ||ϕ(t) − Ψ^x_∞(t)|| ≥ δ/2 },

where Ψ^x_∞ is defined in (5), that is Ψ^x_∞(t) := x − ∫_0^t ∇V(Ψ^x_∞(s)) ds, we observe that, for N large and σ small,

P( sup_{0≤t≤T} ||X^{1,N,σ}_t − X^σ_t|| ≥ δ ) ≤ P(X^{1,N,σ} ∈ F) + P(X^σ ∈ F) ≤ e^{−K_δ(T)/σ²},

where K_δ(T) := C inf_{ϕ∈F} J_∞(ϕ) > 0 with 0 < C < 3/4. In Theorem 2.1, we only deal with the gradient case of the so-called McKean-Vlasov diffusion starting from the initial position x. Let us now discuss a more general setting by considering the following nonlinear diffusion:

dY^σ_t = σ dB_t − ∇V(Y^σ_t) dt − ∫_{R^d} A(Y^σ_t, y) ν^σ_t(dy) dt − l̇(t) dt,  Y^σ_0 = x.

Here A is a general R^d-valued function of two variables, a vector flow which is not necessarily a gradient, and l is a C¹-continuous function from R_+ to R^d. Finally, the probability measure ν^σ_s stands for the distribution P_{Y^σ_s}. The aim of this discussion does not concern the existence and uniqueness of solutions of this equation, so we assume that V, A and l satisfy suitable conditions for the unique solution to exist. Then it is possible to adapt the arguments developed in [HIP08] in order to prove that (Y^σ)_{σ>0} satisfies a large deviation principle with the associated rate function

J_∞(f) := (1/4) ∫_0^T || ḟ(t) + ∇V(f(t)) + A(f(t), Ψ^x(t)) + l̇(t) ||² dt   (28)

for any function f ∈ H_x, and J_∞(f) := +∞ otherwise. Here, the function Ψ^x is defined as the unique solution of the ordinary differential equation

Ψ̇^x(t) = −∇V(Ψ^x(t)) − A(Ψ^x(t), Ψ^x(t)) − l̇(t),  Ψ^x(0) = x.

The stochastic model (Y^σ_t) can also be approximated by a system of interacting particles. In this context, we can develop a statement similar to Theorem 2.1. The functional J_∞ is effectively the limit as N goes to infinity of the functional

J_N(f) := inf_{f_2,…,f_N ∈ H_x} (1/4) Σ_{i=1}^N ∫_0^T || ḟ_i(t) + ∇V(f_i(t)) + (1/N) Σ_{j=1}^N A(f_i(t), f_j(t)) + l̇(t) ||² dt,   (29)

with the convention f_1 = f. Such a result can be proven under suitable assumptions:

• the confining potential V satisfies Hypotheses (V1)-(V2).
• there exists a lower bounded C^∞-function 𝒜 : R^d × R^d → R such that A(x, y) = ∇_x 𝒜(x, y) and inf_{(x,y)∈R^d×R^d} 𝒜(x, y) > −∞.
• A satisfies a symmetry property: A(x, y) = −A(y, x).

The details of the proof are left to the reader; it suffices to apply the same arguments. Let us just note that the assumptions just formulated concerning 𝒜 are sufficient in order to obtain the upper bound (14), a crucial step in proving the claimed statement. We end this study by pointing out an example of such a diffusion:

dX_t = σ dB_t − ∇W(X_t) dt + W̄_t dt − l̇(t) dt,  where W̄_t := E{∇W(X_t)}

and W is such that the required conditions are satisfied. This equation actually corresponds to the hydrodynamic limit of an equation characterizing the charge and the discharge of the cathode in a lithium battery (see [DGGHJ11, DGH11]). In such a framework, A(x, y) := ∇W(x) − ∇W(y) and 𝒜(x, y) := W(x) − ⟨x | ∇W(y)⟩. Therefore, the rate function can be explicitly computed:

J_∞(f) = (1/4) ∫_0^T || ḟ(t) + ∇W(f(t)) − ∇W(x + l(t)) + l̇(t) ||² dt

and is obtained, as announced, as the limit for large N of the rate function

J_N(f) = inf_{f_2,…,f_N ∈ H_x} (1/4) Σ_{i=1}^N ∫_0^T || ḟ_i(t) + ∇W(f_i(t)) − (1/N) Σ_{j=1}^N ∇W(f_j(t)) + l̇(t) ||² dt,  f_1 := f.
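For the battery example, one can verify directly that the scalar potential generates the vector interaction, writing ⟨·|·⟩ for the Euclidean scalar product as above; the antisymmetry of A is then immediate from the resulting expression:

\[
\nabla_x \mathcal{A}(x,y) = \nabla_x\bigl( W(x) - \langle x \mid \nabla W(y)\rangle \bigr) = \nabla W(x) - \nabla W(y) = A(x,y) = -A(y,x).
\]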
|
Study on the flow and heat transfer in a thermal shielding radiator
A thermal shielding radiator, which has an extra cuboid cavity between the channels and the seal plate, is proposed to achieve a large temperature difference between the hot and cold surfaces and high temperature uniformity on the cold surface. To obtain a uniform distribution of the flow inside each channel, thermal shielding radiators with different inlet/outlet types (namely C-, Z-, Y- and I-type) are numerically simulated. It is found that the I-type has the best performance in velocity uniformity. Owing to the cavity, the temperature difference between the two sides of the thermal shielding radiator improves significantly. In addition, a mini-channel with equidifferent fin thickness is also proposed, which achieves more uniform flow and temperature distribution on the cold surface. Moreover, the effects of geometry and operating parameters on the flow and heat transfer performance of both the thermal shielding radiator and the conventional heat radiator are studied and compared.
Introduction
Mini-channel and micro-channel heat radiators are widely used to transfer heat from rapidly advancing electronics. Micro-channels achieve very high heat fluxes but at the cost of a high pressure drop [1], while mini-channels offer intermediate levels of pressure drop and heat flux. Conventionally, micro-channels range from 10 to 200 μm in characteristic dimension, while mini-channels range from 200 μm to 3 mm [2]. Alternatively, Kew [3] and Ong [4] used dimensionless parameters to define the distinction: a confinement number of 0.5 and an Eötvös number around 0.2 are used to identify micro- or mini-channel flow.
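The confinement-number criterion mentioned above is straightforward to evaluate; in the sketch below the fluid properties (water/vapour at about 25 °C) and the channel diameter are illustrative assumptions:

import math

def confinement_number(d_h, sigma=0.0728, rho_l=998.0, rho_v=0.6, g=9.81):
    """Co = sqrt(sigma / (g (rho_l - rho_v))) / D_h; Co > 0.5 marks confined behaviour."""
    return math.sqrt(sigma / (g * (rho_l - rho_v))) / d_h

print(confinement_number(1e-3))   # ~2.7 for an assumed 1 mm channel, above the 0.5 threshold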
The micro-channel concept was first proposed by Tuckerman and Pease, who reported that reducing the hydraulic diameter of the channel from macro-scale to micro-scale increases the attainable heat flux [5]. Gunnasegaran [6] and Xia [7] studied the effect of geometric structures on fluid flow and heat transfer performance. Three different channel shapes (rectangular, trapezoidal and triangular) and several header shapes (triangular, trapezoidal and rectangular) were numerically studied, and rectangular micro-channel heat radiators showed the highest heat transfer; various inlet/outlet arrangements were also examined to investigate the flow and temperature distribution [8]. Zhou [9] conducted an experiment on the transient heat transfer of a mini-channel and found that the outlet temperature became higher as the cooling water velocity increased. Al-Neama [10] presented a study on the influence of chevron fin structures in the mini-channel and found that the thermal resistance decreased further as the chevron fin oblique angle decreased. Ho [11] designed a divergent rectangular mini-channel and found it had a negligible impact on the heat transfer at low flow rates.
From the literature mentioned above, micro- and mini-channels have made a great contribution to heat transfer. However, they are rarely used as a cold plate to achieve a thermal shielding function. In this paper, a thermal shielding radiator with a cavity in the middle of the sink is proposed to enlarge the temperature difference between the cold and hot sides. The heat insulation radiator (HIR), which has 48 channels with a cavity between the channels and the seal plate, is 100 mm in length, 101 mm in width and 3 mm in thickness (figure 1). Cross sections of the traditional heat radiator and the heat insulation radiator are displayed in figure 2. The equidifferent-fin heat insulation radiator (EHIR) has the same overall geometry as the HIR; however, the fin width grows by a factor of 1.05 from fin to fin, with a first fin width of 0.55 mm. In the present study, the heat insulation radiator is essentially a mini-channel heat radiator with enough height to arrange the inlet/outlet on the side wall. Four kinds of mini-channels (I-, Z-, Y- and C-type) are proposed to study the influence of the inlet/outlet position. The details are presented in figure 3. Geometrical dimensions are summarized in table 1.
Governing equations
In the present numerical study, the following assumptions are made: 1) fluid flow and heat transfer are steady-state and three-dimensional;
2) the flow is laminar; 3) the properties of the fluid and the radiator are temperature-independent; 4) except for the top plate, which is connected with the heat source, all other surfaces exposed to the surroundings are heat insulated. Based on the above assumptions, the continuity, momentum and energy equations can be written as follows:

∇·u = 0,
ρ(u·∇)u = −∇p + μ∇²u,
ρc_p(u·∇)T = k_f∇²T (fluid),  ∇·(k_s∇T) = 0 (solid).

The entrance of the radiator is placed at z = 0, and the channels are numbered from 1 to 48. The radiator is made of aluminum, with a rubber layer between the top plate and the bottom plate for thermal resistance. The bottom/top plate is connected with the heat source, which is taken as 400 K. In the mini-channel section, the inlet velocity is 2 m/s and the inlet water temperature is 300 K. The outlet is set as a pressure outlet.
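The laminar-flow assumption can be checked with a per-channel Reynolds number estimate. In the sketch below the channel cross-section and the per-channel velocity are illustrative assumptions (the 2 m/s inlet stream divides among 48 channels, so the channel velocity is far lower than the inlet value):

rho, mu = 996.0, 8.9e-4        # water properties near 300 K
a, b = 0.55e-3, 2.0e-3         # assumed channel cross-section, m
d_h = 2 * a * b / (a + b)      # hydraulic diameter of a rectangular channel
v_ch = 0.2                     # assumed per-channel mean velocity, m/s
Re = rho * v_ch * d_h / mu
print(Re)                      # ~190, well below ~2300, consistent with laminar flow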
The working fluid is deionized water, maintained at 300 K by a thermostatic water bath; its properties and those of the solid material are taken as temperature-independent constants.
Flow uniformity
The thermal performance of the heat insulation radiator is characterized by the average top plate temperature, while flow uniformity affects the temperature uniformity directly. Accordingly, the average flow flux of each channel is presented in figure 4 to show the flow uniformity. The cross-section flow flux at the inlets of channels 1 to 48 reveals that the inlet/outlet locations of the C-, Z-, Y- and I-types affect the fluid velocity. For the I-type, the channels near the inlet/outlet location have a higher average velocity, as expected. Fundamentally, the flow velocity is redistributed by the opposing fluid streams, which eventually mix. Figure 5 shows the pressure drop of the four types of mini-channels. It is found that the pressure drops of the I- and Y-types are much smaller than those of the C- and Z-types. When the fluid enters the radiator from one side, a recirculation zone is clearly found near the center of the channel, because the local pressure is lower than the surrounding pressure. Owing to its flow uniformity, the I-type is chosen for further study.
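A convenient scalar measure of the per-channel uniformity shown in figure 4 is the coefficient of variation of the channel flow rates; the sketch below uses hypothetical sample values in place of the simulated data:

import statistics

def flow_nonuniformity(flow_rates):
    """Coefficient of variation (std / mean) of the per-channel flow rates."""
    return statistics.pstdev(flow_rates) / statistics.fmean(flow_rates)

i_type = [1.02e-4, 0.99e-4, 1.01e-4, 0.98e-4]   # hypothetical flow rates, kg/s
z_type = [1.40e-4, 0.75e-4, 0.95e-4, 0.90e-4]
print(flow_nonuniformity(i_type))   # ~0.016, nearly uniform
print(flow_nonuniformity(z_type))   # ~0.24, strongly non-uniform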
Temperature distribution
The top plate temperature contours of the HIR, the EHIR and the traditional mini-channel heat radiator are shown in figure 6. The maximum, minimum and average temperatures, and the temperature difference between the top plate and the bottom plate, are presented in table 2.
Figure 6. Temperature contours at the top plate: (a) mini-channel, (b) HIR, (c) EHIR.

From figure 6, it is found that the high-temperature region occurs at the outlet for all three thermal shielding radiators, owing to the temperature rise of the fluid. The low-temperature region occurs at the inlet, owing to the low inlet fluid temperature and the high local heat transfer coefficient. In figure 6(b), a high-temperature spot located at the middle of the outlet region is found; this is due to the low mass flow rate in the middle channels. Figure 6(c) shows that the equidifferent fins redistribute the flow and eliminate the local high temperature. Table 2 shows that, compared with the mini-channel radiator, the HIR and EHIR reach a lower average temperature because the cavity acts as a large thermal resistance between the heat source and the surroundings: it reduces the heat transfer to the surroundings and enlarges the temperature difference. In addition, the maximum temperature of the HIR is higher than that of the others because the velocity decreases where the cross-sectional area of the water passage increases. Compared with the case where the heat source is located on the channel side (bottom), the maximum temperature for the HIR or EHIR increases by about 11.60 K or 5.78 K, respectively, when the heat source is located on the seal plate side (top). For both thermal shielding radiators, the pressure drops are nearly the same. It is suggested that, for a higher thermal shielding effect, the heat source should be located on the seal plate side, which achieves a larger temperature difference between the hot and the cold side; higher heat dissipation can be obtained by moving the heat source to the channel side.
Conclusions
To achieve a large temperature difference between the hot and cold surfaces and high temperature uniformity on the cold surface, a heat insulation radiator is proposed. To obtain a uniform distribution of the flow inside each channel, the mini-channel thermal shielding radiator with different inlet/outlet types (namely C-, Z-, Y- and I-type) is numerically simulated. The conclusions are summarized as follows: (1) The temperature uniformity of the thermal shielding radiator cold face is affected by the inlet/outlet type, owing to the change of the flow distribution in the channels. Among the C-, Z-, Y- and I-types of the heat radiator, the I-type shows the most uniform mass flow rate, while the Y- and I-types reduce the pressure drop.
(2) For a higher thermal shielding effect, the heat source should be located on the seal plate side, which achieves a larger temperature difference between the two sides for both the HIR and the EHIR. For heat dissipation purposes, however, the heat source should be located on the channel side. (3) Owing to the equidifferent fins, the flow and temperature distributions are more uniform in the EHIR.
|
Inducing mineral precipitation in groundwater by addition of phosphate
Background Induced precipitation of phosphate minerals to scavenge trace elements from groundwater is a potential remediation approach for contaminated aquifers. The success of engineered precipitation schemes depends on the particular phases generated, their rates of formation, and their long term stability. The purpose of this study was to examine the precipitation of calcium phosphate minerals under conditions representative of a natural groundwater. Because microorganisms are present in groundwater, and because some proposed schemes for phosphate mineral precipitation rely on stimulation of native microbial populations, we also tested the effect of bacterial cells (initial densities of 10^5 and 10^7 mL^-1) added to the precipitation medium. In addition, we tested the effect of a trace mixture of propionic, isovaleric, formic and butyric acids (total concentration 0.035 mM). Results The general progression of mineral precipitation was similar under all of the study conditions, with initial formation of amorphous calcium phosphate, and transformation to poorly crystalline hydroxylapatite (HAP) within one week. The presence of the bacterial cells appeared to delay precipitation, although by the end of the experiments the overall extent of precipitation was similar for all treatments. The stoichiometry of the final precipitates as well as Rietveld structure refinement using x-ray diffraction data indicated that the presence of organic acids and bacterial cells resulted in an increasing a and decreasing c lattice parameter, with the higher concentration of cells resulting in the greatest distortion. Uptake of Sr into the solids was decreased in the treatments with cells and organic acids, compared to the control. Conclusions Our results suggest that the minerals formed initially during an engineered precipitation application for trace element sequestration may not be the ones that control long-term immobilization of the contaminants. In addition, the presence of bacterial cells appears to be associated with delayed HAP precipitation, changes in the lattice parameters, and reduced incorporation of trace elements as compared to cell-free systems. Schemes to remediate groundwater contaminated with trace metals that are based on enhanced phosphate mineral precipitation may need to account for these phenomena, particularly if the remediation approach relies on enhancement of in situ microbial populations.
Introduction
The promotion of phosphate mineral precipitation in order to sequester inorganic contaminants has gained interest in recent years (e.g., [1][2][3][4][5]). Immobilization of contaminants in situ is an attractive option at many sites because the vast quantities of affected media, often with low pollutant concentrations, make excavation or extraction of the contamination infeasible. Phosphate minerals are advantageous for use in sequestration because they are poorly soluble in many environments and they can incorporate a wide variety of elements. In some cases the sequestration may occur by precipitation of the contaminant in primary minerals (e.g. U in autunite [Ca(UO2)2(PO4)2·10-12(H2O)], or Pb in pyromorphite [Pb5(PO4)3Cl]). However, because metal or radionuclide contaminants are often at trace levels, coprecipitation with calcium phosphate minerals is common, as calcium is frequently the dominant cation available to react with phosphate in the subsurface. Hydroxylapatite [Ca5(PO4)3(OH)] (HAP) in particular garners attention because of its well-known ability to immobilize trace metals by coprecipitation or solid solution formation [6] as well as by adsorption [7,8]. Sequestration in HAP is especially appealing for radionuclide contaminants for several reasons: actinides (tri- and tetravalent), Cs, and Sr can be incorporated into the HAP structure [6,[9][10][11]; HAP exhibits rapid defect annealing when subject to radiation damage [10]; and HAP can be extremely long-lived, as evidenced by the presence of ~2 Ga apatites found near the Oklo natural reactor [12].
Phosphate is frequently a limiting nutrient for biological activity in natural terrestrial subsurface systems [13]. Consequently enhanced phosphate mineral precipitation for contaminant immobilization will require the engineered addition and dissemination of phosphate. This is a significant challenge, as the direct addition of high concentrations of soluble inorganic phosphate can result in rapid precipitation and clogging of injection wells [14]. In light of this, several investigators have proposed alternative methods of introducing and distributing phosphate in the subsurface. The use of long-chain polyphosphate to promote uranium immobilization is one option that has been considered [14], and another is the application of the compound glycerol-3-phosphate for degradation by native subsurface microorganisms [15,16]; both cases are examples of "slow release" or in situ generation of orthophosphate by degradation of precursor compounds. By moving the location of the introduction of the phosphate beyond the injection well, a wider zone of treatment may be realized and injection well clogging mitigated. An analogous approach has been proposed for immobilization of trace contaminants within carbonate minerals, where bicarbonate ion is generated in situ by hydrolysis of urea [17].
Regardless of the mechanism by which reactive phosphate is introduced into the subsurface, the effectiveness of such precipitation-based schemes for remediation of trace contaminants will depend on the specific mineral precipitates generated and their rates of formation. In studies of calcium phosphate mineral precipitation in constant composition experiments conducted over a pH range of 6-9, Zawacki et al. reported that the products were non-stoichiometric apatites, with Ca/P ratios ranging from 1.49 to 1.65 [18]. Precipitation of HAP occurring at high supersaturation has been shown to proceed by steps from the initial precipitation of amorphous calcium phosphate (ACP; Ca3(PO4)1.87(HPO4)0.2) to final crystallization of HAP [19][20][21]. Feenstra and de Bruyn (1979) later found that ACP proceeds to HAP via the intermediate formation of octacalcium phosphate (OCP; Ca8H2(PO4)6·5H2O), which heterogeneously nucleates onto ACP [22]. In experiments at near-neutral pH and 25°C with a Ca/P molar ratio of 1.67 and a saturation index (relative to HAP) of 20.9, Borkiewicz et al. (2010) found that an initial precipitate consisting primarily of ACP with some brushite transformed over the course of seven days to primarily brushite with small amounts of ACP and poorly crystalline HAP. However, at lower levels of supersaturation (10^5-10^9, relative to hydroxylapatite), some researchers report that there are no such precursor phases [23], while others state that even at low degrees of supersaturation, HAP never forms homogeneously [24].
In natural systems, organic solutes may also be present and impact the precipitation of HAP. In particular, organic molecules with numerous carboxylate groups, such as humic and fulvic acids, have been shown to significantly reduce precipitation rates even at low concentrations, e.g. 0.25-5 ppm [25]. In heterogeneous nucleation systems, it is thought that such molecules adsorb onto seeds and block sites for growth [26]. Organic acids also impact HAP precipitation under non-seeded conditions (homogeneous precipitation). Investigations comparing the effects of 1 mM concentrations of citrate and acetate showed that the presence of citrate resulted in decreased crystal size, higher content of impurities, and greater incorporation of carboxylate groups into the crystal structure, relative to acetate [27]. In addition, the degree of supersaturation required to induce precipitation was higher in solutions containing 1 mM citrate (saturation index = 11.73) than in solutions containing 1 mM acetate; the latter were not significantly different from the organic acid free controls (saturation index = 10.93) [28]. In experiments with 8 mM oxalate, HAP formation was almost completely inhibited due to the precipitation of Ca-oxalate [29].
Phosphate mineral precipitation and dissolution has also been reported to be influenced by the presence of microbial cells. Ca-phosphates and struvite (NH4MgPO4·6H2O) have been reported to precipitate from culture medium and grow on the outer membrane of Gram-negative bacteria [30]. In experiments on precipitation of Ca-phosphates on agar plates in the presence of Ramlibacter tataouinensis, poorly crystallized minerals with low Ca/P ratios formed in the periplasm while nanocrystalline HAP formed inside the cell [31]. Hutchens et al. (2006) reported that when HAP was introduced into systems with both indirect and direct contact with Bacillus megaterium, dissolution rates increased, suggesting that the cells decreased the mineral stability, although the rate increase was smaller in the presence of direct contact between the cells and the HAP [32]. In experiments designed to investigate the impacts of non-metabolizing bacterial cells (Gram-positive and Gram-negative) on precipitation of phosphate minerals, Dunham-Cheatham et al. (2011) determined that although U-phosphate minerals exhibited heterogeneous nucleation at higher saturation states, Ca-phosphate minerals showed no evidence of heterogeneous nucleation, irrespective of the saturation states tested [33]. However, they noted that at the highest saturation states tested (saturation indices with respect to HAP of 8.29 and 8.34), Ca removal from solution was lower in biotic systems than in abiotic controls, an observation they attributed to the complexation of Ca by bacterial exudates.
The majority of the studies reported in the literature and cited above were conducted in "clean" systems where the precipitation medium was limited almost exclusively to the reactants (calcium, phosphate, counter ions, and selected solutes for testing, e.g., organic acids), or within bacterial culture media. The objective of the present study was to examine the precipitation of calcium phosphate minerals under conditions more representative of an actual groundwater site. We conducted experiments using a synthetic groundwater (SGW) simulating the composition of the Eastern Snake River Plain Aquifer (ESRPA) in Idaho. This SGW has been used as the medium for another study [34] of the microbial degradation of triethylphosphate (TEP) by mixed cultures derived from the subsurface at the Idaho National Laboratory (INL), a U.S. Department of Energy facility located above the ESRPA. TEP was under evaluation as a biodegradable precursor compound (e.g., an alternative to glycerol-3-phosphate) for slow release of phosphate, as a means for immobilizing contaminants such as 90Sr^2+ in phosphate minerals.
In our studies with TEP, we observed that despite accumulation of dissolved phosphate in the SGW at concentrations far in excess of what would be expected based on equilibrium considerations, detectable mineral precipitation did not occur [34]. To support interpretation of those results, we decided to determine the threshold concentration of soluble phosphate necessary in order to induce mineral precipitation in the SGW, and then conducted experiments to examine whether the addition of a trace organic acid mixture or bacterial cells affected the course of mineral precipitation and the identity of the solid products. The particular trace organic acids had been identified in the TEP degradation experiments; presumably the acids were released by the microbial cultures. We also compared the impacts of having a "low" and "high" cell concentration within the SGW; a scheme relying on microbial degradation to generate phosphate would likely involve enhancement of microbial abundance in the environment. The cells used in these experiments were a strain of Comamonas testosteroni; phylogenetic analyses of the mixed culture in the TEP degradation experiments indicated relatives of this gram-negative soil bacterium were present in the enrichment [34].
Chemicals and Materials
All chemicals used in the experiments were ACS reagent grade. Water was 0.2 μm filtered "nanopure" grade (18 megohms-cm; Barnstead, Dubuque, IA).
Synthetic Groundwater Composition
Synthetic groundwater (SGW) was formulated to mimic the ESRPA groundwater composition (Table 1). It should be noted that elevated in-situ pCO2 in the ESRPA causes CO2 to exsolve, pH to rise, and calcite to precipitate upon sampling under atmospheric pressure conditions. The SGW was formulated to reproduce in-situ pH conditions (pH 7.5, adjusted using HCl). Therefore, under experimental conditions of atmospheric pressure, the SGW contains less bicarbonate than the real ESRPA groundwater in order to preserve stability with respect to atmospheric pCO2.
Microbial Culture Preparation

C. testosteroni (ATCC 11996) was grown overnight in Nutrient Broth with 2% yeast extract. The mid-to-late log phase culture was harvested and washed three times by centrifugation (14,000 × g; 2 minutes) at 4°C and was re-suspended in sterile distilled water. The final cell concentration in the suspension was adjusted as needed to achieve the desired cell density in the experiments by the addition of 1 mL of cell suspension.
Precipitation Experiments
The impacts of the presence of bacterial cells (C. testosteroni) and of a trace mixture of organic acids on the course and products of phosphate mineral precipitation in the SGW were investigated in batch reactor systems. The selection of particular organic acids and their concentrations was based on observations from the TEP biodegradation experiments [34]. Because the solid phosphate phases that form initially are not necessarily the phases that persist over time [21,35], the intention was to evaluate the course of precipitation over one week at 25°C. To determine an appropriate amount of initial phosphate such that the onset of precipitation was detectable within a 24-72 hr window, a preliminary experiment was conducted with varying amounts of NaH2PO4 added to vessels containing SGW. Based on the results of this experiment, and using a > 3% drop in aqueous phosphate concentration as a target indicator of precipitation, an initial phosphate concentration of approximately 1.6 mM was selected for the time course experiments. The different treatments evaluated are shown in Table 2.
Possible calcium phosphate precipitation products and their calculated saturation indices (SI = log10(Q/Kso)) for the initial conditions are indicated in Table 3, where Q is the ion activity product and Kso is the mineral's equilibrium solubility product. These thermodynamic calculations were made using vMinteq v. 2.61 [36,37] with the thermo.vdb thermodynamic database, the comp_2008 component database, and the type 6 solids database [38]. Amorphous calcium phosphates AM1 and AM2 (both phases are Ca3(PO4)1.87(HPO4)0.2) are described in Christoffersen et al. (1989, 1990) [39,40]. Note that β-tricalcium phosphate cannot precipitate from low temperature aqueous solutions [41]; however, when some substitution of Ca by Mg is permitted, this phase can precipitate as whitlockite (Ca18Mg2H2(PO4)14).
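The saturation indices in Table 3 follow from the definition above; a minimal sketch is given below (in practice the ion activity product Q comes from a speciation code such as vMinteq, and the example value simply reproduces the SI of 11.8 quoted later for our conditions):

import math

def saturation_index(iap, ksp):
    """SI = log10(Q / Kso); SI > 0 indicates supersaturation."""
    return math.log10(iap / ksp)

print(saturation_index(10**11.8, 1.0))   # 11.8 relative to HAP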
Triplicate batch systems were set up for each treatment and each of three time points (60 minutes, 24 hrs, and 7 days), for a total of 9 bottles per treatment, to allow sacrifice and collection of all of the newly precipitated solids for analysis at the designated times. For each replicate, 100 mL of SGW (pH 7.5) was added to an acid-washed and autoclaved 250 mL polymethyl pentene bottle. Organic acids were added to the OA bottles, and suspensions of washed C. testosteroni cells were added to the LC and HC treatments with targeted initial average cell densities of 10^5 and 10^7 cells mL^-1, respectively. Then 1.6 mL of 0.1 M NaH2PO4 (pH 7.5) was added to each bottle. The closed bottles were incubated on a shaker table in a temperature-controlled (25°C) chamber.
Each bottle was sampled immediately after preparation to determine the elemental composition of the aqueous phase. In addition, aqueous samples (1.5 mL) were collected at 60 minutes, 24 hours, and daily until seven days had elapsed. Each sample was centrifuged for 10 minutes at 12,700 × g. The supernatant was filtered (0.2 μm) prior to acidification (nitric acid) and analysis of elemental composition. The pH was measured periodically throughout the experiment.
Samples for cell counts (1 mL for HC treatments, 10 mL for LC treatments) by fluorescent microscopy were also removed at the beginning and end of the experiment, and immediately preserved with 2% formaldehyde and refrigerated until processing. All sample transfers were conducted using sterilized implements and containers.
For solid (precipitated minerals and/or cells) collection, the supernatant was removed after centrifuging (8,000 × g, 20 minutes) and then the solids were washed 5 times with ethanol and repeated centrifuging to remove residual salts. The solids collected from the three replicates for a given treatment and time period (e.g. T1, 24 hours) were combined to attain sufficient mass for analysis.
ICP-AES
Analysis of aqueous samples for Na, K, Ca, Mg, Sr, P, and S was carried out by ICP-AES (iCAP 6000, Thermo Fisher Scientific, Waltham, MA).
Scanning Electron Microscopy
Solids (in an ethanol slurry) were mounted onto silicon plates, dried, and coated with a thin layer of platinum. Scanning electron microscopy (SEM) of the solids was performed using an FEI Q650 with a tungsten field emission gun in environmental mode using a large field detector. The accelerating voltage was 5 keV.
X-Ray Diffraction
The ethanol-washed solids were mounted on oriented silicon plates and allowed to air dry before analysis by X-ray diffraction (XRD) using a Bruker D8 Advance X-ray diffractometer (Bruker AXS, Congleton, Cheshire, UK) with Cu Kα1/2 emission using a Goebel mirror. The accelerating voltage was 40 kV with a current of 30 mA, and the step size was 0.02 degrees with an integration time of 2 seconds.
Rietveld Structure Refinement
Rietveld structure refinement was performed on diffraction patterns obtained from solids collected at the end of the experiment (7 days) to confirm phases, refine lattice parameters and estimate crystallite dimensions. Topas 4.2 (Bruker AXS) was used for peak profile fitting and application of analytical Voigt functions to fit the diffracted peak profiles.
Microbial Enumeration
Microbial cells in the formaldehyde-preserved samples were enumerated using direct microscopic counts. Cells were stained with 0.01% acridine orange, filtered onto black 25 mm 0.2 μm polycarbonate filters, and counted by epifluorescent microscopy using standard protocols [42].
Changes in Solution Chemistry
The pH decreased in all of the treatments over the 7 days of observation (Figure 1), as would be expected with HAP precipitation. However, the rate of pH change varied for the treatments. In the abiotic control (AC), most of the pH change occurred within the first 48 hrs and then the pH stabilized to just below 7 by 120 hrs. In contrast, in the organic acid (OA) treatment the pH dropped more slowly, although it mirrored the behavior of the AC treatment after 72 hours.
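The direction of the pH change is what HAP stoichiometry predicts: near neutral pH, where HPO4^2- is a major phosphate species, precipitation releases protons, as can be seen from the overall reaction

5Ca^2+ + 3HPO4^2- + H2O → Ca5(PO4)3(OH) + 4H+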
The low cell (LC) treatment showed a continually dropping pH for the first 96 hours, and the final pH was lower than in the AC or OA treatments. In comparison, the high cell (HC) treatment showed a delayed response, but it too reached a pH value lower than the AC and OA treatments and was indistinguishable from the LC treatment by the end of the 7 days. The lower pH in the cell-containing treatments could have been due to the release of amino acids or other organic acids from the cells during the experiments; both LC and HC treatments exhibited a pungent amine-like odor at the conclusion of the experimental period. However, high performance liquid chromatography assays for common organic acids (citrate, butyrate, oxalate, propionate, formate, acetate, succinate, and malate) conducted on aliquots from the treatments after both 1 and 7 days did not detect any of these compounds at levels above the method detection limit of 0.1 mM, except for isobutyrate (0.18 mM) in one of the three replicates for the LC treatment after one day (data not shown).

Figure 1. Average pH over the course of the experiment. Error bars show two standard deviations based on triplicate analysis.
With respect to soluble P concentrations (Figure 2a), all of the treatments behaved similarly for the first 24 hours, but by 48 hours there were significant differences between the cell-containing treatments and the cell-free treatments. Aqueous P concentrations for the cell treatments (LC and HC) were elevated relative to the cell-free treatments (AC and OA), particularly for the first 72 hours, and they remained higher for much of the duration of the experiment, until the last 2 days, when they appeared to have "caught up to" the cell-free treatments. There was no statistically significant difference in P concentrations between the two cell-containing treatments or between the two cell-free treatments.
The trends in Ca and Sr concentrations (Figures 2b and 2c) were similar to the trend observed for P; that is, removal of all three elements from solution was delayed in the cell-containing treatments as compared to treatments without cells. Similarly to the situation for P, there does not appear to be a statistical difference between the two cell treatments or between the cell-free treatments with respect to Ca removal. However, there are differences between treatments with respect to Sr removal. Until the final time point, the AC treatment removed more Sr from solution than any other treatment (Figure 2c). The difference between treatments becomes more obvious if we consider the Ca/Sr ratio in solution: by the end of the experiment, the molar ratio of Ca/Sr remaining in solution was highest in the AC treatment. Within the resolution of our methods, concentrations of Na, K, and S for all treatments were unchanged during the course of the experiment.
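For reference, the Ca/Sr comparison is made on a molar basis from the ICP-AES mass concentrations; the numerical values in this sketch are hypothetical placeholders, not the measured data:

M_CA, M_SR = 40.08, 87.62          # molar masses, g/mol

def ca_sr_molar_ratio(ca_mg_per_l, sr_mg_per_l):
    """Molar Ca/Sr ratio from mass concentrations in mg/L."""
    return (ca_mg_per_l / M_CA) / (sr_mg_per_l / M_SR)

print(ca_sr_molar_ratio(45.0, 0.2))   # ~492 for the hypothetical inputs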
Characterization of Solids
No solids were observed to have formed after 60 minutes in any of the treatments. After 24 hours, precipitated solids were visible and could be collected from the AC treatment. Solids were also present in the OA and LC treatments, but in both cases the collected solids redissolved in the ethanol used for washing, and could not be subsequently characterized by XRD or SEM. Precipitates could not be collected from the HC treatment until after 72 hours. SEM examination of the AC solids collected after 1 day showed agglomerations of roughly spherical particles on the order of 100 nm in diameter (Figure 3a). The XRD pattern of the solids was consistent with that reported by Li and Weng [43] for amorphous calcium phosphate (ACP). The HC solids at 72 hours also showed a similar XRD pattern (data not shown), although the SEM imaging showed that the precipitates were more needle-like (Figure 3b). Bacterial cells were enmeshed within the precipitated material.
After 7 days, solids were collected from all four treatments. Imaging of the 7-day solids from all of the treatments by SEM showed a morphology like crinkled paper (Figures 3c through 3f). Bacterial cells were visible amongst the precipitates from the LC and HC treatments, and precipitates appeared to coat some of the bacterial surfaces (arrows in Figures 3e and 3f). Figure 4 shows an X-ray diffraction pattern for precipitates collected from the AC treatment at the conclusion of the experiment. The pattern is consistent with poorly crystalline HAP, with no significant precipitation of any other phase. Elevated counts seen at ~6° 2θ are caused by the sample holder. The patterns collected from the specimens representing the other three treatments looked similar and are not shown here.
Results from the Rietveld analysis performed on diffraction patterns from the 7-day solids are shown in Table 4. Compared with natural hydroxylapatite, which has lattice parameters a = 9.417 Å and c = 6.875 Å [44], the results clearly showed that lattice parameter a became substantially larger, with the progression AC < OA < LC < HC, while lattice parameter c became somewhat smaller (AC ≈ OA > LC = HC). The overall effect is an increasing cell volume in the sequence AC < OA < LC < HC. The 7-day samples were analyzed a second time by XRD 10 months after the initial analysis. There were no significant changes in the X-ray diffraction patterns generated by the AC, OA, or LC treatments; however, although the samples had been stored dry, the second analysis of the HC treatment produced a diffraction pattern completely devoid of peaks, suggestive of a phase lacking any far- or medium-range ordering. For pure systems, the degree to which HAP deviates from stoichiometry can be described by the Ca/P molar ratio. Perfectly stoichiometric HAP has a Ca/P molar ratio of 1.67. Because our precipitated phase contains Mg and Sr in addition to Ca, it is reasonable to examine the divalent cation to phosphate ratio (i.e., (Ca + Mg + Sr)/P). The assumption that the major ions accounting for the phase's stoichiometry include Ca, Mg, Sr, and P is reasonable because although carbonate can substitute into the crystal structure, it would result in a smaller a lattice parameter [45], not the larger a parameter that we observe with our solids. This suggests that if carbonate is present in the crystal structure, the effect is dominated by substitution of HPO4^2- (see Discussion). Although some substitution of chlorine for hydroxyl is possible in these solids, it would not change the structure type or the overall phase identification.
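The unit-cell volumes discussed above follow directly from the hexagonal lattice geometry, V = (sqrt(3)/2)·a²·c; a quick check against the natural-HAP parameters quoted from [44] (the function is generic; only a and c are from the text):

import math

def hex_cell_volume(a, c):
    """Unit-cell volume of a hexagonal lattice; cubic angstroms for a, c in angstroms."""
    return (math.sqrt(3) / 2) * a**2 * c

print(hex_cell_volume(9.417, 6.875))   # ~528 A^3 for natural hydroxylapatite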
The trends in the divalent cation/P ratios for each of the four treatments are presented in Figure 5, which shows that stoichiometric HAP is most closely approached by the OA and AC treatments.
Cell Growth
Although the cells used for inoculation in the LC and HC treatments had been washed with deionized water and were not provided with growth substrates, cell populations nonetheless appeared to increase over the course of the week-long experiment for the LC treatment. The electron and carbon donor(s) necessary to support the observed growth are unknown. As noted previously, analyses for organic acids at day 1 and day 7 did not indicate accumulation of such compounds, although it is possible that they were released by cell lysis at concentrations below the method detection limit and/or that they were released but rapidly consumed.
During the cell enumeration procedure, it was noted that the cells at day 7 appeared smaller than at day zero. Cell counts rose from 4.28 ± 0.87 × 10^5 mL^-1 at time zero to 1.11 ± 0.21 × 10^6 mL^-1 at 7 days. In the HC treatments, the counted numbers of suspended cells decreased from 2.31 ± 0.38 × 10^7 to 3.76 ± 0.27 × 10^6 mL^-1, but these values likely do not reflect the true biomass, as an extensive biofilm was observed to develop on the bottom of the HC reactor vessels during the week-long experiment. The increased cell density in the LC treatment could explain why the two cell-containing treatments did not appear to be significantly different from each other with respect to precipitation progress (Figure 2) or final product composition (Figure 5).

Discussion

The general progression of mineral precipitation in our experiments was similar to that reported by others working under comparable conditions (e.g., level of saturation, pH) but in aqueous media of simpler composition. The mineral phase that precipitated after 1 day (amorphous calcium phosphate) was not the same as the one identified at the end of 7 days (poorly crystalline HAP). Although we were not able to detect them with our experimental design, it is possible that other transient phases besides ACP were formed prior to the crystallization of HAP. In experiments conducted at ambient temperature and near-neutral pH in a medium of 100 mM calcium acetate and 60 mM ammonium phosphate (conditions resulting in a saturation index relative to HAP of 20.9 and a Ca/P molar ratio of 1.67, compared to our values of 11.8 and 0.94, respectively), Borkiewicz et al. (2010) detected the formation of only brushite and ACP during an 8 hr observation period, using in situ monitoring with synchrotron XRD [21]. However, in a similar seven-day precipitation experiment with ex situ monitoring by XRD, Borkiewicz et al. (2010) observed initial formation of a precipitate mixture composed predominantly of ACP, with a small amount of brushite. By the experiment's conclusion, the amorphous precipitate had largely transformed into brushite with small amounts of poorly crystalline HAP and some ACP remaining.
The addition of the mixture of organic acids did not appear to impact precipitation progress relative to the abiotic control; the solute concentration profiles were almost identical (Figure 2), as was the composition of the final mineral phase as estimated from solution chemistry (Figure 5). The pH decrease initially lagged, but by 72 hours the trend for the OA treatment mirrored the profile for the abiotic control. The physical appearance (Figure 3d) of the final solid was also indistinguishable from that of the abiotic control. However, the OA cell volume was somewhat larger than the cell volume of the AC precipitate (Table 4), a finding that is contrary to what van der Houwen et al. (2003) reported based on studies using a higher concentration of organic acids (1 mM) in solution. The organic acid mixture of propionate, isovalerate, formate and butyrate was added at a total concentration of 0.035 mM; this was equivalent to 0.13 mM dissolved organic carbon (DOC) or 1.5 mg-C L^-1. This level of DOC is higher than the concentrations reported by Amjad and Reddy (1998) [25] to slow the rate of HAP precipitation, but Amjad and Reddy were studying the effects of humic compounds, rather than simple monocarboxylic acids like those included in this study. Our findings support the reports of van der Houwen et al. (2003) that the number of carboxylate groups is more important than the bulk DOC concentration with respect to inhibition of HAP precipitation.
The addition of cells to the precipitation medium appeared to have a more dramatic impact than the organic acids. During the first 24 hours, the extent of precipitation in the cell-containing treatments appeared to be indistinguishable from the cell-free treatments, as indicated by the P and Ca profiles (Figures 2a and 2b), respectively. However, after the first day the rate of precipitation in the cell-containing treatments apparently slowed, and removal of Ca, Sr and P from solution was less in the LC and HC treatments compared to the AC and OA treatments, until near the end of the experiment. The reason for the temporary lag in precipitation is unknown. One possibility is the production of extracellular polymeric substances (EPS) by the microbial cells. EPS can inhibit mineral precipitation by reducing diffusion rates and creating microgradients of solutes at the mineral surface, and they can also bind divalent metals, such as Ca^2+ and Mg^2+, thus decreasing effective saturation in the bulk phase [46]. The resulting constraint on mass transfer of the reactants to the mineral surface would result in slowed precipitation rates. As noted previously, for their experiments with initial saturation states closest to our conditions, Dunham-Cheatham et al. (2011) reported decreased aqueous Ca removal in the presence of cells compared to abiotic controls and attributed this to formation of Ca complexes with bacterial exudates. However, their experiments were conducted primarily over the span of 2-3 hours; the longest experiments performed were 48 hours [33]. In our experiments, between 2 and 5 days we too observed significant differences in Ca removal between the cell-containing and cell-free experiments (Figure 2b). However, the two conditions converged by the end of our 7 day experiments. EPS production would be consistent with the observed development of a biofilm in the HC treatment reactors. In the LC reactors, no biofilm formation was visible, but this may have been associated with the lower initial cell density.
With respect to observed concentrations of phosphate in solution, in experiments such as ours there is the potential for internal phosphate reserves within the cells to play a role. Dunham-Cheatham et al. (2011) noted some contribution of phosphate from cells in their experiments. However, in our studies the concentration of phosphate added was relatively high (~1.6 mM) and the cell density was relatively low (maximum 2.31 × 10^7 cells/mL). Assuming 6.7 fg P per stationary-phase cell (based on an estimate for E. coli; [47]), the potential P contribution from cells was 300 times smaller than the amount of P added externally as NaH2PO4, suggesting that the potential phosphorus contribution from the cells was negligible.
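The 300-fold figure can be reproduced from the numbers given in the text (6.7 fg P per cell, 2.31 × 10^7 cells mL^-1, 1.6 mM added phosphate); only the unit conversions are added here:

cells_per_ml = 2.31e7              # highest observed cell density, cells/mL
p_per_cell_g = 6.7e-15             # 6.7 fg P per stationary-phase cell [47]
M_P = 30.97                        # molar mass of P, g/mol

p_from_cells = cells_per_ml * 1e3 * p_per_cell_g / M_P   # mol/L potentially released
p_added = 1.6e-3                                         # mol/L added as NaH2PO4
print(p_added / p_from_cells)      # ~320, i.e. roughly a 300-fold difference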
One of the most significant differences between the treatments lies in the degree of departure from HAP stoichiometry. As seen in Figure 5, HAP stoichiometry is most closely approached with the AC treatment (divalent cation/P ratio = 1.61), and its greatest departure is with the treatments containing cells (divalent cation/P ratio ~1.4), with the OA treatment having a Ca/P ratio of 1.58. So-called Ca-deficient and other non-stoichiometric apatites have been synthesized in the laboratory, and it has been reported that the lower the HAP stoichiometry, the larger the a lattice parameter [45]. The size of the c lattice parameter was related more to the pH of formation, with a smaller c lattice parameter forming at a pH less than 7 [45]. This is consistent with our data, which show that the less stoichiometric precipitates are accompanied by the larger lattice parameter a (Figure 5, Table 4). The smaller c lattice parameter for the cell-containing treatments is likely the result of the lower solution pH measured for those treatments (Figure 1). The expansion of the a lattice parameter in non-stoichiometric apatites is attributed to HPO4^2- and water in the structure (Elliot, 1994). This results in a cation deficiency (lower Ca/P ratio), which is reflected most obviously in the relatively lower amount of calcium incorporated into the HC and LC solids, but this phenomenon would also reduce Sr incorporation into those same solids [48].
The increased departure from stoichiometry for the HAP produced in the cell-containing treatments, as compared to the AC and OA treatments, may also have been related to the observed loss of crystal integrity for the HC treatment by the time of the second XRD analysis. In natural HAP, the bond between the phosphorus and each oxygen is about 60% covalent, with the remainder consisting primarily of ionic bonding [49,50]. However, in non-stoichiometric HAP, calcium ions are removed from the crystal structure and are replaced by ions that are hydrogen-bonded to adjacent ions [49]. In the case of the HC treatment precipitates, it appears that the non-stoichiometry was to such a degree that 10 months after precipitation the X-ray diffraction pattern no longer showed any evidence of medium- or far-range order, although the SEM images looked similar at both time points.
Conclusions
The induced precipitation of hydroxylapatite mineral phases in "clean" laboratory systems (i.e., containing just Ca, phosphate, sodium and chloride) has been observed to require the generation of very high levels of supersaturation (>10 orders of magnitude with respect to HAP) [28], and our results suggest that this will be true for groundwater systems as well. Our results are also consistent with the reports of others that HAP precipitation occurs following the transformation of a precursor phase of amorphous calcium phosphate.
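For readers who want to reproduce this kind of assessment, the following is a minimal sketch of a saturation index calculation with respect to HAP (Ca₅(PO₄)₃OH); the log Ksp value and the free-ion activities below are illustrative assumptions, not values from our experiments.

```python
import math

LOG_KSP_HAP = -44.3  # assumed solubility product for hydroxylapatite (25 C)

def saturation_index(a_ca, a_po4, a_oh):
    """SI = log10(IAP / Ksp) for Ca5(PO4)3OH, given free-ion activities."""
    log_iap = 5 * math.log10(a_ca) + 3 * math.log10(a_po4) + math.log10(a_oh)
    return log_iap - LOG_KSP_HAP

# Hypothetical activities for a strongly phosphate-amended water;
# this example yields SI ~ 16, i.e. well over 10 orders of magnitude.
print(saturation_index(a_ca=1e-2, a_po4=1e-4, a_oh=1e-6))
```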
The crystallization of HAP in our experiments proceeded relatively slowly; at the end of the week-long experiments the precipitates in the cell-free treatments still appeared to be evolving toward more stoichiometric, and presumably more stable, HAP. This suggests that the minerals formed initially during an engineered precipitation application for trace element sequestration may not be the ones that control long-term immobilization of the contaminants.
The organic acids that were detected in previous experiments designed to observe phosphate mineral precipitation induced by microbial degradation of TEP were unlikely to have been responsible for the absence of detectable mineral precipitation; rather, the amount of phosphate released by the microbes was simply insufficient to achieve the required level of supersaturation. In a natural system where sufficient phosphate can be added, however, organic acids may be important, although they are more likely to be humic and fulvic acid type compounds than the simple monocarboxylic acids tested here.
Another finding from our study is that microbial cells at concentrations representative of typical and stimulated groundwaters can impact HAP formation by delaying precipitation, by making the precipitates less stoichiometric, and by reducing the incorporation of trace elements in the HAP as compared to cell-free systems. In addition, such precipitates may exhibit reduced long-term stability. Whether these effects are exerted through intact cells or via soluble organic compounds derived from or released by the cells could not be conclusively determined in our study. Nevertheless, schemes to remediate groundwater contaminated with trace metals that are based on enhanced phosphate mineral precipitation may need to account for such effects, particularly if the remediation approach relies on enhancement of in situ microbial populations.
|
2016-05-04T20:20:58.661Z
|
2011-10-26T00:00:00.000
|
{
"year": 2011,
"sha1": "b0103b6b4fb3aa09ca26a143da7726a42b431e3c",
"oa_license": "CCBY",
"oa_url": "https://geochemicaltransactions.biomedcentral.com/track/pdf/10.1186/1467-4866-12-8",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "70781e770bf3d49af44f2f715aaec6e77a5a74a9",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
}
|
221121404
|
pes2o/s2orc
|
v3-fos-license
|
Clinical and Model‐Based Evaluation of the Effect of Glasdegib on Cardiac Repolarization From a Randomized Thorough QT Study
Abstract Glasdegib is a potent, selective oral inhibitor of the Hedgehog signaling pathway. This phase 1 double‐blind thorough QT study (NCT03162900) evaluated the effects of glasdegib on the QTc interval. The study enrolled 36 healthy volunteers to receive a single dose of 150 mg glasdegib (representing a therapeutic dose), 300 mg glasdegib (representing a supratherapeutic dose), 400 mg moxifloxacin (positive control), or placebo under fasted conditions. The study demonstrated that therapeutic and supratherapeutic doses of glasdegib had no significant effect on the QTc interval; the upper bound of the 2‐sided 90% confidence intervals (CIs) for all time‐matched least‐squares mean differences in QT interval corrected using Fridericia's formula (QTcF) between glasdegib and placebo was below the prespecified criterion of 20 milliseconds (Food and Drug Administration correspondence reviewed and accepted). Based on an exposure–response analysis, glasdegib was determined not to have a meaningful effect on heart rate (change in RR interval). The mean (90%CI) model‐derived baseline‐ and placebo‐adjusted QTcF values at the average maximum observed concentrations corresponding to therapeutic and supratherapeutic glasdegib doses were 7.3 milliseconds (6.5‐8.2 milliseconds) and 13.7 milliseconds (12.0‐15.5 milliseconds), respectively. Together these results demonstrated that following therapeutic and supratherapeutic glasdegib dosing, the change in QTc from baseline was well below the 20‐millisecond threshold of clinical concern in oncology.
Glasdegib is a selective, once-daily, oral small-molecule inhibitor of Smoothened, a key protein in the Hedgehog (Hh) pathway. Aberrant Hh signaling has been identified in many solid tumor types and in hematologic malignancies. As an inhibitor of the Hh signaling pathway, glasdegib may act as an inhibitor of leukemic stem cells. 1 Glasdegib is approved by the US Food and Drug Administration (FDA) in combination with low-dose cytarabine (LDAC) for the treatment of newly diagnosed acute myeloid leukemia (AML) in adult patients who are ≥75 years old or who have comorbidities that preclude use of intensive induction chemotherapy. 2 In preclinical evaluation using an in vitro assay in human embryonic kidney 293 cells, glasdegib demonstrated the ability to inhibit the human ether-à-go-go-related gene (hERG) potassium channels in a concentration-dependent manner, suggesting the potential to affect the cardiovascular system. 3 In the first-in-patient dose-escalation study, following single-agent glasdegib treatment (5 to 600 mg once daily), some patients with advanced hematologic malignancies experienced QT intervals corrected for heart rate (QTc) of >500 milliseconds following multiple dosing at the 2 highest evaluated doses of 400 mg once daily (maximum tolerated dose) and 600 mg once daily (maximum administered dose). 4 Based on phase 1 clinical evidence of consistent downregulation of the Hh pathway at ≥100 mg once daily, clinical efficacy signals, and the safety and tolerability profile, as well as to provide an additional exposure margin for possible drug-drug interactions (DDIs) with the potential to increase glasdegib exposure, a 100-mg oral once-daily dose was chosen for further clinical evaluation. 4,5 In a phase 2 study, 100 mg glasdegib once daily was administered in combination with a chemotherapy backbone to patients with AML or high-risk myelodysplastic syndromes (MDSs). 6 At the 100-mg once-daily glasdegib dose in combination therapy, QT interval values of >500 milliseconds, corrected using Fridericia's formula (QTcF), or changes in QTcF >60 milliseconds from baseline were noted in a few instances in the setting of multiple confounders, such as underlying disease and concomitant medications (grade 3 QTcF changes were also observed in patients treated with chemotherapy alone). 6 Consequently, a phase 1 study (study B1371023) in healthy volunteers was designed and conducted to estimate the effect of glasdegib on cardiac repolarization, specifically on the QTc interval.
In this thorough QT (TQT) study, single oral doses of glasdegib 150 and 300 mg were selected to achieve maximum observed plasma concentrations (C max) and exposures representative of steady-state therapeutic (approximately 100 mg once daily) and supratherapeutic (approximately 200 mg once daily) doses. The higher single doses used in this study aimed to account for the accumulation of glasdegib following repeated daily dosing. 4 The choice of the supratherapeutic dose was based on available pharmacokinetic (PK) data, the known DDI potential with a strong cytochrome P450 (CYP)3A4/5 inhibitor (ketoconazole), which increased the mean C max of glasdegib by 40%, cumulative safety information, and, last, the intent to allow for adequate coverage (∼100% increase in C max) above and beyond the anticipated maximum exposures in clinical situations. 5 Placebo and positive control (moxifloxacin) treatments were also included in this randomized TQT study, in line with International Council for Harmonisation (ICH) E14 guidance. 7 Although it is difficult to determine whether a mean change in QTc interval can be considered inconsequential, based on the ICH E14 guidelines, the threshold of regulatory concern is that the upper bound of the 1-sided 95% confidence interval (CI) around the largest time-matched mean effect on QTc be <10 milliseconds. In addition, the guidelines indicate that any drug causing mean QTc prolongation >20 milliseconds has a substantially increased likelihood of causing cardiac arrhythmias. The threshold of <20 milliseconds is widely accepted for anticancer drugs, given that the potential therapeutic benefit of these agents is frequently deemed to outweigh the risk of cardiac events. 8-10 A model-based population PK/pharmacodynamic (PD) analysis using the clinical data from this TQT study was subsequently performed to characterize the exposure-response (E-R) relationship between glasdegib plasma concentrations and QTc in healthy volunteers. This analysis used a prespecified linear mixed-effects (LME) model recently recommended for use in analyzing electrocardiogram (ECG) concentration data, 11,12 as it allows for the characterization of QTc change from baseline (ΔQTc) under both placebo and active (glasdegib) treatment conditions, as well as model-based prediction of the placebo-adjusted change from baseline in QTc (ΔΔQTc).
Study Design and Treatments
Study B1371023 (ClinicalTrials.gov, NCT03162900) was a phase 1 single-dose, single-center, randomized, double-blind, placebo- and moxifloxacin-controlled, crossover TQT study in healthy volunteers. The subjects, investigator, and site personnel involved in the study were blinded to study treatments (except open-label moxifloxacin), and the sponsor was unblinded. Subjects were randomized to 1 of 4 treatment sequences. Each treatment sequence consisted of 4 treatment periods (placebo, moxifloxacin 400 mg, glasdegib 150 mg, and glasdegib 300 mg) as described in Figure 1, with a washout period ≥6 days between successive study treatment doses. Following an overnight fast (≥10 hours), subjects received oral treatment at ∼8:00 AM on day 1 of each period. Subjects were required to refrain from lying down, eating, or drinking beverages other than water during the first 4 hours after dosing.
The primary objective of the study was to estimate the effect of glasdegib on QT/QTc relative to time-matched placebo. Other objectives included: (1) evaluating study sensitivity by assessing the effect of moxifloxacin on the QTc interval, (2) assessing the safety and tolerability of single doses of glasdegib in healthy adult volunteers, and (3) evaluating the PK of glasdegib and the relationship between QTc and plasma concentrations.
This study was conducted at the Pfizer Clinical Research Unit, Belgium, in compliance with the ethical principles originating in or derived from the Declaration of Helsinki and in compliance with all ICH Good Clinical Practice guidelines. The final study protocol and informed consent documentation were approved by the institutional review board, the Comité d'Ethique Hospitalo-Facultaire Erasme-ULB, Brussels, Belgium. All subjects provided written informed consent prior to participating and before any screening procedures were initiated.
Subjects
Eligible subjects included healthy women of nonchildbearing potential and men aged 18 to 55 years, with a body mass index of 17.5 to 30.5 kg/m², body weight >50 kg, and no known history of QTc prolongation, cardiovascular disease, or ECG abnormalities.

Figure 1. Trial design and randomization scheme. Healthy subjects (n = 9 per sequence) received each of the 4 treatments (glasdegib 150 mg, glasdegib 300 mg, moxifloxacin 400 mg, and placebo) in the order randomly assigned by their treatment sequence.
Safety and Electrocardiogram Assessments
Safety evaluations included monitoring adverse events (AEs) and serious AEs (SAEs), safety laboratory tests, physical examinations, vital signs, and 12-lead ECGs.
Using the semiautomated method, triplicate 12-lead (with a 10-second rhythm strip) measurements in the supine position were collected ∼2 minutes apart to determine the mean QTc interval. ECG assessments for all treatments occurred at −1, −0.5, and 0 hours predose and at 0.5, 1, 1.5, 2, 3, 4, 6, and 24 hours postdose, and were collected prior to blood draws. Baseline ECG values were determined by averaging the means of the triplicates collected at −1, −0.5, and 0 hours predose.
Pharmacokinetic Evaluation
Blood samples for PK analysis were collected 0, 0.5, 1, 1.5, 2, 3, 4, 6, 24, 72, 96, and 120 hours postdose using collection tubes with dipotassium ethylenediaminetetraacetic acid anticoagulant. All study treatments had identical sample collection times. Moxifloxacin samples were planned to be analyzed only if deemed necessary (ie, if no positive QTc signal was observed); based on the observed results, the analysis of moxifloxacin was not required. Analysis of placebo PK samples was also not performed.
Glasdegib plasma concentrations were measured using a validated, sensitive, and specific high-performance liquid chromatography-tandem mass spectrometric method at Covance Bioanalytical Services (Shanghai, China). 13 Calibration curves were linear over the range of 3 to 3000 ng/mL for glasdegib in plasma, using weighted (1/concentration²) linear regression. The lower limit of quantification (LLOQ) of glasdegib was 3 ng/mL. PK plasma samples were stored at −70°C and assayed within the 575 days of established frozen plasma stability. Interassay accuracy (percentage relative error) at 9, 100, 2250, and 15 000 (diluted 10-fold) ng/mL glasdegib in quality-controlled plasma samples ranged from −1.8% to 11.3%. Interassay precision (percentage coefficient of variation [%CV]) was ≤6.4% across quality-control levels.
Glasdegib PK parameters including C max , time when C max was reached (T max ), area under the plasma concentration-time profile (AUC) from time 0 to the time of the last quantifiable concentration, AUC from time 0 extrapolated to infinite time (AUC inf ), apparent oral clearance, apparent volume of distribution, and terminal half-life were calculated using noncompartmental analysis of plasma concentration-time data. Samples below the LLOQ were set to 0 for analysis.
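As an illustration of the noncompartmental calculations named above, the sketch below computes C max, T max, trapezoidal AUC, terminal half-life, and extrapolated AUC inf from a concentration-time profile; the numbers are made-up placeholders rather than study data, and this simplified sketch is not the validated analysis used in the study.

```python
import numpy as np

t = np.array([0, 0.5, 1, 1.5, 2, 3, 4, 6, 24, 72, 96, 120])          # hours
c = np.array([0, 150, 420, 600, 640, 610, 540, 430, 210, 40, 18, 8.0])  # ng/mL

cmax = c.max()                      # maximum observed concentration
tmax = t[c.argmax()]                # time of Cmax
auc_last = np.trapz(c, t)           # linear trapezoidal AUC(0 - tlast)

# Terminal half-life from a log-linear fit of the last three time points
slope, _ = np.polyfit(t[-3:], np.log(c[-3:]), 1)
t_half = np.log(2) / -slope
auc_inf = auc_last + c[-1] / -slope  # extrapolate AUC to infinite time

print(cmax, tmax, auc_last, t_half, auc_inf)
```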
Sample Size Determination
A sample size of 36 subjects (9 per sequence) provided at least 98% power to exclude a time-matched difference in QTcF between glasdegib and placebo of >20 milliseconds, that is, for the upper bound of the 2-sided 90%CI (equivalent to a 1-sided 95%CI) to fall below 20 milliseconds (FDA correspondence reviewed and accepted) at each time point. 3 The overall study power for 8 postdose times following the day 1 dose was ≥85%. These calculations were based on the assumptions that the expected mean difference in QTcF between glasdegib and placebo would be no greater than 15 milliseconds at each time, and that the intrasubject variability was 5.27 milliseconds, based on the mean of 13 previous Pfizer TQT studies (data on file).
Statistical Analysis
The PK parameter analysis population was defined as all subjects randomized and treated who had at least 1 of the glasdegib PK parameters of primary interest in at least 1 treatment period.
The ECG analysis population was defined as all subjects randomized and treated who had at least 1 postdose ECG measurement in at least 1 period. The average of triplicate ECG measurements was used in all statistical analyses. All statistical analysis was conducted in SAS version 9.1, TS2M3 (SAS Institute, Cary, North Carolina). The postdose QTcF intervals were analyzed with baseline as a covariate. Analysis of covariance (ANCOVA) was conducted using a mixed-effects model with sequence, period, treatment, time, and treatment-by-time interaction as fixed effects, subject within sequence as a random effect, and baseline QTcF as a covariate. The 2-sided 90%CI (equivalent to a 1-sided 95%CI) for the time-matched change from placebo in QTcF at each time on day 1 was computed for each dose of glasdegib and moxifloxacin.
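For illustration, a rough Python analogue of this ANCOVA could be written with statsmodels as below; the file and column names are hypothetical, and the covariance structure is simplified relative to the SAS analysis described above.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per subject/period/time, with triplicate-averaged QTcF values.
df = pd.read_csv("ecg_long.csv")   # hypothetical data file

model = smf.mixedlm(
    "qtcf ~ baseline_qtcf + C(sequence) + C(period) + C(treatment) * C(time)",
    data=df,
    groups=df["subject"],          # random effect for subject (within sequence)
)
fit = model.fit(reml=True)
print(fit.summary())
```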
Given glasdegib's indication for use in oncology patients, the study was designed to exclude a large effect of glasdegib on the QTc interval. Absence of a large effect of glasdegib on the QTc interval was to be concluded if the upper bounds of the CIs for all the time-matched mean differences between glasdegib and placebo were <20 milliseconds (FDA correspondence reviewed and accepted). 3 This study was deemed adequately sensitive to detect QT/QTc prolongation if the lower bound of the 2-sided 90%CI for the mean difference between moxifloxacin and placebo was >5 milliseconds at 3 hours postdose. Categorical analysis of QTcF for absolute postdose maximum value and maximum increase from baseline was also generated.
Exposure-Response Analysis
Using the clinical data from the TQT study, further analyses were conducted to characterize the E-R relationship between glasdegib plasma concentration and QTc using a PK/PD model.
Characterization of the QTc-concentration relationship was performed in the following stepwise manner: (1) The effect of glasdegib on heart rate (RR interval) was evaluated to support the assumption that the QT-RR relationship is the same regardless of the presence or absence of drug. (2) The concordance in the time course of glasdegib plasma concentrations and QTcF (absence of hysteresis) was evaluated through assessment of the PK profiles along with the placebo-adjusted change from baseline in QTcF (ΔΔQTcF) profiles over time, by dose level.
(3) The QT interval correction for RR was determined.
Although QTcF was the primary end point for analysis, LME methods were also used to estimate a study population-specific correction factor (β) to allow for determination of a study population-specific QTc (QTcS).
(4) QTcF, QTc using Bazett's formula, and the QTcS factors were evaluated for appropriateness (elimination of the QTc-RR relationship); a minimal numerical sketch of these corrections follows this list. (5) The presence of a linear QTc-concentration relationship was verified to support the use of a linear PK/PD model. (6) The relationship between baseline-adjusted QTc (ie, ΔQTcF, ΔQTcS) and glasdegib plasma concentration was evaluated initially with a prespecified LME model 11 in which the QTc and concentration data from both placebo and glasdegib treatment periods (therapeutic and supratherapeutic) were analyzed, with concentrations during placebo treatment set to 0. This base model to describe the dependent variable ΔQTc (QTc change from the period-specific baseline at time = 0) included the following fixed-effect parameters: intercept, slope, and the effects of treatment (categorical), time (categorical), and baseline QTc (continuous) on the intercept (see Equation 1 at the end of this article). Characterizing the placebo response at each nominal time accounted for the effect of diurnal variation in QTc. Subject was included as a random effect on both the intercept and slope. A nonsignificant slope would indicate a lack of evidence of an effect of glasdegib concentration on the QTc interval.
Interindividual variability was included for the mean population parameters of both intercept and slope using an additive error model for each individual. Residual variability was also modeled as an additive error. (7) Evaluation of model adequacy (goodness of fit) was completed through various diagnostic plots. Model predictive performance was assessed through visual predictive checks (VPCs). (8) The model-derived difference in baseline-corrected QTc (ie, ΔΔQTcF, ΔΔQTcS) was computed across relevant glasdegib concentrations using the final PK/PD model. This provided the mean and 2-sided 90%CI for ΔΔQTc at concentrations of interest (eg, C max at the therapeutic dose [100 mg once daily] and the supratherapeutic dose [200 mg once daily], based on prior observed data in the patient population). 6,18 Data manipulation, post-processing, and graphics were conducted using R Studio (version 3.4.1). Estimation was conducted using the nlme library (version 3.1-131).
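As a minimal numerical sketch of the heart-rate corrections compared in steps (3) and (4): the fixed exponents 1/3 (Fridericia) and 1/2 (Bazett) are standard, while 0.250 is the study-specific β reported in the Results; the QT and RR values below are illustrative only.

```python
def qtc(qt, rr, beta):
    # Generic heart-rate correction: QTc = QT / RR**beta, RR normalized to 1 s.
    return qt / rr ** beta

qt, rr = 0.400, 0.857           # illustrative values at ~70 bpm (seconds)
print("QTcF:", qtc(qt, rr, 1 / 3))
print("QTcB:", qtc(qt, rr, 1 / 2))
print("QTcS:", qtc(qt, rr, 0.250))

# beta itself can be estimated from off-drug ECGs by regressing
# log(QT) on log(RR): log(QT) = a + beta * log(RR).
```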
Subject Disposition and Baseline Characteristics
A total of 36 subjects were enrolled and randomized. All subjects were male; demographic and baseline characteristics are summarized in Supplementary Table S1. One subject was discontinued from the study, following the placebo and moxifloxacin dosing periods only, because of an AE of alanine transferase elevation.
Pharmacokinetics
Median glasdegib plasma concentration-time profiles following single oral doses of 150 and 300 mg are presented in Figure 2, demonstrating a median T max of 2 hours postdose with a range of 1 to 4 and 1 to 6 hours, respectively. The geometric mean AUC inf and C max increased proportionally with dose, and variability (%CV) ranged from 30% to 35% for AUC inf and from 27% to 29% for C max . Glasdegib PK parameters by dose are summarized descriptively in Table 1.
Electrocardiogram
ANCOVA (baseline as a covariate) statistical analysis of QTcF during moxifloxacin treatment compared with placebo demonstrated adequate sensitivity to assess the effect of glasdegib on the QTcF interval (Figure 3 and Table 2). None of the subjects met the categorical criteria of an absolute QTcF interval ≥480 milliseconds or an increase from baseline in QTcF interval ≥30 milliseconds after receiving any treatment. The upper bounds of the 2-sided 90%CIs (equivalent to 1-sided 95%CIs) for all time-matched least-squares mean differences in QTcF between glasdegib 300 mg (supratherapeutic plasma exposure) and placebo were less than the predefined cutoff of 20 milliseconds; the highest 90%CI upper bound for the largest placebo- and baseline-adjusted QTcF change was 15.61 milliseconds at 3 hours postdose (Table 2).
Similarly, for the glasdegib 150-mg treatment (therapeutic plasma exposure), the highest 90%CI upper bound for the largest placebo- and baseline-adjusted QTcF change was 10.22 milliseconds at 4 hours postdose (Table 2). Therefore, the absence of a large effect of glasdegib on the QTcF interval was demonstrated in this study at both therapeutic and supratherapeutic C max.
Safety
No deaths, SAEs, medication errors, or discontinuations from the study or study treatment because of AEs were reported during the study. There were 25 treatment-emergent AEs (TEAEs) with 20 treatment-related AEs (TRAEs) following administration of glasdegib 150 mg, 26 TEAEs with 20 TRAEs following administration of glasdegib 300 mg, 27 TEAEs with 23 TRAEs following administration of moxifloxacin 400 mg, and 32 TEAEs with 27 TRAEs reported for placebo. All the AEs reported were considered mild to moderate in severity. Alanine transaminase >3 × the upper limit of normal observed in 1 subject was reported as an AE; this subject did not receive glasdegib treatment.
Exposure-Response Analysis
Data from all 35 subjects who received glasdegib were included in the analysis. Singlet ECG readings as well as the average of the triplicates were included, resulting in 9 time-matched ECG-PK pairs per treatment arm at nominal times between 0 and 24 hours postdose in the analysis data set. No ECG readings or PK sample results were missing.
Plots and results from the LME model of RR change from baseline (ΔRR) versus concentration indicated a relationship of negligible magnitude between ΔRR and glasdegib concentration (Supplementary Figure S1), and it was concluded that fixed correction factors (ie, β) were adequate to eliminate the underlying RR effect on the QT intervals, regardless of the concentration of glasdegib. No PK/PD hysteresis, which would indicate a lag between plasma concentration and effect on QTc, was observed.
For determination of QTcS, the study population-specific QTc factor (β) and corresponding 95%CI were estimated to be 0.250 (0.220-0.281). On assessment of all correction factors, Fridericia's method did not completely remove the relationship between RR and QTc, with a slope estimate of −0.030 (95%CI, −0.042 to −0.019); see Supplementary Figure S2. QTcS correction fully eliminated the relationship between QT and RR, with a slope estimate of 0.001 (95%CI, −0.011 to 0.013). Although Fridericia's correction did not completely eliminate the QTc-RR relationship, the slope estimate for the correction was quite small, and given that QTcF has been demonstrated to be an adequate correction factor in other analyses, QTcF was used in the primary E-R analysis, with an additional analysis using QTcS provided. 19 The mean slope estimate describing the ΔQTcF-concentration relationship in this study was 0.005 ms per ng/mL, with the 95%CI around the slope estimate excluding 0, indicating a positive concentration-dependent effect of glasdegib plasma concentration on the length of QTcF (Table 3, Supplementary Table S2). The result for ΔQTcS was similar, with a slightly smaller slope. Because the study population was composed solely of healthy male subjects of a narrow age range, screening of intrinsic and extrinsic factors (covariates) was not performed; the final model was the same as the base model.
Diagnostic plots to assess model adequacy and goodness of fit did not show any evidence of model misspecification. Scatterplots of observed versus model-predicted (population and individual) dependent variable (ΔQTc) values did not demonstrate any evidence of over- or underprediction of ΔQTcF (Supplementary Figure S3); plots of standardized residuals did not display any systematic trends (ie, residuals were randomly scattered about 0; Supplementary Figure S4), and quantile-quantile plots (not shown) and boxplots (Supplementary Figure S5) of standardized residuals supported the assumption of a normally distributed residual error, which was independent of time and treatment.
Adequate predictive performance of the final models was demonstrated through VPCs; the percentiles calculated from the observed data were generally contained within the 95%CIs estimated from the simulated data at the 2.5th, 50th (median), and 97.5th percentiles (Figure 4A).
The model-derived predicted mean (90%CI) estimates for ΔΔQTcF using the final LME model at the average steady-state C max following once-daily dosing of glasdegib 100 mg (therapeutic) and once-daily dosing of 200 mg (supratherapeutic) are shown in Table 3 and Figure 4B. At the supratherapeutic dose, the mean predicted increase in QTcF was 13.72 milliseconds, with the upper 90%CI <20 milliseconds.
Discussion
This report of a TQT study performed with glasdegib in healthy volunteers demonstrates that although glasdegib had an effect on cardiac repolarization, it was below the threshold of clinical concern in the context of an oncology setting. This conclusion is based on the results of the primary statistical analysis in the TQT study, as the upper bound of the 2-sided 90%CIs for all time-matched least-squares mean differences in QTcF between glasdegib (therapeutic and supratherapeutic exposures) and placebo was <20 milliseconds. Although a positive effect on the QTcF interval was observed in the TQT study, this increase did not reach the prespecified threshold of clinical concern (20 milliseconds, FDA reviewed and accepted) generally accepted for oncology therapies, even at the supratherapeutic dose. The design of the TQT study included the advantages of double blinding and full randomization, statistical powering, robust PK/QT monitoring following dosing to capture the time course around the C max as well as at later times, and finally, exclusion of comorbidities and concomitant medications, which may act as confounders in the cancer treatment setting. Furthermore, the study was determined to be adequately sensitive to assess the effect of glasdegib on the QTcF interval by evaluating the effect of a positive control (moxifloxacin) relative to a time-matched placebo control at the historical T max for moxifloxacin. Categorical analyses showed that no subjects in the TQT study had absolute QTcF values ≥480 milliseconds postdose or a ≥30-millisecond increase from baseline in QTcF. These findings were in contrast to grade 3 QTcF changes observed in patients with AML and high-risk MDS receiving glasdegib + LDAC. 6 However, cases of QTcF prolongation were also reported in patients receiving LDAC alone, making conclusions about the effect of glasdegib on QTc difficult and confirming the impact of confounders in the patient setting. In contrast, the TQT study, conducted in a more controlled setting with exclusion of major confounders, adequate controls (placebo, moxifloxacin), and robust statistical rigor, facilitated the isolation of the true effects of glasdegib on cardiac repolarization.

Figure 4B. Plot of observed and model-predicted placebo-adjusted change from baseline in QTcF versus glasdegib concentration. The red line represents the mean model prediction, with the 90%CI in gray with blue outline. CI, confidence interval; C max, maximum observed concentration; QTcF, QT interval corrected for heart rate using Fridericia's formula; ΔΔQTcF, placebo-adjusted change from baseline in QTcF.

Furthermore, the E-R analysis to evaluate the relationship between QTc and glasdegib concentration was consistent with these above-mentioned results from the primary statistical analysis in the TQT study, supporting the conclusion that glasdegib treatment does not have a large effect on the QTcF at clinically relevant exposures or at those ∼2-fold higher.
This E-R analysis used a prespecified LME model that has recently been recommended for use in analyzing ECG-concentration data such as this. 11 The LME model allowed for the characterization of ΔQTc under both placebo and active (glasdegib) treatment conditions, accounting for diurnal variation and allowing for model-based prediction of the ΔΔQTc through simulating ΔQTc under both placebo and treatment conditions. Further, such a PK/PD model allows for predicting the change in QTc that may be expected at various drug concentrations of clinical interest (including those following different doses/dosing regimens not included in the TQT study, or in special populations or DDI scenarios, for example), along with associated CIs. Because of the nature of the healthy volunteer study, including the homogeneity of the subjects enrolled and the lack of impact of any tested covariates in the previous population PK/PD evaluation of glasdegib in cancer patients, intrinsic and extrinsic factors were not tested in this analysis of the QTc-concentration relationship. 19 A previous DDI evaluation with a strong CYP3A4/5 inhibitor, ketoconazole, demonstrated a 40% mean increase in the C max of glasdegib. 5 At a glasdegib steady-state C max representing such a 40% increase (1592 ng/mL), the upper bound of the 90%CI of the model-predicted ΔΔQTcF is just above 10 milliseconds (mean, 9.5 milliseconds; 90%CI, 8.41-10.71 milliseconds). The TQT study was also designed to evaluate a more extreme worst-case scenario. At double the clinical dose, which is greater than the anticipated exposure even under DDI conditions, the upper bound of the model-predicted 90%CI for ΔΔQTcF (and ΔΔQTcS) is <20 milliseconds. The supratherapeutic concentrations obtained in this study and the corresponding predicted ΔΔQTc were designed to provide additional exposure margins to account for potentially higher glasdegib exposures, which, theoretically, could be achieved in patients with organ impairment, based on the metabolic (CYP3A4/5) and renal excretion pathways involved in the elimination of glasdegib. Therefore, glasdegib treatment is not expected to lead to a ≥20-millisecond prolongation from baseline in QTcF under clinical conditions. As expected, QTcS was the most appropriate correction factor, with β estimated to be 0.250, close to the β of 0.333 used in Fridericia's correction. Modeling was performed using both QTcF and QTcS, because QTcF is considered the most clinically relevant factor and allows for QTc interval comparison across studies and compounds, as it does not depend on a particular population. The results from the ΔQTcF-concentration analysis were similar to those from the ΔQTcS-concentration analysis, and the overall conclusion from both models was the same: the upper bound of the 90%CI of the model-predicted ΔΔQTc at the worst-case-scenario supratherapeutic concentrations remained <20 milliseconds.
Conclusion
Absence of a large effect of glasdegib on the QTc interval was demonstrated at the therapeutic and supratherapeutic doses (the upper bounds of the 90%CIs were below 20 milliseconds for time-matched differences in baseline-corrected QTcF between treatment and placebo). Single oral doses of glasdegib at 150 and 300 mg were well tolerated, with an acceptable safety profile in healthy adult subjects.
From the E-R analysis using a prespecified LME model, glasdegib was determined not to have a meaningful effect on heart rate. At the mean therapeutic glasdegib C max value previously observed in patients, the mean predicted increase in QTcF was 7.34 milliseconds, with the upper bound of the 90%CI <10 milliseconds. At the mean supratherapeutic glasdegib C max value observed in patients (ie, twice the therapeutic dose), the mean predicted increase in QTcF was 13.72 milliseconds, with the upper bound of the 90%CI below the 20-millisecond threshold of clinical concern in oncology. Therefore, glasdegib treatment is not expected to lead to a large effect on the QTcF interval under clinical conditions.

Equation 1. The prespecified linear mixed-effects model for ΔQTc versus concentration:

$$\Delta QTc_{ijk} = (\theta_j + \beta_{0k} + \eta_{0,i}) + \gamma\,(QTc_{ij0} - \overline{QTc}_{0}) + (\beta_1 + \eta_{1,i})\,C_{ijk} + \varepsilon_{ijk},$$

where ΔQTc_ijk is the change from baseline in QTc for the ith subject in the jth treatment at the kth time relative to dosing; θ_j is the treatment-specific intercept (any glasdegib versus placebo); β_0k is the population mean ΔQTc with placebo at time k (a categorical fixed effect of time capturing diurnal variation); QTc_ij0 is the subject- and treatment-specific baseline QTc, \overline{QTc}_0 is the overall population mean of all baseline QTc values, and γ is the influence of the baseline QTc (centered on the population baseline); β_1 is the slope that quantifies the relationship between ΔQTc and concentration; C_ijk is the concentration (with C_i0k = 0 for placebo); η_0,i and η_1,i are the subject-specific random effects (interindividual variability) for the intercept and slope, respectively, each with a mean of 0 and variance ω²; and ε_ijk is the residual error, with a mean of 0 and variance σ².
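To make the use of Equation 1 concrete, the sketch below computes a model-derived ΔΔQTcF at a concentration of interest. Only the slope (~0.005 ms per ng/mL) is reported in the text; the treatment intercept and the C max value used here are hypothetical placeholders.

```python
# Time-matched placebo adjustment cancels the placebo terms (beta_0k,
# diurnal effects) in Equation 1; the remaining fixed-effect prediction is
# the treatment intercept plus slope * concentration.
theta_drug = 1.5       # hypothetical glasdegib intercept, ms (illustrative)
beta1 = 0.005          # slope, ms per ng/mL, from the E-R analysis

def dd_qtcf(conc_ng_ml):
    """Model-predicted placebo-adjusted change from baseline in QTcF (ms)."""
    return theta_drug + beta1 * conc_ng_ml

print(dd_qtcf(1137))   # at a hypothetical therapeutic Cmax (ng/mL)
```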
|
2020-08-14T13:01:29.455Z
|
2020-08-12T00:00:00.000
|
{
"year": 2020,
"sha1": "22a13fe0e255dda2071add92a81d2451843b05c9",
"oa_license": "CCBYNCND",
"oa_url": "https://accp1.onlinelibrary.wiley.com/doi/pdfdirect/10.1002/cpdd.862",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "1a02ca222766b902b6be2b200183330d503b6311",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
16336129
|
pes2o/s2orc
|
v3-fos-license
|
RECOVERING SINUSOIDS FROM NOISY DATA USING BAYESIAN INFERENCE WITH SIMULATED ANNEALING
- In this paper, we studied the Bayesian analysis proposed by Bretthorst [6] for a general signal model equation and combined it with a simulated annealing (SA) algorithm to obtain a global maximum of the posterior probability density function (PDF) for the frequencies. This analysis thus offers a different approach to finding parameter values, through a directed, but random, search of the parameter space. For this purpose, we developed a Mathematica code for this Bayesian approach together with SA and used it for recovering sinusoids from noisy data. Simulation results support its effectiveness.
INTRODUCTION
Let the vector θ contain the parameters to be estimated from the measurement vector D, which is the output of the physical system that one wants to model. In many experiments, the recorded data may be written as d(t_i) = f(t_i; θ) + e(t_i), where e(t_i) represents the noise, assumed to be drawn independently from a zero-mean Gaussian probability distribution with a standard deviation of σ. Different models correspond to different choices of the signal model function f(t; θ). According to Bretthorst [6], the most general signal model may be given in the following form:

$$f(t;\theta) = \sum_{j=1}^{m} B_j\, G_j(t;\{\omega\}), \tag{1}$$

where the B_j are amplitudes and the G_j are model functions depending on the frequencies {ω}. In this paper, we address the problem of estimation of sinusoids in white Gaussian noise within a Bayesian framework. This is of great interest in many fields of science, including seismology, nuclear magnetic resonance and radar. Under an assumption of a known number of sinusoids, several algorithms have already been used in the parameter estimation literature, such as least-squares fitting [9], the discrete Fourier transform [1], and the periodogram [2]. The discrete Fourier transform has been a very powerful tool in Bayesian spectral analysis since Cooley and Tukey introduced the fast Fourier transform (FFT) technique in 1965, followed by the rapid development of computers. After Jaynes [3] derived a periodogram directly from the principles of Bayesian inference, researchers in different branches of science have given much attention to the relationship between Bayesian inference and parameter estimation, and they have done excellent work in this area over the last fifteen years [4-7].
We consider here a Bayesian approach, proposed by Bretthorst, in which the posterior PDF for the frequencies {ω} has to be maximized. Unfortunately, conventional algorithms [9] based on a gradient direction fail to converge for this maximization problem. Even when they converge, there is no assurance that they have found a global, rather than a local, maximum. This is because the log of the posterior PDF is a sharply peaked and highly nonlinear function of {ω}. Bretthorst pointed this out and used the pattern search algorithm described by Hooke and Jeeves [10] to overcome this problem, but this approach does not converge unless the starting point is much closer to the optimum {ω}. Our contribution is therefore to develop a computer program using Mathematica [8] that combines this Bayesian approach with a very different optimization algorithm called SA [11]. It explores the entire surface of the posterior PDF for the frequencies and tries to optimize it while moving both uphill and downhill in order to escape from local maxima. Furthermore, it is largely independent of the starting values, often a critical input in conventional algorithms. Finally, we use it for estimating the parameters of sinusoids corrupted by random noise.
BAYESIAN PARAMETER ESTIMATION
Let us now reconsider the problem given in Equation (1) within the Bayesian framework. If θ = {B, ω, σ} is the set of parameters of interest, then their joint PDF given the observed data D is proportional to the product of the prior and the likelihood. For independent, zero-mean Gaussian noise the likelihood is

$$P(D \mid \theta) \propto \sigma^{-N} \exp\!\left(-\frac{Q}{2\sigma^{2}}\right),$$

where the exponent Q is defined as

$$Q = \sum_{i=1}^{N} \bigl[d(t_i) - f(t_i;\theta)\bigr]^{2}.$$

In order to obtain the PDF for {ω}, we integrate out the nuisance parameters {B} and the σ dependence. To do this analytically, it is necessary to make the matrix g_jk = Σ_i G_j(t_i) G_k(t_i) diagonal, effectively by introducing new orthogonal model functions

$$H_k(t) = \frac{1}{\sqrt{\lambda_k}} \sum_{j=1}^{m} u_{kj}\, G_j(t).$$

This diagonalization process gives a new expression for the signal model function, f(t) = Σ_k A_k H_k(t), where the new amplitudes A_k are related to the old amplitudes B_j by B_j = Σ_k A_k u_kj/√λ_k (j = 1, …, m), and where u_kj represents the jth component of the kth normalized eigenvector of g_jk, with λ_k as the corresponding eigenvalue. Substituting these expressions into Equation (5) and defining

$$h_k = \sum_{i=1}^{N} d(t_i)\, H_k(t_i)$$

to be the projection of the data onto the orthogonal model functions H_k(t), we can then proceed to perform m Gaussian integrations over the A_k. If σ is known, the joint PDF of the {ω} parameters, conditional on the data, is given by

$$P(\{\omega\} \mid D, \sigma) \propto \exp\!\left(\frac{m\,\overline{h^{2}}}{2\sigma^{2}}\right), \qquad \overline{h^{2}} \equiv \frac{1}{m}\sum_{k=1}^{m} h_k^{2}. \tag{11}$$

If σ is not known, by using the Jeffreys prior [15] and integrating Equation (6) over the parameter σ, we obtain

$$P(\{\omega\} \mid D) \propto \left[1 - \frac{m\,\overline{h^{2}}}{N\,\overline{d^{2}}}\right]^{\frac{m-N}{2}}, \qquad \overline{d^{2}} \equiv \frac{1}{N}\sum_{i=1}^{N} d(t_i)^{2}. \tag{12}$$

This has the form of the Student t-distribution. It is desirable to compute the variances associated with those parameters. To do this, we can expand the function h̄² in a Taylor series around the point {ω̂} that maximizes the PDF in Equation (11) or (12), such that

$$\overline{h^{2}}(\{\omega\}) \approx \overline{h^{2}}(\{\hat{\omega}\}) - \frac{1}{2}\sum_{j,k} b_{jk}\,\delta\omega_j\,\delta\omega_k.$$

For an arbitrary signal model the matrix b_jk cannot be calculated analytically; however, it can be evaluated numerically. This can be done by first changing to orthogonal variables and performing the Gaussian integrals. Let ν_j and u_kj represent the jth eigenvalue and eigenvectors of the matrix b_jk, respectively; the new orthogonal variables are obtained by rotating the δω_j with these eigenvectors. From these, we get the estimated frequencies in the form

$$(\omega_j)_{est} = \hat{\omega}_j \pm \hat{\sigma}\left(\sum_{k}\frac{u_{jk}^{2}}{\nu_k}\right)^{1/2}.$$

One can also show that Â_j = h_j. By using Equation (9) we get the estimated amplitudes in the form

$$\hat{B}_j = \sum_{k=1}^{m} \frac{h_k\, u_{kj}}{\sqrt{\lambda_k}},$$

with standard deviations obtained analogously from the eigenvalues λ_k.
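A minimal numerical sketch of this posterior evaluation, written here in Python for illustration rather than in the Mathematica code we used, is given below; it evaluates the Student-t form of the posterior for a trial set of frequencies, taking cosines and sines at each trial frequency as the model functions.

```python
import numpy as np

def log_posterior(freqs, t, d):
    """Log of the Student-t posterior for the frequencies, up to a constant."""
    # Model functions: one cosine and one sine column per trial frequency
    G = np.column_stack([f(w * t) for w in freqs for f in (np.cos, np.sin)])
    m, N = G.shape[1], len(d)
    g = G.T @ G                       # the matrix g_jk
    lam, U = np.linalg.eigh(g)        # eigenvalues lambda_k, eigenvectors u_kj
    H = (G @ U) / np.sqrt(lam)        # orthonormal model functions H_k
    h = H.T @ d                       # projections h_k of the data
    h2bar = (h @ h) / m               # mean-square projection
    d2bar = (d @ d) / N               # mean-square data value
    return 0.5 * (m - N) * np.log(1.0 - m * h2bar / (N * d2bar))

# Usage: single-frequency data, evaluated at the true frequency.
t = np.arange(512, dtype=float)
rng = np.random.default_rng(1)
d = 5 * np.cos(0.3 * t + 3) + rng.normal(0, 1, t.size)
print(log_posterior(np.array([0.3]), t, d))
```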
CONTINUOUS GLOBAL OPTIMIZATION ALGORITHM
Bayesian parameter estimation thus turns into a global optimization problem [12], namely the task of finding the best possible solution to the maximization of the posterior PDF in Equations (11) and (12):

$$\hat{\omega} = \arg\max_{\{\omega\}} P(\{\omega\} \mid D).$$

Although there are numerous other approaches [13,16] to solving this maximization problem, the SA algorithm suggested by Corana et al. [11] and modified by Goffe et al. [12] is chosen here because it is generally applicable and easy to implement. The algorithm begins with an initial guess ω₀ and an initial temperature T₀. Each step of the algorithm replaces the current frequency by a random nearby frequency; in other words, the next candidate point ω_k is generated by varying one component of the current iterate ω_{k−1} at a time. The step size is calculated from the square root of the Cramer-Rao lower bound (CRLB) for ω [13], which is a lower limit to the variance of the estimate of ω_j, so this generates a natural scale for the search space around the estimated value. It is expected that better solutions lie close to solutions that are already good, and so normally distributed step sizes are used. A candidate point that increases the posterior PDF is always accepted; otherwise, it is accepted or rejected according to the Metropolis criterion [14], ie, the move is accepted if p < exp(Δ/T), where Δ is the change in the objective (negative for a downhill move), T is the current temperature, and p is a uniformly distributed random number from (0, 1). This continues until all components have been altered, with the new points successively accepted or rejected according to the Metropolis criterion. After this process is repeated n_s times, the whole cycle is then repeated n_T times, after which the temperature is decreased by a constant factor r_T (the annealing, or cooling, schedule). Termination of the algorithm occurs when the average function value of the sequences of points after each n_s · n_T step cycle reaches a stable state, ie, when the values of the PDF at the best points from the last four successive cycles differ by less than ε, where ε is a small positive number defined by the user.
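The core of this SA loop can be sketched as follows (Python, for illustration only); the control parameters mirror n_s, n_T and the cooling factor r_T described above, and the per-component step sizes would in practice be set from the square root of the CRLB.

```python
import numpy as np

rng = np.random.default_rng(0)

def anneal(objective, w0, step, t0=100.0, r_t=0.85, n_s=20, n_t=5, t_min=1e-6):
    """Maximize objective(w) by simulated annealing with geometric cooling."""
    w, f = w0.copy(), objective(w0)
    best_w, best_f = w.copy(), f
    temp = t0
    while temp > t_min:
        for _ in range(n_t * n_s):
            for j in range(len(w)):            # vary one component at a time
                cand = w.copy()
                cand[j] += step[j] * rng.normal()
                f_cand = objective(cand)
                # Metropolis criterion: accept uphill moves always, downhill
                # moves with probability exp(delta / temp)
                if f_cand > f or rng.random() < np.exp((f_cand - f) / temp):
                    w, f = cand, f_cand
                    if f > best_f:
                        best_w, best_f = w.copy(), f
        temp *= r_t                             # cooling schedule
    return best_w, best_f
```

In practice the `objective` argument would be the log posterior for the frequencies (as in the sketch in the previous section), with `w0` taken from the peaks of the Fourier power spectral density.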
COMPUTER SIMULATED EXAMPLES
To verify the performance of the proposed algorithm, we generated a data vector from multiple-frequency sinusoids, with t_i running over a symmetric time interval. We carried out the Bayesian analysis of the simulated data, assuming that we know the mathematical form of the signal model but not the values of the parameters. As an initial estimate of the frequencies ω₀ for the maximization procedure, it is possible to take random values from the interval (0, 2). However, it is better to start with the locations of the peaks chosen automatically from the Fourier power spectral density graph by using a computer code written in Mathematica. After reasonable values of the parameters that control the simulated annealing routine are chosen (n_s = 20), the algorithm starts at some high temperature T₀ = 100 and generates a sequence of points until a sort of equilibrium is approached; that is, the sequence of points reaches a stable value as the iteration proceeds. During this phase the step size is naturally adjusted. The best point ω* reached so far is recorded. After thermal equilibration, T is reduced by a factor r_T = 0.85 and a new sequence is made starting from this best point ω*, until thermal equilibrium is reached again, and so on. The algorithm therefore proceeds toward better maxima even in the presence of many local maxima. Consequently, the process is stopped at a temperature low enough that no more useful improvement can be expected, according to the stopping criterion in Equation (20). Once the frequencies are estimated, we proceed to calculate the estimated amplitudes together with their errors. However, an evaluation of the posterior PDF at a given point ω cannot be made analytically and requires a numerical calculation of the projections onto the orthonormal model functions, related to the eigen-decomposition of g_jk(ω). Its evaluation is therefore expected to consume more time as the number of frequencies in ω increases.
The computer program we developed was run on a workstation for two cases, in which the standard deviation σ of the noise is either known or unknown. The computer simulations illustrated in Table 1 show the results when σ is known, but they are very similar to those obtained when σ is unknown. Estimated parameter values, quoted as (value) ± (standard deviation), indicate that all parameter values are clearly recovered within the calculated accuracy. The estimated value of the signal-to-noise ratio (SNR) and the standard deviation of the noise are also shown in Table 1. Figure 1 shows the power of the method for recovering the signal from the noisy data. More generally, we consider a multiple harmonic frequency signal model, which is also used by Bretthorst [6]:

$$d(t_i) = \cos(0.1\,t_i + 1) + 2\cos(0.15\,t_i + 2) + 5\cos(0.3\,t_i + 3) + 2\cos(0.31\,t_i + 4) + 3\cos(\omega_5\,t_i + 5) + e(t_i), \quad i = 1, \ldots, 512.$$

The best estimates of the parameters are tabulated in Table 2. Once again, all the frequencies have been well resolved, even the third and fourth frequencies, which are too close to be separated by the Fourier power spectral density shown in Figure 3.
Indeed, with the Fourier spectral density, when the separation of two frequencies is less than the Nyquist step, defined as 2π/N, the two frequencies are indistinguishable. In this example these two frequencies are separated by 0.01, which is less than the Nyquist step size. Therefore, there is no way to resolve these close frequencies using the Fourier power spectral density; however, the Bayesian power spectral density shown in Figure 3 gives very good results with high accuracy. The results we obtained are also consistent with those of Bretthorst [6].
Moreover, we initially assumed that the noise in the data was drawn from a Gaussian density with mean 0 and standard deviation σ.
Figure 2 shows the exact and estimated PDF of the random noise in the data. It can be seen that the estimated (dotted) PDF is close to the true (solid) PDF, and the histogram of the data is also very close to the true PDF. The histogram is known as a nonparametric estimator of the PDF because it does not depend on specified parameters. We generated 64 data samples from a single-frequency signal model and added a variety of noise levels. After 50 independent trials, the mean square errors (MSE) were calculated and their logarithmic values were plotted against SNR, which varies between 0 and 20 dB. It can be seen from Figure 4 that the proposed estimator has a threshold at about 3 dB of SNR and follows the CRLB nicely above this value. As expected, a larger SNR gives a smaller MSE.
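The bound used in this comparison can be computed directly; the sketch below uses the standard approximation for the frequency of a single real sinusoid in white Gaussian noise, var(ω̂) ≥ 12/(SNR · N(N² − 1)) with SNR = A²/(2σ²), which is assumed here in place of our exact CRLB expression.

```python
import numpy as np

def crlb_omega(A, sigma, N):
    """Approximate CRLB on the frequency (rad/sample)^2 of a real sinusoid."""
    snr = A**2 / (2 * sigma**2)
    return 12.0 / (snr * N * (N**2 - 1))

N, A = 64, 1.0
for snr_db in (0, 5, 10, 15, 20):
    sigma = A / np.sqrt(2 * 10 ** (snr_db / 10))
    print(f"{snr_db:2d} dB  log10 CRLB: {np.log10(crlb_omega(A, sigma, N)):.2f}")
```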
CONCLUSION
In this work we have partially developed a Bayesian approach with simulated annealing and applied it to spectral analysis and parameter estimation problems. Overall, the results show that it provides a rational approach for estimating, in an optimal way, the values of the parameters of sinusoids corrupted by random noise. Both frequencies and amplitudes can be recovered from the experimental data and the prior information with high accuracy, especially the frequencies, which are the most important parameters in spectral analysis. Although it requires a large consumption of CPU time, it is competitive when compared to the multiple runs often used with conventional algorithms to test different starting values. For a sufficiently high SNR the MSE attains the Cramer-Rao lower bound, which justifies the accuracy of the frequency estimation.

1 In order to compare the results with Bretthorst's in this example, the results were converted accordingly. The computations were performed in Mathematica, a powerful system for doing mathematics on the computer that has grown to become an unparalleled platform for all forms of computation.

This work was part of the project "Bayesian Spectrum Estimation of Harmonic Signals" (number FEN-BGS-060907-0191), supported by the University of Marmara, Istanbul, Turkey.
1 The general signal model in Equation (1) assumes a signal model equation of the form f(t; θ) = Σ_{j=1}^{m} B_j G_j(t; {ω}).

2 Here {ω} and {B} label collectively the frequencies and the amplitudes, and B_j represents the amplitude corresponding to the jth model function G_j(t; {ω}). The goals of data analysis are usually to use the observed data D to infer the values of the parameters {B_1, …, B_m; ω_1, …, ω_r}.
Figure 1. Recovering signals from noisy data produced from the two close harmonic frequency signal model.
Figure 2. Comparison of the exact and estimated PDF of the noise in the data.¹
Table 1. Computer simulations for the two close harmonic frequency signal model.
Table 2. Computer simulations for the multiple harmonic frequency signal model.¹
|
2016-03-22T00:56:01.885Z
|
2011-08-01T00:00:00.000
|
{
"year": 2011,
"sha1": "e18f65349f607950bf6f56815fd2c7f88a9c8106",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2297-8747/16/2/382/pdf?version=1458053084",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e18f65349f607950bf6f56815fd2c7f88a9c8106",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
104335646
|
pes2o/s2orc
|
v3-fos-license
|
Barrel-to-barrel variation of phenolic and mineral composition of red wine
The aim of the present work was to evaluate the effect of barrel-to-barrel variability on the chemical characteristics of red wine. An experimental trial was carried out involving two red wines from the Portuguese DO Dão and independent replicates of French oak barrels (Quercus sessiliflora Salisb.) from three different cooperages. After six months of aging, comprehensive chemical characterization of the wines took place: general physical-chemical analysis by FTIR, phenolic composition and chromatic characteristics, major mineral elements (K, Mg, Ca, Na, and Fe) by flame atomic absorption spectrometry (FAAS), minor and trace elements (Li, Be, Ti, Mn, Co, Ni, Cu, Zn, Ga, As, Rb, Sr, Y, Zr, Nb, Mo, Cd, Sn, Sb, Cs, Ba, Ce, Pr, Nd, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb, Lu, W, Tl, Pb, and U) by inductively coupled plasma mass spectrometry (ICP-MS). With respect to the barrel effect, significant differences between replicates were observed for phenolic composition, especially polymerized pigments, flavonoids and color intensity. Regarding major, minor and trace elements, no significant differences were observed between barrel replicates with the exception of iron and
Introduction
Wine aging in oak barrels has been used worldwide for over 2000 years [1], at first for storage purposes, more recently to enhance the stability and sensory characteristics of wine.
This enological practice is still widely used nowadays for red wines, though it is time consuming and expensive. Because of the physical and chemical characteristics of the wood, many reactions and transfers to the wine take place in a barrel over time [2].
The transfer of compounds from the wood to the wine and the oxygen permeation through the barrel have a strong effect on the phenolic composition and sensory characteristics of the final product. Many studies have shown that those phenomena are highly influenced by several parameters [3]: the wood species [4] and geographic origin [5], the barrel volume [6], the barrel processing [7], especially the drying temperature [8] and time of toasting [9], the barrel age [10] and the time of aging [11]. According to winemakers' empirical knowledge, a tangible variability also exists between the characteristics of wines resulting from aging in similar barrels coming from the same cooperage. In spite of this, only scarce information is available on barrel-to-barrel variation and its effect on wine characteristics. Doussot et al. [12] studied the interindividual variability of six oak extractable compounds by measuring them in a large number of trees from two botanical species and from six different forests. Towey et al. [13] quantified seven volatile oak extractives in barrel-fermented Chardonnay wines. Wines from four different barrel types, using samples from ten similar barrels for each type, were analyzed for this study, and a variance of individual compounds ranging from 15% to 40% was reported.
It is known that the mineral composition of wine depends on several parameters such as vineyard soil, grape variety and rootstock, environmental conditions, viticultural technology and enological practices [14]. In a previous study by the authors, the evolution of the elemental composition of red wine aged with oak staves was investigated [15]. However, concerning wine aging in barrels, research is usually focused on the compounds more directly linked to the organoleptic properties of wines, mostly phenols and volatile compounds.
The present study aims to evaluate the effect of barrel-to-barrel variability through a comprehensive analytical characterization of red wines aged in barrels: general physical-chemical characterization, extended phenolic composition, and mineral profile were investigated.
Wines and oak wood barrels
Two red wines of Touriga Nacional grape variety (Vitis vinifera L.), 2016 vintage, from the Portuguese DO Dão were produced at industrial scale.
The initial physical-chemical characteristics of the wines after malolactic fermentation were as follows: alcoholic strength, 13.1% vol; total dry matter, 27.7 g/l; total acidity, 5.54 g/l (expressed in tartaric acid); volatile acidity, 0.72 g/l (expressed in acetic acid); total sulfur dioxide, 44 mg/l; pH 3.65; total phenol index, 60.6 u.a. for Wine A; and alcoholic strength, 13.2% vol; dry extract, 28.8 g/l; total acidity, 4.33 g/l (expressed in tartaric acid); volatile acidity, 0.54 g/l (expressed in acetic acid); total sulfur dioxide, 52 mg/l; pH 3.66; total phenol index, 71.2 u.a. for Wine B.
The wines were aged in 225 l new barrels of French oak, botanical species Quercus sessiliflora Salisb. Barrels from three different cooperages (X, Y and Z) and with two different toast levels, medium toasting (M) and medium plus toasting (M+), were used in the experiment.
Experimental design
Six months after the vinification, the wines were put in barrels. Wine A was aged in medium toast and medium plus toast barrels from cooperage X (codified as A-XM and A-XM+, respectively), while Wine B was aged in medium toast and medium plus toast barrels from cooperage Y (codified as B-YM and B-YM+, respectively) and in medium toast barrels from cooperage Z (codified as B-ZM). Two independent barrel replicates (referred to as 1 and 2, respectively, after the barrel type code) were available for three of the five barrel types, as shown in Fig. 1. The wines were sampled after 4 months and 6 months of aging in the casks for general characterization, phenolic profile and mineral composition analyses.
Wine general physical and chemical characterization
The following parameters were determined by means of Fourier transform infrared (FTIR) spectrometry: density at 20 °C, alcoholic strength at 20 °C, total dry matter, reducing substances, total acidity, volatile acidity, total sulphur dioxide, pH, ash, glycerol, sulphates and chlorides [16,17]. The analyses were carried out in duplicate.
Color intensity and Tonality
The chromatic characteristics of the wines were assessed following the spectrophotometric method described by the OIV [18]. The color intensity is given by the sum of the optical densities measured for a 1 cm optical path at wavelengths of 420, 520 and 620 nm. The tonality is expressed as the ratio of the absorbance at 420 nm to the absorbance at 520 nm.
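These two indices are direct sums and ratios of absorbances, as the short sketch below shows; the absorbance values used are illustrative, not measurements from this study.

```python
def colour_intensity(a420, a520, a620):
    # Sum of optical densities at 420, 520 and 620 nm (1 cm path)
    return a420 + a520 + a620

def tonality(a420, a520):
    # Ratio of absorbance at 420 nm to absorbance at 520 nm
    return a420 / a520

# Example with plausible absorbances for a young red wine:
print(colour_intensity(0.45, 0.62, 0.12), tonality(0.45, 0.62))
```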
Total anthocyanins, ionization index, colored anthocyanins, total pigments, polymerization index, polymerized pigments
Anthocyanins equilibrium and pigments content were determined according to the spectrophotometric method established by Somers and Evans [19].
Total phenols
Total phenols were quantified following the OIV method for the Folin-Ciocalteu Index [18].
Flavonoid phenols, non-flavonoid phenols

Non-flavonoid phenols were assessed according to the method developed by Kramling and Singleton [20]. The flavonoid phenol content is obtained by calculation, subtracting the non-flavonoid concentration from the total phenolic concentration obtained following the OIV official method (Folin-Ciocalteu Index).
Tanning power
The tanning power gives information on the astringency of the wine. It was measured as described by De Freitas and Mateus [21].
Monomeric flavanols and proanthocyanidins fractions
The monomeric flavanols, oligomeric and polymeric proanthocyanidins were determined by applying an analytical procedure described by Sun et al. [22,23]. The sum of the three fractions obtained by this method corresponds to the total condensed tannins (proanthocyanidins).
All measurements for color, pigments and phenolic composition were performed on centrifuged wines and in triplicate.
Major element analysis
Concentrations of K, Mg, Ca, Na and Fe were determined by flame atomic absorption spectroscopy (FAAS) according to the OIV method [18].
Statistical analysis
The statistical treatment was performed in order to study the barrel-to-barrel variability. The wines aged in the cooperage X M+ barrels, the cooperage Y M+ barrels and the cooperage Z M barrels were treated separately. For each barrel type, two-factor factorial ANOVAs (individual barrel and time) and Fisher LSD tests were performed. The results for each modality (i.e. an individual barrel at a given time) were based on the average values of the analytical replicates. The factorial ANOVAs and the Fisher LSD tests were performed using the Statistica software program (StatSoft, Inc.). The chosen significance levels (P) were 0.05 and 0.01.
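The authors ran this treatment in Statistica; purely as an illustrative sketch, the same two-factor design (individual barrel × time) could be analyzed in Python as follows, with invented column names and data:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Illustrative long-format data: one row per analytical replicate
df = pd.DataFrame({
    "barrel": ["1", "1", "2", "2"] * 2,
    "time":   ["4m"] * 4 + ["6m"] * 4,
    "total_phenols": [60.1, 60.4, 62.3, 62.0, 63.5, 63.8, 66.1, 65.7],
})

# Two-factor factorial ANOVA: individual barrel, aging time, and their interaction
model = ols("total_phenols ~ C(barrel) * C(time)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # compare p-values against 0.05 and 0.01

# Fisher LSD comparisons would follow as pairwise t-tests using the pooled
# residual error from this model (not shown here).
```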
Results and discussion
The results presented in this article concern the effect of barrel-to-barrel variability after 6 months of wine aging in the oak barrels. Additional results for cooperage and toasting level effects and including the 4 months sampling are being prepared for future publication.
For the discussion, the two barrel replicates of each of the cooperage X M+, cooperage Y M+ and cooperage Z M barrel types were considered as independent. The variability in the characteristics of oak-aged wines is explained both by the low reproducibility of cooperage practices and, more directly, by the inherent variability of the oak trees themselves.

Table 1. Physical-chemical characteristics of red wines after 6 months of aging in oak barrels, according to barrel type (cooperage and toasting level) and barrel replicate. A-XM+(1) and A-XM+(2): wine A aged in cooperage X medium plus toasting barrels, replicates 1 and 2; B-YM+(1) and B-YM+(2): wine B aged in cooperage Y medium plus toasting barrels, replicates 1 and 2; B-ZM(1) and B-ZM(2): wine B aged in cooperage Z medium toasting barrels, replicates 1 and 2. Values are the means of two analytical replicates with the corresponding standard deviation in brackets. ns: not significant effect. Means followed by the same letter are not significantly different at 0.05* or 0.01**. Statistical treatment was performed independently for the 3 barrel types.

Table 2. Concentration of major mineral elements (mg/l) of red wines after 6 months of aging in oak barrels, according to barrel type (cooperage and toasting level) and barrel replicate; barrel codes as in Table 1. ns: not significant effect; **: significant effect (p < 0.01). Values expressed in mg/l are the means of three analytical replicates with the corresponding standard deviation in brackets. Statistical treatment was performed independently for the three barrel types.
General physical-chemical analysis
The physical-chemical characteristics of the wines after 6 months of aging in the oak barrels are listed in Table 1. Regarding total SO2, the concentrations are higher than the initial ones, which is explained by its addition during this period for wine preservation. The values are in agreement with the usual values found in red wine for each parameter [2]. As seen in Table 1, no significant difference between barrels was observed for most of the parameters. Regarding volatile acidity, differences were found between the independent replicates of both the cooperage X M+ and the cooperage Z M barrel types. Nevertheless, despite being statistically significant, the observed variations are not relevant from a technological point of view. Statistical significance was also found for sulfates in the cooperage Y M+ barrels; however, again, this variation is not relevant from a technological point of view.
Color, pigments and phenolic composition
The results indicate a clear overall variation from one barrel to another in the wine parameters for color, pigments and phenolic composition. However, as seen in Fig. 2, the significantly affected parameters are not the same for the three studied barrel types. Color intensity, tonality, total anthocyanins, ionized anthocyanins, total pigments and polymerized pigments were significant for only one barrel type, and total phenols for two barrel types, cooperage X M+ and cooperage Z M. The flavonoid content of the wines presents a significant difference between similar barrels (P < 0.01) for all three barrel types. For total tannins, no significant difference was observed for any of the wines, even though a significant barrel effect was found for the monomeric flavanol fraction in the cooperage X M+ barrels (data not shown).
With respect to chromatic characteristics and phenolic composition, the barrel-to-barrel variability of the wine can be readily explained by the variability of the barrels themselves. Many parameters of the phenolic composition of wines aged in oak casks result, along with the oxygenation phenomena, from the interaction between the phenolic compounds of the wine and those of the wood that are extracted into the wine with time. In fact, Towey et al. [13] and Doussot et al. [12] have already reported important variation in oak extractives between individual barrels of the same wood species, from the same cooperage and undergoing the same technological processes.
Mineral elements composition
The study of trace and minor mineral elements is particularly interesting with regard to legal limits and wine safety, but also to wine authenticity, as these elements play an important role as a discriminative tool for wines. For most of the trace and minor elements, no statistically significant barrel-to-barrel variation was found (results not shown). However, a significant difference (P < 0.01) in Cu content was found between the wines aged in similar barrels for the three barrel types (i.e. cooperage X M+, cooperage Y M+ and cooperage Z M). This variation could be explained by a different rate of precipitation as copper sulfide, which depends on the availability of sulfide anion originating from sulfur dioxide and on the oxidation-reduction potential of the wine [15].
Major mineral elements are the most relevant from a technological point of view because of their effect on physical-chemical stability. Moreover, Na, Mg, K and Ca, but also Fe, are the main components of wood ash [25]. Bearing in mind that oak barrels undergo several heat treatments during their production, a transfer of those metals from the wood to the wine during aging can be expected. The concentrations of Na, K, Ca, Mg and Fe, measured by flame atomic absorption spectrometry, are presented in Table 2. For all the wines, the concentrations of major elements fall within the usual range of variation for wine [26]. The results for Na, Mg, K and Ca indicate that the concentrations of those elements do not differ significantly between similar barrels. A significant effect of the individual barrel is observed on Fe content in the wines aged in cooperage Y M+ barrels. Again, this difference does not seem relevant from a technological point of view.
Conclusions
This study contributes to the understanding of barrel-to-barrel variability in wine, whose management is a challenge in the industry. It also addresses the scarcity of information available on the effect of wood aging on the multi-elemental composition of wine.
Meconium‐stained amniotic fluid and neonatal morbidity in nulliparous patients with prolonged pregnancy
Abstract Introduction Our objective was to study the strength of the association between meconium‐stained amniotic fluid and severe morbidity among neonates of nulliparas with prolonged pregnancies. Material and methods This was a secondary analysis of the NOCETER randomized trial that took place between 2009 and 2012 in which 11 French maternity units included 1373 nulliparas at 41+0 weeks of gestation onwards with a single live fetus in cephalic presentation. This analysis excluded patients with a cesarean delivery before labor and those with bloody amniotic fluid or of unreported consistency. The principal end point was a composite criterion of severe neonatal morbidity (neonatal death, 5‐minute Apgar <7, convulsions in the first 24 h, meconium aspiration syndrome, mechanical ventilation ≥24 h, or neonatal intensive care unit admission for 5 days or more). The neonatal outcomes of pregnancies with thin or thick meconium‐stained amniotic fluid were compared with those with normal amniotic fluid. The association between the consistency of the amniotic fluid and neonatal morbidity was tested by univariate and then multivariate analysis adjusted for gestational age at birth, duration of labor, and country of birth. Results This study included 1274 patients: 803 (63%) in the group with normal amniotic fluid, 196 (15.4%) in the thin amniotic fluid group, and 275 (21.6%) in the thick amniotic fluid group. The neonates of patients with thick amniotic fluid had higher rates of neonatal morbidity than those of patients with normal amniotic fluid (7.3% vs. 2.2%; p < 0.001; adjusted relative risk [aRR] 3.3, 95% confidence interval [CI] 1.7–6.3), but those of patients with thin amniotic fluid did not (3.1% vs. 2.2%; p = 0.50; aRR 1.0, 95% CI, 0.4–2.7). Conclusions Among nulliparas at 41+0 weeks onwards, only thick meconium‐stained amniotic fluid is associated with a higher rate of severe neonatal morbidity.
| INTRODUCTION
Meconium-stained amniotic fluid (AF) is associated with 8%-20% of births. 1-4 Its pathophysiology remains poorly understood. Indeed, meconium-stained AF could be considered more a symptom than a cause of neonatal morbidity, except, obviously, in the case of meconium aspiration syndrome. 5 The strength of the association between meconium-stained AF and neonatal morbidity, particularly meconium aspiration syndrome, remains poorly characterized because of important limitations in the published studies. Most were retrospective, 6,7 with small sample sizes, 8 conducted in heterogeneous populations, without taking potential confounding factors into account, and with questionable definitions of neonatal morbidity. Most importantly, they included pregnancies at all gestational ages, although the main risk factor is prolongation of pregnancy. Furthermore, most studies fail to distinguish between thin and thick meconium-stained AF, 9 even though its thick or lumpy character has been recognized as a risk factor for neonatal morbidity. 5 Finally, recent modifications of clinical obstetrical and pediatric practices 10 could change the strength of this association.
More precise knowledge of the strength of this association could be important for pediatricians, who need to be particularly attentive to births with meconium-stained AF.
The objective of this analysis was to study the strength of the association between meconium-stained AF and severe neonatal morbidity in a homogeneous population of nulliparous patients with prolonged pregnancy, using data from a multicenter prospective randomized trial (NOCETER). 11 NOCETER was conducted to assess the efficacy of cervical ripening by nitric oxide donors for prolonged pregnancy, but it provides a unique opportunity to study the associated neonatal complications in a population at high risk of meconium-stained AF. Indeed, NOCETER makes it possible to study a large number of meconium-stained AF cases due to the prolonged pregnancies, with quality data from the randomized design, and hence to overcome the limitations of earlier studies in the context of contemporary care.
| MATERIAL AND METHODS
This is a secondary analysis of the multicenter NOCETER (NO donors for reduction of CEsareans at TERm) 11 randomized placebocontrolled trial conducted between 2009 and 2012 in 11 French maternity units and including 1373 nulliparas with pregnancies at 41 +0 weeks of gestation onwards with a live singleton fetus in cephalic presentation, a Bishop score less than 6, and intact membranes.
The NOCETER trial aimed to assess the efficacy of isosorbide mononitrate in diminishing the cesarean rate by inducing cervical ripening in nulliparous patients with prolonged pregnancy. The two study groups had identical characteristics at randomization. The principal end point of the main study was the cesarean rate, which did not differ significantly between the isosorbide mononitrate group and the placebo group.
Because severe neonatal morbidity rates were identical in the two randomization groups, we were able to include all patients regardless of group.
To define the exposure, information on AF color and consistency at birth was collected in the original NOCETER study database through a direct multiple-choice question (clear, thin, thick, other) recorded by the midwives present at delivery. The question was intentionally simple and clear for the clinical practitioners and midwives, who distinguished thin from thick AF according to its color and consistency, which reflect the concentration of meconium particles in the fluid. Patients with a cesarean delivery before labor and those with AF that was bloody or of unreported consistency were excluded from this secondary analysis.
Three groups of patients were compared: the patients with thin or thick meconium-stained AF and those with normal AF (reference group). The maternal and neonatal characteristics, as well as the data about labor and delivery, were collected from a computerized questionnaire completed by trained research nurses in each maternity ward. Gestational age was calculated from the date of the last menstrual period and confirmed by the crown-rump length at the systematic first-trimester ultrasound examination.
The duration of labor was defined as the time elapsed between the beginning of labor and delivery and therefore reflects the total duration of labor.
The principal end point of this study was a composite criterion of severe neonatal morbidity, defined by the occurrence of at least one of the following events: neonatal death, a 5-min Apgar score less than 7, convulsions in the first 24 h of life, diagnosis of meconium aspiration syndrome, mechanical ventilation for ≥24 h, and neonatal intensive care unit (NICU) hospitalization for 5 days or more.
Meconium aspiration syndrome is suspected when a neonate has respiratory distress with meconium-stained AF, and its diagnosis is confirmed by chest radiography showing variable areas of atelectasis and diaphragmatic flattening. The initial radiographic signs may be confused with those of transient neonatal tachypnea; however, infants with meconium aspiration syndrome often present significant hypoxemia associated with concomitant persistent pulmonary hypertension. Meconium aspiration syndrome accounts for 10% of causes of neonatal respiratory distress, with a mortality of 4%-5%. Approximately 40% of these neonates will require mechanical ventilation and 10% will require non-invasive ventilation. 12,13

Less severe indicators of neonatal morbidity were studied as secondary outcome criteria, including, but not limited to, pH at birth <7.10 and transfer to the NICU or to neonatal medicine. The characteristics of the mother, labor, and delivery, as well as the maternal and neonatal outcomes of the three groups, were compared by univariate analysis. The association between the consistency of the AF and severe neonatal morbidity was tested by univariate and then multivariate analysis with adjustment for the confounding factors identified from both the univariate analysis and the literature. Three variables were retained as confounding factors after construction of a directed acyclic graph (Figure S1), as recommended: 14 gestational age at birth, duration of labor, and country of birth. In a second model, oxytocin during labor was added as a fourth adjustment variable.

KEYWORDS: nulliparas, prolonged pregnancy, severe neonatal morbidity, thick meconium, thin meconium

Key message: Among nulliparas at 41+0 weeks onwards, only thick meconium-stained amniotic fluid is associated with a higher rate of severe neonatal morbidity.
Data are presented as follows: continuous variables as mean ± standard deviation and categorical variables as n (%). Continuous parameters were compared using Student's t test and categorical variables by the chi-squared test with Yates' correction or Fisher's exact test, as appropriate.
Statistical significance was defined as p < 0.05.
We used STATA 13 software for all analyses.
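For illustration only, the core group comparison can be sketched in Python with scipy (the counts below are invented to be consistent with the reported 7.3% vs. 2.2% rates; the authors' actual analysis, including the adjusted relative risks, was run in STATA):

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# 2x2 table: rows = thick AF vs. normal AF, columns = morbidity yes / no
table = np.array([[20, 255],    # thick AF:  20/275  ~ 7.3%
                  [18, 785]])   # normal AF: 18/803  ~ 2.2%

chi2, p, dof, expected = chi2_contingency(table, correction=True)  # Yates' correction
_, p_fisher = fisher_exact(table)

# Unadjusted relative risk; the adjusted RR comes from a multivariable model
risk_thick = table[0, 0] / table[0].sum()
risk_normal = table[1, 0] / table[1].sum()
print(f"chi2 p={p:.4f}, Fisher p={p_fisher:.4f}, RR={risk_thick / risk_normal:.2f}")
```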
| Ethics statement
The Patient Protection Committee approved the NOCETER study (number P071212) on April 21, 2009, and all participants provided written informed consent.

| RESULTS

[Figure 1: Flow chart of the study population.]

In comparison with the normal AF group, the thin and thick meconium-stained AF groups had more fetal heart rate abnormalities during labor, more patients receiving oxytocin, and more fetal tachycardia (Table 2). In the thick meconium-stained AF group, there were also more patients with hyperthermia during labor. The mode of labor onset, the method of induction, the type of analgesia, and the duration of labor did not differ significantly between the three groups. Mode of delivery differed between the groups; both operative vaginal deliveries and cesareans were more frequent in both the thin and thick meconium-stained groups than in the normal AF group, particularly cesarean sections for fetal heart rate abnormalities.
Severe neonatal morbidity did not differ significantly between the thin meconium-stained AF group and the normal AF group: 3.1% (6/196) vs. 2.2%, respectively (p = 0.50). In contrast, it was significantly more frequent in the thick meconium-stained AF group than in the normal AF group (7.3% vs. 2.2%; p < 0.001).
| DISCUSSION
This study shows that in nulliparous patients with a prolonged pregnancy, the rate of severe neonatal morbidity in patients with thick meconium-stained AF was significantly higher than in those with normal AF. In contrast, we did not observe a significant difference in neonatal morbidity between patients with thin meconium-stained AF and those with normal AF.
The rate of meconium-stained AF in our study is near the upper limit of the prevalence described in the literature, where it is estimated at 8%-20%, probably related to our selection of patients at a gestational age of 41+0 weeks onwards and to the fact that both thin and thick meconium-stained AF were included. [Table 1: Mothers' characteristics according to amniotic fluid consistency.] Our results are consistent with earlier studies regarding the significant increase in the neonatal morbidity criteria considered separately: rates of neonatal acidosis, 5-min Apgar score <7, and NICU admissions. 17-20 The previous studies had discordant results, probably because they were performed decades ago with obstetrical protocols that are no longer in use and were underpowered because of their small sample sizes. 6 These differences may also result from the great heterogeneity of maternal comorbidities, fetal position, and gestational ages at birth, with the inclusion of all deliveries after 24 weeks. 9 This makes the results difficult to interpret because of the potential measurement bias associated with preterm birth or breech presentation.
Our analysis has several strengths. Because our data came from a multicenter randomized controlled trial in which every case report form was checked twice, rates of missing data for our exposure, outcomes, and potential confounders were very low, allowing the best possible control of bias by multivariable regression models. Another strength of our study was the choice of a homogeneous population of 1274 nulliparas with prolonged pregnancy, which resulted in a high incidence of meconium-stained AF. Rates of meconium-stained AF might be lower in multiparous women due to the shorter time in active labor, but the strength of the association would likely remain the same.
Mothers had comparable characteristics regarding parity, pregnancy-related diseases, or preexisting conditions, which limits potential confounding factors.
Our principal outcome was a composite outcome consisting of events that are all related to neonatal asphyxia. These events may be life-threatening situations and are independent risk factors of poor long-term neurodevelopmental outcome. Hence, the principal outcome is constructed to measure severe neonatal morbidity.
[Table 3: Neonatal outcomes by amniotic fluid consistency; for all comparisons, the reference group is normal AF. Abbreviations: AF, amniotic fluid; RR, relative risk. a Adjusted for gestational age at birth, duration of labor, and country of birth. b Adjusted for gestational age at birth, duration of labor, country of birth, and oxytocin during labor.]

[Table 4: Association between amniotic fluid consistency and neonatal morbidity.]
However, our study is not without limitations. In particular, data on non-participating women were not available; therefore, our results can be extrapolated only to women with the same characteristics as those included.
It was necessary to reassess the association between AF consistency and neonatal morbidity now that guidelines for the monitoring and management of labor, and for the obstetric and pediatric management of meconium-stained AF, have been modified. Invasive procedures such as amnioinfusion during labor or aspiration of neonates at birth were previously recommended to limit the neonatal complications of meconium-stained AF, but have been abandoned because no evidence has proven their effectiveness. 15,22,23 This multicenter study, conducted according to current obstetric and pediatric practices at each center in a population of patients with low-risk prolonged pregnancies, suggests that meconium-stained AF is an alarm signal that should prompt closer monitoring during labor and early notice to the pediatric team. Pediatricians should be alerted in the delivery room to the presence of thick meconium-stained AF so that there is no delay in management.
| CONCLUSION
This study shows a 3.3-fold higher relative risk of severe neonatal morbidity in the presence of thick meconium-stained AF. It is necessary to distinguish thick from thin meconium-stained AF because only thick meconium-stained AF is associated with a higher rate of severe neonatal morbidity.
AUTHOR CONTRIBUTIONS
IA: conceptualization, methodology, data collection, data analysis, interpretation, writing, review, editing. DK: conceptualization, methodology, data collection, data analysis, interpretation, review.
Towards Robust Ranker for Text Retrieval
A ranker plays an indispensable role in the de facto 'retrieval & rerank' pipeline, but its training still lags behind: it learns from moderate negatives or/and serves as an auxiliary module for a retriever. In this work, we first identify two major barriers to a robust ranker, i.e., inherent label noises caused by a well-trained retriever and non-ideal negatives sampled for a high-capable ranker. Thereby, we propose multiple retrievers as negative generators to improve the ranker's robustness, where i) involving extensive out-of-distribution label noises renders the ranker robust against each noise distribution, and ii) diverse hard negatives from a joint distribution are relatively close to the ranker's negative distribution, leading to more challenging and thus more effective training. To evaluate our robust ranker (dubbed R$^2$anker), we conduct experiments in various settings on the popular passage retrieval benchmark, including BM25-reranking, full-ranking, retriever distillation, etc. The empirical results verify the new state-of-the-art effectiveness of our model.
INTRODUCTION
Text Retrieval plays a crucial role in many applications, such as web search (Brickley et al., 2019) and recommendation (Zhang et al., 2019b). Given a query, it aims to retrieve all relevant documents from a large-scale collection (while each entry of the collection could be a sentence, passage, document, etc., we adopt 'document' for a clear demonstration). For a better efficiency-effectiveness trade-off, the de facto paradigm relies on a 'retrieval & rerank' pipeline (Guo et al., 2022). That is, 'retrieval' uses an efficient retriever to fetch a set of document candidates given a query, whereas 'rerank' re-calculates the relevance of the query to each candidate with a heavy yet effective ranker for better results.

[Figure 1: BM25-reranking performance of rankers trained on different negative distributions from specific retrievers (left), and examples of false negative labels produced by two well-trained strong retrievers in contrast to the BM25 retriever (right). 'R1' denotes a well-trained coCondenser (Gao & Callan, 2022) dense-vector retriever, whereas 'R2' denotes a well-trained SPLADE (Formal et al., 2021) lexicon-weighting retriever.]
False negative labels are a well-known problem in the retrieval field. Please refer to Figure 1 (right) for several examples. This problem arises because, with finite manpower applied to a practically unbounded document collection, the annotating process depends heavily on the best on-hand retriever to narrow the labeling range.
On the other hand, since a ranker is usually built upon a PLM-based cross-encoder without consideration of computation complexity, it is far more capable than any bi-encoder retriever. As such, the hard negatives sampled by a single retriever hardly fool the ranker, making the ranker training less effective.
To reduce the effect of the two challenges above, we propose a brand-new robust reranker (R^2ANKER) learning method. It involves multiple retrievers as generators to jointly sample diverse hard negatives, which are used to challenge a single ranker as the discriminator.
R^2ANKER has certain merits regarding the robustness of its model training. First, as the false negatives are closely subject to the relevance distribution over the collection induced by a specific retriever, negative generators based on different retrievers are prone to sample out-of-distribution (OOD) or open-set label noise relative to each other. In light of the 'insufficient capacity' assumption (Arpit et al., 2017), such open-set noise has been proven effective in improving robustness when learning a ranker with noisy labels. Second, intuitively, sampling negatives over a joint distribution of various retrievers is likely to offer more challenging hard negatives. Although the joint distribution is not exactly the same as the ranker's negative distribution (which is unavailable due to combinatorial explosion), it is at least closer to the ideal negative distribution than that of a single retriever, as verified in our analysis. Hence, the ranker can learn from diverse hard negatives that are approximately subject to the ideal negative distribution, for more effectiveness.
In experiments, we adopt the most popular passage retrieval benchmark dataset, MS-Marco (Nguyen et al., 2016), to evaluate our proposed model under various settings. Specifically, our method achieves new state-of-the-art performance on BM25 reranking and full-ranking. Meantime, to verify the expressive power of our ranker model, we conduct an experiment to distill our well-trained model into a retriever, which even shows state-of-the-art first-stage retrieval performance without cooperative training. Moreover, our extensive quantitative analyses also unveil the essence of the negative distributions required to reach a robust ranker. We make our code implementations and well-trained models available at https://github.com/taoshen58/R2ANKER.

2 R^2ANKER: ROBUST RANKER
TASK FORMULATIONS & CHALLENGES
Given a user query q, a ranker model, K(q, d), is responsible for calculating a relevance score between q and an arbitrary document d from a large-scale collection D (i.e., d ∈ D). It usually serves as a downstream module for an efficient retriever, R, to compose a 'retrieval & rerank' pipeline, where a lightweight retriever R (e.g., a Siamese encoder) retrieves top candidates and then a relatively heavy K (say, a cross-encoder (Devlin et al., 2019)) improves the search quality.
Formally, K(q, d) is usually achieved by a deep Transformer encoder applied to the query-document pair for dense interactions (a so-called cross-encoder, or one-stream encoder), i.e.,

K(q, d) = MLP(Trans-Enc(q ⊕ d)),   (1)

where ⊕ denotes text concatenation. In contrast, a retriever R(q, d) is usually defined as a bi-encoder (a.k.a. dual-encoder, two-stream encoder, or Siamese encoder) to derive counterpart-agnostic representation vectors, i.e.,

R(q, d) = ⟨u, v⟩, where u = Trans-Enc(q) and v = Trans-Enc(d).   (2)

Distinct from a traditional classification task where each category is associated with adequate training samples, only positive document(s), d+, are provided for each training query q ∈ Q^(trn), regardless of its negative ones, d−. Therefore, to train a ranker, a prerequisite is determining a negative sampling strategy to make the training procedure more effective, i.e.,

N ∼ P(D\{d+} | q; θ^(smp)),   (3)

where P denotes a probability distribution over D, which can be either non-parametric (i.e., θ^(smp) = ∅) or parametric (i.e., θ^(smp) ≠ ∅). Then, the ranker calculates a probability distribution over the combination of {d+} and N, i.e.,

P(d | q) = exp(K(q, d)) / Σ_{d′ ∈ {d+} ∪ N} exp(K(q, d′)).

Lastly, the ranker is trained via a contrastive learning objective, whose training loss is defined as

L = −log P(d+ | q).   (4)

Nonetheless, two major challenges emerge in ranker training: First, as crowd-sourcing is very expensive, it is impossible to annotate every positive document d+ for a query q, leading to a label-noise problem. That is, there exist false negative documents in D\{d+}. What's worse, the false negatives are likely to be mixed in with the hard negatives (i.e., those with high values of P(D\{d+} | q)), making the ranker training non-robust.
Second, as exhaustive training (i.e., N = D\{d+}) is infeasible in practice, the question is how to train the model effectively with limited computation resources. Ideally, negative sampling for a query should rely on the model being trained to make the learning more effective, i.e., θ^(smp) in Eq.(3) should equal θ^(ce) of K. However, using K as a sampler over P(D\{d+}|q; θ^(ce)) suffers from a combinatorial explosion problem brought by the cross-encoder, leading to intractable computation overheads. Practically, θ^(smp) must be as efficient as possible to circumvent this problem; it could be a heuristic strategy (e.g., uniform sampling), a lightweight term-based retriever (e.g., BM25), or a late-interaction representation model (e.g., a Siamese encoder). A sophisticated negative sampling strategy is therefore required to simulate P(D\{d+}|q; θ^(ce)) during ranker training.
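For concreteness, here is a minimal PyTorch sketch of the training objective in Eqs. (3)-(4), with the cross-encoder scoring abstracted into a score tensor (shapes and names are our own, not the authors' code):

```python
import torch
import torch.nn.functional as F

def ranker_contrastive_loss(scores: torch.Tensor) -> torch.Tensor:
    """Contrastive loss of Eq.(4).

    scores: [batch, 1 + n_neg] cross-encoder relevance scores K(q, d),
            where column 0 is the positive d+ and the rest are sampled negatives N.
    """
    log_probs = F.log_softmax(scores, dim=-1)  # softmax over {d+} and N
    return -log_probs[:, 0].mean()             # -log P(d+ | q)

# Toy example: 2 queries, each scored against 1 positive and 3 negatives
scores = torch.tensor([[2.5, 0.3, -1.0, 0.8],
                       [1.9, 1.7,  0.2, -0.5]])
print(ranker_contrastive_loss(scores).item())
```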
BRUTE-FORCE: RANDOM & BM25 NEGATIVES
The most straightforward negative sampling method to fulfill Eq.(3) is uniform sampling over the whole collection D (a.k.a. random or in-batch negatives), i.e.,

N^(rand) ∼ Uniform(D\{d+}).

Even though using large numbers of random negatives in one batch has proven very effective in self-supervised contrastive learning, prior works show very inferior retrieval quality when only random negatives are involved. The common practice is to resort to a lightweight lexicon-based retriever, i.e., BM25, to sample relatively challenging negatives (a.k.a. hard negatives) for a specific query. This is written as

N^(bm25) ∼ BM25(·|q; D),

where the BM25 model is built upon the collection D.
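Both strategies admit a short sketch; the one below assumes a BM25 object with a `get_top_n`-style interface (as in the rank_bm25 package) and is only an illustration of the idea, not the authors' implementation:

```python
import random

def random_negatives(collection, positives, k):
    """Uniform negatives over D \\ {d+} (random / in-batch style)."""
    pool = [d for d in collection if d not in positives]
    return random.sample(pool, min(k, len(pool)))

def bm25_negatives(bm25, query_tokens, collection, positives, k, top_n=200):
    """BM25 hard negatives: uniform sampling within the capped top-N candidates."""
    candidates = bm25.get_top_n(query_tokens, collection, n=top_n)  # assumed API
    candidates = [d for d in candidates if d not in positives]
    return random.sample(candidates, min(k, len(candidates)))
```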
As BM25-reranking 1 usually serves as a critical task to evaluate a trained ranker, learning from BM25 negatives, N^(bm25), keeps the training and evaluation phases closely consistent, potentially leading to optimal evaluation results.
However, both challenges mentioned in §2.1 remain unsolved. Limited by the discrepancy from the ranker's ideal negative distribution and by the label noise in the training data, a ranker trained solely on BM25 negatives, N^(bm25), is far from optimal and leaves large room for improvement.
EFFECTIVE RANKER WITH ADVERSARIAL HARD NEGATIVES
To alleviate the second challenge, a well-known method is to sample hard negatives by applying a strong or well-trained retriever R, parameterized by θ^(be), to a query q, i.e.,

N^(hard) ∼ P(D\{d+} | q; θ^(be)).

Although P(D\{d+}|q; θ^(be)) is not very similar to the ideal distribution P(D\{d+}|q; θ^(ce)), the strong deep retriever at least ensures its distribution is closer to the ideal one than BM25's, empirically leading to better performance.
Then, the hard negatives are used to minimize the contrastive learning loss defined in Eq.(4). Coupled with a recently advanced adversarial retriever-ranker learning method, the learning objective can be written as

min_{θ^(ce)} max_{θ^(be)} E_{N ∼ P(D\{d+}|q; θ^(be))} L(q, d+, N; θ^(ce)),   (10)

where the θ^(be)-parameterized R can be either a frozen, well-trained retriever or a jointly optimized one.
ROBUST RANKER WITH OPEN-SET DIVERSE NEGATIVES
Although learning from (adversarial) hard negatives has proven effective for obtaining a high-performing ranker (Ren et al., 2021b), a single retriever R, even one well-trained with various advanced techniques (Lu et al., 2022), can hardly provide negatives hard enough to challenge the ranker K for robust training.
What's worse, a strong well-trained retriever usually introduces the label-noise problem. Due to limited crowd-sourcing resources, it is impossible to comprehensively annotate the relevance of every query ∀q ∈ Q^(trn) to every document ∀d ∈ D. In general, the annotating process can be roughly described as i) using the best on-hand retriever (e.g., a commercial search engine) to fetch top document candidates for a query q, and then ii) distinguishing the positive document(s), d+, associated with q from the very top candidates. Therefore, constrained by the retriever in the annotation process, there exist positive documents for q not included in the top candidates, which are regarded as negative by mistake (false negative labels), degrading standard ranker training. Prior works focus on 'co-teaching' or/and 'boosting' strategies, but they assume a ranker is robust enough against noise and only denoise for the more fragile retriever by means of the ranker; how to make the ranker itself robust to label noise remains an open question. From another point of view, we could also formulate the search problem (both retrieval and rerank) as a many-class many-label classification problem, where the number of classes equals |D|, i.e., the number of documents in D, which usually ranges from millions to billions. Thus, current solutions of the search problem are analogous to the label semantic matching paradigm for many-class classification problems (Hsu et al., 2019). As such, the mis-labeled class caused by a single θ^(be)-parameterized retriever R will be subject to the following distribution:

y′ ∼ P^(FN)(·|q; θ^(be)),

where P^(FN)(·|·; θ^(be)) denotes an inherent label noise distribution induced by the retriever θ^(be).
Motivated by weak negatives and label noises, we propose to leverage multiple retrievers to improve the ranker's robustness from two aspects: i) introducing out-of-distribution noise against inherent label noise and ii) generating diverse hard negatives to challenge the single ranker.
As proven under a recent "insufficient capacity" assumption (Arpit et al., 2017) 2, learning from extra out-of-distribution (OOD) or open-set noise can improve robustness against inherent label noises that are subject to one dataset or one distribution. Inspired by this, we introduce one (or several) extra retriever (parameterized by θ̃^(be)) besides that in Eq.(10) to generate OOD documents as open-set noise labels and alleviate the effect of noisy training samples (q, y′). Formally, the open-set label noise can be generated by

ỹ ∼ P^(FN)(·|q; θ̃^(be)),

which will be mixed with (q, y′) to learn a more robust ranker.
Meantime, employing multiple retrievers provides more diverse hard negatives, drawn from different negative distributions, which are more challenging for ranker training and thus lead to a more robust ranker. Intuitively, the joint negative distribution of multiple retrievers,

P(·|q; Θ^(be)), with Θ^(be) = {θ_1^(be), ..., θ_M^(be)},   (13)

is closer to the ideal negative distribution subject to θ^(ce), as the most probable negatives cannot be handled by most retrievers.
EFFICIENT IMPLEMENTATIONS
To ground our proposed robust ranker, several details must be specified before our experiments. First, as the success of our robust ranker training depends on the diversity of the multiple retrievers in Eq. (13), we need to employ retrievers distinct enough from each other. To this end, we employ three kinds of retrievers with five retrieval models in total:
• i) BM25 retriever: A simple BM25 retrieval model built on the whole collection.
• ii) Den-BN retriever: A dense-vector retrieval model trained on BM25 negatives.
• iii) Den-HN retriever: A dense-vector retrieval model that is trained on hard negatives sampled by Den-BN.
• iv) Lex-BN retriever: A lexicon-weighting retrieval model trained on BM25 negatives.
• v) Lex-HN retriever: A lexicon-weighting retrieval model that is trained on hard negatives sampled by Lex-BN.
Second, in line with Clark et al. (2020) and Ren et al. (2021b), we do not update the generators (i.e., the retrievers in our method) w.r.t. the performance of the discriminator (i.e., the ranker), for two reasons: On the one hand, we avoid the heavy computation overheads of training the retrievers jointly and updating the large-scale index synchronously. On the other hand, due to the intrinsic discrepancy in model structure, the generators can hardly fool the discriminator, making the adversarial process less effective. As verified in prior work, cooperative learning (training the retriever towards the reranker, regularized by a Kullback-Leibler divergence) is also necessary for competitive performance.
Third, the strategy to sample a negative distribution in Eq.(3) plays an important role in our method. Instead of directly sampling from the softmax distribution over D\{d+}, which inclines toward the very top candidates, we follow common practice and cap the top-N (say, N=200 in our experiments) candidates, then conduct uniform sampling to ensure diversity. As for sampling over the joint negative distribution in Eq.(13), we combine the capped top-N candidates from multiple generators (retrievers) without de-duplication to better simulate the joint distribution.
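The joint sampling procedure just described can be sketched as follows (the retriever interface and names are assumptions; the key point is pooling the capped top-N lists without de-duplication before uniform sampling):

```python
import random

def joint_hard_negatives(retrievers, query, positives, k=40, top_n=200):
    """Sample k hard negatives from the joint distribution of multiple retrievers.

    Each retriever contributes its capped top-N candidates; the lists are pooled
    WITHOUT de-duplication, so documents ranked highly by several retrievers are
    proportionally more likely to be drawn, simulating the joint distribution.
    """
    pool = []
    for retriever in retrievers:
        pool.extend(retriever.retrieve(query, top_n))  # assumed interface
    pool = [d for d in pool if d not in positives]     # drop labeled positives
    return random.sample(pool, min(k, len(pool)))
```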
DATASETS & METRICS
In the experiment, we adopt the popular passage retrieval dataset, MS-Marco (Nguyen et al., 2016), for model training and evaluation. We use the official MS-Marco queries and report BM25-reranking, full-ranking, and first-stage retrieval results on MS-Marco Dev (Nguyen et al., 2016), as well as results on TREC Deep Learning 2019 (Craswell et al., 2020). Following previous work, we report MRR@10 for the BM25-reranking and full-ranking tasks on MS-Marco Dev and NDCG@10 on TREC Deep Learning 2019. In addition, following previous works (Ren et al., 2021b), we leverage our ranker to teach a retriever by knowledge distillation for first-stage retrieval, and evaluate the retrieval results using MRR@10 and R@50. Here, MRR, NDCG, and R refer to Mean Reciprocal Rank, Normalized Discounted Cumulative Gain, and Recall, respectively.
EXPERIMENTAL SETUPS
For the training of the ranker, we use the ERNIE-2.0-base model (Sun et al., 2019) as the initialization of our ranker. To provide more diverse hard negatives for the ranker's robust training, we sample them from multiple retrievers. In our experiments, we use three kinds of retrievers: BM25 (Yang et al., 2017), coCondenser (Gao & Callan, 2022) as the dense-vector retrieval model, and SPLADE (Formal et al., 2021) as the lexicon-weighting retrieval model. During ranker training, we sample 40 hard negatives for each query. The maximum number of training epochs, the batch size, and the learning rate are set to 2, 12, and 1 × 10^-5, respectively. The maximum sequence length is set to 128 and the random seed is fixed to 64. For model optimization, we use the Adam optimizer (Kingma & Ba, 2015) and a linear warmup. The warmup proportion is 0.1, and the weight decay is 0.1. All experiments are conducted on an A100 GPU.
To distill our trained ranker into a retriever for first-stage retrieval, we adopt the two-stage coCondenser retriever (Gao & Callan, 2022) and merely apply our ranker scores to the second stage. Specifically, we leverage the training data of Ren et al. (2021b) and replace the contrastive learning loss used in coCondenser with a simple KL divergence loss. The learning rate, batch size, and number of epochs are set to 5 × 10^-5, 16× (1 positive and 10 negatives), and 4, respectively.
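A minimal sketch of such a KL-divergence distillation loss is given below (a non-authoritative illustration; tensor names are ours and temperature scaling is omitted):

```python
import torch
import torch.nn.functional as F

def kd_kl_loss(retriever_scores: torch.Tensor, ranker_scores: torch.Tensor) -> torch.Tensor:
    """Distill the ranker (teacher) into the retriever (student).

    Both tensors are shaped [batch, n_docs] (e.g., 1 positive + 10 negatives per
    query); the student's listwise distribution is pulled toward the teacher's.
    """
    student_log_probs = F.log_softmax(retriever_scores, dim=-1)
    teacher_probs = F.softmax(ranker_scores.detach(), dim=-1)  # no grad to teacher
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
```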
BM25-RERANKING AND FULL-RANKING
First of all, we briefly introduce the involved baseline retrieval models in the following.
• SAN + BERT base : Using a stochastic answer network (SAN) and pre-trained language model for passage reranking (Liu et al., 2018).
• RocketQA: The method proposes three training strategies, comprising cross-batch negatives, denoised hard negatives and data augmentation.
• Multi-stage: Arranging monoBERT and duoBERT in a multi-stage ranking architecture to form an end-to-end search system (Nogueira et al., 2019a).
• RocketQAv2: The method introduces dynamic listwise distillation for unified training of both the retriever and the reranker (Ren et al., 2021b).

[Table: first-stage retrieval baselines (model, backbone(s), MRR@10, R@50): (Xiong et al., 2021) RoBERTa-base, 33.0, –; ColBERT (Khattab & Zaharia, 2020) BERT-base, 36.0, 82.9; RocketQA ERNIE-base/ERNIE-base, 37.0, 85.5; COIL BERT-base, 35.5, –; ME-BERT (Luan et al., 2021) BERT-large, 33.8, –; PAIR (Ren et al., 2021a) ERNIE-base, 37.9, 86.4; DPR-PAQ (Oguz et al., 2021) BERT-base, 31.4, –; DPR-PAQ (Oguz et al., 2021) RoBERTa-base, 31.1, –; Condenser BERT-base, 36.6, –; coCondenser (Gao & Callan, 2022) BERT-base.]

Comprehensive Full-Ranking on MS-Marco Dev. Table 1 shows full-ranking results in terms of MRR@10 for different retriever-ranker combinations. From the table, we can make two observations: First, R^2ANKER-BM25,D2,L2 achieves the best performance over different retrieval results. Second, the performance of our ranker tends to increase as hard negatives from more different retrievers are introduced; however, beyond a certain number of retrievers, the performance no longer improves and even tends to decline. Please refer to the following discussion for more analyses of these observations. In the subsequent discussion, for convenience, we use R^2ANKER to denote R^2ANKER-BM25,D2,L2.
BM25-Reranking on MS-Marco Dev. The results of BM25-reranking on MS-Marco Dev are listed in Table 2. As we can see, our method achieves state-of-the-art performance, which demonstrates that our reranker is more robust. The reason is that our method introduces more diverse hard negatives from multiple retrievers, which alleviates the ranker's learning of the noise inherent to any particular retriever. In addition, we observe that co-training methods (i.e., RocketQA, Multi-stage, CAKD, RocketQAv2) can significantly improve reranking performance. However, our method does not use co-training but instead introduces hard negatives from different retrievers, and it outperforms the co-training methods by about 1.0%∼4.1% on MRR@10. This suggests that the gain from co-training strategies may come from the introduction of more diverse hard negatives. The results also demonstrate the superiority of our method in training a more robust ranker without complex co-training of multiple models, which significantly reduces the training cost.
BM25-Reranking on TREC Deep Learning 2019.
To further verify the effectiveness of our method, we evaluate our approach and the strong baseline (RocketQAv2) on the TREC Deep Learning 2019 dataset. As shown in Table 3, our method significantly outperforms RocketQAv2 by 2.6 on NDCG@10, which further demonstrates the effectiveness of our approach. Meanwhile, it shows that introducing more diverse hard negatives can guide ranker training toward better generalization.
Full-Ranking with Different Retrievers. To more comprehensively verify our claim, we compare our method with other strong baselines under different retrievers. The results are shown in Table 2. From the results, we can see that our method outperforms other models by about 1.4%∼2.4% on MRR@10. It is worth noting that since our method only trains a reranker, we need other retrievers to provide a preliminary retrieval result first. Moreover, our ranker still achieves better performance even though it is based on a weaker retriever than RocketQAv2's (i.e., coCondenser* (Gao & Callan, 2022) reaches 83.5 versus RocketQAv2's 86.2 on R@50). Therefore, our model can achieve better reranking performance even on a sub-optimal retrieval result.

[Figure 2: BM25-reranking performance by various rankers trained on negatives sampled from retrievers' (joint) distributions. D1, D2, L1, and L2 abbreviate the Den-BN, Den-HN, Lex-BN, and Lex-HN retrievers, respectively. 'KL divergence' denotes the difference between a retrievers' (joint) distribution and the BM25 retriever's, i.e., KL(P(·|q; Θ^(be)) ‖ BM25(·|q; D)), used to characterize the negative distribution. For example, the point 'bm25,D2,L2' indicates that i) the KL between its joint retriever distribution and the BM25 retriever's distribution is around 0.4, and ii) a ranker trained on that joint negative distribution achieves 41.1 MRR@10 on BM25 reranking.]
KNOWLEDGE DISTILLATION FOR RETRIEVER
Under the 'retrieval & rerank' pipeline, the ranker can often rerank the results from the retriever to obtain better full-ranking results. Some recent works use the ranker to guide retriever training, specifically by distilling the reranker's scores into the retriever, which has been shown to be very effective (Ren et al., 2021b). To verify that our ranker learns more robust knowledge, we distill our ranker into a bi-encoder retriever. We compare the resulting retriever with recently advanced competitors, and the results are shown in Table 4. It is observed that, with a comparable ranker (teacher) size, our model achieves state-of-the-art performance in terms of both MRR@10 and R@50. Meantime, our distilled retriever even remains competitive with retrievers taught by larger (i.e., 3×) teachers. Moreover, it is noteworthy that our distillation paradigm is very simple, i.e., it needs no co-training or adversarial learning, and can be plugged into any retriever training with reduced computation overheads.
DISTRIBUTION ANALYSIS
Distribution of Hard Negatives. Figure 2 shows the BM25-reranking performance of rankers trained on negatives sampled from retrievers' (joint) distributions, P(·|q; Θ^(be)).
Here, we assume that the negatives sampled by the BM25 retriever are too moderate to train a robust ranker. Thereby, we propose to measure the diversity of negatives from P(·|q; Θ^(be)) by the Kullback-Leibler (KL) divergence between P(·|q; Θ^(be)) and BM25(·|q; D). It is expected that the more dissimilar the distribution is to BM25's, the more diverse the negative samples, and the better the performance. However, as shown in the figure, the most effective joint retriever involves BM25. A potential reason is that the almost non-parametric BM25 is orthogonal to the other learnable retrievers (both dense-vector and lexicon-weighting ones), making the negatives diverse enough for great ranking quality.
Distribution Shift.
We want to investigate whether the distribution shift from a (joint) retriever to a correspondingly trained ranker correlates with the ranker's performance. We assume that a smaller distribution shift indicates that the joint retriever is closer to the ideal negative distribution of a ranker, which echoes our motivations and claims.

[Figure: BM25-reranking performance by various rankers vs. the relevance-score distribution change from the (joint) retriever to the trained ranker.]

Formally, ∆ = KL(P(·|q; Θ^(be)) ‖ BM25(·|q; D)) − KL(P(·|q; θ^(ce)) ‖ BM25(·|q; D)), where θ^(ce) is trained with a specific Θ^(be). The smaller |∆| is, the closer the negative sampling distribution is to the ideal negative distribution of the ranker, as learning on the retriever-sampled negatives will then not shift the distribution.
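The KL quantities used in this analysis can be approximated with a sketch like the following, assuming all distributions are softmax-normalized scores over a shared capped candidate set (a simplification of whatever the authors computed exactly):

```python
import numpy as np

def kl_divergence(p_scores: np.ndarray, q_scores: np.ndarray) -> float:
    """KL(P || Q) between softmax-normalized relevance-score distributions
    over the same candidate set (e.g., a joint retriever vs. BM25)."""
    p = np.exp(p_scores - p_scores.max()); p /= p.sum()
    q = np.exp(q_scores - q_scores.max()); q /= q.sum()
    eps = 1e-12
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Delta as defined above, given scores from the joint retriever, the trained
# ranker, and BM25 over the same candidates:
# delta = kl_divergence(joint_scores, bm25_scores) - kl_divergence(ranker_scores, bm25_scores)
```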
INFORMATION RETRIEVAL
In information retrieval, 'retrieval & rerank' has become the standard, default pipeline (Guo et al., 2022), for two reasons: first, it is not feasible to train on the entire collection; second, human annotation is costly. In the retrieval stage, given a textual query, a retriever returns a relevance score between it and each textual item (e.g., paragraphs and documents) from a large-scale collection. The retriever is generally implemented as a bi-encoder (also known as a dual encoder or Siamese encoder) because it can independently derive representations of texts and compute correlations between them efficiently. Then, the top-k retrieval results from the retriever are passed to the reranker to obtain more accurate ranking results. The reranker is therefore a crucial part of the pipeline and directly affects the final performance of information retrieval; meanwhile, it has been attracting the attention of more and more researchers (Ren et al., 2021b). In this work, we focus on reranker learning due to its importance.
In addition, rerankers are now widely used as teachers in retriever training: the scores derived by the reranker have been shown to guide retriever learning through knowledge distillation (Ren et al., 2021b). RocketQAv2 (Ren et al., 2021b) introduces a joint training approach for dense passage retrieval and passage reranking, using dynamic listwise distillation to dynamically update the parameters of both the reranker and the retriever. AR2 presents an adversarial framework for dense retrieval, where the retriever is regarded as the generator and the ranker as the discriminator. Therefore, the reranker can not only further rank the results from the retriever but also improve the retriever's performance through knowledge distillation.
There is a discrepancy between the training and inference of the retriever. The reason is that candidate passages annotated for one question come from the top-K passages retrieved by a specific retrieval approach (e.g., BM25 (Yang et al., 2017)); the rest of the collection may still contain positive passages. During retriever training, the model learns to estimate the probabilities of positive passages within a small candidate set for each question, which makes it difficult to train a solid retriever effectively. The training of the reranker is based on the hard negatives retrieved by the retriever, but these are rife with false negatives. Therefore, an imperfect retriever and noisy data also lead to suboptimal performance of the reranker.
LEARNING WITH NOISE DATA
To alleviate the discrepancy between the training and inference of the retriever, some works propose reselecting hard negatives from the retrieval results. Although this method dramatically improves retrieval performance, it still suffers from false negatives. To remove false negatives from the top-ranked results retrieved by a retriever, some works introduce denoised hard negatives, using a cross-encoder-based reranker to remove top-retrieved passages that are likely to be false negatives. Others construct a unified minimax game for training the retriever and ranker interactively to reduce the effect of false negatives. Although these methods can mitigate the effect of false negatives for the retriever, the false negatives retrieved by the retriever still affect the training of the reranker.
In addition, many other works on learning with noisy data (Han et al., 2018) show remarkable results. Some works weight or filter the training samples, for example via small-loss selection (Han et al., 2018; Wei et al., 2020), (dis)agreement between two models (Malach & Shalev-Shwartz, 2017; Wei et al., 2020), or GMM distributions (Arazo et al., 2019; Li et al., 2020). Moreover, some works propose to improve generalization under label noise via regularization techniques, such as virtual adversarial training (Miyato et al., 2019), gradient clipping, label smoothing (Lukasik et al., 2020; Szegedy et al., 2016) and temporal ensembling (Laine & Aila, 2017). Recently, some works attempt to alleviate the effect of data noise by introducing open-set noise that obeys different distributions. Intuitively, based on the "insufficient capacity" assumption (Arpit et al., 2017), increasing the number of open-set auxiliary samples slows down the fitting of inherent noises. In the 'retrieval & rerank' pipeline, the training of the reranker relies on the hard negatives obtained by the retriever; still, hard negatives are inevitably full of false negatives, which spoil the performance of the reranker. Therefore, in this work, we focus on how to train a robust reranker on noisy data.
LEARNING WITH MULTI-GENERATOR
Recently, learning with multiple generators has been shown to improve models significantly and has attracted much attention (Zhou et al., 2021; Zhang et al., 2019a; Hoang et al., 2018). Zhou et al. (2021) propose a triple adversarial framework for unsupervised caption generation, which comprises an image generator, a sentence generator, and a discriminator; the framework aligns representations across modalities through an adversarial game between the generators and the discriminator. Zhang et al. (2019a) propose an adversarial learning framework, ensembleGAN, consisting of a language-model-like generator, a ranker generator, and a discriminative ranker, in which the generators and discriminator improve each other through generation- and retrieval-based methods. Although these co-training methods are effective, they suffer from high training costs. In this work, instead of co-training to improve model performance, we train a more robust model by sampling diverse training samples from different models.
CONCLUSION
In this work, we present a simple yet effective learning strategy for obtaining a robust ranker, R2ANKER. The robustness comes from two ingredients: open-set label noise, which hardens the ranker against inherent label noise, and a multi-retriever joint negative sampling distribution that is close to the negative distribution a ranker faces. In experiments on passage retrieval, our proposed method achieves state-of-the-art performance in both BM25-reranking and full-ranking settings. In addition, the trained model can serve as a teacher for learning a strong retriever, which achieves very competitive results on first-stage retrieval. Lastly, extensive analytical experiments verify the correctness of our assumptions and claims.
Accuracy of analyzed temperatures, winds and trajectories in the Southern Hemisphere tropical and midlatitude stratosphere as compared to long-duration balloon flights
Eight super-pressure balloons floating at constant level between 50 and 80 hPa and three Infra-Red Montgolfier balloons of variable altitude (15 hPa daytime, 40–80 hPa night time) were launched at 22°S from Brazil in February–May 2004 in the frame of the HIBISCUS project. The flights lasted 7 to 79 days, residing mainly in the tropics, but some of them crossed the tropical barrier and went to southern midlatitudes. Compared to the balloon measurements just above the tropical tropopause, the ECMWF operational temperatures show a systematic cold bias of 0.9 K and the easterly zonal winds are too strong by 0.7 m/s. This wind bias adds to the ECMWF trajectory errors, but they remain relatively small, with an error of about 700 km after 5 days. The NCEP/NCAR reanalysis trajectory errors are substantially larger (1300 km after 5 days). In the southern midlatitudes the cold bias is the same
Introduction
This study provides a new intercomparison between the operational ECMWF (and to some extent the NCEP/NCAR reanalysis) data and independent in-situ measurements valid for the southern tropics and extratropics. These new results are important for research that depends on the accuracy of assimilation systems, such as chemical transport studies in the stratosphere.
The accuracy of analyzed temperatures in the tropical tropopause layer has been studied extensively, due for example to its influence on stratospheric humidity. Simmons et al. (1999) found the operational ECMWF analyses from 1996-1998 to have a standard-level bias of the order of 0.5°C or less compared to radiosondes. The temperature minima were, however, substantially overestimated, partially due to the 20 hPa vertical resolution of the model at the time.
From three long-duration super-pressure balloon flights launched at 0.1°N from Ecuador, Vial et al. (2001) studied the accuracy of both ECMWF temperatures and winds at around 60 hPa from late August to mid-October 1998 in the equatorial region. They found a warm bias of the ECMWF temperatures of about 0.5 K compared with long-duration balloons. The easterly zonal winds were too strong by 2.4 m/s, but this could be explained by the balloon Stokes drift due to a Rossby-gravity wave near the equator.
In the Stratospheric Processes And their Role in Climate (SPARC) Intercomparison of Middle Atmosphere Climatologies (SPARC, 2002; Randel et al., 2004a) several analyses and reanalyses were studied. Noteworthy is a 1-3 K warm bias of the tropical tropopause for the (UK)MO (Met Office), CPC (Climate Prediction Center), and NCEP (National Center for Environmental Prediction) analyses from 1992-1997. Randel et al. (2004b) found this still to be true for (UK)MO and NCEP in 2001-2002. Using GPS (Global Positioning System)-derived temperatures, Gobiet et al. (2005) found an ECMWF cold bias of 1-2 K at the tropical tropopause in 2003-2004. While past results for reanalyses may still be valid depending on changes in the observing system, the operational analyses are also subject to continuous model development, and past results may thus not reflect the current status.
In this paper we compare operational ECMWF analyses to long-duration balloons launched from Bauru (22°S, 49°W) in Brazil in February-May 2004 during the HIBISCUS campaign (Pommereau et al., 2006). We compare temperatures, horizontal winds, and trajectories. We primarily analyze data in the region just above the tropical tropopause (50-80 hPa). Trajectories based on NCEP/NCAR reanalyses are also compared. A paper comparing the ERA-40 reanalyses to long-duration balloon flights back to 1988 will also be submitted to the HIBISCUS special issue (Christensen et al., 2006).
Long-duration balloon flights
Eight super-pressure (BP = Ballon Pressurisé) and three Infra-Red Montgolfier (MIR) long-duration balloons have been flown from Brazil during the HIBISCUS campaign (Pommereau et al., 2006, this special issue). One of the balloons (BP2) failed and a second one (BP3) experienced problems of transmission which make the data useless. The BPs are spherical constant-volume and therefore constant-density (isopycnic) balloons made of trilaminated polyester. They were of two sizes: 10 m diameter flying around 55 hPa (19 km), varying a little with the load, and 8.5 m diameter at 75 hPa (18 km), that is at or a little above the cold point tropopause. Two of the 6 HIBISCUS BP flights stayed in the tropics, while the 4 others drifted to the southern-hemisphere mid-latitudes.
1 Pommereau, J.-P., Garnier, A., Goutail, F., et al.: An overview of the HIBISCUS campaign, Atmos. Chem. Phys. Discuss., in preparation, 2006.
2 Christensen, T., Knudsen, B. M., Pommereau, J.-P., Letrenne, G., Hertzog, A., and Vial, F.: Validation of ECMWF ERA-40 tropical lower stratosphere temperatures and winds with long-duration balloon data, Atmos. Chem. Phys. Discuss., in preparation, 2006.
The scientific payload on BP flights, called Rumba, carries a GPS for location (±10 m) and wind (±0.01 m/s), a pressure sensor (±0.6 hPa) and two temperature sensors. The data were sampled every 15 min and transmitted by the ARGOS satellite data collection system. The temperature sensors are small thermistors (YSI microbeads), with an accuracy of 0.25 K. The sensors are mounted 180° apart on a 1-m boom, hanging 5 m below the gondola. The thermistors are heated by the sun during day and consequently the daytime temperature observations exhibit a warm bias. This bias has been corrected as in Hertzog et al. (2004). On the HIBISCUS flights, two kinds of thermistors were used: 120 µm diameter and 240 µm diameter. As expected, the temperatures measured with the largest thermistors have a larger bias than those measured with the smallest one due to larger radiative cooling and heating (Fig. 1). Night-time temperatures measured with the largest sensor are also colder than those measured with the smallest one. This is due to the sensor radiative cooling, which scales as the square of the sensor diameter. With these two sizes, it is possible to roughly estimate the cold bias of the small thermistors' night-time temperatures as 0.1 K. This small bias is neglected hereinafter, and the night-time temperatures measured with the small sensors are used in this paper as a reference.
The MIR balloon (Pommereau and Hauchecorne, 1979) is a hot air balloon heated by solar radiation during daytime and infrared radiation from the Earth during night-time. Therefore its altitude varies from around 15 hPa (27 km) during the day to 40-80 hPa (22-18 km) at night depending on the cloud cover, except during the first 3 days before the complete escape of the helium used for the initial ascent, when the MIR could be as high as 4 hPa. The position and thus the wind are obtained from GPS, just like for the BP balloons. The MIR temperatures have not been used in the present study, which is limited to altitudes close to the tropopause.
Table 1 gives some information on the flights. Among the 6 BPs, one (BP1) was leaking and fell after 13 days. All others stayed in flight until the end of their batteries, between 27 and 80 days depending on the energy consumption of the additional passenger payload flown. The three flights BP5, BP7, and BP8 are closest to the tropical tropopause and their average tropical temperatures are 198-199 K. Among the 3 MIR flown, 2 dropped after 7 and 9 days over the South Pacific Convergence Zone. The third flew for 39 days, for one and a half circumnavigations of the earth between 20°S and 10°S.
Analyses
The ECMWF operational analyses in 2004 are produced by a 4-D variational analysis (Rabier et al., 2000) and are used at 6-hourly resolution. ECMWF T511 fields were extracted on the 60 model levels (spacing ~1.4 km) on a 1.5°×1.5° latitude-longitude grid (from a T79 truncation) and interpolated linearly in between. The interpolation in the vertical is done log-linearly in pressure. The top level is 0.1 hPa (~60 km).
The integration scheme is a 2nd-order Runge-Kutta scheme with a time step of 30 min (BP) or 10 min (MIR). Such an integration should give rise to much smaller errors than other errors connected to trajectory calculations, such as analysis or interpolation errors (Knudsen and Carver, 1994). The 6-hourly NCEP/NCAR reanalyses (Kalnay et al., 1996; Kistler et al., 2000), which are produced by a 3-D variational analysis, were used in a T62 truncation with 28 levels in the vertical up to 10 hPa. Contrary to the ECMWF trajectory calculation, the NCEP/NCAR trajectories use cubic spline interpolation in space and time and a fourth-order Runge-Kutta scheme with a time step of half an hour.
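For concreteness, here is a minimal sketch of one midpoint (2nd-order Runge-Kutta) step of a horizontal trajectory of the kind described above; the `wind` callable is a hypothetical stand-in for the interpolated analysis winds, and the metre-to-degree conversion assumes a spherical Earth.

```python
import numpy as np

EARTH_RADIUS = 6.371e6  # m

def rk2_step(lon, lat, t, dt, wind):
    """One midpoint (2nd-order Runge-Kutta) step of a horizontal trajectory.

    `wind(lon, lat, t)` must return (u, v) in m/s; positions are in degrees.
    """
    def deg_per_s(u, v, lat_deg):
        # convert metric winds to angular velocities on the sphere
        dlat = np.degrees(v / EARTH_RADIUS)
        dlon = np.degrees(u / (EARTH_RADIUS * np.cos(np.radians(lat_deg))))
        return dlon, dlat

    u1, v1 = wind(lon, lat, t)
    dlon1, dlat1 = deg_per_s(u1, v1, lat)
    # evaluate winds at the midpoint of the provisional displacement
    u2, v2 = wind(lon + 0.5 * dt * dlon1, lat + 0.5 * dt * dlat1, t + 0.5 * dt)
    dlon2, dlat2 = deg_per_s(u2, v2, lat + 0.5 * dt * dlat1)
    return lon + dt * dlon2, lat + dt * dlat2
```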
Temperatures and winds
In this section, we compare the ECMWF operational analysis with the observations gathered during the BP flights by the Rumba gondola. The Rumba temperature measurements on the BP balloons show a warm bias during daytime, which has been corrected. Figure 2 shows the histogram of differences between ECMWF and observed temperature, zonal and meridional wind using 6-hourly ECMWF data. The mean bias of ECMWF relative to the balloon measurements is -0.9 K and the standard deviation is 1.3 K, independent of whether the data are from day or night and tropics or mid-latitudes. This cold bias of the ECMWF temperatures is in agreement with comparisons to radiosondes at 100 hPa and was already present in the 1998 analyses (Simmons, 2003). The bias is also seen in comparisons with GPS-derived temperatures (Gobiet et al., 2005). Vial et al. (2001) found a warm bias of about 0.5 K around 1 September 1998 at 60 hPa (20 km). This does not disagree with the present results, since the flights they studied were closer to the equator. Indeed Gobiet et al. (2005), who used GPS-derived temperatures, also indicate a warm bias at 20 km close to the equator.
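The bias and scatter statistics quoted here amount to simple difference statistics over paired samples; a minimal sketch:

```python
import numpy as np

def bias_and_scatter(analysis_values, balloon_values):
    """Mean bias and standard deviation of analysis-minus-observation
    differences, as used for the histograms described above."""
    diff = np.asarray(analysis_values) - np.asarray(balloon_values)
    return diff.mean(), diff.std(ddof=1)

# e.g. for temperatures, a mean near -0.9 K with ~1.3 K scatter would
# reproduce the numbers quoted in the text.
```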
The ECMWF zonal (meridional) velocity has a mean bias of -0.4 (0.0) m/s and a standard deviation of 3.1 (3.5) m/s. If the comparison is limited to the data collected in the tropics (0-30°S), the bias of the zonal velocity is -0.7 m/s, indicating that the ECMWF tropical easterlies are too strong. A spectral analysis (not shown) has shown that much of the scatter in these comparisons is due to meso- and short-scale inertia-gravity waves, which the ECMWF analysis has difficulties in capturing. These waves are more frequent at the tropical tropopause than above. Several factors can explain why the inertia-gravity waves are not well represented by ECMWF. The first is that at the BP flight altitude the ECMWF vertical resolution was about 1 km in 2004, while the dominant vertical wavelength of gravity waves in the lower stratosphere is about 2 km; the vertical resolution was thus a bit too coarse to fully resolve those waves. The second is that analysis outputs are stored every 6 h, which undersamples the model time resolution; gravity waves, which have periods of about 1 day, may be damped by this undersampling. The third is that a major source of gravity waves in the tropics is convection, which is parameterized in the ECMWF model, so the model may miss the physical processes that generate the waves.
Trajectories
To assess how accurate calculated trajectories are, trajectory calculations (Knudsen et al., 2001; Knudsen and Carver, 1994; Hertzog et al., 2004) were started every 2 h along the flight track. In each time step the trajectories calculated for the BP flights were forced to a pressure lying on the same isopycnic (constant density) surface as the balloon, thereby taking care of the vertical motion of the trajectories. The trajectories for the MIR flights were forced to the balloon pressure. Figure 3 shows the flight track for the 79-day-long BP4 flight, and every 12th of the calculated trajectories. Most calculated trajectories stay in the tropical reservoir (i.e. the region between the northern and southern tropical edges) except for a few. One of the trajectories even moves to the Antarctic, just like some of the other BP balloons did. This does not necessarily indicate that the analyzed tropical barrier is leakier than the real one, since trajectory errors could bring the calculated trajectories to regions where transport out of the tropical reservoir does occur. In agreement with the previous section, this figure also shows that the major part of the wave perturbations seen on the BP4 trajectory is not caught by the ECMWF analyses. We have tried to run trajectories at the highest possible resolution (0.5°×0.5° from a T511 truncation) for the second revolution of the balloon (not shown). This leads to small changes in the calculated trajectories, but the major part of the wave perturbations are still not caught.
The horizontal balloon velocity is a very good approximation of the horizontal air velocity (Vial et al., 2001). In order to mimic the balloon behaviour in the vertical, isopycnic trajectories were computed for BP, while MIR trajectories were obtained by forcing the pressure to the observed balloon pressure. With this method only horizontal trajectory errors can be addressed, but these are in fact the most important ones if the vertical transport is calculated with a state-of-the-art radiation code (Knudsen et al., 2001).
Special attention has been paid to the 79-day-long BP4 flight, which remained in the tropics. Figure 4 shows the average spherical distance between the calculated and observed trajectory as a function of trajectory duration. The standard error on the average is calculated with lag-2 h autocorrelations taken into account and is indicated by the shading. The errors for the NCEP/NCAR reanalyses are based on trajectories started every 12 h and are calculated under the assumption that the autocorrelations are the same as for the ECMWF trajectories. The NCEP/NCAR reanalysis trajectory errors are larger than the ECMWF errors at the 68% confidence level except for durations of 5.25-8.25 days, even though the shadings overlap. For durations larger than about 10 days the shadings do not overlap, indicating that the differences are significant with more than 68% confidence. The difference is only significant at the 95% confidence level for a duration of 12 h. The coarse vertical resolution of the NCEP/NCAR reanalyses could be responsible for the differences. Another contributing factor could be the increased dynamical consistency of the ECMWF 4-D variational data assimilation, where departures of observations from a forecast in a 12 h time window are minimized.
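The error metric used throughout is the spherical (great-circle) distance between the calculated and observed positions; a minimal haversine sketch, assuming a spherical Earth of radius 6371 km:

```python
import numpy as np

def spherical_distance(lon1, lat1, lon2, lat2, radius=6.371e6):
    """Great-circle distance (m) between two points given in degrees,
    computed with the haversine formula."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlmb = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2 * radius * np.arcsin(np.sqrt(a))
```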
Figure 4 also shows the trajectory errors for all other flights in 2004. The balloons leaving the tropics were only used until the zero wind line was crossed at about 27°S. The MIR data are only used after the time when the pressure is permanently larger than 10 hPa (except that the MIR mLidar does reach a minimum pressure of 9.2 hPa). The MIR balloons fly at higher altitudes than the BP. In the Arctic, trajectory errors increase with altitude (Knudsen et al., 2001) due to e.g. the decreasing number of radiosondes and their increasing errors. Judging from the trajectory errors on the longest MIR and BP flights, there is no increase with height in the tropics. This result could, however, be influenced by the reduced occurrence of atmospheric waves, which ECMWF has difficulties in catching, along the higher-altitude MIR flights. The longest MIR and BP flights have 3.7 times more calculated trajectories of duration 5 days than all the other flights together. Therefore we concentrate on these two longest flights. The results of the other flights do not seem to be significantly different, given the large confidence limits on these other flights. The trajectory error in the zonal (meridional) direction is defined as the spherical distance at fixed balloon latitude (longitude). For the BP4 flight the trajectory errors are a factor of 2.4 larger in the zonal direction than in the meridional direction after 5 days. This is partially caused by the bias in the zonal wind.
In the Arctic the trajectory errors are approximately halved, being about 270 km after 5 days (Hertzog et al., 2004). This is primarily due to the bias in the meridional wind. This bias may be due to the reduced number of radiosondes and the break-down of geostrophy in the tropics, which makes it difficult to transform satellite observations of temperature-related quantities into winds. In principle it should be possible to correct for such a bias and thereby reduce the trajectory errors. In the Arctic wind speeds are much larger, so trajectory errors relative to the trajectory length (Relative Spherical Distance, RSD) are much smaller.
The trajectory errors after 2 h are a good measure of the errors of the vector wind. For the tropical part of the BP flights this can be transformed to an average vector wind difference of -0.86 m/s, which agrees quite well with the results found in Sect. 4. For the MIR flights the result is -0.58 m/s, indicating a slightly reduced wind error at higher altitudes in the lower stratosphere.
Figure 5 shows the spherical distance between the calculated and observed trajectories for the extratropical part of the flights BP5-8 (i.e. after the time when the zonal wind turned zero). The calculated trajectories were cut off at 79.5°S. Where trajectories were missing due to this cut-off, uncertainties were not calculated. The last 12.75 days of the BP8 flight were removed to avoid trajectories being cut off. For the BP5 flight the trajectory errors are comparable to the tropical errors for BP4 for trajectory durations up to 5 days. Much larger errors occur for the BP7 flight. As seen in Fig. 6, this is due to a substantial part of the calculated trajectories staying in the tropics, whereas the balloon moves towards the South Pole. These trajectories of course have very large errors, but the average error is in fact not significantly larger than the errors of the other flights at 95% confidence, due to very large error bars.
Trajectories close to a flow barrier like the tropical barrier can thus have large errors, because unavoidable small errors can push the calculated trajectory to the wrong side of the barrier, or the barrier can be misplaced in the analyses. In this case the trigger was probably a situation with very low wind speeds, as depicted in Fig. 7. New trajectories were started every 12 h from the red crosses. The balloon makes a loop, which is not caught by the ECMWF trajectories. The arrows show the 80 hPa wind field at the time when the balloon passes the cross in the loop. Situations of low wind speeds are critical for trajectory calculations, since the errors on the analyzed wind do not go to zero as the wind speed does (Knudsen et al., 2001). Most of the trajectories started before or during the loop thus take a more northerly course and end up in the tropics, while most of the trajectories started afterwards move towards the South Pole. Most of the other balloon flights also encountered low wind speeds, but this did not lead to such large trajectory errors because it did not push as large a number of trajectories to the wrong side of a flow barrier.
The ECMWF reanalysis (ERA-40) temperatures show a smaller systematic cold bias around 60 hPa of 0.5 K compared to long-duration balloon flights close to the equator in 1998 (Christensen et al., 2006). For these flights the trajectory errors are about 1000 km after 5 days, which is possibly due to Rossby-gravity waves which ERA-40 is unable to catch. Flights at higher altitudes have trajectory errors of about 500 km after 5 days, which is more in line with the results for the ECMWF operational analyses shown in this paper.
Conclusions
The ECMWF operational temperatures in 2004 show a systematic cold bias of 0.9 K just above the tropical tropopause (50-80 hPa). The easterly zonal winds in this region are too strong by 0.7 m/s. In the southern extratropics the temperature bias is the same and the zonal wind has almost no bias. After 5 days the average trajectory error is about 500 km, when discarding one balloon flight with very large errors. This is true for both the tropics and the southern extratropics, and also for tropical flights at about 13-45 hPa. The absolute trajectory errors are not much larger than the errors in the Arctic, but there the wind speeds are much larger.
Fig. 1. Temperature variations on super-pressure balloon flights (at 50-80 hPa) as a function of the sun zenith angle. Measured temperatures (noisy curves) exhibit a warm bias during day, which is larger for the largest thermistors (grey) than for the smallest ones (black). The temperatures measured with the largest thermistors also show a cold bias during night. The variations shown here are computed with respect to the night-time temperatures measured with the smallest sensor (see text). The smooth curves show the correction applied to the raw measurements.
Fig. 4. Mean tropical trajectory errors for the 2004 flights. Dark and light grey shading gives 68% confidence limits on the means for BP4 using ECMWF and NCEP/NCAR reanalysis, respectively. The BP flight level is usually 52-75 hPa, whereas the MIR flight level is usually 13-45 hPa.
Fig. 5. Mean extra-tropical trajectory errors for the 2004 flights. Thin dashed lines give 68% confidence limits on the means. For comparison the tropical flight BP4 is included.
Fig. 7. Close-up of the BP7 flight track when passing South America (thick red line) and 12-hourly calculated trajectories (thin orange lines) started from the red crosses. Horizontal winds are shown as arrows with a scale such that the north-south distance between the arrow centres equals 20 m/s.
Table 1. Bauru 2004 long-duration balloon flight data: start and end day and duration of the part used in the calculations. The 10-90% quantiles of the pressures and the latitude ranges are shown. The mean wind speed for the BP flights is given in the last column. Values for the tropical part of BP5-8 are in parentheses.
Cocoa floral phenology and pollination: Implications for productivity in Caribbean Islands
Cocoa midges [Forcipomyia sp. (Diptera: Ceratopogonidae)] are major pollinators of cocoa, and it is assumed that increasing the number of fertilized pods and beans may be an approach to enhancing cocoa yield. An insect survey using suction traps was employed to estimate midge population dynamics in three Caribbean territories. Separate studies were conducted on cocoa floral and reproductive phenology, in addition to the evaluation of several naturally occurring substrates. The results indicated that the insect populations determined by the suction traps were low (27.1 ± 3.37 to 53.5 ± 8.47 per transect site). The trees maintained floral prolificacy even though pollination (%) was low for Jamaica (0.91), Trinidad (0.88) and Tobago (0.11). However, pollination improved when the midge pollinator population was increased by augmentation with substrates of cacao pods (5660) and banana pseudo-stem (1885). New pods increased significantly, from <10 pods/tree in the untreated areas to 49 to 76 pods/tree with substrate augmentation. It was evident that the discarded cocoa pod after harvest was a suitable feeding substrate and breeding site for the midge. This information is to be used to advance further studies on plant pheromones, which can serve as attractants to increase pollination/fertilization in cocoa.
INTRODUCTION
The cacao industry is driven by the major international chocolate manufacturers in Europe and the USA. However, all the raw materials are produced in the tropical South and Central America, Africa and the Caribbean (Motamayor et al., 2002). Commercial cacao (Theobroma cacao L.; formerly Sterculiaceae family, reclassified into the Malvaceae family (Alverson et al., 1999)) is a tropical tree (3 to 5 m) derived from varieties belonging to three major groups, viz. Criollo, Forastero and Trinitario (Lachenaud et al., 1997).
The varieties and the hybrids exhibit considerable genetic variability in morphological and physiological traits. Crop growth is highly influenced by environmental conditions, viz. temperature (Daymond and Hadley, 2004), flooding (Sena and Kozlowski, 1986) and water stress (Almeida and Valle, 2007). The bimodal seasons influence the phenological stages of flowering, fruiting and pod growth (Cazorla et al., 1989). The plant produces caulescent flowers, with non-pollinated flowers abscising 24 to 36 h after anthesis (Garcia, 1973). The cacao flower is hermaphrodite and is pollinated by insects, mainly Forcipomyia sp. (Diptera: Ceratopogonidae) (Dias et al., 1997). The proportion of flowers setting pods is very low (0.5 to 5%) (Aneja et al., 1999).
The quality of pollination can depend on two factors: the degree of pollen compatibility and the number of pollen grains deposited on the stigma (Lanaud et al., 1987). It is assumed that pod set improves with increased pollen grains (Hasenstein and Zavada, 2001) and that more pollinations result from the visit of a single pollinator (Yamada and Guries, 1998). An increase in Forcipomyia larvae and pupae associated with rotten banana stems has been shown to produce more cocoa flowers (Young, 1986). Pod yield is influenced by photosynthesis and the partitioning of photo-assimilate (Sounigo et al., 2003).
It is assumed that the midge population can be a limiting factor in the pollination of cocoa, in addition to environmental conditions. However, populations of insect pollinators are often severely disturbed by hurricanes through flooding of essential habitat and the widespread loss of existing flowers. Small, weak-flying insects such as midges are likely to be swept away by high winds. Climate variation, particularly changes in rainfall leading to sporadic or reduced rain, may also affect midges, which normally thrive in moist, humid environments.
Understanding these ecological dynamics can lead to ways of conserving midge populations and mitigating the effects of global climate change and extreme climatic events.The objective of this study is to examine the relationship between the midge population, flower pollination in Trinidad Selected Hybrids (TSH) cacao, and selected weather variables in several different Caribbean cocoa producing islands.
Characteristics of the study area
A multi-location study was conducted during the project period of 2013 to 2016 on several farms in the islands of Trinidad and Tobago (10.667°N, 61.567°W) and Jamaica (18.1824°N, 77.3218°W) in the Anglo-Caribbean, on land previously under natural forest (tropical montane Crappo-guatecare, fine leaf cocorite, black heart) at altitudes of 120 to 330 m (Nelson, 2004). The areas experienced annual average temperatures of 26.5 ± 2.09°C, relative humidity of 86.1 ± 12.6%, and mean monthly rainfall ranging between 19.1 and 235.1 mm (Anon, 2016) (October 2014 - October 2015).
The cocoa varieties were mainly from the Trinidad Selected Hybrids (TSH) (Maharaj et al., 2011), and the trees were in full reproductive phase. The first flowering occurred in early January over a 3-month period, with a second period, depending on the rains, in June. Harvesting usually occurred over a 2-month period around 6 months after the first flowering.
All the islands experienced a bimodal rainfall distribution, with peaks in June and November. The first and second growing seasons typically last from mid-March to mid-July and from mid-August to the end of November, respectively; these are separated by a short dry spell of about two weeks in September, referred to as the petit carême. The major dry season starts in mid-December and lasts until the end of May, and the climate is marked by a high incidence of solar radiation and relatively little variation in day length. All data on temperature and relative humidity were measured using the Davis Wireless Vantage Pro weather station (Model E14062). Rainfall data were taken from the meteorological records of the National Water Resources Agency.
Experimental
Four separate studies were conducted during the period 2013 to 2016, in which the European Union COCOAPOP project was executed, in the following areas:
1. Insect population dynamics.
2. Cocoa floral phenology.
3. Substrate augmentation trials for culture of cocoa midges (Diptera: Ceratopogonidae).
4. Generalized linear modelling of weather, midge dynamics and floral phenology.
Study 1: Insect population dynamics
The cocoa insect population dynamics survey was conducted in the 3 islands on 2 well-established and managed farms that cultivated the cacao TSH variety under similar agronomic practices. The selected farms were of similar altitude (120 m) and agronomic conditions. The study was conducted over a minimum duration of fifteen (15) months (2013-2015); however, the data analysis was confined to 2 complete flowering seasons over a one-year period.
Insect suction traps (Arnold and Chittka, 2012) were set up in 9 representative transects within each cocoa estate in the different territories. These traps were secured onto branches of cocoa trees and powered by 9-volt batteries, and insects were sucked into vials containing 90% ethanol. Insect samples were collected for 2 days/month at each sample site, labelled, and stored properly for midge and other insect counts in the insectary. Collection was timed to the midge life cycle (Figure 1).
Study 2: Cocoa floral phenology
The cocoa floral phenology study was conducted on the same cocoa farms in each island. Over 20 mature cocoa trees (5 to 12 m tall) with 5 cushions/tree were randomly selected and labelled within an experimental area not exceeding 500 m². The study ensured that data were collected from a minimum of 100 plants over 3 consecutive flowering years (2013 to 2015). Observations were conducted monthly on each tree using the modified BBCH scale (Bleiholder et al., 1991), with counts of flower buds, mature flower buds, open flowers, new pods or cherelles, and small pods (5 to 10 mm). The BBCH scale was amended to include the days from the first day buds become visible (FBV) for each stage, which was used to compute the length of each reproductive phase (Figure 2).
Table 1. The principal reproductive growth stages 5 to 7 of T. cacao var. TSH according to the BBCH (Biologische Bundesanstalt, Bundessortenamt and CHemische Industrie, Germany) scale.
Principal growth stage | Code | Description
Stage 5: Inflorescence emergence | 52 | Flower buds expanded; emergence of sepal primordia (bud <1 mm long).
Stage 5: Inflorescence emergence | 59 | Flower bud growth complete (buds 6 mm long and 3 mm wide; pedicel 14 mm); buds still closed.

The BBCH scale (Bleiholder et al., 1991) and the extended BBCH scale (Hack et al., 1992) cover 10 principal growth stages numbered 0 to 9 (Table 1). For the purpose of this study, however, only the 'macro stages' numbered 5 to 7 were considered.
Study 3. Substrate augmentation trials for culture of cocoa midges (Diptera: Ceratopogonidae)

Two (2) separate trials were conducted on 3 commonly found substrates within the fields to determine whether they can augment the midge population as suitable breeding sites (Figures 3 and 4). These trials were confined to Trinidad farms only, as the insectary was located there. The trials carried out over the 2 crop seasons in 2015 were as follows: 1. Field substrate in-situ assessment, and 2. Field augmentation and insectary evaluation.
Field substrate in-situ assessment: During the cropping season of 2015, four (4) cocoa farms were designated for field manipulation to determine whether the substrates had any effect on midge population dynamics. Three substrates were assessed in heaps, viz. rotted cocoa pod (15 kg) (Figure 4), banana pseudo-stem slices (15 kg) (Figure 2) and cocoa leaf litter, each replicated three times per farm. All treatments were moistened (5 L water/heap weekly). The experimental sites (25 m²/substrate) were laid out in a Latin square (3 × 3) design. During the first 2 months, insect populations were monitored for 2 days per month using a standard suction trap placed at the approximate centre of each area. Cocoa floral phenology was also monitored for the duration of the study, which lasted over 6 months.
Field augmentation and insectary evaluation:
The field experimentation was conducted at one farm (Gran Couva, Trinidad). The ceratopogonid midge larvae, after developing in the organic matter, were collected using Berlese funnel traps (Dietick et al., 1959). The substrates were inspected for larger midge larvae (Forcipomyia spp.), which were removed from the substrates and placed in a ball of well-decomposed cocoa pod husk at 100 larvae/vial, with adequate air-flow and temperature (26°C).
Study 4. Generalized linear modelling of midge dynamics, floral phenology and weather variables
The approach was to determine the relative roles of midge population dynamics and cocoa floral and reproductive phenology, and their interaction, under the prevailing weather variables (rainfall and temperature). This study was conducted over the period 2014 to 2015 in the three countries (Trinidad, Tobago and Jamaica) on two estates per country. The data were collected from the previous midge collection and floral phenology trials, together with daily weather data (Table 11) for each location. Best-fit generalized linear models were developed to determine the interactions and their significance.
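As a rough illustration of this kind of model, the sketch below fits a Poisson generalized linear model for a count response with farm as a categorical factor. The data file and column names ("cherelles", "midges", "rainfall", and so on) are illustrative assumptions, not the study's actual variables.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("phenology_midges_weather.csv")  # hypothetical data file

# A Poisson family suits count responses such as cherelles per cushion.
model = smf.glm(
    "cherelles ~ midges + other_insects + rainfall + temperature + C(farm)",
    data=df,
    family=sm.families.Poisson(),
).fit()
print(model.summary())  # coefficients, p-values and fit diagnostics
```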
Data analysis
The counts of flowers and the other parameters taken were pooled for each farm but kept separate for each location. All count data were transformed when necessary using the square-root transformation (√(x + 0.1)) before analysis. Regression analyses were used to determine the relationships between the weather variables (temperature, relative humidity, rainfall and light intensity) and flower production and insect population dynamics, using the MINITAB statistical package.
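A small sketch of the transform and regression described above, interpreting the transform as √(x + 0.1); the example numbers are illustrative, not the study's measurements.

```python
import numpy as np
from scipy import stats

counts = np.array([0, 3, 7, 12, 5])                     # e.g. midges per transect site
rainfall = np.array([19.1, 80.0, 150.2, 235.1, 60.4])   # mm, illustrative values

transformed = np.sqrt(counts + 0.1)  # stabilizes the variance of count data
slope, intercept, r, p, se = stats.linregress(rainfall, transformed)
print(f"slope={slope:.4f}, r^2={r**2:.3f}, p={p:.3f}")
```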
Study 1. Insect population dynamics
There were significant differences in the monthly midge and other insect populations between farms across the 3 territories. There were two distinct, observable population peaks, in May/June and November/January. These periods coincided with the new flushes of cocoa flowers (Figure 4) and the higher rainfall patterns. In Trinidad, the seasonal midge population ranged between 19 ± 3.65 and 53.5 ± 8.47 per transect site, compared to Tobago, where it varied between 22.6 ± 6.47 and 27.1 ± 3.37, and Jamaica, at 21 to 28 ± 4.39 per transect site (Table 3). In all territories, the low-season midge population varied between 2 and 6 midges/transect site. Jamaica (82) and Tobago (72) had higher midge populations than Trinidad (45). The other insect populations were significantly higher than the midges, varying between 1067 and 1547 insects/transect site. This indicated that the midge population was less than 2% of the insects trapped (Table 4).
Study 2. Cocoa floral phenology
The cocoa floral and reproductive phenology followed a similar pattern (Figure 4) to that outlined in the modified model developed by Bleiholder et al. (1991). In Trinidad, the mean number of flowers was 33.6 ± 6.1/cushion, with the highest counts ranging between 40 and 96 flowers/cushion (Tables 2 and 5). This represented the 2 major flowering flushes, which corresponded with the early and late wet seasons, respectively. Tobago experienced a similar weather pattern to Trinidad during that period (Table 11), and the trees in the study exhibited a slightly higher mean flower count per cushion (51.1 ± 7.61). The mature cocoa trees displayed 2 distinct flushes, the first in November/December 2014 (45 to 89) and a second flush (65 to 81) at the beginning of the wet season (May/June 2015). The mean flowers/cushion in Jamaica (32 ± 5.98) did not differ from Trinidad, as the trees were of the same variety and age, and they also displayed two distinct flushes, in Sept/Nov 2014 (29 to 61) and April/June 2015 (63 to 78). The percentage of flowers that were pollinated and successfully fertilized, i.e. flowers → cherelles (0"-2.0"), was higher in Jamaica. This manifested in the pod/cushion yield between countries, with Jamaica (1.5) having higher pollination/fertilization than Trinidad (1.0) and Tobago (<1); yields were very low for that season (Table 9).
Trial 1: Field substrate assessment
The field trials (Table 6) indicated that there were no differences between the 3 substrates (5.0 to 5.4 ± 1.27) during the experimental period. However, during the wet months of July/August 2014, the number of midges caught in the suction traps located in the areas of the banana pseudo-stem and cocoa pod increased compared to the litter substrate. Even so, the cocoa leaf litter was not significantly different from pods or pseudo-stems in August.
The number of midges per suction trap in this trial was consistent with the results obtained in the cocoa insect population dynamics studies (2013/14). The study demonstrated that, regardless of the substrate's quality for improving midge feeding and fecundity, the suction trap itself appeared to be the determining factor and may not actually reflect substrate suitability.
Trial 2: Field manipulative and laboratory evaluations
In this study, no suction traps were used; instead, samples of the substrate were removed and incubated in the insectary, where the emerging larvae were counted and reared to adults. The results differ from those of Trial 1 and reflect the potential midge population when substrates are manipulated in the field.
The fresh cocoa pod left to decay (Table 7) was the preferred substrate for the adult midge to feed on and continue its reproductive cycle (Figure 1). The total midge population in the cocoa pod was 3 to 4 times higher than in the banana pseudo-stem. The data suggested that increasing the breeding sites by augmentation with cocoa pod substrates can improve midge population dynamics in the field (Table 7) and new pod development (Table 8). Further, suction traps are not an effective or reliable indicator of the true insect population dynamics in the cocoa estates.
Study 4. Generalized linear modelling of midge dynamics, floral phenology and weather variable
This study involved data transformation and statistical analysis of observations on cocoa crop reproductive phenology (Table 9) and midge population dynamics (Table 10) over a one-year period, taking into consideration the prevailing weather variables (rainfall and temperature at the different farm locations) (Table 11).
The generalized linear model revealed variations between farms which influenced the yield of flowers and cherelles (Table 12). The variation in rainfall between months also confirmed the bimodal (wet/dry) seasonality, which affected flower emergence and pollination into cherelles. The other main variables in the model (midges, other insects and temperature) were not significant and had no impact on flowering and pollination. Additionally, the analysis did not reveal any interactions between the independent variables on flowers and cherelles (Table 13). The analysis showed that the ratio of flowers to cherelles per cushion varied between territories: Jamaica (33:10), Trinidad (33:0.7) and Tobago (18:0.3). However, these data have to be interpreted in light of the limitations of the suction trap as an indicator of the true midge population, as reported in Study 3. Further, the numbers of flowers were similar
Data were also collected at two (2) sites in Tobago: L'eau Estate (November 2014 - July 2015) and Providence Estate. The 2 estates selected in Jamaica were Orange River (September 2014 - October 2015) and Richmond. The 4 farms/estates in Trinidad were Jude Lee Sam Estate (July 2014 - July 2015), San Juan Estate (February 2015 - July 2015) and San Antonio Estate (February 2015 - July 2015) in Gran Couva, and ECIAF Estate (July 2014 - July 2015) in Centeno.
Table 2. Codes and descriptors used for the cocoa phenological cycle in 6 cocoa farms.
Table 4. Midge population (%) compared to other insects in cocoa farms over the three locations.
Table 5. Cocoa phenological cycle in 6 cocoa farms over 3 islands during a one-year period (2014/15).
Table 8. Cocoa pod yield in farms with substrate augmentation.
Table 9. Cocoa floral phenology and pod yield in 6 cocoa farms over 3 Caribbean islands during a one-year period (2014/15).
Analysis of the Feasibility of Revitalization of the Type A Amplas Terminal and the Performance of the Sisingamangaraja Road Section, Medan
The main objective of this research is to analyze whether the investment plan will be profitable in the future and thus whether it is feasible to implement. In addition, this research analyzes the operational performance of the Sisingamangaraja road section, covering side friction, free-flow speed, travel speed and time, and level of service. The data used are primary and secondary data: primary data were obtained through interviews and observations, while secondary data were obtained from related agencies or services such as the Medan City Transportation Department. The analysis uses two tools, namely investment feasibility analysis and road segment performance analysis. The Net Present Value for the next 10 years is IDR 60,400,000, showing that it is feasible to revitalize the Amplas Type A Terminal; under the NPV acceptance criterion, a positive value means the investment is worth implementing. The calculated IRR of 30.5% is the interest rate at which the present value of future net cash receipts equals the investment. Under this criterion, the investment is accepted if the calculated rate exceeds the required rate; here the required rate is 20%, so the IRR of 30.5% indicates that the revitalization of the Amplas Type A Terminal is feasible. The Profitability Index (PI) is 1.10 (accepted, because PI > 1). From the Payback Period calculation, the payback period for the revitalization of the Amplas Type A Terminal is 22.2 years, shorter than the economic investment period of 35 years, so the Payback Period criterion is also satisfied. The performance analysis of the Jalan Sisingamangaraja section, based on road capacity and traffic volume, shows that congestion problems persist during peak activity hours.
Introduction
The increase in population in the city of Medan, especially along the Sisingamangaraja road, has increased the need for various activities, including trade, education and others. The developments occurring in Medan City must be balanced with a good traffic system and supporting infrastructure. Community activities affect the smoothness of traffic, especially during rush hours. Disruption to the smooth flow of traffic is caused by vehicles entering and exiting schools, markets, street vendors, workshops and passenger pick-up and drop-off points, and by side friction, which reduces the effective width of the road body and increases obstruction (Harahap et al., 2018). Therefore, the revitalization of the Amplas Type A Terminal must continue to prioritize smooth transportation, which can be achieved by implementing an appropriate transportation system and traffic management.
The decision to revitalize the Amplas Type A Terminal is an investment decision, namely the decision to use or allocate funds originating from the National Strategic Project (PSN) in order to increase economic growth through infrastructure development in Indonesia. The government is making efforts to accelerate projects considered strategic and of high urgency so that they can be realized within a short period of time. In this effort, the Government, through the Coordinating Ministry for Economic Affairs, initiated a mechanism to accelerate the provision of infrastructure and issued related regulations as a legal umbrella. Through this mechanism, the Committee for the Acceleration of Priority Infrastructure Provision (KPPIP) selects a list of projects considered strategic and of high urgency and provides facilities to ease project implementation. With these facilities, it is hoped that strategic projects can be realized more quickly (Ervianto, 2017).
There are two types of investment decision in this revitalization, namely short term and long term. Short-term investment decisions concern the use of funds for terminal operations, while long-term investment decisions are investments in fixed assets. A long-term investment is an expenditure expected to produce benefits for more than one year in the future. This investment or capital expenditure relates to the use of funds (cash) to obtain operational assets that will help generate income or reduce costs in the future (Sururi & Agustapraja, 2020).
Investment decisions are very important because they have a significant influence on economic development and growth. They determine not only the level of risk that must be borne but also the level of government profits in the future. In carrying out revitalization, it is therefore not enough to rely on experience and intuition; there is an increasing demand to carry out investment feasibility studies on the business one wants to run or develop. Beyond assessing the feasibility of the business to be built, such a feasibility study has become a necessity for the common good (Wilujeng et al., 2018). Revitalization investment supports comprehensive development in various sectors, including the private business sub-sector, which plays an important role in development and is an inseparable part of economic development. The progress of development in our country is also shown by more and more companies switching to other businesses (Nurhayati & Amalia, 2019).
Sustainable revitalization investment at the Medan Amplas Terminal still depends urgently on political will to motivate the success of its activities. In line with this concept, managers, especially local business managers, face various challenges. One of the most influential is the lack of capital to make terminal activities run well. This lack of capital greatly limits the scope of business activities and, moreover, makes it difficult for businesses to develop their operations. A risk that always accompanies revitalization is precisely this lack of capital (funds) for further development.
The phenomenon illustrated by the revitalization of the Amplas Type A Terminal is that conditions on the roads around the terminal differ greatly from the strategic plans made during the planning stage. On road sections, especially Jalan Sisingamangaraja, traffic problems often occur, such as increased delays at certain times due to high community activity accompanied by economic and educational activities, plus side friction arising from vehicles dropping off or picking up students and from buying and selling transactions at the market.
The problem of revitalizing the Amplas terminal must therefore be studied in depth in order to improve the performance of increasingly congested roads, so a study was conducted to determine the traffic conditions on Jalan Sisingamangaraja. This research is needed to identify the problems occurring on Jalan Sisingamangaraja so that the right solutions can be found to prevent bigger traffic problems; it may also be necessary to divide and divert some of the traffic load to other roads with the aim of reducing the volume of traffic entering Sisingamangaraja.
Literature Review: Investment Theory and Investment Feasibility
The key to a company's success is determined by whether the management function runs in accordance with the company's development and adjusts to economic conditions. The management function is decisive in achieving company goals, with each function paying attention to the obstacles that must be overcome. Investing in companies can be categorized as investing in the future over a fairly long period of time, which is how economists understand investment (Witjaksono, 2020). Proposals to invest funds, usually called capital, are analyzed over time in terms of the turnover rate, and the money invested is expected to return in the future (Sudrajat & Rudianto, 2019).
Regional Development Through Revitalization Prospects and Feasibility
This revitalization design object has good prospects for meeting society's increasing needs, especially in the field of transportation facilities and infrastructure. The increasing need for transportation facilities and infrastructure makes the existence of terminals capable of serving the community ever more necessary. In particular, with the Amplas terminal being a type A terminal in the city of Medan, it is hoped that revitalization can make it a better terminal for meeting the needs of the community (Dimas et al., 2023).
This object is in accordance with Medan City Regional Regulation Number 1 of 2014 concerning the Medan City Regional Spatial Plan for 2014-2034 and is suitable for revitalization. Amplas Terminal is one of the inter-provincial terminals still actively used today. However, the terminal is sometimes not used according to its function: public transportation in the city often causes traffic jams outside the terminal by not using the facilities inside it. This is caused by poor circulation arrangements within the terminal and by unclear facilities for passengers to find public transport (Martaningtyas, 2023).
Road Segment Performance Evaluation
Evaluation of road performance is carried out by analyzing side friction, slow-vehicle factors, free-flow speed and level of service. Based on MKJI (2009), side friction comprises interactions between traffic and the various activities beside the road that can reduce the saturation flow on the road and can also affect its capacity and performance.
Based on MKJI (2009), the slow vehicles referred to are bicycles, carts, pedicabs and wagons. Vehicles moving at fairly low speeds on a road section disrupt the speed of the other vehicles using that section. A slow vehicle is therefore also a component that influences the side-friction value.
Based on MKJI (2009), free-flow speed is defined as the speed at zero flow, namely the speed a driver would choose when driving a motorized vehicle without being influenced by other motorized vehicles on the road.
Methodology
The type of research used here is quantitative descriptive research, which includes assessing attitudes or opinions towards individuals, organizations, situations or procedures. The aim is to describe a situation or phenomenon as it is; the results of the observations are tabulated in quantitative form and analyzed using quantitative formulas.
The data used in this research are primary data and secondary data. Primary data were obtained through interviews and observations; interviews were conducted through in-depth direct interaction and communication with respondents. Secondary data were obtained from related agencies or services, such as the Medan City Transportation Department.
The analysis of the feasibility of revitalizing the Amplas Type A Terminal uses the following analyses: 1. Investment feasibility analysis, and 2. Road segment performance analysis.
Result
Terminal revitalization restores the function of a terminal that has begun to lose its role, so that the revitalized terminal can follow trends and meet user needs now and in the future.
Investment Feasibility Assessment
The investment required for the Amplas Type A Terminal revitalization project is explained in the table below. Investment will be carried out in stages: the initial outlay for the first stage requires an investment of IDR 15,521,064,991, and in the second year the second phase of investment follows, namely IDR 24,390,243,938. The revitalization of the Amplas Type A Terminal therefore has a total investment value of IDR 39,911,308,930, with operation starting from the beginning of the third year. The source of funding is 100% internal capital from the central government through the National Strategic Project budget, without external or bank loans. The revitalization plan is estimated to have a useful life of 35 years, with an estimated residual value of IDR 13,968,958,126. The effective interest rate is estimated at 20% per year. On this basis, feasibility was assessed for the next 10 years of operation.

The Sisingamangaraja Road performance analysis addresses the essence of the research. The road section has a geometric type of four-lane, two-way undivided road (without median) (4/2UD), with a width per direction of 6.5 m (3.25 m per lane) and a shoulder width of 2.5 m. Jalan Sisingamangaraja functions as a city road in good condition. Traffic volume on Jalan Sisingamangaraja is also very high; volumes were recorded on peak days and hours for ease of interpretation. As a sample, from data in the out-of-town and in-town directions, the peak volume during the Monday afternoon rush hour was 6,266 ≥ 3,700, while in the opposite direction during the morning rush hour it was 6,178 ≥ 3,700. Peak traffic volumes that exceed the existing standard limit should be diverted so that traffic jams can be avoided.
The Sisingamangaraja road section serves a population of 1,753,092 people, corresponding to a city-size class of 1.0-3.0 million. The largest traffic composition for both directions occurs in the inner-city direction, with a total of 24,245 vehicles on Monday: LV 18.42%, HV 0.62%, MC 80.00%, and UM 0.96%. The composition shows that the number of motorbike users on Jalan Sisingamangaraja is very high. Motorbike users should be encouraged to switch to public transportation, provided mass transit is made as good and comfortable as possible so that people are attracted to it, in order to minimize congestion.
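MKJI-style capacity analysis normally converts such a mixed count into passenger car units (pcu) with equivalence factors before comparing flow against capacity. The following Python lines are a minimal sketch of that conversion; the emp values used are typical illustrative assumptions for an urban 4/2UD road, not figures taken from this study, and the official values should be read from MKJI.

# Illustrative emp factors (assumptions, not the study's values).
# MKJI sometimes treats UM as side friction rather than traffic;
# it is included here only for illustration.
EMP = {"LV": 1.0, "HV": 1.3, "MC": 0.4, "UM": 1.0}

def flow_in_pcu(counts):
    # counts: vehicles per period by class -> flow in pcu per period
    return sum(EMP[cls] * q for cls, q in counts.items())

total = 24_245  # Monday inner-city total reported above
counts = {"LV": 0.1842 * total, "HV": 0.0062 * total,
          "MC": 0.8000 * total, "UM": 0.0096 * total}
print(f"{flow_in_pcu(counts):,.0f} pcu")  # about 12,650 pcu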
The average speed on Jalan Sisingamangaraja is dominated by motorbikes. Based on the traffic speed sample obtained, the total flow from both directions occurred on Sunday from outside the city towards Amplas, with average speeds of MC 46.73 km/h, LV 41.47 km/h, HV 37.10 km/h, and UM 9.71 km/h. In the field, especially during peak hours, traffic flow is very high and so are the side obstacles, so firm action is needed to reduce side obstacles and keep drivers' speeds stable.
Traffic density on Jalan Sisingamangaraja is likewise dominated by motorbikes. The largest density in both directions occurred on Monday in the out-of-town direction towards Amplas: MC 529.20 vehicles/hour, LV 137.75 vehicles/hour, HV 5.17 vehicles/hour, and UM 28.86 vehicles/hour. These figures should be monitored in anticipation of traffic flow and density continuing to increase; the fundamental flow-speed-density relation, sketched below, makes such checks straightforward.
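For reference, density, flow, and space-mean speed are linked by the fundamental relation q = k · v, so any one of the three can be checked from the other two. A minimal sketch in Python, with made-up example inputs:

def density(flow, speed):
    # k = q / v: density in veh/km from flow (veh/hour) and speed (km/hour)
    return flow / speed

q = 1_200   # veh/hour, made-up example flow
v = 40.0    # km/hour, made-up example space-mean speed
print(f"k = {density(q, v):.1f} veh/km")  # k = 30.0 veh/km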
Discussion
The Net Present Value obtained for the next 10 years is IDR 60,400,000, which shows that revitalizing the Amplas Type A Terminal is feasible: under the NPV acceptance criterion, an investment is worth implementing if the calculated value is positive. The calculated IRR of 30.5% is the interest rate at which the present value of the net cash receipts equals the present value of the investment. Under this method the investment is accepted if the calculated rate is greater than the required rate; here the required rate is 20%, so an IRR of 30.5% means the revitalization is feasible. The Profitability Index (PI) is 1.10, which is accepted because PI > 1. The Payback Period for the revitalization is 22.2 years, shorter than the 35-year economic life of the investment, so the Payback Period criterion is also satisfied. These results are in line with the theory that investment is the placement of a certain amount of funds in the hope of maintaining value, increasing value, or providing a positive return (Sutha, 2000; Webster, 1999; Lypsey, 1997). The financial aspect is a key aspect of a feasibility study: if the other aspects are feasible but the financial aspect is not, the proposal will be rejected because it does not provide economic benefits.
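The four criteria discussed here are mechanical once the yearly cash flows are known. The following Python sketch shows the calculations in miniature; the cash-flow series is left as the caller's input (the study's own series is not reproduced here), while the PI check uses the figures reported in this paper.

def npv(rate, cashflows):
    # Net Present Value; cashflows[0] is the initial outlay (negative).
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=0.0, hi=2.0, tol=1e-7):
    # Internal Rate of Return by bisection; assumes NPV changes sign on [lo, hi].
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid, cashflows) > 0 else (lo, mid)
    return (lo + hi) / 2

def payback_period(cashflows):
    # Whole years until cumulative (undiscounted) cash flow turns non-negative;
    # the paper interpolates to a fractional value (22.2 years).
    total = 0.0
    for t, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return t
    return None  # never recovered within the horizon

# Profitability Index with the figures reported in this study:
pv_proceeds = 43_746_393_626   # PV of proceeds (Rp)
outlay = 39_911_308_930        # total investment (Rp)
print(f"PI = {pv_proceeds / outlay:.2f}")  # ~1.10, as reported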
The performance of the Jalan Sisingamangaraja section, determined from road capacity and traffic volume, shows that congestion problems remain during peak activity hours. Based on the data above, side friction on Jalan Sisingamangaraja is dominated on Mondays and Saturdays by parked and stopped vehicles, and on Sundays by stopped and parked vehicles. Free flow speed is the speed at the zero-flow level, namely the speed a driver would choose when driving a motor vehicle without being influenced by other motor vehicles on the road.
The level of service on the Sisingamangaraja road section can be determined from the degree of saturation, which remains in class C. The average speed on the section is dominated by motorbikes; based on the speed sample for the total flow from both directions on Sunday, out of town towards Amplas, the averages were MC 46.73 km/h, LV 41.47 km/h, HV 37.10 km/h, and UM 9.71 km/h. In the field, especially during peak hours, traffic flow and side obstacles are both very high, so firm action is needed to reduce side obstacles and keep drivers' speeds stable.
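The level-of-service statement above follows from the degree of saturation, DS = Q/C (flow over capacity). A minimal classification sketch is below; the band edges used are commonly cited MKJI-style values, included here as assumptions that should be checked against the official manual.

def level_of_service(flow_pcu, capacity_pcu):
    # DS = Q / C; band edges are illustrative MKJI-style values.
    ds = flow_pcu / capacity_pcu
    for limit, los in [(0.20, "A"), (0.44, "B"), (0.74, "C"),
                       (0.84, "D"), (1.00, "E")]:
        if ds <= limit:
            return ds, los
    return ds, "F"

# Hypothetical example: a DS near 0.6 falls in class C, consistent
# with the finding reported for Jalan Sisingamangaraja.
print(level_of_service(4_500, 7_500))  # (0.6, 'C')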
Figure 1. Framework
Table 1. List of Amplas Type A Terminal revitalization investments, Phase I (total Rp 15,521,064,991), and list of investments for Terminal Type A, Phase II (total Rp 24,390,243,938).
Table 2. Feasibility of revitalizing Amplas Terminal A. Source: Data Analysis Results, 2023.
Based on Table 2, the economic evaluation of the investment project is calculated as follows: Profitability Index (PI) = PV of proceeds / total investment = Rp 43,746,393,626 / Rp 39,911,308,930 ≈ 1.10.
Anatomising the Problems of Nigeria: Is It Mainly Anchored on Weakness of State Institutions or Otherwise?
The checkered history of Nigeria since her independence in 1960 is evidenced in the failures of successive governments to deliver the requisite infrastructure that can lead to industrialization and sustainable development with a strong economic base. This research work critically examines the assertion that the problem of Nigeria is mainly anchored on the weakness of state institutions such as the Police, the Armed Forces, and the Civil Service. While agreeing that weak institutions are a contributory factor to Nigeria's unfortunate status as a failed state, the research exposes myriad other factors that have contributed in no small measure to the Nigerian problems: military incursion into governance, leading to the enthronement of ethnicity and mediocrity; religious intolerance; rigged electoral processes; a constitutional framework that abhors competition and adopts a feeding-bottle approach to the constituent states of the federation, which monthly rush to Abuja to share oil revenue; the mono-economic base and lack of diversification; obsolete laws; lack of trust amongst ethnic groups; marginalization of some sections of the country; low regard for lives and property; corruption; cultural differences; the roguish and unholy marriage called the amalgamation of 1914; the colonial objective of exploiting and prospecting our natural resources; fear of domination by the North; the transfer of the apparatus of power to an unprepared North at the time of independence; and more. Together these appear to establish that our problems transcend weak institutions of state, which are but one contributory factor. The scope of this research is limited to exposing these other factors as well as the weak institutions, and it answers the question in the negative. Even though this work suffers from a dearth of empirical evidence and materials, the doctrinal approach employed in interrogating the other contributory factors to the Nigerian problems, using primary, secondary, and tertiary sources, reaches the conclusion that even if strong institutions are developed, failure of government in the Nigerian state may still result from these other factors. It recommends that, to solve the Nigerian problems, we must look beyond weak institutions.
we cannot allow that to happen. 1
Insecurity
The locations of several Internally Displaced Persons (IDP) camps littered across the country are clear indicators of chronic and sustained human flight as a result of these conflicts. The farmers-Fulani herdsmen conflict has assumed such disgraceful proportions that hardly a day passes without the media reporting several killings by armed bandits. Suicide bombings have become more or less a way of life in Nigeria. The killings, maiming, rape, arson, and kidnapping by the Boko Haram group 2 have attracted global attention. About 276 schoolgirls were abducted from their school in Chibok 3 in 2014. The pleas by the parents of the victims and by several human rights organizations, such as the "Bring Back Our Girls" group led by the former minister for education, Oby Ezekwesili, and even by the wife of the former US President, Michelle Obama, fell on deaf ears. The former US first lady said, "In these girls, Barack and I see our own daughters." 4 It is pertinent to note that many of the abducted schoolgirls are Christians who were forcefully converted to Islam. Boko Haram released a video showing the missing girls and alleging that they had converted to Islam and would not be released until all militant prisoners were freed. 5 As if enough was not enough, on February 19, 2018, at 5.30 pm, Nigerians were again served another dish: the abduction of 110 innocent and defenseless girls from a government-owned secondary school in Dapchi, Yobe State of Nigeria. 6 The belief is widely held that this particular abduction was stage-managed by the Buhari administration to score a cheap political point about the regime's readiness for quick response, in contradistinction to the Goodluck Jonathan 7 administration's inability to release the Chibok girls. These spates of kidnappings have made parents, particularly in the North-Eastern part of Nigeria, withdraw their female children from schools, since the girls now appear to be obvious easy targets. This has had a tremendous impact on girl-child education, which by necessary implication has a grave negative impact on Nigerian society generally.
The arraignment of the suspected billionaire kidnapper, Chukwudumeme Onwuamadike, also known as Evans, 8 and several other kidnapping trials have yielded no positive result, as the prevalence has reached alarming proportions, leading some States, like Edo, to make kidnapping a capital offence. Nigeria has a big problem here.
Corruption
The word "corruption" lacks a universally acceptable definition because different people perceive the word differently. From a legal point of view, corruption is "the act of doing something with intent to give some advantage inconsistent with official duty and the rights of others." 9 From an economic standpoint, "corruption is rent-seeking behavior by public officials through the exploitation of their monopoly discretionary powers." 10 Socially speaking, it is the violation of socially accepted norms of duty and welfare. 11 Within the political space, corruption may be defined as the synthesis of the misfit between the private accumulation ideals of capitalism and the public welfare virtues of democracy. 12 In spite of the available diverse definitions, "corruption is consensually agreed to involve the misuse of public power for private gain." 1 It is therefore safer, for reasons of clarity, to adopt a narrow, enumerative definition of the conduct, by specifying corruption types such as bribery, trading in influence, embezzlement, extortion, fraud, favoritism, illegal political party financing, and abuse of discretion. 2 Corruption is endemic in Nigeria when viewed from the angle of how widespread it is, its scale, and the class of persons involved in the crime. It is also recognized as a category of economic and financial crime in Nigeria. 3 Igbinedion S. A. put it succinctly as follows: "It pervades every stratum of public office and constitutes a means to the end of private enrichment at the expense of the common good. Noteworthy is the fact that public office is not merely a medium of illicit accumulation of public wealth; it constitutes the private estates of corrupt public officials who have routinised plunder of public wealth." 4 The embezzlement or outright plunder of the Nigerian treasury by the former dictator and self-styled President of Nigeria, Ibrahim Babangida, still leaves a sour taste in the mouth. Babangida and his cronies are believed to have misappropriated $12.2 billion out of the $12.4 billion in the Dedicated Account with the Central Bank of Nigeria (CBN). 5 The late Nigerian dictator, General Sani Abacha, also corruptly took from the CBN between $2.3 and $2.8 billion. 6 Till date, the government of Nigeria is still trying to recover this loot. It is interesting to note that the same ugly situations encapsulated in the broadcast by Sani Abacha heralding the Buhari military regime on December 31, 1983, which complained of "… indiscipline, and continue to proliferate public appointments in complete disregard of our stark economic realities…", 7 became even worse during the Babangida and Abacha eras. Interestingly, the above scenario, which aptly captured the Nigerian situation then, has not changed even now in all respects; in some cases it has become even worse. For example, the endemic nature of corruption in the country is, perhaps, manifested by the fact that, from 1996 to 2005, the CPI 8 consistently perceived Nigeria to be in the category of the sixth most corrupt countries in the world. 9 The "Abacha loot" has entered the Nigerian lexicon of corruption and is estimated at about $16 billion. R. Baker summarized it in the following words: "The regime … most of this to the personal accounts of Abacha and his immediate family members."
Remember the $214 million National Identity Card Scheme scam; the N17 billion stolen by the former Inspector General of Police, Tafa Balogun; the N1.16 billion by the former Governor of Plateau State, Joshua Dariye, who was only recently jailed for 14 years after so many years; the N124 billion by the former Governor of Bayelsa State, the late Diepreye Alamieyeseigha; and the government of Olusegun Obasanjo that allegedly spent $16 billion to provide electricity for the country, yet we are still in darkness while the beneficiaries of the loot live in luxury. 2 Not to forget the persistent fuel subsidy scandal. 3 According to the House of Representatives Ad-hoc Committee on Fuel Subsidy Report, the difference between the N245 billion appropriated for fuel subsidy in 2011 and the N1.3 trillion actually paid to marketers largely represented plunder of national wealth. What of the humongous N273.9 billion pension scheme scam between 2005 and 2011 alone, which the Senator Aloysius Etok-led Senate Joint Committee on Public Service and Establishment and State and Local Government Administration described as "syndicated and institutionalized corruption, fraud and embezzlement in the management of pension funds in the country." 4 The very funny N270 million IDP camp grass-cutting scandal involving the Secretary to the Federal Government, Babachir Lawal, is still fresh in our memory. 5 Sambo Dasuki is still languishing in jail for the $2.1 billion arms scandal. 6 Nigeria's problems can only be summarized here, as no single piece of research can exhaustively treat all of them.
Other Problems
The health sector is not spared, with incessant strike actions that have not changed the status of our hospitals from "mere consulting clinics" without drugs. The educational sector is no better. The poverty level in the country is unimaginable, and we have been labeled the "poverty capital of the world." Unemployment has reached a disgraceful all-time high. The inflation rate is one of the highest globally. The list is endless, and there appears to be no future for the teeming Nigerian youths; we appear to have squandered our tomorrow today. Militancy and civil unrest are the order of the day. Cries of marginalization and self-determination as fallouts of bad government policies aimed at keeping down certain sections of the country; our unsuccessful attempts at democracy with rigged electoral processes; religious intolerance; the enthronement of mediocrity in place of meritocracy; the shameful, uncertain census figures; bad government policies and visionless leadership; high crime rates; human rights abuses; child labor and trafficking; a steadily dwindling economy; nepotism; tribalism; favoritism; civil war; the Boko Haram menace; banditry; Fulani herdsmen attacks; religious upheavals; lack of trust among the constituent ethnic groups; political instability; insincerity; lack of patriotism; bad governance; no identifiable good leaders; and an almost zero international respect are only some of our very many problems.
Causes of the Problems
Having sufficiently dealt with the problems existing within the Nigerian State, our next task will be to trace the root causes of these problems and to find out whether it is mainly anchored on the weakness of state institutions. In this respect, two key words call for analysis. That is "institution" and "weak".
The Oxford Advanced Learner's Dictionary 7 defines "institution" as "a large important organization that has a particular purpose". It could be described as an agency through which government works: an organization, establishment, foundation, society, or the like, devoted to the promotion of a particular cause or program, especially one of a public, educational, or charitable character. 8 "Weak" means not physically strong; easy to influence; not having much power; or not likely to be believed or persuasive. 9 A weak institution therefore means an institution that has become incapable of achieving the purpose for which it was set up.
The Amalgamation
Cultural differences between the constituent ethnic groups that make up Nigeria, and the mistrust and divisiveness between them, cannot be traced to any weak institution. Nigeria as a federation is the product of a British experiment in political cloning, having come into existence in 1914 by virtue of the amalgamation of the Northern and Southern protectorates of the country by Lord Lugard, the first Governor-General. 1 This union has created endless animosity between the two protectorates because neither of them was prepared for it. The WILL has this to say of the amalgamation: "The ill-conceived connubial resolution that brought Southern Nigeria and Northern Nigeria together in 1914 is up till today being debated as the basis of the problems we are facing as a nation… In retrospect, Southern Nigeria and Northern Nigeria were coalesced together in 1914 to form a single colony of Nigeria. The unification was consummated purely for economic reasons rather than political ones. It was a union marshaled and carried out without any form of consultation between the South and the North. History has it that the Northern Nigeria Protectorate had a budget deficit, and the colonial administration sought to use the budget surpluses in Southern Nigeria to offset the deficit of Northern Nigeria… Despite the fact that the unification process developed embryonic problems and was greatly undermined by the persistence of different regional perspectives on governance between the Northern and Southern provinces, the colonial masters never deemed it fit to put up an ameliorative process that could have made the forced marriage work… Today, many observers of our national polity are of the opinion that Nigeria's myriad problems are a result of the 1914 amalgamation. This is why, up till this moment, the relationship between the two is one based on mutual suspicion and is responsible for the country's retrogressive nature among the comity of nations… It has always been a cat-and-mouse relationship in which every ethnic group tries to outsmart the others in an existential 'rat-race'." 2 Nothing can be clearer or more direct in the conclusion that our problems as a country are not mainly anchored on weak institutions. It must be noted that at this stage of our historical evolution, not much had been established as institutions. At various stages of this matrimony, Nigeria has recorded a series of events including a civil war and, currently, the Boko Haram menace, banditry, Fulani herdsmen attacks, insurgency, religious upheavals, mistrust, and more. The amalgamation is morbidly reflected in all our ways of life, even as a sovereign nation. The touted "unity in diversity" is an illusion, as constituent ethnic groups continue to champion their causes as tribalists rather than as nationalists. The question begging for an answer is: how long do we continue to strive in deceit and treachery when we are all aware that the "marriage" has brought us more loss than gain? "From the way the 1914 amalgamation was conceived, the union was never meant to be a political elixir but an ill-conceived palliative economic measure." 3 It is safe to conclude that almost all of Nigeria's problems came post-amalgamation, as Nigeria only started existing after the inglorious amalgamation. Lord Lugard was criticized not only for basing his rule of Nigeria on a military system but also for running the country with half of each year spent in England, distant from the realities of Africa, and compelling his subordinates to delay decisions on many important matters until he returned.
While Southern colonial administrators welcomed amalgamation as an opportunity for imperial expansion, their counterparts in the Northern Provence believed that it was injurious to the interest of the areas they administered because of their relative backwardness and that it was their duty to resist the advance of southern influences and culture into the north. Southerners, on the other hand, were not eager to embrace the extension of legislation originally meant for the north to the south.
Richard Akinjide (SAN) summarised his view of the amalgamation as a fraud. The exposition by the learned Senior Advocate of Nigeria is particularly revealing and instructive, as it exposes the very root cause of the Nigerian problem, tracing it from the dispatches made to London by Lord Lugard. Perhaps, after reading it, you will completely agree that this is majorly our problem as a country, and that we need to go back to the drawing board and decide whether we really need to continue together as a country. The conclusion of the learned icon was emphatic: "In fact, the so-called Nigeria created in 1914 was a complete fraud. It was created not in the interest of Nigeria or Nigerians but in the interest of the British." 2 Tracing our problem from 1914 and narrowing it to 1960, Bello was no less emphatic. According to him, "…the political history of the country since 1960 till now is characterized by corruption, nepotism, favoritism, bad governance (sic), tribalism, ethnic and religious upheavals, lack of trust, political instability, insincerity, lack of patriotism etc. As a result of all these, Nigeria as a country is yet to evolve into a nation and can be described more as a ship without anchor or, to use the words of Chief Obafemi Awolowo, a mere geographic expression." (sic) 3
Manipulated Census Figures
Census figures are the basic ingredient upon which any meaningful economic planning can be based. Without a sincere census, there is hardly anything the best economic team can achieve in terms of good planning and adequate provision of infrastructure. How would a country conduct a free and fair election without it? How do we know the number of aged people to cater for, or the size of the productive youth population? How can we tackle unemployment, or provide adequate electricity, or run a good public health sector? In fact, almost everything you want to do in any economy is predicated on the population census figures. It is very disgraceful that, to this day, the Nigerian population figure is a mere estimate, a guess, and an approximation. This is part of the reason for the dwindling economy with its policy somersaults. At one time our population was put at "about 120 million"; at other times it was said to be 150 million; it is now estimated at 180 million. The policy of distributing the national income amongst the constituent states on the basis of population appears to be the reason for the manipulation of census figures. It is also part of the handiwork of political gladiators who prepare fertile grounds upon which elections can be rigged in their favor.
Over Politicization of Everything in Nigeria
This is another major contributor to the Nigerian problem. Section 14(1) of the 1999 Constitution regarding Federal Character has been so abused that we now have a predominance of one ethnic group in charge of the security apparatus of the country, where all the service chiefs are from the North, including the Minister for Defense, the Inspector General of Police, and the National Security Adviser. This has led to serious suspicion and the mishandling of security issues in the country. A very influential Yoruba monarch once threatened to drown Igbo residents of Lagos in the lagoon if they failed to vote for his preferred APC candidate.
Disobedience to Court Orders
Court orders are flagrantly disobeyed by the executive. If there is anything the current Buhari administration is increasingly notorious for, it is the manner in which it disregards court orders. The former National Security Adviser, Col. Sambo Dasuki (rtd), Sheikh Ibrahim El-Zakzaky 1 and his wife, and several others have at different times secured judgments admitting them to bail, but these orders have been completely ignored. Disobedience to court orders is not only a constitutional issue 2 but a threat to democracy and a recipe for disaster. Today there are over 100 such decisions being disregarded, to the extent that State Governments have taken a cue from the Federal Government in this regard. In many cases, Attorneys-General have filed appeals and motions for stay of execution just to circumvent the law. It is no longer the rule of law but the rule of might, notwithstanding the Supreme Court's unambiguous pronouncement in Oguebego v PDP 3 that any person against whom a decision of a court is given is duty-bound to obey it, irrespective of whether that person is of the opinion that the order is void or perverse; he is bound to obey the order until it is set aside. Self-help or disobedience to court orders by government at all levels is an attack on the rule of law and due process, and can breed anarchy, dictatorship, and totalitarianism, which are the antithesis of democracy, as was held in N.B.A v Henkyaa. 4 This is not the result of any weak institution: the judiciary in these instances did its work, only for the "almighty" executive to disobey and manipulate it in its favor.
Military Incursion into Governance
Nigeria's first military coup was not a result of weak institutions but largely of the mismanagement of our electoral processes by politicians, as well as the several killings that took place then. There was the Igbo pogrom of Jos in 1945 and of Kano in 1953. Military rule in Nigeria first started on January 15, 1966, when a group of army officers came to correct these ills and overthrew the NPC-NNDP government of Prime Minister Abubakar Tafawa Balewa. Major General Johnson Aguiyi-Ironsi was made the head of the Federal Military Government of Nigeria. The coup was unjustifiably baptized an "Igbo coup" even though it was quelled by Ironsi, an Igbo man. The retaliatory coup of 29 July 1966 followed, killing Ironsi and many Igbo officers and civilians and enthroning the government of General Yakubu Gowon. The Igbo pogrom continued and eventually led to the Nigerian-Biafran civil war. The military, having tasted power, became corrupt and unwilling to relinquish power to a democratic civilian government. It is noteworthy that subsequent military governments were of northern extraction, from Major General Gowon (30 July 1966 to 30 July 1975) to Murtala Muhammed, who took over and was killed in the Dimka coup in 1976; then Lt. General Olusegun Obasanjo, from the south, became Head of State by accident, being the most senior officer at the time. He handed over to the Shehu Shagari-led civilian administration in 1979, which was again overthrown by the military in 1983. General Buhari became Head of State, and in succession Ibrahim Babangida, Sani Abacha, and Abdulsalami Abubakar, all from the north, took turns. 5 Among the other reasons for military intervention in Nigeria's governance were tribal loyalty, regional differences, politicization of the army, corruption, a low level of economic development, and infrastructural decay. The military also came with incompetent personnel, intolerance of criticism, dictatorship, a non-independent judiciary, mismanagement of public funds, violation of human rights, disrespect for the rule of law, corruption, and the Nigerian civil war. These problems remain with us even under a democratic government. The military have continued to recycle themselves and come back as civilian presidents, as Obasanjo and Buhari have done. We do not know who will come next.
The argument here is not that there is no good side to military regimes. Military regimes that do not stay long in power are in a unique position to carry out corrective reform and bequeath democracy. But as Max Siollun said: "…once they are in power for a prolonged time they begin to act in a more circumspect manner which robs them of the decisiveness, promptness and precision which are hallmarks of a truly corrective regime. They then become sluggish and reluctant to relinquish power… the military transferred its doctrinal attachment to regimentation, uniformity and the suppression of dissent into the political arena. It was unable to shake off its battlefield mentality and lacked the courage to terminate its own political life. The military failed in the political arena." 6 On this score, Claude Ake added his voice: "The military and democracy are in dialectical opposition. The military can never engender democracy because it is the antithesis of democracy in regard to its norms, values, purposes and structure. The military addresses the extreme and the extra-ordinary while democracy addresses the routine; the military values discipline and hierarchy; democracy, freedom and equality. The military is a chain of command; democracy is a benign anarchy of diversity. Democracy presupposes human sociability; the military presupposes its total absence, the inhuman extremity of killing the opposition. The military demands submission; democracy enjoins participation; one is a tool of violence, the other a means of consensus building for peaceful co-existence." 1 Flowing from the above, there is no doubt that the military incursion into governance and democracy was, and still is, a huge mistake that has continued to relegate our practice of democracy to a level of militarism and has thus hindered the sustained economic and political development of the country.
The Nigerian Civil War Philosophy
The foundational philosophy of the Nigerian State, formulated to prosecute the Nigerian Civil War, has been the guiding principle of successive regimes, military and civilian alike. It is expressed in the slogan "to keep Nigeria one is a task that must be accomplished." At some point it was replaced with another, no less barbaric slogan: "Nigeria's unity is not negotiable." 2 The respected author Chudi Offodile, analyzing the events of the Nigerian civil war and its aftermath, summed it up beautifully. It must be recalled that, in the historical antecedents of the Nigerian-Biafran civil war, Biafra was rated the fastest growing economy in the world. They refined their own crude oil; manufactured war weapons, including the "Ogbunigwe" bombs; and built bunkers, machine guns, airports, and more. Today, the Nigerian government cannot refine its own crude oil. Our airports are still built by foreign engineers. We go begging countries to sell weapons to us. Chudi Offodile predicts that "nothing will change until Nigeria finds the proper foundational matrix to galvanize the talents and resources of the Nigerian people and create a productive economy." 4 Every other pretentious adjustment will not alter the inevitable consequence of Nigeria's foundational contradictions, which is state failure. Those with a clear vision of this have already started calling for restructuring or self-determination.
Mono Economic Base
Before the discovery of crude oil in Nigeria, we used to have oil palm in Eastern Nigeria, cocoa in the West, and the groundnut pyramids of the North; we produced cotton too. All were produced in export quantities, and the economy boomed. As soon as oil was discovered, we appeared to jettison other means of generating foreign exchange and concentrated on oil. In the process we became lazy, and economic development fluctuated correspondingly with international oil prices. Our economy became based solely on easy oil money, so that other sectors, such as agriculture, tourism, technology, banking, insurance, taxation, sports, health, and even the prospecting of the other mineral deposits with which Nigeria is greatly endowed, were neglected. We lost sight of the industrialization of the economy and could not even produce enough food for local consumption, let alone exportable quantities. Policy somersaults killed our "Green Revolution" and "Operation Feed the Nation". Infrastructure decayed, the power sector was grounded, and the nation was plunged into darkness. The cost of production rose to astronomical proportions, and the few Nigerian products could not compete on price with their foreign counterparts. Foreign exchange from oil was spent on the importation of foreign consumable products; we went as low as importing toothpicks and candles. The government said we could begin to manufacture pencils in another few years! This is laughable for a country that claims to be the biggest economy in Africa and the giant of Africa. Our debt profile has reached an all-time high, and the government is still making plans to borrow more and plunge generations yet unborn into perpetual indebtedness.
A Faulty Constitution
The fraud of a military-imposed constitution which roguishly provides that "We the people…do hereby make and give to ourselves the following Constitution" is sickening. Our Constitution came into being by virtue of a military decree. 1 It has also been argued that the mere recognition that it is the people who ought to make or approve the Constitution is consolation enough, given our circumstances as a nation. 2 Sections 162-164 of the Constitution 3 make provisions under which the constituent States in Nigeria go to Abuja on a monthly basis to share the federal allocation from the Federation Account. This feeding-bottle approach has made the States too lazy to think of other means of generating funds for governing. The same applies to the Local Governments, which also depend on distributions from the Federation Account.
There is also the issue of the concentration of all executive powers in a single person called the "President". He is the Head of State, Chief Executive of the Federation, and Commander-in-Chief of the Armed Forces of the Federation. The Nigerian President is reputed to be the most powerful leader in the world by virtue of our Constitution. 4 Predictably, there has been much abuse of this power, ranging from the intimidation of political opponents through trumped-up charges and detention without trial, to the intimidation of other arms of government. We still remember the night raids by the DSS (an agent of the executive) on the residences of judicial officers, including Supreme Court Justices; the Dino Melaye case; the former Senate President, Bukola Saraki, accused of financing armed robbers; Senator Ahmed Sani Yarima docked for corruption; Senator Ike Ekweremadu for forgery; Senator Danjuma Goje for fraud; Senator Kashamu facing the threat of extradition from the NDLEA; and Senator Enyinnaya Abaribe, now arrested in a case where the Federal Government frustrated the court attendance of Nnamdi Kanu, for whom he stood surety. The mace of the National Assembly was stolen, with the police fingered as conspiring with the executive to intimidate the National Assembly. The IGP has refused several invitations by the National Assembly to attend its sittings for questioning, as did the Comptroller-General of Customs. In all these cases, Mr. President has given tacit approval by refusing to terminate their appointments. The list is endless. This is executive lawlessness and abuse of executive power. On the other hand, any corrupt government official, particularly of the executive arm, who moves from his former party to the All Progressives Congress (APC) becomes an angel, untouchable by law enforcement agents, as "their sins are forgiven." See the wide powers of the President and Governors under sections 5(1)(b) and (2)(b) of the 1999 Constitution.
Weak Institutions of State
We shall look at weak institutions from the praxes of the Nigerian Police, Army and Independent National Electoral Commission (INEC) only.
INEC:
One disturbing feature of elections in Nigeria, which any participant in or observer of election petitions will attest to, is the involvement of the electoral umpire, INEC (with the help of the police and other security agents), in ensuring that the party in power is favored in the way and manner elections are conducted. These acts range from the thumb-printing of ballot papers in favor of the ruling party and the handing of sensitive election documents to agents of the ruling party; to delaying or withholding election materials where the opposition is strong; the cancellation of results where the opposition has done well; encouraging ballot-box snatching and allowing the exchange of money at polling booths; hiding opposition witnesses during election petitions and confirming the ruling party's stories at trial; the declaration of false results; the exclusion of lawful votes from counting; the falsification and forgery of result sheets; and vote-buying, under-age voting, and multiple voting. These are just some of the ways the electoral umpire rigs elections in favor of its preferred candidates. The late President Yar'Adua acknowledged that the election which brought him to power was one of the most un-free and unfair elections. In Oshiomhole v Osunbor, 5 a witness said he could not report the snatching of ballot boxes by PDP henchmen to the police because the then ruling party, the PDP, and the police were like "nut and bolt." 6 This is clearly a failure of INEC as an institution of State. A marred electoral process naturally enthrones a bad government, as the will of the people is subverted. When a government is foisted on the people through electoral fraud, the ultimate result is bad government, economic depression, strangulating inflation, currency devaluation, loss of lives and property, protests, insecurity, and the other trappings of a failed State.
The Police
Whilst the 1963 and 1979 Constitutions of Nigeria did not provide directly for the functions of the Nigeria Police Force but left its organization to parliament, the 1958 Police Act provided for the functions of the Police. The Police Act Cap 359 of 1990, which amended the 1958 Act by substituting the words "Governor-General" with "this or any other Act", provides that: "The police shall be employed for the prevention and detection of crime, the apprehension of offenders, the preservation of law and order, the protection of property and the due enforcement of all laws and regulations with which they are directly charged and shall perform such military duties within or without Nigeria as may be required of them by, or under the authority of, this or any other Act." There is no gainsaying that the Nigerian Police has not lived up to these mandates in very many respects. The Nigerian Police is reactive rather than proactive: it tries to detect crime after it has been committed, with little success, while it has done practically nothing to prevent crime. Sometimes the police are directly involved in committing the crimes they are supposed to prevent. We all remember the "Apo Six", in which the police killed six youths in very suspicious circumstances, and the recent killing of a Youth Corps member by the police. The dailies are replete with incidents of accidental discharge from police guns resulting in the deaths of innocent citizens. When armed robbers are caught, they are "wasted" before they can be tried. The police have helped in election rigging in the country. Some of them are accomplices in cases of kidnapping, robbery, rape, and other vicious crimes. They have become instruments of intimidation in the hands of successive ruling parties. Their other name is corruption, as police checkpoints are duty posts of extortion and intimidation. There have been cries by the Nigerian public for the scrapping of SARS. The activities of the likes of Tafa Balogun, a former Inspector General of Police, are still fresh in our memories. Human rights abuses have become the rule rather than the exception. There is serious capacity failure within the police, though in some cases these failures are hardly the fault of the force itself: funding is inadequate, as is the provision of modern crime-fighting equipment. It has been revealed that about 80% of the officers of the Nigeria Police Force are assigned to secure VIPs rather than the general public. Nigeria's effective police-officer-to-population ratio is 1 (one) to 2,514, 1 which ranks among the lowest police-population ratios in the world. This has negatively affected the police's ability to detect, prevent, monitor, and investigate crime. Every now and then the police admit to being overwhelmed by armed bandits, herdsmen, and Boko Haram. It must be stated that many police officers are upright, diligent, honest, and effective in their work in spite of the very poor funding of the police. The conclusion, however, is that the police are, unfortunately, a failed institution of state. Recently, a report released by the National Bureau of Statistics (NBS) and the United Nations Office on Drugs and Crime (UNODC) rated the police as the most corrupt institution in Nigeria. 2 I will stop here on the Nigerian Police for want of space.
The Nigerian Army
I have already dealt with the disruptive influence of the Nigerian army on our quest for a sustainable democracy, the military's love of power, and the consequent corruption in which it became submerged. Apart from its failure to finally defeat Boko Haram and the recent accusations of having taken sides with the "Fulani herdsmen" in the Benue State killings and destruction of property, one can safely say that, everything put together, the Nigerian military has not totally failed as an institution of state. This is particularly true when judged by its primary function, which is the responsibility of defending the territorial integrity of Nigeria. It fought a civil war and kept Nigeria as one; no other nation has successfully invaded and taken away any territory of Nigeria. The military has been engaged in peacekeeping operations in other African countries and has kept a good record of success. In fact, the Nigerian military is rated one of the best in Africa. The failure to completely tackle and defeat Boko Haram stems from the inadequate funding of its operations and a lack of motivation for the officers. It is a fact that they have fought this war with weapons inferior to what the insurgents have. Col. Sambo Dasuki is still standing trial for the diversion of money meant for the purchase of modern war equipment for the military. It is on record that at one point young officers were said to have mutinied and refused to confront Boko Haram with their "bare hands", as the claim went; notwithstanding, they were made to face trial and were sentenced to prison terms, with some dismissed from the army. This speaks to the high level of discipline that still subsists within the military. Again, we must not lose sight of the fact that the Boko Haram campaign is an unconventional war in which the enemy is not easily identified and cowardly chooses to attack soft civilian targets. The military's incursion into governance depicts not weakness but strength. It is also true that whenever the police are overwhelmed by a situation, it is the Nigerian military that is invited to take care of it. The Nigerian army has been voted the best institution of the month. 1 This cannot be for doing nothing.
Recommendations
This paper takes the position that weak institutions of state have contributed to the problems of Nigeria, but that those problems cannot be said to be mainly anchored upon them; otherwise, we would be leaving the substance and pursuing the shadows. The identified Nigerian problems have straightforward solutions.
Amalgamation
There is a lesson we can learn from Catalonia, Flanders, and even Scotland. In Scotland, residents were to decide whether their homeland should become an independent country or remain part of the United Kingdom. In Catalonia, the Spanish provincial President, Artur Mas, called for a referendum on whether Catalonia should become a sovereign state. And in the Belgian province of Flanders, the leader of the ruling party called for negotiations that would "enable both Flanders and French-speaking Wallonia to look after their own affairs." All these secession plans proceeded without any form of violence, being peacefully expressed within the context of the various constitutions. Nigeria should do the same and allow a referendum to decide whether the component parts still want to remain in Nigeria or go their separate ways; they could then develop at their own pace. Alternatively, the government should heed the call for genuine restructuring, which would allow the component States to develop at their own pace and douse the calls for self-determination.
Restructuring
We must embark on a holistic restructuring of the Nigerian nation and allow true federalism where component States own their natural resources and contribute to the federation account rather than the unitary-federalism we now practice.
Police
Regions, geo-political zones, or States should be allowed to own and fund their own police. That way, those who know the geography of the communities and the residents are employed, and the detection and prevention of crime become easy. Even police corruption will diminish, as members of the community also know the members of the police force within their locality and can easily report when they start living above their means. Adequate funding and training should also be provided to avoid abuse of police powers. A special police unit can be created for the enforcement of court orders, whether against the government or not.
Census
A clear and sincere census should be conducted without any political influence. An independent census commission, funded directly from the Federation Account, can be created to update the figures every 2 or 4 years. A digitized, compulsory, serialized national identity card should be adopted for all citizens. Banks, schools, hospitals, and indeed all government agencies should be mandated not to attend to any person without a national ID card. This way we can have an accurate population figure upon which meaningful development planning can be based.
Post War Philosophy
We should abandon the approach of "Nigeria's unity is not negotiable". "No victor, no vanquished" should be made manifest in government policies in an honest and just manner that will win back the trust of the Igbos and other ethnic nationalities. Those economic areas where they have a very high relative advantage should be funded by the Federal Government and allowed to flourish and blossom. Equity should be ensured in appointments to government positions, and they should be allowed to occupy the highest position in the land, the presidency. IPOB, MASSOB, and the like will die a natural death once there is this sense of belonging. Those who kill them and destroy their property should be made to face the law.
Mono Economy
Nigeria should, as a matter of urgency, diversify her economic base to maximize the potential of our human and material resources and vast land. What is required is a focused, well-thought-out economic plan, with a technique that is over 80% technology-based, that will spur industrialization and bring us into the advancement of the next century globally.
INEC
The Independent National Electoral Commission should be made truly independent. The prosecution of electoral offenders should be televised, and special courts should be assigned to electoral matters. A highly organized penal regime should be put in place to deter electoral offenders. Staff of the commission should be well remunerated, and their funding should not be tied to the whims and caprices of the ruling party. This will make them truly independent of "favors" from the government in power and well positioned as impartial umpires of our electoral processes.
It is my belief that if these recommendations are sincerely pursued, most of Nigeria's problems will be a thing of the past or at least reduced to the barest minimum.
Conclusion
Whilst it is true that weak institutions have contributed to Nigeria's problems, it would not be true to say that they are mainly responsible for them. It is our sincere submission that even if the institutions of State are made strong, the very obvious differences in culture, religion, traditions, and values, the amalgamation issues, the military incursion (which is, if anything, evidence of a strong institution), and the mono-economic base will still create intractable problems of their own. These other factors appear to have contributed more to our problems than weak institutions of State.
PERCEIVED WORKPLACE FAIRNESS, ETHICAL LEADERSHIP, DEMOGRAPHICS, AND ETHICAL BEHAVIORS
INTRODUCTION
Ethics is considered the rules and standards that serve as supervisory principles for work corporations such as public service corporations. Ethics ensures behavioral compliance from every member of an organization, while its negation incurs the full fury of the law (Adebayo, 2014). In the public sector and administration, ethics refers to written codes of conduct that protect shared values such as fairness, responsiveness, accountability, and the public interest (Casimir, Izueke, & Nzekwe, 2014). Work organizations, governments, and researchers have been in the spotlight owing to ethical scandals in recent years (Treviño, den Nieuwenboer, & Kish-Gephart, 2014; Al Halbusi, Tehseen, & Ramayah, 2017). Owing to ethical concerns and problems in organizations and public services, for instance the Nigerian public service, scholars have generally observed that individuals who exhibit unethical behaviors tend to concentrate on their personal goals and needs at the expense of their organizations (Schaubroeck, Lam, & Cha, 2007; Padilla, Hogan, & Kaiser, 2007). Ethics as a concept is advancing in the public service sector. This sector in Nigeria faces innumerable problems, such as laziness, corrupt practices, misappropriation of public funds, and cold attitudes towards co-workers and work (Ogundele, 2011; Adebayo, 2014).
In the modern sense, fairness consists of equity, moral appropriateness, honesty, and impartiality (Polanyi & Tompa, 2004). Colquitt and Rodell (2015) describe fairness as a general perception of appropriateness. This paper uses fairness and justice interchangeably. Justice has thus become essential for leaders, managers, and work organizations. In recent times, injustice and unfairness within work organizations have been recognized as an issue of concern among human resource management and organizational psychology scholars; fairness has become a resounding topic throughout employees' working lives (Colquitt & Zipay, 2015). Leaders are expected to show high moral values in their attitudes, actions, and decision-making, and to deliver a high level of ethical behavior (Al Halbusi et al., 2020a). Ethical leaders are seen as character models, as they exhibit high ethical behavior and integrity within the work organization (Brown, Treviño, & Harrison, 2005). Followers therefore imitate and embrace ethical leaders' standards-driven behaviors (Brown & Treviño, 2006).
Ethics issues remain the main challenge threatening Nigeria's public service sector, where unethical behavior is displayed in various forms. For instance, corruption and a lack of responsibility are pervasive within the sector (Beetseh & Kohol, 2013). Unethical behaviors such as bribery, fraud, nepotism, extortion, influence peddling, and embezzlement have also long existed within Nigeria's public service (Iyanda, 2012). These behaviors appear to have become traditional within the sector, as they seem normal and acceptable to many citizens and civil servants (Fatile, 2013).
In the current digital era, further concerns arise around perceived workplace fairness and ethics. Gunz and Thorne (2020) point to a concern about ethical considerations in the workplace now known as the responsibility gap: the degree to which technology adoption leads real people to abandon ethical responsibility for the consequences of decisions. Hence, the question of how, and to what degree, technology impacts the ethical behaviors of public servants is vital. Furthermore, studies have demonstrated that workplace fairness significantly impacts employees' ethical perceptions and behaviors (Goergen, Pauli, Cerutti, & Perin, 2018).
Notably, minimal research has been conducted on reducing unethical behaviors within Nigeria's public service sector (Iyanda, 2012) by investigating predictors such as perceived workplace fairness, ethical leadership, and workers' demographics. This investigation therefore aims to add to the literature by examining the impacts of perceived workplace fairness, ethical leadership, and workers' demographics on ethical behaviors, in order to suggest a helpful and pragmatic model for significantly encouraging and increasing ethical behaviors within Nigeria's public service sector in the current digital era. To achieve this aim, the following research questions are germane to this investigation:
RQ1: Is there a significant impact of perceived workplace fairness on ethical behaviors within Nigeria's public service sector?
RQ2: Does ethical leadership significantly influence ethical behaviors within Nigeria's public service sector?
RQ3: Is there a significant effect of workers' demographics on ethical behaviors within Nigeria's public service sector?
RQ4: Will perceived workplace fairness, ethical leadership, and workers' demographics significantly and jointly influence ethical behaviors within Nigeria's public service sector?
Furthermore, the current investigation applied the equity theory, social exchange theory, and social learning theory in investigating the influence of perceived workplace fairness and ethical leadership on ethical behaviors within Nigeria's public service sector. The findings of this investigation are significant for the management of Nigeria's public service sector. They would help Nigeria's state governments identify precise approaches to guarantee a noteworthy increase in the adoption and exhibition of ethical behaviors, increased perceived workplace fairness, and the adoption of ethical leadership. Taking such steps would significantly help improve ethical behavior within Nigeria's public service sector.
Moreover, the current research adopted a cross-sectional survey approach. Questionnaires were handed out to participants to obtain their views on workplace fairness, ethical leadership, and ethical behaviors in their public service centers in the ten local government areas of study. Survey forms were given to 500 civil servants, and the data retrieved were analyzed and presented in tables. The results of this paper indicate that female civil servants exhibit more ethical behaviors than their male counterparts within Nigeria's public service sector. Also, older civil servants with higher educational qualifications, who are also at the highest job level, exhibited more ethical behaviors. Furthermore, the current investigation established that perceived workplace fairness and ethical leadership significantly and positively impacted ethical behavior within Nigeria's public service sector.
The rest of this paper is structured as follows. Section 2 reviews the relevant literature. Section 3 describes the methodology used to conduct empirical research on the impacts of perceived workplace fairness, ethical leadership, and workers' demographics on ethical behaviors within Nigeria's public service. Section 4 presents the results of the paper. Section 5 discusses the obtained results. Section 6 concludes the paper.
The theories of perceived workplace fairness and ethical leadership
The fundamental focus of equity theory is on reward and, thus, on the basis for perceptions of fairness or unfairness in various circumstances within work organizations (Dugguh & Dennis, 2014). The position of the equity theory is that employees compare their rewards with those of other employees in equivalent positions. Hence, employees feel motivated and satisfied when they notice or perceive fairness, justice, and equity, resulting in positive behaviors (Aswathappa, 2008). It has been generally noticed that employees are delighted when they feel a considerable measure of compensation or reward for their work efforts and contributions. If such employees perceive any form of injustice or unfairness in the reward they get from their organizations compared to their contributions, they express dissatisfaction and may eventually become hostile towards their organizations, with reduced job satisfaction, lack of motivation, and increased unethical behaviors (Dugguh & Dennis, 2014). However, fairness is multidimensional as it involves perception. Therefore, employees are happy and satisfied when they perceive that their inputs are equally rewarded with outputs, and they are better inspired to discharge their duties more ethically and positively. However, when they perceive a mismatch between their inputs and the outcomes or reward, they become demotivated and more likely to exhibit unethical behaviors (Schultz & Schultz, 2010).
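Equity theory's comparison process is often written, following Adams's classic formulation, as a comparison of outcome-to-input ratios. The notation below is a textbook-style illustration rather than a formula taken from the studies cited here, with O denoting perceived outcomes (pay, recognition) and I denoting perceived inputs (effort, qualifications):

\[
\frac{O_{\text{self}}}{I_{\text{self}}} = \frac{O_{\text{referent}}}{I_{\text{referent}}} \quad \text{(perceived equity)}, \qquad
\frac{O_{\text{self}}}{I_{\text{self}}} < \frac{O_{\text{referent}}}{I_{\text{referent}}} \quad \text{(perceived under-reward)}
\]

On this reading, the unethical behaviors described above are one way employees attempt to restore the balance when the left-hand ratio falls short of the referent's.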
In explaining ethical leadership and its effects on organizational and employee behaviors, Brown and Treviño (2006) have drawn on two theories, namely the social exchange theory (SET) (Blau, 1964) and the social learning theory (SLT) (Bandura, 1986).
The SET suggests that several social relationships are governed by rules of exchange or reciprocity (Blau, 1964). Going by the SET, when followers or workers identify their leader as ethical and caring for their welfare and happiness, they become more inclined or inspired to reciprocate such treatment with positive and ethical behaviors. In keeping with this position, the current research proposes that ethical leaders stimulate their followers' perceptions of equity and trust, making their subordinates reciprocate with positive and ethical behavior (Brown et al., 2005; Brown & Treviño, 2006). Furthermore, the SLT emphasizes the antecedents and outcomes of ethical guidance. It proposes that people learn the rules of proper behavior in two ways: by observing other people and through personal experience (Bandura, 1986). Individuals primarily focus on and consider role models or reliable leaders (Brown & Treviño, 2006). Ethical leaders are seen as character models as they show excellent standards of ethical behavior and integrity within work organizations (Brown et al., 2005). Consequently, followers imitate their ethical leader (Brown & Treviño, 2006). Hence, ethical leaders inspire ethical and appropriate behaviors in their followers. Therefore, this current investigation has adopted these two theories in investigating the impact of ethical leadership on workers' ethical behaviors within Nigeria's public service sector.
The predictors of ethical behaviors
In organizational psychology, employees' perception of workplace fairness remains pertinent (Fujishiro, 2005). Kivimäki, Vahtera, Elovainio, Lillrank, and Kevin (2002) have proposed that perceived justice/fairness notably affects absenteeism attributable to ailment. Besides, Adekanmbi and Ukpere (2020) have indicated a significant influence of perceived workplace fairness on unethical behavior such as absenteeism among civil servants. De Schrijver et al. (2010) noted that participants within the public sector who indicated a positive perception of workplace fairness reported higher ethical behaviors than those who stated a negative perception of workplace fairness. Some empirical investigations show that employees' positive perceptions of justice in their work make them less likely to pursue unethical behaviors (Chiu & Peng, 2008; Demir & Tütüncü, 2010). Similarly, Demir (2011) posited that perceived workplace fairness significantly reduces employees' unethical behaviors. Essien and Ogunola (2020) noted how important it is for Nigerian organizations, whether public, religious, private, or non-governmental, to increase their workers' perceived workplace fairness. When workers believe in what their organizations stand for, they tend to identify more with such organizations, leading them to exhibit several positive work behaviors. Hence, to test the impact of perceived workplace fairness on ethical behaviors among civil servants within Nigeria's public service sector, the current investigation proposed the following hypothesis:
H1: Perceived workplace fairness significantly impacts ethical behavior among civil servants within Nigeria's public service sector.
Investigations have indicated that ethical leadership significantly influences followers (Lu & Lin, 2014; Demetriou, Thrassou, & Papasolomou, 2018). Hence, ethical leaders have constructive individual attitudes and actively exhibit ethical conduct, influencing their followers or employees within work organizations (Meyer et al., 2019; Presbitero & Teng-Calleja, 2019). Ethical leadership focuses on building trust and fairness between leaders and their followers or the employees within work organizations. Hence, as employees perceive fair treatment by their managers, they conclude that such behavior towards them brings advantages to the entire organization. Consequently, it is unlikely that employees exhibit unethical behaviors (Treviño et al., 2014). Leaders are an essential organizational component with a significant impact on followers' or employees' ethical behaviors. Recently, scholars have noted the influence of ethical leadership on employee ethical behaviors within work organizations (Neves, Almeida, & Velez, 2018; Al Halbusi et al., 2020b). Furthermore, investigations have indicated that ethical leadership significantly impacts employee ethical behavior (Brown et al., 2005; Toor & Ofori, 2009). Lin, Liu, Chiu, Chen, and Lin (2019) posited that ethical leaders focus on transactional efforts impacting workers' ethical behavior. Also, investigations have noted the crucial role of ethical leaders in affecting employees' ethical behavior through their everyday communication with their subordinates. Hence, workers' behavior may change due to their leaders' guidance, fairness, and standards in the work organization (Lu & Lin, 2014; Neves et al., 2018; Moore et al., 2019). A study has established that ethical leadership reduces employees' tendency to exhibit unethical behavior (Moore et al., 2019). Also, Al Halbusi, Williams, Ramayah, Aldieri, and Vinci (2020b) indicated that ethical leadership impacts employees' ethical behavior. They further noted that the more an organization upholds an ethical leadership style, the more its employees behave ethically. The above-stated position, therefore, inspires the following hypothesis:
H2: Ethical leadership significantly affects ethical behavior among civil servants within Nigeria's public service sector.
Swaidan, Vitell, and Rawwas (2003) showed that age significantly impacts ethical behaviors. They further argued that older participants tend to be more ethical. Also, Lindblom and Lindblom (2016) opined that age significantly predicts workers' ethical behavior. However, Lokman, Talib, Ahmad, and Jawan (2018) opined that age does not considerably affect ethical behavior. Concerning marital status, Swaidan et al. (2003) found a significant influence on ethical behavior. They further noted that married participants reject unethical behaviors more than their single counterparts. Besides, Auger, Burke, Devinney, and Louviere (2003) indicated that married employees are more likely to behave ethically than their single counterparts. On the other hand, Doran (2009) found no relationship between marital status and ethical behavior.
Furthermore, Ross and Robertson (2003) found in their study that gender significantly impacts workers' ethical behaviors. They further noted that women are more ethical than men, supporting the finding of Alatas, Cameron, Chaudhuri, Erkal, and Gangadharan (2009) that women are less tolerant of corruption. Lindblom and Lindblom (2016) opined that gender is a factor that significantly impacts ethical behavior. In a similar vein, Lu and Lu (2010) indicated that females tend to be more ethical than males. In contrast, Keller, Smith, and Smith (2007) revealed no significant difference between genders in ethical behavior. In addition, Lokman et al. (2018) indicated no significant difference in male and female ethical behavior, as both have approximately equal predispositions to behave ethically and unethically. Also, Bell et al. (2011) found religion to be a significant predictor of ethical behavior. Moreover, Swaidan, Cloninger, and Nica (2006) noted that education level significantly predicts ethical behavior. They further indicated that individuals with advanced levels of education are less prone to exhibiting unethical behaviors than their counterparts with lower levels of education. In contrast, Jonck, van der Walt, and Sobayeni (2019) indicated that the highest academic qualification did not significantly influence ethical behavior. Yamin (2020) noted that demographics such as age, work experience, and gender significantly and positively influence ethical behavior. In their study, Jonck et al. (2019) found that, of the demographics under investigation, only gender and job tenure significantly impacted ethical behavior. Also, Bolman and Deal (2017), as cited by Grigoropoulos (2019), indicated that age and level of education significantly impact ethical behaviors. They further posited that the younger and less educated an individual is, the more likely they are to make wrong choices or exhibit unethical behaviors. Going by previous research on the impacts of workers' demographics on ethical behaviors, the following hypothesis about ethical behavior in Nigeria's public service is proposed:
H3: Workers' demographics significantly impact ethical behaviors among civil servants within Nigeria's public service sector.
In addition, the above literature review prompted the hypothesis stated below:
H4: There is a joint influence of perceived workplace fairness, ethical leadership, and workers' demographics on ethical behavior among civil servants within Nigeria's public service sector.
RESEARCH METHODOLOGY
This investigation could have been conducted with a qualitative, quantitative, or mixed method. A qualitative method collects, analyzes, and interprets non-numerical data, while a mixed method combines qualitative and quantitative methods in one study. However, the current research adopted a quantitative, cross-sectional survey approach. Survey forms were handed out to participants to test the current research hypotheses and collect data about their views on workplace fairness, ethical leadership, and ethical behaviors in their public service centers in the local government areas of study. Survey forms were given to 500 civil servants from ten local government areas (Lagelu, Olorunsogo, Oyo West, Ibadan North, Ido, Ibarapa East, Akinyele, Atiba, Oluyole, Ibadan South-West) of Oyo State. The data retrieved were analyzed and presented in tables. This investigation took into consideration the ethical matters associated with measuring, gathering, and keeping private data; therefore, voluntary participation was encouraged. Altogether, 452 questionnaires were recovered and deemed suitable for use. The retrieved data were cleaned and analyzed with the Statistical Package for the Social Sciences (SPSS 27), and the current study conducted reliability analyses to establish the measuring scales' local reliability. This paper's survey form has the following segments (see Appendix):
Section A: Workers' demographics. This segment captures the participants' demographics, for example, gender, age, religion, marital status, educational qualification, and job level.
Section B: Perceived workplace fairness scale (POFS). This part of the questionnaire had a 14-item measuring instrument modified from Donovan, Drasgow, and Munson (1998) to quantify perceived workplace justice amongst public workers. The instrument has a Yes/No response format, and its authors reported a Cronbach's alpha reliability coefficient of 0.76. The present study recorded a Cronbach's alpha reliability of 0.88.
Section C: Ethical leadership scale (ELS). This section measures the participants' perception of ethical leadership within their work organizations through an ethical leadership measuring instrument developed by Brown et al. (2005). The instrument has 10 items and a 5-point Likert response format ranging from 1 = strongly disagree to 5 = strongly agree. The instrument developers reported a Cronbach's alpha reliability coefficient of 0.95, and in this paper a Cronbach's alpha reliability of 0.92 was achieved.
Section D: Ethical behavior scale (EBS). This paper measured workers' perceived ethical behaviors within their work organizations using a 16-item measuring scale modified from a prior study (Lu & Lin, 2014). The scale has two dimensions, namely judicial and normative ethical behaviors. Items one to ten measure the normative ethical behaviors, while items eleven to sixteen measure the judicial ethical behaviors. The scale has a 5-point Likert response format. Lu and Lin (2014) reported a reliability coefficient of 0.89 for the judicial dimension and 0.94 for the normative dimension. In the current research, the reliability coefficient of the judicial dimension is 0.85, while that of the normative dimension is 0.88.
The current research also conducted a pilot study to detect any possible difficulties in advance and to validate the scales' effectiveness.
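Since each scale's internal consistency is summarized above by Cronbach's alpha, a brief sketch of how such a coefficient is computed may be useful. The Python snippet below is purely illustrative; the response matrix is hypothetical, not the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 5 respondents on the 10-item ethical leadership scale
responses = np.array([
    [4, 5, 4, 4, 3, 5, 4, 4, 5, 4],
    [2, 3, 2, 3, 2, 2, 3, 2, 2, 3],
    [5, 5, 4, 5, 5, 4, 5, 5, 4, 5],
    [3, 3, 3, 2, 3, 3, 2, 3, 3, 3],
    [4, 4, 5, 4, 4, 4, 4, 5, 4, 4],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```

Values near or above 0.8, like those reported for the three scales here, indicate that the items hang together well enough to be summed into a single score.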
Inferential statistics
Table 2 shows that perceived workplace fairness, ethical leadership, and workers' demographics (marital status, age, gender, educational qualification, religion, and job level) significantly and jointly impact civil servants' ethical behavior within Nigeria's public service sector (R = 0.982, R² = 0.964, F = 1489.991, p < 0.01). These findings show that perceived workplace fairness, ethical leadership, and workers' demographics jointly explained 96.4% of the variance in ethical behaviors within Nigeria's public service sector. Thus, the hypothesis (H4) that there is a joint influence of perceived workplace fairness, ethical leadership, and workers' demographics on ethical behavior among civil servants within Nigeria's public service sector is confirmed. Moreover, the model shown in Table 3 stipulates that, of the workers' demographics tested (marital status, age, gender, religion, educational qualification, and job level), only gender, age, educational qualification, and job level significantly impact the variance in ethical behavior, at β = 0.026, t = 2.639, p < 0.01; β = -0.029, t = -2.928, p < 0.01; β = 0.022, t = 2.349, p < 0.01; and β = 0.022, t = 2.271, p < 0.01, respectively. These results suggest that gender contributed about 2.6%, age 2.9%, educational qualification 2.2%, and job level 2.2% of the variance in ethical behavior within Nigeria's public service sector. The negative coefficient shows that civil servants' ethical behavior decreases with older age, while the positive coefficients indicate that civil servants' ethical behavior increases with gender, educational qualification, and job level. Therefore, the hypothesis (H3) that workers' demographics significantly impact ethical behaviors among civil servants within Nigeria's public service sector is confirmed.
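For readers who want to reproduce a joint-impact model of this kind outside SPSS, a minimal sketch in Python's statsmodels is given below. The file name and column names are hypothetical placeholders, categorical demographics are assumed to be coded numerically beforehand, and the variables are z-scored so the fitted coefficients are standardized betas like those in Table 3:

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_responses.csv")  # hypothetical data file

predictors = ["fairness", "ethical_leadership", "gender", "age",
              "marital_status", "religion", "education", "job_level"]
columns = predictors + ["ethical_behavior"]

# z-score all variables so coefficients come out as standardized betas;
# assumes categorical demographics were already coded as numbers
z = (df[columns] - df[columns].mean()) / df[columns].std()

X = sm.add_constant(z[predictors])             # add the intercept term
model = sm.OLS(z["ethical_behavior"], X).fit()

print(model.summary())   # reports R-squared, F, and per-predictor beta, t, p
```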
Furthermore, Table 3 stipulates that perceived workplace fairness significantly and positively impacts the change in civil servants' ethical behavior within Nigeria's public service sector, at β = 0.640, t = 17.867, p < 0.01. Thus, this paper shows that perceived workplace fairness contributed about 64% of the influence on the variance in civil servants' ethical behavior within Nigeria's public service sector. Similarly, the current results indicate that ethical leadership significantly and positively impacts the variance in civil servants' ethical behavior within Nigeria's public service sector, at β = 0.339, t = 9.674, p < 0.01. Therefore, this paper suggests that ethical leadership contributed about 33.9% of the influence on the change in civil servants' ethical behavior within Nigeria's public service sector. In addition, as stated above, the positive relationships show that workers' ethical behaviors increase with their perceived level of workplace fairness and the level of ethical leadership adoption. Thus, the stated hypotheses (H1 and H2), namely that perceived workplace fairness significantly impacts ethical behavior among civil servants within Nigeria's public service sector, and that ethical leadership has a significant effect on ethical behavior among civil servants within Nigeria's public service sector, are confirmed.

Table 4 shows that gender difference notably impacts ethical behavior within Nigeria's public service sector. The difference in scores between male and female civil servants is t (450) = -5.304, p < 0.05, two-tailed, with female public workers (M = 34.47, SD = 10.98) recording a higher mean than male public workers (M = 29.54, SD = 8.51). These results further suggest that female civil servants in Nigeria's public service sector exhibit significantly more ethical behavior (M = 34.47) than their male counterparts (M = 29.54). Thus, gender significantly impacts ethical behavior within Nigeria's public service sector.

Table 5 below shows the results of a one-way between-groups ANOVA, which was carried out to examine the impacts of age, educational qualification, and job level on ethical behavior. The investigation's participants were split into three groups according to their age (20-34, 35-49, and 50 and above), and a significant variance at the p < 0.05 level in ethical behavior occurred amongst the three age groups: F (2, 451) = 10.102, p < 0.05. Also, Table 5 indicates that respondents were split into four groups of academic qualification (Ordinary National Diploma, Higher National Diploma, Bachelor of Education/Bachelor of Science, and Master of Education/Master of Science). A significant change at the p < 0.05 level in ethical behavior occurred amongst the four academic qualification groups: F (3, 451) = 20.196, p < 0.05. Furthermore, Table 5 shows that respondents were also split into three groups according to their job level (level 6, levels 7-9, and level 10 and above), and a significant variance at the p < 0.05 level in ethical behavior occurred amongst the three job level groups: F (2, 451) = 6.878, p < 0.05. A post-hoc assessment (Table 6) substantiates a significant variance amongst the mean scores of the age group of 20-34 (M = 33.47, SD = 9.38), the age group of 35-49 (M = 32.42, SD = 10.57), and the age group of 50 and above (M = 27.18, SD = 7.91). Table 6 also establishes a substantial variance between the mean scores of the educational qualification group
of Ordinary National Diploma holders (M = 34.22, SD = 8.60), the educational qualification group of Higher National Diploma holders (M = 29.34, SD = 9.50), and the group of civil servants with a Master of Education/Master of Science (M = 39.09, SD = 10.15). Nevertheless, no significant variance exists between the mean scores of the group of Higher National Diploma holders (M = 29.34, SD = 9.50) and the group of civil servants with a Bachelor of Education/Bachelor of Science (M = 29.92, SD = 9.86). Furthermore, Table 6 confirms a noteworthy variance amongst the mean scores of the job level group of civil servants at level 6 (M = 33.23, SD = 9.60), the group of civil servants at levels 7-9 (M = 30.60, SD = 9.77), and the group of civil servants at level 10 and above (M = 35.04, SD = 10.99).
Therefore, the current results suggest that the 50-and-above age group (M = 27.18, SD = 7.91) is more likely to exhibit consistent ethical behavior than the other age groups within Nigeria's public service sector. This is because the initial results indicated a negative impact of age on ethical behavior (β = -0.029, t = -2.928); hence, the group with the lowest mean score would exhibit more ethical behavior. Also, the results suggest that the group of civil servants with a Master of Education/Master of Science (M = 39.09, SD = 10.15), which has the highest mean score amongst the educational qualification groups, will exhibit more ethical behavior than the other groups. The same goes for the job level groups, where the group of civil servants at level 10 and above (M = 35.04, SD = 10.99) shows the greatest likelihood of displaying more ethical behavior within Nigeria's public service sector, as it has the highest mean score amongst the groups.
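As an illustrative stand-in for the SPSS group comparisons above (an independent-samples t-test for gender and one-way ANOVAs with a post-hoc test for age, education, and job level), a hedged Python sketch might look like this. The column names are hypothetical, and Tukey's HSD is used as a generic post-hoc procedure since the paper does not name the one applied:

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("survey_responses.csv")  # same hypothetical file as above

# Gender: independent-samples t-test on ethical behavior scores
male = df.loc[df["gender"] == "male", "ethical_behavior"]
female = df.loc[df["gender"] == "female", "ethical_behavior"]
t, p = stats.ttest_ind(female, male)          # two-tailed by default
print(f"t = {t:.3f}, p = {p:.4f}")

# Age: one-way ANOVA across the three age groups (20-34, 35-49, 50+)
groups = [g["ethical_behavior"].to_numpy() for _, g in df.groupby("age_group")]
f_stat, p_anova = stats.f_oneway(*groups)
print(f"F = {f_stat:.3f}, p = {p_anova:.4f}")

# Post-hoc pairwise comparisons to locate which groups differ
print(pairwise_tukeyhsd(df["ethical_behavior"], df["age_group"]))
```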
DISCUSSION
The current findings showed that perceived workplace fairness significantly and positively impacts ethical behaviors within Nigeria's public service sector. This position implies that the more fairness civil servants perceive within Nigeria's public service sector, the more ethical they become.
As earlier elucidated, the position of the equity theory is that employees compare their rewards with those of other employees in equivalent positions. Hence, employees feel motivated and satisfied when they notice or perceive fairness, justice, and equity, resulting in positive behaviors (Aswathappa, 2008). The fundamental focus of equity theory is on return or reward and, therefore, on the basis for perceptions of fairness or unfairness in various circumstances within work organizations (Dugguh & Dennis, 2014). Thus, this paper corroborates the view of the equity theory regarding the impact of perceived workplace justice on ethical behavior. This paper validates Adekanmbi and Ukpere's (2020) work, which indicated a significant effect of reduced perceived workplace fairness on unethical behavior such as absenteeism among civil servants. It also supports De Schrijver et al.'s (2010) view that public service participants who indicated a positive perception of workplace fairness reported high ethical behaviors, against those who noted a negative perception of workplace fairness. The current results also corroborate Demir and Tütüncü's (2010) study, which shows that employees' positive perceptions of justice in their workplace make them less likely to pursue unethical behaviors. The results also support Demir's (2011) position that perceived workplace fairness reduces unethical behaviors.
In addition, the current findings have established that ethical leadership significantly and positively influences ethical behaviors among civil servants within Nigeria's public service sector. This position implies that civil servants are more ethical when their managers/leaders adopt and consistently exhibit ethical leadership within Nigeria's public service sector. As earlier noted, the SET proposes that when followers or workers sense that their leader is ethical and cares for their welfare and happiness, they become more inclined or inspired to reciprocate such treatment with positive and ethical behaviors. In keeping with this position, ethical leaders inspire their followers' feelings of justice and trust, making them reciprocate with positive and ethical behavior (Brown et al., 2005; Brown & Treviño, 2006). Also, as earlier indicated, the SLT emphasizes the antecedents and outcomes of ethical leadership. It proposes that people learn the rules of proper behavior in two ways: by observing other people and through personal experience (Bandura, 1986). Individuals commonly focus on and consider role models or reliable leaders (Brown & Treviño, 2006). Ethical leaders are seen as character models as they show excellent ethical behavior and integrity within the organization (Brown et al., 2005). Consequently, followers imitate and assume their ethical leader's behaviors (Brown & Treviño, 2006). Hence, ethical leaders inspire ethical and appropriate behaviors in their followers. This paper, thus, validates the positions of the SET and SLT regarding the impact of ethical leadership on ethical behavior. Furthermore, the current results support Meyer et al. (2019) and Presbitero and Teng-Calleja (2019), who opine that ethical leaders have constructive individual attitudes and actively exhibit ethical conduct, influencing their followers or employees within work organizations. The current results also support Toor and Ofori's (2009) finding that ethical leadership notably impacts employees' ethical behavior. The present findings also confirm Lin et al.'s (2019) view that ethical leaders focus on transactional efforts affecting workers' ethical behavior.

Moreover, this paper posits that the gender of civil servants significantly and positively impacts their ethical behaviors within Nigeria's public service sector. Thus, female workers exhibit ethical behaviors more than their male counterparts. Also, the current findings suggest that the age of civil servants significantly and negatively influences their ethical behavior within Nigeria's public service sector; hence, the older the workers are within Nigeria's public service sector, the more ethical behaviors they exhibit. Besides, this paper reports that workers' educational qualification and job level within Nigeria's public service sector significantly and positively impact their ethical behaviors. Thus, the higher their academic qualifications and job level, the more ethical they become. This paper, hence, confirms the position of Lindblom and Lindblom (2016), who opined that gender is a factor that significantly impacts ethical behavior. It also corroborates the view of Lu and Lu (2010), who indicated that females tend to be more ethical than males. However, this paper could not support Keller et al. (2007) and Lokman et al. (2018), who revealed no significant gender difference in ethical behavior, indicating that males and females have approximately equal predispositions to behave ethically or unethically. This paper further corroborates Lindblom and Lindblom (2016), who opined that age significantly predicts workers' ethical behavior. It also confirms Swaidan et al. (2006), who stated that education level significantly predicts ethical behavior, such that individuals with higher education levels are less likely to exhibit unethical behaviors than their counterparts with lower levels of education. It, however, could not confirm the position of Jonck et al. (2019), which indicated that the highest academic qualification did not significantly influence ethical behavior.
Going by the current results, this paper has achieved the study's aim: to suggest a helpful and pragmatic model to significantly encourage and increase ethical behaviors within Nigeria's public service sector. Hence, the model is presented in Figure 1 below:
CONCLUSION
The current investigation concludes that ethical leadership, perceived workplace fairness, and workers' demographics account for significant variance in ethical behaviors among civil servants within Nigeria's public service sector. Therefore, these factors have been established as predictors of ethical behaviors among public workers within Nigeria's public service sector. Furthermore, the current investigation concludes that the demographic factor of gender significantly impacts civil servants' ethical behaviors within Nigeria's public service sector: female government workers exhibit more ethical behaviors than their male counterparts. Also, the current study concludes that the demographic factor of age significantly and negatively influences ethical behaviors among civil servants within Nigeria's public service sector, with older workers engaging in ethical behaviors more than their younger counterparts. Moreover, this paper concludes that educational qualification and job level significantly and positively impact civil servants' ethical behaviors within Nigeria's public service sector. Civil servants with higher academic qualifications and at a higher job level will exhibit ethical behaviors more than their co-workers who have lower educational qualifications and are at a lower job level. This paper has contributed meaningfully to leadership roles in addressing organizational matters, for example, attaining a notable increase in ethical behaviors within the public service sector of a developing economy. Therefore, the subsequent suggestions are helpful. The current investigation suggests that the state governments should ensure good and sufficient communication between workers and managers to identify and tackle unfairness between employees' dedication/contributions and their rewards. This could be achieved by conducting regular surveys of employees' concerns, which helps to guarantee suitable interventions.
Also, the government needs to consistently uphold an employee-fairness rule that prescribes how employees are to be treated equitably, thereby inspiring an essential rise in ethical behaviors. Supervisors who show ethical leadership abilities such as honesty and justice, highlight ethical values, support and compensate ethical employees, and become models of ethical behavior tend to encourage ethical behaviors amongst their followers and workers. Hence, this paper recommends that state governments and other public organizations groom leaders who inspire and exemplify ethical behaviors. Furthermore, for further study, this paper suggests a qualitative empirical study to achieve a clearer understanding of the perceptions and feelings of public service workers on the subject matter. Such an in-depth qualitative inquiry could divulge issues that would enable a more detailed operationalization of the concepts linked to workers' ethics.
Moreover, this paper has some limitations. Firstly, the current sample was restricted to public service workers across local government areas of Oyo State, Nigeria. Hence, a future investigation should look into employees in other states and sectors of Nigeria; this will ensure the generalizability of the findings. Secondly, the current research adopted a cross-sectional survey design, which limits causal inference; a longitudinal design could strengthen future work.
Figure 1. Empirical model of achieving and sustaining ethical behaviors among civil servants within Nigeria's public service sector. Model components: age group of 50 and above (M = 27.18); educational qualification of Master of Education/Master of Science (M = 39.09); job level of 10 and above (M = 35.04); perceived workplace fairness (β = 0.640); ethical leadership (β = 0.339).
Table 1 below indicates that 236 of the participants were male, while 216 were female. The distribution of participants by age group showed that most participants were 35-49 years old (253; 56.0%), followed by participants who were 20-34 years old (128; 28.3%) and participants aged 50 years and above (71; 15.7%). Furthermore, the findings revealed that 208 respondents were single, 207 were married, and 37 were divorced. Also, Table 1 showed that 100 (22.1%) respondents were Ordinary National Diploma holders, 197 (43.6%) were Higher National Diploma holders, 92 (20.4%) held a Bachelor of Education/Bachelor of Science, and 63 (13.9%) held a Master of Education/Master of Science. The current results added that 101 participants were on job level 6, 279 on levels 7-9, and 72 on level 10 and above.
Table 2. Multiple regressions presenting the joint impact of the predictors on ethical behaviors. Notes: a indicates the regression value of the predictors: (constant), ethical leadership, job level, educational qualification, religion, gender, age, marital status, perceived workplace fairness; b indicates the level of significance.
Table 4. Summary of the t-test analysis showing the impact of gender on ethical behavior.
Table 5. One-way ANOVA (age, educational qualification, and job levels).
The impact of simulation-based trabeculectomy training on resident core surgical skill competency
Purpose To measure the impact of trabeculectomy surgical simulation training on core surgical skill competency in resident ophthalmologists. Methods This is a post-hoc analysis of the GLAucoma Simulated Surgery (GLASS) trial, which is a multi-center, multi-national randomized controlled trial. Resident ophthalmologists from six training centers in sub-Saharan Africa (in Kenya, Uganda, Tanzania, Zimbabwe and South Africa) were recruited according to the inclusion criteria of having performed zero surgical trabeculectomies and assisted in less than five. Participants were randomly assigned to intervention and control arms using allocation concealment. The intervention was a one-week intensive trabeculectomy surgical simulation course. Outcome measures were mean surgical competency scores in eight key trabeculectomy surgical skills (scleral incision, scleral flap, releasable suturing, conjunctival suturing, sclerostomy, tissue handling, fluidity and speed), using a validated scoring tool. Results Forty-nine residents were included in the intention-to-treat analysis. Baseline characteristics were balanced between arms. Median baseline surgical competency scores were 2.88/16 (IQR 1.75-4.17) and 3.25/16 (IQR 1.83-4.75) in the intervention and control arms respectively. At primary intervention, median scores increased to 11.67/16 (IQR 9.58-12.63) and this effect was maintained at three months and one year (p = 0.0001). Maximum competency scores at primary intervention were achieved in the core trabeculectomy skills of releasable suturing (n=17, 74%), scleral flap formation (n=16, 70%) and scleral incision (n=15, 65%) compared to scores at baseline. Conclusion This study demonstrates the positive impact of intensive simulation-based surgical education on core trabeculectomy skill development. The rapid and sustained effect of resident skill acquisition poses a strong argument for its formal integration into ophthalmic surgical education.
Introduction
Glaucoma is the leading cause of irreversible blindness, affecting approximately 76 million people worldwide in 2020. 1 Recent estimates suggest that over 100 million people will be diagnosed with the condition by 2040, largely due to an increasing and ageing global population. 1 Primary open-angle glaucoma (POAG), in particular, develops earlier in those of African ancestry, with a more aggressive and rapid progression to advanced disease compared to other ethnic groups. 2 Currently, Africa has the highest global prevalence of glaucoma and POAG, estimated at 4.8% (95% CI 2.6-8.0) and 4.2% (CI 2.1-7.4) respectively in those aged 40 to 80 years. 1 Importantly, glaucoma is responsible for 4.4% (CI 4.1-5.0) of all blindness in Africa, which is proportionately much higher than other regions in the world. 3 Urgent public health measures are therefore required to control and reduce the disease burden in the region.
Treatment of glaucoma varies depending on the subtype of disease, the extent of optic nerve damage, and the degree of visual field dysfunction. Yet, in sub-Saharan Africa (SSA), management of glaucoma is challenging for several reasons: late diagnosis, poor adherence to treatment and limited access to healthcare services and treatment. [4][5][6] Furthermore, a profound lack of patient awareness about the condition means that those affected often present with advanced, irreversible sight loss. In POAG, the initial step in treatment is medical, using long-term topical drop therapy, but in SSA this is confounded by barriers in affordability, adherence and side effects. 5,7 Laser therapy, such as selective laser trabeculoplasty, is effective in lowering intraocular pressure (IOP) and offers an alternative and safe initial treatment for African individuals with mild to moderate glaucoma. 8,9 However, the IOP-lowering effect from laser treatment is only temporary, with many requiring repeated treatment or initiation of medical or surgical therapy later in life. Furthermore, its efficacy in advanced disease remains unclear. For these reasons, surgical management of glaucoma, in the form of trabeculectomy, is often recommended as the first-line choice in SSA. 10

Trabeculectomy remains the gold standard surgical procedure and is the most effective technique for long-term IOP management. 7,11 However, the provision of trabeculectomy depends on the availability of ophthalmic surgeons with surgical proficiency. At present, there is a global shortage of ophthalmologists, with a disproportionate shortage of ophthalmologists in SSA (an average ratio of 2.5 per million population, against a global average of 31.7), who are mostly confined to urban areas. 12,13 Due to the magnitude of disease burden and high general patient workload, ophthalmologists are often denied the opportunity for sub-specialist training in glaucoma, resulting in a paucity of glaucoma surgical skills. 12 These challenges, coupled with a low uptake of surgical treatment, a fear of surgical complications and challenges in post-operative care, make many ophthalmologists reluctant to offer trabeculectomy as a first-line treatment option to patients. 12,14

A practical solution is to enhance the existing surgical skillset of current and prospective ophthalmologists in SSA. There is widespread variability in the number of trabeculectomies performed during residency, with a mean of 4 (median of 1) in a recent survey of resident ophthalmologists in the Eastern, Central and Southern African (ECSA) region. 15 Qualitative analysis found that residents in the region expressed a need for improvement in conventional ophthalmic surgical training, with better supervision and more use of simulation-based surgical education (SBSE). 15 Conventional ophthalmic surgical teaching in SSA typically uses theoretical-based learning, observation, low use of SBSE (mostly using low- to moderate-fidelity simulation models), followed by live surgical teaching for advanced skill development. 16 Importantly, the use of SBSE varies across the different training institutions in the region and is not uniformly integrated into ophthalmic surgical training. For those using SBSE in their ophthalmic surgical training, many report inadequacy of training facilities and tools, as well as a lack of trainer supervision. 16
Yet, compared to the traditional Halstedian apprenticeship model of "see one, do one, teach one", SBSE offers a safer alternative for junior surgeons to refine their skills in the absence of patient harm, by using artificial training models. It is associated with lower error rates, improved skill acquisition and fewer intraoperative complications. [17][18][19][20] Yet, whilst there is extensive research in simulation techniques for cataract surgery, data on SBSE in glaucoma surgery is limited. At the time of writing, there is no known integrated, comprehensive SBSE course on surgical trabeculectomy in SSA. The GLAucoma Simulated Surgery (GLASS) trial is the first known randomised controlled trial (RCT) assessing the efficacy of intense SBSE in glaucoma surgery on overall surgical competence, confidence, and live trabeculectomy surgery output in SSA-based resident ophthalmologists. 21 Here we present a post-hoc analysis of the GLASS trial data that evaluates the impact of SBSE on core trabeculectomy surgical skill competency in resident ophthalmologists.
Study Design
This is a post-hoc analysis from the GLASS trial, which is a randomised controlled, parallel-group efficacy trial conducted between October 2017 and July 2019. Trial participants were randomised to two arms, with an intended 1:1 allocation ratio. The trial design and primary results have been fully presented elsewhere. 21,22 Ethical approval was obtained from the London School of Hygiene and Tropical Medicine and the collaborating research institutions. 21,22 The trial was registered (PACTR201803002159198).
Setting & Participants
Resident ophthalmologists from six training centers in Kenya, Uganda, Tanzania, Zimbabwe and South Africa, were recruited according to the inclusion criteria of having performed zero surgical trabeculectomies and assisted in less than five. Participants were in their second, third or fourth year of postgraduate ophthalmology training.
Intervention
The trial intervention was a one-week, intense trabeculectomy SBSE course. The course consisted of theoretical and practical-based teaching on glaucoma and trabeculectomy surgery. The surgical procedure was deconstructed and instruction in surgical steps was provided using a modified Peyton's four-stage approach. 21,23 Individual steps of the procedure were practiced using low cost, moderate-fidelity simulation materials including foam materials for suturing practice and apple peels for scleral flap construction. 24 A full trabeculectomy procedure was performed on high-fidelity synthetic 'Advanced TrabEye' simulation surgery eyes (PS-023, Phillips Studio, Bristol, UK) and using Zeiss Stemi 305 microscopes (Carl Zeiss Microscopy, Jena, Germany) for the competency assessments. Each resident's trabeculectomy procedure on the high-fidelity synthetic 'Advanced TrabEye' was recorded using the Zeiss Labscope App (V.2.8.1) on iPads. Participants allocated to the control arm received the exact same intervention shortly after the one year follow-up assessment.
Outcomes
Participants were assessed on their competency in completing a full trabeculectomy procedure using the ophthalmic simulated surgical competency assessment rubric (Sim-OSSCAR) grading tool. 25 Timelines of assessment were at baseline, primary intervention (time of intervention in the intervention arm), three months, one year, time of intervention in the control arm, and fifteen months (equivalent to three months after intervention received in the control arm, Figure 1).
Anonymized video recordings of the procedures were assessed by two independent, masked graders who were experts in glaucoma surgery and had undergone familiarization training using the Sim-OSSCAR tool (Figure 2). Video recordings of procedures were allocated a random seven-digit number, this being the only identifiable information available for grading. Each grader was therefore fully masked to the participant's identity, allocation arm, training institution and timing of surgical assessment. The primary outcome measure was the combined mean score of three masked assessments of simulation surgical performance over the study period in eight selected core skills from the Sim-OSSCAR tool (Figure 3). Each grader evaluated a minimum of two and a maximum of three anonymized videos, and allocated a maximum score of 2 to each selected core surgical skill. The maximum overall score for the combined surgical skills per anonymised video was 16. Secondary outcome measures included the individual core surgical skill competency scores most improved after intervention and the trends in individual core surgical skill competency scores over the 15-month study period.
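To make the scoring arithmetic concrete: eight skills rated 0-2 give a maximum of 16 per video, and a participant's combined score is the mean over the masked graders' assessments. A minimal sketch of that aggregation, using made-up ratings rather than trial data, is shown below:

```python
import numpy as np

SKILLS = ["scleral incision", "scleral flap", "releasable suturing",
          "conjunctival suturing", "sclerostomy", "tissue handling",
          "fluidity", "speed"]

# Hypothetical ratings (graders x skills): 0 = novice, 1 = advanced
# beginner, 2 = competent, per the Sim-OSSCAR rubric
ratings = np.array([
    [2, 2, 1, 2, 1, 1, 1, 0],   # grader 1
    [2, 1, 2, 2, 1, 1, 0, 0],   # grader 2
])

totals = ratings.sum(axis=1)   # each grader's total out of 16
combined = totals.mean()       # participant's combined competency score
print(totals, f"combined = {combined:.2f}/16")
```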
Statistical analysis
The GLASS trial protocol and primary analysis included the sampling strategy, sample size and power calculations. 21,22 Intention-to-treat (ITT) analysis was used for all outcome measures. Results were presented as mean ± standard deviation (SD) for parametric data, and median and interquartile range (IQR) for non-parametric data. The Wilcoxon signed rank test was used for differences in combined core skill competency scores at each assessment timeline and for differences in scores between trial arms. Residents achieving maximum scores in individual core surgical skill competency were presented as numbers and percentages, with Fisher's exact test used to measure statistical significance for differences in proportions between trial arms and McNemar's test for differences at each assessment timeline. All statistical analyses were conducted using STATA for Windows version 16.0 (StataCorp, Texas, USA), with an alpha level of p<0.05 deemed statistically significant.
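The trial analyses were run in STATA; purely as an illustration, the same non-parametric tests are available in Python's scipy and statsmodels, as the hedged sketch below shows. All numbers here are made-up placeholders, not trial data:

```python
import numpy as np
from scipy.stats import wilcoxon, fisher_exact
from statsmodels.stats.contingency_tables import mcnemar

# Paired scores (hypothetical): the same residents at baseline vs. post-intervention
baseline = np.array([2.9, 3.3, 1.8, 4.2, 2.5, 3.0])
post = np.array([11.7, 12.6, 9.6, 12.0, 10.3, 11.5])
print(wilcoxon(baseline, post))          # signed-rank test for paired samples

# 2x2 table (hypothetical): trial arm vs. achieving a maximum skill score
arm_by_max = np.array([[17, 6],
                       [3, 23]])
print(fisher_exact(arm_by_max))          # exact test for a difference in proportions

# Paired yes/no outcomes for the same residents at two timepoints
paired = np.array([[10, 8],
                   [1, 4]])
print(mcnemar(paired, exact=False, correction=True))  # chi-square form of McNemar
```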
Results
Fifty-three participants were assessed for eligibility for the GLASS trial during the study period. Two participants were excluded pre-randomisation due to prior surgical experience. Fifty-one participants were recruited and randomised, with 25 allocated to the intervention arm (two dropouts) and 26 to the control arm. Forty-nine participants were included in the GLASS trial ITT sub-analysis, 21 in whom baseline characteristics of age, sex and time in residency were balanced.
Overall surgical competency in simulated trabeculectomy
The median combined surgical competency scores at baseline were 2.88/16 points (IQR 1.75-4.17) and 3.25 (IQR 1.83-4.75) in the intervention and control arms respectively (Table 1). At primary intervention, median scores increased to 11.67 (IQR 9.58-12.63; p=0.00001). This increase in core surgical competency scores was maintained at three months (median 11.67, IQR 10.33-13.17; p=0.00001) and at one year (median 11.50, IQR 9.67-12.67; p=0.0001) in the intervention arm. On receiving the intervention after one year of conventional training, median scores in the control arm increased to 11.33 (IQR 10.67-12.50; p=0.00001). The increase was maintained at fifteen months (median 11.00, IQR 8.17-14.00; p=0.0156). When comparing the trial arms, the difference between combined surgical competency scores at three months and at one year showed a large effect of the training intervention (p=0.00001, Table 2).

Figure 4 illustrates the mean scores for each individual core surgical skill between arms. Trial participants in both arms achieved higher mean scores in core surgical skill competency on receiving the intervention. In the intervention arm, the highest score achieved was in releasable suturing at primary intervention (mean 1.77 ± SD 0.42). At three months, the highest score was in conjunctival suturing (mean 1.87 ± SD 0.22); at one year, the highest score was in releasable suturing (mean 1.71 ± SD 0.30). Conversely, mean scores in the control arm at three months and one year were similar to those at baseline. Following intervention in the control arm at one year, the highest score was seen in conjunctival suturing (mean 1.93 ± SD 0.23) and remained so at 15 months (mean 1.64 ± SD 0.38). The lowest scores were achieved in speed at primary intervention (mean 0.48 ± SD 0.71) and remained so at three months (mean 1.02 ± SD 0.71) and at one year (mean 1.03 ± SD 0.72) in the intervention arm. The lowest scores were also in speed in the control arm at the time of intervention and at 15 months (mean 0.64 ± SD 0.64 and mean 1.14 ± SD 0.69 respectively).
Maximum scores in surgical skill competency
Few participants achieved maximum scores in surgical skill competency prior to receiving the intervention (Figure 5). At primary intervention, releasable suturing was the most competent skill achieved, with 17/23 (74%) participants in the intervention arm achieving maximum scores. This was followed by scleral flap (n=16, 70%) and scleral incision (n=15, 65%). However, the number of participants with maximum scores declined at three months and again at one year. The exception was in fluidity and speed, where the number of participants achieving maximum scores in these skills was significantly higher at three months than at the time of primary intervention (χ² = 6.53, p=0.0106 and χ² = 8.33, p=0.0039 respectively, McNemar's test). The number of participants in the control arm achieving maximum scores rose from one (4%) at three months to three (13%) at one year. Following the intervention, 20 participants (83%) achieved maximum scores in conjunctival suturing, followed by releasable suturing (n=15, 63%) and scleral flap (n=10, 42%). When comparing the two arms, only maximum scores in conjunctival suturing were significantly different (p=0.018, Fisher's exact), with the control arm achieving more maximum scores after intervention than the intervention arm had at that same point.
Overall efficacy of glaucoma surgical simulation
The GLASS trial is the first known international multi-center RCT to demonstrate a positive effect of glaucoma surgical simulation training on the surgical competency of ophthalmology residents. 21 Participants in both the control and intervention arms showed significant improvement in competency after receiving high-fidelity, intense SBSE, and this effect was maintained months after the intervention. There was a significant difference in competency between the trial arms, illustrating the disparity in skill uptake between those receiving conventional ophthalmic surgical teaching and simulation-based training. Few studies are available for direct comparison. A study evaluating the efficacy of virtual-reality (VR) SBSE on resident and medical student competency in simulated pars plana vitrectomy found that those naïve to simulation had longer operating times and a higher incidence of retinal detachments compared to those with simulation training. 26 However, these findings were not statistically significant, owing to a low sample size of 14. Similarly, Solverson et al. reported marked improvement in the error rate of novice surgeons using the Eyesi VR simulator, yet the study lacked a simulation-naïve comparison group or a validated means of skill assessment. 27 As the GLASS trial used a validated scoring rubric and adopted an RCT study design, our findings strongly indicate that SBSE can have an immediate and sustained improvement in glaucoma surgical skills.
Efficacy of glaucoma surgical simulation on core surgical skills
In the absence of training, residents scored poorly in core skills required for modern trabeculectomy surgery, such as releasable suturing. Conversely, they scored highest in scleral incision and flap formation, possibly from previous surgical experience in small incision cataract surgery. 15 With conventional training alone, mean scores remained at novice level, with little progression to competent level. Yet, a significant and sustained improvement in competency was observed in both arms shortly after receiving simulation training. Importantly, skills traditionally used in trabeculectomy surgery, such as releasable suturing, sclerostomy and conjunctival suturing, saw the biggest improvement overall, which supports the hypothesis that targeted simulation training can refine sub-specialist surgical skills. Of note, there was little change in general skills such as fluidity and speed, possibly due to insufficient repetitive skill practice by residents over the course of the study. Continuous simulation practice may therefore help reduce overall trabeculectomy surgery time.
Although glaucoma SBSE led more residents to progress to competent level, the subsequent decline in competent scores in later months suggests that residents may become deskilled in acquired trabeculectomy surgical skills over time. This may be due to insufficient exposure to live trabeculectomy surgery practice in conventional training or inadequate uptake of simulation practice between follow-up assessments. Arguably, SBSE should be used to complement traditional surgical teaching rather than replace it, 28 as transfer of surgical skills to the operating room can vary widely depending on the type and amount of simulation training received. 19,27 Moreover, the true association between simulated training and clinical practice remains uncertain. When examining transfer of skills to live surgery, most studies have adopted a retrospective study design to investigate the effects of simulation training based on patient outcomes. For example, one US-based retrospective case series found significantly lower phacoemulsification complication rates in residents with VR simulation training compared to simulation-naïve residents (2.4% vs 5.1% respectively, p=0.037). 29 However, Belyea et al.'s retrospective case review reported no significant difference in phacoemulsification complication rates between third year residents with and without VR simulation training. 19 Prospective assessment and comparison of surgical skill competency in both simulated and live surgeries may be useful to determine the true effect of simulation training.
Limitations. This study has several limitations. Firstly, this is a retrospective post-hoc analysis of data from the GLASS trial. The GLASS trial was therefore not originally powered to address the hypothesis that specific skills benefit more from intense simulation-based training in trabeculectomy. As a result, the true efficacy of the intervention reported in our study may be exaggerated, since sub-analyses of the original data can produce false-positive and/or false-negative associations between the variables. Secondly, whilst this study clearly shows superiority of glaucoma surgical simulation training over conventional training, the results only apply to simulated surgical skill competency using high-fidelity artificial eyes. In SSA, a comparison and evaluation of ophthalmic SBSE between high- and low-fidelity models would be beneficial for reflecting clinical practice in low- and middle-income settings. Due to low trabeculectomy case numbers in the respective training environments, it was also not possible to evaluate and compare live surgical skills with those in the simulated environment. Furthermore, the low response rate in the control group at fifteen months (n=7, 26.9%) makes the findings susceptible to selection bias, distorting the true measure of effect of the intervention. This low response rate was due to trial participants completing their Master of Medicine (MMed) in Ophthalmology degree and no longer being able to participate in the study. Comparison of post-intervention scores between the trial arms should therefore be interpreted with caution. Finally, further detailed analysis of participants failing to achieve "advanced beginner" or "competency" Sim-OSSCAR scores following the intervention would have been beneficial to evaluate how best to refine the intervention to improve their skillset.
Conclusion
This study is the first to show a positive, immediate and sustained impact of SBSE on key and core surgical skills in trabeculectomy. Trabeculectomy remains the most effective surgical treatment for glaucoma management in SSA, but performing the procedure requires advanced microsurgical skills. Evaluating the performance of each surgical step is essential for providing targeted, constructive feedback to residents. Time taken to complete a task is a commonly used outcome measure for SBSE studies, 30 however our findings suggest that the outcome measure of speed is not the best indicator of impact. Formal integration of glaucoma surgical simulation into residency programme structures may result in better standards of surgical training and, most importantly, improve the delivery of safe and effective glaucoma surgical treatment. Recent observations suggest that adopting surgical simulation training is widely accepted as a safer alternative to conventional surgical teaching. 16 Finally, there remains very limited data on surgical trabeculectomy rates and post-operative outcomes in SSA. We therefore suggest a follow-up comprehensive, comparative analysis of trabeculectomy outcomes in centers incorporating SBSE, to evaluate the real-world effectiveness of the intervention on patient glaucoma care.

Figure: Sim-OSSCAR tool for simulated trabeculectomy. Performance of each individual core surgical skill is ranked from 0 (novice) to 1 (advanced beginner) and 2 (competent). Sim-OSSCAR = ophthalmic simulated surgical competency assessment rubric.
The Study of Fault Lineament Pattern of the Lamongan Volcanic Field Using Gravity Data
Lamongan Volcano, located in Tiris, East Java, possesses geothermal energy potential. The geothermal potential is indicated by the presence of geothermal manifestations such as hot springs. We used secondary gravity data from GGMplus. The resulting gravity anomaly map shows that the lowest gravity anomaly in the center of the study area coincides with the hot spring location. The gravity data were analyzed using the SVD method to identify fault structures, which control the geothermal fluid pathways. The results of this research show that the type of fault at the hot springs is a normal fault trending NW-SE. The fault lineament pattern along the maars is also NW-SE, and the maars indicate a normal fault. These results show that gravity data from GGMplus, analyzed with SVD, can be used to determine the type and trend of faults.
Introduction
Indonesia is located between three tectonic plates, i.e. the Eurasian, the Indo-Australian, and the Pacific Plates [1]. These plates form a convergent plate margin, namely a subduction zone. Consequently, Indonesia has many active volcanoes [2]. Active volcanism at a convergent margin is a potential target for geothermal exploration. These geothermal potentials are indicated by manifestations such as hot springs [3].
The Lamongan Volcano is one of the 76 active volcanoes in Indonesia [4]. The Lamongan Volcano, a stratovolcano, is located at 7.983°S and 113.342°E [5]. It is situated in the Sunda arc, between three volcanic complexes, namely the Bromo, Semeru, and Argopuro volcanoes, as shown in Figure 1 [6]. Moreover, the Lamongan Volcano has 61 basaltic cinder or spatter cones and 29 maars [4,7]. Tiris, a village located on the eastern flank of Lamongan Volcano, has geothermal potential characterized by hot springs with temperatures between 35°C and 45°C. There are also zeolite veins in the rocks, indicating a boiling zone along the hydrothermal outflow zone [3].
The gravity method was used to identify subsurface structures at Lamongan Volcano. The basic principle of the gravity method is to measure differences in the gravitational field due to variations in rock density [8]. In this research, we used the gravity method to identify fault structures that control the geothermal fluid pathways, using the Second Vertical Derivative (SVD) [9]. We used secondary gravity data from the Global Gravity Model Plus (GGMplus).
Materials and Methods
In this research, gravity data were obtained from GGMplus in the form of the radial derivative of the disturbing potential. The satellite gravity data were downloaded from http://ddfe.curtin.edu.au/gravitymodels/GGMplus. GGMplus provides gravity data covering all continents and coastal zones, with a total of 3 billion data points. The gravity data are available in grid form with a spacing between points of ~200 m [12]. The coordinates of this research, in the 49S UTM zone, extend from 681731 m to 806625 m and from 9085476 m to 9137909 m.
The data obtained from GGMplus are available in the form of 5°x5° areas, so we had to select the data covering the research area [12]. The utilized GGMplus data were the gravity anomaly. The gravity anomaly was subsequently separated into a regional anomaly using upward continuation, which was subtracted from the gravity anomaly to produce the residual anomaly. The residual anomaly was analyzed with SVD to generate the SVD anomaly.
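To make this regional-residual separation concrete, here is a minimal sketch (not the authors' actual code) that applies the standard wavenumber-domain upward-continuation operator to a gridded anomaly; the continuation height dz is an analyst-chosen parameter assumed here, not a value taken from the paper.

```python
import numpy as np

def regional_residual(anomaly, dz, spacing=200.0):
    """Split a gridded gravity anomaly into regional and residual parts.

    The regional field is estimated by upward continuation with the
    standard wavenumber-domain operator exp(-|k| * dz); subtracting it
    from the observed anomaly yields the residual anomaly.
    """
    ny, nx = anomaly.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=spacing)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=spacing)
    kxx, kyy = np.meshgrid(kx, ky)
    k = np.hypot(kxx, kyy)
    regional = np.real(np.fft.ifft2(np.fft.fft2(anomaly) * np.exp(-k * dz)))
    return regional, anomaly - regional
```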
Structural analysis was conducted by slicing the SVD anomaly. SVD was used to delineate the contact between lithologies of contrasting density, to enhance local anomalies, and to identify the type of fault and the estimated dip of the fault [13,14]. Each slice was made as a cross-section across the fault boundary. The fault boundary was identified by a zero value of the SVD anomaly [15]. Each slice through the SVD anomaly contour yielded a curve, which was then analyzed to determine the type of fault in the area. The criterion used to determine the type of fault is as follows [14,16]: if the absolute value of the maximum SVD anomaly along the slice is larger than that of the minimum, the fault is classified as a normal fault; if it is smaller, the fault is classified as a reverse fault.
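A compact sketch of these two steps, computing the SVD from the residual anomaly via Laplace's equation and applying the amplitude criterion to a slice, is shown below. It assumes a regular ~200 m grid and simple finite differences; it illustrates the technique rather than reproducing the authors' workflow.

```python
import numpy as np

def second_vertical_derivative(residual, spacing=200.0):
    """SVD of a gridded residual anomaly via Laplace's equation:
    d2g/dz2 = -(d2g/dx2 + d2g/dy2), using central differences."""
    d2x = (np.roll(residual, -1, axis=1) - 2.0 * residual
           + np.roll(residual, 1, axis=1)) / spacing**2
    d2y = (np.roll(residual, -1, axis=0) - 2.0 * residual
           + np.roll(residual, 1, axis=0)) / spacing**2
    return -(d2x + d2y)

def classify_fault(svd_slice):
    """Fault type from an SVD profile taken across a zero crossing,
    following the amplitude criterion of [14,16]."""
    if abs(svd_slice.max()) > abs(svd_slice.min()):
        return "normal fault"
    return "reverse fault"
```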
Results and Discussion
The gravity anomaly response was obtained from the GGMplus gravity data. The gravity anomaly ranges from -20 mGal to 380 mGal (shown in Figure 2).
Figure 2. Contour map of gravity anomaly in the study area, obtained from GGMplus and overlaid with faults, lineaments, and hot springs. The focus of our study is shown by the black rectangle and is explained further in Figure 3.

The contour map of the SVD anomaly (Figure 3) shows anomaly values between -0.00032 mGal/m² and 0.00028 mGal/m². The east side is dominated by the highest and lowest anomalies, which correspond with Argopuro Volcano. In the north-west of Lamongan, there is a low anomaly located in the center of the area, and a high anomaly located around the center. This result corresponds with the maars (blue circles) [4,7]. The trend of the fault lineament pattern is directed NW-SE. This direction indicates a weak zone, which is assumed to provide the fluid pathways from the body of the volcano to the surface. This weak zone can be identified as a maar. The fault lineament was inferred from the pattern of the maar distribution. In the north-east of Lamongan Volcano, a zero value of the SVD anomaly indicates the contact boundary of the fault plane. Figure 3 shows that the trend of the fault lineament pattern is directed NW-SE. This direction corresponds to the geological data, which also display NW-SE trends. In addition, the fault in the area to the north-east of Lamongan Volcano coincides with the hot spring location. Overall, the SVD anomaly values along the slices range between -0.000053 mGal/m² and 0.000084 mGal/m². The SVD anomaly curves of slices A-A', B-B' and C-C' have maximum amplitudes that are higher than the minimum amplitudes. Based on the SVD anomaly values and equations 1 and 2, the result can be classified as a normal fault. This normal fault coincides with the hot spring location.
Conclusion
From the SVD anomaly contour map and the curve profiles of the slicing results, we can conclude that there are two faults. The type of fault that corresponds with the hot springs is a normal fault with a NW-SE trend direction. Moreover, the fault lineament pattern along the maars is indicative of a normal fault. This lineament trends NW-SE.
Acknowledgements
We would like to thank "Kementerian Riset, Teknologi, dan Pendidikan Tinggi" for the PUPT funding no. 2452/UN1.P.III/DIT-LIT/LT/2017. We are grateful to the Western Australian Geodesy Group, Curtin University, for providing the gravity data used in this study. We would also like to thank the anonymous reviewers for important suggestions and comments.
Acute ischemic stroke and measurement of apixaban and rivaroxaban: an observational cohort implementation study
Background Treatment with intravenous thrombolysis for acute ischemic stroke is contraindicated with intake of apixaban/rivaroxaban in the last 48 hours. Recent European Stroke Organization guidelines suggest that thrombolysis can be considered if anti-factor Xa activity (AFXa) is <0.5 × 10³ IU/L with low-molecular-weight heparin (LMWH) or unfractionated heparin (UFH) calibrated assays. Some centers also use apixaban/rivaroxaban-calibrated AFXa assays to identify patients with low drug concentrations. Objectives To prospectively evaluate the first year of implementation of drug-calibrated AFXa assays at our center with 2500 yearly admittances with suspected stroke. Methods Samples were analyzed on Sysmex CS-5100 instruments with Innovance anti-Xa reagents. Thrombolysis could be considered with drug concentrations <25 μg/L. Patients were registered in an institutionally approved quality register. Outcomes included (1) the number of patients receiving thrombolysis after drug measurement, (2) turn-around time for drug concentration measurements, and (3) sensitivity of LMWH/UFH AFXa to apixaban and rivaroxaban. Results Apixaban or rivaroxaban was measured in 148 samples, and 4 patients who previously would have been ineligible for thrombolysis were treated with thrombolysis. In total, thrombolysis was administered in 123 patient episodes in the study period. The median turn-around time for the drug measurements was 38 minutes. Apixaban concentrations of 25 μg/L and 50 μg/L corresponded to LMWH/UFH AFXa of 0.13 and 0.27 × 10³ IU/L, respectively. There were too few rivaroxaban results for regression analysis. Conclusion Implementation of apixaban and rivaroxaban measurements led to a small increase in the number of patients receiving thrombolysis. Excluding significant concentrations of apixaban or rivaroxaban using LMWH/UFH AFXa may be feasible.
Acute ischemic stroke (AIS) is a leading cause of death and disability worldwide, and due to an aging population, the absolute number of incident strokes is increasing [1]. Early reperfusion therapy with intravenous thrombolysis with alteplase can dramatically improve patient outcomes [2]. However, as many as 20% of patients with AIS use direct oral anticoagulants (DOACs) before the stroke event, and the use of DOACs is increasing [3][4][5]. Thrombolysis is contraindicated with recent ingestion (within the previous 48 hours) due to the presumed increased risk of symptomatic intracranial hemorrhage [4,6]. Studies show that the compliance or dosage of DOACs may be insufficient [7], and the time of the last dose is often unknown or uncertain at admittance to the emergency department (ED). Thus, patients with low drug concentrations or no prior intake of DOACs can still be considered ineligible for thrombolysis if drug concentration measurements are unavailable.
The 2021 edition of the European Stroke Organization (ESO) guideline for acute treatment recommends considering thrombolysis for patients on factor Xa inhibitors such as apixaban and rivaroxaban when anti-FXa activity (AFXa) is <0.5 × 10³ IU/L, presumably representing an activity corresponding to low drug concentrations [6].
However, anti-FXa assays calibrated for heparin (low-molecular-weight heparin [LMWH]/unfractionated heparin [UFH] AFXa) have variable sensitivity for apixaban and rivaroxaban. In a comparison of 3 different assays, Mithoowani et al. [8] found that 50 μg/L corresponded to LMWH/UFH AFXa of 0.28 to 0.88 × 10³ IU/L and 0.41 to 1.45 × 10³ IU/L for apixaban and rivaroxaban, respectively. Thus, it seems necessary to have a specific definition of LMWH/UFH AFXa cutoffs for each combination of drug and measurement procedure [9]. Some centers use apixaban- or rivaroxaban-calibrated AFXa assays to measure drug concentrations more directly. A recently published international, multicenter, retrospective cohort study described the standard operating procedures (SOPs) for thrombolysis in patients with recent ingestion of DOACs at 49 stroke centers worldwide, which had remarkably different cutoffs and selection strategies [10]. A French expert group has previously suggested that thrombolysis can be administered when the apixaban or rivaroxaban concentration is <50 μg/L [11]; others have suggested no contraindication for thrombolysis if rivaroxaban concentrations are <100 μg/L [12], or apixaban <10 μg/L [13]. However, these studies lack clinical validation [8].
LMWH/UFH AFXa or apixaban- and rivaroxaban-calibrated AFXa assays are not routinely available at many centers. Costs, or perceptions about costs, turn-around time, clinical utility, and regulatory approval of measurement procedures are possible barriers to implementation [14].
To "get with the guidelines," an SOP to guide decisions for the use of thrombolysis incorporating both LMWH/UFH and apixaban-and rivaroxaban-calibrated AFXa assays were established at the Oslo stroke center, with a cutoff for thrombolysis <25 μg/L for both drugs.
We prospectively evaluated the first year of implementation of the SOP using data from a local quality register. Several aims were investigated: (1) whether the implementation led to more patients receiving thrombolysis, (2) the turn-around time for the measurements, and (3) the cutoff limits for LMWH/UFH AFXa compared with drug-calibrated AFXa assays.
Essentials
• Thrombolysis is contraindicated in anticoagulated patients with acute stroke due to bleeding risk.
• We introduced apixaban and rivaroxaban testing to identify patients who could get thrombolysis.
• During 1 year, 123 patients received thrombolysis and 4 extra due to low drug concentration.
• Providing rapid apixaban and rivaroxaban testing may result in a small but important benefit.
| Patients
Treatment of patients with suspected stroke in Oslo is centralized at Oslo University Hospital, Ullevaal. If indicated, the center gives reperfusion therapy with thrombolysis and/or thrombectomy. According to data from recent years, we assess around 2500 patients for suspected stroke each year, with around 800 diagnosed with AIS, including 300 admitted within 4 hours of symptom onset, of whom 23% receive thrombolysis with an average door-to-needle time of 28 minutes [5]. The assessment of patients in the ED with the measurement of apixaban or rivaroxaban is shown in the graphical abstract.
Apixaban and rivaroxaban are the most frequently used DOACs in Norway, accounting for 67% and 22% of patients on DOACs, respectively [3]. Based on the proportion of patients with atrial fibrillation in the stroke population, we estimated before the implementation that 20% of patients presenting with AIS would use a DOAC.
| Recruitment
Measurement methods for apixaban and rivaroxaban were introduced on September 15, 2021, and for the first year after the implementation, all patients admitted with suspicion of stroke on apixaban or rivaroxaban were consecutively registered in a quality register. There were no exclusion criteria. We assumed that the ethnicity of the patients was similar to the population of Oslo, but this was not registered. The biomedical laboratory scientist is part of the stroke call in the ED, facilitating rapid blood collection. All stroke physicians in the ED were instructed to order apixaban/rivaroxaban and LMWH/UFH AFXa for patients using either drug. Due to the uncertainty of the cutoffs, a conservative approach for excluding clinically relevant drug concentrations was chosen, with cutoffs at 25 μg/L for both drugs with the drug-calibrated assays. Patients also had to satisfy ESO clinical criteria to be eligible for thrombolysis [6]. Apixaban and rivaroxaban could also be ordered for up to 48 hours after blood collection from stored samples in patients with stroke without indication for thrombolysis and no need for immediate analysis.
| Variables
Relevant demographic variables, stroke etiologies, use of antithrombotics, drug concentrations, complications, and stroke severity assessed by the National Institutes of Health Stroke Scale (NIHSS) [15] were collected from the electronic medical record and registered in a quality register by a dedicated study nurse and stroke physician.
To ensure the inclusion of all relevant patients, the electronic medical record was checked for all patients with apixaban or rivaroxaban measurements ordered from samples in the ED during the study period, and concentrations were cross-checked. The total number of patients admitted with AIS and those treated with thrombolysis was retrieved from local quality registers.
| Laboratory methods
Venous or arterial blood samples were collected into 3.2% citrated tubes, which were rapidly transported by a pneumatic tube system to the central laboratory and centrifuged at 2800 × g for 5 minutes before analysis in primary tubes. We have previously verified that these centrifugation conditions lead to less than 10 × 10⁹/L residual platelets in plasma. Additionally, we compared centrifugation at 2800 × g for 5 minutes with 2500 × g for 15 minutes for 2 blood samples spiked with apixaban or rivaroxaban, with acceptable results (less than 10% difference). The samples were stored in primary tubes at room temperature for up to 48 hours; previous studies have indicated that apixaban and rivaroxaban measurements (AFXa) are stable for several days [16].
Apixaban, rivaroxaban, and LMWH/UFH AFXa analyses were performed on Sysmex CS-5100 instruments. For the apixaban and rivaroxaban assays, AFXa was converted to drug concentration using a calibration curve derived from apixaban and rivaroxaban calibrators, respectively. We used the Innovance anti-Xa reagents (Siemens Healthineers) for all 3 methods. Until June 13, 2022, we used Biophen Apixaban/Rivaroxaban Calibrators (Hyphen Biomed) and an instrument application made for the Biophen assay. Since June 14, 2022, we have used the new Siemens in vitro diagnostic regulation approved apixaban/rivaroxaban calibrators and applications. Further details about the laboratory methods are described in Supplementary Tables S1 and S2. When comparing the methods used before and after June 14, 2022, the differences were found to be acceptable (Supplementary Figures S1 and S2). Daily internal quality control was performed, and performance in external quality assessment was acceptable during the study period. The turn-around time was calculated from the time of blood collection to the time of the electronic report of the results from the laboratory information system. All orders with a turn-around time of more than 1 hour were manually reviewed, and orders where apixaban/rivaroxaban was requested more than 30 minutes after blood collection were not included in the calculation of turn-around time. In these cases, we considered that apixaban/rivaroxaban had been ordered not to assess eligibility for thrombolysis but to compare with LMWH/UFH AFXa and to guide management of the patient in the diagnostic follow-up.
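A minimal sketch of this turn-around-time calculation is given below; the dictionary keys are hypothetical, and the 30-minute exclusion rule mirrors the one described above.

```python
from datetime import timedelta
from statistics import median

def median_turnaround_minutes(orders):
    """Median turn-around time from blood collection to electronic report.

    `orders` is an iterable of dicts with datetime values under the
    hypothetical keys 'collected', 'ordered', and 'reported'. Orders where
    the drug measurement was requested more than 30 minutes after blood
    collection are excluded, as in the protocol above.
    """
    tats = [
        (o["reported"] - o["collected"]).total_seconds() / 60.0
        for o in orders
        if o["ordered"] - o["collected"] <= timedelta(minutes=30)
    ]
    return median(tats)
```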
| Total yearly cost of drug measurements
The costs for reagents, calibrators, and controls were obtained from Siemens Healthineers. The company does not report list prices, but the stated prices are representative of the Nordic market. We assumed that calibration is performed once a year and that 2 apixaban controls and 2 rivaroxaban controls are run every day. The costs of calibration and controls will be the same regardless of the number of ordered analyses per year. Additionally, we estimated a yearly cost of 20 working hours for a biomedical laboratory scientist to run internal and external quality controls and various tasks necessary to be able to report apixaban and rivaroxaban. The marginal cost in working hours when running an apixaban or rivaroxaban analysis in addition to the International Normalized Ratio and activated partial thromboplastin time, which would still be analyzed, was considered negligible. For the currency conversions, we used the average exchange rate for 2021 (1 Norwegian krone = €0.0984, 1 USD = €0.846).
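The cost structure can be summarized in a small sketch. The unit prices below are hypothetical placeholders (the paper does not publish list prices); the assumptions of yearly calibration, two daily controls per drug, 20 hours of laboratory scientist time, and the 2021 exchange rate come from the text.

```python
def yearly_assay_cost_eur(reagent_per_test_eur, tests_per_year,
                          control_per_run_eur, calibration_eur,
                          hourly_wage_nok, hours_per_year=20,
                          nok_to_eur=0.0984):
    """Rough yearly cost model for one drug-calibrated AFXa assay.

    Controls, calibration, and labor accrue regardless of test volume,
    which is why roughly 75% of the cost reported in the paper was fixed.
    """
    controls = 2 * 365 * control_per_run_eur               # 2 daily controls per drug
    labor = hours_per_year * hourly_wage_nok * nok_to_eur  # 1 NOK = EUR 0.0984 (2021)
    variable = reagent_per_test_eur * tests_per_year
    return calibration_eur + controls + labor + variable
```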
| Statistical analysis
Continuous variables are presented as mean ± SD if they appeared to be normally distributed by visual inspection, or otherwise as median with IQRs. Categorical variables are presented as numbers and percentages (%). Simple linear regression was performed in GraphPad Prism 9.4.1 (GraphPad Software). Missing data for LMWH/UFH AFXa were assumed to be missing at random and were not imputed. Figures were created with GraphPad Prism 9.4.1 or BioRender.com.
| Ethical considerations
The decision to implement the apixaban and rivaroxaban measurements was based on a clinical need to meet the new acute reperfusion treatment options recommended by the ESO guidelines [6]. However, this depended upon reliable concentration measurements and monitoring of the implementation to ensure its safety and effectiveness. Thus, patients were registered in a quality register approved by the head of the Stroke Department. By Norwegian law, patient consent is not necessary or mandatory for quality improvement projects. The register was approved by the hospital's Data Protection Officer (reference number 21/11742).
| Patient characteristics
During the 1-year follow-up period, we registered 148 episodes from 139 patients admitted with suspected stroke with known use of apixaban (124 episodes) or rivaroxaban (24 episodes). Of these, 72 episodes (49%) were diagnosed as an AIS. The most frequent indication for anticoagulation was atrial fibrillation. Patient characteristics are shown in Table 1. During the study period, there were 592 episodes with AIS; in 123 (20.8%) episodes, patients were treated with thrombolysis.
| Apixaban and rivaroxaban results, treatment with thrombolysis, and turn-around time
The inclusion and treatment of patients is shown in Figure 1. Drug concentrations were missing for 2 patients using apixaban and 1 patient using rivaroxaban. The majority of patients had drug concentrations in the range of 100 to 200 μg/L (Figure 2A). Seven and 5 patients had apixaban or rivaroxaban below the cutoff of 25 μg/L, respectively. Of these, 1 patient on apixaban and 3 on rivaroxaban were otherwise eligible and were treated with thrombolysis (Table 2). The other 8 patients had an unknown time of stroke onset, were admitted too late, or had symptoms in regression. None of the 8 patients with a known time of last drug intake and drug concentration below 25 μg/L had taken the last dose more than 48 hours previously (range, 7 to 32 hours). Of the 4 patients with apixaban/rivaroxaban concentration below the cutoff of 25 μg/L treated with thrombolysis, 2 had a marked clinical improvement according to the NIHSS (1 also had a thrombectomy); 1 patient with severe symptoms had no improvement, and 1 had a severe intracerebral hemorrhage (ICH) complication and died. Retrospectively, the medical history of this last patient changed from an unknown intake of DOAC to discontinuation of apixaban several weeks before the stroke.
The number of patients eligible at alternative cutoffs of 50 μg/L and 100 μg/L is shown in Table 3.
The median turn-around time was 38 minutes (IQR, 33-46) (Figure 2B). The turn-around time for the 4 patients who were treated with thrombolysis is shown in Table 2; no patients were denied thrombolysis due to delayed apixaban or rivaroxaban results. Using linear regression analysis of the samples with apixaban concentration ≤100 μg/L, the cutoffs for LMWH/UFH AFXa corresponding to 25 μg/L and 50 μg/L were found to be 0.13 and 0.27 × 10³ IU/L, respectively. The ESO-recommended cutoff of 0.5 × 10³ IU/L corresponded to 90 μg/L for apixaban. For rivaroxaban, the number of results was not sufficient to perform regression analysis.
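A sketch of this cutoff conversion is shown below; it assumes paired apixaban and LMWH/UFH AFXa measurements, fits an ordinary least-squares line on the ≤100 μg/L subset as in the paper, and evaluates it at a chosen drug cutoff. With the paper's data, 25 and 50 μg/L map to about 0.13 and 0.27 × 10³ IU/L.

```python
import numpy as np
from scipy.stats import linregress

def afxa_cutoff(drug_conc_ug_l, afxa_iu_l, target_ug_l):
    """Map a drug-concentration cutoff onto the LMWH/UFH AFXa scale.

    Fits a simple linear regression of LMWH/UFH AFXa (x10^3 IU/L) on
    apixaban concentration (ug/L), restricted to samples <= 100 ug/L,
    and evaluates the fitted line at the desired cutoff.
    """
    x = np.asarray(drug_conc_ug_l, dtype=float)
    y = np.asarray(afxa_iu_l, dtype=float)
    mask = x <= 100.0
    fit = linregress(x[mask], y[mask])
    return fit.intercept + fit.slope * target_ug_l
```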
| Total yearly cost of drug measurements
The total yearly costs of apixaban and rivaroxaban measurements were €8124 and €7792, respectively (Supplementary Table S3). Approximately 75% of the costs were for controls, calibration, and other work necessary to be able to report apixaban and rivaroxaban results.

The percentage of patients with suspected stroke using apixaban or rivaroxaban was lower than previously reported in a Swiss register study [4]. The cutoff for the safe administration of thrombolysis is not well established [6,7,[11][12][13], and we thus chose a low cutoff (25 μg/L).
With a higher cutoff, more patients would have been eligible for thrombolysis. The ESO suggests that thrombolysis can be given when AFXa is below 0.5 × 10³ IU/L, which corresponds to a drug concentration of 90 μg/L for apixaban. A recent large-scale register study showed that some centers use a cutoff at 100 μg/L [10]. Using this cutoff, 12 more patients would have been eligible for thrombolysis compared to the 25 μg/L cutoff. Interestingly, this study also indicates that patients with AIS on DOACs have a lower risk of procedure-related ICH than patients who do not use DOACs, irrespective of the drug concentration [10]. However, selection bias could explain this finding, as patients on DOACs with a high expected bleeding risk did not receive thrombolysis. Patients with atrial fibrillation on DOACs are generally older and have more comorbidities, with an increased risk of bleeding complications after thrombolysis compared with the general stroke population. Further, the mean NIHSS in our study population and the Norwegian Stroke Registry is lower than in the referred study, indicating milder strokes [5,10], where potential treatment effects must be balanced against the increased bleeding risk. The safety of thrombolysis for patients on DOACs should be validated in prospective studies. It is also important to note that similar concentrations of the different DOACs may not result in similar anticoagulant effects [17].
Interestingly, the patients with apixaban or rivaroxaban below the cutoff had relatively short time intervals since the last dose, much shorter than the 48 hours indicated as the time limit for safe thrombolysis treatment according to current guidelines. This indicates that drug concentration measurements are also helpful when the time of the last dose is known to be within the last 48 hours.
A sufficiently short turn-around time is necessary to ensure that the results will be useful in clinical decision-making, as thrombolysis efficiency is highly time-dependent. At our institution, the biomedical laboratory scientist assesses the patient in the ED as part of the stroke call, and we already had a rapid transportation system with high priority for samples from this patient group. It is important to note that each laboratory must verify that the centrifugation conditions are adequate. The door-to-needle time for the 4 patients who were treated with thrombolysis was longer than desirable and longer than the average at our institution (28 minutes). The risk of bleeding complications must be weighed against the risk of poor outcomes due to treatment delay. This could be improved if point-of-care tests become available.
Using LMWH/UFH AFXa instead of the apixaban- or rivaroxaban-calibrated assays is an attractive option from the laboratory and economic perspective since it, in theory, can be used for all FXa inhibitors. However, the methods are not validated by the manufacturer or approved for this use. Thus, the responsibility of validating the cutoff limit falls to the end user. Different LMWH/UFH AFXa assays have different sensitivities for apixaban and rivaroxaban [8]. However, our results and previous results indicate that the Innovance LMWH/UFH AFXa assay has sufficient sensitivity and a linear relationship between drug-specific measurements and LMWH/UFH AFXa in the relevant concentration interval, which indicates that drug- and method-specific cutoffs can be established [8]. To our knowledge, there has not been a discussion on how laboratories could ensure stable performance for LMWH/UFH AFXa assays to verify low concentrations of DOACs. In our opinion, firstly, validation studies for each combination of drug and method are necessary to establish cutoffs. Secondly, a pragmatic approach could be the verification of each lot/calibration using apixaban and rivaroxaban reference materials (eg, calibrators from other vendors), and of daily performance by running apixaban and rivaroxaban controls with acceptance limits specified in IU/L around the established cutoff.
At many hospitals, apixaban and rivaroxaban measurements are not available; an important reason for this is probably the cost, or the perceived cost [14]. Even though the number of patients eligible for thrombolysis after drug measurements was small, we believe the benefit more than justifies the cost. Previous studies have found that thrombolysis for AIS may reduce lifetime health costs [18]. While we did not perform a cost-effectiveness analysis, we think that the incremental cost per quality-adjusted life year for drug measurement and thrombolysis, compared with no thrombolysis, is likely to be well within the willingness-to-pay limit of €64,000 used in similar studies in Norway [19]. Since most expenses were fixed costs such as controls, cost-effectiveness will be higher for centers with a large number of patients. The cost for drug measurement per performed thrombolysis could be lower than reported in our study. With a higher cutoff, more patients will be eligible for thrombolysis. The use of apixaban and rivaroxaban measurement for other indications, both for patients with stroke and other groups, would also lower costs per test. For patients with stroke, dose adjustments or changes of drugs for patients with AIS, or the etiological diagnostic work-up for patients with ICH, could be possibilities [20,21]. However, such use does not always necessitate a very short turn-around time; thus, samples could be sent to a centralized laboratory, lowering the costs.
This study has some weaknesses. First, the number of episodes with patients using apixaban/rivaroxaban was limited, and the number of patients treated with thrombolysis was low, meaning that the results may have a high degree of uncertainty. It also restricted us from concluding on the safety of thrombolysis for this patient group. Second, the attending physician in the ED was instructed to order apixaban or rivaroxaban for patients using any of the drugs. This may have been missed for some patients, particularly those with other known contraindications to thrombolysis. Thus, the real number of patient episodes with previous apixaban/rivaroxaban was likely higher. However, since eligibility for thrombolysis is carefully considered for each patient, it is unlikely that patients eligible for thrombolysis after apixaban/rivaroxaban measurement became available were missed. Third, we decided on a conservative low cutoff based on a lack of studies demonstrating the safety of thrombolysis for patients on DOACs. A study published after the end of our project inclusion indicates that it may be safe to treat with thrombolysis at higher DOAC concentrations [10]. However, as already mentioned, this is controversial. Fourth, for the majority of the study period, we used a laboratory-developed method for the measurement of apixaban/rivaroxaban. The method comparison with the in vitro diagnostic regulation approved measurement procedures adopted for the last months of the study, shown in the Supplementary file, demonstrated a bias between the methods. This may have had some importance for the comparison between apixaban/rivaroxaban and LMWH/UFH AFXa measurements. Fifth, in order to evaluate the feasibility of drug measurements in the time-dependent acute setting, we assessed the turn-around time. Samples ordered more than 30 minutes after blood collection were excluded from this analysis.
In conclusion, our study showed that implementation of an SOP for apixaban and rivaroxaban measurements combined with thrombolysis, to meet the increased acute treatment options in new guidelines, was feasible and led to a small increase in the number of patients treated with thrombolysis. However, more studies are needed to establish cutoffs for the safe administration of thrombolysis. Stroke centers, particularly those involved in studies using LMWH/UFH AFXa to verify low drug concentrations, should know and describe the relationship between LMWH/UFH AFXa and drug-specific measurements. We also encourage LMWH/UFH AFXa assay manufacturers to establish apixaban-, rivaroxaban-, and edoxaban-specific cutoffs for their reagents and protocols to ensure stable method performance for this application of the assay.
4 | Discussion

We introduced a local SOP for patients admitted with suspected stroke, incorporating apixaban- and rivaroxaban-calibrated AFXa assays in clinical decision-making. In the first year after implementation, this led to thrombolysis of 4 patients with AIS who previously would have been ineligible, out of 123 episodes of patients treated with thrombolysis. The median turn-around time was 38 minutes, showing that measurements are feasible in a time-dependent acute setting. The sensitivity of the LMWH/UFH AFXa assay to apixaban was sufficient to use the method for the exclusion of clinically relevant apixaban concentrations. We could demonstrate a linear relationship between apixaban and LMWH/UFH AFXa results below 100 μg/L.

Figure 1. Overview, inclusion, assessment, and treatment of patients. Created with BioRender.com. DOAC, direct oral anticoagulant.
Figure 2. (A) Apixaban and rivaroxaban concentrations. The dashed line marks the cutoff of 25 μg/L; the full lines represent the medians. (B) Turn-around time for apixaban and rivaroxaban measurements ordered from the Emergency Department; the full line represents the median.
Table 2. Characteristics of patients treated with intravenous thrombolysis.
Safety, Efficacy, and Patient Satisfaction with Initial Peripherally Inserted Central Catheters Compared with Usual Intravenous Access in Terminally Ill Cancer Patients: A Randomized Phase II Study
Purpose The purpose of this study was to investigate whether routine insertion of peripherally inserted central catheter (PICC) at admission to a hospice-palliative care (HPC) unit is acceptable in terms of safety and efficacy and whether it results in superior patient satisfaction compared to usual intravenous (IV) access. Materials and Methods Terminally ill cancer patients were randomly assigned to two arms: routine PICC access and usual IV access arm. The primary endpoint was IV maintenance success rate, defined as the rate of functional IV maintenance until the intended time (discharge, transfer, or death). Results A total of 66 terminally ill cancer patients were enrolled and randomized to study arms. Among them, 57 patients (routine PICC, 29; usual IV, 28) were analyzed. In the routine PICC arm, mean time to PICC was 0.84 days (range, 0 to 3 days), 27 patients maintained PICC with function until the intended time. In the usual IV arm, 11 patients maintained peripheral IV access until the intended time, and 15 patients underwent PICC insertion. The IV maintenance success rate in the routine PICC arm (27/29, 93.1%) was similar to that in the usual IV arm (26/28, 92.8%, p=0.958). Patient satisfaction at day 5 was better in the routine PICC arm (97%, ‘a little comfort’ or ‘much comfort’) compared with the usual IV arm (21%) (p < 0.001). Conclusion Routine PICC insertion in terminally ill cancer patients was comparable in safety and efficacy and resulted in superior satisfaction compared with usual IV access. Thus, routine PICC insertion could be considered at admission to the HPC unit.
Introduction
Oral administration of medication and nutrition is often difficult in terminally ill cancer patients because of progressive difficulties in swallowing, nausea and vomiting, intestinal obstruction, and consciousness disturbance [1]. Therefore, reliable intravenous (IV) access is an important issue in terminally ill cancer patients. However, these terminally ill cancer patients have limited or no peripheral venous access due to edema or repeated venous punctures from long-term IV therapy, including chemotherapy and blood transfusions. Thus, central venous access has played an important role in providing IV access in terminally ill cancer patients.
There are several options for applying central venous catheters (CVCs) in cancer patients, including the subclavian venous catheter, chemo-port (CP), and peripherally inserted central catheter (PICC) approaches. Among these, PICC insertion is well tolerated, without catastrophic risks (e.g., pneumothorax or wound dehiscence), and provides medium-term intravascular access [2,3]. Terminally ill cancer patients are vulnerable to minor trauma due to poor performance status and general condition, and may have behavioral problems due to mental deterioration or delirium [4]. In addition, most of these patients have limited survival of 1-2 months [5,6]. Hence, terminally ill cancer patients need CVCs that are safe, comfortable to insert, and offer intermediate durability of IV access. Considering those aspects of terminally ill cancer patients and PICC, the PICC is an attractive alternative to other forms of CVCs.
However, limited data exist regarding the safety and efficacy of PICC in homogeneous terminally ill cancer patient populations [7][8][9]. Our group previously conducted retrospective and prospective studies examining the performance of PICC in terminally ill cancer patients [7,8], demonstrating a high insertion success rate (86%-100%), low premature removal rate (10%-16%), and favorable patient-reported satisfaction (80%). However, due to their retrospective or single-arm designs, these previous studies were limited in their ability to establish the superiority of PICC over peripheral IV access and the appropriate time to insert PICC in terminally ill cancer patients. Currently, when terminally ill cancer patients are admitted to the hospice-palliative care (HPC) unit, peripheral IV access is maintained as long as possible, and insertion of CVCs such as PICC is considered when it is no longer possible to access peripheral veins. However, many terminally ill cancer patients experience discomfort related to frequent venous puncture because of poor peripheral IV access at the time of admission. In addition, when PICC IV access is necessary, PICC insertion is often impossible due to poor medical condition such as coagulopathy or delirious behavioral problems. Thus, early insertion of PICC upon admission may be effective in patients requiring parenteral nutrition or medication. The purpose of this study was to investigate whether routine insertion of PICC at the time of admission to a HPC unit is acceptable in terms of safety and efficacy and whether it results in superior patient satisfaction compared to usual IV access.
Patients
Terminally ill cancer patients who were admitted to the HPC unit at Pusan National University Yangsan Hospital between February 2017 and January 2020 were enrolled in this study. Terminally ill cancer patients receive no additional anti-cancer treatment in our institution and have estimated survival times of 1-2 months. Admission to the HPC unit is usually considered if parenteral nutrition and medication are required. Exclusion criteria were as follows: (1) patients who already had a CVC such as a CP, (2) patients with severe coagulopathies, defined as a platelet count less than 20,000 or an international normalized ratio higher than 2, (3) patients with evidence of overt sepsis, and (4) patients with severe behavioral problems that would make PICC insertion difficult. Patients with a history of sepsis but negative culture test results and no signs of infection were allowed. If a prior PICC was removed due to unexpected events and reinserted, we did not count each PICC placement as a new event.
Study design
This study was a single-center, prospective, randomized phase II trial. The study subjects were admitted to the HPC unit between May 2017 and January 2020, stratified according to Eastern Cooperative Oncology Group (ECOG) performance status and previous infection history within 1 month, and randomly assigned to two groups: (1) the routine PICC arm (initial insertion of PICC at the time of admission, meaning that the procedure was conducted within 3 working days after study enrollment), or (2) the usual IV access arm (maintenance of the peripheral IV line, with up to two venipuncture trials a day, and late insertion of PICC). A research coordinator was responsible for randomizing patients to study arms using a computer-generated random allocation table with 4-block randomization.
PICC insertion and management
All PICCs were inserted by an interventional radiologist in the angiography room using ultrasound guidance or fluoroscopic imaging. All operators wore aseptic gowns, masks, and gloves, and all of the patients received a dressing with aseptic drape. Seldinger's technique was used routinely [10]. The PICC lines contained a single lumen and were made of second-/third-generation polyurethane. The location of the catheter tip was confirmed by chest radiography. None of the PICCs were sutured but were held in place with a StatLock Catheter Stabilization Device (BARD, Covington, GA).
No patient received prophylactic antibiotics or anticoagulation drugs for infection or thrombosis. Catheter replacement over a guidewire was strictly prohibited in this study. All patients received a closed dressing dampened with betadine on the catheter insertion site every 3 days. PICC tip cultures were performed when the catheter was removed at the time of discharge or within 30 minutes of death. If the catheter tip showed positive findings in culture, we checked for the presence of catheter-related blood stream infection (CRBSI) based on medical progress and laboratory findings.
Catheter monitoring and data collection
We assessed clinical complications such as pain, edema, bleeding, and local or systemic catheter-related infections. CRBSIs were defined by a positive catheter tip culture and at least one positive peripheral blood culture of the same organism without other sources of infection. Catheter-related thrombosis occurs in two forms: intra-catheter thrombosis and thrombophlebitis. The former is suspected when the catheter flow rate slows or back flush is impossible, and the latter is suspected when patients complain of arm edema or pain.
In addition, we evaluated patient-perceived satisfaction using a semi-structured questionnaire assessing the degree of comfort related to IV access at day 5 of study enrollment: "How is your satisfaction with the IV access so far?" (rated as "much comfort," "a little comfort," "no change," "a little discomfort," or "much discomfort"). Patients who underwent PICC insertion were evaluated for procedure-related distress by the following question: "Did you experience distress because of the procedure?" (rated as "distressing," "a little distressing," or "not distressing"). Patients who underwent PICC in the usual IV arm were evaluated for comfort improvement at day 5 after PICC insertion by the following question: "How comfortable is the parenteral access after placement of the PICC?" (rated as "much more comfort," "a little more comfort," "no change," "a little more discomfort," or "much more discomfort").
Statistical analysis
The sample size was calculated based on assumed maintenance success rates in the routine PICC arm (90%) and the usual IV arm (95%), based on a prior study [7]. With a non-inferiority margin of 25%, a power of 80%, and an alpha of 0.05, the required sample size was determined to be 29 patients in each of the two groups. Allowing for a drop-out rate of 10%, a total of 33 patients were planned to be randomized to each arm. Patients who died within 7 days of study enrollment or were transferred to other hospitals, those who required PICC insertion within 5 days of enrollment in the usual IV access arm, and those who did not receive PICC insertion within 3 days in the routine PICC arm were identified as drop-outs and excluded from the final analysis.
The baseline demographics and PICC-related characteristics of the patients were summarized using descriptive statistics, including median, mean, and range. The primary endpoint was IV maintenance success rate, defined as the ratio of patients who maintained functional IV access until the intended time (death or transfer) to all patients. In the routine PICC arm, IV maintenance success was defined when the PICC was maintained until the intended time. In the usual IV arm, IV maintenance success was defined when the peripheral IV line was maintained until the intended time without requiring PICC or PICC maintained until the intended time if inserted. If PICC was required in the usual IV arm but PICC insertion was impossible due to the patient's general condition or coagulopathy, it was defined as IV maintenance failure. The secondary endpoints were the patients' perceived satisfaction and complication rate. The complication rates were reported as complications per 1,000 PICC days and a simple rate. The IV maintenance success rate, which is the primary endpoint, was compared between the two groups using the Z-test. Kaplan-Meier estimates were used to analyze the time to event variable. Survival comparisons were performed using univariate log-rank tests. Median follow-up duration was calculated according to the inverted Kaplan-Meier method. Statistical analyses were performed using SPSS ver. 17.0 (SPSS Inc., Chicago, IL).
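The per-1,000-catheter-day rate used here is simple arithmetic, as the short sketch below shows; the catheter-day totals are back-calculated from the reported rates (for example, 9 complications at 14.1/1,000 PICC days imply roughly 638 catheter-days) and are therefore approximate.

```python
def complications_per_1000_days(n_complications, total_picc_days):
    """Complication rate expressed per 1,000 catheter-days."""
    return 1000.0 * n_complications / total_picc_days

# Routine PICC arm: 9 complications over ~638 catheter-days -> ~14.1
print(complications_per_1000_days(9, 638))
# Usual IV access arm: 6 complications over ~177 catheter-days -> ~33.9
print(complications_per_1000_days(6, 177))
```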
Patients and characteristics
In total, 186 terminally ill cancer patients were admitted to the HPC unit during the study period, 66 of whom were enrolled in this study, with 33 patients in each group. In the routine PICC arm, 29 patients were analyzed, excluding those who died (3 patients) or were transferred (1 patient) within 1 week. In the usual IV access arm, 28 patients were analyzed, excluding those who died (2 patients) or were transferred (1 patient) within 1 week, and two patients who required PICC insertion within 5 days (Fig. 1). There was no difference in age, primary cancer site, or prior curative treatment between the two groups. There were no differences in factors indicating general medical condition and life expectancy, such as ECOG performance status and the simplified palliative prognostic index. Prior infection and previous central venous access performance were similar between the two groups (Table 1).
Results of IV access analyses
In the routine PICC arm, 27 of the 29 cases maintained PICC until the intended time (death, 23; transfer, 4); the other two patients had the PICC prematurely removed due to delirium-related self-removal. In the usual IV arm, 11 patients maintained peripheral IV access until death, 15 patients underwent PICC insertion and maintained it until the intended time (death, 13; transfer, 2), and the other two patients were unable to undergo PICC due to poor medical condition. In summary, the IV maintenance success rate in the routine PICC arm (27/29, 93.1%) was similar to that in the usual IV arm (26/28, 92.8%; p=0.958).

In the 44 PICC insertion cases, the mean time from admission to the HPC unit to PICC insertion was 0.84 days (range, 0 to 3 days) in the routine PICC arm and 8.65 days (range, 6 to 18 days) in the usual IV arm. PICCs were successfully inserted in all patients without immediate catastrophic complications. Procedure-related distress was mostly "not distressing" (76% in the routine PICC group, 87% in the usual IV group) or "a little distressing" (17% vs. 13%) in both arms. The median catheter life span was 16.0 days (95% confidence interval [CI], 8.5 to 23.5) in the routine PICC arm and 8.0 days (95% CI, 4.5 to 11.5) in the usual IV arm (Table 3). With a median follow-up duration of 55.0 days (95% CI, 31.1 to 78.8), the median overall survival from admission to death was 16.0 days (95% CI, 10.7 to 21.3) in the routine PICC arm and 15.6 days (95% CI, 13.5 to 16.5) in the usual IV access arm (Fig. 2).
Patients perceived satisfaction
Regarding satisfaction with IV access on day 5 of study enrollment, patient-reported comfort levels were as follows in the routine PICC arm: much comfort (n=14, 48%) and a little comfort (n=14, 48%), with most patients (96%) reporting favorable satisfaction. In the usual IV arm, only 7% and 14% of patients reported much comfort and a little comfort, respectively, while 50% of patients reported no change and 25% reported a little discomfort. The routine PICC access group reported significantly better comfort compared to the usual IV access group (p < 0.001). Among the 15 patients who actually underwent PICC in the usual IV arm, all answered that they were more comfortable with IV access after PICC insertion (Table 4).

Figure: Causes of PICC removal among patients who actually underwent PICC.
Complications and removal of PICC
Nine complications (28%, 14.1/1,000 PICC days) occurred in the routine PICC arm, while six complications (40%, 33.9/1,000 PICC days) occurred in the usual IV access arm. The most frequently documented complication was bleeding, in nine cases, followed by a feeling of irritation and self-removal, in two cases each. Cases with bleeding involved only trivial bleeding and were resolved by simple compression (Table 5). The mean time from PICC insertion to complication occurrence was 21.2 days (range, 3 to 57 days), except for bleeding complications, which mostly occurred immediately after PICC insertion (mean, 2.2 days; range, 1 to 8 days). There was no PICC complication-related death. Among the 36 cases (routine PICC arm, 23 cases; usual IV access arm, 13 cases) in whom PICCs remained positioned until death, the catheter tip was cultured in 18 and 10 cases, respectively. One case (6%) in the routine PICC arm had positive tip culture results, and the pathogen was Staphylococcus aureus. The patient was started on palliative sedation for intractable dyspnea due to cancer progression on day 52 of PICC insertion, and fever occurred on day 57. The clinicians focused on symptom control rather than additional work-up considering the systemic condition, and the patient died on day 60. We concluded a diagnosis of CRBSI based on clinical data, although the patient did not fulfill the definitive criteria.
Discussion
The current study showed the effectiveness and safety of routine PICC insertion at the time of hospitalization for HPC: routine initial PICC insertion did not increase complications compared with usual IV access (i.e., delayed PICC) and had a similar IV maintenance success rate. Additionally, patient-perceived satisfaction in the routine PICC arm was significantly more favorable than that in the usual IV access arm. Thus, this study showed that PICC could be routinely inserted at admission to the HPC unit in terminally ill cancer patients, considering their poor general conditions and limited period of survival.
Even though PICC insertion was performed routinely at the time of hospitalization, PICCs were maintained in 90% or more of patients until the intended time, and these values did not differ from those of the usual IV access arm. This is in line with the good PICC performance observed in terminally ill cancer patients in our previous study [7]. On the other hand, one case of CRBSI (day 57) and two cases of self-removal (days 8 and 34), which can lead to premature removal, occurred only in the routine PICC group. This suggests an increased risk of routine PICC insertion-related complications with longer dwelling time of the PICC, in keeping with previous studies showing that longer PICC maintenance periods correlate with a higher incidence of complications [9]. However, even in the routine PICC group, the probability of CRBSI or premature removal was very low compared with previous studies investigating PICC [9,11,12]. Another important finding was that the routine PICC group showed significantly better patient-perceived satisfaction with IV access compared with usual IV access. This is an expected result considering that peripheral IV access is often difficult in HPC patients who are elderly or have undergone repeated puncture over long periods of time. Considering the excellent maintenance success rate of PICC, patient satisfaction with routine PICC, and the overarching goal of HPC of focusing on patient quality of life, an IV access strategy using routine PICC at admission to the HPC unit could be effective.
As in our previous studies [7], the current study showed a PICC success rate of 100%, and there were no serious procedure-related complications during PICC insertion. Additionally, PICC insertion-related distress was trivial in both study arms. These favorable results may be due to the performance of all procedures by experienced interventional radiologists using radiologic guidance, resulting in a superior success rate and safety [12][13][14]. The success rate and safety at the time of PICC insertion are important in terminally ill cancer patients because they are in poor general status and vulnerable to trivial damage. Our results strongly support the benefits of early PICC insertion.
In this study, there were significantly fewer PICC-related complications than in previous studies [9,[11][12][13][14]. The reasons are presumed to be the following characteristics of the current study: PICC insertion was performed by an experienced radiologist using ultrasound guidance or fluoroscopic imaging under strict sterile conditions rather than by bedside blind insertion; patients with hematologic malignancies, which are associated with a relatively high risk of adverse events, were excluded; and PICC management and monitoring were strict, reflecting the nature of a prospective trial. Above all, considering that the mean time to occurrence of complications was 21.2 days, the limited life span of terminally ill cancer patients in the HPC unit (mean survival, 22.2 days) would be a major explanation for the low complication rate. Therefore, our results suggest that routine PICC insertion under strict PICC management can be performed in terminally ill cancer patients with limited lifespans.
The major limitation of this study was the possibility of bias due to its nature as a clinical audit and its relatively small, single-center sample. First, assessment of patient satisfaction by a study coordinator may result in an underestimation of patient-reported distress and an overestimation of patient-reported usefulness. Nevertheless, we tried to minimize physician bias by including outcome evaluations conducted by an independent rater from another team. Second, the sample size may not be sufficient to represent actual complication rates. Moreover, tip cultures were not obtained in all cases, and detailed evaluations were limited due to the poor general performance of terminally ill cancer patients. Therefore, considering the relatively small sample size, single-center design, limited survival duration, and the procedures being performed by interventional radiologists, the current study results require cautious generalization to overall HPC patients. Routine PICC could be considered in terminally ill cancer patients with limited lifespans in hospital settings where PICC insertion and management can be appropriately performed.
Despite these limitations, this study was the first randomized phase II study of PICC in terminally ill cancer patients using an active comparator. The study showed that more than 90% of routinely PICC-inserted patients maintained a functional PICC until the intended time, and 97% reported satisfaction with IV access, compared with 21% satisfaction with usual IV access. Considering the characteristics of terminally ill cancer patients, such as poor general condition and limited period of survival, initial PICC insertion at the time of admission to the HPC unit is a safe and useful option for IV access. In conclusion, routine PICC insertion in terminally ill cancer patients showed comparable safety and efficacy and superior satisfaction compared with usual IV access. Thus, routine PICC insertion could be considered at admission to the HPC unit.
PATRIOTISM AND ARMENIAN STATEHOOD IN THE NORMS OF ARMENIAN LAW
The purpose of the article is to study and present the sources of Armenian law that contain provisions on patriotism and Armenian statehood. To achieve this goal, our task is to investigate the legal and patriotic norms of Hakob and Shahamir Shahamirian's Girk Anvanial Vorogayt Parats (Snare of Glory). It is no secret that the work "Vorogayt Parats" is one of the most important documents of Armenian law, which for the first time in Armenian reality presents a holistic and orderly system of norms of various branches of law. In the course of the research, both general scientific (analysis, the principle of historicity) and special (comparative-legal) methods were applied. The study of the above-mentioned legal and patriotic norms allows us to conclude that they play a key role in the development of our national, legal, and political thought and constitute a value radiating patriotism.
As we know, the legal norms of customary law historically emerged after the formation of families, out of which matriarchy and then patriarchy developed. Based on family ties, tribes and clans were later formed.
At the initial stage these were forms of communal organization; later, with the emergence and development of the division of labour and of trading relations, state institutions appeared, with their mechanisms, levers, and supervisory and enforcement bodies. Simultaneously, legislative law emerged and developed.
The work belongs to those sources of Armenian law that contain patriotism. We will present the legal-patriotic norms of Hakob and Shahamir Shahamirian's "Vorogayt Parats" 1, and we will also briefly address some legal norms embodying patriotism, democracy and humanity.
The very existence of the state implies the emergence of law and legal norms, and they are interrelated.
Historically, the processes of emergence and development of the state and of law also took place on the Armenian Highland. It was in the Armenian world that Armenian statehood arose and developed.
In the Bible, the Armenian Highland was the earthly paradise, where the rivers of paradise originated: the Araks (Genon), Tigris, Euphrates and Pison (Jorokh), and where Adam and Eve first settled (Nelson, 2010, p. 8).
The Armenian Highland was considered a land of legal knowledge and sacred rituals. The climate, the geographical location and the abundance of natural resources created good conditions for the emergence and development of crafts and trade. These naturally raised the need for legal regulation of trade, economic, social and other relations. Necessarily, legal norms that regulated those relations appeared and developed, together with the forms and mechanisms of their emergence and adoption. The demand for legal knowledge subsequently grew.
The state of the Armenian forefathers was called Aratta-Araratyan ("Ar" was the highest God). The state was famous for its governing structures and state mechanisms. The creation of the Armens' state dates to the V-IV millennia BC. According to legend, Prince Hayk Askanaz, after his victory over the Babylonian King Bel, united the possessions of the Armenian princes on the shores of Lake Van within the borders of one kingdom, which was called Hayq-Hayastan-Armenia. Thus the foundation of the Haykazuni Kingdom was laid. It lasted for 1776 years, following dynastic succession.
The head of the state was the Ruler-monarch. He obeyed the Gods and personally communicated with them in the Holy Temple. The Ruler himself was also proclaimed a God; he was the bearer of absolute power, and only he could delegate the implementation of certain parts of his power to other persons or groups. The will of the Ruler had the force of law. He prescribed rights and obligations, created official positions and governing bodies, and made appointments. The monarchic title was hereditary. The Armenian monarchic dynasties and kings' forenames of the V-III millennia BC were recorded on clay tablets.
A clay inscription depicts the Armenian God Aramazd as the ruler of the earth, the waters and the sky, demonstrating the attributes of his absolute power. With his left leg he leans on a seal, the Law.
Armenian legal thinking created numerous legal monuments, which have survived to our day in the form of hieroglyphs, cuneiforms, stone obelisks, scrolls, printed and manuscript texts, and in other forms and media.
The ancient philosopher and law-maker Shuruppak demands in his "Admonitions": "Do not ignore the admonition that I give, do not break the speech that I speak".
"Do not violate your speech; your speech is the basis: what you beat with power will ruin you. Who will ruin houses will stay under the ruins. Who will rise against men will be attacked by men. Do not try to catch the water in your hands, you will stay exhausted. Do not steal, do not destroy yourself" (Lambert, 1996).
It is notable that the admonitions contain practical, domestic and philosophical as well as legal thoughts and rules, for which reason we may consider them legislative. Shuruppak attached great importance to public opinion and public evaluation. He regarded giving guarantees for another man as a source of dependency and subordination: "Do not guarantee, in order not to stay dependent… The witness of a man is his city".
Separate norms of the "admonitions", as we see, contain direct sanctions: "A thief is a lion, and a slave when caught".
Shuruppak treats heroes with great respect: "We should bow our heads to heroism". He put heroes on the same level as the sun.

Law Code of Shulgi of Ar (Ur), the Legislator and Philosopher of the 22-21 Centuries BC
This law does not contain the principle of punishment "an eye for an eye, a tooth for a tooth"; instead of physical punishments, it prescribes compensation for material damage in the form of fines. This was great progress for its time. It was considered that physical punishments were not only inhumane but also deprived people of the capacity for work and military service, i.e. of the ability to create goods and to participate in the defence of the country.
Article 15: "If a man has cut off another man's limb with a weapon, he is to pay ten shekels of silver".
Having given brief examples of ancient legal norms, we will now specifically present Shahamirian's "Vorogayt Parats". In this treasury of constitutional law, the Shahamirians expressed all their conscious and subconscious patriotism. Arghutyants considered "Vorogayt Parats" a treasure (Arghutyan, n.d.), the political and legal genius of the Armenian nation. In reality, it is a vivid example of the new ideology of the 17th-18th centuries.
The adoption of the idea of a legal state guided by the standards of democracy, legitimacy, representative government and free entrepreneurship is usually attributed to the era of the English and French revolutions, yet these ideas were clearly and vividly formulated in "Vorogayt Parats".
Shahamirian was of the opinion that the failure of Armenian statehood was caused by violations of lawfulness and legal order, as well as by disobedience. "Only laws should be the Armenians' king and rule the Armenian land" (Avagyan, 2002, p. 37), he writes. This is based on the principle that the Armenian nation can be saved by a legislative body elected as the result of joint national will, and by the laws it adopts; those laws should derive from nature and reason, from divine laws, and should act in accordance with the principles of social morality and prosperity.
A question arises: what caused the creation of the project of the Armenian statehood constitution? What made it valuable? What practical importance did it have? In order to answer these questions, let us analyze some episodes of the Armenian liberation movement of the period in question.
The recreation and preservation of Armenian statehood is a philosophical and political theory. It directly influenced the emergence and development of national thinking and of the system of Armenian self-governance. The national ideology, the tasks of state policy related to it and its practical use are closely connected with law and legislation and with the formation and influence of their epistemological and scientific purposes.
The great importance attached to lawfulness is made worthwhile by law itself as the expression of the will of nations, peoples and the population (in Shahamirian's work, the nation, because only Armenians had the right to vote).
In the era in question, Armenia was divided between Ottoman Turkey and Iran. For progressive forces in Armenia and abroad, the second half of the 18th century was the era of the liberation struggle. Shahamirian and his collaborators were concerned with the formation of a political system for the future Armenian state.
They worked out a full project of the struggle for the liberation of Armenia, the main axis of which was the necessity of an alliance between the Georgian King Erakle II and the Karabakh meliks 2.
The program was intended to be implemented with the military and political assistance of Russia. To implement that purpose, Armenian figures worked out two draft treaties between Armenia and Russia, one of them authored by Hovsep Argutinski 3, the other by Shahamirian. The precondition for this programming idea was the Treaty of Georgievsk, to whose signing Hovsep Argutinski contributed: Russia established a protectorate over the Kingdom of Kartli-Kakheti, preserving its self-governance.
________________________
2 There were 5 Karabakh melikdoms, also called the Khamsa melikdoms: "khamsa" is an Arabic word meaning "five". Some nations still use the word in this meaning. Muslims, when saying "khamsa", mean the five members of Muhammad's family. It also denotes Saint Mary's palm of five fingers, and Muslims associate it with Fatima, whose hand had the meaning of a talisman (bringing success). In the case of the meliks, "khamsa" is likewise explained as a palm of five fingers which, when joined and clenched, become a fist.
In 1786 Erakle II granted Shahamirian the title of Prince, and the latter suggested a number of reforms in Georgia, through which the country would soon become stronger. Highlighting the issue of population growth in Georgia, he advised the king to address all Armenians around the world with a proclamation promising benefits and the insurance of life and property.
The famous 19th-century Armenian historian Aleksand Yeritsyan mentioned that Shahamirian suggested to Erakle II that the King's family, following his example, liberate all the serfs. Shahamirian was even ready to contribute the necessary amount of money as a ransom for the villagers belonging to the church (Yeritsyan, 1883).
On 15 October 1787, in a letter addressed to the Georgian King Erakle II, Shahamirian writes: "Remember that the people were not created for you… You were selected by destiny for your people… and your people's prosperity and liberty are your liberty…" Shahamirian also sent Erakle II the coat of arms he had created for the joint state of Armenia and Georgia, designed in gold and diamonds, and asked him to approve it.
________________________
3 Hovsep Argutinski (Hovsep Arghutyan, 1743-1801), archbishop, outstanding representative of the Armenian liberation movement and of the ancient princely family of Arghutyan-Erkaynabazuk. In 1773 he became the Primate of the Diocese of Russian Armenians. He knew Catherine II (Catherine the Great) personally and led an active political correspondence with the Georgian King Erakle II, the Karabakh meliks, Shahamir Shahamirian and other Armenian figures. He actively participated in the resettlement of Crimean Armenians and in the foundation of New Nakhijevan; he was elected Catholicos but was not anointed, because he died suddenly in Tbilisi on the way to Echmiatsin.
Erakle II approved the template of the coat of arms with his signature and seal, and on 4 December 1790 sent it to Shahamirian in Madras, where the King's proclamation was published and distributed in 1,000 printed copies (Nersesyan, 1990, pp. 547-548).
Shahamirian had an active correspondence with the Catholicos of Gandzasar, Simeon Yerevantsi, and the Karabakh meliks. The national liberation movement headed by Shahamirian called for rebellion, convinced that rebellion was the only way. Under Iranian pressure, Shahamirian, together with his associates, was exiled from Georgia. Catholicos Simeon Yerevantsi believed that rebellion was premature and would deepen the revenge and pressure of the despots, leading to grave consequences.
On 15 January 1779, in a letter addressed to the Catholicos of Gandzasar Hovhannes, Shahamirian, following the example of the Bible, suggests: "First of all it is needed to recruit 12 teachers from the clergy and the laity and instruct them to teach literacy to children; second, oblige everybody to send their children to schools… so that the youth, by reading books, can learn the history of its nation and be inspired with the spirit of liberation… it is necessary to compile a book of laws that cares about the needs of society…" and to adopt an obligation to obey them. Armenians have not reached the condition of the Jews, Egyptians or Greeks; they partially have national power, which should be protected from those of weak will or the selfless (Nersesyan, 1990, pp. 135-137, 366-373). Speaking about the national power, Shahamirian meant the Karabakh melikdoms. He had a dream of creating a national state on the territory of the five Artsakh melikdoms, whose economic and military power would in future liberate the whole of Armenia. Inspired by these ideas and standards, the Armenian nation, especially intellectuals, fought for the liberation of Armenia divided between Turkey and Iran. Their legal-political thinking was directed towards building an Armenian state. Shahamirian and his associates, for the first time, integrated and structured the Armenian political and legal-legislative democratic ideology, the level of which has not been exceeded up to now. It is a well-balanced mixture of laws and morality in its ideological sense. "If we want to be liberated and to be the lords of our land… to protect human honour and dignity, to cleanse the mud of the past, to live with a clear conscience, first of all we should build our own state order and work out laws which will coincide with liberty and the interests of our nation…; in that case, people would depend only on the laws adopted by the people, and not on individuals, being governed by laws and not by one individual. Those laws are our lords and our religion, our ruler and our king" (Avagyan, 2002, p. 3).
Shahamirian died in 1797 in India at the age of 74. One year later King Erakle II died. The idea of creating the state, the dream of those outstanding figures, remained a project.
Unfortunately, "Vorogayt Parats" did not become the constitution of an independent Armenian state and remained just a project, a doctrine with a national content and orientation. Nevertheless, it awakened Armenian national legal awareness. The introduction of "Vorogayt Parats" is the highest expression of the patriotic, political, national-liberation ideology. "…The most gifted, the most beautiful and the most special is the Land of Ararat, which, with its highest mountain Masis, was created by the Lord as the king over the earth and all the mountains; as the divine paradise became the foremother of our forefather Adam, so Ararat together with Masis became the land of settlement and a harbour for our forefather Noah. Thus, the Lord gave his blessing to the House of Nakhijevan and the Land of Ararat" (Avagyan, 2002, pp. 31-32). The Roman historian Flavius Josephus (37-100) writes in his work "Antiquities of the Jews": "Noah saw that the world was dry of water, and he stayed yet another seven days, released the animals from his ark, and then he also left the boat with his family… Armenians call this place Nakhijevan, and the people living there to this day show the preserved remnants of the ark" (Flavius, 1996, p. 14). It was the will of God that the Armenian King Abgar, without the preaching of clergy, without prophets and without witnessing miracles, believed the word of the incarnation of the living God's son, Jesus of Nazareth, Christ; and for the first time, in 301, Christianity was adopted in Armenia as the state religion after the miracles of Grigor Lusavoritch (Gregory the Illuminator) Pahlavuni.
"Vorogayt Parats" is the constitution of a democratic, parliamentary republic, based on the fundamental principle that power derives from the people and reports to the people. It rested on the principles of the supremacy of law; the freedom and equality of rights of nations and ethnic groups and the equal protection of their rights; the separation of powers; and freedom of religion. It was prescribed that the legislative and executive powers were elected. Shahamirian referred to the fundamental principles of Armenian national ideology and to the sources of Armenian law that had emerged and been established during previous historical periods: the natural, divine origin of law and legal supremacy, freedom and justice, considering the human being with his natural rights as the basis of legislation.
Shahamirian named the highest legislative body in Armenia the "Armenian House"; the presence of foreign nationals, gentiles and cultists in it was excluded, and their occupation of posts in the executive power was also constitutionally forbidden. According to "Vorogayt Parats", every citizen, regardless of ethnicity, religion, gender, or social and property status, has, within the law, the rights to freedom of speech, to expression of his opinion and conscience, to the choice of place of residence, to leave the state, and other rights.
Like Christians, gentile foreigners could also freely enter Armenia and reside under the protection of the state, paying taxes and dues and accepting the Armenian judicial legislation and procedures that prescribed responsibility for their actions.
The draft constitution also contained articles on military rules (the proclamation of states of emergency), including norms on the manufacture, import and export of weapons.
Humane treatment was prescribed for populated areas, soldiers and populations occupied by the Armenian army. Agreements contradicting natural human rights and laws, as well as those limiting human freedom and humiliating human dignity, which circulated in that historical period, especially in eastern countries, were proclaimed invalid. The 1st Article of the constitution proclaimed the whole territory and its natural wealth the national, inherited wealth of Armenia, and only the Armenian state had the right to dispose of it, according to the constitution. The right to own and sell land belonged only to Armenians who accepted the religion of the church. Foreign nationals and gentiles could use and possess land and resources but could not dispose of or alienate them. According to the same Article, the Armenian borders were considered to lie between the Mediterranean, Black and Caspian seas, on the territories of Mets Hayk (the Kingdom of Armenia) and the Cilician State, which "is the heritage of the Armenian people; not an inch can be added to or taken from this land, and here the Armenian House should always be preserved according to the Armenian constitution". The constitution proclaimed that everyone born on Armenian territory "has the honour to be named Armenian" regardless of his nationality and religion. Speaking and reading Armenian was compulsory (Article 2).
In the draft, labor and education were not only freedoms and rights but obligations, because Armenians were obliged to take part in the economic growth of the state, which was the basis of the people's prosperity. Care for scientists, inventors, teachers and other professionals was considered an obligation of the state. Severe punishments were prescribed for crimes against Armenian statehood and the Armenian Church. It was clearly prescribed that punishment was not only retribution but also had an educational effect.
In the court drafted by Shahamirian there were 24 jurors, and the selection of 12 of them was the procedural right of the accused.
Separate articles directly obliged the Armenian House to take care of and educate illegitimate children and children who had lost their parents, and to take care of disabled people unable to work, elders left without care, and those in need. The collected means were to be spent to strengthen security in Armenia and to organize and develop education and healthcare in the state.
The head of the highest executive body was the minister-president. He is the head of the state, "the first official and the servant of the Armenian people".
Shahamirian demonstrates great respect for the descendants of the Armenian royal house. In particular, if one of the representatives of the Bagratuni royal family, accepting the laws and decisions of the "Armenian House", wished to become a minister, he could be elected to that position for life, whereas ministers were otherwise elected for a three-year period.
The 3rd Article of "Vorogayt Parats" is surprisingly topical: "Everyone, whether male or female… are equally free in their actions; no one has the right to rule over another". This norm of the Constitution is consonant with John Locke's philosophical ideas on human freedom, in particular the provision that the natural human condition is a condition of absolute freedom: "Freedom is behaviour not forbidden by the law: to follow one's own wishes and not to be dependent on some other self-willed person's unstable, indefinite and unknown will" (Lock, 1962, pp. 6-17).
Armenian popular wisdom proclaims: "You are free, but you are not free to disturb others' freedom".
The 5th Article is admirable: everyone living on Armenian land may worship the Lord according to his own tradition and stay true to his belief with the same strength, and no one, whoever he may be, has the right to be an obstacle to him. Here the right to freedom of conscience is defined obviously and clearly, in today's concept and definition.
The 8th Article of the draft is titled "On the right of an Armenian to buy and sell lands". It determines: "Everyone who is Armenian by nationality and a member of the Armenian community, christened and a believer in Christ, can buy land as property and sell it only to a representative or representatives of his nation and his religion".
Everything is said clearly and simply. While all Armenian people had the right to vote, the right to be elected was prescribed only for the male population of Armenian nationality and Armenian Christian religion: "…a respectable and humble man, of Armenian nationality, a worshipper of Christ under the religion of the Armenian Holy Church, and born in Armenia" (Article 14).
It was prescribed that the members of parliament from each voting house should receive half silver monets annually, six thousand silver monets in total, in order to be free of domestic concerns and able to devote their time freely to the implementation of decisions and actions directed at the people's prosperity. For election to any official post, a compulsory requirement was prescribed: "to be an Armenian national and a worshipper of the Armenian Holy Church". The 4th Article of the Constitution, on the Armenian religion, considers an honour the worshipping ritual of the Armenian Holy Church, "faithfully preached to us from patriarchal Vaghrshapat, our highest patriarchal chair, Ejmiatsin's shining Holy Church, by the truthful voice of the Armenian patriarch". An Armenian national who, having been christened, betrayed his belief was condemned to death.
The 126th Article, prescribing the description of money, has a unique content and sense. It is titled "On the form of money" and required that the text "Prosperity to the Armenian people" be written on coins. Everyone coming into contact with the coins, on reading this text, should vividly understand that money should first of all serve Armenian statehood and the prosperity of the Armenian people.
In "Vorogayt Parats" (Article 82) it was prescribed that before a church wedding the spouses should sign a marriage contract, "…so that we can accept the blessing of our Creator Lord by the hand of the clergy of the Armenian Holy Church, in order to grow and breed, to serve our Lord, for the sake of the high and caring Armenian state". If one of the spouses "rejects and does not perform his duties under the contract, he is obliged to pay five thousand monets to the Armenian almshouse". The growth and reproduction of the nation are thus also prescribed as a compulsory provision, as a national and state interest.
The 118th Article obliges parents to buy, at their own expense, one gun and one sword for their children when they turn 18 years old, as their first inheritance.
Throughout the territory of the Armenian state, the people were obliged to maintain two teachers for every 25 houses: one of the teachers to teach children to read and write Armenian, the other to teach military service to male children (Article 101).
The draft Constitution prescribes the requirement not to leave the land undeveloped: every year the lands should be ploughed, sown and fertilized at the proper time to obtain a good harvest. If someone leaves his lands and gardens undeveloped, the ploughing and sowing of the lands are carried out by the state. If after that the owner still does not develop the land, it becomes the property of the Armenian House.
If a person has suffered hooliganism or the theft of his property, the ruler of the region, after verifying the facts, is obliged to compensate the person for the material damage suffered, to take measures to find and punish the offender, and to recover the money paid out (Article 112).
As we see, the state assumes the obligation of compensating the damage caused to a person by crime, because the state undertakes the responsibility to eliminate the consequences of crimes committed against persons on its territory and to recover the damage. It is obvious that, in Shahamirian's view, each state must protect its citizens from criminal offences. Making officials compensate the material damage caused by crime increases their responsibility for solving it, so that, alongside punishing the offender, the amounts paid from state funds can be recovered.
Among the constitutional legal norms we have researched, a partially similar substantive solution is found in the Constitution of Japan, adopted in 1947. Namely, the 17th Article says: "Every person may sue for redress as provided by law from the State or a public entity, in case he has suffered damage through illegal act of any public official." This constitutional provision refers only to cases where the damage has been caused by the illegal act of a public official, and the recovery of damage is implemented through procedural means and judicial decisions.
According to the 3rd Article of the RA Constitution of 6 December 2015, with amendments, the state provides for the protection of the fundamental rights and freedoms of persons and citizens, including the protection of the right to property. The European Court of Human Rights, specifying the scope of state obligations in the protection of the right to property guaranteed by the 1st Article of the European Convention for the Protection of Human Rights and Fundamental Freedoms, has further developed the idea of the state's positive obligations.
This means that the real and effective implementation of the right to property also requires certain positive measures, in particular where there is a link between the effective exercise of a person's right to property and the measures that the person may legitimately expect the state to take.
That expectation also relates to the lawful obligation of the state to protect the right of persons and citizens to property from criminal offences. We underline that the recovery of damage is in all cases implemented under judicial procedures; it should also be mentioned that this often happens after long court bureaucracy and may last for years.
Shahamirian in "Vorogayt Parats" defines a very simplified administrative procedure; in particular, the following is said: "The administrator of the province takes an oath from him (the aggrieved person) on the Holy Testament to be assured that the man has lost his property, and, as administrator of the province, he is obliged to compensate that man for the value of the lost property, then to search for and find the thief or hooligan in order to receive back the amounts paid and to judge him according to the law" (Avagyan, 2002).
All types of documents, both commercial and contractual, regardless of who has signed them, are recognized as invalid if they contradict Armenian legislation and the vital natural functions of a human being. The law also obliges the state to undertake the protection of everyone living on its territory, and if such a person is captured, the Armenian state "is obliged to save him and to return him to his house and family" (Article 138).
Every Armenian resident is granted the right to bring a claim in court against any person without exception, including officials. Penitentiaries and dark prisons should be clean and comfortable, so as not to harm the prisoner's health, and commensurate with the actions of the offender (Article 153).
Shahamirian's legal provisions regarding women are unique. In particular, he says: "Females should not be captured, whether Christian or pagan… female individuals cannot be coerced if they have not committed a crime". And under the 371st Article, servants acting on the order of princes or high officials are forbidden to enter women's rooms, especially married women's rooms, because these rooms are "sacred and sinless, and we want no noise or conflict to disturb our mothers and the women who gave birth to our sons; excluding the cases when a criminal offender or someone condemned to death should be arrested".
As we see, "Vorogayt Parats" represents a new level of Armenian political and constitutional legal thinking and has a unique value in the world's constitutional culture. Patriotism radiates from this work. It was a flight of social-political thinking, seemingly the only possible strong progressive idea of anti-serfdom and republican order in 18th-century Armenia under Turkish and Iranian rule, whose realization was to be desired. Shahamirian considers himself "one of the least worthy and most humble in the Armenian land", and in writing the constitution of his state he expects nothing for himself, "neither power, nor prosperity, nor glory", being guided only "by love towards his own nation and our country".
Hakob Shahamirian died at the age of 29 in Malacca, Malaysia, where he had vast tobacco holdings. On his gravestone is written: "Welcome to you who read the words on my grave; tell me about the freedom of my nation that I always desired, whether someone among us has been elevated as a saviour and a ruler, as I desired forever and fervently in this world…" (Aghjean, 1993, pp. 215-225).

FEATURES OF FUNDAMENTAL RIGHTS IN THE CONTEXT OF THE PHILOSOPHY OF LAW

Lilit KAZANCHIAN
DOI: 10.24234/wisdom.v14i1.323

This study also focuses on the various approaches of well-known jurists to the essence, content and legislative consolidation of the fundamental rights of the individual.
The author comes to the conclusion that in recent decades the philosophy of law (together with the theory of state and law) has taken under its active protection and guardianship the human being with his rights, freedoms and legitimate interests, which have ceased to be a subject of regulation by national legislation alone and have moved onto the international legal platform. Consequently, the government is obligated to guarantee fundamental human rights and freedoms. Hence, the theoretical, methodological and practical analysis of the problems of the individual's legal status, and the elaboration of suggestions for the enhancement of national legislation, is one of the most topical problems of jurisprudence.
Keywords: fundamental human rights and freedoms, the legal status of the individual, legitimate interests, globalization, duties, citizen, democratic state, government.
A democratic, legal and social state is a form of human coexistence with mutually agreed human relations, in which the state and society assume a mutual obligation to help those in need and to influence the distribution of material goods, based on such principles of justice that guarantees of a decent life are created for everyone and rights, freedoms and legitimate interests are protected (Harutyunyan, 2005, pp. 110-112; Yeritsyan, 2007, pp. 106-108). Consequently, the study and clarification of the concepts of human rights and fundamental freedoms, which lie at the core of a person's legal status, are at the heart of the modern philosophy of law (Huymens, 1995). Moreover, it is necessary to conduct a comprehensive study of the essence and content of the rights, freedoms and legitimate interests of the individual in a modern democratic state. The system of rights, freedoms, legitimate interests and obligations that forms the core of a person's legal status, together with the guarantees of their protection, is based on the fundamental principle of values according to which the human being is the highest value in the Republic of Armenia. Moreover, the inalienable dignity of a person is an integral basis of his or her rights and freedoms.
The legal status of a person comprises the combination of the person's rights, freedoms, duties and legitimate interests, which in turn serves as a means of legal regulation of the person's social status (position).
It is noteworthy that some modern legal scholars consider rights, freedoms and obligations the main elements of the legal status of a person, and legitimate interests additional (or derivative) ones (Rideau, 2003, pp. 23-24; Vitruk, 2008, p. 105).
In our opinion, this division has contributed to the belittling of the essence of legitimate interests and of their important role in law, as a result of which this concept remains poorly studied in the legal literature. Based on the above, we propose to consider the rights, freedoms, duties and legitimate interests of the individual as the main elements of the legal status of the individual in the context of the philosophy of law.
Analysis of the Fundamental Rights and Freedoms of the Individual
It is obvious that in states which have taken the path of democracy, the rights and freedoms of the individual are not static and eternal, but constantly changing and developing concepts (Marchenko, 2014, pp. 204-206; Trion, 2012, pp. 105-107). In addition, the basic rights and freedoms of the individual are not granted by the state: they exist not because of formal consolidation, important as that is, but because of the social capabilities of the person arising from the system of social relations. Furthermore, the source of individual rights and freedoms in a democratic society is real social relations, not the will of the legislator.
The conducted research shows that the social capabilities of the individual are the social prerequisites for the formation and regular development of legal rights, freedoms and legitimate interests, and for the ability to use their advantages, as well as for the real content of duties. A person's rights and freedoms are therefore the person's legally enshrined social opportunities to possess certain goods to meet his or her needs. Moreover, the legal rights and freedoms of a person acquire clear boundaries as a result of the state's legal regulation, and a person's violation of these boundaries is considered illegal behaviour. In this, of course, the legislator only takes account of the social opportunities for meeting human needs, which, by being stipulated in the norms of public life, formally acquire the possibility of being called human rights. Moreover, ideas about human rights, penetrating the masses, turn into a powerful material force, and the state faces the need to fix in law the list of human rights determined by historical development, that is, to establish the rights of the citizen as the legal rights of the person.
The idea of human rights also has a substantive basis, which was studied by K. Marx and F. Engels. Considering man as a "result of history" and simultaneously as a subject leading political and civil life, they define the natural rights of the individual as historically formed bourgeois-democratic rights and freedoms, in which the individual and the citizen are private owners (Marx & Engels, 1955, pp. 390-391). Taking this idea as a basis, along with the analysis of the material justification and social content of human rights, many Soviet lawyers, such as I. Farber and G. Malcev (1969), began to distinguish basic human rights from the rights of a citizen (pp. 26-27).
According to P. Nedbaylo (1965), the socio-political preconditions formed in the state and society are crucial to the formation of legally recognized, inalienable rights and freedoms of a person. In this context, L. Voevodin (1997) rightly points out that the real content of human rights is a socially conditioned opportunity, which in its essence is a claim to the possession of certain social benefits (p. 115).
Meanwhile, V. Kartashkin (2018) (Ayvazyan, 2008, p. 12). At the same time, various definitions of and comments on fundamental human rights and freedoms can be accepted only partially and with certain reservations, since they generally do not fully reflect the essence and content of this concept. For example, according to Yu. Troshkin (1998), fundamental rights are only those rights that are enshrined in the Constitution and in the most important human rights documents, define the ideals of humanism in society, limit power and protect people from its arbitrariness (pp. 30-31). This definition is narrow in content, since it does not fully disclose the essence and meaning of fundamental rights.
Moreover, studies have shown that, due to modern political and legal processes, individual rights and freedoms are gradually becoming a standard for the development of society, the establishment of the idea of the rule of law and a stable factor in international legal cooperation.
In the modern world, the category of universal (fundamental) rights of the individual has been formed in the context of the universal equality of people, which has a common, generally accepted and legal meaning for the world community (Loth, 1998, pp. 22-24).
It is obvious that human rights and freedoms have ceased to be solely an object of the domestic policy and practice of the state, and have become a concern of the entire international community.
Nowadays, the scope of individual rights and freedoms is determined not only by the specific characteristics of a particular society but also by the development of the civilization of all mankind, as well as by the degree and level of a given state's integration into the international community. Fundamental rights therefore set a high standard on the international legal plane, below which no state claiming to be democratic, legal and social can descend. It is undeniable that a new phase in the history of human rights began after World War II, when the processes of cooperation and integration of states developed and human rights gained universal recognition through joint international affairs.
Thus, the UN General Assembly adopted the Universal Declaration of Human Rights (10.12.1948), which became the first-ever universal international document listing human rights and fundamental freedoms.
Thus, human rights are basic (fundamental) rights that are universal (extending to all who belong to the biological species Homo sapiens) and egalitarian (all are equal), and that ensure a dignified life and the development of the person in the context of the achievements of modern historical and social progress (Ebzeev, Aybazov, & Krasnoryadtsev, 2006, pp. 54-57). Considering the modern democratic, legal, social state, it becomes obvious that the priority of the internationally recognized norms and principles of human rights law over domestic norms and principles is a categorical imperative of the international community.
At the same time, we agree with the opinion formed in recent years in the legal literature and the philosophy of law that the process of globalization cannot be a reason for the universalization of human rights, because the right to preserve one's native language, culture and customs is the natural, inalienable right of every nation and ethnic group (Vencent, 1989, pp. 49-54).
In our opinion, human rights and freedoms must correspond to the needs of a particular society and can have multiple forms of expression.
H. Behruz and M. Monshipouri rightly pointed out that only those individual rights can be recognized as universal that correspond to the social problems of the society and take into account its cultural characteristics, religious traditions and beliefs, the accumulated experience of previous generations, and the moral principles of society (Behruz, 2006, pp. 20-22; Monshipouri, 1994). Consequently, there are objective grounds both for doubts about and for opposition to the universal nature of fundamental human rights.
First of all, there are regional, civilizational, and cultural differences in which people are born, raised, act, and think (Islamic, Jewish, etc.).
Second, there is a significant difference in the social conditions in which people live in different countries, regions and continents.
Third, there is mutual disrespect for national and religious values in immigration processes. It is hard to agree with the opinion of several modern researchers regarding the list of human rights and freedoms enshrined in a number of declarations on fundamental human rights and freedoms, including the Universal Declaration of Human Rights (1948) (Mutua, 2002, pp. 82-83; Mattelman, 1996, p. 110). It is no coincidence that regional acts such as the American Convention on Human Rights (1969), the African Charter on Human and Peoples' Rights (1981) and the Arab Charter on Human Rights (1994) are anchored not only in the Universal Declaration of Human Rights but also in other human rights acts. The organizations adopting those documents take cultural characteristics into consideration and do not exclude other interpretations of international human rights norms. In fact, in all cases there are even fundamental differences between the principles proclaimed in international legal and regional instruments and reality: for example, the status of women in Islamic countries, or of the "untouchables" in India.
Progress, therefore, is realized in any cultural civilization through the gradual convergence of the perception and implementation of the fundamental principles of human rights; it is, of course, facilitated by the globalization of the economy and of law, by immigration, by the exchange of cultural values, and by the solution of global problems related to drug trafficking, the fight against international terrorism, and natural and man-made disasters. In our opinion, there should be progress on the path of social and humanistic development, not the destruction of traditional values and a fall back into prehistoric society.
On this issue, E. Lukasheva (2006) noted that the artificial acceleration of the adoption of international human rights norms that contradict the political, customary and cultural ideas of individual countries and regions is impermissible.
Therefore, we consider it necessary to respect a different world order and not try to change it through the universal, forcible implementation of democratic human rights standards. It is important to have a constant dialogue of civilizations, a gradual and long-lasting process of perception of, and adaptation to, generally accepted norms and values, which opens the way to preserving the diversity and richness of the world.
It is known that those rights and freedoms that are most vital for the individual, society and government are enshrined in the Constitution and are called "basic rights and freedoms".
For example, Chapter 2 of the Constitution of the Republic of Armenia, "Basic Rights and Freedoms of the Human Being and the Citizen", includes such fundamental rights and freedoms of persons living in the territory of the RA as the right to life; the right to physical and mental integrity; the right to inviolability of the home; freedom of thought, conscience and religion; freedom of expression of opinion; the right to judicial protection; the right to apply to international bodies for the protection of human rights; and so on. In addition, in Armenia, rights and freedoms of the human being and the citizen are also regulated by other branches of law; these are not considered fundamental in their content and therefore do not receive constitutional protection. Furthermore, the rights enshrined in current legislation specify, supplement and develop the constitutional rights on which they are based, and thus do not diminish the significance of constitutional rights and freedoms or their direct effect. For example, on the basis of these constitutional norms, the Criminal Code of the RA contains many norms providing sanctions for violations of the basic rights and freedoms of citizens. Moreover, according to Article 81 of the Constitution of the RA, the practice of bodies operating on the basis of international human rights treaties ratified by the Republic of Armenia shall be taken into account when interpreting the provisions on basic rights and freedoms enshrined in the Constitution.
This leads to the conclusion that branch rights supplement constitutional rights also because they are meant to cover fully a person's legal capacity in all areas of social relations, developing in parallel with, and independently of, constitutional rights. In other words, the ratio is not one of whole and part, since both the basic rights and the rights established by the norms of the branches of law are independent. The correlation between these two groups of rights is that basic rights determine the content and main role not only of a particular right but of the entire system of human rights. Basic rights are rights that belong not to a particular group of people but to every person. Therefore, we can say with confidence that basic rights are not only constitutional but also subjective rights.
Conclusions
Summing up the results of the explored issues and considering fundamental rights as a dynamic phenomenon of the philosophy of law, we conclude that the rights of the individual must be considered on the basis of the combination of the social conditions of a given society and state and the legal norms built upon them. The social opportunities of a person, enshrined by the state in the Constitution and laws, become legal claims subject to the protection (guarantee) of the state. Moreover, human rights are an opportunity to determine the extent of one's own behaviour. As a result of our research, we have come to the simple conclusion that even if a specific fundamental human right is not enshrined in the Constitution of a state, it must still be recognized in that state, regardless of its constitutional consolidation.
It is obvious that basic human rights are the inalienable, socially necessary opportunities, guaranteed by the government, to possess freely, consciously and responsibly the vital material and spiritual goods.
Overexpression of α3, β3 and γ2 chains of laminin-332 is associated with poor prognosis in pancreatic ductal adenocarcinoma
Pancreatic ductal adenocarcinoma (PDA) is a worldwide health problem. Early diagnosis and assessment may enhance the quality of life and survival of patients. The present study investigated the potential correlations between the gene and protein expression of laminin-332 (LM-332 or laminin-5) and clinicopathological factors, and evaluated its influence on the survival of patients with PDA. The expression of LM-332 subunit mRNAs in pancreatic carcinoma specimens from 37 patients was investigated by reverse transcription-quantitative polymerase chain reaction (RT-qPCR) analysis. Using immunohistochemical methods, the protein expression of the three chains of LM-332 (LNα3, LNβ3 and LNγ2) was determined in 96 pancreatic carcinoma specimens for association analysis with clinicopathological characteristics from patient data. The prognostic significance of the three mRNAs was validated in The Cancer Genome Atlas (TCGA) datasets. RT-qPCR results indicated that the overall relative values of LNα3 and LNγ2 mRNAs were increased in pancreatic carcinoma compared with the control. In immunostaining analyses, LNα3 and LNγ2 expression was observed in all tumor tissues from the 96 patient samples. The expression levels of LNα3, LNβ3 and LNγ2 were associated with each other. LNα3 and LNγ2 positivity was significantly associated with differentiation, depth of invasion and advanced stage (P<0.05). The samples were classified into three groups, basement membrane (B) type, cytoplasmic (C) type and mixed (M) type, according to their LNγ2 immunohistochemical expression patterns. The B type correlated significantly with differentiation (P=0.010) and the M type was significantly associated with hepatic metastasis (P=0.031). Patients with B-type LNγ2 demonstrated significantly better outcomes than patients with the C or M type (P=0.012 and P=0.003, respectively). Overexpression of the α3, β3 and γ2 chains of LM-332 may serve an important role in the progression and prognosis of PDA.
Introduction
Laminins are major components of the extracellular matrix (ECM). They localize to the basement membrane, and play essential roles in cell adhesion, differentiation, migration, and mechanosignal transduction. The laminin molecule is a cruciform heterotrimer assembled from α, β, and γ glycoprotein chains, encoded in humans by five α, three β, and three γ genes (1). To date, 16 distinct laminin isoforms have been identified in mammals (2).
Laminin-332 (LM-332) is a major member of the laminin family, consisting of the LNα3, β3 and γ2 chains, encoded by the LAMA3, LAMB3 and LAMC2 genes, respectively. The three chains are expressed from the three genes separately, and the subsequent formation of the heterotrimer is now considered an essential step in the production of LM-332 (3). Unlike the α3 and β3 chains, the γ2 chain is unique to the LM-332 trimer (4). LM-332 has been demonstrated to facilitate diverse actions in cultured cells, including roles in adhesion, scattering and migration, polarity, proliferation and apoptosis, through focal adhesions and hemidesmosomes formed via interactions of α3β1 integrin and α6β4 integrin (5,6). Moreover, these integrins also interact with molecules involved in important signal transduction pathways (7,8), which have important roles in tumor invasion and metastasis (9,10). These properties of LM-332 suggest that it may play an important role in carcinogenesis.
Although there are only a few reports concerning the expression of LNα3 and LNβ3 in human cancers, LNγ2 has been studied previously. Several immunohistochemistry investigations have indicated that LNγ2 localizes at the leading edge of invading carcinomas and that its expression correlates positively with invasiveness and poor patient survival (11). Shinichiro (12) reported that cytoplasmic expression of LNγ2 indicates a high invasive potential of tumors and is correlated with distant metastasis, especially hepatic metastasis, and with a poor prognosis. However, coexpression of the α3/β3/γ2 chains of LM-332 has not been reported in patients with pancreatic ductal adenocarcinoma (PDA). Accordingly, further study is required to identify the expression of the three subunits of LM-332 in PDA.
In a previous investigation, we demonstrated through immunostaining that LNβ3 was expressed in patients with PDA and was related to differentiation, advanced stage and survival time (13). In the present study, we expanded the scope of this exploration to include the two other chains (LNα3 and LNγ2). Firstly, we analyzed the mRNA expression of the LAMA3 and LAMC2 genes in pairs of pancreatic carcinoma and non-tumor pancreatic tissues from 37 patients. Secondly, we immunohistochemically examined the expression of LNα3 and LNγ2 in 96 tissue samples of PDA and assessed the potential relationships among the three subunits. Finally, we compared the expression levels of the three subunits and assessed their potential relationships with clinical and pathological features in patients with PDA after surgery.
Patients and methods
Patients and sample collection. Fresh specimens of PDA and non-tumor pancreatic tissues were obtained from patients (n=37) undergoing surgical resection at the Department of Hepatobiliary and Pancreatic Surgery, The First Affiliated Hospital, College of Medicine, Zhejiang University, between February 2010 and March 2013. These experiments were approved by our institutional review board. Tissue specimens were snap-frozen in liquid nitrogen and stored at -80˚C.
Formalin-fixed, paraffin wax-embedded sections of 96 resected specimens were used for immunohistochemical staining. All 96 paraffin wax blocks were confirmed to contain tumor tissue by two pathologists; among them, 90 included adjacent normal pancreatic ductal tissue and 6 did not.
The following clinical data were collected: Patient age, gender, and outcome; the presence/absence of metastasis; and tumor location, size, margin status, TNM stage, degree of differentiation, and invasion degree and location (bile duct/duodenal, lymph node, serosa, portal vein, hepatic, perineural, vascular). No particular procedure was used to select the cases.
Patients were informed about the project and gave their written consent to participate in the study.
Follow up. Overall survival was measured from the time of surgery to the time of death or the last follow-up visit. Dates of death were determined from patient hospital records or follow-up telephone calls. The median survival time was 7.5 months, and the longest survival time was 35 months at the last follow-up visit.
Immunohistochemistry. Formalin-fixed, paraffin wax-embedded tumor tissues from 96 patients were sectioned (4 µm thick), mounted on poly-L-lysine-coated glass slides, and allowed to dry overnight at 65˚C. Briefly, slides were deparaffinized in two xylene washes, transferred through three changes of 95% ethanol, and then transferred to water. For antigen retrieval (α3, γ2), the slides were boiled in a pressure cooker containing 0.01 mol/l sodium citrate (pH 6.0) at maximum heat for 3 min and then cooled over 20 min to room temperature. Endogenous peroxidase activity was blocked in 1.5% methanol/hydrogen peroxide for 8 min at room temperature, after which the slides were washed three times in PBS for 2 min each. The slides were then incubated with the primary antibody, anti-α3 (cat. no. sc-20143; Santa Cruz Biotechnology, Inc., Dallas, TX, USA) at 1:100 dilution or anti-γ2 (cat. no. sc-25341; Santa Cruz Biotechnology, Inc.) at 1:250 dilution, overnight at 4˚C. After washing three times in PBS for 2 min each, the bound primary antibody was detected using a ready-to-use secondary antibody kit (cat. no. K5007; Dako; Agilent Technologies, Inc., Santa Clara, CA, USA) for 30 min at room temperature; the slides were then washed three times in PBS for 2 min each, and the chromogenic substrate 3,3'-diaminobenzidine tetrahydrochloride (DAB) was added. The specimens were counterstained with hematoxylin, mounted, and examined by light microscopy.
The percentage of positive tumor cells was scored as follows: 0, ≤5% tumor cells; 1, 6-25% tumor cells; 2, 26-50% tumor cells; and 3, >51% tumor cells. The scoring criteria for staining intensity were as follows: 0, no staining; 1, weak staining (light yellow); 2, moderate staining (yellow/brown); and 3, strong staining (brown). The staining index was evaluated as the product of the percentage of positive tumor cells and staining intensity scores. Using this method, we evaluated the expression of LNα3 and LNγ2 in the tumor and adjacent normal pancreatic ductal tissue by determining the staining index, with possible scores of 0, 1, 2, 3, 4, 6 or 9: 0-1 was considered negative (-), 2-3 weakly positive (+), 4-6 moderately positive (++), and >6 strongly positive (+++) (15). In the statistical analyses, an optimal cutoff value was applied as follows: a staining index score of >6 was used to indicate tumors with high LNα3 and LNγ2 expression, and a staining index score of ≤6 was used to define low LNα3 and LNγ2 expression.
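To make the arithmetic of the scoring system concrete, the short sketch below computes the staining index as the product of the percentage score and the intensity score and maps it onto the categorical scale described above. This is a minimal illustration in Python; the function names and example values are ours, not part of the study protocol.

```python
# Minimal sketch of the staining-index calculation described above.
# Function names and example values are illustrative, not from the study.

def percentage_score(pct_positive: float) -> int:
    """Score the percentage of positive tumor cells on the 0-3 scale."""
    if pct_positive <= 5:
        return 0
    if pct_positive <= 25:
        return 1
    if pct_positive <= 50:
        return 2
    return 3

def staining_index(pct_positive: float, intensity: int) -> int:
    """Staining index = percentage score x intensity score (each 0-3),
    giving the possible values 0, 1, 2, 3, 4, 6 and 9."""
    return percentage_score(pct_positive) * intensity

def category(index: int) -> str:
    """Map the index onto the -, +, ++, +++ scale used in the text."""
    if index <= 1:
        return "-"      # negative
    if index <= 3:
        return "+"      # weakly positive
    if index <= 6:
        return "++"     # moderately positive
    return "+++"        # strongly positive

# Example: 60% positive cells with strong (3) intensity gives index 9,
# i.e. "+++", above the >6 cutoff used to call high expression.
idx = staining_index(60, 3)
print(idx, category(idx), "high" if idx > 6 else "low")
```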
According to the locations of LNγ2 immunohistochemical expression patterns, samples were classified into three groups, as follows: i) basement membrane (B) type: The LNγ2 was predominantly present in the basement membrane ECM and showed a continuous linear structure (>10% of ECM stained LNγ2-positive); ii) cytoplasmic (C) type: The LNγ2 was present in the cytoplasm of cancer cells (>10% of cytoplasm of cancer cells stained LNγ2-positive); and iii) mixed (M) type: The LNγ2 was present in the ECM and cytoplasm of cancer cells (>10% of ECM and cytoplasm of cancer cells stained LNγ2-positive).
LAMA3, LAMB3 and LAMC2 mRNA prognosis analysis of TCGA. The prognostic value of the three mRNAs was validated in the TCGA datasets. TCGA pancreatic cancer mRNA data and clinical data (level 3) of the corresponding patients (178 tumor tissues) were downloaded from the TCGA Data portal. The expression analyses were carried out using BRB-ArrayTools (version 4.5; National Cancer Institute, Bethesda, MD, USA) (16). We identified genes whose expression was significantly related to patient survival using the survival analysis function of BRB-ArrayTools, based on univariate proportional hazards models. We divided gene expression levels into low or high using the median value as the cutoff.
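The TCGA validation described above amounts to a median-split survival analysis. The sketch below shows how such an analysis could be reproduced with the lifelines package; the input file and column names (expression, time, event) are hypothetical placeholders, not the actual TCGA field names or BRB-ArrayTools output.

```python
# Hypothetical sketch of a median-split survival analysis (lifelines).
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("tcga_paad_expression_clinical.csv")  # hypothetical file

# Dichotomize expression at the median, as described in the text.
df["group"] = (df["expression"] > df["expression"].median()).map(
    {True: "high", False: "low"})

high = df[df["group"] == "high"]
low = df[df["group"] == "low"]

# Kaplan-Meier curves for the two groups.
kmf = KaplanMeierFitter()
for name, g in [("high", high), ("low", low)]:
    kmf.fit(g["time"], event_observed=g["event"], label=name)

# Log-rank test for the difference between the curves.
result = logrank_test(high["time"], low["time"],
                      event_observed_A=high["event"],
                      event_observed_B=low["event"])
print(result.p_value)
```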
Statistical analysis. All statistical analyses were performed using SPSS software (version 21.0; SPSS, Inc., Chicago, IL, USA). Differences in the relative values of the three genes between the pancreatic carcinoma specimens and non-tumor pancreatic tissues were assessed using paired-sample t-tests. The relationship between the immunohistochemical expression of the three chains in the cancer tissues and clinicopathological characteristics was analyzed using a χ² (two-tailed) test or Fisher's exact test. Furthermore, the Kaplan-Meier method with a log-rank analysis was used to assess the correlation between expression levels of the three protein chains and survival rate. The Cox proportional hazards regression model was used for multivariate analyses. P<0.05 was considered to indicate a statistically significant difference. P-values between 0.05 and 0.10 were considered to indicate a trend towards an association.
The LNβ3 data are from results of our previous study using the same samples.
Results
mRNA expression of LAMA3 and LAMC2 between pancreatic adenocarcinoma and non-tumor pancreatic tissues. In this investigation, 37 pairs of primary pancreatic adenocarcinoma and corresponding non-tumor pancreatic tissues were chosen randomly for mRNA analysis by QRT-PCR. The relative values of LNα3 and LNγ2 mRNA showed differential expression between pancreatic carcinoma and non-tumor pancreatic tissues: 1.560±1.511 and 0.996±1.112 in the former, and 2.701±2.863 and 1.592±1.745 in the latter. As with LAMB3, although the overall expression levels of LAMA3 and LAMC2 were increased compared with non-tumor tissues, some samples showed loss of expression or downregulation, so no statistically significant difference was found (P=0.089 and P=0.054, respectively).
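For readers unfamiliar with the paired design, the comparison above corresponds to a paired-sample t-test over the 37 matched tumor/non-tumor pairs. A minimal sketch with SciPy follows; the values generated here are illustrative only, since the study's raw measurements are not reproduced in this article.

```python
# Sketch of the paired-sample t-test used to compare relative mRNA values
# between tumor and matched non-tumor tissues. The data are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Illustrative stand-ins for 37 paired relative-expression values.
tumor = rng.lognormal(mean=0.3, sigma=0.8, size=37)
normal = rng.lognormal(mean=0.0, sigma=0.8, size=37)

t_stat, p_value = stats.ttest_rel(tumor, normal)  # pairs must share order
print(f"t = {t_stat:.3f}, P = {p_value:.3f}")
```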
Overexpression of LNα3 and LNγ2 in PDA. The expression of LNβ3 in PDA, as assessed by immunohistochemistry, was observed in 83 of 96 (86.5%) cases in our previous study (13). In the present study, staining for LNα3 and LNγ2 in normal pancreatic ducts was negative, weakly positive, or moderately positive, whereas strong staining for (high expression of) LNα3 and LNγ2 was not observed in normal pancreatic ducts (Table I). Although the staining intensity varied, expression of LNα3 and LNγ2 was found in all tumor tissues. Strong staining for (high expression of) LNα3 was observed in 65 of 96 (67.7%) cases, and strong staining for (high expression of) LNγ2 was observed in 49 (51.0%) patients. Because six cases lacked adjacent normal pancreatic ductal tissue, expression of LNα3 and LNγ2 was not assessed in the adjacent tissue of those cases. Fig. 1 shows the expression of LNα3 in PDA. In normal pancreatic ducts, staining for LNα3 was negative. In carcinoma tissues, staining was found predominantly in the cytoplasm of cancer cells and at the invasive front; budding cancer cells often showed more intense cytoplasmic staining. The expression of LNα3 increased with the degree of differentiation. The cytoplasmic immunoreactivity of adenocarcinoma with squamous metaplasia was more intense than that in areas of squamous metaplasia. The immunoreactivity was predominantly at the edge of cancer nests and weak in the center of cancer nests. In the ECM of carcinoma tissues, LNα3 expression, when present around tumor cells, often showed a discontinuous staining pattern with a floccular or lamellar structure. Fig. 2 shows the immunohistochemical expression of LNγ2 in PDA. In normal pancreatic ducts, LNγ2 staining was negative. In carcinoma tissues, LNγ2 was overexpressed in a pattern similar to that of LNα3 and LNβ3. Overall, 31 (32.3%) cases showed low expression and 65 (67.7%) showed high expression of LNα3, 13 (13.5%) showed low expression and 83 (86.5%) showed high expression of LNβ3, and 47 (49.0%) showed low expression and 49 (51.0%) showed high expression of LNγ2 (Table I). The expression levels of LNα3, LNβ3, and LNγ2 were significantly associated with each other (Table II).
Association among LNα3 and LNγ2 expression and clinicopathological characteristics. Based on the staining intensity of LNα3 and LNγ2 in the 96 pancreatic ductal carcinoma samples, the associations with the clinical data detailed above were examined (Table III). LNα3 positivity was significantly associated with tumor differentiation, depth of invasion, and advanced stage (P<0.05). LNγ2 positivity was significantly correlated with differentiation, invasion into the serosa, depth of invasion, and TNM stage (P<0.05). Cases with LNα3 positivity showed a tendency toward more frequent serosal invasion than those negative for LNα3 (P=0.088).
Association between LNγ2 expression patterns and clinicopathological characteristics. Table III also shows the associations between LNγ2 expression patterns and clinicopathological characteristics. Only the B-type pattern correlated significantly with differentiation (P=0.010): enhanced LNγ2 expression in the basement membrane was significantly associated with increasing differentiation, whereas no significant differences in histology were observed for the C and M types. In addition, only the M-type pattern was significantly associated with hepatic metastasis (P=0.031); hepatic metastasis was more frequently found in this type, whereas no significant associations with hepatic metastasis were observed for the C and B type patterns.
Survival. The median survival time was 7.911 vs. 18.434 months with strong vs. weak LNα3 expression by immunohistochemistry, respectively (Gehan test score, u=4.941, P=0.026; Table IV). The 1-year survival rate was lower when LNα3 was highly expressed (21 vs. 57%, respectively). Patient outcomes for those with high expression were significantly worse than for those with low expression using the Kaplan-Meier method with log-rank analysis (P=0.008; Fig. 3A).
In our previous study, patient outcomes for those with high LNβ3 expression were likewise significantly worse than for those with low expression using the Kaplan-Meier method with log-rank analysis (13).
The median survival time was 7.234 vs. 18.961 months with strong vs. weak LNγ2 expression by immunohistochemistry, respectively (Gehan test score, u=8.248, P=0.004; Table IV). The 1-year survival rate was lower when LNγ2 was highly expressed (14 vs. 60%, respectively). Patient outcomes for those with high expression were significantly worse than for those with low expression using the Kaplan-Meier method with log-rank analysis (P<0.001; Fig. 3B).
The median survival time was 7.044 vs. 19.373 months when all three subunits were highly expressed vs. other expression patterns, respectively (Gehan test score, u=9.996, P=0.002; Table IV). The 1-year survival rate was lower when all three subunits were highly expressed (11 vs. 61%, respectively). Patient outcomes for those with high expression of all three subunits were significantly worse than for those with other expression patterns using the Kaplan-Meier method with log-rank analysis (P<0.001; Fig. 3C).
The median survival time differed among the three expression patterns of LNγ2 (B type, 34.000 months; C type, 10.540 months; M type, 6.271 months). The 1-year survival rate also varied (B type, 70%; C type, 32%; M type, 9%). Patients with the B-type pattern showed better outcomes than patients with the C or M types (Gehan test score, u=4.059 and 6.247, P=0.044 and 0.012, respectively). Using the Kaplan-Meier method with a log-rank analysis, outcomes were significantly better for those with the B-type pattern than for those with the C or M type (P=0.012 and P=0.003, respectively; Fig. 3D).
Consistent with our results, the prognostic value of LAMA3 and LAMB3 in pancreatic cancer was verified in the Cancer Genome Atlas (TCGA). The results demonstrated that high mRNA expression of LAMA3 and LAMB3 was correlated with poorer overall survival (P=0.001 and P=0.002; Fig. 4A and B, respectively) in 178 tumor patients. For LAMC2, high mRNA expression was also correlated with poorer overall survival, but not significantly, in the TCGA pancreatic cancer datasets (P=0.181; Fig. 4C).
In univariate analyses, we identified the nine most influential prognostic factors in patients with pancreatic adenocarcinoma (P≤0.05): tumor location, duodenal invasion, depth of invasion, metastasis, TNM stage, LNα3/β3/γ2 protein expression levels, and LNγ2 expression pattern. These nine factors were then entered into a multivariate model; however, none of them remained a significant independent predictor in patients with pancreatic cancer (Table V).
Discussion
Co-expression of the α3, β3, and γ2 subunits of LM332 in human cancers has rarely been reported previously, especially in PDA. Generally, tumors derived from tissues that normally express LM-332 might show high expression levels of LM-332, such as cutaneous, esophageal, thyroid, and colon carcinomas (17-19). However, LM-332 expression is also generally decreased in some tumors, such as advanced breast and prostate cancers (20,21).
The mechanism of downregulation of the laminin-5-encoding genes (LAMA3, LAMB3, and LAMC2) was not clearly understood until recently. Several studies showed that expression of the laminin-5-encoding genes was partially lost in lung, breast, prostate, and bladder cancers, and that one or more of the genes were methylated in cancer cell lines and tumors, with significant associations between the two (22-25). In those studies, subgroups with a high Gleason score, a high preoperative serum prostate-specific antigen level, and an advanced stage had significantly higher methylation frequencies for LAMA3 than subgroups with low values. In addition, LAMA3 promoter methylation frequency in breast tumors was associated with increased tumor stage and tumor size. In the present study, increased expression levels of LAMA3, LAMB3, and LAMC2 were observed in most pancreatic adenocarcinoma tissues compared with non-tumor tissues (based on QRT-PCR), while some tissues showed loss of expression or downregulation. Further research is needed to validate whether loss of LAMB3 expression is associated with promoter methylation and correlates with clinicopathological features of poor prognosis in pancreatic adenocarcinoma.
Several previous immunohistochemical studies (3,6,11,12) that focused on the expression of LNγ2 and the LNβ3/γ2 heterodimer of LM-332 in human cancer revealed that the β3 and γ2 chains are assembled into a β3γ2 heterodimer before forming the α3β3γ2 heterotrimer with the α3 subunit. Co-expression of LNβ3 and LNγ2 has also been detected in hepatocellular carcinoma, squamous cell carcinoma of the tongue, colorectal carcinoma, basal cell carcinoma of the skin, biliary cancer, and gastric carcinoma (3,6,19,26). In biliary cancer, high LNγ2 positivity was significantly associated with worse differentiation, deeper invasion (into the serosa), and more advanced stage, while an LNβ3 invasive front-dominant pattern was significantly associated with worse differentiation and more advanced stage (6). In human gastric cancer cell lines, LNγ2 and LNβ3 are co-expressed at the protein level, and this co-expression is significantly associated with deeper invasion and more advanced tumor stage (3). Our results are consistent with these previous findings: the expression of the three subunits of LM332 is increased and plays a substantial role in the progression and prognosis of PDA.
We previously reported staining for LNβ3 in all patients with PDA and found that it was related to worse differentiation, more advanced stage, and shorter survival time (13). In the current study, positivity for LNα3 and LNγ2 was significantly associated with worse differentiation, deeper invasion, more advanced stage, and shorter survival time, and the expression level of LNγ2 was also correlated with invasion into the serosa. Moreover, the expression levels of LNα3, LNβ3, and LNγ2 were significantly associated with each other. Survival outcomes were significantly worse for patients with high expression of all three subunits than for those with other expression patterns. These results suggest that the three genes of LM332 undergo transcription through a related mechanism and might play an important role in the progression and prognosis of PDA.
The cytoplasmic expression of the three subunits was elevated in all 96 adenocarcinoma tissues and was often more intense in areas of the invasive front, cancer cell budding, or poor differentiation, suggesting that accumulation of the three subunits of LM332 may contribute to a more aggressive phenotype of carcinoma cells. Similar expression of LNγ2 protein in cancer tissue has also previously been reported (27,28).
In the nests of adenosquamous carcinomas, cytoplasmic staining of the three subunits was often more intense at the invasive front and weak or absent at the center. In esophageal squamous cell carcinoma and lung squamous cell carcinoma, the expression of LNγ2 was strong in cords or small nests of poorly differentiated cells and weak or absent in larger nests or large sheets of well-differentiated cells, indicating that LNγ2 expression is associated with worse differentiation (29-31). In the present study, high expression of the three subunits was associated with worse differentiation not only in squamous carcinomas but also in adenocarcinomas.
Laminin expression in the tumor stroma differs with the type of cancer tissue. In adenomas, staining for LM332 subunits is continuous and even enhanced (32). In carcinomas, LM332 expression commonly displays a more disrupted, fragmented pattern, especially in invasive areas (33-35). To date, there are limited reports concerning the association between LNγ2 expression patterns and prognosis. Ito et al (29) classified the expression patterns of LNγ2 in esophageal cancer into two types: E type, with staining of the ECM such as the basement membrane and matrix, and C type, with cytoplasmic staining of cancer cells; the C-type pattern was associated with unfavorable outcomes. Masuda et al (30) described three types in lung squamous cell carcinoma: B type, in which LNγ2 was present in the basement membrane; C type, in which it was present in the intracellular matrix; and F type, in which it was present in the cytoplasm and in part of the peripheral nest; only the F type was associated with a poor prognosis.
To the best of our knowledge, there is no previous report on expression patterns of LNγ2 being correlated with the prognosis of PDA. Similar to Masuda et al (30), we classified LNγ2 expression in PDA into B, C, and M types. Our results indicated that, in well-differentiated adenocarcinomas, most of the basement membrane around the ducts stained for LNγ2 as a continuous linear structure. The C and M types showed no significant difference in tumor differentiation, while a significant difference in hepatic metastasis was observed between the M type and the other types.
In the survival analysis, the outcome of those with the B-type pattern was significantly better than that of those with the C or M type. The results demonstrate that the basement membrane structure in well-differentiated adenocarcinoma was maintained, and that the continuous structure hindered the invasion and metastasis of tumor cells, whereas the basement membrane structure in poorly differentiated adenocarcinoma was disrupted and was associated with poor prognosis in patients with PDA.
[Figure 3. Correlation of LAMA3, LAMC2, the three LN chains, and the three patterns of LAMC2 immunohistochemical expression with survival in pancreatic cancer patients. (A) Kaplan-Meier plots of overall survival by LAMA3 immunohistochemical expression (median cutoff), (B) by LAMC2, (C) by the three LN chains, and (D) by the three LAMC2 patterns. P-values were calculated using the log-rank test. The LNα3 (laminin α3) and γ2 (laminin γ2) chains are encoded by the LAMA3 and LAMC2 genes, respectively.]
Laminins are essential components of the ECM, localized to the epithelial basement membrane. The interactions between tumor cells and laminins in tumor tissue are complex. The expression of laminins in tumor and endothelial cells is upregulated, and laminins stimulate the surrounding stromal cells to express matrix metalloproteinases (MMPs), promoting invasive growth of tumor cells by degrading surrounding ECM barriers and allowing new vascular budding (36). Oka et al (6) suggested that the laminins of the basement membrane in tumor tissue are degraded by MMPs secreted by tumor cells or from the ECM, resulting in accumulation of LNγ2 and LNβ3 at the invasive front, which may play a direct role in tumor invasion processes. Tani et al (37) reported that laminin-5 was synthesized and deposited in the basement membrane in pancreatic carcinomas; invading cells adhere to this newly produced basement membrane and migrate over it.
Based on our results, we suggest that the increased synthesis of the three subunits of LM332 results in their deposition at the basement membrane and in the tumor stroma. The basement membrane in poorly differentiated pancreatic cancer becomes degraded by proteases and displays discontinuities or holes, which could promote the migration and/or invasion of pancreatic cancer cells via an interaction with α3β1 integrin and/or α6β4 integrin. In contrast, in well-differentiated adenocarcinoma the basement membrane showed a continuous linear structure, which may prevent pancreatic cancer cell migration and/or infiltration. Further studies are needed to assess this hypothesis.
In conclusion, the increased expression of the three subunits of LM332 might be a clinically useful survival indicator in PDA. Considering the important role of the three subunits in disease progression, they may provide a new molecular target of therapy for pancreatic adenocarcinoma patients.
Funding
The present study was funded by Projects of Science and Technology Plan of JinHua of Zhejiang Province (grant no. 2015-3-005).
Availability of data and materials
All data generated or analyzed during this study are included in the published article.
Authors' contributions
JC and SAY contributed to the conceptualization and design of the study; JC drafted and critically revised the work; XYZ and DKZ performed the experiments. HZ, XML, JSL and XKW acquired, analyzed and interpreted the data. All authors read and approved the final manuscript.
Ethics approval and consent to participate
All study participants provided written informed consent to participate in the study. The study was approved by the Ethics Committee of the First Affiliated Hospital, College of Medicine, Zhejiang University.
Consent for publication
All study participants provided written informed consent for the publication of their data.
COntrolling NUTritional Status (CONUT) as Predictive Score of Hospital Length of Stay (LOS) and Mortality: A Prospective Cohort Study in an Internal Medicine and Gastroenterology Unit in Italy
Background: Hospital malnutrition affects nearly 30% of patients in medical wards and correlates with worse outcomes. An early assessment is necessary to stratify the risk of short-term outcomes and mortality. The predictive role of the COntrolling NUTritional status (CONUT) score in this context has not yet been elucidated in Western countries. We aimed to test CONUT at admission as a predictive score of hospital outcomes in an Internal Medicine and Gastroenterology Department of an Italian Tertiary Care University hospital. Methods: We prospectively enrolled patients admitted to our center, stratifying them into the four CONUT classes (normal = 0–1; mild = 2–4; moderate = 5–8; severe = 9–12 points) according to serum albumin (g/dL), total lymphocyte count (/mm3), and total cholesterol (mg/dL); the primary outcome measure was length of stay (LOS) and the secondary one was in-hospital mortality. Results: Out of a total of 203 patients enrolled, 44 (21.7%) patients had a normal status (0–1), 66 (32.5%) had a mild impairment (2–4), 68 (33.5%) had a moderate impairment (5–8), and 25 (12.3%) a severe impairment (9–12). The mean LOS was 8.24 ± 5.75 days; nine patients died. A moderate-severe CONUT correlated with a higher LOS in the univariate [HR 1.86 (95% CI 1.39–3.47); p < 0.0001] and multivariate analysis [HR 1.52 (95% CI 1.10–2.09); p = 0.01]. The CONUT score was also a predictor of mortality, with an AUC of 0.831 (95% CI 0.680–0.982) and an optimal cut-off at 8.5 points. Nutritional supplementation within 48 h from admission correlated with lower mortality [OR 0.12 (95% CI 0.02–0.56); p = 0.006]. Conclusions: CONUT is a reliable and simple predictor of LOS and in-hospital mortality in medical wards.
Introduction
Hospital malnutrition represents an acknowledged risk factor for many adverse clinical outcomes [1,2], and the clinical management of malnourished patients is affected by higher in-hospital morbidity, mortality, and healthcare costs. A recent study demonstrated additional costs for hospital malnutrition of over $58 billion in Western countries [3]. It is estimated that about 30% of hospitalized patients both in the United States and Europe present with malnutrition or risk of malnutrition at admission [1]. In Italy, a recent hospital report found over half of the patients at risk of malnutrition and over a third already malnourished at hospital admission [4]. Malnutrition is also an independent risk factor of poor postoperative outcomes in surgical patients [5] and has been linked to an increased risk of infections [6], significantly higher mortality for sepsis [7], a higher risk of pressure ulcers, and a worse outcome of wound healing [8,9]. In critically ill patients, major outcomes such as the duration of mechanical ventilation, the length of stay (LOS) in intensive care units (ICU), or infections are influenced by pre-existing malnutrition [10]. Hospital malnutrition may be more evident in a Gastroenterology Department due to the role of the gastrointestinal tract in nutrient absorption [4]. However, despite these known associations, in daily clinical practice hospital malnutrition often remains unrecognized, and the assessment of the clinical nutrition of hospitalized patients is still underrated, probably due to a lack of awareness among clinicians focused on diagnosis or treatment [11]. Several tools have been released by international societies for screening (Nutrition Risk Screening 2002 (NRS-2002), Malnutrition Universal Screening Tool (MUST)) and for the diagnosis of malnutrition, the most recent being the Global Leadership Initiative on Malnutrition (GLIM) criteria [12]. Despite their wide presence in scientific sessions, the real application of such validated tools appears insufficient in hospital settings, perhaps due to a lack of training, staff, and time [13]. The COntrolling NUTritional status (CONUT) score, a simple index calculated from routine serum analyses (albumin, total lymphocyte count, and total cholesterol), is associated with short- and long-term prognosis in several diseases [14]. The CONUT score has been proven not only to correlate with malnutrition grade [11] but also to have a high predictive value concerning clinical outcomes and morbidity. For example, in patients with cancer, a higher CONUT score predicts a lower overall survival, a lower progression/recurrence-free survival, and a lower cancer-specific survival after surgery [15,16], and a similar predictive value has also been observed for non-solid tumors and other hematologic disorders [17-19]. However, the CONUT score has also been investigated as a predictor of morbidity or mortality in various conditions other than malignancies, for example, in patients undergoing liver transplant [20] or heart bypass surgery [21], in patients with acute heart failure [22], or in patients with pulmonary embolism [23]. To date, fewer studies have been produced about in-hospital short-term outcomes such as the LOS or 30-day re-admission rates in medical units.
A recent monocentric Chinese study by Hao et al. in 2022 demonstrated that a higher CONUT score predicts a longer LOS and in-hospital mortality, specifically in patients with ischemic stroke [24]; another recent, large multicenter retrospective study performed in China in older adults, collecting data from more than eleven thousand patients, demonstrated that a higher CONUT score predicts a longer LOS and in-hospital mortality in elderly patients [25]. However, similar studies concerning LOS or in-hospital mortality in more heterogeneous cohorts of patients or in Western countries are still lacking.
Thus, we aimed to test CONUT at admission as a predictive score of hospital outcomes, such as LOS, in-hospital mortality, and 30-day re-admission rate in an Internal Medicine and Gastroenterology Department of an Italian Tertiary Care University hospital.
Study Design and Ethical Committee Approval
We performed a single-center, observational, prospective, cohort study. The study conformed to the Declaration of Helsinki and the norms of Good Clinical Practice. The Ethical Committee of Fondazione Policlinico A. Gemelli IRCCS, Catholic University of the Sacred Heart approved the protocol (code 2638/22). The STROBE guidelines for cohort studies have been followed [26].
Patients
Included patients were all adults (>18 years old) admitted to the Internal Medicine and Gastroenterology ward at the Fondazione Policlinico Agostino Gemelli IRCCS, Rome, Italy, from March 2021 to February 2022. All participants received information about the procedures to be performed in the study. Consent forms recording the agreement of patients to participate in the study were collected. Patients unable or refusing to give their consent to the study were excluded.
Protocol Description
Patients were assessed by the hospital staff (B.E.A. and M.I.) upon admission and then referred to internal medicine residents (R.B., M.D., and T.G.). Residents explained the protocol to the patients, requested informed consent, and collected data. They then collected demographic characteristics, primary diagnoses, and comorbidities; registered the date of hospital admission and discharge (or death, if any); and recorded clinical data, laboratory values, anthropometric measures (weight, height, and body mass index (BMI)), and other nutritional variables (i.e., NRS-2002, MUST, and nutritional supplementation). Due to the simultaneous presence of multiple diseases in this category of patients, the Charlson comorbidity index (CCI) [27] was calculated for each patient and preferred as a synthetic item over the single admission diagnoses. CONUT classes were defined based on serum albumin (g/dL), total lymphocyte count (count/mm3), and total cholesterol (mg/dL), as reported in Table 1. The primary outcome measure for the present analysis was LOS and the secondary one was mortality during hospitalization. The re-admission rate within 30 days was also evaluated.
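For illustration, the CONUT score can be computed directly from the three laboratory values. The sketch below uses the cutoffs of the original CONUT publication (Ignacio de Ulíbarri et al., 2005); since Table 1 is not reproduced here, these thresholds should be treated as an assumption, although the class boundaries (0-1, 2-4, 5-8, 9-12) match those given above.

```python
# Sketch of the CONUT calculation. Thresholds follow the original CONUT
# publication and are an assumption here (the paper's Table 1 is not shown).

def conut_score(albumin_g_dl, lymphocytes_per_mm3, cholesterol_mg_dl):
    # Serum albumin component (0, 2, 4, or 6 points).
    if albumin_g_dl >= 3.5:
        alb = 0
    elif albumin_g_dl >= 3.0:
        alb = 2
    elif albumin_g_dl >= 2.5:
        alb = 4
    else:
        alb = 6
    # Total lymphocyte count component (0-3 points).
    if lymphocytes_per_mm3 >= 1600:
        lym = 0
    elif lymphocytes_per_mm3 >= 1200:
        lym = 1
    elif lymphocytes_per_mm3 >= 800:
        lym = 2
    else:
        lym = 3
    # Total cholesterol component (0-3 points).
    if cholesterol_mg_dl >= 180:
        chol = 0
    elif cholesterol_mg_dl >= 140:
        chol = 1
    elif cholesterol_mg_dl >= 100:
        chol = 2
    else:
        chol = 3
    return alb + lym + chol

def conut_class(score):
    # Class boundaries as given in the text: 0-1 normal, 2-4 mild,
    # 5-8 moderate, 9-12 severe.
    if score <= 1:
        return "normal"
    elif score <= 4:
        return "mild"
    elif score <= 8:
        return "moderate"
    return "severe"

print(conut_class(conut_score(2.8, 900, 120)))  # -> "moderate" (score 8)
```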
Data Collection and Statistical Analysis
Data were collected using a specific Excel© spreadsheet and shown using descriptive statistical methods. The Kolmogorov-Smirnov test was used to assess the normality of variables. Categorical variables were expressed as numbers (percentage) and continuous variables as mean ± standard deviation or median (interquartile range).
Patients were categorized according to total CONUT score into four classes (normal, mild, moderate, severe) and then grouped into two main classes ("normal-mild" and "moderate-severe") for the inferential analyses. To estimate the risk of moderate-severe CONUT relative to normal-mild CONUT for the primary and secondary outcome measures, we used a multivariable logistic regression model. Kaplan-Meier curves were drawn, and the log-rank test was adopted to compare the obtained LOS intervals according to CONUT main classes.
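A minimal sketch of such a multivariable logistic regression with statsmodels is shown below; the input file and covariate names are hypothetical placeholders standing in for the study's actual dataset.

```python
# Hypothetical sketch of the multivariable logistic regression described
# above; file and column names are placeholders, not the study's variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("conut_cohort.csv")  # hypothetical dataset

# Binary outcome: LOS above the cohort median (assumed coding).
df["long_los"] = (df["los_days"] > df["los_days"].median()).astype(int)

# conut_moderate_severe: 1 if CONUT >= 5, else 0 (assumed coding).
model = smf.logit("long_los ~ conut_moderate_severe + er_admission + age",
                  data=df).fit()

print(model.summary())
print(np.exp(model.params))      # odds ratios
print(np.exp(model.conf_int()))  # 95% CIs for the odds ratios
```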
A receiver operating characteristic (ROC) curve was constructed to provide the sensitivity and specificity of CONUT in predicting mortality. The optimal cut-off value of CONUT was calculated by applying the Youden method to the constructed ROC curve.
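The ROC/Youden procedure can be sketched as follows with scikit-learn; the data generated here are illustrative stand-ins for the cohort, not the study's actual measurements.

```python
# Sketch of the ROC construction and Youden cut-off for CONUT vs. in-hospital
# mortality. The data are illustrative (203 patients, nine deaths).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
y = np.zeros(203, dtype=int)
y[:9] = 1                                           # nine in-hospital deaths
conut = np.clip(np.round(rng.normal(4 + 5 * y, 2.5)), 0, 12)

fpr, tpr, thresholds = roc_curve(y, conut)
auc = roc_auc_score(y, conut)

youden = tpr - fpr                 # Youden's J = sensitivity + specificity - 1
best_cutoff = thresholds[np.argmax(youden)]
print(f"AUC = {auc:.3f}, optimal cut-off = {best_cutoff}")
```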
A previous study reported a 53.1% incidence of CONUT scores higher than 4 [28]. With a margin of error of 7% and a confidence interval (CI) of 95%, we estimated that 196 patients needed to be enrolled to intercept the above-mentioned incidence.
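This sample-size estimate can be verified with the standard single-proportion formula, assuming z = 1.96 for a 95% CI:

```python
# Verifying the sample-size calculation: n = z^2 * p * (1 - p) / e^2
# with p = 0.531 (reported incidence), e = 0.07, and z = 1.96 for a 95% CI.
import math

z, p, e = 1.96, 0.531, 0.07
n = z**2 * p * (1 - p) / e**2
print(math.ceil(n))  # -> 196, matching the text
```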
We used the STATA ® Software (Version 14.0, Stata Corporation; College Station, TX, USA) to perform statistical analyses.
Baseline Characteristics of Patients
Two hundred and three patients were evaluated, of which 127 (62.6%) were males and 76 (37.4%) females; the mean age was 66.05 ± 14.1 years. Most patients (68.5%) were admitted from the emergency department. The mean BMI (kg/m2) was 25.02 (SD ± 4.88) and the mean CCI was 3.02 (SD ± 2.43). According to NRS-2002, 70 patients (34.5%) were at risk of malnutrition. Conversely, according to MUST, 31 patients (15.3%) were at medium risk, whereas almost half of the entire sample (48.7%) were at high risk of malnutrition. According to CONUT, 44 (21.7%) patients had a normal nutritional status (CONUT 0-1), 66 (32.5%) had a mild (CONUT 2-4), 68 (33.5%) a moderate (CONUT 5-8), and 25 (12.3%) a severe impairment of nutritional status (CONUT 9-12). The mean LOS in days was 8.24 ± 5.75; 38 (18.7%) patients developed a refeeding syndrome (RS); 9 patients (4.4%) died during hospitalization. All baseline data are shown in Table 2. The CONUT classes (normal-mild vs. moderate-severe) correlated with age, admission type (elective or emergency), NRS-2002, MUST, the risk and occurrence of RS, and the need for nutritional supplementation within 48 h from admission (either high-calorie and high-protein oral nutritional supplements (ONS) or artificial (enteral or parenteral) nutrition). As regards the main outcome measures, CONUT correlated with LOS and in-hospital mortality; re-admission within 30 days was not statistically different in the two groups (Table 3).
Associations of Risk Factors with LOS
Patients admitted with a CONUT score ≤ 4 had a lower mean LOS than those with a CONUT score ≥ 5 (6.5 ± 4.0 vs. 9.9 ± 6.4 days; p < 0.0001). In the univariate analysis, ER admission, NRS-2002 > 3, MUST ≥ 2, a moderate-severe CONUT class, refeeding syndrome (RS) risk, and a confirmed RS diagnosis were found to be risk factors for a longer LOS. On the contrary, a normal-mild CONUT class was shown to be a protective factor. In the multivariate analysis, ER admission, a moderate-severe CONUT score, and RS diagnosis were confirmed as independent risk factors for a prolonged LOS (Table 4). The Kaplan-Meier method confirmed different LOS curves between the normal-mild and moderate-severe CONUT classes (p < 0.0001), as shown in Figure 1.
Associations of Risk Factors with Hospital Mortality
Nine patients (4.4%) died during hospitalization. Higher CONUT scores and RS diagnosis emerged as potential risk factors for mortality in the univariate analysis. Conversely, a higher BMI was associated with a lower mortality risk, as was nutritional supplementation received within 48 h from admission (Table 5). Due to the limited number of death events in our study population (9), a multivariate analysis was not feasible. However, as shown by the ROC curve, the CONUT score was a reliable predictor of mortality, with an area under the ROC curve (AUC) of 0.831 (95% CI 0.680-0.982); the optimal cut-off obtained was 8.5 (Figure 2).
Discussion
After evaluating 203 patients admitted to an Internal Medicine and Gastroenterology Department, we demonstrated that the CONUT score can be a reliable predictor of a higher LOS and in-hospital mortality. Indeed, at admission, patients with a CONUT score ≥ 5 points had a nearly 90% higher risk of a longer LOS than those with a lower score. The predictive value of the CONUT score in assessing LOS was confirmed in the multivariate analysis. Interestingly, an NRS-2002 score > 3 (risk of malnutrition) and MUST ≥ 2 (high risk of malnutrition) showed an association with a higher LOS only in the univariate analysis. This is of interest, due to the objective nature of the CONUT score, based only on simple laboratory tests easily obtained in almost all clinical settings. Although there were only nine mortality events during hospitalization, the univariate analysis confirmed a high CONUT score as a predictive risk factor for mortality, as also shown in the ROC curve. Thus, we can argue that a baseline CONUT value of 9 (or more) at admission predicts mortality during the hospital stay.
These results align with those of other reports investigating the role of CONUT in predicting LOS and mortality in several hospital settings, especially in elderly patients and in Eastern countries [22,24,25,29]. In detail, Nishi et al. performed a retrospective analysis of a multicenter Japanese registry involving 838 patients (mean age 72 years) admitted for heart failure (HF): high CONUT scores were correlated with an increased risk of in-hospital death, in both unadjusted and adjusted models, and with LOS [29]. Kato et al., analyzing data from a similar registry of patients admitted for acute decompensated heart failure (ADHF) (2466 patients, mean age 80 years), concluded that high CONUT scores were associated with higher in-hospital mortality and infection even when adjusting for other clinical covariates [22]. More recently, a Chinese study including patients admitted for acute ischemic stroke (AIS) (1079 patients, mean age 81 years) found a linear association between CONUT scores and LOS, and a significant association with hospital mortality [24]. Another retrospective study, analyzing data from 11,795 older Chinese adults, found a higher LOS in higher CONUT classes and recognized CONUT (at a score ≥ 6) as the best predictor of in-hospital mortality among five other nutrition-related tools (including NRS-2002) [25]. Despite the smaller study population, we confirmed such evidence in a prospective cohort study, in Italy, in a different clinical setting (an Internal Medicine and Gastroenterology department) and enrolling patients with a younger mean age (66 years). This confirms the reliability of the CONUT score as a predictive marker of short-term clinical outcomes irrespective of the geographical area and the population's age. Indeed, the clinical value of CONUT resides in its simple laboratory data (albumin, cholesterol, lymphocyte count), reflecting the patient's immunonutritional status. As regards albumin, it has been questioned as a proxy measure of nutritional status or total muscle mass, and rather indicated as a negative acute-phase protein [30]. However, low serum albumin concentrations still have a predictive role for adverse outcomes in different clinical contexts of disease-related malnutrition, as demonstrated in recent studies [31,32]. Moreover, low serum albumin levels are associated with increased short- and long-term mortality in hospitalized patients, and serum albumin levels are an important predictor of in-hospital mortality or hospital complications in elderly patients [33]. On the other hand, the total lymphocyte count (≤1500 cells/mm3) may have a few limits due to other possible biasing conditions (i.e., hematological or infective diseases); however, recent studies on COVID-19 have associated the total lymphocyte count with worse hospital outcomes and mortality in a context of severe inflammation [32,34]. Regarding total cholesterol, previous studies have associated low plasma levels with poor nutritional intake, systemic inflammation, and a worse prognosis in hospitalized patients, thus demonstrating a potential predictive value [35,36].
We did not find any difference between the "normal-mild" and "moderate-severe" CONUT classes in terms of hospital re-admission within 30 days. This could be explained by the small number of re-admission events (8 vs. 5, respectively). Moreover, we did not register data about the re-admission type (elective or via the emergency department), so we cannot infer whether re-admission was related to malnutrition itself or to other causes.
Our results highlighted the role of nutritional supplementation (received within 48 h from admission) in reducing mortality risk by nearly 90%. The nutritional supplementation included both high-calorie and high-protein ONS and artificial enteral or parenteral nutrition, according to the prescriptions of the clinical nutrition team. This confirms the results of the EFFORT study, a multicentric randomized controlled trial, which demonstrated, in a large number of patients at nutritional risk, that individualized nutritional support in medical inpatients could reduce adverse events and in-hospital mortality [37]. Moreover, in this study, we collected data about the occurrence of RS, since this work shared the same registry as another of our studies focusing on this topic, to which we refer the reader for further details [38]. RS may occur when malnourished patients receive prompt normocaloric artificial (enteral or parenteral) refeeding; it consists of a rapid shift of fluids and electrolytes into the intracellular space, resulting in electrolyte abnormalities and cellular edema. It may have a dramatic impact in terms of morbidity and mortality, even though it is still underestimated, and, in this study, it was significantly more frequent in the moderate-severe CONUT class. This further supports the value of CONUT as a nutritional predictive score.
The strengths of this study are the homogeneous data collection and the prospective design. Moreover, to the best of our knowledge, this is the first Italian study on this topic. The main limitations are the monocentric design and the small number of deaths, which did not allow us to perform a multivariable analysis, even if this low number reflects the quality of care in the department. Thus, we think that the value of CONUT in predicting in-hospital mortality should be further confirmed in other similar prospective studies. Moreover, we did not perform a complete nutritional assessment, since this study lacks data about body composition. Further studies are warranted to correlate the CONUT score with body composition parameters such as body cell mass or muscle mass. Finally, the impact of statin therapy (as regards total cholesterol) and the presence of hematological or infective diseases (as regards lymphocyte count) have not been investigated.
Notwithstanding the above-mentioned limitations, the study reflects the importance of using appropriate tools to stratify nutritional risk at hospital admission, in order to prompt the necessary nutritional interventions that could be effective in reducing mortality. Current guidelines [12] propose other nutritional tools such as NRS-2002, MUST, and the GLIM criteria, which are more standardized and focused on nutritional status. These tools investigate the amount and speed of weight loss, the BMI, reduced dietary intake, the severity of disease and, in the case of the GLIM criteria, also the loss of muscle mass. We also recognize the value of such an approach in clinical practice [4]. However, such a nutritional approach is still not widespread in medical departments [12]. We thus decided to test another simple score as an objective and rapid method to predict prognosis. The CONUT score was demonstrated to be a simple, objective, and predictive method for this purpose, at least for hospital LOS and probably also for hospital mortality.
Conclusions
The CONUT score is a simple and reliable nutrition-related tool for stratifying the risk of a longer LOS and predicting mortality at admission. Given its relevance and ease of use, health professionals should be incentivized to use the CONUT score in clinical practice to prompt personalized nutritional support. Indeed, we observed that early nutritional intervention (within 48 h of admission) could reduce in-hospital mortality.
The predictive role of different CONUT score cut-off values needs to be validated in populations with different diseases. Further studies are needed to confirm our preliminary results in large and multicentric medical cohorts.
Institutional Review Board Statement: The study protocol was approved by the Ethical Committee of Fondazione Policlinico A. Gemelli IRCCS-Catholic University of the Sacred Heart (code 2638/22; date of approval 20 January 2022). The study has been conducted according to the Declaration of Helsinki.
Informed Consent Statement: All participants provided written informed consent for the publication of their data in anonymized form.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author for any academic use upon citation of this article.
Amazon Forests’ Response to Droughts: A Perspective from the MAIAC Product
Amazon forests experienced two severe droughts at the beginning of the 21st century: one in 2005 and the other in 2010. How Amazon forests responded to these droughts is critical for the future of the Earth's climate system. It is only possible to assess Amazon forests' response to the droughts in large areal extent through satellite remote sensing. Here, we used the Multi-Angle Implementation of Atmospheric Correction (MAIAC) Moderate Resolution Imaging Spectroradiometer (MODIS) vegetation index (VI) data to assess Amazon forests' response to droughts, and compared the results with those from the standard (Collection 5 and Collection 6) MODIS VI data. Overall, the MAIAC data reveal more realistic Amazon forests inter-annual greenness dynamics than the standard MODIS data. Our results from the MAIAC data suggest that: (1) the droughts decreased the greenness (i.e., photosynthetic activity) of Amazon forests; (2) the Amazon wet season precipitation reduction induced by El Niño events could also lead to reduced photosynthetic activity of Amazon forests; and (3) in the subsequent year after the water stresses, the greenness of Amazon forests recovered from the preceding decreases. However, as previous research shows droughts cause Amazon forests to reduce investment in tissue maintenance and defense, it is not clear whether the photosynthesis of Amazon forests will continue to recover after future water stresses, because of the accumulated damages caused by the droughts.
Introduction
Amazon forests, which contain 100 billion tons of carbon [1], not only play an important role in maintaining the biodiversity of the Earth's ecosystem, but also play an important role in the Earth's climate system through exchanges of energy, momentum, and mass with the atmosphere. It is suspected that Amazon forests could degrade to savannas, release the large amount of stored carbon into the atmosphere, and hence accelerate global warming significantly, in the era of climate change [2-4]. Therefore, how Amazon forests respond to the variation of environmental factors (such as sunlight, temperature, and water), both seasonally and inter-annually, is of great concern to the research community.
Amazon forests have two seasons: wet season and dry season. The dry season generally lasts from July to September, and the wet season covers the remaining months of the year [5,6]. Amazon forests are adapted to water variations between the wet season and dry season, because of the forests' deep roots, which can utilize water stored in deep soils [7,8]. In fact, reduced cloud cover accompanying the dry season results in amplified sunlight reaching the forests' canopies, and this enhances the forests' photosynthesis, as confirmed by both field measurements and satellite remote sensing [9-14]. In contrast to natural seasonal variation of water abundance, anomalously low water abundance in the dry season is called drought. According to field measurements and observations, drought can reduce Amazon forests' photosynthesis [15], aboveground biomass [16], and autotrophic respiration [17], and even cause large trees to die [18].
There were two severe droughts over the Amazon in the first decade of the 21st century: one in 2005 [19] and the other, more severe one in 2010 [20,21]. Both of these droughts happened in the dry season of the Amazon due to the anomalously high Atlantic sea surface temperature [19,20], instead of due to El Niño events, which usually decrease the wet season precipitation over the Amazon basin [22].
Because assessments based on field measurements and observations could be site specific and not representative of all the forests [15-18], it is only possible to assess the forests' response to droughts through satellite remote sensing [23], for example, using a vegetation index (VI) to indicate the greenness (i.e., photosynthetic activity) [24]. The most commonly used vegetation indices include the Normalized Difference Vegetation Index (NDVI), which is calculated using the near infrared and red reflectance measurements, and the Enhanced Vegetation Index (EVI), which uses not only the near infrared and red reflectance, but also the blue reflectance to correct aerosol influences in the red band [25]. The optical sensors most used for large-area monitoring of tropical rain forests are the Moderate Resolution Imaging Spectroradiometers (MODIS) onboard the Terra and Aqua satellites of the National Aeronautics and Space Administration (NASA), because MODIS sensors have adequate spatial, temporal, and spectral resolutions for addressing the challenging atmospheric conditions of the Amazon basin (i.e., frequent cloud cover and heavy aerosol loadings) [26].
The assessment of Amazon forests' response to the 2005 drought using satellite remote sensing was first documented in [27], which concluded that Amazon forests' photosynthetic activities increased during the drought, probably because of the increased solar radiation. However, it turned out that the cloud [28,29] and aerosol quantity flags [30] of the standard MODIS data matter considerably in these assessments: with atmospheric contamination excluded, the authors of [5] reassessed the impact of the 2005 drought on Amazon forests, and found that Amazon forests did not green up during the 2005 drought, which agrees with ground-based observations [16,18]. For the even more severe 2010 Amazon drought, Xu et al. [6] introduced additional quality screening to further mitigate the impact of residual atmospheric contamination, and showed widespread greenness decline in Amazon forests using the Collection 5 (C5) MODIS vegetation index product. These findings from optical remote sensing also agree with those from active microwave remote sensing data, which show declines in canopy leaf abundance and moisture during and after the drought [31]. However, the findings of [32] showed that Amazon forests' greenness based on Collection 5 MODIS data has a decreasing trend after 2005, and that the observed forest greenness in drought years was not particularly different from that in non-drought years; it was hence claimed that even if Amazon forests decreased their photosynthetic activities during the droughts, satellite remote sensing could not detect such responses.
To address this issue, we need more advanced processing of MODIS measurements. The Multi-Angle Implementation of Atmospheric Correction (MAIAC) MODIS products [33-36] use a more sophisticated atmospheric correction algorithm and a more accurate and less conservative cloud detection algorithm than the standard Collection 5 and Collection 6 MODIS products [37,38], and incorporate improved sensor calibration as in the Collection 6 MODIS products [39]. In addition, the MAIAC algorithm normalizes the sun-sensor geometry to nadir view and a 45° solar zenith angle, and hence excludes the issue of sun-sensor geometry variation present in the standard MODIS VI products [13]. The aim of this paper is to evaluate the response of Amazon forests to droughts using the more advanced satellite optical remote sensing retrievals (i.e., MAIAC retrievals), and to resolve the confusion about the ability of satellite remote sensing to monitor Amazon forests' response to droughts.
Data and Methods
We used the Collection 5 MODIS land cover product [40] to identify the distribution of Amazon forests, Tropical Rainfall Measuring Mission (TRMM) precipitation data [41] from 2000 to 2012 to identify the drought condition, and the MODIS Collection 5 [25], Collection 6, and MAIAC [33-36] vegetation index products from 2000 to 2012 to assess the dynamics of Amazon vegetation greenness.
The quality of the retrieved vegetation index in the Collection 5 and Collection 6 MODIS vegetation index products is indicated by quality flags [25]. The quality flags indicate whether a pixel is a cloud pixel, a cloud shadow pixel, or a pixel with heavy aerosol loadings. Clouds and heavy aerosol loadings introduce errors into the retrieved vegetation index, so these atmospherically contaminated VI retrievals were excluded from the analysis. The excluding method we used was the same as in [5,6]. The MAIAC vegetation index product does not provide vegetation index retrievals for atmospherically contaminated pixels [35]; therefore, all the retrieved MAIAC vegetation index values were valid and included in our analysis.
Optical remote sensing of the Amazon basin is particularly challenging because of the limited number of valid observations due to the frequent cloud cover and heavy aerosol loadings [26]. The Terra and Aqua satellites overpass the Amazon forests in the morning and afternoon, respectively, and between the two overpasses on the same day, cloud and aerosol distributions could have changed. Therefore, we took advantage of the different overpass times to increase the number of valid observations by merging the vegetation index data from the Terra and Aqua platforms. The merging strategy is as follows. (1) From February 2000 to April 2002, valid vegetation indexes from Terra were used, because Aqua had not been launched until May 2002; (2) from May 2002 to December 2008, the average of the valid vegetation indexes from Terra and Aqua was used if both were available over the same 16-day time step; if only one platform provided a valid vegetation index, only that value was used; (3) from January 2009 onward, valid vegetation index data from Aqua were used preferentially, and Terra VI were used only when Aqua VI were invalid, because Terra MODIS has had a significant sensor degradation issue starting from 2009, especially impacting observations at large view zenith angles [39,42].
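A minimal per-pixel sketch of these merging rules follows; the function signature is ours, and the actual processing was of course applied to gridded imagery rather than scalar values.

```python
# Per-pixel sketch of the Terra/Aqua merging rules described above;
# None marks an invalid (atmospherically contaminated or missing) 16-day VI.

def merge_vi(year, month, vi_terra, vi_aqua):
    """Merge Terra and Aqua 16-day VI values for one pixel."""
    if (year, month) < (2002, 5):
        # Rule 1: before May 2002, only Terra data exist.
        return vi_terra
    if year <= 2008:
        # Rule 2: May 2002 - Dec 2008, average the valid retrievals.
        valid = [v for v in (vi_terra, vi_aqua) if v is not None]
        return sum(valid) / len(valid) if valid else None
    # Rule 3: from Jan 2009, prefer Aqua (Terra sensor degradation).
    return vi_aqua if vi_aqua is not None else vi_terra

print(merge_vi(2005, 7, 0.82, 0.80))  # -> 0.81 (average of both)
print(merge_vi(2010, 8, 0.82, None))  # -> 0.82 (falls back to Terra)
```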
We calculated the dry season average (from July to September) vegetation index of each year from 2000 to 2012 following the method described in [5,6].
The standard MODIS vegetation index product is provided every 16 days, and there may be missing values in the product because of atmospheric contamination, so there are at most six vegetation index values during the three months of the dry season. In order to obtain the dry season average greenness, we first calculate the monthly VI: when both 16-day VIs in a month exist, we take their average as the monthly VI; when only one 16-day VI exists in a month, we take the only available value as the monthly VI; when no 16-day VIs in a month exist, the monthly VI is unavailable. Then, the dry season mean VI is calculated only when all three monthly VIs are available. This calculation method reduced the biases caused by missing values in the standard MODIS VI product, because the Amazon dry season is also the growing season, during which VI increases [9].
The MAIAC vegetation index data were provided every 8 days. We used the maximum compositing method to composite the 8-day MAIAC vegetation index data into 16-day data, following the compositing philosophy of the standard MODIS VI products, and then used the same method to calculate the dry season mean greenness as we used for the standard MODIS VI data.
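The compositing and averaging rules above can be summarized in the following per-pixel sketch; the function names are ours, and None again stands for an invalid or missing retrieval.

```python
# Per-pixel sketch of the dry-season (Jul-Sep) mean VI calculation. For
# MAIAC, the 8-day values are first maximum-composited to 16 days; each
# month needs at least one valid 16-day value, and the dry-season mean is
# computed only when all three monthly VIs exist.

def composite_16day(vi_8day_pair):
    """Maximum-value composite of two 8-day VIs (None = invalid)."""
    valid = [v for v in vi_8day_pair if v is not None]
    return max(valid) if valid else None

def monthly_vi(vi_16day_pair):
    """Mean of the (up to two) valid 16-day VIs in a month."""
    valid = [v for v in vi_16day_pair if v is not None]
    return sum(valid) / len(valid) if valid else None

def dry_season_mean(jul, aug, sep):
    """Dry-season mean VI; None unless all three monthly VIs are valid."""
    months = [jul, aug, sep]
    if any(m is None for m in months):
        return None
    return sum(months) / 3.0

# Example: July has one valid 16-day VI, August and September have two each.
print(dry_season_mean(monthly_vi((0.81, None)),
                      monthly_vi((0.83, 0.85)),
                      monthly_vi((0.86, 0.88))))  # -> 0.84
```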
We calculated Amazon dry season average precipitation by directly averaging the monthly TRMM precipitation data in the dry season months from July to September, because the TRMM monthly precipitation data do not have missing values over the Amazon.
Dry season averages were used because the 2005 and 2010 droughts happened in the dry season [19,20], and because the cloud cover is less frequent during the Amazon dry season [9]. These dry season average precipitation and vegetation index data were used to calculate the long-term means and standard deviations, which were then used to obtain the standardized anomalies of dry season average precipitation and greenness for each year. Pixels with standardized anomalies less than −1 were identified as drought pixels or browning pixels. The standardized anomaly assessment method was also used in previous studies [5,6,27,32]. Detailed data and methods descriptions are provided in the supplementary material of this paper.
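A per-pixel sketch of the standardized-anomaly screening follows; the time series used here is illustrative only and does not reproduce actual MAIAC values.

```python
# Sketch of the standardized-anomaly screening: for each pixel, the
# dry-season mean series (2000-2012) is converted to z-scores, and years
# with z < -1 are flagged as drought (precipitation) or browning (VI) years.
import numpy as np

def standardized_anomaly(series):
    """Z-scores of a per-pixel dry-season time series (NaN = missing)."""
    series = np.asarray(series, dtype=float)
    return (series - np.nanmean(series)) / np.nanstd(series)

# Illustrative 2000-2012 dry-season mean VI values for one pixel.
vi = [0.82, 0.83, 0.81, 0.84, 0.78, 0.75, 0.82, np.nan,
      0.83, 0.82, 0.74, 0.83, 0.82]
z = standardized_anomaly(vi)
browning_years = np.where(z < -1)[0] + 2000  # index 0 corresponds to 2000
print(browning_years)                        # flags the low-VI years
```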
Area of Amazon Forests with Valid Dry-Season Mean VI Retrievals
Over the Amazon forests, the Terra VI products provide more valid dry-season VI retrievals than the Aqua products, and this is true for both the MAIAC and standard MODIS products (Figure 1). This indicates that clouds and aerosols are less prevalent over the Amazon in the morning than in the afternoon, considering that Terra overpasses the Amazon forests in the morning and Aqua in the afternoon. The combination of the Terra and Aqua VI products results in more valid dry-season average VI retrievals, and hence a more complete assessment of the response of Amazon forests' photosynthetic activity to droughts.
The area of forests with valid dry-season Terra-Aqua combined standard (Collection 5 and Collection 6) MODIS VI retrievals is around five million km², which is ~80% of the area of the Amazon forests. When using the Terra product alone, we get valid dry season VI over only ~60% of the Amazon forests. The Terra-Aqua combined MAIAC VI product provides valid dry season VI covering around six million km² of forests, which is ~95% of the Amazon forests, in each year after 2003. The consistently high percentages of valid dry-season mean MAIAC VI retrievals over the Amazon forests allow a much more complete assessment of the response of Amazon forests' photosynthetic activity to droughts than using standard MODIS VI products.
Time Series of the Area of Greening and Browning Amazon Forests
Collection 5 MODIS VI products have previously shown decreasing trends of Amazon forests' greenness after 2005 irrespective of precipitation variations; therefore, Atkinson et al. [32] argued that even if the Amazon forests' greenness decreased during the droughts, MODIS Collection 5 VI data were not able to show this response. Here, we re-performed the analysis using MAIAC, Collection 5, as well as Collection 6 MODIS data, and found that Collection 5 MODIS VI data did show an apparent increasing trend of forested area with dry-season average greenness standardized anomaly less than −1 (browning); however, neither MAIAC nor Collection 6 VI data showed such apparent trends (Figure 2). The increasing trends of the area of Amazon forests showing browning (dry-season mean greenness standardized anomaly less than −1) based on Collection 5 MODIS VI products were caused by the Terra MODIS sensor degradation issue [42]. Therefore, the improved sensor calibration incorporated in both MAIAC and Collection 6 MODIS data is indispensable for the valid assessment of Amazon forests' inter-annual greenness dynamics [38].
Both MAIAC NDVI and EVI data show a large area of browning Amazon forests in 2010, while in the neighboring non-drought years 2008, 2009, 2011, and 2012, Amazon forests did not experience extensive browning (Figure 2a,b). The anomalously low Amazon forest greenness in the dry seasons of 2004 and 2007 (Figure 2a,b) could be caused by the anomalously low water storage before the dry seasons in these two years [43]. This suggests that the reduction of precipitation during wet seasons could also lead to decreased photosynthetic activity of Amazon forests in the following dry seasons. The anomalously low water storage in the wet seasons of 2004 and 2007 was related to the El Niño events in those two years [22,38]. This suggests that future El Niño events, which tend to reduce the wet season precipitation over the Amazon basin, might also decrease the photosynthetic activity of Amazon forests.
The Collection 6 MODIS VI data show a large areal extent of forest browning in 2010, and no decreasing trend in Amazon forests' greenness (Figure 2e,f, Figures S6 and S9). This agrees with the MAIAC VI data; however, the Collection 6 MODIS VI data, especially the Collection 6 EVI data, show quite different absolute areal extents of forest greening and browning than the MAIAC VI data. For example, in 2010, over the drought-impacted forests, the area of browning shown by C6 EVI is only ~50% of that shown by MAIAC EVI, and the area of greening is ~200% of that shown by MAIAC EVI (Table 1). This might be related to incomplete atmospheric correction of the C6 blue band reflectance. Further, the anomalously low C6 MODIS NDVI in 2000 and 2001 over the Amazon forests (Figure 2e, Figure S6) suggests there might be residual sensor calibration issues in those two years. The distributions of MODIS view angles from different years are very similar, if not identical [13], so the impact of MODIS view angle variation is more prominent in intra-annual analysis [13] than in this inter-annual analysis. Nevertheless, observations from high view angles may still induce errors in the spatial patterns of greenness anomaly. Therefore, the greenness anomaly assessment using the MAIAC data, which normalizes the sun-sensor geometry, is more reliable than using the standard MODIS data.
Overall, the MAIAC VI data reveal more realistic Amazon forest inter-annual greenness dynamics than the standard MODIS data, due to improved sensor calibration (with regard to Collection 5 MODIS data), sun-sensor geometry normalization, as well as better cloud detection and atmospheric correction algorithms.
Impact of the 2005 and 2010 Droughts
The Amazon basin experienced two severe droughts, in 2005 and in 2010, with the 2010 drought being more severe than the 2005 drought. The 2005 drought mostly impacted the western Amazon basin south of the equator, and the 2010 drought impacted all of Amazonia south of the equator (Figure S2). The responses of the Amazon forests to the 2005 and 2010 droughts as assessed by various remote sensing data are illustrated in Figures 3 and 4.
For the 2005 drought, the NDVI data of MAIAC, Collection 5, and Collection 6 show greenness decline or no change in most of the drought-impacted regions (Figure 3a,b; Table 1), while the EVI data, especially the Collection 5 MODIS EVI data (Figure 3e), showed greenness increase in the drought-impacted regions. The contradictory results between NDVI and EVI in the C5 data could be due to their different retrieval algorithms. NDVI is calculated using just the red and near infrared bands, while EVI uses these two bands as well as the blue band. The blue band receives a higher percentage of atmospheric radiation from Rayleigh scattering than the red and near infrared bands. The Rayleigh scattering in the blue band should have been corrected in MODIS products under ideal conditions, but the heavy and frequent aerosol loadings over the Amazon during droughts [26] make the blue band atmospheric correction insufficient, which results in elevated blue band reflectance and consequently elevated EVI. Therefore, incomplete atmospheric correction elevates EVI values. Figure 3f suggests residual atmospheric contamination might also exist in the C6 MODIS EVI data.
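For reference, the two indices are computed from near infrared (NIR), red, and blue surface reflectances ρ using the standard MODIS formulations (gain and coefficient values G = 2.5, C1 = 6, C2 = 7.5, L = 1):

```latex
\mathrm{NDVI} = \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{red}}}{\rho_{\mathrm{NIR}} + \rho_{\mathrm{red}}},
\qquad
\mathrm{EVI} = G \cdot \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{red}}}
{\rho_{\mathrm{NIR}} + C_1\,\rho_{\mathrm{red}} - C_2\,\rho_{\mathrm{blue}} + L}.
```

Because ρ_blue enters only the EVI denominator, and with a negative coefficient, residual contamination that inflates the blue reflectance shrinks the denominator and inflates EVI while leaving NDVI untouched, which matches the contrasting NDVI/EVI behavior described above.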
The widespread spurious 2005 Amazon green-up shown in the Collection 5 MODIS EVI product is not present in the assessment using MAIAC EVI (compare Figure 3d with Figure 3e). Collection 5 MODIS EVI data show more greening areas in the 2005 drought in our analysis than in the analysis of [5], because the long-term mean dry season EVI in our analysis is lower, as we used four more years of data starting from 2009, and MODIS Collection 5 Vegetation Index products have a decreasing trend of dry season greenness of Amazon forests [29]. Consequently, the Collection 5 MODIS vegetation index products show positively skewed distributions of greenness standardized anomalies in the drought-impacted forests in 2005, but MAIAC data do not show such skewed distributions (Figure 5a,b).
Based on the MAIAC VI data, in 2005, approximately 70% of the drought-impacted Amazon forests showed normal greenness as in non-drought years, and the greening and browning fractions were both less than 15% (Table 1). Therefore, as a whole, the 2005 Amazon drought did not affect the greenness of the Amazon forests significantly; but regionally, at the epicenter of the 2005 drought, which was located in the southwestern part of the Amazon basin (Figure S2), the greenness of Amazon forests decreased considerably (Figure 3a).
For the 2010 drought, NDVI data from the MAIAC, Collection 5, and Collection 6 MODIS products show extensive browning in the entire drought-impacted vegetation, including forests (Figure 4a-c). These results agree with those presented in [6]. The contradictory 2010 greenness standardized anomaly patterns shown by the Collection 6 NDVI (Figure 4c, prevalent browning) and EVI (Figure 4f, extensive greening) also suggest residual atmospheric contamination may exist in C6 MODIS EVI data.
In the 2010 drought-impacted forests, both the MAIAC and standard MODIS NDVI products show more browning than greening, and the MAIAC products provide more valid retrievals than the standard MODIS VI products (Table 1 and Figure 5c,d). Based on the MAIAC data, about 60% of the 2010 drought-impacted forests showed no change (standardized anomaly within −1 and +1), about 30% showed browning (standardized anomaly less than −1), and less than 10% showed greening (standardized anomaly greater than +1). The extensive browning detected by MAIAC NDVI and EVI cannot be due to residual atmospheric contamination, because residual atmospheric contamination has opposite effects on NDVI and EVI: it suppresses NDVI but elevates EVI.
Even though the absolute anomalous VI difference could be very small and within the error range of the VI product [32], we believe that the standardized anomalies are indicative of real vegetation greenness dynamics, and are not just caused by errors of the VI product, because random errors of VI cannot generate spatial patterns of browning that agree well with the drought area. For example, the browning Amazon forests in 2005 were clustered around the epicenter of the drought, but were not clustered over other regions.
Both severe droughts (in 2005 and 2010) and wet season precipitation reduction induced by El Niño events (in 2004 and 2007) decreased the dry season photosynthesis of Amazon forests, but in the subsequent year, the dry season photosynthesis recovered from the previous decreases (Figure 2). For example, 91.4%, 86.0%, and 81.5% of browning Amazon forests returned to normal or greening in the year after the 2005 drought, the 2007 El Niño event, and the 2010 drought, respectively, as indicated by the MAIAC NDVI data (Figure S4). However, this does not imply that the Amazon forests' photosynthesis recovery will continue to happen after future water stress events, because previous research [17] indicates that Amazon forests reduce autotrophic respiration (i.e., reduce the investment in tissue maintenance and defense) to prioritize growth during droughts, consistent with eco-evolutionary theories; the accumulated reductions in autotrophic respiration might no longer support photosynthesis recovery after future water stresses.
Conclusions
We assessed Amazon forests' response to recent droughts using the MAIAC vegetation index product, and compared the results with those from the Collection 5 and Collection 6 MODIS vegetation index data. Overall, the MAIAC data reveal more realistic Amazon forest inter-annual greenness dynamics than the standard MODIS data, due to improved sensor calibration (with regard to Collection 5 MODIS data), sun-sensor geometry normalization, as well as better cloud detection and atmospheric correction algorithms. Our results from the MAIAC data suggest that: (1) the droughts decreased the greenness (i.e., photosynthetic activity) of the Amazon forests; (2) the Amazon wet season precipitation reduction induced by El Niño events could also decrease the photosynthetic activity of the Amazon forests; and (3) in the subsequent year after the water stresses, the greenness of the Amazon forests recovered from the preceding decreases. However, as previous research shows droughts cause Amazon forests to reduce investment in tissue maintenance and defense [17], it is not certain whether the photosynthesis of Amazon forests will continue to recover after future water stresses. Hence, it is important to keep monitoring the greenness of Amazon forests using satellite remote sensing.
Supplementary Materials:
The supplementary materials of this paper are available online at www.mdpi.com/2072-4292/8/4/356/s1. Figure S1: Land cover types over Amazon. The land cover types were identified using the IGBP classification system in the MODIS land cover product MCD12Q1; Figure S2: Spatial patterns of standardized anomalies of dry-season precipitation over Amazon in the years from 2000 to 2012. The TRMM precipitation data were used to identify the dry season precipitation anomalies; Figure S3: Area of Amazonian forests with dry season precipitation standardized anomaly less than −1 in the years from 2000 to 2012; Figure S4: Spatial patterns of standardized anomalies of dry-season MAIAC NDVI over Amazon in the years from 2000 to 2012. Gray shaded areas are areas with missing data; Figure S5: Spatial patterns of standardized anomalies of dry-season Collection 5 NDVI over Amazon in the years from 2000 to 2012. Gray shaded areas are areas with missing data; Figure S6: Spatial patterns of standardized anomalies of dry-season Collection 6 NDVI over Amazon in the years from 2000 to 2012. Gray shaded areas are areas with missing data; Figure S7: Spatial patterns of standardized anomalies of dry-season MAIAC EVI over Amazon in the years from 2000 to 2012. Gray shaded areas are areas with missing data; Figure S8: Spatial patterns of standardized anomalies of dry-season Collection 5 EVI over Amazon in the years from 2000 to 2012. Gray shaded areas are areas with missing data; Figure S9: Spatial patterns of standardized anomalies of dry-season Collection 6 EVI over Amazon in the years from 2000 to 2012. Gray shaded areas are areas with missing data.
Figure 1. MAIAC products provide more valid observations than both Collection 5 and Collection 6 MODIS products over the Amazon forests in the Amazonian dry season, and the Terra-Aqua combined products yield more valid observations than the Terra or Aqua product alone. (a) Area of Amazon forests with valid dry season mean vegetation index from MAIAC data; (b) Area of Amazon forests with valid dry season mean vegetation index from Collection 5 MODIS VI product; (c) Area of Amazon forests with valid dry season mean vegetation index from Collection 6 MODIS VI product. The vegetation index used for this figure is NDVI. The area of the Amazonian forests is 6.4 million km².
Figure 2. Time series of the area of greening and browning Amazon forests. Forests with greenness standardized anomalies greater than +1 are marked as greening forests, and forests with greenness standardized anomalies less than −1 are marked as browning forests. (a) Results from MAIAC NDVI data; (b) Results from MAIAC EVI data; (c) Results from Collection 5 MODIS NDVI data; (d) Results from Collection 5 MODIS EVI data; (e) Results from Collection 6 MODIS NDVI data; (f) Results from Collection 6 MODIS EVI data. Collection 5 MODIS vegetation index data show apparent increasing trends of browning area over the Amazon forests, but neither MAIAC nor Collection 6 data show such trends.
Figure 3. Spatial patterns of remotely sensed Amazonian dry season greenness standardized anomalies of the drought-affected vegetation, including forests, over Amazon in the 2005 drought year. The Amazonian dry season generally lasts from July to September (JAS). (a) Results from MAIAC NDVI data; (b) Results from Collection 5 MODIS NDVI data; (c) Results from Collection 6 MODIS NDVI data; (d) Results from MAIAC EVI data; (e) Results from Collection 5 MODIS EVI data; (f) Results from Collection 6 MODIS EVI data.
Figure 4. Spatial patterns of remotely sensed Amazonian dry season greenness standardized anomalies of the drought-affected vegetation, including forests, over Amazon in the 2010 drought year. The Amazonian dry season generally lasts from July to September (JAS). (a) Results from MAIAC NDVI data; (b) Results from Collection 5 MODIS NDVI data; (c) Results from Collection 6 MODIS NDVI data; (d) Results from MAIAC EVI data; (e) Results from Collection 5 MODIS EVI data; (f) Results from Collection 6 MODIS EVI data.
Figure 5. Histograms of remotely sensed greenness standardized anomalies over drought-impacted Amazon forests in 2005 and 2010. (a) Histograms of NDVI standardized anomalies in 2005; (b) Histograms of EVI standardized anomalies in 2005; (c) Histograms of NDVI standardized anomalies in 2010; (d) Histograms of EVI standardized anomalies in 2010.
Table 1. Percentages of greening, browning, no-change, and valid areas within Amazon forests impacted by the 2005 and 2010 droughts, as shown by various remote sensing products.
Central Obesity and Associated Factors Among Urban Adults in Dire Dawa Administrative City, Eastern Ethiopia
Background Central obesity (CO) is a medical problem in which extra fat accumulates in the abdomen to an extent that may harm health. Furthermore, previous studies in Ethiopia predominantly relied on body mass index to measure obesity, which does not show the distribution of fat. There is a paucity of information on the measurement of central obesity using waist circumference and its associated factors in Ethiopia, particularly in the study area. Hence, the purpose of this study is to assess the prevalence of central obesity and associated factors among urban adults in Dire Dawa administrative city, Eastern Ethiopia. Methods A community-based cross-sectional study was conducted among 633 adults in selected kebeles of the administrative city from October 15 to November 15, 2020. A multistage and systematic sampling procedure was used to select study participants. Central obesity is defined as a waist circumference ≥83.7 cm for men and ≥78 cm for women, with or without general obesity (GO). Odds ratios along with 95% confidence intervals were estimated to identify factors associated with central obesity using multiple logistic regression analysis. Results The overall prevalence of central obesity was 76.1% (95% CI: 73%, 80%). Factors associated with central obesity were age 45 years and above [AOR = 3.75, 95% CI: (1.86, 7.55)], being female [AOR = 2.52, 95% CI: (1.62, 3.94)], alcohol consumption [AOR = 2.61, 95% CI: (1.69, 4.05)], physical inactivity [AOR = 2.05, 95% CI: (1.23, 3.42)], and spending two hours or more watching television [AOR = 3.30, 95% CI: (1.59, 6.82)]. Conclusion The study shows that central obesity was high in the study area. Age 45 years and above, being female, being married, physical inactivity, alcohol consumption, and spending a long time watching television were associated with central obesity. Regular physical activity, limiting alcohol drinking, and limiting time spent watching television are recommended to prevent central obesity and its associated risks among adults.
Of the total 633 study participants, data were collected from 611, yielding a response rate of 96.5%. Of these, more than two-thirds of the study participants, 435 (71.2%), were females; more than half, 383 (62.7%), had a family size of 4 or more; and the mean (±SD) age of the respondents was 37.6 (±12.08) years.
The prevalence of central obesity was significantly higher among females than males (57.1% vs 19%). The correlates of central obesity were found to be age 45 years and above, being female, alcohol consumption, physical inactivity, and two hours or more of time spent watching television.
In conclusion, the results revealed a high prevalence of central obesity among adults of Dire Dawa. In this population, promotion of regular physical activity, limiting alcohol drinking, and limiting time spent watching television are recommended to prevent central obesity and its associated risks among adults.
Background
Central obesity (CO), also known as abdominal or visceral obesity, is a medical problem in which excess fat accumulates in the abdomen to the extent that it may harm health and/or increase medical problems. 1 Obesity is nowadays a universal epidemic: the World Health Organization estimates that 57.8% of adults will be classified as obese by 2030, 2,3 and the prevalence in Africa is estimated to reach 20-50% by 2025. 4 Ethiopia is also one of the lower-income countries in Africa experiencing a double burden of malnutrition, moving from underweight to overweight/obesity, especially in urban settings. 5 According to the 2014 WHO estimates, 1.2% of males and 6.0% of females were either overweight or obese in Ethiopia. Between 1997 and 2016, the cumulative prevalence in the country increased significantly from 2.6% to 6.9% in females and from 0.6% to 1.9% in males, 6 and the EDHS 2011 found that the prevalence of overweight and obesity in urban settings was 12.1% and 2.8%, respectively. 7

The world has faced a growing "epidemic" of abdominal obesity (AO). 8 The prevalence of this form of obesity is growing rapidly in industrialized as well as unindustrialized countries. 9 In developed countries such as Germany, Spain, and the United States of America, the prevalence of abdominal obesity was found to be 33.9%, 36%, and 56%, respectively. [10][11][12] Other studies show increases over time: in the United States of America, to 46.4% among males and 65.4% among females; in China, from 8.5% to 27.8% among males and from 27.8% to 45.9% among females; and in Nigeria, from 3.2% among males to 39.2% among females. [12][13][14] In Ethiopia, central obesity ranges from 24.4% in Dilla to 41.7% in Mekelle, and it is 33.5% in Addis Ababa and 37.6% in Gondar. [15][16][17][18]

Several factors are associated with central obesity, including genetic, socio-economic, behavioral, and environmental factors. 8 The distribution of body fat is known to be a more independent and potent predictor of disease and death than total adiposity.
Obesity is known to be associated with many health conditions. 19 Central obesity in particular is directly associated with increased visceral abdominal fat and is strongly associated with hypertension, diabetes, dyslipidemia, metabolic syndrome, and coronary heart disease independent of body mass index (BMI), of which it is a significant predictor. 20 Central obesity is related to death and illness, greater disability, reduced quality of life, and increased health expenses. [21][22][23][24][25] A sedentary lifestyle, energy-rich diets, increasing urbanization, and changing modes of transportation are driving the growth of abdominal obesity, which is associated with increased cardiovascular risk. 6,26

Validated measures of obesity and body fat include body mass index (BMI), the waist-height ratio (WHtR), the waist-hip ratio (WHpR), and waist circumference (WC). 27 Advanced, highly sensitive diagnostic tools such as computed tomography (CT), magnetic resonance imaging, and dual-energy X-ray absorptiometry (DEXA) are available, but due to high cost and technical difficulties they are not feasible epidemiological tools for the general population. 28 Neck circumference (NC) has emerged as a novel, simple, and discrete upper-body measurement that differentiates between obese and non-obese individuals. 27 Moreover, several studies have demonstrated the validity of NC as a measure of metabolic syndrome (MetS), as it correlates positively with the classical anthropometric indices such as BMI, WC, and WHtR. A comparative evaluation of waist circumference, waist-to-hip ratio, and BMI showed that waist circumference was superior to both waist-to-hip ratio and BMI in the estimation of single and multiple cardiovascular disease risk factors. 30

Like many developing countries, Ethiopia is facing the costs of epidemiologic, demographic, economic, and nutritional transitions, which continue to favor the emergence of an epidemic of chronic non-communicable diseases. 31 Studies in Ethiopia showing an increasing occurrence of hypertension, diabetes, and death from chronic NCDs support the evidence that Ethiopia is facing the disease burden observed in other Sub-Saharan African countries. 32 Although some studies conducted in different parts of Ethiopia indicate a high prevalence of overnutrition (overweight or obesity) among adults, they predominantly relied on BMI as the measure of obesity or overweight; however, BMI does not distinguish weight gain due to extra fat accumulation from high muscle mass in the body. 33 Moreover, central obesity alone can help to determine the risk of obesity-related disorders/cardiometabolic risks. However, to the best of our knowledge, there has been little literature about the prevalence of central obesity and associated factors in Ethiopia.
Hence, this study aims to assess the prevalence of central obesity and associated factors among urban adults in Dire Dawa administrative city, Eastern Ethiopia.
Methods and Materials
Study Setting and Design
A community-based cross-sectional study was conducted in Dire Dawa city administration from October 15 to November 15, 2020.
Dire Dawa is located 550 km from Addis Ababa, the capital city of Ethiopia. The Dire Dawa administrative council consists of nine urban and thirty-eight rural kebeles. According to the Central Statistical Agency, 34 in 2013 the total population of the administration was 405,444, of which 263,827 were urban residents; adults constitute 52% of the urban population. 35
Population and Eligibility Criteria
All adults aged 18 years and above who live in the urban kebeles of Dire Dawa were the source population. Adults aged 18 and above who were living in the randomly selected urban kebeles of Dire Dawa were the study population. Adults aged 18 years and above who had lived in Dire Dawa for more than six months before the survey were included in the study. Critically ill adults, pregnant women, adults with complicated medical problems, and physically disabled adults were excluded from the study, since these conditions could make anthropometric measurement difficult.
Sample Size Determination and Sampling Procedure
The sample size was calculated using both single and double population proportion formulas. For the single population proportion, the following assumptions were made: 50% expected prevalence of central obesity among adults, a 95% confidence level (Z = 1.96, level of significance α = 5%), a degree of precision (d) of 0.05, a 10% allowance for non-response, and a design effect of 1.5, yielding a final sample size of 633. The sample size for the double population proportion was estimated for a cross-sectional study comparing the risk of central obesity across factors using Epi-Info version 7.3.2 software, assuming a 95% confidence level, 80% power, and an exposed-to-unexposed sample ratio of 1. Taking the larger of the estimates based on the above calculations, a total of 633 adults were required for this study.
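As a worked sketch of the single population proportion step (the standard Cochran-type formula; the exact rounding conventions here are ours, not the authors'):

```latex
n_0 = \frac{Z_{\alpha/2}^{2}\, p\,(1-p)}{d^{2}}
    = \frac{(1.96)^{2}(0.5)(0.5)}{(0.05)^{2}} \approx 384,
\qquad
n = n_0 \times \mathrm{DEFF} \times 1.10 \approx 384 \times 1.5 \times 1.10 \approx 633.
```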
A multistage sampling procedure was used in this research. At stage one, four kebeles were randomly selected from the nine urban kebeles using the lottery method; at stage two, households were selected after obtaining the sampling frame from each kebele administration, and the 633 households were allocated to the four selected kebeles in proportion to their household size.
Then, a systematic sampling technique was used to choose the households to be visited for data collection. The interval (k) for choosing households was determined by dividing the number of households by the total sample size. After determining the k-th interval (k = 53), the starting household between 1 and 53 was selected by the lottery method, and the subsequent households were identified systematically as every 53rd household in each kebele. If more than one eligible respondent was found in a selected household, only one respondent was chosen by the lottery method. Where no eligible respondent was found in a selected household, a revisit was made a minimum of three times; if the respondent was still not present, the household was considered a non-respondent.
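A minimal sketch of this systematic selection, with a hypothetical frame size chosen so the interval works out to the study's k = 53 (the household counts and per-kebele allocation are illustrative):

```python
import random

def systematic_sample(frame: list, n: int) -> list:
    """Select n households by systematic sampling: compute the interval
    k = N // n, pick a random start by lottery, then take every k-th
    household from the frame."""
    k = len(frame) // n                 # k = 53 in this study
    start = random.randrange(k)         # lottery: households 1..k
    return frame[start::k][:n]

# Hypothetical kebele frame of 8,400 households, from which ~158 of the
# 633 allocated households would be drawn (8400 // 158 == 53).
frame = list(range(1, 8401))
print(len(systematic_sample(frame, 158)))   # -> 158
```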
Data Collection and Measurements
The data were collected through face-to-face, interviewer-administered, pretested structured questionnaires from adults living in the sampled households, covering sociodemographic characteristics; behavioral, nutritional, and lifestyle characteristics; and health-related factors. The questionnaire was adapted from the WHO STEPwise approach to chronic disease risk factor surveillance, with minor modifications for the study setting, 36 and from other reviewed literature. Data were collected in the mornings and afternoons on workdays and on weekends, during which times eligible adults were expected to be at home.
Data were collected by eight health care professionals holding at least a BSc degree (trained health extension workers), with one supervisor assigned per two kebeles.
Two days of intensive training were given to data collectors and supervisors on the questionnaire, interview techniques, the objective of the study, and how to maintain the privacy and confidentiality of the data obtained from participants.
The Household Food Insecurity Access Scale (HFIAS), with a reference period of four weeks, was used to assess household food insecurity following FAO-FANTA III, 37 and the level of physical activity was measured using the Global Physical Activity Questionnaire analysis guide. 38 The food frequency questionnaire (FFQ) was adapted from the WHO STEPwise approach to assess the dietary habits of the study population. The FFQ consisted of 12 food groups; respondents were asked to report their frequency of consumption of food items weekly, with one usual week evaluated from the last 12 months. 36 For dietary diversity, a simple count of the number of food groups was used. 39 A dietary diversity score (DDS) was constructed from the intake of the food groups over one week, defined as the sum of food groups consumed over the reference period.
For instance, a respondent who ate at least one item from each of the food groups at least once during the week would have a high DDS. 40

During data collection, a minimum of two meters was kept between interviewers and interviewees, all participants wore face masks during interviews, and sanitizer and clean gloves were used while taking anthropometric data and before any form of contact with the study participants.
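Returning to the dietary diversity score, a minimal sketch of the DDS construction described above; the 12 food-group names and the responses are hypothetical, since the paper does not enumerate its groups here:

```python
# Hypothetical one-week food frequency responses for one respondent:
# food group -> days consumed during the reference week.
ffq = {
    "cereals": 7, "tubers": 0, "legumes": 3, "vegetables": 5,
    "fruits": 0, "meat": 1, "fish": 0, "eggs": 2,
    "dairy": 4, "oils_fats": 7, "sweets": 2, "beverages": 7,
}

# DDS = simple count of food groups consumed at least once in the week.
dds = sum(1 for days in ffq.values() if days >= 1)
print(f"Dietary diversity score: {dds} of {len(ffq)} food groups")
# "High DDS" is then the top tertile of the sample's DDS distribution;
# the tertile cut-offs come from the whole sample, not one respondent.
```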
Anthropometric measurements were performed following the WHO protocol for measuring waist circumference. Waist circumference was measured using a non-stretchable fiber measuring tape.
The participants were requested to stand straight in a relaxed position with both feet together on a flat surface; one layer of clothing was allowed.
Waist circumference was taken approximately at the midpoint between the lower margin of the last palpable rib and the top of the iliac crest, with the subject standing at the end of gentle expiration. Measurements were done in triplicate, and the average was used to represent the WC of an individual. 4
Operational/Standard Definition and Measurements
Central obesity, using cut-offs specific to Ethiopian males and females, is defined as a waist circumference (WC) ≥83.7 cm for men and ≥78 cm for women, with or without general obesity (GO). 41 Physical activity level: participants whose total physical activity was at least 600 metabolic equivalent of task (MET)-minutes/week were considered active, and those below 600 MET-minutes/week were considered inactive. The wealth index was a composite measure of a household's cumulative living standard, calculated by principal component analysis (PCA); variables having a communality value >0.5 were used to produce a factor score. Accordingly, households were categorized into three wealth terciles for further analysis.
Ever smoking: having smoked cigarettes at least once or daily in one's lifetime. 42 Current alcohol intake: having consumed alcohol at least once in the past 30 days. 43 Khat chewing: having chewed khat at least once in the 30 days preceding the survey. 44 High DDS: the highest tertile of the one-week count of food groups consumed; low DDS: the lower two tertiles. 40 Self-reported NCD: a doctor has told the respondent that he or she suffers from the disease and/or advised treatment for it. 45
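The central obesity and physical activity definitions translate directly into code; a minimal sketch using the cut-offs stated above (the function names and the example respondent are illustrative):

```python
def centrally_obese(waist_cm: float, sex: str) -> bool:
    """Ethiopian-specific cut-offs used in this study:
    WC >= 83.7 cm for men, WC >= 78.0 cm for women."""
    cutoff = 83.7 if sex == "male" else 78.0
    return waist_cm >= cutoff

def physically_active(met_minutes_per_week: float) -> bool:
    """GPAQ-based threshold: at least 600 MET-minutes/week is active."""
    return met_minutes_per_week >= 600

# Hypothetical respondent: an 85.0 cm WC male reporting 450 MET-min/week
# would be classified centrally obese and physically inactive.
print(centrally_obese(85.0, "male"), physically_active(450))
```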
Data Processing and Analysis
The completeness and consistency of the data were checked manually. Data were then entered into EpiData version 3.1 statistical software; after the data were cleaned by checking for errors, impossible or doubtful values, and inconsistencies that might be due to coding or data entry errors, they were exported to SPSS version 20.0 software for analysis. Frequency distributions were generated, in which outliers and missing data were identified before further analysis.
Preliminary data analysis included descriptive statistics, i.e., means and standard deviations for continuous variables, and frequencies and percentages, presented in tables and graphs, for categorical variables, to describe the population characteristics.
Simple logistic regression analyses were done to identify associations between each independent variable and the outcome variable; candidate variables were then selected for the final model at a p-value ≤ 0.25. Finally, adjusted odds ratios (AOR) along with 95% CIs were estimated to identify predictors of central obesity using multiple logistic regression analysis. Statistical significance was declared at a p-value < 0.05.
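A minimal sketch of this two-step modeling strategy using statsmodels; the dataset, variable names, and codings are synthetic placeholders, not the study data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the analysis dataset (n = 611 respondents).
rng = np.random.default_rng(1)
n = 611
df = pd.DataFrame({
    "central_obesity": rng.integers(0, 2, n),
    "age_group": rng.choice(["18-29", "30-44", "45plus"], n),
    "sex": rng.choice(["male", "female"], n),
    "marital_status": rng.choice(["single", "married"], n),
    "alcohol": rng.choice(["no", "yes"], n),
    "tv_hours": rng.choice(["none", "lt2h", "ge2h"], n),
    "physical_activity": rng.choice(["active", "inactive"], n),
})
candidates = ["age_group", "sex", "marital_status", "alcohol",
              "tv_hours", "physical_activity"]

# Step 1: simple logistic regression per candidate variable; keep those
# with any coefficient p-value <= 0.25 for the final model.
selected = [v for v in candidates
            if smf.logit(f"central_obesity ~ C({v})", data=df)
                  .fit(disp=0).pvalues.drop("Intercept").min() <= 0.25]
selected = selected or candidates  # fallback for this random demo data

# Step 2: multivariable model; exponentiated coefficients give adjusted
# odds ratios (AOR) with 95% CIs; significance declared at p < 0.05.
formula = "central_obesity ~ " + " + ".join(f"C({v})" for v in selected)
final = smf.logit(formula, data=df).fit(disp=0)
aor = np.exp(pd.concat([final.params, final.conf_int()], axis=1))
aor.columns = ["AOR", "CI 2.5%", "CI 97.5%"]
print(aor.round(2))
```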
Ethical Consideration
Ethical approval was obtained from the Research Ethical Review Committee of Dire Dawa University. Then, to get the required support, a formal letter was written from the college of medicine and health science to the Dire Dawa administrative health bureau.
To get permission to start data collection in the community, a further supportive letter was obtained from the Dire Dawa regional health bureau for each kebele. Voluntary written and signed consent was obtained from each study participant after informing them of the objective, confidentiality, right to withdraw, benefits, and risks of the study; the study was conducted in accordance with the Declaration of Helsinki.
The interviewers wore face masks, and a reasonable physical distance was kept between the individuals involved during data collection.
Sociodemographic Characteristics
Of the total 633 study participants, data were obtained from 611, yielding a response rate of 96.5%. Of those, more than two-thirds of the study respondents, 435 (71.2%), were females; more than half, 383 (62.7%), had a family size of 4 or more; and the mean (±SD) age of the respondents was 37.6 (±12.08) years. Nearly half of the study participants, 297 (48.6%), were of the Amhara ethnic group. Three hundred ninety-six of the participants were married, and about half of the study participants, 308 (50.4%), were Orthodox religious followers.
Almost half of the study respondents, 287 (47%), had attended college-level education or above. Regarding wealth, 292 (47.8%) of the respondents were in the high wealth tercile and around one-third, 210 (34.4%), were in the low wealth tercile; 444 (72.7%) of the participants were food secure, and 180 (29.5%) were daily laborers (Table 1).
Factors Associated with Central Obesity
Age, sex, marital status, alcohol consumption, watching television, physical activity, wealth index, and occupational status were significantly associated with central obesity in the binary analysis at p ≤ 0.25. Of these variables, age, sex, marital status, alcohol consumption, watching television, and physical activity remained statistically significant in the multivariable analysis at p < 0.05 (Table 2). In the multivariable logistic regression analysis, participants aged 45 years and above were 3.75 times more likely to be centrally obese than those aged 18-29 years (Table 3).
Discussion
The prevalence of central obesity in this study was 76.1%. Our study found that age, gender, alcohol consumption, physical inactivity, marital status, and time spent watching television were predictors of central obesity. This indicates a high burden of unapparent illness related to central obesity, which increases the risk of consequent problems. The prevalence of central obesity reported in this study was higher than the results of studies conducted in Ethiopia, such as in northwest Gondar, 15 Addis Ababa, 16 Dilla, 17 and Mekelle, 18 and in India, 46 Eastern Sudan, 47 Tanzania, 48 Uganda, 49 West Africa, 50 southeastern Nigeria, 51 Spain, 52 South China, 53 and North China. 54 The finding is in line with a study done in Northwestern Iran. 55 A possible justification for these discrepancies might be the sociodemographic differences of the study populations. 56,57 Besides, the residence of the participants might also be one possible reason for the discrepancy in the study findings: individuals who live in urban areas and industrialized countries are at higher risk of central obesity due to more frequent consumption of processed, energy-dense foods than rural people. 57,58 As an individual eats a high-energy diet, the metabolic pathway shifts into anabolic processes, including fat biosynthesis, which in turn increases fats in the circulatory system and the need to store them to maintain metabolic activities; 58,59 the extra fat then accumulates in adipose tissue in a limitless manner, putting people at risk of becoming centrally obese. 59 Moreover, the majority of the study respondents were female, which may increase the magnitude of central obesity in the current study.
Another possible justification for the discrepancy may be the difference in the cut-off point reference for the measurement of central obesity between the current and previous studies. 56 Scholars reported that the International Diabetes Federation (IDF) and Third Adult Treatment Panel (ATP III) central obesity definition criteria are the best cut-off points for Sub-Saharan African and Asian populations to reveal the higher threat of cardiometabolic risk. However, the international cut-off value for WC underestimates obesity (central obesity) among Ethiopian adults. 41 That is, the current study uses the optimum anthropometric cut-offs for detecting obesity and markers of metabolic syndrome in Ethiopia, which take a cut-off point of WC ≥83.7 cm for men and ≥78.0 cm for women. 41

In this study, females were 2.52 times more likely to be centrally obese than males, a finding consistent with previous studies. 15,16,47,48,50,[53][54][55] This might be explained by reported differences in fat distribution between males and females, where females were found to have a larger abdominal subcutaneous adipose tissue area, and by a more sedentary lifestyle and less physical activity, since most females living in developing countries, including Ethiopia, are housewives. Thus, they may spend more time at home with little physical activity; differences in setting, genetic susceptibility to fat accumulation between men and women, and postmenopausal redistribution of body fat to the abdominal area may also contribute. [60][61][62][63]

In this study, participants aged 45 years and above were 3.75 times more likely to be centrally obese than those aged 18-29 years, a finding consistent with previous studies. 48,50,54,55,61,62 The possible explanation might be reduced basal metabolism and reduced physical activity with age: lower metabolic activity leads to more storage of fat in the body even with a low dietary intake. In addition, as people get older, the distribution and accumulation of fat shifts to the abdominal region, increasing the chance of developing central obesity. 64

The results reveal that participants who spent more time watching television were 3.3 times more likely to be centrally obese than those who never watched television, a finding supported by other studies. 48,65 The possible explanation might be that a sedentary lifestyle takes up time that would otherwise be spent on physical activity, which may increase the risk of developing (central) obesity; 48,65 this could also be attributed to lifestyle changes, as urban residents more readily adopt western lifestyles such as increased television viewing and sedentary behavior (for example, reduced walking due to the availability of motorized transport), as well as the nutritional transition, which predispose people to the risk of becoming obese. 66,67

This study reveals that consuming alcohol increases the risk of central obesity 2.61-fold compared with non-consumers, similar to the findings of other studies. 16,51 A possible explanation is that the high energy content of alcohol makes its consumption a potential contributor to the obesity epidemic; furthermore, drinking has been shown to be associated with an increase in food intake, which eventually leads to weight gain. 68,69 There appears to be no doubt about the inverse relationship between physical activity and obesity, and about the benefit and preventive effect of physical activity through the control of fats and cholesterol.
70,71 In this study, physically inactive individuals were 2.05 times more likely to be centrally obese than physically active ones, similar to other studies. 16,70,71 We also found that married participants were 2.07 times more likely to be abdominally obese than single participants, consistent with a previous study. 49 The literature on the association between marital status and abdominal obesity is inconsistent, 48 but some researchers have reported similar positive associations between being married or formerly married and weight gain. 72 Married people being more abdominally obese has been attributed to a change in dietary patterns after marriage and increased social support: married people are more likely to have a more stable eating pattern and the social support that comes from the responsibility of eating together. 73 As a limitation, the cross-sectional study design cannot show cause-and-effect relationships between the different factors and the outcome variable.
The use of self-reported data, such as information on sociodemographic factors, physical activity, health-risk behaviors, and health-related factors, may have resulted in under-reporting or over-reporting of lifestyle behaviors.
Recall and social desirability bias are further limitations of the study; these were minimized as far as possible by probing the respondents about the events.
Anthropometric measurement error is another limitation; to minimize it, data collectors were well trained, anthropometric measures were standardized, and the instruments were calibrated. Moreover, available research on central obesity and associated factors among adults in the country is limited.
Conclusion
This study reveals a high prevalence of central obesity among the study respondents, which is relatively high compared with similar studies and demonstrates a major public health problem.
Central obesity was significantly influenced by a number of factors: increased age, being female, time spent watching television, being ever married, alcohol consumption, and physical inactivity. These results therefore call for the corresponding stakeholders to plan suitable protective measures to prevent further health problems associated with central obesity. Furthermore, since this is a cross-sectional study, specific cause-and-effect relationships could not be established.
Thus, further studies with stronger study designs should be considered for further justification.
Data Sharing Statement
All data concerning these results can be obtained from the corresponding author at any time on reasonable request.
Endogenous c-N-Ras provides a steady-state anti-apoptotic signal.
We report that c-N-Ras possesses an isoform-specific, functional role in cell survival under steady-state conditions. This function includes protection from programmed cell death by serum deprivation or upon treatment with apoptosis-inducing agents. The data demonstrate that c-N-Ras may play a functional role in the regulation of steady-state phosphorylated Akt and serine 136-phosphorylated Bad (Ser(136)-pBad). Immortalized N-Ras knockout fibroblasts possess nearly undetectable levels of steady-state Ser(136)-pBad. In contrast, wild-type control cells and the N-Ras knockout cells ectopically expressing c-N-Ras at control levels maintained easily detectable levels of Ser(136)-pBad both at steady-state and following treatment with tumor necrosis factor alpha. Similar results were seen with Ser(112)-pBad. These differences did not arise from differences in total Bad protein levels. These data correlate with the observation that the N-Ras knockout cells exhibit a heightened susceptibility to the induction of apoptosis. Ectopic expression of c-N-Ras in the N-Ras knockout cells at endogenous levels, compared with control cells, significantly rescues the apoptotically sensitive phenotype. Elevated expression of either c-Kirsten A-Ras or c-Kirsten B-Ras did not reverse the apoptotic sensitivity of the N-Ras knockout cells or result in increased levels of either phospho-Akt or phospho-Bad. Our results indicate that, at steady state, c-N-Ras possesses an isoform-specific, functional role in cell survival.
There are four mammalian Ras isoforms: Harvey (Ha), 1 N, and two splice variants of the Kirsten gene, Kirsten A (K(A)) and Kirsten B (K(B)). All four proteins are highly homologous except for the C terminus, where they share no sequence similarity. Ras GTP, the active form, interacts with diverse targets within the cell. Amino acids 32-40 and 60-72 comprise the switch 1 and switch 2 regions, respectively, which are identical in all isoforms (1,2). When Ras binds GTP, both regions undergo conformational changes to form the effector binding pocket (3). Distinct Ras isoform functions are now becoming apparent. Transformation of C3H10T1/2 fibroblasts by expression of oncogenic G12V-Ha-Ras at endogenous levels requires the cooperation with cellular N-Ras (4). In vitro assays also suggest differences in Ras isoform-dependent activation of phosphatidylinositol (PI) 3-kinase and Raf-1 (5).
Most of the biochemical effectors of Ras have been identified by in vitro binding assays and yeast two-hybrid screening and include Raf kinases (6-10), mitogen-activated protein kinase/extracellular signal-regulated kinase kinase (11), Ral guanine nucleotide dissociation stimulator family members (12)(13)(14)(15), PI 3-kinase (16), neurofibromin (17), and others (3,18). Only Raf-1 has been confirmed as an authentic target by its in vivo association with c-N-Ras (4). None of the remaining putative Ras effectors have been identified in Ras immunoprecipitates from cells not ectopically expressing either Ras or the putative target protein.
Ras is also thought to bind and activate PI 3-kinase, causing an increase in the production of 3-phosphorylated phosphatidylinositol lipids (16,26). Phosphatidylinositol 3,4,5-trisphosphate binds to protein kinase B/Akt directly, which then allows for its activation through phosphorylation by 3-phosphoinositide-dependent protein kinases 1 and 2 (27)(28)(29). Akt phosphorylates glycogen synthase kinase 3 and activates p70 S6K (3,18). Akt also phosphorylates and inactivates proapoptotic Bad, a member of the Bcl-2 family of proteins. Phosphorylation of Bad on serine 136 by Akt and on serine 112 by an as yet unidentified kinase, possibly cyclic AMP-dependent protein kinase (30) or Raf-1 (31), leads to inactivation of Bad by its association with the phosphoserine docking protein, 14-3-3 (32)(33)(34). Phosphorylation of either site on Bad is sufficient to inhibit binding to the antiapoptotic proteins Bcl-xL and Bcl-2 (34,35), positioning Akt function in the cell survival pathway.
Ras has been reported to have a functional role in many cellular processes including cell proliferation, migration, differentiation, apoptosis, and certain immune responses (18,36).
Apoptosis, also known as programmed cell death, is an ordered disassembly of a cell, characterized by specific cellular and phenotypic changes including cell shrinkage, membrane blebbing, and DNA degradation (37, 38). Work on the role of Ras in apoptosis has focused on the effect of ectopically expressed, oncogenic Ras proteins and on changes in apoptosis following treatment with various stimuli, including tumor necrosis factor α (TNFα), Fas, and withdrawal of serum or growth factors. The reports of these studies are conflicting, in some cases suggesting that oncogenic Ras inhibits apoptosis (39-41). In other instances, oncogenic Ras expression enhances apoptosis (42-46). The role of endogenous, cellular Ras isoforms in apoptosis has not yet been examined. We have found that endogenous c-N-Ras provides a steady-state survival or antiapoptotic signal. This antiapoptotic signal appears to be generated, at least in part, through regulation of basal phospho-Bad levels. Neither c-K(A)- nor c-K(B)-Ras can substitute for this c-N-Ras survival function.
Antibodies
Bad polyclonal, phosphospecific Bad polyclonal (Ser112 and Ser136), Akt polyclonal, and Ser473 phospho-Akt polyclonal antibodies were from New England Biolabs. Phospho-MAP kinase monoclonal, anti-N-Ras monoclonal, anti-ERK2 polyclonal, anti-K(A)-Ras polyclonal, and anti-K(B)-Ras polyclonal antibodies were from Santa Cruz Biotechnology, Inc. (Santa Cruz, CA). Anti-FLAG monoclonal antibody was from Eastman Kodak Co. Hamster anti-mouse Fas receptor antibody (clone Jo2) (for activation of the Fas receptor) was from Pharmingen (San Diego, CA). Anti-Fas/CD95 antibody (used for Western analysis of the Fas receptor) was from Transduction Laboratories. Anti-p55 TNF receptor I was from Biodesign International. Anti-rabbit secondary antibody conjugated to horseradish peroxidase (HRP) was from Transduction Laboratories, and goat anti-mouse-HRP was from Kirkegaard and Perry Laboratories (Gaithersburg, MD).
Cell Culture
N-Ras knockout (N−/−), heterozygote (N+/−), and control N+/+ mouse embryo fibroblasts (MEFs) were a generous gift from R. Kucherlapati (Albert Einstein College of Medicine) (47). K-Ras knockout and control K+/+ MEFs were a generous gift from T. Jacks (Howard Hughes Medical Institute, Massachusetts Institute of Technology) (48). MEFs were immortalized by a modification of the 3T3 protocol (49). The MEFs were passaged 1:3 every 7 days until they developed a fibroblast morphology. To avoid any cell-specific changes arising from immortalization, multiple, independently isolated cell lines were used throughout these studies. Cells were grown in complete medium consisting of Dulbecco's modified Eagle's medium (Life Technologies, Inc.) containing 10% fetal bovine serum (Atlanta Biologicals), 1× nonessential amino acids, and 1× penicillin/streptomycin (Life Technologies). Cells were kept in complete medium in all experiments unless otherwise stated. MEFs were grown in complete medium with additional serum to a final concentration of 20%. Serum starvation was performed by rinsing cells twice with phosphate-buffered saline (PBS; 20 mM Na2HPO4, 120 mM NaCl, pH 7.4) and incubation in Dulbecco's modified Eagle's medium containing nonessential amino acids and penicillin/streptomycin.
Pharmacological Treatments
Recombinant murine TNFα (Calbiochem) was dissolved in 0.2-µm-filtered PBS containing 0.1% bovine serum albumin (Sigma) and stored in aliquots at −80 °C. We have found that the TNFα potency varied with the number of freeze/thaw cycles. In general, each aliquot was used only twice. Activation of the Fas receptor was achieved by incubation of cells for the times indicated in complete medium containing 1 µg/ml murine anti-Fas receptor antibody (Pharmingen, clone Jo2, form NA/LE) and 0.5 µg/ml recombinant protein G (Sigma). Staurosporine (Sigma) was dissolved in Me2SO and used at 75-100 nM.
Cloning and Transfections
N-Ras knockout cells stably expressing wild-type c-N-Ras (N−/−wtN cell lines) were generated by transfection of N−/− cells using Lipofectamine Plus (Life Technologies) with c-N-Ras/pIBW3 (a gift from Angel Pellicer, New York University), which has the c-N-Ras gene under the control of the thymidine kinase promoter, and selection in G418 (Fisher). Stable clones were maintained in complete medium containing 200 µg/ml G418. N-Ras knockout cells stably expressing Bcl-2-FLAG (a gift from Alex Almasan, Cleveland Clinic Foundation) were generated by the same protocol. K(A)-Ras was cloned by polymerase chain reaction (Expand High Fidelity PCR System; Roche Molecular Biochemicals) from a bacterial expression vector containing the sequence of c-K(A)-Ras (gift from Berthe Willumsen, University of Copenhagen). Primers corresponding to the N-terminal region of c-K(A)-Ras (forward, 5′-AAGCTTCCCGGGGCGGCCGCGGATCCATGACGGAAT-3′) and the reverse complement of the C-terminal region of c-K(A)-Ras (reverse, 5′-ATCGATGTCGACGAGCTCTCTAGATTACATTATAACGCATTT-3′) were prepared by Life Technologies, Inc. Following the polymerase chain reaction, the product was ligated into pTargeT (Promega), which contains a cytomegalovirus enhancer and promoter, and the ligation product was used to transform JM109-competent Escherichia coli cells. Colonies were selected on LB plates containing 100 µg/ml ampicillin (U.S. Biochemical Corp.) and screened for the presence and direction of the transgene by restriction digest. Positive, forward-orientation clones were used to transfect N-Ras knockout fibroblasts by the method described above. A similar procedure was used to clone c-K(B)-Ras from G12V-K(B)-Ras/pZip (gift from J. Gibbs, Merck), where the forward N-terminal primer was extended beyond the 12th codon to back-mutate the valine 12 to the wild-type glycine (5′-ACACCATGACTGAATATAAACTTGTGGTAGTTGGAGCTGGTGGCGTA-3′). The reverse complement of the C-terminal region of c-K(B)-Ras was used for the reverse primer (3′-AGATCTCCATGGGTCGACTATTTACATAATTACACACTTTG-5′). The resulting c-K(B)-Ras/pTargeT was transfected into N-Ras knockout cells as described. Prior to transfections, the c-K(A)- and c-K(B)-Ras plasmids were sequenced to confirm their identity with the sequences of mouse c-K(A)- or c-K(B)-Ras in the GenBank database. All transfected clones were tested for the presence and level of expression of the transgene by Western analysis.
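The reverse primers above are simply the reverse complements of the C-terminal sense-strand regions. As a minimal, hypothetical illustration of that construction (the helper function and the template fragment below are ours, not part of the original protocol; the fragment is chosen so that its reverse complement reproduces the 3′ end of the quoted c-K(A)-Ras reverse primer):

```python
# Minimal sketch of reverse-complement primer construction.
# The "c_terminal_sense" fragment is illustrative only; it is chosen so that
# its reverse complement matches the 3' end of the quoted reverse primer.

COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

# Illustrative C-terminal sense-strand fragment ending in the TAA stop codon.
c_terminal_sense = "AAATGCGTTATAATGTAA"
reverse_primer_core = reverse_complement(c_terminal_sense)

# -> TTACATTATAACGCATTT, the 3' end of the quoted c-K(A)-Ras reverse primer.
print(reverse_primer_core)
```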
Preparation of Cell Lysates
All lysis buffers contained the following phosphatase inhibitors: 30 mM β-glycerophosphate, 5 mM p-nitrophenyl phosphate, 1 mM each of phosphoserine and phosphothreonine, 0.2 mM phosphotyrosine, and 100 µM sodium vanadate, and the following protease inhibitors: 50 µg/ml each of aprotinin and leupeptin, 25 µg/ml pepstatin A, and 1 mM phenylmethanesulfonyl fluoride. For Western analysis of Ras expression, serine 473-phospho-Akt (pAkt) levels, total Akt levels, and phospho-MAP kinase (pMAPK) levels, cells were harvested by scraping into PBS, and the resulting cell pellet was resuspended in p21 buffer (20 mM MOPS, 5 mM MgCl2, 0.1 mM EDTA, 200 mM sucrose, pH 7.4) containing 1% saponin, centrifuged, and the pellet resuspended in p21 buffer containing 1% CHAPS (U.S. Biochemical Corp.) and incubated for 20 min on ice. The lysate was centrifuged again at 13,000 × g, and the supernatant was retained. Protein concentration was determined by the method of Bradford (50). For Western analysis of total Bad or phospho-Bad levels, cells were harvested by trypsinization, combined with their medium, and centrifuged at 1,000 × g for 10 min. The cells were washed once in Tris-buffered saline (TBS; 20 mM Tris, 140 mM NaCl, pH 7.4) and solubilized in TBS containing 1% Nonidet P-40 (Igepal, Sigma) plus phosphatase and protease inhibitors as described. After 20 min on ice, the lysate was centrifuged at 13,000 × g, and the supernatant was retained for protein measurements and Western analysis.

[Fig. 1 legend: Western analysis of N-Ras knockout (N−/−), control (N+/+), and N−/−wtN cells. Lysates were prepared in p21 buffer containing 1% saponin, the pellet was re-extracted in p21 buffer containing 1% CHAPS, and 100 µg of protein per lane was run on a 13% SDS-polyacrylamide gel, transferred to PVDF (Hybond P; Amersham Pharmacia Biotech), and blotted with anti-N-Ras monoclonal antibody (Santa Cruz Biotechnology) with HRP-coupled goat anti-mouse secondary antibody and standard ECL detection. The standard is histidine-tagged, recombinant N-Ras, which runs at approximately 30 kDa.]
Western Analysis
Lysates containing equal amounts of protein were loaded onto SDS-polyacrylamide gels. Following electrophoresis, the proteins were transferred to polyvinylidene difluoride (PVDF) membrane (Hybond P; Amersham Pharmacia Biotech). Blocking was performed in 5% nonfat milk containing 5% newborn calf serum (Life Technologies). Blots were incubated with primary antibodies for 2-3 h at room temperature or overnight at 4 °C, followed by washing in TBS, 0.1% Tween. The blots were then incubated with either goat anti-mouse horseradish peroxidase (HRP) (Kirkegaard and Perry Laboratories) or anti-rabbit HRP (Transduction Laboratories). After washing, the blots were developed, as indicated, with ECL (Amersham Pharmacia Biotech) and exposure to film (Hyperfilm ECL; Amersham Pharmacia Biotech) or with ECL-Plus (Amersham Pharmacia Biotech) and detection with a Molecular Dynamics Storm Imager.

[Fig. 2 legend: A, phospho-MAP kinase levels in N-Ras knockout, control, and N−/−wtN cells, untreated or treated with TNFα and cycloheximide; anti-phospho-MAP kinase monoclonal blot with goat anti-mouse-HRP and standard ECL detection; representative of three experiments. B, top, Ser473-pAkt levels (50 µg of protein per lane on a 10% gel, Ser473-pAkt polyclonal antibody, ECL-Plus/Storm Imager detection; representative of two experiments); bottom, total Akt levels (Akt polyclonal antibody, ECL detection; representative of three experiments). C, Ser136-pBad levels (150 µg of protein per lane on a 13% gel, Ser136-pBad polyclonal antibody, ECL-Plus/Storm Imager detection; representative of four experiments). D, total Bad levels (150 µg of protein per lane on a 13% gel, anti-Bad polyclonal antibody; representative of three experiments). Cells in A-D were left untreated or treated with 1 ng/ml TNFα in the presence of 2.5 µg/ml cycloheximide and harvested at the indicated times; lysates for C and D were prepared in TBS, 1% Nonidet P-40 as described under "Experimental Procedures."]
Apoptosis Assays
TUNEL Analysis-Untreated cells or cells treated for the indicated times were harvested by trypsinization and combined with their medium (to collect any detached cells), centrifuged, and washed once in cold PBS. The cell pellets were resuspended in 1% paraformaldehyde (EM Science) in PBS and incubated on ice for 15 min. The fixed cells were centrifuged and washed once with PBS and resuspended in cold 70% ethanol. TUNEL analysis was performed by fluorescence-activated cell sorting using the APO-BRDU flow cytometry kit for apoptosis according to the manufacturer's directions (Phoenix Flow; Pharmingen).
Cell Death ELISA-Untreated or treated cells in 12-well cluster plates were scraped in their medium and centrifuged at 500 × g for 5 min. The cell pellet was resuspended in 200 µl of lysis buffer supplied by the manufacturer (Cell Death Detection ELISA Plus kit; Roche Molecular Biochemicals). 20-µl aliquots were used in the analysis, which measures the appearance and relative amounts of cytoplasmic histone-associated DNA fragments (mono- and oligonucleosomes), with detection by a microtiter plate reader at 405 nm, according to the manufacturer's instructions. Incubation was performed overnight at 4 °C instead of 2-3 h at room temperature as suggested by the manufacturer. The reading from the negative control (buffer only) supplied by the manufacturer was subtracted from all sample values.
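The readout of this assay reduces to simple arithmetic on A405 values. The sketch below illustrates the background subtraction described above together with a treated/untreated ratio; expressing the result as an "enrichment factor" is the convention commonly used with this kit rather than a quotation from this paper, and the readings are hypothetical:

```python
# Sketch of the Cell Death ELISA arithmetic: subtract the buffer-only
# negative control from every reading, then express treated samples relative
# to the untreated control. All absorbance values below are hypothetical.

def background_subtract(a405_samples, a405_negative_control):
    """Subtract the buffer-only control reading from all sample values."""
    return [a - a405_negative_control for a in a405_samples]

def enrichment_factor(a405_treated, a405_untreated):
    """Relative apoptosis: treated absorbance over untreated absorbance."""
    return a405_treated / a405_untreated

negative_control = 0.05                      # buffer-only well
untreated = background_subtract([0.21, 0.19, 0.20], negative_control)
treated = background_subtract([0.88, 0.95, 0.91], negative_control)

mean = lambda xs: sum(xs) / len(xs)
print(f"enrichment factor: {enrichment_factor(mean(treated), mean(untreated)):.1f}")
```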
Ras Signaling in N-Ras Knockout Cells

Expression of c-N-Ras is absent in all immortalized N-Ras knockout cell lines (N−/−) (Fig. 1, top). The expression levels of c-N-Ras in the N-Ras knockout cells ectopically expressing c-N-Ras (N−/−wtN) are similar to that observed in the control N+/+ cells (Fig. 1, bottom). All cell lines, except the K-Ras knockout cells, express K-Ras (see below), and none express detectable levels of Ha-Ras (data not shown).
Since the N-Ras knockout cells express only c-K(A)- and c-K(B)-Ras, they present a unique system to examine signaling systems that might specifically require c-N-Ras. We chose to test for changes in either phospho-MAP kinase or phospho-Akt levels, since each of these is regulated through a distinct Ras signaling pathway (Raf-1 and PI 3-kinase, respectively). Differences between N-Ras knockout cells and control cells in the level of activated MAP kinase or Akt were examined both at steady state and following agonist stimulation. We examined phospho-MAP kinase (p42 and p44) levels under steady-state growth and following treatment with TNFα in the presence of cycloheximide (Fig. 2A). The N-Ras knockout cells, control N+/+ cells, and N-Ras knockout cells stably expressing c-N-Ras at control levels possessed similar levels of phosphorylated MAP kinase at steady state and following treatment with TNFα. There was a small increase in the level of phospho-MAP kinase at 1 h that decreased to steady-state levels after 4 h. This is consistent with the report that both Jun N-terminal kinases and extracellular signal-regulated kinases (ERKs) are activated in a Ras-dependent manner following Fas ligation in SHEP cells (51). Recently, two groups reported that phosphorylation of Bad on serine 112 is regulated by the MAP kinase pathway (31, 52). The results from these studies suggested that the MAP kinase pathway is necessary for Ser112 phosphorylation and inactivation of proapoptotic Bad, similar to Ser136 phosphorylation of Bad by Akt (32-34, 53, 54). Our data suggest that the MAP kinase pathway is unaffected by the absence of c-N-Ras. While our laboratory has demonstrated that c-N-Ras preferentially binds to Raf-1 in G12V-Ha-Ras-transformed C3H10T1/2 fibroblasts, it is possible that, as a result of continuous culturing of the N-Ras knockout fibroblasts in serum-containing medium, these cells may have adapted alternative mechanisms that lead to MAP kinase activation.
Unlike MAP kinase, Akt can be activated by a Ras/PI 3-kinase-dependent pathway (3, 18). Our results demonstrate that, at steady state, the N-Ras knockout cells possess minimal levels of pAkt in contrast to control cells (Fig. 2B, upper panel). Ectopic expression of c-N-Ras in the N-Ras knockout cells significantly restores the level of pAkt to levels comparable with those observed in the control cells. The differences observed in pAkt are not a result of differences in the total amount of Akt (Fig. 2B, bottom panel). The N-Ras knockout cells, control N+/+ cells, and the N-Ras knockout cells ectopically expressing c-N-Ras (N−/−wtN) possess similar levels of total Akt protein both at steady state and following treatment with TNFα. This implies that activation of the c-N-Ras/PI 3-kinase/Akt pathway may be impaired in N-Ras knockout cells.
[Fig. 3 legend: Apoptotic sensitivity of N-Ras knockout cells. A, TUNEL analysis of N−/−, N+/+, and N−/−wtN cells left untreated or treated for 4.5 h with 1 ng/ml TNFα in the presence of 2.5 µg/ml cycloheximide; cells were harvested by trypsinization, combined with their medium, washed in cold PBS, fixed in 1% paraformaldehyde, resuspended in cold 70% ethanol, and analyzed by fluorescence-activated cell sorting with the APO-BRDU kit (Phoenix Flow; Pharmingen); performed in triplicate, representative of at least four experiments. B, Cell Death Detection ELISA Plus analysis (Roche Molecular Biochemicals) of N-Ras knockout and control MEFs treated for 4 h with 2.5 µg/ml cycloheximide alone or with cycloheximide plus 1 ng/ml TNFα; 20-µl lysate aliquots were incubated overnight at 4 °C in streptavidin-coated wells with anti-histone-biotin and anti-DNA-POD immunoreagent, and absorbance was read at 405 nm; performed twice in triplicate. C, TUNEL analysis of N−/−, N+/+, and N−/−wtN cells untreated or treated for 8 h with 1 µg/ml anti-mouse Fas receptor antibody (clone Jo2; Pharmingen) and 0.5 µg/ml recombinant protein G (Sigma); representative of at least two experiments performed in duplicate. D, TUNEL analysis after incubation in serum-free medium for 0, 24, and 48 h; representative of three determinations. E, TUNEL analysis of N−/−, N+/+, and Bcl-2-FLAG-expressing N−/− clones untreated or treated with 1 ng/ml TNFα plus 2.5 µg/ml cycloheximide; inset, anti-FLAG immunoblot (Kodak) of the Bcl-2-FLAG clones, 100 µg of CHAPS-solubilized lysate per lane; representative of two determinations.]

[Fig. 4 legend: Reversal of the apoptotic sensitivity of N-Ras knockout cells is specific for the c-N-Ras isoform. A, Western analysis of c-K(B)-Ras levels in N-Ras knockout and control N+/+ cells; 100 µg of protein per lane on a 13% gel, the upper half of the blot probed with anti-ERK2 polyclonal antibody as a loading control and the lower half with anti-K(B)-Ras polyclonal antibody; the first lane is a 25-ng standard.]

c-N-Ras Function Influences Steady-state Levels of Phosphorylated Bad (pBad)

Bad can be phosphorylated on position 136 by Akt (32-34, 53), which can itself be activated by a Ras-dependent PI 3-kinase pathway (55). Phosphorylation of Bad on serine 112 and/or 136 results in the sequestering of pBad by cytosolic 14-3-3, allowing an increase in free, antiapoptotic Bcl-2 and Bcl-xL (37, 56). c-N-Ras could provide a steady-state survival signal through its regulation of basal Akt activity. In view of the differences observed in steady-state pAkt levels between N-Ras knockout and control cells, we examined the levels of pBad. In contrast to control N+/+ cells, the levels of Ser136-pBad were barely detectable in the N-Ras knockout cells and did not change upon treatment with TNFα (Fig. 2C) or Fas receptor ligation (data not shown). Stable expression of c-N-Ras in the N-Ras knockout cells restored the levels of Ser136-pBad to nearly control levels. Similar results were observed with Ser112 phosphorylation of Bad (data not shown). To be certain the differences observed in the levels of Ser136-pBad did not result from changes in Bad expression, parallel samples were analyzed for total Bad (Fig. 2D). The results demonstrate that there are no differences in the level of total Bad between N-Ras knockout, control, or N-Ras knockout cells ectopically expressing c-N-Ras. This implies that the differences in pBad levels arise from differences in basal or "tonic" signaling by a c-N-Ras/Akt-dependent pathway.
N-Ras Knockout Cells Possess Heightened Susceptibility to Undergo Apoptosis

One of the cell's protective mechanisms against apoptosis is the phosphorylation of the proapoptotic Bcl-2 family member, Bad (37, 57). The decreased steady-state levels of pBad in the N-Ras knockout cells could imply that they are more susceptible to apoptotic agents. We therefore examined the sensitivity of the N-Ras knockout cells to the induction of apoptosis by treatment with apoptotic agonists or serum starvation. Treatment of N-Ras knockout cells with 1 ng/ml murine TNFα in the presence of cycloheximide results in the rapid onset of apoptosis, 40-50% by 4.5 h, as measured by a TUNEL assay (Fig. 3A). Reconstitution of N-Ras knockout cells by expression of c-N-Ras at endogenous levels (N−/−wtN3 or N−/−wtN8, Fig. 1, bottom) results in a significant resistance to TNFα treatment, more similar to control cells (Fig. 3A). Similar results were obtained by cell counting and by using the Cell Death Detection ELISA Plus assay (Roche Molecular Biochemicals) (data not shown).
To be certain that the differences in apoptotic sensitivity of the N-Ras knockout and control N+/+ fibroblasts were not simply a result of immortalization, we tested the sensitivity of the MEFs to treatment with cycloheximide and TNFα (Fig. 3B). Both the N-Ras knockout and control MEFs demonstrated some sensitivity to the presence of 2.5 µg/ml cycloheximide alone as measured by the Cell Death Detection ELISA assay. Higher absorbance values reflect increased levels of cytoplasmic histone-associated DNA fragments, which is a measure of the relative degree of apoptosis. The N-Ras knockout MEFs demonstrated significant sensitivity to the addition of TNFα at 1 ng/ml. In contrast, the control MEFs were not sensitive to the addition of TNFα above that observed with cycloheximide alone. This implies that the differences seen in the immortalized cell lines are reflective of the similar sensitivity observed in the MEFs.
Treatment of the N-Ras knockout, control, and N−/−wtN reconstituted cells with activating anti-Fas antibody resulted in findings similar to those observed with TNFα treatment (Fig. 3C). The N-Ras knockout cells demonstrate 25% apoptosis by 8 h of treatment with anti-Fas antibody and soluble protein G, which is reversed by ectopic expression of c-N-Ras at endogenous levels (N−/−wtN3 or N−/−wtN8, Fig. 3C). In both instances, we did not detect significant differences in the level of either p55 TNF receptor I or CD95/Fas receptor in the established knockout cell lines compared with control cell lines (data not shown).
Serum starvation also led to enhanced cell death by apoptosis of N-Ras knockout cells compared with control N+/+ cells, with the restored N−/−wtN cells again displaying significant, although partial, reversion (Fig. 3D). Here, induction of apoptosis by withdrawal of serum takes longer, 40% cell death by 48 h, which is not unlike the results seen with IL-3 withdrawal from pro-B lymphocytes (40). These data suggest that the absence of c-N-Ras function in the N-Ras knockout cells renders them more apoptotically sensitive, possibly through altered levels of pAkt and pBad. The observation that multiple N-Ras knockout cell lines are more sensitive to a variety of apoptotic inducers suggests that c-N-Ras functions in a global fashion in providing a steady-state survival signal.
Since there are very noticeable differences in the steady-state levels of pBad in the presence versus the absence of c-N-Ras (Fig. 2C), we tested whether the stable expression of Bcl-2 would protect N-Ras knockout cells from TNFα-induced apoptosis. Stable transfection of N−/− cells with a FLAG-tagged Bcl-2 renders all clones resistant to TNFα-induced apoptosis (Fig. 3E). It seems likely that the overexpression of Bcl-2 compensates for the higher levels of unphosphorylated Bad in the parental N-Ras knockout cells. Shifting to a higher steady-state level of Bcl-2 by overexpression presumably alters the ratio of Bcl-2 to unphosphorylated Bad present in the N-Ras knockout cells, allowing a more resistant phenotype, similar to control N+/+ cells, to be achieved.
Neither c-K(A)- nor c-K(B)-Ras Substitutes for c-N-Ras in Providing a Steady-state Survival Function

We set out to test whether the restoration of the control N+/+ cell phenotype was specific for c-N-Ras. The levels of c-K(A)- and c-K(B)-Ras were examined in all cell lines. Both c-K(A)- and c-K(B)-Ras appear to be up-regulated in the N-Ras knockout cell lines compared with control N+/+ cells (Fig. 4, A and B; the levels of MAP kinase proteins are shown as a control for protein loading). This up-regulation may be a consequence of the immortalization process and/or the continuous culturing of the N-Ras knockout cells in serum-containing medium. The elevated levels of c-K-Ras proteins may have been necessary for these cells to survive in the absence of c-N-Ras. Overexpression of the K-Ras gene products did not result in (a) protection from apoptotic agents or (b) restoration of either basal pAkt or pBad levels. These biological events and biochemical properties were only restored by the ectopic expression of c-N-Ras.

[Fig. 4 legend, continued: B, Western analysis of c-K(A)-Ras levels; 100 µg of protein per lane on a 13% gel, blotted with anti-K(A)-Ras polyclonal antibody; 15-s exposure except the last lane (5 min); equal loading confirmed by anti-ERK2 blotting (data not shown); the standard is 25 ng of bacterially expressed c-K(A)-Ras protein; representative of three experiments. C, TUNEL analysis of N-Ras knockout, control N+/+, and N−/−(2)wtK(A)-Ras clones (c-K(A)-Ras/pTargeT stable transfectants selected in G418 as described under "Experimental Procedures") untreated or treated for 3.5 h with 1 ng/ml TNFα plus 2.5 µg/ml cycloheximide; values for untreated cells were less than 3% and are not shown; representative of three experiments. D, TUNEL analysis of K-Ras knockout (K−/−) and control K+/+ cells treated for 24 h with 2.5 µg/ml cycloheximide alone or in combination with 10 ng/ml TNFα; representative of three experiments. E, Cell Death Detection ELISA Plus measurement of apoptosis in K−/− and control K+/+ MEFs treated for 6 h with 10 ng/ml TNFα plus 2.5 µg/ml cycloheximide or with cycloheximide alone; cycloheximide-alone absorbance values were subtracted from the TNFα-treated sample values; performed in triplicate.]
Ectopic expression of additional c-K(A)-Ras in N-Ras knockout cells did not reverse their apoptotic sensitivity (Fig. 4C). None of the stable c-K(A)-Ras-expressing clones were protected from TNFα-induced apoptosis. Similar results are seen with ectopic expression of c-K(B)-Ras (data not shown). Studies with K-Ras knockout cells support the results with overexpression of c-K(A)- and c-K(B)-Ras in the N-Ras knockout cells. K-Ras knockout cells do not express either c-K(A)- or c-K(B)-Ras; nor do they express detectable levels of Ha-Ras (data not shown). They provide a system to study the function of c-N-Ras alone. Treatment of immortalized K-Ras knockout cells with cycloheximide and TNFα at 10 ng/ml (a 10-fold higher concentration than that used with the N-Ras knockout cells) for 24 h did not cause an increase in apoptosis above that observed with cycloheximide alone (Fig. 4D). Cycloheximide alone caused some apoptosis, which probably results from the extended incubation time (24 h rather than the 4-h incubation time used with the N-Ras knockout cells). In contrast, the K+/+ control cells demonstrated a high level of apoptosis in response to TNFα treatment, above the level observed with cycloheximide alone (Fig. 4D). Similar results were observed with treatment of the K−/− and K+/+ cells with 75 nM staurosporine (data not shown). We also tested the sensitivity of the K-Ras knockout and control K+/+ MEFs and found that they responded in a similar fashion to the immortalized cell lines. The K+/+ MEFs demonstrated higher apoptosis than the K-Ras knockout MEFs after 6 h of treatment with cycloheximide and TNFα at 10 ng/ml (Fig. 4E). Cycloheximide had a significant effect in both K-Ras knockout and control K+/+ MEFs after 24 h of treatment (data not shown). The data from the K-Ras knockout MEFs and immortalized K-Ras knockout fibroblasts, both of which express only c-N-Ras, support the idea that c-N-Ras, but not c-K-Ras, possesses a steady-state survival function. We interpret these results to suggest that c-N-Ras specifically acts to provide a steady-state survival signal through its regulation of steady-state pAkt and pBad.
Our results indicate that, unlike cells that express c-N-Ras, steady-state, exponentially growing N-Ras knockout cells possess very little pBad (Fig. 2C). This implies that the steady-state balance between pro- and antiapoptotic Bcl-2 family proteins may be significantly different for N-Ras knockout cells compared with control N+/+ cells. It could be postulated that it is this difference that leaves N-Ras knockout cells poised to undergo apoptosis given any death-promoting stimulus. The reversal of sensitivity to apoptotic stimuli by expression of Bcl-2 suggests that Bcl-2 compensates for the higher levels of unphosphorylated Bad in N-Ras knockout cells. If one of the functions of c-N-Ras is to provide a steady-state signal through PI 3-kinase to maintain basal Akt activity and pBad levels, then the absence of c-N-Ras could result in an altered ratio of Bcl-2 or Bcl-xL to Bad. It is apparent that c-N-Ras plays a role in "setting and maintaining" the position of the pBad/Bcl-2 or Bcl-xL "rheostat," as has been suggested for Bax/Bcl-2 (57-59). In view of the different expression levels of c-K(A)- and c-K(B)-Ras, our data also specifically link c-N-Ras, but not c-K-Ras, function to the control of pBad levels and the biological end point of cell survival. We could not, however, mimic the apoptotic sensitivity of the N-Ras knockout fibroblasts by long-term treatment of control N+/+ cells with PI 3-kinase inhibitors (data not shown), suggesting that the mechanism through which c-N-Ras provides its antiapoptotic function goes beyond just the regulation of steady-state phospho-Bad levels.
Between 2 and 8% of the cellular Ras is GTP-bound in serum-deprived cells (60-63). Serum withdrawal induces significant apoptosis in the N-Ras knockout fibroblasts compared with control cells and N-Ras knockout fibroblasts ectopically expressing c-N-Ras at control levels (Fig. 3D). At 48 h following serum withdrawal, there was less than 10% apoptosis in the control cells and the N-Ras knockout cells ectopically expressing c-N-Ras (N−/−wtN). TUNEL analysis revealed nearly 40% apoptosis in the N-Ras knockouts at 48 h following serum starvation. These data imply that even under conditions of serum deprivation the small amount of c-N-Ras-GTP that is likely to be present in control and reconstituted N−/−wtN cells may be sufficient to maintain survival in the absence of serum.
Validation of the “Mind the Gap” Scale to Assess Satisfaction with Health Care among Adolescents
Background: At present, more than 90% of adolescents with chronic conditions survive into adulthood as health care users and move from pediatric to adult care with their chronic illness. Therefore, a satisfaction scale is needed that focuses specifically on transitional care and reflects the increasing expectations of youth and their parents. Aims: To examine the validity and reliability of the Turkish version of the Mind the Gap scale. Study Design: Methodological study. Methods: The Turkish versions of the Mind the Gap scale and the Patient Assessment of Chronic Illness Care scale were applied to the participants in two tertiary hospitals in Ankara. Validity was evaluated with factor analyses and content validity; reliability was evaluated with item-total score correlation, internal consistency, and test-retest methods. Results: A total of 109 adolescents and 157 parents completed the questionnaire. The content validity was confirmed. Exploratory factor analysis was used to determine the factor structure of the scale. Both the adolescent and parent scales formed three sub-dimensions and explained 71% and 73% of the variation, respectively. The Cronbach's alpha reliability coefficients of Mind the Gap scale 1 and Mind the Gap scale 2 were 0.89 and 0.87, respectively, with the internal consistencies of the parent scales reaching 0.92 and 0.90. The test-retest reliability coefficients totaled 0.88 and 0.85 for the adolescents and parents, respectively. The suitability of the model was examined with confirmatory factor analysis. The conformity indices and χ²/df value of the model indicated a good fit to the data. Conclusion: The Turkish version of the Mind the Gap scale is a valid and reliable scale for evaluating the needs, expectations, and satisfaction of adolescents and their parents in terms of health care.
The life expectancy of children with chronic conditions has risen over the past few years. Today, most adolescents with chronic diseases transition to adulthood (1). Successful transition interventions for chronically ill youth moving from pediatric to adult care have therefore gained importance. The American Academy of Pediatrics emphasizes the importance of high-quality, age-appropriate, and uninterrupted health care services as a person transitions from adolescence to adulthood and of providing self-management and independent living activities to adolescents (2-6). This purposeful and high-quality health care transition process, which starts in early adolescence, aims to maximize the lifelong functioning and well-being of youth with special healthcare needs (2,7). The quality of health care is assessed by the care satisfaction of patients. Studies evaluating care satisfaction are commonly performed in the adult population (8). These studies show that care satisfaction in adults affects adjustment to care procedures, symptom management, continuity of care, trust in healthcare providers, and the rate of hospital admissions (9-13). However, studies evaluating care satisfaction in children and adolescents are quite limited, and these studies focus on evaluating the expectations and needs of children and adolescents rather than their care satisfaction (7,8,14,15). The existing patient satisfaction surveys evaluate services from the care provider's point of view, neglecting the user's expectations. In our country, no satisfaction scale focuses specifically on transitional care or reflects the expectations and needs of youth and their parents. However, care quality and patient satisfaction must be evaluated from the patient's perspective to provide effective communication with individuals with chronic conditions and include them in the treatment process (8,16). This study aimed to evaluate the validity and reliability of the Turkish "Mind the Gap scale" (MGS) for assessing satisfaction with transition health services among adolescents with diabetes and their parents.
The scale, which focuses on transitional care, is expected to contribute to the assessment of the needs and satisfaction of adolescents and their parents.
Design and participants
This methodological study was conducted with volunteer, randomly selected adolescents (n=109) and accompanying parents (n=157) who were recruited from the pediatric endocrinology clinics of two tertiary hospitals in Ankara. The inclusion criteria for adolescents were as follows: (i) followed up with a diagnosis of diabetes for at least one year at the centers where the study was conducted; (ii) age between 14-21 years; (iii) ability to read and understand Turkish. Adolescents were excluded from the study if they presented diabetes-related complications or diabetes-related or unrelated neurological problems, as these might alter their perspective of diabetes and diabetes care. A total of 5-10 subjects per item is recommended for validity and reliability studies (17).
Procedure
The data were obtained by using an individual questionnaire based on self-evaluation, the Turkish MGS, and the Turkish Patient Assessment of Chronic Illness Care. Written informed consent was obtained from all participants. The project was approved by the local ethics committee (ethics committee no: 50687469-1491-164-15/1648-4-289). The data collection period was approximately 30 min per participant. As a re-test, after 3 weeks, the scale was completed again by 54 adolescents with diabetes to assess reliability.
Demographic data form
The demographic data form included questions about age, sex, date of diagnosis, and whether the participant had been informed about diabetes.
Mind the Gap scale
The MGS, which was developed by Shaw et al. (8), is a seven-point Likert scale that allows assessment of the health care satisfaction of adolescents with chronic conditions and their parents. The construction of the scale was based on multiple discrepancy theories relating to the gap between individual expectations and perceptions (18). The scale consists of four questionnaires, which evaluate the "best care (MGS1)" and "current care (MGS2)" from the adolescents' and parents' perspectives separately. A total of 22 items were selected for adolescents and 27 items for parents to assess interpersonal relationships, the health care process, and the care environment (Table 1). The difference between a participant's rating of the "best" and "current" care shows the quality of the transition care.
Patient Assessment of Chronic Illness Care
The scale, which was developed by Glasgow et al. (19), has been validated previously. The Patient Assessment of Chronic Illness Care is a simple tool, consisting of 20 items and 5 subscales, to assess health care among patients with chronic conditions (19). Respondents were asked to rate the items using a five-point Likert scale anchored by "strongly disagree" at 1 and "strongly agree" at 5. An increase in the scale score indicates increasing patient satisfaction (20).
Equivalence of language and content validity
After obtaining permission to adapt the MGS into Turkish, the scale was independently translated by three language experts and two Turkish researchers. Then, the Turkish version was retranslated into English by two other experts in the English language. The final form of the scale was obtained after the expert opinions of two nursing academicians, a biostatistician, and a pediatric endocrinologist experienced in transitional care and research methods. After the language equivalence was established, the scale was tested on 10 participants who were then excluded from the remainder of the study. Following the expert opinions, we decided to use the MGS without making any changes to the scale items.
Statistical analysis
All analyses were performed using IBM SPSS Statistics for Windows, version 21.0 (IBM Corp., Armonk, NY). Reliability was tested using Cronbach's alpha coefficients, item-total subscale correlations, and the repeatability of the scale, for the complete scale and for each subscale. The Turkish Patient Assessment of Chronic Illness Care was used to determine the criterion validity of the scale. Validity was evaluated using exploratory factor analysis and confirmatory factor analysis. Principal component analysis and varimax rotation were used for exploring the dimensionality. Items with loadings >0.4 were retained on a factor. The Kaiser-Meyer-Olkin measure and Bartlett's test of sphericity were used to evaluate the sample's adequacy. The relational assumptions between subscales were compared with oblimin rotation.
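For readers without SPSS, the same pipeline can be sketched in Python with the factor_analyzer and pingouin packages; the input file and column layout below are hypothetical, and minres extraction stands in for the principal component analysis used in the study:

```python
# Sketch of the validity/reliability pipeline described above, using
# factor_analyzer and pingouin in place of SPSS. The DataFrame `items`
# (rows = respondents, columns = scale items) and its file name are
# hypothetical.
import pandas as pd
import pingouin as pg
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

items = pd.read_csv("mgs_adolescent_items.csv")  # hypothetical file

# Sampling adequacy: KMO and Bartlett's test of sphericity.
chi2, p = calculate_bartlett_sphericity(items)
_, kmo_overall = calculate_kmo(items)
print(f"Bartlett chi2={chi2:.1f}, p={p:.4f}; KMO={kmo_overall:.3f}")

# Three-factor extraction with varimax rotation (the structure reported
# in the study; the study itself used principal component analysis).
fa = FactorAnalyzer(n_factors=3, rotation="varimax")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings[loadings.abs() > 0.4].round(2))            # loadings > 0.4
print("cumulative variance:", fa.get_factor_variance()[2][-1])

# Internal consistency for the whole scale.
alpha, ci = pg.cronbach_alpha(data=items)
print(f"Cronbach's alpha = {alpha:.2f} (95% CI {ci})")
```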
RESULTS
The present study was conducted with 266 volunteer participants (109 adolescents with diabetes and 157 accompanying parents) who met the inclusion criteria, to evaluate the validity and reliability of the MGS.
Participants' characteristics
The mean age of the adolescents was 15
Exploratory factor analysis
First, sampling adequacy was confirmed with the Kaiser-Meyer-Olkin measure (adolescent: 0.729, parent: 0.787) and Bartlett's test of sphericity (p<0.01). The test results confirmed the appropriateness of the sample and a sufficient association between variables to perform factor analysis (21). The factor loads were analyzed with the principal component and orthogonal varimax rotation technique and found to be higher than 0.4 (22-24). Factors with an eigenvalue higher than 1 were retained in both the adolescent and parental forms (23,24). According to the exploratory factor analysis results, the adolescent and parent scales consisted of a three-factor structure that explained 71% and 73% of the variation in adolescent and parental scores, respectively (Table 1).
Confirmatory factor analysis
The suitability of the model structure obtained with exploratory factor analysis was tested with confirmatory factor analysis (Table 2). The model of the scale was analyzed in terms of modification indices and residuals, and the causal relationships between the data and the model fit indices were evaluated (25). Modification indices and residuals can invalidate the whole model by affecting the coherence between the data and the model or the causal relationships among the data (25). None of the variables were excluded from the model, given the high values of the modification indices indicating the relationships between the variables and the regression coefficients with the factors; additionally, none of the standardized residuals were higher than 2.8 (24,25). Several covariances were observed between the variables, and most of the goodness-of-fit indices were within the acceptance limits. The confirmatory factor analysis was reapplied to the model of the scale, and the results showed that the measurement model better matched the data after these covariances were added (Table 3). In this context, the three-factor model of the MGS is in accordance with the sample group and was used without any change to the model of the scale; the variables were subdivided into factors similar to those of the original scale model according to the exploratory factor analysis.
Criterion-related validity
For the criterion-related validity, the Turkish Patient Assessment of Chronic Illness Care was applied to the research group, and the correlation between the two scales was examined. According to the correlation coefficient (Pearson correlation) values, statistically significant positive correlations existed between the Turkish Patient Assessment of Chronic Illness Care and MGS2 for both adolescents (r=0.60, p<0.01) and parents (r=0.51, p<0.01).
Internal consistency
For the Cronbach's alpha internal consistency reliability coefficient, the values for MGS1 and MGS2 were 0.89 and 0.87 (adolescents) and 0.92 and 0.90 (parents), respectively. Table 4 lists the item-total score correlations and Cronbach's alpha internal consistency coefficient values of the adolescent and parent scales and their sub-dimensions (management of the environment, provider characteristics, and process issues). The Cronbach's alpha coefficients of the sub-dimensions of the adolescent and parental forms ranged between 0.70-0.89 and 0.80-0.92, respectively.
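For reference, Cronbach's alpha for k items is alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A self-contained sketch with illustrative (random, therefore only weakly consistent) data:

```python
# Self-contained Cronbach's alpha and corrected item-total correlations.
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
# The score matrix below is random demo data, not study data, so the
# resulting alpha will be near zero; real item responses are correlated.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: respondents x items."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def item_total_correlations(scores: np.ndarray) -> np.ndarray:
    """Correlate each item with the total of the remaining items."""
    totals = scores.sum(axis=1)
    return np.array([
        np.corrcoef(scores[:, j], totals - scores[:, j])[0, 1]
        for j in range(scores.shape[1])
    ])

rng = np.random.default_rng(0)
demo = rng.integers(1, 8, size=(100, 22)).astype(float)  # 7-point scale, 22 items
print(f"alpha = {cronbach_alpha(demo):.2f}")
print("items flagged (r < 0.30):",
      np.where(item_total_correlations(demo) < 0.30)[0])
```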
Reliability of the scale
The adolescent and parent MGS forms were reapplied to 44 adolescents and 56 parents, respectively, three weeks after the first implementation. The test-retest reliability coefficients were 0.88 for the adolescents and 0.85 for the parents.
DISCUSSION
The MGS is a simple self-assessment scale designed to assess the health care satisfaction of adolescents with chronic conditions and their parents (8). In this study, the psychometric properties of the MGS in a Turkish sample were evaluated. First, the scale was translated and back-translated from the original language into the target language to evaluate the language equivalence of the scale (26,27). Then, the scale items were examined by experts in terms of clarity and intelligibility for content validity. The scale assesses individual care satisfaction in the transition period and was used in adolescent and parent sample groups. The scale was considered understandable and easy to apply. Exploratory factor analysis was performed to identify a small number of meaningful factors that can be defined collectively by a large number of variables (26,27). The exploratory factor analysis of the adolescent scale resulted in a 22-item scale with 3 identified subscales that explained 71% of the total variance, whereas that of the parent scale resulted in a 27-item scale with 3 identified subscales that explained 73% of the total variance (Table 1). The exploratory factor analysis results of the Turkish MGS were similar to those of the original scale and support the high structural validity of the Turkish MGS. The variables were subdivided into factors similar to those of the original scale model according to the exploratory factor analysis (8). When we evaluated the factor loads of the items by principal component analysis and the varimax orthogonal rotation technique, as expected, the item loads were higher than 0.30 (28).
The fitness of the model obtained by exploratory factor analysis was examined with goodness-of-fit indices (GFIs), and the results are shown in Table 2. The most commonly adopted indices are the likelihood ratio (χ²/df), the root mean square error of approximation, the GFI, and the adjusted GFI (29).
Published reports indicate that values of the χ²/df ratio lower than 3.0 are considered an indicator of good fit, and values between 0 and 1 for the root mean square residual and below 0.05 for the root mean square error of approximation are desirable (23,24,26,29). Our study showed a good fit as indicated by the χ²/df ratio (2.49), the GFI, and the adjusted GFI. Although these were within acceptable fit limits, the other indices (comparative fit index, incremental fit index, Tucker-Lewis index, root mean square residual, and root mean square error of approximation) were beyond the acceptable ranges. Therefore, the model of the scale was analyzed in terms of modification indices and residuals, and the causal relationships between the data and the model fit indices were evaluated (25). None of the variables were excluded from the model, given the high modification indices and the regression coefficients of the factors; similarly, none of the standardized residuals were higher than 2.8 (23-25). According to the results of the exploratory factor analysis, the item loads were not under 0.4, and the variables were subdivided into factors similar to the original scale model. Certain covariances were observed between the variables, and most of the goodness-of-fit indices were within the acceptance limits. The confirmatory factor analysis was reapplied to the model of the scale, and the results revealed that the measurement model better matched the data after the covariances were added (Table 3). The fit indices obtained in our study support the acceptability of the structural model of the Turkish MGS. The Turkish Patient Assessment of Chronic Illness Care, which was developed for the same population and tested for validity and reliability, was used to test the criterion validity. The correlation between the results of both scales was analyzed, showing a statistically significant positive relationship (at the 0.01 level) between the total scores of the Turkish Patient Assessment of Chronic Illness Care and the MGS2 scores of both adolescents and parents (adolescents: r=0.60, p<0.01; parents: r=0.51, p<0.01). Both scales measure satisfaction with current care. Our results showed that MGS2 accurately assesses the current care satisfaction of adolescents with diabetes and their parents in the transition period. The reliability of the scale was assessed by internal consistency using Cronbach's alpha and item-total correlations. The item-total score correlation coefficient should be higher than or equal to 0.30, and items with a value lower than 0.30 should be excluded (24,27). The item-total score correlations of MGS1, MGS2, and their subscales were similar to those of the original scale and ranged between 0.36 and 0.83 (Table 4). In this context, a strong correlation exists between the items and the whole scale. Table 4 shows the Cronbach's alpha internal consistency reliability coefficient values of the whole scale and the sub-dimensions (management of environment, provider characteristics, and process issues). The Cronbach's alpha values for MGS1 and MGS2 totaled 0.89 and 0.87 (adolescents) and 0.92 and 0.90 (parents), respectively. The internal consistency of each sub-dimension was indicated by Cronbach's alpha values ranging between 0.71 and 0.92. High Cronbach's alpha coefficients indicate that the scale comprises consistent and balanced items (17,22,24,26).
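A compact way to express the rule-of-thumb screening applied in this discussion (the χ²/df value, alpha values, and minimum item-total correlation are those reported above; the RMSEA and RMR numbers are placeholders, since they are not reported numerically in the text):

```python
# Rule-of-thumb screening as cited in the text: chi2/df < 3 indicates good
# fit, RMSEA < 0.05 is desirable, RMR should lie between 0 and 1, corrected
# item-total correlations should be >= 0.30, and alphas are read against
# the usual ~0.70 benchmark. RMSEA and RMR below are placeholders only.

def screen(chi2_df, rmsea, rmr, alphas, min_item_total):
    return {
        "chi2/df < 3.0": chi2_df < 3.0,
        "RMSEA < 0.05": rmsea < 0.05,
        "0 <= RMR <= 1": 0.0 <= rmr <= 1.0,
        "all alphas >= 0.70": all(a >= 0.70 for a in alphas),
        "item-total r >= 0.30": min_item_total >= 0.30,
    }

results = screen(chi2_df=2.49, rmsea=0.08, rmr=0.06,
                 alphas=[0.89, 0.87, 0.92, 0.90], min_item_total=0.36)
for criterion, ok in results.items():
    print(f"{criterion}: {'pass' if ok else 'fail'}")
```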
The Cronbach's alpha values of the entire original scale for adolescents and parents were 0.91 and 0.94, respectively. Based on these results, our study obtained alpha coefficient values similar to the findings of Shaw et al. (8).
In conclusion, the "MGS" adapted to Turkish is a valid and reliable tool to assess the satisfaction and determine the health care expectations and needs of Turkish adolescents with diabetes and their parents.
Amorphous Thin Film for Thermoelectric Application
Amorphous InGaZnO (a-IGZO) is an n-type semiconductor material with great potential for transparent and flexible applications owing to its low-temperature fabrication process. In this study, the effects of annealing on the thermoelectric properties of a-IGZO thin films are evaluated with a view to a low-temperature process. We also demonstrate a flexible thermoelectric generator (TEG) using a-InGaZnO on a PEN substrate.
Introduction
Transparent amorphous oxide semiconductors (TAOSs) have been well studied as channel layers for thin film transistors (TFTs) owing to their good transparency, flexibility, and room-temperature processing. Among them, amorphous InGaZnO (a-IGZO) thin films have been studied for applications other than TFTs, and a wide range of research results has been reported [1]. Our research group has previously reported on the thermoelectric performance of annealed a-IGZO [2,3].
In this study, the thermoelectric properties of as-deposited and annealed a-IGZO were investigated. Additionally, a flexible a-IGZO thermoelectric generator (TEG) on polyethylene naphthalate (PEN) was fabricated. PEN is commonly used as a flexible substrate and offers good stability up to around 150 °C; therefore, low-temperature processes are needed.
Effect of annealing temperature
First, the annealing effects were evaluated. An a-IGZO thin film (200 nm) was deposited on quartz substrates by RF magnetron sputtering from a sintered target (In:Ga:Zn = 2:2:1 at.%) with an input power of 100 W. The thin film deposition was performed at a pressure of 0.6 Pa under an O2 and Ar atmosphere. In this process, the oxygen partial pressure was set at 0%, 1%, and 4.5%. Then, Au/Mo electrodes were deposited by electron beam evaporation. Finally, the samples were annealed. The Seebeck coefficient and electrical conductivity were measured at room temperature.
As a result, an annealing step is needed to increase the thermoelectric performance (power factor, PF) of the samples fabricated at 1% and 4.5% oxygen partial pressure. On the other hand, post-annealing is not necessary for the samples fabricated at 0% oxygen partial pressure.
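The power factor referred to here is PF = S²σ, with S the Seebeck coefficient and σ the electrical conductivity. A worked sketch with assumed, order-of-magnitude values (illustrative numbers, not measurements from this study):

```python
# Power factor PF = S^2 * sigma. The values below are illustrative,
# order-of-magnitude numbers for an a-IGZO film, not data from this study.

def power_factor(seebeck_V_per_K: float, sigma_S_per_m: float) -> float:
    """Return PF in W m^-1 K^-2."""
    return seebeck_V_per_K ** 2 * sigma_S_per_m

S = -100e-6     # -100 uV/K (n-type, hence negative; the sign drops out in PF)
sigma = 1.0e4   # 1e4 S/m

pf = power_factor(S, sigma)
print(f"PF = {pf:.2e} W m^-1 K^-2 = {pf * 1e4:.1f} uW cm^-1 K^-2")
```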
Demonstration of flexible InGaZnO-TEG
A flexible device was then demonstrated as follows: an a-IGZO thin film (200 nm) was deposited on a PEN substrate (2 cm × 4 cm) using the same process (0% oxygen partial pressure). Au/Mo electrodes (20 nm/100 nm) were deposited by electron beam evaporation. No heat treatments were performed because a polymer substrate was used (Fig. 2).
An output voltage of 6.7 mV and an output power of 0.12 nW were obtained at a temperature difference of 53 K (Fig. 3). The thermoelectric performance is stable against bending. Given the shape of the fabricated device, the thermoelectric module generates about 0.1 nW with only two IGZO legs.
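As a rough consistency check, if the reported 6.7 mV is read as the open-circuit voltage and the 0.12 nW as the maximum (matched-load) output power, the implied internal resistance of the generator follows from P_max = V_oc²/(4R_int). This reading is an assumption on our part; the paper does not state the load condition.

```python
v_oc = 6.7e-3    # open-circuit voltage in volts (assumed reading)
p_max = 0.12e-9  # matched-load output power in watts (assumed reading)

# P_max = V_oc^2 / (4 * R_int)  =>  R_int = V_oc^2 / (4 * P_max)
r_int = v_oc**2 / (4 * p_max)
print(f"implied internal resistance: {r_int / 1e3:.0f} kOhm")  # ~94 kOhm
```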
Demonstration of flexible InGaZnO-TEG with heat guide
A flexible thermoelectric generator with heat guides was demonstrated using an a-InGaZnO/Mo-electrode uni-leg structure (Fig. 4). The sample TEG was fabricated on a PEN substrate using metal masks and a photolithography process. The heat guides were formed using KMPR photoresist (thickness: 100 μm). The design contained 625 pairs of a-InGaZnO and Mo areas. As shown in Fig. 5, the demonstrated TEG was fully flexible. The output power of the demonstrated device was still small; however, it can be improved by optimizing the atomic composition of the InGaZnO and the device structure.
Conclusion
The effects of annealing on the thermoelectric properties of amorphous InGaZnO thin films were shown, and we also demonstrated a flexible TEG using Mo and a-InGaZnO uni-leg structures on a PEN substrate. If the Mo electrodes were changed to a transparent conductive material such as ITO, a transparent and flexible TEG would be possible with this structure. Additionally, semiconductor fabrication processes for IGZO are already well established. Therefore, it is easy to integrate hundreds or thousands of π-structures, which would generate μW-scale output power.
Self-Repairing Hybrid Adder With Hot-Standby Topology Using Fault-Localization
Effective self-repairing can be achieved if the fault along with its exact location can be determined. In this paper, a self-repairing hybrid adder with fault localization is proposed. It combines the advantages of the ripple carry adder and the carry-select adder to reduce delay and area overhead. Compared with existing self-checking carry-select adders, which require 76.76% to 115% more transistors, the proposed adder substantially reduces the transistor count. Moreover, the proposed design can detect and localize multiple faults. Fault recovery is achieved using a hot-standby approach in which the faulty module is replaced by a functioning module at run-time. In the case of 3 consecutive faults, the probability of fault recovery has been found to be 96.1% for a 64-bit adder with 8 blocks, where each block has 9 full adders.
I. INTRODUCTION
The possibility of single-event upsets (SEUs) in digital systems has risen as a result of increased on-chip system complexity and reduced clock cycles [1], [2]. Radiation and other environmental conditions further increase the probability of an SEU [3], [4]. To handle SEUs, the concept of "totally self-checking" was introduced: a system is characterized as totally self-checking if it remains unaffected by a fault or produces a non-coded output for every generated fault [5]. In addition to fault detection, fault recovery should also be considered to ensure hardware reliability [6], which is why the concept of built-in self-repair is becoming increasingly pertinent to current digital systems [7]. Fault recovery, however, becomes challenging in an interconnected hardware design because of fault propagation. Therefore, fault localization becomes necessary for such designs.
The adder is an essential element present in almost all digital systems; thus, introducing built-in self-repair in adders can play a vital role in digital designs [8], [9]. Moreover, the presence of the carry propagation chain makes the adder an ideal case for understanding how faults are handled between interconnected modules. To achieve this, both fault detection and localization should be performed.
Ripple carry adders (RCA) and carry-select adders (CSeA) are among the most commonly adopted adder topologies; hence, many of the reported reliable adders are based on these topologies. In [9], a self-checking CSeA using a 2-pair-2-rail checker encoding approach was proposed. It exploits the parallel RCA rails present in the CSeA, where each RCA rail produces an output for one of the initial carry inputs, i.e., C in = 0 or 1, and the outputs of the two parallel blocks are compared to detect the presence of a fault. This design is only valid for 2 bits, and an improved n-bit CSeA design was later proposed in [10]. The relationship between the two parallel RCA rails of [9] was further utilized in [11] to design a single-RCA-based self-checking CSeA with fault localization: the RCA performs the addition for C in = 0, and the resulting sum bits are used to generate the sum bits for C in = 1. The design in [11] requires 12% fewer transistors than the self-checking CSeA in [9]. In [12], a self-checking CSeA is proposed using a parity prediction approach in which the operands are provided to the adder along with their respective parities. It cannot, however, perform fault localization and has limited fault coverage, because it can only indicate a fault if it occurs in an odd number of bits. Although these approaches were shown to be effective against SEUs, they cannot perform fault recovery with minimum area overhead.
To boost the reliability of adders, the most conventional method is triple modular redundancy (TMR), in which two redundant modules are employed to produce additional outputs and the final output is selected via a voter circuit [13], [14]. However, the fault propagation phenomenon may cause a common-mode failure, which cannot be handled by TMR. To address this issue, a shifted-operand approach is used in [15] for a TMR-based self-checking ALU design. A similar concept of shifted and rotated operands is also used in [16] to minimize the required diversity in the ALU architecture for a self-checking TMR design. Another problem of TMR is its large area, which is at least twice that of a normal design. To overcome this limitation, a partial adoption of TMR is utilized in [17]: only the most significant bits (MSB) block is triplicated, which however increases the possibility of system failure to more than 50%. The concept of TMR, despite its simplicity, cannot be applied in systems where area is the major concern. Some non-conventional techniques have also been adopted for reliable adder designs. One such technique is the self-repairing signed-digit adder proposed in [18], in which fault localization is achieved because carry chains propagate only to the neighboring adder block. It uses both hardware and time redundancy for self-checking and self-repair. However, this design can only provide fault detection when an odd number of bits are faulty, and it is sensitive to the parity predictor and error indicator. In [19], a self-repairing conditional sum adder (CoSA) with a single-spare hot-standby approach is presented. However, it only provides self-checking and repair for the conditional selection cell module, which is the building block of the CoSA; this limited fault coverage makes the design less robust. These techniques incur less area overhead than TMR, but their fault coverage is limited and fault recovery is not always possible.
In this paper, a self-checking and self-repairing hybrid adder (HA) design with reduced area and time overhead is proposed. The proposed adder utilizes the low complexity of the RCA and the high speed of the CSeA. Fault detection and localization are realized by using a self-checking full adder (SFA), in which fault detection is independent of the propagated carry. To minimize the area, a single-RCA-based CSeA approach is adopted together with a hardware-friendly implementation using pass-transistor logic. Moreover, a square-root topology is used to reduce the delay of the proposed design. The proposed self-checking HA with fault localization and multiple-fault detection requires on average 50% more transistors than a traditional CSeA design. A distributed fault-recovery mechanism using a hot-standby approach is further proposed to reduce the probability of system failure.
The remainder of this paper is organized as follows. Section II describes the proposed self-repairing HA design. Comparative analysis with previous approaches is presented in Section III. Finally, concluding remarks are presented in Section IV.
II. PROPOSED SELF-REPAIRING HYBRID ADDER DESIGN
The self-repairing HA design was developed with the area overhead, delay, and fault coverage in mind.
A. HYBRID-ADDER DESIGN
The time required by a CSeA to compute the lowest bits is longer than that required by an RCA; this additional delay is caused by the MUX. Therefore, if a simple RCA is employed for the initial bits, the design becomes more efficient in terms of hardware and time delay, and the complexity is also reduced by using the RCA as the beginning block. This is why, in the proposed HA design, the least significant bits are computed using an RCA, while a single-RCA-based CSeA is used for computing the higher bits, as shown in Fig. 1(a) and (b). In addition, the proposed HA design follows the square-root topology, because a linear CSeA design has a time delay similar to that of a simple RCA. Therefore, a sub-linear delay approach has been adopted to balance the delay paths by dividing the adder into blocks whose size increases linearly from m, m + 1, . . . , m + l.
It should be noted that the RCA block (RBL) is the fundamental building block of the RCA, shown in Fig. 1(a), whereas the CSeA consists of two fundamental blocks, the initial block (INL) and the adder block (ABL), as shown in Fig. 1(b). The reason for having two fundamental blocks for constructing the CSeA lies in the basic principle of the single-RCA-based CSeA design, which states that, except for the least significant bits, which are always complementary to each other, the Sum bits computed for the complementary value of the initial C in are also complementary to each other if all the lower Sum bits are equal to logic 1.
The initial block (INL) is therefore responsible for generating the least significant Sum bit for C in = 1 by taking the complement of the Sum bit generated at the initial C in = 0. All the other Sum bits are generated by the adder block (ABL), in which an AND gate determines the status of the previous Sum bits computed for C in = 0, while an XOR gate generates the corresponding Sum bit for C in = 1 by considering that status. The number of ABLs used in a CSeA block is equal to the block size minus one.
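This principle is easy to check in software: the Sum bits for C in = 1 equal the Sum bits for C in = 0 with bit 0 inverted and bit i flipped exactly when all lower Sum bits are 1, i.e., the effect of adding 1 to the partial sum. The following Python sketch is our illustration of that behavior; the function names are ours, not from the paper.

```python
from itertools import product

def sum_bits(a: int, b: int, cin: int, m: int) -> list[int]:
    """Sum bits (LSB first) of an m-bit addition a + b + cin."""
    s = (a + b + cin) & ((1 << m) - 1)
    return [(s >> i) & 1 for i in range(m)]

def derive_cin1_from_cin0(s0: list[int]) -> list[int]:
    """Single-RCA CSeA principle: build the C_in=1 sum from the C_in=0 sum."""
    s1, all_lower_ones = [], 1
    for bit in s0:
        s1.append(bit ^ all_lower_ones)  # flip while every lower bit was 1
        all_lower_ones &= bit            # the AND chain (the X_i signal)
    return s1

# Exhaustive check for a 4-bit block.
m = 4
for a, b in product(range(1 << m), repeat=2):
    assert derive_cin1_from_cin0(sum_bits(a, b, 0, m)) == sum_bits(a, b, 1, m)
print("principle verified for all 4-bit operand pairs")
```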
In Fig. 1(b), the partial Sum and C out bits are represented by S_i^j and C_i^j, respectively, where j indicates the initial C in and i indicates the bit number. A fault is indicated by the error signal E f. The final C out is generated by the Module of Final C out (MOFC). The C out generated by the MOFC after each CSeA block is treated as the actual C in for the next CSeA block, whereas the C out of the RCA block is used as the actual C in for the first CSeA block.
B. FAULT DETECTION AND LOCALIZATION
Fault localization is achieved by using a self-checking approach that is independent of the propagated carry. In [11], a self-checking full adder was presented that detects a fault based on its internal functionality, independently of the propagated carry; the relationship between the input and output bits of the full adder was utilized for self-checking. Consider a full adder with inputs A, B, C in and outputs Sum, C out, as shown in Fig. 1(c). No fault is indicated as long as Property 1 remains valid for that full adder. It can be observed from Fig. 1(c) that the self-checking full adder can be designed at the expense of an extra equivalence-tester (E qt) bit, which indicates the relationship of the input bits: E qt equals 1 if all input bits are equal, and 0 otherwise. Hence, the three equations Eq. (1) to (3) need to be implemented to design a self-checking, fault-localizing adder.
Since the goal of this design is to reduce the area overhead without compromising reliability, Eq. (1) to (3), which define the self-checking, fault-localizing full adder, need to be implemented with a minimum transistor count. A high-speed and area-efficient full adder design can be found in [20]. However, that approach cannot be adopted completely because of the logic sharing between Sum and C out, which increases the probability of common-mode failure. Therefore, only the equation and the transistor-level implementation of C out have been adopted from that design.
The final implementation of Eq. (1) to (3) using a pass-transistor-based approach is shown in Fig. 2.
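Although the bodies of Eq. (1) to (3) are not reproduced in this extract, the underlying full-adder property is well known: Sum equals C out exactly when all three inputs are equal, and the two are complementary otherwise. A self-checking full adder can therefore flag a fault whenever Sum ⊕ C out ⊕ E qt ≠ 1. The sketch below demonstrates this invariant and is our reconstruction of the idea, not the paper's exact equations.

```python
from itertools import product

def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    """Sum and carry-out of a 1-bit full adder."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def eqt(a: int, b: int, cin: int) -> int:
    """Equivalence tester: 1 if all inputs are equal, else 0."""
    return 1 if a == b == cin else 0

# Fault-free invariant: Sum XOR Cout XOR Eqt == 1 for every input pattern,
# so any single fault that flips Sum or Cout violates the check.
for a, b, cin in product((0, 1), repeat=3):
    s, cout = full_adder(a, b, cin)
    assert s ^ cout ^ eqt(a, b, cin) == 1
print("self-checking invariant holds for all 8 input patterns")
```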
C. SELF-REPAIRING APPROACH
A hot-standby approach has been adopted for fault recovery. In this approach, if a fault is detected in any of the full adders, the generated error signal shifts the input bits so that the faulty adder is no longer used for computation. The main challenges in this shift operation are the carry chain linking consecutive full adders and the X i bit, which indicates the status of all previous Sum bits in each CSeA block, as shown in Fig. 1(b). The carry-chain problem has been resolved by making C out dependent on the error signal E f of the SFA: in case of a fault, the C out (i.e., C i) is set equal to the C in (i.e., C i−1). Since X i indicates the status of all previous Sum bits computed for the initial C in = 0, X i is 0 if any previous Sum bit is 0, and 1 otherwise. The Sum bit of each ABL depends on the previous value of X i; therefore, when a fault is detected, the value of X i must not be updated for the next ABL. To achieve this, the error signal E f replaces the Sum bit when a fault is detected, because X i is produced through an AND gate, and setting the current Sum bit to logic 1 simply propagates the previous value of X i (i.e., X i−1).
Note that X i is propagated only to the ABLs within each CSeA block and to the next MOFC block; it is not propagated to the next CSeA block, because each CSeA block is independent of the previous one. To accommodate all these changes, the fundamental blocks used to extend the CSeA to n bits, i.e., the ABL and the INL of Fig. 1(b), have been modified as shown in Fig. 3(a) and (b), respectively.
Since the carry chain also exists in the RCA block, the fundamental block of the RCA (i.e., the RBL) has been modified as well, as shown in Fig. 3(c). However, the OR gate present in the modified RBL is not applicable to the first full adder of the RCA because there is no previous error signal. The final Sum bits generated by the adder also need to be shifted to accommodate the shifted operands; therefore, additional multiplexers are used to perform the shift operation on the Sum bits, as shown in Fig. 3(d). A software sketch of this remapping is given below.
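The following is a minimal behavioral sketch of the hot-standby idea, assuming one spare full adder at the end of a block: once a fault flag is raised at physical position f, every logical bit at or above f is computed by the next physical adder, and the carry bypasses the faulty position. The mapping and function names are illustrative, not from the paper.

```python
from typing import Optional

def remap_bits(num_logical: int, fault_pos: Optional[int]) -> list[int]:
    """Map each logical bit to a physical full adder, skipping the faulty one.

    With one spare adder, physical positions run 0 .. num_logical.
    """
    mapping = []
    for i in range(num_logical):
        physical = i
        if fault_pos is not None and i >= fault_pos:
            physical = i + 1  # shift into the spare, past the faulty adder
        mapping.append(physical)
    return mapping

# A 4-bit block whose physical adder 1 has raised its error flag:
print(remap_bits(4, fault_pos=1))     # [0, 2, 3, 4] -> spare (pos 4) is used
print(remap_bits(4, fault_pos=None))  # [0, 1, 2, 3] -> fault-free mapping
```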
Self-repair is limited by the number of spare SFAs. However, the self-checking property of the adder block remains active even if fault recovery is not possible; that is, if any SFA becomes faulty after a replacement, the fault will still be indicated but can no longer be handled. To improve the recovery rate for adder sizes above 8 bits, the number of spares is increased so that each block has one spare module, which means that each block can handle the recovery of a single fault at a time. The reason for keeping a single spare in each block is that the probability of multiple faults is smaller in smaller blocks than in larger ones. To illustrate this idea, let an n-bit adder be divided into N blocks, each with t full adders, and let r random faults be introduced into the system. The probability of having x faults in the same block, without replacement, can be computed by Eq. (4), where the range of x is 0 < x ≤ t. However, the system cannot recover the faults in a block if x > 1; recovery remains possible in all other blocks where x ≤ 1. Therefore, the probability of system failure when a single block receives 2 or more faults is given by Eq. (5), and the probability of fault recovery in a block can be computed by Eq. (6).
The probability of fault recovery when 2 out of 3 faults occur in a single block of the adder is shown in Table 1. To analyze the impact of block size on fault recovery, three different adder sizes are considered, each constructed with two different block sizes. It can be observed from Table 1 that the number of full adders in each block decreases as the number of blocks increases, whereas the number of spare modules increases with the number of blocks because each block has a single spare module for recovery. The overall size of the adder can be determined by Eq. (7). It can also be observed from Table 1 that increasing the number of full adders in a block increases the probability of failure of that block. For example, a 32-bit adder can be built using 2 blocks or 4 blocks, with 17 and 9 full adders per block, respectively. As the number of blocks increases from 2 to 4, the probability of block failure decreases from 38.6% to 13.6%. However, the area overhead of the adder also increases with the number of blocks.
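The bodies of Eq. (4) to (6) are not reproduced in this extract, but the reported figures are consistent with a hypergeometric reading: with r faults placed at random, without replacement, among the n_total full adders (data bits plus spares), P(x faults in a given t-adder block) = C(t,x)·C(n_total−t, r−x)/C(n_total, r); block failure means x ≥ 2, and the recovery probability is its complement. The sketch below is our reconstruction under that assumption; it reproduces the 38.6%, 13.6%, and 96.1% values quoted above.

```python
from math import comb

def p_x_faults_in_block(t: int, n_total: int, r: int, x: int) -> float:
    """Hypergeometric: P(exactly x of r random faults land in a t-adder block)."""
    return comb(t, x) * comb(n_total - t, r - x) / comb(n_total, r)

def p_block_failure(t: int, n_total: int, r: int) -> float:
    """A block with one spare fails if it receives 2 or more faults."""
    return sum(p_x_faults_in_block(t, n_total, r, x)
               for x in range(2, min(t, r) + 1))

# 32-bit adder, r = 3 faults: 2 blocks of 17 FAs vs 4 blocks of 9 FAs.
print(f"{p_x_faults_in_block(17, 34, 3, 2):.3f}")  # ~0.386 (exactly 2 in a block)
print(f"{p_x_faults_in_block(9, 36, 3, 2):.3f}")   # ~0.136

# 64-bit adder, 8 blocks of 9 FAs (72 adders total), r = 3 faults:
print(f"recovery: {1 - p_block_failure(9, 72, 3):.3f}")  # ~0.961
```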
III. RESULTS AND BENCHMARK
The proposed self-checking design is compared in terms of area overhead and fault coverage with the reported self-checking CSeAs in [10] and [12]. The proposed self-repairing HA is also compared with the self-repairing CoSA approach [19] and the reduced-precision TMR [17]. The transistor counts of each module of the self-checking and self-repairing HA are presented in Table 2. Note that the Sum and carry bypass modules are required only for the self-repairing design and are therefore not considered when comparing the area overhead of the self-checking design. The required number of logic gates and other modules, along with the total transistor overhead, is shown in Table 3, where n is the adder size, m is the size of the RCA block, and k is the total number of CSeA blocks used in the design. The value of k varies with the adder size; in this work, k was selected to be 1, 2, 3, 5, and 8 for the 4-, 8-, 16-, 32-, and 64-bit adders, respectively. In the standard CSeA design without self-checking, the transistor counts for the full adder and MUX are reduced to 12 and 4, respectively, because the E qt and checker modules are not required.
A. COMPARISON WITH SELF-CHECKING CSeA
The area overhead of the proposed self-checking HA without recovery is compared with the previously reported self-checking CSeA designs. A uniform complementary pass-transistor logic design approach was adopted when comparing transistor counts, such that an inverter is used after every pass-transistor stage. The implementation of the sub-modules with and without self-checking is shown in Fig. 4.
It can be observed from Table 4 that the proposed design requires on average 50% more transistors than the standard CSeA design without self-checking, whereas the designs in [12] and [10] require 76.76% and 115% more transistors, respectively, than the proposed design. Note that the proposed approach also requires 68.68% fewer transistors than our previously proposed self-checking CSeA [11]. The increase in transistor count for different adder sizes is shown in Fig. 5; the proposed approach shows the smallest overhead compared with the previous approaches.
In addition to the reduced area overhead, the proposed design possesses the fault localization property and can detect multiple faults, provided that a single module does not have multiple faults. In contrast, [10] can only detect a single fault at a time, and [12] can only detect faults in an odd number of bits, without fault localization. Beyond the problem of odd numbers of erroneous bits, the approach in [12] is not totally self-checking because of the logic sharing between the Sum and the propagated-carry blocks: any fault in the shared logic is easily masked and cannot be detected with the parity prediction approach.
The power estimation was done using a Cadence tool: the traditional 32-bit CSeA design requires 4.51 mW, which increases to 6.90 mW for the proposed self-checking HA, an increase of 52.9% in power consumption. The delays for computing the final C out using the traditional CSeA design and the proposed self-checking design are given in Eq. (8) and (9), respectively, where h is the number of full adders in the final CSeA block. It can be observed that the delay increases by only two logic-gate delays.
B. COMPARISON WITH SELF-REPAIRING APPROACHES
The proposed self-repairing adder with single spare modules requires on average 186% more area than the traditional CSeA without self-checking, as shown in Table 4. The power consumption of the 32-bit HA design also increases by 184.2% compared with the traditional CSeA. In terms of time overhead, the latency is given by Eq. (10); the overhead is mainly caused by the MUXs controlling the carry propagation chain.
The proposed self-repairing HA design is compared with the previously reported self-repairing CoSA [19] and reduced-precision redundancy adders (RPRA) [17]. Note that both the CoSA and RPRA approaches rely on graceful degradation, in which only some portions of the circuitry are covered by fault detection and recovery. The CoSA design cannot detect faults during the actual addition operation; moreover, only a single conditional selection cell (CSC) module can be tested at a time with a given test pattern, and the self-checking property does not cover modules other than the CSC, such as the chain of MUXs and the shift registers. Furthermore, the self-repair process is expensive because the whole CSC module, which is responsible for a 2-bit addition, has to be replaced with a spare. In addition, the CoSA loses its fault-diagnosis ability once no further spare module is available.
The RPRA approach [17], on the other hand, can only correct errors in the MSBs, while the least significant bits (LSBs) are fed directly to the output. Hence, faults in the LSBs and in the voter for the MSBs are undetectable, and a fault propagated from the LSBs to the MSBs via C out cannot be detected either.
In contrast to the previous approaches, the proposed design can perform run-time fault detection during the actual addition process and can detect multiple faults at a time, provided that each module has no more than one fault. Fault recovery depends on the number of spare modules, but the self-checking property of the design remains valid whether or not recovery is possible.
IV. CONCLUSION
A self-checking and self-repairing HA design has been presented with reduced area overhead and increased fault coverage compared with previously presented design approaches. The HA design follows the architecture of a single-RCA-based CSeA, with the only difference that the initial bits are computed using an RCA. A run-time self-repair approach has been adopted using a hot-standby topology. The proposed design can easily be extended to any size by using the fundamental blocks presented in the paper.
The proposed self-checking design has been compared with the previously reported self-checking CSeAs in terms of area and fault coverage. The proposed self-checking HA, with a delay overhead of only two logic gates, requires 50% more transistors than the traditional CSeA without self-checking, whereas the previously proposed self-checking CSeA approaches require 76.76% and 115% more transistors than the proposed design. Moreover, owing to the distributed self-checking mechanism, the proposed approach can detect and localize multiple faults, provided that a single module has only a single fault at a time.
A hot-standby approach has been adopted for fault recovery. The area overhead increases by 186% compared with the standard CSeA approach; however, the probability of recovering multiple faults is higher than in previous self-repairing CSeA approaches. A 64-bit adder with 8 equally sized blocks can handle 3 consecutive faults with 96.1% probability, provided that each block has a single spare module. Notably, the self-checking property remains valid irrespective of whether recovery is possible, which was not the case in previous approaches.
Not Always a “Buffer”: Self-Compassion as Moderator of the Link Between Masculinity Ideologies and Help-Seeking Intentions After Experiences of Intimate Partner Violence
Many women and men experience intimate partner violence (IPV) during their lifetime. However, only relatively few people actually seek formal help after such an experience. The current study applied the mediated-moderation model of self-compassion and stigma that has previously been used to explain men’s help-seeking behavior for depressive symptoms. The current study analyzed whether conformity to masculinity ideologies (CMI), self-stigma, and self-compassion were related to women’s and men’s intention to seek formal help after IPV experiences. A cross-sectional online questionnaire study was conducted with 491 German-speaking participants (65.8% women/34.2% men; age: M = 36.1 years; SD = 14.2). Participants read three vignettes about experienced IPV and then indicated how likely they would be to seek medical or psychological help if they were in the main character’s situation. Additionally, the Conformity to Masculine Norms Inventory, Self-Stigma of Seeking Help Scale, and Self-Compassion Scale were used. Separate manifest path models for women and men revealed that strong CMI was linked to strong self-stigma in women and men. In turn, strong self-stigma was linked to weak intentions to seek formal help after IPV experiences. In men, strong self-compassion weakened (i.e., “buffered”) the link between CMI and self-stigma. However, direct associations between strong CMI and weak intentions to seek formal help remained, especially for those participants with strong self-compassion. The current study adds to the existing literature on associations between CMI, self-compassion, and self-stigma by showing that those links are also relevant in women. However, self-compassion might not always act as a “buffer” and mediators that explain links between strong CMI and weak intentions to seek formal help in people with strong self-compassion need to be found in future studies.
Introduction
Intimate partner violence (IPV) is the intentional use of force or power against a current or former intimate partner and can be classified as physical violence (i.e., physical force that inflicts pain), psychological violence (i.e., behaviors intended to humiliate or control another individual), or sexual violence (i.e., any sexual act against a person using coercion or force) (WHO, 2010). A considerable proportion of the population is affected by IPV over their lifetime (WHO, 2021), whereby women are more often affected by IPV than men (e.g., Schlack et al., 2013). Even though many people experience IPV, only relatively few people actually seek psychological or medical help after such experiences (Martin et al., 2023). Women have been reported to more often seek help after experiences of IPV than men (e.g., Martin et al., 2023).
Past research on men's reluctance to seek formal help has focused on their conformity to masculinity ideologies (CMI) and self-stigma as barriers, and on self-compassion as a potential facilitator (Booth et al., 2019; Cole & Ingram, 2020). The current study uses the IPV stigmatization model (Overstreet & Quinn, 2013) and a mediated-moderation model of self-compassion and stigma (Wong et al., 2019) to investigate whether CMI, self-stigma, and self-compassion are related to women's and men's intention to seek formal help after IPV experiences.
Women's and Men's Help-Seeking Behavior After IPV
Women experience IPV more often than men (Schlack et al., 2013); therefore, many large-scale prevalence studies and reports focus on women's IPV experiences (European Union, 2014; WHO, 2021). Globally, one in four women experiences physical and/or sexual violence from an intimate partner at least once in her lifetime (WHO, 2021). A large-scale study among women living in the European Union (EU) revealed that 22% of women living in Germany and 13% of women living in Austria had at least once experienced physical and/or sexual violence carried out by a current or past partner (European Union, 2014). Around 6% of women in Germany have experienced psychological violence perpetrated by their partners in the preceding 12 months (Lange et al., 2016).
A review of prevalence rates reported that 3% to 20% of men are affected by physical violence, 7% to 37% of men experience psychological violence, and 0% to 7% of men are targets of sexual violence committed by a current or previous partner (Kolbe & Büttner, 2020). In Germany, it has been reported that 1% of men had experienced physical violence exerted by a partner, and 3% had experienced psychological violence perpetrated by a partner within the past 12 months (Lange et al., 2016;Schlack et al., 2013).
Even though a substantial proportion of people experience IPV, people rarely seek psychological or medical help after such an experience (Martin et al., 2023). One study conducted in the United States reported that 55% of people who experienced physical, psychological, and/or sexual violence sought formal help after those experiences (Cho et al., 2020). In another U.S. study, 4% to 12% of participants reported that they had sought medical help after experiencing IPV, and 31% to 43% of participants said they had sought psychological help after experiencing IPV (Martin et al., 2023). Of the women who participated in a large-scale study in the EU, 15% to 22% reported that they had sought medical help after experiencing physical or sexual violence carried out by the current or past partner (European Union, 2014).
Many studies that compared women's and men's help-seeking behavior after experiences of IPV found that women are more likely than men to seek psychological or medical help after those experiences (e.g., Martin et al., 2023). For instance, in a study 29% of women as compared to 12% of men reported that they had sought medical help after experiencing physical and/ or sexual violence committed by an intimate partner. Psychological help was sought by 35% of women and 16% of men after experiencing IPV (Ansara & Hindin, 2010). In Austria, it has been assessed that 21% of women and 20% of men who experience physical violence, 14% of women and 10% of men who experience psychological violence, and 11% of women and 5% of men who experience sexual violence seek medical help. Psychological help is sought out by 22% of women and 11% of men who sustain physical violence, 20% of women and 11% of men who experience psychological violence, and 16% of women and 3% of men who undergo sexual violence (Kapella et al., 2011).
Masculinity Ideologies, Self-Stigma, and Self-Compassion
The difference in women's and men's willingness to seek medical or psychological help is not only reported for help-seeking after IPV. Past research has consistently reported that men are overall less likely than women to seek medical or psychological help. One prominent explanation for men's reduced willingness to seek formal help is men's stronger CMI (Addis & Mahalik, 2003).
Masculinity ideologies are culturally defined standards or norms for male behavior (Pleck et al., 1993). Masculinity ideologies depict men as being confident, competent, successful, self-reliant, and as having high self-esteem. Additionally, men are expected to be physically strong and able to defend themselves. CMI often conflicts with seeking help for psychological or medical problems, especially if help-seeking necessitates the expression of fears, intimate emotions, or vulnerabilities (Addis & Mahalik, 2003).
Accordingly, men who are affected by IPV not only reported that they felt the experience of IPV conflicted with their ability to uphold masculinity ideologies (Hogan et al., 2022), but many men have also reported that they avoided or delayed seeking help after experiencing IPV because they attempted to conform to masculinity ideologies or because CMI felt like a barrier to help-seeking (Huntley et al., 2019).
One way in which CMI might influence help-seeking is the increased experience of self-stigma when seeking formal help (Arnocky & Vaillancourt, 2014). People who hold a stigma in relation to help-seeking have a poor opinion of people who need or seek help and perceive help-seeking as unacceptable. People with strong self-stigma direct those negative messages toward themselves when they need or seek formal help (Vogel et al., 2006). The IPV stigmatization model (Overstreet & Quinn, 2013) identifies self-stigma as an important barrier to help-seeking after IPV experiences. The mediated-moderation model of self-compassion and stigma (Wong et al., 2019) incorporates the link between strong CMI and strong self-stigma in explaining men's reluctance to seek formal help.
The mediated-moderation model of self-compassion and stigma (Wong et al., 2019) includes self-compassion, that is, the tendency to be kind toward oneself and to take an understanding, nonjudgmental attitude toward one's inadequacies and failures (Neff, 2003, 2023). Thereby, self-compassion is often seen as a "buffer" that might weaken the strong relationship between CMI and self-stigma (Wasylkiw & Clairo, 2018).
So far, the mediated-moderation model of self-compassion and stigma (Wong et al., 2019) has been applied to men, but not to women. However, women might also adhere to norms and standards described by masculinity ideologies (García-Sánchez et al., 2018;Zamarripa et al., 2003). Women's CMI has been linked to stronger acceptance of IPV (McDermott et al., 2017). Furthermore, CMI in women was linked to a reduced willingness to seek formal help (McDermott et al., 2018). Thus, for women, too, self-stigma with regard to help-seeking can be a barrier to seeking formal help after IPV experiences (Alves-Costa et al., 2023;Lelaurain et al., 2017).
Aim of the Current Study
To date, most research on barriers to formal help-seeking after IPV experiences stems from North America, and more studies on this topic from the EU are needed (Lelaurain et al., 2017). The mediated-moderation model of self-compassion and stigma (Wong et al., 2019) has not been tested in women, and the associations between CMI, self-stigma, self-compassion, and help-seeking intentions after IPV experiences have not been tested.
The current study addressed those gaps in the literature. Similar to previous studies on the links between men's adherence to masculinity ideologies, self-stigma, and help-seeking for depression (Cole & Ingram, 2020; Mahalik & Rochlen, 2006), the current study used vignettes, that is, short depictions of hypothetical situations, to assess women's and men's intention to seek formal help after experiencing IPV. Thereby, the mediated-moderation model of self-compassion and stigma (Wong et al., 2019) was tested with regard to help-seeking after IPV in a sample of German-speaking persons in the EU. Based on the model, the following hypotheses were tested (see the figure in Supplemental Material S1):

• Hypothesis 1 (H1): Strong CMI is linked to stronger self-stigma with regard to formal help-seeking after IPV experiences in women and men.
• Hypothesis 2 (H2): Strong CMI and strong self-stigma are linked to reduced intentions to seek formal help after IPV experiences in women and men.
• Hypothesis 3 (H3): Self-compassion weakens (i.e., "buffers") the link between CMI and self-stigma in women and men.
Participants
For the analysis, responses from 491 participants were considered. The sociodemographic characteristics of the sample are shown in Table 1. Of the participants, 65.8% were women and 34.2% were men. On average, participants were 36.1 (SD = 14.2) years old. The majority of the participants had German nationality and identified as being heterosexual. Nearly half of the sample were single people and were employed in paid work. Similar numbers of participants had finished vocational training, had university entrance-level qualification, or had a university degree (Table 1).
Procedures
The study was conducted online from March 2021 to October 2021 and was hosted on SoSci: der onlineFragebogen (http://soscisurvey.de/). First, all students at an Austrian medical university were invited by email to participate in the study. Second, the study was promoted with a paid advertisement on Facebook. The paid advertisement targeted all persons over the age of 18 years living in Germany, Austria, and German-speaking regions of Italy. The email invitation and the paid advertisement stated that the current study was about help-seeking behaviors after negative experiences. The link provided in both led to the first page of the questionnaire, which revealed more specifically that the study's purpose was to better understand help-seeking intentions after experiencing IPV. Because predominantly women followed the invitation to participate, from June 2021 onward we targeted only men with the advertisement on Facebook. We halted recruitment when no additional responses were received despite increasing the reach of the advertisement.
Participation in the study was voluntary and anonymous. Individuals did not receive any incentive for participating. Only after giving informed consent were participants able to access the questionnaire. The medical university's Ethics Committee confirmed (on January 18, 2021) that under Austrian law the current study did not require formal approval by an Ethics Committee.
Measures
Sociodemographic information. Sociodemographic information was assessed with self-constructed questions about participants' gender, age, sexual orientation, relationship status, highest level of education, employment, and nationality (Supplemental Material S2). For the analysis only participants who identified as women or men were considered because too few people with other identities participated. Because only few participants had Italian nationality (n = 33; 6.7%) and one participant (0.2%) had Turkish nationality, a new variable for nationality was formed, in which the two categories German nationality and Austrian and other nationalities were subsumed. A new variable for education was formed, in which secondary school (n = 61; 12.5%) and vocational training (n = 72; 14.7%) were summarized in one category. Finally, a new variable for employment was formed, in which the categories unemployed (n = 39; 7.9%), retired (n = 41; 8.4%), and parental leave (n = 8; 1.6%) were summarized in one new category, namely not in paid work.
Help-seeking intentions. The current study's approach to assessing participants' intention to seek formal help was informed by previous studies that assessed men's potential responses when experiencing symptoms of depression (Cole & Ingram, 2020; Mahalik & Rochlen, 2006). Similar to those studies, the current study used vignettes to describe a main character's IPV experiences. Specifically, three vignettes, one each about psychological, physical, and sexualized IPV, were presented (see all vignettes in Supplemental Material S3). The gender of the main character was identical to the participant's indicated gender identity. After each vignette, participants were asked to imagine being in the described situation and to indicate how likely they would be to seek medical help or professional psychological help (Cole & Ingram, 2020). Because an odd number of response categories is recommended, especially for questions where participants might be indecisive (Weijters et al., 2010), we used a 5-point Likert scale (1 = very unlikely; 5 = very likely) instead of the previously used 4-point Likert scale (Cole & Ingram, 2020; Mahalik & Rochlen, 2006). Mean scores across all responses were calculated, with higher scores indicating stronger intentions to seek formal help after IPV experiences. The previous study, which assessed men's formal help-seeking intentions for depressive symptoms, had an internal consistency of α = .92 (Cole & Ingram, 2020); in the current study, the internal consistency was α = .89 (Supplemental Material S4).
Self-constructed questions were used to ask participants whether they had sought medical help, psychological help, or other professional help at least once after own experiences of interpersonal violence (1 = no; 2 = yes). All participants who responded that they had used at least one formal help resource were coded as having availed themselves of formal help after experiences of interpersonal violence.
Conformity to masculinity ideologies. CMI was assessed with the 30-item short version of the Conformity to Masculine Norms Inventory (CMNI-30). Compared with the original CMNI, the CMNI-30 is considerably shorter but still includes 10 of the original 11 scales, and the wording of some items was changed to improve clarity and consistency (Levant et al., 2020). Participants responded on a 6-point Likert scale (1 = strongly disagree; 6 = strongly agree) to statements describing behaviors or opinions that conform to masculinity ideologies (e.g., "I will do anything to win"). The CMNI-30 has been validated only in men and has shown satisfactory internal consistencies (most scales had α coefficients above .70) (Levant et al., 2020). For the current study, the German-language version of the CMNI-30 was used (Komlenac et al., 2023). A higher mean total score indicated stronger CMI; internal consistency was satisfactory in women and men (Supplemental Material S4; Ponterotto & Ruckdeschel, 2007).
Self-compassion. Self-compassion was measured with the German-language version (Hupfeld & Ruffieux, 2011) of the Self-Compassion Scale (Neff, 2003). The questionnaire consists of statements describing possible reactions to difficult situations, and participants indicated on a 5-point Likert scale how often they react in the described manner (1 = almost never; 5 = almost always). For the current study, a scoring scheme was used that summarizes the 26 items in two scales, namely self-compassion and self-coldness. The self-coldness scale consists of items assessing being self-judgmental, tending to feel alone, and being focused on and overwhelmed by negative emotions (e.g., "I'm intolerant and impatient towards those aspects of my personality I don't like"). The self-compassion scale measures the extent to which a person is kind to themselves, tries to balance and control upsetting emotions, and realizes that other people share difficulties and negative feelings similar to one's own (e.g., "I try to be understanding and patient towards those aspects of my personality I don't like") (Neff, 2003). The two scales, self-compassion and self-coldness, have shown satisfactory internal consistencies (coefficient omega hierarchical (ωH) > .84), and in the current study they also had satisfactory internal consistencies (αs > .90; Supplemental Material S4). Higher scores for self-compassion indicated higher levels of self-compassion, whereas higher scores for self-coldness indicated lower levels of self-coldness (Hupfeld & Ruffieux, 2011).
Self-Stigma of Seeking Help Scale. Self-stigma, that is, the degree to which an individual's self-esteem or self-worth is threatened by the prospect of needing professional help, was assessed with the Self-Stigma of Seeking Help Scale (SSOSH) (Vogel et al., 2006). For the current study, a revised seven-item version with satisfactory psychometric properties (αs = .87-.89) was used (Brenner et al., 2021), with German-language items that are provided online (Vogel, 2020). To better assess self-stigma in relation to seeking medical help after experiences of interpersonal violence, the wording of the items was changed: all phrases that referred to "psychological help" were changed to "medical help" or "physician," and all phrases that referred to psychological problems were changed to "experiences of interpersonal violence" (e.g., "I would feel inadequate if I went to a physician for help after experiencing interpersonal violence."). Participants indicated on a 5-point Likert scale their agreement or disagreement with the statements (1 = strongly disagree; 5 = strongly agree). In the current study, acceptable internal consistencies were obtained (Supplemental Material S4).
Lifetime experiences of violence. To measure cumulative lifetime experiences of interpersonal violence, the Cumulative Lifetime Violence Severity Scale was used (Scott-Storey et al., 2020). This scale assesses experiences of interpersonal violence during childhood (11 items) and adulthood (11 items). For each description of an experience of interpersonal violence (e.g., "Since the age of 18 I have been hit, kicked, slapped, burned, choked, or otherwise physically hurt by a caregiver or family member [other than a partner]"), participants indicated how often (1 = never; 5 = often) they had experienced such a situation. Additionally, they indicated how distressed they were (1 = not at all distressed; 5 = very distressed) by each experience of interpersonal violence they had at least rarely experienced. One mean score across all frequency and distress items was calculated for each participant, indicating the frequency and/or severity of cumulative lifetime experiences of interpersonal violence. The original Cumulative Lifetime Violence Severity Scale was validated in men and proved to have satisfactory internal consistencies (αs = .78-.86) (Scott-Storey et al., 2020). For the current study, all items were translated from English to German using a forward-and-back-translation procedure (Brislin, 1970). The total score showed satisfactory internal consistencies (Supplemental Material S4).
Statistical Analysis
Descriptive statistics, chi-square tests, and t-tests were performed to determine average responses of participants and to compare responses between women and men. Correlation analyses were performed to calculate the bivariate relationships between the studied variables.
To test the current study's hypotheses, two manifest path models, one for women and one for men, were calculated. Those models were calculated with the PROCESS macro (Hayes, 2018; https://www.processmacro.org) for SPSS. For the current analysis, SPSS for Windows, version 26.0 (IBM Corp., Armonk, NY, USA) was used. Model 10 of the PROCESS macro (Hayes, 2018) was chosen with bootstrap bias-corrected 95% confidence intervals (bootstrap sample was n = 5,000). Significant results were indicated when p ≤ .05 or when 95% confidence intervals did not include zero.
The calculated model is shown in the Supplemental Material S1. In each manifest path model, intention to seek formal help (variable Y) was predicted by self-stigma (mediator, variable M), and CMI (variable X). Self-coldness (variable W) and self-compassion (variable Z) were entered as moderators for the link between CMI and intention to seek formal help and the link between CMI and self-stigma. In all analyses, the control variables age, nationality, relationship status, sexual orientation, education, employment, own past formal help-seeking, and own experiences of interpersonal violence were considered.
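While PROCESS handles the estimation internally, the same conditional indirect effect can be reproduced with two OLS regressions and a bootstrap. The following Python sketch is a simplified illustration with simulated data: it omits the covariates, uses percentile rather than bias-corrected intervals, and its variable names (X, M, Y, W, Z) mirror the path-model description above, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 400

# Simulated stand-ins: X = CMI, W = self-coldness, Z = self-compassion,
# M = self-stigma, Y = help-seeking intention (not the study's data).
df = pd.DataFrame({"X": rng.normal(size=n),
                   "W": rng.normal(size=n),
                   "Z": rng.normal(size=n)})
df["M"] = 0.5 * df.X - 0.3 * df.X * df.Z + rng.normal(size=n)
df["Y"] = -0.4 * df.M - 0.2 * df.X + rng.normal(size=n)

def conditional_indirect(data, w, z):
    """Indirect effect of X on Y through M at moderator values w and z."""
    m_fit = smf.ols("M ~ X + W + Z + X:W + X:Z", data).fit()
    y_fit = smf.ols("Y ~ X + M + W + Z + X:W + X:Z", data).fit()
    a = (m_fit.params["X"] + m_fit.params["X:W"] * w
         + m_fit.params["X:Z"] * z)
    return a * y_fit.params["M"]

# Percentile bootstrap CI at one SD below the mean of Z (weak self-compassion).
boot = [conditional_indirect(df.sample(n, replace=True), w=0.0, z=-1.0)
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect: {conditional_indirect(df, 0.0, -1.0):.3f}, "
      f"95% CI [{lo:.3f}, {hi:.3f}]")
```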
In total, 1,265 persons gave informed consent and opened the online questionnaire. Of those, 704 participants were excluded because they gave wrong responses to two attention-check items ("Please select the response 'Agree'") (Huang et al., 2012). Furthermore, 23 participants stopped filling out the questionnaire before reaching the end. Two participants were excluded because they gave an unrealistic age (above 3,232), and two participants indicated being younger than 18 years; six participants did not reveal their age. There were too few nonbinary persons (n = 22; 4.2%), trans men (n = 4; 0.8%), inter* persons (n = 1; 0.2%), or people with a different gender (n = 6; 1.1%) to include them in the statistical analysis. Four people did not indicate their gender and could therefore not be included. Ultimately, 491 responses were used for the analysis.
Based on previous research (Booth et al., 2019), links between CMI and self-stigma or between self-stigma and intentions to seek formal help were expected to be of medium effect size (Cohen, 1992). For such effects to be detected in manifest path models using bias-corrected bootstrap coefficients, a minimum sample size of n = 148 is suggested (Fritz & Mackinnon, 2007). In the current study, the sample sizes for women (n = 323) and men (n = 168) satisfy this requirement.
Descriptive Statistics
Overall, participants reported few own experiences of interpersonal violence (Supplemental Material S4). Nearly one-third of the participants reported having at least once sought formal help after experiencing interpersonal violence (Table 1). After reading the vignettes about the main character experiencing different forms of IPV, participants reported that they would be moderately likely to seek formal help if they were in the main character's situation (Supplemental Material S4).
In general, participants reported rather weak conformity to masculinity ideologies, with men conforming to those ideologies more strongly than women (Supplemental Material S4). Participants reported moderate self-compassion and self-coldness, whereby women reported stronger self-coldness than men. Finally, the prospect of seeking medical help after experiences of interpersonal violence did, on the whole, not weaken participants' sense of self-worth or self-esteem (Supplemental Material S4).
Correlations
All correlations between variables are reported in Supplemental Material S5. Participants with frequent and/or distressing experiences of interpersonal violence were more likely to have sought formal help after such experiences. Furthermore, having had frequent and/or distressing experiences of interpersonal violence was linked to stronger CMI, weaker self-compassion, stronger self-coldness, and stronger self-stigma.
Strong CMI was linked to weaker self-compassion and stronger selfcoldness in women and men. Additionally, strong CMI correlated with strong self-stigma. Strong self-stigma was associated with weaker self-compassion and stronger self-coldness.
Help-seeking intentions after IPV experiences were stronger in older participants and those who had sought formal help after experiences of interpersonal violence in the past. Weak CMI, strong self-compassion, and weak self-coldness were linked to stronger intentions to seek formal help. Finally, strong self-stigma was associated with weaker intentions to seek formal help (Supplemental Material S5).
Mediation and Moderation Analysis
Women. All path coefficients of the manifest path model for women are reported in Table 2. Women with German nationality and those with frequent and/or distressing experiences of interpersonal violence reported stronger self-stigma than women with Austrian and other nationalities and women with no or few experiences of interpersonal violence. Additionally, strong CMI and weak self-compassion were linked to strong self-stigma with regard to seeking medical help after experiences of interpersonal violence. Strong self-stigma, in turn, was linked to weaker intentions to seek formal help. Women who had taken advantage of formal help after experiencing interpersonal violence in the past more often intended to seek formal help than women who had not. Self-compassion moderated the link between CMI and help-seeking intentions (Table 2). Strong CMI was linked to weaker intentions to seek formal help especially in women with strong self-compassion (bs = −0.8 to −0.5, ps = .012-.031). In women with weak self-compassion, CMI was not linked to the intention to seek help after IPV experiences (bs = 0.0-0.2, ps = .507-.800). Thereby, among women with weak CMI, those with strong self-compassion had stronger intentions to seek formal help than those with weak self-compassion (see figure in Supplemental Material S6). This difference between women with strong and weak self-compassion was not evident in women with strong CMI.
Finally, strong CMI was indirectly linked to weaker intentions to seek formal help after IPV experiences via self-stigma only in women with weak self-compassion and strong self-coldness. At other levels of self-coldness and self-compassion, the confidence intervals of the indirect path coefficients included zero and were thus not significant (Supplemental Material S8).
Men. All path coefficients of the manifest path model for men are reported in Table 2. Men with strong self-coldness were more likely to have strong self-stigma than men with weak self-coldness. The significant interaction CMI × self-compassion indicated that strong CMI was linked to strong self-stigma especially in men with weak self-compassion (bs = 0.7-0.8, ps ≤ .001). In men with strong self-compassion, CMI was not linked to self-stigma (bs = 0.3, ps = .069-.275).
Strong self-stigma in turn was linked to weaker intentions to seek formal help. Self-compassion moderated the link between CMI and help-seeking intentions (Table 2). Strong CMI was especially linked to weaker intentions to seek formal help in men with strong self-compassion (bs = −0.7, ps = .002-.016). In men with weak self-compassion, CMI was not linked to the intention to seek help after IPV experiences (bs = −0.2 to −0.1, ps = .357-.360). Thereby, in men with weak CMI, those with stronger self-compassion had stronger intentions to seek formal help than men with weak self-compassion (see figure in Supplemental Material S7). This difference between men with strong and weak self-compassion was not evident in men with strong CMI.
Finally, strong CMI was indirectly linked to weaker intentions to seek formal help after IPV experiences via self-stigma only in men with weak self-compassion and strong self-coldness. At other levels of self-coldness and self-compassion, the confidence intervals of the indirect path coefficients included zero and were thus not significant (Supplemental Material S9).
Discussion
The current study investigated barriers to and facilitators of formal help-seeking after IPV experiences in German-speaking women and men in the EU. It revealed that strong CMI was linked to stronger self-stigma with regard to formal help-seeking in women and men (H1). Strong self-stigma and strong CMI, in turn, were linked to weak intentions to seek psychological and/or medical help after IPV experiences (H2). Self-compassion was found to be a potential facilitator, because strong self-compassion weakened (i.e., "buffered") the link between CMI and self-stigma in women and men (H3).
Masculinity, Self-Stigma, and Help-Seeking
One explanation for people's low willingness to seek psychological or medical help after IPV experiences (Martin et al., 2023) is people's strong self-stigma with regard to help-seeking (Alves-Costa et al., 2023; Lelaurain et al., 2017). In line with the IPV stigmatization model (Overstreet & Quinn, 2013), in the current study especially those participants who regarded needing or seeking help as something unacceptable or something that threatens their sense of self-worth (Vogel et al., 2006) had weaker intentions to seek formal help after IPV experiences. Thus, self-stigma can be seen as a barrier to formal help-seeking after IPV experiences.
The current study further revealed that CMI co-occurred with women's and men's strong self-stigma concerning formal help-seeking. People who try to conform to masculinity ideologies might perceive an incompatibility between being stoic, strong, independent, or invulnerable and needing help after IPV experiences, especially if help-seeking necessitates the expression of fears, intimate emotions, or admitting vulnerabilities (Addis & Mahalik, 2003). The current study's results are in line with previous findings from qualitative studies. Many men have reported that they avoided or delayed seeking help after IPV experiences because they attempted to conform to masculinity ideologies or because CMI felt like a barrier to help-seeking (Huntley et al., 2019).
Previously, it has been shown that adherence to masculinity ideologies in women was linked to a reduced willingness to seek formal help for suicidal thoughts (McDermott et al., 2018). The current study adds that, similar to men, strong CMI in women is linked to strong self-stigma and in turn to low willingness to seek formal help after IPV experiences (Cole & Ingram, 2020). Thus, the current study adds support for the argument that it is relevant to understand how masculinity ideologies affect the lives of men and women (Whorley & Addis, 2006) and not to focus solely on men when studying masculinity ideologies.
Masculinity Ideologies and Self-Compassion
According to the mediated-moderation model of self-compassion and stigma (Wong et al., 2019), self-compassion, that is, being kind to oneself also when perceiving one's own inadequacies and failures (Neff, 2003), can weaken (i.e., "buffer") the link between CMI and self-stigma. In line with the model, numerous studies have shown that self-compassion can moderate the association between CMI and self-stigma concerning help-seeking for psychological problems (Wasylkiw & Clairo, 2018). Thereby, in men with high levels of self-compassion, the link between strong CMI and self-stigma for formal help-seeking was weaker than in men with low levels of self-compassion (Booth et al., 2019). The current study replicated those findings with regard to formal help-seeking after IPV experiences. Namely, only in men with weak self-compassion was strong CMI linked to strong self-stigma. The analysis of indirect path coefficients further revealed that strong CMI was linked to reduced intentions to seek formal help after IPV experiences via increased levels of self-stigma only in women and men with weak self-compassion and strong self-coldness.
The current study is unique in that it shows that strong self-compassion was linked to weaker self-stigma for formal help-seeking in women. Even though self-compassion is often discussed as "colliding" with the ability to conform to masculinity ideologies (Wasylkiw & Clairo, 2018) and most research on associations between self-compassion and self-stigma for formal help-seeking has focused on men (Wasylkiw & Clairo, 2018), on average, women and not men have lower levels of self-compassion (Yarnell et al., 2015). In the current study, this difference in levels of self-compassion between women and men also became evident. Therefore, the focus of research on the benefits of having high levels of self-compassion for formal help-seeking needs to shift to also include women. The current study revealed that having stronger self-compassion might go hand in hand with reduced self-stigma in women and men alike.
However, the analysis of direct links between CMI and intention to seek formal help revealed that self-compassion might not always act as a "buffer": whereas in people with weak CMI strong self-compassion was linked to stronger intentions to seek formal help, this association was not evident in persons with strong CMI. Thus, CMI might be linked to reduced willingness to seek help via routes other than self-stigma (e.g., devaluations of and negative judgments about one's own need for and use of formal help) (Wong et al., 2019). Other cognitive (e.g., appraisal strategies or benefit finding) and social (e.g., giving forgiveness or receiving social support) processes linked to self-compassion might mediate associations between CMI and intentions to seek formal help (Wong et al., 2019). Future studies that include cognitive and social processes are needed to explain the current study's finding of direct links between strong CMI and weak intentions to seek formal help, which were especially prevalent in persons with strong self-compassion.
Implications
Similar to previous studies of the links between men's CMI, self-stigma, and help-seeking for depression (Cole & Ingram, 2020; Mahalik & Rochlen, 2006), the current study shows that CMI was linked to reduced intentions to seek formal help after IPV experiences. Therefore, healthcare providers need to be sensitive to and understand the consequences of cultural pressures to endorse and conform to masculinity ideologies (American Psychological Association, 2018). It has been recommended that the topic of masculinity ideologies and gender-sensitive treatment already be included in clinical practice training (Seidler et al., 2019) or medical education (Komlenac & Hochleitner, 2019). In this way, healthcare practitioners should become aware of their own gender-based attitudes and expectations. They should also be aware of and explore their patients' endorsement of and conformity to masculinity ideologies (Seidler et al., 2018). Healthcare practitioners may try to use their patients' CMI to best advantage by reframing healthcare situations in a way that promotes self-management, empowerment, accountability, and autonomy throughout treatment. In such healthcare settings, patients who strongly conform to masculinity ideologies might be able to utilize healthcare offers after experiencing IPV while at the same time being able to maintain their sense of masculinity (Komlenac et al., 2020; Seidler et al., 2018).
The current study extends previous suggestions by fostering awareness of masculinity ideologies affecting the lives of men and women (Whorley & Addis, 2006). Even though, overall, women report lower levels of CMI (García-Sánchez et al., 2018; Zamarripa et al., 2003), women's CMI is linked to women's reduced willingness to seek formal help (McDermott et al., 2018). Therefore, it is recommended that patients' endorsement of and conformity to masculinity ideologies be explored in women and men and that appropriate gender-sensitive approaches in clinical care be applied after IPV.
In addition to healthcare providers, other services such as social services or victim support organizations offer formal help to people who experience IPV. As is the case with healthcare services, a relatively small proportion of people seek out social services or victim support organizations after experiencing IPV (European Union, 2014). The current study's findings and recommendations might be applicable to those services.
Given the previous and current findings that self-compassion can weaken the association between CMI and self-stigma concerning help-seeking (Wasylkiw & Clairo, 2018), self-compassion interventions might increase a person's positive health behavior (Biber & Ellis, 2019). For instance, self-compassion interventions have been linked to higher levels of mindfulness and lower levels of self-criticism (Ferrari et al., 2019). Future studies are needed to develop strategies for how to target specific populations (e.g., adolescents, college students, clinicians, healthcare professionals, parents, and spouses) with such self-compassion interventions (e.g., workshops and online webpages) in order to help people access healthcare more easily when the need arises (e.g., after IPV experiences).
Limitations
The current study did not focus on participants' actual help-seeking behavior, but provided participants with vignettes and assessed their intentions to seek formal help after IPV experiences. Vignettes seemed appropriate because they can model real-life decisions in specific contexts or situations. Additionally, vignettes are often used to study sensitive topics because they are non-personal and are believed to elicit fewer socially desirable responses (Wallander, 2009).
The current study was conducted during the coronavirus disease 2019 (COVID-19) outbreak in Europe. On the one hand, an increased awareness for the importance of addressing mental health problems was evident during this period. On the other hand, many healthcare settings implemented infection-control measures by in part reducing outpatient appointments. Thus, increased awareness and higher demands for mental health services were not met with an increased availability of such services (Moreno et al., 2020). Some participants could have indicated no intent to seek formal help after IPV experiences because of the prospect of not easily receiving proper help during this time of high demand for psychological and medical health services.
The CMNI (Levant et al., 2020;Mahalik et al., 2003) and the Cumulative Lifetime Violence Severity Scale (Scott-Storey et al., 2020) were developed and originally validated for men only. The CMNI-46 proved to have at least partial metric invariance, indicating that factor loadings were similar across women and men (McDermott et al., 2018). However, further validation studies and stronger support for metric and scalar invariance are needed to increase the confidence that those scales measure similar constructs in women and men.
The cross-sectional study design precludes any conclusions of causality or the temporal order of found relationships. Future prospective, longitudinal, or experimental studies could address and prevent those limitations.
Lastly, some limitations of the current study's sample need to be mentioned. Part of the current study's results is based on a convenience sample of university students. Thus, the proportion of people with university entrance-level education and a university degree was relatively high in the current sample. Such a sample may differ significantly from other populations (Henrich et al., 2010). In general, studies with university students as participants find associations with larger effect sizes than studies with more general samples (Henrich et al., 2010). Finally, it proved difficult to recruit enough men for the current study. Some non-respondents might especially uphold certain masculinity ideologies or be reluctant to seek help after undergoing IPV experiences (Näslindh-Ylispangar et al., 2008). Therefore, the reported intentions to seek help following IPV experiences might be an upper estimate, and the generalizability of the results to a larger proportion of the population might be limited.
Conclusion
The current study revealed that self-compassion weakened (i.e., "buffered") the strong link between high levels of CMI and high levels of self-stigma for seeking help after experiencing IPV. Self-compassion interventions might increase women's and men's positive health behavior, including willingness to seek formal help after experiencing IPV (Neff, 2023). In contrast to previous studies about self-compassion and help-seeking that focused on men (Wasylkiw & Clairo, 2018), the current study highlights that especially women could benefit from such interventions, given that they reported lower levels of self-compassion in the current and previous studies (Yarnell et al., 2015).
The analysis of direct links between CMI and intention to seek formal help revealed that strong CMI was linked to a reduced intention to seek formal help, especially in persons with strong self-compassion. Namely, intentions to seek formal help did not differ among people with different levels of self-compassion when they strongly conformed to masculinity ideologies. In contrast, in people with weak CMI, strong self-compassion was linked to stronger intentions to seek formal help. Future studies need to examine other mediators in addition to self-stigma that might explain how CMI is linked to intention to seek formal help. In contrast to the route via self-stigma, for alternative routes self-compassion might not act as a "buffer".
Author contributions
NK, EL, FM, and MH designed the research. NK and MH collected the data. NK analyzed and interpreted the data. NK wrote the manuscript. AW critically commented the manuscript. All authors read and approved the final manuscript.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article: This study was conducted as part of the second and third authors' master's theses.
Ethics Approval and Consent to Participate
The medical university's Ethics Committee exempted the current study from full ethics review. The study was conducted in accordance with the Declaration of Helsinki (World Medical Association, 2013) and the APA standards (APA, 2002). All participants gave written informed consent.
Availability of Data and Materials
The datasets used and/or analyzed in the current study are available from the corresponding author on reasonable request.
Supplemental Material
Supplemental material for this article is available online.
A Framework for Monitoring Ecosystems-Based Adaptation to Climate Change: Experience from The Gambia
Implementing ecosystems-based adaptation (EbA) to climate change is challenged by the need to monitor biophysical, socio-cultural, and economic impacts which are usually context-specific. Therefore, robust frameworks are required that integrate impacts to better understand EbA effectiveness. Monitoring frameworks that are universally applicable to EbA are desirable; however, their universal application is problematic, as they should reflect a community-driven design that accommodates both donor reporting functions and the generation of local-level data and information to support management actions and community initiatives. Initial products from this research include a generic, five-step process for developing and testing adaptation indicators; a robust framework consisting of (i) the indicators, data and information used to design the framework, (ii) the operational EbA platform that houses and computes the adaptation indicators, and (iii) the participating institutions; and initial, community-level applications to guide water management, replenishment of the vegetation cover, and business development. Immediate benefits to rural communities include the re-orientation of performance indicators mapped to their needs as opposed to donor reporting alone. The framework contributes to the set of tools currently in use for EbA monitoring by offering an umbrella within which existing tools can be applied. Near-term future research will focus on improving the utility of the framework and its platform beyond reporting on key performance indicators (KPIs) by adapting the EbA platform to support changing management needs. Future research will also be needed to understand the extent to which environmental changes in The Gambia compare to changes across the Sahel and Sudano-Sahel regions of West Africa and whether the lessons learned from The Gambia can be extrapolated to the subregion.
Introduction
Earth's ecosystems are the foundation for human development, so their health is essential for the social and economic security of the world's population. As demand for ecosystem services escalates, with the global population projected to reach 9.7 billion in 2050 [1], so will pressure on food production, particularly in the face of climate change [2].
As the Intergovernmental Panel on Climate Change (IPCC) has reported, the global climate system has warmed, and the changes observed since the 1950s are unprecedented within periods spanning from decades to millennia. Increasing concentrations of greenhouse gases (GHGs) from anthropogenic sources have triggered this atmospheric and oceanic warming. As a consequence, snow and ice packs have diminished, while the sea level has risen. Continued emissions of GHGs will cause further warming and changes in components of the climate system [3].

To be effective, EbA initiatives should adhere to five principles: (i) reduce social and environmental vulnerabilities, (ii) generate societal benefits in the context of climate change adaptation, (iii) restore, maintain or improve ecosystem health, (iv) be supported by policies at multiple levels, and (v) support equitable governance and enhance community capacity to implement EbA initiatives [11]. These criteria form a structure for defining performance indicators to monitor adaptation.
EbA embraces the sustainable management of forests, grasslands, wetlands, and coastal zones to reduce the harmful impacts of climate hazards including shifting spatial and temporal variability of rainfall, changes in maximum and minimum temperatures, stronger storms, and increasingly variable climatic conditions [12]. Examples of EbA practices include agroforestry to increase resilience of crops to droughts or excessive rainfall, integrated water resource management to cope with prolonged drought and change in rainfall patterns, and sustainable forest management to stabilise slopes, prevent landslides, and regulate water flow [13].
EbA is nested within the broader concept of nature-based solutions and shares common elements with a variety of approaches to building the resilience of socio-ecological systems [14]. These approaches include community-based adaptation, ecosystem-based disaster risk reduction, climate-smart agriculture, and green infrastructure and often use participatory processes for community engagement. Not surprisingly, EbA is increasingly viewed as an effective means of addressing the linked challenges of climate change and poverty in developing countries, where many people are dependent on natural resources for their livelihoods [15].
Greater adoption of EbA faces several challenges, including economic and financial constraints, social and cultural barriers, governance and institutional weaknesses, and difficulties in establishing the evidence base to show impact. This manuscript addresses the last of these challenges because, despite considerable investments going into monitoring EbA [16], the specific costs and benefits of EbA adaptation are often not clearly articulated.
Ever since the need to adapt to climate change was understood, the global CC community has sought a universal set of adaptation metrics. Christiansen et al. [17] noted that within the donor community (GCF, GEF, etc.), metrics are used to prioritize limited adaptation funding, including for comparing investments across regions, sectors, and contexts. Similarly, there was initially a need for monitoring for management accountability, hence the need for universal application.
In time, however, the need for tools that also guide project teams in results-based management grew, as adaptation metrics may be even more useful as learning and management tools than just for project evaluation and fund allocation.
No single set of metrics exists that will meet all needs. Context is crucial. Within the EbA community, managers must find a balance between conceptual approaches for better transparency and accountability and pragmatic tools for project monitoring and evaluation. Table 1 summarizes the advantages and disadvantages of universal metrics for adaptation.

Table 1. Advantages and disadvantages of universal metrics for adaptation (adapted from Christiansen et al. [17]).

Political. Advantages: reduced risk of squandering or misappropriation of funds. Disadvantages: if a recipient country has low capacity to collect and report adaptation metrics, it may not have the same access to funds as a well-managed country.

Ethical. Advantages: transparency, i.e., allocation of funds based on merit. Disadvantages: allocation of funds will always involve some level of value judgement, so funds may not go to those most in need.

Economic. Advantages: ex-ante identification of promising projects; ex-post monitoring; ex-post project adjustment. Disadvantages: indicator measurement is uncertain, potentially biasing allocation of funds toward projects wherein benefits can easily be monetized.
Within the adaptation community, there is uncertainty as to what EbA encompasses. For example, in coastal zones, stabilization is an important EbA intervention if it reduces the vulnerability of coastal communities to sea level rise or river flooding. Appropriate metrics could include limited frequency or severity of flooding, lessened frequency of landslide, or reduced damage to properties due to water intrusion. However, if an EbA project addresses adaptation in farming systems, a different set of metrics must reflect the context where interventions are implemented. Since having a standardized indicator set for EbA is problematic, there is a need for teams implementing EbA to agree on appropriate metrics for the work while respecting the principles of transparency and accountability.
Finally, since EbA is achieved within an ecosystem, effective ecological practices must be combined with cultural and social aspects of adaptation.
Problem Statement
Current efforts to track ecosystems-based adaptation to climate change are challenged by the complexity of adaptation monitoring: not only does EbA touch multiple sectors, which makes the selection of indicators difficult, but monitoring frameworks are also often built with universally applicable indicators to satisfy donor or national reporting needs, and therefore may not fully capture impacts at the community level. Hence, the research gap is that transparent processes and robust frameworks are needed to improve EbA monitoring. The research presented here contributes to the body of knowledge on EbA tools by developing a process and framework for monitoring EbA adaptation that satisfies both requirements. We use a case study from the Green Climate Fund (GCF) EbA project in The Gambia to present the lessons and emergent results acquired in the process of building a context-specific framework.
An Overview of Efforts in Developing Monitoring Frameworks for Ecosystem-Based Interventions
Various efforts to develop indicators of adaptation to climate change across different sectors have demonstrated the difficulty in developing robust monitoring frameworks. In the public health sector, Doubleday et al. [18] proposed a set of climate change indicators for state and local health departments to track adaptation efforts. However, they found that additional refinement based on local context was required to improve their uptake in policy and planning. Ebi et al. [19] learned that adaptation indicators must map to upstream drivers of climate-sensitive health outcomes. Indicators should monitor (1) vulnerability and exposure to climate-related hazards, (2) current impacts and projected risks, and (3) adaptation processes and health system resilience. To be robust, proposed indicators must capture uncertainties about the magnitude and pattern of climate change.
In the global environmental change sector in Europe, Klostermann et al. [20] also found that context is crucial for developing adaptation indicators, although they note that the provision of common framework elements would help European Union member states to create or improve their adaptation monitoring. To reduce the risk of environmental hazards in Carinthia, Austria, Zischg et al. [21] undertook an analysis of flood risks in all municipalities. The results were used to set priorities in planning flood protection and formed the basis for a monitoring system.
In the urban sector, Feldmeyer et al. [22] provided an indicator set to measure resilience to climate change and adaptation using a participatory approach to account for context-specific parameters. However, they found that a purely quantitative, indicator-based approach was not sufficient and additional qualitative information was needed. For managing urban water projects, Larson et al. [23] noted the challenges of monitoring and evaluation (M&E) required by funding agencies and recommended a combination of methods, including logical frameworks and best/worst case scenarios, depending on project stage and specific monitoring objectives. In the national energy sector, Pineda et al. [24] were successful in building a composite indicator that helped transition the Colombian energy sector from reactive to anticipatory scenarios.
The European Commission (EC) [25] produced a handbook for developing frameworks and indicators for nature-based solutions (NBS) based on 17 EU-funded projects. The EC NBS framework serves as a reference for EU policies and programmes while orienting urban practitioners in preparing robust impact evaluation frameworks at different scales.
For biological conservation, Conroy et al. [26] noted that, although potential impacts of climate change are understood at global or regional scales, impacts at finer scales are not. Nevertheless, following the precautionary principle [27], conservation decisions cannot await "perfect information" and instead must proceed in the face of uncertainty. Moreover, conservation should proceed within an adaptive management framework as new information becomes available.
From a more practical perspective, users of adaptation monitoring frameworks often need to integrate tools within a single platform. In some cases, the problem is simply technical and relates to the software or hardware used. Even more difficult is the integration of bio-physical and socio-economic data into decision support systems, since scientists and the development practitioners whom they support often think and plan along themes and sectors but not necessarily across disciplines. Although additional training in cross-systems planning is useful, experience and wisdom gained through time will always be highly valued.
Finally, users prefer participatory, community-led, and gender-sensitive planning tools because these reflect the need to negotiate among interests in the real world [28]. It follows that user-defined tools should be given more attention than those originating from external sources.
Methods
The methodology for developing the framework consisted of a global review of existing EbA monitoring tools, an extensive baseline survey to identify data needs for EbA monitoring and gaps for the case study from The Gambia, selection of key performance indicators (KPIs) most relevant to local community needs, collection of field data, and EbA monitoring platform design and development.
Review of Existing Ecosystem-Based Adaptation (EbA) Monitoring Tools
The initial step for this research was to assess existing EbA monitoring tools to guide indicator development and framework design. EbA monitoring tools are technologies designed for managers to explore options for the use of land resources based on their ecosystem characteristics and the socio-economic conditions of the population that depend on them. EbA technologies are mostly information technology (IT) based and support decision making in land evaluation, suitability analysis, land capability classification, and agro-ecological zoning. These options incorporate the needs of different sectors operating in a landscape while optimizing and sustaining resource use. However, the diversity of EbA tools to meet these wide needs challenges users who could benefit most from them.
EbA tools are similar to those used in sustainable land management (SLM) and are classified as biophysical, socio-economic, integrated tools, or databases and have overlapping characteristics.
Biophysical tools assist the user to analyse biophysical attributes (climate, soil, terrain, water, etc.) and their interactions for land evaluation. The output identifies EbA alternatives based on land suitability. For example, soils are classified based on suitability for a specific use, fertility constraints, and linkages to yield, productivity, physical and chemical properties.
Socio-economic tools characterize social and economic settings required for EbA planning and implementation. They include approaches and methods of participatory data collection and decision-making and provide an understanding of the social capital that should be driving the implementation and monitoring processes.
Integrated tools include both biophysical characteristics and social and economic conditions and generally incorporate principles, approaches and methods of participatory planning, with the overall objective of reaching mutually beneficial outcomes for all stakeholders.
Databases can facilitate EbA planning by providing readily available data as input. These databases provide maps and data on soil and terrain characteristics, land degradation, land cover, land use, climatic data including future projections, crops and yields, food, agriculture, water resources, adaptability/suitability of identified plant species for a given environment, and socio-economic data and statistics on poverty, population, tenure and gender.
Information on these tools is not always easily accessible to those who need it. A useful starting point is the EbA Navigator tool produced by International Institute for Environment and Development (IIED), International Union for the Conservation of Nature (IUCN), UNEP-World Conservation Monitoring Centre (UNEP-WCMC) and Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ). The tool is available online at https://www.iied.org/help-pilot-navigator-tools-for-ecosystem-based-adaptation (accessed on 1 October 2021). This searchable database currently includes information on more than 240 tools and methods for planning, implementing and monitoring EbA or sustainable land management (SLM) activities.
Tools in the EbA Navigator fall into three categories: overview documents; manuals and handbooks; and tools specific to project needs. Overview documents provide context on ecosystem resilience or food security and are generally written at the global or regional levels but have limited use at a local project-management scale. Manuals, short courses, and handbooks for project management cover project planning, theory of change, development of logframes, adaptive management, M&E, etc. There are many such tools available, and some are required by specific donors for financial and technical reporting. Specific tools are often needed to access and/or process socio-economic or biophysical data, for example in data modelling.
Other efforts to develop tools are found within SLM practices and have many similarities with EbA. To assist SLM practitioners, the International Fund for Agricultural Development (IFAD) developed its Integrated Approach Programme (IFAD IAP) on Food Security (IAP-FS) in sub-Saharan Africa with funding from the Global Environment Facility (GEF). The IAP-FS targets agro-ecological systems in the drylands of Sub-Saharan Africa (SSA) where the need to enhance food security is directly linked to opportunities for generating local and global environmental benefits [29]. A summary of tools from the IAP-FS programme is given in Table 2.
Users often find it difficult adapting generic EbA tools to their specific needs due to their complexity. In a survey conducted by FAO [30], respondents mentioned the difficulty of using EbA tools in environments for which they were not designed or for which local data had to be generated through inference rather than observation. On the one hand, FAO [30] found that, despite technological advances in IT, remote sensing and GIS, tool development in EbA has not kept pace with new challenges in land and water resource management. The most common shortcomings were low spatial or temporal resolution, resulting in variable data quality and more general information than appropriate for the desired scale of operation. On the other hand, the survey found that such tools and knowledge will always be needed for supporting effective EbA that meets demand for land and water resources while enhancing governance at all scales.
The following section describes the methods used in developing EbA monitoring indicators and a framework from The Gambia, West Africa.
Case Study: Monitoring Ecosystem Based Adaptation Outcomes in The Gambia
Poverty and environmental degradation threaten rural livelihoods in The Gambia. Climate change exacerbates these threats as droughts and floods are increasingly severe, resulting in reduced agricultural production and unsustainable extraction of resources from forest ecosystems by rural households. At present, the Government of The Gambia (GoTG) has insufficient financial resources and technical capacity to build the climate resilience of rural Gambians [31] as outlined in its National Adaptation Program of Action (NAPA).
Sanneh et al. [32] developed a survey-based method for prioritizing climate change adaptation based on the NAPA in The Gambia. Their results indicated that the five most important adaptation categories in order of priority were health, forestry, water, food security, and energy. Furthermore, adaptation approaches included health education, public sensitization, water supply infrastructure development, microfinance, and infrastructure and technology enhancement [32]. The case study chosen takes a step toward implementing these approaches.
The Green Climate Fund (GCF) has been a champion of promoting climate change adaptation, with over 60 projects worldwide [16]. As a priority, GCF operates in developing countries, which typically are most at risk from climate change, as possibilities to adapt may be lacking.
With the GoTG's Ministry of Environment, Climate Change and Natural Resources (MECCNR) as the Executing Agency, and the UN Environment Programme (UNEP) as the Implementing Agency, The Gambia was chosen as a focus country for GCF support because it is on the frontline of the struggle to reduce climate change impacts and because its ecosystem resources are fundamental for survival of its mostly rural population. The International Centre for Research in Agroforestry (ICRAF) was selected as the technical partner supporting project implementation.
The GCF solution is the large-scale implementation of the EbA approach including community forests (CFs), community protected areas (CPAs) and agroforestry on farms in participation with local communities [31] through the project 'Large-scale Ecosystem-based Adaptation in The Gambia: Developing a climate-resilient, natural resource-based economy' (hereafter referred to as 'the GCF project'). Project details are available at https://www.greenclimate.fund/project/fp011 (accessed on 1 October 2021).
The project's investments in EbA: (i) increase the generation of ecosystem goods and services through establishment of a climate-resilient natural resource base; and (ii) identify and promote climate-resilient livelihood options for rural communities through establishment and protection of natural resource-based businesses, for example sustainable production and marketing of timber, firewood, honey and fruit [31].
EbA monitoring in The Gambia has relevance at different scales. At local and national levels, EbA monitoring feeds into land use planning and decision making based on data collected at the appropriate scale (1:50,000 or larger). Examples of local level applications include location of water collection points for farming and tree nursery development, learning which multi-purpose community group interventions have greatest potential for sustainability after project end, or confirming which indigenous tree species have the greatest survival rate with the objective of repairing ecosystem biodiversity and improving community resilience to climate change.
At regional and global levels, EbA monitoring contributes to better science which encourages collaboration through establishing common baselines, knowing trends, assessing risks and further informing policy at regional or national levels. Land managers in West Africa need to strengthen monitoring systems to understand regional climate and environmental dynamics since Sahelian populations are at high risk of increased precipitation variability, droughts and floods [33]. For example, knowing regional trends allows decision makers (Senegal and The Gambia) to benefit from synergistic planning, improve the science of species biodiversity, remove trade barriers to permit local producers better access to markets, and improve ecosystem resilience at regional scale.
This longer-term research objective, although not addressed in the current manuscript, is intended to meet higher level adaptation monitoring efforts that contribute to the global body of knowledge while emphasizing critical considerations of local contexts.
The true value of EbA monitoring is felt most when integrating both local and regional levels by aligning top-down and bottom-up planning and implementation.
Predefined Project Key Performance Indicators (KPIs)
Methods to develop the framework and populate its EbA monitoring platform with data and statistics are grouped according to (1) processes to select, develop, test, validate, and implement KPIs and internal project management indicators; and (2) processes to design, develop, test and deliver the EbA monitoring platform and its geo-data portal. The two sets of processes are integral to each other and were designed accordingly.
KPI selection and development were characterized by participatory processes throughout project implementation. An initial set was defined by the GCF, UNEP, and GoTG and subsequently approved by the GCF [31]. However, the process to select and validate indicators needed further buy-in and adjustment from national project stakeholders, particularly from the team responsible for project management and reporting.
An inception workshop was held on 17-18 May 2018 (Figure 1) to review the predefined KPIs and make the monitoring process more effective and efficient. For the predefined KPIs, the relevant available data and their current locations were specified (Table 3). The main gap identified by the working groups was that the KPIs were not specific enough to reflect project outcomes while meeting management needs. Hence, the working groups identified sub-indicators for each KPI. The specificity of the sub-indicators made the monitoring practical and easier for local practitioners to apply. The project team used a bottom-up process to validate indicators by tagging them to EbA activities preferred by local communities, such as beekeeping, fruit production, and the sale of woodfuels. After the inception workshop, the list of sub-indicators was further refined to include the number of trees planted, the mortality rate, and the number of knowledge products.
(Table 3, excerpt.) Two illustrative rows: for the KPI 'Contribution to National Forest Fund', the data are the tax paid from natural resource-based businesses (in Dalasi), sourced from regional records of licensing issuance held by the Department of Forestry at the subnational (regional) level; for the KPI 'Mainstreaming EbA in policies', the data are the number of policies and strategies integrating EbA, held by EbA stakeholders and the Departments of Community Development, Forestry, and Parks and Wildlife at the national and subnational (regional) levels.

Table 4 summarizes the data and information needs for the GCF Gambia project stakeholders.
The Need for a Robust Platform for EbA Monitoring
Among the barriers that the GCF project is addressing is that GoTG and private sector entities reliant on ecosystem services have insufficient knowledge and technical capacity to promote natural resource-based businesses. The project addressed this gap by including policy support, institutional strengthening and knowledge generation. The aim is to increase the quality and availability of information to inform policymakers, researchers, investors and the general public on the relative effectiveness and commercial viability of large-scale EbA [31]. To achieve this aim, the EbA platform serves as a one-stop shop for obtaining details about the KPIs, baseline data on EbA interventions within local communities, case studies of successful natural resource-based businesses, lessons learned on implementation arrangements, return on investment, and best practices.
Specific functions of the EbA platform include:
• Supporting day-to-day project management by collecting and aggregating feedback from local communities on their social and economic development, which in turn serves as a basis for dialogue with them.
• Building project data and high-level KPIs required for donor reporting [8]. Similarly, KPIs can be used to assess the effectiveness of project leadership while allowing for adaptive management of project implementation.
• Providing a geo-spatial platform (GIS) that serves as a basis for capturing, processing, storing and visualizing project data.
• Establishing a time series of vegetation cover change based on available satellite imagery for establishing regional trends (a minimal sketch of such a trend computation follows this list).
• Building national and local capacity on how to promote communication and awareness of EbA potential to project stakeholders through management training.
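As a minimal illustration of the vegetation time-series function above, the sketch below fits a per-pixel linear trend to a stack of annual vegetation-index composites; the array shapes, variable names, and example values are hypothetical, and the platform's actual processing chain is not described at this level of detail in the project documents.

    import numpy as np

    def vegetation_trend(ndvi_stack, years):
        """Per-pixel least-squares slope (index units per year) of a vegetation index.

        ndvi_stack: array of shape (n_years, rows, cols), e.g. annual NDVI composites.
        years: 1-D array of length n_years.
        Positive slopes indicate greening; negative slopes indicate vegetation loss.
        """
        t = years - years.mean()  # centre the time axis so the slope formula simplifies
        slope = (t[:, None, None] * ndvi_stack).sum(axis=0) / (t ** 2).sum()
        return slope

    # Hypothetical example: ten annual composites over a 50 x 50 pixel window
    years = np.arange(2011, 2021)
    stack = np.random.default_rng(0).uniform(0.2, 0.6, size=(10, 50, 50))
    print(vegetation_trend(stack, years).shape)  # (50, 50)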
Baseline Survey for EbA Indicator Selection
The GCF project team visited Gambian institutions to learn how they manage and share data and information. The team assessed quality, relevance, accuracy and metadata needed for project management and their capacity to host and manage new data systems.
The survey found that, given its unique history and size, The Gambia is reasonably well covered with EbA core data sets at the national level, but lacks datasets at local scales to support project management. Some of the geographic databases were produced from agreements between The Gambia Ministry of Local Governments and Lands and the Japan International Cooperation Agency to jointly conduct aerial surveys in the year 2000. These 1:50,000 scale databases include district-level data (e.g., administrative boundaries, socioeconomic data and settlement names), transportation layers, buildings, small objects, and other structures, water resources, and land cover and topographic features. These data layers were input to the preliminary project planning.
However, the survey identified several gaps in the available layers. The lowest-level administrative boundary data available were at the district level, thus missing the village-level administrative and community forest boundaries and associated data needed for establishing the EbA project baseline. Available settlement, village, and community-level data were in point form rather than in polygon formats. The implication was that village-level maps would be derived from a combination of remote-sensing data and district-level point data and not from pre-existing polygon files.
Collection of Field Data
Field data were collected to build the indicators, and the platform was used for aggregation and analysis. For example, the entity "beneficiaries" carried a set of fields for each entry (detailed in Table 5). The constituent metadata for each KPI were defined so that the data could be aggregated by type, by village, by district, or at higher levels, while accruing over different time periods.
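The exact field list is given in Table 5; as an illustrative sketch of how such a record can carry the aggregation metadata described above (time, place, intervention, quantity, beneficiaries, and a textual description), a minimal Python structure follows. The field names and example values are paraphrased for illustration and are not copied from the platform's actual schema.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class BeneficiaryRecord:
        # Place: enables roll-up from village to district, region, or project-wide totals
        region: str
        district: str
        village: str
        # Time: enables accrual over different reporting periods
        recorded_on: date
        # Intervention and quantities
        intervention: str            # e.g. "beekeeping" or "agroforestry"
        households_benefiting: int
        women_participating: int
        description: str = ""        # free-text notes from the field team

    record = BeneficiaryRecord(
        region="Upper River", district="Basse", village="Gaindeh Njie",
        recorded_on=date(2019, 6, 1), intervention="beekeeping",
        households_benefiting=12, women_participating=7,
        description="Multipurpose centre honey group, first season",
    )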
EbA Monitoring Platform Design and Development
The workflow for the EbA platform development is depicted in Figure 2. The platform was developed in parallel with the indicator selection and refinement as new information concerning indicators and their attributes became available. The platform design is based on location data from the four project regions identified within The Gambia (Lower River, Central River South, Central River North and Upper River Regions), entities (community forests, community protected areas, multipurpose centres (MPCs), individual households), and indicators. The platform tracks and displays the achievement of project impacts and is adaptive to details that emerge from the user community.
Based on the indicator development, a web-based data submission tool was built. The platform team developed the web pages using the PHP programming language, with the data residing in a MySQL database. The web-based tool can be accessed from anywhere with an internet-enabled computer. The last step was to build an interactive dashboard to analyze and visualize the data. The dashboard calculates KPI values from field data.
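The production tool was written in PHP over MySQL, as described above; purely as a language-neutral sketch of the same store-then-compute pattern, the snippet below uses Python's built-in sqlite3 module as a stand-in database. The table and column names are illustrative assumptions, not the platform's real schema.

    import sqlite3

    conn = sqlite3.connect(":memory:")  # stand-in for the project's MySQL database
    conn.execute("""
        CREATE TABLE field_records (
            village TEXT, district TEXT, region TEXT,
            recorded_on TEXT, intervention TEXT,
            households INTEGER, women INTEGER
        )
    """)
    conn.execute(
        "INSERT INTO field_records VALUES (?, ?, ?, ?, ?, ?, ?)",
        ("Gaindeh Njie", "Basse", "Upper River", "2019-06-01", "beekeeping", 12, 7),
    )

    # Dashboard-style KPI query: households benefiting, rolled up by region
    for region, households in conn.execute(
        "SELECT region, SUM(households) FROM field_records GROUP BY region"
    ):
        print(region, households)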
The platform was presented to EbA project stakeholders during a hands-on workshop. Participants were presented with an interactive dashboard to visualize the data entered during field work. Modules included principles of data and information, data acquisition, data analysis, visualization of results, reporting, and data quality. Trainees calculated KPI values and further prioritized the most important indicators, such as the number of hectares restored, income generated (USD), and the number of households benefitting from the project. One of the immediate impacts of the working sessions was the participants' growing awareness of the critical importance of village- and community-level data, leading to better ownership of data quality management.
The EbA monitoring platform was then directly linked to the Geoportal so that results could be displayed spatially. The Geoportal is a suite of software products designed by the ICRAF GeoScience team to manage and disseminate geospatial data. In order to ensure sustainability after the completion of a project, the team developed a system based on Free and Open-Source Software for Geospatial (FOSS4G), with GeoNode (www.geonode.org (accessed on 1 October 2021)) as the software suite. To further strengthen local ownership and sustainability, the EbA monitoring platform was linked to the web portal of the Ministry of Environment, Climate Change and Natural Resources. FOSS4G eliminates the need for expensive licenses to be renewed on an annual basis. The Gambia Geoportal modules include:
• User account module: covers user registration, user types, account management, groups, and access privileges.
• Layer module: provides uploaded layers, a section to add new layers, metadata- and style-editing functionality, and sharing functionality.
• Maps module: allows users to view a list of previously created maps, create new maps, and edit maps with new styling functionality.
• Search module: allows users to search data based on access privileges.
• Administrator module: covers the additional privileges of a superuser, including managing the GIS server (GeoServer).
EbA Platform Functionality
The effective monitoring of EbA derives from accurate field data, so the platform is remotely accessible to the field team for data entry. Table 5 below summarizes the fields for data entry for each sub-indicator with respect to KPI 1, relating to numbers of project beneficiaries. The data fields are similar for the other 25 KPIs. These fields capture the key descriptors: time, place, intervention, quantity, beneficiaries, and a textual description.
Data for each of the sub-indicators were then aggregated to build each of the KPIs. Figure 3 depicts the process by which data were aggregated in order to inform project managers of status and impact. Aggregation can be by area (village, district, region, or project-wide) or by theme (tree mortality, gender impact, or both) to compute KPIs.

Table 5. Detailed data fields for the sub-indicators aligned with the key performance indicators (presented here only for KPI 1, "Number of households benefiting from the project", for illustration purposes).

Data can then be aggregated, for example, to generate the number of women participating in income-generating activities by village and compare it with the same statistic for a different location. Another example is a comparative analysis of tree mortality by district to investigate possible causes. The results inform project managers how to better understand challenges to gender impact or tree health and then adapt management planning, if necessary.
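As a concrete illustration of the two aggregations just described, the sketch below rolls the same kind of field records up by village (women's participation) and by district (tree mortality). The record fields and numbers are hypothetical.

    from collections import defaultdict

    records = [
        # (village, district, women_participating, trees_planted, trees_dead)
        ("Village A", "District 1", 7, 500, 60),
        ("Village B", "District 1", 3, 300, 90),
        ("Village C", "District 2", 9, 400, 20),
    ]

    # Aggregation by area and theme: women in income-generating activities per village
    women_by_village = defaultdict(int)
    for village, _, women, _, _ in records:
        women_by_village[village] += women

    # Comparative tree mortality by district, to investigate possible causes
    planted, dead = defaultdict(int), defaultdict(int)
    for _, district, _, n_planted, n_dead in records:
        planted[district] += n_planted
        dead[district] += n_dead
    mortality = {d: dead[d] / planted[d] for d in planted}

    print(dict(women_by_village))  # {'Village A': 7, 'Village B': 3, 'Village C': 9}
    print(mortality)               # {'District 1': 0.1875, 'District 2': 0.05}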
Conditions for access include two levels: one for the general public, and a higher-security level requiring registration for data-entry staff, managers, etc.
The institutions that execute EbA monitoring provide the framework's foundation, as their roles include not only data entry and indicator production but also operating feedback loops to allow for adaptive management. Table 4 identifies these institutions and their information requirements. Information provided on a demand-driven basis is the key to growing the skills and capacity to back EbA implementation. As capacity grows, barriers to EbA implementation are alleviated, as project stakeholders will have the confidence to make informed decisions. Moreover, sharing information across a network of institutions provides insights across sectors.
Results: A Process for Indicator Selection, the EbA Framework with Its Platform, and Initial Applications
The results comprise three elements: (i) a generic, user-friendly, five-step process for selecting EbA indicators, (ii) a robust framework for EbA monitoring based on these indicators, and (iii) initial applications.
The Five-Step Process for Selecting EbA Indicators
The first step in indicator selection comprised a background review of three types of tools and documents:
• a few general documents on how ecosystem resilience is linked to climate change adaptation, for background and context, such as those produced by the Intergovernmental Panel on Climate Change (IPCC) and the Intergovernmental Platform on Biodiversity and Ecosystem Services (IPBES);
• general manuals, short courses, etc. for project management (results-based management, logframes, theory of change, adaptive management, etc.); and
• specific tools depending on the nature of the project, such as IT platforms that incorporate project data and information, including spatially explicit data, to support project planning, management, execution, monitoring, reporting, and evaluation. These tools may integrate biophysical data with socio-economic data at a local project scale to understand win-wins, tradeoffs, and impacts.
The second step consists of an expert review of indicators used for reporting to GCF or other donors to confirm applicability, refine indicators for reporting purposes, and explore potential adaptation for community-level use. The second step may include a review of EbA indicators from different contexts (urban, water, energy, biodiversity, etc.) and geographies (national, regional, continental) versus the local context. The initial assessment of indicators should include cost-effectiveness. Results from the second step feed into the initial platform design.
The third step is the in-country testing of the resulting indicator menu and includes visits to project sites and consultations with government, business, and community leaders and members. Table 4 captures their data and information needs. Indicator refinement includes the development of second-level indicators that allow for targeted adaptation monitoring based on community priorities while aggregating to high-level indicators. The platform design is similarly refined based on the inclusion of second-level indicators. The assessment of cost-effectiveness is continued.
The fourth step includes refinement and confirmation of indicators in a workshop environment. Such a broad consultation favors cross-sector analysis and civil society inclusion.
The fifth step for selecting EbA indicators comprises data collection and analysis during project execution and applies principles of adaptive management as needed. Results from indicator use are also shared with the science community.
The final indicator set for the EbA Gambia project monitoring included sub-indicators that were easy to measure at the relevant scale and could be aggregated for monitoring purposes. The final set was developed after the realization that the EbA interventions were broader than those described in the initial set of KPIs selected during project design. Table 6 presents the final set of KPIs and sub-indicators used for monitoring EbA adaptation.

Table 6. Final set of KPIs and sub-indicators agreed for the EbA Gambia project, with designation as output or outcome indicators.

The development of the EbA monitoring framework yields immediate benefits for the rural community. In particular, the initial set of performance indicators provided by the GCF mostly focused on values set for reporting to the donor. Therefore, the production of community-driven performance indicators directs the project investments toward results that provide benefits to the end users and ultimately enhance The Gambia's efforts to adapt to climate change based on the inherent capacity of its ecosystem resources and the rural population that depends on them. A less tangible but nevertheless important benefit is that the EbA project managers in The Gambia were exposed to processes for monitoring impacts that orient them increasingly toward rural community needs. This benefit can be realized in other efforts during their professional careers.

The EbA Monitoring Framework: Design and Functionality

Figure 4 presents the conceptual framework design, "The System", with its component functions. The System is designed from an organizational perspective to provide a robust architecture for other EbA initiatives globally. The Use the System activity enables internal EbA Gambia project users to perform tasks on handling project-related data, information, and projects, as well as reporting, while providing the necessary description of requirements to the Manage and Oversee the System activity for completing their tasks. These user requirements govern the System. The resulting data are analyzed to help in developing and enforcing policies, guidance, direction, and standards to manage the protection, control, and implementation of the resources needed to deliver the EbA Gambia platform.

The Protect and Secure the System activity takes security parameters from the Manage and Oversee the System activity for the platform resources to minimize their vulnerability to both exploitation and attack, and to prevent unauthorized use. Both the Protect and Secure the System activity and the Control and Operate the System activity monitor the System for vulnerabilities and provide appropriate responses to detected incidents. The Control and Operate the System activity takes policy, guidance, and direction from the Manage and Oversee the System activity and works to ensure the delivery of the EbA Gambia platform. The Provide System Platform activity supplies the elements and components that underpin the tasks to catalog EbA resources, collate EbA data, and report on project status and progress.
The above Methods section describes the data and information that are collected, that flow through the platform, and that are used to produce statistics and support decision making.
The Gambia EbA data and information platform can be accessed at http://ebaproject.worldagroforestry.org/ (accessed on 1 October 2021).
Framework Limitations
Although the process for developing indicators is reusable and the framework can be adapted to different contexts, the platform developed for the GCF Gambia project would need to be adjusted depending on context and user needs. In time, as knowledge of new management needs is refined, the platform would need to be further adapted.
Finally, although user training is not a limitation, a training programme is a required component of framework development.

Examples of Initial EbA Monitoring Applications for EbA Planning

Figure 5 depicts the results of a preliminary spatial analysis showing preferred areas for EbA activities such as beekeeping, forest species enrichment planting, and forest products enterprises adjacent to the community of Gaindeh Njie. These results were assessed by community leaders and the EbA project team in order to prioritize the locations and types of EbA intervention.
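The siting exercise behind Figure 5 can be illustrated with a minimal weighted-overlay sketch in Python. The layers, weights, and threshold below are hypothetical placeholders, not the project's validated criteria; real inputs would be rasters derived from the platform's KPI data.

```python
import numpy as np

# Hypothetical normalized raster layers (values in 0-1) around Gaindeh Njie.
rng = np.random.default_rng(7)
tree_cover      = rng.random((50, 50))    # fraction of tree cover per cell
water_proximity = rng.random((50, 50))    # closeness to water points
village_access  = rng.random((50, 50))    # closeness to the community

# Assumed weights for a beekeeping suitability score; real weights would come
# from the community-validated KPIs, not from this sketch.
suitability = (0.5 * tree_cover
               + 0.3 * water_proximity
               + 0.2 * village_access)

preferred = suitability > np.quantile(suitability, 0.9)   # flag the top 10% of cells
print(f"{preferred.sum()} of {preferred.size} cells flagged as preferred sites")
```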
Discussion and Conclusions
The research presented here respects the requisite five EbA principles mentioned in the introduction and proposes a monitoring framework which includes indicators that address each of them. The findings support the position that, although a universal framework is not applicable to all EbA initiatives, the framework and process from EbA Gambia reflects a strong agreement from beneficiaries and can be used for EbA projects with similar objectives.
The results presented here build on previous research (for example, Sanneh et al. [32]) to prioritize approaches to CC adaptation through the implementation of The Gambia's NAPA.
The framework contributes to our global understanding of how best to use ecosystem resources for adapting to climate change and offers a framework that can be adapted to other regions. It complements the set of tools described in Table 2 by offering a robust framework within which such tools can be applied.
The benefit of the GCF/Gambia EbA monitoring is to allow users to analyze trends in socio-economic growth within the target communities (changes in income level, diversity of livelihoods increased, increase in numbers and diversity of community groups working in EbA) along with changes in ecosystem status (% tree cover, tree species diversity) to adjust project management and inform policy as needed. The platform can be adapted, with modifications, to other community-based, EbA projects.
The robust framework allows for donor reporting while tracking impacts most relevant to local community needs.
An unanticipated impact was the project managers and implementers' increased awareness of the critical importance of community-level data, leading to better ownership of data quality management. Within the GCF/Gambia project team, users appreciated the importance of accommodating changes in KPIs and associated attributes during the validation exercise. Only when data are collected in the field is it possible to understand the issues surrounding data aggregation, relevance and possible weaknesses with the KPIs.
The main gaps are those related to the utility of the platform beyond reporting on KPIs for which project activities were initially focused. Future research needs will focus on making the platform more robust to explore and support new management needs.
Such adaptation monitoring frameworks and the information they generate will always be needed for effective EbA projects. Nevertheless, the authors emphasize the importance of on-the-ground experience particularly when interpreting integrated biophysical with socio-economic data and information.
Further research will be needed to understand the extent to which the changes in The Gambia compare to environmental changes across the Sahel and Sudano-Sahel in West Africa. For example, project results will be compared with regional vegetation greenness indices over a 5-10 year period to show impacts and trends. This task goes beyond the remit of the current GCF/The Gambia project; however, it is globally and regionally relevant because it demonstrates how to adapt to the worst of climate change impacts.
The silence of binary Kerr
A non-trivial $\mathcal{S}$-matrix generally implies a production of entanglement: starting with an incoming pure state, the scattering generally returns an outgoing state with non-vanishing entanglement entropy. It is then interesting to ask if there exists a non-trivial $\mathcal{S}$-matrix that generates no entanglement. In this letter, we argue that the answer is the scattering of classical black holes. We study the spin entanglement in the scattering of arbitrary spinning particles. Augmented with Thomas-Wigner rotation factors, we derive the entanglement entropy from the gravitationally induced $2\rightarrow 2$ amplitude. In the Eikonal limit, we find that the relative entanglement entropy, defined here as the \textit{difference} between the entanglement entropy of the \textit{in} and \textit{out}-states, is nearly zero for minimal coupling irrespective of the \textit{in}-state, and increases significantly for any deviation of the spin multipole moments. This suggests that the minimal coupling of spinning particles, whose classical limit corresponds to the Kerr black hole, has the unique feature of generating near-zero entanglement.
I. INTRODUCTION
One of the fascinating realizations in the interplay of gravitational scattering amplitudes and the dynamics of compact binary systems is the equivalence of minimally coupled spinning particles and rotating black holes. In the analysis of three-point amplitudes of particles with general spin, a unique amplitude for a massive spin-s particle emitting a massless graviton was defined kinematically in [1] and termed minimal coupling. The term reflects its matching to minimal derivative coupling when taking the high energy limit for s ≤ 2. Since massless particles have spins bounded by 2 in flat space, the role of minimal coupling with s > 2 was initially not clear.

Through a series of subsequent analyses [2][3][4][5][6], it was understood that the spin multipoles generated by minimal coupling are exactly those of a spinning black hole, i.e. the spin moments in the effective stress-energy tensor of the linearized Kerr solution. This was verified by reproducing the Wilson coefficients of the one-particle effective theory (EFT) [7,8] for the Kerr black hole [3], and the classical scattering angle at leading order in the Newton constant G to all orders in spin [4].
While the equivalence can be established through various direct matchings, the principle that underlies such correspondence remains unclear. In this letter, we seek to answer this by studying the spin entanglement entropy. We will use the action of the 2 → 2 S-matrix in the Eikonal limit on two-particle spin-states, measuring the relative entanglement entropy of the final state, defined as

$\Delta S \equiv S_{\rm VN}(\rho_{\rm out}) - S_{\rm VN}(\rho_{\rm in})\,, \qquad (1)$

where $\rho_{\rm in,out}$ is the reduced density matrix for the in- and out-state. Remarkably, we find that ΔS ≈ 0 when the S-matrix is the one associated with minimal coupling, or equivalently, when the EFT Wilson coefficients are set to the black hole value, unity. Any deviation from unity significantly increases the relative entropy.
II. ENTANGLEMENT VIA S-MATRIX
The study of entanglement in scattering events has a long history; for recent developments we refer to [9][10][11][12]. Denote the two-particle Hilbert space by H = H_a ⊗ H_b; each subsystem can be further divided into spin and momentum degrees of freedom, e.g. H_a = H_{s_a} ⊗ H_{p_a}. In computing the entanglement from scattering, there are two sources of difficulty. First, the trace over momentum states leads to divergences due to the infinite space-time volume, and introducing a cut-off leads to regulator-dependent results, see e.g. [13,14]. Second, under Lorentz rotations the spin undergoes Thomas-Wigner rotation and one does not have a Lorentz-invariant definition of the reduced density matrix [15,16] (see [17] for further discussions).
On the other hand, the same difficulty also appears in the extraction of the conservative Hamiltonian of binary systems from relativistic scattering amplitudes. In particular, in a 2 → 2 scattering process, the spin (little group) space of the incoming particles is invariably distinct from that of the outgoing particles, as their momenta are distinct. However, by augmenting the S-matrix with Thomas-Wigner rotation factors, the final state can be mapped back into the spin Hilbert space of the incoming state. Indeed, such a Hilbert space matching procedure was used heavily in the computation of the spin-dependent part of the conservative Hamiltonian [18][19][20].

We thus consider elastic scattering in the spin Hilbert space H = H_{s_a} ⊗ H_{s_b}. With a given in-state, we can obtain the out-state via the S-matrix as

$|{\rm out}\rangle = U_a U_b\, S\, |{\rm in}\rangle\,, \qquad (2)$

where $U_{a,b}$ are the Hilbert space matching factors, which will be discussed in the next section [21]. The total density matrix of the out-state is then simply $\rho^{\rm out}_{a,b} = |{\rm out}\rangle\langle{\rm out}|$, and the reduced density matrix is given by $\rho_a = {\rm tr}_b\, \rho_{a,b}$. Equipped with $\rho_a$ we can consider a variety of entanglement quantifiers. A canonical choice is the entanglement entropy, i.e. the von Neumann entropy of the reduced density matrix, $S_{\rm VN} = -{\rm tr}_a[\rho_a \log \rho_a]$. Note that $S_{\rm VN}$ in principle depends on the in-state. For a quantifier that is independent of the in-state, we can consider the entanglement power [22], given by the average of the entanglement entropy over in-states,

$\mathcal{E} = \int_{\Omega} d\Omega\; S_{\rm VN}(\rho_a)\,, \qquad (3)$

where Ω represents the spin-s phase space.
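Concretely, eqs. (1)-(3) amount to a partial trace followed by an eigenvalue computation, which is easy to sketch numerically. In the following Python fragment the phase operator χ = θ σz ⊗ σz is a hypothetical two-qubit stand-in for the Eikonal phase constructed later, and the Monte Carlo average over random product in-states is only a proxy for the phase-space integral of eq. (3):

```python
import numpy as np
from scipy.linalg import expm

def reduced_density_matrix(psi, dim_a, dim_b):
    """rho_a = tr_b |psi><psi| for a pure state psi in H_a (x) H_b."""
    m = psi.reshape(dim_a, dim_b)
    return m @ m.conj().T

def von_neumann_entropy(rho):
    """S_VN = -tr(rho log rho), computed from the eigenvalues of rho."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]                      # discard numerical zeros
    return float(-np.sum(lam * np.log(lam)))

# Toy two-qubit "Eikonal" phase operator: chi = theta * (Sz (x) Sz).
sz = np.diag([0.5, -0.5])
chi = 0.6 * np.kron(sz, sz)
S = expm(1j * chi)                              # unitary S-matrix, S = e^{i chi}

plus = np.array([1.0, 1.0]) / np.sqrt(2)        # superposition of Sz eigenstates
in_state = np.kron(plus, plus)                  # pure product in-state
out_state = S @ in_state

# Relative entanglement entropy, eq. (1); S_in vanishes for a product state.
dS = (von_neumann_entropy(reduced_density_matrix(out_state, 2, 2))
      - von_neumann_entropy(reduced_density_matrix(in_state, 2, 2)))
print(f"Delta S = {dS:.4f}")

# Entanglement power, eq. (3): average over random product in-states.
rng = np.random.default_rng(0)
def random_spinor(rng):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

samples = [von_neumann_entropy(reduced_density_matrix(
               S @ np.kron(random_spinor(rng), random_spinor(rng)), 2, 2))
           for _ in range(2000)]
print(f"entanglement power ~ {np.mean(samples):.4f}")
```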
In the following, we will consider the elastic S-matrix acting on $|{\rm in}\rangle = |s_a\rangle \otimes |s_b\rangle$, i.e. the in-state is set up as a pure product state. Thus, by computing the entanglement entropy of the out-state, we obtain the entanglement enhancement of the scattering process.
III. THE EIKONAL AMPLITUDE IN SPIN SPACE
In this section, we compute the leading-order amplitude ab → a′b′ for general massive spinning particles in the Eikonal limit. Working in the center-of-mass frame, with momentum transfer $q = p_a - p_{a'} = (0, \vec q)$, the Eikonal limit corresponds to $q^2 \rightarrow 0$. After a Fourier transform to impact parameter space, we obtain the Eikonal phase, whose exponentiation yields the S-matrix in the Eikonal limit.
A. Spin-s Amplitudes and Hilbert Space Matching
We begin with the scattering of spinning particles induced by gravitational interactions. At leading order in the Newton constant G, the four-point amplitude for ab → a′b′ illustrated in fig. 1 can be written schematically as [19]

$M_4 \propto \frac{G\, m_a^2 m_b^2}{q^2} \sum_{\eta=\pm 1} e^{2\eta\Theta}\; \bar\varepsilon_{a'}\, W_a(\eta\tau_a)\, \varepsilon_a\; \bar\varepsilon_{b'}\, W_b(\eta\tau_b)\, \varepsilon_b\,, \qquad (4)$

where $q^\mu$ is the transfer momentum, $\varepsilon_i$ is the polarization tensor of each spinning particle, $\tau_{a,b} = q\cdot S/m_{a,b}$, the exponential parameter is defined by $\cosh\Theta \equiv p_a\cdot p_b/(m_a m_b)$, and $\eta = \pm 1$ labels the exchanged graviton's helicity. The function $W(\eta\tau)$ is defined as

$W(\eta\tau) = \sum_{n=0}^{2s} \frac{C_n}{n!}\,(\eta\tau)^n\,, \qquad (5)$

where S is the Pauli-Lubanski spin-vector and $C_{a,n}$, $C_{b,n}$ parametrize the possible distinct couplings for particles a, b. For a spin-s particle there are 2s degrees of freedom. These coefficients can be directly matched to the Wilson coefficients of the one-particle effective action (see [23] for the all-orders-in-spin action) in the classical-spin limit, which corresponds to taking $s \rightarrow \infty$, $\hbar \rightarrow 0$ while keeping the classical spin $S \equiv s\hbar$ fixed (see [24] for a more detailed discussion). For rotating black holes, all Wilson coefficients are set to one, i.e. $C_{a,n} = C_{b,n} = 1$.
As shown in ref. [18], we can turn the spin-vector S into an operator acting in the little group space through the insertion of a complete set of polarization tensors associated with the incoming particles,

$S^\mu \;\rightarrow\; \mathbb{S}^\mu = \bar\varepsilon^{\{I_s\}}\, S^\mu\, \varepsilon_{\{J_s\}}\,,$

where $\{I_s\}$, $\{J_s\}$ are the SU(2) indices of particles a, b.

In components, the spin-s rest-frame spin operator Σ satisfies the commutation relation $[\Sigma^i, \Sigma^j] = i\,\epsilon^{ijk}\,\Sigma^k$, and the corresponding operator τ in the little group space is

$T_{a,b} = \frac{q\cdot \mathbb{S}_{a,b}}{m_{a,b}}\,.$

Writing eq. (4) in terms of T leads to an amplitude that corresponds to an operator acting on states in distinct little group spaces, as the momenta of a, b are distinct from those of a′, b′. This can be rectified by the so-called Hilbert space matching procedure, which utilizes the Lorentz transformation relating the momenta of the in-states to those of the out-states to convert the out-state Hilbert space back to that of the in-states [18,19]. The result is an additional Thomas-Wigner rotation factor for each of the two particles. For particle a this factor is, at leading order in $q^2$, expressed in terms of the invariant $E_a \equiv \epsilon(q, u_a, u_b, a_a) = \epsilon_{\mu\nu\rho\sigma}\, q^\mu u_a^\nu u_b^\rho a_a^\sigma$, with $a_a = S_a/m_a$, $u_{a,b} = p_{a,b}/m_{a,b}$ and $E = E_a + E_b$. In summary, the amplitude after the Hilbert space matching, denoted $M'$, follows from dressing eq. (4) with these rotation factors (eq. (10)). Expanding eq. (10) up to order $O(S^{2s_i})$ gives an expansion of the form $\sum_{m,n} A_{m,n}\, T_a^m\, T_b^n$ (eq. (11)), where we use the shorthand notation $T_{a,b} \equiv (q\cdot a_{a,b})$. The explicit form of the coefficients $A_{m,n}$ in eq. (11), up to m, n = 2, involves the Wilson coefficients $C_{a,2}$ and $C_{b,2}$ of each particle, with $(c_\Theta, s_\Theta) \equiv (\cosh\Theta, \sinh\Theta)$ and $r_{a,b} \equiv 1 + E_{a,b}/m_{a,b}$. We can see that the Wilson coefficients $C_{a,n}$ and $C_{b,n}$ start to appear at $A_{2,0}$, which means that we need to go at least to spin 1 to compare the difference between black holes and other objects.
B. Eikonal Phase
The Eikonal phase, at order O(G), is given simply by the Fourier transform of the tree-level amplitude in eq. (10) to impact parameter space,

$\chi(\vec b) = \frac{1}{4 m_a m_b \sinh\Theta} \int \frac{d^2\vec q}{(2\pi)^2}\; e^{i\vec q\cdot\vec b}\, M(q)\,.$

Since $q^2 \rightarrow 0$ in the Eikonal limit, we have $q\cdot p = q^2/2 \rightarrow 0$. This orthogonality between q and p defines the impact parameter space, the plane perpendicular to the incoming momentum, i.e. $\vec b = (b_x, b_y, 0)$. Note that in this limit we can simply replace all S in eq. (11) by Σ, the rest-frame spin operator. The S-matrix in the Eikonal approximation is then the exponential of the phase,

$S_{\rm Eikonal} = e^{i\chi}\,.$

This allows us to write the out-state in the Eikonal approximation by replacing the matrix elements of $U_a U_b S$ in eq. (2) with those of $S_{\rm Eikonal}$:

$|{\rm out}\rangle = e^{i\chi}\, |{\rm in}\rangle\,.$
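As a toy illustration of how the exponentiated phase acts on spin space, the following sketch contrasts a factorized phase (a sum of single-particle terms, which exponentiates to a product of local unitaries and therefore cannot generate entanglement) with a cross term coupling the two spins; the operators and coefficients are invented for illustration and are not the $A_{m,n}$ of eq. (11).

```python
import numpy as np
from scipy.linalg import expm

# Spin-1 rest-frame operator Sigma_z and the 3x3 identity.
Sz = np.diag([1.0, 0.0, -1.0])
I3 = np.eye(3)

def delta_S(chi, in_state):
    """Entanglement entropy of e^{i chi}|in>; it is the relative entropy
    when |in> is a product state (whose entropy vanishes)."""
    out = expm(1j * chi) @ in_state
    m = out.reshape(3, 3)
    lam = np.linalg.eigvalsh(m @ m.conj().T)    # eigenvalues of rho_a
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

# A product in-state of two spin-1 particles (superpositions of Sz eigenstates).
v = np.ones(3) / np.sqrt(3)
in_state = np.kron(v, v)

chi_local = 0.7 * np.kron(Sz, I3) + 0.4 * np.kron(I3, Sz)   # factorizes: U_a (x) U_b
chi_cross = 0.5 * np.kron(Sz, Sz)                           # genuine a-b coupling

print(f"local phase only: Delta S = {delta_S(chi_local, in_state):.4f}")   # ~ 0
print(f"with cross term:  Delta S = {delta_S(chi_cross, in_state):.4f}")   # > 0
```

In this language, only the non-factorizable part of χ can change ΔS, since a product of local unitaries leaves the entanglement of any in-state untouched.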
IV. THE ENTANGLEMENT ENTROPY OF BINARY SYSTEMS
We now have all the ingredients necessary to compute the entanglement entropy and the entanglement power of the out-state in the Eikonal approximation. We first compute the entanglement entropy for spin-1 particles, which corresponds to keeping spin operators up to the second power for each particle in the Eikonal phase. Starting with the pure state $|{\rm in}\rangle = |{\uparrow\uparrow}\rangle$, the entanglement entropy of the resulting out-state directly yields the relative entropy ΔS in eq. (1). The result is plotted in fig. 2 against the Wilson coefficient pair $(C_{a,2}, C_{b,2})$. Remarkably, the minimum is exactly at the Kerr black hole value $C_{a,2} = C_{b,2} = 1$, and deviating from this point raises the entropy of the system. This is unchanged for different choices of in-states, as illustrated by the computation of the entanglement power given by eq. (3), also shown in fig. 2.
In order to show that this is indeed a robust result, we also consider higher spins. Using the same setup, we calculate the relative von Neumann entropy for spin-3 massive particles, which have a total of 5 + 5 = 10 Wilson coefficients. In our extensive scan, we find that the black hole value, $C_{a,i} = C_{b,i} = 1$ for i = 2, · · · , 6, is the unique point that gives the minimum. As an illustrative example, we set all Wilson coefficients to one except the pair $(C_{a,2}, C_{b,2})$ and plot ΔS with respect to $(C_{a,2}, C_{b,2})$ in fig. 4. The results show the minimum at (1, 1), while the two orthogonal valleys represent keeping only one of the coefficients at one. In fig. 5, we vary $C_{a,2} = C_{b,2} = C_2$ and $C_{a,3} = C_{b,3} = C_3$ while keeping all remaining coefficients at one. Once again the corresponding black hole point gives near-zero entanglement.
While the deformation of each Wilson coefficient away from unity raises the entanglement entropy, the effect of $C_2$ is comparatively dominant. This is illustrated in fig. 3, which compares ΔS for deformations of the three different pairs of Wilson coefficients in the spin-2 system. We observe that deforming $(C_{a,2}, C_{b,2})$ has the dominant effect in generating entanglement.
Finally, we expect that including higher spins does not change the main result: the minimum of the relative entropy is always at the Kerr black hole Wilson coefficient point, and moving away from this point quickly increases the entanglement entropy. A comparison between the spin-1, spin-2 and spin-3 cases, keeping all Wilson coefficients at one except $C_{a,2}$, can be seen in fig. 6.

FIG. 6. Comparison between the relative entanglement entropy for spin-1, spin-2 and spin-3. All Wilson coefficients are set to one except $C_{a,2}$.
V. CONCLUSIONS
In this letter, we considered the entanglement entropy generated by gravitationally coupled binary systems. By considering the Hilbert space of spin states, we demonstrated that minimal coupling for massive particles of arbitrary spin has the unique feature of generating nearly zero entanglement in the scattering process. Given the correspondence between minimal coupling and rotating black holes, the result suggests that this feature can also be attributed to the entanglement properties of spinning black holes. Note that such a phenomenon is reminiscent of what was found in strong interactions, where entanglement suppression is associated with symmetry enhancement [10].
As mentioned in the introduction, there is a general correspondence between minimal coupling and black-hole-like solutions in four dimensions. This includes Reissner-Nordström, Kerr-Newman [25,26], Taub-NUT [27] and Kerr-Taub-NUT [28]. Furthermore, gravitationally induced spin multipoles have also been studied recently in the context of fuzzball microstates [29]. For Kerr-Newman there are additional electromagnetic spin multipoles, while for Kerr-Taub-NUT and fuzzballs, the minimal couplings are dressed with additional complex phase factors. It will be fascinating to explore their features in the spin entanglement entropy. Finally, it will also be interesting to understand quantum corrections, in particular whether or not they generate anomalous gravitational multipole moments.
New Insights on the Management of Wildlife Diseases Using Multi-State Recapture Models: The Case of Classical Swine Fever in Wild Boar
Background The understanding of host-parasite systems in wildlife is of increasing interest in relation to the risk of emerging diseases in livestock and humans. In this respect, many efforts have been dedicated to controlling classical swine fever (CSF) in the European Wild Boar. But CSF eradication has not always been achieved even though vaccination has been implemented at a large-scale. Piglets have been assumed to be the main cause of CSF persistence in the wild since they appeared to be more often infected and less often immune than older animals. However, this assumption emerged from laboratory trials or cross-sectional surveys based on the hunting bags. Methodology/Principal Findings In the present paper we conducted a capture-mark-recapture study in free-ranging wild boar piglets that experienced both CSF infection and vaccination under natural conditions. We used multi-state capture recapture models to estimate the immunization and infection rates, and their variations according to the periods with or without vaccination. According to the model prediction, 80% of the infected piglets did not survive more than two weeks, while the other 20% quickly recovered. The probability of becoming immune did not increase significantly during the summer vaccination sessions, and the proportion of immune piglets was not higher after the autumn vaccination. Conclusions/Significance Given the high lethality of CSF in piglets highlighted in our study, we consider it unlikely that piglets could maintain the chain of CSF virus transmission. Our study also revealed the low efficacy of vaccination in piglets in summer and autumn, possibly due to the low palatability of baits to that age class, but also to the competition between baits and alternative food sources. Based on this new information, we discuss the prospects for the improvement of CSF control and the interest of the capture-recapture approach for improving the understanding of wildlife diseases.
Introduction
Understanding the mechanisms of disease dynamics in wildlife populations is of increasing interest in relation to the risk of emerging diseases in livestock and humans [1]. In this respect, wild boar (Sus scrofa sp.) have been the subject of much work as the increase in their numbers throughout Europe has led to an increasing risk of disease emergence, persistence and transmission to other species [2,3]. The classical swine fever (CSF) virus is one of the persisting pathogens observed among European wild boar populations [4,5,6,7,8] and represents a major source of disease for the domestic pig, with potentially substantial economic consequences [9]. The management of wild CSF outbreaks is mandatory in the European Union (Council Directive 2001/89 EC). Oral vaccination is considered as the main tool for controlling CSF in the wild [10,11]. However, infection has sometimes persisted for years or re-emerged despite a huge vaccination effort [11]. Accordingly, a better understanding of CSF dynamics and vaccination effect is required.
Because they appeared to be more often infected and less often immune than older animals, young wild boar have been assumed to be important virus carriers, which had to be either destroyed or vaccinated early in life [4,6,12]. However, hypotheses on the role of piglets and their capacity to eat the vaccine-baits have been derived from experiments conducted under laboratory conditions [13,14,15,16] or from the percentages of immune/infected individuals observed in the hunting bags [4,12,7]. The interpretation of the effect of vaccination using hunting data is particularly questionable because sampling bias can never be ruled out in cross-sectional studies. Moreover, vaccination and infection induce the same antibody reaction: a seropositive individual could either have been vaccinated or have been infected and have recovered [11]. To our knowledge, longitudinal studies aiming to describe the individual outcome of infection and immunization have never been performed in the wild.
The present paper investigates individual histories in free-ranging wild boar that were captured, marked and recaptured. The study was performed in an area where a natural outbreak of CSF occurred and where vaccination was implemented [17]. We targeted 2-7 month old piglets, which were supposedly the most at risk of being infected [11] and which could be recaptured more frequently than older individuals [18]. A multi-state capture-mark-recapture (CMR) modelling approach was used to estimate the probability of becoming infected and of becoming immune during and outside of the vaccination periods.
Using this approach, we first described the outcome of infection (duration/mortality) in piglets in the wild to discuss their capacity to maintain the chain of transmission. Secondly, we determined the effect of vaccination in piglets and the prospects for improving CSF control in wild populations.
Study area
The study was conducted in the Petite Pierre National Reserve (PPNR), north-eastern France (48.5°N, 7°E) [19,20]. The PPNR is an unfenced 2,800 ha area located in the Vosges Mountains, i.e., a continuous forested area (>3,000 km²) inhabited by a wild boar metapopulation where the CSF virus has been demonstrated to circulate (Fig. 1) [17,21]. Two CSF waves have been documented in the Vosges Mountains: a first wave during the 1990s and a second wave from 2003 to 2007 [17,21] (Fig. 1). During the second wave, the CSF virus was observed in the PPNR from January 2005 up to November 2006. An approximate number of 400 wild boar (before the hunting period and after births) may be estimated, considering that 150 wild boar are hunted on average each year in the PPNR, and assuming that each individual wild boar has the same probability of being shot as in the area studied by Toigo et al. (2008) [22].
Wild boar sampling
Captures were performed once a week from 18th May to 24th August 2005 and from 9th May to 21st September 2006 (Fig. 2), using box traps specifically adapted for catching piglets [23]. In order to maximize the probability of capturing different individuals, 11 traps were set in different valleys. Blood samples were taken for serological and virological examination. Each trapped animal was marked with ear-tags to allow individual identification and was released immediately after handling, without anaesthesia.
All wild boar killed by hunters in the study area and its surroundings were compulsorily subjected to serological and virological examinations [17,24,25]. We focused our analysis on individuals less than one year old shot in November (i.e., just after the autumn vaccination sessions). Individuals were aged from tooth eruption or body weight [19,18], with carcasses of less than 30 kg assumed to be less than 1 year old.
Diagnosis of disease status
For antibody examination, commercially available ELISA kits (Herdcheck CSFV Antibody test kit or CHEKIT CSF SERO Antibody, both distributed by IDEXX® and having the same sensitivity) were used according to the manufacturer's instructions.

For virological examination, the CSF virus genome was first amplified by real-time polymerase chain reaction (r-RT-PCR) using a commercial kit (TAQVET PPC® or ADIAVET CSF®) according to the manufacturer's instructions [26,27,28]. To confirm a viropositive result, virus isolation or sequencing was performed on the PCR-positive samples at the French Reference Laboratory for CSF (ANSES) according to the EU Diagnostic Manual for CSF (Decision 2002/106/EC).
Oral vaccination
Oral vaccination had been implemented in the study area since February 2005 according to the protocol recommended by [29], i.e., three 1-month-interval double distributions of vaccine-baits in spring, autumn and winter. In 2005, distributions were conducted on 12th February/12th March (winter), 7th May/4th June (spring) and 27th August/24th September (autumn). In 2006, distributions were conducted on 25th March/22nd April (winter), 3rd June/1st July (summer) and 9th September/7th October (autumn) (Fig. 2). Vaccination was expected to influence the proportion of immune individuals 2 to 4 weeks after each vaccination, because bait consumption occurs within a few days of deployment [30,31] and 2 to 4 weeks are required for seroconversion [32]. Accordingly, most piglets were younger than 4.5 months during the winter and spring vaccinations, but older during the autumn vaccinations. According to laboratory experiments [15], piglets are likely to eat baits from the age of 4.5 months; the probability of becoming immune was thus expected to be much higher after the autumn than after the summer vaccination session.
Seroprevalence
To test the effect of autumn vaccination on immunity in piglets, we compared the proportion of immune individuals (seroprevalence) among those captured before vaccination (August) and those shot after vaccination (November). For this purpose we used only the last observation for piglets captured in August and we tested the difference between these two proportions using the normal approximation. The statistical analyses were performed using R 2.7.2 (the R foundation for statistical computing 2008, available at http://www.r-project.org/).
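For illustration, the normal-approximation comparison of two proportions can be written in a few lines of Python; the counts below are hypothetical placeholders, not the study's data (the analyses in the paper were run in R 2.7.2).

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Normal-approximation (z) test for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)            # pooled proportion under H0: p1 == p2
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided
    return z, p_value

# Hypothetical counts: immune piglets among those captured in August (x1/n1)
# versus those shot in November after the autumn vaccination (x2/n2).
z, p = two_proportion_ztest(x1=12, n1=40, x2=18, n2=49)
print(f"z = {z:.2f}, two-sided p = {p:.3f}")
```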
Multi-state capture-mark-recapture approach

The Jolly movement model (JMV). In wildlife ecology, capture-mark-recapture (CMR) modelling has been developed for estimating the survival rate of animals that have been marked and recaptured (or resighted) from time to time and for which the date of death is unknown [34]. The data collected in CMR approaches for one individual (the individual history) can be summarized as a series of ones and zeros, the animal being recaptured or not recaptured during a series of capture sessions. Specific multiplicative multinomial models have been developed for estimating separately the probability of survival and of recapture between two capture events for a group of individuals. These models have been progressively generalized to take into account differences in capture and survival rates over time or among different groups [34]. Multistate CMR models were then developed to take into account the fact that individuals may also move between different "states" from time to time. In multistate CMR approaches, the individual history is a series of zeros (no successful recapture) and categorical values recording the state of the individual at each effective capture (Fig. 3). In order to take into account possible "movement" between states over time, the Jolly movement model (JMV) was developed for estimating the probability of transition from one state to another between two capture sessions [35]. According to the JMV model (Fig. 3), the recapture of one individual at time t+1 in state j, given that this animal was captured at time t in state i, depends on three probabilities: first, the probability of survival depending on the initial state i; then the probability of transition between states i and j (conditional on survival); and lastly, the probability of being recaptured, which may either be constant or depend on time, groups or states. The model parameters are estimated by iterating between the model and the observed data, according to the principle of maximum likelihood [35].
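To make the data structure concrete, the following Python sketch simulates encoded individual histories under a three-state JMV-type model; all rates are invented for illustration and are not the fitted estimates reported below.

```python
import numpy as np

rng = np.random.default_rng(1)

# Weekly survival by state, and transition probabilities conditional on survival
# (illustrative values only). INF -> SU is excluded, as in the study design.
phi = {"SU": 0.95, "INF": 0.60, "IM": 0.97}
psi = {
    "SU":  {"SU": 0.90, "INF": 0.08, "IM": 0.02},
    "INF": {"INF": 0.80, "IM": 0.20},
    "IM":  {"IM": 0.95, "SU": 0.05},
}
p_capture = 0.3   # weekly recapture probability

def simulate_history(state, n_weeks):
    """Encoded history: '0' = not seen that week, otherwise the observed state."""
    hist, alive = [], True
    for _ in range(n_weeks):
        if alive and rng.random() > phi[state]:
            alive = False                       # death between capture sessions
        if alive:
            states = list(psi[state])
            state = rng.choice(states, p=[psi[state][s] for s in states])
            hist.append(state if rng.random() < p_capture else "0")
        else:
            hist.append("0")                    # dead animals are never recaptured
    return hist

histories = [simulate_history("SU", 12) for _ in range(100)]
print(histories[0])   # e.g. ['0', 'SU', '0', '0', 'INF', '0', ...]
```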
Application of the JMV model to epidemiology. In the present study, the JMV model was used to estimate the survival of piglets and the probability of any piglet moving between the three states previously defined (Fig. 4). We were particularly interested in estimating the immunization and infection rates classically described in epidemiological models, corresponding to the probabilities for any susceptible piglet of becoming immune or infected (T SU→IM or T SU→INF) between two capture sessions [36,37,38] (Fig. 4). The captures were performed weekly to take into account the virus dynamics, our recapture capacity and the welfare of wild piglets (maximum of one bleeding per animal per week). All "movements" were considered possible, except from state INF to state SU, because infected individuals either die or recover but never move back to the susceptible state [32]. Since we captured piglets less than 7 months old, antibodies had three potential origins: natural infection, vaccination or maternally derived antibodies. Differentiation between antibody origins on a single blood sample was not possible (Pol, unpublished data), so we explored the variation of the immunization rate according to the time period. In 0-3 month old piglets, maternally derived antibodies (MDA) gradually disappear [39]. Contrary to MDA, the immunity induced by vaccination or natural infection (active immunity) is considered lifelong whatever the age of the piglets [32]. As a result of oral vaccination, the probability of becoming immune was expected to increase during the vaccination periods (i.e., 2 to 4 weeks after each vaccination session), while active immunity was expected to occur at any time due to infection. We also considered that susceptible animals becoming immune outside of the vaccination periods could have been infected for a short time without being observed during this period (INF unobserved) (Fig. 4). To address these biological hypotheses we explored the variations of the immunization rate according to three time periods (Fig. 2):

- Period 1: when piglets were on average 0-3 months old and vaccination was not performed, the probability of losing antibodies (passively transmitted by the mother) was expected to be higher than during the two other periods;
- Period 2: during the vaccination sessions, whatever the age of the piglets, the probability of acquiring antibodies (after consuming the oral vaccine) was expected to be higher than during the two other periods;
- Period 3: when piglets were on average more than 3 months old and vaccination was not performed, piglets were no longer expected to lose or acquire antibodies, except through some unobserved short-term non-lethal infection.
We conducted separate analyses for 2005 and 2006, because different individuals were concerned. In order to detect possible infringement of the model hypotheses (recapture heterogeneity between individuals or over time) we performed goodness-of-fit (GOF) tests of the fully time-dependent Jolly Move model (JMV), using the program U-Care 2.2.5 [40] (available at http://www.cefe.cnrs.fr). Then, taking into account the GOF analysis, the JMV modelling was performed using M-SURGE 8 [35,41] (available at http://www.cefe.cnrs.fr). We compared the models, either assuming a constant survival or a survival depending on the state. Survival was expected to be lower in infected than in uninfected individuals owing to the potentially lethal effect of the CSF virus [16,42]. Survival might also be lower in susceptible than in immune animals due to the occurrence of lethal acute infections in piglets that were thus no longer captured. Starting with the best model regarding survival, we compared the models assuming that transition probabilities were either dependent on or independent of the time periods previously defined. Given that we aimed to test the effect of several covariables (state, time periods) on the survival and the transition probabilities, which enhanced the risk of type I error, and that the models we compared were not all nested, model selection was based on the Akaike Information Criterion corrected for small sample size and adjusted for over-dispersion (QAICc) [43]. When the difference in QAICc was less than 2, the most parsimonious model was selected [43]. Once the model selection was achieved, significant differences between specific parameters of the "best model" were tested using Wald tests at the threshold of p ≤ 0.05, using M-SURGE 8 [41,43]. Finally, the selected model was used to predict the proportions of the different disease courses (lethal-chronic, lethal-acute, and transient, i.e., infected animals that recover before 4 weeks post-infection), as defined by Kramer-Schadt et al. [44]. Initial proportions of SU, INF, and IM used in the simulations were those observed at first capture, and the initial number of piglets was arbitrarily fixed at 1000 to scale the results.
Capture and hunting data
From May to August 2005, 116 piglets were captured between one and 14 times, among which 21 were infected. Among these 21 piglets, none was captured and identified as infected in more than 2 consecutive weeks: they were subsequently either captured and recorded as immune, or not recaptured. From May to September 2006, 218 piglets were captured between one and 17 times, among which none was infected. In November 2005 and November 2006 we sampled 49 and 76 hunted piglets (7-10 months old), respectively.
Goodness of fit of the JMV model
Capture transience, corresponding to animals captured only once, was detected in both 2005 (χ² = 33.5, df = 14, P = 0.002) and 2006 (χ² = 70.8, df = 20, P < 0.001). This is not surprising because the study area was not fenced and was not large enough to include the home ranges of captured wild boar, and many animals could potentially be captured once while dispersing or at the edge of their home range [45]. Capture transience is an infringement of the assumptions of the JMV model and generates bias in the estimation of survival [46]. To avoid this bias, we removed the first capture from all life histories [47], so that the analyses were finally restricted to individuals captured at least twice.
Selection of CMR models
In 2005, we observed all three disease states previously defined. In 2006, however, only states SU and IM were represented. The number of parameters and QAICc of the models are detailed in Table 1. In 2005, the probability of antibody loss was higher during period 2 (T IM→SU, period 2 = 0.177; se = 0.081) than during period 3 (T IM→SU, period 3 = 0), corresponding to the expected loss of MDA in 0-3 month old piglets. In 2006, on the contrary, the probability of antibody loss was null during period 1 (i.e., when piglets were <3 months of age) and was lower during period 2 (T IM→SU, period 2 = 0.094; se = 0.017) than during period 3 (T IM→SU, period 3 = 0.252; se = 0.036) (W = 3.97, p < 0.001). This observation possibly arises because of higher antibody rates in mothers' colostrum in 2006 compared to 2005. Considering the individual histories, we observed that antibody loss occurred mainly when the piglets were on average 4-5 months old. The antibody loss could also be lower during period 2 compared to period 3 due to vaccination. But during both study years, we detected no effect of the vaccination period on the probability of becoming immune (T SU→IM) (models with time-dependent transitions having a higher QAICc than models with a constant rate of transition, Table 1), suggesting that few piglets acquired antibodies following the summer vaccination sessions. In 2005, both susceptible and immune animals became infected. The probability of becoming infected tended to be lower in immune (T IM→INF, 2005 = 0.026; se = 0.019) than in susceptible animals (T SU→INF, 2005 = 0.083; se = 0.030), although this effect was only marginally statistically significant (W = 1.61, p = 0.055). This trend is consistent with the partial protection provided by MDA during the first months of life [14]. The recovery rate (i.e., the probability of moving from INF to IM) was not accurately estimated because too few infected piglets were recaptured later as immune, most of them remaining unseen from one to two weeks after the first detection of infection. The probability of becoming infected (T SU→INF, T IM→INF) or of recovering (T INF→IM) was not significantly different among the periods (models with time-dependent transitions having a higher QAICc than models with a constant rate of transition, Table 1).
Model predictions
In 2005, infections were observed during the entire capture period. We estimated that the average duration of infection was 1.18 weeks and that the proportions of lethal-chronic, lethal-acute and transient disease courses were P chronic = 0.001, P acute = 0.795 and P transient = 0.204, respectively. In 2006, we detected no infected piglets, but we cannot dismiss unobserved infections, since animals acquired antibodies outside of the vaccination period (i.e., period 2) and since the survival rate was lower in susceptible than in immune piglets.
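The reported course proportions and mean duration are mutually consistent under a simple weekly competing-risks (geometric) model; the sketch below uses weekly rates chosen to reproduce the reported orders of magnitude, not the fitted parameters.

```python
# Weekly fate of an infected piglet (illustrative values chosen to reproduce
# the orders of magnitude reported in the study, not the fitted estimates).
d = 0.67            # probability of dying within the week
r = 0.17            # probability of recovering (INF -> IM) within the week
s = 1 - d - r       # probability of remaining infected

p_recover = r / (d + r)       # probability of an eventually transient course
p_die     = d / (d + r)       # probability of a lethal course
mean_duration = 1 / (d + r)   # mean number of weeks spent infected (geometric)

print(f"P(transient) ~ {p_recover:.2f}, P(lethal) ~ {p_die:.2f}, "
      f"mean infection duration ~ {mean_duration:.2f} weeks")
```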
Discussion
Our longitudinal study of individual survival/infection histories showed that CSF was highly lethal and vaccination ineffective in piglets.
During the study, most of the infected piglets (80%) did not survive more than two weeks, while the others (20%) quickly recovered, and were thus transiently (i.e., briefly) infected. Even though we cannot rule out a rare occurrence of chronic infection, our study demonstrates that chronic infection seldom occurs among wild piglets. This result is contrary to the previous observation [13] of infected piglets surviving 39 days. However, this former study was conducted in a single piglet litter and under laboratory conditions, which may have enhanced artificially the survival of infected individuals. According to the models developed by Kramer-Schadt et al. [44], a virus being so lethal in piglets in the wild is unlikely to persist by circulating only in that age class. We thus consider that piglets did not constitute the main CSF reservoir, even though the proportion of infected individuals observed in the hunting bags was higher in young animals than in adults [17]. Alternatively, we hypothesize that chronic infections occurred more frequently in older animals, which are more resistant than piglets to the pathogenic action of CSF [32], even though these individuals have been difficult to detect using the hunting data [11,17]. Unfortunately, we could not test this hypothesis given that older animals are very difficult to recapture weekly. We also have to consider that the population size (conditioned by the size of the forested areas) is an important factor for disease persistence since the probability of maintaining the chain of transmission through chronic infections increases with the number of animals [7,44]. In a large forest (ex: Vosges Mountains and Palatinate), CSF might persist and spread again despite infection being extinct in a given locality (ex: PPNR). It is thus important that management measures for controlling CSF are implemented to the whole area at risk [17].
For piglets, the probability of becoming immune to CSF appeared to be unrelated to vaccination, whatever the vaccination period. Indeed, the probability of becoming immune did not increase during the summer vaccination sessions in either year of the study, and the proportion of immune piglets was similar among those hunted in early winter, after the autumn vaccination sessions (September), and among those captured in late August. These results suggest a low efficacy of the first two vaccination sessions in piglets. In summer, this may be because piglets were too small to eat the baits [15]. The age of piglets had been considered the main factor driving their capacity to eat the vaccine-baits, because in captivity consumption had been observed only among piglets more than 4.5 months old [15]. But in the present study we did not detect an effective immunization of piglets in autumn, i.e., when most piglets were 6-7 months old. We thus consider that the age of piglets was not the only factor that influenced vaccine-bait uptake during the study. Competition with alternative food sources such as crops and oak mast may also have decreased the palatability of baits to wild boar [31]. Although ineffective in summer and autumn, vaccinating piglets in this area seems possible during wintertime [31], i.e., when most animals are large enough to eat baits and when food availability does not compete with the vaccine baits. Since 2007, the autumn sessions have been moved from September to November or December [17]. New baits have recently been developed to try to vaccinate piglets in their early life for a better control of CSF [15] or bovine tuberculosis [48]. However, given that alternative food sources cannot be avoided [31] and that animals more than 6 months old are possibly more likely to maintain the chain of transmission than piglets (results of the present study), improving vaccination in wintertime is possibly the best option for improving CSF control in this European ecoregion.
Our capture-mark-recapture approach was useful for assessing individual disease outcomes and the effect of vaccination. By considering the effect of the trap site on the recapture probability and by removing the first capture from each individual history, we avoided the major infringements of the model hypotheses. However, we cannot exclude some biases in the CMR process. First, our trapping was certainly biased in favour of the social groups having a dominant status on the feeding grounds, which were thus more likely to be vaccinated than others (baits are delivered on the feeding grounds) [31,49]. Secondly, the accuracy of the model estimates may have been limited because we did not capture all the animals every week and could have missed short-term infections between two consecutive recaptures. Moreover, false negative or positive results can never be excluded [50]. In particular, it is likely that fluctuations in the serological results when maternally derived antibodies became low generated part of the flux we observed between the susceptible and the immune states outside of the vaccination periods. However, we consider that these methodological limitations do not invalidate our qualitative interpretation of the individual histories and main results, i.e., the short and lethal infections in piglets and the low efficacy of vaccination. While former studies based on hunting data could only hypothesize the role of piglets from average percentages, the multi-state recapture approach used here explored the true kinetics of infection and the effect of vaccination in the wild. This study has thus clarified the (minor) role of piglets and the factors influencing vaccination efficacy (i.e., food availability, and not only the age of piglets). We finally recommend this approach for improving the understanding of wildlife diseases when capture-mark-recapture data are available, as a complement to cross-sectional surveys [51].
Wood Wastes: An Optimal Solution that Could Reduce the Threat of CO Emissions on People's Lives

Abstract

The term environment deals with the relation between living organisms and the medium in which they live, scientifically known as the gas layer that surrounds the earth (air), consisting of gases that are major components of life. Modern human innovations, such as boilers, furnaces, heaters, cars, trucks and ships, emit large amounts of gases, unbalancing the natural composition of the air, in particular through an extraordinary increase in carbon oxides that directly threatens human health. Exposure to carbon monoxide concentrations exceeding 30 ppm may lead to death. Headaches, nausea, shortness of breath, chest pain, loss of control, fatigue and many other symptoms resembling those of a common cold result from direct and indirect exposure to lower carbon monoxide concentrations, depending on age, health, the concentration of the inhaled gas and the duration of exposure. At a time when Egypt is suffering from overpopulation, rising pollution and a declining national income, the need to find a solution that increases the national income and reduces pollution becomes a necessity. One easy, powerful and prominent way of reducing carbon emissions is what is known as wood pellets. Wood pellet manufacturing is the process of making use of straw, dry leaves, branches, grass, palm fronds, carpentry and woodworking residues, wooden construction remains and even trash to obtain a powerful source of energy. Egypt in general, and the city of Damietta in particular, is one of the richest areas in wood wastes. The main theme of this research is the use of about 33 thousand tons of wood wastes generated annually by nearly 150 thousand factories and workshops located in Damietta [1,2], in addition to more than 23 million tons of palm fronds, straw and other agricultural residues, to generate a renewable energy resource such as wood pellets, either as an independent small project or attached to some existing factories. The use of wood pellet production lines to reduce wood residues and carbon monoxide emissions and to increase the Egyptian national income can be stated as the main research outcome.
Introduction
The term environment could be described as the science that deals with the relation of living organisms to the place where they live. The different kinds of gases surrounding the earth, such as oxygen and nitrogen, which are the life constituents of all living organisms, can be defined as the atmosphere. Any disorder or change in the percentage or concentration of the atmospheric components could have negative effects on all living organisms.
Modern human activities have had a major impact on the balance of the natural components of the atmosphere, through various modern innovations such as boilers, fireplaces, furnaces, heaters, cars and trucks, posing a serious direct and indirect threat to life as a whole. Direct exposure to carbon monoxide concentrations exceeding 30 ppm is a serious risk that may lead to death, while symptoms such as headache, dizziness or nausea, shortness of breath, chest pain, loss of control and fatigue can occur at lower concentrations, depending on the health status of the person, age, the concentration of gas inhaled and the duration of exposure.
To counter the enormous rise in carbon emissions, clean, renewable and eco-friendly energy resources should be sought. In this regard, the use of wood wastes in the form of wood pellets can be an effective energy resource that decreases the high rates of carbon emissions. This topic was chosen because of promising investment opportunities based on the increasing demand for wood pellets, especially in cold areas for heating purposes, which could achieve a qualitative leap for the Egyptian economy and raise the national income.
Hypothesis
Wood pellet production units could be constructed from wood waste, either as independent small projects or attached to factories, where these pellets could substitute for other fossil fuels (coal, natural gas or diesel) in various production processes, especially in cement production [3], which would lead to significant reductions in the carbon emissions that threaten human life.
Research Methodology
A descriptive analytical method is used, relying on compiling, comparing and analyzing information and facts in order to reach acceptable perceptions, ideas and considerations.
What is meant by wood pellets
Wood pellet manufacturing is the process of transforming organic material into fuel. It begins by collecting unpainted fine and rough wood wastes and storing them in the absence of air in order to avoid degradation or mold growth. The moisture content of the wastes should be reduced from 50-55% to at most 8-12%. Sawdust, fine wastes and rough pieces not exceeding 300 mm × 250 mm (width × thickness) are taken directly to the grinder or hammer mill, while bigger wastes must first be divided into smaller pieces (Figure 1) [4].
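The drying step dominates the mass balance. A minimal wet-basis calculation in Python, using mid-range values from the figures above, shows how much water must be evaporated per tonne of fresh waste:

```python
def water_to_remove(green_mass_kg, mc_initial, mc_final):
    """Water (kg) to evaporate to bring wet-basis moisture from mc_initial to mc_final."""
    dry_mass = green_mass_kg * (1 - mc_initial)   # bone-dry solids are conserved
    final_mass = dry_mass / (1 - mc_final)
    return green_mass_kg - final_mass

# One tonne of fresh waste at 52% moisture, dried to 10% (mid-range values):
print(f"{water_to_remove(1000, 0.52, 0.10):.0f} kg of water must be evaporated per tonne")
```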
After milling, the product is filtered and purified of various types of impurities, such as paper, metal and plastic, in a process known as shaking. The resulting sawdust is placed in a mixer or feeder, forming a paste, which is then compressed through a perforated surface with the desired hole diameter (usually 6 mm, sometimes 8 mm). The pellets are then cut by an automatic cutter or pellet machine and left to cool, ready for use (Figure 2) [5].
Wood pellet mills are divided into two types. The first is the flat die type, in which the waste paste is pressed through a perforated disk; many spare parts are consumed in this design because of the intense heat generated during production (Figure 3).

The other type, more effective but more expensive, is the ring die, in which the paste is pressed through a perforated ring. It is used in larger-scale units with higher productivity, while for smaller projects with lower production capacity the flat die mill is preferred (Figure 4).
Energy pellets are packed either in 25 kg bags or in 50 × 50 × 70 wooden boxes and then transported to the place where they are used.
Specifications of energy pellets according to DIN 51731 - ÖNORM M 7135 [7,8]
Pellets are distinguished by their low moisture content (less than 10%), which allows them to burn with high yields, and by their high density, which allows easy, practical storage and transportation. Studies have shown that the thermal efficiency of ovens and boilers fed with pellets reaches 85%, with carbon monoxide emissions not exceeding 250 ppm; researchers aim to reduce carbon emissions by 40% by 2030 by using energy pellets as a fuel source instead of the currently used fossil fuels [9,10].
Conclusion
1. Using wood pellets could reduce carbon monoxide emissions.
2. Wood pellet production lines could help increase the Egyptian national income, as the price of a single packed ton of wood pellets on the world market is between $220 and $369 [11], and world demand for wood pellets is rising enormously (20.3 million tons in 2015) [12]; see the illustrative estimate after this list.
3. Wood pellet production lines could be a suitable and appropriate solution for disposing of wood residues and wastes, especially in Egypt.
4. Agricultural residues could also be used to produce pellets, but with a lower energy yield and a greater nitrogen content, due to the use of nitrogen-rich fertilizers.
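As a rough illustration of conclusion 2, the revenue potential of Damietta's wood waste can be bounded in Python with the figures cited above; the 85% mass yield is an assumption for this sketch, not a measured value.

```python
annual_wood_waste_t = 33_000       # tonnes of wood waste per year in Damietta [1,2]
pellet_yield = 0.85                # assumed mass yield after drying and losses
price_usd_per_t = (220, 369)       # world-market price range per packed tonne [11]

pellets_t = annual_wood_waste_t * pellet_yield
low, high = (pellets_t * p for p in price_usd_per_t)
print(f"~{pellets_t:,.0f} t of pellets -> ${low/1e6:.1f}M to ${high/1e6:.1f}M per year")
```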
Recommendations
It is recommended to stimulate, support and encourage young entrepreneurs toward the role that each of them can play, for both public and private benefit, through the establishment of wood pellet production units. Such units would provide a clean, efficient and green source of energy, as well as the safe disposal of the large quantities of wood wastes generated by various manufacturing processes, preserving the urban and aesthetic appearance of the cities.
The cost of establishing a medium-sized flat die wood pellet unit with a capacity of 100-200 kg/h is as follows: first, a crusher (wood grinder or hammer mill) with a Siemens 11 kW motor, production capacity 200-400 kg/h, weight 400 kg, dimensions 1200 × 580 × 950, 380 V.
Femoral Head and Liner Exchange in Patients with Atraumatic Dislocation. Results of a Retrospective Study with 6 Years Follow-Up
Background and Objectives: Femoral head and liner exchange is an established treatment for polyethylene wear but has had a more limited role in the treatment of other conditions including dislocation, because of concerns about an increased postoperative dislocation rate. Some authors have considered dislocation associated with polyethylene wear to be a contraindication for this procedure. Materials and Methods: Our retrospective review evaluated the outcome of head and liner exchange in a small consecutively operated heterogeneous cohort of 20 patients who presented with dislocation unrelated to trauma, component malposition or component loosening. Of this group, 12 had prior primary total hip arthroplasty, and 8 had prior revision total hip arthroplasty, and included 4 patients with prior revision for dislocation. Mean follow-up was 6 ± 3.5 years (range 1–145 months). Results: Kaplan–Meier analysis revealed a revision-free implant survival from any cause of 80% (95% confidence interval: 64.3–99.6%) at 5 years after head and liner exchange (index surgery). At final follow-up, 83.3% of patients (n = 10) with prior primary total hip arthroplasty and 62.5% of patients (n = 5) with prior revision total hip arthroplasty, had not required subsequent revision for any cause. None (0%) of the primary total hip arthroplasty group and 3 (38%) of the revision arthroplasty group had required revision for further dislocation. Of the eight revision arthroplasty patients, four had a prior revision for dislocation and three of these four patients required further revision for dislocation after index surgery. The fourth patient had no dislocation after index surgery. One additional patient who had prior revision surgery for femoral component fracture suffered dislocation after index surgery, but was successfully treated with closed reduction. Conclusions: In our study population, femoral head and liner exchange was an effective treatment option for patients with prior primary total hip arthroplasty and also for a highly select group of revision total hip arthroplasty patients with no prior history of dislocation. Femoral head and liner exchange does not appear to be a viable treatment option for patients who have had revision total hip arthroplasty after prior dislocations.
Introduction
Total hip arthroplasty (THA) is one of the most successful orthopedic procedures with 10-year survival rates of 90% and an annually increasing frequency in a progressively younger patient population [1]. However, a minority of patients suffer severe consequences with joint instability, the most common postoperative morbidity, and the second most frequent cause of revision surgery after component loosening [2][3][4].
The majority of dislocations are single episodes (2/3) and can be treated conservatively with closed reduction. Surgical intervention is frequently required for recurrent and late episodes [3,8,9], and several surgical strategies are utilized. Revision THA is most commonly utilized with a variety of components including constrained or lipped liners and dual mobility (DM) cups. Removal of a well-fixed acetabular cup results in bone loss [3].
Less frequently, less invasive modular component exchange is utilized in patients with a well-fixed acetabular component and a well-fixed femoral stem, an adequately functioning abductor mechanism and absence of component malposition [3,8,10,11]. Head and liner exchange is a shorter, less complex procedure than revision total hip arthroplasty (RTHA) involving fixed component exchange, and conserves bone stock [3]. Prior studies of modular component exchange for the treatment of dislocation are few and have shown variable results. One prior study included only primary total hip arthroplasty (PTHA) cases. Study cohorts varied from 11-48 hips with follow-up periods of 36-69 months [3,8,9,12,13]. The re-dislocation rate varied from 0% [9] to 55% [3].
Our current study of 20 patients treated for dislocation with modular component exchange differs from prior similar studies by the longer follow up period and a more diverse patient population that included patients with prior RTHA for recurrent dislocation.
The goal of our study was to assess the effectiveness of head and liner exchange for the treatment of dislocation in our PTHA and RTHA groups and our cohort as a whole. We hypothesized that patients with prior revision for dislocation would have a higher dislocation rate after index surgery.
Materials and Methods
Between January 2019 and September 2019 we utilized our internal database and conducted a retrospective chart review of 94 consecutively operated hips (92 patients) who received femoral head and liner exchange (index surgery) for a variety of causes between January 2004 and December 2013, at our hospital. Of this group, 34 hips (33 patients) were treated for periprosthetic infection, 23 hips (22 patients) were treated for polyethylene wear, 17 hips (17 patients) were treated for various causes (metallosis, inlay dislocation, ceramic head fracture) and 20 hips (20 patients) were treated for dislocation and are the focus of this study.
Exclusion criteria included evidence of component malposition or loosening as identified by clinical and/or radiographic examination, dislocation resulting from trauma and combination surgical procedures. Our study was approved by the local University Ethics Committee (Ethikkommission der Medizinischen Fakultaet Heidelberg, File Number S-548/2016).
Patient Demographics and History
Our cohort consisted of 20 patients (20 hips), 10 men and 10 women, who presented with dislocation. No patient was lost to follow up, but one patient died from unrelated causes during the study period at 126 months post index surgery and is included in the study.
At index surgery, patient mean age was 63 ± 13 years (range 27-79) and the mean BMI was 29.2 ± 6.9 kg/m² (range 20.1-44.6). The right hip was revised in 8 patients and the left hip in 12. All patients were classified according to the American Society of Anesthesiologists (ASA) classification [14]; Class I: 2 patients, Class II: 9 patients, Class III: 9 patients. Tables 1 and 2 display the diagnoses that necessitated prior hip procedures for both patient groups (PTHA and RTHA). In 12 patients the index surgery was the first revision surgery after PTHA (Table 1), and 8 patients had at least 1 prior revision procedure for a variety of causes, including prior revisions for dislocation in 4 RTHA patients (Table 2).
The mean time interval between the index procedure and the prior procedure was 62.4 ± 51.4 months (range 0.75-170 months) for the PTHA group and 26.6 ± 26.3 months (range 2-72 months) for the RTHA group.
Preoperative Assessment
Pre-operative assessment included clinical assessment, routine labs, and AP and axial (Lauenstein) radiographs with the patient in the supine position for assessment of component position [15]. Acetabular cups and femoral stems were well fixed radiographically [16,17]. All cups were well positioned, and the inclination and anteversion angles were in the Lewinnek 'safe zone' [18]. The mean inclination angle was 41 ± 6.4 degrees, and the mean anteversion angle was 16 ± 6.1 degrees (measurements done using AP radiographs and TraumaCAD® software; TraumaCAD®, Brainlab AG, Munich, Germany). These were confirmed intraoperatively. CT scans were used to evaluate cases of questionable malposition or loosening; CT scans allow more accurate assessment of component malposition and are considered the gold standard [19]. Scintigraphy with Tc was not used.
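As a minimal sketch of the Lewinnek criterion applied in this assessment (inclination 40 ± 10 degrees, anteversion 15 ± 10 degrees), the Python snippet below flags cup orientations outside the safe zone; the angle values are illustrative examples, not patient data.

```python
# Lewinnek 'safe zone' check used during preoperative assessment:
# inclination 40 +/- 10 degrees, anteversion 15 +/- 10 degrees.
# Angle values below are hypothetical examples, not patient data.

def in_lewinnek_safe_zone(inclination_deg: float, anteversion_deg: float) -> bool:
    """Return True if the cup orientation lies inside the safe zone."""
    return 30.0 <= inclination_deg <= 50.0 and 5.0 <= anteversion_deg <= 25.0

# Example: the cohort means reported in the study (41 and 16 degrees)
print(in_lewinnek_safe_zone(41.0, 16.0))  # True
print(in_lewinnek_safe_zone(55.0, 16.0))  # False: inclination too steep
```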
Surgical Procedure and Follow Up
The surgical approach utilized the prior approach to minimize scarring and soft tissue deficiency. It was transgluteal in 11 patients, anterolateral in 5 and non-standard (modified transgluteal or anterolateral) in 4 RTHA patients with multiple prior surgeries and preexisting osseous and soft tissue deficiencies (Girdlestone procedure and proximal femoral fracture) ( Table 2).
All index procedures were performed by surgeons experienced in hip surgery; however, consecutive surgeries were not always done by the same surgeon. Intra-operatively, stable acetabular cup fixation was confirmed and component malposition was excluded. The liner locking mechanism was examined and confirmed to be intact in all cases.
The femoral head size was increased in 13 hips, decreased in 1 hip and unchanged in 2 hips. The pre-operative head size was not documented in 4 hips, but all received a 32 mm or larger replacement femoral head (Tables 1 and 2). All patients received peri-operative intravenous cefuroxime and postoperative thrombo-embolic prophylaxis.
After modular exchange, mobility and stability of the joint was tested intra-operatively by taking the operated hip through a complete range of motion (ROM). Impingement was excluded by confirming clearance between the greater trochanter and the pelvis and between the metal femoral neck and polyethylene or metal rim, with the limb positioned in maximal extension and external rotation and in 90 degrees flexion and maximal internal rotation [20].
Patients were followed in the outpatient clinic at 3 and 6 months post index procedure and yearly thereafter for a mean of 6.0 ± 3.5 years or until dislocation, demise or revision surgery.
The outcome measures were:
1. Incidence of postoperative dislocation.
2. Implant survival with no further surgical intervention for dislocation; overall and for PTHA and RTHA groups separately.
3. Implant survival from all causes.
Statistical Analysis
Patient characteristics were described using descriptive statistics. The re-revision free survival was described using Kaplan-Meier estimators; the 60-month survival rate of the implants for all causes was described using corresponding 95% confidence intervals (CI).
The software SPSS ® Version 22.0 (SPSS Inc, Chicago, IL, U.S.A.), SAS ® Version 9.4 (SAS Institute Inc., Cary, NC, USA) and Microsoft Excel (Microsoft, Redmond, WA, USA) were used to record and analyze the data.
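As an illustration of the survival analysis described above, the following minimal sketch uses the open-source Python lifelines package (one possible alternative to SPSS/SAS) to compute a Kaplan-Meier re-revision-free survival estimate with 95% confidence intervals. The durations and event flags are hypothetical placeholders, not the study data.

```python
# Kaplan-Meier estimate of re-revision-free implant survival,
# analogous to the analysis described above. Durations (months to
# re-revision or censoring) and event flags are hypothetical
# placeholders, not the study data.
from lifelines import KaplanMeierFitter

durations = [12, 36, 60, 84, 126, 145, 9, 48, 72, 100]  # months of follow-up
events    = [1,  0,  1,  0,  0,   0,   1, 0,  1,  0]    # 1 = re-revision, 0 = censored

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events, label="re-revision-free survival")

# Survival probability at 60 months and the 95% confidence band
print(kmf.survival_function_at_times(60))
print(kmf.confidence_interval_)
```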
Results
An overview of patient outcome is represented in the flow chart (see Figure 1).
Twenty patients (20 hips) received femoral head and liner exchange for treatment of dislocation. One patient died of unrelated causes, with no dislocation, at 126 months post index surgery and is considered a treatment success. Twelve patients (hips) had prior PTHA and eight patients had prior RTHA (Tables 1 and 2).
Kaplan-Meier Analysis revealed a re-revision-free implant survival from all causes of 80.0% (CI95%: 64.3-99.6%) at 5 years after head and liner exchange (see Figure 2). At final follow up of 6.0 ± 3.5 years, five patients required surgical re-revision for any cause (25%). Implant survival from any cause was 75% (15 patients).
Of the five patients who required re-revision, further dislocation was the cause in three patients; each had a history of RTHA and prior surgical treatment for dislocation. Implant survival from further dislocation was 85% (17 patients) ( Table 2). The two remaining patients who required re-revision experienced component loosening (one femoral stem and one acetabular component). Both were PTHA patients (Table 1).
One additional RTHA patient with a history of femoral component fracture, also experienced post index surgery dislocation, but this was successfully treated with closed reduction ( Table 2).
The incidence of dislocation post-index surgery was four hips (20%).
Discussion
Our results are comparable to prior studies despite our heterogeneous patient population including some at very high risk. At mean 6 years follow up, we achieved 100% and 50% success (no dislocation) in our PTHA and RTHA groups respectively. Lachiewicz et al. reported 82% and 50% success in their PTHA and RTHA groups respectively with small cohorts and shorter follow up periods [12]. Biviji et al. defined success as up to one postoperative dislocation and achieved 76% and 64% success in their PTHA and RTHA groups respectively [8]. Our overall implant survival with no additional surgery for dislocation was 17/20 (85%), postoperative dislocation was 4/20 (20%), and implant survival from all causes was 15/20 (75%). Our four patients who developed hip dislocation had multiple prior procedures with re-revision for dislocation in three patients and included the only patient who received a constrained liner during index surgery. Re-revision for aseptic loosening was required in two additional patients; one femoral shaft and one acetabular cup.
Hip instability is the most frequent cause of morbidity after THA [3] and up to 70% of dislocations occur in the first 3-6 months post THA [2,4]. Approximately 2/3 of first dislocations can be successfully treated conservatively [8]. Up to 42% of dislocations are recurrent and require surgical intervention [21,22]; dislocation is the most common reason for early revision THA [2]. Trabecular Metal™ (TM) shells have been associated with a high rate of postoperative dislocation [7,23].
Charissoux et al. defined three categories based on time of presentation from index surgery: 3-6 months resulting from inadequate healing; up to 5 years related to resumed activities; more than 5 years resulting from polyethylene wear [2]. Component malposition and trauma also contribute to late occurrence [9,24].
Revision surgery for instability has had variable success and is directed at cause if identified [25]. Preoperative history, physical exam, lab and radiologic studies with CT scan if necessary are important to identify periprosthetic infection, component malposition, and P/E wear that influence subsequent surgical procedures. Interventions for dislocation include revision of a malpositioned component, trochanteric advancement, increase of femoral head size, implantation of bipolar and tripolar cups or constrained acetabular components and femoral head and liner exchange [2,3,8,9,[11][12][13]26,27]. Combined procedures are sometimes employed [28].
Femoral head and liner exchange is a treatment option in cases with a stable well implanted acetabular cup that is not malpositioned [3,8,10] and has been utilized increasingly in cases with P/E wear [29] with reports of implant survival that are comparable to those of more complex procedures [30]. Head and liner exchange is considered controversial by some when used for other indications [30] and has had limited use in the treatment of periprosthetic joint infection, femoral head and liner dissociation, liner fracture and/or detachment and instability of the hip [11]. Published literature on the topic of femoral head and liner exchange for treatment of dislocation following THA is relatively sparse.
Advantages of femoral head and liner exchange over RTHA with exchange of well-fixed components for the treatment of dislocation include the avoidance of significant bone loss associated with the removal of a well-fixed component, a shorter operative and recovery time, and decreased blood loss [29]. These advantages must be weighed against the disadvantages of possible incomplete debridement of osteolytic areas and granulomas [29] and an increased postoperative dislocation rate as reported in some studies [3,[30][31][32]. Retention of a cup with a damaged or inferior locking mechanism can be managed by cementation of a liner [11]. Concerns about an increased postoperative dislocation rate have resulted in the relatively infrequent use of head and liner exchange in cases other than P/E wear [11], and in some studies the presence of associated dislocation was an exclusion criterion [26,29,30,32,33].
Possible risk factors for dislocation are patient, procedure and component related. Guo et al. conducted a systematic review and meta-analysis of risk factors after RTHA and found that a history of instability and prior revisions was the most significant risk factor as a result of bone loss, soft tissue damage and abductor insufficiency [34]. An increasing number of prior revisions was associated with an increasing risk of dislocation [4]. Cumulative rates of dislocation increase over time, with an increase of 1% every 5 years after the first year [35] with a 35% cumulative re-dislocation rate 15 years after revision for dislocation [36]. Our results are consistent with these findings; our cohort included six patients with two prior procedures, one with three prior procedures, and one with four prior procedures, placing them at high risk for subsequent dislocation. Of these, four patients had a prior history of hip dislocation and three of these suffered re-dislocation and required re-revision. The fourth was successfully treated with closed reduction (Table 2).
Other patient risk factors are neuromuscular and cognitive impairments that decrease the ability to co-operate postoperatively (i.e. Parkinson's disease, alcoholism, dementia) [4,6,27], advanced age associated with diminished tissue healing [35,37] and BMI >30 kg/m² [4,38]. Our patient with BMI 43 kg/m² suffered dislocation but was successfully treated with closed reduction. Patients with lumbosacral pathology, including sagittal spine deformity or prior spine surgery, have abnormal pelvic tilt that influences acetabular cup position and are at greater risk of postoperative dislocation [39,40].
Acetabular component malposition has been identified as a possible cause of recurrent instability in up to 1/3 of cases [19,41]. Lewinnek et al. found that acetabular cup orientation significantly influenced dislocation after THA and proposed a 'safe zone' for cup inclination and anteversion angles after PTHA of 40 ± 10 and 15 ± 10 degrees respectively, measured radiographically with the patient supine [18]. Lewinnek's safe zone is not applicable when standing due to spino-pelvic tilt [42]. However, some studies have documented >55% of dislocated cups were within the safe zone [25,43]. Our cohort had mean cup inclination and anteversion angles within the safe zone as measured radiographically and intraoperatively.
A posterior surgical approach has been considered a risk factor, since the majority of dislocations occur posteriorly [4,27,[44][45][46]. Kwon et al. in their meta-analysis found that this was only true with inadequate or no capsular repair [45]. The majority (11) of our patients had a transgluteal approach, five had antero-lateral and four had non-standard approaches as a result of extensive muscular and osseous defects [45].
Choice of components can influence outcome. Femoral head diameter, neck length and head-neck ratio affect dislocation risk [47]. Larger heads have greater ROM and 'jump' distance with less chance of impingement; 22 mm heads are associated with more risk than larger heads [2,6,26,35] and 28 mm heads have greater risk than 32 mm [10,34]. Howie et al. found that 28 mm heads had 5× more dislocations than 36 mm heads. Liner type also affects risk, which is reduced with elevated rim liners [26,48]. Constrained components have been associated with inadequate locking mechanisms and increased cup loosening, with poor results in some studies [3,10,49,50]. More recently, dual mobility (DM) cups have received increasing attention and usage in RTHA, including cases of dislocation, with results superior to those of constrained liners [51]. Lange et al. [52] reported a 5% recurrent dislocation rate after first-time revision for instability. De Martino et al. in their systematic review found a 3% dislocation rate in RTHA in high-risk patients [53]. Bruggemann reported a 1.4% dislocation rate after cementing dual mobility cups into TM shells (off-label use) [7]. However, DM cups carry the risk of intraprosthetic dislocation, a rare but unique complication, and of possible accelerated P/E wear with resultant osteolysis, prompting caution for use in younger patients [53,54].
This retrospective review has a number of limitations, including the possibility of incomplete documentation in some cases and the lack of a control group. Our patient cohort was small, which prevented the assessment of variables that could have influenced the results but did not reach statistical significance because of the small numbers. Procedures were done by several surgeons and variations in technique cannot be excluded, although all followed the hospital protocol.
Comparison with other studies is difficult because of differences in patient cohorts, definitions of failure, components and surgical techniques utilized and follow up periods.
Although our study has a small patient cohort, it serves as an addition to a sparse body of information that relates to femoral head and liner exchange in the treatment of dislocation.
Conclusions
The advantages of our protocol compared to RTHA involving exchange of well-fixed components include shorter operative time, less complex surgery and preservation of bone stock.
Our cohort consisted of 20 patients, 12 with prior PTHA and eight with prior RTHA. All 12 patients with prior PTHA had no history of prior surgery for dislocation and were effectively treated with femoral head and liner exchange.
The eight patients with prior RTHA included four with prior RTHA for dislocation; of these four patients, three suffered a further dislocation after head and liner exchange and required further revision. One additional patient with a history of revision for femoral component fracture, experienced dislocation after head and liner exchange and was successfully treated with closed reduction.
For the entire cohort, implant survival from dislocation was 85% and implant survival from all causes was 75% at a mean follow up of 6 ± 3.5 (range 0.08-11.8) years. Two PTHA patients required later revision for component loosening.
Our results suggest that for patients with prior PTHA and no prior history of dislocation, this protocol could be a viable treatment option. However, patients with a history of multiple prior THA's that include treatment for instability are at high risk of further dislocation.
The literature on femoral head and liner exchange in dislocation cases is sparse to date; therefore, additional studies would be of great value.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data that support the findings of this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest.
|
2021-11-04T15:24:16.780Z
|
2021-11-01T00:00:00.000
|
{
"year": 2021,
"sha1": "48012afe7710f97044e24366efeab27859edee28",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1648-9144/57/11/1188/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "84580693f33bf043120f12aaaeadb00a2c7353d7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
238817719
|
pes2o/s2orc
|
v3-fos-license
|
Perspective on Crude Palm Oil Production: The Effect of Raw Palm Oil and Biofuel Prices
Efficiency is essential in the production process: production factors such as labor, materials, and machinery must be calculated accurately. The purpose of this research is to analyze the influence of raw palm oil and biofuel prices on crude palm oil production. The sample in this research is time series production data, and the analysis technique is multiple linear regression. The results show that the correlation between raw palm oil and biofuel prices and crude palm oil production is 57.1%. Raw palm oil has a significant effect on crude palm oil production, while biofuel prices have no significant effect. Based on the F test, raw palm oil and biofuel prices jointly have a significant influence on crude palm oil production. A finding of this research is that PT. Wilmar uses 3 types of fuel for the production process (petroleum, biofuel, and waste from raw palm oil production), which explains why biofuel prices have no effect on crude palm oil production.
Introduction
Coconut palm, the plant that produces Crude Palm Oil and kernel palm oil (Kernel Palm Oil), is one of the plantation commodities that generates non-oil-and-gas foreign exchange for Indonesia. Demand for Crude Palm Oil continues to increase each year because of its widespread utilization, not just for consumption needs but also as a basic raw material for drugs and cosmetics. Changes in Crude Palm Oil demand on the international market influence the price structure; in turn, changes in world Crude Palm Oil prices affect production as well as Indonesia's Crude Palm Oil export offerings, and the Indonesian economy in general (Azwar, 2015). With its very fertile regions, Indonesia is one of the biggest Crude Palm Oil producing countries in the world, which makes it possible for Indonesia to produce alternative energy sources derived from palm oil. Because of the recent increase in crude oil prices and growing environmental concerns, biodiesel has become an important alternative fuel that acts as the lifeblood of the retailing industries, which depend heavily on logistics and transportation to deliver their goods on time.

Table 1. Indonesian biodiesel production, domestic use and exports (Source: Aprobi)

Year | Production | Domestic use | Export
2016 | 3,65 million kl | 3,00 million kl | 476,9 thousand kl
2017 | 3,41 million kl | 2,57 million kl | 187,3 thousand kl
2018 | 6,16 million kl | 3,75 million kl | 1,82 million kl
2019 | 8,39 million kl | 6,39 million kl | 1,31 million kl

Table 1 shows a comparison between domestic use and biodiesel exports resulting from the government's mandatory policy. The analysis of Rambe, Kusnadi and Suharno (2019), which used data from related agencies such as the Central Bureau of Statistics, Ministry of Agriculture, BAPPEBTI, the World Bank and other institutions, analyzed with dynamic system models, shows that Indonesian biodiesel development has not been able to meet the blending rate required by the biodiesel mandate. Efforts to raise the blending rate can be made by providing biodiesel subsidies. An export duty policy is also needed to maintain the stability of domestic CPO prices and the price of palm cooking oil. PT. Wilmar Nabati is the largest biofuel producer in West Sumatra. The mandatory policy and export tax rate lead PT. Wilmar Nabati to use 3 types of energy sources in the production process: in addition to petroleum and biodiesel as the main fuels to run machinery, PT. Wilmar Nabati also uses fuel from the waste of Raw Palm Oil production, priced at 24% of the Raw Palm Oil price. This aims to increase the cost efficiency of production under the biodiesel usage policy.
According to Lumbantoruan, Poerwanto and Tarigan (2013), one of the methods that can be used in CPO production planning is the mathematical method of goal programming. The difference between the goal programming method and the linear programming method is that goal programming can handle the problem of optimal allocation, or the optimum combination, of several conflicting objectives; thus the decision taken is a satisfactory result from several of the alternatives offered. The decision variable stipulated in that study was the amount of CPO production in each month during 2012. The target constraints used in the data processing were minimizing production costs, minimizing TBS procurement costs, maximum CPO production, CPO demand, TBS availability, TBS processing goals and processing time availability. Forecasting of demand was analyzed from the sales data of the previous period using the quadratic method. Another study (2015) found that the production of Indonesian Crude Palm Oil had a significant positive influence on the volume of Indonesian Crude Palm Oil exports.
In the production process, materials are the main input that is transformed into output, and the quality of the output depends on the quality of the inputs: higher-quality coconut palm fruit produces higher-quality raw material for crude palm oil production. Moreover, the research results of Kewinoto and Sjahruddin (2015) show that two independent variables influence the stock price of plantations producing palm oil (Crude Palm Oil): the Crude Palm Oil price and sales volume, while the other two independent variables, inflation and the exchange rate, have no significant effect on the stock price. A high palm oil price attracts more capital for investment and recruitment of labor to increase palm oil production. The price of palm oil is determined by many factors, among them the availability of substitutes such as soybean oil. The price of crude oil is also an important factor that influences palm oil prices.
Biodiesel is a mono-alkyl ester compound produced through a transesterification reaction between triglycerides and methanol, yielding methyl esters and glycerol with the help of basic catalysts. Triglycerides are the oils obtained from the fruit. The esterification and transesterification of Crude Palm Oil can produce fuel, and fuel derived from Crude Palm Oil is more environmentally friendly because it is free of nitrogen and sulfur. In addition, the oleic acid content, which reaches 55% in palm oil, is a material consideration for using palm oil as the raw material for making vegetable fuel (Nugroho, 2014). Processes such as thermal cracking, which takes place at high temperatures and pressures and therefore requires large amounts of energy, have been used to produce biofuels, so cracking processes with lower energy demands are currently being developed. Such processes can convert vegetable oil into alternative fuels (biofuels).
Hydrocracking is a cracking process that reacts vegetable oils with a certain amount of hydrogen gas under certain temperature and pressure conditions. The hydrocracking method produces biofuels in the form of straight-chain liquid alkanes from C-15 to C-18. This hydrocracking process has advantages and disadvantages. In terms of advantages, the process can provide high conversion, the yield of middle distillate is also high, and the resulting alkanes have a high cetane number. In terms of weaknesses, the process requires considerable energy because hydrocracking operates at high temperatures and pressures, requires special equipment, and demands determination of the right reaction conditions (catalyst type, catalyst preparation, temperature, pressure and reaction time).
Research methods
The research design is a quantitative and associative method. The quantitative method is systematic scientific research on parts and phenomena and all their relationships; its purpose is to develop and use mathematical models, theories and hypotheses correlated with natural phenomena. Quantitative research is used to test a theory and to show relationships between variables. Associative/causal research is research used to analyze the relationship between variables, or how one variable affects other variables. In this research, data and information were obtained as time series data. According to Anggoro (2008), a population is all the data that will receive attention within a defined scope and time. This research uses data from the population covering the whole of 2017 to 2018 at PT. Wilmar Nabati. According to Nugroho (2005), a sample is a portion of the population selected using a specific procedure and expected to represent the population: a subset of the research subjects selected and considered to represent the whole. Based on the above understanding, the sample in this research consists of Raw Palm Oil data, Biofuel prices and production data from PT. Wilmar Nabati.
A variable is a concept that takes a range of values and can be classified as independent or dependent. This research uses 2 (two) independent variables and 1 (one) dependent variable, defined as follows:
Biofuel Prices
Biofuel Prices is the monetary value exchanged to obtain the benefit of having or using that energy to run the production process; the Biofuel prices used are for the period 2017-2018.
Multiple Linier Regression Analysis
Multiple linear regression analysis is used to analyze the influence of the independent variables on the dependent variable; in this research, it analyzes the influence of Raw Palm Oil and Biofuel prices on Crude Palm Oil production with the formula:

Y = a + b1X1 + b2X2 + e

where:
Y = Crude Palm Oil production
X1 = Raw Palm Oil
X2 = Biofuel prices
a = constant
b1, b2 = regression coefficients
e = error term

According to Sugiyono (2007), correlation analysis is used to determine the influence of, or relationship between, the independent variables and the dependent variable. Correlation analysis aims to measure the strength of the linear relationship between two variables; correlation does not indicate a functional relationship, or in other words, correlation does not distinguish the dependent variable from the independent variables. The coefficient of determination measures how well the regression model explains the variation of the dependent variable; its value lies between zero and one. A small R² value means the ability of the independent variables to explain the variation of the dependent variable is very limited, while an R² value approaching one means the independent variables explain the variation of the dependent variable very well (Ghozali, 2006).
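The regression specified above, together with the R², t and F statistics reported in the results, can be reproduced with standard statistical software; the following minimal sketch uses Python's statsmodels as one possible tool, with placeholder arrays standing in for the 2017-2018 series (not PT. Wilmar Nabati's actual data).

```python
# Multiple linear regression Y = a + b1*X1 + b2*X2 + e, as specified
# above. The arrays are placeholder data, not the study's series.
import numpy as np
import statsmodels.api as sm

raw_palm_oil  = np.array([100, 120, 130, 150, 160, 170, 180, 200, 210, 230])
biofuel_price = np.array([7072, 7500, 7800, 8000, 8200, 8500, 8800, 9000, 9200, 9348])
production    = np.array([3100, 3150, 3200, 3300, 3350, 3400, 3450, 3550, 3600, 3700])

# Stack the regressors and add the intercept term a
X = sm.add_constant(np.column_stack([raw_palm_oil, biofuel_price]))
model = sm.OLS(production, X).fit()

print(model.params)        # a, b1, b2
print(model.rsquared_adj)  # adjusted R-squared (the text reports 0.251)
print(model.tvalues)       # partial t tests for each coefficient
print(model.fvalue)        # simultaneous F test
```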
Results and Discussion
Crude Palm Oil Production. The production process starts by separating the bunches of coconut palm fruit from the fruits. The separation produces 66% coconut palm fruits and 34% empty bunches. The next stage separates the shell and fibers from the coconut palm fruits, after which the fibers and shell are processed further. The fibers are squeezed to produce raw Crude Palm Oil, with a yield of 15%, and the shell is processed to extract the core of the fruit, called Kernel, with a yield of 14%; the production waste at this stage is 15%. The next stage is Kernel processing: the shell is processed to produce Kernel, with 9% waste and just 5% becoming Kernel. In conclusion, 100% of fresh fruit bunches yields 21% Crude Palm Oil and 5% Kernel, 24% of the process waste is used as fuel material, and the rest is discarded. Price fluctuation depends on the availability of coconut palm fruits from the palm oil plantations; PT. Wilmar obtains its coconut palm fruit supply from its own plantations (divisions 1-3), PTPN VI, and public plantations in the Jambi district area. Crude Palm Oil price behavior is crucially dependent on both supply and demand factors. On the supply side, both Crude Palm Oil production and palm oil stock play a significant role in influencing Crude Palm Oil price behavior. On the demand side, exports of oil palm products are a key factor influencing Crude Palm Oil price behavior. Crude Palm Oil price behavior will show a 'shock' reaction if the element of market sentiment becomes unpredictable. The combination of fundamental and market sentiment factors is considered the 'rule of thumb' that determines the Crude Palm Oil price equilibrium in the world market (Rahman, Balu and Faizah, 2015). Price of Biofuel (X2): in the Crude Palm Oil production process there are 2 kinds of fuel used for machinery, Biofuel and waste from Raw Palm Oil. Biofuel is produced by the company at a unit cost per liter ranging between Rp. 7,072 and Rp. 9,348; Biofuel is used to start the generator before the main machine is enabled, while the main machine runs all production activity. The waste fuel from Raw Palm Oil is generated through the boiler and amounts to 24% of the Raw Palm Oil unit price per kg; the price of waste Raw Palm Oil fuel ranges between Rp. 216 and Rp. 459. The correlation between Raw Palm Oil and Biofuel prices and Crude Palm Oil production is 57.1%, which can be described as a fairly strong correlation: Raw Palm Oil and Biofuel prices have an influence of 57.1% on Crude Palm Oil production.
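To make the reported mass balance concrete, the short Python sketch below applies the stated fractions (21% Crude Palm Oil, 5% Kernel, 24% process waste reused as boiler fuel) to an input tonnage; the input quantity is an assumption chosen only for illustration.

```python
# Worked example of the fresh-fruit-bunch mass balance described
# above: 21% Crude Palm Oil, 5% Kernel, 24% process waste reused as
# boiler fuel, and the remainder discarded. The input tonnage is an
# assumption for illustration.
def ffb_mass_balance(bunches_kg: float) -> dict:
    cpo    = 0.21 * bunches_kg
    kernel = 0.05 * bunches_kg
    fuel   = 0.24 * bunches_kg   # waste routed to the boiler
    other  = bunches_kg - cpo - kernel - fuel
    return {"cpo_kg": cpo, "kernel_kg": kernel,
            "fuel_kg": fuel, "other_waste_kg": other}

print(ffb_mass_balance(1000.0))
# {'cpo_kg': 210.0, 'kernel_kg': 50.0, 'fuel_kg': 240.0, 'other_waste_kg': 500.0}
```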
Coefficient Correlation
The R square, or coefficient of determination, is 0.326 or 32.6%. Because this research has two independent variables, the adjusted R square of 0.251 is used instead; this means 25.1% of the variance in Crude Palm Oil production can be explained by variation in Raw Palm Oil and Biofuel prices. The constant of 3,001,000 means that when all independent variables are held constant, the value of Crude Palm Oil production is 3,001,000. The regression coefficient of the Raw Palm Oil variable is 0.085, which means that if Raw Palm Oil increases by one unit, Crude Palm Oil production will rise by 0.085 units; conversely, a decrease of one unit in Raw Palm Oil will decrease Crude Palm Oil production by 0.085 units. The regression coefficient of the Biofuel price variable is -46,674, which means that an increase of one unit in the Biofuel price will bring down Crude Palm Oil production by 46,674 units; conversely, a one-unit decrease in the Biofuel price will increase Crude Palm Oil production by 46,674 units.
The t-test value obtained is 2.808 against a t-table value of 2.0930 with a significance level of 0.012, showing that Raw Palm Oil has an influence on Crude Palm Oil production. For the other variable, the t-test value obtained is 0.305 against a t-table value of 2.0930 with a significance level of 0.764, showing that Biofuel prices have no effect on Crude Palm Oil production. The results of Purba and Hartoyo (2010) show that demand for CPO is not responsive to changes in fuel prices, in both the short and long term. This is caused by the limited share of exports with which Indonesia can respond to biodiesel demand: soy oil is the most dominant source of biodiesel in the world market with a share of 76%, while exports of CPO as biodiesel raw material account for 24%, so the response is inelastic. Of that 24%, the Indonesian biodiesel share is 18% and Malaysia's is 6%. The Wilmar Group will reduce biofuel usage if the Biofuel price increases and, as a substitute, will use fuel from the waste of Crude Palm Oil production for production cost efficiency.
The research results of Hermawan, Edison and Damayanti (2015) also show that the production factors raw material, capital and machinery simultaneously have a significant influence on Crude Palm Oil production, while partially only raw material and capital have a significant influence on Crude Palm Oil production; machinery has no significant effect. According to the results of Septian, Basri and Pailis (2015), the simultaneous regression test (F test) showed that raw materials, labor and machinery had a significant effect on Crude Palm Oil, while the partial regression test (t test) showed that raw materials and machinery had a positive and significant effect on Crude Palm Oil production. Petroleum prices have a real impact on the price of Indonesian Crude Palm Oil exports: if the oil price increases, the Crude Palm Oil export price also increases. The export price of Crude Palm Oil, in addition, has a real impact on Crude Palm Oil export volume. This means that if the export price increases, in addition to causing an increase in the volume of Indonesian Crude Palm Oil exported, it will also cause the domestic price of Crude Palm Oil to increase. Crude Palm Oil, besides being an output of plantation companies, is also an input for cooking oil companies. Therefore, changes in the domestic Crude Palm Oil price cause changes in palm oil production; the domestic Crude Palm Oil price has a significant influence on palm oil production, and if the domestic Crude Palm Oil price increases, palm oil production declines (Murti, 2017).
Conclusion
From the research results at PT. Wilmar, the composition of Raw Palm Oil material used in the production process is 66% of the Raw Palm Oil processed to produce Crude Palm Oil, while 34% is waste in the form of empty bunches. There are two types of fuel that PT. Wilmar uses for machinery: the first is Biofuel and the second is waste from Raw Palm Oil. The use of Biofuel in production is a Wilmar Group policy; it serves as the main fuel to operate machinery, with a price range of Rp. 7,072 to Rp. 9,348. Raw Palm Oil and Biofuel prices jointly have a significant influence on Crude Palm Oil production, while the t test shows that Raw Palm Oil has an influence on Crude Palm Oil production and Biofuel prices have no effect. Besides using Biofuel in production, PT. Wilmar also uses waste from Crude Palm Oil production as fuel to run the machinery; this waste is generated through the Crude Palm Oil production process at 24% of the purchasing price of Raw Palm Oil, with a price range of Rp. 192 to Rp. 459.
|
2021-09-09T20:49:32.575Z
|
2021-07-27T00:00:00.000
|
{
"year": 2021,
"sha1": "1402b18c7694320d74747f0d68d8f24f8550da23",
"oa_license": "CCBYSA",
"oa_url": "https://jman-upiyptk.org/ojs/index.php/ekobistek/article/download/60/60",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "33cf0bb615a24cf7f9ae42df7e4df8f9ba27682b",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science",
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
}
|
81152239
|
pes2o/s2orc
|
v3-fos-license
|
MDCT angiography in evaluation of pediatric hemangiomas and peripheral vascular malformations
Vascular neoplasms and malformations have been a troublesome area for diagnosis and management. The disease requires multidisciplinary care and the nomenclature is non-uniform. The classification has been evolving ever since Mulliken et al, made some sense out of it in the year 1982. The most recent and acceptable classification is by ISSVA in 1996, which has been last updated in 2014. The clinical presentation is highly variable, ranging from mild cosmetic deformity or nondescript swelling to grotesque lesions or even high output failure. Visceral involvement in head and neck and body lesions makes matters even more complex. The radiologist has a role not only in making the diagnosis but is also an integral part of the management team as an interventionist.
INTRODUCTION
Vascular neoplasms and malformations have been a troublesome area for diagnosis and management. The disease requires multidisciplinary care and the nomenclature is non-uniform. The classification has been evolving ever since Mulliken et al, made some sense out of it in the year 1982. 1 The most recent and acceptable classification is by ISSVA in 1996, which has been last updated in 2014. 2,3 The clinical presentation is highly variable, ranging from mild cosmetic deformity or nondescript swelling to grotesque lesions or even high output failure. Visceral involvement in head and neck and body lesions makes matters even more complex. 4 The radiologist has a role not only in making the diagnosis but is also an integral part of the management team as an interventionist.
Radiographs have a limited but crucial role through the unequivocal detection of phleboliths as a typical rounded density with a central lucency. The initial imaging modality for evaluation is doppler sonography. It answers the preliminary questions of whether the lesion is of vascular origin and high or low flow, and also depicts the presence or absence of a soft tissue mass, which is the main feature separating hemangiomas and vascular tumors from other malformations. Real time maneuvers like breathing, valsalva and dependent positioning also help in distinguishing lymphatic from venous malformations. 5,6 The questions left unanswered after a thorough doppler examination are the extent of the lesion, the feeding and draining vessels, and deeper visceral or osseous involvement. 2 These questions need a cross sectional modality with a wider FOV. It has long been settled in the literature that MRI is best suited for the purpose. CT scan, although it provides the answers, loses the argument due to the high radiation dose of a multiphasic CT examination in pediatric patients. CT offers advantages over MRI such as better spatial resolution for small arterial feeders, better depiction of bowel and osseous lesions, easier interpretation of head and neck lesions free from the susceptibility artifacts of the skull base and airway, and shorter scanning times requiring much less sedation than MRI. 7 Even today, a MDCT scanner is far more widely available in resource-poor countries and is less expensive than a high-end MRI scanner and the compatible expertise. It may be a reasonable approach to evaluate the role of multiphasic MDCT in these patients, acquiring delayed scans at low doses and keeping the radiation dose to a minimum. 8
METHODS
A prospective study was done in the Department of Radiodiagnosis in our institution in collaboration with Department of Pediatric Surgery.
Thirty-six consecutive pediatric patients with known or suspected hemangioma or peripheral vascular malformations coming to our institution over a period of 18 months were included in the study. The study excluded previously treated lesions, patients with known allergy to contrast material, and CNS, pulmonary and other visceral lesions. One case was later histopathologically diagnosed as embryonal rhabdomyosarcoma of the orbit and another turned out to be sarcoma with liver metastasis; they were excluded from calculations. Thus, a total of 34 cases were included in the study.
Each patient was duly counselled, and an informed consent was obtained. All patients included in the study underwent proper history, clinical examination after which they were subjected to MDCT angiography (MDCTA) and doppler sonography.
Clinical examination included evaluation of the following features: location of the lesion, size of the lesion, multifocality, color, position test, presence of bruit, presence of ulceration, prominent veins, and any deformity if present.
All patients included were subjected to CT angiography on Philips Brilliance 40 CT unit. Moderate sedation was administered when required.
The following algorithm was used in performing CT (a worked dosing example follows this protocol):
• Scout view for planning.
• Non-contrast CT: 5 mm slices limited to the involved area for any phleboliths or hemorrhage, and a localizer image for care bolus.
• MDCTA contrast volume and injection rate: 2 ml/kg body weight of 300 mgI/ml contrast at 1-2 ml/s. First-pass contrast enhancement by bolus tracking, a venous phase, and a delayed phase (when indicated in hemangiomas and low-flow malformations) were acquired.
• Radiation dose adjustment: mAs and kVp were altered as per the age and size of the patient to the minimum required levels. The scan range was carefully restricted to cover only the essential area. Pitch, detector rows and gantry rotation were adjusted to balance radiation dose, scan duration, spatial resolution, noise and artifact levels. Scan parameters are tabulated in Table 1.
• DSA was done when interventional therapy was contemplated.
• Percutaneous sclerotherapy was done in patients with low-flow lesions.
• Follow up was taken for 1-6 months by clinical parameters and color doppler.
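As a worked example of the contrast protocol above (2 ml/kg of 300 mgI/ml contrast injected at 1-2 ml/s), the Python sketch below computes the injection volume, iodine load and injection duration; the patient weight and the chosen injection rate are hypothetical examples.

```python
# Worked example of the contrast protocol above: 2 ml/kg of
# 300 mg iodine/ml contrast injected at 1-2 ml/s. Patient weight
# and injection rate are hypothetical examples.
def contrast_protocol(weight_kg: float, rate_ml_s: float = 1.5) -> dict:
    volume_ml  = 2.0 * weight_kg                     # 2 ml per kg body weight
    iodine_mg  = 300.0 * volume_ml                   # 300 mg iodine per ml
    duration_s = round(volume_ml / rate_ml_s, 1)     # injection duration
    return {"volume_ml": volume_ml, "iodine_mg": iodine_mg,
            "duration_s": duration_s}

print(contrast_protocol(20.0))  # e.g., a 20 kg child
# {'volume_ml': 40.0, 'iodine_mg': 12000.0, 'duration_s': 26.7}
```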
Final diagnosis: a final diagnosis was assigned for each patient, taking into account the clinical history, physical examination findings, evolution of the lesion, color doppler, CT angiography findings, DSA and response to treatment.
Statistical analysis
Fisher exact test was used to determine the statistical significance of 3D-MDCT angiography features which help to distinguish between:
• Hemangiomas and vascular malformations
• High-flow and low-flow vascular malformations
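A minimal sketch of this test using SciPy is shown below. The 2×2 counts are hypothetical placeholders, since the paper does not report the raw contingency tables behind each p-value.

```python
# Fisher exact test on a 2x2 contingency table, as used above to
# compare imaging features between lesion groups. The counts are
# hypothetical placeholders, not the study's raw tables.
from scipy.stats import fisher_exact

#                feature present, feature absent
table = [[10, 9],   # e.g., low-flow lesions
         [0,  5]]   # e.g., high-flow lesions

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio}, p = {p_value:.3f}")
```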
RESULTS
Of the 34 vascular lesions studied, a total of 19 low-flow malformations (14 venous, 3 venolymphatic and 2 lymphatic malformations), 10 hemangiomas and 5 arteriovenous malformations were finally diagnosed. The CT angiography findings along with follow up/treatment and final diagnosis are summarized in Table 2. The key findings of the various lesions are depicted in Figures 1 to 6.
The usual clinical presentation of infantile hemangiomas was in infancy with a well-defined red violaceous swelling. The lesion appeared as a soft tissue mass on NCCT, which showed intense enhancement on arterial phase. Feeding arteries were seen entering the lesion in an organized manner. Early draining veins were also seen in some cases. There was no contrast wash out on venous phase. MIP reconstruction showed an enhancing mass, looking like a cauliflower head on VR images ( Figure 1).
Non-involuting congenital hemangiomas usually present in early childhood as a violaceous swelling. They show few focal areas of vascular enhancement in a soft tissue mass on venous phase. Maximum enhancement is seen on delayed phase (Figure 2).
Venous malformations usually present in early or late childhood with a localized or large swelling. Phleboliths could commonly be seen on NCCT. Peak enhancement was seen in the venous phase, with multiple vascular channels within the lesion. No soft tissue mass was present, as was seen in hemangioma. The channels appeared as lacy tangles on MIP reconstruction (Figure 3).
Lymphovenous malformations presented similarly with a skin-colored swelling. Enhancing vascular channels interspersed with non-enhancing cystic areas could be seen on venous phase (Figure 4).
Venous malformations can be syndromic; a case of Klippel-Trénaunay syndrome was diagnosed in our study. The patient presented with limb swelling with tortuous veins and nevi. NCCT showed phleboliths in the anterior wall of the urinary bladder.
The venous phase showed abnormal enhancing vessels in the right thigh and gluteal muscles. The right femoral vein was not visualized. There was abnormal wall enhancement of the rectosigmoid colon. An abnormal vessel was noted draining to the femoral vein, coursing from the anterior to the posterior aspect of the thigh, suggesting the vein of Servelle (Figure 5).
Arteriovenous malformations were less common and appeared as bright reddish swelling. Arterial phase showed intense enhancement of multiple tortuous vessels with early opacification of draining veins. Venous phase showed washout of contrast. Arterial VR image showed tense tangle of vessels. DSA was used for embolization ( Figure 6).
The significance of MDCTA findings is given in Table 3. All AVMs showed maximum enhancement in the arterial phase. The majority (n=11/12) of venous malformations showed maximum enhancement in the venous phase. Most hemangiomas showed the appearance of a homogeneously enhancing soft tissue mass; the difference is highly significant (p=0.0001). Most venous and venolymphatic malformations (n=15/17) showed the appearance of lacy tangles of vessels. AVMs also showed the appearance of a tangle of channels, but tense in appearance.
VR appearance
Low flow malformations did not attain the contrast density required to produce a volume reconstructed image. High flow malformations appeared like a tense tangle. Most (n=4/5) hemangiomas gave the appearance of cauliflower head. Difference between hemangiomas and AVM for tense tangle of vessels is highly significant (p=0.0003).
Follow-up and final diagnosis
Five patients diagnosed as hemangioma that were in the proliferating phase were administered oral prednisolone, started at 3 mg/kg/d for 1 month and tapered by 0.5 ml every 2-4 weeks, then discontinued in 6-9 months when the lesions showed signs of involution and decrease in size. The rest of the hemangiomas were kept under observation; they showed a decrease in size over 6-9 months. Patients diagnosed with venous/venolymphatic/lymphatic malformations (n=19) were administered sclerotherapy using polidocanol and showed a decrease in size at a follow-up of 2 months. One case did not give consent for embolization, and embolization could not be performed in another case due to technical difficulties, as there were many large tortuous feeders.
Of the 34 vascular lesions studied, a total of 14 venous malformations, 10 hemangiomas, 5 AVMs, 3 venolymphatic malformations and 2 lymphatic malformations were diagnosed. Thus, a total of 19 low-flow vascular lesions were diagnosed.
DISCUSSION
Of the 34 cases evaluated, MDCTA diagnosis was correct in 31 cases, giving a diagnostic accuracy of 91.2%. Two venous malformations (n=14) and one hemangioma (n=10) were misdiagnosed. CT angiography correctly diagnosed all the high-flow lesions (n=5).
On plain scans the presence of phleboliths was noted in hemangiomas (n=2) and venous/venolymphatic malformations (n=10). None of the high-flow malformations showed phleboliths. It was found that the presence of phleboliths significantly differentiates low-flow and high-flow malformations (p=0.039); however, no such significance was found in differentiating hemangiomas from vascular malformations (p=0.45). This is in concordance with the study by Paltiel et al. 5 Dubois et al, in their review, maintain that hemangiomas show persistent enhancement, as do venous malformations, which enhance peripherally and slowly after contrast injection. 9 AVMs, on the other hand, are seen as highly enhancing lesions with numerous feeding and efferent vessels without persistent tissue staining. This is in concordance with our study, in which none of the hemangiomas (n=10) and low-flow vascular malformations (n=19) showed washout of contrast, while all the AVMs in the study (n=5) showed washout of contrast. Thus, washout of contrast was found to be a characteristic feature of AVMs (p=0.0001). It was also found that the phase of maximum enhancement can be used to differentiate high-flow and low-flow malformations: all the high-flow malformations (n=5) enhanced maximally in the arterial phase, whereas none of the low-flow malformations (n=19) did (p=0.0001).
However, it was found that hemangiomas (n=10) also enhanced in delayed phases (n=5), as did the low-flow vascular malformations (n=6); thus delayed phase enhancement was not found useful in differentiating hemangiomas and low-flow malformations (p=0.69).
Presence of early draining vein was found in hemangiomas (n=4) and AVMs (n=5). While early draining vein was absent in all low-flow vascular malformations (n=19). Presence of early draining vein was found to be a significant feature distinguishing AVMs and low-flow malformations (p=0.0001).
However, the presence of an early draining vein was not found to be a significant feature distinguishing hemangiomas from vascular malformations (p=0.39). The presence of an early draining vein in infantile hemangioma has been described in the literature, and differentiating infantile hemangiomas from AVMs in this respect on imaging is difficult. [9][10][11] It was observed that doppler was useful in this respect to differentiate hemangiomas and high-flow malformations: in the hemangiomas showing early draining veins, doppler showed venous flow in the draining veins, whereas in AVMs arterialization of the draining veins was observed. This is on par with previous studies and the literature maintaining that arterialization of veins is a specific feature of arteriovenous shunting and is seen in high-flow lesions. 5,10,11 On MIP reformations, 9 out of 10 hemangiomas showed the appearance of a soft tissue enhancing mass, while only two venous malformations showed a similar appearance. The appearance of an enhancing soft tissue mass was found to be significant in differentiating hemangiomas from vascular malformations (p=0.0001). This is in concordance with the studies by Bittles et al, and Leng et al, where hemangiomas were found to be soft tissue enhancing masses on 2D images. 12,13 This can also be considered the CT equivalent of the soft tissue appearance on grey scale USG, which was the only multivariate predictor differentiating hemangiomas from vascular malformations as assessed by Paltiel et al. 5 Venous/venolymphatic malformations (n=17) on MIP reformations were noted as lacy tangles of channels, an appearance for low-flow malformations that has also been described in other studies. 12,13 However, in the present study 2 low-flow malformations were noted as a soft tissue mass with delayed enhancement; doppler was helpful in correctly diagnosing them as venous malformations, as on grey scale they showed channels instead of a soft tissue mass.
On VR images AVMs appeared as a tense tangle of vessels, which was found to be significant in differentiating AVMs from hemangiomas (p=0.0003). This appearance has also been noted in previous studies on the topic. [12][13][14] Another 3D feature of AVMs described in the previous studies by Bittles et al, Leng et al and Tao et al, was the presence of tortuous feeders entering the lesion in a disorganized way. [12][13][14] This finding was also seen in the present study, where all the AVMs (n=5) had tortuous feeders entering the lesion in a disorganized way. The VR appearance of hemangioma described in previous studies is a lobular mass with 2 to 3 small, non-tortuous feeding vessels entering the lesion in an orderly manner. [12][13][14] Four of the hemangiomas in our study showed a cauliflower head appearance; all of them showed feeders that were not tortuous and entered the lesion in an organized manner. This appearance was significant in distinguishing hemangiomas from vascular malformations (p=0.0045), as none of the vascular malformations showed this appearance, and the AVMs (n=5) showed tortuous feeders entering the lesion in a disorganized manner. The literature and previous studies maintain that CT angiography with reformation techniques makes it possible to demonstrate the anatomy of lesions and their stereoscopic relationship with surrounding structures; CTA combined with image manipulation techniques including MIP, MPR and 3D volume rendering is invaluable for providing anatomic information relevant to treatment planning. 9,10,[12][13][14] In the present study, MPR reformation was found to be particularly useful in this respect: multiplanar reconstruction made it possible to define the extent of the lesions and to trace the draining and feeding vessels. Of the 34 lesions included in the present study, 3 lesions had visceral extension.
CT angiography was able to delineate this extension, whereas Doppler could not demonstrate the visceral extension or the channels along the bowel wall because the acoustic window was obscured by bowel gas and soft tissue. The superior spatial resolution of CT angiography was also useful in demonstrating involvement of muscles and deep structures, including the parotid and submandibular glands in the head and neck region.
One patient with an arteriovenous malformation was not subjected to coil embolization because there were multiple tortuous feeders. CT angiography was able to demonstrate the multiple tortuous feeders from the internal iliac artery, while Doppler failed in this respect: it demonstrated only one arterial feeder and draining vein, and the feeder could not be traced on Doppler due to an obscured window. This highlights the importance of CT angiography in proper patient management. Although the present study has a small sample size, it is, to date, the largest study of vascular lesions evaluated with 3D-MDCTA.
CONCLUSION
CT angiography is a fast and useful investigation in the assessment of pediatric peripheral vascular lesions. Not only is it effective in determining the extent of a lesion and its relationship with surrounding structures, but when coupled with MIP and VR reformations it is also useful in characterizing the lesion and aiding diagnosis for proper management.
Expression of alternative developmental pathways in the cabbage butterfly, Pieris melete and their differences in life history traits
Abstract The seasonal life cycle of the cabbage butterfly, Pieris melete is complicated because there are three options for pupal development: summer diapause, winter diapause, and nondiapause. In the present study, we tested the influence of temperature, day length, and seasonality on the expression of alternative developmental pathways and compared the differences in life history traits between diapausing and directly developing individuals under laboratory and field conditions. The expression of developmental pathway strongly depended on temperature, day length, and seasonality. Low temperatures induced almost all individuals to enter diapause regardless of day length; relatively high temperatures combined with intermediate and longer day lengths resulted in most individuals developing without diapause in the laboratory. The field data revealed that the degree of phenotypic plasticity in relation to developmental pathway was much higher in autumn than in spring. Directly developing individuals showed shorter development times and higher growth rates than did diapausing individuals. The pupal and adult weights for both diapausing and directly developing individuals gradually decreased as rearing temperature increased, with the diapausing individuals being slightly heavier than the directly developing individuals at each temperature. Female body weight was slightly lower than male body weight. The proportional weight losses from pupa to adult were almost the same in diapausing individuals and in directly developing individuals, suggesting that diapause did not affect weight loss at metamorphosis. Our results highlight the importance of the expression of alternative developmental pathways, which not only synchronizes this butterfly's development and reproduction with the growth seasons of the host plants but also exhibits the bet‐hedging tactic against unpredictable risks due to a dynamic environment.
| INTRODUCTION
It has been clearly recognized that diapause is an important mechanism for synchronizing seasonal development and activity in subtropical and temperate zone insects. However, diapause has another important role that is often ignored: it permits insects to spread their breeding across different periods, thereby greatly enhancing their chances of survival (Danks, 1987; Masaki, 1980; Tauber, Tauber, & Masaki, 1986; Xue & Kallenborn, 1993). Portions of many insect populations are known to enter diapause while the remaining insects continue to develop and reproduce. For example, 6 years of field observations of summer diapause in the zygaenid moth, Pseudopidorus fasciata, have shown that only 20%-28% of individuals of the overwintering generation and 49%-60% of the first generation entered summer diapause as prepupae, while the rest continued to develop and produce the next generation (Xue & Kallenborn, 1998). In the fly, Pegomyia bicolor, 6 years of field observations indicated that 41%-70% of individuals that pupated during April 5-7 entered pupal diapause, while the rest continued to emerge and oviposit and produced the second generation (Xue, Zhu, & Shao, 2001).
The cabbage butterfly, Pieris melete Ménétriés is a serious pest of crucifers in the mountain areas of Jiangxi Province, PR China and has a multivoltine life cycle with both summer and winter diapause in the pupal stage. The effects of temperature and photoperiod on diapause induction and termination have been evaluated in detail in this butterfly species under laboratory and field conditions (Xiao, Li, Wei, & Xue, 2008; Xiao, Wu, He, Chen, & Xue, 2012; Xiao et al., 2009). These studies also revealed that high temperatures strongly weakened the diapause-inducing effects of long day length and significantly reduced the incidence of summer diapause, whereas winter diapause can be induced under short day length at relatively high temperatures (Xiao et al., 2009; Xue et al., 1997). In the field, there are two distinct infestation peaks per year, one in the spring and a second in autumn. According to our field observations over 9 years (1988, 1989, 1994, 1995, 2003, 2004, 2005, 2006, and 2007), if the overwintered pupae eclosed into adults between mid-March and early April (1988, 1989, 1994, 1995, 2003, 2005, and 2007), almost all their progenies entered summer diapause and produced one generation. However, if adults emerged between late February and late March, some progenies produced by the early-emerged adults developed without diapause (33.33% in 2004; 34.04% in 2006; Xiao et al., 2012); these progenies emerged as adults in late April and produced the second generation. In autumn, aestivating individuals emerge between the end of August and early November. Early-emerging individuals can produce three generations in autumn under conditions of relatively high temperatures and intermediate day lengths. However, late-emerging individuals produce only one generation because of the relatively low temperatures and short day lengths occurring in late autumn.
Thus, there are one to three generations in autumn (Xue, Zhu, & Wei, 1996). Furthermore, some individuals always enter winter diapause regardless of temperature: 3.85% of individuals in 2003, 4.65% in 2004, and 6.78% in 2005 that hatched in August entered winter diapause even under high temperatures of 26.4 to 31.2°C (Xiao et al., 2012).
Therefore, this insect species may serve as an excellent experimental model to test the differences in life history traits between the diapausing and directly developing individuals. In the present study, we tested the influence of temperature, day length, and seasonality on the expression of alternative developmental pathways in P. melete under laboratory and field conditions and their differences in larval and pupal development time, pupal weight and growth rate, and adult weight and weight loss, aiming to understand how temperature, day length, and seasonality affect the evolution of their life-history traits.
| Experimental insects
The cabbage butterflies, P. melete, used in the experiments originated from a wild population in Tonggu County (28.5°N, 114.4°E; at an altitude of approximately 240 m above sea level), Jiangxi Province, PR China. Mature larvae prior to pupation were collected from crucifers in vegetable gardens in mid-November 2015 and late April 2016 and then transferred to wooden insectaries (30 × 30 × 35 cm) for pupation and for hibernation or estivation under natural conditions. Adults from the overwintering or aestivating pupae were released into an outdoor web-screened insectary with cultivated flowering Chinese cabbage, Brassica chinensis, for mating and oviposition in the spring or autumn, respectively. Eggs laid on leaves were collected daily in Petri dishes (height 2 cm; diameter 9.0 cm) lined with moistened filter paper and were used to conduct the experiments.
| Laboratory experiments
After hatching, young larvae from the spring generation were reared in Petri dishes (height 2 cm; diameter 9.0 cm) containing moistened filter paper and fresh leaves of B. chinensis with four larvae in a Petri dish. The Petri dishes were randomly divided into four groups and were placed in four illuminated incubators (LRH-250-GS, Guangdong Medical Appliances Plant) with constant temperatures of 16, 19, 22, and 25°C. The photoperiod was identical in all treatments (24-hr L/D cycle, L:D 12.5:11.5 hr). At least 30 Petri dishes were used for each temperature treatment. The Petri dishes were checked daily and supplied with new fresh leaves when needed. After pupation, pupae were placed individually in a transparent plastic box (3.5 cm in diameter and 6 cm in height) lined with filter paper and the box was covered with gauze. The pupae were monitored for eclosion to determine the developmental pathway for each individual. Based on the current experiment, if they did not emerge within 35 days at 16°C, 15 days at 19°C, 12 days at 22°C, and 9 days at 25°C, they were assumed to be in diapause. Diapause pupae were placed at 8°C for 30 days in continuous darkness and then transferred to L:D 15:9 hr and 18°C conditions to terminate the pupal diapause and observe the adult emergence. The number of females and males was recorded daily.
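The diapause-scoring rule above is a simple threshold on time to emergence, so it can be restated directly. The following is a minimal Python sketch of that rule (function and dictionary names are illustrative, not from the paper; the thresholds are those stated in the text):

EMERGENCE_THRESHOLD_DAYS = {16: 35, 19: 15, 22: 12, 25: 9}  # rearing temp (deg C) -> days

def classify_pathway(rearing_temp_c, days_to_emergence=None):
    """Score a pupa as 'direct' if it emerged within the temperature-specific
    threshold, otherwise 'diapause' (including pupae that never emerged)."""
    threshold = EMERGENCE_THRESHOLD_DAYS[rearing_temp_c]
    if days_to_emergence is not None and days_to_emergence <= threshold:
        return "direct"
    return "diapause"

print(classify_pathway(19, 20))  # -> 'diapause' (the 19 deg C threshold is 15 days)

The same logic applies to the field experiment described below, only with season-specific thresholds (15 days for the spring generations, 25 days for the autumn generations).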
| Field experiments
Newly hatched larvae from the spring generation were transferred to B. chinensis plants grown in an outdoor web-screened insectary. When the larvae matured, they were placed individually in a transparent plastic box (3.5 cm in diameter and 6 cm in height) lined with filter paper and fresh leaves for pupation. After pupation, the pupae were transferred individually to a clean transparent plastic box for eclosion. Adults emerging from nondiapause pupae were released into another outdoor web-screened insectary to mate and produce the second generation using the same protocol. Nondiapause pupae generally emerged within 8-13 days in the spring generations; thus, each pupa that did not emerge within 15 days in the spring generations was assumed to be in summer diapause. As in the laboratory experiment, aestivating pupae were placed at 8°C (a low temperature that can accelerate diapause development and shorten diapause duration; Xiao, Wu, Chen, & Xue, 2013) for 30 days in continuous darkness and then transferred to L:D 15:9 hr, 18°C conditions to terminate pupal diapause and observe adult emergence. The number of females and males was recorded daily.
In the autumn generations, diapausing and nondiapausing pupae were obtained using the same method as that used for the spring generations. Nondiapause pupae generally emerged within 7-21 days in the autumn generations. Thus, each pupa that did not emerge within 25 days was assumed to be in diapause. The diapause pupae were treated under the same conditions as were the aestivating pupae to observe the adult emergence. The diapause pupae of the second autumn generation were maintained under natural conditions until adult eclosion the following spring.
Data on the mean daily temperature experienced by the larvae of each generation were collected from the weather station of Jiangxi Agricultural University.
| Measurement methods
For each diapausing and directly developing individual obtained from both laboratory and field conditions, we measured the larval and pupal development time from hatching to pupation and adult eclosion, pupal and adult weight, growth rate, and proportional weight loss at metamorphosis. We calculated the pupal weight on the 2nd day after pupation and adult weight after the release of the meconium by using an electronic balance (AUY120; Shimadzu). The individual growth rate of each larva used in the experiments was calculated according to the methods of Gotthard, Nylin, and Wiklund (1994): Growth rate = ln (pupal weight)/larval time × 100. This formula gives a relative growth rate representing the mean weight gain per day. Weight loss between pupation and adult eclosion was calculated using the following formula: proportion weight loss = 1 − (adult weight/pupal weight).
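Both formulas are straightforward to compute. The sketch below restates them in Python with illustrative (not measured) values, assuming weights in milligrams and larval time in days; weights must be expressed in the same unit for all individuals for the rates to be comparable:

import math

def growth_rate(pupal_weight, larval_time_days):
    """Relative growth rate of Gotthard, Nylin, and Wiklund (1994):
    ln(pupal weight) / larval development time x 100."""
    return math.log(pupal_weight) / larval_time_days * 100

def proportional_weight_loss(pupal_weight, adult_weight):
    """Proportional weight loss at metamorphosis: 1 - adult/pupal weight."""
    return 1 - adult_weight / pupal_weight

# Hypothetical example: a 150 mg pupa after 30 larval days, eclosing at 65 mg
print(round(growth_rate(150.0, 30.0), 2))               # 16.7
print(round(proportional_weight_loss(150.0, 65.0), 2))  # 0.57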
| Statistical analyses
Statistical analyses were conducted using the SPSS 17.0 statistical software package (IBM, www.ibm.com). Life history traits were analyzed in relation to temperature, developmental pathway, and sex with a general linear model. The nonsignificant three-way term (temperature × developmental pathway × sex) was dropped from the final model. One-way analysis of variance (ANOVA) was used to determine whether there were significant differences in life history traits between developmental pathways at each temperature. One-way ANOVA and Duncan's test were used to compare the differences in life history traits between sexes in each developmental pathway and at each temperature. Throughout the text, all means are given ±1 SE.
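The models were fitted in SPSS 17.0; for readers who prefer an open-source route, an equivalent analysis can be sketched in Python with pandas and statsmodels (the data file and column names are hypothetical, not from the paper):

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("pieris_life_history.csv")  # hypothetical file: temp, pathway, sex, larval_time, ...

# General linear model; the nonsignificant three-way term
# temp x pathway x sex has already been dropped, as in the paper.
reduced = smf.ols(
    "larval_time ~ C(temp) + C(pathway) + C(sex)"
    " + C(temp):C(pathway) + C(temp):C(sex) + C(pathway):C(sex)",
    data=df,
).fit()
print(sm.stats.anova_lm(reduced, typ=2))

# One-way ANOVA comparing developmental pathways within one temperature
at_19 = df[df["temp"] == 19]
print(sm.stats.anova_lm(smf.ols("larval_time ~ C(pathway)", data=at_19).fit()))

Duncan's test has no direct statsmodels equivalent; a Tukey HSD via statsmodels' pairwise_tukeyhsd is a common substitute for the pairwise comparisons.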
| Comparisons of life-history traits between diapausing and directly developing individuals at constant temperatures
Temperature, developmental pathway, and their interaction (temperature × developmental pathway) significantly affected larval development time (Table 2). Larval development time significantly decreased as rearing temperature increased, with the larval development time of directly developing individuals being shorter than that of diapausing individuals, with significant differences at 19 and 22°C (Figure 1, Table S1, p < .05).
Pupal weight was significantly affected by temperature and sex (Table 2). Pupal weight gradually decreased as rearing temperature increased from 16 to 25°C (Figure 1), which is consistent with the general pattern in ectothermic animals (Atkinson, 1994). Males were slightly larger than females at each temperature, but this difference was not significant (Table S1, p > .05). Individuals that developed directly into adults attained relatively lower pupal weights than did individuals entering diapause (Figure 1), although the differences were not significant at any temperature (Table S1, p > .05).
Temperature and developmental pathway significantly affected larval growth rate (Table 2). The growth rate increased significantly as the rearing temperature increased. Individuals that developed directly into adults had significantly higher growth rates than individuals entering diapause (Figure 1, Table S1).
Adult weight was significantly affected by temperature and developmental pathway (Table 1). Adult weight gradually decreased as the rearing temperature increased (Figure 1). Although Table 1 shows a significant effect of developmental pathway on adult weight, there was no significant difference in adult weight between diapausing and directly developing individuals at any temperature, with diapausing individuals being slightly larger than directly developing individuals (Figure 2, Table S1). Male adults were slightly larger than female adults (Figure 2, Table S1).
Temperature significantly affected weight loss from pupa to adult (Table 1). The proportional weight losses of diapausing pupae were similar to those of directly developing individuals at all temperatures (55%-59%; Table S1). There were no significant differences in weight loss between males and females at any temperature (Table S1).
| Developmental pathways under field conditions
In the spring generations, almost all individuals entered summer diapause, and only a few individuals underwent direct development (Table 2). This is because they experienced relatively low mean daily temperatures and gradually lengthening day lengths. In the autumn generations, most individuals developed without diapause (90% in the first autumn generation, 78.5% in the second autumn generation) because they experienced gradually shortening day lengths (close to the intermediate day lengths) and relatively high mean daily temperatures (Table 3).
| Comparisons of life-history traits between diapausing and directly developing individuals under field conditions
Larval development time was significantly influenced by temperature and developmental pathway (Table 4), being shorter in directly developing individuals, with significant differences for males (Figure 3, Table S2, p < .05).
Pupal weight was significantly affected by temperature, sex, developmental pathway, and a significant interaction (developmental pathway × sex; Table 4). Pupal weight gradually decreased as the mean daily temperature increased, with diapausing individuals being slightly larger than directly developing individuals (Table S2). Male pupae were generally slightly larger than female pupae, with a significant difference at 20.8°C (Figure 3, Table S2, p < .05).
Growth rate was significantly affected by temperature and developmental pathway, and developmental pathway and sex had a significant interaction (Table 4). The growth rate increased significantly as the mean daily temperature increased. Moreover, the growth rate was higher in directly developing individuals than in diapausing individuals, with significant differences for males at mean daily temperatures of 17.6, 20.8, and 25.6°C (Figure 3, Table S2, p < .05).
Adult weight was significantly affected (Table 4) and gradually decreased as the mean daily temperature increased (Figure 4), with diapausing individuals being slightly larger than directly developing individuals (Table S2). Male adults were larger than female adults, with a significant difference at 20.8°C (Figure 4, Table S2, p < .05).
Proportional weight loss from pupa to adult was significantly affected by temperature (Table 4). However, proportional weight loss did not differ significantly between diapausing and directly developing individuals at any temperature, even though diapausing individuals experienced significantly longer pupal durations than did directly developing individuals (67-97 vs. 7-13 days).
| DISCUSSION
The experimental results for P. melete under both laboratory and field conditions showed similar patterns: the expression of the developmental pathway is highly plastic, depending on temperature, day length, and their interaction (see Tables 1 and 3). Direct development makes full use of the available food resources (approximately 3.5 months) for building up the population. The induction of diapause in some individuals despite high autumn temperatures reflects a bet-hedging tactic that allows the butterfly to escape various unpredictable physical or biotic factors, such as farming practices (insecticide applications, thinning of the seedlings, and harvesting), interspecific competition (aphids and the beetle Phaedon brassicae), and autumn drought, thus avoiding the catastrophic elimination of an entire population.
Given the physiological responses that control the propensity to enter diapause, the expression of an alternative developmental pathway in P. melete may depend upon a threshold: individuals above a certain threshold value may enter diapause, while those below it may avert diapause. Alternatively, alternative developmental pathways may entail conditionally expressed genes that, although carried by all individuals within a population, are expressed and exposed to selection in only a fraction of these individuals at any given time (Van Dyken & Wade, 2010).
Under both laboratory and field conditions, pupal and adult weights for both diapausing and directly developing individuals of P. melete gradually decreased with increasing temperature, showing a typical thermal reaction norm for ectotherm body size, denoted as the temperature-size rule (TSR; Atkinson, 1994). However, increasing evidence has shown that the reverse TSR in insects is also common.
For example, reversals of the TSR have been found in four species of mayfly (Atkinson, 1995); four species of British grasshoppers (Willott & Hassall, 1998); the tropical butterfly, Bicyclus anynana (Fischer, Bot, & Brakefield, 2004); the small cabbage white butterfly, Pieris rapae (Kingsolver, Massie, Ragland, & Smith, 2007); the Asian corn borer, Ostrinia furnacalis (He, Tang, Huang, Gao, & Xue, 2019; Xiao et al., 2016); and the rice stem borer, Chilo suppressalis (Fu, He, Zhou, Xiao, & Xue, 2016; Huang, Xiao, He, & Xue, 2018). Why, then, do some insect species follow the TSR while others exhibit the reverse TSR? We speculate that whether an insect species follows the TSR may be related to its diapause characteristics. Species with summer diapause may exhibit the TSR, as indicated by the cabbage beetle, C. bowringi (Tang, He, Chen, Fu, & Xue, 2017) and this butterfly, P. melete, because their reproductive periods occur in the spring and autumn, whereas the reverse TSR is exemplified by the Asian corn borer, O. furnacalis (Xiao et al., 2016) and the rice stem borer, C. suppressalis (Fu et al., 2016; Huang et al., 2018). These two species enter winter diapause in response to high autumn temperatures and experience strong selection for body size under warm conditions.
Additional insect species with similar diapause characteristics will be investigated to confirm this speculation.
To date, few studies have tested the differences in weight loss between diapausing and directly developing individuals. In the cotton bollworm, Helicoverpa armigera, proportional weight losses were slightly lower in diapausing individuals than in directly developing individuals at 20 and 22°C, but slightly higher at 25°C (Chen et al., 2014). In the present study, the proportional weight losses were similar between diapausing and directly developing individuals under both laboratory and field conditions at the same temperatures, despite diapausing individuals exhibiting much longer pupal durations than directly developing individuals (Tables S1 and S2). This indicates that the process of diapause did not affect body weight change during metamorphosis. Furthermore, the weights of both female and male adults were slightly higher in diapausing individuals than in directly developing individuals. Thus, newly emerged female adults from diapause development should have relatively higher fecundity than those from direct development, because female fecundity is generally positively correlated with adult body weight when the number of eggs is assessed as lifetime fecundity under standard conditions or by dissection (Honek, 1993). Therefore, the relatively large body sizes of diapausing individuals are generally considered adaptive because of their greater reserves (Hahn & Denlinger, 2007) and may ameliorate the negative cost of diapause.
ACKNOWLEDGMENTS
We thank all in the laboratory for their technical assistance with the experiment. This research was supported by a grant from the
CONFLICT OF INTEREST
The authors declare that they have no conflict of interest.
AUTHOR CONTRIBUTIONS
FS-X conceived and designed the research. JJ-T, HM-H, and C-Z conducted experiments and analyzed the data. L-X and SH-Wu wrote the manuscript. All authors read and approved the manuscript.
FIGURE 4 Comparisons of weight loss from pupae to adults between diapausing and directly developing individuals of Pieris melete at different mean daily temperatures in the field. The symbols represent the mean values, ±SEs, and ±SDs, respectively. Values with different lowercase letters are significantly different between diapausing and directly developing individuals at a significance level of 0.05. The mean daily temperatures of 16.8, 17.6, 20.8, and 25.6°C represent the first spring generation, the second autumn generation, the second spring generation, and the first autumn generation, respectively.
Nutrient Value of Leaf vs. Seed
Major differences stand out between edible leaves and seeds in protein quality, vitamin, and mineral concentrations and omega 6/omega 3 fatty acid ratios. Data for seeds (wheat, rice, corn, soy, lentil, chick pea) are compared with corresponding data for edible green leaves (kale, spinach, broccoli, duckweed). An x/y representation of data for lysine and methionine content highlights the group differences between grains, pulses, leafy vegetables, and animal foods. Leaves come out with flying colors in all these comparisons. The perspective ends with a discussion on “So why do we eat mainly seeds?”
There is a significant difference between seed protein and leaf protein. Seeds (grains and legume pulses) are in the business of plant reproduction and nurturing the developing plant. Leaves, on the other hand, deal mainly with photosynthesis in the mature plant, a process of harnessing visible radiance to produce carbohydrates and biochemical energy.
Seed protein is a composite of hundreds of different enzymes and structural proteins (Yang et al., 2013); however, its protein complement is dominated by a family of storage proteins. In corn kernels it is zein, which comprises up to 60% of the endosperm protein (Larkins and Holding, 2009); in wheat grains it is the glutenins, which account for 40% of the grain protein (Liu et al., 2012); in the rice grain it is the glutelins, which comprise over 80% of the seed protein (Shyur et al., 1988).
Storage protein imparts individuality to the seed grain: The insolubility of zein in water (Shukla and Cheryan, 2001), the elasticity of glutenin in dough (Kieffer, 2006), the gelling of glutelin in rice (Agboola et al., 2005). However, along with individuality, an imbalance in nutritional composition often crops up. Many seeds are deficient in one or more of the essential amino acids that our bodies cannot synthesize and which we obtain solely from food intake. For example, several cereal grains are deficient in lysine and tryptophan, while legume pulses are often deficient in methionine and/or cysteine (Shewry et al., 1995; Figure 1A).
The general difference in amino acid composition among the grains, legumes, and leafy vegetables can readily be visualized by comparing methionine and lysine values ( Figure 1B). The grains and most other monocot food plants are generally poor in lysine (see the boxed positions for wheat, corn, and rice), while the dicot legume pulses are often lacking in methionine (see the boxed positions for soy, chickpea, and lentil). Leafy vegetables on the other hand (see boxed positions for spinach, broccoli, and duckweed) edge into the FAO standard quadrant along with the animal foods.
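The x/y representation of Figure 1B is easy to reproduce for any food table. The sketch below uses matplotlib with placeholder coordinates (the values are illustrative, not the paper's data); the FAO reference lines follow the ranges quoted in the figure legend (1.6 to 1.7 for methionine and 4.5 to 4.8 for lysine, in g/100 g protein):

import matplotlib.pyplot as plt

foods = {  # name: (lysine, methionine) in g/100 g protein -- placeholder values
    "wheat": (2.8, 1.6), "corn": (2.8, 2.0), "rice": (3.8, 2.4),
    "soy": (6.2, 1.3), "chickpea": (6.7, 1.3), "lentil": (7.0, 0.9),
    "spinach": (6.0, 1.8), "broccoli": (6.5, 1.9), "duckweed": (5.9, 1.7),
}

fig, ax = plt.subplots()
for name, (lys, met) in foods.items():
    ax.scatter(lys, met)
    ax.annotate(name, (lys, met), textcoords="offset points", xytext=(4, 4))
ax.axvline(4.5, linestyle="--")  # FAO lysine standard (lower bound)
ax.axhline(1.6, linestyle="--")  # FAO methionine standard (lower bound)
ax.set_xlabel("Lysine (g/100 g protein)")
ax.set_ylabel("Methionine (g/100 g protein)")
ax.set_title("Lysine vs. methionine content (illustrative)")
plt.show()

Foods falling above and to the right of both reference lines meet both FAO standards, which is the standard quadrant referred to above.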
Leaf protein is likewise composed of hundreds of enzymes and is likewise dominated by a single polypeptide complex: RUBISCO (ribulose 1,5-bisphosphate carboxylase/oxygenase), which is a crucial component in the photosynthetic fixation of atmospheric carbon within green plants. RUBISCO (previously known as Fraction 1 protein) is located in leaf chloroplasts and can account for 50% of total leaf cell protein (Kawashima and Wildman, 1970). In some plants, RUBISCO even crystallizes within the leaf due to its high concentration (Willison and Davey, 1976). Many chloroplast proteins, including RUBISCO, are highly conserved at the gene and protein levels (Sane and Amla, 1991). Thus, RUBISCO is pretty much the same protein in all green leafy plants, with only a few amino acid changes from species to species. Importantly, RUBISCO is rich in the essential amino acids, usually with eight of the designated nine at percentages meeting FAO (Food and Agricultural Organization of the United Nations) nutritional criteria (Kung and Tso, 1978). Leafy plants such as spinach, broccoli, and duckweed (a monocot plant consisting of nothing much more than a single leaf) in fact provide protein containing all the essential amino acids in percentages meeting FAO standards (Figure 1A). In order to achieve a fully nutritional state, seed protein often needs to be a mix of several sources; for example, the famous combination of sesame seeds (tahini), rich in methionine but poor in lysine, with chickpeas (hummus), rich in lysine but poor in methionine (Figure 1B).
[Figure 1 legend notes: values meet FAO standards (WHO Technical Report Series 935, 2007) except for values in red; the thickness of the FAO standard lines reflects different requirements for "adults" and "children and adolescents," the range varying from 1.6 to 1.7 for methionine and 4.5 to 4.8 for lysine.]
VITAMINS IN LEAVES AND SEEDS
Vitamins are essential nutrients required in small amounts that our bodies are not able to supply in sufficient quantity. Therefore, they must be obtained from the foods we eat. The complement of vitamins in leaves and seeds is very different. Grains are generally low in vitamins; legume pulses are spotty (for example, green pea is rich in vitamin C but not in other vitamins); leafy vegetables are often rich in several vitamins. This can be readily seen by comparing vitamin concentrations for green leafy vegetables with comparable data for grains and pulses in USDA's National Nutrient Database (Nutritiondata Tools, 2014). Edible green leaves, including duckweed (Landolt and Kandeler, 1987; Marizvikuru and Gwaze, 2013), generally have at least an order of magnitude more pro-vitamin A (i.e., beta-carotene), vitamin B1 (thiamine), vitamin C (ascorbic acid), vitamin E (alpha tocopherol), and vitamin K (naphthoquinones) than do grains or pulses (Table 1A).
MINERALS IN LEAVES AND SEEDS
Metal ions are crucial for our body. They frequently serve as cofactors in enzymatic reactions and are also important for maintaining protein structure. A third of human proteins bind metal ions, with over 10% of enzymes in our body requiring zinc for activity (Azia et al., 2015). The comparative metal ion profile for leaves and seeds is reminiscent of that for vitamins: grains such as wheat, rice, and corn are relatively low in metal ions; legume pulses such as soy have increased amounts of several minerals; green leafy vegetables such as kale, spinach, and duckweed (Feedipedia, 2013) are richer in many minerals (Table 1B).
There is a caveat, however, when considering metal ion data. While the amino acid composition (Atanasova, 2008) and the vitamin profile (Mozafar, 1993) of edible plants can be somewhat influenced by the fertility of the soil or water in which they grow, the metal ion composition is often more responsive (Macnair, 2003; Chibuike and Obiora, 2014). Water plants such as duckweed are particularly responsive to metal concentrations in their nutrient medium (Wang, 1990). The upshot is that metal ion concentrations quoted for leaves and seeds are, to a large extent, specific to the conditions of fertilization.
OMEGA-6 VS. OMEGA-3 FATTY ACIDS IN LEAVES AND SEEDS
Current research indicates that an excess of omega-6 fatty acids in our diets can promote prothrombotic and proaggregatory activity, while omega-3 fatty acids promote an anti-inflammatory and anti-thrombotic physiology (Simopoulos, 2002, 2006). There is, in general (with exceptions, such as chia seeds; Nutritiondata Products, 2014), a stark difference between seed and leaf fatty acid compositions: while the former are high in omega-6, the latter are high in omega-3 (Table 1C). In addition, α-linolenic acid, which is abundant in many green leafy vegetables and is a major source of omega-3, can be metabolized in our bodies to longer-chain fatty acids such as eicosapentaenoic acid and docosahexaenoic acid. These in turn may beneficially affect chronic disease control (Simopoulos, 2002).
SO WHY DO WE EAT MAINLY SEEDS?
The major portion of the calories in Western and many other diets comes from seeds and seed products, particularly from a very narrow field of four sources: wheat, rice, corn, and soy. The recent, huge increase in the use of soy oil, with its biased linoleic acid/α-linolenic acid ratio, has in fact driven a change in the omega-6/omega-3 ratio from ∼1:1 to ∼10-30:1 in the American population (Blasbalg et al., 2011), a change which may impact negatively on several health aspects (Simopoulos, 2002, 2006). Why, if the nutritional value is so clearly on the side of leaves, do we feed mainly on potentially problematic seeds and seed products? The answer seems to lie partly with intrinsic biological issues and partly with big business practice. Roughly speaking, wheat and rice grains, corn kernels, and soybeans are harvested at moisture levels between 15 and 25% (see statistics, Nutritiondata Tools, 2014), while fresh, edible green leaves, such as spinach, broccoli, lettuce, and duckweed, each have a moisture level of >90% (see statistics, Landolt and Kandeler, 1987; Nutritiondata Tools, 2014). Therefore, to capture equal amounts of solids, one has to consume about four to six times more leaves than seeds, grains, or beans. An additional factor is oxalate, which has antinutrient activity and is prevalent in leafy vegetables (Aletor and Adeogun, 1995). However, in this regard, seeds have their own Achilles heel in the form of anti-nutritional allergens (Taylor et al., 2015).
External factors are also at play: Commercial seed crops are adept at production of carbohydrates, oils, and proteins. Increasingly used as feed, they are efficiently transmuted into animal protein and processed food products. Moreover, with massive silo storage, grains function as international commercial commodities (Pollan, 2007). In the case of soy beans, an increased demand for soy protein for industrial production of beef and chicken led to an excess of soy oil as a byproduct, which quickly became a food staple for restaurants, and the fast food industry (Blasbalg et al., 2011).
With a growing awareness of health issues generated by seed dominated diets, and the documented abundance of nutrients in leafy vegetables, a move in the West appears to be developing back to leaf-based foods and, importantly, to an increased variety of plant species decorating our meal plate.
Postgraduate Operating Room Nursing Students’ Experiences with Blended Learning Combining Digital Learning Paths and Basic Skills Training as Preparation for Internship: A Qualitative Study
Introduction Numerous pedagogical practices should be considered for the acquisition of the practical skills essential to postgraduate operating room nursing education. The employment of digital technologies has emerged as a strategic focus in higher education, and learning paths exhibit potential as a digital approach in nursing education. Objective This study aimed to investigate the experiences of postgraduate OR nursing students who underwent a blended learning approach, which combines digital learning paths with skills training, and to explore how this approach prepares students to attain specific learning outcomes during their internship period. Methods This qualitative study employed a descriptive, exploratory design and utilized focus group interviews facilitated by an interview guide to gather qualitative data. A purposive sampling strategy was employed, and the collected data were analyzed using a systematic text condensation approach. Results The analysis of the data revealed two main categories and five subgroups. The first category, "Blended learning serves as adequate preparation for internship," includes subgroups that highlight the advantages of diverse learning activities that aid in the development of a strong foundation in practical skills. The positive influence of peer collaboration fosters improved learning through social interaction, while the organization of the curriculum has a significant impact on students' learning experiences. The second category, "The importance of skills training and behaving in an operating theater context," consists of subgroups that emphasize the necessity of progressing from basic technical skills training to simulation pedagogy to ensure appropriate behavior in the operating room. Small group sizes, close monitoring, and assessment by educators contribute to effective learning. Conclusion The integration of digital learning paths with skills training fosters a problem-solving approach and encourages active and collaborative learning. Skills training in small groups, timely feedback, and coordination among subject managers to handle the students' workload can create an optimal learning environment.
Introduction/Background
In the field of operating room (OR) nursing, the acquisition of skills is a gradual process that occurs during both educational and clinical internship phases. This progression entails engaging in tailored learning activities at an appropriate taxonomical level. Postgraduate OR nursing students must first master fundamental skills, such as surgical hand disinfection, sterile dressing, patient positioning, and instrumentation, before advancing to more complex practical skills (Cuming, 2022).
To enhance the quality of OR nursing education, the application of constructive alignment as an educational principle can be beneficial (Biggs et al., 2022). The term "constructive" pertains to the learners' active role in constructing meaning through relevant learning activities. On the other hand, the term "alignment" refers to the actions undertaken by educators (Biggs et al., 2022). Constructive alignment is an outcome-based educational approach that entails aligning crucial elements of the educational program, such as teaching and learning activities and assessment methods, with the intended learning outcomes of students (Biggs et al., 2022).
In Norway, the education of OR nurses comprises two options: postgraduate education with a duration of 18 months (90 credits, full-time studies) and a master's degree with a duration of 24 months (120 credits, full-time studies). The key distinction between these programs is the inclusion of a master's thesis worth 30 credits. As of January 2023, all educational institutions offering OR nursing education are required to adhere to the new National Curriculum Regulations for Norwegian Health and Welfare Education (Forskrift om nasjonal retningslinje for operasjonssykepleierutdanning, 2023). In line with this curriculum, a university college situated in south-eastern Norway uses a blended learning strategy for its OR nursing program. This approach combines a digital learning path with skills training in the foundational OR nursing skills course during the first semester of the program.
Peer learning, implemented through various learning activities, was an educational strategy in which students collaborated in small groups to meet the course's learning outcomes (Table 1).
Peer learning includes collaboration, reflection, and communication between students either at the same level or at different academic levels (Josse-Eklund et al., 2023). The accumulated knowledge suggests that students in peer learning gain greater confidence and independence in learning and acquire a higher level of personal and professional skills (Josse-Eklund et al., 2023). Our project activities included reading and discussing the syllabus and recommended literature, watching procedural videos, solving assignments and quizzes, and training procedural skills together. Table 1 provides an overview of the learning activities and learning outcomes covered in this course.
Several studies have highlighted the limited opportunities for OR nursing students to acquire practical skills. This scarcity underscores the importance of providing extensive skills training due to the unfamiliarity of the clinical environment and the unique roles and responsibilities of OR nurses (Mafinejad et al., 2022; Prince, 2004; Sarikaya et al., 2006). As OR nursing students prepare for their first internship period, it becomes crucial for them to acquire basic skills in the most common procedures performed in the operating theater. This preparation is necessary for achieving a state of readiness, as outlined in Table 1. The learning outcomes specified in this table are closely aligned with what OR nursing students should aim to master during their initial internship period.
Review of Literature
The use of digital technologies in higher education has become a global priority, with a focus on innovation and advancement (Sormunen et al., 2022). In line with this, the Norwegian government, in its report 'Quality Culture in Higher Education,' emphasizes the importance of providing stimulating and diverse learning and assessment methods that utilize digital opportunities for all students (Ministry of Education and Research, 2017).
[Learning outcomes from Table 1 include: implementing infection prevention and applying hygienic principles; quality assurance in the preparation and use of surgical instruments and operating materials; and assessing the patient's health and preventing complications in surgical treatment and positioning.]
Digital learning interventions in higher education have shown positive outcomes. They enhance students' professional knowledge, skills, and attitudes, while also improving their academic performance, collaborative abilities, and study skills (Button et al., 2014; Hernon et al., 2023; Männistö et al., 2020; Sormunen et al., 2022). Additionally, digital learning has been linked to increased professional confidence and self-efficacy in clinical skills, problem-solving, decision-making, teaching and counseling, and professional communication skills (Sormunen et al., 2022). Overall, the integration of digital learning methods in higher education has proven to be beneficial for students, empowering them with a comprehensive set of skills and improving their overall educational experience. Numerous studies have demonstrated that blended learning in health education yields more favorable outcomes compared to traditional learning methods (Lee & Park, 2018; Vallée et al., 2020). Blended learning involves a combination of traditional face-to-face learning and either asynchronous or synchronous e-learning (Vallée et al., 2020). In a digital learning path, students progress through learning activities in alignment with the descriptions of the intended learning outcomes (Biggs et al., 2022). Digital learning paths are closely interconnected with the concept of active learning. Active learning requires students to actively process, reflect upon, and apply the content based on their own preexisting knowledge and the professional context (Prince, 2004).
Digital learning paths make teaching more dynamic, engaging, and beneficial for students (Welch Bacon & Gaither, 2020). By activating multiple senses, digital learning paths acknowledge that students have different learning strategies and styles, and that employing a variety of teaching methods enhances motivation to learn (Repstad & Tallaksen, 2006). These digital learning paths provide students with the opportunity to construct new knowledge upon their existing knowledge. When new knowledge is integrated with preexisting knowledge, learning and retention are enhanced (Piaget & Inhelder, 1969; Vygotsky, 1978).
Digital learning paths offer the potential to enhance learning outcomes across various disciplines and mitigate barriers related to resources, geographical limitations, and time constraints. This, in turn, contributes to a more flexible education system (Fossland, 2015). The emergence of diverse learning platforms, such as Learning Management Systems and Virtual Learning Environments, further facilitates flexibility for students and provides a wider array of learning and assessment methodologies for OR nursing students (Fossland, 2015; Meum et al., 2021). The terms e-learning, online learning, virtual learning, web-based learning, and distance learning all encompass various forms of digital learning. Studies comparing traditional learning methods with digital learning methods have consistently shown that digital learning methods are advantageous in terms of skills performance and higher content retention (Abarghouie et al., 2020) and the development of self-leadership and problem-solving skills (Lee & Park, 2018). Furthermore, e-learning resources have been found to enhance preparedness among medical students, undergraduate nursing students, postgraduate OR nursing students, and anesthesiology nursing students before their practice in the OR, with videos being identified as the most beneficial resource (Fagerdahl et al., 2021). In addition, e-learning has been shown to increase knowledge regarding patient moving, transferring, and positioning among OR nurses (Khorammakan et al., 2024). A blended learning course focusing on minimally invasive surgery for nurses has also demonstrated potential in bridging the training gap for nurses working in different countries (Ortega-Morán et al., 2021).
Despite these positive findings, there is a scarcity of information regarding the use of blended learning which combines digital learning paths with skills training in the field of postgraduate OR nursing education. This research gap justifies the need for further investigation in this area. Thus, this study aimed to investigate the experiences of postgraduate OR nursing students who underwent a blended learning approach combining digital learning paths with skills training. The main objective was to explore how this approach prepares students to successfully attain specific learning outcomes during their initial internship period.
Design
This research employed a qualitative descriptive exploratory design, utilizing focus group interviews as the primary method of data collection (Hunter et al., 2019). The use of focus groups is well-suited for investigating human characteristics such as experiences, thoughts, motives, and attitudes (Malterud, 2012). Qualitative research aims to understand rather than explain and to describe rather than predict (Malterud, 2017). Given the limited number of studies exploring student experiences with blended learning approaches in postgraduate OR nursing education, this design was deemed appropriate (Hunter et al., 2019). Furthermore, this study adhered to the Consolidated Criteria for Reporting Qualitative Research guidelines, ensuring transparent reporting of the research process (Tong et al., 2007). Figure 1 demonstrates the timeline template for the study, outlining the various stages and activities conducted.
Research Question
What are the experiences of postgraduate OR nursing students with a blended learning approach that integrates a digital learning path with skills training, and how does this approach prepare them for their initial internship?
Participants and Setting
The study took place at a university college in southeastern Norway, which offers postgraduate and master's degree programs in OR nursing education. A purposive sampling strategy was utilized to recruit postgraduate students enrolled in this program (Polit & Beck, 2017). All students (n = 41) enrolled in the postgraduate OR nursing educational program were given detailed information about the study through both written and oral means. The first author personally delivered the information in the classroom and also sent out information via email. Interested students who wished to participate were instructed to email the first author. A total of 17 female and 1 male student agreed to take part in the focus group interviews.
Data Collection
Two focus groups with nine participants in each group were conducted four weeks after the skills training session. At this point, the students had started their eight-week internship period. The focus groups utilized a semistructured interview guide (Table 2), which consisted of a few open-ended questions to facilitate discussions and exchange of opinions among the students (Kvale & Brinkmann, 2009; Malterud, 2012). If needed, follow-up questions were posed to ensure comprehensive coverage of the topic. The interview guide was developed in collaboration with several of the authors but was not pilot-tested prior to the study.
The focus group interviews were led by one moderator and one comoderator, both trained in conducting this type of interview. The interviewers were educators who were not directly involved in the operating nursing program, allowing the students to freely express their opinions. The interviews took place at the university college and had an approximate duration of one hour each. All the interviews were digitally recorded and subsequently transcribed verbatim by the first author, resulting in 41 pages of transcribed material in Microsoft Word format. Participant validation of the transcripts was not performed.
Institutional Review Board Approval
The study obtained ethical approval from the Norwegian Agency for Shared Services in Education and Research (no. 878323) and from the head of the department at the university college. Written consent forms were distributed to the students, and they were required to sign these forms before participating in the interviews. It was assured that participation or nonparticipation in the study would not have any impact on the students' future studies, and they had the right to withdraw from the study at any time without facing any negative consequences. The study followed ethical guidelines and standards outlined in the Declaration of Helsinki, including obtaining written informed consent, ensuring the right to withdraw, and guaranteeing anonymity for the participants (Shrestha & Dunn, 2019; The World Medical Association, 2022).
Data Analysis
The analysis of the data was conducted by three authors (LK, VT, and MHR) using the systematic text condensation (STC) method, as described by Malterud (2012), which draws inspiration from Amedeo Giorgi's psychological-phenomenological method (Malterud, 2017). Systematic text condensation, similar to phenomenology, aims to understand the subjective experiences of the participants as the basis for generating knowledge.
The analytic process consisted of four steps. In the first step, each researcher independently read through the entire dataset and identified initial themes by taking a broad, overall view of the material. In the second step, meaning units were identified, and the researchers organized these units into codes to create an analytical framework that allowed for potential nuances.
Coding involved systematically decontextualizing the text, extracting portions of the text from their original context, and grouping them with related elements in the light of theoretical perspectives (Malterud, 2012, 2017). In this step, the initial themes were refined through discussions among the authors, resulting in two main code groups. In the third phase, each code was further divided into subgroups, and illustrative quotations from the interviews were selected for each subgroup. To condense the data, artificial quotes (artifacts) were created to represent the content of the individual meanings within each subgroup.
In the fourth phase, the extracted elements were recontextualized by integrating them back into their original context and summarized in the form of interpretive syntheses. This process aimed to provide an overall understanding of the data and generate descriptive and conceptual insights that captured the essence of the participants' experiences. An example of this process is outlined in Table 3.
Throughout the analysis, the researchers engaged in ongoing discussions to ensure consensus and enhance the reliability and validity of the findings. By including excerpts and quotes from the interviews, the authors aimed to illustrate and support the main themes identified during the analysis.
Rigor
Ensuring trustworthiness is crucial in qualitative research to maintain reliability and validity (Jayasekara, 2012). Although the terms reliability and validity are the essential criteria for quality in quantitative paradigms, in qualitative paradigms the essential criteria are credibility, neutrality or confirmability, consistency or dependability, and applicability or transferability (Jayasekara, 2012).
In this study, several actions were taken to increase trustworthiness. Credibility was enhanced by recruiting participants who were capable of answering the research question effectively. Additionally, during the analysis process, coding of the data was discussed multiple times among three of the authors, and input from other researchers was sought in the final phase to ensure credibility of the findings (Polit & Beck, 2017). As the first author was an OR nurse educator, efforts were made to ensure that interpretations were derived from the data itself, further strengthening confirmability (Nowell et al., 2017). Dependability was ensured by thoroughly documenting the research process and using the same semistructured interview guide in both focus groups. This consistency reduced the potential for variation in data collection and analysis (Jayasekara, 2012). To enhance transferability, a comprehensive description of the participants, the setting, the digital learning path, the skills training program, and the research process was provided. The use of quotations from the participants and the discussion of findings in relation to other international studies also contributed to the trustworthiness of the study (Jayasekara, 2012). Reflexivity, which involves critically reflecting on one's role as a researcher and its influence on the research process, was addressed by the first author through written reflexivity notes. These notes described the choices and considerations made during the analysis process, providing transparency and an opportunity to identify potential preconceptions that could have impacted the analysis (Malterud, 2017). Collectively, these actions taken to enhance trustworthiness contribute to the overall rigor and quality of the study.
Participant Characteristics
The participants' mean age was 34.7 years. To apply for the postgraduate OR nursing education, registered nurses must have a minimum of 2 years' work experience. In this sample, the registered nurses had a mean of 8.6 years of work experience before attending the postgraduate OR nursing education (Table 4).
Research Question Results
From the analysis of the data, two main categories and five subgroups were identified (Table 5). These categories and subgroups provide a comprehensive understanding of the experiences of postgraduate OR nursing students. The many similarities and few differences between the two focus groups indicate that further data collection would likely not have yielded significantly different insights.
Main Category 1: Blended Learning Serves as Adequate Preparation for Internship. Subgroup 1: Various Learning Activities Help Develop the Foundation for Practical Skill Performance. The various learning activities in the digital learning path were perceived to give the OR nursing students a theoretical understanding of skills important for the field and were helpful as preparation before training basic practical skills. The focus on basic skills was perceived as helpful because the students had no previous experience in OR nursing or knowledge of the required competences. They described the content of the learning path as relevant and said that attention to basic skills was important for their professional development in mastering basic skills as OR nurses. The students emphasized that the theoretical topics in the digital learning path provided them with a deeper understanding when it came to training practical skills; thus, they could begin the training session at a higher taxonomic level. It was highlighted that the various learning activities in the learning path prepared them for the upcoming internship period.
The digital learning path prepared me for both skills training and practice. I was more prepared to master the basic skills I needed before the internship period. I read about the basic skills and saw them on videos; therefore, it was easier for me to accomplish these tasks when I had to do them myself. I now understood more clearly how and why those in the video perform the skills this way. (Student 5-FG1)

During the skills training, the students experienced being able to discuss their theoretical understanding with the educators and fellow students and thus gain a deeper understanding of the topic.
I think that the learning path was very preparatory for the skills training. Yes, because there were some tasks we were unsure about. However, we got more answers when we had the skills training. We were able to ask the educators questions along the way and discuss various issues. (Student 6-FG1)

Subgroup 2: Collaboration with Peers Promotes Enhanced Learning Through Social Interaction. The digital learning path was arranged for students to collaborate in group activities with fellow students, with the opportunity to discuss and solve tasks together. The groups themselves decided when and where they wanted to carry out the digital learning path. Working together in groups was deemed a positive experience because it allowed students the opportunity to organize collaborative work based on the needs of the whole group according to the requirements to be met. The students emphasized that collaboration and discussions about different topics and tasks promoted their learning outcomes. Working in groups helped to strengthen social cohesion and provide a community for the students. The students also found that collaboration in groups made them feel more responsible for participating and more responsible for each other's learning.
It was a very nice way to get to know each other and connect. Some know a little about one thing, and others know something else. And then we manage to figure things out together. We sort of take responsibility for each other. (Student 8-FG2)

Students for whom Norwegian was their second language expressed that collaboration with peers had an even greater positive consequence for them, as it led to a better understanding of the assignments. This implies that individual studying can involve more difficulties and possibilities of misunderstanding due to language problems.
It was nice to work together because when you sit alone and read, it's so easy to misunderstand, at least for me, who struggles with the language. (Student 2-FG1)

The students described how the organization of the digital learning path contributed to active learning. Student groups solved the tasks in the learning path slightly differently. Most of the groups solved all the tasks together, but some groups divided the tasks between them and only solved the quizzes together. Respondents from groups who organized the work in the latter way expressed that they probably would have learned more if they had worked together on solving the tasks.
My group worked on the tasks together on the first day, but later we divided the tasks between us because of the time pressure and other exams. But it wasn't as it was intended, with discussions and such. I learn more when we work together. (Student 2-FG1)

Students who worked closely with their peers stated that they became more active and involved themselves.
There were many discussions, even between groups at times, so everyone became very active. (Student 7-FG1)

I think those quizzes were smart because you had to know a bit about different subjects. We went through the quizzes together, but the fact that we had to physically answer them individually, I think, is an advantage, because then it wouldn't be just one person doing it in the group. (Student 8-FG2)

Subgroup 3: The Organization of the Curriculum Affects Students' Learning. Some students perceived the organization and structure of the course as suboptimal preparation for the upcoming internship period. They expressed that, in their experience, the basic skills training came too early in the semester and the skills were hard to recall when they started their internship period several weeks later. To keep the skills and procedures fresh in their minds, the students suggested that basic skills training should be organized closer to the internship period.
The learning path and skills training should have been conducted closer to practice; I had kind of forgotten everything when I started practice. (Student 9-FG2)

The students experienced that the organization of the semester, with several subjects that were not planned in relation to workload and exams, influenced their learning. They had to prioritize studying for the exam and had not actively used the blended learning opportunities to be prepared for the upcoming internship period. Furthermore, it emerged that the students thought there was too much focus on the achievement of individual technical skills compared to collaborative and nontechnical skills. They expressed a need for more training on how to behave in the operating theater, especially in areas such as handing over sterile equipment and how to behave and communicate in the operating theater. They expressed that this lack of experience posed a challenge in their introduction to the OR environment in the first internship period. The students asked for simulation pedagogy to obtain an impression of how they should behave in an OR context. They wanted several procedures and skills to be put together to prepare them for how an OR really works.
Maybe we could simulate after skills training so that we can obtain an understanding of what it is like in the operating room and not just perform one procedure at a time.

More attention to the use of surgical instruments and how these should be handed over to the surgeon in the OR was requested by the respondents. The basic skills training lacked information about which instruments were used for various procedures and operations. This was knowledge that the students expressed would be helpful for their first internship. This was also the skill that most students expressed fear of failing in during the upcoming internship period.
Subgroup 2: Small Group Sizes and Close Follow-up and Assessment from Educators Facilitate Learning. Students expressed that the organization of the basic skills training influenced learning. They expressed the importance of thorough educational assessments to facilitate learning. Skills training in large groups was experienced to give less learning than expected. An example given by the students related to training on techniques for sterile handwashing and disinfection. Only one sink was available for every 10 students, resulting in idle time while waiting for access to training. Thus, the students expressed that large groups were associated with fewer opportunities to practice skills and repetitive training, as well as less access to the educators. With large groups, the feedback provided by educators was impaired by the number of students and limited time.
I learned a lot when we were in small groups and could practice again and again. (Student 1-FG2)

The students highlighted that feedback and corrections from educators in skills training were important for effective learning. Without feedback, they became unsure whether they had performed the procedures correctly, and the exercise was perceived as less meaningful. One student said: No one could tell if I made a mistake; then, there was no point in practicing. Training practical skills in small groups was, however, experienced as conducive to learning. Advantages reported by the students were that small groups provided less noise, less waiting, and better opportunities for repetitive training. The students expressed that they preferred small groups and the presence of several educators, with the possibility of feedback. Skills training organized in this way can thus promote and improve learning compared to large groups.
Discussion
This study aimed to investigate the experiences of postgraduate OR nursing students who underwent a blended learning approach, which combines digital learning paths with skills training. The main objective was to explore how this approach prepares students to successfully attain specific learning outcomes during their initial internship period. The students emphasized that the use of the digital learning path with different educational methods promoted active and collaborative learning, helped students develop the foundation for practical skills, and served as good preparation for skills training. Learning together, using the peer-learning method in a digital learning environment, has been shown to have encouraging effects for enhancing students' knowledge, competence, satisfaction, and problem-solving skills (Josse-Eklund et al., 2023; Männistö et al., 2020). Our findings support the idea that digital technologies enable increased availability of knowledge and teaching resources and thus facilitate a learner-centric approach to teaching, adding value to learning processes (Meum et al., 2021). In addition, the students appreciated being able to conduct learning activities whenever it suited them. This is in accordance with more flexible solutions found in higher education, supported by several studies dealing with digital learning methods for students (Fossland, 2015; Meum et al., 2021). The various activities in the digital learning path were found to be useful both in preparation for students' skills training and as preparation before the upcoming internship period. Students emphasized that the videos and the opportunity to familiarize themselves with the theory provided them with a deeper understanding of the foundation for skills development, enabling them to start at a higher taxonomic level before skills training and the internship period. This is consistent with the findings from Fagerdahl et al. (2021), revealing that videos made the students feel more prepared and relaxed when attending the OR. The variety of tasks in the digital learning path, such as reading research articles, solving tasks, taking quizzes, and watching videos together in groups, contributed to a deeper understanding of why basic skills were carried out in the way they were. This is also in line with Smeby and Heggen (2014), who emphasized the importance of coherence in the educational program to integrate theory and practice. However, designing learning paths in a way that leads the students through different learning activities to achieve learning outcomes may be challenging for educators, as it requires time and skills to incorporate new technology that emerges continuously (Button et al., 2014; Hernon et al., 2023).
Both the digital learning path and the skills training program facilitated active and collaborative learning. Discussing and solving tasks together in the digital learning environment were experienced as positive and promoted learning. The students found that the discussions they had together yielded the most learning because they learned a lot from listening to fellow students' points of view. This can be seen in relation to the transformative learning theory, which emphasizes that the goal of adult education is to help individuals become more autonomous thinkers by learning to negotiate their own values, meanings, and purposes rather than to uncritically act on those of others (Mezirow, 1997). Critical reflection and participation in discourses thus become significant elements in higher education pedagogy (Mezirow, 1997).
In the present study, OR nursing students experienced that the learning path helped them to strengthen social cohesion. Through group work, they were able to acquaint themselves with their fellow students and receive support from each other, which resulted in a sense of community. Similarly, a systematic review of randomized controlled trials also highlighted that digital collaborative learning environments contributed to interaction skills and problem-solving skills, in addition to satisfaction and motivation for learning (Sormunen et al., 2022). Besides, another systematic review and meta-analysis found that peer-assisted learning benefited academic performance and clinical skills performance (Brierley et al., 2022). Social support is likelier to occur when social interaction is a dominant type of learning activity (Berings et al., 2008; Männistö et al., 2020). Groups that solved all the tasks and quizzes in the digital learning path together took more responsibility for each other's learning. This was particularly important for students whose second language was Norwegian, as group work helped them increase their understanding of the subjects. Previous studies have shown that second-language nursing students must learn three languages in parallel: Norwegian, academic concepts, and nursing terminology (Garone et al., 2020). In addition, second-language nursing students struggle with both oral and written assignments, the interpretation of assigned texts, and the systematization of content in their own texts (Amaro et al., 2006; Black & MacKenzie, 2008; Crawford & Candlin, 2013). These findings are important for nurse educators to consider when designing learning activities.
Groups that divided the tasks between them due to time constraints and only solved the quizzes together reported lower learning acquisition than those who fully collaborated through the digital learning path. However, earlier studies have shown that experiences with group work, such as free-riding, challenging group processes, group sizes, and types of tasks, impact satisfaction with group work (Chang & Kang, 2016). According to social cognitive learning theory, learning is achieved when students are active and interact with those around them (Locke, 1987). Educators in higher education have a responsibility to facilitate adult students' functioning as more autonomous and socially responsible thinkers. This requires communicative learning; therefore, group work may contribute to such learning (Mezirow, 1997).
Some of the students stated that the basic skills training program came too early in the semester, leading to low skill retention in relation to their upcoming internship period. In addition, the semester schedule contained several parallel subjects for the students to attend, with a perceived high total workload. The students then prioritized studying for their exams in other subjects and did not work on the digital learning path as much as planned. These findings show the major impact exams have on students' priorities. To prevent students themselves having to set such priorities, this finding calls for educators to cooperate on students' workload and exams when planning the academic year for various subjects (Kyndt et al., 2016). This is in line with constructive alignment theory, which specifies that educators should consider the overall organization of subjects to facilitate optimal learning conditions (Biggs et al., 2022).
Training in practical basic skills provided students with confidence, even if they occasionally failed. In accordance with deliberate practice, skills training with the opportunity for repetition and feedback allows students to experience mastery at a higher level than if they had no prior experience (Ericsson et al., 1993; Ericsson & Harwell, 2019). Deliberate practice suggests that organizing skills training around learning goals, rehearsal, and feedback loops is essential to develop highly skilled practitioners (Donoghue et al., 2021; Welch & Carter, 2018). In the present study, the students wanted more training on skills because such training gave them self-confidence. In addition, the students expressed that they wanted more training on how to behave in an OR environment. Handing over instruments to the surgeon was described as one of the skills most students were afraid of doing wrong, congruent with findings from Fagerdahl et al. (2021), who argued that stress and anxiety among students may inhibit learning. According to Benner (2010), complex skills must be learned in authentic clinical learning environments. Our students called for increased use of simulation-based learning to present them with learning opportunities in an environment that replicates an operating theater. The opportunity to apply technical and nontechnical skills in a realistic environment can prepare students for their professional roles as OR nurses. Research reveals that regular interprofessional simulation-based training to expand the repertoire of situations students are likely to encounter in clinical practice is in demand (Kaldheim et al., 2021). However, students want to start learning basic technical skills before moving into more advanced interprofessional training, seeking to build new knowledge on top of their existing knowledge (Kaldheim et al., 2021). For OR nursing education, it is important to be able to create coherence and help students recontextualize the knowledge they have developed at school to another learning arena, such as the operating theater (Smeby & Heggen, 2014).
Varying group sizes affected the learning outcomes of the skills training. In large groups with many students, there was a significant amount of waiting, fewer opportunities to practice, and weakened feedback from educators. Consequently, smaller groups were more conducive to learning and provided less noise, less waiting, and better opportunities to practice several times. Although this skills training program was not subject to examination or testing, some of the students expressed that there was no point in practicing if no one gave them feedback. This shows that feedback is a critical component of effective tutoring, as feedback facilitates student learning and performance improvement (Weallans et al., 2022). According to the study by Männistö et al. (2020), educators play a crucial role in providing feedback to strengthen students' self-confidence and mastery of skills. It is therefore important for education programs to have enough teachers present to be able to provide timely feedback.
Strengths and Limitations
No previous research exploring how postgraduate OR nursing students experience the use of digital learning paths combined with skills training was found, which strengthens the relevance of this study. However, some limitations must be addressed.
Various suggestions exist for optimal participant numbers in focus groups, but most authors suggest that an adequate group size involves 4-12 participants, with the optimal size being between 5 and 10 (Krueger & Casey, 2015; Morgan, 1996; Sim, 1998). The group must be large enough to provide comprehensive data, but small enough for everyone in the group to be heard. However, the focus group size may have contributed to follow-up questions not being asked that could have elaborated a deeper understanding of the participants' statements. The experience from the present study is that the data were rich and that all the participants were active. Nevertheless, there is a possibility that some students did not share everything they wanted because the groups were relatively large, with nine participants in each group. Jayasekara (2012) claims that the real strength of focus groups is the insights given into the sources of complex behaviors and motivations, in addition to the exchange of opinions by discussing both common and unique experiences (Malterud, 2012). This corresponds with the purpose of our study exploring OR nursing students' experiences with the use of digital learning paths combined with skills training. Leentjens and Levenson (2013) raised ethical issues about the recruitment and inclusion of students in university research projects because students may be required or coerced to participate, violating their privacy. In our study, the first author who invited the students to participate in the study was an OR nurse educator, which may have led to students feeling obliged to participate. However, the moderator and the co-moderator were educators from other disciplines; thus, the students were encouraged to speak more freely. The moderators had experience with digital learning paths but were not familiar with the OR context. This may have led to natural contextual follow-up questions being omitted.
Implications for Practice
The findings suggest that a blended learning approach may enhance students' ability to achieve defined learning outcomes. However, several important variables influencing the students' learning process must be addressed. The use of small groups with close follow-up by the educators during skills training was highlighted by the students as one important element that could increase students' learning outcomes. Besides, students called for more simulation-based training on how to behave in the operating theater in a professional manner. In addition, virtual reality may also offer an alternative clinical experience to physical simulation to increase the students' level of confidence (Sen et al., 2022; Siah et al., 2022). Furthermore, training in practical skills should occur closer to the internship period to facilitate the transfer of skills. Better cooperation between educators on students' workload and exams when planning the academic year for various subjects may also facilitate better learning outcomes for the students. Educators should take these factors into consideration when organizing education for OR nursing students.
Conclusion
Using digital learning paths and basic skills training as pedagogical tools to achieve defined learning outcomes can lead to more effective and deeper learning for postgraduate OR nursing students. However, there is a high demand for training in operating theater behavior before OR nursing students' internship periods. Supplementing skills training with simulation training in a realistic clinical environment may better prepare OR nursing students for internship periods. Feedback and corrections from educators, in addition to sufficient time and smaller groups, are essential elements of high-quality skills training. Educators must intensify collaboration within various subjects to plan the students' academic year with the aim of optimizing the students' learning outcomes. New technologies, such as virtual reality, may also contribute to students' clinical experiences in a future curriculum.
Everything was relevant and very useful to understanding how and why we should perform the skills. (Student 1-FG1)

I'm glad I had it before the skills training because then I could both see how it should be done and read in advance what the theoretical background was. (Student 2-FG1)

It allowed me to get to know the others better. I think it's good that I have someone else who supports me in a way. (Student 2-FG1)

Then we could talk about the tasks and hear what others think. I learn a lot from what other people think. (Student 2-FG2)

The exam took the focus away from skills training. It kind of came at the same time as we were preparing for exams in other subjects. (Student 7-FG2)

Main Category 2: The Importance of Skills Training and Behaving in an Operating Theater Context

Subgroup 1: Desire for Taxonomical Progression from Basic Technical Skills Training to Simulation Pedagogy to Develop Correct Behavior in the OR. Training on basic skills was described by the students as very useful. There was consensus among the respondents that they wanted more of such training. Although these were considered basic skills in OR nursing, the respondents expressed that all the technical skills were experienced as complicated. Skills training with the possibility of repetition gave them a sense of mastery at a higher level compared to having no practical training before the internship period.

The skill training made me more confident about basic procedures. Even though I failed a little, I experienced mastery. (Student 4-FG2)
Table 1. Learning Activities and Outcomes in the Digital Learning Path.

Semistructured interview guide questions: Can you describe how you experienced the digital learning path and skills training in preparation for the first internship period? Can you describe this in more detail? Can you elaborate on this? Can you give an example? Can you link this experience to a specific part of the learning path? Does anyone else want to add anything? Can you tell us about how you collaborated with other students during the digital learning path and skills training?

Table 3. Example of the Analytical Process.

Table 4. Description of the Participants.

Table 5. Main Categories and Subgroups.
Stroke risk in older British men: Comparing performance of stroke-specific and composite-CVD risk prediction tools
Stroke risk is currently estimated as part of the composite risk of cardiovascular disease (CVD). We investigated if composite-CVD risk prediction tools QRISK3 and Pooled Cohort Equations-PCE, derived from middle-aged adults, are as good as stroke-specific Framingham Stroke Risk Profile-FSRP and QStroke for capturing the true risk of stroke in older adults. External validation for 10y stroke outcomes was performed in men (60-79y) of the British Regional Heart Study. Discrimination and calibration were assessed in separate validation samples (FSRP n = 3762, QStroke n = 3376, QRISK3 n = 2669 and PCE n = 3047) with/without adjustment for competing risks. Sensitivity/specificity were examined using observed and clinically recommended thresholds. Performance of FSRP, QStroke and QRISK3 was further compared head-to-head in 2441 men free of a range of CVD, including across age-groups. Observed 10y risk (/1000PY) ranged from 6.8 (hard strokes) to 11 (strokes/transient ischemic attacks). All tools discriminated weakly, C-indices 0.63–0.66. FSRP and QStroke overestimated risk at higher predicted probabilities. QRISK3 and PCE showed reasonable calibration overall with minor mis-estimations across the risk range. Performance worsened on adjusting for competing non-stroke deaths. However, in men without CVD, QRISK3 displayed relatively better calibration for stroke events, even after adjustment for competing deaths, including in oldest men. All tools displayed similar sensitivity (63–73 %) and specificity (52–54 %) using observed risks as cut-offs. When QRISK3 and PCE were evaluated using thresholds for CVD prevention, sensitivity for stroke events was 99 %, with false positive rate 97 % suggesting existing intervention thresholds may need to be re-examined to reflect age-related stroke burden.
Introduction
Population ageing continues to be associated with rising burden of stroke (Feigin et al., 2021; Johnson et al., 2019). Current practice assesses stroke risk together with that of coronary heart disease (CHD) as the composite risk of cardiovascular disease (CVD), through tools such as QRISK3 in England (Hippisley-Cox et al., 2017), Pooled Cohort Equations in the US (PCE) (Goff et al., 2014a) and SCORE2 across Europe (Hageman et al., 2021). There are two main concerns with this approach. Firstly, evidence points to attenuation (Odden et al., 2014) and even reversal (Ahmadi et al., 2015) with increasing age of associations between traditional risk factors and CVD (van Bussel et al., 2020), including stroke (Lind et al., 2018). However, except for the recent SCORE2-OP (SCORE2-OP Working Group and ESC Cardiovascular Risk Collaboration, 2021), development samples for CVD risk tools have been predominantly middle-aged (Bambrick et al., 2016). Secondly, stroke and CHD have interrelated yet distinct pathophysiology. Literature suggests that the relative role and predictive power of conventional risk factors likely differs between heart disease and stroke (Endres et al., 2011; Giang et al., 2013; Syed et al., 2012). Moreover, underlying causes of stroke (Lindley, 2018) and the proportion of heart and circulatory diseases constituted by fatal stroke events (British Heart Foundation, 2022) change with ageing. There is little evidence on how well composite-CVD prediction rules, derived from mostly middle-aged adults, capture the true risk of stroke events in older adults.

Abbreviations: AF, atrial fibrillation; BRHS, British Regional Heart Study; CHD, coronary heart disease; CIF, cumulative incidence function; CPI, centred prognostic index; CVD, cardiovascular disease; FSRP, Framingham stroke risk profile; HF, heart failure; KM, Kaplan-Meier; MI, myocardial infarction; NICE, National Institute For Health And Care Excellence; PCE, pooled cohort equations; PI, prognostic index; SCORE, systematic coronary risk evaluation; Sn/Sp, percent sensitivity/percent specificity; TIA, transient ischemic attack.
Few stroke-specific risk tools have been validated in an older UK population. The Framingham Stroke Risk Profile (FSRP) (D'Agostino et al., 1994) overestimated risk in European older adults with only average discrimination, particularly among men (Bineau et al., 2009;Voko et al., 2004). QStroke developed later from UK primary care data (Hippisley-Cox et al., 2013) has not been independently validated in older British adults free of prevalent CVD.
To address these research gaps, we first externally validated two stroke-specific (FSRP and QStroke) and two composite-CVD (QRISK3 and PCE) risk tools for predicting the 10y risk of stroke outcomes in older men of the British Regional Heart Study (BRHS). Because competing causes of death in older cohorts can affect model performance (Livingstone et al., 2021; Nanna et al., 2020; Nguyen et al., 2020), we conducted our external validation with and without adjustment for competing non-stroke mortality. Second, we evaluated how the tools classified men with respect to stroke events using cut-offs based on observed risk and, for composite tools, clinically recommended thresholds for CVD intervention. Finally, we additionally assessed performance of risk tools head-to-head in a common subsample of men who at baseline were free of a wide range of cardiovascular conditions and not under specific CVD prevention treatments, to better inform primary prevention.
Methods
We follow the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) guidelines for reporting validation studies (Collins et al., 2015).
QStroke for men
Male patients (n = 1748108) aged 25-84y (mean 45y) registered on the QResearch primary care database over 1st Jan 1998 - 1st Aug 2012, without a history of stroke/TIA or anticoagulant use (Hippisley-Cox et al., 2013). The primary outcome was the first recorded diagnosis of stroke or TIA, excluding haemorrhagic stroke.
The BRHS validation sample
BRHS is a prospective study which began in 1978-1980 by recruiting a socially representative sample of 7735 men aged 40-59y, drawn at random from age-sex registers of 24 primary care practices across Britain (Walker et al., 2004). In 1998-2000 (baseline for this analysis), 4252 men aged 60-79y (mean 68y) participated in the 20y questionnaire-based, physical and clinical re-examination. Follow-up for incident fatal and non-fatal events is available to 2018 through national mortality records and 2-yearly primary care record reviews for 96 % of the participants. For external validations, men were followed from baseline to the first of stroke/TIA event or death, or a maximum of 10y to match the time horizon of the above-described risk tools. All participants provided written informed consent in accordance with the Declaration of Helsinki. Ethical approval was obtained from National Research Ethics Service Committee London - Central, Reference number: MREC/02/2/91.
Definitions of endpoints and predictors of all risk tools with corresponding BRHS measures are detailed in Supplementary Tables 1 and 2.
Statistical analysis
We first validated each tool for its respective stroke outcome using sub-samples of men selected per its eligibility criteria. We subsequently examined the FSRP, QStroke and QRISK3, that share ischemic strokes and TIAs as common outcomes, head-to-head in a further common subsample of men without a history of stroke, TIA, CHD (MI, angina, percutaneous transthoracic coronary angioplasty and, coronary artery bypass grafting), HF, AF, intermittent claudication and statin or anticoagulant use. We did not include PCE in this sub-analysis because TIAs are not part of its original outcome, which would lead to inherent miscalibration.
Missing data in validation samples ranged from 6 to 12 %, with minimal differences between men with/without complete information, especially with respect to outcome events (Supp. Tables 3A-E). We hence limited our analysis to complete cases. Validation samples fulfilled a minimum of 100 events as the criterion for sample size (Collins et al., 2016).
External validation was informed by guidelines from Royston and Altman (Royston and Altman, 2013) and Steyerberg (Steyerberg, 2019), and conducted in Stata 17.
The 4 risk tools model their predictors using (Cox) proportional hazards models. We calculated 10y predicted probabilities (P) of the outcome using

P = 1 - BaselineS(10)^exp(CPI)

where BaselineS(10) = published 10y baseline survivor function of the relevant risk tool; Prognostic Index (PI) = linear predictor calculated using published predictor coefficients and BRHS values of predictor variables; and CPI = PI centred using published means.
For composite-CVD tools, predicted probabilities were multiplied by the proportion of all events that were stroke/TIA, 0.366 for QRISK3 (calculated from published data (Hippisley-Cox et al., 2017)), or hard strokes, 0.289 for PCE (requested from authors) as analysed elsewhere (D'Agostino et al., 2008;Majed et al., 2013).
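As an illustration of this calculation, the minimal Python sketch below implements the risk equation (the paper's analyses were run in Stata). The predictor names, coefficients, means, and baseline survival value are placeholders for illustration only, not the published parameters of any tool.

```python
import numpy as np

def predicted_10y_risk(x, coefs, means, baseline_s10, stroke_fraction=1.0):
    """10y predicted probability from a published Cox model.

    x, coefs, means: dicts keyed by predictor name (hypothetical names here;
    real values must come from each tool's publication).
    baseline_s10: published 10y baseline survivor function, BaselineS(10).
    stroke_fraction: for composite-CVD tools, the proportion of events that
    are strokes (0.366 for QRISK3, 0.289 for PCE per the text).
    """
    # Centred prognostic index: sum of coefficient * (value - published mean)
    cpi = sum(coefs[k] * (x[k] - means[k]) for k in coefs)
    # Cox prediction: risk = 1 - S0(10) ** exp(CPI)
    risk = 1.0 - baseline_s10 ** np.exp(cpi)
    return risk * stroke_fraction

# Hypothetical predictor values and coefficients (placeholders only)
coefs = {"age": 0.05, "sbp": 0.012, "smoker": 0.45}
means = {"age": 68.0, "sbp": 148.0, "smoker": 0.15}
man = {"age": 74.0, "sbp": 162.0, "smoker": 1.0}
print(predicted_10y_risk(man, coefs, means, baseline_s10=0.93, stroke_fraction=0.366))
```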
Discrimination refers to how well a model separates participants who go on to have an event from those that don't (Royston and Altman, 2013). We assessed this using Harrell's C index [95 %CI] (somersd package), which can range from 0.5 (as good as chance) to 1 (perfect discrimination). We also visually inspected separation of Kaplan-Meier (KM) survival curves of 4 risk groups according to 16th 50th and 84th centiles of the PI (Royston and Altman, 2013).
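For readers wanting to reproduce this kind of discrimination check outside Stata's somersd package, the following sketch computes Harrell's C-index with the Python lifelines library on simulated data; all variables are hypothetical stand-ins for the BRHS measures.

```python
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n = 1000
pi = rng.normal(size=n)                        # prognostic index (higher = riskier)
time = rng.exponential(scale=12.0, size=n) / np.exp(0.5 * pi)  # riskier men fail sooner
event = rng.random(n) < 0.4                    # 1 = stroke/TIA observed, 0 = censored

# concordance_index treats higher scores as predicting longer survival,
# so the risk score is negated before being passed in
c = concordance_index(time, -pi, event_observed=event)
print(f"Harrell's C-index: {c:.3f}")
```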
Calibration refers to the accuracy of a model's predictions, i.e. how closely predicted probabilities agree with observed probabilities, overall and at various levels of predicted risk (Royston and Altman, 2013). We used the beta coefficient of the CPI as a single predictor in a Cox proportional hazards model to measure the calibration slope [95 %CI], indicating overfitting where the slope is < 1 and underfitting where the slope is > 1 (Van Calster et al., 2016). We assessed mean calibration (calibration in the large) as the ratio of global mean predicted risk to observed risk (KM method), where a ratio greater than/less than 1 indicates global over-/under-estimation. We assessed moderate calibration (Van Calster et al., 2016) by comparing KM observed risk at 10y with mean predicted risk in deciles of predicted risk (pmcalplot package (Ensor et al., 2018)), and additionally across 4 age-groups (≤65, >65-≤70, >70-≤75 and > 75 years) in the common subsample.
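A minimal Python sketch of the calibration slope and mean calibration described above, again on simulated data (the decile-based plots from Stata's pmcalplot are not reproduced here):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

rng = np.random.default_rng(1)
n = 1000
cpi = rng.normal(scale=0.7, size=n)                      # centred prognostic index
time = np.minimum(rng.exponential(scale=25.0, size=n) / np.exp(cpi), 12.0)
event = rng.random(n) < 0.5                              # 1 = event, 0 = censored
pred10 = 1 - 0.93 ** np.exp(cpi)                         # 10y predicted probabilities

df = pd.DataFrame({"time": time, "event": event, "cpi": cpi})

# Calibration slope: coefficient of the CPI as the sole covariate in a Cox model
# (slope < 1 suggests overfitting, slope > 1 underfitting)
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
slope = cph.params_["cpi"]

# Mean calibration: global mean predicted risk / KM observed 10y risk
km = KaplanMeierFitter().fit(time, event)
mean_calibration = pred10.mean() / (1 - km.predict(10.0))
print(f"calibration slope {slope:.2f}, mean calibration {mean_calibration:.2f}")
```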
We examined sensitivity/specificity (Sn/Sp%) of the tools using the stroccurve package to account for censoring (Cattaneo et al., 2017), at a threshold corresponding to the overall KM observed risk and at conventional clinical thresholds for composite-CVD risk, i.e. 10 % for QRISK3 (National Institute for Health and Care Excellence, 2014) and 7.5 % for PCE (Arnett et al., 2019; Goff et al., 2014a).
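The stroccurve estimator itself is Stata-specific. As a rough stand-in, the sketch below computes censoring-adjusted sensitivity and specificity at a 10y horizon using inverse-probability-of-censoring weights on simulated data; it is an illustrative approximation, not the exact method of Cattaneo et al.

```python
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(2)
n = 3000
risk = rng.beta(2, 12, size=n) * 0.5                 # hypothetical 10y predicted risks
t_event = rng.exponential(scale=30.0, size=n) / (0.5 + 6 * risk)
t_cens = np.minimum(rng.exponential(scale=20.0, size=n), 12.0)
time = np.minimum(t_event, t_cens)
event = t_event <= t_cens

t, cutoff = 10.0, 0.068                              # horizon and observed-risk cut-off

# KM of the censoring distribution supplies inverse-probability-of-censoring weights
g = KaplanMeierFitter().fit(time, (~event).astype(int))

cases = event & (time <= t)
controls = time > t
w = 1.0 / g.predict(time[cases]).values              # reweight the observed cases

sens = np.sum(w * (risk[cases] >= cutoff)) / np.sum(w)
spec = np.mean(risk[controls] < cutoff)              # identical control weights cancel
print(f"Sn {sens:.0%}, Sp {spec:.0%}")
```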
Sensitivity analyses accounting for competing non-stroke mortality were run as described by Wolbers et al. (Wolbers et al., 2009). Further details are available in the supplementary methods.
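To illustrate why competing non-stroke deaths matter, the sketch below contrasts the naive 1 - Kaplan-Meier estimate of 10y stroke risk with the Aalen-Johansen cumulative incidence on simulated data. This mirrors the KM-versus-CIF comparisons reported in the Results, though the paper's own adjustment follows Wolbers et al.

```python
import numpy as np
from lifelines import KaplanMeierFitter, AalenJohansenFitter

rng = np.random.default_rng(3)
n = 3000
t_stroke = rng.exponential(scale=45.0, size=n)   # latent time to stroke/TIA
t_death = rng.exponential(scale=22.0, size=n)    # latent time to non-stroke death
t_cens = rng.uniform(8.0, 12.0, size=n)          # random censoring

time = np.minimum.reduce([t_stroke, t_death, t_cens])
# 0 = censored, 1 = stroke/TIA, 2 = competing non-stroke death
status = np.select([time == t_stroke, time == t_death], [1, 2], default=0)

# 1 - KM treats competing deaths as censoring and overstates 10y stroke risk
km = KaplanMeierFitter().fit(time, status == 1)
km_risk = 1 - km.predict(10.0)

# Aalen-Johansen cumulative incidence accounts for the competing deaths
ajf = AalenJohansenFitter().fit(time, status, event_of_interest=1)
cd = ajf.cumulative_density_
cif_10y = cd.loc[cd.index <= 10.0].iloc[-1, 0]   # CIF of stroke at 10y
print(f"1 - KM: {km_risk:.3f}  CIF: {cif_10y:.3f}")
```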
Results
Comparisons of baseline and performance characteristics between the BRHS validation sample and development sample of each tool are given in Supp. Tables 4-7. Overall BRHS men had a mean age of 68y, and a median follow-up of 10y. A greater percentage of BRHS men were on blood pressure treatment and fewer of them were current smokers. Information on Townsend scores, valvular heart disease (except that indicated by use of anticoagulants and so excluded), systemic lupus erythematosus and mental illness was not available in BRHS. Table 1 provides summary performance indicators of each validation. There was no violation of proportional hazards over time.
Table 1. External validation of stroke-specific and composite-CVD risk tools in older men of the BRHS.

For PCE, mean calibration was 1.12. Predicted probabilities followed the KM failure function closely, with slight overestimation in intermediate deciles (Fig. 1d). Using the KM 6.77 % cut-off, PCE had Sn/Sp 73/52 %.
Using the National Institute for Health and Care Excellence (10 % (National Institute for Health and Care Excellence, 2014)) and American College of Cardiology/American Heart Association (7.5 % (Arnett et al., 2019; Goff et al., 2014a)) CVD intervention thresholds to respectively categorise men as high or low risk based on QRISK3 and PCE composite-CVD probabilities (prior to correction for stroke outcomes) gave 99 % sensitivity for the respective stroke events, with specificity 2-3 %, indicating a very high false positive rate. Examining higher cut-offs (Supp. Table 8) improved specificity and positive predictive values at the expense of sensitivity, but negative predictive values remained high.
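The trade-off described here can be made concrete with a small threshold sweep. The sketch below is purely illustrative: it ignores censoring and uses simulated risks and outcomes, not BRHS data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
risk = rng.beta(2, 6, size=n)            # hypothetical predicted composite-CVD risks
outcome = rng.random(n) < 0.6 * risk     # simulated 10y stroke outcomes

for cut in (0.075, 0.10, 0.20, 0.30):
    flag = risk >= cut                   # classified high risk at this cut-off
    sn = outcome[flag].sum() / outcome.sum()
    sp = ((~flag) & (~outcome)).sum() / (~outcome).sum()
    ppv = outcome[flag].mean()
    npv = (~outcome[~flag]).mean()
    print(f"cut {cut:.3f}: Sn {sn:.2f} Sp {sp:.2f} PPV {ppv:.2f} NPV {npv:.2f}")
```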
Sensitivity analyses adjusting for competing risks
Adjustment for non-stroke deaths generally worsened discrimination and calibration of all tools (Table 1). Calibration slope deviated further below 1, and when estimated with respect to the cumulative incidence function (CIF) of events, mean calibration showed slightly increased global over-prediction. In decile-based plots, overestimation was exaggerated (Fig. 1a-d, grey diamonds and dashed line graphs).
Performance of FSRP, QStroke and QRISK3 on a common CVD-free sample
There were 2441 men (mean age 68y) experiencing 113 ischemic strokes and 83 TIA events over 10y (Supp. Tables 9-10). QStroke had a higher C-index, 0.6584 [0.6220-0.6949], than FSRP and QRISK3; however, confidence intervals for all tools overlapped, and KM survival curves indicated similar discrimination across the 3 tools, with FSRP and QStroke discriminating less between low- and intermediate-risk groups (Supp. Fig. 2).
QRISK3 showed better mean (Supp. Table 10) and decile-based calibration (Fig. 2a-c). Relative overestimation by FSRP and QStroke was more evident in higher deciles, particularly with respect to CIF. The tools showed similar Sn/Sp when examined using KM risk and CIF cut-offs. On comparing predicted risks of the three tools according to deciles of their averaged risk, agreement was evident for nearly all except the highest deciles (Fig. 3).
In analysis by age-groups, the gap between CIF and KM risk progressively widened with age (Supp. Table 11, Fig. 4); for men > 75y, the CIF was 4 % lower than KM risk. Both FSRP and QStroke overestimated risks up to 75y. For men > 75y, mean risk predicted by FSRP was lower than the KM estimate but higher than the CIF, while that predicted by QStroke was similar to KM risk and 3 % higher than CIF. QRISK3 predictions were also higher relative to KM and CIF risks, but the difference became smaller with age. In men > 75y, QRISK3 underestimated risk markedly in comparison to the KM estimate, but mean prediction was more aligned with the CIF.
Discussion
With more adults reaching old age, it is necessary to employ the right tool for assessing absolute stroke risk. We investigated how well composite-CVD risk prediction tools QRISK3 and PCE; and stroke-specific FSRP and QStroke captured the true risk of stroke events in older men. Bearing in mind the slightly different stroke outcomes of these tools, we discuss implications of three main findings.
Firstly, both types of tools discriminated only modestly, with discrimination falling further on adjustment for competing risks.
Secondly, stroke-specific FSRP and QStroke tended to overestimate risk at higher predicted probabilities while composite-CVD tools QRISK3 and PCE showed better global calibration with minor mis-estimations across the range of risk. Calibration generally worsened when nonstroke deaths were accounted for. However, in men > 70y without a broad range of cardio/cerebro/vascular conditions, QRISK3 showed better calibration despite adjustment for competing deaths.
Finally, all tools displayed similar sensitivity (63-73 %) and specificity (52-54 %) using validation sample-based observed risk as cut-offs. However, when QRISK3 and PCE were evaluated using risk thresholds recommended for primary prevention of CVD, both falsely categorised a large proportion of men as high risk for stroke events.
Stroke discrimination in older adults needs improvement
The low discrimination of these tools is somewhat expected because of the less heterogeneous case-mix of BRHS men (Steyerberg, 2019), particularly with regard to age, the main driver of risk. Comparing stroke-specific scores, predictors like body mass index, cholesterol:HDL ratio, family history of CHD and chronic kidney disease, not part of the FSRP model, help QStroke discriminate ischemic strokes and TIAs marginally better. However, the same additional variables do not seem to improve discrimination as part of QRISK3, which was developed to predict composite risk. This suggests that newer markers being explored for improving CVD risk stratification in older adults should be tested for competing-risks-adjusted stroke-specific prediction. Coronary artery calcium is one such biomarker which has shown promise for improving risk stratification of CHD but not similarly for stroke (Yano et al., 2017). Until new evidence translates into guidelines, clinical judgement on the use of blood biomarkers associated with stroke risk (Folsom et al., 2013), such as natriuretic peptides which reflect subclinical cardiac dysfunction, and vascular imaging to capture atherosclerotic burden, may be helpful on a case-by-case basis (Bambrick et al., 2016).
In older men risk prediction by composite-CVD tools is comparable to if not better than by stroke-specific tools
It is suggested that predicted risk, and hence calibration, may be more important for clinical decisions than discrimination (Cook, 2007), especially in older populations whose risk distribution is narrower. In men overall, FSRP tended to over-predict risk of stroke/TIA at higher probabilities but estimated risk well across low-mid deciles. This could be because, although BRHS men were similar in age to FSRP men, with similar mean PI (D'Agostino et al., 1994; Wolf et al., 1991), they were more frequently using blood pressure medication and fewer of them smoked. Calibration was also good for QStroke, possibly due to its having been developed from a UK population more contemporaneous to this BRHS sample, except for over-estimation in the highest risk group. In comparison, QRISK3 and even PCE, developed using North American cohorts, displayed slight misestimation through the low-intermediate risk range.
Considering intervention decisions are often made at intermediate risk ranges, the over-prediction by stroke-specific tools in higher deciles may not be of consequence (Nguyen et al., 2020). However, adjustment for non-stroke mortality as a competing event worsened calibration. And in general, CIF diverged more from KM risk with increasing predicted risks. This magnified overestimation which also became apparent at lower predicted risks.
Additionally, both predicted risks and competing mortality increase with ageing (Wolbers et al., 2009). Accordingly, when comparing prediction of ischemic strokes and TIAs in men without CVD, overestimation relative to CIF by stroke specific tools was clearly evident in successive age groups. Yet, QRISK3 showed better calibration in those > 70y. Interestingly, this contrasts with recent findings regarding the effect of non-CVD mortality on the performance of QRISK3 with respect to a composite outcome (Livingstone et al., 2021). We acknowledge our sample is much smaller in comparison but draw attention to the possibility that predictions of composite-CVD and individual components may be operating differently in older populations.
The effect any of this has on clinical utility would depend on the cut-off for intervention. There are no agreed thresholds for stroke risk alone. When we examined the 8 % CIF in CVD-free men (solid blue cut-off, Figs. 2 and 4), FSRP appeared more likely than QStroke and QRISK3 to misclassify low-risk men (intermediate deciles) as being at high risk, although classification across age bands was comparable.
So, what does this mean for current clinical practice?
Older adults with clinically manifest CHD, HF, arrhythmias and intermittent claudication are generally in receipt of preventive therapies to reduce future CVD events, including strokes. Hence the need for risk stratification becomes more relevant for those ageing without a history of these conditions in whom clinical decisions on interventions are a challenge. In this context, composite-CVD tools like QRISK3 and PCE, developed to aid primary prevention using cohorts that exclude most CVD conditions appear more appropriate for stroke risk prediction. QRISK3 has been recommended for use in adults up to 84y (National Institute for Health and Care Excellence, 2014), and PCE up to 75y (Arnett et al., 2019). While we show that these tools may be reasonably well calibrated for stroke events beyond midlife, using established CVD intervention thresholds of QRISK3 and PCE in older men results in excellent sensitivity but a very high false positive rate for stroke outcomes. This suggests that men may be considered eligible for interventions that they don't need/benefit from.
Arguably, some of these men may be at high risk of CHD, when a pharmaceutical intervention such as a statin is justified. However, strokes comprise an increasing proportion of first CVD events with increasing age (British Heart Foundation, 2022), and the benefit of statins for the primary prevention of stroke in older adults is debatable for a number of reasons (Saeed, 2020; Shepherd et al., 2002; Volpe and Patrono, 2021). In BRHS, the fraction of hard CVD events occurring over 20y that were strokes increased from 30 % when men were followed from a baseline age of 50 to 70y, to 50 % when followed from 70 to 90y (data not shown).
Moreover, similarly poor specificity has been observed even for broader CVD outcomes when evaluating 7.5 % PCE risk in 66-75y participants of the Framingham Offspring Study, indicating the need for selecting intervention thresholds based on age (Navar-Boggan et al., 2015). Revised European guidelines on CVD prevention take this into consideration and recommend age-specific thresholds (Carballo et al., 2022).
The importance of context in applying risk tools has also been highlighted elsewhere (Gulati et al., 2022;Shah et al., 2022). The implication is that for apparently healthy adults 60y and older, composite-CVD risk models can be used for stroke risk prediction but perhaps need to be (1) updated to reflect their stroke risk more closely; and (2) re-evaluated to ascertain thresholds appropriate for increasing age (Nanna et al., 2020), including for stroke specific work-up/ interventions besides statins. Until then, clinicians should be aware of the potential of misclassification and the ever-continuing need for patient discussions on risk enhancers/modifiers and shared decision making.
Limitations
There are some key limitations to our analyses. First, although haemorrhagic strokes have been excluded from models predicting ischemic strokes and TIAs based on mortality records and validation of primary care data, we cannot be sure that this captured all cases of cerebrovascular bleeds, as BRHS linkage to hospital episodes is still in progress. However, because of their higher mortality, it is likely that this would be a small number. Second, TIAs have been based on primary care reports according to clinical, time-based criteria. This may have included TIA mimics. However, TIAs present less frequently to hospital; even within the QRISK development data, the majority of TIAs were identified only through primary care records (Hippisley-Cox et al., 2017).
Third, the two Q-models have predictors some of which were not available in BRHS. These include Townsend scores and type 1 diabetes. However, the alternative index of multiple deprivation used in BRHS was not associated with strokes/TIAs in the sample, and based on self-reported use of insulin only up to 35 men could potentially be type 1 diabetic. BRHS also did not have echocardiographic measures nor direct inquiry on valvular heart disease, but some of these men may have already been excluded by proxy use of anticoagulants per the QStroke model. Systemic lupus erythematosus and mental illness could not be determined, and other predictors like steroid use and erectile dysfunction were reported by few men, so it is unclear to what extent they would have contributed to performance regarding stroke. Others have pointed out, though, that complex models do not necessarily have an advantage over simpler ones (Dziopa et al., 2022). This was in fact indicated here by PCE, which, based on a handful of core predictors, discriminated somewhat better than the other tools, perhaps because PCE predicts a more definite outcome of hard strokes only.
We also acknowledge that some comparisons between model performances are based on subjective observation of calibration plots, but (non-test based) visual judgement on calibration to determine the better model is widely used (Collins and Altman, 2012;Schneider et al., 2022;Yourman et al., 2012).
Still, this comparison of four risk tools with regards to stroke prediction has been conducted in a reasonably large sample of older men with near complete 10y follow up. Our findings, particularly those relating to CVD-free men are worth verifying in a larger, multi-ethnic, mixed-gender primary prevention cohort.
Conclusion
In older British men, both stroke-specific and composite-CVD risk tools discriminate stroke risk weakly. Non-stroke deaths influence accuracy of predicted risks, but intervention thresholds determine if competing events are strong enough to limit use of tools. In those without a history of CVD or statins, QRISK3 remains relatively well calibrated for stroke events. However, existing models and/or thresholds should be re-examined to reflect proportional stroke burden in older adults.
Sources of Funding
AA is funded by UK Medical Research Council Doctoral Training Programme (MR/N013867/1). SPP by UK Medical Research Council Career Development Award (MR/P020372/1). The BRHS is funded by a British Heart Foundation grant (RG/19/4/34452).
The funding bodies had no role in conception, analysis or reporting of this validation work.
Data Availability
Data supporting the findings of this study are available from the study manager (Ms L Lennon; l.lennon@ucl.ac.uk) upon reasonable request.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Partnership with a Theater Company to Amplify Voices of Underrepresented-in-Medicine Students
Medical education has a long history of discriminatory practices. Because of the hierarchy inherent in medical education, underrepresented-in-medicine (URiM) students are particularly vulnerable to discrimination and often feel they have limited recourse to respond without repercussions. URiM student leaders at a USA medical school needed their peers, faculty, and administration to know the institutional racism and other forms of discrimination they regularly experienced. The students wanted to share first-person narratives of their experiences; however, they feared retribution. This paper describes how the medical students partnered with a theater company that applied elements of verbatim theater to anonymously present student narratives and engage their medical school community around issues of racism and discrimination. The post-presentation survey showed that the preponderance of respondents increased their understanding of URiM student experiences, desired to engage in conversation about inclusion, equity, and diversity, and wanted to make the medical school more inclusive and equitable. Responses from students showed a largely positive effect from sharing stories. First-person narratives can challenge discriminatory practices and generate dialogue surrounding the experiences of URiM medical students. Authors of the first-person narratives may have a sense of empowerment and liberation from sharing their stories. The application of verbatim theater provides students the safety of anonymity, thereby mitigating fears of retribution.
INTRODUCTION
The field of medicine has a long history of discriminatory practices toward racial and ethnic minorities, women, and members of the LGBTQ+ community. 1 Because of the inherent hierarchy in medical education, medical students are particularly vulnerable to discriminatory practices and may feel they have limited recourse to respond to discrimination. 2 Underrepresented-in-medicine (URiM) students experience "death by a thousand cuts," often with the perception that they are alone to shoulder and overcome injurious behavior inflicted by peers, faculty, and administrators.
I. Social Impetus and Desire for Change

Spring 2020 saw the confluence of three social exigencies in the United States: the disproportionate burden of the COVID-19 pandemic on people of color; 3 widespread awareness of racist police brutality; 4 and resurgence of demands for equity within medicine by the White Coats for Black Lives organization. 5 URiM student leaders at the Frank H. Netter MD School of Medicine at Quinnipiac University, CT, USA, felt compelled to awaken their medical school community to the bias and discrimination they faced regularly. The URiM student leaders (15 people representing the Student National Medical Association, Latinx Medical Student Association, Netter Pride Alliance, Asian Pacific American Medical Student Association, Student Government Association, and White Coats for Black Lives) met regularly with the medical school dean and associate deans to address issues of culture and institutional racism. They needed their peers, faculty, and administration to know that institutional racism and other forms of discrimination were present in the school of medicine, despite vision and mission statements that prioritize equity and inclusion. To that end, in addition to advocating for policy changes, the URiM student leaders wanted to share personal stories from their classmates with the hope that the narratives had the power to instigate change for the better at the school of medicine. They wanted to be heard and seen and to have their perceptions recognized and valued; yet, they could not shake the fear of retribution if they were truly honest about their experiences. Therefore, the students concluded that anonymous storytelling was the safest approach.
II. Foundational Deliberations and Partnership
To anonymize their stories, the students first had to deliberate on two foundational questions. One: how to account for a plurality of opinions and make decisions? Two: how to speak about their personal experiences and maintain anonymity?
The 15 URiM student leaders elected three students (GH, VS, SD) to organize the event and imbued the three organizers with decision-making capacity. The three students, named the Crossroads Organizers, arrived at decisions by consensus.
With the aim of maintaining student anonymity, a faculty member (AW) suggested the students investigate using a theater company to present their stories, the premise being that the actors would provide anonymous cover for the students while speaking the students' words. Having a script comprised exclusively of storytellers' words is a foundational technique of verbatim theater. 6 The students decided the Crossroads Organizers would meet theater company representatives to seek assurance that their stories would be presented with respect and appropriate representation. Squeaky Wheelz Productions 7 is a theatrical production company specializing in giving voice to the stories of minoritized individuals.
III. Recruit Story Authors
The Crossroads Organizers used email and social media platforms such as Facebook, GroupMe, and Instagram to invite medical students to author stories. Authors were given three weeks to submit their stories via an anonymous survey drop on Google Forms, thus assuring that no one, including the Crossroads Organizers, knew the identity of the story authors.
IV. Work with the Theater Company
The Crossroads Organizers met with the director of the theater company (AMW) multiple times to discuss logistics for the production. The director guided the medical students to refine their goals for the audience and authors (see Theater Company Process) and establish the timeline and task list to arrive at a finished product promptly. Initially, the Crossroads Organizers thought it would be a good idea to have the authors and actors meet to discuss their specific stories and roles. However, after further discussion, they decided that meeting would compromise the anonymity of the student authors and might discourage them from coming forward. Instead, they let the authors know they had the option of meeting their actor. In the Google Forms survey, the Crossroads Organizers provided the opportunity for authors to list specific demographic characteristics of the actor they wanted to portray their story. For example, a Latinx author could choose to have a Latinx actor portray their story.
V. Theater Company Process
The Squeaky Wheelz actors and director met to discuss how best to use their artistic skills to serve the students' goals. Given that the collaboration occurred amid the COVID-19 pandemic, health, safety, technology, and geographic parameters informed the creative decisions. Actors were recorded individually, and the footage was then edited to create one cohesive piece. Using video meant that in addition to the artistic choices about casting, tone, pacing, and style (which are elements of an in-person event), there were also choices about editing, sound, and mise-en-scène ("putting in the scene," or what is seen on screen).
The Squeaky Wheelz director collaborated with the Crossroads Organizers on the project goals for their audience and colleagues. For example, the theater company encouraged the Crossroads Organizers to consider questions such as: Do they want to tell the viewer what to feel? If a student author identified their race or ethnicity in their story, should casting reflect that as well? Is there anything they would like the audience to know in a disclaimer, or should the stories stand on their own?
Consequent to these discussions, the theater company decided not to include music or sound (often used in film to dictate emotion), to cast actors of the same race or ethnicity when the author included such identifiers, and to create an introduction for the piece. The introduction stated: "The stories you are about to hear are the true, lived experiences of students in this program, read by actors. Students submitted these words anonymously. We, the actors, ask you to listen." The Crossroads Organizers expressed that their goal was to share the stories authentically and to make clear that these were real experiences, not fictional accounts performed by actors. To serve these goals artistically in the mise-en-scène, each actor was filmed in front of a plain white wall, in a medium close-up, holding a piece of white paper in the bottom corner of the frame from which they read the story. The actors looked straight into the camera for most of their reading, occasionally glancing down at the paper to indicate visually that the words belonged to someone else.
Actors were directed to "read the words," not "perform the story": to communicate the words simply and clearly rather than projecting an assumed emotionality behind the story. This choice was made for two reasons: one, the stories were submitted anonymously, and an assumed emotion might have been untrue to the author's intention; two, without projected emotionality, the audience had permission to feel and think for themselves in response.
All the actors worked independently to prepare for their virtual shoot dates. They were also available if any student authors wanted to meet about their personal stories. [One author chose to meet with their actor.] The final production, comprising 16 student stories, was entitled Netter Crossroads: A Discourse on Race, Gender, Sexuality, and Class.
VI. Finding the Audience
Knowing the unique features and importance of the video as a tool to increase awareness of institutional racism and discrimination within the school community, the Crossroads Organizers aimed to secure as large a viewing audience as possible. To that end, they sought and obtained approval from the dean of the school of medicine to show the video during the Annual State of the School Address, which historically is delivered on the first day of classes and attracts a sizable cross-section of students, faculty, and staff.
The Crossroads Organizers asked the dean and associate deans to make event attendance mandatory to engage as many students and faculty as possible in active reflection about discrimination, racial inequality, and social injustice within the medical education community. The deans agreed to make attendance mandatory for first-year medical students and to strongly encourage all other students, faculty, and staff to attend.
The Annual State of the School Address is typically an in-person event. Because of the COVID-19 pandemic, all university events had to be hosted virtually on Zoom. The Crossroads Organizers valued the real-time shared experience of viewing the video as a community, so they decided to divide the video into four short segments. The shorter length increased the likelihood that the video's audio and visual quality would not be compromised during the virtual event. Between the video segments, the Crossroads Organizers presented national data about underrepresentation in medicine.
VII. Attendee Feedback
The 2020 State of the School event had 279 attendees who watched the video, Netter Crossroads: A Discourse on Race, Gender, Sexuality, and Class, in real-time. The audience comprised medical students, medical school faculty, staff, and administrators, and university administrators. A four-question Likert-scale survey with an open-response field, disseminated after the event, indicated that the vast preponderance of attendees were favorably impressed by the Crossroads video (see Table 1). Approximately 84 percent (67/80) of respondents strongly agreed or agreed that their understanding of URiM student experiences had increased based on the presentation. Approximately 82 percent (66/80) strongly agreed or agreed that the Crossroads presentation effectively conveyed the challenges of URiM students. Seventy percent (56/80) strongly agreed or agreed that they were more inclined to engage in conversation about inclusion, equity, and diversity since seeing the Crossroads presentation. Approximately 77 percent (62/80) strongly agreed or agreed that since seeing the Crossroads presentation, they wanted to learn more about how to help make the medical school more inclusive and equitable. Not all open responses were favorable: "I fear that welcoming new students virtually to our school by sharing stories of bias, racism, and sexism at our own institution may have left them feeling even more isolated and insecure." "We need less of these presentations."
VIII. Student Author Survey
After the assembly, the Crossroads Organizers posted announcements on their social media sites inviting the student authors to respond to two queries about their experiences of sharing their stories. Since the student authors were anonymous, even to the Crossroads Organizers, they could not be queried directly. Instead, the posted announcement asked for open responses to the following questions: a) How did writing and sharing your personal experience at Netter make you feel? b) How did viewing your story portrayed by actors during the State of the School Address make you feel?
Six of the sixteen student authors submitted their responses anonymously via a Google Forms survey drop. The authors indicated a range of feelings about writing and sharing their personal experiences. Several authors expressed appreciation for the process and its psychotherapeutic effects. Two authors expressed concern about how their stories would be received.
a) What could you do to prevent this scenario from ever even occurring?
b) Now that it has occurred, how will you support this student?
c) What structural changes and/or policies need to be in place for corrective action to be effective?
d) If the scenario in the video happened to you, what would you do next?
e) Why would you make that choice?
f) Alternatively, if you witnessed this happen to a student or faculty member, what would you do?
g) Why would you make that choice?
h) What are the potential personal and professional consequences of your choice?
In alignment with the published literature, our small sample of student author respondents experienced positive therapeutic effects from the process of writing and sharing their stories. 8 At the same time, seeing other authors' stories of discrimination portrayed by actors ignited anger and sadness for some of our students as they recognized the depth of trauma within the community.
Partnership with a theater company provides students the safety of anonymity when telling their stories, thereby allaying their fears of retribution. While some student authors maintained a sense of vulnerability despite the anonymity, they also expressed a sense of empowerment, hopefulness, and pride.
Medical educators and administrators must take bold steps to address institutional racism in a meaningful way. Health humanities, including theater, can help the medical education community recognize and overcome the harms imposed on URiM students by institutional racism and other forms of discrimination and awaken capacity for compassionate, respectful, relationship-based education.