id | source | version | text | added | created | metadata
|---|---|---|---|---|---|---|
252710529 | pes2o/s2orc | v3-fos-license | Severity of malnutrition among underweight children and family‐related factors: A cross‐sectional analysis of data from the 2019 Ethiopian Demographic and Health Survey (EDHS)
Abstract Background and Aims Malnutrition is one of the key factors in children's inappropriate physical and mental development. It is a significant issue that results in the deaths of 3.5 million children under the age of 5 every year worldwide. This study's primary goal was to pinpoint important family-related causes of underweight child malnutrition in Ethiopia. Methods The data were gathered from the Central Statistical Agency's 2019 Ethiopian Demographic and Health Survey. Data were examined using descriptive statistics and an ordinal logistic regression model after the sample was chosen using a stratified, two-stage cluster sampling approach. Results Overall, 6101 underweight children were involved in the study, of which 5019 (82.27%) were severely underweight, 28 (0.46%) were moderately underweight, and 1054 (17.28%) were mildly underweight. The results showed that birth order (p < 0.001), partner's education (p < 0.001), partner's occupation (p < 0.001), and type of place of residence (p < 0.001) were associated with child malnutrition; in addition, children from the poorest families (p = 0.01, adjusted odds ratio [AOR]: 0.745, CI: −0.534, −0.056), children of partners without work (p = 0.169, AOR: 1.855, CI: −0.262, 1.498), and female children (p < 0.001, AOR: 0.793, CI: −0.369, −0.093) were severely malnourished. Conclusions The combined wealth index, sex, and region have a statistically significant effect on the severity of malnutrition. Female children were highly malnourished. Children in the Amhara, Afar, and Tigray regions were highly affected by severe malnutrition relative to other regions. Hence, the government is advised to take action on child nutrition in these areas, as this is a public health issue.
| BACKGROUND
Child malnutrition, which includes both under- and over-nutrition, is a global health issue. The three primary anthropometric indicators (AIs) used to measure child nutritional status are stunting, wasting, and underweight. 1 Underweight is a direct indicator of both chronic and acute malnutrition since it reflects both low height-for-age and low weight-for-age. It has been estimated that under-nutrition, which is this study's main emphasis, is the underlying cause of around half of all child fatalities globally. Because of this, malnutrition is a serious public health and development issue not only in developing nations but globally. 2 Malnutrition is a significant health issue that kills 3.5 million children worldwide each year. While child malnutrition has generally decreased since the 1990s, it has been rising throughout Africa during that time. More than 25% (143 million) of under-5 children in developing countries are malnourished. 2 Although malnutrition has decreased rapidly over the last two decades, it is still a major public health problem, particularly in underdeveloped countries. 3 Stunting and wasting afflict 159 million and 50 million children worldwide, respectively, according to research derived from data collected from 2000 to 2019. One in every 13 children worldwide was wasted, and malnutrition is to blame for one-third of all infant deaths.
Malnutrition in children has an impact on their academic performance and physical and mental growth over the course of their lives, and it is an indicator of the economic and health status of a nation. Children who are smaller at birth and children whose mothers are thin (with a BMI < 18.5) are more prone than their counterparts to be stunted, wasted, or underweight. 4 In Sub-Saharan Africa (SSA), Ethiopia has the second-highest rate of malnutrition. According to the 2005 Ethiopia Demographic and Health Survey (EDHS), roughly 47% and 11% of Ethiopian children under the age of 5 were stunted and wasted, respectively. Additionally, 38% of them were underweight, and 11% of them were severely underweight. 5 Five years later, in the 2011 EDHS, underweight had decreased from 38% to 29%, while severe underweight had declined slightly from 11% to 9%. Among the regional states of Ethiopia, the highest underweight prevalence was observed in the Amhara region, at 33.4%.
Across age cohorts, children between the ages of 24 and 35 months have the largest percentage of underweight (34%) and those under 6 months the lowest (10%). 6 Ethiopia has made very little progress in recent years in lowering the frequency of child malnutrition. Between 1995 and 2000, the rate of wasting increased slightly, from 9.2% to 9.6%. However, throughout the same period, the percentage of severely wasted children decreased by 47.1% (from 3.4% to 1.8%). A significant portion of this reduction can be attributed to changes in rural regions, where severe wasting was cut in half. When the data are broken down by gender, it is clear that boys did better in 1995-1996 in terms of both wasting and severe wasting, whereas girls did better in 1999-2000.
Anthropometric measures, typically weight and length (or height), are used to assess children's health as a function of growth.
Understanding the causes of malnutrition is essential to combating it.
Both empirical and qualitative investigations of the determinants of child malnutrition have been carried out in Ethiopia using binary logistic regression. However, the ordinal nature of the severity of underweight child malnutrition has been overlooked. Therefore, the primary goal of this study was to use the ordinal logistic regression model to discover the determinant variables of underweight child malnutrition in Ethiopia.
| Study population and sampling design
The study used the 2019 EDHS national data, which included complete and tenable anthropometric measurements for 6101 (weighted) children. 7 Each region was stratified into urban and rural areas as part of the 2019 EDHS sample selection process, which produced data on the characteristics of 6101 children. 7 The weight-for-age anthropometric index is a very accurate general measure of the nutritional health of a community.
| Variables
The children's nutritional status was assessed using weight-for-age Z-scores and divided into three severity groups: severe, moderate, and mild. The following factors are regarded as independent variables in the study: birth order, husband's education level, husband's/partner's employment, region, number of household members, combined wealth index, and child's sex.
| Statistical analysis
We conducted a descriptive analysis of the valid data using frequency distributions, percentages, and a chi-square test of association. To identify risk factors for the severity of malnutrition among Ethiopian children, the ordinal logistic regression model, or cumulative logit model, logit(P(Y ≤ j)) = log[P(Y ≤ j)/(1 − P(Y ≤ j))], j = 1, 2, …, J − 1, where Y is the dependent variable, was used. 8 SPSS version 26 was used for data administration, cleaning, and analysis. When testing hypotheses to find associations, differences, and correlations, variables were recoded into the intended categories, and p < 0.05 was used to determine significance.
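As an illustration of this modeling step, the following is a minimal sketch of fitting a proportional-odds (cumulative logit) model in Python with statsmodels; the study itself used SPSS 26, and the data frame and column names below are hypothetical stand-ins for the EDHS variables, not the actual extract.

```python
# Minimal proportional-odds (cumulative logit) fit with statsmodels.
# The data frame and column names are illustrative only.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.DataFrame({
    "severity": pd.Categorical(
        ["mild", "moderate", "severe", "severe", "mild", "moderate"],
        categories=["mild", "moderate", "severe"], ordered=True),
    "female":  [1, 0, 1, 1, 0, 0],   # child's sex
    "poorest": [0, 1, 1, 0, 0, 1],   # wealth index: poorest quintile
})

model = OrderedModel(df["severity"], df[["female", "poorest"]], distr="logit")
res = model.fit(method="bfgs", disp=False)
print(res.summary())                  # coefficients on the cumulative-logit scale
```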
| RESULTS
The prevalence of underweight child malnutrition was examined and related risk variables were identified using descriptive and ordinal logistic regression techniques, respectively.
In this study, a total of 6101 malnourished children from two city administrations and nine regional states were taken into account. The regional distribution varied, led by Harari (77.9%) and SNNPR (77%).
At a 5% level of significance, the combined wealth index, place of residence, child's sex, region, partner's employment, birth order, and number of household members all had a statistically significant relationship with child malnutrition (Table 2).
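The association screening described above can be reproduced with a chi-square test of independence; a small SciPy sketch follows. The marginal severity totals match the abstract (1054 mild, 28 moderate, 5019 severe), but the urban/rural split shown here is invented for illustration, not taken from Table 2.

```python
# Chi-square test of association between one factor (e.g., residence)
# and the three severity levels; the urban/rural counts are illustrative.
from scipy.stats import chi2_contingency

table = [[120, 10, 400],    # urban: mild / moderate / severe
         [934, 18, 4619]]   # rural: mild / moderate / severe

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")  # significant if p < 0.05
```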
| Ordinal logistic regression
For the ordinal logistic regression model, the test of parallel lines yielded a p-value > 0.99, which is higher than the 5% level of significance. This indicates that the proportional odds assumption holds for the model (Table 3).
According to the results of the multiple logistic regression analysis, the severity of child underweight malnutrition in Ethiopia is significantly influenced by the sex of the child, region, partner's employment, and wealth index (Tables 4 and 5).
| DISCUSSION
This study attempted to determine the risk variables for severity of malnutrition in children using 2019 EDHS data. Ordinal logistic regression analysis was used to conduct the study. The child's sex, region, partner's occupation, and combined wealth index were found to be statistically significantly associated with the severity of child malnutrition.
According to the study, the family wealth index was highly correlated with how severely malnourished an Ethiopian child was.
Children from low-income families were more likely to have severe malnutrition than those from high-income families. This result is congruent with research conducted in the East Gojjam Zone of northwest Ethiopia. 9 Similar results were documented by Umesh et al. in 2020, whose findings indicated household food access as a significant predictor of child malnutrition. 10 In this study, contrary to other studies, females were more exposed to severe malnutrition than males.
The results of this study showed that region is a significant predictor of child malnutrition in Ethiopia. Children from the Amhara and Afar regions were more malnourished. The results were in line with those of a study conducted in Ethiopia by Teklie et al., which found that children residing in the Afar, Oromia, and Somali areas had a 32%, 33%, and 60% lower chance, respectively, of being underweight.
CONFLICT OF INTEREST
The authors declare no conflict of interest.
DATA AVAILABILITY STATEMENT
Following formal online registration and submission of the project title and description, the data were requested from the CSA website and obtained. Access to the data is available at http://www.statsethiopia.gov.et/.
ETHICS STATEMENT
The CSA authority upholds a variety of ethical guidelines and practices for the survey and also obtains informed consent from survey participants before data collection. Additionally, we have received permission from the CSA to use the data via the DHS website. Therefore, no other institution's ethical approval or participant agreement is required for the study. | 2022-10-05T15:03:23.561Z | 2022-10-03T00:00:00.000 | {
"year": 2022,
"sha1": "bfd6fcfb1372336d6ddc940ee2b8c35694fd6b14",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "8aa7784364af1ed4f36ac478ad5235e95bc599a7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
265222088 | pes2o/s2orc | v3-fos-license | Study on maintenance of eyeball morphology by foldable capsular vitreous body in severe ocular trauma
Objectives To explore the feasibility and safety of using a foldable capsular vitreous body (FCVB) in managing severe ocular trauma and silicone oil-dependent eyes. Methodology This is a retrospective study of 61 ocular trauma patients (61 eyes) who presented to the Department of Eye Emergency, Hebei Eye Hospital from May 1, 2018, to May 31, 2019, including 51 male patients (51 eyes) and 10 female patients (10 eyes) with an average age of 44.98 ± 14.60 years. The oldest patient was 75 years old, and the youngest was 8 years old. These cases comprised 51 eyes with severe eyeball rupture and 10 eyes with severe, complicated ocular trauma that became silicone oil-dependent after the operation. These patients received FCVB implants, and data regarding their visual acuity, intraocular pressure, changes in eye axis, cornea, retina, and FCVB state were recorded after the operation. Results In all patients, the FCVB was properly positioned and provided good support to the retina. All 61 patients completed a follow-up window of 1–36 months with no reports of significant changes in their visual acuity. Among the patients, 91.8% had normal intraocular pressure, the retinal reattachment rate reached 100%, and the eyeball atrophy control rate reached 100%. There were no reports of rupture of the FCVB, allergies to silicone, intraocular infection, intraocular hemorrhage, silicone oil emulsification, or sympathetic ophthalmia. Conclusions Foldable capsular vitreous bodies (FCVBs), designed to mimic natural vitreous bodies, are suitable as long-term ocular implants that can provide sustained support for the retina without the need for any special postoperative postures. Their barrier function may effectively prolong the retention time of the tamponade and prevent various complications caused by direct contact of the eye tissues with the tamponade.
Introduction
Ocular trauma is one of the main causes of visual impairment [1]. Severe cases of ocular trauma are generally associated with poor prognoses [2]. The most common causes of severe ocular trauma include injuries by impact, sharp objects, and explosions. In most cases, the posterior segments of the eyeball are involved. Severe trauma manifests in conditions including ruptured globes, intraocular foreign bodies, ocular penetration injuries, posttraumatic endophthalmitis, etc. [1,2]. Severe ocular trauma poses an intractable problem to ophthalmologists, especially when all patients hope to salvage the involved globes.
Currently, the validated management method for posttraumatic retinal detachment is vitrectomy combined with the implantation of a vitreous substitute tamponade [3]. The main tamponades include air, perfluoropropane (C3F8), silicone oil, and perfluorocarbon liquid, with C3F8 and silicone oil being the most commonly used. Patients receiving a tamponade made with C3F8 or silicone oil are required to rest in a certain prone position after the operation. C3F8 is an inert gas that is expansible, which means that even a slight excess of gas administered during the procedure may lead to complications such as secondary glaucoma and central retinal artery occlusion. Silicone oil is a kind of polysiloxane with organic side chains [4]. It has adequate viscosity and surface tension and limited expansibility. Although it can effectively seal retinal breaks, it is associated with the risk of serious complications over time (e.g., cataracts, glaucoma, and silicone oil emulsification). Patients are exposed to the risk of developing various complications, including silicone oil-dependent eyes after vitreoretinal surgery, when they have ruptured globes with large wounds and loss of ocular contents [5-7]. In the most serious cases, patients may experience eyeball atrophy, which eventually requires eyeball removal or prosthetic eye installation.
Foldable capsular vitreous bodies (FCVBs), as a novel vitreous substitute tamponade, offer a promising new option for ophthalmologists. The FCVB is a kind of medical rubber product suitable for long-term tamponade. It provides stable support for the retina, which helps avoid the intraocular toxicity caused by direct contact between the tamponade and the intraocular tissues; it reduces the chances of developing complications caused by vitreous substitutes while restoring the shape-maintenance and support functions of the vitreous body of the globe, eliminating the patient's need for enucleation [5,8,9].
The FCVB has been validated via in vivo experiments [10] and has been widely applied clinically [5,9,11,12]. Recently, Hashem Abu Serhan et al. provided the first systematic review of studies reporting on FCVB implantation and found that FCVB implantation has been used in the management of various complicated ocular conditions, including severe ocular trauma and silicone oil-dependent eyes. When compared to silicone oil (SO), FCVB showed good visual outcomes, fewer IOP fluctuations, and a good safety profile [13]. In this retrospective study, we evaluated the feasibility of FCVB in the severe case setting via a systematic review of 61 cases of FCVB procedures performed in our hospital during 2018-2019.
FCVB Material
The FCVB used in these procedures was developed by the State Key Laboratory of Ophthalmology under the Sun Yat-Sen Ophthalmology Center of Sun Yat-Sen University and co-developed and produced by Guangzhou Vesber Biotechnology. The product received the Chinese medical device registration certificate for clinical use in China on July 27, 2017. This product was designed in the shape of a vitreous cavity and came with a drainage tube and a pressure-adjustable drainage valve, similar to the design of a glaucoma valve (Fig. 1).
Study subjects
We systematically reviewed 61 patients who received FCVB implants within one year (5/1/2018-5/31/2019) after FCVB was introduced at the Hebei Eye Hospital; the sample represented 61 eyes (51 males with 51 eyes; 10 females with 10 eyes). The average age of the patients was 44.98 (± 14.60) years, with the oldest patient being 75 years old and the youngest 8 years old. There were 32 eyes with no light perception, 51 eyes with serious rupture, and 10 eyes with serious, complicated trauma that became silicone oil-dependent after the operation (Table 1). Patients with severely ruptured globes underwent one-stage suturing and vitrectomy combined with FCVB implantation 9-28 days (mean: 14.22 ± 2.66 days) after the injury. The patients with silicone oil-dependent eyes had their silicone oil tamponade removed to release their intraocular proliferation and deformation before receiving FCVB implantation. There were 43 eyes with ciliary body detachment and 30 eyes with aniridia or iris defects. Before the implantation procedures, the patients and their families were informed about the operation and implants in detail and signed informed consent forms. This clinical trial was conducted following the Declaration of Helsinki and was approved by the Ethics Committee of Hebei Eye Hospital (Ethics Approval Number: HBSYKYYLL2018-02). All patients signed informed consent forms. The inclusion criteria were patients with severe retinal detachment that could not be managed with simple or heavy silicone oil tamponade, including one of the following conditions: (1) severely ruptured globes, with retinal or choroidal defects; (2) large posterior scleral dehiscence, accompanied by choroidal or retinal detachments, which could not be repaired; (3) severe ocular trauma with recurrent retinal detachment after silicone oil tamponade; (4) retinal detachment with stiffening degeneration, which could not be treated with silicone oil alone; (5) axial length of 16-25 mm. Exclusion criteria were: (1) patients with a known allergy to silica gel or with a keloid-prone constitution; (2) patients with severe ocular inflammation; (3) patients with a transparent crystalline lens in the eye scheduled for operation; (4) patients with a visual acuity of 0.4 or under in the fellow eye; (5) patients with a fellow eye with an intraocular surgical history; (6) patients with severe systemic diseases (e.g., diseases involving the cardiovascular, respiratory, digestive, nervous, endocrine, or urogenital systems).
Ophthalmology examination
All patients underwent routine preoperative and postoperative examinations at their follow-up clinical visits, which included visual acuity inspection using the international standard visual acuity chart, slit lamp examination, anterior segment endoscopic inspection, intraocular pressure measurements, AS-OCT, B-scan ocular ultrasound, UBM inspections, corneal endothelial measurements, and AS photography and color fundus photography examinations.
Operation procedures
All patients underwent vitrectomy. Once the vitreomacular traction and proliferative membranes were removed during the operation, the retina's morphology recovered and was ready for FCVB implantation in the next stage. The capsule was checked for airtightness before it was aspirated to a vacuum state, folded, and loaded into an injector. An intraocular perfusion incision approximately 3.5 mm from the limbus was constructed and prolonged, and the capsule was then pushed into the eye under perfusion under the operator's direct vision. With the lens surface of the capsule positioned upward, the capsule was then fully expanded by injecting silicone oil via the drainage valve until the retina was well supported. The scleral incision was then sutured, and the drainage tube was ligated and sutured to the sclera (Fig. 2).
Follow-up indicators
The postoperative follow-up ranged from 1 to 36 months, with 61 eyes followed up at 6 months, 61 eyes at 12 months, 57 eyes at 24 months, and 52 eyes at 36 months after surgery. The patients were followed for their visual acuity, intraocular pressure, the state of their cornea and retina, and the position of their FCVB. They were also observed for any ocular inflammation, abnormal intraocular hemorrhage, and/or sympathetic ophthalmia for the safety evaluation of the FCVB implants.
Statistical analysis
Because the poor visual acuity of patients with severe complex ocular trauma cannot be expressed by ordinary visual acuity scale values, acuity was graded and scored according to the best visual acuity: 0 for no light perception, 1 for light perception, 2 for hand motion, 3 for finger counting, and 4 for measurable acuity. In this study, visual acuity, intraocular pressure, and corneal endothelial count were compared between preoperative and final postoperative follow-up scores using the Wilcoxon signed-rank test. Axial length values were expressed as the mean ± standard deviation, and the Wilcoxon signed-rank test was used for the comparison of the affected eye with the healthy eye and for the comparison of the preoperative and postoperative final follow-up scores of the affected eye. The statistical analysis was performed in GraphPad version 9.0 (GraphPad Software, San Diego, CA), and a P value of < 0.05 was considered statistically significant.
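A minimal sketch of the paired comparison described above, using SciPy's Wilcoxon signed-rank test on hypothetical graded scores (the study itself used GraphPad 9.0, and the values below are illustrative, not the study data):

```python
# Paired pre- vs post-operative comparison with the Wilcoxon signed-rank
# test; the graded visual-acuity scores are illustrative only.
from scipy.stats import wilcoxon

pre  = [0, 0, 1, 1, 2, 3, 0, 1, 2, 0]   # graded scores before surgery
post = [0, 1, 1, 3, 2, 3, 0, 1, 3, 1]   # graded scores at final follow-up

stat, p = wilcoxon(pre, post)            # zero differences are dropped by default
print(f"W = {stat}, p = {p:.4f}")        # difference significant if p < 0.05
```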
Operation results
All 61 patients (61 eyes) enrolled in this study received successful FCVB implantation. During the operation, severe retinal detachment with giant retinal tears was observed in all patients. The remaining retina was restored. Thirty-two eyes received retinal photocoagulation. A V-13.5P FCVB was selected for the operation, and the injection volume of silicone oil was 2.2-4.0 mL (mean: 3.22 ± 0.5) (Table 1).
Changes in visual acuity
Before the operation, 32 affected eyes had no light perception, 19 eyes had light perception, 6 eyes had hand motion vision, and 4 eyes had finger-counting vision. At the last follow-up, 24 eyes had no light perception, 19 eyes had light perception, 9 eyes had hand motion vision, and 6 eyes had finger-counting vision. The difference between the final follow-up visual acuity score and the preoperative visual acuity score was not statistically significant (P = 0.7658).
Changes in intraocular pressure
Before the operation, 49 eyes had unmeasurable intraocular pressure due to corneal opacity; of these, the intraocular pressure of 3 eyes was estimated to be T-2 by finger palpation, 27 eyes were estimated to be T-1, and 19 eyes were estimated to be Tn. The remaining 12 eyes had intraocular pressure in the range of 6-16 mmHg. At the last follow-up, the intraocular pressure of 25 eyes remained unmeasurable because of corneal opacity, among which 7 eyes were estimated to have T-1 level intraocular pressure and 18 eyes were estimated to have Tn. The remaining 27 eyes had IOPs in the range of 10-21 mmHg. The IOP score at the final follow-up was higher than the preoperative IOP score, and the difference was statistically significant (P < 0.0001).
Corneal changes
Before the operation, 12 eyes had clear or basically clear corneas, 30 eyes had corneas with localized opacity and measurable corneal endothelial cell counts, and 19 eyes had visible corneal edema or corneal blood staining with unmeasurable corneal endothelial cell counts. The 42 eyes with measurable counts had corneal endothelial cell densities of 775-2845 cells/mm². At the last follow-up, the corneal endothelial cell counts of 22 eyes remained unmeasurable due to corneal opacity. The remaining 30 eyes had corneal endothelial counts of 557-2078 cells/mm². The endothelial count score at the final follow-up was lower than the preoperative score, and the difference was statistically significant (P < 0.05). There were no patients with corneal loss at the time of final follow-up, but corneal clouding was worse than before surgery in 32 patients, including 11 with total corneal white clouding, 6 with cosmetic corneal contact lenses, and 5 with thin prosthetic lenses after conjunctival masking (all patients under 45 years of age with high cosmetic requirements) (Fig. 3A).
Axis changes
Before the operation, the axial length of the affected eyes was 22.71 ± 1.69 mm, and the axial length of the healthy eyes was 23.38 ± 0.9 mm. The mean value of the affected eyes was slightly lower than that of the healthy eyes, and the difference was statistically significant (P < 0.001). At the last follow-up, the axial length of the affected eyes was 23.31 ± 0.86 mm, which was improved compared with the preoperative value, and the difference between the two was statistically significant (P < 0.05). There were 11 patients whose axial lengths were at least 2 mm shorter than the reference range before the operation; the difference was as great as 8.68 mm in the worst case. Their intraocular pressures were evidently below the normal range, which indicated globe atrophy. The intraocular pressures of these 11 patients after the operation were higher than before, between 10 and 21 mmHg, and their ocular axes were extended.
Fundus retinal reattachment
The reattachment status of the patients' eyes was assessed according to the results of binocular indirect ophthalmoscopy, B-scan ocular ultrasound, and color fundus photography. There was complete retinal restoration and no recurrence at the last follow-up (Fig. 3B).
Status of FCVB after operation
The slit lamp examination showed that the FCVBs were in the correct position. Only 12 patients developed hyperplastic membranes in the anterior capsule. Orbital CT showed complete eye rings and a full globe. B-scans showed that the capsules had homogeneous echoes and a complete shape. UBM revealed normal depths in the patients' anterior chambers, the anterior membrane of the capsule had normal reflection, and the FCVBs were in contact with but did not compress the ciliary bodies (Fig. 3C-E).
Patient safety evaluation
No broken capsules or silicone allergies were reported in the patients. Except for a few cases of mild to moderate conjunctival hyperemia and aqueous flares, no obvious ocular inflammation occurred in the patients during the follow-ups. No intraocular hemorrhage, silicone oil emulsification, or sympathetic ophthalmia was found in the patients at the time of final follow-up.
Discussion
Severe ocular trauma is one of the important causes of uniocular sight loss worldwide [14]. Globally, there are 180,000 such patients, with approximately 33,000-50,000 of them children [15]. In the management of ocular trauma, one of the fundamental issues facing ophthalmologists worldwide is restoring structural integrity in time to salvage the damaged globe.
In this study, we explored the application of foldable capsular vitreous bodies in the management of severe ocular trauma. We found no evident changes in visual acuity in the 61 patients reviewed after FCVB implantation, which suggested that even though the FCVB could provide continued support to the retina and hold it in its normal anatomical position, it could not reverse the damage to the ocular tissues, especially damage to the optic nerve and retinal posterior poles, and thus could not restore the visual function of the eyes involved. This further demonstrated that the FCVB serves only to restore the normal shape of the globe without restoring or improving visual acuity, consistent with previous findings [16]. In addition, some patients may experience aggravated corneal opacities after implantation or develop hyperplastic membranes around the capsule, both of which may compromise their vision.
Intraocular pressure is an important factor in maintaining the homeostasis of the eye. Severe ocular trauma may result in damage to the iris and the ciliary body or loss of ocular contents due to a ruptured globe, affecting intraocular pressure [17]. In our research, we found that the intraocular pressure of 91.8% of patients remained in the normal range after the operation. Their UBM images showed that the FCVB did not compress the ciliary body and that the function of the ciliary body remained unaffected. This illustrated that the FCVB can maintain the shape of the posterior chamber to allow the aqueous humor circulation to recover slowly until the ciliary body function is restored on its own. However, further investigations are needed to determine whether the supporting function of the FCVB would be sufficient to maintain the intraocular pressure of patients with severe ciliary body defects and avoid low intraocular pressure caused by decreased aqueous humor secretion. Trauma to the eye may lead to injuries to various ocular structures, including the cornea, one of the most frequently damaged sites. Repeated operations after trauma also account for the loss of corneal endothelial cells [14,18]. We found that the patients' corneal endothelial cell counts had decreased after the operation, but no patients showed corneal endothelial decompensation by their last follow-up clinical visit. During the follow-up window, 52.5% of patients experienced aggravated corneal opacity, with 11 of them reaching the state of corneal leukoma and losing light perception. The cause of corneal opacity may be severe damage to the corneal endothelium caused by large corneal and/or corneal limbal wounds. Localized corneal opacity or severe corneal edema occurred after the first-stage suturing. It was speculated that the implantation of the FCVB may have had detrimental effects on the metabolism of nutrients in the aqueous humor, leading to an insufficient corneal nutrient supply and, consequently, postoperative corneal opacity and even corneal leukoma. Due to the short follow-up window, further exploration is required to determine whether FCVB may lead to bullous keratopathy.
Severely ruptured globes are often accompanied by grave damage to the ciliary body, retina, and choroid. During procedures to manage ruptured globes, it is often observed that the affected eyes have giant retinal tears or proliferative contractile cellular membranes that are difficult to flatten, detached choroids that cannot be reset, or persistent low intraocular pressure after the operation. All these factors make it difficult to remove the silicone oil from the eye after the operation, which leads to the development of silicone oil-dependent eyes [19]. Long-term exposure to silicone oil may lead to complications, including intraocular toxicity and silicone oil emulsification. Regular replacement of silicone oil is needed, which in turn eventually inflicts band keratopathy and/or globe atrophy, resulting in inevitable enucleation [20]. FCVBs have excellent mechanical and optical properties and biocompatibility with human eyes. They are designed to mimic the vitreous cavity. During the implantation procedure, the capsule is injected into the vitreous cavity and inflated by the injection of silicone oil. Afterward, the inflated capsule can effectively support the maintenance of the morphology of the globe and the intraocular pressure. No special postoperative position is needed. Since the silicone oil in the capsule is not in direct contact with the aqueous humor, the silicone oil is unlikely to be emulsified [5,21]. Among the patients reviewed in this study, the retinal reattachment rate after FCVB implantation was 100%; this was higher than the 73-89.6% reattachment rate observed for post-traumatic retinal detachment treated with vitrectomy combined with inert gas or silicone oil tamponade in previous studies [22-24].
Ruptured globes are threatening conditions that in the worst cases may lead to structural disorder of the ocular tissues, massive leakage of eye contents, and eventually globe atrophy and even enucleation [25]. In our research, we identified 11 patients who had severely ruptured globes with giant tears and massive loss of eye contents. Their eyes showed obvious dents before the operation, and B-scans showed clear patterns of globe atrophy. After FCVB implantation, their intraocular pressure returned to the normal range, the axial length of their eyes extended, the globe returned to a full shape, and atrophy was controlled, which negated the need for eye removal. The control rate of globe atrophy in this study was 100%, and none of the 61 patients had eyeball enucleation, which is significantly below the post-trauma enucleation rate of 11.8-41.8% in studies outside of China [26]. Although 11 patients underwent conjunctival patching due to corneal leukoma, their eyeballs remained in good shape, and they had undergone the procedure for cosmetic reasons.
When a rupture is not managed with care in time, it can easily cause endophthalmitis in the affected eye and even lead to sympathetic ophthalmia in the healthy eye [25]. In our research, there was no report of rupture of the FCVB, allergies to silicone, intraocular infection, intraocular hemorrhage, silicone oil emulsification, or sympathetic ophthalmia. This fully demonstrates the safety of this technique and its contribution to alleviating patients' suffering and improving their postoperative quality of life.
In summary, by reviewing cases of the application of FCVB in complex, refractory vitreoretinal diseases such as retinal detachment caused by trauma, we have found evidence to support its safety and efficacy in maintaining the morphology and intraocular pressure of post-traumatic eyeballs while eliminating the need for special postoperative positions and avoiding complications such as secondary glaucoma, band keratopathy, or the displacement of the silicone oil tamponade to other tissues. This procedure can effectively salvage damaged eyes and negate the need for enucleation, which would inflict inevitable psychological and physical damage on the patient. However, we have yet to verify the longest duration that FCVBs can safely stay in the eyes, which will be the subject of further investigation.
Fig. 2
Fig. 2 Steps of FCVB implantation. (a) Scleral incision construction: a 5 mm incision at the limbus followed by a 4-5 mm straight incision, with one 1 mm lateral incision on each side to form an I-shaped incision. (b) Push the capsule into the eye via a syringe. (c) Inject silicone oil. (d) Palpate the scleral pressure. (e) Adjust the capsule position. (f) Flush out the blood in the eye. (g) Fix the drainage tube. (h) Suture the subconjunctival tissue and bulbar conjunctiva.
Fig. 3
Fig. 3 Images before and after FCVB implantation. (a) Condition of a cornea with inferior limbal rupture before/after the operation. (b) Fundus photography after FCVB implantation. (c) Orbital CT images before and after FCVB implantation. (d) B-scan images before and after FCVB implantation. (e) UBM images before and after FCVB implantation.
Table 1
FCVB size selection, volume of silicone oil and basic information for 61 cases | 2023-11-17T14:17:08.202Z | 2023-11-16T00:00:00.000 | {
"year": 2023,
"sha1": "73389fda084987d45c4319ad5cd6365cf8258fd7",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "a0d1208cc781e7a63e5ad67e5f23c7ef291b931b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118650432 | pes2o/s2orc | v3-fos-license | Electron-positron pairs production in a macroscopic charged core
Classical and semi-classical energy states of relativistic electrons bounded by a massive and charged core with the charge-mass-radio Q/M and macroscopic radius R_c are discussed. We show that the energies of semi-classical (bound) states can be much smaller than the negative electron mass-energy (-mc^2), and energy-level crossing to negative energy continuum occurs. Electron-positron pair production takes place by quantum tunneling, if these bound states are not occupied. Electrons fill into these bound states and positrons go to infinity. We explicitly calculate the rate of pair-production, and compare it with the rates of electron-positron production by the Sauter-Euler-Heisenberg-Schwinger in a constant electric field. In addition, the pair-production rate for the electro-gravitational balance ratio Q/M = 10^{-19} is much larger than the pair-production rate due to the Hawking processes.
The energetics of this phenomenon can be understood as follows. The energy-level of the bound state 1S_{1/2} can be estimated in terms of r̄, the average radius of the 1S_{1/2} orbit, with the binding energy of this state Ze²/r̄ > 2mc². If this bound state is unoccupied, the bare nucleus gains a binding energy Ze²/r̄ larger than 2mc² and becomes unstable against the production of an electron-positron pair. Assuming this pair production occurs around the radius r̄, the energies of the electron (ǫ−) and positron (ǫ+) involve the electron and positron momenta p±, with p− = −p+. The total energy required for a pair production is independent of the potential V(r): the potential energies ±eV(r) of electron and positron cancel each other and do not contribute to the total energy (8) required for pair production. This energy (8) is acquired from the binding energy (Ze²/r̄ > 2mc²) by the electron filling the bound state 1S_{1/2}. A part of the binding energy becomes the kinetic energy of the positron that goes out. This is analogous to the familiar case in which a proton (Z = 1) catches an electron into the ground state 1S_{1/2}, and a photon is emitted with an energy of not less than 13.6 eV.
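The display equations of this passage were lost in extraction; the following LaTeX block is a hedged reconstruction from the surrounding definitions (taking the electron and positron potential energies to be ±eV(r̄), as the text states), not a verbatim copy of the source's Eqs. (6)-(8).

```latex
% Hedged reconstruction of the elided relations (not verbatim Eqs. 6-8).
% Electron/positron energies around the radius \bar r, with p_- = -p_+:
\[
  \epsilon_{\mp} = \sqrt{(c\,p_{\mp})^{2} + m^{2}c^{4}} \;\pm\; eV(\bar r)
\]
% Total energy required for pair production (the \pm eV terms cancel):
\[
  \epsilon = \epsilon_{-} + \epsilon_{+}
           = 2\sqrt{(c\,p)^{2} + m^{2}c^{4}} \;\geq\; 2mc^{2}
\]
```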
In this article, we study the classical and semi-classical states of electrons, and electron-positron pair production, in the electric potential of macroscopic cores with charge Q = Z|e|, mass M, and macroscopic radius R_c.
II. CLASSICAL DESCRIPTION OF ELECTRONS IN POTENTIAL OF CORES
A. Effective potentials for the particle's radial motion

Setting the origin of spherical coordinates (r, θ, φ) at the center of such cores, we write the vector potential A_µ = (A, A_0), where A = 0 and A_0 is the Coulomb potential. The motion of a relativistic electron with mass m and charge e is described by its radial momentum p_r, total angular momentum p_φ, and the Hamiltonian (9), where the potential energy V(r) = eA_0 and ± corresponds to positive and negative energies. The states corresponding to negative energy solutions are fully occupied. The total angular momentum p_φ is conserved, for the potential V(r) is spherically symmetric. For a given angular momentum, where v⊥ is the transverse velocity, the effective potential energy for the electron's radial motion is given by Eq. (10), where ± indicates positive and negative effective energies. Outside the core (r ≥ R_c), the Coulomb potential energy V(r) is given by Eq. (11); inside the core (r ≤ R_c), it is given by Eq. (12), where we postulate that the charged core has a uniform charge distribution with constant charge density ρ = Ze/V_c and core volume V_c = 4πR_c³/3. The Coulomb potential energies outside the core (11) and inside the core (12) are continuous at r = R_c. The electric field on the surface of the core is given by Eq. (13), where the electron Compton wavelength λ_e = ħ/(mc), the critical electric field E_c = m²c³/(eħ), and the parameter β is the electric potential energy on the surface of the core in units of the electron mass-energy.
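Eqs. (9)-(13) themselves were dropped in extraction; the LaTeX block below is a hedged reconstruction from the stated definitions (uniform charge density, V(r) = eA_0, and β as defined in the text), consistent with the relations β = Zαλ/R_c and Z = α⁻¹(R_c/λ)² quoted later in the paper.

```latex
% Hedged reconstruction of the elided Eqs. (9)-(13).
\[
  H_{\pm} = \pm\sqrt{m^{2}c^{4} + c^{2}p_{r}^{2} + c^{2}p_{\phi}^{2}/r^{2}} + V(r),
  \qquad
  E_{\pm}(r) = \pm\sqrt{m^{2}c^{4} + c^{2}p_{\phi}^{2}/r^{2}} + V(r)
\]
\[
  V_{\rm out}(r) = -\frac{Ze^{2}}{r}\,,\qquad
  V_{\rm in}(r)  = -\frac{Ze^{2}}{2R_c}\left(3 - \frac{r^{2}}{R_c^{2}}\right)
\]
\[
  E_{s} = \frac{Ze}{R_c^{2}} = Z\alpha\left(\frac{\lambda_e}{R_c}\right)^{2} E_c\,,
  \qquad
  \beta \equiv \frac{Ze^{2}}{mc^{2}R_c} = Z\alpha\,\frac{\lambda_e}{R_c}
\]
```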
B. Stable classical orbits (states) outside the core.
Given different values of the total angular momentum p_φ, the stable circular orbits R_L (states) are determined by the minimum of the effective potential E+(r) (10) (see Fig. 1), at which dE+(r)/dr = 0. We obtain stable orbits located at radii R_L outside the core for different p_φ-values. Substituting Eq. (14) into Eq. (10), we find the energy of the electron at each stable orbit, Eq. (15). For the condition R_L ≳ R_c, we obtain Eq. (16), where the semi-equality holds for the last stable orbits outside the core, R_L → R_c + 0⁺. In the point-like case R_c → 0, the last stable orbits are given by Eq. (17). Eq. (15) shows that there are only positive or null energy solutions (states) in the case of a point-like charge, which corresponds to the energy spectra of Eqs. (3,4,5) in the quantum-mechanical scenario.
For p_φ ≫ 1, the radii of stable orbits R_L ≫ 1 and the energies E → mc² + 0⁻; classical electrons in these orbits are critically bound, as their binding energy goes to zero. We conclude that the energies (15) of stable orbits outside the core must be smaller than mc² but larger than zero, E > 0. Therefore, no energy-level crossing with the negative energy spectrum occurs.
C. Stable classical orbits inside the core.
We turn to the stable orbits of electrons inside the core. Analogously, using Eqs. (10,12) and dE+(r)/dr = 0, we obtain the stable orbit radius R_L ≤ 1 in units of R_c, obeying Eq. (18) and corresponding to the minimal energy (binding energy) of these states, Eq. (19). There are 8 solutions to the polynomial equation (18), of which only one is physical: the solution R_L must be real, positive, and smaller than one. As an example, the numerical solution to Eq. (18) is R_L = 0.793701 for β = 4.4 · 10^16 and κ = 2.2 · 10^16. In the following, we adopt non-relativistic and ultra-relativistic approximations, respectively, to obtain analytical solutions.
First, considering the non-relativistic case for those stable orbit states whose kinetic energy term, characterized by the angular momentum term p_φ (see Eq. (10)), is much smaller than the rest mass term mc², we obtain an approximate equation with solutions for the stable orbit radii (21) and energies (22). The consistency conditions for this solution are β^{1/2} > κ for R_L < 1, and β ≪ 1 for the non-relativistic regime. As a result, the binding energies (22) of these states satisfy mc² > E > 0 and are never less than zero. These in fact correspond to stable states with large radii close to the core radius R_c and v⊥ ≪ c.
Second, considering the ultra-relativistic case for those stable orbit states whose kinetic energy term, characterized by the angular momentum term p_φ (see Eq. (10)), is much larger than the rest mass term mc², we obtain an approximate equation with solutions for the stable orbit radii (24), which gives R_L ≃ 0.7937007 for the same values of the parameters β and κ as above. The consistency condition for this solution is β > κ ≫ 1 for R_L < 1. The energy levels of these ultra-relativistic states are given by Eq. (25), with mc² > E > −1.5βmc²; the particular solutions E = 0 and E ≃ −mc² follow accordingly. These in fact correspond to stable states with small radii close to the center of the core. To have the energy-level crossing to the negative energy continuum, we are interested in the values β > κ ≫ 1 for which the energy levels (25) of stable orbit states are equal to or less than −mc² (27). As an example, with β = 10 and κ = 2, R_L ≃ 0.585 and E_min ≃ −9.87mc². The lowest energy level of the electron states is at p_φ/(Ze²) = κ/β → 0, with binding energy E_min = −3βmc²/2 (28), located at R_L ≃ (p_φc/Ze²)^{1/3} → 0, the bottom of the potential energy V_in(0) (12).
A. Bohr-Sommerfeld quantization
In order to gain further understanding, we consider the semi-classical scenario. Introducing the reduced Planck constant ħ = h/(2π), we adopt the semi-classical Bohr-Sommerfeld quantization rule, which selects discrete values from the continuous total angular momentum p_φ of the classical scenario. With the variation of the total angular momentum ∆p_φ = ±ħ in units of the Planck constant, we make the corresponding substitution in the classical solutions obtained in Section II.
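The quantization rule itself was dropped in extraction; a standard form consistent with the text's later use of integer quantum numbers l is (the exact quantum offset in the original Eq. (29) is uncertain):

```latex
% Assumed standard form of the Bohr-Sommerfeld rule (original display lost).
\[
  \oint p_{\phi}\, d\phi = 2\pi\, p_{\phi} = l\,h
  \quad\Longrightarrow\quad
  p_{\phi} = l\,\hbar,\qquad l = 1, 2, 3, \ldots
\]
```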
1. The radii and energies of stable states outside the core (14) and (15) become their quantized counterparts (30,31), where the electron Compton length λ = ħ/(mc).
2. The radii and energies of non-relativistic stable states inside the core (21) and (22) become their quantized counterparts (33,34).
3. The radii and energies of ultra-relativistic stable states inside the core (24) and (25) become their quantized counterparts (35,36).
Note that the radii R_L in the second and third cases are in units of R_c.
B. Stability of semi-classical states
When these semi-classical states are not occupied, as required by the Pauli principle, the transition from one state to another with different discrete values of the total angular momentum l (l_1, l_2 and ∆l = l_2 − l_1 = ±1) proceeds by emission or absorption of a spin-1 (ħ) photon. Following energy and angular-momentum conservation, the photon emitted or absorbed in the transition has angular momentum p_γ = p_φ(l_2) − p_φ(l_1) = ħ(l_2 − l_1) = ±ħ and energy E_γ = E(l_2) − E(l_1). In this transition between stable states, the variation of the radius is ∆R_L = R_L(l_2) − R_L(l_1).
This state l = l̄ > 0 is not protected by the Heisenberg indeterminacy principle from quantum-mechanically decaying in ħ-steps to states with lower angular momenta and energies (and correspondingly smaller radii R_L (31)) via photon emission. This clearly shows that the "Z = 137 catastrophe" corresponds to R_L → 0, falling to the center of the Coulomb potential, and that all semi-classical states (l) are unstable.
We then consider the stability of semi-classical states against such transitions in the case of charged cores with R_c ≠ 0. Substituting p_φ from Eq. (29) into Eq. (16), we obtain the selected semi-classical state l̄ corresponding to the last classical stable orbit outside the core. Analogously to Eq. (37), the same argument establishes the instability of this semi-classical state, which must quantum-mechanically decay to states with angular momentum l < l̄ inside the core, provided these semi-classical states are not occupied. This conclusion is independent of the Zα-value.
We go on to examine the stability of semi-classical states inside the core. In the non-relativistic case (1 ≫ β > κ²), the last classical stable orbits are located at R_L → 0 and p_φ → 0, given by Eqs. (21,22), corresponding to the lowest semi-classical state (33,34) with l = 0 and energy mc² > E > 0. In the ultra-relativistic case (β > κ ≫ 1), the last classical stable orbits are located at R_L → 0 and p_φ → 0, given by Eqs. (24,25), corresponding to the lowest semi-classical state (35,36) with l = 0 and minimal energy. This establishes that the l = 0 semi-classical state inside the core is an absolute ground state in both the non- and ultra-relativistic cases. The Pauli principle ensures that all semi-classical states l > 0 are stable, provided all these states accommodate electrons. The electrons can either be present inside the core or be produced from vacuum polarization, as will be discussed in detail later.
We are particularly interested in the ultra-relativistic case β > κ ≫ 1, i.e., Zα ≫ 1, where the energy levels of semi-classical states can be deeper than −mc² (E < −mc²); energy-level crossings and pair production occur if these states are unoccupied, as discussed in the introductory section.
IV. PRODUCTION OF ELECTRON-POSITRON PAIR
When the energy levels of semi-classical (bound) states satisfy E ≤ −mc² (27), energy-level crossings occur between these energy levels (25) and the negative energy continuum (10) for p_r = 0, as shown in Fig. 2.
The energy-level crossing indicates that E (25) and E− (10) are equal, where the angular momenta p_φ in E (36) and E− (10) are the same by angular-momentum conservation.
The production of electron-positron pairs must take place, provided these semi-classical (bound) states are unoccupied. The phenomenon of pair production can be understood as a quantum-mechanical tunneling process of relativistic electrons. The energy levels E of semi-classical (bound) states are given by Eq. (36) or (27). The probability amplitude for this process can be calculated by a semi-classical WKB method [19], Eq. (41), where |p⊥| = p_φ/r is the transverse momentum and the radial momentum is given by Eq. (42). The energy potential V(r) is either V_out(r) (11) for r > R_c or V_in(r) (12) for r < R_c.
To obtain a maximal WKB probability amplitude (41) of pair production, we consider only the case in which the charged core is bare, and:
• the lowest energy levels of semi-classical (bound) states: p_φ/(Ze²) = κ/β → 0, the location of the classical orbit (24) R_L = R_b → 0, and energy (25) E → E_min = −3βmc²/2 (28);
• the other classical turning point R_n ≤ R_c, since the probability is exponentially suppressed by a large tunneling length ∆ = R_n − R_b.
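A minimal numerical sketch of the WKB amplitude of Eq. (41) under these assumptions, in natural units ħ = c = m_e = 1, using the reconstructed uniform-sphere potential V_in and taking p_φ → 0; β = 10 echoes the text's example, while the core radius in Compton units is an illustrative input, not the paper's value.

```python
import numpy as np
from scipy.integrate import quad

# Natural units hbar = c = m_e = 1; radial coordinate x = r / R_c.
beta = 10.0                        # surface potential energy / mc^2 (text's example)
Rc_over_lambda = 100.0             # assumed core radius in units of hbar/(m c)
E = -1.5 * beta                    # deepest bound level, E_min = -(3/2) beta (Eq. 28)

def V_in(x):
    # Reconstructed uniform-sphere potential: V_in(r)/mc^2 = -(beta/2)(3 - x^2)
    return -0.5 * beta * (3.0 - x ** 2)

def abs_p_r(x):
    # Forbidden region: (E - V)^2 < 1, so |p_r| = sqrt(1 - (E - V)^2)
    return np.sqrt(max(0.0, 1.0 - (E - V_in(x)) ** 2))

x_n = np.sqrt(2.0 / beta)          # outer turning point, where E - V_in(x_n) = -1
action, _ = quad(abs_p_r, 0.0, x_n)
S = 2.0 * Rc_over_lambda * action  # tunneling exponent, 2/hbar * integral |p_r| dr
print(f"x_n = {x_n:.3f} R_c, W ~ exp(-{S:.1f}) = {np.exp(-S):.3e}")
```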
In this case (R_n ≤ R_c), Eq. (42) simplifies to Eq. (43), and p_r = 0 leads to Eq. (44). Using Eqs. (41,43,44), we obtain the amplitude (45). Dividing this probability amplitude by the tunneling length ∆ ≃ R_n and the time interval ∆t ≃ 2πħ/(2mc²) in which the quantum tunneling occurs, and integrating over the two spin states and the transverse phase space 2∫ dr⊥dp⊥/(2πħ)², we approximately obtain the rate of pair production per unit time and volume, Eq. (47), where E_s = Ze/R_c² is the electric field on the surface of the core and τ = ħ/(mc²) is the Compton time. To gauge the size of this pair-production rate, we consider a macroscopic core of mass M = M_⊙ and radius R_c = 10 km, for which the electric field on the core surface E_s (13) is about the critical field (E_s ≃ E_c). In this case, Z = α⁻¹(R_c/λ)² ≃ 9.2 · 10^34, β = Zαλ/R_c = R_c/λ ≃ 2.59 · 10^16, and the rate (47) becomes exponentially small for R_c ≫ λ. In this case, the charge-mass ratio Q/(G^{1/2}M) = 2 · 10⁻⁶ |e|/(G^{1/2}m_p) = 8.46 · 10⁻⁵, where G is the Newton constant and |e|/(G^{1/2}m_p) is the proton's charge-mass ratio. It is interesting to compare this rate of electron-positron pair production with the rate given by the Hawking effect. We take R_c = 2GM/c² and the charge-mass ratio Q/(G^{1/2}M) ≃ 10⁻¹⁹ for a naive balance between gravitational and electric forces. In this case β = ½(Q/G^{1/2}M)(|e|/G^{1/2}m) ≈ 10², and the rate (47) follows, with the notation mM = R_c/(2λ). This is much larger than the rate of electron-positron emission by the Hawking effect [23], since the exponential factor exp{−0.492(mM)} is much larger than exp{−8π(mM)}, where 2mM = R_c/λ ≫ 1.
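A quick numerical check of the quoted magnitudes for a solar-mass core with R_c = 10 km at the critical surface field (constants from CODATA); this only verifies the arithmetic Z = α⁻¹(R_c/λ)² and β = R_c/λ and the exponent comparison, with mM an illustrative value:

```python
import math

# CODATA constants
hbar, m_e, c = 1.054571817e-34, 9.1093837015e-31, 2.99792458e8
alpha = 7.2973525693e-3
lam = hbar / (m_e * c)                 # reduced Compton wavelength ~ 3.86e-13 m

R_c = 1.0e4                            # core radius: 10 km
beta = R_c / lam                       # ~ 2.59e16, as quoted in the text
Z = (R_c / lam) ** 2 / alpha           # ~ 9.2e34, as quoted in the text
print(f"beta = {beta:.3e}, Z = {Z:.3e}")

# Hawking comparison: the suppression exponents differ by (8*pi - 0.492)*mM,
# so the tunneling rate dominates for any mM >> 1 (mM = R_c / (2*lambda)).
mM = 1.0e2                             # illustrative value only
print(f"log(rate ratio) ~ {(8 * math.pi - 0.492) * mM:.1f}")
```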
V. SUMMARY AND REMARKS
In this letter, analogously to studies in atomic physics at large atomic number Z, we study the classical and semi-classical (bound) states of electrons in the electric potential of a massive, charged core with a uniform charge distribution and macroscopic radius. We have found negative energy states of electrons inside the core, whose energies can be smaller than −mc², and the appearance of energy-level crossing to the negative energy spectrum. As a result, quantum tunneling takes place, leading to the production of electron-positron pairs; electrons then occupy these semi-classical (bound) states and positrons are repelled to infinity. Assuming that the massive charged core is bare and none of these semi-classical (bound) states are occupied, we analytically obtain the maximal rate of electron-positron pair production in terms of the core radius, charge, and mass. We find that this rate is much larger than the rate of electron-positron pair production by the Hawking effect, even for the very small charge-mass ratio of the core given by the naive balance between gravitational and electric forces.
Fig. 1 caption (fragment): stable orbits where E+ has a minimum (15). All stable orbits are described by cp_φ > Ze². The last stable orbits are given by cp_φ → Ze² + 0⁺, whose radial location R_L → 0 and energy E → 0⁺. There is no stable orbit with energy E < 0, and energy-level crossing with the negative energy spectrum E− is impossible. All stable orbits inside the core are described by β > κ > 1. The last stable orbit is given by κ/β → 0, whose radial location R_L → 0 and energy E → E_min (28).
Fig. 2 caption (fragment): the energy-level crossing between the bound-state (stable orbit) energy at R_L = R_b and the negative energy spectrum E− (25) at the turning point | 2011-06-24T16:53:27.000Z | 2011-02-07T00:00:00.000 | {
"year": 2011,
"sha1": "c45562d9995d330cc67507aa3d688e7a3890dc82",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.physletb.2010.12.061",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "59cd6bec1e1f569e2adea9908e1f50ad4fbcf9d9",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
52931046 | pes2o/s2orc | v3-fos-license | Job Satisfaction Among Doctors from Jiangsu Province in China
Background Doctors' job satisfaction has an important effect on medical and health services. This study assessed the level of job satisfaction among Chinese doctors and explored how influencing factors differ between general practitioners and specialists. Material/Methods The Minnesota Satisfaction Questionnaire (MSQ) on job satisfaction was distributed to 1883 doctors in Jiangsu province, including 850 general practitioners and 1033 specialists. Data analysis was performed with SPSS 20.0. A one-way ANOVA was used to analyze doctors' job satisfaction and logistic regression analysis was used for multivariate analysis. Correlation analysis was performed on the 5 dimensions of satisfaction. Results The average MSQ score of all surveyed doctors was 3.11±0.87, with general practitioner (GP) and specialist scores of 2.81±0.84 and 3.35±0.82, respectively. Analysis of doctors' satisfaction indicated that gender, age, marital status, educational attainment, professional title, and seniority were statistically significant (P<0.05). Overall satisfaction was most closely related to the job itself (r=0.96); work, the work environment, and interpersonal relationships were closely related to leadership and management. Conclusions The level of job satisfaction of Chinese doctors, especially general practitioners, needs to be improved. Measures such as improving education levels, the work environment, and workplace relationships need to be taken soon to improve doctors' job satisfaction in China.
Background
Job satisfaction is often defined as the feeling one has about one's job [1]. In the 21st century, "human resources" plays an increasingly important role in enterprise management [2]. As an important indicator of employees' sense of belonging to an enterprise, job satisfaction is also receiving more attention. Education, personal income, and workplace relationships have proved to be positively and significantly related to all 3 of these indicators of job satisfaction [3]. Higher job satisfaction can give employees a stronger sense of belonging to the company and improve their motivation. In contrast, lower job satisfaction can weaken employees' feeling of belonging and enthusiasm and increase their willingness to quit [4]. For physicians, job satisfaction not only affects their own career, but can also affect patients [5].
With the deepening reform of the medical system in China, people's awareness of health has been increasing, and the demand for high-quality medical and health services has grown [6]. According to data from the fourth survey released by the Chinese Medical Doctor Association on doctors' practice conditions in China, 48.51% of medical staff are not satisfied with their current practice environment, and 95.66% of the doctors surveyed believe their effort and income are not commensurate [7]. Several factors affect doctors' job satisfaction: for example, work intensity and pressure are increasing yearly for clinical doctors; doctors are frequently maligned; negative reporting on doctors is prevalent in the media; and the unpopular professional evaluation system for doctors and the drug-pricing system in China have negative effects on the clinical practice environment, decreasing doctors' job satisfaction and negatively affecting the health care service process [8]. Numerous studies show a correlation between a doctor's job satisfaction, the quality of medical services offered, and the likelihood that the doctor will quit. Meanwhile, a doctor's job satisfaction is directly related to the patient's satisfaction with the medical service [9]. Hass et al. [10] confirmed a correlation between doctors' job satisfaction and the satisfaction of the patients they treated, showing that patients treated by doctors with high job satisfaction scores were twice as satisfied as those treated by doctors with low job satisfaction scores. There has been little focused research on doctors' job satisfaction to date, leaving the status of Chinese doctors' job satisfaction poorly characterized. Doctors can only provide high-quality health services to patients if they are adequately cared for and respected internally [11].
Salary, promotion, and job safety are crucial for improving job satisfaction. Therefore, Wen et al. [12] suggested that the government increase its financial investment in primary care facilities, especially in less-developed areas, and reform incentive mechanisms to improve the job satisfaction of primary care doctors, and proposed policies such as establishing a social pension program for village-level doctors and providing more opportunities for job promotion among primary care doctors. The present study investigated the overall job satisfaction of doctors in China to explore its influencing factors, and to provide policy advice for improving clinical doctors' job satisfaction.
Study population
A cross-sectional survey was conducted during July-August 2016 in Jiangsu province, southeastern China. We randomly selected 1 city in each geographic region of Jiangsu province (northern, middle, and southern). We divided clinical doctors into general practitioners and specialists. General practitioners (GPs) refer to doctors working in community health care institutions, whereas specialists usually work in specialized and general hospitals.
Data collection
In this study, we used stratified random sampling to select 27 general hospitals and 27 community hospitals from each of the southern, central, and northern parts of Jiangsu province. We recruited 2010 doctors according to the sample size calculated with the formula n = μα² × P₀(1−P₀) / d². According to reported data from the 4th survey released by the Chinese Medical Doctor Association on doctor practicing conditions in China [16], 48.51% of medical staff are not satisfied with their current practice environment (P₀=0.52). The relative permissible error was 15%, so d=0.15×0.52=0.078, and μα=1.96. Allowing for a 95% response rate and a design effect of 1.8, the minimum sample size for each level (general practitioners or specialists in each region) was calculated as n=299. In total, 81 general hospitals and 81 community hospitals were included, and we selected 10-13 doctors from each hospital. The sample size was divided into 3 levels according to the southern, central, and northern areas of Jiangsu province, and 2 levels according to specialist doctors and general practitioners (N=299×6=1794). Finally, a total of 2010 clinicians were selected as respondents and were surveyed by questionnaire.
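To make the calculation above easy to retrace, the following sketch recomputes the stated per-stratum sample size in Python, assuming the standard single-proportion formula n = μα²P₀(1−P₀)/d² together with the design-effect and response-rate adjustments described in the text; it is an illustration, not the authors' original calculation.

```python
import math

# Sample-size sketch for estimating a single proportion, with the
# adjustments described in the text (assumed standard formula).
P0 = 0.52            # reported proportion (P0) from the Chinese Medical Doctor Association survey
d = 0.15 * P0        # relative permissible error of 15% -> d = 0.078
u_a = 1.96           # normal deviate for a two-sided alpha of 0.05
deff = 1.8           # design effect
response_rate = 0.95

n_basic = (u_a ** 2) * P0 * (1 - P0) / d ** 2            # ~158 per stratum
n_adjusted = math.ceil(n_basic * deff / response_rate)    # ~299 per stratum

print(f"basic n per stratum: {n_basic:.1f}")
print(f"adjusted n per stratum: {n_adjusted}")
print(f"total over 6 strata: {n_adjusted * 6}")           # 1794; 2010 questionnaires were actually issued
```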
Ethics approval
Approval to conduct this study was granted by the Ethics Committee on Human Research (Institutional Review Board) at Zhong Da Hospital, Southeast University.
Statistical analysis
Data were recorded using the data management software Epidata for Windows, ver. 3.01 (http://www.epidata.dk), and data analysis was performed in IBM SPSS Statistics for Windows, ver. 20 (www.ibm.com/legal/copytrade.shtml). Survey respondents' basic characteristics and job satisfaction scoring are indicated as percentages. A one-way ANOVA was used for the analysis of doctors' job satisfaction, and logistic regression was used for multivariate analysis. The dependent variable was doctors' job satisfaction, and independent variables were doctor classification (GPs=1, specialists=0), gender (female=1, male=0), age (>41 years=1, ≤40 years=0), marital status (married, divorced, or widowed=1, single=0), educational attainment (master's and above=1, undergraduate and below=0), professional title (intermediate title and above=1, junior title and below=0), and seniority (>5 years=1, ≤5 years=0). Correlation analysis was used to assess the relationships among the 5 dimensions of satisfaction, with P<0.05 used as the criterion for statistical significance.
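As an illustration of the coding scheme above (the study itself used SPSS 20.0), the following Python sketch shows how the binary predictors and a dichotomized satisfaction outcome might be assembled for a logistic regression; the file name, the column names, and the cut-off used to dichotomize the MSQ score are all hypothetical assumptions, not details from the study.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey file; all column names below are illustrative.
df = pd.read_csv("doctor_survey.csv")

coded = pd.DataFrame({
    "gp":           (df["classification"] == "GP").astype(int),      # GPs=1, specialists=0
    "female":       (df["gender"] == "female").astype(int),          # female=1, male=0
    "age_over_40":  (df["age"] > 40).astype(int),                    # older group=1, <=40 years=0
    "married":      df["marital_status"].isin(["married", "divorced", "widowed"]).astype(int),
    "postgraduate": (df["education"] == "master_or_above").astype(int),
    "senior_title": (df["title"] == "intermediate_or_above").astype(int),
    "worked_5plus": (df["seniority_years"] > 5).astype(int),
})

# Dichotomize the MSQ score so it can serve as a binary outcome; the
# cut-off of 3 is an assumption made only for this sketch.
y = (df["msq_total"] >= 3).astype(int)

model = sm.Logit(y, sm.add_constant(coded)).fit()
print(model.summary())
```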
Basic characteristics
For the survey, 2010 questionnaires were issued and 1883 valid completed questionnaires were returned, for an effective response rate of 93.7%. The basic information of respondents is shown in Table 1. Of the 1883 doctors responding, 850 were general practitioners and 1033 were specialists (Table 1). More male than female doctors were included. Most respondents were under the age of 50, and a large portion of respondents were married. As shown in Table 1, the chi-square values of differences between specialist doctors and general practitioners in terms of age, marital status, educational attainment, professional title, and seniority were statistically significant (P<0.05).
Satisfaction analysis of each item for general practitioners and specialists
The overall satisfaction of the doctors was 3.11±0.87, with internal satisfaction equal to 3.09±0.90 and external satisfaction equal to 3.14±0.90 (Table 2). Overall satisfaction of general practitioners (GPs) was 2.81±0.84, with internal satisfaction equal to 2.78±0.86 and external satisfaction equal to 2.86±0.88. For specialists, overall satisfaction was 3.35±0.82, internal satisfaction was 3.33±0.85, and external satisfaction was 3.37±0.84. Overall, the degree of job satisfaction of specialists was higher than that of general practitioners. The highest-scored item among doctors was question 12 (the way hospital policy is implemented, 3.29±1.13), and the lowest-scored was question 7 (work that does not violate my conscience, 2.95±1.31). Each item differed significantly (P<0.05) between general practitioners and specialists, and there was a statistically significant difference in satisfaction between specialists and general practitioners (P<0.05).
Factors influencing doctors' job satisfaction
Single-factor analysis of doctor satisfaction (Table 3) indicated that job satisfaction was higher in male doctors than in female doctors (P<0.05), and was highest in doctors more than 50 years old (P<0.05), married doctors (P<0.05), doctors with a master's degree and above (P<0.05), and doctors who had worked for 5-10 years (P<0.05). The single-factor analysis of internal and external satisfaction was consistent with that of overall satisfaction.
Logistic regression analysis (Table 4) showed that doctor classification, gender, education, and professional title significantly affected doctors' job satisfaction. In addition, gender, education, and work time were factors affecting general practitioners' job satisfaction, while marriage status, birth, education, and professional title were factors affecting specialist doctors' job satisfaction.
Correlation analysis of doctors' job satisfaction in 5 dimensions
Overall satisfaction is divided into 5 dimensions: work itself, work environment, interpersonal relationship, lead management, and pay and benefits. The correlation analysis revealed that overall satisfaction was most closely related to the work itself. In addition, work itself, work environment, and interpersonal relationships were all closely correlated with the lead management dimension (Table 5).
Discussion
In China, general and specialized hospitals are often superior to primary health care institutions; therefore, specialists often have a superior work environment, higher salary and benefits, and better career development compared to general practitioners. However, specialists have heavier workloads and pressure, especially the competitive pressure imposed by professional title and scientific research requirements. There is a high and increasing level of mental distress and discontent among GPs, and targeted interventions are needed to address GP mental health and job satisfaction [16]. Nonetheless, the present study found that for all measures of job satisfaction (overall, internal and external), specialists ranked higher than general practitioners. The 3 highest-scored items of doctors were: the way company policies are put into practice, the working conditions, and praise for doing a good job. The highest-scored item for general practitioners was working conditions. For specialists, the highest-scored item was also working conditions. Both general practitioners and specialists gave low scores for question 7 ("Being able to do things that don't go against my conscience"). This may be related to China's drug price system, which links part of the doctors' income to the prescriptions they give. Increasing income and benefits levels for doctors, especially general practitioners, may increase doctors' satisfaction. Communication activities should be carried out to enhance internal mutual cooperation, creating a harmonious, interactive work atmosphere, and improving doctors' working enthusiasm.
The influence of gender on job satisfaction has previously been the focus of academic research [17,18], showing that male doctors scored higher than female doctors on all items among general practitioners, which is similar to the present study. General practitioners with master's education level and above scored higher than those with undergraduate level and below. In specialists, however, those with undergraduate education and below scored higher than those with master's and above. The difference between the 2 groups was statistically significant. Compared to primary health institutions, general and specialized hospitals typically require higher levels of vocational and technical experience for doctors. Research shows that a higher degree can lead employees to have higher expectations for their career [19]. As a result, higher-educated doctors may have higher expectations for their career, and if working at a primary health institution, these expectations may not be met, and thus they have lower job satisfaction. On the other hand, for less-educated doctors working in general and specialized hospitals, their lack of education may be an obstacle, lowering their competitiveness in their career development, especially in terms of scientific research and professional title, and this may also reduce job satisfaction.
Doctor classification, gender, education, and professional title are factors that affect doctors' job satisfaction. In addition, gender, education, and work time affected general practitioners' job satisfaction, while marital status, education, and professional title affected specialists' job satisfaction. More working years are accompanied by increasing salary and promotion, leading to greater social status and confidence in future career development; it is thus unsurprising that these factors lead to increased job satisfaction. Marital status has an impact on people's daily life, especially their emotions. A good marital status makes people happy, leading to work motivation and enthusiasm. Divorced and widowed individuals often lack self-care in their daily lives, and are subject to various negative influences that can also affect work motivation and efficiency. Zhou [20] suggested that measures are needed to promote continuing education and personal health, balance workload and income, and rebuild trust and respect for medical staff, thereby improving job satisfaction among physicians and nurses in tertiary public hospitals. Meanwhile, work itself, work environment, and interpersonal relationship are closely correlated with lead management. Thus, we need to take measures to improve the quality of lead management.
As an important constituent group of health service providers, doctors directly affect the quality of medical and health services provided. Improving doctors' job satisfaction is thus important to improving the quality of medical and health services, the relationship between doctors and patients, and patient satisfaction in the treatment process. With the deepening of the new reform in China's medical and health care system and the implementation of hierarchical diagnosis and treatment, general practitioners and specialists undertake different responsibilities at different levels of medical institutions. The results of this survey provide suggestions for improving doctors' job satisfaction. First, general practitioners' job satisfaction is lower than that of specialists overall. However, with implementation of the first-option policy in the primary care system, general practitioners play the role of gatekeeper in residents' health. As the first-contact doctor for residents, it is important to improve the job satisfaction of general practitioners. This can be done by improving income levels, benefits received, and working environment, and regularly developing the teaching and training work of general practitioners. As general practitioners' level of medical skill increases, their work enthusiasm also increases.
Second, job satisfaction in specialists was mainly related to working time and workload. Implementing a hierarchical diagnosis and treatment system, especially the first-option in primary medical institutions (guiding patients to go to primary medical institutions first), will allow primary medical institutions to undertake treatment of the most frequently occurring diseases, to some extent relieving the workload of specialists.
Finally, although general practitioners and specialists undertake different labor divisions in different levels of medical institutions, they are all included in the health care system in China. Under the backdrop of hierarchical diagnosis and treatment implementation in China, strengthening the linking and cooperation projects between different levels of medical and health institutions, and enhancing information-sharing and
Conclusions
The level of job satisfaction of Chinese doctors, especially general practitioners, needs to be improved. Doctor classification, gender, education, and professional title are factors that affect doctors' job satisfaction. Measures such as improving education levels, work environment, and relationships should be taken soon to improve doctors' job satisfaction in China. In addition, the improvement of doctors' own education and professional title is also beneficial to the improvement of job satisfaction. | 2018-10-22T06:13:30.655Z | 2018-10-08T00:00:00.000 | {
"year": 2018,
"sha1": "15ff076981cdafa3c7c3b97f7cb9fb9d4d7ca994",
"oa_license": "CCBYNCND",
"oa_url": "https://europepmc.org/articles/pmc6190724?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "15ff076981cdafa3c7c3b97f7cb9fb9d4d7ca994",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
7975722 | pes2o/s2orc | v3-fos-license | Brains and Minds: A Brief History of Neuromythology
The author demonstrates the place of mind and consciousness in the study of brain activity through the work of neuroscientists and philosophers.
It is consistent with this outlook that neuroscientists, and philosophers of mind impressed by neuroscience, have been inclined to believe that as we learn more about what happens in the brain, we are getting closer to understanding how the mind works; more fundamentally, what mind is and how it comes to exist. This is misleading about the scope and limits of science, and this 'scientism' may give science a bad name.
Against neuromythology, I will argue that not only has neuroscience failed to cast light on how there is such a thing as the mind but also that it is unlikely ever to do so. In arguing this, I do not intend to diminish the spectacular and important advances that have been made in understanding the workings of the brain, rather to deny that they give us any insight into the nature (or the basis) of consciousness, the mind or, if you will, the soul of man.
The modern belief that the activity of the brain provides an adequate explanation of human consciousness has several connected elements:
• that mental phenomena are identical with neural activity (or patterns of neural activity) taking place in certain parts of the brain
• that in the case of perception, this activity is caused by energy impinging on the brain ('the causal theory of perception')
• that the brain, in this regard, is like a computer (mind is the information processing activity of the brain)
• that mind/consciousness can be understood in terms of the evolutionary processes that gave rise to the brain.
Steven Pinker's statement that 'the mind is a system of organs of computation designed by natural selection to solve the problems faced by our evolutionary ancestors'2 gathers together many of these strands.
The brain as the seat of the soul
The history of neuromythology begins with a fatal mistake: the assumption that there is, indeed must be, an organ in the body wherein the soul or mind or consciousness is located. Although the theory antedated him, Hippocrates gave it its most striking expression. In On the sacred disease (epilepsy) he declared that: Men ought to know that from the brain, and from the brain only, arise our pleasures, joys, laughter and jests, as well as our sorrows, pains, grief and tears. Through it, in particular, we think, see, hear, and distinguish the ugly from the beautiful, the bad from the good, the pleasant from the unpleasant ...3 The brain is not only a necessary but also a sufficient condition of conscious experiences.
Powerful support for the central role of the brain in consciousness comes from the many ordinary observations that indicate that the condition the brain is in and the condition the mind is in are closely correlated. A bang on the head, with damage to the brain, may remove vision, impair memory or alter personality, all of which suggest that vision, memory, personality (everything from the most primitive buzz of sensation to the most exquisitely constructed sense of self) depend crucially on the functioning of the brain. For neuromythologists it follows that the mind or soul is housed in the brain.
The location of the soul within the brain
If we accept the notion that the mind is housed in the brain, or the soul is seated there, it seems reasonable to wonder whereabouts in the brain it is to be found. There was a very long argument as to whether the soul was in the parenchyma or the ventricles. Both possibilities were supported by ingenious arguments4.
Over the centuries, ventricular and parenchymal theories underwent considerable elaboration. According to one popular version, different ventricles housed different faculties of the soul: the anterior ventricle was the seat of fantasy or imagination; the middle ventricle the seat of reason; and the posterior ventricle that of memory. Eventually the notion that the parenchymatous tissue was the seat of the soul gained the upper hand but as recently as 1796, Soemmering (discoverer of the substantia nigra) argued, against Swedenborg, that the ventricular fluid was the repository of the soul. By then, however, the parenchymal theory was unassailable and the question was where exactly within the parenchyma the soul was located. Thomas Willis was the first to suggest the cerebral cortex, a location favoured by contemporary neurophilosophers.
As speculative neurophilosophy, which drew on sources as disparate as religious doctrine, clinical observation, and rampant rationalising guesswork, gave way to what we would recognise as neuroscience, the question of the location of the soul became rather more complex. Perhaps (as the defeated ventricular theorists had believed) it was located in several places rather than one; or perhaps it was diffused over the entire brain, or over part of the brain such as the cerebral cortex. The decisive intervention was that of Gall, the co-founder of phrenology, who promulgated the following principles5:
• the brain (especially the cortex) is the organ of the mind
• the brain is a composite of parts, each of which serves a distinct, task-specific 'faculty'
• the size of the different parts of the brain, as assessed chiefly through the examination of the cranium, is an index of the relative strengths of the different faculties being served.
The third principle has dominated and damaged the phrenologist's posthumous reputation, but the first two principles (the pre-eminence of the cortex in mental function and the localisation of different mental faculties within the cortex) have made an enduring contribution to the framework of neuroscientific research. Gall and Spurzheim were the first since Willis to identify the cortex as the basis of higher mental function. Moreover, the second principle addresses a serious philosophical problem. Historians6 have linked phrenology with the need to deal with the problems arising out of John Locke's theory of knowledge, promulgated in his enormously influential Essay concerning human understanding. Locke repudiated the notion of innate ideas and asserted that all knowledge came from the senses. The mind at birth was a 'tabula rasa', a clean slate or a blank sheet, and was effectively constructed out of experiences organised only according to their associations. But if the mind was a blank sheet at birth, and built up out of experiences, how did it manage to avoid being just a heap of impressions, a slop of accumulated experiences and their echoes in memory, not too different from delirium (more of a Jackson Pollock than a Hughlings Jackson)? There needed to be an innate material basis for the organisation of the material of which the mind was composed. Hence Gall and Spurzheim's5 separate mental faculties associated with discrete organs in the brain.
With the advent of more sophisticated physiological experimentation, and the precise observation of both clinical and pathological aspects of neurological damage reported by authors such as Broca and Hughlings Jackson, the doctrine of localisation (in particular the localisation of functions within the cortex) became irresistible. The localisation of higher mental functions in the brain had already been dramatically suggested by the 19th century's most famous neurological patient, Mr Phineas Gage, a railway worker who had an unfortunate encounter with a steel rod. This event, as the result of which he lost a lump of his frontal lobes, changed him from a purposeful, industrious worker, even tempered and impeccably mannered, into an evil tempered drunken drifter.
With the advent of modern methods of stimulating and recording from the central nervous system, of delineating its multifarious internal anatomical and physiological connections, and of imaging the living brain using a variety of modalities, we are now truly in a neo-phrenological era in which it seems as if every discernible function has its own piece of circuitry. This trend towards localisationism has been driven by the conceptual and empirical advances set out in Table 1.
Today's neo-phrenology is, of course, a long way on from the phrenology of Gall and Spurzheim. Not only is there now increasing emphasis on the plasticity of the brain, softwired modules and logic circuits rather than discrete anatomical sites, but also the functions into which the soul is fractionated tend to be things like object localisation, edge detection and encoding of episodic memory rather than the sense of justice or amatory propensity. The fundamental framework established by Gall and his later-19th-century successors, however, is the same.
The problems of neuromythology7,8
Everyday life and neuroscientific observations all point to the inescapable conclusion that consciousness is due to certain activity in the brain, that mental activity is neural activity. The original conjecture by the Greeks, Hippocrates pre-eminent among them, that the brain is the seat of the soul has, it seems, been triumphantly vindicated by modern science: the multiple functions of the secular mind are located in the cerebral cortex. Now, all that remains is to work within this secure framework to tease out the details of what happens in different locations and how those locations relate to one another. And this, we are led to believe, is what has been happening, at an ever-increasing pace, over the last 100 years. This is very disturbing. It would seem to suggest that our inmost selves are intimately connected with brain activity and, therefore, we humans, far from being metaphysically or ontologically unique beings, are part of the material world. There are, however, many reasons for not jumping straight from the observation that the state of the mind depends on the state of the brain, to the belief that the brain is the seat of the mind.
First, the relationship between objectively observed neural activity and the subjectively experienced contents of consciousness is profoundly puzzling (Fig 1). There are three favoured explanations of the connection between nerve impulses and conscious experiences: the dual aspect theory, the causal interaction theory, and the identity theory. There are obvious flaws with the first two theories, which I have discussed elsewhere7,8. The front runner is the identity theory, which is espoused implicitly or explicitly by most neuroscientists. This theory, which asserts that conscious experiences are identical with certain events located centrally in the brain, actually inherits all the weaknesses of the other two theories but objections to it have focussed on one explanatory gap: that neural activity is not at all like contents of consciousness.
The contrast between the monotonous similarity of neural activity and the infinite variety of the perceived world is one worry. The argument that location in the brain explains all (that sounds are experienced when the hearing neurons are excited and sights are experienced when the visual neurons are excited) is self-evidently circular. Even if there were some way of generating experienced qualities, how would we account for the discrepancy between their variety and the monotonous activity of the nervous system? According to most writers who believe in the identity theory, the basis for the necessary variety is to be found not in the individual impulses but in their patterns, the patterns of large numbers of impulses considered together. They argue that although individual nerve impulses are very much alike, there are millions of different possible patterns of impulses, and it is these patterns that underwrite the infinite variety of the world as presented in subjective experience. The trouble with this argument is that patterns do not exist in, even less for, the elements that make up the pattern. They exist only for an external observer, a consciousness that extracts the pattern.
Consider the array of dots in Fig 2. It could be seen as a single array of nine dots; as an array of six dots on the left and three on the right; or as an array of three dots on the left and six on the right; or as any of a vast number of possibilities. What this tells us is not that the array is infinitely rich in patterns, but that it has no inherent pattern; that its patterns exist only insofar as they are extracted; and they can be extracted only insofar as they are perceived.
What about the more fundamental objection to the identity theory, that nerve impulses not only fail to capture the variety of experience (the differences between different experiences), but they fail to seem like experiences at all? This objection has been countered by an argument from 'levels' of description or observation (Fig 3).
Philosophers and some neuroscientists have argued that the relationship between nerve impulses and conscious experiences is like that between water molecules and water. Water molecules are totally unlike water: they do not possess the properties of wetness, shininess, liquidity etc. There is, however, no doubt that water really is identical with H2O molecules: H2O molecules and drops of water are the same thing observed at different levels. It is argued, by analogy, that neural activity and conscious experience are also the same thing, perceived at different levels.
This analogy falls victim to arguments similar to those that undermine the patterns argument. The concept of levels implies levels of observation, and levels of observation presuppose observation and hence consciousness, and so cannot explain the relationship between the seemingly unconscious, third person neural activity of the brain and first person conscious experience. Neither does it explain why some neural activity supposedly has the property of being identical with consciousness while most neural activity (for example that which takes place in the cerebellum, the spinal cord, the peripheral nerves, as well as much of the activity recordable in the cerebral cortex) does not. It would be as though some molecules of H2O counted as water and others did not.
Nor, and this is absolutely crucial, does it account for the fundamental and unique characteristic of conscious experience, what philosophers call its 'intentionality', its character of being about something (Fig 4). One's consciousness of an object (the mental event, or the nerve impulses) explicitly refers to something other than itself. How do neural discharges in the brain refer back to the object that triggered them? The inward causal chain leading from the object to impulses in the cerebral cortex (represented by the top arrow in the diagram) is consistent with the materialistic framework of neuroscience, but the outward intentional link (whereby the impulses 'reach out' to, refer to, are about the seen object) most certainly is not. There is nothing else in nature corresponding to this outward intentional link.
Neural impulses, and hence brain activity and the brain, are even less able to account for the unity of conscious experience. There are impulses all over the brain, but there is no single place where they all come together in a moment of consciousness.
Why do we need to have such a place where it all comes together? This will become evident when we consider the long-range, explicit internal connectedness of consciousness that is necessary for us to be the responsible agents able to operate effectively in our complicated world. Giving this lecture at the College was a commitment that knitted together a multidimensional lacework of moments: the moments many months before it, when I accepted the invitation to speak and discussed the title of my talk; the Sunday mornings in the few weeks preceding it, in which I wrote the lecture; and those moments in which I deployed all sorts of implicit knowledge in order to find my way via taxi and train and foot to the Royal College of Physicians at the right time in the right place, while in the grip of a thousand other preoccupations and floating in a sea of sense data. That I succeeded in arriving to speak as planned is a remarkable tribute to the inexpressibly complex inner organisation of my life and its extendedness across time. Somehow, bursts of electricity in the wetware of the brain seem an inadequate explanation for the exquisitely structured mind that we all have. The problem thrown up by John Locke's theory of knowledge (that the mind threatens to be a heap of impressions) is not solved by the modularity that phrenologists and contemporary neuroscience attribute to the brain precisely for the reason that while modularity serves the purpose of keeping things tidily apart, it obstructs the need to bring them together in the moment of consciousness.
But there is an even deeper problem than that of bringing everything together. The brain must at the very same time keep vast numbers of projects, actions, micro-projects and micro-actions, distinct. Moreover, to make things even more difficult, those distinct projects must relate to many thousands of others, as each provides the others' framework of possibility. And worse, moment-to-moment consciousness has to retain a global openness in order that one can enact planned activities in a sea of unplanned contingencies, for example, avoiding a bicyclist while crossing the road.
We tend to overlook the complexity of the most ordinary aspects of our lives when we think about the neurophysiological basis of consciousness. And this is my central message: neuromythology seems halfway plausible only if it is predicated upon a desperately impoverished account of our many-layered, multi-agenda, infinitely complex but wonderfully structured and organised selves. We could summarise the problem very simply as follows. If we try to address the problem of unity of consciousness by adopting a holistic account of the brain, we encounter insuperable difficulties in accounting for the way in which so many different things, which have to be kept apart, are kept apart and do not collapse into mind-mush or delirium. If we try to address the problem of the multiplicity of distinct elements of our conscious lives by adopting a localisationistic account of the brain, we encounter equally insuperable difficulties in accounting for the way in which everything comes together sufficiently for us to live active, coherent lives.
The question of unity and control amid diversity (and a continual rain of the half-expected unexpected) picks out a deeper problem: that of accounting for the fact that there is such a thing as the first person (the me, here, now) to which all this variety is ultimately referred. Without such a unifying element (what Kant called the unity of apperception, and rather unfortunately described as 'the I think that accompanies all my perceptions') the brain would simply be a colloidal suspension of unhaunted modules, which is how the cognitive scientist seems to present it.
The notion of the first person (the 'I-ness' of consciousness) not only highlights the unity of consciousness necessary for one to act as a responsible agent in a complex world. It opens onto a deeper issue: the origin of the sense of me, here, now; of the suffering agent, the responsible creature who is a viewpoint on the world. And yet this sense that 'I am this thing' is required if my body is to enjoy ownership and I am to have the feeling that I am here now, that I am, to use Heidegger's phrase, a being whose being is an issue for itself. The fact that things matter to one's brain has no basis within the neuroscientific account of the brain/mind. Mattering has no place in the materialist world picture of the identity theorist. In short, there is no basis in the brain either for the unity of consciousness or for the connection between this unified consciousness and the fundamental intuition of self.
The language of neuromythology8
And so we are forced to a conclusion opposite to the one drawn earlier: that consciousness cannot be due to activity in the brain and that cerebral activity is an inadequate explanation of mental activity. Hippocrates' claim that 'from the brain and from the brain only' arise all our experiences cannot be true. And yet this is a message that has certainly not got through to many neuroscientists and their neurophilosophical fellow travellers. How can this be? The reason is this: they increasingly speak to each other and think to themselves in a language that conceals from them the barrenness of their explanations, what I have called the language of neuromythology. This re-describes the mind in mechanical terms and so enables it to remain within the materialist and even biological framework of neuroscience: the mind is construed as a collection of mechanisms. At the same time, it treats these mechanisms as though they were machines which, since the machines are artefacts, may then be spoken of as if they had purposes. Nowadays, the machines in question are usually computers. The mindly brain and the brainy mind merge in the concept of 'information processing', which both are supposed to indulge in.
By this means, the language of neuromythology sidelines the distinctive mystery of human consciousness and forecloses on the ways in which we may think about ourselves and our possibilities.
Separating neuroscience from neuromythology
Hippocrates, instead of asserting that 'men ought to know that from the brain, and from the brain only, arise our pleasures, joys, laughter and jests, as well as our sorrows, pains, grief and tears'3, ought to have asserted more modestly that a normally functioning brain is a necessary but not a sufficient condition of our experiencing anything at all. This more modest claim would have set us on a different track and not licensed the wild overstatements that neuroscientists and some of their neurophilosophical fellow travellers have made over the last few decades. This is not to say that this escape route is not fraught with difficulties, as I have discussed elsewhere8.
One exciting consequence of the failure of neuromythology is that it suggests that making complete sense of the relationship between the brain and consciousness will require a new theory of knowledge or indeed a new account of what kinds of things there are in the world: in short either an epistemological or an ontological rethink. As it is, the neural theory of consciousness, neuromythology, is not only inadequate in itself but depends upon a savagely impoverished account of our own nature as wholly mysterious human animals, at once part of nature and at the same time distant from it, if only to the extent of being able to articulate it.
Meanwhile, neuroscientists may be reassured that what they are doing is not worthless. While they are most certainly not discovering how the mind works, or how it is created in the brain, they are learning more about the conditions under which normal experience and volition are possible: the necessary but not the sufficient conditions. And for me, as a clinician concerned to nurture or encourage those necessary conditions in patients from whom they have been withdrawn, that is good enough. As for neuroscience, metaphysics it is not; worthwhile it certainly is. It just needs to know its limits, so that good science is not discredited by bad philosophy, and scientism does not cause scientists to be justly accused of 'single vision and Newton's sleep'9. | 2018-04-03T00:00:34.947Z | 2000-11-01T00:00:00.000 | {
"year": 2000,
"sha1": "288ec3aa7627e0e23f43d21b07694d22f469904f",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "a9976ef4d9ac009934569beab6b93f6ee7ac4a06",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15945597 | pes2o/s2orc | v3-fos-license | A Comparative Study of Contemporary Color Tongue Image Extraction Methods Based on HSI
Tongue images with coating are of important clinical diagnostic meaning, but traditional tongue image extraction methods are not competent for extraction of tongue images with thick coating. In this paper, a novel method is suggested, which applies multiobjective greedy rules and fuses color and space information in order to extract tongue images accurately. A comparative study of several contemporary tongue image extraction methods is also made from the aspects of accuracy and efficiency. As the experimental results show, geodesic active contour is quite slow and not accurate; the other 3 methods achieve fairly good segmentation results except in the case of the tongue with thick coating; our method achieves ideal segmentation results regardless of the type of tongue image; and the efficiency of our method is acceptable for the application of quantitative checks of tongue images.
Introduction
Tongue diagnosis is one of the important contents of "Four Diagnoses" in Traditional Chinese Medicine (TCM). The "Four Diagnoses" means observation, listening, interrogation, and pulse-taking. Traditional tongue diagnoses depend on observations of tongue features such as color, shape, moisture, and texture by TCM doctors. The results of tongue diagnoses are influenced not only by the experience of TCM doctors but also by the surrounding environments. Therefore, nowadays many researchers use digital camera to take tongue photos and utilize computer to make quantitative checks and analyses of tongue images, that is, objectification of tongue diagnoses. To check and analyze tongue images quantitatively, we first need to segment tongue body region out of the background which is so called tongue image extraction. And automatic and accurate extraction of tongue image is an important sign of intelligent analysis of tongue images.
In recent years, various kinds of image segmentation methods have been applied to the application of tongue image extraction. Among these methods, the representative ones are active contour method, level sets method, region growing and merging, random walk method, and so forth.
The active contour method, also known as the Snakes model, was widely used in tongue image extraction applications. Shi et al. [1] utilized geodesic active contour model to make tongue image segmentation. The proposed approach could enhance the accuracy and practicability obviously, compared with other work. When the surface of the tongue image was not regular, this might lead to a failure to extract tongue body region out of the background successfully. Shi et al. [2] presented a fully automated active contour initialization method that utilized prior knowledge of the tongue shape and its location in tongue images. This method increased the curve velocity but decreased the complexity. The only inconvenience of this method is that 4 points needed to be specified in order to build the initial contour. Liang and Shi [3] proposed a new tongue segmentation approach based on the combination of the feature of tongue shape and the Snakes correction model. In this method a rough tongue contour was obtained using the features of tongue image in HSI color model. The experimental results showed this method was efficient in the case of tongue images given by the authors, but the number of the experimental samples was quite limited. Zhai et al. [4] transformed tongue image into HSI color model and dual Snake algorithm was used to obtain the accurate contour of the tongue body. Through testing, this method had proved to be satisfactory for the specific tongue image segmentation. But the initial inside contour and initial outside contour when implementing the dual Snake algorithm were difficult to obtain, so this method is quite theoretical from a certain point of view. Ning et al. [5] presented an automatic tongue segmentation method which used a region merging method to make segmentation and utilized Snakes algorithm to refine the region merging result. The proposed method greatly enhanced the segmentation performance, but the accuracy of this method was not high when processing some tongue image samples given by the authors. Li [6] suggested a kind of tongue image extraction method using improved Snakes model. Through the minimum calculation of Snakes model, estimated contour line was further processed which could improve the accuracy of tongue image extraction, but the efficiency of the proposed method is not mentioned in the article. Wang et al. [7] proposed an improved tongue image extraction approach based on Snakes model, in which the tongue image was described in two other color spaces and a two-step Snakes implementation was used. The accuracy and reliability of this method were improved, but the efficiency of this method might be quite low due to the processing with high complexity using this method. Fu et al. [8] used radial edge detection to get rough contour of the tongue image, utilized pair-color-remove to remove the lip, and applied Snakes method to get the exact contour of the tongue. The accuracy of this method was proved in the experiment, but the efficiency of this method was not mentioned in the article.
Level sets and random walk methods also brought enough attention in the application of tongue image extraction by related researchers. Zhu and Du [9] introduced a kind of color tongue image fast segmentation method based on level sets, in which the boundary feature weight function was improved and a kind of variable time step method was introduced. Both accuracy and efficiency were improved greatly, compared with the traditional level sets method. But when the surface of tongue body region was not regular, the segmentation effect of this method might not be very ideal. Li et al. [10] proposed a novel method for tongue contour extraction based on improved level set curve evolution, in which an automatic initialization of contour was presented and both the color information and tongue contour shape were used to segment tongue images. Applying this method to the large database of tongue images, promising experimental results were achieved. But the efficiency of this method was not mentioned in the article. Zhu and Du [11] suggested a kind of improved random walk algorithm and applied it to color tongue image segmentation, in which toboggan algorithm was adopted to segment original tongue image into initial regions; a newly designed weighted-graph was built and random walk algorithm was applied to make final segmentation. Both the accuracy and efficiency of this method were greatly improved, compared with traditional random walk method. But for the tongue image with irregular coating on its surface it might lead to a failure to segment tongue body region successfully.
Other kinds of methods enriched the application of tongue image extraction. Zhu et al. [12] suggested a kind of color tongue image extraction method which utilized greedy rules with fusion of color and space information in order to extract tongue body region from background accurately. The accuracy of this method was quite high. In particular, for those tongue images with coating, this method could achieve relatively good segmentation effects. And the efficiency of this method was acceptable and practical for the applications of tongue diagnoses. Xu et al. [13] proposed a fully automatic tongue detection and tongue segmentation framework. Compared with other existing methods, this framework was fully automatic without any need of adjusting parameters for different images and did not need any initialization. But there was only one sample in the experiment, so it was not enough to prove the accuracy of this method for various kinds of tongue images. Zhong et al. [14] suggested a novel method to segment the tongue image automatically with the mouth location method and active appearance model. Due to the different positions of tongues in the tongue images, this method needed to use different initial contours to segment tongue body region, which brought some inconvenience to the applications of tongue diagnoses. Yang et al. [15] proposed an image segmentation algorithm based on the shortest path. The theoretical basis was quite detailed, but the accuracy of this method was not completely proved according to the experimental results and the efficiency of this method was not mentioned in the article. Li et al. [16] studied a new theory of fuzzy rough sets and presented a method for segmenting tongue images, which extracted condensation points by the theory of fuzzy rough sets, quartered the data space layer by layer, and softened the edge of the dense block by drawing condensation points in the borders. The application result indicated that the algorithm could avoid segmenting image excessively and speed up segmentation velocity by fuzzy grid dividing. Nevertheless, the number of the sample in the experiment is only one, so that it is not enough to prove the wide practicability of this method in the applications of tongue diagnoses. Zhong et al. [17] suggested a kind of new method for segmenting the toothmarked tongue images, which converted RGB color space into HSI color space and used Otsu threshold value to complete segmentation of tooth-marked tongue images. It was mentioned in the article that the speed of the method was quick and the accuracy of the method was high, but no enough evidences were provided in the article. Chen et al. [18] combined one of the graph theory image segmentation methods and multiresolution image segmentation together to segment the image on two different resolutions. The proposed method was novel, but the accuracy was not high which was only 87.3% and the efficiency test was not provided by the authors. Zhang and Qin [19] designed a new method for tongue image segmentation, which combined gray projection and threshold-adaptive method to segment tongue images. The experimental results of this method were fairly good, but the comparisons with other methods in accuracy and efficiency were lacking. Li and Wei [20] proposed an adaptive segmentation algorithm to segment tongue images efficiently, which divided tongue image into several parts, used an iterative approach to calculate each subblock threshold, and used each local threshold to segment tongue images.
The experimental results showed that the algorithm could segment well the tongue images whose background and boundaries were not clear. But only 2 samples, a tongue image with withered coating and a tongue image with white coating, were processed, and the effectiveness of this method was not proved by the limited number of samples. Zhao et al. [21] utilized mathematical morphology to describe shape features of images, which was combined with HSI color model to segment tongue images. The effect of this method for the tongue without coating was fairly good, but if there was thick coating on the surface of tongue body region, it might lead to a failure to segment tongue body region successfully. Du et al. [22] suggested a kind of color tongue image segmentation algorithm based on HSI model in which original images were converted into HSI color space, tongue images were segmented by threshold values of hue and intensity, and a sequential algorithm was used to mark the connected regions. The experimental results of this method were quite good for those tongue images the surfaces of which were regular, but when the surfaces of tongue body regions were not regular, it might fail to segment tongue body region successfully.
In the methods [3,4,9,11,12,17,21,22] mentioned above, there is a common ground: they all use the HSI color model to describe the features of tongue images. The HSI color model is closer to human vision and, owing to the specificity of the tongue image extraction task, methods based on the HSI color model can achieve better segmentation results than others. In addition, tongue coating is of important meaning in TCM clinical tongue diagnoses, and extracting a tongue image with coating is of a certain difficulty. Therefore, we designed and implemented a kind of tongue image extraction method which utilizes multiobjective greedy rules and fuses color and space information to extract the tongue body region in the HSI color model. Owing to the fusion of color and space information, this method can extract tongue images with coating accurately, which other methods cannot achieve. In what follows, we discuss the HSI color model and the principle of tongue image extraction in the HSI color model, and compare the typical method [21], method [22], and the Snakes method with our recent tongue image extraction method suggested in a new China invention patent.
HSI Color Model.
A static image is commonly expressed in a 2-dimensional pixel matrix. Each pixel is composed of 3 colors, that is, red, green, and blue. So RGB color model is feasible and suitable to express and store static images. RGB color model can be denoted in Figure 1.
The HSI color model uses 3 elements, hue, saturation, and intensity, to describe the features of images, which is closer to the perception principle of human vision. Herein, hue is the color type of a pixel, saturation is the degree to which a certain color is mixed into other colors, and intensity is the brightness of a pixel. The HSI color model can be denoted as in Figure 2. The formulae for hue, saturation, and intensity are given in formulas (1), (2), and (3), respectively.
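Formulas (1)-(3) are not reproduced in this text. The sketch below implements the standard RGB-to-HSI conversion commonly used in image-processing references; it is an assumption that this is the same formulation intended by formulas (1)-(3), and the code is illustrative rather than the authors' implementation.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image (float array in [0, 1], shape HxWx3) to HSI.

    Standard conversion (assumed):
      I = (R + G + B) / 3
      S = 1 - 3 * min(R, G, B) / (R + G + B)
      H = theta if B <= G, else 360 - theta, with
      theta = arccos(((R-G) + (R-B)) / (2 * sqrt((R-G)^2 + (R-B)(G-B))))
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-8  # guard against division by zero

    intensity = (r + g + b) / 3.0
    saturation = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + eps)

    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    hue = np.where(b <= g, theta, 360.0 - theta)  # hue in degrees, [0, 360)

    return hue, saturation, intensity
```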
Principle of Tongue Image Extraction Based on HSI Color Model
In many traditional segmentation methods, intensity is the only feature used to decide whether a pixel belongs to the object or the background. Even if this is feasible in many segmentation applications, as far as tongue image extraction is concerned, using intensity information alone is not enough to segment the tongue body region out of the background. As we can see from Figure 3, the intensity of the tongue body region and that of the face region are identical. And in Figure 4, which is the grayscale histogram of the tongue, there is only one peak, which represents the face and tongue body regions. Therefore, we cannot distinguish the tongue body region from the face region by intensity information alone.
The main hue of tongue body region is red and entire tongue body region is connected. The hue of surrounding region (such as face, teeth) is different from that of tongue body region, except the mouth lip region which is connected to tongue body region. Nevertheless, the intensity of the boundary between tongue body region and mouth lip region is relatively low compared to that of tongue body region and mouth lip region. Therefore, it is possible to separate tongue body region out of the surrounding regions by hue and intensity information.
Traditional threshold segmentation is based on analyses of histogram and the main feature of tongue body region is its red hue. In hue image of tongue, there are not only pixels with high values but also pixels with low values on the tongue body region as Figure 5 shows. In hue histogram of tongue image, the red hue lies on the start and the end of it, as we can see in Figure 6. The hue distribution of the histogram conforms to the hue image of tongue. In hue histogram of tongue, it seems that there are 3 peaks in it. The range of hue histogram is from 0 degree to 360 degree; that is, 360 degree is its cycle. In fact, the start and the end of hue histogram are adjacent. In order to make 0 degree and 360 degree adjacent in hue histogram, we need to make some transformations to the histogram. The concrete method is to move the part from 180 degree to 360 degree to the left of the histogram and move the part from 0 to 179 to the right of the histogram. In this way, 0 degree and 360 degree in the histogram are adjacent in the hue histogram. The hue image of tongue after transformation is shown in Figure 7 and the corresponding hue histogram is shown in Figure 8.
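A minimal sketch of the 180-degree circular shift of the hue axis described above, assuming the hue image is expressed in degrees as in the conversion sketch earlier; after the shift, the red range around 0/360 degrees forms a single contiguous region near 180 degrees, so the histogram shows one red peak instead of two split lobes.

```python
import numpy as np

def shift_hue(hue_deg):
    # Circular shift by 180 degrees: hues in [180, 360) move to [0, 180)
    # and hues in [0, 180) move to [180, 360), making 0 and 360 adjacent.
    return (hue_deg + 180.0) % 360.0

def hue_histogram(hue_deg, bins=360):
    # Histogram of the shifted hue image with 1-degree bins; the two red
    # lobes of the original histogram should now appear as a single peak.
    hist, edges = np.histogram(shift_hue(hue_deg), bins=bins, range=(0.0, 360.0))
    return hist, edges
```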
From Figure 8, we can see 2 obvious peaks in the hue histogram. According to Otsu's theory, it is feasible to segment the object region from the background by the valley threshold. And as we mention above, even if the tongue body region and mouth lip are connected, the intensity of the boundary between them is relatively low. Therefore, using hue and intensity information we can separate the tongue body region out of the background successfully. Now, we introduce 2 typical kinds of tongue image extraction methods based on the HSI color model.
Method Based on Mathematical Morphology and HSI Color Model
Zhao et al. [21] suggested a kind of color tongue image segmentation method based on mathematical morphology and HSI. The principle and procedure are given as follows (a rough code sketch follows the steps).
Step 1. Convert original tongue image from RGB color space into HSI color space.
Step 2. Use hue information to make binarization of the tongue image.
Step 3. Use clustering algorithm to make object region clustering.
Step 4. Use mathematical morphology such as opening and closing to remove small holes on tongue body region.
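A rough Python/OpenCV sketch of these four steps is given below. The hue tolerance, the structuring-element size, and the use of the largest connected component to stand in for the unspecified clustering step are illustrative assumptions, not the settings used in [21]; it also assumes the rgb_to_hsi() function from the earlier HSI conversion sketch is in scope.

```python
import cv2
import numpy as np

def segment_tongue_morphology(bgr, hue_threshold=30.0, kernel_size=7):
    # Assumes rgb_to_hsi() from the earlier sketch; thresholds are illustrative.
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    hue, _, _ = rgb_to_hsi(rgb)                       # Step 1: RGB -> HSI

    shifted = (hue + 180.0) % 360.0                   # Step 2: binarize on the shifted hue
    mask = (np.abs(shifted - 180.0) <= hue_threshold).astype(np.uint8) * 255

    # Step 3: keep the largest connected component as the clustered object region
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    if n > 1:
        largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
        mask = np.where(labels == largest, 255, 0).astype(np.uint8)

    # Step 4: opening and closing to remove small holes and spurs
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask
```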
This method is the most typical and earliest color tongue image extraction method based on HSI. This method is efficient, but it can only handle the tongue image without coating. Later, we will discuss the experimental effect of this method.
Method Based on Sequential Algorithm and HSI Color Model
Du et al. [22] introduced a kind of color tongue image segmentation method based on a sequential algorithm and the HSI color model. This method is an improvement of the method mentioned in [21]. Its principle and procedure are given as follows (a rough code sketch follows the steps).
Step 1. Convert original tongue image F0 from RGB color space into HSI color space.
Step 2. Segment tongue image using hue and intensity information and image F1 with mouth and tongue body regions is obtained.
Step 3. Sequential algorithm is used to segment tongue image F1 and the tongue body region in image A1 is obtained.
Step 4. Mathematical morphology closing operation is applied for A1 to fill small holes on the tongue body region and image A2 is obtained.
Step 5. Image A2 is multiplied by original image F0 to gain the target tongue image successfully.
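A rough sketch of these five steps, again in Python/OpenCV. The hue tolerance and intensity floor are illustrative assumptions, connected-component labeling is used to stand in for the "sequential algorithm" of [22], and rgb_to_hsi() is assumed to be the function from the earlier conversion sketch.

```python
import cv2
import numpy as np

def segment_tongue_sequential(bgr, hue_tol=30.0, min_intensity=0.15):
    # Assumes rgb_to_hsi() from the earlier sketch; thresholds are illustrative.
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    hue, _, intensity = rgb_to_hsi(rgb)                          # Step 1: F0 -> HSI

    shifted = (hue + 180.0) % 360.0                              # Step 2: hue + intensity threshold -> F1
    f1 = ((np.abs(shifted - 180.0) <= hue_tol) &
          (intensity >= min_intensity)).astype(np.uint8) * 255

    # Step 3: label connected regions and keep the largest as the tongue body (A1)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(f1, connectivity=8)
    a1 = np.zeros_like(f1)
    if n > 1:
        largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
        a1[labels == largest] = 255

    # Step 4: closing to fill small holes on the tongue body (A2)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    a2 = cv2.morphologyEx(a1, cv2.MORPH_CLOSE, kernel)

    # Step 5: mask the original image with A2 to obtain the target tongue image
    return cv2.bitwise_and(bgr, bgr, mask=a2)
```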
Compared with the method [21], this method can get more accurate segmentation results. But when it comes to a tongue image of very large size, this method may be a little slow. Like the method mentioned in [21], this method cannot handle the tongue image with thick coating well. Later, the practical effect of this method will be illustrated in detail.
Method with Fusion of Color and Space Information by Applying Multiobjective Greedy Rules
Zhu et al. [12] introduced a novel approach for color tongue image extraction with fusion of color and space information, which was suggested in a recently authorized China invention patent. This method uses multiobjective greedy rules and fuses color and space information to extract the tongue image. The HSI color model is used to describe the color features, in which both hue and intensity are utilized. Due to the introduction of space information, this method shows a great advantage over other methods in that it can extract tongue images with coating accurately. As mentioned before, a tongue image with coating is of important clinical diagnostic meaning; for example, white tongue coating indicates exterior syndrome and cold syndrome, and yellow tongue coating indicates heat syndrome and interior syndrome.
There are 4 multiobjective greedy rules in this algorithm, which are denoted as follows.
Rule 1. The pixels in the target region are tongue substance.
Rule 2. If the pixels in the target region are not tongue substance, they must be tongue coating, which is surrounded by pixels of tongue substance.
Rule 3. Each included pixel in the target region must be tongue substance or tongue coating; otherwise, it is abandoned.
Rule 4. The target region is the largest connected region.
From the 4 rules mentioned above, we can see that the goal of this algorithm is to find the largest region most consistent with the features of tongue substance and tongue coating.
By applying the 4 rules mentioned above, the procedure of this algorithm can be denoted as follows.
Step 1. Set the initial target region T to null.
Step 2. If the intensity of the start pixel S is high and its hue is close to that of tongue substance, S is included in the target region T.
Step 3. Get the adjacent pixel set A, which contains the pixels adjacent to S.
Step 4. Start a loop from here.
Step 5. Set the tag MeetConstraint to false.
Step 6. If A is not null and the tag MeetConstraint is false, start a new loop from here.
Step 7. Search and find the pixel P in A which is most similar to S.
Step 8. Herein, there are 2 conditions. One is that the hue of P is similar to that of S, or that the hue of P is similar to tongue coating and P is surrounded by tongue-substance pixels. The other is that the intensity of P is high. If the 2 conditions mentioned above are met, execute the following 5 steps.
Step 9. Set the tag MeetConstraint to true.
Step 10. Add pixel P to the target region T.
Step 11. Remove pixel P from the pixel set A.
Step 12. Add those pixels adjacent to P but not in T into A.
Step 13. Exit the loop which starts from Step 6.
Step 14. If the conditions mentioned in Step 8 are not met, remove pixel P from the pixel set A.
Step 15. Loop from Step 4 to Step 14 until the tag MeetConstraint is equal to false.
In the algorithm described above, color and space information are fully utilized. Therefore, it leads to a better segmentation result, even when there is a thick coating of a different color on the tongue body region.
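A simplified sketch of this growth on numpy hue and intensity arrays follows; the priority queue realizes "most similar pixel first", and for brevity the coating condition (a coating-like pixel surrounded by substance pixels) is folded into a single hue tolerance. All names and thresholds are illustrative, not the patented implementation:

```python
import heapq
import numpy as np

def greedy_tongue_grow(hue, intensity, seed, hue_tol=15.0, int_min=80):
    """Grow the target region T from the start pixel S, always admitting the
    adjacent pixel most similar in hue; growth stops when no adjacent pixel
    meets the hue and intensity constraints (Rules 1-4, simplified)."""
    rows, cols = hue.shape
    target = np.zeros((rows, cols), dtype=bool)        # Step 1: T = null
    seed_hue = float(hue[seed])
    frontier = [(0.0, seed)]                           # adjacent set A as a heap
    while frontier:                                    # Steps 4-15
        _, (y, x) = heapq.heappop(frontier)            # Step 7: most similar P
        if target[y, x]:
            continue
        hue_ok = abs(float(hue[y, x]) - seed_hue) <= hue_tol   # Step 8, cond. 1
        if hue_ok and intensity[y, x] >= int_min:              # Step 8, cond. 2
            target[y, x] = True                                # Step 10
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # Step 12
                ny, nx = y + dy, x + dx
                if 0 <= ny < rows and 0 <= nx < cols and not target[ny, nx]:
                    d = abs(float(hue[ny, nx]) - seed_hue)
                    heapq.heappush(frontier, (d, (ny, nx)))
        # Step 14: a failing pixel is simply discarded from A.
    return target
```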
Accuracy Comparisons.
In order to assess the accuracy of our method with fusion of color and space information, we implemented 4 typical color tongue image extraction methods and compared their results against manual segmentations. These 4 methods are the geodesic active contour mentioned in [1], the method based on mathematical morphology and the HSI color model in [21], the method based on the sequential algorithm and the HSI color model in [22], and our method with fusion of color and space information suggested in a recent China invention patent. The first method, the geodesic active contour, is implemented in Matlab, and the other 3 methods are implemented in VC++. Seven typical kinds of tongue images were taken into account: light red tongue, light white tongue, red tongue, deep red tongue, purple tongue, tongue with thick white coating, and tongue with thick yellow coating. All the experimental samples were captured manually with digital cameras under natural lighting. To reduce processing time, all the original tongue images were downscaled to a fixed size. Because the colors of the former 5 kinds of tongue image are the 5 typical tongue colors in clinical tongue diagnosis and the last 2 kinds, with thick coating, have important and obvious clinical meaning, the comparison experiment is quite convincing. The segmentation results of these tongue image extraction methods are shown in Figure 9.
As we can see from Figure 9, the results of the geodesic active contour are not ideal: the contour curves do not fit the boundary of the tongue body well. Because the geodesic active contour takes intensity as its main feature, its segmentation results are greatly affected by the texture of the tongue body surface. For the rest of Figure 9, the remaining 3 tongue image extraction methods achieve fairly good segmentation results, except for the last two tongue images with thick coating. As Figures 9(hh) and 9(nn) show, the segmentation results of the method in [21] are quite wrong; this is because the hue of the tongue body surface is not homogeneous. As Figure 9(oo) shows, the segmentation result of the method in [22] contains not only the tongue body region but also the mouth lip region, which is clearly wrong; this is because the hue and intensity of the tongue body region and the mouth lip region are quite similar. The last column of Figure 9 shows the segmentation results of our method: all the tongue image segmentations achieve quite good results, owing to the fusion of color and space information in our method.
To evaluate the results of the 4 tongue image extraction methods objectively and quantitatively, we introduce 2 measurement values: the recognition rate and the error rate. They can be denoted as follows:

recognition rate = TP / (TP + FN) × 100%,
error rate = FP / (TP + FN) × 100%.

Herein, TP is the number of pixels correctly recognized as tongue pixels, FN is the number of pixels which are tongue pixels but incorrectly recognized as background pixels, and FP is the number of pixels which are background pixels but incorrectly recognized as tongue pixels. The recognition rates of the 4 tongue image extraction methods are shown in Table 1, and the error rates are shown in Table 2.
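A small helper computing both rates from binary masks, using the denominators written above (if the paper's original error-rate denominator differed, only the last line changes):

```python
import numpy as np

def segmentation_rates(pred_mask, truth_mask):
    """Recognition rate and error rate (percent) from pixel counts."""
    pred, truth = pred_mask.astype(bool), truth_mask.astype(bool)
    tp = np.count_nonzero(pred & truth)    # tongue pixels found correctly
    fn = np.count_nonzero(~pred & truth)   # tongue pixels missed
    fp = np.count_nonzero(pred & ~truth)   # background taken for tongue
    recognition = 100.0 * tp / (tp + fn)
    error = 100.0 * fp / (tp + fn)
    return recognition, error
```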
As we can see from Tables 1 and 2, the recognition rates of the geodesic active contour method for the light red tongue, light white tongue, and tongue with thick white coating are above 80%, but for the deep red tongue, purple tongue, and tongue with thick yellow coating they are lower than 70%. The error rate of the first method for the light red tongue is up to 7.17%, while its error rates for the other tongue images are quite low. The recognition rates of the method based on mathematical morphology and HSI are quite high for tongue images without coating, but quite low for tongue images with thick coating; for the tongue with thick yellow coating, the recognition rate of this second method [21] is even lower than 6%. As Table 2 shows, the error rates of the second method are lower than 2%. The recognition rates of the method based on the sequential algorithm and HSI are above 90% for most types of tongue images, except that the recognition rate for the tongue with thick yellow coating is lower than 75%; its error rates are quite low for most types, except that the error rate for the tongue with thick yellow coating is up to 37.05%. The recognition rates of our method are higher than 90% for most types of tongue images, except that for the tongue with thick yellow coating it is somewhat lower, at 83.89%; even so, our method is the most accurate of the 4 for extracting tongue images with thick yellow coating. The error rates of our method are no more than 1.2% for most types of tongues, which is quite low. Generally speaking, as Tables 1 and 2 show, the second method (mathematical morphology and HSI) performs better than the geodesic active contour method, the third method (sequential algorithm and HSI) performs better than the second, and our method performs better than the former 3 methods in most cases.
Efficiency Comparisons.
To show the efficiency of these 4 tongue image extraction methods, we compare the execution time of each. The efficiency comparisons are shown in Table 3. As we can see from Table 3, the time cost of the geodesic active contour is quite long, ranging from 340 to 548 seconds, whereas the time costs of the other 3 methods are much lower. The methods in [21, 22] take less than 1 second to complete the whole segmentation task, and our method takes on the order of tens of seconds, which is acceptable for tongue image quantitative inspection applications.
Conclusions
In this paper, we described in detail 3 kinds of contemporary color tongue image extraction methods based on the HSI color model. The HSI color model is closer to the perception of human vision and, above all, using it as the color feature can achieve better segmentation results. Because tongue images with coating have important clinical diagnostic meaning, and traditional tongue image extraction methods cannot handle this kind of tongue image well, we suggest a tongue image extraction method with fusion of color and space information, which handles tongue images with coating quite well. In the experiments, we compared 4 tongue image extraction methods: the geodesic active contour, the method based on mathematical morphology and HSI, the method based on the sequential algorithm and HSI, and our method with fusion of color and space information. As the experimental results show, the geodesic active contour is not very effective; in most cases, the other 3 methods achieve fairly good results, but for tongues with thick coating only our method achieves ideal results. In the efficiency comparisons, the geodesic active contour is quite slow, the methods in [21, 22] are very fast, and the efficiency of our method is acceptable and practical.
"year": 2014,
"sha1": "c2c71cb8cadbaff9ba9151903f703d6c40cb1031",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/ijbi/2014/534507.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7a52b41160eaee49c2f78bbe1fa67b0d2025cf72",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
The Multiparameter Measurement Technique in a Large-Aperture Rectangular Laser Beam Aberration Correction System
Adaptive optics (AO) can effectively improve the beam quality of solid-state slab lasers. However, the aperture of the output beam increases as the output power of the laser increases, resulting in a larger measurement system; ultimately, a more complex AO system needs to be designed. To meet the requirements of conjugate imaging in an AO system, it is of research significance to coordinate and optimize the system's structural dimensional parameters while enabling the detection of multiple parameters, such as the wave-front and beam quality. In this paper, a multiparameter measurement technique in a large-aperture rectangular laser beam aberration-correction system is proposed. The system layout is optimized, with total dimensions of 400 mm × 150 mm × 246 mm (L × W × H). The AO system conforms to the requirements for conjugate detection and can perform wave-front detection, far-field evaluation, and near-field detection of a 160 mm × 120 mm rectangular beam produced by a solid-state slab laser. The findings reveal that the measured wave-front PV values of the measurement system are no more than 0.288 μm, the RMS values are no more than 0.079 μm, the average far-field beam quality factor is 1.248 times the diffraction limit, and the average near-field beam uniformity is 0.533 at a temperature of 20 °C ± 10 °C; these results satisfy the technical parameters.
the power and improving the beam quality. Solid-state slab lasers have continually demonstrated their superiority with the rapid development of the gain medium [3], [4], cooling technology [5], [6], and resonant cavity optimization [7]. The output power has increased from the kilowatt scale to the megawatt scale; however, the beam quality declines as power increases, owing to thermal effects and other factors, resulting in a limited brightness increase. Benefiting from increasingly mature adaptive optics (AO) technology, solid-state slab lasers have the potential to combine high power and high beam quality, and corresponding research results have been achieved. For example, in 2014, Yang et al. from the Institute of Optics and Electronics, Chinese Academy of Sciences, enhanced the beam quality of a solid-state slab laser with an output power of 1.3 kW from 13.1 to 2.3 times the diffraction limit using an AO system without wave-front detection [8]. In 2017, the Institute of Optics and Electronics at the Chinese Academy of Sciences reduced the output beam's wave-front PV value from 57.26 to 1.87 μm with a constrained low-order aberration autocorrection approach [9]. In 2018, Yang et al. determined the beam quality factor for a 750-MW solid-state slab laser to be 1.64 times the diffraction limit using an AO system combined with the related low-order aberration-correction approach [10]. In 2021, Wang et al. employed a completely closed-loop AO-controlled off-axis multirange amplification system to strengthen the beam quality of an 1178-J, 527-nm laser to almost the diffraction limit [11].
The gain medium size grows as the power of the solid-state slab laser increases, leading to a steady increase in the output beam aperture and increased difficulty in designing the AO system. The difficulty of designing the detection unit in an AO system stems from the multiparameter detection and evaluation of the rectangular large-aperture laser beam, which entails detecting wave-front aberrations and assessing beam quality and uniformity while meeting the constraints of conjugate imaging, small volume, and environmental temperature change. Furthermore, the high energy of the laser being measured affects the environmental temperature, thereby introducing additional thermally induced aberration [12] into the detection unit.
Essential research has been conducted to address the abovementioned issues. To meet the requirement for large-aperture wave-front detection, Wang et al. [11] established a Keplerian system using two lenses to compress a 260 mm × 260 mm square-aperture laser beam. Nonetheless, the total focal length of this Keplerian system reached 2.05 m; despite the employment of a reflector to fold the optical path, the detection unit was still huge and could only measure a single parameter. Zhang [13] achieved a single-parameter measurement of the far-field beam quality of a high-power fiber laser using large measurement equipment; the beam quality factor was measured to be 1.41 at 90.1 W and 1.81 at 3.04 kW. In Li's study on high-power laser systems with high-quality near-field beams, a 10× telescope system was adopted as the detection system to compress the beam to 6 mm to fit the target surface of the near-field camera. Nevertheless, the telescope had a tube length of up to 1.3 m and measured only a single parameter [14]. Xiang et al. [15] adopted an afocal Keplerian telescope system with a compression magnification of 11× to perform wave-front detection on a 150 mm × 150 mm square beam aperture, and the tube length reached 2.42 m, which was also excessive. Sylvain et al. [16] compressed a beam with an aperture of approximately 75 mm using a Keplerian telescope system to fit the effective target surface of the wave-front sensor in a wave-front correction study of ultra-high-intensity beams for a 200-TW laser system. However, the tube length was too long, nearly 1.04 m.
Existing technical methods can detect individual laser beam parameters, such as wave-front distortion, the near field (uniformity), and the far field (beam quality). However, it is difficult to achieve miniaturization and simultaneous detection of multiple parameters, and the effect of temperature on the measurement system has not been considered. To address these problems, first, a scheme using large-magnification beam compression followed by split-path detection was selected based on the technical requirements (a large detection aperture, conjugate imaging, and multiparameter detection) and the imaging principles of the Keplerian system. Second, simulation models of the common-aperture telescope, far-field detection subunit, and near-field detection subunit were built from the technical indexes of the measurement system to analyze imaging quality and tolerances; the simulation results lay the foundation for the experimental platform. Third, the mechanical system was designed around the optical system, and its environmental adaptability was analyzed. Finally, relevant experiments were performed to confirm the design outcomes.

Fig. 1 illustrates the schematic diagram of the measurement system, which consists of a common-aperture telescope, wave-front detection subunit, far-field detection subunit, and near-field detection subunit. The common-aperture telescope primarily transforms the detection problem of a large aperture into that of a small aperture. It then cooperates with the wave-front detection subunit, far-field detection subunit, and near-field detection subunit to detect the wave-front distortion, far field, and near field of the corrected laser beam, respectively.
B. Technical Indicators and Analysis
Tables I-III list the optical indexes of the multiparameter measurement system, whereas Table IV shows the mechanical indexes.
With the development of beam quality and load-capacity requirements for high-power laser devices, the transmitted wave-front peak-to-valley (PV) value must be less than or equal to λw/3, where λw denotes the wavelength of the laser [17]. To ensure a correct result, the wave-front detected by the wave-front detection subunit and that corrected by the deformable mirror should have the same physical value, which can be achieved by conjugating the detection surface of the wave-front detection subunit with the deformable mirror. The deformable mirror can be regarded as lying at the diaphragm position, and it is imaged at the exit pupil of the telescope system through the common-aperture telescope; the wave-front detection subunit is therefore positioned at the exit pupil to ensure the conjugation relation. Accordingly, the entrance pupil position of the common-aperture telescope is 0.5 m. Because the input energy of the laser is very high, several beam splitters are required to divert the majority of the energy, leaving only a small amount for weak-light detection. To ensure that the compressed beam enters the subsequent system smoothly, the optical path uses beam splitting and folding. Owing to the limitations of the installation position of the wave-front detection subunit, an exit pupil distance of 40 mm or greater is needed to allow optical path folding and installation.
The more pixels a spot occupies, the more accurate the centroid calculation will be. According to John E. Greivenkamp [18], the minimum criterion for centroid-calculation accuracy is that the diameter of the Airy disk cover at least 8 pixels. The far-field detection unit is made up of the far-field detection subunit and the common-aperture telescope, which have a combined focal length of fz = 5500 mm. The theoretical Airy spot radius is 1.22·λ·fz/D = 35.7 μm, where λ is the main wavelength, fz is the combined focal length, and D is the equivalent circular aperture. Because the pixel size of the selected camera is 6.9 μm, the theoretical radius of the image spot is 35.70/6.9 = 5.17 pixels, and the diameter of the diffracted spot is about 10 pixels. This meets the technical index requiring that the image spot span more than 8 pixels, indicating that the combined focal length of the common-aperture telescope and the far-field detection subunit is reasonably designed.
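A quick numeric check of this sampling argument; the wavelength is our assumption, back-computed so that the quoted 35.7 μm radius is reproduced:

```python
wavelength = 1.064e-6   # m, assumed laser wavelength (consistent with 35.7 um)
f_z = 5.5               # m, combined focal length
D = 0.2                 # m, equivalent circular aperture
pixel = 6.9e-6          # m, camera pixel pitch

airy_radius = 1.22 * wavelength * f_z / D
print(airy_radius * 1e6)        # -> ~35.7 (um)
print(2 * airy_radius / pixel)  # -> ~10.3 pixels across the Airy disk
```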
Two factors are primarily considered in near-field detection: the coverage of the camera, and the ability to guarantee the integrity of the light spot for an off-axis field of view. If the coverage area is too large, the off-axis field of view cannot be assured; if it is too narrow, the calculation accuracy will be insufficient. The near-field detection subunit and the common-aperture telescope together form a near-field detection unit with a combined compression ratio of 49.5× and a beam size of 3.23 mm × 2.42 mm. The pixel size of the detection camera is 6.9 μm, and the number of pixels covered is 3.23/0.0069 × 2.42/0.0069 = 468.1 × 350.7 pixels. The system exceeds the minimum coverage of 360 × 270 pixels, which indicates a good magnification distribution. Aberration control in the AO system is crucial. When designing the optical system, especially the common-aperture telescope, a double-Gaussian initial structure is employed to remove distortion, astigmatism, coma, etc., and the spherical aberration introduced by the large-aperture system is eliminated via aspheric technology. The residual aberration in the system can be further removed by calibration.
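The near-field coverage arithmetic can be checked the same way, with the values quoted above:

```python
beam_w, beam_h = 160.0, 120.0   # mm, rectangular beam at the entrance
shrink = 49.5                   # combined compression ratio
pixel_mm = 0.0069               # mm, camera pixel pitch

w_px = beam_w / shrink / pixel_mm   # -> ~468.1 pixels
h_px = beam_h / shrink / pixel_mm   # -> ~350.7 pixels
print(w_px, h_px)                   # exceeds the required 360 x 270 pixels
```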
A. Wave-Front Detection Unit Design
The wave-front detection subunit is the wave-front detector of the measurement system and is situated at the exit pupil of the common-aperture telescope; hence, this section concentrates on the optical design and analysis of the common-aperture telescope. Because of the size constraint of the detector's target surface, high-magnification beam compression is required: the common-aperture telescope is in effect a system that compresses the beam aperture. The optical system for beam compression [16] is an afocal telescope system [19] with a 160 mm × 120 mm aperture and a 200-mm equivalent circular aperture. Although the detected beam has a rectangular aperture, we design the optical system based on the diameter of the circle circumscribing the rectangular aperture. Compared with a Galilean telescope, a Keplerian telescope can achieve image transfer with an object-image conjugate relationship, thereby allowing adaptive correction; the Keplerian telescope system is therefore selected for this application. In addition, a band-pass filter can be used to suppress stray light in the experiment, since the corrected laser is essentially monochromatic.
The focal lengths f1 and f2 of the objective and eyepiece groups were calculated as f1 = 444.994 mm and f2 = 40.454 mm using the design parameters of the common-aperture telescope, the visual magnification formula, the Gauss formula, and the turning surface formula of the Keplerian telescope system. In theory, the tube length of the Keplerian telescope system equals the sum of the focal lengths of the objective and eyepiece groups, 485.448 mm, which is larger than the 320 mm allowed. A telephoto structure can further decrease the tube length. The telephoto structure specifies that the tube-length coefficient k is the ratio of the tube length to the focal length, with 0 < k < 1. Combining the required two-piece lens with Gaussian optics, the telephoto structure's parameters are related as follows:

fα = a·f/(1 + a − k), fβ = a·(k − a)·f/(k − 1), d = a·f, l2 = (k − a)·f, W = k·f,

where fα and fβ are the focal lengths of the objective group and eyepiece group of the telephoto structure, respectively; f is the focal length of the combination; d is the separation of the two groups; a is an introduced coefficient, 0 < a < k; l2 is the flange back, expressed as the distance from the center of the rear surface of the last lens of the optical system to the image-space focus of the system; and W is the tube length of the system. The objective group of the common-aperture telescope is designed as a telephoto structure when the focal length is greater than the tube length, with an initial selection of the tube-length coefficient k1 = 0. The theoretically determined ideal lenses are replaced with actual lenses, and the structure of the wave-front detection unit is shown in Fig. 2. Because of the large aperture of the system and the 320-mm constraint on the common-aperture telescope tube length, using only spherical lenses to provide acceptable image quality would require numerous lenses to correct the aberration, resulting in a large, complicated structure. Furthermore, the f-number of the first lens is rather high. To improve the optical performance, simplify the system structure, and minimize the system size [20], the front surface of the first lens is constructed as an aspherical surface with up to an 8th-order correction term; optimization and analysis in optical design software give coefficients of −4.718 × 10⁻⁹, −4.476 × 10⁻¹⁵, and −4.635 × 10⁻¹⁹ for the 4th-, 6th-, and 8th-order terms, respectively. The telephoto structure of the eyepiece also serves to extend the exit pupil distance to accommodate the installation space of the wave-front detector. The exit pupil distance is 42.1 mm, and the tube length is 319.5 mm, meeting the design specifications.
A telephoto structure is employed in both the far-field detection subunit and the near-field detection subunit to shorten the optical length and decrease the tube length of each subunit, reducing the volume and weight of the measurement system. Both subunits are connected to the exit pupil of the common-aperture telescope through a long entrance pupil distance, and the system volume is further compressed through splitters and prisms.
B. Far-Field Detection Unit Design
The far-field detection unit's design parameters are listed in Table II. The combined focal length of the far-field detection subunit and the common-aperture telescope is fz = 5500 mm, and the focal length of the far-field detection subunit is fa = 5500/η = 500 mm, where η is the magnification of the common-aperture telescope. The far-field detection subunit uses a three-piece structure to focus the beam on the image surface, reducing system space and weight. Fig. 2 depicts the far-field unit. The final tube length of the far-field detection subunit is 75 mm.
C. Near-Field Detection Unit Design
The near-field detection subunit requires that the exit pupil distance be greater than zero to satisfy the detection camera's reception. To decrease the space and weight of the system, the near-field detection subunit, combined with the parameters listed in Table III, uses a Keplerian structure system with two pieces in the front group and three pieces in the rear group to transfer the beam image and accomplish secondary beam compression. Fig. 2 illustrates the near-field detection unit. The final near-field detection subunit has a length of 146.4 mm and an entrance pupil distance of 34.3 mm.
IV. SIMULATION ANALYSIS
Athermalization design [21] and image-quality analysis of the system are carried out on the basis of a sensible layout and volume compression of the optical system to eliminate thermal aberrations. Fig. 3(a) shows the wave-front PV values for various fields of view in the wave-front detection unit at various temperatures, with PV values no greater than 0.1442λ. Fig. 3(b) depicts those in the far-field detection unit at various temperatures; the system shows good imaging quality, with PV values no greater than 0.1223λ. Fig. 3(c) presents those in the near-field detection unit at various temperatures, with PV values no greater than 0.1729λ, indicating that each subsystem has high imaging quality.
To further analyze the beam quality of the combined system, the ratio of the system's imaging spot size to the diffraction-limited spot size at the 83.6% point of the encircled-energy curve is used to approximate the equivalent beam quality (EBQ). Fig. 4 depicts the variation in the encircled-energy radius under different temperature conditions. The black curve depicts the diffraction-limited encircled energy of the far-field detection unit, while the colored curves show the encircled energy for each field of view; they almost coincide with the diffraction-limit curve. Because the field of view is small, the curves for the individual fields of view nearly coincide. The equivalent beam quality at 10 °C, 20 °C, and 30 °C was obtained with this calculation approach. At 20 °C, the far-field beam quality is 1.17 times the diffraction limit. At 10 °C and 30 °C, the far-field beam quality degrades slightly, but EBQ values of 1.21 and 1.22 times the diffraction limit are still achieved, respectively. The results reveal that temperature has little influence on the far-field beam quality, with a maximum fluctuation of about ΔEBQ = 0.05 times the diffraction limit.
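A minimal sketch of this EBQ estimate from sampled encircled-energy curves; function and argument names are ours, and both curves are assumed monotonically increasing so the interpolation is valid:

```python
import numpy as np

def ebq(radii, encircled, dl_radii, dl_encircled, frac=0.836):
    """Equivalent beam quality: measured spot radius over the
    diffraction-limited spot radius at the 83.6% encircled-energy point."""
    r_meas = np.interp(frac, encircled, radii)      # radius at 83.6% energy
    r_dl = np.interp(frac, dl_encircled, dl_radii)
    return r_meas / r_dl
```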
The distortion of the near-field detection unit at 10 °C, 20 °C, and 30 °C is 0.2273%, 0.2272%, and 0.2274%, respectively. The system has high image quality and complies with the index standards.
V. TOLERANCE
Variations arising in the manufacture and assembly of the optical system are analyzed, with the RMS wave-front error chosen as the tolerance criterion for the measurement system, using the tolerance-analysis function of the optical design software. The Monte Carlo approach is used with 1000 iterations to examine the tolerances of each system and to identify the most sensitive elements. The purpose of the Monte Carlo simulation in Zemax is to assess the global effect of tolerances: during the simulation, a series of random lens systems that meet the tolerance ranges is generated and then evaluated against the criteria. Any number of designs can be generated in the process by employing Normal, Uniform, or Parabolic statistical distributions. Table V shows the tolerance data ranges for the optical systems, and Tables VI and VII show the results of the statistical analyses.
For both units, the first lens of the common-aperture telescope is the most sensitive element discovered through Monte Carlo analysis. The surface and element tilts are approximately ±0.6', and the decentering is approximately ± 0.015 mm. The tolerances of the far-field detection and near-field detection units are achievable.
The RMS values of wave-fronts of the far-field detection unit have a 98% probability of being less than 0.1723λ, which satisfies the wave-front aberration design standards. That of the near-field detection unit also meets the design standard with a 98% probability of being less than 0.1819λ.
VI. MECHANICAL SYSTEM DESIGN AND ANALYSIS
The mechanical structure of the multiparameter measurement system is designed around the optical system. The installation locations of the subunits are efficiently arranged and planned, resulting in a compact optical-mechanical structure. The system is composed of a common-aperture telescope, wave-front detection subunit, far-field detection subunit, and near-field detection subunit. Titanium alloy (TC4) offers high strength, good heat resistance, and a low thermal expansion coefficient; therefore, TC4 is chosen as the mechanical-system material to limit the impact of external temperature variations on the measurement system. The mechanical structure of the common-aperture telescope serves as the mainframe of the measurement system; as shown in Fig. 5, the other subunits are attached to this mainframe. The main assembly and alignment principles for the measurement system are as follows. First, the mechanical supports and optics are fabricated and tested to verify that the design tolerances are met. Then, the common-aperture telescope, wave-front detection subunit, far-field detection subunit, and near-field detection subunit are sequentially assembled and aligned, and the angular deviation of each lens is calibrated interferometrically. Finally, the overall structure is installed and debugged for good performance. The size of the system is 400 mm × 150 mm × 246 mm (L × W × H), and its weight is 23.84 kg, which meets the design requirements.
Finite-element analysis (FEA) is used to examine the structure of the measurement system. Considering that the main operating environment of the measurement system is 20°C ± 10°C, it is determined whether the deformation of the system induced by gravity and temperature is within the tolerance of the optical system.
A. Gravity Analysis of Common-Aperture Telescope Optical System
The surface shape of the lenses and the imaging of the system are affected by gravity. Therefore, a gravitational deformation analysis is performed on the common-aperture telescope's four larger-aperture lenses. The largest radial deformation occurs in the first lens, with a deformation of 2.309 nm; since this is much smaller than the decentering tolerance, its effect is negligible.
B. Mechanical System Gravity Analysis
The four larger-aperture lenses are designed with independent support structures to enable slight adjustments of the lenses to improve image quality. Because the weight of the lens deforms the mechanical structure and affects the image quality, the equivalent mass of the larger-aperture lens and its independent support structure act on the corresponding position of the measurement system in the form of a distributed mass. The influence of this distributed mass is analyzed. Both the far-field and near-field detectors have a mass of 0.12 kg and also apply a distributed mass to their respective locations within the system. To optimize the analytical performance, these structures were excluded from the FEA. The measurement system showed the most deformation on the top of the front group and the bottom of the middle group of the mainframe with respective deformations of 55 nm and 27 nm. These maximum deformation values are smaller than the most sensitive decentering tolerance of ± 15 μm, indicating that the impact of gravity does not affect its ability to meet the tolerance requirement.
C. Influence of the Mechanical System on the Optical System Under Varying Temperature
The change in temperature causes thermal expansion and contraction of the structural materials, resulting in varying degrees of decentering and tilt in the optical elements or systems, which can influence the image quality of the optical system. If the mechanical structure is severely deformed, the optical system suffers irreversible image-quality degradation, making it incapable of accomplishing normal measurement tasks. Using FEA software, a thermal study of the mechanical structure at 10°C and 30°C was performed.
Comprehensive analysis of the gravity and temperature factors shows that the impact of temperature on the structure of the measurement system occurs mainly in the front and middle groups of the common-aperture telescope and in HR1. The maximum relative decentering in the X and Y directions is 1.173 μm and 6.091 μm, respectively; the overall maximum tilt is 0.00129°; and the maximum tilt of HR1 is 0.0035°. The most sensitive tolerances obtained via the analysis were ±0.6′ (±0.01°) tilt and ±15 μm decentering; thus, the maximum values obtained here fulfill the tolerance standards.
The FEA results of optical element deformation, decentering, and tilt are imported into Zemax software to examine the wavefront aberration of the far-field detection unit and the near-field detection unit.
At 10°C and 30°C, Fig. 6 depicts the PV values of wave-fronts of the far-field detection unit and near-field detection unit following decentering and tilt. Although it meets the system tolerance standards, structural materials with lower expansion coefficients can be employed to improve the system's performance.
VII. EXPERIMENTAL RESULTS AND ANALYSIS
The experimental system shown in Fig. 7 was built in the laboratory. The laboratory temperature was kept at 22 °C ± 5 °C, with attainable minimum and maximum temperatures of 5 °C and 40 °C, respectively, and the relative humidity was 50%.
In an adaptive optics (AO) system, noise and detection error can corrupt the slope measurement of a Hartmann-Shack (H-S) wave-front sensor and thus degrade the performance of the AO system. The noise in an AO system can be divided into readout noise and photon noise. The detection error results from the discrete sampling by the number-limited CCD pixels in the H-S sensor and from the dead space between the CCD pixels [22]. The correction effect of the adaptive optics is essentially unaffected when the signal-to-noise ratio is high, but it decreases dramatically relative to the noise-free state as the ratio declines. It is therefore vital to remove the impact of noise on the image. Probing the influence of noise on the multiparameter measurement system under varying signal-to-noise ratios, and under the combined influence of noise and aberration, is a direction for future research. Fig. 8(a)-(c) show the three-dimensional (3D) image distributions of the wave-front detector at 10 °C, 20 °C, and 30 °C after noise removal, respectively. The corresponding PV values are 0.282 μm, 0.267 μm, and 0.288 μm, and the corresponding RMS values are 0.079 μm, 0.072 μm, and 0.075 μm. Fig. 9(a)-(c) show beam-quality images at 10 °C, 20 °C, and 30 °C with beam-quality β factors of 1.26, 1.22, and 1.27 times the diffraction limit, respectively, meeting the technical requirements. Fig. 11 shows the 3D peak beam intensity. The β factor is defined as β = (A/A_DL)^(1/2) [23], where A and A_DL are the fractions of power contained within a far-field bucket of radius λ/√(Dx·Dy) for the measured beam and for a diffraction-limited beam with the same near-field dimensions and wavelength, respectively; Dx and Dy are the width and height of the measured near-field beam profile [24]. The two-dimensional (2D) beam distributions obtained by the near-field detector at 10 °C, 20 °C, and 30 °C are shown in Fig. 10(a)-(c), respectively. The near-field image changes slightly with temperature, and the beam shape exhibits barrel distortion, caused primarily by machining and installation errors in the optical system, which introduce high-order aberrations and diffraction deformation of the spot; nevertheless, the imaging is improved. The uniformity is defined as Uniformity = (I_max − I_min)/Ī [25]. Thus, the near-field uniformities depicted in Fig. 10(a)-(c) are 0.535, 0.474, and 0.502, respectively.
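Both figures of merit can be computed directly from recorded intensity maps. A minimal sketch follows; the names are ours, and we order the power-in-the-bucket ratio so that β ≥ 1 for a degraded beam, consistent with the reported values:

```python
import numpy as np

def beta_factor(a_measured, a_dl):
    """Far-field beam quality from power-in-the-bucket fractions inside the
    diffraction-limited bucket; beta >= 1 when the measured beam holds less
    power in the bucket than the ideal beam."""
    return np.sqrt(a_dl / a_measured)

def uniformity(intensity, mask):
    """(I_max - I_min) / I_mean over the beam region selected by mask."""
    vals = intensity[mask]
    return (vals.max() - vals.min()) / vals.mean()
```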
The multiparameter measurement system was tested 40 times in the laboratory under random temperature variations of 20°C ± 10°C. The measured data for far-field β are shown in Fig. 12. The maximum, minimum, and average values of β are β max = 1.284, β min = 1.226, and β ave = 1.248, respectively. Data regarding the near-field are shown in Fig. 13. Uni max = 0.570, Uni min = 0.472, and Uni ave = 0.533 are the maximum, minimum, and average values, respectively.
VIII. CONCLUSION
The dimensions of the proposed measurement system are 400 mm × 150 mm × 246 mm (L × W × H). It can concurrently measure the wave-front, far field, and near field of the 160 mm × 120 mm aperture laser beam emitted by the solid-state slab laser. The common-aperture telescope adopts the Keplerian configuration, which ensures the portability and compactness of the optical system while providing the object-image conjugation needed by the AO system. The other detection subunits are built onto the mainframe of the measurement system, which is based on the mechanical structure of the common-aperture telescope; this effectively compresses the structural size of the measurement system, enhances its compactness, and reduces its volume. In 40 tests, β_max = 1.284, β_min = 1.226, β_ave = 1.248, Uni_max = 0.570, Uni_min = 0.472, and Uni_ave = 0.533. In the future, the optical and mechanical systems will be optimized to improve the indexes of the measurement system itself; the common-aperture telescope will be upgraded to reduce tolerance sensitivity and simplify the installation and debugging of the system; and the measurement system will be continuously evaluated.
"year": 2022,
"sha1": "6756927273fbd050cfdef1e3d474508a74e82c95",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/4563994/4814557/09961155.pdf",
"oa_status": "GOLD",
"pdf_src": "IEEE",
"pdf_hash": "6d6ef8718bf5fe4801d6cb660a88ddc4a37b9a68",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
Enhanced cognitive behaviour therapy for adults with anorexia nervosa: A UK–Italy study
Anorexia nervosa is difficult to treat and no treatment is supported by robust evidence. As it is uncommon, it has been recommended that new treatments should undergo extensive preliminary testing before being evaluated in randomized controlled trials. The aim of the present study was to establish the immediate and longer-term outcome following “enhanced” cognitive behaviour therapy (CBT-E). Ninety-nine adult patients with marked anorexia nervosa (body mass index ≤ 17.5) were recruited from consecutive referrals to clinics in the UK and Italy. Each was offered 40 sessions of CBT-E over 40 weeks with no concurrent treatment. Sixty-four percent of the patients were able to complete this outpatient treatment and in these patients there was a substantial increase in weight (7.47 kg, SD 4.93) and BMI (2.77, SD 1.81). Eating disorder features also improved markedly. Over the 60-week follow-up period there was little deterioration despite minimal additional treatment. These findings provide strong preliminary support for this use of CBT-E and justify its further evaluation in randomized controlled trials. As CBT-E has already been established as a treatment for bulimia nervosa and eating disorder not otherwise specified, the findings also confirm that CBT-E is transdiagnostic in its scope.
Introduction
Anorexia nervosa in adulthood has been described as "one of the most difficult psychiatric disorders to treat" (Halmi et al., 2005). Reluctance to engage in treatment is common, and in those who do accept treatment the outcome is often poor. Hospitalization is essential in some cases and generally results in weight gain, but it is expensive and disruptive and often followed by weight loss (Kaplan et al., 2009; Walsh et al., 2006). A treatment that produced enduring change would be of great value, especially if it were deliverable on an outpatient basis.
Anorexia nervosa is also difficult to study (Agras et al., 2004;Bulik et al., 2007;Fairburn, 2005;Halmi, 2008;Lock et al., 2012). This is because of its relative rarity, the associated medical risks, the lengthy duration of treatment, and the importance of follow-up to determine whether treatment effects persist over time. Nine studies of psychosocial treatments have been published and several have run into major difficulties (Halmi et al., 2005;Lock et al., 2012). Almost all the studies have been small in size, the average number of patients per condition being less than 20. These methodological challenges, together with the disappointing or inconclusive results of the studies to date, have led to the suggestion that new treatments for anorexia nervosa should undergo extensive preliminary testing before being considered eligible for evaluation in randomized controlled trials (Agras et al., 2004;Fairburn, 2005;Halmi et al., 2005;Lock et al., 2012). Alternatively it has been proposed that the focus of research should shift away from adults and on to the treatment of adolescents as they appear to be easier both to treat and to study (Halmi, 2008).
Cognitive behaviour therapy is a potential candidate as an outpatient treatment for anorexia nervosa since it is the leading empirically supported treatment for bulimia nervosa (National Institute for Clinical Excellence, 2004; Shapiro et al., 2007), a disorder with overlapping psychopathology. The cognitive behavioural treatment for bulimia nervosa has recently been adapted with the goal of making it suitable for any form of eating disorder, including anorexia nervosa (Fairburn, 2008; Fairburn, Cooper, & Shafran, 2003). To this end, the new "enhanced" form of the treatment (CBT-E) focuses on modifying the mechanisms thought to perpetuate all forms of eating disorder psychopathology (Fairburn et al., 2003). The treatment has been shown in two independent studies (combined N = 245) to produce sustained change in those eating disorder patients who are not significantly underweight, whatever their DSM diagnosis (Byrne, Fursland, Allen, & Watson, 2011; Fairburn et al., 2009). The utility of the treatment with the remaining eating disorder patients, those with anorexia nervosa, has yet to be established.
In light of the recommendation that new treatments for adults with anorexia nervosa undergo extensive preliminary evaluation, we studied the effects of CBT-E in two representative, markedly affected, patient samples. Many of these patients would ordinarily have been hospitalised. We chose to include patients who were significantly underweight in order to test the full potential of the new treatment.
The study was designed to address four key clinical questions. First, among adults with marked anorexia nervosa, what proportion is able to complete this outpatient treatment? Second, among those patients who can complete the treatment, what is their outcome? Third, are the changes sustained? And fourth, are there baseline variables that predict treatment completion? By studying two independent patient samples we were also able to determine whether there is consistency in these patients' response to CBT-E.
Design
Two samples of patients were recruited, one from the UK and the other from Italy. Both comprised patients with anorexia nervosa who had a body mass index (BMI; weight in kg/height 2 in m) of 17.5 or below, a commonly used threshold for anorexia nervosa. All the patients were offered CBT-E and, if they accepted, were provided with 40 sessions of treatment over 40 weeks. This was the only psychological or behavioural intervention that they received. The patients were then entered into a closed follow-up period lasting 60 weeks during which they received no further treatment unless it was judged necessary on clinical grounds. The studies were approved by the local human subjects committees.
Setting and participants
The UK sample was recruited from consecutive referrals by family doctors and other clinicians to two well-established National Health Service eating disorder clinics, one serving central Oxfordshire and the other serving Leicestershire. Each referral was assessed by a senior clinician who established the patient's diagnosis and eligibility for the study. The Italian sample was recruited in a similar way. It comprised consecutive referrals to an eating disorder clinic serving the Verona area.
The UK patients had to fulfil the DSM-IV diagnostic criteria for anorexia nervosa (American Psychiatric Association, 1994), bar the amenorrhoea criterion, and to have a BMI between 15.0 and 17.5. In addition, they had to be aged between 18 and 65 years and provide written informed consent after receiving a complete description of the study. The entry criteria for the Italian sample were the same except that there was no lower BMI limit. The exclusion criteria for both samples were as follows: i) the patient being unsafe to manage on an outpatient basis (N = 4 and 0; UK and Italian samples respectively); ii) having received in the previous year a specialist treatment for anorexia nervosa (N = 4 and 0); iii) having a co-existing Axis 1 psychiatric disorder that precluded immediate eating disorder-focused treatment, such as psychosis or drug dependence (N = 11 and 4); and iv) not being available for the 40 week period of treatment (N = 4 and 4). If it was thought that there was a comorbid major depressive disorder in addition to the eating disorder, this was treated with antidepressant medication prior to starting CBT-E. Patients who were already receiving psychotropic medication were weaned off this prior to entering the study (N = 2 and 2), an exception being clinically warranted antidepressant medication (N = 22 and 17) which was kept stable during treatment with CBT-E.
Patients who met the study entry criteria at the initial assessment were offered two further appointments in order to describe the treatment and obtain consent. Fig. 1 shows the recruitment and retention figures for the UK and Italian participants.
Intervention
CBT-E is a treatment for patients with eating disorder psychopathology. With patients who are underweight, it has three phases. In the first, the emphasis is on increasing patients' motivation to change. Then, if willing, patients are helped to regain weight while at the same time they tackle their eating disorder psychopathology including their extreme concerns about shape and weight. In the final phase the emphasis is on helping them develop personalized strategies for identifying and immediately correcting any setbacks. The focused version of CBT-E was used as it appears to have greater clinical utility than the broad version (Fairburn et al., 2009). A detailed manual for therapists is available (Fairburn, Cooper, Shafran et al., 2008).
Treatment involved 40, 50-min, one-to-one sessions over 40 weeks. A single therapist treated each patient. There were seven UK therapists of whom five were clinical psychologists and two were psychiatric nurse specialists. The Italian site had four therapists, all of whom were clinical psychologists. All the therapists had prior generic clinical experience and experience treating patients with eating disorders, and each received six months initial training from CGF and ZC (UK) and RDG and CGF (Italy). Weekly supervision meetings were held throughout the study and were led by CGF and ZC (UK) and RDG (Italy). The therapists had six-monthly booster workshops led by CGF. All the sessions were recorded and these recordings were used as part of supervision to ensure that the treatment was well implemented.
Assessment
The assessment points were before treatment, at the end of treatment and 60 weeks later.
Body weight and body mass index
Weight was measured using a beam balance scale and height was measured using a wall-mounted stadiometer.
Eating disorder features
The UK patients were assessed using the 16th edition of the Eating Disorder Examination interview (Fairburn, Cooper, & O'Connor, 2008) (EDE) and its self-report version (Fairburn & Beglin, 2008) (EDE-Q6.0). The assessors had no involvement with treatment. The Italian patients were assessed using the Italian translation of the EDE-Q6.0.
General psychiatric features
In the UK sample, these were measured using the Brief Symptom Inventory (Derogatis & Spencer, 1982) (BSI), a short version of the Symptom Checklist-90 (Derogatis, 1977) (SCL). In the Italian sample the full SCL was used. The two instruments generate the same Global Severity Index (GSI).
Statistical analysis
The statistical analysis was undertaken by HAD using standard treatment research data analytic procedures. Data are presented as N (%) for categorical data and as means (with standard deviation, SD) or medians (with range) for continuous data. Differences between groups were expressed as a difference in proportion or relative risk (RR) for categorical data and as a mean difference for continuous data. Chi-squared (χ²) or Fisher's exact tests (as appropriate) were used to compare categorical measures between the two groups, and t-tests or Mann–Whitney tests (as appropriate for the distribution of the data) to compare continuous measures.
McNemar tests for categorical data and paired t-tests or Wilcoxon matched pairs signed rank test (as appropriate) for continuous data were used to compare differences within groups. Logistic regression analyses were used to identify independent predictors of outcome in terms of treatment completion at follow-up. Statistical significance was taken at two-sided p < 0.05 throughout, with 95% confidence intervals (CI) used to express the uncertainty in the data.
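As a hedged illustration of this pipeline (not the authors' actual code: the file and column names are invented, and a plain logistic fit reports odds ratios where the paper quotes adjusted relative risks):

```python
import pandas as pd
import statsmodels.api as sm
from scipy import stats

df = pd.read_csv("patients.csv")  # hypothetical file: one row per patient

# Between-sample comparison of a continuous baseline measure (t-test).
t, p = stats.ttest_ind(df.loc[df["site"] == "UK", "bmi"],
                       df.loc[df["site"] == "Italy", "bmi"])

# Logistic regression for independent predictors of treatment completion.
X = sm.add_constant(df[["weight_concern_high", "purging"]].astype(float))
fit = sm.Logit(df["completed"], X).fit()
print(fit.summary())  # coefficients, 95% CIs, p-values
```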
The two samples
Recruitment continued until 99 patients had entered the study (UK, N = 50; Italy, N = 49). The demographic characteristics of the two samples are shown in Table 1. They primarily consisted of single, female patients in their mid-20s. The eating disorder was well established in most cases, with the mean duration of anorexia nervosa being three years. Table 2 shows their clinical characteristics. Both samples were substantially underweight, the Italian sample weighing significantly less than the UK one (p < 0.001), reflecting the difference in the BMI entry criteria at the two sites. In most other respects the UK sample had significantly higher levels of psychopathology than the Italian one.
Intent-to-treat findings at end of treatment and 60-week follow-up

Although the primary goal of this study was to determine the proportion of patients with marked anorexia nervosa who can complete this outpatient treatment, and their treatment response, intent-to-treat data are reported in Table 2. The method of data imputation involved moving the last available data point forward, as this has been the most commonly used approach in the studies to date. It can be seen that there was a marked increase in weight. By the end of treatment the mean BMI had increased from 16.1 (SD 1.2) to 17.9 (SD 1.8), and over the 60-week period of follow-up it remained stable (mean BMI 17.8, SD 2.0). The increase in weight was accompanied by a decrease in eating disorder psychopathology and general psychiatric features.
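For readers unfamiliar with this imputation, here is a minimal pandas sketch of last-observation-carried-forward across assessment points (the column names and values are invented):

```python
import pandas as pd

# One row per patient, one column per assessment point.
scores = pd.DataFrame({"baseline": [16.1, 15.8],
                       "end_of_treatment": [17.9, None],
                       "follow_up": [None, None]})
locf = scores.ffill(axis=1)  # carry the last available observation forward
print(locf)
```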
The patients who were withdrawn were highly symptomatic when they left the study and, without exception, they were referred for more intensive treatment.
Question 2 – What is the outcome among those who complete CBT-E?
Question 3 – Are the changes sustained following CBT-E?
There was high compliance with follow-up, with 84% (53/63) of the treatment completers being successfully reassessed. A minority

Table 2. Characteristics of the two samples before treatment, after treatment and at 60-week follow-up. Intent-to-treat data shown as mean (SD) unless otherwise stated.
Question 4 – Are there baseline predictors of treatment completion?
There were no statistically significant relationships between study site, age, eating disorder duration or BMI and whether or not the patient completed treatment. Treatment completion was, however, significantly associated with the severity of eating disorder and general psychopathology, with those with greater psychopathology being less likely to complete. Global EDE-Q score, the EDE-Q shape and weight concern subscales, GSI score, and the frequency of binge eating and purging were significantly higher in non-completers than completers (at least p < 0.05), with the strongest relationships observed for the global EDE-Q and EDE-Q weight concern scores and the presence and frequency of purging (all p < 0.01). On multiple regression, only the EDE-Q weight concern score and purging retained an independent effect, with the adjusted RR for treatment completion for patients with an EDE-Q weight concern score ≥ 3 being 0.29 (95% CI 0.11–0.79) and that for the presence of purging being 0.36 (0.14–0.91).
Discussion
The aim of this study was to obtain robust data on the outcome following a new outpatient treatment for anorexia nervosa, a notoriously treatment-resistant condition when present in adults (Halmi et al., 2005; Walsh et al., 2006). To achieve this aim, two independent and sizeable cohorts of patients were treated with CBT-E and then entered into a 60-week closed period of follow-up. All the patients at recruitment had a BMI of 17.5 or below.
There were three main findings. The first was that two-thirds of the patients in both samples were able to complete this lengthy treatment. The remaining one-third either ceased to attend or were withdrawn due to concerns about their physical health or lack of progress. The great majority of these patients were highly symptomatic at the point that treatment stopped. The fact that a third of the patients did not complete treatment is not surprising given that over half started treatment with a BMI below 16.5, a threshold recently recommended for hospitalization (UK sample n = 22, 44%; Italian sample n = 31, 63%).
The second finding was that among those who completed CBT-E there were substantial improvements in weight and eating disorder psychopathology. This was true of both samples. The mean weight gain was 7.5 kg (16.5 lb), with over sixty percent of the patients gaining sufficient weight to enter the WHO's healthy BMI range. In addition, almost ninety percent had minimal eating disorder psychopathology at the end of treatment, despite the weight gain. This is of note as in anorexia nervosa concerns about eating, shape and weight are aggravated by eating more, gaining weight and changing shape, and they are likely to contribute to the high relapse rate that is typically seen. The third finding is therefore of particular interest. It concerns the stability of the changes obtained. Despite there being little exposure to further treatment, the changes were generally well maintained, with only a slight deterioration in weight and eating disorder features. This is in marked contrast to the reports of high rates of relapse over the 12 months following hospitalization, even with ongoing therapeutic input (Kaplan et al., 2009; Walsh et al., 2006).

Table 3. Characteristics of the two samples before treatment, after treatment and at 60-week follow-up among those who completed treatment. Data are shown as mean (SD) unless otherwise stated.
The intent-to-treat analyses were included to allow comparisons to be made with other studies, although doing so is complicated by the fact that patient samples differ in their characteristics and those to date have been markedly smaller than the present one. Two of the key outcome variables can be compared across the various studies. The first is completing treatment. Two-thirds of the present study's patients completed CBT-E, a completion rate that is typical for this type of patient group (Dare, Eisler, Russell, Treasure, & Dodge, 2001;McIntosh et al., 2005). The second variable is weight gain. The mean intent-to-treat weight gain was 5.0 kg. This compares favourably with the mean weight gain of 2.7 kg achieved by the only other outpatient study to report weight change in similarly underweight patients (Dare et al., 2001). It also compares well with the weight gains obtained with a generic form of CBT, interpersonal psychotherapy and a form of specialist clinical management in a less severely affected patient group (McIntosh et al., 2005).
The present study had certain strengths. First, the recruitment of two parallel samples, both large for studies of the treatment of anorexia nervosa, allowed us to determine whether the findings are likely to be robust. Second, the patients were recruited from consecutive referrals to long-established eating disorder clinics, each providing the main clinical service for their local area. The findings are therefore likely to be generalizable to mainstream clinical services elsewhere. Third, as noted earlier, the cases were not mild. Fourth, in contrast with many other studies, the index treatment, CBT-E, was these patients' sole psychological treatment: no other interventions were taking place in the background. Lastly, the patients were followed up for over a year after completing CBT-E, the period when relapse is most likely to occur.
Given its specific aims, the study had three main limitations. The first is that the findings cannot be generalized to patients with a BMI below 15.0 or above 17.5. Second, a longer period of follow-up would have been desirable to determine the stability of the changes in the long term. Third, while the samples were large for this comparatively uncommon disorder, the study only had modest statistical power for detecting site effects. Larger multisite studies would be needed to confirm that the effects of CBT-E are replicable. If a broader perspective on the study is taken, the other main limitation is that CBT-E was not compared with another treatment. This was intentional and in line with the recommendations outlined earlier. It does mean that no conclusions can be drawn regarding the effectiveness of CBT-E versus other approaches.
What the present study provides is robust benchmark data on the immediate and longer-term outcome following the use of CBT-E to treat adults with anorexia nervosa. Two-thirds of the patients were able to complete the treatment and among them there were substantial improvements in weight and eating disorder features that were well maintained. These findings are sufficiently promising to justify the evaluation of CBT-E in randomized controlled trials. The findings also confirm that CBT-E is a "transdiagnostic" treatment as there are now data supporting its use in anorexia nervosa, bulimia nervosa and eating disorder not otherwise specified. It can also be used with adolescents (Dalle Grave, Calugi, Doll, & Fairburn, 2013) and with inpatients (Dalle Grave, Calugi, Conti, Doll, & Fairburn, 2012).
Funding
The study was funded by two programme grants from the Wellcome Trust, London (046386). CGF is supported by a Principal Research Fellowship from the Wellcome Trust (046386). The Wellcome Trust had no involvement in the design or execution of the study. We are grateful to Laura De Kolitscher, Simona Ginetti and Elettra Pasqualoni, who served as assessors.
"year": 2013,
"sha1": "87c2ad86ee3cf1e2a48c1cdf89559142e0c9212d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.brat.2012.09.010",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "c4dd75e6ec49aa62b726a24bce0aaeaf2ff9df9c",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
"Flipped" clinical rotations: A novel approach
Abstract Background Near the beginning of the COVID‐19 pandemic in the United States, medical students were pulled out of all in‐person patient care activities. This resulted in massive disruption to the required clinical rotations (clerkships), necessitating creative curricular solutions to ensure continued education for medical students. Approach In response to the lockout, our school adopted a “flipped” clinical rotations model that assigned students to remote learning activities prior to in‐person patient care activities. This approach allowed students to continue their clinical education virtually with a focus on knowledge for practice while awaiting return to the shortened in‐person portions of their rotation. In planning the modified clinical curriculum, educational leaders adhered to several guiding principles including ensuring flexible remote curricular components that would engage students in active learning, designating that no rotation would be completely virtual, and completing virtual educational activities and standardised exams before students returned to in‐person experiences. Evaluation End of rotation evaluations and standardised exam scores were analysed to determine the effectiveness of this model. Despite the disruption associated with the pandemic and the rapid implementation of the “flipped” rotations, students continued to rate the overall experiences as highly as traditional clinical rotations. Students also performed similarly on standardised exams when compared to cohorts from other classes at the same experience level. Implications While borne out of necessity during a pandemic, the lessons learned from our implementation of a “flipped” rotations model can be applied to address problems of capacity and clinical preparedness in the clinical setting.
| BACKGROUND
In March 2020, in response to the emerging COVID-19 pandemic, the Association of American Medical Colleges (AAMC) issued guidance suspending medical students from all direct in-person patient care.
While this resulted in massive disruption to required clinical rotations (i.e., clerkships), it also presented opportunities for innovative curricular solutions that could transform medical education, 1 by leveraging existing digital infrastructure and optimising "flipped" experiences. [2][3][4][5] The "flipped" classroom model has been widely and successfully adapted in medical education, particularly in the preclinical curriculum. 6 Research has shown that the "flipped" classroom has several potential advantages, including its ability to create time and space in an existing curriculum for educational innovations. 7 The "flipped" classroom model has also been used on a limited basis for conference topics within some clinical rotations, to save time and improve student engagement. [8][9][10][11] To meet the challenges of the recent pandemic, we adopted a model of delivering clinical knowledge "en bloc" in advance of in-person activities across all clinical rotations. Our experience suggests that this model may have utility in more conventional educational contexts.
| APPROACH
Case Western Reserve University School of Medicine (CWRU SOM) responded to the AAMC guidance by creating a "flipped" clinical rotation model that was applied across all core clinical rotations: family medicine, internal medicine, neurology, obstetrics and gynaecology, paediatrics, psychiatry, surgery, emergency medicine, and geriatrics.
Clinical elective rotations, including acting internships, were not "flipped." Traditionally, students rotate through clinical rotations with direct patient care activities fully integrated with teaching sessions (lectures, case-based discussions, small group learning) and culminating in a standardised exam. Our "flipped" clinical rotations model assigned students to virtual knowledge-building activities followed by a standardised exam prior to a briefer period of in-person patient care activities. This approach allowed students to continue their clinical education virtually with a focus on "knowledge for practice," while awaiting return to the shortened in-person patient care portions of their rotations.
Two hundred ten medical students were removed from their clinical rotations immediately following the AAMC's guidance. Educational leaders worked quickly to develop a modified curriculum that could be applied during the suspension from clinical activities. While initial guidance from accrediting organisations suggested that it may be reasonable to delay or cancel certain clinical rotations due to the pandemic, we opted to ensure that every student would experience all of the required rotations. 12 In planning the modified clinical curriculum, educational leaders adhered to several guiding principles: (1) ensure a flexible virtual curriculum that would engage students in active learning and knowledge-building; (2) all rotations would require hands-on patient care activities (shortened by up to 50%); (3) virtual educational activities (e.g., lectures and conferences) would occur prior to in-person clinical experiences; and (4) standardised exams (e.g., National Board of Medical Examiners Subject exams) would be given before students were reintegrated into the corresponding clinical work. Rotation directors across our four hospital affiliates collaborated by discipline to create virtual curricula that mapped to the university's existing rotation-specific learning objectives. Virtual curricula included teaching sessions led by faculty, residents, and graduating medical students, interactive online cases (e.g., Aquifer, OnlineMedEd), 13,14 and some virtual health care visits.
In addition to developing content and structure for the virtual curriculum, educators adapted schedules to accommodate the initial and any subsequent clinical suspensions. Educators avoided extending the year into the following academic year, to prevent adverse impacts on graduation, residency applications, and rotation start dates for subsequent classes. As the initial surge of COVID cases waned locally, a restart date of 1 June 2020 was identified as the earliest day that students would be safely allowed back into the hospitals (example revised schedule shown in Figure 1). While the total length of each clinical rotation was not shortened, the in-person portions were, with time spent on the virtual curricula making up the difference. This allowed students near the end of the academic year (late-year) to finish without creating delays for the rising class (early-year) students and avoided the need to teach two cohorts at the same time. Early-year students participated in the virtual curriculum while late-year students completed in-person clinical activities.
All teaching sessions and test preparation for the standardised exams were moved to the virtual phase of the rotations. In our traditional model, students take standardised exams at the end of the rotation, requiring them to divide their time between patient care activities and exam preparation. Scheduling the exams before in-person activities allowed students to concentrate on patient-centred activities upon their return to the clinical setting. Also, days normally reserved for exam administration were reclaimed for clinical activities.
| EVALUATION
In-person clinical activities resumed on 1 June 2020 at all affiliate hospitals. We measured the success of the "flipped" clinical rotations by reviewing qualitative and quantitative data from the end-of-rotation evaluations completed anonymously by students and by student performance on standardised exams. Quantitative and qualitative data were obtained from CWRU SOM's Medical Education Data Registry, which is an IRB approved data registry of aggregated, de-identified data (IRB20151105) that can be used for educational research and quality improvement purposes. All "flipped" rotations pivoted to pass/fail (in lieu of tiered grading), so final grade distributions could not be analysed to detect grade differences with traditional rotations.
Student ratings of the perceived quality of each rotation were examined for comparison between traditional and "flipped" rotations using a rating scale of "Poor, Fair, Average, Very Good, or Excellent." The traditional cohort were students from the Class of 2021 who had undertaken rotations that ended before March of 2020. The "flipped" cohort were students who undertook rotations under the modified model. Students on "flipped" rotations were better able to concentrate on patient care activities and were less distracted by preparation for exams.
Results from the standardised exams were analysed using one-way analysis of variance (ANOVA) to determine if there were any differences between traditional and "flipped" rotations (Table 2). To compare students taking "flipped" rotations with peers undertaking traditional rotations, exam scores were analysed for "late-year" M3 students from two consecutive academic years at the same time point.
Figure 1. Traditional and "flipped" schedules for a late-year student who still needed to complete the neurology, psychiatry, surgery, and emergency medicine rotations; 1 June 2020 was chosen as the in-person restart date.
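As a minimal sketch of the exam-score comparison described above, a one-way ANOVA can be run in Python with SciPy. The scores below are hypothetical placeholders, not data from the study.

```python
from scipy.stats import f_oneway

# Hypothetical NBME subject exam scores for late-year M3 students
# from a traditional-rotation year and the "flipped"-rotation year.
traditional_scores = [74, 78, 81, 69, 85, 77, 72, 80]
flipped_scores = [76, 79, 70, 83, 74, 82, 78, 71]

# One-way ANOVA tests whether the group means differ more than
# would be expected from within-group variability alone.
f_stat, p_value = f_oneway(traditional_scores, flipped_scores)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```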
| IMPLICATIONS
The "flipped classroom" model has been successfully integrated into a variety of areas of the medical education curriculum. Individual rotations that have adapted this model have reported increased learning motivation, improved interactivity during teaching sessions, and better post-test performance. 9,11 When implemented across an entire clinical rotation program, we showed that overall ratings and student performance remained similar to our traditional approach.
Our "flipped" rotation model aided in re-integrating students into the clinical setting after a pandemic lockout by delivering clinical knowledge ahead of time. Furthermore, it allowed medical students to complete core clinical training without experiencing significant delays.
While the pandemic removed students from in-person activities for 11 weeks, all students were able to complete core rotations on schedule (or with no more than a 1-month delay) without shortening the total length (remote learning plus in-person activities) of each rotation.
This approach allowed most students to maintain their original sequence of rotations in the critical time between the end of their first clinical year and the residency application season.
Decoupling the knowledge for practice from in-person clinical training allows one group of students to train in person while the other engages in remote learning.
Table 1. Representative comments from end-of-rotation anonymous surveys.
Positive comments
Appreciation for rapid response and flexibility: "Fantastic learning experience given unique challenges that came with COVID and shortened clinical rotations. Gave a great deal of autonomy for students to learn and practice medicine." "Did a fantastic job with being flexible given unique circumstances of COVID and shortened clinical rotations." "Excellent overall. Great adapting to COVID to ensure a quality educational experience." "The flipped classroom approach to virtual curriculum was fantastic - helped solidify concepts and engage students. Clerkship directors were responsive to student concerns and were flexible in approach to teaching to best meet students' needs. All staff and residents were strong teachers during the in-person clerkship."
Appreciation for faculty engagement: "Very detailed and organized in terms of accommodating students during the height of the COVID outbreak and planning meaningful online activities. Also, very interactive faculty and residents that are interested in teaching students." "The residents and faculty were very much engaged in our learning and enthusiastic to give us as much experience as possible in the short amount of time we had." "The clinical experience was excellent. The residents were all wonderful to work with and very supportive of us. Clerkship directors responded to feedback regarding the virtual curriculum."
Negative or constructive comments
Challenges of shortened schedule: "It would be nice if the clerkship were longer, but this was limited due to COVID-19." "Unfortunately, just due to the nature of the shortened rotation from COVID and virtual visits, I was only able to see 1 patient per day, most of which were healthy elderly patients that did not have any 'geriatric syndromes'. I can only rate the clerkship as fair."
Limited patient exposure: "It was hard to only do a lot of televisits, but I know given the pandemic, that was the only option. Maybe doing teaching via conferences as the televisits were not that educational would be useful." "Not sure how to improve this, but patient volume on the inpatient service and clinic were pretty low." "Virtual curriculum Friday lectures were so long (9 am - 5/6 pm) that I could not pay attention to them. They should have been reduced or spaced out. It was not effective learning because we were all so tired." "I liked our teaching attending sessions but I do not know how truly helpful they were all the time and since we had done a virtual didactic portion I sort of wanted to stay with the team in the afternoon."
Table 2. Two comparisons of student performance on standardised exams.
By delivering the knowledge for practice ahead of their first in-person experiences, students can augment their learning.
At our institution, we reverted to the traditional rotation structure after the initial disruption and had no subsequent suspensions.
ACKNOWLEDGEMENT
The authors would like to thank Dr. Klara Papp for her help with the statistical analyses.
CONFLICT OF INTEREST
The authors have no conflict of interest to disclose.
ETHICAL APPROVAL
All student data were obtained from Case Western Reserve University School of Medicine's Medical Education Data Registry, whose purpose is to facilitate education research and quality improvement. The IRB protocol (IRB202151105) covered a de-identified data registry that was determined to be exempt by the institution. As it was exempt, signed consent was not necessary from each student.
"year": 2022,
"sha1": "43fc2c48c8172a29e2c19b9c745e0bf83bcfb2cd",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "Wiley",
"pdf_hash": "5951017d501aca89d8c4a15fa0c6d1cf5a8e3dc3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Circular Business Models for Extended EV Battery Life
In the near future, a large volume of electric vehicle (EV) batteries will reach their end-of-life in EVs. However, they may still retain capacity that could be used in a second life, e.g., for a second use in an EV, or for home electricity storage, thus becoming part of the circular economy instead of becoming waste. The aim of this paper is to explore second life of EV batteries to provide an understanding of how the battery value chain and related business models can become more circular. We apply qualitative research methods and draw on data from interviews and workshops with stakeholders, to identify barriers to and opportunities for second use of EV batteries. New business models are conceptualized, in which increased economic viability of second life and recycling and increased business opportunities for stakeholders may lead to reduced resource consumption. The results show that although several stakeholders see potential in second life, there are several barriers, many of which are of an organizational and cognitive nature. The paper concludes that actors along the battery value chain should set up new collaborations with other actors to be able to benefit from creating new business opportunities and developing new business models together.
Introduction
In July 2018, there were over 55,000 electric vehicles (EVs) in Sweden [1]. Worldwide, the number exceeds 3 million and is expected to increase to between 125 million and 220 million by 2030 [2]. Vehicle original equipment manufacturers (OEMs) have ambitious goals to transform their fleets. For example, as of 2019, Volvo Cars will no longer launch vehicles that are driven solely by internal combustion engines, transforming their portfolio into one based on hybrids and plug-in EVs. Buses and other heavier vehicles are also becoming increasingly electrified.
While EVs are expected to reduce the climate impact and pollution problems of transport, many of the materials used in the batteries are toxic and rare and might thus reduce the sustainability performance of EVs in impact categories such as human toxicity, acidification and eutrophication potential [3][4][5][6]. Extending the battery life cycle is therefore a crucial aspect in improving EVs' contribution to overall sustainable development. By 2025, 250,000 metric tons of EV lithium-ion batteries (LIBs) are expected to have reached end-of-life [7]. In this context, end-of-life means that the batteries are no longer considered useful in a vehicle, but they still retain 70-80% capacity. Being able to make use of that capacity, and only then recycle the batteries, might lead to big sustainability improvements.
Capturing the value that is left in a product after use is the cornerstone of the circular economy. Through direct reuse, refurbishment, remanufacturing, and/or recycling, waste can be eliminated [8]. Remanufacturing and reuse slow down the resource cycle by extending products' life, while recycling closes the resource loop [9,10]. The processes of reuse and recycling are complementary to each other, and the largest sustainability benefit can be reached if EV batteries are first reused and then recycled.
There are currently a number of established businesses on the market, such as Spiers New Technologies Inc (SNT), a US-based provider of "4R" services (repair, remanufacturing, refurbishing and repurposing) for advanced battery packs used in hybrid and electric vehicles. However, a look at the market also reveals a number of recent businesses created by established car manufacturers. While many car manufacturers have conducted pilots, only a few, such as Nissan and Renault, have launched their second-life businesses. Nissan and Renault have launched brands (XStorage Home Systems and Powervault respectively) in the household energy storage market and focus on private households with solar panels in the UK as their core customer segment. Moreover, a number of third-party entrepreneurs are attempting to establish second-life battery businesses. For example, the start-up company Freewire Technologies develops portable EV charging stations, and Relectrify, a start-up based in Australia, focuses on battery management systems to squeeze more value out of used batteries and facilitate the transition of batteries into a second life in residential solar storage, commercial peak-shaving, grid support and beyond.
To enable the transition to a circular economy, with reuse and recycling, specific product designs and business models are required [9]. When transitioning from linear to circular product logics, business models and value chains need to become circular in order to create value and satisfy customer and stakeholder needs sufficiently [11]. However, while technological solutions are advancing, economic and regulatory aspects have not yet been able to provide a sufficient framework and incentives for a circular economy with slowed and closed EV battery cycles. There is not enough understanding of how companies can create business models that facilitate a circular economy [12].
Aim and Scope
In this paper, second life of EV LIBs is studied through interviews and workshops with stakeholders to provide an understanding of how the battery value chain and related business models can become more circular. An illustration of the value chain, as referred to in this paper, is provided in Figure 1. The value chain starts with design and manufacturing. After first life, the battery's health and capacity are checked to see if it can be used in a different vehicle or in a stationary application, or if it needs to be recycled directly. If a second life is possible, the battery is refurbished. Depending on the battery and the application, refurbishment can include different processes. The aim of the paper is to contribute to the ongoing discussion on the circular economy by identifying barriers to and opportunities for second use of EV batteries, and by exploring business models in which increased economic viability of second use and increased business opportunities for stakeholders will lead to reduced resource consumption.
Second use of EV batteries is an issue of global interest, but this study is centered around Swedish and European actors and conditions. The EV market is growing in Sweden, there are national actors in all parts of the battery value chain, European Union (EU) legislation applies, and a large-scale second-life demonstration project (https://www.riksbyggen.se/globalassets/1-media-riksbyggen/2-bostad/bostadsratter/vastra-gotaland/brf-viva/lagring-av-el-i-begagnade-bussbatterier-i-riksbyggen-brfviva.pdf) in Gothenburg, Sweden's second largest city, is drawing interest among several stakeholders. Thus, a study with a mainly Swedish scope can be considered to be of general interest.
Theoretical Background on Business Models
The business model is the logic of the firm for how it creates and captures value in a specific business.With technological advancements and new emerging businesses and markets, the business model has recently been viewed as a source of innovation, albeit complementary to traditional types of innovation such as product, process, or organizational [13].
Innovation of the business model is increasingly highlighted as equally important as the idea or the technology that enables the innovation, meaning that new ideas and technologies cannot generate a competitive advantage without a fitting business model.New products, services and technologies can be commercialized through different business models and accordingly may drive different performances [14].Therefore, the business model has become popular as a source of competitive advantage for the firm [15,16].Business model innovation may manifest both in terms of renewal of existing business models [17][18][19] or as a means for competing with multiple business models [20][21][22].
Innovation of the business model may involve changes in one or multiple business model components and their links to one another, and it is therefore generally viewed as a complex, emergent, and uncertain process [23,24].Despite many advantages of business model innovation highlighted in literature, firms may face substantial challenges and barriers towards working with change and transformation of their business models and in many instances, they are prone to failure.
Organizational barriers may manifest in different forms:
• Resistance to allocating resources to the new business model, especially if the new business model creates conflicts with existing assets and capabilities [14,25,26].
• Lock-in, manifested in switching costs to the new business model for customers or other stakeholders [27].
• Complications of developing a new business model in parallel with existing one(s) [28] and of managing multiple business models [21].
• Inertia due to uncertainty about the effectiveness of new business models [29] and difficulty anticipating the performance implications of the new business model ex ante [30].
Cognitive barriers [31] are related to:
• Filtering out ideas that are not in line with the dominant logic, due to managerial cognition that hinders envisioning alternative business models and understanding the opportunity inherent in business model innovation [14,32].
• Lack of top management leadership to envision business model innovation and to figure out the required structures, capabilities and processes for the new business model [18,33].
Realizing a need for change in the business model is not only related to top management leadership. It is also related to the distribution of authority and decision making in the management team. In companies where middle managers have decision-making authority and delegation for cooperation with external parties, the likelihood of sensing the need for business model innovation is higher [34].
Technical Background on EV Batteries and Recycling
Battery packs are made up of modules, with any number of cells, and a battery management system (BMS) [35]. They have different shapes and different chemistries. Some chemistries have higher specific power (more suited to power delivery, e.g., lithium iron phosphate (LFP)), others have higher specific energy (better for energy storage, e.g., lithium cobalt oxide (LCO)). Supply issues for individual substances could impede production. For example, there is a so-called bottleneck of cobalt because both mining and refining occur in only a few places, a couple of which are politically unstable (such as the Democratic Republic of Congo). In addition, cobalt is a by-product or co-product of gold and copper, making its production dependent on those markets [36]. Nickel-heavy chemistries that utilize less cobalt, such as lithium nickel manganese cobalt oxide (NMC), are however expected to increase [37].
The batteries reach the end of their functional first life once they have lost 20% to 30% of their capacity [38]. Exactly when this occurs depends on many factors, including:
• Consumer behavior when charging and discharging, and other usage patterns such as driving styles [39,40].
• Technical specifications of the battery, including the powertrain efficiency [41].
Sweden and the EU have legislation that requires recycling, and there is a growing interest in establishing systems for collection and management of used LIBs internationally [7]. Recycling technologies can be roughly categorized into three techniques: hydrometallurgical, pyrometallurgical and mechanical processes [42]. In most cases, a combination of these recycling techniques is used [7].
Hydrometallurgy is a chemistry-specific, leaching-intensive process that can recover lithium, aluminum, and other high-value materials [43]. The process is preceded by mechanical separation and crushing of batteries [42]. A solvent is added to the crushed batteries, and this mixture is filtered. Acid is then used to separate metals [35]. Either precipitation using an alkaline solution or electrolysis is used to recover the metals from the leach solution [42].
Pyrometallurgy is a thermal treatment process that includes pyrolysis, smelting, distillation, and refining [42]. High-value materials such as nickel, cobalt and copper can be recovered [43]. Batteries are shredded and slowly heated, after which plastics and solvents are burned in pyrolysis, where organic material is decomposed [42]. The remainder is smelted in a furnace and combined with limestone to create slag [43]. Metals are then separated through distillation [42]. Nickel, cobalt, and copper are recovered, while lithium and manganese usually end up in the slag [35].
The literature concludes that with state-of-the-art recycling, a large fraction of materials can be recovered: over 90% of the lithium, cobalt, manganese, nickel, copper and aluminum [44,45]. In current practice, however, recycling rates are much lower [45,46].
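To illustrate what recovery rates of this kind mean in terms of mass, the following is a minimal sketch that estimates recoverable metal mass from a single battery pack. The pack mass, composition and recovery efficiencies are illustrative assumptions, not values taken from the cited studies or from any specific battery.

```python
# Illustrative composition of a 250 kg EV battery pack (mass fractions are
# assumptions for this sketch, not measured values).
pack_mass_kg = 250.0
composition = {          # fraction of total pack mass
    "lithium": 0.02,
    "cobalt": 0.04,
    "nickel": 0.12,
    "manganese": 0.05,
    "copper": 0.09,
    "aluminium": 0.20,
}

# Assumed recovery efficiency per metal under a state-of-the-art process.
recovery_efficiency = {metal: 0.90 for metal in composition}

recovered_kg = {
    metal: pack_mass_kg * fraction * recovery_efficiency[metal]
    for metal, fraction in composition.items()
}

for metal, mass in recovered_kg.items():
    print(f"{metal:<10} {mass:6.1f} kg recovered")
total = sum(recovered_kg.values())
print(f"total recovered: {total:.1f} kg ({total / pack_mass_kg:.0%} of pack mass)")
```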
Disposition
In Section 2, the methodology used in the study is presented along with the material that the results are derived from.Section 3 presents results, and Section 4 provides an analysis of what the results mean for circular business models and a discussion on the requirements for circular business models to develop.Section 5 concludes the paper.
Methodology and Material
To understand the various aspects of circular business models for EV batteries, background material was gathered.A scientific literature study was performed to gather data about battery manufacturing, second life and recycling.Desk research was performed, on three themes: Technology scanning, market analysis and stakeholder and network analyses; all with respect to EV batteries, second life and recycling.The material from this initial research phase was then used to plan the interviews and the stakeholder workshops that are the main sources of data in this paper.In addition, informal phone and email conversations with respondents have been used as input.
Interviews
This paper mainly builds on data gathered in 20 interviews with EV battery experts from 16 Swedish or global stakeholders in the battery value chain.The interviews were semi-structured, with a tailored interview guide for each stakeholder.Common topics were barriers and opportunities for second life and/or recycling of EV LIBs, possible business models for more circular value chains, battery design, and perspectives on standards and regulation.Most of the interviews were done by phone, and some in person.They lasted between 30 and 90 min and were recorded and transcribed.All interviews had one or two respondents.The interviews were done in Swedish or English.Quotes from interviews in Swedish have been translated with the aim of capturing the essence of the respondent's statement.Table 1 shows the different categories of stakeholders represented in the study.All respondents are kept anonymous, as it would not add value to the paper to specify the companies or agencies that were interviewed.First, several interviews were strategically planned to include actors in different parts of the value chain, which was mapped during the initial desk research.The respondents were asked what stakeholders they believed would be valuable to interview, and some of their suggestions were then contacted and interviewed.This process continued.Such "snowball sampling" was administered so that newfound aspects could be identified along the way.This may mean that some aspects are less explored than they ought to be, and others may not have appeared at all.Despite this potential limitation, the goal to interview stakeholders from all parts of the value chain and obtain a view of barriers and opportunities from each stakeholder, was reached.There is an emphasis on interviews with OEMs, as they manufacture and sell the EVs that the batteries in question come from and thus have an interest to participate in studies like this one.Finding battery manufacturers with time to spare for an interview about second life proved difficult, which is why only one interview could be conducted in that segment.
Workshops
Two workshops were conducted with interviewees.The idea behind the workshops was to find solutions to the barriers found in the interviews, to identify pathways to make the most of the potential of extended EV-battery life cycles, and to discuss strategies and business models to achieve circular value chains.The first workshop focused on further understanding the main barriers identified in the interviews.The second workshop was designed for problem-solving; categorizing the identified barriers, exploring relationships between the different categories, and identifying the most critical barriers.During the workshops, it became clear that rather than finding solutions, the barriers to second life and recycling needed to be explored further.By documenting the discussions among the stakeholders, the workshops provided material that complements the interviews with regard to barriers to a more circular value chain, relationships between stakeholders, and views on legislation.
Data Analysis
The material from the interviews and the workshops were analyzed using content analysis.First, recurring categories were identified: Battery design, business models, costs, collaboration, logistics, producer responsibility, safety, standardization.Then, these categories were explored with respect to the different points of view of the stakeholders.Thus, common perceptions of barriers and opportunities could be identified.This material was then analyzed from a business model perspective, using theory presented in the next section.
Opportunities for Second Life
From our perspective, second life makes complete sense, if you can bring the cost right down and bring up the life [of the battery].(Energy storage supplier)
Actors Who can Create Business Solutions
Second use of EV batteries is often seen as an opportunity to delay disposal and recycling, which currently present burdens for OEMs, as well as an opportunity to squeeze value out of existing resources.Today's low volumes mean that the possibilities of such a business are small, but in the next few years volumes of LIBs on the market are expected to increase greatly.According to an industry expert, EV OEMs will have what it takes to seize a big part of the expanding energy storage market.They have the knowledge of their batteries and the best chance of maintaining or reestablishing control of the LIBs in their EVs.
Recycling industries also see new opportunities, in making themselves natural intermediaries between the vehicle end-user or OEM and a second life for the battery.Recycling actors already receive end-of-life vehicles, shred, sort and sell materials, and have facilities and a large network of dismantlers that can manage disposal of EVs.In order for a second-life battery to be useful, it however needs some degree of repurposing.A number of third-party entrepreneurs are currently attempting to establish second-life battery businesses, with repurposing at the heart of their business models.Some energy storage suppliers also work with second-life batteries, refurbishing them for new applications.
Applications for Second-Life Batteries
The desk research and interviews show that there are several possible applications for second-life batteries, which are listed in Table 2. Using batteries to store renewable electricity is gaining interest, and there are several demonstration projects where second-life batteries are used for this application. Using batteries for power demand reduction, thus being able to reduce transmission capacity and costs, is another area of increasing interest. A second-life battery has several benefits. Its extended life means that as much usefulness as possible is gained from a product that is otherwise rejected when 75-80% capacity remains. Studies show that capacity drops linearly down to 80% and then drops at a faster rate (e.g., ref. [47]). Some respondents emphasized that the battery might not be very useful after its first life in an EV, and several respondents commented on the uncertainty that surrounds the capacity drop after first life. Second life in an application other than an EV has not been studied. Given that the battery is used under other circumstances, it could be useful for a long time.
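A minimal sketch of the capacity-fade behaviour described above is shown below: linear fade down to 80% of original capacity, followed by accelerated fade. The fade rates, the position of the "knee" and the cycle counts are illustrative assumptions, not values from the cited studies.

```python
def remaining_capacity(cycles: int,
                       linear_fade_per_cycle: float = 1e-4,
                       knee_capacity: float = 0.80,
                       accelerated_factor: float = 3.0) -> float:
    """Return remaining capacity as a fraction of the original.

    Fades linearly until the 'knee' at `knee_capacity`, then fades
    `accelerated_factor` times faster (all parameters are assumptions).
    """
    cycles_to_knee = (1.0 - knee_capacity) / linear_fade_per_cycle
    if cycles <= cycles_to_knee:
        return 1.0 - linear_fade_per_cycle * cycles
    extra_fade = (cycles - cycles_to_knee) * linear_fade_per_cycle * accelerated_factor
    return max(knee_capacity - extra_fade, 0.0)

# A battery is typically retired from the vehicle at roughly 70-80% remaining capacity.
for c in (0, 1000, 2000, 2500, 3000):
    print(f"{c:5d} cycles -> {remaining_capacity(c):.0%} of original capacity")
```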
When buying a battery from a manufacturer for a vehicle, one wants assurance that the battery performs well in that particular application. There are requirements for the manufacturer, of what performance is required of the cells. But then other things [than first life] are not studied. It would be interesting to perform tests and see what the battery can do after 75%, or 80% [capacity], as that is somewhat unknown. (OEM)
A second-life battery can be considered a "good enough" product at a fair price for certain customers. For example, home storage of solar power produced in a household might require some charging during the day and some discharging in the evening. The requirements are very different from those of a vehicle, where the battery must also handle high power (in vehicles, requirements for high power rates and power bursts could also be met by supercapacitors [48][49][50]).
One of the major errors, I would say, is that the second usage company, they see it as if they get a bad product./ . . ./ But it's not.For the usage that they need, it's a product that is OK, it's going to be cheaper than if they buy the 100% health, not used battery, and they just need to make their business around it./ . . ./ Buying a 100% [capacity] battery for their usage would be an overkill, they would overpay.I think they will need to see it more as an opportunity to / . .
. / find [what] that they need for the right price. (OEM)
A second-life LIB might also be considered a safe product, as it is built for demanding conditions and has been tested thoroughly during its life in the vehicle.The repurposing process is demanding, as each cell needs to be controlled and the BMS needs to be set up to fit the battery's new surroundings and application, but then a second-life installation might actually be safer than batteries dedicated for home storage.
Technical Challenges
Despite the apparent reuse value remaining in the imminent piles of batteries, there are aspects that challenge the idea of reuse.EV LIBs are made by many different manufacturers with many different constructions, which include variations in number and type of cell, physical shape and chemistry.LIBs are not labelled with their specific chemistry, so neither third-party battery refurbishers nor recycling actors know which kind of LIBs they receive.In addition, each LIB has a tailored BMS which regulates critical functions of the battery.This means that large costs are often associated with repurposing.Standardization of diagnostics, health monitoring, packing and labeling could simplify the process, but as common standards could interfere with competition between manufacturers this is a sensitive issue.
Transport is another troublesome issue, as used LIBs can be considered to be hazardous waste.That means that transport is costly and highly regulated.Some logistics firms will not transport used LIBs, and air freight is not allowed at all.This is of course a problem for recycling as well as for second life.According to multiple respondents, transport is generally the most expensive part of battery recycling.This brings up the question of where markets are located -for EVs, for second-life solutions and for recycling.One respondent means that there might be a need for a global second-life LIB market, as for example there may be many EVs in Sweden but a low use for stationary electricity storage.The second-life batteries from Swedish EVs could be more valuable elsewhere.
Legislation to Ensure Recycling of LIBs
The European Union's Battery Directive [51] states that 50% of the weight of an EV LIB shall be recycled. According to the Swedish Environment Protection Agency, that target is met in Sweden. However, current legislation does not create incentives for further recycling, which could be achieved for example by specifying recycling requirements.
Lithium, cobalt, nickel / . . ./ are the important metals to recycle but the weight of them [is] just a fraction of the whole module, so there has to be an update on how to define / . . ./ the demands on the recyclability of the batteries. (OEM)
The actor that puts the battery on the market has producer responsibility, i.e., responsibility for providing a system for collection and recycling when the battery becomes waste. That responsibility can be transferred if the battery turns into a new product, with a new function or under a new brand. It is not always entirely clear which actor has the producer responsibility, and uncertainty about legal issues could discourage actors from engaging in second-life endeavors. In a workshop discussion with representatives from OEMs, the recycling industry and the research community, legislation and responsibility were discussed as the main issues to be clarified to stimulate more circular business models. Other respondents worry that EV batteries might be lost if given a second life: that with too many actors involved in the value chain, batteries might not end up in recycling at all.
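The point raised above about the weight-based requirement can be illustrated with a small sketch: a recycling process can meet a 50%-by-weight target while recovering only a modest share of the strategically important electrode metals. All mass figures below are illustrative assumptions, not data from the directive or the interviews.

```python
# Illustrative mass breakdown of a 250 kg battery pack after recycling (assumed values).
mass_streams_kg = {
    "steel_and_aluminium_housing": 70.0,
    "copper": 20.0,
    "plastics_recovered": 15.0,
    "electrode_metals_recovered": 22.0,  # Li, Co, Ni, Mn actually recovered
    "slag_and_losses": 123.0,            # not counted as recycled
}
pack_mass_kg = sum(mass_streams_kg.values())

recycled_kg = pack_mass_kg - mass_streams_kg["slag_and_losses"]
recycled_fraction = recycled_kg / pack_mass_kg

electrode_metals_total_kg = 55.0  # assumed total Li + Co + Ni + Mn content of the pack
metal_recovery = mass_streams_kg["electrode_metals_recovered"] / electrode_metals_total_kg

print(f"recycled by weight: {recycled_fraction:.0%}  (directive target: 50%)")
print(f"electrode metal recovery: {metal_recovery:.0%}")
```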
Uncertainties
Despite the potential for second life to be a good fit for several applications that are less demanding than an EV, there is currently no market for second life.Partly, that is because EV sales have been low until recently.But according to respondents, it is also very much due to uncertainty about the future: which LIB chemistries will be used, what will new batteries cost, and how will second-life batteries perform in different applications?
Some respondents argue that it would be better to recycle used EV batteries directly, instead of giving them a second life.The rationale is that with the expected technology development, the raw material would be put to better use in a new battery.
It is a no-brainer, understanding that batteries need to return [to the manufacturer] for recycling of metals. (Battery manufacturer)
However, due to uncertainty about future battery volumes and chemistries, investments in recycling processes are not easily accomplished.New battery chemistries are developed and produced, energy density is improving, and battery prices are falling.The cost and availability of different materials affect battery prices, but also the content of batteries.Resource concerns and recycling challenges depend on what materials are used.The cost of virgin-material batteries and the technological development affect the profitability of recycling and the demand for recycled material.
What I see, is that recycling companies are not prepared, at this point, to [make] large investments in technologies for recycling for example lithium until they see that the market prices, the commodity prices encourage that type of technology, because it's a very large investment./ . . ./ The recycling industry is following, quite closely, the OEMs.No one is really willing to take the risk of developing a large-scale infrastructure or technology for a certain type of battery chemistry when the battery chemistries themselves are actually changing.So this is one of those Catch 22 situations, who is going to be the first to take the initiative.(OEM)
Barriers for Circular Business Models
Analyzing the challenges and barriers for second life and recycling of EV LIBs from a business model perspective, they can be categorized along three aggregate dimensions: Cognitive, organizational, and technological, as shown in Table 3.
Table 3. Overview of barriers to second life, in a business model perspective.
Second life:
• Cognitive: Lack of interest in second-life applications that conflict with the existing business models; not realizing the potential value of second use in the existing market(s).
• Organizational: Regulatory uncertainties in relation to producer responsibility and the definition of the product during second life; not investing in collection of existing batteries due to low volumes; lack of collaboration along the value chain.
• Technological: Lack of standardization beyond the cell level, i.e., at module and pack levels; lack of knowledge on the remaining capacity after first life.
Recycling:
• Cognitive: Aligning investments with previous business models based on selling raw materials.
• Organizational: Risk of investment in large-scale automated processes when future technology advancements are uncertain.
• Technological: Variations in number and type of cell, physical shape and chemistry.
Cognitive barriers are related to decision makers being uncertain about how promising future business models centered on second life will be. They may be reluctant to invest in new business models that conflict with their up-and-running businesses due to potential mismatches with the company's long-term strategies. Organizational barriers are related to the adaptations needed to support development and scaling of new business models that cannot be supported with existing resources and capabilities and which require new ways of working, new flows of resources and information, new processes and structures, and so on. Technological barriers are exogenous and are related to the lack of standardization in the design of new batteries beyond the cell level, which makes preparing them for second life and recycling costly and complex, and to the uncertainty of how they will perform after first life (with respect to capacity loss).
Several barriers are related to the cognitive and organizational dimensions rather than the technological dimension. This is an interesting finding, which stands in contrast with the current practical and research focus, which attends closely to the technological dimensions and overlooks cognitive and organizational barriers to utilizing the technological advances. Understanding the relationships between different types of barriers, and whether certain barriers (e.g., technological) are antecedents to other types of barriers (e.g., organizational), is probably necessary in order to better understand how to achieve a circular EV battery value chain.
Four Business Model Scenarios
In the following section, four different scenarios to adopt circular economy principles for potential business models are conceptualized; see Figure 2 for an overview.These business models are different in relation to customer value proposition and the value network they require to function in [12].The customer value proposition determines the positioning of companies in the market according to their customer segments, customer relationships, and distribution channels [31,52].The value network defines the ways through which companies interact within their ecosystems and reorganize their own internal activities.With close collaboration between the OEM and the recycling company, the recycling actor can collect removed batteries from the cars after their first use, from workshops or dismantlers.In this scenario the recycling company performs both unpacking and recycling in an automated process which allows handling of large volumes of batteries with different designs.Moving towards this scenario requires investments from the recycling actors in scalable and automated recycling processes.This is currently perceived to be of high uncertainty given that such processes need to match future battery designs and chemistries which are not yet well defined.After the first use in a vehicle, diagnostics are performed by workshops or dismantlers, to decide whether the batteries are in good condition and have capacity for reuse in a car.The certified workshop, which is directly in communication with the OEM, performs refurbishment and repair of the battery which is then placed in a car in the same or a new market (e.g., with less intensive driving demands).Model (c) is already under test with an OEM that has fleet operators as their main customer segment.The OEM has a take-back system in place for collection of the cars after a period of use, refurbishment of the batteries and then putting the car in a market which requires less intensive driving for its second use.
(d) Circular model II: Battery production and use in vehicle + repackaging and second life in a different application + state of the art recycling.
After the first use in a vehicle, an early diagnosis is performed by dismantlers to decide whether the batteries have capacity for reuse in a car, whether they are fit for refurbishment, repacking and transportation for use in second-life applications (e.g., home electricity storage), or if they should be recycled. Based on this decision the battery may enter different flows. This process can decrease handling and transportation costs by assuring that the batteries will end up in the right place after their first use. For a transition to a second life, the battery needs to be repacked and the BMS needs to be adjusted or even replaced, which are additional activities that need to be incorporated in the business model. Model (d) is currently under test by a new venture, which designs and manufactures smart energy storage systems for households. The company has recently started a partnership with an OEM to reuse their EV LIBs in home energy storage units. This partnership is estimated to reduce the production costs of home energy storage (as provided by this new venture) by 30%.
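As a minimal sketch of the routing decision implied by models (c) and (d), the snippet below assigns a retired battery to reuse in a vehicle, repackaging for a second-life application, or recycling, based on an early diagnosis. The state-of-health thresholds and diagnostic fields are illustrative assumptions, not values taken from the interviews or the demonstration projects.

```python
from dataclasses import dataclass

@dataclass
class BatteryDiagnosis:
    state_of_health: float   # remaining capacity as a fraction of original
    pack_intact: bool        # no mechanical or safety damage
    bms_readable: bool       # diagnostics data can be read from the BMS

def route_battery(d: BatteryDiagnosis) -> str:
    """Decide the next step for a retired EV battery (thresholds are assumptions)."""
    if not d.pack_intact:
        return "recycle"                              # damaged packs go straight to recycling
    if d.state_of_health >= 0.85 and d.bms_readable:
        return "refurbish for reuse in a vehicle"     # circular model (c)
    if d.state_of_health >= 0.70:
        return "repackage for second-life storage"    # circular model (d)
    return "recycle"

print(route_battery(BatteryDiagnosis(0.88, True, True)))
print(route_battery(BatteryDiagnosis(0.76, True, False)))
print(route_battery(BatteryDiagnosis(0.60, True, True)))
```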
This last scenario, model (d), would require the highest degree of collaboration among the different stakeholders in the value network, including the OEM, dismantlers, recycling actors and second-life actors.A reflection from the second stakeholder workshop is that the greatest uncertainties around this scenario concern the product definition and how this may change during transition from first to second use.This is related to which legislation might apply.For instance, two EU directives may apply to a used EV LIB: The Battery Directive [51] and the End-of-life Vehicles Directive [53].For successful collaborations, there should be no ambiguity of product definition and applicable legislation.
Future Knowledge Development
As the previous sections show, there are several barriers to second use of EV batteries and even to improved recycling processes. Some barriers are of a technological nature, others have to do with interpretation and application of legislation, and yet others are related to the many uncertainties regarding second life of EV batteries. The analysis shows that with regard to business models, it is clear that many of the barriers are organizational or cognitive. This implies that for circular business models to become a reality, the actors in the EV battery value chain are required to take active roles. For them to be able to do that, it seems that they need better information. Some questions that arise when trying to establish a circular value chain are:
• What is the value of a second-life battery?
• How does a used EV battery perform in different stationary applications?
• For how long can a second-life battery be expected to be useful?
• What is the value of recycled battery materials?
• How should legislation be interpreted throughout a circular value chain?
• How can it be made clear who has producer responsibility at different stages?
• What are the consequences of second life, with regard to ecological and social sustainability?
It is likely that actors need answers to these questions in order to establish a strategy for battery second life.This study has demonstrated that there is currently not enough knowledge for even hypothetical answers to be provided, so further research is required in several areas.There is a lot of ongoing research, for example in the area of technical improvement of recycling processes.However, that research needs to be connected to technical, organizational and economic perspectives on battery production and use, in order to capture which strategies might hold more value.There are also several ongoing second-life demonstration projects.Research should follow them closely in order to learn about the potential and difficulties of using EV batteries in different environments.For circular business models to gain popularity, actual data on the value and environmental and social benefits are likely required.Research that builds on this study and delves deeper into understanding what barriers exist for different circular business models could be beneficial for actors along the value chain who wish to explore opportunities related to second life.Furthermore, research needs to approach the application of different directives with regard to producer responsibility and the safety of battery handling.This study shows that uncertainty about how to interpret legal documents could discourage actors from exploring circular business models.Perhaps even more importantly, clear definitions of producer responsibility at any given time are vital for ensuring that a battery will end up in recycling no matter how many stakeholders have been involved in its value chain.
Concluding Remarks
There is potential for actors throughout the battery value chain to explore the use of EV batteries in second-life applications. Once the batteries reach end-of-life, there is also potential for recycling processes to salvage more materials than occurs today. Enhanced recycling could have substantial environmental benefits. The paper shows that some actors consider second-life EV LIBs as potentially safe products, with reasonable economic value, that fit the requirements of electricity storage for a wide range of actors. There are business opportunities all along the value chain: OEMs might benefit from selling the used battery to a second-life actor instead of paying for disposal; battery refurbishers might grow their businesses, adapting used EV LIBs for second life in other applications; energy storage providers might offer solutions with smaller ecological footprints; recyclers might use their expertise in collecting and dismantling as part of the process from first life to second, and to recycle sought-after metals.
Yet, there are several barriers to both second life and improved recycling. In this paper, barriers are characterized as cognitive, organizational or technological. There are the cognitive barriers of not being very interested in new business models, or not finding enough value in second-life solutions; the organizational barriers related to investment risks and legal issues; and the technological barriers of a general lack of design standards and uncertainty in capacity loss after first life. These barriers may be alleviated by collaboration between actors in different parts of the value chain, sharing their expertise and learning from others.
OEMs naturally optimize batteries for first life, not for use thereafter. However, if OEMs were to collaborate with battery refurbishers, second-life users and recyclers from the start, it might be possible to find ways to simplify the path to second life and recycling, to make transfer through the value chain less costly and to learn how and when second life adds value. Through such collaborations, informal standards could develop. Collaborations might also make it easier for recyclers to predict future volumes of batteries and their chemistries. By including battery manufacturers in collaborations, requirements for recycled material to be used in battery production might be illuminated, thus possibly reducing the risk of investments in recycling processes. Collaborations throughout the value chain could likely simplify the issue of determining producer responsibility at different parts of the battery's life cycle, thus reducing the risk of batteries getting lost and not being recycled at end of life.
The recommendation for stakeholders is thus to seek collaboration with other actors in the battery value chain, so that they can explore new business opportunities and develop new business models together. Moreover, for society to achieve goals related to battery reuse and recycling, stimulating collaboration in battery value chains could be a good complement to stimuli focused on technology development.
Figure 1. The circular EV battery value chain.
(c) Circular model I: Battery production and use in vehicle + repair and refurbishing for second use in vehicle in the same or a new market + state-of-the-art recycling.
Table 1. Stakeholders participating in interviews and workshops.
Table 2. Possible applications for second-life batteries. | 2019-04-10T13:12:23.158Z | 2018-11-02T00:00:00.000 | {
"year": 2018,
"sha1": "b31f3ebad3634b6008ff6559f530386bf95014c7",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2313-0105/4/4/57/pdf?version=1541146117",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "55370d30af98a3beeb3aaf2706e946065f00dfcf",
"s2fieldsofstudy": [
"Business",
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
234196869 | pes2o/s2orc | v3-fos-license | Tunable conductance and spin filtering in twisted bilayer copper phthalocyanine molecular devices
We investigate theoretically the quantum transport properties of a twisted bilayer copper phthalocyanine (CuPc) molecular device, in which the bottom-layer CuPc molecule is connected to V-shaped zigzag-edged graphene nanoribbon electrodes. Based on a non-equilibrium Green's function approach in combination with density-functional theory, we find that the twist angle effectively modulates the electron interaction between the bilayer CuPc molecules. HOMO (highest occupied molecular orbital)–LUMO (lowest unoccupied molecular orbital) gap, spin filtering efficiency (SFE) and spin-dependent conductance of the bilayer CuPc molecular device could be modulated by changing the twist angle. The conductance reaches its maximum when the twist angle θ is 0° while the largest SFE is achieved when θ = 60°. The twist angle-induced exotic transport phenomena can be well explained by analyzing the transmission spectra, molecular energy level spectra and scattering states of the twisted bilayer CuPc molecular device. The tunable conductance, HOMO–LUMO gap and spin filtering versus twist angle are helpful for predicting how a two-molecule system may behave with twist angle.
Introduction
Metal phthalocyanines have attracted widespread attention in the past decade, and a growing number of experimental and theoretical studies have been devoted to them for their high stability, rich thermal and electrical properties, and an electronic structure and spin state that are easily modified by changing the central atom. [1][2][3][4][5][6][7] Molecular spintronic devices will play a key role in nanoelectronics. The bilayer system exhibits a diversity of spin-related phenomena [8][9][10][11] and twistronics 12 has become a hotspot in research on bilayer two-dimensional materials, in which the electronic behaviour can be manipulated by rotating the relative orientation of adjacent layers. The spatial localizations of electronic states lead to Moiré bands, [13][14][15] topological features, [16][17][18] insulating states [19][20][21] and unconventional superconductivity 22,23 due to strong correlations and interlayer coupling in Moiré superlattices formed by small-angle twisted layered 2D systems. 24 The spin orientation in a 3D topological insulator can be tuned by changing the incident angles. 25 The carrier transport properties can be tuned dramatically by periodic magnetic fields and Rashba spin-orbit coupling. 26 The conductance of a topological insulator quantum dot can be tuned by changing the Fermi energy, the width of the topological insulator constrictions and the quantum well bandgap. 27 However, there has been no systematic study of twist angles in molecular devices.
In this paper, based on the non-equilibrium Green's function (NEGF) method in combination with density-functional theory (DFT), 28,29 we study the quantum transport properties of a CuPc molecular device consisting of bilayer CuPc molecules with different twist angles, linked by two V-shaped zigzag-edged graphene nanoribbon (GNR) electrodes. We can control the local spin states and the associated quantum transport properties of the device by changing the twist angle. The results show that the HOMO-LUMO gap, spin filter efficiency (SFE) and spin-dependent conductance of the twisted bilayer CuPc molecules (TTBCPM) vary with the twist angle. The trends of conductance and SFE are almost opposite for large θ. The conductance is at its maximum for θ = 0° and the largest SFE occurs at θ = 60°. Physical mechanisms are proposed for these phenomena, and the twist-angle-dependent quantum transport is further understood by analysing the transmission spectra, molecular energy level spectra and scattering states.
Computational details
We investigate the transport properties of a twisted bilayer CuPc molecular device (TTBCPMD) by using DFT combined with the Keldysh NEGF formalism, as implemented in the Nanodcal transport package. 30 Fig. 1(a) and (b) show top and side views of the structure of the TTBCPMD. In the horizontal plane, we rotate the top CuPc around the central Cu atom by an angle θ relative to the fixed bottom CuPc. The device consists of three parts: left and right electrodes (which extend to ±∞) and a central scattering region, which contains the twisted bilayer CuPc molecules and left and right buffer layers. The left and right GNRs are symmetrically connected to two carbon atoms of the left and right benzenes of the bottom CuPc molecule, respectively. The top CuPc molecule is situated horizontally above the bottom CuPc molecule. The cutoff energy is set to 150 Rydberg, the electrode temperature is chosen to be 300 K and the k-point grid is set to 100 × 1 × 1. Electrodes of ZGNRs of 8 atoms in width are considered, with a supercell (4.9332 × 27.821 × 13 Å) subjected to periodic boundary conditions. We introduce a vacuum layer of about 10 Å in the y and z directions to eliminate interaction between the GNRs and the bilayer CuPc molecule in neighboring cells, and the edge atoms of both electrodes and the central region are saturated with hydrogen (H) atoms to remove the dangling bonds. 7 The exchange-correlation functional is described by the local density approximation proposed by Perdew and Zunger. The structures are not relaxed because a perfect twist-angle structure without relaxation helps us to more clearly isolate the physics induced by the angle between the bilayer CuPc molecules.
The spin-polarized zero-bias conductance is given by the Landauer-Büttiker formula 31 as

G_σ = (e²/h) T_σ(E_F),

where e is the electron charge, h is Planck's constant and E_F is the Fermi level. The conductance unit is G₀ = e²/h. For every spin state, T_σ(E_F) is given by 32

T_σ(E) = Tr[Γ_L G^R Γ_R G^A],

where Γ_L and Γ_R stand for the contact broadening functions related to the left and right electrodes, and G^R and G^A represent the retarded and advanced Green's functions of the central region, respectively.
The retarded Green's function of the central region is

G^R(E) = [E S_c − H_c − Σ^R_L − Σ^R_R]^(−1),

where H_c is the Hamiltonian matrix for the scattering region of the device, S_c is the overlap matrix, and Σ^R_L and Σ^R_R are the retarded self-energies of the left and right leads.
Under equilibrium conditions, the SFE is defined as

SFE = |T↑(E_F) − T↓(E_F)| / (T↑(E_F) + T↓(E_F)) × 100%,

where T↑(E_F) and T↓(E_F) stand for the transmission coefficients of the spin-up and spin-down states at the Fermi level, respectively. Scattering states are the eigenstates of the open two-terminal device structure linking z = −∞ to z = +∞ and are useful for analyzing the transport properties of the device. 30,32,33
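To make these quantities concrete, the sketch below evaluates the transmission trace formula, the Landauer-Büttiker conductance and the SFE for a toy two-site tight-binding scattering region. The Hamiltonians, wide-band self-energies and spin splitting used here are illustrative placeholders, not the CuPc/GNR system treated in the paper.

```python
import numpy as np

def transmission(E, H_c, S_c, sigma_L, sigma_R, eta=1e-9):
    """T(E) = Tr[Gamma_L G^R Gamma_R G^A] for one spin channel."""
    gamma_L = 1j * (sigma_L - sigma_L.conj().T)   # broadening from the left lead
    gamma_R = 1j * (sigma_R - sigma_R.conj().T)   # broadening from the right lead
    G_R = np.linalg.inv((E + 1j * eta) * S_c - H_c - sigma_L - sigma_R)
    G_A = G_R.conj().T                            # advanced Green's function
    return np.trace(gamma_L @ G_R @ gamma_R @ G_A).real

S_c = np.eye(2)                                   # orthogonal basis: overlap = identity
sigma_L = np.diag([-0.05j, 0.0])                  # placeholder wide-band self-energies
sigma_R = np.diag([0.0, -0.05j])
E_F = 0.0

# A spin-dependent on-site energy mimics the spin splitting of the molecule.
T = {}
for spin, onsite in (("up", 0.30), ("down", 0.02)):
    H_c = np.array([[onsite, -0.10], [-0.10, onsite]])
    T[spin] = transmission(E_F, H_c, S_c, sigma_L, sigma_R)

# Zero-bias conductance per spin in units of G0 = e^2/h equals T(E_F).
print(f"G_up = {T['up']:.4f} G0, G_down = {T['down']:.4f} G0")
sfe = abs(T["up"] - T["down"]) / (T["up"] + T["down"]) * 100
print(f"SFE = {sfe:.1f}%")
```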
Results and discussion
We considered twist angles from 0° to 90° because of the structural symmetry of the TTBCPMD. Fig. 1(c) shows the difference in total energy Δ_TE between the TTBCPMD at twist angle θ and the untwisted reference, i.e., Δ_TE = TE_θ − TE_0°; a small Δ_TE corresponds to a small TE_θ. We find that the total energy for θ = 0° is the highest and the total energy for θ = 70° is the lowest, which correspond to the largest and smallest total conductance for θ = 0° and θ = 70°, respectively. The copper atom in the CuPc molecule forms a bond with the N atoms along the transport direction, and this distance is larger than the distance between the copper atom and the N atoms in the lateral direction, as shown in Fig. 1(a). The asymmetry of the structure causes the interaction between the upper and lower CuPc molecules to differ between twist angles of 20° and 70°, so there is an energy difference between these two angles.

Fig. 1 The structure of the TTBCPMD. The red, blue, black, and grey spheres represent Cu, N, C, and H atoms, respectively. The molecule in the scattering region is CuPc; the electrodes are V-shaped zigzag-edged GNRs. (a) and (b) are top and side views of the molecular device. (c) The difference between the total energy of the TTBCPMD at twist angle θ and the untwisted reference, i.e., θ₀ = 0°.

Fig. 2(a) shows how the spin-dependent conductance varies with twist angle θ. The left inset shows the total conductance (total G), spin-up (SU, black line) conductance and spin-down (SD, red line) conductance for θ = 0°–10°. The total G and SD conductance show a downward trend for θ = 0°–20°, 30°–40° and 55°–70°, and an upward trend for θ = 20°–30°, 40°–55° and 70°–90°. The SU conductance in the right inset clearly shows a downward trend for θ = 0°–30° and 45°–60°, and an upward trend for θ = 30°–45° and 60°–90°, which corresponds to the change of transmission at the Fermi level with twist angle θ. We focus on the transmission at the Fermi level for θ = 10°, 20°, 30°, 40° and 45°, since the trend of conductance versus θ is almost symmetric with respect to θ = 45°. The inset shows the transmission of the device, with dense energy points, versus θ. At the Fermi level, the transmission decreases when θ goes from 10° to 20° and from 30° to 40°, and increases when θ goes from 20° to 30° and from 40° to 45° in the SD channel, as shown in Fig. 2(b). The above trend of the transmission corresponds to the change of the SD conductance. The transmission in the SU channel decreases when θ goes from 10° to 30° and increases when θ goes from 30° to 45° in Fig. 2(b), which corresponds to the change of the SU conductance; this can be described by eqn (1) when the devices are under equilibrium conditions and the transmission is summed along the x-direction.
The SFE reaches its maximum at θ = 25° and 60°. This can be understood by analyzing the spin-polarized transmission coefficient at the Fermi level at zero bias. Fig. 3 shows that SFE > 87% and T↑ ≪ T↓, which indicates that the SD channel mainly determines the SFE. Therefore, an appropriate value of θ should be used to obtain larger conductance and SFE, improving the practicality of a twisted bilayer CuPc molecular device in electronics and spintronics. The conductance and SFE change smoothly with θ. In order to understand these transport behaviors better, the molecular levels, HOMO-LUMO (H-L) gap and scattering states of the TTBCPMD are given in Fig. 4 and 5.
To understand the underlying mechanism of the conductance observed in Fig. 2(a), we calculate the energy spectra and HOMO-LUMO gap of the central scattering region (CSR) versus twist angle θ, as shown in Fig. 4. In Fig. 4(a), for the SU energy levels, as the twist angle increases, the HOMO−2 (H−2), HOMO and LUMO+2 (L+2) levels move away from the Fermi level, and the H−1 level approaches the Fermi level. The L and L+1 levels first approach and then move away from the Fermi level. For the SD energy levels, as the twist angle increases, the H−2, L, L+1 and L+2 levels move away from the Fermi level, and the H−1 level approaches the Fermi level. The H level first approaches and then moves away from the Fermi level. The dependence of conductance on the twist angle is basically opposite to the trend of the H-L gap: generally, a large H-L gap corresponds to a small conductance. We find that the energy spectrum and H-L gap are nearly symmetric with respect to the line θ = 45° in Fig. 4(d). Therefore, we only analyze the energy spectra and H-L gap in half the region, i.e., θ = 0°–45°. The L+2 and H−2 levels move away from the Fermi level markedly when θ changes from 0° to 10°, as shown in Fig. 4(b). The L level, the first orbital above L (L+1), the H level and the H−1 level in the SU and SD channels change slowly with θ. The trend discussed above corresponds to the energy dispersions of SU and SD carriers in the TTBCPMD versus θ. The H−2–L+2 gap increases with θ. The H−1–L+1 gap shows a downward trend and the H-L gap an upward trend in the SU and SD channels for θ = 0°–10° in Fig. 4(b). The H−2–L+2 gap and H-L gap in the SD channel increase and the H−1–L+1 gap decreases with θ for θ = 0°–20°. The energy spectra shown in Fig. 4(c), and the H−2–L+2, H−1–L+1 and H-L gaps in the SU and SD channels shown in Fig. 4(d), determine the corresponding conductance in Fig. 2(a). Fig. 5 shows the real-space scattering states of the TTBCPMD system at the Fermi energy E_F for incoming k_x = 0. The scattering of incoming state 1 of lead L (S1LL) and lead R (S1LR), averaged along direction z, for the SU channel at zero bias voltage in Fig. 5(a) becomes weaker when θ increases from 10° to 30°, and stronger when θ increases from 30° to 45°; S1LL and S1LR for the SD channel at zero bias voltage in Fig. 5(b) become weaker when θ increases from 10° to 20° and from 30° to 40°, and stronger when θ increases from 20° to 30° and from 40° to 45°. These results intuitively explain the change of the SU and SD conductance with increasing θ in Fig. 2(a): the SD conductance shows a downward trend for θ = 10°–20° and 30°–40° and an upward trend for θ = 20°–30° and 40°–45°; the SU conductance shows a downward trend for θ = 10°–30° and an upward trend for θ = 30°–45°.
The results of the theoretical simulations could provide guidance for experimental studies on how to achieve molecular devices with higher SFE. In experiments, different rotation angles of double-layer molecules can be achieved through probe manipulation. Therefore, we can obtain a large spin-dependent conductance and SFE by controlling the twist angle of the bilayer CuPc molecular device, which will be helpful for the design of molecular electronics and spintronics.
Conclusions
In conclusion, we have investigated the spin-dependent conductance, SFE, transmission spectrum, energy spectrum, HOMO-LUMO gap and scattering states of the TTBCPM by using the DFT-NEGF method. The local spin states and associated quantum transport properties of the TTBCPMD can be effectively controlled by changing the twist angle. The SFE of the device reaches its maximum of 98.85% at θ = 60° and the largest conductance is 0.0088 G₀ at θ = 0°. Physical mechanisms are proposed for these phenomena. These results indicate that the twisted bilayer CuPc molecular device holds great promise in molecular electronics and spintronics.
Conflicts of interest
There are no conflicts to declare. | 2021-05-11T00:06:35.414Z | 2021-04-07T00:00:00.000 | {
"year": 2021,
"sha1": "372c78108e26959fa6e2f64b59d27141490e1753",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2021/na/d0na01079k",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9fc83030d5185e0bb895d680138ded599cba0d58",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
233024739 | pes2o/s2orc | v3-fos-license | Multi-Unit Directional Measures of Association: Moving Beyond Pairs of Words
This paper formulates and evaluates a series of multi-unit measures of directional association, building on the pairwise ΔP measure, that are able to quantify association in sequences of varying length and type of representation. Multi-unit measures face an additional segmentation problem: once the implicit length constraint of pairwise measures is abandoned, association measures must also identify the borders of meaningful sequences. This paper takes a vector-based approach to the segmentation problem by using 18 unique measures to describe different aspects of multi-unit association. An examination of these measures across eight languages shows that they are stable across languages and that each provides a unique rank of associated sequences. Taken together, these measures expand corpus-based approaches to association by generalizing across varying lengths and types of representation.
Introduction
The goal of this paper is to generalize measures of linguistic association across both the direction of association and the number of units in a sequence. Association measures quantify which linguistic sequences co-occur in a significant or meaningful way (e.g., Church & Hanks 1990; Gries & Stefanowitsch 2004; Gries 2008). Traditionally, association has been viewed as a relationship between two lexical items. Given the utterance in (1a), for example, traditional measures represent the degree to which neighboring pairs of words such as (1b) are associated with one another. The problem is that this misses larger phrases such as (1c) that contain more than two words. The point of this paper is to expand the scope of association measures to sequences of varying length and level of representation while maintaining directional distinctions.
(1a) please give me a hand here
(1b) give me
(1c) give me a hand
(1d) give me a hand here
(1e) give [NOUN] a hand
(1f) give her a hand

The expansion to phrases of varying length creates a new problem that pairwise measures implicitly ignore: segmentation. For example, the phrase in (1d) contains the phrase from (1c) as a sub-sequence but also includes here. Association measures that are not confined to arbitrary lengths must be able to segment sequences in order to identify phrases like (1c) nested within larger sequences like (1a). Similarly, association can be generalized beyond word-forms to describe sequences such as (1e), in which a partially-filled slot, NOUN, allows greater descriptive generalizations that encompass phrases like (1f) as well as (1c). This creates an additional problem: is (1c) or (1e) the best representation for this phrase?
In order to better understand this problem, the paper develops and evaluates a series of multi-unit directional association measures, each building on the pairwise ΔP measure (Ellis 2007; Gries 2013), across eight languages (German, English, Dutch, Swedish, French, Italian, Spanish, Portuguese) at two levels of representation (lexical and syntactic). This evaluation importantly allows us to observe both (i) relationships between the measures and (ii) the stability of their behavior across languages.
between give and me should not be calculated using the association between give and anything other than me. The problem is that measures which take external information into account (e.g., Shimohata, et al. 1997; Zhai 1997) effectively prohibit the search for associated sequences across many lengths and types of representation.
Fifth, many sequences contain arbitrary segmentations. For example, give me from the longer phrase give me a hand is an arbitrary segmentation if it follows only from a length restriction. When measuring multi-unit association, we can (i) try to develop measures that are length agnostic so that a single set of measures covers all lengths or (ii) class results by length so that we find the most associated bigrams, trigrams, etc. in independent batches. The goal in this paper is to provide measures that generalize beyond sequence length. This generalization increases the impact of the segmentation problem: given a sequence of associated units, as in (1a) above, how can we determine whether (1a) as a whole is a collocation or whether it contains a sub-sequence like (1c) which is a collocation?
In this paper a sequence is a string of words for which we know only precedence relations. In the sequence the big red dog, for example, we observe that red comes before dog.
Such precedence relations are the only ones that we observe (i.e., semantic and syntactic relations are not directly available). An individual 'instance' is one occurrence of a sequence and may occur many times in a large corpus. Association strength, then, is a measure of how meaningful a particular precedence relationship is across instances. Thus, if red dog occurs together only a few times relative to the individual frequencies of red and dog, the precedence relationship that we observe in this particular instance does not generalize to strong association across the corpus. The essential difference between pairwise and multi-unit measures is that pairwise measures require only the concept of co-occurrence (i.e., that red and dog occur together in an observed string) while multi-unit measures require the additional concept of precedence relations: the big red dog is actually a chain of precedence relations that holds across individual pairs. We need to generalize from co-occurrence of units to co-occurrence of precedence relationships between units.
Within a sequence, individual units can be either lexical items (2a) or parts-of-speech (2b). We can generalize across types of representation by referring to sequences of units, not specifying if a particular slot is filled by a lexical item or by (any member of) a syntactic category. An abstract sequence is given in (2c), where each letter indicates a unit (e.g., a lexical item) with dashes separating slots in the sequence (i.e., positions occupied by units). The advantage of this abstraction is that we can define multi-unit measures without assuming the number of units. First, we need the concept of 'end-points': each sequence has a left and a right end-point: the first and last units in the sequence. For example, the left end-point in (2c) is A and the right end-point is G. Second, we need the concept of 'sub-sequences': any sequence of more than two units can be reduced to one or more contained sequences; for example, the sequence in (2c) includes among others the sub-sequences given in (2d) through (2f). To make the problem of sub-sequences concrete, the sentence in (2g) contains the multiword idiom give me a hand along with a number of sequences like me a hand that are not meaningful. The problem for multi-unit measures is to determine where the boundaries of an idiom begin and end. In other words, multi-unit measures must be able to indicate when a sub-sequence is more associated than the sequence as a whole. Third, we need the concept of 'neighboring pair': any two adjacent units within a sequence. Thus, the set of neighboring pairs in (2a) is: the big, big red, red dog.
The core of all the multi-unit measures developed in this paper is the pairwise ΔP: Let X be a unit of any representation and Y be any other unit of any representation, so that XA indicates that unit X is absent and XP indicates that unit X is present. We are concerned with association in both possible directions, left-to-right (LR) and right-to-left (RL). The LR measure is p(XP|YP) − p(XP|YA) and the RL measure is p(YP|XP) − p(YP|XA). This is simply the conditional probability of co-occurrence in the given direction (i.e., of Y occurring after X) adjusted by the conditional probability without co-occurrence (i.e., of Y occurring without X). In its original formulation, the ΔP was meant to indicate the probability of an outcome given a cue, p(XP|YP), reduced by the probability of the outcome in the absence of the cue, p(XP|YA). In linguistic terms, the outcome is co-occurrence of two units and the cue is the occurrence of only one of the units. In this paper, the direction of association being measured is notated using a subscript: left-to-right is written as ΔPLR and right-to-left as ΔPRL.
For the purposes of illustration, Table 1 defines a schematic co-occurrence matrix that will be used to show how the pairwise ΔP is calculated (for further details, see Gries 2013). This matrix allows an abstraction on top of observed co-occurrences (i.e., strings in which unit X and unit Y occur as XY). The number of occurrences of X and Y together is given by a. The number of occurrences of X without Y is given by b and of Y without X by c. To capture the size of the corpus, the number of units occurring without either X or Y is given by d. These four variables allow other quantities to be defined: the total number of occurrences of X, for example, is a + b (i.e., its occurrences both with and without Y). For the base pairwise measure, the LR conditional probability p(XP|YP) can thus be calculated as a / (a + c), or the number of cases of X and Y occurring together over the total number of cases in which Y occurs. Here, the presence of Y is the conditioning factor and this represents left-to-right association. The RL conditional probability p(YP|XP) can be calculated as a / (a + b), or the number of cases of X and Y occurring together over the total number of cases in which X occurs. Thus, the presence of X is the conditioning factor and this represents right-to-left association. The full formula for ΔPLR is given in (3a) and for ΔPRL in (3b).

(3a) ΔPLR = p(XP|YP) − p(XP|YA) = a / (a + c) − b / (b + d)
(3b) ΔPRL = p(YP|XP) − p(YP|XA) = a / (a + b) − c / (c + d)
Consider the phrase give me a hand, whose relevant frequencies from the Corpus of Contemporary American English (Davies, 2008) are shown in Table 2. For example, give occurs on its own 189,583 times and occurs preceding me 15,049 times. These frequencies, when put into a co-occurrence table, give the values shown in Table 3. When put into the formulas in (3), this provides a ΔPLR of 0.015 and a ΔPRL of 0.077. The purpose of this brief example is to show how individual frequencies are abstracted into co-occurrence frequencies which are then used to calculate the ΔP measure in both directions. The main problem addressed in this paper is that current work does not cover multi-unit sequences, does not adequately cover direction-specific measures, does not cover multiple types of representation, and does not adequately examine the behavior of different measures across languages. In order to address these gaps in the literature, the next section introduces a multilingual experimental set-up using data from the Europarl Corpus at two levels of representation (lexical and syntactic) and across eight languages (German: de, English: en, Dutch: nl, Swedish: sv, French: fr, Italian: it, Spanish: es, Portuguese: pt), with 650k speeches each. This allows each language to represent the same domain. We consider sequences containing between 2 and 5 units, with part-of-speech tagging performed using RDRPosTagger (Nguyen, et al. 2016).
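As a minimal sketch of this computation, the function below takes the four cells of the co-occurrence table and returns both directional ΔP values. The co-occurrence count and the total frequency of give come from the text above; the counts for me without give (c) and for the rest of the corpus (d) are not reproduced in this extraction, so the values used here are placeholders chosen only to show that the formulas land near the reported 0.015 and 0.077.

```python
def delta_p(a, b, c, d):
    """Directional pairwise Delta-P from a 2x2 co-occurrence table.

    a: X and Y co-occur, b: X without Y, c: Y without X, d: neither.
    LR: p(XP|YP) - p(XP|YA) = a/(a+c) - b/(b+d)
    RL: p(YP|XP) - p(YP|XA) = a/(a+b) - c/(c+d)
    """
    lr = a / (a + c) - b / (b + d)
    rl = a / (a + b) - c / (c + d)
    return lr, rl

a = 15_049                 # "give" directly preceding "me" (from Table 2)
b = 189_583 - 15_049       # "give" without a following "me"
c = 980_000                # placeholder: "me" without a preceding "give"
d = 411_000_000            # placeholder: corpus positions with neither word
lr, rl = delta_p(a, b, c, d)
print(f"Delta-P LR = {lr:.3f}, RL = {rl:.3f}")   # -> 0.015 and 0.077
```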
The first question is purely descriptive: how many sequences are there across languages, lengths, and levels of representation? How do frequency and dispersion constraints change the number of sequences? Dispersion, the distribution of a sequence across a corpus (i.e., Gries 2008; Biber, et al. 2016), is implicitly treated by processing the corpus in chunks of 500 speeches; any sequence that falls below a per-chunk frequency threshold (set at 10) is discarded.
This favors evenly dispersed sequences while maintaining efficiency. A further individual unit threshold (set at 50) removes sequences that contain infrequent lexical items.
The algorithm for extracting sequences has two passes: first, building an index of individual units and, second, building an index of sequences. In the first pass, all individual words are counted. Infrequent words are discarded; a word must occur roughly once every million words to be indexed. Given the Zipfian distribution of word frequencies, a very large number of less frequent words would need to be indexed without this threshold. The effect of the Zipfian distribution is much greater with multi-unit sequences because there are many more sequences than there are units. The individual frequency threshold means that no sequences containing words below that threshold need to be indexed, reducing the problem of large numbers of infrequent sequences. Because the algorithm processes small batches of the corpus in parallel, each batch also contains a very large number of very infrequent sequences whose total frequency cannot be known until all batches are processed. The per-chunk threshold allows the algorithm to discard infrequent sequences within each batch. The influence of the individual frequency threshold is shown in Figure 1. As this threshold is raised from 500 to 2,000 the number of sequences shrinks quickly, as represented by 'Sequences Before Threshold'. On the other hand, if we enforce an additional sequence frequency threshold of 1,000 the growth is much reduced, as represented by 'Sequences After Threshold'. This means that the individual unit threshold removes a large number of sequences, but that most of these removed sequences are themselves infrequent. In part, this follows from the fact that a given sequence can be no more frequent than its least frequent unit.
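A minimal sketch of this two-pass extraction, assuming the corpus arrives as a list of token-list chunks (e.g., 500 speeches per chunk); the thresholds mirror the values named earlier (individual unit threshold of 50 and per-chunk sequence threshold of 10), while the data structures are simplifications rather than the original implementation.

```python
from collections import Counter

def ngrams(tokens, n_min=2, n_max=5):
    """All contiguous sequences of 2-5 units, as tuples."""
    for n in range(n_min, n_max + 1):
        for i in range(len(tokens) - n + 1):
            yield tuple(tokens[i:i + n])

def extract_sequences(chunks, unit_min=50, per_chunk_min=10):
    # Pass 1: index individual units and discard infrequent words.
    unit_freq = Counter(tok for chunk in chunks for tok in chunk)
    vocab = {w for w, f in unit_freq.items() if f >= unit_min}

    # Pass 2: count candidate sequences chunk by chunk; counts that fall
    # below the per-chunk threshold are discarded before accumulation,
    # which implicitly favors well-dispersed sequences.
    total = Counter()
    for chunk in chunks:
        local = Counter(g for g in ngrams(chunk)
                        if all(tok in vocab for tok in g))
        total.update({g: f for g, f in local.items() if f >= per_chunk_min})
    return total
```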
Figure 1. Influence of Frequency Threshold for Individual Units for English
Dispersion is implicitly enforced by using a per-chunk sequence frequency threshold to remove those sequences which do not occur in a given part of the corpus; because the corpus is processed in many chunks, this reduces the prominence of poorly distributed sequences. The impact of this threshold is shown in Figure 2, where the individual threshold is held constant (at 2,000) and the per-chunk threshold is raised from 2 to 5. As before, two conditions are compared: 'Sequences Before Threshold' is the set of all sequences and 'Sequences After Threshold' is the set of all sequences that occur more than 1,000 times. We see, then, that increasing the per-chunk threshold sharply reduces the total number of sequences in the corpus.
However, the per-chunk threshold has a much reduced impact on more frequent sequences. This means, as before, that the thresholds used to maintain efficiency largely impact sequences that occupy the very long tail of the Zipfian distribution. Note that Figures 1 and 2 use different thresholds than all later figures in order to make these comparisons possible.
Figure 2. Influence of Per-Chunk Threshold for Sequences for English
The number of words contained in 650k speeches is given in Figure 3; because the speeches largely overlap, variations across languages are linguistic in nature. Although relatively similar, the corpora range from 45.9 million words (English) to 54.3 million words (French). The purpose of Figure 3 is to show that we expect some variation in sequences across languages simply as a result of having different numbers of units in the corpus. It turns out, however, that the number of sequences for each language varies more widely than this baseline, from 153k (German) to 298k (French) as shown in Figure 4.
Figure 3. Words in Corpus
As also shown in Figure 4, the number of sequences when parts-of-speech are included is much higher across all languages than the number of purely lexical sequences. Given that each language has a separate tag set and tagging model, it could be the case that finer-grained tags for some languages produce a larger number of sequences. However, there is also variation in the number of purely lexical sequences, ranging from 14,800 (Swedish) to 29,300 (Spanish). This is visualized in Figure 5 with a closer look at only lexical sequences, showing that tag sets are not the sole cause of this variation. In fact, the distribution of lexical and total sequences largely corresponds across languages, again indicating that the number of sequences is more than an artifact of the tag sets. The number of sequences also illustrates the importance of efficiency: the number of sequences grows quickly when length and representation constraints are removed. This is the case for each language, but the magnitude of the increase varies: the difference between the maximum and minimum number of sequence types across languages is 9k for length 2 but 69k for length 5. This shows, again, that the difference in sequences across languages is greater than the baseline variation in the number of words in each corpus. Further, a high number of sequence types caused simply by a higher number of categories (i.e., more word types or parts-of-speech) would result in a lower average sequence frequency. Instead, the opposite is the case, with a higher number of sequence types often co-occurring with a higher average type frequency (not shown). These variations, then, reflect differences across languages that justify the empirical examination of association measures across languages, even though including language as a dimension of variation complicates the analysis. A list of the parameters used for the experiments described in the following sections is given in Table 4.
Mean ΔP and Sum ΔP
A multi-unit sequence can be viewed as a sequence of neighboring pairs. Our first two measures treat multi-unit association as a function of the pairwise ΔP of those neighboring pairs: the Mean ΔP, µ(ΔP), is the average of the pairwise ΔP values across all neighboring pairs in a sequence, and the Sum ΔP, Σ(ΔP), is their sum. For example, Table 5 shows the top lexical sequences for the Σ(ΔP) measure in the LR direction: most sequences contain several units. On the other hand, the µ(ΔP) tends to favor shorter sequences because it looks at the average association across neighboring pairs: highly associated pairs will rise to the top and longer sequences will have difficulty matching simple pairs. In both cases, length's main influence is that longer sequences can better tolerate weak links. For example, the top sequences for Σ(ΔPLR) include two phrases that contain in order to make and six that contain of the european union. The high association of these sub-sequences tends to promote any sequence that contains them. This illustrates the segmentation problem: when is a sub-sequence better than the sequence as a whole? Note that capitalization is not used in Table 5; this is because association measures are calculated on lower-case representations. What is the relationship between the sequence rankings produced by the µ(ΔP) and Σ(ΔP) measures? The top sequences in Table 5 suggest that the two measures produce significantly different rankings, but this is actually not the case. Quantitative relationships between these and other measures, however, will be considered in more detail only in Section 5.
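The following sketch computes both measures over the neighboring pairs of a sequence; tables is a hypothetical mapping from a word pair to its (a, b, c, d) co-occurrence cells, standing in for whatever index the original implementation builds.

```python
def pairwise_lr(pair, tables):
    """LR Delta-P of one neighboring pair, looked up from co-occurrence cells."""
    a, b, c, d = tables[pair]
    return a / (a + c) - b / (b + d)

def mean_sum_dp(sequence, tables):
    """mu(Delta-P) and Sigma(Delta-P) over a sequence's neighboring pairs."""
    values = [pairwise_lr(p, tables) for p in zip(sequence, sequence[1:])]
    return sum(values) / len(values), sum(values)
```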
Minimum ΔP
The first two multi-unit measures reveal the problem of sub-sequences: many of the top sequences contain the same neighboring pairs. The Minimum ΔP, or M(ΔP), tries to identify weak links within a sequence. The idea is that such weak links provide a quick check to see if a sequence contains unassociated material. For example, if the Σ(ΔP) of in order to make and in order to make it are the same (at this level of precision), then the final pair make it clearly does not add to the overall association of this sequence: it is a weak link. This is formalized in (5), where NP is again the set of neighboring pairs in a sequence and M(ΔPLR) is simply the minimum observed across all neighboring pairs. Thus, the M(ΔP) on its own is not an association measure because it simply finds the weakest link in a chain of pairwise association values. But, when combined with other measures, it provides a way of filtering out sequences with segmentation problems.
(5a) NP = Set of Neighboring-Pairs in Sequence
(5b) M(ΔPLR) = min({ΔPLR(p) : p ∈ NP})

For example, Table 6 shows the same rankings as Table 5, this time with sequences containing an M(ΔPLR) of less than 0.01 removed. The µ(ΔPLR) in Table 5 had only individual pairs among its top sequences; it turns out that M(ΔP) is always the same as µ(ΔPLR) and Σ(ΔPLR) for sequences of length two. Thus, only the filtered and unfiltered top-ranked sequences for Σ(ΔPLR) are shown in Table 6. On the one hand, the repeated sub-sequences from (6) share their left end-points (i.e., i would). Given the sequence in (7c), the formula is shown for RE(ΔP) in (7d) and for RB(ΔP) in (7e). The difference between these variants is in the sub-sequences they are comparing.
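Continuing the sketch above (and reusing its pairwise_lr helper and hypothetical tables index), M(ΔPLR) and the 0.01 cutoff used for Table 6 can be written as:

```python
def min_dp(sequence, tables):
    """M(Delta-P LR): the weakest pairwise link in the sequence, as in (5b)."""
    return min(pairwise_lr(p, tables) for p in zip(sequence, sequence[1:]))

def drop_weak_links(ranked_sequences, tables, threshold=0.01):
    """Keep only candidates whose weakest link clears the cutoff."""
    return [s for s in ranked_sequences if min_dp(s, tables) >= threshold]
```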
If a sub-sequence has a higher mean association value than the full phrase, this measure will have a value near or even below zero. The closer the value is to zero, the more the full sequence represents a poor segmentation. For example, the phrase in (6a) has an RE(ΔPLR) of 0.414, showing that it improves upon its immediate sub-sequence. The phrase in (6c) has an RE(ΔPLR) of 0.380, showing that it also improves upon its immediate sub-sequence. However, the incomplete phrase in (6b) has a much lower RE(ΔPLR) of only 0.039. We see from this example a case of multiple nested sequences, none of which have obvious weak links but which we still need to distinguish between. The Reduced class of measures allows us to quantify this aspect of association, giving a high ranking to (6a) and (6c) but a low ranking to (6b).
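Since formula (7) is not reproduced in this extraction, the sketch below is only one plausible formalization of the Reduced measures: the gain in mean association of the full sequence over its immediate sub-sequence, reduced at the end (RE) or at the beginning (RB). It again assumes the pairwise_lr helper and tables index from the earlier sketches.

```python
def reduced_dp(sequence, tables, side="end"):
    """RE/RB as the mean-association gain over the immediate sub-sequence.

    A value at or below zero flags the full sequence as a poor segmentation
    relative to the sub-sequence it contains. Assumes the sequence has at
    least three units, so that the sub-sequence still contains a pair.
    """
    def mean_dp(seq):
        vals = [pairwise_lr(p, tables) for p in zip(seq, seq[1:])]
        return sum(vals) / len(vals)

    sub = sequence[:-1] if side == "end" else sequence[1:]
    return mean_dp(sequence) - mean_dp(sub)
```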
The top lexical sequences for Σ(ΔPLR) are shown in Table 7. A chain of pairwise associations, however, has no memory: so long as each link is strong, the likelihood of a particular series of links (i.e., the phrase in 8a) is not taken into account. How can we find sequences whose association as a chain of precedence relations is stronger than their purely pairwise association? The Divided class of measures is our first attempt to capture chains of precedence relations by viewing each sequence as a pair consisting of one end-point and the rest of the sequence as a single unit. For example, the phrase in (8a) can be viewed as its left end-point paired with the remainder of the sequence, (A|BCDE); this is the DB measure. The DE measure makes a pair out of the right end-point and the remainder of the sequence: (ABCD|E). These represent the conditional probability of encountering the remainder of the sequence when given part of the sequence. The idea is that strong collocations can be quantified by how much one end-point selects the remainder of the sequence. Going back to the phrase in (8a), the individual pairwise links between these units are weak, as discussed above; the former has a pairwise association (LR) of 0.011. However, given former yugoslav republic it is very likely to have been preceded by the; and given the former yugoslav it is very likely to be followed by republic. As a result, this phrase is highly ranked by DB but not by the measures previously discussed.
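A sketch of the Divided measures under the reading above: the sequence is treated as a single pairwise table whose two 'units' are one end-point and the remainder. Only the LR variant of the derived ΔP is shown, and freq (frequencies of units and sub-sequences), seq_freq (frequency of the full sequence) and corpus_size are hypothetical inputs.

```python
def divided_db_de(sequence, freq, seq_freq, corpus_size):
    """DB: (A|BCDE) - how strongly the left end-point selects the remainder.
    DE: (ABCD|E) - how strongly the core selects the right end-point.
    Both reuse the pairwise Delta-P over a derived 2x2 table."""
    def dp_lr(a, b, c, d):
        return a / (a + c) - b / (b + d)

    a = seq_freq[tuple(sequence)]

    # DB: pair = (left end-point | rest of the sequence)
    left, rest = sequence[0], tuple(sequence[1:])
    b, c = freq[left] - a, freq[rest] - a
    db = dp_lr(a, b, c, corpus_size - a - b - c)

    # DE: pair = (core of the sequence | right end-point)
    core, right = tuple(sequence[:-1]), sequence[-1]
    b, c = freq[core] - a, freq[right] - a
    de = dp_lr(a, b, c, corpus_size - a - b - c)
    return db, de
```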
The top ranked sequences from DB(ΔPLR) are shown in Table 8, again using M(ΔP) to filter out weak links. Each of these phrases has a very high value for DB but would not have been captured as a collocation given only a series of pairwise links (i.e., their Σ(ΔP) is rather low).
Further, neither of the Reduced measures is able to capture these non-pairwise patterns, as shown by the generally low values for RB(ΔP) and RE(ΔP) in Table 8. The E(ΔP) measure considers only a sequence's end-points, defined as in (10b) and (10c), so that the pairwise association between the end-points in sequences of varying length can be defined as in (10d). In this case, if the end-points are not observed to co-occur, a value of zero is given.
Given this constraint, that the end-points must in fact co-occur, the E(ΔP) has limited coverage: in the Europarl dataset for English, only 650 multi-unit lexical sequences have co-occurring end-points (out of 8,850 lexical sequences with more than two units). Of these, however, 611 would not have been ranked highly on the previous tables, either because none of the measures ranks them highly or because they contain a weak pairwise link. Filtering for weak links is unnecessary here because sequence-internal association is irrelevant. Thus, none of the selected sequences in Table 9 would have been identified as a meaningful sequence without the E(ΔP). Note that all sequences which share a pair of end-points receive the same value for this measure (e.g., the second world and the arab world both receive a value of 0.353). The sequences shown in Table 9 are selected, rather than showing the full ranking, because there are many variations on these templates: for example, the full ranking includes 18 types of councils. Turning to the direction-based measures, Table 10 shows each neighboring pair from (11a) with its LR and RL association values and their PD. The CS simply sums the PD column, for a value of 0.447. The CC is the number of occurrences of the least common direction. Here, only one pair has a negative value (indicating a dominant RL association), so that the direction changes only once (i.e., CC = 1). The first use for these measures is as an additional filtering mechanism. For example, Table 12 shows the top and bottom of the rankings produced by CS (the top sequences are filtered by M(ΔPLR) and the bottom sequences by M(ΔPRL)). Although asymmetries in directional pairwise association were one of the starting points for this paper, this is the first time we have been able to use these asymmetries to our advantage in distinguishing between different directions of association on a single scale. We now calculate all of these measures for the phrase the european union budget, which consists of three neighboring pairs shown in Table 14 with their LR and RL values. Also shown in Table 14 are the end-points (the budget) and two sub-sequences (the european union and european union budget). The correlation between each pair of measures for English is shown in Table 15, with left-to-right association below the diagonal shaded in blue and right-to-left in italics above the diagonal shaded in green. Darker shades indicate higher correlation; the legend is shown below the table.
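Before turning to the relations between measures, here is a sketch of the two direction-based measures just illustrated; it assumes PD is the signed per-pair difference ΔPLR − ΔPRL and that tables maps each neighboring pair to its co-occurrence cells, as in the earlier sketches.

```python
def cs_and_cc(sequence, tables):
    """CS sums the per-pair direction differences; CC counts how many
    pairs are dominated by the minority direction within the sequence."""
    def pd(pair):
        a, b, c, d = tables[pair]
        lr = a / (a + c) - b / (b + d)
        rl = a / (a + b) - c / (c + d)
        return lr - rl

    diffs = [pd(p) for p in zip(sequence, sequence[1:])]
    cs = sum(diffs)
    n_rl = sum(1 for x in diffs if x < 0)    # pairs with dominant RL association
    cc = min(n_rl, len(diffs) - n_rl)        # occurrences of the least common direction
    return cs, cc
```

For the european union budget, with one of three pairs dominated by RL association, this returns CC = 1, matching the worked example above.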
Relations Between Directions and Measures
The only two measures that are very highly correlated (above r = 0.75) are μ(ΔP) and Σ(ΔP); this very high correlation holds in both directions. The other area of overlap is between RB(ΔP) and RE(ΔP), on the one hand, and μ(ΔP) and M(ΔP), on the other hand (these measures are of course also correlated with Σ(ΔP)). The correlations between these measures in both directions are around 0.50. This is because, as sequence length increases, the number of pairs contributing to the summed components of the Reduced class of measures also increases. The larger conclusion from these correlations, however, is that most of the measures produce different sequence rankings: each of the measures captures a particular pattern of multi-unit association and thus highlights aspects of association that may be missed by other measures.
Stability Across Languages and Representation Types
How stable are these measures across languages and representations? This section looks at properties of the distribution of each measure as more direct evidence of cross-linguistic variation: each language has a different set of sequences, so we cannot compare ranks of sequences. Instead, we compare their distributions. This is important for understanding the behavior of association measures. So far we have been examining lexical, syntactic, and mixed sequences together. Here we separate lexical and syntactic sequences. The question is whether measures of association are able to generalize across types of representation. To answer this question, we compare each measure (left-to-right) under two conditions: first, using only lexical sequences; second, using only part-of-speech sequences. For each measure, a flat line (across languages) indicates that a particular property of the distribution is consistent. The distance between the red and blue points indicates whether a particular property of the distribution is consistent across lexical and syntactic sequences. While there are many language-specific and measure-specific observations that could be made using Figure 9, for example that French syntactic sequences are an outlier in their mean value, we focus instead on consistency across languages and representations in order to identify areas in which results from smaller studies may be insufficient.
Kurtosis, the degree to which a distribution is peaked, is consistent across representation types and languages for most measures. The two outliers are Italian lexical sequences for E(ΔP) and Spanish syntactic sequences for DE(ΔP), both of which have significantly higher peaks. This means that these two categories of sequences are more heavily centered around their mean values. In the first case, this means that Italian lexical sequences are less likely to have associated end-points (because the mean here is zero). In the second case, this means that Spanish syntactic sequences are less likely to have their core components predict the right end-point. Beyond these two exceptions, however, we see that the measures remain consistent across languages and representation types in their kurtosis.

Figure 9. Kurtosis, Skew, and Mean Across Languages for LR: Lexical (Blue) and Syntactic (Red)

Skewness, the degree to which a distribution is right-tailed or left-tailed, is also relatively consistent across languages, although here we see a slight separation between lexical and syntactic sequences: in all cases except the Divided class, lexical sequences have a more right-tailed distribution than syntactic sequences. The two exceptions are the same as before: Italian lexical sequences for E(ΔP) and Spanish syntactic sequences for DE(ΔP). The Italian lexical sequences for the E(ΔP) are not as much of an outlier here: lexical sequences across languages are significantly more right-tailed than syntactic sequences for this measure. This means that there are fewer sequences with high values for E(ΔP) among lexical sequences, an outcome that makes intuitive sense because many phrases like blue paper pizza have lexical end-points that are unlikely to co-occur but syntactic end-points that are likely to co-occur (i.e., ADJECTIVE - NOUN).
Mean, the center value of the distribution, is more consistent across languages and conditions than skew, with most plots being flat with very close lexical and syntactic sequences.
There are three exceptions to this: First, the mean value of syntactic sequences is lower for μ(ΔP), indicating that syntactic sequences are generally less associated. Second, French syntactic sequences are an outlier in many of the measures, showing a generally lower mean than syntactic sequences for other languages. Third, DB(ΔP), while consistent across languages, has a significantly lower mean for syntactic sequences in general.
This sort of analysis is important because we want to generalize these measures across languages and representations, but this requires that the measures be relatively consistent in their behavior. Many studies do not cover multiple languages, so each of the exceptions noted above would be viewed as a measure-specific variation on smaller datasets. As shown in the external resources accompanying this paper, the influence of frequency weighting and of using the unadjusted conditional probability as the base measure is also consistent across languages.
This shows that these measures generalize well across conditions.
Conclusions
The motivation for this paper has been to generalize association measures across varying sequence lengths and levels of representation. The problem is that generalizing across different lengths creates segmentation problems and generalizing across type of representation creates very large numbers of sequences. Both the qualitative analysis (Section 4) and the large-scale quantitative analysis (Section 5) suggest that the measures developed here capture different aspects of multi-unit association. This implies that we are unlikely to find a single measure that captures all facets of multi-unit association. The current approach produces a vector of association values for each sequence, one or more of which can reveal meaningful or otherwise interesting collocations. For example, in Section 4 we used the M(ΔP) and CC(ΔP) measures to filter results from other measures, thus combining multiple measures in a simple way. This is an expanded version of earlier suggestions of using tuples of association, frequency, dispersion, and entropy (Gries 2012). The studies in this paper strongly suggest that a vector-based representation is important once we leave behind pairs of lexical items for sequences of varying lengths and levels of representation.
A vector-based approach complicates the use of association measures because we now have 16 measures producing 16 distinct sequence rankings. In order to make sense of these measures, we take up the idea of filtering in Table 16 by presenting a list of top LR and RL sequences produced by combining the measures into a single direction-specific feature ranking.
In order to filter sequences, first, we have 'constraint' measures that must be satisfied: sequences that have weak links are removed from the ranking; this is defined as an M(ΔP) that falls below 0.01. Sequences that have shifting directions of association are also removed from the ranking; this is defined as a CC(ΔP) greater than 1. Second, we have 'ranking' measures: for the remaining sequences, we represent each one using its highest direction-specific measure. For example, if a sequence has an E of 0.04, an RB of 0.004, and a DB of 0.005, then it is represented using E, the measure which has the maximum value across all individual measures representing that sequence. This results in the sequences shown in Table 16. First, as the sequence counts in Figure 4 would lead us to expect, syntactic sequences are very common and thus tend to have lower association values than lexical items. Second, in both directions the sequences fall into multiple linguistic categories: complex noun phrases (e.g., the european union), complex verb phrases (e.g., able to VERB DETERMINER), and partially-fixed argument structures (e.g., disputes arising from, i would like to). This range of sequence types shows the robustness of a vector-based approach to association.
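A sketch of this filter-then-rank combination; the measure names in the dictionary are shorthand for the measures above, the candidate vectors are hypothetical, and each candidate is assumed to carry at least one ranking measure.

```python
def rank_candidates(candidates, m_min=0.01, cc_max=1):
    """candidates: {sequence: {"M": ..., "CC": ..., "E": ..., "RB": ..., ...}}.

    Constraint measures (M, CC) filter; the remaining sequences are scored
    by the maximum of their ranking measures and sorted descending."""
    ranking_keys = ("E", "RB", "RE", "DB", "DE", "mean", "sum")
    scored = {}
    for seq, m in candidates.items():
        if m["M"] < m_min or m["CC"] > cc_max:   # drop weak or unstable sequences
            continue
        scored[seq] = max(m[k] for k in ranking_keys if k in m)
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)
```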
To put vectors of association measures into a wider context, how do they compare with word embeddings (e.g., Erhan, et al. 2010; Pennington, et al. 2014)? First, the items defined are sequences rather than single units. Second, and more importantly, word embeddings are relations between units or sequences and their context (i.e., skip-grams along dimensions of common lexical items for each unit) while vectors of association values are relations between units in a sequence regardless of context. Thus, it is plausible to conceive of a two-staged approach in which (first) a vector of association values is used to identify those sequences which are interesting or meaningful, and then (second) word embeddings are used to measure the similarity of the contexts in which these sequences occur. In short, association values indicate whether and how a sequence is meaningful while word embeddings indicate whether and how individual units are meaningful; there are many relationships between these two corpus-based representations that deserve further exploration.
Appendix 1: Comparing the Unweighted and the Frequency Weighted ΔP
This section examines the difference between the raw ΔP and the frequency-weighted ΔP: how does this change the overall distribution of sequences? We are interested in this comparison because the combination of association and frequency has the potential to resolve theoretical conflicts between the relative importance of these two types of measures (e.g., Bybee 2006; Gries 2012). On the other hand, by merging both measures together (through multiplication), frequency weighting represents sequence A that has an association score of 0.9 and a co-occurrence frequency of 50 in exactly the same way that it represents sequence B that has an association score of 0.045 and a co-occurrence frequency of 1000. We have seen in the main paper itself that frequency weighting has a different qualitative effect for each measure; but that analysis considered only the top ten lexical sequences for each measure in a single language.
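The conflation described above is easy to see in code: with weighting by multiplication, as stated in the text, the two hypothetical sequences A and B become indistinguishable.

```python
def weighted_dp(delta_p, cooc_freq):
    """Frequency weighting by simple multiplication."""
    return delta_p * cooc_freq

# Sequence A: strong association, rare.  Sequence B: weak association, common.
print(weighted_dp(0.9, 50), weighted_dp(0.045, 1000))   # both ~45.0 -> identical
```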
Here we look at all sequences across eight languages.
Our first approach to the comparison is to look at the agreement in the ranking of sequences with and without frequency weighting using Spearman correlations. High correlations mean that the conditions rank sequences in a similar way, but low correlations mean that there is a difference that needs to be investigated further. This is shown in Figure A1.

[Figure A1. Spearman Correlations Between Raw and Frequency Weighted Rankings Across Languages]

[Figure A2. Normalized Distribution of Raw (Blue) and Frequency Weighted (Red) Measures]
The basic generalization is that the unweighted ΔP measures have a distribution much more peaked around a few values (i.e., most association values cluster around 0) while the frequency-weighted measures are more evenly distributed (i.e., there are more values that are very positive or very negative). The summed weighted values have a particularly large range.
The distributions across languages are similar, which again indicates an absence of languagespecific effects. Here we also see that the LR and RL distributions are quite similar, so that there are no direction-specific effects influencing the distribution that need to be explored further.
The purpose of the analysis displayed in Figures A1 and A2 is to identify where frequency weighting has a strong influence and to determine whether this influence is consistent across languages. The conclusion is that it does have a consistent influence on some measures, specifically μ(ΔP). The initial conclusions from the English examples discussed in the main paper indicate that the unweighted measure favors sequences that may be rare but which always occur together in the dataset (i.e., named entities such as Porto Alegre). The weighted measure, however, favors sequences that are both associated and contain individual units that are highly frequent (i.e., idiomatic phrases such as in order to). These sequences likely have lower association, in the sense that "in" occurs in many other collocations, but are promoted by their sheer frequency.
This raises two considerations: First, which measure should we use? The answer here depends on the task: if we want to find named entities, then the unweighted measures seem to perform better; if we want to find grammaticalized sequences, however, the frequency weighted measures seem more appropriate. Second, what is the cognitive status of frequency weighted association? Are there other methods of combining association and frequency that correspond better to a cognitive process that language learners use to grammaticalize structure from observed usage? While this is a matter for future work, one approach is to employ both sorts of measures for the task of learning grammatical structures and evaluate which produces the more accurate representations. For example, if frequency weighted association consistently reveals grammaticalized structures more clearly than raw association, this would provide one piece of evidence that frequency and association, combined in this way, have a certain cognitive reality.
This is a question for future work, however, and the purpose here is to identify where and how robustly these conditions differ in order to identify where such future work should focus.
Appendix 2: Comparing the ΔP with Conditional Probability
The pairwise ΔP that forms the core of each of the multi-unit measures subtracts the conditional probability of one unit occurring without the other from the conditional probability of both units occurring together. The next task is to examine the influence that this adjustment has: how would the behavior of these measures change if we simply used the conditional probability itself as the core measure? We start by looking at the similarity in sequence ranks between these two conditions, using a heat map of Spearman correlations across languages in Figure A3. The point of this visualization is to reveal those contexts in which the conditions differ and which thus merit further examination. We again see relative consistency across languages, with the variation occurring across measures. In this case, the only measures showing low agreement are the M(ΔP) and the E(ΔP), in both directions.
[Figure A3: Correlation Between ΔP and Conditional Probability Across Languages]
The ΔP controls for the presence of the outcome without the cue. Another way of looking at this adjustment is that it controls for the baseline probability of the second unit occurring after any generic unit in the corpus. This baseline creates negative values for cases in which the current pair co-occurs less frequently than the baseline. For example, given the sequence give me, a negative value for the ΔP would indicate that me is less likely to follow give than any random unit in the corpus. It is not surprising that this adjustment has a significant influence on the M(ΔP), then, because this adjustment highlights the presence of weak links. The low correlation between conditions for this measure shows that the ΔP is actually doing what it is meant to do: reveal cases in which observed association is accidental.
The other measure in which the conditions differ is both directions of E(ΔP), which is meant to find sequences that have fixed end-points but variable internal units. The particularly low correlation here is because the ΔP takes on negative values when the end-points are not frequently observed together. In these cases, the probability of the outcome without the cue is much higher than the probability of the outcome with the cue. Again, this is a scenario in which the ΔP excels at measuring the particular property that is highlighted by the E(ΔP) measure.
In the end, then, the ΔP and the conditional probability are quite similar except in cases where the ΔP excels in not over-estimating the attraction between units. This pattern is stable across languages, as before, which gives us confidence that the ΔP actually does provide an improved core measure rather than just exploiting a property particular to the English data on which it has previously been evaluated. | 2019-02-19T14:07:20.258Z | 2018-10-05T00:00:00.000 | {
"year": 2021,
"sha1": "824dde729130d6737e90fc9c53a9b22ad062c1ea",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2104.01297",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d3dcbc7368ebe13bcf4aaf628fd01aece1923266",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
119480371 | pes2o/s2orc | v3-fos-license | Transverse Vortex Dynamics in Superconductors
We experimentally characterize the transverse vortex motion and observed some striking features. We found large structures and peaks in the Hall resistance, which can be attributed to the long-range inhomogeneous vortex flow present in some phases of vortex dynamics. We further demonstrate the existence of a moving vortex phase between the pinned phase (peak effect) and the field induced normal state. The measurements were performed on NiZr2 based superconducting glasses.
Type II superconductors placed in a magnetic field (B) will allow quantized magnetic fluxes to penetrate and form vortex lines parallel to the field, surrounded by superconducting currents. Because of the sign of these currents, single vortices will repel each other and condense at zero temperature into an Abrikosov vortex lattice [1] in the absence of disorder. When introducing disorder and a driving force, the vortex structure will evolve through several different phases, which include a moving Bragg glass, a pinned disordered phase and a liquid-like phase [2,3,4]. Theoretically, it is expected that the transverse motion of vortices (perpendicular to the driving force) also exhibits interesting pinning properties; however, these have been elusive to experiments so far.
To overcome the inherent difficulty in observing the vortex flow at high vortex velocities and densities, where no direct imaging technique can be used, we used dissipative transport in a very clean and isotropic type II superconductor described below. In the mixed state of type II superconductors, the appearance of a resistance is due to the motion of vortices, which upon application of a current (J) in the sample will travel in the direction of the Lorentz force J × B, thereby inducing a measurable resistance. If the vortices move precisely in the direction of the Lorentz force, that is perpendicular to the current direction, no Hall voltage is expected. Therefore, the condition for the onset of a Hall voltage is that the vortices be traveling at some angle to the Lorentz force; then the component of motion parallel to the applied current will induce a Hall voltage. Interestingly, the Hall effect in the superconducting state still eludes the research community; it remains controversial even after over 40 years of research on the subject. Some predict a Hall sign reversal below T_c caused by pinning effects [5,6], others argue that the anomaly cannot be due to pinning [7,8,9,10], whilst others even predict no sign reversal at all [11,12]. Moreover, the few studies which report Hall effect measurements on samples which also exhibit the peak effect in longitudinal transport measurements do not show any sharp features [10,13,14,15], and no correlation to the different vortex phases was observed.
Many difficulties involved in the analysis of the Hall resistance data and theory stem from the competing contributions due to the Hall resistance of normal electrons and the voltage produced by the moving vortices. The contribution to the Hall voltage of the non-superconducting or normal electrons can be found in the vortex cores as well as in possible pockets of normal phases in an inhomogeneous superconductor. In order to avoid this problem, we have chosen a metallic glass, where the Hall voltage contribution of the normal electrons, antisymmetric in B, is negligible compared to the voltages produced by moving vortices, which is mainly symmetric in B. Indeed, in the normal phase of our system we find R_H^Asy ≃ B/ne < 10 µΩ/T, where n > 1.4 × 10^22 cm^−3 is the lower bound for the measured electronic density and R_H^Asy is always negligible compared to all other contributions. These density values are consistent with those found for melt-spun NiZr_2 ribbons [16].
The measurements of the Hall resistances were performed in glassy Fe_xNi_{1−x}Zr_2 ribbons for different values of x as a function of magnetic field. The superconducting transition temperature T_c of these high-purity Fe-Ni-Zr based superconducting metal glasses prepared by melt-spinning [17] is around 2.3 K, depending on the iron content. The amorphous nature of the samples ensures that the pinning is isotropic and has no long-range order, as opposed to crystals, in which long-range order provides strong collective pinning. Also, due to their high purity, the samples have a very weak pinning potential and critical current densities (J_c ≤ 0.4 A/cm^2) from 10 to 1000 times smaller than in previous typical materials [8,9,10,13,15]. The advantage of using samples with such a small depinning current resides in the possibility of investigating the pinning and depinning mechanisms of the flux line lattice without the use of a large excitation current, which can introduce uncertainties due to the self-heating it produces. The different length scales characterizing our superconducting samples were estimated from standard expressions for superconductors in the dirty limit [18], and found to be typical of strong type II low temperature superconductors, as described in ref. [19].
In the bottom of figure 1, we present a phase diagram (in red) obtained from longitudinal resistance measurements for different driving currents I on a sample of Fe_{0.1}Ni_{0.9}Zr_2. The labeling follows the scheme proposed in ref. [19], where the first depinned vortex phase, labeled depinning 1, is characterized by collectively moving vortices and was identified in ref. [2] as the moving Bragg glass, in which quasi long-range order exists. At higher B, the vortices are pinned again (pinning phase), which is the origin of the peak effect and was proposed to originate either from the softening of the vortex lattice [20], which causes the vortex lattice to adapt better to the pinning potential, or from the destruction of long-range order by disorder described in the collective pinning theory of Larkin and Ovchinnikov [21]. Finally, just below B_{c2} and for higher driving currents, an additional depinned vortex phase is observed (depinning 2) which results from a sudden depinning of the vortex lattice before the transition to the normal state, and is characterized by a smectic or plastic flow of vortices [2,3,4]. The onset of this phase is identified in R_xx vs B data as the abrupt increase in resistance following the depinning 1 phase for high driving currents. For low driving currents the nature of the transition between the disordered pinned phase and the normal state was never established, but we show it here to be separated by a depinned phase, as evidenced by the existence of a pronounced peak in the Hall resistance.
Also shown in figure 1 are the Hall resistances, represented as a topological mesh and color projection as a function of B and for different driving currents. Graphed in this manner, the Hall data can be compared directly to the phase diagram and the relation between these two types of measurements can be established. It is important to note that a line accounting for the contact misalignment was subtracted from the Hall curves in this graph. Strong peaks or features are observed in the Hall resistance for all driving currents, and are found to be located in the depinning 2 phase close to the transition to the normal state in the phase diagram. In addition, for driving currents below 1 mA, a second peak is observed right at the onset of the pinning phase. The individual Hall resistance curves are shown in figure 2, where the peaks observed in the Hall signal are found to vary in amplitude and shape with the driving current. While we have measured more than half a dozen samples of varying iron concentration, all show very similar features, and the results shown in all the figures are representative of all of them. In all the samples there is no single clear-cut distinction between the depinning 1 and 2 phases in terms of the Hall resistance, as opposed to the longitudinal resistance, where a jump in the resistance allowed us to determine the boundary. However, the features are always more pronounced in the depinning 2 phase and are highly reproducible for different B-sweeps, which stands in contrast to the depinning 1 phase, where the smaller features change from sweep to sweep, indicative of a noisy history-dependent behavior. This behavior of the Hall resistance can be understood in terms of the nature of the different phases. Indeed, in the depinning 1 phase, which is reminiscent of a moving Bragg glass, one would expect a small noisy lateral movement along channels, which depends on the vortex density [2] and would lead to a noisy B-dependent Hall resistance signal. In the depinning 2 phase, on the other hand, the existence of sharp reproducible peaks in the Hall resistance can be explained by a long-range inhomogeneous vortex flow such as found in smectic channels, where the orientation can vary very suddenly, depending on the local disorder configuration and vortex density. Finally, in the pinned phase no Hall signal is to be expected, which is indeed what we observe. Generically, a peak in the Hall signal is a measure of a long-ranged moving vortex structure, since a short-ranged order would be averaged out over the sample width.
A critical reader could argue that the features seen in the Hall resistance are simply due to a long-range inhomogeneous current flow as discussed in ref. [22]. Fortunately, it is possible to show in our case that most of the signal we measure must come from the intrinsic vortex motion. Indeed, using a DC current allows us to separate the different contributions. If the current flow path would solely determine the Hall voltage, this would imply that R_H(I, B) ≃ R_H(−I, B) and R_H(I, B) ≃ R_H(I, −B), since the Hall resistance contribution from the normal carriers is negligible. In our samples, however, the differences are almost as large as the values themselves, which therefore excludes a large-scale inhomogeneous current flow as the main source for the Hall resistance. A similar argument can be made for intrinsic vortex channels, for which 2R_odd^± = R_H(I, ±B) − R_H(−I, ∓B) would have to be zero, because the electric field due to the vortex flow would be opposite for the paired variables (I, ±B) and (−I, ∓B) but with the same vortex flow direction. Indeed, the vortex flow direction is antisymmetric in I and B, but the electric field produced by the vortex motion is symmetric in B and antisymmetric in I. In general, R_odd^+ represents the vortex flow contributions originating from one edge and R_odd^− contributions originating from the other edge. If R_odd is non-zero, this also implies that the vortex motion cannot be solely described by pure vortex channeling; consistent with this, our measurements show that R_odd is of the same order as R_H (see figure 3). Moreover, it turns out that R_H^AC ≃ R_even^±, which is also shown in figure 3. This is the reason that most of the data shown here is in fact R_H^AC, which is the even contribution of the Hall resistance and represents an average over vortices flowing in opposite directions, hence avoiding intrinsic edge effects. Finally, this demonstrates that the measured R_H^AC is intrinsically due to lateral vortex motion, which cannot come from pure vortex channeling nor inhomogeneous current flow.
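As an illustration of this decomposition, the sketch below assumes Hall traces measured at the four sign combinations of the DC current and field; the dictionary keys and function name are our own convention, not the authors' code:

```python
import numpy as np

def decompose_hall(R):
    """R maps (sign_I, sign_B) -> measured Hall resistance trace (np.ndarray)."""
    # 2*R_odd^± = R_H(I, ±B) - R_H(-I, ∓B): vortex-flow contributions from each edge.
    R_odd_plus = 0.5 * (R[(+1, +1)] - R[(-1, -1)])
    R_odd_minus = 0.5 * (R[(+1, -1)] - R[(-1, +1)])
    # Even combinations average over vortices flowing in opposite directions,
    # and should reproduce the AC-measured R_H^AC.
    R_even_plus = 0.5 * (R[(+1, +1)] + R[(-1, -1)])
    R_even_minus = 0.5 * (R[(+1, -1)] + R[(-1, +1)])
    return R_odd_plus, R_odd_minus, R_even_plus, R_even_minus
```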
We can now analyze the peak effect region of the phase diagram within this framework and show that indeed, there must exist a moving vortex phase between the pinning phase and the normal state, since we observe a sharp peak in the Hall resistance when sweeping the magnetic field through these regions. Even in the lowest measured currents this peak appears (figure 4), suggesting that a different vortex phase with long range inhomogeneous vortex flow such as a smectic phase exists between the peak effect and the normal state all the way down to vanishingly small driving currents. A similar peak is seen in all the samples we have measured and to the best of our knowledge, this is the first reported evidence for the existence of a smectic-like phase right before the transition to the normal state in such a low driving regime. It is interesting to note that the Hall resistance peak becomes smaller with increasing temperature before vanishing close to T c , which further confirms that this peak is not due to an inhomogeneous current flow close to the superconductor to normal transition but rather a consequence of a long-ranged transverse vortex flow just before the critical field.
In summary, we found that in the first depinned vortex phase encountered as the magnetic field is increased, the Hall resistance is relatively smooth with small noisy features, which are a result of some vortices slipping out of the channels in which they flow. This phase is consistent with a moving Bragg glass. At larger magnetic fields, the reentrant pinning phase known as the peak effect, which is characterized by a vanishing longitudinal resistance, also leads to a zero Hall resistance. More interestingly, at even higher fields and for all driving currents, large features and peaks are observed in the Hall resistance in the second depinning phase close to the normal state. These important features are characteristic of a long-range inhomogeneous vortex flow, such as expected in a smectic phase with orientational changes. Also important is the strong peak feature observed even at low driving currents, between the disordered pinned phase and the normal state, which demonstrates the existence of a long-range moving vortex phase in that region. | 2019-04-14T02:09:41.657Z | 2005-10-23T00:00:00.000 | {
"year": 2005,
"sha1": "bf3efcf1a344e1b949a247a960da772d14a18d00",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0510616",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "0599d94a1cca90e7e797dc2487587f711a57cd2f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
119732637 | pes2o/s2orc | v3-fos-license | A short note about diffuse Bieberbach groups
We consider low dimensional diffuse Bieberbach groups. In particular we classify diffuse Bieberbach groups up to dimension 6. We also answer a question from [S. Kionke, J. Raimbault, On geometric aspects of diffuse groups, Doc. Math. 21 (2016), page 887] about minimal dimension of a non-diffuse Bieberbach group which does not contain three-dimensional Hantzsche-Wendt group.
Introduction
The class of diffuse groups was introduced by B. Bowditch in [2]. By definition a group Γ is diffuse, if every finite non-empty subset A ⊂ Γ has an extremal point, i.e. an element a ∈ A such that for any g ∈ Γ \ {1} either ga or g −1 a is not in A. Equivalently (see [7]) a group Γ is diffuse if it does not contain a non-empty finite set without extremal points.
The interest in diffuse groups follows from Bowditch's observation that they have the unique product property. Originally unique products were introduced in the study of group rings of discrete, torsion-free groups. More precisely, it is easily seen that if a group Γ has the unique product property, then it satisfies Kaplansky's unit conjecture. In simple terms this means that the units in the group ring C[Γ] are all trivial, i.e., of the form λg with λ ∈ C* and g ∈ Γ. For more information about these objects we refer the reader to [1], [9, Chapter 10] and [7]. In part 3 of [7] the authors prove that any torsion-free crystallographic group (Bieberbach group) with trivial center is not diffuse. By definition a crystallographic group is a discrete and cocompact subgroup of the group O(n) ⋉ R^n of isometries of the Euclidean space R^n. By Bieberbach's theorem (see [12]), the normal subgroup T of all translations of any crystallographic group Γ is a free abelian group of finite rank and the quotient group (holonomy group) Γ/T = G is finite.
In [7,Theorem 3.5] it is proved that for a finite group G: 1. If G is not solvable then any Bieberbach group with holonomy group isomorphic to G is not diffuse.
2. If every Sylow subgroup of G is cyclic then any Bieberbach group with holonomy group isomorphic to G is diffuse.
3. If G is solvable and has a non-cyclic Sylow subgroup, then there are examples of Bieberbach groups with holonomy group isomorphic to G which are diffuse and examples which are not.
Using the above, the authors of [7] classify non-diffuse Bieberbach groups in dimensions ≤ 4. One of the most important non-diffuse groups is the 3-dimensional Hantzsche-Wendt group, denoted in [11] by ∆_P. It admits a presentation with two generators x and y in which the maximal abelian normal subgroup is generated by x^2, y^2 and (xy)^2 (see [6, page 154]). At the end of part 3.4 of [7] the authors ask the following question.
Question 1. What is the smallest dimension d_0 of a non-diffuse Bieberbach group which does not contain ∆_P?
The answer to the above question was the main motivation for us. In fact we prove, in the next section, that d_0 = 5. Moreover, we extend the results of part 3.4 of [7] and, with computer support, we present the classification of all Bieberbach groups of dimension d ≤ 6 which are (non)diffuse.
We use the computer system CARAT [10] to list all Bieberbach groups of dimension ≤ 6.
Our main tools are the following observations: 1. The property of being diffuse is inherited by subgroups (see [2, page 815]).
Now let Γ be a Bieberbach group of dimension less than or equal to 6. By the first Betti number β_1(Γ) we mean the rank of the abelianization Γ/[Γ, Γ]. Note that we are only interested in the case when β_1(Γ) > 0 (see [7, Lemma 3.4]). Using a method of E. Calabi [12, Propositions 3.1 and 4.1], we get an epimorphism f : Γ → Z^k, where k = β_1(Γ). From the assumptions, ker f is a Bieberbach group of dimension < 6. Since Z^k is a diffuse group, our problem is reduced to the question about the group ker f.
Any element of Γ can be written as a matrix (A a; 0 1), where A ∈ GL(n, Z) and a ∈ Q^n. If p : Γ → GL(n, Z) is the homomorphism which takes the linear part of every element of Γ, then there is an isomorphism ρ : G → p(Γ) ⊂ GL(n, Z). It is known that the rank of the center of a Bieberbach group equals the first Betti number (see [5, Proposition 1.4]). By [12, Lemma 5.2], the number of trivial constituents of the representation ρ is equal to k. Hence, without loss of generality, we can assume that the matrices in Γ have a block form splitting off the trivial constituents, and one can easily see that the resulting map F : ker f → GL(n − k + 1, Q) is a monomorphism, and hence its image is a Bieberbach group of rank n − k.
Now if Γ has rank 4 we know that the only non-diffuse Bieberbach group of dimension less than or equal to 3 is ∆ P . Using the above facts we obtain 17 non-diffuse groups. Note that the list from [7, section 3.4] consists of 16 groups. The following example presents the one which is not in [7] and illustrates computations given in the above remark.
Example 1. Let Γ be the crystallographic group denoted by "05/01/06/006" in [3], viewed as a subgroup of GL(5, R). After conjugating its non-lattice generators by a suitable matrix, it is easy to see that the rank of the center of Γ equals 1 and that the kernel of the epimorphism Γ → Z is isomorphic to a 3-dimensional Bieberbach group Γ′. Clearly the center of Γ′ is trivial, hence it is isomorphic to the group ∆_P. Proof. If a group has a trivial center, then it is not diffuse. Otherwise we use the Calabi method (1) and induction. A complete list of groups was obtained using the computer algebra system GAP [4] and is available at [8].
Before we answer Question 1 from the introduction, let us formulate the following lemma: Lemma 1. Let α, β be any generators of the group ∆_P. Let γ = αβ, a = α^2, b = β^2, c = γ^2. Then the following relations hold, where x^y := y^{−1}xy denotes the conjugation of x by y.
The proof of the above lemma is omitted. Just note that the relations are easily checked if one considers a representation of ∆_P as a matrix group. Proposition 1. There exists an example of a five-dimensional non-diffuse Bieberbach group which does not contain any subgroup isomorphic to ∆_P.
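Since the matrices themselves are not reproduced here, the sketch below checks the key structural property numerically for one standard affine realization of ∆_P (the specific generators are an assumption; any equivalent choice would do): the elements x^2, y^2 and (xy)^2 all have trivial linear part, i.e., they are pure translations.

```python
import numpy as np

def affine(A, t):
    # 4x4 homogeneous matrix representing the isometry v -> A v + t.
    M = np.eye(4)
    M[:3, :3] = A
    M[:3, 3] = t
    return M

# One standard choice of generators for the Hantzsche-Wendt group (assumed here).
x = affine(np.diag([1, -1, -1]), [0.5, 0.5, 0.0])
y = affine(np.diag([-1, 1, -1]), [0.0, 0.5, 0.5])

for name, g in [("x^2", x @ x), ("y^2", y @ y), ("(xy)^2", x @ y @ x @ y)]:
    assert np.allclose(g[:3, :3], np.eye(3))  # trivial linear part: pure translation
    print(name, "= translation by", g[:3, 3])
```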
Proof. Let Γ be the Bieberbach group enumerated in CARAT as "min.88.1.1.15". It is generated by elements γ_1, γ_2 and l_1, ..., l_5, where l_1, ..., l_5 generate the lattice L of Γ and e_i is the i-th column of the identity matrix I_5. Γ fits into a short exact sequence in which π takes the linear part of every element of Γ, π : (A a; 0 1) → A, and the image D_8 of π is the dihedral group of order 8. Now assume that Γ′ is a subgroup of Γ isomorphic to ∆_P. Let T be its maximal normal abelian subgroup. Then T is a free abelian group of rank 3 and Γ′ fits into a short exact sequence with quotient a cyclic group C_m of order m. Considering the induced commutative diagram, we get that H must be an abelian subgroup of D_8 = π(Γ) and that T ∩ L is a free abelian group of rank 3 which lies in the center of π^{−1}(H) ⊂ Γ. Now if H is isomorphic either to C_4 or to C_2^2, then the center of π^{−1}(H) is of rank at most 2. Hence H must be the trivial group or the cyclic group of order 2. Note that, as Γ′ ∩ L is a normal abelian subgroup of Γ′, it must be a subgroup of T: hence T ∩ L = Γ′ ∩ L. We get a commutative diagram with exact rows and columns, where G = π(Γ′). Consider two cases: 1. H is trivial. In this case G is one of the two subgroups of D_8 isomorphic to C_2^2. Since the arguments for both subgroups are similar, we present only one of them; in this case Γ′ is generated by matrices of a specific block form. The above considerations show that Γ does not have a subgroup which is isomorphic to ∆_P. | 2017-09-26T08:33:24.000Z | 2017-03-15T00:00:00.000 | {
"year": 2017,
"sha1": "8064c4984e514a24878383995667954e027a9eba",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.jalgebra.2017.08.033",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "8064c4984e514a24878383995667954e027a9eba",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
247084434 | pes2o/s2orc | v3-fos-license | From Zero-Intelligence to Queue-Reactive: Limit Order Book modeling for high-frequency volatility estimation and optimal execution
The estimation of the volatility with high-frequency data is plagued by the presence of microstructure noise, which leads to biased measures. Alternative estimators have been developed and tested either on specific structures of the noise or by the speed of convergence to their asymptotic distributions. Gatheral and Oomen (2010) proposed to use the Zero-Intelligence model of the limit order book to test the finite-sample performance of several estimators of the integrated variance. Building on this approach, in this paper we introduce three main innovations: (i) we use as data-generating process the Queue-Reactive model of the limit order book (Huang et al. (2015)), which, compared to the Zero-Intelligence model, generates more realistic microstructure dynamics, as shown here by using a Hausman test; (ii) we consider not only estimators of the integrated volatility but also of the spot volatility; (iii) we show the relevance of the estimator in the prediction of the variance of the cost of a simulated VWAP execution. Overall we find that, for the integrated volatility, the pre-averaging estimator optimizes the estimation bias, while the unified and the alternation estimators lead to optimal mean squared error values. Instead, in the case of the spot volatility, the Fourier estimator yields the optimal accuracy, both in terms of bias and mean squared error. The latter estimator leads also to the optimal prediction of the cost variance of a VWAP execution.
Introduction
The availability of efficient estimates of the volatility of financial assets is crucial for a number of applications, such as model calibration, risk management, derivatives pricing, high-frequency trading and optimal execution. High-frequency data provide, in principle, the possibility of obtaining very precise estimates of the volatility. The (infill) asymptotic theory of volatility estimators was initially derived under the assumption that the asset price follows an Itô semimartingale (see Chapter 3 of Aït-Sahalia and Jacod (2014)). The Itô semimartingale hypothesis ensures the absence of arbitrage opportunities (see Delbaen and Schachermayer (1994)) and, at the same time, is rather flexible, as it does not require to specify any parametric form for the dynamics of the asset price.
However, empirical evidences and theoretical motivations indicate that the prices of financial assets do not conform to the semimartingale hypothesis at high frequencies, due to the presence of microstructure phenomena such as, e.g., bid-ask bounces or price rounding (see Hasbrouck (2007) for a review). From the statistical point of view, such phenomena have been modeled as an additive noise component and the asymptotic theory that takes into account the presence of the latter was readily developed (see Chapter 7 of Aït-Sahalia and Jacod (2014)). In its most basic form, the noise due to microstructure is assumed to be i.i.d. and independent of the semimartingale driving the price dynamics (i.e., the so-called efficient price). Moreover, more sophisticated forms have been studied, such as, for instance, an additive noise which is auto-correlated or correlated with the efficient price (see, e.g., Hansen and Lunde (2006)).
The literature on the estimation of the volatility in the presence of noise is very rich. In fact, there exists a number of alternative methodologies making an efficient use of high-frequency prices to reconstruct not only the total volatility accumulated over a fixed time horizon, i.e., the integrated volatility, but also the trajectory of the latter on a discrete grid, i.e., the spot volatility. These include the two-scale and multi-scale approach by, respectively, L. Zhang et al. (2005) and L. Zhang (2006), the kernel-based method, originally proposed in Barndorff-Nielsen et al. (2008), the Fourier-transform method by Malliavin and Mancino (2002, 2009), and the pre-averaging approach by Jacod et al. (2009). Given this variety of alternative methodologies, it is not straightforward to establish which specific noise-robust estimator should be preferred for high-frequency financial applications.
As pointed out in the seminal paper by Gatheral and Oomen (2010), the best asymptotic properties, i.e., the optimal rate of convergence and the minimum asymptotic error variance, do not guarantee the best performance in finite-sample applications. Gatheral and Oomen (2010) proposed to compare the finite-sample performance of different high-frequency estimators via simulations based on a market simulator which is able to reproduce the actual mechanism of price formation at high frequencies with sufficient realism. In this regard, the authors used simulations obtained via the Zero-Intelligence (ZI) limit order book model by Smith et al. (2003) to compare the performance of different integrated volatility estimators. However, the ZI model is based on several simplistic assumptions on the dynamics of the limit order book, and thus it may fail to replicate the actual behavior of high frequency financial data and microstructure noise with satisfactory accuracy. For example, under the ZI model, the order flow is described by independent Poisson processes, while it is well-known that the order flow is a long-memory process (see Lillo and Farmer (2004)) and the different components of the order flow are lead-lag cross-correlated (Eisler et al. (2012)). Moreover, as pointed out by Bouchaud et al. (2018), the ZI model leads to systematically profitable market making strategies. These properties are likely to have an effect on the dynamics of the volatility and market microstructure noise, and thus an analysis based on a more realistic limit order book model is needed.
The first goal of this paper is to extend the study by Gatheral and Oomen (2010) in two directions. First, we use a more realistic limit order book model, namely the Queue-Reactive (QR) model by Huang et al. (2015). Under this model, the arrival rates of orders depend on the state of the limit order book. This implicitly introduces auto-and cross-correlations of the book components, thereby generating more realistic dynamics for the price process at high-frequencies. Secondly, we compare not only the performance of a number of estimators of the integrated volatility (expanding the collection of estimators considered in the study by Gatheral and Oomen (2010)), but also that of different estimators of the spot volatility. To make our comparison meaningful for applications, the performance of the estimators is evaluated in terms of the optimization of the bias and the meansquared-error via the feasible selection of the tuning parameters involved in their implementation. Note that, following Gatheral and Oomen (2010), we consider three alternative price series for the estimation: the mid-price, that is, the average between the best bid and best ask quotes; the microprice, i.e., the volume-weighted average of the best bid and best ask quotes; the trade price, namely the price at which a market order is executed.
For what concerns the integrated variance, we find that the pre-averaging estimator by Jacod et al. (2009) is favorable in terms of bias minimization. Instead, when looking at the optimization of the mean squared error, the situation appears to be more nuanced. Indeed, the Fourier estimator by Malliavin and Mancino (2009) obtains the best average ranking across the considered price series (mid-, micro-and trade-prices) without actually achieving the best ranking for any of these individual series. The best rankings are instead achieved by the unified volatility estimator by Y. Li, Z. Zhang, et al. (2018) (for mid-and micro-prices) and by the alternation estimator by Large (2011) (for trade-prices). Instead, for what concerns the spot variance, the Fourier estimator provides the relative best performance for the three prices series, both in terms of bias and mean-squared-error optimization.
The second goal of the paper is to study the impact of the availability of efficient volatility estimates on optimal execution. Specifically, we investigate, via simulations of the QR model, how the use of different volatility estimators affects the inference of the variance of the cost of the execution strategy. To do so, we consider the instance where the trader is set to execute a volume-weighted average price (VWAP) strategy and assumes that market impact is described by the Almgren and Chriss (Almgren and Chriss (2001)) model. We compare the empirical variance of the implementation shortfall of the simulated executions with the corresponding model-based prediction, evaluated with different spot volatility estimators. As a result, we find that the estimator that yields the optimal performance in terms of bias and mean-squared-error optimization, namely the Fourier estimator, also gives the optimal forecast of the cost variance. More generally, our results suggest that the choice of the spot estimator is not irrelevant, as it may lead to significantly different forecasts of the variance of the implementation shortfall.
The paper is organized as follows. In Section 2 we recall the main characteristics of the ZI and QR limit-order-book models, discuss their calibration on empirical data and compare their ability to reproduce realistic volatility and noise features. In Section 3 we illustrate the estimators of the integrated and spot variance, while in Section 4 we evaluate their finite-sample performance with simulated data from the QR model. Finally, Section 5 contains the study of the impact of efficient volatility estimates on optimal execution. Section 6 concludes.
Limit-order-book models: zero-intelligence vs queue-reactive
Electronic financial markets are often based on a double auction mechanism, with a bid (buy) side and an ask (sell) side. The limit order book (LOB) is the collection of all the outstanding limit orders, which are orders of buying or selling a given quantity of the asset at a given price, expressed as a multiple of the tick size (i.e., the minimum price movement allowed) of the asset. Other two types of orders can be placed: a cancellation, that erases a limit order previously inserted by the same agent, thereby reducing the volume at a given price level, and a market order, that is, an order to immediately buy/sell the asset at the best possible price. The best bid is the highest price at which there is a limit order to buy, and the best ask is the lowest price at which there is a limit order to sell. The spread is the difference between the best ask and the best bid, and is typically expressed in tick size. For a detailed overview of the LOB see Abergel et al. (2016).
In the following, we will be interested in three price series that can be retrieved from LOB data: the mid-price, the micro-price and the trade price.
Definition 1. We define the mid-price p_mid and the micro-price p_micro of an asset at time t as, respectively, the arithmetic average and the volume-weighted average of the best bid and best ask quotes at time t, i.e., p_mid(t) = (p_b(t) + p_a(t))/2 and p_micro(t) = (v_a(t) p_b(t) + v_b(t) p_a(t))/(v_b(t) + v_a(t)), where p_b, p_a, v_b and v_a denote, respectively, the best bid, the best ask, the volume (i.e., the number of outstanding limit orders) at the best bid and the volume at the best ask. Finally, the trade price p_trade series is defined as the series of prices arising from the execution of market orders.
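For concreteness, a minimal sketch of Definition 1; note that the micro-price convention of weighting each quote by the opposite-side volume is our assumption for the reconstructed formula above:

```python
def mid_price(best_bid, best_ask):
    return 0.5 * (best_bid + best_ask)

def micro_price(best_bid, best_ask, v_bid, v_ask):
    # Each quote is weighted by the volume on the opposite side of the book,
    # so the micro-price leans toward the side with less resting liquidity.
    return (v_ask * best_bid + v_bid * best_ask) / (v_bid + v_ask)
```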
In our study, we will consider two models for the simulation of the LOB: the simplistic ZI model by Smith et al. (2003) and the more sophisticated QR model by Huang et al. (2015). In the next subsections we briefly recall the main characteristics of the two models. Please refer to the original papers for a more thorough description.
Model descriptions
The zero-intelligence model
The ZI model, originally proposed by Smith et al. (2003), is a statistical representation of the double auction mechanism used in most stock markets. Despite its simplicity, the model is able to generate a relatively complex dynamic for the order book. It is based on three parameters: the intensity of limit orders, λ_L, the intensity of cancel orders, λ_C, and the intensity of market orders, λ_M. The three components of the order flow follow independent Poisson processes; thus the type of order extracted at each time is independent of the previous orders and the current state of the LOB, and orders may arrive at every price level with the same probability. Each order is assumed to have unitary size. For a detailed discussion about the flexibility of this model, see Gatheral and Oomen (2010). As mentioned, the ZI model may be deemed as too simplistic. Indeed, the assumptions that the intensities of order arrival are independent of the state of the book and that the intensities are equal for each price level are highly unrealistic. Moreover, this model produces purely endogenous order-book dynamics, without considering the effect of exogenous information. Further, as shown in Bouchaud et al. (2018), under the ZI model the market impact of new orders is such that profitable market-making opportunities can be created, even if they are usually absent in real markets. Some of the weaknesses of the ZI model are overcome by the QR model.
The queue-reactive model
The QR model (Huang et al. (2015)) is a LOB model suitable to describe large tick assets, i.e., assets whose bid-ask spread is almost always equal to one tick. This model is able to reproduce a richer and more realistic behavior of the LOB, compared to the ZI model. In other words, the QR model attempts to fix some of the flaws of the ZI model. This is achieved, in the first place, by assuming different intensities for each level of the LOB. Moreover, the degree of realism is increased by introducing a correlation not only between order-arrival intensities and the corresponding queue size at each level, but also between intensities and the queue size at the corresponding level at the opposite side of the book. Further, a dependence between the volume at the best level and order arrivals at the other levels is assumed. Finally, differently from the ZI model, the QR model allows for exogenous dynamics by taking into account the flow of exogenous information that hits the market. Following Huang et al. (2015), under the QR model the LOB is described by a 2K-dimensional vector, with K denoting the number of available price levels at the bid and ask sides of the book. At the level Q_{±i}, i = 1, ..., K, the corresponding price is equal to p ± i·(tick), where p denotes the center of the 2K-dimensional vector. Precisely, Q_{−i} denotes a level at the bid side and Q_i denotes a level at the ask side. Moreover, q_{±i} denotes the volume at the level Q_{±i}.
Thus, with different intensities at each queue, the arrival rates take the form λ_i(q_i, S_{m,l}(q_{−i})), where the function S_{m,l}(q_{−i}) is responsible for the interaction between the bid and the ask side of the book, that is, S_{m,l}(q_{−i}) = 0 if q_{−i} = 0, 1 if 0 < q_{−i} ≤ m, 2 if m < q_{−i} ≤ l, and 3 if q_{−i} > l, with m and l two fixed thresholds. For example, given a certain volume at the bid side, a new bid limit order has different intensities depending on whether the volume at the ask is, e.g., q_i = 0, 0 < q_i ≤ 5, 5 < q_i ≤ 10 or q_i > 10. Market orders may arrive only at the best quote. Moreover, we assume that λ_i^L and λ_i^C are also functions of the indicator 1_{q_{±1} > 0}, for i = ±1, to allow for interactions between the best level and the dynamics far from the best level.
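A sketch of how the bucketing function S_{m,l} can gate the intensity lookup; the intensity table below is a hypothetical placeholder for the maximum-likelihood estimates discussed in the next subsection:

```python
def S(q_opposite, m, l):
    # Regime of the opposite-side queue, as in S_{m,l} reconstructed above.
    if q_opposite == 0:
        return 0
    if q_opposite <= m:
        return 1
    if q_opposite <= l:
        return 2
    return 3

def arrival_intensity(table, q_i, q_opposite, m, l):
    # table: dict mapping (own queue size, opposite-side regime) -> intensity.
    return table[(q_i, S(q_opposite, m, l))]
```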
Conditionally on the LOB state, the arrival of different orders at a given limit is assumed to be independent, and follows a Poisson distribution, with intensity equal to λ. However, since the queue sizes depend on the order flow, the model reproduces some auto-and cross-correlations between the components of the order flow, as observed in empirical data.
Contrary to the ZI model, the QR model gives a dynamic that is not entirely driven by the orders arriving on the LOB. To achieve this, two additional parameters are introduced, θ and θ_reinit. Whenever the mid-price changes, an auxiliary (not observed) price, called the reference price and denoted by p_ref, changes in the same direction with probability θ and by an amount equal to the tick size of the asset. Moreover, when p_ref changes, the LOB state is redrawn from its invariant distribution around the new value of p_ref, with probability θ_reinit. In fact, as the authors explain, the parameter θ_reinit captures the percentage of price changes due to exogenous information.
In the next subsection we address the procedures that we followed to estimate the two LOB models and discuss the differences between them in terms of ability of reproducing realistic features of empirical high-frequency data.
Calibration procedures
For the calibration of the ZI and QR models, we used order-book data of the stock Microsoft (MSFT) over the period April 1, 2018 -April 30, 2018. Data were retrieved from the LOBSTER database.
Microsoft is a very liquid stock with an average spread approximately equal to 1.25 ticks and thus can be considered a large tick asset, suitable to be modeled by the queue-reactive design.
Before the calibration, for each day we removed from the sample the first hour of trading activity after the market opening and the last 30 minutes before the market closure; this is a standard procedure adopted when working with high-frequency data, since during these two moments of the day the trading activity is known to be more intense and volatile, thereby possibly leading to a violation of the large tick asset hypothesis, even for a liquid stock like Microsoft. Given the average spread observed, and being the activity almost fully concentrated at the best limits, we implemented the ZI and QR models using two limits, Q ±1 and Q ±2 . This is in line with Huang et al. (2015).
To estimate the intensities of order arrivals under the QR model, the following inputs are needed: • the type of each event, i.e., limit order, cancel order or market order; • the time between events that happen at Q_1 and Q_2, along with the queue sizes q_1 and q_2 before each event; • the size of each event, as q_i is expressed as a multiple of the median event size.
The estimation of the intensities is performed via maximum likelihood, as in Huang et al. (2015). The parameters m and l that capture the bid-ask dependence are set equal to the 33% lower and upper quantiles of the q_{−i}'s (conditional on positive values). Given the symmetry property of the LOB, intensities are computed for just one side.
The parameters θ and θ_reinit are calibrated using the mean-reversion ratio ζ of the mid-price, which is defined in terms of n_c, the number of continuations (i.e., the number of consecutive price moves in the same direction), and n_a, the number of alternations (i.e., the number of consecutive price moves in opposite directions). For more details about the relation between the mean-reversion ratio and the microstructure of large tick assets, see Robert and Rosenbaum (2011).
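A small helper, sketched under the convention that zero moves are discarded, for counting the continuations and alternations that enter ζ:

```python
import numpy as np

def continuation_alternation_counts(mid):
    moves = np.diff(mid)
    moves = moves[moves != 0]               # keep only actual price changes
    same = np.sign(moves[1:]) == np.sign(moves[:-1])
    n_c = int(np.sum(same))                 # continuations
    n_a = int(np.sum(~same))                # alternations
    return n_c, n_a
```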
We carried out the calibration using a two-step generalized method of moments (GMM), which is more robust than the heuristic approach proposed by Huang et al. (2015). Denote by σ_emp and ζ_emp the empirical estimates of the standard deviation and mean-reversion ratio of the mid-price returns, computed at the 1-second frequency using the last tick rule. Further, denote by σ_t(θ̄) and ζ_t(θ̄) the corresponding quantities estimated in simulation t, with t = 1, ..., T, and θ̄ = (θ, θ_reinit). The two-step GMM-based procedure for the calibration of θ̄ is as follows.
Step 1. Obtain a consistent first-step estimate of θ̄ by minimizing the (unweighted) distance between the simulated moments (σ_t(θ̄), ζ_t(θ̄)), averaged over the T simulations, and the empirical moments (σ_emp, ζ_emp). Step 2. Obtain the GMM estimate of θ̄ by minimizing the same distance, weighted by the inverse of the covariance matrix of the simulated moments evaluated at the first-step estimate. The resulting estimator is asymptotically efficient in the GMM class. For the implementation we used T = 100 simulations with a horizon of one trading day. The tick size was set equal to the minimum bid-ask spread recorded in the data, namely 1 cent. As a final estimate, we obtained θ = 0.6 and θ_reinit = 0.85. The asymptotic distribution of the queue size, needed for the re-initialization of the LOB state, was obtained following the approach proposed by Huang et al. (2015). For each simulated path, the starting LOB state was randomly chosen using the asymptotic distribution of the queue size.
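A schematic sketch of the two-step procedure; `simulate_moments` is a hypothetical stand-in for running the QR simulator T times at a candidate θ̄ and collecting the pairs (σ_t(θ̄), ζ_t(θ̄)):

```python
import numpy as np
from scipy.optimize import minimize

def gmm_two_step(moments_emp, simulate_moments, theta0):
    def gbar(theta):
        sims = simulate_moments(theta)           # (T x 2) array of simulated moments
        return sims.mean(axis=0) - moments_emp

    # Step 1: identity weighting yields a consistent first-step estimate.
    step1 = minimize(lambda th: gbar(th) @ gbar(th), theta0, method="Nelder-Mead")

    # Step 2: re-weight with the inverse covariance of the simulated moments.
    W = np.linalg.inv(np.cov(simulate_moments(step1.x), rowvar=False))
    step2 = minimize(lambda th: gbar(th) @ W @ gbar(th), step1.x, method="Nelder-Mead")
    return step2.x
```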
For the calibration of the ZI model, one only needs to reconstruct the intensities of order arrivals. To do that, the only information needed involves the type of order, the order arrival time and the order size. The ensuing estimators read λ̂_L = #L/(#O·∆t), λ̂_C = #C/(#O·∆t) and λ̂_M = #M/(#O·∆t), where #O = #L + #C + #M and #L, #C, #M denote, respectively, the total number of limit, cancel and market orders arriving at the best quotes or inside the spread, while ∆t is the average elapsed time between two consecutive orders (so that #O·∆t approximates the total observation time).
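Under the reading that #O·∆t approximates the total observation time (our assumption in the reconstruction above), the ZI intensities reduce to simple event counts per unit time:

```python
def zi_intensities(n_limit, n_cancel, n_market, dt_mean):
    total_time = (n_limit + n_cancel + n_market) * dt_mean
    return (n_limit / total_time,     # lambda_L
            n_cancel / total_time,    # lambda_C
            n_market / total_time)    # lambda_M
```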
Comparison of volatility and noise features
As pointed out by Gatheral and Oomen (2010), neither the efficient price nor its volatility are well-defined under the ZI model. The same holds under the QR model. However, if one assumes constant model parameters, thanks to the ergodicity of the processes (see Huang et al. (2015)), the variance of the efficient price can be defined, following Gatheral and Oomen (2010), as σ² := lim_{m→∞} Var(p_{t+m} − p_t)/m (1), where p may denote any price process among those considered, that is, the mid-price, the micro-price and the trade price.
Based on (1), given the (calibrated) values of the LOB parameters, it is possible to estimate the true value of σ² via simulations. Specifically, in our study, given the LOB parameters calibrated on the Microsoft sample data, numerical results over 2500 simulations show that for m > 18000 the volatility of mid-prices, micro-prices and trade-prices stabilizes around the value σ² = 1.039 · 10^−8 in the case of the QR model.
Since the knowledge of the value of σ² is crucial to compare the finite-sample performance of volatility estimators, we wish to verify whether the two order-book models give similar results. The estimates of σ² for the two models and for the three price series considered are compared in Table 1.
The levels of variance obtained with the ZI and the QR models are significantly different, even when considering the error in the estimation procedure, with the ZI producing a level about 50% higher than the one observed for the QR model. This highlights a first fact to be considered: the choice of the LOB model is not irrelevant for applications involving the volatility parameter. In fact, it is known that the ZI model calibrated on real data only partially reproduces the actual empirical variance, with a bias which depends on the relative magnitude of the intensities (see, e.g., Bouchaud et al., 2018).
Further, we wish to compare the realism of the microstructure noise generated by the two models. To do so, we use the Hausman test for the null hypothesis of the absence of noise by Aït-Sahalia and Xiu (2019). In particular, we use the formulation of the test in Equation (16) of Aït-Sahalia and Xiu (2019), which is coherent with the use of LOB models with a constant variance parameter. Tables 2, 3 and 4 illustrate the sampling frequencies (in seconds) at which the Hausman test rejects the null hypothesis of the absence of noise with a significance level of 5% (⋆) and the frequencies at which the null is instead not rejected, for, respectively, the MSFT sample and the simulated samples from the ZI and QR models. The results of the Hausman test suggest that the noise accumulation mechanism at different frequencies under the QR model is more realistic than the one observed under the ZI model, based on the comparison with the noise-detection pattern in the MSFT sample. This aspect is clearly relevant when analyzing the finite-sample performance of noise-robust volatility estimators, and adds empirical support to the use of the QR model for that purpose.
Volatility estimators
In this section, we briefly describe the noise-robust integrated-and spot-volatility estimators whose performance will be studied in the next section. The formulae for the tuning parameters involved in the computation of the estimators that optimize the mean squared error (MSE) are also reported.
Preliminary notation
We consider the estimation horizon [t, t + h], t, h > 0, and assume that the price p is sampled on the equally-spaced grid with mesh h/n, where n denotes the number of price observations. The quantity p_i denotes the log-price of the asset at time t_i := t + ih/n, i = 0, 1, ..., n. Further, we define ∆p_i := p_i − p_{i−1}. Note that p may refer indifferently to the trade-, mid- or micro-price.
The spot volatility at time t is denoted by σ²(t) and the integrated volatility over [t, t + u] is defined as IV_{[t,t+u]} := ∫_t^{t+u} σ²(s) ds. Clearly, in a setting with constant volatility σ²(t) = σ² for all t, the latter simplifies to σ²u.
As an auxiliary quantity for the implementation of some estimators, we will need to estimate the integrated quarticity, that is, IQ_{[t,t+h]} := ∫_t^{t+h} σ⁴(s) ds. In the rest of the paper, we drop the subscript of both IV and IQ as we always refer to the interval [t, t + h].
Furthermore, we recall that the asymptotic properties of high-frequency volatility estimators are typically derived under the assumption that, for all t, the observable price p is decomposed as p_i = p_i^eff + η_i, where p^eff denotes the efficient price, whose dynamics follow an Itô semimartingale, while η is an i.i.d. zero-mean noise due to the market microstructure. As additional auxiliary quantities, we will need estimates of the second moment of η, i.e., ω² := E[η²]. Finally, we denote the floor function as ⌊·⌋ and the rounding to the nearest integer as [·].
Bias-corrected realized variance
The realized variance, that is, the sum of squared log-returns over a given time horizon, represents the most natural rate-efficient estimator of the integrated volatility in the absence of noise. However, in the presence of noise, as is typically the case in high-frequency settings, the realized variance is biased. The bias-corrected realized variance by Zhou (1996) corrects for the bias due to noise by taking into account the first-order auto-covariance of the log-returns, computed on a subsampled grid with step q; its implementation involves, for each lag j, the constants c = (n/(n − q + 1))·(1/q) and c_2 = ⌊(n − j + 1)/q⌋. The MSE-optimal value of q is obtained via a feasible minimization of the MSE, based on the auxiliary estimates described at the end of this section.
Fourier estimator
Introduced by Malliavin and Mancino (2002, 2009), the Fourier estimator of the integrated volatility relies on the computation of the zero-th Fourier coefficient of the volatility, given the Fourier coefficients of the log-returns, c_k(dp_n) = (1/(2π)) Σ_{j=1}^{n} e^{−ikt_j} ∆p_j, i.e., the k-th discrete Fourier coefficient of the log-return. The noise is filtered out by suitably selecting the cutting frequency N. Weighting the convolution product of the coefficients with the Fejér (respectively, Dirichlet) kernel yields the estimator IV_F^Fej (respectively, IV_F^Dir).
The optimal value of the integer N in the presence of noise can be selected by performing a feasible minimization of the MSE, see Mancino and Sanfelici (2008).
In this paper, we implemented IV_F^Fej, as unreported simulations suggest that it performs better than IV_F^Dir.
Maximum likelihood estimator
The maximum-likelihood estimator by Aït-Sahalia, Mykland, et al. (2005) is based on the assumption that noisy log-returns follow an MA(1) model, consistently with the seminal microstructure model for the bid-ask spread by Roll (1984). Under the MA(1) assumption, it holds that ∆p_i = w_i + φ w_{i−1} (2), with the w_i i.i.d. Gaussian with variance σ_w², and the maximum-likelihood estimator reads IV_ML = n(1 + φ̂)² σ̂_w², where the pair (φ̂, σ̂_w²) is the result of the standard maximum-likelihood estimation of the MA(1) model.
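A sketch of this estimator using the MA(1) routine in statsmodels; the mapping from (φ̂, σ̂_w²) to IV and ω² follows the moment relations of the Roll model as reconstructed above:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def ml_estimates(dp):
    """Fit an MA(1) to noisy log-returns dp and back out IV-hat and omega^2-hat."""
    res = ARIMA(dp, order=(0, 0, 1), trend="n").fit()
    phi, s2w = res.params[0], res.params[1]   # MA coefficient, innovation variance
    iv_ml = len(dp) * (1.0 + phi) ** 2 * s2w  # integrated variance from (2)
    omega2 = -phi * s2w                       # noise variance (positive when phi < 0)
    return iv_ml, omega2
```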
Two-scale estimator
The two-scale realized variance by L. Zhang et al. (2005) eliminates the noise-induced bias of the realized variance by combining two different realized variance values, one computed at a higher frequency and one computed at a lower frequency. The estimator reads IV_TS = (1/q) Σ_{i=q}^{n} (p_i − p_{i−q})² − (n̄/n) Σ_{i=1}^{n} (∆p_i)², with n̄ = (n − q + 1)/q. The MSE-optimal value of q grows as n^{2/3} and can be computed feasibly from the auxiliary estimates of ω² and IQ.
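A compact sketch of the two-scale combination as written above:

```python
import numpy as np

def tsrv(p, q):
    """Two-scale realized variance of log-prices p with subsampling step q."""
    n = len(p) - 1
    rv_fast = np.sum(np.diff(p) ** 2)               # highest-frequency RV
    rv_slow = np.sum((p[q:] - p[:-q]) ** 2) / q     # averaged q-step RV
    n_bar = (n - q + 1) / q
    return rv_slow - (n_bar / n) * rv_fast          # bias-corrected combination
```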
Multi-scale estimator
L. Zhang (2006) also proposed a more sophisticated combination of realized variances at various frequencies that smooths out the effect of microstructure noise. The multi-scale estimator combines q subsampled realized variances, with weights chosen so that the noise-induced bias cancels out. The MSE-optimal q grows as n^{1/2} and can again be selected via a feasible minimization of the MSE.
Kernel estimator
Kernel-based estimators, originally introduced by Barndorff-Nielsen et al. (2008), correct for the bias due to noise of the realized variance by taking into account the autocorrelation of returns at different lags, suitably weighted by means of a kernel function k(·). In its flat-top form, the estimator reads IV_K = γ_0 + Σ_{h=1}^{q} k((h − 1)/q)(γ_h + γ_{−h}), with γ_h = Σ_i ∆p_i ∆p_{i−h}. The MSE-optimal bandwidth q grows as n^{1/2}, with a constant that depends on the kernel and on the noise-to-signal ratio. In this paper we implemented this estimator by using the Tukey-Hanning 2 kernel, i.e., we set k(x) = sin²(π/2 (1 − x)²). This kernel was shown to perform satisfactorily, compared to other kernels (see Barndorff-Nielsen et al. (2008)).
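A sketch of the flat-top form written above, with the Tukey-Hanning 2 weight function:

```python
import numpy as np

def realized_kernel(dp, q):
    """Flat-top realized kernel of log-returns dp with bandwidth q."""
    k = lambda x: np.sin(0.5 * np.pi * (1.0 - x) ** 2) ** 2   # Tukey-Hanning 2
    acc = np.sum(dp * dp)                                     # gamma_0
    for h in range(1, q + 1):
        gamma_h = np.sum(dp[h:] * dp[:-h])                    # h-th autocovariance
        acc += 2.0 * k((h - 1) / q) * gamma_h                 # gamma_h + gamma_{-h}
    return acc
```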
Pre-averaging estimator
The pre-averaging estimator, proposed by Jacod et al. (2009), relies on averaging the price values over (overlapping) windows of length h before computing the realized variance, together with a bias-correction term that removes the residual contribution of the noise. Jacod et al. (2009) suggest that the estimator is robust to the choice of h; in our simulation study we set h to obtain a window of approximately 4 minutes, in line with Y. Li, Liu, et al. (2021).
Alternation estimator
Proposed by Large (2011), the alternation estimator corrects the realized variance with a factor that depends on the number of alternations and continuations in the sample, where n_c is the number of consecutive price movements in the same direction, while n_a is the number of consecutive price movements in opposite directions.
MinRV and MedRV estimators
Andersen et al. (2012) introduced two jump-robust estimators of the integrated variance which consist, respectively, in the (scaled) sum of the minimum or the median between consecutive returns, that is, IV_Min = (π/(π − 2))·(n/(n − 1))·Σ_{i=1}^{n−1} min(|∆p_i|, |∆p_{i+1}|)² and IV_Med = (π/(6 − 4√3 + π))·(n/(n − 2))·Σ_{i=2}^{n−1} med(|∆p_{i−1}|, |∆p_i|, |∆p_{i+1}|)². To make the estimators robust to the presence of microstructure noise, pre-averaging may be applied to price observations, as shown in Andersen et al. (2012), Appendix B. Accordingly, in this paper we used the noise-robust version of IV_Min and IV_Med with price pre-averaging.
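The two formulas translate directly into code; the pre-averaging step applied to the inputs in the paper is omitted here for brevity:

```python
import numpy as np

def min_rv(dp):
    n = len(dp)
    pairs = np.minimum(np.abs(dp[:-1]), np.abs(dp[1:]))
    return (np.pi / (np.pi - 2.0)) * (n / (n - 1.0)) * np.sum(pairs ** 2)

def med_rv(dp):
    n = len(dp)
    trip = np.median(np.column_stack([np.abs(dp[:-2]),
                                      np.abs(dp[1:-1]),
                                      np.abs(dp[2:])]), axis=1)
    return (np.pi / (6.0 - 4.0 * np.sqrt(3.0) + np.pi)) * (n / (n - 2.0)) * np.sum(trip ** 2)
```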
Range estimator
The main idea behind the range estimator by Vortelinos (2014) is to substitute the simple returns with the difference between the maximum and minimum observed price over a given window, based on the quantities max_{q,i} = max(p_{(i−1)q}, ..., p_{iq}) and min_{q,i} = min(p_{(i−1)q}, ..., p_{iq}). The MSE-optimal window length q can again be selected via a feasible minimization of the MSE.
Unified estimator
Y. Li, Z. Zhang, et al. (2018) proposed a unified approach to volatility estimation, obtaining an estimator which is consistent not only in the presence of the typical i.i.d. noise, but also when the noise comes from price rounding. The estimator is defined in terms of the quantities n̄ = (Σ_{l=1}^{m} n_l)/h, n_l = n/q_l and q_{l+1} = q_1 + l, l = 1, ..., m − 1. The optimal q_1 and h can be selected via the data-driven procedure detailed in Y. Li, Z. Zhang, et al. (2018).
Fourier estimator
The Fourier method allows reconstructing the trajectory of the volatility as a function of time on a discrete grid, see Mancino and Recchioni (2015). This is achieved by means of the Fourier-Fejér inversion formula, which gives the estimator σ²_{n,N,M}(t) = Σ_{|k|<M} (1 − |k|/M) c_k(σ²_{n,N}) e^{ikt}, where c_k(σ²_{n,N}) = (2π/(2N + 1)) Σ_{|s|≤N} c_s(dp_n) c_{k−s}(dp_n) estimates the k-th Fourier coefficient of the volatility. Note that, differently from the other estimators detailed below, the Fourier estimator is global, in the sense that it estimates the entire volatility function on a discrete grid over the interval [0, 2π], instead of a local value at a specific time t.
The efficient selection of N and M can be performed based on the numerical results given in Mancino and Recchioni (2015).
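The following minimal sketch implements the Fourier-Fejér reconstruction just described: it computes the Fourier coefficients of the price increments, convolves them to obtain the coefficients of the variance, and evaluates the Fejér sum on a grid. The rescaling of time to [0, 2π], the Fejér weights and the choices of N and M are assumptions of the sketch rather than the paper's data-driven selection.

```python
import numpy as np

def fourier_spot_variance(times, log_prices, N, M, grid):
    """Fourier-Fejer estimate of the spot variance on the points of `grid`."""
    times = np.asarray(times, dtype=float)
    T = times[-1] - times[0]
    t = 2 * np.pi * (times - times[0]) / T          # rescale time to [0, 2*pi]
    dp = np.diff(np.asarray(log_prices, dtype=float))
    s_max = N + M                                   # need c_s(dp) for |s| <= N + M
    s = np.arange(-s_max, s_max + 1)
    c_dp = (np.exp(-1j * np.outer(s, t[:-1])) @ dp) / (2 * np.pi)
    tau = 2 * np.pi * (np.asarray(grid, dtype=float) - times[0]) / T
    spot = np.zeros(tau.size)
    for k in range(-M, M + 1):
        # k-th Fourier coefficient of the variance, by convolution of the dp coefficients
        conv = sum(c_dp[l + s_max] * c_dp[(k - l) + s_max] for l in range(-N, N + 1))
        c_var_k = 2 * np.pi / (2 * N + 1) * conv
        spot += ((1 - abs(k) / (M + 1)) * c_var_k * np.exp(1j * k * tau)).real
    return spot * 2 * np.pi / T                     # back to original time units
```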
Regularized estimator
Proposed by Ogawa (2008), the regularized estimator is based on a regularization procedure that involves data around the estimation point. The estimator reads Following Ogawa and Sanfelici (2011), for the implementation we set q = [n/n t ] and s = 2q, where n t is the number of points on which the spot variance trajectory is reconstructed.
Kernel estimator
Fan and Wang (2008) (see also Kristensen (2010)) proposed an estimator of the spot variance based on the localization of the kernel-weighted realized variance over a window of length q, which reads For the implementation, we used the Fejér kernel, following Mancini et al. (2015).
Pre-averaging estimator
The pre-averaging estimator of the spot volatility by Jing et al. (2014) and Y. Li, Liu, et al. (2021) relies on the localization of the pre-averaging integrated estimator on a window of length h and reads The constants c 1 and c 2 can be chosen based on the numerical results in Y. Li, Liu, et al. (2021).
Two-scale estimator
The localized two-scale estimator, proposed by Zu and Boswijk (2014), reads
Optimal candlestick estimator
The optimal candlestick estimator was proposed by J. Li et al. (2022); in its expression, λ_1 and λ_2 are constants. For the implementation, we set ∆_n = 1 minute (see Section 4) and selected (λ_1, λ_2) based on the optimality conditions detailed in J. Li et al. (2022), Section 2.2.
Figueroa-López and Wu
Figueroa-López and Wu (2022) proposed an estimator of the spot variance which consists in a localization of the realized kernel estimator with pre-averaging, i.e.,
The authors suggest using the exponential kernel, that is, $k(x) = \tfrac{1}{2}\exp(-|x|)$, as it is proved to be optimal in terms of minimizing the asymptotic variance (see Figueroa-López and C. Li (2020)). Further, we choose c_k and c_m in accordance with the formulas derived by the authors to optimize the integrated asymptotic variance (see also Remark 4.1 in Figueroa-López and Wu (2022)).
Feasible selection of tuning parameters
The feasible implementation of the optimization formulae for the tuning parameters which appear in the previous section may require the estimation of IV, IQ and ω². In this regard, we use the following estimators, as in Gatheral and Oomen (2010): where the pair $(\sigma^2_w, \varphi)$ is obtained as in (2). Note that q is selected in correspondence with the 5-minute (noise-free) sampling frequency.
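The plug-in formulas used in the paper are not reproduced above; as an indication, a common rough choice is a sparse (e.g., 5-minute) realized variance for IV, the corresponding realized quarticity for IQ, and RV/(2n) computed at the highest frequency for ω². The sketch below implements these generic choices, which are assumptions for illustration only.

```python
import numpy as np

def plug_in_quantities(log_prices, seconds_per_obs=1, target_seconds=300):
    """Rough plug-in estimates of IV, IQ and the noise variance omega^2."""
    r_all = np.diff(log_prices)                      # highest-frequency returns
    omega2 = np.sum(r_all ** 2) / (2 * r_all.size)   # noise variance from RV / (2n)
    step = max(1, target_seconds // seconds_per_obs) # e.g. 5-minute sampling
    r_sparse = np.diff(log_prices[::step])
    iv = np.sum(r_sparse ** 2)                       # sparse realized variance
    iq = (r_sparse.size / 3) * np.sum(r_sparse ** 4) # realized quarticity
    return iv, iq, omega2
```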
Comparative performance study of volatility estimators
In this section we present the results of a study of the finite-sample performance of the volatility estimators described in the previous section, which is based on simulations of the QR model. In the study, we considered two alternative scenarios. In the first scenario, we assumed constant values of the parameters θ and θ reinit , which translate into a constant volatility parameter. Instead, in the second scenario we allowed θ and θ reinit to change, so that the volatility parameter is no longer constant. We emphasize the fact that this second scenario introduces a novelty compared to the study by Gatheral and Oomen (2010), where the volatility parameter is constant. In fact, a scenario with time-varying volatility offers a more realistic framework to assess the performance of estimators.
For each scenario, we simulated 2500 daily paths. For each couple (θ, θ reinit ) considered, the corresponding true value of the volatility parameter was obtained via additional simulations exploiting Eq. (1), as described in Subsection 2.3. Estimators were computed using 1-second price observations. The integrated volatility was estimated on daily intervals, while for the spot volatility we reconstructed daily trajectories on the 1-minute grid. The selection of the tuning parameters involved in the computation of the different estimators was performed based on the feasible formulae and the suggestions reported in the previous section.
Constant θ and θ reinit
In the first scenario, simulations were performed with θ and θ reinit constant and equal to, respectively, 0.6 and 0.85, that is, the parameter values calibrated on the MSFT sample (see Section 2).
We recall that the resulting reference value of the spot variance parameter is σ 2 = 1.0387 · 10 −8 (see Subsection 2.3).
Tables 6-9 display the ranking of the estimators for the three series of mid-price, micro-price and trade-price in terms of their finite-sample performance. The latter is evaluated by means of, resp., the relative bias and MSE for the integrated volatility and the relative integrated bias and MSE for the spot volatility. In each table, the estimators are ordered based on the average ranking, obtained as the arithmetic mean of the rankings for the three prices series.
As regards integrated estimators, the pre-averaging estimator and the Fourier estimator provide the best relative performance in terms of bias and MSE minimization, respectively, based on the average ranking. However, note that the Fourier estimator achieves the best average MSE ranking without ranking first for any individual price series. Indeed, the unified volatility estimator, which is robust to i.i.d. noise and rounding, yields the best performance for mid- and micro-prices, while the best result for trade-prices is achieved by the alternation estimator, which is robust to price discreteness and rounding (see Large (2011)). Overall, this may suggest that robustness to rounding is crucial to optimize the mean squared error. Instead, the pre-averaging estimator is more clearly favorable in terms of bias reduction, given that it ranks first in terms of bias minimization for the micro- and trade-price series and second for the mid-price series. Moreover, it is worth noticing that the MinRV and MedRV estimators, which are also computed from pre-averaged data, occupy the second and third places in the average ranking for the bias.
As for spot estimators, simulations indicate that the Fourier estimator outperforms the other estimators considered, both in terms of bias and MSE optimization, for all the price series considered.
Only the regularized estimator is capable of obtaining comparable performances in terms of bias. The Fourier estimator differs from the other spot estimators considered in that it relies on the integration of the Fourier coefficients of the volatility rather than on the differentiation of the (estimated) integrated variance, and this appears to represent a solid numerical advantage. Figure 1 shows sample trajectories of the spot estimators computed from the mid-price series, along with the true volatility value, to help better understand the difference in performance among the estimators.
Appendix A.2 contains the analogous figures for the micro-price and the trade-price.
Finally, in the case of both integrated and spot estimators we investigated whether the volatility values obtained using different methods were statistically different. To this end, for each pair of estimators, we performed a t-test of the null hypothesis that the mean value of the estimated volatility is the same. As a result, we found that all average estimates are pairwise significantly different at the 1% significance level. Moreover, to better assess the differences in the performance of the estimators, we also applied the model confidence set (MCS) procedure by Hansen, Lunde, and Nason (2011), with a significance level of 1%. Following Patton (2011), for the MCS we used the qlike loss function. As a result, the MCS procedure always selects as the optimal model set the one containing only the estimator with the best ranking in terms of MSE (see Tables 7 and 9), thus supporting the soundness of our results. Overall, the results of the t-tests and the MCS offer additional support to the conclusion that the careful selection of the estimation method is not irrelevant.
Table: Integrated variance estimators, relative bias (columns: estimator, mid-price rank, micro-price rank, trade-price rank, average rank).
Variable θ and θ reinit
A nice feature of the QR model, compared to the ZI model, is the flexibility introduced by the parameters θ and θ reinit . In the second scenario we assessed the effect of time-varying values of θ and θ reinit on the accuracy of volatility estimators. Specifically, we allowed for piece-wise constant volatility dynamics, which might describe a regime-shifting scenario driven, for example, by the flow of information hitting the market. We considered two sub-scenarios, with increasing variability of the volatility parameter. In the first one, the LOB follows five regimes (each with length equal to 1/5 of a day), which translate into a double u-shaped pattern for the volatility parameter, as illustrated in Table 10. In the second sub-scenario, we allowed for ten regimes (each with length equal to 1/10 of a day), which recreate a u-shape pattern for the volatility parameter, see Table 11. Tables 12 and 13 illustrate the average performance rankings in the second scenario. Specifically, average rankings in correspondence of five and ten intra-day regimes are compared with the case of a unique regime (i.e., the case with constant θ and θ reinit illustrated in the previous subsection). Full performance results in terms of bias and MSE are detailed in Appendix A.1.
Overall, it appears that the introduction of time-varying parameters does not significantly affect the performance rankings previously obtained with constant parameters. In other words, most of the rankings are quite stable (with a few exceptions, see, e.g., the average ranking for the MSE of the Range and Pre-averaging estimators), compared to the first scenario. The stability is more evident for spot variance estimators.
As a further investigation, it might be interesting to study whether the performance ranking remains similar when the volatility path has infinitely many regimes, for example when the evolution of θ and θ reinit is driven by stochastic differential equations. This would require building a precise mapping between the volatility and θ and θ reinit in order to simulate the LOB for each value of the simulated volatility. This interesting investigation is beyond the scope of this paper and is left for future research.
As in the previous subsection, to help better understand the difference in performance among the estimators in the second scenario, Figures 2 and 3 contain sample trajectories of the spot variance estimators computed from mid-price observations, together with the path of the true variance parameter; the analogous figures for micro-and trade-prices are in Appendix A.2.
Table: Average rankings for different volatility regimes (integrated variance), reporting the relative bias and relative MSE rankings of each estimator for 1, 5 and 10 regimes.
The impact of efficient volatility estimates on optimal execution
In this section, we discuss the results of a study which aims at providing insights into the impact of the use of efficient volatility estimates on the prediction of the variance of the cost of a VWAP execution.
Consider the following problem. A trader has S shares to buy within the interval [0, T]. The interval is divided into N_τ time periods of length τ = T/N_τ, and v_k, k = 1, ..., N_τ, denotes the (signed) number of shares to be traded in interval k. Clearly, $\sum_{k=1}^{N_\tau} v_k = S$. Moreover, let $\tilde{p}_k$ be the price at which the investor trades in interval k (in general different from the average price in the interval, p_k) and p_0 the price before the start of the execution. The objective function is the Implementation Shortfall (IS), defined as $IS = \sum_{k=1}^{N_\tau} v_k \tilde{p}_k - S p_0$, i.e., as the difference between the cost and the cost in an infinitely liquid market. The IS is in general a random variable, therefore one often wishes to minimize $E[IS] + \lambda\, \mathrm{Var}[IS]$, where λ measures the risk aversion of the trader.
Let us consider a trader who models market impact according to the Almgren and Chriss model (Almgren and Chriss (2001)). In this model, the price of the stock at step k is equal to the previous price plus a linear permanent market impact term and a random shock, that is, where σ² is the instantaneous variance of the unaffected price. Moreover, the actual price paid, $\tilde{p}_k$, is different from the average price p_k in the interval and reads where ρv_k represents a linear temporary impact. The expected cost of an execution is then given by and its variance is equal to Note that the variance does not depend on the impact parameters ρ and θ, but only on the volatility.
Further, note that the above expression is more general, as it remains valid also when the temporary impact is nonlinear, see Guéant (2016) for more details.
For the sake of simplicity, we assume that the trader performs a VWAP execution (i.e., $v = \frac{S}{N_\tau}(1, 1, ..., 1)^T$), so that the variance of the cost is equal to This expression shows that, in order to estimate the variance of the execution cost, the trader must have a reliable estimate of σ. We investigate whether the availability of an efficient estimate of the latent volatility parameter could allow the trader to reliably infer the variance of the cost of the strategy. More specifically, we are interested in assessing whether the use of a specific spot volatility estimator, among those studied in Section 4, leads to a gain in the accuracy of such inference.
To this aim, we use Monte Carlo scenarios of the QR model to simulate a VWAP execution, and we compare the variance of the cost of the simulated executions with the corresponding value predicted by the Almgren and Chriss model (see Eq. (6)), evaluated with the (average) value of σ² obtained through a specific estimator. Since it is not obvious that the Almgren-Chriss model faithfully describes the market impact in the QR model, we opted for a more robust comparison that considers the ratio of the aforementioned quantities for two different values of the couple (θ, θ reinit), i.e., of the volatility. In this way, the effect of the strategy parameters S, τ and N_τ in Eq. (3), which depends on the specific market-impact model, disappears.
For the simulation of the execution strategy, we set T = 3 hours and 20 minutes, τ = 10 minutes and S = 60, so that N_τ = 20 and v = (3, ..., 3)^T. The strategy was simulated on top of QR dynamics simulated in two different scenarios with parameters (θ, θ reinit) equal to (0.6, 0.85) and (0.4, 0.6). We considered 100 VWAP executions and computed the empirical variance of their execution cost. The average spot variance values were retrieved from the study of Section 4. Table 14 compares the ratios obtained for each spot variance estimator with the benchmark ratio, that is, the ratio of the empirical cost variances. Values related to (θ, θ reinit) = (0.6, 0.85) (respectively, (θ, θ reinit) = (0.4, 0.6)) were used in the numerator (respectively, denominator). Table 14 suggests that the Fourier estimator and the regularized estimator produce the relative best forecasts of the variance of the strategy costs, as they are associated with a ratio approximately equal to 1.23, which is the closest to the benchmark value of 1.397. As these two estimators also provide the relative best performance in terms of bias and MSE (see Section 4), our study suggests that efficient volatility estimates may be linked to a better forecast of the variance of the execution cost. Furthermore, note that the range of variation of the ratios in Table 14 suggests that the choice of the estimator is not irrelevant and may lead to significant differences in the forecasts for the execution strategy. It seems, however, that in general the use of the formula in Eq. (6) leads to a certain underestimation of the variance of the implementation shortfall of the considered strategy.
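To make the cost computation concrete, the following sketch simulates discrete Almgren-Chriss dynamics for a VWAP schedule and returns the sampled implementation shortfalls, whose empirical variance can then be compared with a model-implied value. It uses plain Almgren-Chriss dynamics in place of the QR order-book simulator used in the paper, and all parameter values (impact coefficients, number of paths, the per-second variance) are illustrative assumptions.

```python
import numpy as np

def vwap_is_samples(S=60, N=20, tau=600.0, sigma2=1.0387e-8, theta=0.0, rho=0.0,
                    p0=100.0, n_paths=1000, seed=0):
    """Implementation shortfall of a VWAP execution under Almgren-Chriss dynamics."""
    rng = np.random.default_rng(seed)
    v = np.full(N, S / N)                        # equal child orders (VWAP schedule)
    out = np.empty(n_paths)
    for m in range(n_paths):
        p, cost = p0, 0.0
        for k in range(N):
            # permanent impact plus a Gaussian shock on the unaffected price
            p += theta * v[k] + np.sqrt(sigma2 * tau) * rng.standard_normal()
            cost += v[k] * (p + rho * v[k])      # linear temporary impact on the paid price
        out[m] = cost - S * p0                   # implementation shortfall
    return out

samples = vwap_is_samples()
print(samples.mean(), samples.var())             # empirical mean and variance of the cost
```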
Conclusions
This paper extended the work by Gatheral and Oomen (2010) on volatility estimation with LOB data in two directions. First, we used a more sophisticated LOB simulator compared to the ZI model, namely the QR model, which, by introducing correlations between the current state of the LOB and the intensities of order arrival, and thanks to a more complex re-initialization mechanism, is able to produce more realistic market microstructure dynamics. Secondly, we addressed not only integrated volatility estimators, but also spot volatility estimators. As regards integrated estimators, we found that the pre-averaging estimator by Jacod et al. (2009) appears to be favorable in terms of bias optimization. Instead, when looking at the minimization of the MSE, the situation is more nuanced, with the Fourier estimator by Malliavin and Mancino (2009) obtaining the best average ranking across the three different price series considered without actually achieving the best ranking for any of the individual series. Specifically, the MSE is optimized by the unified estimator by Y. Li, Z. Zhang, et al. (2018) (in the case of mid- and micro-prices) and the alternation estimator by Large (2011) (in the case of trade-prices). As for the spot volatility, the Fourier estimator appeared to yield the optimal accuracy both in terms of bias and MSE, outperforming the other estimators considered.
Finally, our results suggested that the careful choice of the spot volatility estimator may be relevant for optimal execution. Specifically, we investigated the impact of different spot volatility estimators on the prediction of the variance of the cost of a VWAP strategy and found that the use of the Fourier estimator, which gave the most accurate volatility estimates in relative terms, also led to a significant gain in predicting the cost variance.
"year": 2022,
"sha1": "1c38f7e352a73c34d6fdb166faa34409ea7b4714",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2202.12137",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1c38f7e352a73c34d6fdb166faa34409ea7b4714",
"s2fieldsofstudy": [
"Economics",
"Mathematics",
"Business",
"Computer Science"
],
"extfieldsofstudy": [
"Economics"
]
} |
The automorphism group of a graphon
We study the automorphism group of graphons (graph limits). We prove that after an appropriate "standardization" of the graphon, the automorphism group is compact. Furthermore, we characterize the orbits of the automorphism group on $k$-tuples of points. Among applications we study the graph algebras defined by finite rank graphons and the space of node-transitive graphons.
Introduction
Graphons have been introduced as limit objects of convergent sequences of dense simple graphs, and many aspects of graphs can be extended to graphons. The goal of this paper is to describe a natural way to extend the notion of graph automorphisms to graphons. Our notion of automorphism group satisfies the natural requirement that it is invariant under weak isomorphism of graphons. (Weakly isomorphic graphons represent the limit objects of the same convergent graph sequences.) Thus our study of the automorphisms of graphons fits well into graph limit theory.
In this paper we heavily use the topological aspects of graph limit theory developed in [7]. It was shown in [7] that every graphon has two "canonical" representations on metric spaces, which we call, informally, the neighborhood metric and the 2-neighborhood metric. These metric spaces depend only on the weak isomorphism class of the graphon. (In [4], these are called the "neighborhood metric" and the "similarity metric".) The neighborhood metric space is simpler to define and work with, but it is not compact in general; the 2-neighborhood metric space is compact. The automorphism group acts on each of these as a subgroup of isometries. It is a rather straightforward consequence of the compactness of the 2-neighborhood metric that the automorphism group is always a compact topological group (Theorem 10). This fact is also closely related to (and could be derived from) a theorem of Vershik and Haböck [10] on the compactness of isometry groups of multivariate functions. As a consequence we prove that for node-transitive graphons the neighborhood metric is also compact.
The space of graphons (with weakly isomorphic graphons identified) is compact in a natural topology (defined by the "cut distance"). Another result of this paper is that the set of node-transitive graphons is closed, and hence compact, in this topology. As we will see, graph limit theory restricted to this closed set gives rise to a rather interesting limit theory for functions on groups. Such a theory was initiated in [9], and it was a crucial component of the limit approach to higher order Fourier analysis (see [8]).
We give a characterization of the orbits of the automorphism group on k-tuples of points. This generalizes results in [3] from finite graphs to graphons, as well as the characterization of weak isomorphism of graphons by Borgs, Chayes and Lovász [1]. We use this characterization to connect the topic of graph algebras with group theory. As an application, we give a group theoretic description of the graph algebras defined by finite rank graphons.
It follows from our results that the limit of a convergent sequence of finite graphs, each having a node-transitive automorphism group, is a node-transitive graphon. However, the relationship between the automorphism groups of the finite graphs and that of the limit graphon is more involved.
Graphs and graphons
A k-labeled graph is a graph (simple or multi) with k of its nodes labeled 1, . . . , k (k ∈ {0, 1, 2, . . . }). We denote by F k the set of k-labeled multigraphs, by G k the set of k-labeled simple graphs, and by G 0 k the set of k-labeled simple graphs with nonadjacent labeled nodes. In particular, G 0 is the set of unlabeled simple graphs.
We will need some special k-labeled graphs and multigraphs. We denote by K_2 the (unlabeled) graph with two nodes and one edge, and by C_2 the multigraph consisting of two nodes connected by two edges. We denote by K_2^• and K_2^•• the graph K_2 with one and two nodes labeled, respectively; C_2^• and C_2^•• are defined analogously. We denote by P_n^•• the path with n nodes, with its two endpoints labeled. For two simple graphs F and G, let hom(F, G) denote the number of homomorphisms (adjacency-preserving maps) V(F) → V(G). We define the homomorphism density $t(F, G) = \hom(F, G)/|V(G)|^{|V(F)|}$. A graphon consists of a standard probability space J and a symmetric measurable function W : J × J → [0, 1]. To simplify notation, we will omit some letters that may be understood. For the standard probability space J, we let B denote the underlying sigma-algebra and let π denote the probability measure. Also, we write dx instead of dπ(x) in integrals if there is only one probability measure considered.
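For small graphs, hom(F, G) and t(F, G) can be computed by brute force over all maps V(F) → V(G); the following sketch, with graphs represented by 0/1 adjacency matrices (a representation chosen only for this example), does exactly that.

```python
import numpy as np
from itertools import product

def hom(F, G):
    """Number of adjacency-preserving maps V(F) -> V(G); F, G are 0/1 adjacency matrices."""
    kF, kG = len(F), len(G)
    count = 0
    for phi in product(range(kG), repeat=kF):
        if all(G[phi[i]][phi[j]] for i in range(kF) for j in range(kF) if F[i][j]):
            count += 1
    return count

def t_density(F, G):
    return hom(F, G) / len(G) ** len(F)

triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
c5 = np.array([[0, 1, 0, 0, 1], [1, 0, 1, 0, 0], [0, 1, 0, 1, 0],
               [0, 0, 1, 0, 1], [1, 0, 0, 1, 0]])
print(hom(triangle, c5), t_density(triangle, c5))   # C5 is triangle-free, so both are 0
```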
Every graphon (J, W) defines an integral operator T_W on the Hilbert space L²(J) by $(T_W f)(x) = \int_J W(x, y) f(y)\, d\pi(y)$. We say that W has finite rank if this operator has finite rank (i.e., its range is a finite-dimensional subspace of L²(J)).
For every graphon (J, W) and every graph F = (V, E), we define $t(F, J, W) = \int_{J^V} \prod_{ij \in E} W(x_i, x_j) \prod_{i \in V} d\pi(x_i)$. We note that the formula makes sense for multigraphs F, but we exclude loops. We write t(F, W) instead of t(F, J, W) if the underlying probability space is clear. Graphons were introduced to describe limit objects of convergent sequences of dense graphs. A sequence of simple graphs G_n is called convergent if the numerical sequence t(F, G_n) converges for every simple graph F. In this case, there is a graphon (J, W) such that t(F, G_n) converges to t(F, W) for every simple graph F [5].
The limiting graphon is not strictly uniquely determined. Quite often one assumes that J = [0, 1] (with the Lebesgue measure). In this paper, different underlying spaces will be more useful. We say that two graphons (J, W ) and (J ′ , W ′ ) are weakly isomorphic, if t(F, W ) = t(F, W ′ ) for every simple graph F . Every graphon is weakly isomorphic to a graphon on [0, 1], but this is not always the most convenient representative of a weak isomorphism class.
Weakly isomorphic pairs of graphons were characterized in [1]. Let J and L be standard probability spaces and let ϕ : J → L be a measure preserving map. For any function U : L × L → R, we define the function $U^\varphi : J \times J \to \mathbb{R}$ by $U^\varphi(x, y) = U(\varphi(x), \varphi(y))$. It is clear that if (L, U) is a graphon, then so is (J, U^ϕ), which we call the pullback of (L, U) along ϕ. It is easy to see that the graphons (L, U) and (J, U^ϕ) are weakly isomorphic. It follows that all pullbacks of the same graphon are weakly isomorphic. The main result of [1] asserts that two graphons are weakly isomorphic if and only if they are pullbacks of the same graphon.
We can define a sequence of graphons (W_1, W_2, ...) to be convergent with limit graphon W if t(F, W_n) converges to t(F, W) for every simple graph F. (There is a semimetric, called the "cut distance", on the set of graphons that makes this space compact, and which defines this same notion of convergence. We don't need the cut distance in this paper, however.) We can define homomorphism densities of k-labeled graphs in graphons, but these will be k-variable functions J^k → R rather than numbers. These restricted homomorphism densities are defined by not integrating the variables corresponding to labeled nodes: $t_{x_1,\ldots,x_k}(F, W) = \int_{J^{V \setminus [k]}} \prod_{ij \in E} W(x_i, x_j) \prod_{i \in V \setminus [k]} d\pi(x_i)$. (2)
Graph algebras
Graph algebras are important algebraic structures associated with graph parameters. We give a quick introduction to the subject. For more details see [4]. For two simple graphs G, H ∈ G k , the product GH is defined as the graph obtained from G and H by identifying vertices with the same label and by reducing multiple edges. This product defines a commutative semigroup structure on G k . Let Q k denote the set of formal R-linear combinations of elements from G k . Such linear combinations are usually called quantum graphs. The multiplication extends to Q k from G k using the distributive law and thus Q k becomes a commutative algebra. In other words, Q k is the semigroup algebra of G k .
Let [[G]] be the graph obtained by removing the labels in the graph G. We extend this notation to quantum graphs by linearity. An arbitrary graph parameter f : G → R can be extended to quantum graphs by linearity. Similarly, restricted homomorphism densities can be extended to k-labeled quantum graphs by linearity. Every graph parameter gives rise to a symmetric bilinear form on Q_k defined by $\langle F, G \rangle_f = f([[FG]])$. Let I_k be the set of elements Q in Q_k such that $\langle Q, P \rangle_f = 0$ for every P ∈ Q_k. Then I_k is an ideal, and Q_k/I_k is the graph algebra corresponding to f. The matrix $M_k = (f([[FG]]))_{F,G \in G_k}$ is called the k-th connection matrix of f. It is easy to see that the rank of M_k is the dimension of Q_k/I_k.
We will be interested in the special case when f is defined by f(G) = t(G, W) for some fixed graphon W. In this case, Q_k/I_k depends only on the weak isomorphism class of W. It was shown in [5] that in this case the inner product $\langle \cdot, \cdot \rangle_f$ is positive semidefinite, and hence so are the connection matrices.
There is a concrete representation of Q k /I k that will be convenient to use and that will create a connection between automorphisms of W and the graph algebras. Let (J, W ) be an arbitrary graphon. We define a map ψ k : Q k → L ∞ (J k ) by letting ψ k (G) be the k-variable function t x1,x2,...,x k (G, W ) ∈ L ∞ (J k ). We extend this map linearly to general quantum graphs. Note that L ∞ (J k ) is a commutative algebra with pointwise multiplication and addition, and ψ k is an algebra homomorphism. The kernel of ψ k is equal to I k and thus the range of ψ k is isomorphic to the k-th graph algebra of W . We denote by A k = A k (J, W ) this subalgebra of L ∞ (J k ).
We will need a subalgebra of A_k: let A^0_k denote the linear span of the functions ψ_k(G), where G ∈ G^0_k (so its labeled nodes are non-adjacent). By definition, A_1 = A^0_1.
Metrics on graphons
For two points x, y of a graphon (J, W), we define their neighborhood distance by $r_W(x, y) = \int_J |W(x, z) - W(y, z)|\, d\pi(z) = \|W(x, \cdot) - W(y, \cdot)\|_1$. It may happen that W(x, ·) is not measurable for some x; however, we can always change W on a set of measure 0 to make these one-variable sections measurable. We will assume in the sequel that these functions are measurable. The distance function r_W is not necessarily a metric, only a semimetric, meaning that r_W(x, y) may be 0 for distinct points x and y. Such points are called twins. How to merge twins to get a weakly isomorphic graphon for which r_W is a metric was described in [1] (see also [4]).
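For a stepfunction graphon whose steps all have equal measure (an assumption made only for this sketch), the neighborhood distance between two steps reduces to an average of absolute differences of the corresponding rows of the step matrix.

```python
import numpy as np

def neighborhood_distance(W, x, y):
    """r_W(x, y) for a stepfunction graphon: W is a symmetric k x k step matrix,
    all steps of measure 1/k, and x, y are step indices."""
    k = W.shape[0]
    return np.sum(np.abs(W[x] - W[y])) / k

W = np.array([[0.2, 0.8, 0.5],
              [0.8, 0.2, 0.5],
              [0.5, 0.5, 0.9]])
print(neighborhood_distance(W, 0, 1))   # 0.4
print(neighborhood_distance(W, 0, 2))   # 0.333...
```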
As a further step of "purifying" a graphon, we can replace the metric space (J, r W ) by its completion. Furthermore, in this new topology the underlying probability measure may not have full support; we may restrict the graphon to the support of the measure (which is a closed and therefore complete subspace). This procedure is described in [7].
We call a graphon (J, W ) pure, if (J, r W ) is a complete metric space, and π has full support (i.e., every open set has positive measure). The procedure described above implies that every graphon is weakly isomorphic to a pure graphon. Pure graphons will be crucial in this paper, even in order to define automorphisms.
It will be sometimes convenient to use the L²-distance instead of the L¹-distance: we consider $d_W(x, y) = \|W(x, \cdot) - W(y, \cdot)\|_2$; since $r_W \le d_W \le r_W^{1/2}$, these two metrics define the same topology. In particular, the metric space (J, d_W) associated with a pure graphon (J, W) is also complete and the measure π has full support. One advantage of d_W is that it can be expressed in terms of restricted homomorphism densities. We consider the 2-labeled quantum graph h in Figure 1. Then it is easy to check that $d_W(x, y)^2 = t_{x,y}(h, W)$. While the difference between the metrics r_W and d_W is not essential, the 2-neighborhood metric (called the similarity metric in [4]) is more substantially different [6,7]. One way to define it is to introduce the "operator square" of a graphon, $(W \circ W)(x, y) = \int_J W(x, z) W(z, y)\, d\pi(z)$, and then consider the neighborhood distance of the graphon (J, W ∘ W), i.e., $r_{W \circ W}(x, y) = \|(W{\circ}W)(x, \cdot) - (W{\circ}W)(y, \cdot)\|_1$. This definition looks artificial, but in fact it has many nice properties. It is easy to see that $r_{W \circ W} \le r_W$. If (J, W) is a pure graphon, then (J, r W ) is a metric space (in particular, the distance between distinct points is positive), which is not necessarily complete, but we can consider its completion (J, r W ). We can extend the probability measure to J by defining it to be 0 on the set of new points. We can also extend the function W to W : J × J → [0, 1] so that (J, W) is a graphon, and the metric r W is equal to the completion of the metric r W (this takes some care). We will not distinguish r W and r W in the sequel. On the other hand the metric r W is quite different: In terms of the r W metric, all open sets have positive measure, while the set J \ J of new points is closed and has measure 0. On the other hand, in terms of the r W metric, the set J \ J is open (of measure zero).
The main property of this completion, which we will need, is that the space (J , r W ) is compact ( [7]; see also [4], Corollary 13.28). The metric r W has another important property ( [4], Theorem 13.27):
Continuity of restricted homomorphism numbers
We start with citing Lemma 13.19 from [4]: Lemma 2 Let (J, W ) be a pure graphon and let F = (V, E) be a k-labeled multigraph with nonadjacent labeled nodes. Then for all x 1 , . . . , x k , y 1 , . . . , y k ∈ J.
We need a version of this lemma for the r W -distance instead of the r W -distance. Some special cases of this were proved in [4], Section 13.4.
Lemma 3
Let (J, W ) be a pure graphon, and let F = (V, E) be a k-labeled simple graph with nonadjacent labeled nodes. Then the restricted homomorphism function t x1...x k (F, W ) is continuous in each of its variables x i on the metric space (J , r W ).
Proof. Consider any point x = (x 1 , . . . , x k ) ∈ J k , and let y 1 , y 2 , · · · ∈ J be such that Let N (1) = {k + 1, . . . , k + r}, and let F ′ be obtained from F by deleting node 1 and labeling nodes k + 1, . . . , k + r. Then The condition r W (y n , x 1 ) → 0 implies that W (y n , .) → W (x 1 , .) weakly as n → ∞ ( [4], Theorem 13.7). It is easy to see that this implies that k+r i=k+1 W (y n , z i ) → k+r i=k+1 W (x 1 , z i ) (weakly as a function of (z k+1 , . . . , z k+r )), which in turn implies that Let us discuss the restrictions in these lemmas. It is obvious that these lemmas do not remain valid if we allow edges between labeled nodes: for example, itself is not necessarily continuous. Lemma 2 implies that t x1,...,x k is continuous (even Lipschitz) in the neighborhood distance, simultaneously in all variables. Lemma 3, however, fails to hold in this stronger sense; see Example 4 below (adapted from [4], Example 13.30). This example also shows that in Lemma 3 we have to restrict F to simple graphs. Inequality (4) also shows that, for a fixed F , the difference |t x (F, W ) − t y (F, W )| can be estimated by max i r W (x i , y i ), independently of W . Example 5 below shows that, even in the case k = 1, no such estimate can be given in terms of r W (x, y).
This function is not symmetric, so we put it together with a reflected copy to get a graphon: , then (as noted in [4]) the sequence (u 1 , u 2 , . . . ) converges to the point 0 in the metric r W . On the other hand, for the 3-node path labeled at both endpoints is not continuous at x = 0, and that t x,y (P •• 2 , W ), as a function of x and y, is not continuous at (0, 0).
Example 5 Consider the weighted graph H given by the matrix of edgeweights
This weighted graph H can be considered as a pure graphon with a 4-point underlying space. Then H • H = H ′ is the weighted graph given by the matrix of edgeweights 1/8 1/4 2/9 2/9 1/4 1/2 1/9 2/9 1/6 1/3 0 and the same nodeweights as before. Let a and b be the last two nodes, then We have seen that excluding edges between the labeled nodes is essential in both previous lemmas. The next lemma expresses W in terms of restricted homomorphism numbers for graphs with nonadjacent labeled nodes, and gives a (rather weak, but still useful) remedy for this restriction. We consider two sequences of quantum graphs f n and g n (Figure 2), and define Figure 2: Quantum graphs f , f n and g n . Here f n is obtained by gluing together n copies of f along the labeled nodes; one of these nodes is then unlabeled (shown in white).
Proof. We have It is not hard to see that the right hand side tends to 0 as n → ∞, which implies the lemma (in fact, a little more: U n (x, .) → W (x, .) in the L 1 metric for every x).
3 Compactness of the automorphism group
Automorphisms of graphons
It only makes sense to define automorphisms of pure graphons. Of course, one could define an "automorphism" of any graphon (J, W ) as an invertible measure preserving map σ : J → J such that W (x σ , y σ ) = W (x, y) for almost all x, y ∈ J. However, there is a lot of trouble with this notion: weakly isomorphic graphons will have wildly different automorphism groups. An example with many automorphisms is a stepfunction W : here Aut(W ) contains the group of all invertible measure preserving transformations that leave the steps invariant (in addition to all the automorphisms of the corresponding weighted graph). Note, however, that if we purify a stepfunction, then we get a finite weighted graph, and so the large and "ugly" subgroups consisting of measure preserving transformations of the steps disappear. Another problem would be that any permutation of points of a zero-measure set should be considered an automorphism, so every graphon would have a transitive automorphism group.
Definition 7 Let (J, W ) be a pure graphon. A measure preserving bijection σ : J → J is called an automorphism of (J, W ) if, for every x ∈ J, the equality W (x σ , y σ ) = W (x, y) holds for almost all y ∈ J.
Note the change in the phrasing of the last condition: it is stronger than requiring that W(x^σ, y^σ) = W(x, y) for almost all x, y ∈ J. This modification excludes "automorphisms" like interchanging two points.
(The simpler but inadequate definition is given in [4]; the results announced there hold true with the definition given here.) It is clear that every automorphism preserves the distances r W and r W , and hence it extends to an automorphism of (J, W ). The points of J \ J can be identified in the graphon (J , W ) by the property that every r W -neighborhood of them has positive measure. So the automorphism groups of a pure graphon (J, W ) and its completion (J, W ) are essentially the same. In this section, we will mostly work with (J, W ).
We can endow Aut(W ) with a metric (and through this, with a topology) by Not every isometry of the metric space (J, r W ) (or of the metric space (J, r W )) is an automorphism.
. This is pure as well, and interchanging the two components is an isometry but not an automorphism in general.
The following technical lemma shows that a slight apparent weakening of the second condition in the definition of an automorphism leads to the same concept. We will formulate it for the r W -metric; for the r W -metric the proof is similar (in fact, much simpler).
Lemma 9 Let (J, W ) be a pure graphon, and let ϕ : J → J be a bijective measure preserving map that is an isometry of (J , r W ) and satisfies W ϕ = W almost everywhere.
Then ϕ is an automorphism.
The condition that W ϕ = W almost everywhere implies that almost all points are nice, but we want to show that all points are nice.
To this end, let us fix x ∈ J. Since the measure has full support in (J , r W ), every neighborhood of x has positive measure, and hence there is a sequence of nice points x n such that r W (x n , x) → 0. This means that Lemma 11 The automorphisms of a pure graphon (J, W ) form a closed subgroup of the isometry group of (J, r W ).
Proof. Clearly every automorphism of (J, W ) is an isometry of (J , r W ), and these isometries form a subgroup. We want to prove that this subgroup is closed in the topology of pointwise convergence.
Let (ϕ n ) be a sequence of automorphisms of (J, W ), and assume that they converge to an isometry ϕ. We want to prove that ϕ is not only an isometry, but an automorphism. By Lemma 9, it suffices to prove the following claims.
Claim 2
The map ϕ is measure preserving.
It suffices to prove that for any two open sets A and B, For every y ∈ J, Using that the maps ϕ n are automorphisms, The first term on the right side tends to A×B W (ϕ(x), ϕ(y)) dx dy by Proposition 1, and the difference in the last line tends to 0 as n → ∞ by (9) and Claim 1. This proves Claim 3, and thereby the Lemma.
Spectral decomposition
Since W is bounded, the operator T_W is Hilbert-Schmidt and hence it has a spectral decomposition $W(x, y) = \sum_r \lambda_r f_r(x) f_r(y)$ (10), where the λ_r are its nonzero eigenvalues and the functions f_r ∈ L²(J) are the corresponding eigenfunctions, forming an orthonormal system. Here λ_r → 0. By definition, $\lambda_r f_r(x) = \int_J W(x, y) f_r(y)\, d\pi(y)$ (11) almost everywhere. We assume that W(x, ·) is measurable for every x, and we can change f_r on a set of measure 0 so that (11) holds for every x ∈ J. We note that (11) implies that f_r is bounded: since 0 ≤ W ≤ 1 and ‖f_r‖₂ = 1, we have |f_r(x)| ≤ 1/|λ_r|. We need the following simple observation: for every x ∈ J, Indeed, using (11) and the fact that {f_r} is an orthonormal system, we get (the last equality follows because even though {f_r} may not be a complete orthogonal system, it can be extended by functions in the nullspace of T_W to such a system, and these additional functions contribute 0 terms). The second equality in (12) is trivial by definition. (12) in turn implies that Expansion (10) may not hold pointwise, only in L²; but it follows from basic results on Hilbert-Schmidt operators that if we take the inner product with any function U ∈ L²(J × J), then we get an equation: where the sum on the right side is absolutely convergent. We need the following stronger fact: Lemma 12 Let (J, W) be a graphon, and let (10) be its spectral decomposition.
(a) For U ∈ L 2 (J) and y ∈ J, the sum is absolutely convergent.
(b) For every bounded measurable function U : J × J → R and for almost all y ∈ J, Here the first factor is the tail of a convergent sum by (13), and hence it tends to 0 as n → ∞. Furthermore, {f r } is an orthonormal system, and hence proving (a).
Let g 1 (y) and g 2 (y) be the functions on the left and right sides of equation (16). Then for any bounded measurable function h : J → R, (where we use (14) and the fact that the sum in the third line is absolutely convergent). This proves that g 1 = g 2 almost everywhere.
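For a stepfunction graphon, the expansion (10) can be checked directly: the eigenvalues of T_W are those of the step matrix scaled by the step measure, and summing λ_r f_r(x) f_r(y) over all r recovers W. The discretization and the numerical values below are assumptions of this sketch.

```python
import numpy as np

# Stepfunction graphon on 3 steps of measure 1/3 each (illustrative values).
W = np.array([[0.9, 0.1, 0.1],
              [0.1, 0.9, 0.1],
              [0.1, 0.1, 0.9]])
k = W.shape[0]

evals, evecs = np.linalg.eigh(W / k)   # eigenvalues of T_W restricted to step functions
f = evecs * np.sqrt(k)                 # eigenfunctions normalized in L2(J)

# Full reconstruction: sum_r lambda_r f_r(x) f_r(y) equals W.
print(np.allclose((f * evals) @ f.T, W))          # True

# Truncation keeping only eigenvalues with |lambda_r| >= 0.3 (a rank-1 approximation here).
keep = np.abs(evals) >= 0.3
print(np.round((f[:, keep] * evals[keep]) @ f[:, keep].T, 3))
```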
Spectral decomposition of pure graphons
In this chapter we use the topological properties of pure graphons to formulate finer statements about spectral decompositions. First of all, note that if (J, W ) is a pure graphon then eigenfunctions of W are continuous functions on J in the metric r W ([4], Corollary 13.29). Furthermore, the eigenfunctions separate the points of J: Lemma 13 If (J, W ) is a pure graphon, then for every pair of distinct points x, y ∈ J there is an eigenfunction f of W such that f (x) = f (y).
Proof.
By way of contradiction, assume that x and y cannot be separated this way. From x = y we obtain that r W (x, y) > 0, and thus the functions W • W (x, .) and W • W (y, .) have a positive distance in L 2 (J). On the other hand, holds for every fixed z ∈ J where the sum is L 2 -convergent. Applying this formula for z = x and z = y together with our assumption that f i (x) = f i (y), we get a contradiction.
Lemma 14 If (J, W ) is a pure graphon, then the sum on the left side of (12) converges uniformly for x ∈ J.
Proof. Using continuity of the eigenfunctions we obtain that every term on the left side of (12) is continuous in r W , and so is the right side by Lemma 3. Since every term on the left side is nonnegative, it follows by Dini's Theorem that the convergence is uniform in x.
This allows us to get the following stronger version of Lemma 12 for pure graphons: Lemma 15 (a) If (J, W ) is a pure graphon, then the sum (15) is uniformly absolute convergent for y ∈ J.
(b) If, in addition, U (x, y) is a continuous function of y for every x ∈ J in the neighborhood distance, then the expansion (16) holds for every y ∈ J.
Proof. (a) By Lemma 14, Hence the computation in (17) gives an estimate of the tail uniformly for all y ∈ J.
(b) The left side of (16) defines a continuous function of y ∈ J in the metric r W . Every term on the right side is also continuous, and the convergence is uniform by the estimate (17), using (18). Hence the limit is a continuous function of y ∈ J. The space J has the property that every nonempty open set has positive measure. If two continuous functions are equal almost everywhere on such a space, then they are equal everywhere.
Corollary 16 If the automorphism group of a pure graphon (J, W ) is transitive on J, then (J, W ) is compact and J = J.
Proof. Let x ∈ J, then the orbit of x is a continuous image of Aut(J, W ), and so it is compact in the metric r W . If the automorphism group is transitive on J, then this orbit is J, and hence (J, r W ) is compact. Since r W ≤ r W , this implies that (J, r W ) is compact, and since J is dense in (J, r W ), it follows that J = J.
We use our results above about spectra to describe a way, more explicit than convergence in L 2 , of the convergence of the expansion (10). For a graphon (J, W ) and λ > 0, we define the graphon (J, [W ] λ ) by the following partial sum of (10): Note that this sum is finite. If W has multiple eigenvalues, then the terms λ r f r (x)f r (y) depend on the basis chosen in the eigenspaces, but [W ] λ does not depend on this basis.
where E λr is the eigenspace of W corresponding to λ r . Let Π λ denote the orthogonal projection of L 2 (J) onto U λ . Then Assume that the eigenvalues are ordered so that |λ 1 | ≥ |λ 2 | ≥ . . . . Let µ λ denote the probability distribution of the vector (f 1 (x), f 2 (x), . . . , f d (x)) ∈ R d , where d = dim(U λ ) and x ∈ J is chosen randomly, and let S λ ⊂ R d be the support of µ λ . Then the purification of (J, [W ] λ ) can be defined as (S λ , W ′ λ ), where A coordinate-independent way of describing µ λ is to consider the dual space of U λ .
For each x ∈ J, we consider the linear functional f → (T W f )(x) (f ∈ U λ ). If x ∈ J is chosen randomly we obtain the probability distribution µ λ on U * λ , and we can define S λ as its support. We will need the next lemma, which is a direct consequence of the results in the paper [9].
Lemma 17 Let {U n } ∞ n=1 be a convergent sequence of graphons with limit W . Assume that λ > 0 is not an eigenvalue of W . Then there is subsequence and choices of orthonormal eigenvectors for [W n ] λ and [W λ ] such that the measures µ n λ constructed above for W n converge to µ λ weakly.
If α < β, then the projection be a decreasing sequence tending to 0. Let S be the inverse limit of the system {P αi+1,αi : The limit of {µ αi } ∞ i=1 defines a probability measure µ on the compact set S. Let (S, U αi ) be the graphon defined on S using the formula (21) for the i-th coordinate.
Lemma 18 For every graphon (J, W ) there is a measure preserving homeomorphism τ : J → S such that (U αi ) τ = [ W ] αi holds for every i.
Proof.
Notice that the construction of (S, U αi ) depends only on the weak isomorphism class of W , and so we can assume that (J, W ) is pure. The maps τ i : x → (f 1 (x), f 2 (x), . . . , f d (x)) from J to S i (where d = dim(U αi )) are continuous in the r W metric. Hence the map τ = (τ 1 , τ 2 , . . . ) : J → S is also continuous. Since τ separates elements in J (to see this, apply lemma 13 for W •W ), it is a bijection between J and S. The desired property is clear from the definition of τ .
Subdividing edges
As an application of spectral decomposition, we prove the following generalization of Lemma 5.1 in [1] (which will be needed later on).
Lemma 19 Let (J 1 , W 1 ) and (J 2 , W 2 ) be two pure graphons and let a ∈ J k 1 , b ∈ J k 2 . Let h be a k-labeled quantum multigraph. In every constituent of h, select an edge such that at least one endpoint of it is unlabeled, and let h m denote the k-labeled quantum multigraph obtained from h by subdividing the selected edge by m − 1 new nodes in every constituent. Suppose there exists an m 0 ≥ 2 such that t a (h m , Proof. Let g i be obtained from h by keeping only those terms in which one endpoint of the selected edge is labeled i (1 ≤ i ≤ k). Let g 0 be the sum of the remaining terms, where the selected edge has no labeled endpoint. Let g ′ i be the (k +1)-labeled quantum multigraph obtained from g i by deleting the selected edge from each constituent and labeling its unlabeled endpoint by k + 1. Let g ′ 0 be the (k + 2)-labeled quantum multigraph obtained from g 0 by deleting the selected edge from each constituent and labeling its endpoints by k + 1 and k + 2. Then We use the spectral decomposition This decomposition holds almost everywhere for m ≥ 2, but for m = 1, we can only claim that the sums on the right sides converge to the function on the left in L 2 . Since the graphon is pure, Lemma 15 implies that the expansion holds for all m ≥ 1. We have an analogous expansion for t b (h m , W 2 ). If these two expressions are equal for every integer m ≥ m 0 , then they are also equal for m = 1 (see e.g. [4], Proposition A.21).
Corollary 20 Let (J_1, W_1) and (J_2, W_2) be two pure graphons and let a ∈ J_1^k, b ∈ J_2^k. (a) If (23) holds for every k-labeled simple graph F, then (23) holds for every k-labeled multigraph F. (b) If (23) holds for every k-labeled simple graph F with nonadjacent labeled nodes, then (23) holds for every k-labeled multigraph F with nonadjacent labeled nodes.
Proof. (b) follows from Lemma 19 by induction on the number of parallel edges.
To prove (a), it suffices to note that W 1 (a i , a j ) = W 2 (b i , b j ) follows by considering the simple graph F with a single edge connecting i and j.
Automorphism groups and spectral decomposition
Let g : J → J be an automorphism of a graphon (J, W). Notice that if f is an eigenfunction of length 1 of W then f^g is also an eigenfunction of length 1 corresponding to the same eigenvalue. As a consequence, every automorphism of W acts on the space U_λ defined in (20) as an element of $O_\lambda := \prod_{|\lambda_r| \ge \lambda} O(E_{\lambda_r})$, where O(E_{λ_r}) is the orthogonal group on E_{λ_r}. The corresponding action on the dual space U*_λ leaves the measure µ_λ invariant. We will denote by Γ_λ the finite dimensional compact group formed by all elements of O_λ that preserve µ_λ. (Note that Γ_λ is the automorphism group of [W]_λ.) The group O_α acts on both U_α and U*_α. Since U_β is an invariant subspace of O_α, the group O_α acts on U*_β as well. In particular, there is a homomorphism h_{α,β} : Γ_α → Γ_β. We denote by Γ_W the inverse limit of the system {h_{α_{i+1},α_i}}_{i=1}^∞. We can describe the automorphism group of a compact graphon using the representation of the graphon given above.
Lemma 21 For every graphon (J, W ) the action of Aut(W ) on J can be obtained as
Proof.
We may assume that the graphon (J, W ) is pure. First we show that The other containment is a direct consequence of Lemma 18: elements of τ −1 • Γ W • τ act on J continuously and leave [ W ] αi invariant for every i. This means that they also fix W .
Characterization of the orbits
The following theorem characterizes the orbits of the automorphism group of a graphon.
Theorem 22 Let (J, W ) be a pure graphon, and let a 1 , . . . , a k , b 1 , . . . , b k ∈ J. Then there exists an automorphism ϕ ∈ Aut(J, W ) such that a ϕ i = b i if and only if t a1...a k (F, W ) = t b1...b k (F, W ) for every k-labeled simple graph F in which the labeled nodes are independent.
The following version is more general (at least formally).
Theorem 23 Let (J_1, W_1) and (J_2, W_2) be two pure graphons and let α_i ∈ J_i^k (i = 1, 2). Then there exists a measure preserving bijection ϕ : J_1 → J_2 such that W_2^ϕ = W_1 almost everywhere and α_{1,i}^ϕ = α_{2,i} if and only if t_{α_1}(F, W_1) = t_{α_2}(F, W_2) for every k-labeled simple graph F.
The proof of this theorem is a modification of the proof of the main result of [1], combined with more recent methods involving pure graphons.
First, we note that the condition in the theorem is self-sharpening: by Corollary 20, the condition holds for every k-labeled multigraph F . The following lemma is the main step in the proof.
Lemma 24 Let (J_1, W_1) and (J_2, W_2) be two graphons and let a ∈ J_1^k, b ∈ J_2^k be such that t_a(F, W_1) = t_b(F, W_2) for every k-labeled multigraph F. Let π_i denote the probability measure of J_i. Then we can couple π_1 with π_2 so that if (X, Y) is a pair from the coupling distribution, then almost surely t_{a,X}(F, W_1) = t_{b,Y}(F, W_2) for every (k + 1)-labeled multigraph F.
Proof.
Consider two random points X from π and Y from π ′ , and the random variables with values in [0, 1] F k+1 . We claim that the variables A and B have the same distribution. It suffices to show that A and B have the same mixed moments. If F 1 , . . . , F m ∈ F k+1 , and q 1 , . . . , q m are nonnegative integers, then the corresponding moment of A is where the multigraph F is obtained by unlabeling the node labeled k + 1 in the multigraph F q1 1 . . . F qm m . Expressing the moments of B in a similar way, we see that they are equal by hypothesis. This proves that A and B have the same distribution.
Using Lemma 6.2 of [1] it follows that we can couple the variables X and Y so that A = B with probability 1. In other words, for every F ∈ F k+1 with probability 1.
For an infinite sequence X ∈ J N , let X[n] denote its prefix of length n.
Lemma 25 Under the conditions of the previous lemma, we can couple π N 1 with π N Proof. By Lemma 24, we can define recursively a coupling κ n of π n 1 with π n 2 so that t X (F, W 1 ) = t Y (F, W 2 ) almost surely for every F ∈ F k+n , and κ n+1 , projected to the first n coordinates in both spaces, gives κ n . The distributions κ n give a distribution κ on J N 1 × J N 2 , which clearly has the desired properties. The following lemma can be considered as a version of the theorem for infinite sequences.
Lemma 26 Let (J 1 , W 1 ) and (J 2 , W 2 ) be two pure graphons, and let a i = (a i,1 , a i,2 , . . . ) ∈ J N i be a sequence whose elements are dense in J i . Suppose that t a1 (F, W 1 ) = t a2 (F, W 2 ) for every partially labeled multigraph F . Then there is a measure preserving bijection ϕ : J 1 → J 2 such that W ϕ 2 = W 1 almost everywhere and a ϕ 1,j = a 2,j for all j ∈ N.
The notation t a (F, W ), where a is an infinite sequence, means that only those elements of a are considered whose subscript occurs in F as a label.
Proof. We start with noticing that This follows by (3) and the hypothesis of the lemma. For x ∈ J 1 , take a subsequence (a 1,i1 , a 1,i2 , . . . ) such that a i,in → x. Then (a 1,i1 , a 1,i2 , . . . ) is a Cauchy sequence, and hence, by (24), so is the sequence (a 2,i1 , a 2,i2 , . . . ), and since (J 2 , r W2 ) is complete, it has a limit x ϕ . It is easy to see that this map is well-defined (i.e., it does not depend on the choice of the sequence (a 1,i1 , a 1,i2 , . . . )), and that ϕ is bijective.
Next, we claim that for every sequence x 1 , . . . , x k ∈ J 1 and every multigraph F with nonadjacent labeled nodes Indeed, this holds if every x i is an element of the sequence a 1 by hypothesis, and then it follows for all x i by the continuity of t x1,...,x k (F, W 1 ) (Lemma 2). Finally, consider the function . By Lemma 6, W 1 − U n 1 → 0 as n → ∞. Also, by (25), , and applying Lemma 6 again, W ϕ 2 −U n 1 → 0 as n → ∞. This implies that W 1 = W ϕ 2 almost everywhere. Now we are ready to prove the main theorem of this section.
Proof of Theorem 23. Let X 1 , X 2 . . . be independent random points of J 1 , and let Y 1 , Y 2 . . . be independent random points of J 2 . Applying Lemma 25 repeatedly, we can couple X 1 , X 2 . . . with Y 1 , Y 2 . . . so that, for any (k + r)-labeled graph F , With probability 1, the elements of both sequences a = (a 1 , . . . , a k , X 1 , X 2 , . . . ) and b = (b 1 , . . . , b k , Y 1 , Y 2 , . . . ) are dense in J 1 and J 2 , respectively. Let us fix such a choice, then by Lemma 26 there is a measure preserving bijection ϕ : J 1 → J 2 such that W ϕ 2 = W 1 almost everywhere and a ϕ i = b i for all i ≤ k. This proves the theorem.
Node-transitive graphons
Let G be the automorphism group of the pure graphon (J, W ). We consider the natural action of G on functions on J defined by f g (x) = f (x g ). Similarly G acts diagonally on functions on J n . For a subset S ⊂ L ∞ (J n ) we denote by S G the set of G-invariant elements in S. It is clear that restricted homomorphism functions are invariant under the action of G and thus all the algebras A k are G-invariant.
Definition 28 A graphon is called node-transitive if the automorphism group of its pure representation (J, W ) acts transitively on J.
The next theorem gives an algebraic characterization of node-transitive graphons.
Theorem 29 Let (J, W ) be a graphon. The following statements are equivalent.
(iv) The first connection matrix M 1 of W has rank 1.
equivalent to (iii). We know that rk(M 1 ) = dim(A 1 ), so (iv) is just a re-statement of (iii). Finally, (v) is a re-statement of (iv), since M k is positive semidefinite.
Examples for node-transitive graphons are finite node-transitive graphs. Other examples are graphons defined on compact topological groups.
Definition 30 Let G be a second countable compact topological group, which, together with its Haar measure, defines a standard probability space. Let f : G → [0, 1] be a measurable function such that f(x) = f(x⁻¹). Then the graphon W : G × G → [0, 1] defined by $W(x, y) = f(xy^{-1})$ is called a Cayley graphon. Note that the condition f(x) = f(x⁻¹) is needed to guarantee that W is symmetric. By omitting this condition we get "directed Cayley graphons".
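A finite sanity check: on the cyclic group Z_n with the uniform measure, a function f with f(x) = f(-x) yields the Cayley graphon W(x, y) = f(x - y), which is a circulant (hence node-transitive) step matrix. The particular n and f below are assumptions of the example.

```python
import numpy as np

def cayley_graphon(f_values):
    """Cayley graphon on Z_n: W[x, y] = f((x - y) mod n) for a symmetric f."""
    f = np.asarray(f_values, dtype=float)
    n = f.size
    assert np.allclose(f, f[(-np.arange(n)) % n]), "f must satisfy f(x) = f(-x)"
    idx = (np.arange(n)[:, None] - np.arange(n)[None, :]) % n
    return f[idx]

n = 8
f = np.zeros(n)
f[[0, 1, n - 1]] = 1.0          # f(0) = f(1) = f(-1) = 1: a circulant "interval" graphon
print(cayley_graphon(f).astype(int))
```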
Note that a finite node-transitive graph G is not necessarily a Cayley graph (for example, the Petersen graph). However one can obtain a Cayley graph G ′ from G by replacing every vertex by m vertices and every edge by a complete bipartite graph K m,m . The value m is the size of the stabilizer of a vertex in G in the automorphism group. The graph G ′ is weakly isomorphic to G as a graphon.
Proof. Let (G, W ) be a Cayley graphon on the compact topological group G. It is clear that G acts transitively (with multiplication from the right) on this graphon. (However, W might not be pure.) It follows that the restricted homomorphism functions t x (F, W ) are all constant on G. The third condition in Theorem 29 shows that W is node-transitive.
To prove the second assertion, let (J, π, W ) be a node-transitive graphon; we may assume that it is pure. Let G be its automorphism group. We know that G is compact, and so it has a normalized Haar measure µ. Let us fix an element c ∈ J, and define the function U : G × G → [0, 1] by U (g, h) = W (c g , c h ). We claim that (G, µ, U ) is a Cayley graphon weakly isomorphic to (J, π, W ).
Remark 32 Theorem 31 creates a connection between graph limit theory and an interesting and rich limit theory for functions on groups (see [8], [9]). The idea is the following. Let $f_i : G_i \to [0, 1]$ (i = 1, 2, ...) be a sequence of measurable functions on compact groups. We say that the sequence f_i is convergent if the corresponding Cayley graphons {W_i}_{i=1}^∞ converge. By Corollary 33 and Theorem 31, the limit of {W_i}_{i=1}^∞ is weakly isomorphic to a Cayley graphon defined by a measurable function f : G → [0, 1] on a compact group. We say that f is the limit object of the sequence. It turns out that one can define this limit concept without passing to graphons. This point of view was heavily used in the second author's approach [8] to higher order Fourier analysis.
Limits of node-transitive graphons
We start with the observation that, if a convergent graph sequence consists of node-transitive graphs, then their limit graphon is node-transitive as well. More generally, we have the following consequence of the fifth condition in Theorem 29.
Corollary 33 If a sequence of node-transitive graphons is convergent, then their limit graphon is also node-transitive.
What makes this simple assertion interesting is the fact that the automorphism group of the limit graphon is not determined by the automorphism groups of graphs or graphons in the convergent sequence.
Example 34 Fix any 0 < α < 1/2, and define the graph G_n by V(G_n) = [n], where every i ∈ [n] is connected to the next and previous ⌊αn⌋ nodes (modulo n). The automorphism group of G_n is the dihedral group D_n. This sequence tends to the pure graphon on S^1 with W(x, y) = 𝟙(∡(x, y) ≤ 2πα), whose automorphism group is O(2), the continuous version of the dihedral groups.
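As a consistency check of the limit claim (our own arithmetic), the edge densities match: each vertex of G_n has 2⌊αn⌋ neighbours, so t(K_2, G_n) → 2α, and the same density comes out of the graphon.

```latex
% Edge density of the limit graphon in Example 34:
\[
  t(K_2, W) \;=\; \int_{S^1}\!\int_{S^1}
    \mathbb{1}\!\left(\measuredangle(x, y) \le 2\pi\alpha\right)\, dx\, dy
  \;=\; 2\alpha,
\]
% since for each fixed x the admissible y fill an arc of normalized length
% 2*alpha (here alpha < 1/2, so the arc does not wrap around).
```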
No surprise so far. But let us consider the graphs G_n × G_{n+1}. Add edges connecting every node (i, j) to (i + a, j + a), where α(n + 1) < a < n/2. Let H_n denote the resulting graph.
Example 35
The next example (in a slightly different form) is from the papers [8] and [9]. It shows that even if the underlying group G is the same for a convergent sequence of Cayley graphons, a transitive action on the limit graphon may need a different, bigger group. Let G = R/Z be the circle group and let ξ : G → C be the character defined by ξ(x) = e^{2πix}. Let f_n be the function ℑ(1 + ξ + ξ^n)/2, where ℑ denotes the imaginary part. It is not hard to see that the limit of the Cayley graphons corresponding to f_n is the Cayley graphon corresponding to the function f(x, y) = ℑ(1 + ξ(x) + ξ(y))/2 on the torus G^2.
In the light of the previous example the next theorem (which we quote from [9]) is somewhat surprising. We need a definition.
Definition 36 Let G be a compact group with Haar measure µ. Let V_n denote the subspace of L^2(G, µ) spanned by the G-invariant subspaces of dimension at most n. We say that G is weakly random if V_n is finite dimensional for every n.

We cite a related result, which is a consequence of a theorem of Gowers [2], indicating further, more subtle, relations between the automorphism groups of graphs and their limits.
Theorem 38 (Gowers) Let G n be a Cayley graph of a group Γ n (n = 1, 2, . . . ), where the edge-density of G n tends to a limit 0 ≤ c ≤ 1, and the minimum dimension in which Γ n has a nontrivial representation tends to infinity. Then the sequence (G n ) ∞ n=1 is quasirandom, i.e., it tends to a pure graphon (J, W ) where J has a single point.
Our goal is to determine the automorphism group of the limit of a sequence of node-transitive graphs. Using Lemma 21 one can reduce the problem of computing the automorphism group of W to the same problem about bounded rank graphons. To demonstrate this principle we show the next theorem. Recall that a compact group Γ is abelian by pro-finite if it has a closed abelian normal subgroup A such that Γ/A is the inverse limit of finite groups.
Theorem 39 Let {G_n}_{n=1}^∞ be a sequence of node-transitive graphs converging to a graphon (J, W). Then (J, W) is weakly isomorphic to a Cayley graphon on an abelian by pro-finite group.
Proof.
We want to show that G = Aut(W) has a closed, abelian by pro-finite subgroup that acts transitively on J. Let {α_i}_{i=1}^∞ be a decreasing sequence of real numbers with lim_{i→∞} α_i = 0 that contains no eigenvalue of W. We can assume that (J, W) is pure. Since W is node-transitive, Theorem 10 implies that J is compact. We will use the notation from Section 4.2.
For every W_j let µ^j_{α_i} denote the measure defined above for W_j in the explicit coordinate system R^{d_i}, where d_i = dim(U_{α_i}). For finitely many values of j the measure µ^j_{α_i} may exist in a different dimension, but we ignore those values. By choosing a subsequence we can assume without loss of generality that the conditions of Lemma 17 hold for every i.
Let G^j_i ⊂ O(d_i) denote the automorphism group of µ^j_{α_i} and let H_i denote the closed subgroup in O(d_i) whose elements are ultralimits (for some fixed ultrafilter ω) of sequences (g_1, g_2, . . .) where g_j ∈ G^j_i. It is clear that the elements of H_i preserve ν_i and that H_i acts transitively on S_i.
We claim that H_i is abelian by finite. A classical theorem of Camille Jordan [11] states that there is a function f(n) such that any finite subgroup of GL(n, C) contains an abelian subgroup of index at most f(n). Using this theorem, we see that each G^j_i has an abelian subgroup of index at most f(d_i). It is a standard technique to show that this property is inherited by the ultralimit H_i. If the groups G^j_i are all abelian, then the continuity of the commutator word shows that H_i is abelian. For the general case, choose f(d_i) coset representatives g_{i,j,k} in each group G^j_i for the abelian subgroup, where 1 ≤ k ≤ f(d_i). Their limits as j → ∞ will be coset representatives for the limiting abelian group.
To finish the proof, let H be the inverse limit of the groups H_i with respect to the homomorphisms P_{α_{i+1},α_i}. Then H ⊆ Γ_W and H acts transitively on S. By Lemma 21 we obtain that τ^{-1}Hτ ⊆ Aut(W) is transitive on J.
Graph algebras of finite rank graphons
We conclude with an application of our results on automorphisms of graphons to characterize graph algebras of graphons that have finite rank as integral kernel operators. Let (J, W) be a pure graphon with finite rank. The spectral decomposition (10) takes the simpler form

W(x, y) = Σ_{i=1}^N λ_i f_i(x) f_i(y),        (27)

where the f_i are the eigenfunctions of W belonging to the nonzero eigenvalues λ_i. For any sufficiently small λ > 0, we have [W]_λ = W, and so the considerations in Section 4.2 imply that (J, r_W) is compact. Let G = Aut(W), and let S be the function algebra generated by the eigenfunctions of W. We denote by S_n the space of homogeneous polynomials of degree n in the eigenfunctions of W, so that S = Σ_n S_n. Substituting (27) in the definition (2) of restricted homomorphism numbers, we see that A_1 ⊆ S. Since the functions in A_1 are G-invariant, it follows that A_1 ⊆ S^G. Our main goal is to prove that equality holds here.
For h ∈ L^∞(J^n), we define

r(h, x) = ∫_{J^n} h(x_1, . . . , x_n) W(x, x_1) · · · W(x, x_n) dx_1 · · · dx_n.        (28)

The following lemma states some elementary properties of this function.
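For n = 1, the operation just defined reduces (assuming the reconstructed formula (28) above) to the integral kernel operator T_W determined by W:

```latex
% The case n = 1 of r(h, x):
\[
  r(h, x) \;=\; \int_{J} h(x_1)\, W(x, x_1)\, d\pi(x_1) \;=\; (T_W h)(x).
\]
```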
Lemma 40 (a) If h ∈ L^∞(J^n) then r(h, x) ∈ S_n (as a function of x ∈ J). (b) If h ∈ A_n, then r(h, x) ∈ A_1.
(c) r(h^g, x) = r(h, x)^g for every g ∈ G.
Proof. Assertion (a) follows by substituting formula (27) in (28). To prove (b), let h(x_1, . . . , x_n) = t_{x_1...x_n}(s, W), and let s′ ∈ Q_1 denote the one-labeled quantum graph obtained from s by connecting a new node with label 1 to all the labeled nodes and then removing the original labels. Then r(h, x) = t_x(s′, W). Finally, (c) follows by replacing W(x, x_i) by W(x^g, x_i^g) in the formula for r(h^g, x). Since the action of G is measure preserving, the integration over (x_1^g, x_2^g, . . . , x_n^g) is equivalent to the integration over (x_1, x_2, . . . , x_n).
Lemma 41 Every function f ∈ S_n can be expressed as f = r(h, x) for some function h ∈ L^∞(J^n).
Proof. If h(x_1, x_2, . . . , x_n) = f_{i_1}(x_1) f_{i_2}(x_2) . . . f_{i_n}(x_n), then

r(h, x) = λ_{i_1} λ_{i_2} · · · λ_{i_n} f_{i_1}(x) f_{i_2}(x) · · · f_{i_n}(x),

since ∫_J f_i(x_1) W(x, x_1) dx_1 = λ_i f_i(x). Every function f ∈ S_n can be expressed as a linear combination of functions such as that on the right side of the previous formula (the eigenvalues λ_i being nonzero). Since r(h, x) is linear in h, this completes the proof.
Lemma 42 Every function f ∈ S^G_n can be expressed as f = r(h, x) for some G-invariant function h ∈ L^∞(J^n).
Proof. By Lemma 41, f(x) = r(q, x) for a suitable q ∈ L^∞(J^n). Let h = ∫_G q^g dµ(g), where µ is the normalized Haar measure of G. It is clear that h is G-invariant. By Lemma 40 (c), the linearity of r(·, ·) in the first variable and the G-invariance of f, it follows that f(x) = r(h, x).
Lemma 43 S^G_n = A_1 ∩ S_n.
Proof. Trivially S^G_n ⊇ A_1 ∩ S_n. To prove the reverse, let f ∈ S^G_n. Trivially f ∈ S_n, so it suffices to prove that f ∈ A_1. Lemma 42 shows that f(x) = r(h, x) for some G-invariant function h ∈ L^∞(J^n). Using Corollary 27, there is a sequence of functions q_k ∈ A^0_n such that q_k → h (k → ∞) uniformly. By Lemma 40, r(q_k, x) ∈ A_1 (as a function of x ∈ J), and clearly r(q_k, x) → f = r(h, x) uniformly in x. This implies that f ∈ A_1.
Theorem 44 A_1 = S^G.

Proof.
We have seen that A_1 ⊆ S^G. To prove the reverse inclusion, we note that every function f ∈ S is a finite sum Σ_n f_n, where f_n ∈ S_n, and if f is G-invariant, then so are the terms f_n. Hence S^G is the linear span of the spaces S^G_n. By the previous lemma we get that S^G ⊆ A_1.
Corollary 45 A_1 is finitely generated.
Proof. The algebra S is a finitely generated commutative algebra and the compact group G acts on S via automorphisms. Hilbert's theorem on G-invariant rings implies that A_1 = S^G is finitely generated. | 2014-06-19T07:15:24.000Z | 2014-06-19T00:00:00.000 | {
"year": 2014,
"sha1": "4a2163f0940eaa732f4c68db3fe0599c376697ef",
"oa_license": "elsevier-specific: oa user license",
"oa_url": "https://doi.org/10.1016/j.jalgebra.2014.08.024",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "4a2163f0940eaa732f4c68db3fe0599c376697ef",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
118399743 | pes2o/s2orc | v3-fos-license | Strength of the E_R = 127 keV, 26Al(p,g)27Si resonance
We examine the impact of the strength of the E_R = 127 keV, 26Al(p,g)27Si resonance on 26Al production in classical nova explosions and asymptotic giant branch (AGB) stars. Thermonuclear 26Al(p,g)27Si reaction rates are determined using different assumed strengths for this resonance and representative stellar model calculations of these astrophysical environments are performed using these different rates. Predicted 26Al yields in our models are not sensitive to differences in rates determined using zero and a commonly stated upper limit corresponding to wg_UL = 0.0042 micro-eV for this resonance strength. Yields of 26Al decrease by 6% and, more significantly, up to 30%, when a strength of 24 x wg_UL = 0.1 micro-eV is assumed in the adopted nova and AGB star models, respectively. Given that the value of wg_UL was deduced from a single, background-dominated 26Al(3He,d)27Si experiment where only upper limits on differential cross sections were determined, we encourage new experiments to confirm the strength of the 127 keV resonance.
The origin of the observed Galactic radioisotope 26 Al is still unresolved. Over thirty years have passed since the first identification [1] in the Galactic interstellar medium of the 1.809-MeV β-delayed γ-ray line from the decay of the ground state of 26 Al (t 1/2 = 7.17 ×10 5 y). Since then, increasingly sophisticated observational studies have produced all-sky maps of the 1.809 MeV emission (showing that 26 Al is mostly confined to the Galactic disk) [2], demonstrated that 26 Al co-rotates with the Galaxy (supporting its Galaxy-wide origin) [3], and used 26 Al as a tracer to examine the kinematics of massive star and supernova ejecta [4], among other achievements. The stellar production of 26 Al has also been inferred through measured excesses of its daughter 26 Mg in inclusions and presolar dust grains within primitive meteorites [5][6][7][8]. Nonetheless, despite extensive theoretical studies of nucleosynthesis in proposed astrophysical environments [9,10] such as asymptotic giant branch (AGB) stars [11][12][13], classical nova explosions [14][15][16] and massive stars [17][18][19], accounting for the present-day Galactic 26 Al abundance of 2 − 3 M ⊙ [3,20] has proved elusive.
In hydrogen-burning environments, an accurate thermonuclear rate of the 26 Al(p, γ) 27 Si destruction reaction at the relevant stellar temperatures is clearly needed for reliable model predictions of 26 Al production. For example, according to current models, winds from AGB stars eject 26 Al produced at temperatures of ≈ 50 − 100 MK, while in classical novae, 26 Al is produced in explosions that involve an oxygen-neon white dwarf and achieve peak temperatures of T peak ≈ 0.2 − 0.4 GK. To determine the 26 Al(p, γ) rate in these environments, one therefore requires resonance energies E R (which enter exponentially in the rate) and (p, γ) resonance strengths ωγ (which enter linearly in the rate) for 27 Si states between the 26 Al+p energy threshold (S p = 7463.25 (16) keV [21]) and ≈200 and ≈500 keV above this threshold for AGB stars and novae, respectively. [Note that throughout this manuscript we discuss exclusively the (p, γ) reaction on the 5 + ground state of 26 Al rather than on the 0 + isomeric state at E x = 228 keV (t 1/2 = 6.3 s).] The principal uncertainties in the 26 Al(p, γ) 27 Si rate at temperatures relevant to 26 Al production in AGB stars and novae arise from the unmeasured strengths of the resonances at E R = 68 and 127 keV [22]. Tentative observations of additional states [23,24], which would correspond to E R = 30 and 94 keV, should also be confirmed, although we note that two relatively non-selective, recent indirect studies did not observe the latter level [24,25]. While it may play a role in AGB stars, the uncertainty in the rate due to the strengths of resonances at 30, 68 and 94 keV is not expected to significantly affect 26 Al production in novae [14,16]. Therefore, in the present work we focus on the impact of variations in the strength of the 127 keV resonance on 26 Al production in models of AGB stars and classical novae. Obviously any sensitivity of 26 Al production in these environments to reasonable adopted strengths for this one resonance would only be exacerbated through consideration of additional resonances.
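For orientation, the standard narrow-resonance expression from textbook nuclear astrophysics (not specific to this paper) makes the stated dependences explicit: the strength ωγ enters linearly, while the resonance energy E_R enters through a Boltzmann factor.

```latex
% Thermonuclear rate of a single narrow resonance:
\[
  \langle \sigma v \rangle
  \;=\; \left(\frac{2\pi}{\mu k T}\right)^{3/2} \hbar^{2}\,
        (\omega\gamma)\,
        \exp\!\left(-\frac{E_R}{kT}\right),
\]
% where mu is the reduced mass of the 26Al + p system; the total rate
% is the sum of such terms over the contributing resonances.
```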
Most detailed nova and AGB star models that have examined the production of 26 Al [13][14][15][16][26][27][28] have used 26 Al(p, γ) rates that incorporate a result from Vogelaar et al. (1996) [29] for the strength of the 127 keV resonance. Indeed, studies have estimated that novae may contribute up to ≈ 30% of the Galactic 26 Al abundance using such a rate [15,27]. Vogelaar et al. measured differential cross sections for 27 Si states above the 26 Al+p energy threshold populated through the 26 Al( 3 He,d) 27 Si proton-transfer reaction. Assuming purely single-particle transfer, (p, γ) resonance strengths may be estimated from proton spectroscopic factors C 2 S extracted from such an experiment. For the state at E x = 7590 keV (E R = 127 keV) only upper limits for differential cross sections were determined, and at only three of the nine angles at which measurements were made. These limited results were largely due to the nature of the target employed, which was dominated by 27 Al ( 26 Al/ 27 Al = 6.3%). Because of this, the measured deuteron spectra were dominated by products from the competing 27 Al( 3 He,d) reaction. Direct reaction calculations assuming l = 0 transfer were then used with the upper limits on the differential cross sections to give their stated upper limit of C 2 S UL = 0.002 for this 9/2 + [24,25] state. This would correspond to a strength of ωγ UL = 0.0042 µeV for the E R = 127 keV resonance under the reasonable assumption that the proton partial width for this threshold state is much less than the γ-ray partial width.
This upper limit on the spectroscopic factor may be questionable for several reasons. The dearth of angles at which differential cross section upper limits were extracted for this state makes the theoretical fit highly dependent on the reliability of the limit at the lowest angle (θ c.m. ≈ 5 • , see Fig. 6a in Ref. [29]). If, instead, their calculation were scaled to the upper limit at the highest angle at which a limit was extracted from the background-dominated spectra (θ c.m. ≈ 14 • ), the C 2 S value would increase by a factor of ≈ 20. Furthermore, we have repeated the l = 0 theoretical calculation for this state using the direct reaction code FRESCO [30] and we find a C 2 S value up to ≈ 5 times larger than that of Vogelaar et al. when reasonable sets of optical model parameters are adopted [29,31,32]. Finally, for the 9/2 + [24,25], 7739 keV 27 Si state, Vogelaar et al. determine a strength for an l = 0 transition (via an extracted C 2 S) that differs from the directly-measured value [33] by a factor of ≈ 5. For this state, 26 Al( 3 He,d) cross sections were measured at seven angles. While the discrepancy may be due to, for example, population of this state through a mixed transition, an erroneous spectroscopic factor due to issues with the measured differential cross sections or the theoretical calculations cannot be ruled out. With regard to the thermonuclear rate of the 26 Al(p, γ) 27 Si reaction, a spectroscopic factor of zero for the 127 keV state leads to a rate up to 1.6 times lower than that determined using ωγ UL over T = 0.05 -0.11 GK. On the other hand, a strength of ωγ = 24 × ωγ UL = 0.1 µeV (or equivalently, C 2 S = 24 × C 2 S UL ) has a dramatic effect on the rate over T = 0.04 -0.2 GK, leading to enhancements by as much as a factor of 10 relative to the rate using ωγ UL . Such an enhanced rate would not be unreasonable given the above discussion. These reaction rates are shown in Fig. 1.
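To get a feel for the quoted enhancement factors, here is a minimal numerical sketch (our own; the temperature and, in particular, the baseline contribution from all other resonances are hypothetical placeholders, whereas the actual rate evaluations use the complete resonance inventory of Ref. [22]).

```python
import numpy as np

def narrow_resonance_term(omega_gamma_ueV, E_R_keV, T_GK):
    """Relative contribution of one narrow resonance to the rate:
    the strength enters linearly, the energy exponentially
    (11.605 converts E[MeV]/T9 to the dimensionless E/kT)."""
    return omega_gamma_ueV * np.exp(-11.605 * (E_R_keV / 1e3) / T_GK)

T = 0.08            # GK, inside the AGB-relevant window T = 0.04-0.2 GK
wg_UL = 0.0042      # micro-eV, the Vogelaar et al. upper limit
s = narrow_resonance_term(wg_UL, 127.0, T)
baseline = s        # HYPOTHETICAL: other contributions set equal to s

for k in (0, 1, 24):   # strengths 0, wg_UL, and 0.1 micro-eV = 24 x wg_UL
    ratio = (baseline + k * s) / (baseline + s)
    print(f"wg = {k:>2d} x wg_UL: rate / rate(wg_UL) = {ratio:.2f}")
```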
FIG. 1. Rates of the 26 Al(p, γ) 27 Si reaction calculated using different assumed strengths (in µeV) for the E R = 127 keV resonance. Rates are shown relative to the rate calculated using a strength of 0.0042 µeV. All other parameters for these rate calculations were adopted from Ref. [22].

To assess the sensitivity of model predictions of 26 Al yields to the strength of the E R = 127 keV 26 Al(p, γ) 27 Si resonance, we have performed new sets of representative hydrodynamic nova models and stellar nucleosynthesis calculations for AGB stars. We have used these models together with 26 Al(p, γ) 27 Si rates calculated assuming E R = 127 keV resonance strengths of 0, 0.0042 µeV, and 0.1 µeV, as discussed above. To fully explore the impact of this strength, we have also used a rate determined using a strength of 1 µeV, although we note that the corresponding C 2 S value of 0.5 seems incompatible with the data of Ref. [29]. For the nova simulations, a 1.25 M ⊙ oxygen-neon white dwarf was evolved from the accretion stage to the explosion, expansion and ejection stages. Four models, identical except for the adopted prescription of the 26 Al(p, γ) rate, have been computed with the spherically symmetric, implicit, Lagrangian code SHIVA, extensively used in the modeling of stellar explosions such as classical novae and type I X-ray bursts [34]. The solar-like accreted material was pre-mixed with material from the outer layers of the white dwarf at a level of 50% to mimic mixing at the core-envelope interface [35]. Typical values for the initial white dwarf luminosity (10 −2 L ⊙ ) and the mass-accretion rate (2 × 10 −10 M ⊙ yr −1 ) have been adopted, resulting in explosions with T peak = 0.25 GK. Nucleosynthesis in AGB stars was examined using models of 6 M ⊙ and 8 M ⊙ stars, with metallicities of Z = 0.004 and 0.014, respectively [28,36]. These models were chosen because the temperature at the base of the convective envelope reaches ≈0.1 GK during the thermally-pulsing AGB phase, which makes them ideal sites for testing the impact of reaction rates related to the production of 26 Al. Abundances in the AGB star models were determined with a postprocessing algorithm [28] that incorporates time-dependent diffusive mixing for all convective zones [37]. Models using 26 Al(p, γ) 27 Si rates determined with ωγ = 0 and ωγ UL = 0.0042 µeV agreed to better than 3% in the amount of 26 Al produced, for both the nova and AGB star simulations. Yields of 26 Al decreased by 6% and 40% when the reaction rates calculated with strengths of 0.1 and 1 µeV were used in the nova models, relative to the 26 Al yield determined using ωγ UL . The impact of the enhanced rates in the AGB star models is rather more striking: for the 8 M ⊙ model, 26 Al yields decreased by 10% and a factor of 2 when the rates with strengths of 0.1 and 1 µeV were employed; for the 6 M ⊙ model, 26 Al yields decreased by 30% and a factor of 6 when the rates with strengths of 0.1 and 1 µeV were used, all relative to the 26 Al yield determined using ωγ UL . We also note that in a study of the impact of reaction rate variations on 26 Al production in massive stars [19], an enhancement of the 26 Al(p, γ) 27 Si rate by a factor of 10 during core hydrogen burning reduced the predicted 26 Al yield by a factor of 1.8. As shown in Fig. 1, this level of enhancement of the rate at the relevant temperatures (T ≈ 0.04 − 0.08 GK [17,19]) would follow from a strength of 0.1 µeV for the 127 keV resonance.
Given the impact on model predictions of 26 Al production, we encourage experimental efforts to measure the strength of the E R = 127 keV, 26 Al(p, γ) 27 Si resonance. A new 26 Al( 3 He,d) 27 Si measurement with an improved target would be helpful to both confirm the results of Vogelaar et al. [29] for the 127 keV resonance and to help estimate the unknown strengths of the lower energy resonances. Sufficiently stringent upper limits from direct measurements would also be welcome. | 2014-08-22T08:04:23.000Z | 2014-08-22T00:00:00.000 | {
"year": 2014,
"sha1": "e52c9eef8a8eb9e92b9706da8cc5a9ac8a8225ed",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1408.5227",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e52c9eef8a8eb9e92b9706da8cc5a9ac8a8225ed",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
221295180 | pes2o/s2orc | v3-fos-license | More than Meets the Eye: Aspergillus-Related Orbital Apex Syndrome
The patient is a 67-year-old Caucasian male with a past medical history of diabetes mellitus type 2, coronary artery disease (CAD) status post stent placement, renal cell carcinoma (RCC) status post left nephrectomy and bilateral adrenalectomy secondary to metastatic disease, and aspergillus pneumonia, who was transferred from an outside hospital for evaluation of progressively worsening pulsating right temple and retrobulbar headache. Initial studies ruled out glaucoma, giant cell arteritis, stroke, and aneurysmal pathology. The only positive finding was right sphenoid sinus disease on imaging that had caused bony destruction and infiltration of the right orbital apex. Broad-spectrum antibiotics were started for bacterial versus fungal sinusitis and the patient was admitted to the medical floor with consultations to Neurology, Otolaryngology (ENT), and Ophthalmology. ENT took the patient emergently to the OR. The final diagnosis was chronic aspergillus sinusitis and right-sided orbital apex syndrome (OAS). Antibiotics and antifungals were optimized by the infectious disease team. ENT also ordered steroid washouts post-operatively with budesonide and saline as well as sinus debridements every couple of weeks.
Introduction
The orbital apex disorders include cavernous sinus syndrome (CSS), superior orbital fissure syndrome (SOFS), and orbital apex syndrome (OAS). All three disorders have varying etiologies, similar clinical manifestations, and varying degrees of severity. Thus, prompt identification is imperative for proper treatment and preservation of vision.
OAS is an extra-orbital complication of orbital ("postseptal") cellulitis, a sight-threatening and life-threatening infection of the soft tissue posterior to the orbital septum [1]. It is much more commonly found in young children than adults. Co-existing (bacterial) pan- or ethmoid rhinosinusitis is described in 86-98% of cases [1]. Other causes of orbital cellulitis include ophthalmic surgery, peribulbar anesthesia, orbital trauma with a fracture or foreign body, and dental or middle ear infections. Blindness can occur by way of extra-orbital extension of the infection to the orbital apex; this is known as orbital apex syndrome (OAS) or Jacod syndrome. While the causative organisms are often hard to identify, Staphylococcus aureus (S. aureus), Streptococcus anginosus, non-typeable Haemophilus influenzae, Mucorales, and Aspergillus spp. have been identified in association with orbital cellulitis [1]. The latter two microbes are more often found in association with OAS in immunocompromised patients [2]. Less than 5% of blood cultures come back positive in adults and wound cultures have been noted to come back as polymicrobial only in pediatric cases.
Preseptal cellulitis (also known as periorbital cellulitis) which involves the soft tissues anterior to the orbital septum (i.e. including the eyelid), is much more common than orbital cellulitis [1]. While preseptal cellulitis is not known to be sight or life-threatening, sometimes the diagnosis can remain unclear and patients should be treated as if they have orbital cellulitis for this reason until it can be definitively ruled out.
Case Presentation
The patient is a 67-year-old white male with a past medical history of diabetes mellitus type 2 (HbA1c of 8.8%), hypertension, hypothyroidism, coronary artery disease (CAD) status post stent placement, renal cell carcinoma status post left nephrectomy and bilateral adrenalectomy secondary to metastatic disease, history of aspergillus pneumonia, left occipital meningioma, and benign prostatic hyperplasia who was admitted to the medical floor for further workup and management of a severe right temple and retrobulbar headache. Two weeks prior to admission he endorsed having a sinus infection from which he still had persistent pain and congestion. These symptoms were also accompanied by intermittent episodes of diplopia, photophobia, and tearing of the right eye for three weeks prior to admission. Examination of the affected eye revealed sinus tenderness, chemosis, periorbital tenderness and proptosis, and lateral gaze palsy. Extraocular movements of the left eye were intact. Pupils were also equal and reactive to light and accommodation bilaterally.
A CT scan of the head was obtained and came back negative for any acute process. The patient tested negative for giant cell arteritis and glaucoma. CT scan of the orbits without contrast showed right sphenoid sinus disease that had caused bony destruction and likely infectious infiltration of the right orbital apex. Ophthalmology, Infectious Disease, Neurology, and ENT consults were obtained. His initial antibiotic regimen consisted of intravenous (IV) vancomycin and piperacillin-tazobactam. Piperacillin-tazobactam was changed to meropenem and amphotericin by the Infectious Disease team as there was suspicion for bacterial versus fungal sinusitis (especially rhinocerebral mucormycosis given his uncontrolled diabetes). Per ENT, biopsy results of his sinuses status post initial sinus debridement revealed fungal debris which was confirmed to be Aspergillus spp. In light of these findings, IV amphotericin was transitioned to isavuconazole. MRI scans of the brain and neck including angiography and venography were negative for any aneurysmal pathology and venous sinus thrombosis. Chronic paranasal sinusitis was the only positive finding. A lumbar puncture was negative. During his hospital course, the edema around the patient's right eye subsided, although the lateral gaze palsy remained. Ophthalmology did not appreciate any papilledema on fundoscopic examination. They did not recommend any acute intervention beyond outpatient follow-up after discharge from the hospital. ENT also ordered steroid washouts post-operatively with budesonide and saline as well as sinus debridements every couple of weeks. His final antibiotic regimen per the Infectious Disease team consisted of IV vancomycin and cefepime for six weeks as well as per oral (PO) voriconazole for six months.
Discussion
The orbit itself is a cone-shaped structure with its apex within the skull (Figures 1-2). The orbital apex disorders as Cox et al. describe include OAS, CSS, and SOFS [2]. They can be progressive in nature with SOFS developing into OAS or CSS. OAS, CSS, and SOFS share similar etiologies, symptomatologies, diagnostic evaluations, and management strategies. CSS results from compression of the sinuses themselves, and SOFS results from a lesion immediately anterior to the orbital apex ( Figure 2). Etiologies of each of these orbital apex disorders could be neoplastic, inflammatory, developmental, traumatic, or infectious as in this case. OAS is an infectious complication of orbital cellulitis, involving soft tissues posterior to the orbital septum ( Figure 1). Orbital cellulitis is typically precipitated by ethmoid sinusitis. Notwithstanding, both preseptal cellulitis and orbital cellulitis have different clinical implications and it is important to distinguish between them [2]. Orbital cellulitis presents the greater emergency, with an immediate threat to vision as well as to life. Preseptal cellulitis mainly presents with eyelid swelling with or without erythema and may be associated with fever and leukocytosis. Clinical features of orbital cellulitis include these as per Gappy et al. along with eye pain and tenderness, pain with extraocular eye movements and proptosis [1]. Ophthalmoplegia with or without diplopia, vision impairment (manifested by an afferent pupillary defect), and chemosis may also be present. Diagnosis of orbital cellulitis is made via contrast-enhanced CT scan of the orbits and sinuses. Complications of orbital cellulitis include subperiosteal cellulitis, orbital abscess, visual loss, and intracranial extension [1]. The main objective with orbital cellulitis is the preservation of vision. Extraorbital extension of orbital cellulitis presents as OAS. Intracranial extension of orbital cellulitis can cause subdural empyemas, epidural abscesses, meningitis, and CSS. This image was obtained with consent from The Neuroradiology Journal [3].
Mucor, Aspergillus spp., and Mycobacterium tuberculosis have been identified as the main causative pathogens of orbital cellulitis which can involve extra-orbital extension resulting in OAS [1][2]. While such cases are rare, Goyal et al. explain that patients are suspected to be already predisposed to immunodeficiency via chronic diseases including but not limited to the pancreas (diabetes), kidneys (renal acidoses), and human immunodeficiency virus (HIV) [2]. Many early infections are confined to the maxillary and sphenoid sinus [1]. In rare cases, the infection invades through the sphenoid bone, which results in OAS [3]. OAS itself is known to cause blindness, loculations of infection within the intracranial compartment, and cavernous or dural venous sinus thromboses. Startlingly enough, there are no other alarming signs of inflammation. Emergent surgical intervention paired with long-term intravenous antibiotics is warranted to preserve vision and avoid deleterious insult to the orbital compartment [3][4].
Management
If it is unclear whether a patient has preseptal cellulitis or orbital cellulitis, their response to antibiotic therapy can also help confirm the diagnosis beyond the history of present illness, physical examination, and diagnostic studies. Parenteral broad-spectrum antibiotic therapy against S. aureus (including methicillin-resistant Staphylococcus aureus), Streptococci, and gram-negative bacilli such as Pseudomonas aeruginosa should be started. If there is lack of improvement in signs and symptoms 24-48 hours after the initiation of such therapy in addition to worsening visual acuity or pupillary changes, absolute neutrophil count (ANC) > 10,000 cells/uL, evidence of abscess greater than 1 cm in diameter, and limited extra-ocular muscle movements, orbital cellulitis should be suspected. Management of this should include consultation of an ophthalmologist and an ENT. Repeat imaging and endoscopic nasal surgery to biopsy should follow closely after [1,[4][5]. For uncomplicated infections, Gappy et al. recommend that antibiotics should be continued until there is complete resolution: this can range up to at least two to three weeks of IV and PO antibiotic therapy [1]. For complicated infections such as severe ethmoid sinusitis accompanied by bony destruction of the sinus, at least four weeks of antibiotic coverage is recommended [1][2]. The transition from IV to PO therapy is at the discretion of Infectious Disease specialists, who also need to decide whether or not certain patients are candidates for peripherally inserted central catheters (PICCs) or midline catheters to complete IV antibiotic infusions as an outpatient.
Fungal rhinosinusitis is more common in immunocompromised patients. Inhaled fungal spores can commonly colonize the sinuses and lungs, but that does not mean that they will cause overt disease. In these patients with immunocompromising conditions such as diabetes or a history of cancer such as in the patient described, a more aggressive and invasive disease course is likely [2,6]. Acute infections are caused by Aspergillus, Fusarium, and Mucorales while chronic indolent infections are caused by dematiaceous (brown-black molds also known as phaeohyphomycosis) such as Bipolaris, Curvularia, and Alternaria spp. Cox et al. point out that the latter variant can also be caused by Aspergillus and Scedosporium apiospermum complex [2]. With suggestive symptoms in immunocompromised patients such as fever, facial pain, nasal congestion, vision impairment, and altered sensorium, CT imaging should be obtained immediately. If any abnormalities are detected, MRIs should be performed (Figures 2-3). Nasal endoscopy and possible radical surgical debridement by ENTs are needed to assess signs of necrosis otherwise indicative of rhinocerebral mucormycosis, a deadly infection. In the presence of a perforated nasal septum or palatal or gingival eschars (ie. necrosis), fungi are understood to have already invaded the intravascular space of the maxillofacial region [2]. The diagnosis of invasive fungal rhinosinusitis is confirmed only on histopathology. Empiric IV antifungal therapy typically includes lipid formulations of amphotericin B 5 mg/kg daily or voriconazole 6 mg/kg every 12 hours for two doses followed by 4 mg/kg every 12 hours after that, if mucormycosis is effectively ruled out (Mucorales is generally known to be resistant to triazole antifungal agents such as voriconazole) [2]. Isavuconazole 200 mg every eight hours for two days (PO or IV) followed by once daily thereafter can be used in the setting of intolerance to voriconazole. Nonetheless, the patient's own clinical profile and the agent's own side-effect profiles should be considered in the decision-making process to optimize the chosen antifungal regimen. The response to therapy should also be monitored closely as that will ultimately determine when it is safe to transition from IV to PO therapy. PO dosing for voriconazole is 200 mg twice daily. Acute, invasive, and indolent rhinosinusitis secondary to Aspergillus is responsive to voriconazole. All in all, suppressive therapy with amphotericin, voriconazole, or the echinocandins such as micafungin or caspofungin (if the disease is severe enough) can last up to six months and immunocompromised patients may require longer. As far as antifungal choice is concerned, the financial burden on the patient, if long-term outpatient antibiotic therapy is required, also needs to be considered.
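As a purely arithmetical illustration of the weight-based regimen quoted above (a sketch only, not clinical guidance; the 70 kg body weight is a hypothetical example):

```python
def voriconazole_iv_doses(weight_kg: float) -> dict:
    """Weight-based IV voriconazole per the regimen quoted in the text:
    loading 6 mg/kg every 12 h for two doses, then 4 mg/kg every 12 h."""
    return {
        "loading_mg_q12h_x2_doses": 6 * weight_kg,
        "maintenance_mg_q12h": 4 * weight_kg,
    }

# Hypothetical 70 kg patient: two 420 mg loading doses, then 280 mg q12h.
print(voriconazole_iv_doses(70.0))
```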
FIGURE 3: MRI of orbit
Coronal (a), axial (b) T2W MR image shows a hyperintense lesion medial to the medial rectus muscle (yellow arrow) extending to the medial orbital wall forming sub-periosteal abscess and extending laterally into the intraconal fat. Post-contrast coronal T1W image (c) shows the enhancement (yellow arrow). This was a case of orbital cellulitis.
Conclusions
Prompt identification is imperative for proper treatment and preservation of vision in the setting of orbital apex ("Jacod") syndrome. With most patients displaying concurrent symptoms of sinus disease and vision impairment, a work-up should be initiated including blood tests, cultures, and radiography. Even in the setting of preseptal cellulitis, broad-spectrum IV antibiotic therapy should be started rapidly and response to therapy should be monitored closely. Lack of response or worsening signs and symptoms of ongoing disease should necessitate involvement of Infectious Disease, ENT, and Ophthalmology specialists. Special consideration should be given to patients that are immunocompromised (such as those who are diabetic or with a history of cancer or immunodeficiency) as they are at higher risk of invasive disease with more virulent organisms and fungi such as Aspergillus spp. The primary objective is ruling out necrotizing infection such as rhinocerebral mucormycosis with emergent nasal endoscopy if such is suspected. Fungal rhinosinusitis infection can be acute and invasive or chronic and indolent, but equally dangerous. In consideration of this, sinus biopsies and serial debridements can facilitate the overall management of the underlying illness. With cases of orbital cellulitis complicated by extra-orbital extension such as in the patient described, Infectious Disease specialists and Care Coordination teams (ie. social workers) in hospital should work together to prepare and manage long-term antibiotic coverage, arrange follow-ups and central vascular access, and ensure the primary medical teams remain appraised of the patient's disease course after discharge from the acute care setting.
Additional Information Disclosures
Human subjects: Consent was obtained by all participants in this study.
Conflicts of interest:
In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2020-07-30T02:03:48.985Z | 2020-07-01T00:00:00.000 | {
"year": 2020,
"sha1": "1e7f5552db8792c215aad6c6fd99b175adb2ae78",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/27465-more-than-meets-the-eye-aspergillus-related-orbital-apex-syndrome.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a66b38a9579a2a727a344bdae9438eaba9da04e5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17948021 | pes2o/s2orc | v3-fos-license | The index of operators on foliated bundles
We compute the equivariant cohomology Chern character of the index of elliptic operators along the leaves of the foliation of a flat bundle. The proof is based on the study of certain algebras of pseudodifferential operators and uses techniques for analyzing noncommutative algebras similar to those developed in Algebraic Topology, but in the framework of cyclic cohomology and noncommutative geometry.
Introduction.
Let (V, π) be a smooth fiber bundle π : V → B with fiber F of dimension q. We assume that (V, π) is endowed with a flat connection corresponding to an integrable subbundle F ⊂ TV, of dimension n = dim(B), transverse at any point to the fibers of π. The pair (V, F) is a foliation.
The purpose of this paper is to study invariants of differential operators along the leaves of the above foliation. The index of an elliptic operator along the leaves of the foliation F is an element in the K-Theory group K_0(F) = K_0(Ψ^{-∞}(F)), where Ψ^{-∞}(F) is the algebra of regularizing operators along the leaves. In the case of a foliated bundle there exists a Connes-Karoubi character Ch : K_0(Ψ^{-∞}(F)) → H^{*-q}_Γ(F, O)′ to the dual of equivariant cohomology with twisted coefficients, where Γ is the fundamental group of the base B acting on the fiber F via holonomy, and O is the orientation sheaf. Our main theorem computes the Connes-Karoubi character of the index. This amounts to a proof of the "higher index theorem for foliations" in this special case. A very general higher index theorem for foliations can be found in [C3] and here we give a new proof of this theorem for flat bundles. Some very interesting results related to the results in this paper are contained in [CM2], where Diff-invariant structures are treated in detail. See also [C4]. (Partially supported by NSF grant DMS-9205548, a Sloan research fellowship and an NSF Young Investigator Award DMS-9457859.)
The problem that we consider was suggested by [DHK]. The proof of our theorem is based on the Cuntz-Quillen exact sequence [CQ] and the results in [BN] and [N3].
The second paper identifies the cyclic cohomology groups with geometric groups.
The third paper provides us with the axiomatic setting necessary to deal with index problems in the framework of cyclic cohomology.
Let P(∂) = Σ_α a_α ∂^α be the local expression of an elliptic operator on the base B, acting between the sections of two vector bundles. We lift each vector field ∂_i on B to a vector field ∇_i on the total space V of our flat bundle. This will allow us to construct the lift P(∇), which will be an example of an elliptic differential operator along the leaves of F. In this rather degenerate case the invariants for P(∇) reduce to invariants of P(∂). However, not all operators that we consider arise in this way; actually very few do. The nonmultiplicativity of the signature [A] is related to the phenomena that we investigate.
Statement of the problem.
Consider a smooth foliation F of a smooth manifold V. All the structures that will be used in this paper will be smooth, i.e. C^∞, so that we shall omit "smooth" in the following. We think of the foliation (V, F) as an integrable subbundle F ⊂ TV.
That is, F identifies with the tangent bundle to the foliation.
By considering only differentiations along the fibers of F one obtains longitudinal differential operators. In analogy with manifolds, one can proceed then to define longitudinal pseudodifferential operators, denoted Ψ p (F ). A good reference to these constructions is [MoS]. An alternative description of these algebras for foliations coming from flat bundles is given in the next section.
The algebra Ψ^{-∞}(F) of regularizing operators along F admits a concrete description. If we denote by G the graph of the foliation (V, F) [W], then Ψ^{-∞}(F) identifies with the algebra C_c^∞(G) of compactly supported smooth kernels on the graph.
We review here, in order to fix notation, the construction of the graph of the foliation (V, F ). It consists of equivalence classes of triples (x, y, γ), where x, y ∈ V are on the same leaf and γ is a path from x to y completely contained in that leaf.
The equivalence is given by "holonomy". The graph is a smooth manifold, usually non-Hausdorff.
As in the classical case, the principal symbol induces an isomorphism

σ_0 : Ψ^0(F)/Ψ^{-1}(F) → C^∞(S*F),

where S*F is the unit sphere bundle in the dual F* of F. The notion of asymptotic expansion generalizes as well, and this shows that a matrix of order 0 pseudodifferential operators is invertible modulo regularizing operators if and only if its principal symbol is invertible. From this we infer that σ_0 induces an isomorphism

K^{top}_i(Ψ^0(F)/Ψ^{-∞}(F)) ≃ K^{top}_i(C^∞(S*F)),

where K^{top}_i is the quotient of K^{alg}_i with respect to homotopy (i = 0 or i = 1).
The most general form of the index problem for foliations is: (FOL-ALG) Determine the algebraic K-theory boundary (index) map

Ind : K^{alg}_1(Ψ^0(F)/Ψ^{-∞}(F)) → K^{alg}_0(Ψ^{-∞}(F)).

(A very closely related problem is obtained by considering topological K-theory.) The major difficulty is that we know very little about the K_0-groups involved.
(The K_1-groups are relatively easy to determine.) Denote by HC^{per}_i(A), i ∈ Z/2Z, the periodic cyclic homology groups of an arbitrary complex algebra A, and by

Ch : K^{alg}_i(A) → HC^{per}_i(A)

the Connes-Karoubi character [C2,K,LQ]. One way to avoid the above difficulty is to compute, instead of Ind itself, the composition Ch • Ind. A formula for Ch • Ind will be called a "cohomological index theorem".
The cohomological form of the problem is not just a simplification of the original problem, but it also brings a new perspective. This is because what we usually want to compute is not the index class itself, but rather its pairings with cohomology classes. Also this form of the problem makes the connection with the characteristic classes of foliations, as we shall see below.
The actual definition of the various cyclic homology groups will not be necessary for our purposes. What will matter will be that they exist, and that they satisfy certain general properties. This is very similar to the philosophy of Algebraic Topology, especially in the axiomatic approach due to Eilenberg and Steenrod.
Let us begin by explaining some of the constructions in a particular but suggestive case. Let A be an algebra and let τ : A → C be a trace (i.e., τ (xy) = τ (yx)).
The map that associates to any idempotent e = (e_ij) ∈ M_n(A) the number τ(e) = Σ τ(e_ii) ∈ C factors to a morphism

τ_* : K^{alg}_0(A) → C.

In general any trace τ defines a class [τ] ∈ HC^0_{per}(A) and there exists a pairing ⟨ , ⟩ between cyclic homology and cyclic cohomology such that

⟨[τ], Ch([e])⟩ = τ_*([e]).

We shall call the elements of HC^i_{per}(A) higher traces.
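Why this map factors through K_0: a one-line check (ours) that τ_* is constant on conjugacy classes of idempotents, using only the trace identity τ(xy) = τ(yx).

```latex
% Invariance of tau under conjugation by an invertible u in M_n(A):
\[
  \tau\!\left(u\, e\, u^{-1}\right) \;=\; \tau\!\left(e\, u^{-1} u\right) \;=\; \tau(e),
\]
% applying tau(xy) = tau(yx) with x = u and y = e u^{-1}.
```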
Any holonomy invariant measure µ on a foliation (V, F) determines a trace τ_µ : Ψ^{-∞}(F) → C. The quantity τ_µ(Ind[u]) was determined by Connes in [C1]. In view of what we said above this amounts to a partial determination of Ch(Ind[u]).
We now very briefly review the most important properties of periodic cyclic homology and cohomology. (1) The groups HC^{per}_i(A) and HC^i_{per}(A), i ∈ Z/2Z, are covariant (resp. contravariant) functors on the category of complex locally convex algebras with continuous algebra morphisms. If f : A → B is an algebra morphism then we denote by f_* and, respectively, f^* the induced morphisms.
where for any group Γ we denote by C[Γ] its group algebra.
(4) Consider a separated étale groupoid G [BN]; that is, G is a small category together with manifold structures on G^(0) := Ob(G) and G^(1) := Mor(G) such that all morphisms are invertible, all structural maps are smooth and the domain and range are local diffeomorphisms. Let BG be the geometric realization of the nerve of G (this is Grothendieck's classifying space of the topological category G). Also denote by O(G) the complex orientation sheaf of BG (this is defined because G is étale) and denote by q the common dimension of G^(0) and G^(1). The main result of [BN] establishes the existence of an injective map Φ : H^*(BG, O(G)) → HC^*_{per}(C_c^∞(G)), where C_c^∞(G) is endowed with the natural topology and the convolution product.
The morphism Φ is multiplicative and functorial with respect to étale morphisms [N3] that are one-to-one on units. We will call Φ the geometric map. The morphism Φ is also compatible with external products, Φ(ξ × η) = Φ(ξ) ⊗ Φ(η), where × is the external product in cohomology and ⊗ is the external product in periodic cyclic cohomology [N2,N3].
(4') We are going to make the constructions of (4) more explicit in the case of interest, that of a discrete group Γ acting on a smooth manifold X. Here C_c^∞(X) ⋊ Γ is the (algebraic) crossed product algebra and BG = (X × EΓ)/Γ = "the homotopy quotient X//Γ", where Γ → EΓ → BΓ is the universal principal Γ-bundle. The map Φ of (4) becomes an injective map Φ : H^*(X//Γ, O(X)) → HC^*_{per}(C_c^∞(X) ⋊ Γ) (this is a more precise form of some results in [N1]). If X is orientable and Γ preserves the orientation, then the left hand side reduces to equivariant cohomology H^*_Γ(X). Here O(M) denotes the complexified orientation sheaf on the smooth manifold M.
(5) (Excision) Any two-sided ideal I of a complex algebra A gives rise to a periodic six-term exact sequence of periodic cyclic cohomology groups

HC^i_{per}(A/I) → HC^i_{per}(A) → HC^i_{per}(I) →^∂ HC^{i+1}_{per}(A/I) → · · · , i ∈ Z/2Z,

similar to the topological K-theory exact sequence. Thus periodic cyclic cohomology defines a generalized cohomology theory for algebras [CQ,CQ1,CQ2].
This boundary map is multiplicative: if B is another algebra and we denote by ∂_{A⊗B} the boundary map for the exact sequence corresponding to I ⊗ B ⊂ A ⊗ B, then we have ∂_{A⊗B}(x ⊗ y) = ∂(x) ⊗ y for any x ∈ HC^*_{per}(I) and any y ∈ HC^*_{per}(B). Here ⊗ denotes also the external product in cyclic cohomology.
(6) There is a functorial morphism Ch : K^{alg}_i(A) → HC^{per}_i(A). For A = C^∞(X), Ch coincides with the classical Chern character up to rescaling [MiS]. (7) The boundary maps in algebraic K-theory and periodic cyclic cohomology are compatible in the following sense:

⟨ξ, Ind[u]⟩ = ⟨∂ξ, [u]⟩

for any u ∈ K^{alg}_1(A/I) and ξ ∈ HC^0_{per}(I) [N3]. Here we have denoted by ⟨ , ⟩ the pairing between periodic cyclic cohomology and K-theory. We now go back to our foliation (V, F).
A complete transversal N ⊂ V is a submanifold of dimension q = the codimension of F which is transverse to the leaves and which intersects each leaf. Complete transversals always exist but they are usually not compact and not connected. The choice of a transversal N determines an étale groupoid G_N by restriction: G_N = {(x, y, γ) ∈ G : x, y ∈ N}. The equivalence relation is easier to describe in this case. A path γ from x to y can be covered by distinguished coordinate patches and hence defines a diffeomorphism ϕ_γ : N_x → N_y from a neighborhood of x in N to a neighborhood of y. Then two triples are identified exactly when the corresponding diffeomorphisms have the same germ. There is a map C_c^∞(G_N) → Ψ^{-∞}(F), which is also given by Morita equivalence and an inclusion as a full corner [BGR].
Denote by q the codimension of F, which equals dim(N).
Lemma 1. The morphism given by the composition does not depend on the choice of N .
Proof. For any complete transversal N the morphism depends on a partition of unity in such a way that any two such morphisms are conjugated by an inner automorphism.
Let N_1 and N_2 be two complete transversals. Choose a third transversal N′ not intersecting N_1 and N_2. By considering N′ we can reduce the problem to the case when N_1 ⊂ N_2 and then use the remark in the beginning.
Consider the continuous map f : V → BF which classifies F [Ha]. In the following statement we are going to use the notation Φ_0 for the morphism defined in the previous lemma. Also Ind will denote the boundary map in topological K-theory (as in FOL-TOP), and T(F*) will be the Todd class of F* = Hom_R(F, C).
Index Formula Problem. Let F be a foliation of dimension n and codimension q, and let f : V → BF, p : S*F → V and Φ_0 be as above. Then for any u ∈ K_1(S*F) and ξ ∈ H^{2m}(BF, O(F)) we have the following index formula. The above formula, if correct, would identify the morphism ℓ in Connes' higher index theorem for foliations [C3].
The index theorem for foliated bundles.
In case F is actually a fiber bundle the formula in the above problem becomes more explicit. Our proof of the index theorem for flat bundles will use an alternative description of the various algebras associated to the foliation in terms of certain crossed products. If the group Γ acts on an algebra A_0 then we can form the algebraic crossed product A_0 ⋊ Γ. In what follows, σ_0 denotes the principal symbol map and S*B̃ ⊂ T*B̃ is the cosphere bundle of B̃.
By a standard procedure we enlarge the algebra Ψ^0_c to include all (n + 1)-summable Schatten-von Neumann operators. Explicitly, denote by C_{n+1} = {T : tr((T*T)^{(n+1)/2}) < ∞} the Schatten-von Neumann ideal, where T denotes a bounded operator on L^2(B̃) and tr is the usual trace.
It is a simple known fact that Ψ^{-1}_c ⊂ C_{n+1}; the enlarged algebra Ψ^0 obtained in this way was considered also in [N2,N3].
The normalization factor is chosen such that Tr_m|_{C_1} = S^m tr, where S is the Connes periodicity operator. By abuse of notation we denote by Tr ∈ HC^0_{per}(C_{m+1}) the class of Tr_m for any 2m ≥ n + 1.
Denote by H^*(X) the Z/2Z-periodic complex cohomology groups of a manifold X.
Lemma 2. We have that Φ induces an isomorphism H^*(S*B) ≃ HC^*_{per}(C_c^∞(S*B̃) ⋊ Γ).

Proof. Since Γ acts freely on the oriented odd-dimensional manifold S*B̃, the homotopy quotient (S*B̃)//Γ is homotopy equivalent to S*B, and the claim follows from (4').

Recall [N3] that there is a HC^*_{per}(C[Γ])-module structure on HC^*_{per}(A ⋊ Γ) induced by the C[Γ]-coalgebra structure of A ⋊ Γ; it is defined for ξ ∈ HC^*_{per}(C[Γ]) and x ∈ HC^*_{per}(A ⋊ Γ).
Denote by g : S*B → BΓ the classifying map of the covering Γ → S*B̃ → S*B.
Proof. We know that the action of HC^*_{per}(C[Γ]) factors through r_0 because Γ acts without fixed points on S*B̃. This shows that we can assume ξ = Φ(η) for η = r_0(ξ), η ∈ H^*(BΓ). The module structure is obtained using the multiplicativity of Φ and the fact that the composition corresponding to δ is id × g, by definition. We then have the stated formula.

Proposition. Suppose the graph of (V, F) is separated, where V, F, B and E = E(B̃) are as above. Then there exists a commutative diagram in which α is an isomorphism onto e((E ⊗ C^∞_0(F)) ⋊ Γ)e for some idempotent e and β induces an isomorphism in cyclic cohomology.
Proof. This is the promised equivalent definition of the various algebras associated to (V, F) in the particular case of foliated bundles. The idempotent e is defined using a standard argument based on partitions of unity as follows.
Choose a partition of unity on B̃ subordinated to a finite trivializing cover of B itself.
Consider now the exact sequence. We want to identify the Cuntz-Quillen boundary map of this exact sequence.
Observe that since Γ acts by inner automorphisms on C_{n+1} we have that C_{n+1} ⋊ Γ ≃ C_{n+1} ⊗ C[Γ]. We also define a normalized Todd class, where T is the usual Todd class and n = dim(M).
where I is the Index class and Φ : H^*(S*B) → HC^{*+1}(C_c^∞(S*B)) is as in the previous section.
Proof. Consider the following commutative diagram, for an idempotent e implementing the Morita equivalence and with α the inclusion. The idempotent e is defined as in the previous proposition.
The morphism α′ is defined using the natural representation of E(B̃) ⋊ Γ on L^2(B̃), given by the fact that the action of Γ is implemented by inner automorphisms. This shows that the restriction α′_0 is given by composition with the augmentation morphism χ : C[Γ] → C, γ ↦ 1.
The above commutative diagram has the property that the left vertical arrow is the isomorphism of Lemma 2 and that α^*_0 is an isomorphism as well.

It is interesting to observe that the mere existence of the top commutative diagram in the previous proof implies a theorem of Atiyah and Singer [A]. The lemma is equivalent to the higher index theorem for coverings of Connes and Moscovici [CM1,N3].
Suppose now that the foliation (V, F) defined at the beginning of this section by a flat connection on the bundle π : V → B has a separated graph. This is equivalent to the following condition: the only γ ∈ Γ for which the fixed-point set F^γ has a nonempty interior are those γ that act trivially on F. Here F denotes the fiber of V → B as before.
Consider now the exact sequence induced by the morphism β in the previous proposition. In what follows, q is the codimension of F, and O(S*F) is the orientation sheaf of S*F.
In the particular case of (V, F) that we are studying, the classifying space BF of the graph of the foliation coincides, up to homotopy, with the homotopy quotient F//Γ = (F × EΓ)/Γ, where q = dim(F). In the case we discuss now, that of a foliated bundle, these two maps are related by Tr ⊗ Φ(ξ) = Φ_0(ξ).
We obtain, using the compatibility between the index map in K-Theory and the boundary map in periodic cyclic cohomology, | 2014-10-01T00:00:00.000Z | 1996-07-03T00:00:00.000 | {
"year": 1996,
"sha1": "deb1434a77c28754a8ca23697164fa88730622b4",
"oa_license": "elsevier-specific: oa user license",
"oa_url": "https://doi.org/10.1006/jfan.1996.0135",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "e91117c0f11a6545748d597d2242bed53ff0a25e",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
100998086 | pes2o/s2orc | v3-fos-license | Atomistic mechanisms and diameter selection during nanorod growth
We study in this paper the atomistic mechanisms of nanorod growth and propose a way of selecting the nanorod diameter. A characteristic radius is demonstrated to be crucial in nanorod growth; it increases in proportion to the one-fifth power of the ratio of the interlayer hopping rate of adatoms across the monolayer steps to the deposition rate. When the radius of the initial island is larger than this characteristic radius, the growth morphology evolves from a taper-like structure to a nanorod with radius equal to the characteristic radius after some transient layers. Otherwise the nanorod morphology can be maintained during the growth, with the stable radius being limited by both the radius of the initial island and the three-dimensional Ehrlich-Schwoebel barrier. Therefore different growth modes and nanorod diameters can be selected by tuning the characteristic radius. The theoretical predictions are in good agreement with experimental observations of ZnO growth.
I. INTRODUCTION
One-dimensional nanostructures have attracted much attention since they provide potential applications for nanoelectronics and nanophotonics [1][2][3][4][5]. For example, zinc oxide nanorods with hexagonal cross-section can be applied as whispering gallery resonators, in which the coupling between the resonant modes and free excitons depends sensitively on the cross-sectional radius [3][4][5]. Growth of nanorods have been extensively reported thus far, yet there are few studies concentrating on the underlying atomistic mechanisms of growth, especially on the understanding and controlling the cross-sectional radius of nanorods from atomic point of view [6][7][8][9][10][11][12][13].
It is known that surface kinetics plays an important role in determining the morphology and size of nanostructures [14][15][16][17]. The deposited adatoms can either diffuse within the topmost layer and aggregate to form a new layer nucleus, or hop downward across the step edges and contribute to the lateral growth of the topmost layer. Under a certain deposition condition, the kinetics-controlled competition between the growth in the normal direction to the substrate and the lateral growth is expected to determine the growth modes and morphologies [15,16].
The surface kinetics can be described by the intralayer and interlayer hopping rates of adatoms, ν = ν_0 exp(−E_d/kT) and ν′ = ν′_0 exp(−E_s/kT), respectively, where the prefactors ν_0 and ν′_0 are attempt rates of approximately the same value; k is the Boltzmann constant and T the temperature. The interlayer diffusion barrier (E_s) is normally larger than the intralayer one (E_d). The difference of these two values is denoted as E_es and is known as the Ehrlich-Schwoebel barrier (ESB) [18,19]. The ratio ν/ν′ increases with the ESB as exp(E_es/kT). The ESB is reported to increase with step height and to saturate within several atomic layers, at a value usually referred to as the three-dimensional (3D) ESB. The conventional ESB applies to a monolayer step and is hereafter termed the two-dimensional (2D) ESB [20,21].
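As a quick numerical illustration of these Arrhenius rates, the following short Python sketch uses made-up barrier values (E_d, E_s, ν_0 and T here are illustrative placeholders, not parameters from this work):

import math

k_B = 8.617e-5  # Boltzmann constant in eV/K

def hopping_rate(nu0, barrier_eV, T):
    """Arrhenius hopping rate: nu0 * exp(-E / kT)."""
    return nu0 * math.exp(-barrier_eV / (k_B * T))

nu0 = 1e13           # attempt rate (1/s), typical phonon-frequency scale
E_d, E_s = 0.4, 0.6  # illustrative intra-/interlayer barriers (eV)
T = 800.0            # temperature (K)

nu_intra = hopping_rate(nu0, E_d, T)   # intralayer rate nu
nu_inter = hopping_rate(nu0, E_s, T)   # interlayer rate nu'
E_es = E_s - E_d                       # Ehrlich-Schwoebel barrier

# The ratio nu/nu' grows with the ESB as exp(E_es / kT):
print(nu_intra / nu_inter)             # ~ 18.2 for these values
print(math.exp(E_es / (k_B * T)))      # identical by construction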
When the ESB is small enough to allow sufficient interlayer diffusion, layer-by-layer growth occurs, whereas a larger ESB leads to multilayer growth [15]. In the latter case, islands initiate from each nucleus and approach nanorods if their cross-sections remain constant along the longitudinal direction. The key to selecting and controlling nanorod growth is to understand how the cross-sectional radius varies as atomic layers add up under specific growth conditions. Since there is a large difference between the 2D- and 3D-ESBs, it is also crucial to identify their specific roles in nanorod growth.
In this paper, we study the influence of the deposition rate and the ESB on the growth process of nanorods. A characteristic radius has been identified, which increases in proportion to the one-fifth power of the ratio of the 2D-ESB-limited hopping rate of adatoms to the deposition rate. Both the growth modes and the nanorod diameter can be selected by tuning this characteristic radius. We demonstrate that when the radius of the initial island is larger than the characteristic radius, the growth morphology evolves from a taper-like structure to a nanorod with radius equal to the characteristic radius. However, if the characteristic radius becomes larger than the radius of the initial island, by increasing the 2D-ESB-limited hopping rate or decreasing the deposition rate, the nanorod morphology can be maintained during growth, with a stable radius limited by both the radius of the initial island and the 3D-ESB. The theoretical predictions of the characteristic radius are compared with experimental observations of ZnO growth, and good consistency has been found.

II. MODEL

Let us consider a nanostructure with a thickness of n atomic layers. For simplicity, the cross section is taken as circular. The radius of the i-th layer is denoted as R_i, measured in units of the surface cell parameter a_0. The dimensionless area and perimeter are A_i = πR_i^2 and L_i = 2πR_i, respectively.
Assume the growth units are deposited along the normal direction of the surface, at rate F per surface cell of area a_0^2. The number of adatoms η per surface cell is governed by the diffusion equation, dη/dt = ν∇^2 η + F. Solving it at steady state gives the adatom number density profile

η(r) = (F/4ν)(R_n^2 − r^2) + η_e,    (1)

where η_e is the dimensionless adatom number density at the edge of A_n. Before nucleation occurs on top of A_n, a deposited adatom on A_n has no choice but to hop across the step edge after a survival time τ. At steady state, the number of atoms leaving the surface per unit time, L_n η_e ν′, is balanced by the number of atoms deposited on the surface per unit time, F A_n. This gives the adatom number density at the boundary,

η_e = F A_n/(L_n ν′) = F R_n/(2ν′).    (2)

The total number of adatoms on A_n is obtained by integrating Eq. (1),

N = πF R_n^4/(8ν) + πF R_n^3/(2ν′).    (3)

The average survival time of an adatom is thus τ = N ∆t, where ∆t = 1/(F A_n) is the time interval between subsequent deposition events on A_n. In addition to ∆t and τ, another relevant time scale is the traversal time for an atom to visit all the sites of A_n, τ_tr = A_n/ν. As A_n grows, the probability of nucleation on A_n increases. Once a new nucleus forms on A_n, the number of atomic layers n increases by one, which leads to growth in the direction normal to the substrate. For the simplest case in which a dimer is the smallest stable island, the nucleation rate on A_n is given by Ω = p_1 p_2/∆t, where p_1 = 1 − exp(−τ/∆t) is the probability that an atom is deposited during the presence of another atom on the surface, and p_2 = 1 − exp(−τ/τ_tr) is the encounter probability [22].
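To make these time scales concrete, here is a small Python sketch evaluating Eqs. (2), (3) and the nucleation rate; all parameter values are illustrative dimensionless choices of ours, not fitted quantities:

import math

F = 1e-6      # deposition rate per surface cell per unit time
nu = 1e6      # intralayer hopping rate nu
nu_p = 1e3    # interlayer hopping rate nu' (nu/nu' = 1000 >> R_n)
R_n = 20.0    # radius of the topmost layer A_n, in units of a_0

A_n = math.pi * R_n**2
L_n = 2 * math.pi * R_n

eta_e = F * A_n / (L_n * nu_p)                             # Eq. (2)
N = math.pi*F*R_n**4/(8*nu) + math.pi*F*R_n**3/(2*nu_p)    # Eq. (3)
dt = 1.0 / (F * A_n)     # time between deposition events on A_n
tau = N * dt             # average adatom survival time
tau_tr = A_n / nu        # traversal time of A_n

p1 = 1 - math.exp(-tau / dt)        # co-presence probability
p2 = 1 - math.exp(-tau / tau_tr)    # encounter probability
Omega = p1 * p2 / dt                # nucleation rate, Omega = p1*p2/dt

print(f"N = {N:.2e} (<< 1), tau/tau_tr = {tau/tau_tr:.1f} (>> 1)")
print(f"Omega = {Omega:.2e} vs Eq. (5): {math.pi**2*F**2*R_n**5/(2*nu_p):.2e}")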
It has been reported that with slow deposition the total number of adatoms is usually much less than unity, i.e., N ≪ 1, which means that τ ≪ ∆t [16,22]. Furthermore, in typical island growth with a large ESB, ν/ν′ is much larger than the dimensionless R_n, so the first term in Eq. (1) is much smaller than the second term, and the adatom number density on A_n is approximately uniform, η ≈ η_e. The total number of adatoms becomes N = πF R_n^3/(2ν′), which gives

τ = R_n/(2ν′).    (4)

It indicates that τ ≫ τ_tr when ν/ν′ ≫ R_n, so the encounter probability p_2 of two adatoms simultaneously present on the island is approximately unity. The nucleation rate can therefore be approximated as

Ω = π^2 F^2 R_n^5/(2ν′).    (5)

Depending on the height of the steps across which the adatoms hop to the lower layer, either the 2D-ESB or the 3D-ESB governs the interlayer atomic diffusion. Correspondingly, subscripts 2D and 3D will be added to ν′ or Ω in what follows to indicate this difference.
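For completeness, the short algebra behind this slow-deposition form of Ω can be written out under the stated approximations (p_1 ≈ τ/∆t for τ ≪ ∆t, and p_2 ≈ 1):

\Omega = \frac{p_1 p_2}{\Delta t}
\approx \frac{\tau}{\Delta t^{2}}
= \frac{R_n}{2\nu'}\left(\pi F R_n^{2}\right)^{2}
= \frac{\pi^{2} F^{2} R_n^{5}}{2\nu'},

using \Delta t = 1/(F A_n) = 1/(\pi F R_n^{2}) and \tau = R_n/(2\nu') from Eq. (4).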
III. CHARACTERISTIC RADIUS
For a buried layer, i.e., an atomic layer above which a second layer has formed, the condition τ ≫ τ_tr means that an adatom is always trapped by the ascending steps before getting a chance to hop to the lower layer. The adatoms on A_n before second-layer nucleation, however, have no choice but to hop down to A_{n-1}. The lateral growth of the topmost layer A_n is thus fed by the atoms deposited on the area A_{n-1},

dA_n/dt = F A_{n-1}.    (6)

We assume for the moment that A_{n-1} is large enough so that A_n is always smaller than A_{n-1}.
The probability f that a second layer has nucleated on A_n, which obeys df/dt = Ω(1 − f), increases with A_n as

f = 1 − exp(−I),    (7)

where

I = ∫ Ω dt    (8)

can be regarded as the average number of nuclei on A_n. Substituting Eqs. (5) and (6) into Eq. (8),

I = R_n^7/(R_c^5 R_{n-1}^2),    (9)

where R_c is defined as

R_c = [7ν′_2D/(π^2 F)]^{1/5}.    (10)

FIG. 1: Schematic of the two different growth modes from the initial island (grey colors) with radius R_1: (a) R_1 > R_c; (b) R_1 < R_c. R_0 denotes the average radius of the substrate occupied per island, and R_n is the radius of the topmost layer in the nanorod with n grown atomic layers.
ν′_2D in Eq. (10) denotes the hopping rate of an adatom from A_n to A_{n-1}, with the subscript 2D emphasizing that R_c is determined by the 2D-ESB across monolayer step edges.
Since the probability f rises rapidly from nearly zero to nearly unity around I = 1, the condition I = 1 can be used as a criterion for the formation of the (n + 1)-th layer [15]. According to Eq. (9), the second-layer nucleus has formed before R_n reaches R_{n-1} if R_{n-1} > R_c. If R_{n-1} < R_c, however, the probability that a second nucleus forms on top of A_n is still nearly zero even when R_n reaches R_{n-1}.
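This criterion is easy to evaluate numerically; in the following Python sketch the rates are again illustrative placeholders rather than measured values:

import math

def characteristic_radius(nu_p_2d, F):
    """Eq. (10): R_c = (7 * nu'_2D / (pi**2 * F)) ** (1/5), in units of a_0."""
    return (7.0 * nu_p_2d / (math.pi**2 * F)) ** 0.2

def avg_nuclei(R_top, R_below, R_c):
    """Eq. (9): average number of nuclei I on the topmost layer."""
    return R_top**7 / (R_c**5 * R_below**2)

R_c = characteristic_radius(nu_p_2d=1e3, F=1e-6)   # ~ 58.9 for these rates
R_below = 100.0                                    # a case with R_{n-1} > R_c
R_next = R_c * (R_below / R_c) ** (2/7)            # radius at which I = 1
print(R_c, R_next, avg_nuclei(R_next, R_below, R_c))  # last value is ~ 1.0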
Since R_c is determined only by the 2D-ESB and the growth conditions, such as the deposition rate and the growth temperature, it can be regarded as a characteristic radius of the growth system. In heterogeneous growth, it is known that the effect of a foreign substrate on the surface kinetic properties depends strongly on the thickness of the grown layers. The interlayer hopping rate ν′_2D is therefore variable, especially in the first two layers. Accordingly, we hereafter denote the characteristic radius of the first layer and of the other layers as R_c0 and R_c, respectively, to distinguish them.
It is known that R_c0 is critical in determining the growth mode, such as layer-by-layer growth or island growth at the beginning of heterogeneous growth. Here we propose that, once island growth sets in, it is the characteristic radius R_c that plays the key role during the development of a separate island in selecting the growth mechanism and the lateral size of the island.
IV. TWO GROWTH SCENARIOS
In heterogeneous growth, the average radius of the foreign substrate occupied by each nucleus is denoted as R_0. If R_0 is larger than R_c0, a second-layer nucleus forms when the radius of the first layer approaches R_1 = R_c0(R_0/R_c0)^{2/7} < R_0, which means that the second-layer nucleus forms before the first layers coalesce and thus island growth sets in. We define R_1 as the radius of the initial island from which island growth starts.
As discussed in the previous section, in heterogeneous growth the characteristic radius for the second layer changes from R_c0 to R_c. In homogeneous growth, or in the late stage of heterogeneous growth where the substrate effect is negligible, the characteristic radius can be varied by changing the deposition rate F or the temperature T, according to Eq. (10). In these cases, R_c0 and R_c are defined as the characteristic radius before and after changing the growth conditions, respectively, and R_0 and R_1 correspond to the radii of the topmost two layers at the moment that a nucleus forms on R_1 after changing the growth conditions. Two growth scenarios can be identified according to the way the characteristic radius changes. The first scenario occurs when R_c decreases from R_c0, i.e., R_c < R_c0. Since R_1 > R_c0, it is still larger than R_c. Thus the third layer forms atop when R_2 approaches R_c(R_1/R_c)^{2/7}. If we assume that all the buried layers cease growing, the radius of each layer is fixed at the moment it is buried by a new layer. Therefore the radius of the i-th layer in a nanostructure with n grown atomic layers, R_i, is determined by setting I(R_i) = 1,

R_i = R_c (R_{i-1}/R_c)^{2/7} = R_c (R_1/R_c)^{(2/7)^{i-1}}.    (11)

Eq. (11) indicates that R_i decreases rapidly with increasing i, until it approaches R_c. Correspondingly, the growth morphology changes from a tapered one to a nanorod within several transient layers, as schematically shown in Fig. 1(a).
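A few lines of Python make the speed of this convergence explicit (the values of R_c and R_1 are arbitrary illustrations):

R_c, R = 50.0, 200.0   # illustrative values, with R_1 > R_c (units of a_0)

# Iterate Eq. (11): each layer nucleates once I = 1 on the layer below it.
for i in range(2, 8):
    R = R_c * (R / R_c) ** (2 / 7)
    print(f"layer {i}: R = {R:.2f}")
# The output approaches R_c = 50 within ~4 layers: the taper turns into a rod.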
Obviously this is merely a limiting case in which strong screening effects exist, so that the topmost layer dominates the deeper layers in capturing the deposited atoms [23]. The opposite limiting case is that in which the growth units are deposited equally over the exposed area, so that the buried layers can also grow. In the latter case, island growth leads to the well-known wedding-cake morphology [16].
The realistic situation lies between these two cases: only a finite number of topmost layers are involved in capturing the deposited atoms as a result of an intermediate screening effect. To illustrate the screening effect on island growth, we consider that only the topmost N_g layers keep growing laterally. The quantity 1/N_g, which lies in the range of 0 to 1, can be taken as a measure of the screening strength. We have carried out numerical calculations of the rate equations with different values of N_g, the details of which will be reported elsewhere. We find that when the number of atomic layers of the island, n, is smaller than N_g, the island grows with the well-known wedding-cake shape. When n increases beyond N_g, the radii R_i for i < N_g grow gradually to R_0, while for N_g < i < n − N_g the R_i approach stable values after sufficient growth,

R_i = R_c X^{(2/7)^i},    (12)

where X is a constant determined by (R_1/R_c) and N_g. For N_g = 1, X = (R_1/R_c)^{3.5}, consistent with Eq. (11). The radius R_i decreases with i until it approaches R_c after some transient layers and then remains at this value. The number of transient layers is proportional to N_g^2, with a coefficient a of the order of magnitude of 10. We therefore show that, under a given screening strength, wedding-cake morphologies (n < N_g), tapered morphologies (N_g < n < aN_g^2) and nanorods (n > aN_g^2) can be observed successively during island growth.
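These three successive regimes can be encoded in a trivial classifier; the coefficient a is only the order-of-magnitude value quoted above, not a fitted constant:

def morphology(n, Ng, a=10):
    """Morphology regime of an island of n layers with screening depth Ng.
    The coefficient a ~ 10 is the order-of-magnitude value from the text."""
    if n < Ng:
        return "wedding cake"
    if n < a * Ng**2:
        return "tapered"
    return "nanorod"

for n in (2, 20, 200):
    print(n, morphology(n, Ng=4))   # wedding cake, tapered, nanorod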
It is thus clear that if the radius of the initial island R_1 > R_c, the structure finally approaches a nanorod with a tapered base beneath it. The screening strength only influences the number of transient layers of convergence, i.e., the atomic layers of the tapered base. Since nucleation takes place on the topmost monolayer, we refer to the nanorod converged from R_1 > R_c as the 2D-ESB-limited nanorod, with radius R_2D equal to the characteristic radius of the system, R_2D = R_c. We term this growth mode the 2D-ESB-limited one.
The second scenario occurs when R_c increases from R_c0 to a value much larger than R_1. In this case the average number of nuclei on top of A_2 when A_2 covers A_1 is (R_1/R_c)^5 ≈ 0. Therefore the topmost two layers can bunch into a bilayer, which then grows laterally from R_1 to R_2 until a new nucleus finally forms atop. The process repeats and the growth morphology remains rod-like, as shown in Fig. 1(b). The whole nanorod grows laterally as the number of atomic layers n increases, fed by the adatoms deposited on the top of the nanorod. Therefore,

n dA_n/dt = F A_n.    (13)

The average number of nuclei can be obtained by integrating Eq. (8),

I = (R_{n-1}/R_c)^5 + 7n(R_n^5 − R_{n-1}^5)/(5αR_c^5),    (14)

where R_c is the same characteristic radius defined in Eq. (10). Note that Ω_3D denotes the nucleation rate on A_n after A_n approaches A_{n-1}, and ν′_3D represents the interlayer hopping rate across multilayer step edges. As discussed above, when R_c is much larger than R_{n-1}, the first term in Eq. (14) is nearly zero. The radius of a nanorod with n atomic layers, R_n, can therefore be determined by setting I = 1 in Eq. (14),

R_n^5 − R_{n-1}^5 = 5αR_c^5/(7n).    (15)

FIG. 2: The radius of the n-th layer R_n in an island with n grown atomic layers, for the cases R_1 > R_c with N_g = 1 (above the dotted line), and R_1 < R_c (below the dotted line), as a function of n. The values are calculated according to Eqs. (11) and (16). The dotted line indicates the characteristic radius R_c. α = ν′_3D/ν′_2D is the ratio of the hopping rate across a multilayer step to that across a monolayer step.
Here α = ν′_3D/ν′_2D = exp[−(E_es^3D − E_es^2D)/kT] and 0 < α < 1. The radius of the i-th layer, R_i, can be obtained by iterating Eq. (15),

R_n^5 = R_1^5 + (5αR_c^5/7) Σ_{i=2}^{n} 1/i.    (16)

The nanorod radius increases until the difference between R_n and R_{n-1} becomes smaller than one lattice parameter for sufficiently large n, when it approaches the stable radius of the 3D-ESB-limited nanorod, R_3D. According to Eq. (15), this happens at

n_s = αR_c^5/(7R_3D^4).    (17)

Substituting Eq. (17) into Eq. (16) and replacing the summation with a logarithm for large enough n, the stable radius of the 3D-ESB-limited nanorod is obtained as

R_3D^5 = R_1^5 + (5αR_c^5/7)[ln(n_s) + γ − 1],    (18)

where γ is Euler's constant. Since αR_c^5 is proportional to ν′_3D/F, it is evident that the nanorod growth in this case is determined by the radius of the initial island R_1, the deposition rate and the 3D-ESB. A larger 3D-ESB (smaller α) facilitates the convergence (i.e., smaller n_s) and leads to a smaller R_3D, which is consistent with a recent Monte Carlo simulation of copper nanorod growth [24].
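A numeric sketch of this 3D-ESB-limited convergence, iterating Eq. (15) layer by layer and stopping at the criterion of Eq. (17); the parameter values are again illustrative choices of ours:

alpha, R_c, R_1 = 0.05, 500.0, 20.0   # illustrative: R_1 << R_c, strong 3D-ESB

R5, R_prev = R_1**5, R_1
for n in range(2, 100000):
    R5 += 5 * alpha * R_c**5 / (7 * n)   # Eq. (15) applied at each new layer
    R = R5 ** 0.2
    if R - R_prev < 1.0:                 # Eq. (17): change below one a_0
        print(f"n_s ~ {n}, R_3D ~ {R:.0f} (remains below R_c = {R_c:.0f})")
        break
    R_prev = R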
It is worth emphasizing that Eq. (15) is valid only when the first term in Eq. (14) is negligible, and it is physically invalid to extrapolate Eq. (18) to an R_3D larger than R_c. In fact, 2D-ESB-limited growth sets in once R_n increases to R_c. Therefore the real radius R_3D may never exceed R_c.
For comparison, we show in Fig. 2 the radius R_n of a nanostructure with n grown atomic layers for the two scenarios, according to Eqs. (11) and (16). It is clear that when the radius of the initial island R_1 is larger than the characteristic radius R_c, R_n decreases rapidly until it approaches R_c. Consequently, the growth morphology evolves from a taper-like structure to a nanorod with uniform radius R_c. In this scenario, the converged radius is limited by the 2D-ESB. When R_1 < R_c, R_n corresponds to the nanorod radius. It increases relatively slowly, with a stable radius smaller than R_c, determined by the 3D-ESB and the radius of the initial island R_1.
V. EXPERIMENT VERIFICATIONS
Experimentally, we take zinc oxide (ZnO) vapor growth as an example to verify the selection of the nanorod radius by varying R_c through the growth temperature. For this purpose, unlike conventional ZnO nanorod growth systems in which catalysts are usually introduced, we establish a physical growth system without additive chemicals.
The ZnO nanorods were synthesized catalyst-free in a horizontal tube furnace with programmable temperature control. Pure zinc powder (99.9%, Alfa Aesar) and a polished Si(100) substrate were arranged in the same quartz boat, 1.0 cm apart. The growth was carried out with a nitrogen flux of 300 standard cubic centimeters per minute (sccm) and an oxygen flux of 5 sccm. The temperature in the central section of the furnace, where the quartz boat was placed, was homogeneous. In each run of the experiment, the temperature was changed as shown in Fig. 3(a) to control the deposition rate. The growth of nanorods was terminated at different times by suddenly increasing the nitrogen flux and cutting off the oxygen supply, as illustrated by the three dashed lines in Fig. 3(a). In this way, we expect to preserve the growth morphology at the moment the growth was terminated.
In our experiments, the temperature is varied while the fluxes of N_2 and O_2 are kept constant for ZnO nanorod growth. The evaporated zinc atoms react with oxygen molecules, and the partial pressure of the product ZnO is proportional to that of zinc, P_ZnO ∝ P_Zn/K_p, where K_p is the reaction constant. Both P_Zn and K_p depend exponentially on temperature,

P_Zn ∝ exp(−B_Zn/kT),  K_p ∝ exp(−B_K/kT),    (19)

where B_Zn = 0.58 eV (6776 K) [25] and B_K = 0.21 eV (2474 K) [26]. The partial pressure of ZnO can therefore be written as

P_ZnO = P_0 exp(−B/kT),    (20)

where B = B_Zn − B_K = 0.37 eV, and P_0 is a constant determined by the growth conditions other than temperature. The temperature-dependent deposition rate per lattice site can be written as F = a_0^2 P_ZnO/√(2πmkT). According to Eq. (10), the characteristic radius is

R_c = sb(kT)^{1/10} exp(−∆E/kT),    (21)

where b = [7ν_0 √(2πm)/(P_0 a_0^2)]^{1/5}, ∆E = (E_s − B)/5, and s is a geometrical factor associated with the cross-sectional shape of the nanorod. Equation (21) shows that R_c can be tuned by changing the temperature. If ∆E is positive, R_c decreases with decreasing temperature; the first (2D-ESB-limited) growth scenario is then realized, and the radius of the nanorod corresponds to the characteristic radius R_c. Otherwise, if ∆E is negative, R_c increases with decreasing temperature, and so does the radius of the nanorods. Now let us examine the variation of the morphologies of ZnO nanorods under the step-decreased temperature shown in Fig. 3(a). The morphologies of the nanorods were characterized by field-emission scanning electron microscopy (LEO 1530VP). Figure 3(b) shows ZnO structures grown at a constant temperature of 600 °C. There is no evident variation of the cross-sectional diameter along the longitudinal direction of the nanorods. When the temperature was decreased from 600 to 550 °C, a second segment of nanorods appeared, whose cross-sectional area shrank to smaller values, as shown in Fig. 3(c). The morphologies obtained after two temperature drops are shown in Fig. 3(d), where two evident changes of cross-sectional diameter can be identified on the nanorods. The experimental observation that the radii of the nanorods decrease from one segment to the next with decreasing temperature suggests that the growth mode is the 2D-ESB-limited one. Therefore the nanorod radius at a given temperature is expected to equal the corresponding R_c. In order to eliminate the influence of substrates, the temperature dependence of the radii of the nanorods in the second segments was explored, while keeping all other growth conditions the same.
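The steepness of this temperature dependence can be checked directly from Eq. (21) using the fitted ∆E reported below; the unknown prefactor sb cancels in the ratio:

import math

k_B = 8.617e-5   # Boltzmann constant, eV/K
dE = 0.59        # fitted Delta E quoted in the text, eV

def R_c_relative(T):
    """Eq. (21) up to the unknown prefactor s*b."""
    return (k_B * T) ** 0.1 * math.exp(-dE / (k_B * T))

T1, T2 = 600 + 273.15, 550 + 273.15   # the two growth temperatures, K
print(R_c_relative(T2) / R_c_relative(T1))   # ~ 0.62: R_c shrinks as T drops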
In Fig. 4, we plot the circumscribed diameters of the cross-sections of the nanorods in the second segments of the structures shown in Fig. 3(c), as a function of the corresponding temperature. The dashed curve gives the theoretical fit according to Eq. (21), and the theoretical model is in good agreement with the experimental data. The fitted value of ∆E is 0.59 eV, which leads to an E_s of about 3.3 eV. We should point out that this value is only a rough estimate of the effective barrier against an adatom diffusing across a monolayer step edge in the zinc oxide system.
VI. SUMMARY
We have demonstrated that two nanorod growth modes can be realized, depending on a characteristic radius which is determined by the 2D-ESB and the deposition rate. When the radius of the initial island is larger than this characteristic radius, the nanorod radius is 2D-ESB-limited and approaches the characteristic value. Otherwise the nanorod is 3D-ESB-limited, with a stable radius smaller than the characteristic radius. We suggest that our results are helpful for selecting the desired growth modes and controlling the diameter of nanorods and nanowires.
Although experimental studies on the growth of ZnO nanorods with hexagonal cross-sections have been reported before, to the best of our knowledge, quantitative studies considering the kinetics of interfacial growth remain rare. Moreover, the theoretical model proposed here is in fact a generic one that is not limited to ZnO nanorod growth. Experimentally, if one can precisely tune the characteristic radius R_c for a specific growth system, or choose a desired initial radius by using suitable seeds, either kind of growth mode can be selected in order to obtain different morphologies and nanorod sizes. By this means, microscopic information about the 2D-ESB and 3D-ESB can also be inferred from experiments. | 2010-03-01T09:18:22.000Z | 2010-02-05T00:00:00.000 | {
"year": 2010,
"sha1": "a6c13b74cd468a7bf0650340b228dbce048412de",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1002.1133",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a6c13b74cd468a7bf0650340b228dbce048412de",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science",
"Chemistry"
]
} |
237647759 | pes2o/s2orc | v3-fos-license | The Colonial Project of Gender (and Everything Else)
The gender binary, like many colonial acts, remains trapped within socio-religious ideals of colonisation that then frame ongoing relationships and restrict the existence of Indigenous peoples. In this article, the colonial project of denying difference in gender and gender diversity within Indigenous peoples is explored as a complex erasure casting aside every aspect of identity and replacing it with a simulacrum of the coloniser. In examining these erasures, this article explores how diverse Indigenous gender presentations remain incomprehensible to the colonial mind, and how reinstatements of kinship and truth in representation fundamentally support First Nations’ agency by challenging colonial reductions. This article focuses on why these colonial practices were deemed necessary at the time of invasion, and how they continue to be forcefully applied in managing Indigenous peoples into a colonial structure of family, gender, and everything else.
The Colonial Project of Gender (and Everything Else)
A central tenet of the project of colonisation is a reductive examination of the lives of Indigenous peoples, casting them as dysfunctional, inferior versions of colonial home state actors (Bodkin-Andrews and Carlson 2016, p. 789). The project of embedding these ideas in colonial lands also relies on the colonial state actors' disconnection from their own deep histories, wherein they replace their beliefs and nuances with faith in the colonial state and enact this belief on others. This denial of anything not in service to the state was then, upon invasion and incursion of Indigenous lands, extended to controlling the behaviour, relationships, and the actual embodiment of Indigenous peoples (Smith 2013, p. 45). While the focus across this article is the exploration of the methods of denial and erasure of genders outside of the gender binary, there is also a recognition that this, while devastating in itself, is a marker of how narratives of exclusion are managed in the practices of colonisation.
Central to any exclusion in managing Indigenous peoples is asserting meaning and value by engaging markers to commodify not only the bodies of existing Indigenous people, but also past and future Indigenous people. This is present in the way that gender roles are reformed through colonial restrictions as a tool to align family and kinship structures so that they mimic privileged European family systems at the moment of first invasion (TallBear 2018, pp. 146-47; Behrendt 2000, p. 354). Through the construction of an exclusive, and excluding, ancestral relationship, this structure proposes relationships of meaning that relate only from progenitor to direct issue. The modern nuclear family as a repeating pattern is then privileged. This process, crucially for Indigenous peoples, replaces existing kinship structures with a linear view of relationships, family, and accountabilities to the state and to each other (Nash 2005, p. 449). To decimate kinship structures that variously occupy relationality outside of this linear path requires mimicry of an accepted European system that entails: a mother, who is a woman; a father, who is a man; and then a child who is coded male or female to ensure a continuation of the gender roles assigned and the commodified reproduction of their future descendants (Kramer 2011, p. 381). The extended family (in the vernacular of the British colonial state) then becomes segmented into reproductions of the linear family state, joined together through a common ancestor. In this structure, grandmother and grandfather were once mother and father, and they become either carers or cared for (Smith 2013, pp. 45-46). The roles outside of these are blurred in the chronology set through the colonial project, as the primary concern becomes a focus on continuation of a reproductive line. This is made more difficult when the system, and the act of genealogical charting, asserts a reproductive kinship, and denies a place for relationships outside of linear reproduction, which then becomes framed as a break in the integrity of the idea of family (TallBear 2020, p. 473).
Within each of these reproductive roles are the binary genders of male and female, clearly marked and stated as gender assignation from birth (Gill-Peterson 2018, p. 26). In order to understand the need for these binaries in the colonial project of managing gender, people, and everything else, it is important to consider how the Western system imposed through colonial incursion is exclusively interested in an unbroken line of blood connection and direct descendancy (Strathern 2017, p. 20). It consistently pits this descendancy against a broken line framed as adoption, foster care, or 'distant' relative care, setting these as atypical family structures. The writer and performer Steven Oliver (Kuku-Yalanji, Waanyi, Gangalidda, Woppaburra, Bundjalung, and Biripi), in their 2021 one-person show Bigger and Blacker, explains the lack of understanding of extended kinship systems by non-Indigenous people in a joke. Oliver describes how white people frame the distance away from family members to them, i.e., 'second-cousin, once removed', in contrast to Aboriginal people framing these relationships as closeness, where brothers can be cousins, and cousins can be unrelated (Oliver 2021) or (using the vernacular of the colonial kinship system) distantly related.
These ideas require Indigenous peoples to be treated as objects who are excluded from what Moreton-Robinson refers to as the 'possessive investment in whiteness', as a feature of asserting sovereignty (Moreton-Robinson 2015a, p. 76). Calls for sovereignty posit that Indigenous people hold no inherent relationship to the colonial state beyond being managed as pawns and observed as inferior subjects (Moreton-Robinson 2015b, p. 139). These practices of distancing kinship are enacted in the colonial system of Australia as a means to reproduce the constrained 19th century family formation (TallBear 2013, p. 4). These ideas are challenged through the concerted efforts of Indigenous-led initiatives that assert kinship as a means to care for children beyond the colonial idea of family (TallBear 2013, p. 4; Dudgeon and Bray 2019). These recalibrations from within our communities are still managed and overseen by state-endorsed models, but the strong leadership within community-led organisations challenges this. This response has been mapped out as essential as we continue to challenge the fallout of one of the most heinous acts in the contemporary colonial era: the forced and violent removal of children from families, where individuals and families were seen as abject and not fit to make decisions over their futures (Bennett and Gates 2019, p. 605). The challenge for Indigenous-led groups that work against the shackles of the colonial state in restating kinship systems that challenge these structures is the further challenge of managing this while also breaking down the state-mandated hierarchies of responsibilities and relationships that they fall within (TallBear 2013, pp. 6, 14-15). There is also no clear place in these genealogical chartings, which represent legal responsibilities, to locate or make meaning of extended kinship relations nor the complexity of connection found across Aboriginal and other First Nations' communities outside of reproductive issue (TallBear 2018, p. 149). They are seen as a disconnect, rather than a connection. Furthermore, the benefit derived from reconsidering the role of gender outside of the binary in these colonial constructions of family and responsibility is to understand that Indigenous uniqueness separate to the coloniser was intentionally erased from the colonial record in order to make us the same (Driskill 2004, p. 51; Smith 2010, p. 58).
Kinships: Restorative and Regenerative
In spite of these colonial incursions, First Nations communities across the world continue to apply their own kinship forms and relationality extending beyond blood-connection or direct line. Early and intentional colonial erasures formed from managing the reproductive rights of First Nations' communities are fractured within the colonial record that is often relied on to frame evidence. Alison Whittaker (Gomeroi) has written on the absence of evidence not being evidence in itself when it comes to queerness of gender or sexuality across this continent. Whittaker argues that the contemporary presence of queer Indigenous people across the continent belies assertions that queerness is the remit of white progressives, and challenges a reductive reckoning of the complexity of who Indigenous people both were and are (Whittaker 2015, p. 226). Across the colonised North American continent, the modern term Two Spirit/2-Spirit has been formulated in recent decades to describe contemporary and historic genders and sexualities that were erased through the colonial record and to provide, as Alex Wilson (Opaskwayak Cree) suggests, a connection from the past to the present (Wilson 1996, p. 303). Not all First Nations' communities connect diverse gender and sexuality, nor do they have names that articulate these connections or this divergence from the colonial binary. There is, however, a growing body of knowledge that suggests that, in spite of the erasures that Whittaker describes, historic renderings that challenge binary genders and reproductive essentialism were, and continue to be, present (Day 2020, p. 368; Smith 2010, p. 47).
Taking control of modern narratives that challenge the colonial project by locating ways to recall and restructure gender and sexualities is key to much of the queer work being done by those who inhabit First Nations' communities (Smith 2010, p. 47). In Australia, the terms Brotherboy and Sistergirl, are newer terms (albeit still framed within the traditional binary) that some people use to describe their transmasculine and transfeminine selves (Farrell 2017, p. 1). These terms, and other terms yet to come, allow for an expansive kinship structure to be calibrated and reframed. They form a challenge to the forced induction of communities into western religious practices that exclude and demonise relationships that fall outside of linear family structures (TallBear 2018, p. 149). They provide a level of resistance (Farrell 2017, p. 1), even as there remains work to be done for these communities to accept and support queerness in the push/pull of the colonial project (Day 2020, p. 368).
Colonial Incursion: Inconsistent and Erasive
These religious constrictions are not a relic of the past, but are held in current practice. This provides evidence of the ongoing nature of colonial incursion, which persistently works to erase difference. In 2019, the Congregation for Catholic Education for Educational Institutions issued a report, Male and Female He Created Them: Towards a Path of Dialogue on the Question of Gender Theory in Education. This report, in use globally across Catholic educational institutions, seems less dialogue and more edict, applying a closed ideal of family as a model to assert the gender binary and exclude all non-nuclear family forms. Across the report there is a persistent assertion that binary assigned genders are immutable, focused on reproduction, and dangerous when expanded upon (Congregation for Catholic Education for Educational Institutions 2019, pp. 3, 5, 7, 11). The report insists on certainties around family formation, in the form of a mother and father, and argues that this is essential because it 'allows the child to construct his or her own sexual identity and difference' (p. 14). In complete contrast, the same document argues against difference or alternate ways in which the understanding of gender should be described and encouraged (pp. 3, 14). The report further asserts that the use of reproductive assistance reduces 'a baby to an object in the hands of technology and science' (p. 15) and then, by contrast, promotes medical intervention for 'cases where a person's sex is not clearly defined', by suggesting that 'it is medical professionals who can make a therapeutic intervention' (p. 13). As with many researchers who challenge the idea of gender, the creators of the document seem confounded by the idea that gender may be separate to sex, even when, in the case of children born with what they frame as ambiguous sex organs (p. 13), clear choices are being made for the child on their gender or assigned sex by others. In the same missive, the document criticises what it frames as 'gender ideology' as an emerging and modern construction, and also criticises that 'this new range of relationships become "kinship"' (p. 9), thus gaslighting every historic iteration of gender diversity and sexuality outside of the heteronormative, and every Indigenous iteration of kinship that challenges the western patriarchal system.
In reading this work alongside Kim TallBear's Making Love and Relations beyond Settler Sex and Family (2018), where we are presented with an alternative way of locating family, what is striking is not only the expansive way that relations are framed, but the ways in which TallBear (Sisseton Wahpeton Oyate) proposes working towards a point where Indigenous genders and sexualities become freed of the colonial trappings of the church and state (TallBear 2018, p. 155). TallBear challenges sexual and gender norms imposed by the colonial state and, in a pathway similar to Whittaker's, resists framing the past as pure, the present as progressive, and the pathway between the two as immovable or controlled by colonial actors, locating these framings as deeply problematic trappings of colonial pronatalism and heteronormativity (p. 153). TallBear and other Indigenous scholars engage expansive ways of countering the closing off of colonial understandings of Indigenous peoples, their genders, and their sexualities. For instance, Alex Wilson documents the work within N'tacinowin inna nah': Our Coming in Stories (Wilson 2008) as a process of incorporating the complete Two-Spirit person in a way that expands the idea of who a community is, rather than requiring them to conform to the edicts of that community.
In contrast, religious organisations, such as the Catholic Church, maintain that difference outside of the fundamental binary is to be avoided (Congregation for Catholic Education for Educational Institutions 2019). Yet people from Anastasia the Patrician to St Marina/Marinos, the latter of whom was made saintly by their very act of denying their birth gender, and canonised on this basis, have behaved outside of what the Church frames as 'femininity' (Grayson 2009, p. 143). St Marina/Marinos presents as male to join a monastery, is accused of fathering a child with a woman from the town, is then cast out (as the canonical documents suggest) only to care for the child, in spite of their obvious 'innocence' in this act. Upon death, their birth sex is discovered and their acts form the basis of their canonisation (Grayson 2009, p. 143). These, and other figures within the Church who perform outside of the acceptable gender norms stated in the previously mentioned report, hold venerated positions in the literal canon of the Catholic Church. With the confusion of gender roles and gender assignation that the previously mentioned Congregation for Catholic Education for Educational Institutions report makes evident, it is surprising that these saints continue to be revered in the modern church. Surely they represent a confusion of gender, as it is explained in the document, and surely their acts cannot be endorsed?
It is also important to note that beyond the way the report frames the risky adoption of ideas of 'gender' as opposed to 'sex', it also frames factors that seem to support complexity in the binary gender that sits outside of the experience of being simply born into a sex. In particular, the term 'femininity' in the report (pp. 10, 18-19, 21) is either described as a role of difference, or as an aspect of reproduction, which would be unavailable to any cisgender women who are unable, or choose not, to reproduce. In this case, what makes them 'feminine' appears to be their equal opposite to masculinity in affect, and not their birth sex (p. 18). The document also speaks to differences between women and men using the same idea of affect, suggesting fundamental attitudinal difference, as it invokes and quotes the 2004 Letter to Bishops of the Catholic Church on the Collaboration of Men and Women in the Church and in the World, wherein it describes that 'women's "capacity for the other" favours a more realistic and mature reading of evolving situations', so that '. . . a sense and a respect for what is concrete develop in her, opposed to abstractions which are so often fatal for the existence of individuals and society' (p. 10). In creating a dyadic separation, they promote a world in which women have the ultimate responsibility for the bad acts of men. They also frame women as having '. . . a unique understanding of reality', suggesting that women are required to be pragmatic, presumably in contrast to men. Given these promoted differences in affect, to then introduce more than two genders could, in fact, confound this binary ideal that seems also focused on contrasting intent.
Representations and Colonising Decisions That Erase and Confound Gender
But where does reliance on the restrictive form of family, gendered roles and gender come from in the colonial mindset when recalling the past and making sense of other cultures? In order to explore this, I ask the reader to come on a journey that considers the way that artefacts and objects from the past, now held within museums, have been calibrated and recalibrated to erase and restrict their meaning and relevance.
Between 2010 and 2019, I undertook a major research project exploring the capacity for national museums to engage with First Nations' communities within their own geographic region. It focused specifically on how those communities work with museums to reflect or represent what was meaningful for them by asking a central question of what is effective in that representation, both for the museum and for the community. The project required a broad review of 470 museums, and included social history and natural history museums, museums with encyclopaedic approaches, and museums that took visitors through a timeline of history. Some museums were focused on and run by First Nations' communities. Others, such as the National Museum of the American Indian (NMAI), were pan-Indigenous and used a survey approach where they engaged and represented multiple communities. Still others had a broader brief that included First Nations within the jurisdictions of the museums, such as the national museums of Australia and the United States, two countries originally selected as sites of interrogation.
While engagement with contemporary communities was central to the discussion, many of the museums were focused on the archaeological record, and often failed to perform ongoing engagement with these communities to make sense of artefacts associated with deep time (O'Sullivan 2016, p. 38). Engagement with contemporary communities, however, was also not always welcomed by those communities. In part this was because museums have been sites of extreme colonial violence, with requests for the return of the human remains of past generations held in museums and archives denied or dismissed until recent years. This work on repatriation was the central work I had engaged with prior to beginning this project. As a Wiradjuri person, like many other Indigenous people, I was deeply concerned about the ways in which museums had enacted the colonial project, not only through the gathering of our artefacts and objects of our past, but also through their failure to comprehend the complexity of what it means for our ancestors to be treated as objects in an archive (O'Sullivan 2016, p. 35). From this position came the review project, which interrogated how these spaces engaged with communities and heard their voices, their needs for repatriation and their desire to represent and interpret their own communities and peoples.
As a major part of this review, I sought to understand the ways in which First Nations' peoples were being represented in their colonised home countries. The project had begun at the time that the deployment of the Native American Graves Protection and Repatriation Act (NAGPRA) was in full swing, and this provided opportunities for First Nations' communities in the US to look at their own engagements and to define their own participation in the ongoing project of inclusion in museums. In particular, outside of this work of repatriation, NAGPRA built capacity within Communities for managing collections and worked towards rehabilitating the relationship between non-Indigenous museums and First Nations' museums and communities. This idea of self-representation was the other present concern for many First Nations' peoples, coming through as a clear outcome from the discussions held across the first part of this project in 2010 (O'Sullivan 2016, p. 37).
During the early stages of the research, I conducted face-to-face yarning circles, focus groups that allowed free discussion of concerns and suggestions (O'Sullivan 2016, p. 37). Elders were asked to prioritise the museums and places that should be included in this study. Yarning circles are often used by First Nations' communities to discuss important business, and differ from focus groups in the way that they centre the relationality of the members. The researcher will acknowledge the Nation on which they are meeting, usually declare their own Indigenous nation (or non-Indigenous status and background) and their relationship to the community to which they are speaking. Other participants will do the same, and the scene is set for open discussions that provide deep background for the research (Carlson and Frazer 2018). Listening to Elders in this forum was essential in understanding the difficult history that they had, the concerns that they felt, and the expectations that they placed on museums to represent and engage with our Communities.
I began my introduction with my own history of working in repatriation of human remains from museums back to First Nations' communities. I then explained that the project was already set to include certain museums across two of these colonially constructed countries, the first being Australia, as host and funder of the research, and as the site of my own Wiradjuri Nation. The United States was also included as it represented a similar colonial structure of hundreds of linguistically different communities, that both dispossessed First Nations' peoples and accommodated (albeit with levels of paternalism) self-determination for those communities. The museums included were the National Museum of the American Indian of the Smithsonian Institute, the National Museum of Australia, as well as leading museums that held substantial collections. Museums that were run by First Nations communities to represent their own nations were also included, such as the Mashantucket Pequot Museum in Connecticut. The basis for inclusion was that each museum or keeping place held a substantial part of their collection devoted to First Nations' representation in some form, and that their approach engaged with contemporary communities.
In one of the initial yarning circles, an elder asked me why I wasn't looking at England. I assumed the Elder was confused about the nature of the project and I explained that unlike my previous work on repatriation in colonial spaces, this project focused on museums that were located on, or near, First Nations' lands, or within that same colonised country. The Elder was visibly annoyed with me, and stated that 'You can tell a lot about how people represent others by how they represent themselves' (O'Sullivan 2016, p. 37). Their idea, far more complex than the ones I had formulated at the time, proposed that within coloniser lands, an exploration of the colonisers' own long history presents a deeper understanding of how their own colonial practices are shaped. Furthermore, it ponders how, in the context of this research, these understandings affect the ways in which First Nations' Peoples are represented in their museums. Britain, then, became included (not unproblematically) and it was there that a number of issues around not just the depiction of their own deep history, but their fixed depictions of gender and gendered affect, started to surface.
Historical Incursions: Who Decides Gender?
In the Ice Age Art exhibition of the British Museum and its corresponding book, Jill Cook explores the modern mind told through Ice Age artmaking, focused on the contemporary human figurines that formed identities and meaning for the corresponding communities in European deep time (Cook 2013). For a country that presented extreme resistance to my questions on the connection to a deep history of past peoples (O'Sullivan 2016, p. 38), Cook's work on the European mind across more than 40,000 years represented an insight into a connection that is often not found across British museums, which frequently only see ethnicity in others (O'Sullivan 2016, p. 37). These figurines, Cook posits, suggest significant complexity of thought, and the work is a masterpiece of connecting deep time history with contemporary ways of imagining humanity's place in the world (Cook 2013, p. 108). However, Cook, in exploring these meanings, consistently assigns a gendered reading to the figurines that begins with a conflation of sexing and gendering, and extends to language of gendered characteristics of masculinity and femininity similar to that used in the previously mentioned document from the Vatican.
When discussing a figurine in mammoth ivory found at Laugerie Basse that is roughly dated to 17,000 BCE, Cook proposes that the figure's 'femininity is determined by the vulva slit clearly marked by an incision' (Cook 2013, p. 226). In this instance, beyond a conflation of gender and the sexing of the figurines, a broader reach is made where corroborating information cannot be found, assuming a binary of genders and affect on the basis of a sexing. The starting point is that the figurines become sexed rather than gendered; Cook, like many others imagining the past through an archive-focused retelling, then significantly embellishes that sexing by applying highly gendered terms such as 'femininity' to code it. 'Femininity', in particular, is used frequently throughout Cook's description as an interchangeable idea of a sexed or gendered figure (pp. 224-41). That the terms masculinity and femininity are used interchangeably to provide detail that affirms binary genders speaks to what reads as a reductive reckoning connected to the curator/writer's own gaze, erasing any potential reading of the archive for divergence from the binary.
Venus of Willendorf is one of the best known European Ice Age figurines, and has become the embodiment of the conflated female/woman/fertility character as a deep time artefact (Karayanni 2009, p. 449; Tripp and Schmidt 2013, p. 56; Kuiper 2016). Such nearly complete figurines are often interpreted variously as womanly, female, and as symbols of fertility, regardless of available corroborating evidence. Cook quotes the archaeologist Karel Absolon's description of pieces found in stylised, disconnected body parts in the form of wearable ivory as 'grotesque', suggesting the 'highest degree of sexual-biological hyperstylization', and that the '. . . artist neglected all that did not interest him stressing his sexual libido only where breasts are concerned-a diluvial plastic pornography' (Cook 2013, p. 68). This highly gendered analysis suggests a modern ontological frame, a male gaze, neither of which is backed up with evidence from the archaeological record to suggest gender or sexualization. The entire story contains a confabulation that frames actor, intent, and, of course, gender(s).
Characters framed as male often also engage ideas of fertility. Often depicted with erect penises, or showing a level of strength, their physical characteristics are listed so as to become the embodiment that then correlates to binary male fertility. But what evidence exists that this was the intent of the makers? And what right do those who curate have in casting their own ideas of gender on these figurines? The figure of Lion Man, one of the oldest Ice Age figurines at approximately 40,000 years old, becomes framed, at least in English, as male (Cook 2013, p. 30). Although this interpretation is challenged, Cook airs these challenges by weighing up evidence based on body parts that, apparently, demonstrate maleness as the pivot opposite to femaleness, with no consideration for other ideas around gender as a process of determining gender. Essential to these assertions is the assumption that gender and gender typing mattered as much to ancient peoples as they do to those cataloguing and writing about them. Does it matter if Venus of Willendorf or Lion Man are gendered at all? What does it tell us beyond a contemporary obsession with the gender binary? If there is ambiguity, it is almost always resolved into a gender according to the binary. And where we see genders represented, or even contested, outside of the binary, they are often configured as metaphysical and not representative of actual, living people, as in Lion Man: half human, half lion. Are characters outside of the binary, then, only available as abject or alien representations, rather than figurative representations of living people from the past?
The criticisms are not of Cook's otherwise complex rendering of the ice-age human's mind, nor of other curators and writers who have sought to imagine the past in order to present it for the engagement of a modern audience. While Cook acknowledges in their work that these Ice Age figurines cannot be truly known to us (p. 107), in persisting in this gendering they provide insight into the problematic position that comes from relying on the values of those telling the story and those who control the narrative. The risk is that in the retelling of gender, an overemphasis on identifying the gender of these figurines, as though it provides greater insight to those contemporary actors, reduces rather than expands the ways in which peoples across deep time can be imagined.
Conclusions: Yindyamarra-Respect, Relationality, and Inclusion
For contemporary Indigenous peoples, this reduced representation acts as a marker for our continued management within the colonial project, through the colonial structures and restrictions still held in place. As the Elder suggested during the yarning circle, 'You can tell a lot about how people represent others by how they represent themselves'. Palawa scholar Ambelin Kwaymullina's poetic summary of colonisation as 'the long con' recognises the persistent lengths to which colonial structures will extend to erase the truths and complexities of Indigenous people (Kwaymullina 2020). Museums are the ultimate problematic for Indigenous Peoples: they are spaces that have collected, reduced and displayed our very bodies, and that remain spaces where our past and our present are held. If we cannot insist on the complexity of genders and gender expression, and instead reduce ourselves only to ideas of what can be proven through the colonial record, we will remain static and will succumb to the strictures of the colonial project of managing gender and everything else.
While a curator analysing deep time artefacts of a culture distantly removed is restricted by the available evidence, contemporary Indigenous peoples are writing their own stories and are refusing to have their sovereign selves be contained. Wiradjuri writer and scholar Anita Heiss has written about the impossibility of writing on the continent now known as Australia without the presence and permission of First Nations' peoples. In considering her argument for both Indigenous people writing their own stories and the respectful inclusion of the figures of Indigenous peoples by non-Indigenous writers, she has deployed a Wiradjuri expression: yindyamarra, which she interprets as to '. . . respect, honour, be polite' (Heiss 2021). It insists on inclusion, and for that inclusion to come only through permission and an ongoing conversation. Kwaymullina's 'long con' presumes that this is a constant challenge for those engaged on both sides of colonisation. As a process that denies and erases, to then include and respect is an important journey. In the case of writing the complexity of gender, that may mean, for museum curators, writers and those tasked with supporting better understandings of the uncontained truth of Indigenous gender of the past, that engaging yindyamarra and revealing our own biases on gender is key to providing a story that reveals, rather than erases. | 2021-09-27T20:56:05.555Z | 2021-07-16T00:00:00.000 | {
"year": 2021,
"sha1": "bc2e77feb2f297582f9e081e98b52a15752c8ecf",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2313-5778/5/3/67/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "3f36e55418a5d88341e38c142e19aed7c81b1a4b",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"History"
]
} |
231901483 | pes2o/s2orc | v3-fos-license | Transmission Chains of Extended-Spectrum Beta-Lactamase-Producing Enterobacteriaceae at the Companion Animal Veterinary Clinic–Household Interface
Extended-spectrum beta-lactamase-producing Enterobacteriaceae (ESBL-E) among animals and humans are a public health threat. This study analyzed the occurrence of ESBL-E in a high-risk environment in a companion animal clinic and two animal patients’ households. In an intensive care unit (ICU), rectal swabs from 74 dogs and cats, 74 hand swabs from staff and 298 swabs from surfaces were analyzed for ESBL-E. Seventeen hospitalized patients (23%) and ten (3%) surfaces in the ICU tested ESBL-E positive. Transmission chains for Klebsiella pneumoniae ST307 blaCTX-M-15 and Escherichia coli ST38 blaCTX-M-14, ST88 blaCTX-M-14 and ST224 blaCTX-M-1 were observed over extended periods of time (14 to 30 days) with similar strains isolated from patients and the clinical environment. After discharge, two colonized dogs (dogs 7 and 12) and their household contacts were resampled. Dog 7 tested repeatedly positive for 77 days, dog 12 tested negative; six (24%) surfaces in the household of the persistently colonized dog tested ESBL-E positive. The owner of dog 7 and one of the owners of dog 12 were colonized. Based on whole genome sequencing, isolates from the owners, their dogs and other ICU patients belonged to the same clusters, highlighting the public health importance of ESBL-E in companion animal clinics.
Introduction
Antimicrobial resistance in companion animals is of public health importance because of the close contact between pets and their owners, which can facilitate the transmission of resistant bacteria [1][2][3][4][5][6][7][8][9]. The trend towards intensive medical care of dogs and cats fosters hospitalization and nosocomial infections [10][11][12][13] and has led to a growing number of geriatric and immunosuppressed animal patients that are highly susceptible to infections, including those with antimicrobial resistant microorganisms (ARM). Antimicrobial use, which is discussed as one of the main drivers of resistance development, is common in companion animal medicine and includes highest priority critically important antimicrobials; even antibiotics of last resort, such as carbapenems, are administered in some instances [14][15][16][17][18][19][20][21][22][23][24].
The spread of ARM, such as extended-spectrum beta-lactamase-producing Enterobacteriaceae (ESBL-E), challenges human and veterinary healthcare settings worldwide and poses a public health threat [25]. In addition to their plasmid-mediated resistance to penicillins and cephalosporins, ESBL-E are often resistant to antibiotics such as fluoroquinolones, aminoglycosides, and sulfamethoxazole/trimethoprim [26]. Previous hospitalization, a raw food diet, elderly age, urinary or intra-abdominal infections, hepatic cirrhosis, residence in overcrowded household districts and antimicrobial therapy are known risk factors for ESBL-E colonization of dogs and cats [27][28][29]. In a recently published study, 21.4% of dogs and cats carried ESBL-E on admission to veterinary hospitals, whereas 53.7% were colonized after 72 h of hospitalization [27]. This points towards an important role of companion animal clinics in the transmission of ESBL-E [1,[10][11][12][13]30]. However, the transmission chains for ESBL-E within veterinary hospitals, especially in high-risk settings such as intensive care units (ICUs), have not yet been resolved. Additionally, the impact of colonized patients on ESBL-E dissemination in households after discharge is unclear.
The close contact between companion animals and their owners in the household is thought to be a risk factor for ARM transmission to owners. In households in which humans carry ESBL-E, identical strains were detected in dogs from the same households [28,31]. Furthermore, a study from human medicine documented that transmission rates of ESBL-E between humans in household settings outnumbered transmission rates within the hospital, and transmission rates of 23% and 25% for ESBL-producing Escherichia coli (E. coli) and Klebsiella pneumoniae (K. pneumoniae), respectively, were documented within the households [32]. This indicates that household transmission between humans can play a substantial role in the spread of ESBL-E, but data regarding transmission between companion animals and humans in households are limited. Furthermore, the contamination of the household environment with ESBL-E by colonized pets has not yet been investigated.
The aims of this study thus were to analyze transmission chains of ESBL-E over a 45-day period in an intensive care unit, a high-risk environment in a companion animal clinic, and to investigate ESBL-E dissemination by colonized patients to household contacts and the environment in two households after discharge.
ESBL-E in the Intensive Care Unit
A total of 91 rectal swab specimens from 49 dogs and 25 cats hospitalized in the ICU, 298 specimens from 25 predefined high-touch surfaces and 74 hand swabs from healthcare workers in the ICU were collected at regular intervals on 12 sampling days over a 45-day period. ESBL-E (E. coli and K. pneumoniae) were isolated from 12 (24%) dogs and 5 (20%) cats (Figure 1, Table S1) and from 3% of the high-touch surfaces (range: 0-28% per sampling day; positive specimens: dog cage, area of drug preparation, small cabinet, blood pressure monitor scale (floor), water tap, fridge with medication, scissors). None of the hand swabs tested positive for ESBL-E.
ESBL-E genes detected in the clinic included blaCTX-M-1, blaCTX-M-14, blaCTX-M-15, blaCTX-M-65, and blaCTX-M-216; blaCTX-M-15 was most common and detected in 8 of 10 (80%) environment- and 10 of 23 (43%) patient-derived ESBL-E positive specimens. Additionally, broad spectrum beta-lactamase genes blaSHV-1 and blaTEM-1 were detected amongst these isolates (Table S1). Among the E. coli isolates, nine different sequence types were identified (Figure 1, Table S1). Among the K. pneumoniae isolates, ST15 and ST307 were found. The phylogenetic relationship for all human-animal-environmental strains is shown in Figures 2 and 3. K. pneumoniae ST307 blaCTX-M-15 predominated in the ICU, particularly on day 22 where ESBL-E contamination of the ICU was most extensive (Figure 1, Table S1). On this day, 7 (28%) environmental specimens tested positive for ESBL-E and six of these isolates belonged to K. pneumoniae ST307 blaCTX-M-15.
Transmission chains for several closely related ESBL-E isolates were detected within the ICU over extended periods of time. K. pneumoniae ST307 blaCTX-M-15 was isolated for the first time from dog 4 on day 15 and thereafter from different hospitalized patients (dogs 6 and 8; cats 1, 4 and 5; days 22-29) and environmental surfaces (days 22 and 45, Figure 1), which indicates an ongoing transmission chain for this strain. Some of these isolates (dog 4, day 15; cat 5, day 29; environmental specimens, days 22 and 45) were characterized by whole genome sequencing (WGS) and core genome multi-locus sequence typing (cgMLST) analysis, which revealed that all selected isolates belonged to the same cluster. Additionally, three specific E. coli strains (ST88 blaCTX-M-14, ST224 blaCTX-M-1, ST38 blaCTX-M-14) were isolated on various sampling occasions in the ICU. E. coli ST88 blaCTX-M-14 was detected in dog 1 (days 3 and 8) and dog 2 (day 3) and was thereafter isolated from dog 5 on day 17. E. coli ST224 blaCTX-M-1 first occurred in dog 7, cat 3 and the environment on day 22, and was detected again 21 days later (day 43) in dog 11 and in the clinical environment. Lastly, E. coli ST38 blaCTX-M-14 was first isolated from dog 9 on day 29 and again from dog 12 on day 45.
ESBL-E in the Households
Two colonized dogs (dog 7, household 1; dog 12, household 2) were resampled at home after discharge from the clinic, together with the household contacts and the household environment (Figure 4). Household 1 contained dog 7 and the owner. Dog 7 and the owner were found to be persistently and intermittently colonized with ESBL-E, respectively, between days 27 and 77 after the dog's discharge from the clinic, and both tested negative on day 133. E. coli ST224 blaCTX-M-1 and ST5869 blaCTX-M-56 were detected in the dog, and E. coli ST224 blaCTX-M-1 and ST10 blaCTX-M-15 in the owner. E. coli ST224 blaCTX-M-1 had already been isolated from dog 7 during hospitalization (and from other ICU patients and the ICU environment, see above) and was found in dog 7 on repeated samplings until day 77 after discharge and in the owner until day 47 after the dog's discharge (Figure 4). Isolates of E. coli ST224 blaCTX-M-1 from dog 7 (day 47 after discharge), its owner (day 47 after discharge) and clinic-derived specimens (cat 3, day 22; clinical environment, day 22) were subject to WGS and cgMLST analysis and confirmed to belong to the same cluster.
Figure 4. Timeline of extended-spectrum beta-lactamase-producing Enterobacteriaceae (ESBL-E) isolated from the dogs, cats and owners in households 1 and 2 and the household environment. CD, colonized dog; O, owner; d, days after discharge; H1, household 1; H2, household 2; H1c1, water bowl; H1f1, dog's sleeping basket (living room); H1g1, dog's blanket on terrace; H1h1, dog's sleeping basket (bedroom); H1m1, carpet; H1y1, kitchen sponge; SK, subculture. Each horizontal line refers to a specimen obtained from the same animal, owner or environmental surface over time (t1, t2, t3, t4, t5). The brackets at the right side of the strain ID indicate subcultures of the same specimen. Negative test results are only shown for animals, environmental surfaces or owners that had tested positive for ESBL-E at a certain time point.
Environmental contamination with ESBL-E was detectable in 6 (24%) specimens in household 1 (at day 47 after discharge) and all isolates belonged to E. coli ST224 blaCTX-M-1. Areas in close contact with the dog, such as the carpet, the dog's water bowl, the dog's sleeping basket in the living room, the dog's blanket on the terrace and the dog's sleeping basket in the bedroom, but also the kitchen sponge, were contaminated, whereas areas primarily in contact with the owner tested negative. The relatedness of the isolate deriving from the kitchen sponge to the other E. coli ST224 blaCTX-M-1 isolates was confirmed through WGS and cgMLST. Immediately after thorough cleaning with a commercially available household cleaning product, none of the environmental specimens taken tested positive (day 57), while the dog, but not the owner, remained consistently positive until day 77 (Figure 4).
Household 2 contained two people, two cats, the colonized dog 12 and another dog (Figure 4). At the time of retesting (68 days after discharge), one of the two owners in household 2 was colonized with E. coli ST38 blaCTX-M-14 while both dogs, the cats and the other owner tested negative (Figure 4). The owner again tested positive for this strain in the second sample collected 118 days after the dog's discharge. E. coli ST38 blaCTX-M-14 had originally also been isolated from dog 12 of this household (on day 45 during hospitalization) and from another dog (dog 9, day 29) from the ICU 16 days before dog 12 was sampled. WGS and cgMLST analysis confirmed that the isolates of dogs 9 and 12 and of the owner of dog 12 (68 days after discharge) belonged to the same cluster. Environmental contamination with ESBL-E was undetectable in this household (at day 68 after discharge), where only one owner, but not the dogs, was colonized with ESBL-E.
Hygiene standards were assessed in both households using a questionnaire (Table S2). Overall, hygiene behavior did not clearly differ between the households. Owners of both households indicated that they "regularly" used hand sanitizer and antibacterial soap, none of the owners fed their dogs a raw food diet, and both owners had contact with the human health care system (the owner living in household 1 worked as a surgical cosmetician, the owner in household 2 worked as a care professional in a nursing home). Owners from both households indicated that kitchen towels were not changed daily and that no separate chopping boards were used for meat and food of nonanimal origin.
Resistance Profiles of ESBL-E
Resistance profiles were determined for all ESBL-E isolates collected in this study. The strains shared by the owner and the colonized dogs in households 1 and 2 showed resistance to ampicillin, cefazolin, cefotaxime, cefepime, nalidixic acid, ciprofloxacin, sulfamethoxazole-trimethoprim and streptomycin (Table S1). The E. coli strains in household 1 were additionally resistant to kanamycin, gentamicin and tetracycline, whereas the E. coli strains in household 2 and from dogs 9 and 12 were also resistant to azithromycin.
Discussion
The present study documents transmission chains for several ESBL-producing E. coli and K. pneumoniae strains in a high-risk setting of a companion animal clinic. Within the limited time of observation (45 days), transmission chains for one K. pneumoniae and three E. coli strains were documented, and the isolates included high-risk human pathogenic clones such as K. pneumoniae ST307, which has been previously associated with carbapenem and ESBL resistance [33][34][35][36]. This strain was repeatedly detected in ICU patients and the clinical environment over a period of 30 days, indicating an ongoing outbreak situation. Our results underline that ICU settings in companion animal clinics could significantly contribute to the spread of ESBL-E and high-risk human pathogenic clones.
Worryingly, the study also supports a direct transfer of ESBL-producing E. coli strains from ICU patients to companion animal owners (or vice versa). A recent study found that only 12% of the owners in households with a colonized dog were ESBL-E carriers. Additionally, a match in the core genome between the owner and the dog specimen was only found in 5% of the exposed households [28]. A previous study found that dog ownership was not a risk factor for ESBL-E carriage [29]; however, dogs are often colonized with ESBL-E after hospitalization [27]. E. coli ST38 blaCTX-M-14, originally detected in two ICU patients, was isolated from the owner of one of these animal patients after the dog's discharge. Interestingly, the colonization of the owner was found at a time when the dog tested negative for this strain, and colonization persisted for at least 50 days. WGS and cgMLST confirmed the very close relationship of the isolates of the owner and the ICU patients. Of note, the colonization of the owner was not associated with environmental contamination with ESBL-E in this household. In the second household investigated in this study, closely related E. coli ST224 blaCTX-M-1 isolates were detected in the owner and their dog after discharge over extended periods of time, and considerable environmental household contamination occurred with this strain (24% positive environmental surfaces). The dog remained colonized with this E. coli strain for 77 days after discharge. In this household, it was unclear whether the dog introduced this E. coli strain into the ICU and caused a transmission chain to three other ICU patients, or whether colonization first occurred during hospitalization and resulted in a transfer of the isolate into the household. Again, WGS and cgMLST confirmed the very close relationship of the clinic- and owner-derived isolates. The results underline that ESBL-E transmission chains can be frequent in ICU settings in companion animal clinics and pose a risk both for the animal owners and for other ICU patients. Interruption of these transmission chains by comprehensive infection prevention and control (IPC) concepts, including stringent adherence to hand hygiene, is thus of public health importance and should be urgently promoted. So far, the implementation of IPC concepts is, in contrast to human hospitals, not mandatory for veterinary clinics in Switzerland.
Households have been previously described as a potential reservoir for ESBL-E [37,38]. Environmental contamination with ESBL-E in the household was extensive in household 1 with the persistently colonized dog but was undetectable in household 2 where only the owner was colonized, although hygiene habits seemed to be comparable in the two investigated households. Both owners indicated not using separate chopping boards for meat and food of nonanimal origin and not changing kitchen towels daily, which could both be an important source of transmission of ESBL-E [29,39]. Furthermore, ESBL-E were primarily detected on surfaces in household 1 that were in close contact with the persistently colonized dog. This could support the hypothesis that colonized companion animals contribute more to household contamination with ESBL-E than colonized humans. This is also supported by the fact that in household 1, E. coli ST224 blaCTX-M-1 was the only strain detected in the dog during hospitalization and for 77 days after discharge, in contrast to the owner, who was colonized with an additional strain, and only E. coli ST224 blaCTX-M-1 was detected in the household environment. Such long carriage periods could additionally contribute to the risk of spreading of ESBL-E in the household environment [28]. Of note, environmental contamination with ESBL-E in household 1 was much higher than on most of the sampling days in the ICU of the investigated clinic (3%, range: 0-24%), and higher than recently reported for environmental samples collected in seven companion animal clinics and practices in Switzerland (0-2% of the environmental specimens were ESBL-E positive) [40]. Although only two households were investigated in this study, our results are alarming and highlight the need to develop evidence-based recommendations for the handling of ESBL-E colonized animals in the household environment.
Overall, blaCTX-M-15, a highly prevalent ESBL gene in both humans and companion animals, predominated in the clinical samples [41][42][43][44]. The previously described emergence of blaCTX-M-1 and blaCTX-M-14 in dogs and cats in Switzerland was also evident in this study [45]. ST307 has been previously described in a dog with a urinary tract infection from Brazil [46]. Additionally, E. coli ST38 blaCTX-M-14, ST88 blaCTX-M-14 and ST224 blaCTX-M-1 reoccurred on different sampling occasions, indicating additional minor outbreaks. ST224, which was also isolated from the owner in this study, has been frequently reported in companion animals [47][48][49]. ST88 is common among both humans and animals and globally distributed [50]. Both ST88 and ST38 belong to global extraintestinal pathogenic E. coli lineages and companion animals have been documented as a possible reservoir [51,52]. Furthermore, ST15, an epidemic and international human-related K. pneumoniae strain, was isolated from one dog in this study. K. pneumoniae ST15 and E. coli ST10 and ST58 strains have been previously isolated from clinical specimens of companion animal patients from the same veterinary clinic in an unrelated study [45].
Data on environmental contamination by ESBL-E in veterinary facilities are scarce. A recent study reported a prevalence of ESBL-E on high-touch surfaces ranging from 0 to 2% across seven Swiss veterinary clinics and practices [40], and areas with high patient traffic and utensils were most contaminated with ARM [40]. In this study, environmental contamination with ESBL-E in a high-risk setting was detected in 3% of the high-touch surfaces, but contamination varied considerably between the sampling days: seven of 10 isolates were found on sampling day 22, and six of these seven specimens yielded K. pneumoniae ST307 blaCTX-M-15.
ESBL-E were not isolated from any of the hand swabs in this study, although hands are regarded as one of the main vectors for ARM transmission. Considering the high number of hand-animal contacts that take place during the daily work of healthcare workers, the microbiological analyses of the swabs represent only a snapshot and cannot fully mirror the transmission events in these settings. Previous studies have reported drug-resistant Enterobacteriaceae on the hands of veterinary staff [53], and nosocomial pathogens have been isolated from the hands of healthcare workers [54].
The present study has some limitations. The ICU of only one veterinary clinic was investigated, so an extrapolation of our results to ICU settings of other veterinary clinics is not possible. Of note, the companion animal clinic included in this study showed the lowest environmental ARM contamination and the highest IPC standards among three large referral clinics in Switzerland in a recent study, suggesting that the frequency of ESBL-E transmission chains observed here is not likely to be overestimated [40]. However, the present study also showed considerable variation in ESBL-E detection between sampling days. Furthermore, only two households were investigated, and documentation of a colonized companion animal was only available in one of them. Moreover, environmental sampling in the households differed with regard to the time after discharge. Future studies should thus further elucidate ESBL-E transmission chains between companion animals and owners in household settings.
Ethics
All methods were carried out in accordance with relevant guidelines and regulations. In accordance with local legislation, ethical approval was sought from the Swiss Ethics Committees on research involving humans (2019-00768). Informed consent was obtained from all participants. Ethical approval for the collection of rectal swab specimens from the dogs and cats was received from the local Veterinary Office (ZH028/19). All owners of the dogs and cats gave informed consent.
Specimen Collection
Single cotton swabs were used for the collection of specimens. Rectal swab specimens from all dogs and cats that were examined by the intensive care unit (ICU) of a veterinary tertiary care facility between June 2019 and August 2019 were collected by the first author of this study after informed owner consent. Sampling intervals were kept constant throughout the study period (Figure 1). Swabs from a modified, previously published list of high-touch surfaces (Table S3) were collected in the ICU during the same time period [40]. At the same time points, hand swabs of the dominant hand from veterinary staff (i.e., veterinarians, nurses and students) working in the ICU were collected before and after animal patient contact, regardless of whether gloves were worn. If gloves were worn, the hand swab was taken from the glove.
Two households were followed up and asked to send stool specimens of the colonized animal patient and its household contacts (owner, dogs and cats living in the same household) at different time points (Figure 4); in addition, specimens from twenty-five surfaces with high human and animal contact in the household (Table S4) were collected. Furthermore, owners were asked to fill out a questionnaire on household hygiene (Table S2) [55][56][57]. There was no compensation for participating in the study.
Microbiological Analysis
Specimens from dogs, cats and owners of two colonized dogs, hand swabs and swabs of high-touch surfaces from the clinic and the household environment were analyzed for the presence of ESBL-E.
All swabs were immediately enriched in 10 mL peptone water (BioRad, Hercules, CA, USA), followed by selective enrichment in Enterobacteriaceae enrichment broth (Oxoid, Hampshire, UK). ESBL-E were screened for by using the chromogenic medium Brilliance™ ESBL Agar (Oxoid, Hampshire, UK), according to the manufacturer's instructions. Colonies were picked from the selective media based on phenotype, and species identification was conducted by using matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS, Bruker Daltonics, Bremen, Germany). Polymerase chain reaction (PCR) assays for the presence of genes encoding blaCTX-M groups, blaSHV and blaTEM were conducted on Enterobacteriaceae isolates as previously described [45,[58][59][60].
WGS
Whole genome sequencing was performed on selected isolates from colonized dogs and cats, owners and the clinical and household environment, chosen on the basis of MLST results, genes encoding blaCTX-M groups, blaSHV and blaTEM, and antimicrobial susceptibility testing, according to previously described procedures [64]. Briefly, the isolates were grown overnight on sheep blood agar at 37 °C prior to genomic DNA isolation using the DNA blood and tissue kit (Qiagen, Hombrechtikon, Switzerland). A Nextera DNA Flex Sample Preparation Kit (Illumina, San Diego, CA, USA) was used to prepare the DNA, which produces transposome-based libraries that were sequenced on an Illumina MiniSeq Sequencer (Illumina, San Diego, CA, USA). Reads were checked for quality using the software package FastQC 0.11.7 (Babraham Bioinformatics, Cambridge, UK). Both Illumina read files passed the standard quality checks of FastQC, with the exception of the module "Per Base Sequence Content", which returned a failure. Such a failure is common for transposome-based libraries and was therefore ignored, and reads were assembled using the SPAdes 3.0-based software Shovill 1.0.4 with default settings. The assembly was filtered, retaining contigs > 500 bp.
Conclusions
In conclusion, the present study documents transmission chains for the human pathogenic K. pneumoniae ST307 strain and three ESBL-producing E. coli strains in an ICU of a veterinary clinic over a 45-day observation period. The study strongly suggests the transfer of ESBL-producing E. coli strains from the ICU setting to the patients' households and pet owners, and vice versa, with extended periods of ESBL-E colonization in the animals and owners. Contamination of the household environment in the case of a persistently colonized dog was extensive and might outweigh contamination by colonized humans. The study highlights the risk posed by veterinary clinics in the spread of ARM and the need to further investigate transmission of ARM in companion animal households in order to develop evidence-based recommendations on hygiene measures in these settings.
Supplementary Materials: The following are available online at https://www.mdpi.com/2079-6382/10/2/171/s1, Table S1: Extended-spectrum beta-lactamase-producing Enterobacteriaceae (ESBL-E) isolated from dogs, cats, clinical and household environment and owners of colonized pets, Table S2: Questionnaire for the owners of colonized pets, Table S3: List of high-touch surfaces in the veterinary clinic, Table S4: List of high-touch surfaces in the households of colonized pets.
Funding: This research received no external funding. This research was supported by the University of Zurich.
Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Swiss Ethics Committees on research involving humans (2019-00768). Informed consent was obtained from all participants. Ethical approval for the collection of specimens from the dogs and cats was received from the local Veterinary Office (ZH028/19). All owners of the dogs and cats gave informed consent.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. | 2021-02-13T06:16:36.918Z | 2021-02-01T00:00:00.000 | {
"year": 2021,
"sha1": "abb5a15ccbbbc371f9d000b352058e0911da1e2e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-6382/10/2/171/pdf?version=1612849454",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "7987e48d11ac74497ba993dbd3fb349696596912",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
261493541 | pes2o/s2orc | v3-fos-license | The seasons within: a theoretical perspective on photoperiodic entrainment and encoding
Circadian clocks are internal timing devices that have evolved as an adaption to the omnipresent natural 24 h rhythmicity of daylight intensity. Properties of the circadian system are photoperiod dependent. The phase of entrainment varies systematically with season. Plastic photoperiod-dependent re-arrangements in the mammalian circadian core pacemaker yield an internal representation of season. Output pathways of the circadian clock regulate photoperiodic responses such as flowering time in plants or hibernation in mammals. Here, we review the concepts of seasonal entrainment and photoperiodic encoding. We introduce conceptual phase oscillator models as their high level of abstraction, but, yet, intuitive interpretation of underlying parameters allows for a straightforward analysis of principles that determine entrainment characteristics. Results from this class of models are related and discussed in the context of more complex conceptual amplitude–phase oscillators as well as contextual molecular models that take into account organism, tissue, and cell-type-specific details.
Introduction
Circadian clocks are complex systems that integrate different scales of spatio-temporal organization to plastically cope with varying environmental demands in a daily and seasonally changing world. Interlocked transcriptional-translational negative feedback loops are a common design principle underlying single cellular rhythm generation across different species, such as Neurospora crassa, plants, insects, and mammals. Such single cellular oscillators coordinate at the tissue and organ level to ensure a proper system level functioning of circadian physiology (Micklem and Locke 2021). A functioning circadian clockwork has been shown to provide an adaptive advantage across different kingdoms of life, ranging from unicellular cyanobacteria to multicellular plants and mammals (Ouyang et al. 1998; Dodd et al. 2005; Spoelstra et al. 2016). In turn, circadian disruption has been linked to adverse health effects, such as an increased risk for cancer, cardiovascular diseases, or mood disorders (Savvidis and Koutsilieris 2012; Crnko et al. 2019).
Stable entrainment of the circadian system to the 24 h rhythms of environmental zeitgeber signals, such as light-dark or temperature cycles, is essential for the proper alignment of physiological processes around the solar day and is thus under evolutionary selection. Intrinsic clock properties and external environmental factors that vary with season and latitude determine at which time of day physiological processes are executed (Hut et al. 2013; Bordyugov et al. 2015). In addition to such zeitgeber- and oscillator-dependent tuning of the phase of entrainment, it has been shown that the circadian system plastically changes in response to previously applied entraining cues such as changing periods or photoperiods (Pittendrigh and Daan 1976a).
In the following sections, we describe general principles of circadian entrainment as well as synchronization or self-entrainment in ensembles of coupled clocks. We extend these concepts toward entrainment under seasonal conditions and corresponding network re-organizations that have been proposed to underlie photoperiodic encoding within the mammalian core pacemaker. In each section, exemplary intuitive conceptual models are carefully introduced in detail before reviewing more complex approaches as well as detailed contextual molecular models.
Kuramoto model with a bimodal frequency distribution
A system of N coupled phase oscillators whose dynamical evolution is given by

dφ_i(t)/dt = ω_i + (1/N) Σ_{j=1}^{N} K_ij sin(φ_j − φ_i)

is known as a Kuramoto model; see also Eq. (14) of the Main text. In the section Photoperiodic encoding through network re-organizations, we assume a functional separation of the N oscillators into two groups or communities representing, e.g., the core and shell part of the mammalian core pacemaker, the suprachiasmatic nucleus (SCN). In general, the communities can be of different size, as described by the fractions p ∈ [0, 1] and (1 − p), respectively, with different mean frequencies ω1 and ω2 as well as different frequency spreads (or scale factors) γ1 and γ2, as given by a bimodal Cauchy-Lorentz distribution

g(ω) = p γ1 / {π [(ω − ω1)² + γ1²]} + (1 − p) γ2 / {π [(ω − ω2)² + γ2²]}.

Using the Ott-Antonsen approach (Ott and Antonsen 2008, 2009), the temporal evolution of the communities' order parameters R1(t) and R2(t) as well as the phase difference between the clusters Δψ(t) = ψ2(t) − ψ1(t) under the assumption of identical intra- and inter-community coupling strength K_ij = K reads as

dR1/dt = −γ1 R1 + (K/2) (1 − R1²) [p R1 + (1 − p) R2 cos(Δψ)],
dR2/dt = −γ2 R2 + (K/2) (1 − R2²) [(1 − p) R2 + p R1 cos(Δψ)],
dΔψ/dt = Δω − K sin(Δψ) [p R1 (1 + R2²)/(2 R2) + (1 − p) R2 (1 + R1²)/(2 R1)].

For the sake of simplicity, we assume in Fig. 5 that both communities are of equal size p = 1 − p = 0.5 and have an identical frequency spread γ1 = γ2 = γ. Under such conditions, one can focus on solutions satisfying the symmetry condition R(t) := R1(t) = R2(t), such that the above equations facilitate to

dR/dt = −γ R + (K/4) R (1 − R²) (1 + cos(Δψ)),
dΔψ/dt = Δω − (K/2) (1 + R²) sin(Δψ),

with Δω = ω2 − ω1, being in line with results of Martens et al. (2009).
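To make the reduced two-community description concrete, the following minimal Python sketch integrates the symmetric reduced equations stated above (p = 0.5, γ1 = γ2 = γ, R1 = R2 = R). The parameter values (K, γ, Δω), the initial condition, and the function name reduced_rhs are illustrative assumptions of this sketch and are not fitted to SCN data.

import numpy as np
from scipy.integrate import solve_ivp

K = 0.1        # identical intra- and inter-community coupling strength
gamma = 0.01   # frequency spread of each Lorentzian (gamma_1 = gamma_2)
d_omega = 0.02 # frequency difference omega_2 - omega_1 between the two communities

def reduced_rhs(t, y):
    """Symmetric reduced equations (p = 0.5, R_1 = R_2 = R) for (R, delta_psi)."""
    R, dpsi = y
    dR = -gamma * R + 0.25 * K * R * (1.0 - R**2) * (1.0 + np.cos(dpsi))
    ddpsi = d_omega - 0.5 * K * (1.0 + R**2) * np.sin(dpsi)
    return [dR, ddpsi]

sol = solve_ivp(reduced_rhs, (0.0, 2000.0), [0.3, 0.0])
R_end, dpsi_end = sol.y[:, -1]
print(f"phase coherence R ~ {R_end:.3f}, phase lag between communities ~ {dpsi_end % (2*np.pi):.3f} rad")

For the chosen values the two communities phase-lock with a small but non-zero phase lag, which is the kind of core-shell phase difference discussed later in the context of photoperiodic encoding.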
Conceptual models explain complex data
Konopka and Benzer were the first who discovered a single-gene mutation that affects circadian free-running rhythms in Drosophila melanogaster (Konopka and Benzer 1971), leading to a new era of molecular genetics in chronobiology that eventually revealed the molecular constituents of circadian clocks across various organisms, such as cyanobacteria, Neurospora crassa, Arabidopsis thaliana, Drosophila melanogaster, as well as mammals (Bell-Pedersen et al. 2005). Even long before the molecular cogs and levers of the regulatory feedback loops underlying circadian rhythm generation had been found, conceptual oscillator models were developed and used to understand circadian behavior and photoperiodic responses (Wever 1964; Pavlidis 1967; Winfree 1967). Such conceptual or generic oscillator models do not consider molecular details specific to certain organisms, tissues, or cell types but rather focus on general oscillator properties and their potential to explain observed experimental data (Roenneberg et al. 2008).
Phase oscillator models
Among the most abstract and simple conceptual models are phase oscillators. The only variable used to describe the circadian clock dynamics in this class of models is the phase of its oscillation φ(t), essentially evolving between 0 and 2π. By this, we tacitly assume that the clock self-sustains its oscillation with a robust period τ or angular velocity ω = 2π/τ. Yoshiki Kuramoto introduced an intuitive way to describe the interaction between a given oscillator φ(t) and a second oscillator θ(t) by means of a sinusoidal coupling term

dφ(t)/dt = ω + z sin(θ(t) − φ(t)),   (8)

such that oscillator φ(t) slows down in case its phase advances the second oscillator θ(t) and speeds up in case it is delayed compared to θ(t), as the term sin(θ(t) − φ(t)) in (8) becomes negative or positive, respectively (Kuramoto 1975, 2003).
Even such a simple model allows to explain a variety of experimental results and can help to better understand properties of the circadian clock. Assuming that a circadian clock with period τ and described by phase variable φ(t) is driven by an external zeitgeber θ(t) of period T, the dynamical evolution of the phase difference ψ(t) = θ(t) − φ(t) between the zeitgeber and clock phase is governed by the well-known Adler equation

dψ(t)/dt = Δω − z sin(ψ(t)),   (9)

where Δω = 2π/T − 2π/τ is the difference of the angular velocities of the zeitgeber and internal clock and z is the effective zeitgeber strength. Here, we tacitly assumed that there is no feedback from the clock to the zeitgeber, such that the entrainment cue can be described via dθ(t)/dt = 2π/T, with T being the zeitgeber period. From (9), it follows that the circadian clock is only able to entrain to the zeitgeber signal for small enough frequency detunings (Δω) or high enough zeitgeber strengths (z), given by the condition

|Δω| ≤ z.   (10)

The range of periods for which the internal clock entrains to the external zeitgeber is termed entrainment range. For a given zeitgeber period, e.g., T = 24 h, the entrainment range generally increases for increasing zeitgeber strength z, leading to a wedge-shaped entrainment region in the τ-z parameter plane, known as the Arnold tongue (Fig. 1a). For combinations of free-running periods and zeitgeber strengths z that lie within the Arnold tongue, the internal clock and external zeitgeber signal oscillate with a common period (frequency-locking) and adopt a stable phase relationship (phase-locking). This phase of entrainment is of fundamental importance for the proper alignment of physiological processes around the solar day and thus under evolutionary selection.
[Figure 1 caption fragment: color-coded values depict the phase of entrainment as given by Eq. (11). Panel d shows experimentally obtained entrainment phases for different species subject to entrainment cycles of different external zeitgeber period T, categorized into vertebrates (purple lines) as well as plants and unicellular species (brown lines); please refer to Aschoff and Pohl (1978) for the detailed description of the investigated animals and entrainment properties. Data have been extracted from Fig. 2 of Aschoff and Pohl (1978) via the WebPlotDigitizer software (Rohatgi 2022).]
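The locking condition and the Adler dynamics are easy to explore numerically. The short Python sketch below is illustrative only; the zeitgeber strength z and the test periods are arbitrary choices, not values from the cited experiments. It checks condition (10) for a few free-running periods and, inside the entrainment range, integrates Eq. (9) to obtain the asymptotic phase difference.

import numpy as np
from scipy.integrate import solve_ivp

def entrains(tau, T=24.0, z=0.015):
    """Locking condition (10): |d_omega| <= z."""
    d_omega = 2*np.pi/T - 2*np.pi/tau
    return abs(d_omega) <= z

def locked_phase(tau, T=24.0, z=0.015, t_end=2000.0):
    """Integrate the Adler equation (9) and return the late-time phase difference."""
    d_omega = 2*np.pi/T - 2*np.pi/tau
    sol = solve_ivp(lambda t, psi: d_omega - z*np.sin(psi), (0.0, t_end), [0.0])
    return sol.y[0, -1] % (2*np.pi)

for tau in (23.0, 24.5, 26.0):
    if entrains(tau):
        print(f"tau = {tau} h: entrains, phase difference ~ {locked_phase(tau):.2f} rad")
    else:
        print(f"tau = {tau} h: outside the Arnold tongue (no stable phase locking)")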
Our conceptual phase oscillator model (9) predicts that the dependency of the phase of entrainment on zeitgeber strength z and the frequency detuning Δω is given by

ψ = arcsin(Δω / z).   (11)

Thus, for any given zeitgeber intensity z, the phase of entrainment can vary only by 180° with respect to changes of τ or T. Rütger Wever, a pioneer in mathematical modeling of the circadian system, described such a 180° rule already in 1964 for a conceptual oscillator model adopted from electrical engineering, the Van der Pol oscillator, that has been originally developed to describe oscillations in electrical circuits employing vacuum tubes (Wever 1964). The 180-degree rule predicts a smaller phase variability with respect to variations of the intrinsic free-running period τ for increasing zeitgeber strength due to larger entrainment ranges, i.e., phases are less compressed (see color-coded area in Fig. 1a). This is in line with results from early entrainment experiments by Klaus Hoffmann, showing that entrainment ranges increase, while the phase variability decreases, with an increasing zeitgeber amplitude (strength) for the ruin lizard Lacerta sicula subject to temperature cycles of T = 24 h period (Hoffmann 1969), compare Fig. 1b and corresponding arrows in Fig. 1a. Positions of arrows in Fig. 1a, i.e., the zeitgeber strength z estimated from the experimental data, have been determined by linear regressions (dashed lines) in Fig. 1b.
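A quick numerical illustration of the 180° rule follows; it simply evaluates Eq. (11) over a grid of free-running periods for several zeitgeber strengths (all values are arbitrary demonstration choices). Entrained phases span at most 180° in total, and the same span is compressed into a narrower range of τ when z is small.

import numpy as np

def entrainment_phase_deg(tau, T=24.0, z=0.05):
    """Phase of entrainment from Eq. (11) in degrees, or None outside the entrainment range."""
    d_omega = 2*np.pi/T - 2*np.pi/tau
    if abs(d_omega) > z:
        return None
    return round(float(np.degrees(np.arcsin(d_omega / z))), 1)

taus = np.arange(22.0, 26.5, 0.5)
for z in (0.02, 0.05, 0.2):
    print(f"z = {z}:", [entrainment_phase_deg(t, z=z) for t in taus])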
The resistance of a self-sustained oscillator to entrain to a certain zeitgeber signal can be used to define strong and weak clocks. While strong clocks are characterized by relatively small entrainment ranges and, thus, large phase variabilities with respect to changes in T, weak clocks exhibit broad entrainment ranges and small phase variabilities, corresponding to small and large zeitgeber strengths z in Fig. 1c. Along these lines, entrainment experiments can be used to categorize circadian pacemakers into weak and strong clocks and to infer internal oscillator properties of a given organism, tissue, or cell type of interest. Aschoff and Pohl summarized the entrainment behavior of 19 different species subject to entrainment cues of different zeitgeber period T (Aschoff and Pohl 1978). By comparing the dependency of the entrainment phase on changes in T (i.e., the slope of curves in Fig. 1d) with results from our conceptual model (Fig. 1c), it turns out that vertebrate clocks rather behave like relatively strong clocks (Fig. 1c, purple arrow), while clocks of plants and unicellular species behave more like weak oscillators (Fig. 1c, brown arrow). Along these lines, the above-described 180° rule has been used to show that strong oscillators like the vertebrate clock with a high phase variability are able to translate a narrow distribution of internal free-running periods in a population, with standard deviations of as little as σ = 0.2 h for humans (Duffy et al. 2011), into the experimentally found large spread of human chronotypes, which can be related to a large spread in the distribution of entrainment phases (Roenneberg et al. 2004; Granada et al. 2013; Schmal et al. 2020). A similar reasoning has been used to argue that the weak circadian clocks as observed for organisms living at high latitudes, such as certain Drosophila strains (Beauchamp et al. 2018) or reindeer (van Oort et al. 2005), could be an adaptive advantage, as weak oscillators are able to entrain better under extreme photoperiodic conditions such as long summer days or long winter nights in comparison to strong clocks (Vaze and Helfrich-Förster 2016). Analogously to a weaker circadian clock, entrainment can also be facilitated by increasing an organism's light sensitivity, as suggested in a comparative study of a northern and a southern line of the parasitoid wasp Nasonia vitripennis (Floessner et al. 2023).
Broad applicability of phase oscillator models
The conceptual phase oscillator approach described in the previous section solely relies on the assumption that oscillators exhibit self-sustained oscillations and that interactions between clocks are weak, in a way that amplitude effects can be neglected and the overall system dynamics can be adequately described by its phase of oscillation. Due to the general validity of these assumptions among many systems, the phase oscillator approach has been applied to a plethora of physical, chemical, and biological systems, such as synchronizing fireflies, frog choruses, or the crowd synchronization of pedestrians on London's Millennium Bridge (Ermentrout and Rinzel 1984; Strogatz et al. 2005; Ota et al. 2020), to name a few.
Entrainment under varying photoperiods
So far, we discussed general principles of entrainment under the assumption of symmetric zeitgeber cues with equal durations of day and night. Due to the tilt of the Earth's rotation axis with respect to its orbit around the Sun, properties of zeitgeber signals such as the photoperiod of light-dark cycles depend on latitude and season. In Schmal et al. (2015), the concept of Arnold tongues (Fig. 1a, c) was extended to account for photoperiodic entrainment, i.e., to zeitgeber cycles of varying daylengths. Since pure phase descriptions as given by Eqs. (8, 9, 10, 11) are unable to directly account for amplitude-dependent effects on entrainment and phase resetting (Lakin-Thomas et al. 1991; Ananthasubramaniam et al. 2020), we use a conceptual amplitude-phase oscillator model, also known as the Poincaré oscillator (Glass and Mackey 1988), instead. In radial coordinates,

dr(t)/dt = γ r(t) (A − r(t)),   (12)
dφ(t)/dt = 2π/τ,   (13)

variables r and φ denote the time-dependent (instantaneous) amplitude and phase of the internal clock, respectively, while parameters A, τ and γ conveniently describe properties of the internal clock such as the steady-state amplitude, period, and radial relaxation rate, which can differ and be related to specific organisms, tissues, or cell types. The resulting entrainment regions in the photoperiod and zeitgeber period parameter plane have their largest entrainment range at the equinox and taper toward the internal clock's free-running period under constant darkness and constant light (Fig. 2a). The tilt of this Arnold onion is given by Aschoff's rule, i.e., the difference between the free-running period under constant darkness and under constant light, with the internal period under constant light being typically shorter or longer compared to the period under constant darkness in day-active animals and plants or night-active animals, respectively (Aschoff 1960; Pittendrigh 1960). A complementary theoretical treatise to explain the emergence and properties of Arnold onions using a pure phase oscillator description, as given by Hoveijn (2016), connects these results with the mathematical approach of the previous section. Again, such a straightforward conceptual model is able to explain a variety of experimental results on photoperiodic entrainment. For the model assumptions underlying Fig. 2a, the 180° rule holds true within the entrainment range at a given fixed photoperiod, analogous to the observation for pure phase oscillators in Fig. 1. From this, it follows that the phase variability with respect to changes in zeitgeber period T is lowest under equinoctial conditions and increases with increasing or decreasing photoperiods, as experimentally observed for golden hamsters entrained to light-dark cycles of different photoperiods; compare Fig. 2b. Another prediction from Fig. 2a is that a large tilt of the Arnold onion as given by Aschoff's rule can lead to a situation where the internal clock might be able to entrain to zeitgeber signals of short but not of long photoperiods, or vice versa. This phenomenon can explain why the drinking behavior of squirrel monkeys (Saimiri sciureus) synchronizes to 24 h light-dark schedules of extremely short photoperiods but not to those longer than 21 h (Schmal et al. 2015; Sulzman et al. 1982).
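As a rough illustration of photoperiodic entrainment of an amplitude-phase clock, the sketch below drives a Poincaré oscillator, rewritten in Cartesian coordinates, with a square-wave light-dark cycle whose light fraction sets the photoperiod. The additive forcing on the x-variable and all parameter values (A, τ, γ, zeitgeber strength) are modeling assumptions chosen for illustration; they are not the exact scheme or parameters of Schmal et al. (2015).

import numpy as np
from scipy.integrate import solve_ivp

A, tau, gam = 1.0, 23.5, 0.1   # steady-state amplitude, free-running period (h), relaxation rate

def light(t, T, photoperiod, z=0.1):
    """Square-wave light input: z during the light phase, 0 during darkness."""
    return z if (t % T) < photoperiod * T else 0.0

def rhs(t, state, T, photoperiod):
    x, y = state
    r = np.hypot(x, y)
    omega = 2.0 * np.pi / tau
    dx = gam * x * (A - r) - omega * y + light(t, T, photoperiod)  # additive forcing on x (assumption)
    dy = gam * y * (A - r) + omega * x
    return [dx, dy]

def sampled_phases(T=24.0, photoperiod=0.5, n_cycles=120):
    """Clock phase (deg) sampled once per zeitgeber cycle; a constant tail indicates entrainment."""
    sol = solve_ivp(rhs, (0.0, n_cycles * T), [A, 0.0], args=(T, photoperiod),
                    t_eval=np.arange(0.0, n_cycles * T, T), max_step=0.1)
    return np.degrees(np.arctan2(sol.y[1], sol.y[0]))[-5:]

print("short photoperiod:", np.round(sampled_phases(photoperiod=0.25), 1))
print("long photoperiod: ", np.round(sampled_phases(photoperiod=0.75), 1))

A constant tail of sampled phases indicates stable entrainment, and the difference between the short- and long-photoperiod phases illustrates how the entrained phase shifts across the Arnold onion.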
While Fig. 2a shows entrainment ranges and phases for zeitgeber signals with a square-wave-like waveform as used in the laboratory, entrainment to light-dark cycles similar to those observed under natural conditions has been studied in Schmal et al. (2020).
Intrinsic oscillator properties affect seasonal entrainment
Entrainment characteristics of the circadian system rely not only on properties of the zeitgeber signal and the internal period, as discussed in the previous paragraphs, but also on other intrinsic properties of the circadian clock such as the amplitude, radial relaxation rate, waveform, or twist (i.e., the dependence of the internal period on amplitude). Modeling approaches have been used to show that increasing amplitudes and radial relaxation rates make an oscillator more resistant toward entrainment in comparison to clocks with relatively small amplitudes and relaxation rates (Lakin-Thomas et al. 1991; Abraham et al. 2010). The finding that collective amplitudes and relaxation rates increase due to resonance effects in ensembles of interacting clocks (Abraham et al. 2010; Bordyugov et al. 2011; Schmal et al. 2018) has been used to explain why strongly coupled systems, such as the mammalian core pacemaker, the suprachiasmatic nucleus (SCN), have a narrow entrainment range, while putatively weakly coupled systems like lung or heart tissue rather behave like a weak clock and entrain to more extreme zeitgeber periods (Abraham et al. 2010). This interpretation is further strengthened by the fact that pharmacological decoupling of SCN neurons by MDL or TTX leads to a better entrainability of cultured SCN slices subject to temperature cycles (Abraham et al. 2010), as well as by the observation that a faster recovery from jet-lag is observed for mice lacking receptors for the coupling agent arginine vasopressin (AVP) (Yamaguchi et al. 2013). Along these lines, it has been proposed that genetic redundancy within the molecular regulatory network underlying mammalian circadian rhythm generation strengthens the clock and, thus, leads to narrow entrainment ranges (Erzberger et al. 2013).
Bifurcations affect seasonal entrainment
Bifurcations are defined by qualitative changes of a system's dynamics due to variations of an internal or external parameter. Many such qualitative changes in the system's dynamics upon parameter variations have been described for circadian clocks of different organisms. For example, in mammals, the dissociation of a single activity band into two bands, termed splitting or frequency doubling, has been observed as a response to changes in zeitgeber properties such as an increasing light intensity under constant conditions (Pittendrigh and Daan 1976b). A transition from self-sustained to damped oscillations has been reported for circadian KaiC rhythms in cyanobacteria after reducing the ambient temperature below 18.6 °C (Murayama et al. 2017).
Such changes in the qualitative behavior of circadian oscillator properties will have an impact on the entrainment characteristics, as recently reported in a mathematical study using the Goodwin model (Ananthasubramaniam et al. 2020). The Goodwin oscillator is a generic model of a delayed negative feedback loop where the final product X3 of a three-component activatory chain inhibits the production of the first component X1 (Goodwin 1965); see Fig. 3a, b. It fulfills all necessary requirements to produce self-sustained oscillations, such as a negative feedback, non-linearity, as well as delay, and has a long tradition of being applied in modeling circadian clocks (Ruoff et al. 2001; Gonze and Ruoff 2021). Assuming that light enters the model as an additive term to the X1 variable, one observes that increasing constant light finally drives the system to a damped regime through a Hopf bifurcation; see Fig. 3c. Damped oscillators can be entrained much more easily to rhythmic zeitgeber signals in comparison to a strong self-sustained clock (Bain et al. 2004; Gonze et al. 2005). Thus, forcing the Goodwin oscillator with a zeitgeber intensity that corresponds to a value that would drive the system to a damped regime under constant conditions (e.g., gray dotted line in Fig. 3c) leads to a broadening of the entrainment region under long photoperiods (Fig. 3d). A similar behavior that relies on the bifurcation structure of the underlying molecular feedback loop has been observed in Neurospora crassa subject to temperature entrainment, where increasingly damped oscillations for decreasing temperatures (Liu et al. 1997b) translate into an experimentally observed broader entrainment range under short thermoperiods (Burt et al. 2021).
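The light-dependent loss of self-sustained rhythmicity can be reproduced with a minimal Goodwin-type model in which light enters additively on X1, as described above. The Hill coefficient, degradation rates and light levels in the sketch below are generic choices made so that the unforced model oscillates; they are not the parameters of Ananthasubramaniam et al. (2020).

import numpy as np
from scipy.integrate import solve_ivp

def goodwin(t, x, light=0.0, n=12, k=0.15):
    """Goodwin-type loop: X3 represses X1, X1 activates X2, X2 activates X3; light acts additively on X1."""
    x1, x2, x3 = x
    dx1 = 1.0 / (1.0 + x3**n) - k * x1 + light
    dx2 = x1 - k * x2
    dx3 = x2 - k * x3
    return [dx1, dx2, dx3]

for light in (0.0, 0.5):
    sol = solve_ivp(goodwin, (0.0, 500.0), [0.1, 0.2, 0.3], args=(light,),
                    t_eval=np.linspace(400.0, 500.0, 2000))
    amplitude = sol.y[0].max() - sol.y[0].min()   # late-time peak-to-trough amplitude of X1
    print(f"constant light = {light}: X1 amplitude after transients = {amplitude:.4f}")

With zero constant light the X1 amplitude settles on a finite limit-cycle value, whereas the higher constant-light level drives the sketch into the damped regime, mirroring the Hopf-bifurcation behavior discussed in the text.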
Contextual models
While conceptual models as described so far focus on generic properties of the oscillatory circadian system, contextual models try to understand properties and design principles of the circadian clock with respect to organism-, tissue-, or cell-type-specific details. In a simplified schematic view of the circadian clock, known as an Eskinogram, the clock system can be divided into an input pathway that integrates external zeitgeber signals, the circadian core pacemaker, and output pathways or subordinate clocks. Contextual models have been proposed for all of these three regulatory layers and have been subsequently used for studying entrainment properties of the circadian clock.
For the human circadian system, Kronauer, Forger, and Jewett proposed a model for the biochemical processes that pre-process and convert light information for the circadian core pacemaker and coupled this model of the retinal light input pathway to a conceptual van der Pol oscillator model (Kronauer et al. 1999; Forger et al. 1999) that has a long tradition of being used in circadian clock modeling (Wever 1964, 1972; Kronauer et al. 1982). The model accurately predicts experimental light stimulus data (Forger et al. 1999) and has been subsequently used to study general principles underlying circadian (seasonal) entrainment (Creaser et al. 2021), entrainment under field conditions (Stone et al. 2020), interactions between circadian rhythms and homeostatic sleep drive (Phillips et al. 2010), as well as the re-entrainment time (i.e., jet-lag duration) that occurs when traveling north- or southward between regions of different season (Diekman and Bose 2018).
A large variety of detailed contextual molecular models of the intracellular regulatory feedback loops have been proposed for various model organisms including Neurospora crassa (Hong et al. 2008), the small flowering plant Arabidopsis thaliana (Locke et al. 2006; Fogelmark and Troein 2014; De Caluwe et al. 2016), the fruit fly Drosophila melanogaster (Leloup and Goldbeter 1998), and mammals (Forger and Peskin 2003; Relógio et al. 2011; Korencic et al. 2012). Such contextual models helped to reveal design principles underlying circadian rhythm generation at the intracellular level but have also been used to better understand which model architectures of the clock and its light input pathways allow for proper entrainment across different seasons (Thommen et al. 2015; De Caluwé et al. 2017). Along these lines, Troein et al. (2009) used an evolutionary optimization approach to find clock models that best adapt to various photoperiods and weather-induced stochasticity.
Finally, detailed molecular contextual models have been proposed to understand design principles underlying the processing of circadian signals as perceived by the output pathways of the circadian clock. This includes driven feedback loops subordinate to the core pacemaker (Schmal et al. 2013) as well as output pathways controlling starch metabolism in plants (Seaton et al. 2014), liver metabolism in mammals (Woller et al. 2016), or those triggering seasonal responses such as flowering time in plants (Salazar et al. 2009) or photoperiodic responses in mammals (Ebenhöh and Hazlerigg 2013).
Photoperiodic encoding
The circadian system does not only passively react to changes in photoperiod by tuning its entrained amplitude and phase as a response to seasonal variations in zeitgeber signals, as discussed in the section Seasonal entrainment; it also plastically changes its internal properties as a response to the previously experienced light schedule. In mammals, the suprachiasmatic nucleus (SCN) has been identified as the central circadian pacemaker. Located in the anterior hypothalamus, it consists of approximately 10^4 bilaterally distributed neurons and is a remarkable example of localized brain functionality. Ablation of the SCN leads to a suspension of circadian behavioral activity and its re-transplantation restores such rhythmicity within approximately one week (Ralph et al. 1990). Based on the expression of neuropeptides, the SCN is often dichotomized into a ventrolateral (core) and a dorso-medial (shell) part. Experimentally observed re-organizations of spatio-temporal pattern formation between the core and shell part of the SCN in response to changing environmental photoperiods have been proposed as an internal representation of season (Coomans et al. 2015). These findings are in favor of the original hypothesis of Erwin Bünning from 1936 that the circadian clock, in addition to being a daily clock, also serves as a seasonal timing device by measuring the day-length internally (Bünning 1936). Evidence from knockout mice lacking the neurotransmitter VIP, which lose their ability to encode seasonal information, suggests that inter-cellular coupling between SCN neurons is essential for proper photoperiodic encoding in mammals (Lucassen et al. 2012).
Prior to discussing principles underlying seasonal encoding, we discuss mathematical approaches that help to understand how coupling leads to the experimentally observed spontaneous synchronization or self-entrainment in networks of interacting clocks.
Precision through coupling
It has been shown that inter-cellular communication between SCN neurons, relying on neuropeptidergic, synaptic, and gap-junctional coupling, is quintessential for generating the remarkable precision observed at the tissue and behavioral level (Yamaguchi et al. 2003; Herzog et al. 2004). Single cellular oscillations of widely dispersed SCN neurons have been reported to show a relatively large standard deviation of σ = 1.28 h, in comparison to σ = 0.32 h and σ = 0.13 h at the tissue explant and organism level, respectively (Herzog et al. 2004). Analogous to the physical separation of SCN neurons, pharmacological suspension of synaptic coupling by tetrodotoxin (TTX) reversibly leads to larger phase distributions of clock gene oscillations in organotypic SCN slices, i.e., a less precise clock (Yamaguchi et al. 2003; Abel et al. 2016; Schmal et al. 2018). Besides coupled SCN neurons in the mammalian core pacemaker, communicating circadian clocks have been described in and between peripheral tissues as well. Density-dependent rhythmicity in cultured fibroblasts (Noguchi et al. 2013) and hepatocytes (Guenthner et al. 2014) is indicative of coupling (Micklem and Locke 2021). In addition, mathematical modeling suggests that the choroid plexus, a non-neuronal brain tissue that harbors a robust clock more precise than the SCN, gains its high precision through nearest-neighbor gap-junctional coupling (Myung et al. 2018).
A conceptual phase oscillator approach
A natural description of oscillator phase dynamics in a system of coupled clocks like the SCN, without explicit consideration of the intricate molecular details of intracellular rhythm generation, is given by Kuramoto models (Kuramoto 1975; Strogatz 2000; Kuramoto 2003), which are a generalization of the coupled phase oscillator model (8) described in the section "Phase oscillator models":

dφ_i(t)/dt = ω_i + (1/N) Σ_{j=1}^{N} K_i,j sin(φ_j − φ_i).   (14)

Here, φ_i is the phase of oscillator i, N the number of total oscillators in the network, ω_i = 2π/τ_i the angular velocity of oscillator i, K_i,j the coupling strength of the interaction from oscillator j onto oscillator i, and sin(φ_j − φ_i) the corresponding interaction or coupling function. A convenient way to study the collective dynamics in such an ensemble of coupled oscillators is to introduce the global order parameter R e^(iΨ) = (1/N) Σ_{j=1}^{N} e^(iφ_j), with i being the imaginary unit. Here, R and Ψ are the phase coherence and mean phase in the ensemble of clocks, respectively, and thus describe the macroscopic state of the network dynamics, see Fig. 4a.
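For readers who want to experiment with Eq. (14) and the order parameter, the following sketch performs a simple Euler integration of the mean-field Kuramoto model. The period spread is drawn from a Gaussian (rather than the Lorentzian assumed for the exact theory below), and the coupling strength, time step and ensemble size are arbitrary demonstration values.

import numpy as np

rng = np.random.default_rng(1)
N, K, dt, steps = 500, 0.1, 0.05, 20000
tau = rng.normal(24.0, 1.28, size=N)     # cell-autonomous periods (h), Gaussian spread for illustration
omega = 2.0 * np.pi / tau                # angular velocities omega_i = 2*pi/tau_i
phi = rng.uniform(0.0, 2.0 * np.pi, size=N)

def order_parameter(phi):
    """Return phase coherence R and mean phase Psi of the ensemble."""
    z = np.mean(np.exp(1j * phi))
    return np.abs(z), np.angle(z)

for _ in range(steps):
    R, Psi = order_parameter(phi)
    # mean-field form of Eq. (14) with K_ij = K:
    # (K/N) * sum_j sin(phi_j - phi_i) = K * R * sin(Psi - phi_i)
    phi = phi + dt * (omega + K * R * np.sin(Psi - phi))

print(f"final phase coherence R = {order_parameter(phi)[0]:.2f}")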
In the context of circadian clocks, this conceptual phase oscillator model as given by Eq. (14) has been used to analyze how the circadian free-running period observed at the behavioral level is determined by the ensemble average of the individual clock cell periods, measured for widely dispersed SCN neurons in wild-type as well as heterozygous and homozygous tau-mutant Syrian hamsters (Liu et al. 1997a).
Low-dimensional representation of phase oscillator network dynamics
Recent mathematical advances tremendously facilitate the analysis of the complex network dynamics given by Eq. (14) and thus allow one to infer properties of single-cell parameters and coupling topologies of networked circadian clocks. Similar to the theory of statistical mechanics in physics, where macroscopic variables such as temperature, entropy, or pressure of a given system are derived from the microscopic dynamics of a large ensemble of particles, Edward Ott and Thomas M. Antonsen proposed a method that allows the complex dynamics of a large ensemble of N coupled oscillators to be described by low-dimensional representations using macroscopic variables such as the phase coherence R or the mean phase Ψ (Ott and Antonsen 2008, 2009).
Applying the Ott-Antonsen reduction method to our prototypical example of mean-field coupled phase oscillators as given by Eq. (14), under the additional assumption that the internal frequencies ω_i follow a unimodal (Cauchy-Lorentz) distribution of periods, allows us to reduce the dynamics of the N-dimensional set of equations (14) to a simplified one-dimensional description of the temporal evolution of the ensemble's phase coherence, dR/dt = −γ R + (K/2) R (1 − R²) (15), where γ relates to the spread of the distribution of cell-autonomous periods τ_i; see also (Ott and Antonsen 2008). Figure 4b illustrates the good accordance between numerical simulations of the full model (14) and the reduced dynamics given by (15). From this low-dimensional representation, steady-state dynamics (fixed points) after the decay of transients (t → ∞) can be readily inferred by searching for the values of the global phase coherence R(t) = R* at which the right-hand side of Eq. (15) equals zero, i.e., no further changes in the dynamics occur. It straightforwardly follows that no synchronization between the oscillators occurs (i.e., the incoherent state R* = 0 is stable) in cases where the coupling strength K does not exceed two times the frequency spread (i.e., K < 2γ). Synchronized ensemble dynamics emerge for large enough coupling strength K > 2γ, i.e., the so-called coherent network state R* = √(1 − 2γ/K) becomes stable, with an increasing phase coherence R for increasing coupling strength K or decreasing spread γ of the internal period distribution; see Fig. 4c.
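A short numerical cross-check of these fixed points is given below (illustrative Python, not from the original work); it integrates the one-dimensional reduced dynamics and compares the long-time value with the analytic expression R* = √(1 − 2γ/K).

```python
import numpy as np

def reduced_R(K, gamma, R0=0.3, dt=0.01, t_max=2000.0):
    """Euler integration of the reduced equation dR/dt = -gamma*R + (K/2)*R*(1 - R**2)."""
    R = R0
    for _ in range(int(t_max / dt)):
        R += dt * (-gamma * R + 0.5 * K * R * (1.0 - R ** 2))
    return R

gamma = 0.01  # illustrative frequency spread
for K in (0.01, 0.028, 0.2):
    analytic = np.sqrt(1 - 2 * gamma / K) if K > 2 * gamma else 0.0
    print(f"K={K:5.3f}  numeric R*={reduced_R(K, gamma):.3f}  analytic R*={analytic:.3f}")
```

For K below the critical coupling 2γ the coherence decays towards zero, whereas above it the numerical steady state agrees with the analytic branch.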
Such predictable changes in the distributions of individual oscillator phases and periods, but also of amplitudes due to resonance effects, have been used to quantify the relative strength of inter-cellular coupling in the mammalian circadian core pacemaker (Schmal et al. 2018) as well as in peripheral clocks such as cultured U-2 OS cells (Finger et al. 2021). To give an example, we re-analyze the distributions of clock gene expression phases, measured by a PER2::LUC reporter, for cultured SCN slices under control conditions and during the suspension of synaptic coupling after TTX application as previously described; see Fig. 4d for representative snapshots. By comparing the experimentally obtained phase coherences with findings from our modeling approach (see arrows in Fig. 4c), one can show that the phase distribution under TTX treatment corresponds to a relatively weakly coupled network state, similar to findings in Schmal et al. (2017, 2018).
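For the coherent branch, the relation R* = √(1 − 2γ/K) can be inverted to K = 2γ/(1 − R*²), which is the kind of back-of-the-envelope estimate that translates an observed phase coherence into an effective coupling strength. A minimal sketch, assuming the frequency spread γ = 0.01 used in Fig. 4c:

```python
def coupling_from_coherence(R, gamma=0.01):
    """Invert R* = sqrt(1 - 2*gamma/K) to estimate the effective coupling strength K."""
    return 2 * gamma / (1.0 - R ** 2)

print(round(coupling_from_coherence(0.54), 3))  # ~0.028, the weakly coupled state
print(round(coupling_from_coherence(0.95), 3))  # ~0.21, the more strongly coupled state
```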
Extensions of this model, such as an (Ott-Antonsen reduced) uniformly coupled network of circadian clocks that is, in addition, driven by a 24 h sinusoidal light-dark cycle, have been used to study cross-time-zone travel and to reveal principles underlying the observed difference in resynchronization duration between eastward and westward travel (Lu et al. 2016).
Please note that a technical limitation of the Ott-Antonsen approach is that, in order to derive a reduced macroscopic model such as (15) from the N-dimensional full set of microscopic equations, one usually requires the assumption that the intrinsic frequencies ω_i follow a Cauchy-Lorentz distribution, or a superposition of multiple Cauchy-Lorentz distributions in the case of oscillator communities with different (mean) periods in each community (Martens et al. 2009; Skardal 2019). It has been found that a related method, the m² ansatz, yields quantitatively better results for populations of coupled clocks whose frequency distributions have exponential tails, such as Gaussians, even though the qualitative dynamics obtained from the m² or Ott-Antonsen approach might be comparable (Hannay et al. 2018).
Models of coupled circadian clocks reveal principles underlying efficient synchronization
During the last decades, mathematical models of networked circadian clocks have been proposed to study the effect of network topologies and intrinsic oscillator properties on synchronization in ensembles of coupled clocks. It has been shown that increasing the number of locally coupled van der Pol oscillators, as a model for the SCN, leads to the emergence of a stable overall rhythm and a higher resistance against noise (Achermann and Kunz 1999; Kunz and Achermann 2003). Networks of coupled conceptual Poincaré (see Eqs. (12), (13)) or Goodwin (see Fig. 3a) oscillators have been used to study the effect of network topology (Gu and Yang 2016), intrinsic oscillator properties (Gu et al. 2018, 2019), the fraction of light-perceiving neurons (Gu et al. 2014), and zeitgeber waveforms (Zheng et al. 2022) on the overall entrainment capability of the SCN to external zeitgeber cues.
Interestingly, studies using mean-field coupled Goodwin oscillators suggest that bifurcations, i.e., qualitative changes in the dynamics of individual SCN neurons, similar to those described in the paragraph "Bifurcations affect seasonal entrainment", can enable efficient synchronization between SCN neurons. For certain mean-field coupling strengths, the average neurotransmitter concentration can drive the individual SCN neurons into a dynamical regime with damped oscillations, which, in turn, allows them to entrain more easily to the mean-field rhythm of the coupling agents (Gonze et al. 2005; Locke et al. 2008).
While all models described in this section mainly focused on principles that determine the synchronizability of ensembles of interacting oscillators, we focus in the next section on how plastic network re-organizations can explain the experimentally observed photoperiodic encoding in the mammalian core pacemaker.
Photoperiodic encoding through network re-organizations
The SCN is an anatomically heterogeneous tissue. While the shell or dorso-medial part of the SCN mainly expresses arginine vasopressin (AVP), the core or ventro-lateral part expresses vasoactive intestinal peptide (VIP) and gastrin-releasing peptide (GRP); see Fig. 5a for a schematic drawing. The neurotransmitter γ-aminobutyric acid (GABA) has been shown to be expressed in both subregions and can act as a de-synchronizing or synchronizing agent, depending upon the system state of the SCN (Evans et al. 2013; Freeman Jr. et al. 2013; Myung et al. 2015). Depending upon developmental stage and environmental conditions, the SCN can show complex spatio-temporal patterns such as phase waves or phase clustering (Quintero et al. 2003; Evans et al. 2011; Fukuda et al. 2011; Myung et al. 2012). These phase organizations depend on the previously applied light schedule and photoperiod, and it is thus believed that the network-level organization constitutes an internal representation of seasons. While only small regional phase differences of PER2::LUC reporter construct rhythms are observed in cultured SCN slices after equinoctial 24 h light-dark schedules (LD12:12), long-day entrainment with 20 h of light and 4 h of darkness (LD20:4) leads to phase clusters and region-specific phase differences of up to 12 h (Evans et al. 2011, 2013); see Fig. 5b, c. Likewise, at the electrophysiological level, in vivo neuronal activity profiles in freely moving mice are compressed under short-day in comparison to long-day entrainment (VanderLeest et al. 2007).
What are the design principles underlying photoperiodic encoding? A macroscopic state such as the occurrence of phase waves or phase clusters in systems of networked oscillators like the SCN is generally determined by both intrinsic oscillator properties and the coupling topology of the network. Indeed, using a conceptual Kuramoto model approach, one can show that the occurrence of phase clusters could rely either (i) on a globally uniformly coupled population of clocks in which two disjoint communities of SCN neurons, such as those in the dorsal and ventral region, have distinct intrinsic free-running periods (Martens et al. 2009; Zhang et al. 2020), consistent with the finding that periods in the core and shell differ (Noguchi et al. 2004; Myung et al. 2012), or (ii) on re-arrangements of the coupling strength between those regions (Hong and Strogatz 2011; Sonnenschein et al. 2015; Skardal 2019). To give an example, we study a system of coupled Kuramoto oscillators as given by Eq. (14) that is functionally separated into two groups, related to the core (ventral) and shell (dorsal) neurons of the SCN, under the assumption that the intrinsic frequencies of core and shell neurons follow distributions with different mean values; see Fig. 5d and the section "Materials and Methods" for further model details. For the sake of simplicity, we assume that the fractions of core and shell neurons are equal, that the spread of frequencies (or periods) is identical in the core and shell part of the SCN, and that the inter- and intra-group coupling is of identical strength K. Under these assumptions, one can observe that, indeed, an increasing period difference between the core and shell neurons as well as a decreasing coupling strength K leads to an increasing gap between the oscillation phases of the core and shell neurons (Fig. 5e, f). More complicated models, including asymmetries such as different fractions of core and shell neurons, a varying spread of frequency distributions in core and shell neurons, or an asymmetric strength of inter- and intra-group coupling, may give rise to symmetry breaking and more complex dynamics, and open the possibility to further tune the phase separation between the core and shell part of the SCN. Along these lines, a modeling study suggests that a relatively stronger coupling from the core (ventral) to the shell (dorsal) part of the SCN, in comparison to the opposite coupling direction, is consistent with the experimentally observed re-synchronization dynamics after a reversible pharmacological decoupling of cell-to-cell communication (Taylor et al. 2017).
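The following Python sketch illustrates the two-community scenario in its simplest form (equal group sizes, identical spread, identical intra- and inter-group coupling). It is a simplified stand-in for, not a reproduction of, Eqs. (6), (7) of the "Materials and Methods" section, and it borrows the parameter values of the star in Fig. 5e (K = 0.065, τ_C = 26 h, τ_S = 22 h, γ = 0.01) purely for illustration.

```python
import numpy as np

def simulate_core_shell(K=0.065, tau_core=26.0, tau_shell=22.0, gamma=0.01,
                        N=400, dt=0.05, t_max=2000.0, seed=1):
    """Two-community Kuramoto sketch: core and shell neurons with different mean periods,
    identical spread gamma, and identical intra- and inter-group coupling strength K."""
    rng = np.random.default_rng(seed)
    half = N // 2
    omega = np.concatenate([
        2 * np.pi / tau_core + gamma * rng.standard_cauchy(half),   # core (ventral) neurons
        2 * np.pi / tau_shell + gamma * rng.standard_cauchy(half),  # shell (dorsal) neurons
    ])
    phi = rng.uniform(0, 2 * np.pi, N)
    for _ in range(int(t_max / dt)):
        z = np.mean(np.exp(1j * phi))                               # global mean field
        phi += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - phi))
    psi_core = np.angle(np.mean(np.exp(1j * phi[:half])))           # circular mean phase, core
    psi_shell = np.angle(np.mean(np.exp(1j * phi[half:])))          # circular mean phase, shell
    gap = np.angle(np.exp(1j * (psi_core - psi_shell)))             # wrapped phase difference
    return np.degrees(gap)

print(simulate_core_shell())  # phase gap (in degrees) between the core and shell communities
```

Increasing the period difference or decreasing K in this sketch widens the phase gap, in line with the qualitative behavior described above.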
Extensions of the above-described Kuramoto models have been used to study the SCN under different entrainment conditions. The SCN receives light information through the retinohypothalamic tract via melanopsin-containing intrinsically photosensitive retinal ganglion cells (ipRGCs) (Hattar et al. 2002). Classical rod and cone photoreceptors contribute under certain conditions (Walmsley et al. 2015). While previous work suggested that light-responsive neurons are mainly found in the ventral part of the SCN (Meijer and Schwartz 2003), recent studies revealed that ipRGCs innervate VIP, GRP, and AVP neurons in both the ventral and dorsal parts of the SCN, with the vast majority of innervations found in the ventral region (Fernandez et al. 2016).
A two-community model, with the ventral (core) part of the SCN being entrained by light signals, in combination with an Ott-Antonsen reduction, has been used to study how the functional separation into core and shell affects the tissue's upper and lower limits of entrainment and allows the experimentally observed dissociation phenomena to be explained (Goltsev et al. 2022), i.e., the emergence of a second rhythm in SCN activity and behavior with a period different from that of the entrainment cue. Hannay et al. (2020) revealed that the photoperiod-induced adjustment of the coupling strength between the ventral and dorsal part of the SCN, under the assumption that only the ventral part receives light information, yields a mutually consistent, unifying explanation of three light-mediated circadian effects, namely, (i) the increasing phase gap under long photoperiods, (ii) the experimentally observed photoperiod-dependent aftereffects (Pittendrigh and Daan 1976a), as well as (iii) the apparently counter-intuitive observation that the amplitude of phase response curves with respect to light perturbations decreases for mammals entrained to long photoperiods (VanderLeest et al. 2009; Ramkisoensing et al. 2014). Similar findings have been obtained by a purely numerical study in (Gu et al. 2016).
The functional separation of the SCN into two regionally disjoint sub-groups, i.e., a core and a shell part, gave rise to modeling approaches that assume the two communities essentially act like two separate oscillators mutually coupled to each other (Myung and Pauls 2018). This approach is similar to the original proposition of Pittendrigh, Daan, and Berde, assuming that (at least) two autonomously oscillating coupled oscillators, termed morning (M) and evening (E) oscillators, underlie rhythm generation in the mammalian pacemaker, which helped to explain the experimentally observed splitting of behavioral rhythms into two components (Pittendrigh and Daan 1976b; Daan and Berde 1978; Helfrich-Förster 2009). Along these lines, conceptual network models in which functionally separated groups within a larger network of coupled oscillators are associated with mutually coupled single oscillators have been used to study several phenomena in chronobiology, including photoperiodic encoding (Myung and Pauls 2018) and aftereffects (Azzi et al. 2017) in the SCN, exposure to skeleton photoperiods (Flôres and Oda 2020), splitting phenomena (Indic et al. 2008; Oda and Friesen 2002), the experimentally observed transient dynamical dissociation between different clock genes of the intracellular transcriptional-translational feedback loops after external perturbations (Schmal et al. 2019), as well as the synchronization of circadian rhythms of different brain areas such as the area postrema, the nucleus of the solitary tract, and the ependymal cells surrounding the fourth ventricle (Ahern et al. 2023).
Concluding remarks
In addition to experimental advances, mathematical modeling has contributed to understanding the principles of circadian entrainment, of intracellular rhythm generation through transcription-translation feedback loops (TTFL) and non-TTFL mechanisms, as well as of the synchronization of interacting circadian entities. Theoretical approaches, mathematical modeling, and numerical simulations can help to understand complex dynamics and counter-intuitive results that may otherwise be hard to grasp.
In this review, we explored various mathematical approaches for understanding seasonal entrainment and photoperiodic encoding, ranging from individual clocks entrained by an external zeitgeber to complex networks of coupled oscillators. Recent theoretical advances, such as the Ott-Antonsen or m² approach, that facilitate the analysis of network dynamics are introduced and discussed in the context of circadian systems. In the physics literature, these approaches have been used to study numerous realizations of coupled oscillator networks with different oscillator properties and network topologies, yielding a large potential for utilizing these theoretical advances in future chronobiological studies.
Throughout the review, we generally focused on two extreme cases of networked oscillatory systems. While the entrainment studies discussed in the section "Seasonal entrainment" concern the dynamical properties of a single oscillator driven by an external zeitgeber signal, the SCN network underlying the photoperiodic encoding discussed in the section "Photoperiodic encoding" typically consists of thousands of coupled clocks, allowing for a proper analysis using averaging methods such as the Ott-Antonsen approach. In contrast, the dynamics of mesoscopic systems, such as the circadian clock in flies, are much harder to study, and further theoretical advances are needed.
Fig. 1
Fig. 1 Weak zeitgebers or strong clocks lead to a large phase variability. a Arnold tongue based on the phase oscillator model (9) in the parameter plane spanned by the internal free-running period and the amplitude or strength z of the external zeitgeber signal. Color-coded values depict the phase of entrainment as given by Eq. (11). b Experimentally obtained entrainment phases in dependence of the intrinsic free-running period for ruin lizards subject to temperature cycles of different amplitude, i.e., zeitgeber strength. Data have been extracted from Fig. 5 of (Hoffmann 1969) via the WebPlotDigitizer software (Rohatgi 2022). c Arnold tongue in the parameter plane spanned by the period T and amplitude or strength z of the external
Fig. 2
Fig. 2 Arnold onions capture essential features of seasonal entrainment. a Entrainment regions adopt an onion-shaped geometry in the photoperiod-zeitgeber period parameter plane. The tilt of the Arnold onion can be explained by Aschoff's rule, i.e., the difference between the internal free-running period under constant darkness (photoperiod of 0%) and constant light (photoperiod of 100%), depicted by vertical dashed lines. Phases of entrainment are color-coded within the
Fig. 3
Fig. 3 Intrinsic oscillator properties govern seasonal entrainment characteristics. a The Goodwin oscillator is considered a blueprint for models of molecular negative feedback loops. We assume that the (square-wave) zeitgeber signal affects the negative feedback loop by an additive term to the X_1 variable. b Example oscillations under free-running conditions for a parameter set that leads to self-sustained oscillations. The same parameters as those underlying Fig. 8 of (Ananthasubramaniam et al. 2020) have been used. c For an increasing con-
Fig. 4
Fig. 4 Precision through coupling. a) Phase distributions in large ensembles of clocks can be conveniently visualized on the unit circle of radius 1. The global order parameter R e^(iΨ), depicted as a blue arrow, conveniently summarizes macroscopic properties of the ensemble. While the phase coherence R is given by the length of the arrow, the average phase Ψ defines the position of the arrow head, similar to the hands of a classical mechanical clock. b) The Ott-Antonsen reduction method faithfully reproduces numerical results in an ensemble of uniformly coupled Kuramoto oscillators with unimodally (Cauchy-Lorentz) distributed intrinsic frequencies ω_i = 2π/τ_i. c) For large enough coupling strength, i.e., K > 2γ, oscillators show spontaneous synchronization. Subsequently, the phase coherence increases, i.e., the phase spread decreases, for increasing coupling strength. Here, a value of γ = 0.01 has been used, which approximately corresponds to the experimentally observed standard deviation of σ = 1.28 h for widely dispersed SCN neurons, estimated in the period domain (Herzog et al. 2004). The dashed black line denotes the critical coupling strength K_c = 2γ. d) Representative oscillation phase distribution of PER2::LUC reporter gene expression for individually tracked SCN neurons of cultured SCN slices under control conditions (top) and during application of tetrodotoxin (bottom). Phases are shown in a histogram (left) and at their original positions within the SCN slice (right). Original data have been obtained from (Abel et al. 2016) and oscillation phases have been determined as previously described (Schmal et al. 2018). Arrows in panel (c) point to coupling strengths K ≈ 0.028 and K ≈ 0.200 that lead to a phase coherence of R = 0.54 and R = 0.95, as observed for the exemplary experimental data under TTX and control conditions shown in panel (d), respectively
Fig. 5
Fig. 5 Photoperiodic encoding. a Schematic drawing of region-specific neuropeptide expression within the SCN. b Emergence of spatial phase clustering in cultured SCN slices from mice entrained to extremely long photoperiods of LD20:4. c Histogram of the phase values depicted in panel (b) shows a bimodal distribution. Bold and dashed black lines denote a bimodal composite von Mises distribution and the underlying unimodal distributions, respectively, fitted to the histogram data (gray bars). Fitting reveals that the mean phases in the core and shell are separated by 90° after entrainment to light-dark cycles of extremely long photoperiods (LD20:4). Data have been obtained from Evans et al. (2013) and analyzed as previously reported (Schmal et al. 2017). d Schematic drawing of an SCN model, constituted of two groups of interacting oscillators, i.e., core and shell neurons, under the assumption that the intracellular oscillators of core and shell neurons follow intrinsic frequency distributions of different mean; see "Materials and Methods" for further model details. e Increasing period differences Δτ = τ_C − τ_S as well as decreasing coupling strength K between the core (ventral) and shell (dorsal) neurons can lead to an increasing gap between the oscillation phases of core and shell neurons. Color-coded bifurcation diagrams of the Ott-Antonsen reduced system given by Eqs. (6), (7) of the section "Materials and Methods" have been obtained with XPP-Auto. f Ott-Antonsen reduced dynamics faithfully reproduce the behavior of numerical simulations of the full set of equations; compare the dashed lines representing the steady-state phases of the core and shell neurons in the low-dimensional Ott-Antonsen reduced representation with the corresponding numerically obtained phase distributions depicted by bar plots. Simulations shown in (f) correspond to the parameter values depicted by the star in (e), i.e., a coupling strength of K = 0.065, a period difference of Δτ = 4 h corresponding to τ_C = 26 h and τ_S = 22 h, as well as a frequency spread of γ = 0.01 as used in Fig. 4b, c
"year": 2023,
"sha1": "ff4619869dc1ef389f399cdb15978963b7fbdbb3",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00359-023-01669-z.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "81282d4acb27ecd4134427d504a77f873f2aeeea",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
204833895 | pes2o/s2orc | v3-fos-license | Validation of a HILIC UHPLC-MS/MS Method for Amino Acid Profiling in Triticum Species Wheat Flours
Amino acids are essential nutritional components as they occur in foods either in free form or as protein constituents. An ultra-high-performance liquid chromatography (UHPLC) hydrophilic interaction liquid chromatography (HILIC)-tandem mass spectrometry (MS/MS) method has been developed and validated for the quantification of 17 amino acids (AA) in wheat flour samples after acid hydrolysis with 6 M HCl in the presence of 4% (v/v) thioglycolic acid as a reducing agent. The developed method proved to be a fast and reliable tool for acquiring information on the AA profile of cereal flours. The method has been applied and tested on 10 flour samples of spelt, emmer, and common wheat flours of organic or conventional cultivation and with different extraction rates (70%, 90%, and 100%). All the aforementioned allowed us to study and evaluate the variation of the AA profile among the studied flours in relation to other quality characteristics, such as protein content, wet gluten, and gluten index. Significant differences were observed in the AA profiles of the studied flours. Moreover, the AA profiles exhibited significant interactions with quality characteristics, which proved to be affected mainly by the type of grain. A statistical and multivariate analysis of the AA profiles and quality characteristics has been performed in order to identify potential interactions between protein content, amino acids, and quality characteristics.
Introduction
Amino acids (AA) are essential nutritional components present in foods either in their free form (FAA) or as protein constituents. They directly contribute to the flavor of foods, as they are precursors of aroma compounds and colors formed by thermal or enzymatic reactions during the production, processing, and storage of food. Hence, information on the profile and amount of free AA is highly needed in food science and nutrition studies [1][2][3][4][5].
Analysis of AA, either in free form or within the protein profile, has been highly challenging due to their structural and polarity differences. Ion-exchange liquid chromatography has found wide application in AA analysis, especially through commercial amino acid analyzers, which utilize cation-exchange chromatography followed by post-column derivatization with a chromophore or fluorophore derivatizing agent [6][7][8]. On the other hand, conventional reversed-phase (RP) High Pressure Liquid Chromatography (HPLC) proved to be time-consuming, since most AAs are highly polar and cannot be determined without pre- or post-column derivatization [2,9,10]. Recently, the advancement of hydrophilic interaction chromatography (HILIC) and new analytical columns provided alternative paths for the profiling of AA by liquid chromatography (LC). HILIC is generally known to enhance the sensitivity of electrospray ionization-mass spectrometry (ESI-MS) detection, and it is increasingly employed in the analysis of polar analytes in various matrices [3,5,[11][12][13][14][15].
So far, the application of HILIC-MS has found limited use in the analysis of AA in food samples. To our knowledge, three methods have been reported for the determination of FAA in liquid food matrices such as juice, beer, honey, or tea [3], in ginkgo seeds [5], and in fruits of Ziziphus jujuba [14]. Only recently was a HILIC-MS/MS method developed for the determination of either free AA or the amino acid profile in a high-protein food matrix (mussels) [15]. Moreover, analytical methods that determine the amino acid profile (TAA) were developed for application to pure protein samples, such as collagen [16] or bovine serum albumin (BSA) and angiotensin I [13]. In all these methods, single ion monitoring-MS (SIM-MS) detection [5] or tandem MS [4,13] were applied. By using tandem MS detection, the separation of isobaric AA was feasible in most cases, whereas the use of single MS detection led to increased total analysis time, since longer chromatographic runs were needed for the separation of all compounds. Prior to the analysis of amino acids, proteins need to be hydrolyzed in order to release their constituent AA. The most commonly applied method is hydrolysis by digestion with a strong inorganic acid [13,[15][16][17].
Cereals are considered one of the basic foods consumed by humans and animals. The carbohydrates they contain provide approximately 50% of the total daily calories, whereas their proteins provide about one-third of the total protein requirement [17]. The composition of amino acids varies among the proteins of the different cereal grains or flours. Wheat (Triticum species) is the third most-produced cereal worldwide. Wheat proteins are known to be low in some amino acids that are considered essential for the human diet, especially lysine (the most deficient amino acid) and threonine (the second limiting amino acid) [18][19][20][21]. On the other hand, they are rich in glutamine and proline, the functional amino acids in dough formation. Tetraploid emmer wheat (Triticum turgidum subsp. dicoccum, genomes AABB), also known as emmer, faro, or zea in different countries, is a hulled wheat that differs from the domesticated species in that the ripened seed head of the wild species shatters and spreads the seed onto the ground, while in the domesticated counterpart the seed head remains intact, making it easier for humans to harvest the grain. It is considered the ancestor of bread wheat and durum wheat, growing in the margin of the Mediterranean area [22]. Hexaploid spelt, Triticum aestivum variety spelta (genomes AABBDD), is also a hulled cereal grain with high resistance to environmental factors (diseases, stress), showing good yields under disadvantageous conditions [23]. It is suitable for organic farming and contributes to agro-diversity [24]. Spelt is becoming widely used in the growing natural food market. It has been reported that spelt protein content shows great variation depending on the genotype [21,25,26], but it is higher than in common wheat [25,27]. The amino acid composition of the proteins from spelt differs slightly from that of the modern bread and pasta wheats [21]. It has been suggested that spelt-based products could potentially be more digestible than those from common wheat. Certain ancient wheats (einkorn, spelt, emmer, and Khorasan) are currently of particular interest for use in selected bakery products [21]. The development of new cultivars has been attempted with the aim to improve the content of all essential amino acids [17,25].
In the present work, a single HILIC-MS/MS method was developed, validated, and applied for the determination of 17 amino acids in a comparative study of the TAA of 'ancient' wheat species such as emmer and spelt and of bread wheat (Triticum aestivum) with different extraction rates and cultivated under different practices (organic or conventional), in a fast and reliable way without the need for derivatization, preceded only by a simple hydrolysis procedure. Finally, analysis of variance and multivariate analysis were performed in order to explore potential interactions of the TAA with the flour quality characteristics of the samples under investigation.
Standard Solutions
Stock solutions of the compounds were prepared in 0.1 M HCl at a concentration of 10 mg mL⁻¹ and stored in amber vials at −20 °C. Working standards were prepared from the stock solutions by appropriate dilution with ACN/water 95:5 (v/v) and stored at −20 °C.
Sample Preparation for Amino Acid Profile Analysis
Ten (10) Triticum flours were selected from the Greek market to be tested for their AA concentration and quality characteristics. The commercial flour samples differed in their extraction rate, type of cultivation (organic or conventional), and type of wheat, as shown in Table 1. The first letter of the codification corresponds to the wheat type ('S' for spelt, 'B' for bread wheat, and 'E' for emmer), followed by a 2-digit number indicating the flour extraction rate and the character 'O' in the case of organically cultivated wheat. An amount of 10 g of flour was taken from 4 different positions of the package and mixed properly to create a homogeneous material that was then kept in a chemical dryer until analysis. The remaining original sample was kept at 2-4 °C. A modified method previously reported by Tsochatzis et al. [15] was applied for the determination of the amino acid mass fraction (g/100 g protein) in the aforementioned homogenized flour samples. In brief, for the AA profile, 10.0 ± 0.5 mg of dried flour was placed in a hydrolysis tube along with 100 µL of 6 M HCl containing 4% (v/v) TGA as the reducing agent. The tube was flushed with N2 gas to establish oxygen-free conditions, sealed, and heated at 110 °C for 18 h. Then, the mixture was transferred to a centrifuge tube with 0.5 mL of 0.1 M HCl and 0.5 mL of water and centrifuged at 4200× g for 5 min. The supernatant was collected
UHPLC-MS/MS Analysis
UHPLC-tandem mass spectrometry was based on the method described by Tsochatzis et al. [15]. In brief, the analysis was performed on an Accela TSQ Quantum™ Access MAX Triple Quadrupole Mass Spectrometer system (Thermo Scientific, San Jose, CA, USA) operating under the XCalibur software (Thermo Scientific, San Jose, CA, USA). The mobile phase consisted of solvent A: ACN/5 mM HCOONH4, pH = 3.0 adjusted with HCOOH, 95:5 (v/v), and solvent B: ACN/5 mM HCOONH4, pH = 3.0 adjusted with HCOOH, 40:60 (v/v). Elution was based on a 13 min linear gradient program from 80% A:20% B to 62% A:38% B, followed by a 2 min equilibration step back to the initial conditions prior to the next injection. The flow rate was 400 µL min⁻¹, and the total analysis time was 15 min.
Chromatographic separation was performed on a 2.1 mm × 150 mm ACQUITY UPLC 1.7 µm BEH HILIC amide column (Waters), equipped with an ACQUITY UPLC BEH Amide 1.7 µm VanGuard pre-column, maintained at 40 °C. Selected Reaction Monitoring (SRM) with electrospray positive ionization mode (ESI+) was applied with the spray voltage at 3000 V, capillary temperature at 300 °C, vaporizer temperature at 300 °C, sheath gas pressure at 40 arbitrary units (Arb), aux gas pressure at 10 Arb, ion sweep gas pressure at 2.0 Arb, ion source discharge current at 4.0 µA, and collision gas pressure at 1.5 mTorr. The autosampler temperature was set at 4 °C, and the injection volume was set at 5 µL. Individual amino acid data regarding molecular formulas, monoisotopic masses, and precursor-product ions for the aforementioned SRM, along with their respective retention times in standard solutions and after acidic hydrolysis, are given in the Supplementary Material (Table S1).
Method Validation
Method linearity, precision, trueness, the limit of detection (LOD), and the limit of quantification (LOQ) were evaluated. The linearity of the method was first assessed by analyzing standard solution mixtures at six concentration levels for all AA (0.5, 1, 10, 50, 100, 200 µg mL⁻¹), representing the working concentration range. Calibration curves were constructed by plotting the peak areas of the respective AAs followed by linear regression analysis (R²), based on the standard addition method. LODs and LOQs were calculated from the standard deviation of the response (SD) and the slope (S) of the calibration curve, using the equations LOD = 3 SD/S and LOQ = 10 SD/S [28].
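As an illustration of these calculations, the short Python sketch below fits a calibration line to hypothetical (invented) concentration-area data for one amino acid and derives LOD and LOQ from the residual standard deviation and the slope; the actual SD estimate used in the study may have been obtained differently.

```python
import numpy as np

# Hypothetical calibration data (concentration in ug/mL vs. peak area) for one amino acid.
conc = np.array([0.5, 1, 10, 50, 100, 200], dtype=float)
area = np.array([1.1e3, 2.2e3, 2.1e4, 1.04e5, 2.05e5, 4.1e5])

slope, intercept = np.polyfit(conc, area, 1)     # linear calibration: area = slope*conc + intercept
residuals = area - (slope * conc + intercept)
sd = residuals.std(ddof=2)                       # residual standard deviation (2 fitted parameters)

lod = 3 * sd / slope                             # LOD = 3 SD/S
loq = 10 * sd / slope                            # LOQ = 10 SD/S
r2 = np.corrcoef(conc, area)[0, 1] ** 2
print(f"R^2={r2:.4f}  LOD={lod:.3f}  LOQ={loq:.3f} (ug/mL)")
```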
Precision and trueness were assessed in the B-90 flour sample, which was used as a reference sample. The sample was fortified at two concentration levels (20.0 and 40.0 mg/100 g) of all AAs tested. All calculations were performed using the concentration values expressed in g/100 g for each AA individually. For short-term repeatability, the fortified samples were analyzed in triplicate within one day, while for intermediate precision, the aforementioned samples were analyzed in triplicate on three consecutive days. The relative standard deviation (RSD, %) was calculated for each set of replicates, and recoveries were calculated as ((amount found in the spiked sample − amount found in the sample)/amount added) × 100 [2]. All the results regarding precision and trueness are presented in Table S2 of the Supplementary Material.
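The recovery and RSD calculations can be expressed compactly as follows; the replicate values are hypothetical and the function arguments are illustrative names, not quantities from Table S2.

```python
import numpy as np

def recovery_percent(found_spiked, found_unspiked, added):
    """Recovery (%) = ((amount in spiked sample - amount in unspiked sample) / amount added) * 100."""
    return (found_spiked - found_unspiked) / added * 100.0

def rsd_percent(values):
    """Relative standard deviation (%) of replicate determinations."""
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / values.mean() * 100.0

# Hypothetical triplicate results (mg/100 g) for one amino acid spiked at 20.0 mg/100 g.
replicates = [38.5, 39.2, 37.8]
print(recovery_percent(np.mean(replicates), found_unspiked=19.0, added=20.0))  # recovery (%)
print(rsd_percent(replicates))                                                 # repeatability (RSD %)
```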
Quantitation and Matrix Effect
The calibration curves of the studied AAs were constructed with the standard addition method and assessed via the linear regression coefficients (R²). Flour samples were fortified at two concentration levels (5 and 10 µmol/100 g) of all AAs tested, followed by analysis in triplicate. Calibration curves were constructed by linear regression analysis of the peak area (Y) versus the injected concentration (X), and they were assessed based on the linear regression coefficients (R²). The linear equations were used to determine the initial concentration of amino acids in the dried cereal flour samples. Evaluation of the matrix effect (ME, %) was performed by the slope comparison method, as previously reported [2,15].
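A minimal sketch of the standard-addition quantification and of one common slope-comparison formulation of the matrix effect is given below; the spiking levels and responses are invented, and the exact ME formula used in [2,15] may differ from the one shown.

```python
import numpy as np

def standard_addition_conc(added, response):
    """Estimate the endogenous concentration from a standard-addition line
    as the magnitude of the x-axis intercept: c0 = intercept / slope."""
    slope, intercept = np.polyfit(added, response, 1)
    return intercept / slope, slope

def matrix_effect_percent(slope_matrix, slope_solvent):
    """One common slope-comparison formulation: ME(%) = (slope_matrix / slope_solvent - 1) * 100."""
    return (slope_matrix / slope_solvent - 1.0) * 100.0

added = np.array([0.0, 5.0, 10.0])        # umol/100 g spiked (hypothetical)
resp = np.array([4.1e4, 6.3e4, 8.4e4])    # hypothetical peak areas
c0, slope_m = standard_addition_conc(added, resp)
slope_solvent = 4.5e3                     # hypothetical neat-solvent calibration slope, same units
print(f"endogenous concentration ~ {c0:.2f} umol/100 g")
print(f"matrix effect ~ {matrix_effect_percent(slope_m, slope_solvent):.1f} %")
```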
Data Processing and Statistical Analysis
Data were processed using the XCalibur application manager for the quantification of compounds. Regression analysis and statistics were performed using Microsoft Excel, and further statistical analysis, such as Analysis of Variance (ANOVA) followed by Tukey's comparison test in all cases, was performed with Minitab 18.0 statistical software (Minitab Inc., State College, PA, USA). The multivariate statistical analysis, combined with cluster analysis, was performed with Simca 15 (Umetrics, Umeå, Sweden).
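Although the original analyses were carried out in Minitab and Simca, an equivalent workflow can be sketched in Python; the data matrix below is random placeholder data and the grouping is arbitrary, so the snippet only illustrates the type of ANOVA and PCA computations performed.

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical table: rows = flour samples, columns = AA mass fractions and quality indices.
X = np.random.default_rng(0).normal(size=(10, 20))

# One-way ANOVA for a single variable across three (hypothetical) flour-type groups.
f_stat, p_value = stats.f_oneway(X[:4, 0], X[4:7, 0], X[7:, 0])
print(f"ANOVA: F={f_stat:.2f}, p={p_value:.3f}")

# PCA on autoscaled data, analogous to the score/loading plots of Figure 2.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
print(scores.shape)  # (10, 2) -> coordinates for the score plot
```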
Acidic Hydrolysis and Antioxidant Agent
A properly performed hydrolysis is a prerequisite for a successful analysis of the amino acid profile in food matrices. The conditions were selected according to those previously reported by Fountoulakis et al., the conditions applied in the case of mussels by Tsochatzis et al. [15], and the ones reported by EZ Faast [30]. We selected the conditions of 110 °C for 18 h in order to minimize (in combination with the antioxidant agent) the degradation of specific amino acids, while accepting a compromise in the recoveries of the more hydrophobic AA, such as valine, isoleucine, and leucine [6,15,30]. In addition, this temperature was chosen to minimize potential cross-reactions of amino acids with starch. The selection of the hydrolysis conditions was also based on the study of Tsochatzis et al. [15].
Analytical Method Development, Validation, and Optimization
The total analysis time was less than 12 min. The method exhibited good linearity in the concentration range of 0.5-200 µg mL⁻¹, with a linear regression coefficient (R²) above 0.99 for each AA. Matrix effects were minimal in both cases, i.e., for standard solutions and for amino acid determination after hydrolysis. A typical chromatogram of the AA analysis in flours is presented in Figure 1.
The present analytical method exhibited satisfactory sensitivity for all AA. LODs varied from 0.002 (valine, serine, leucine, isoleucine) to 0.009 g/100 g (threonine) and the LOQs from 0.007 (serine, leucine) to 0.024 g/100 g (threonine, lysine), and a minimal matrix effect was observed. The analytical figures of merit of the method are given in Table 2.
Regarding trueness, it was assessed by the recoveries from spiked cereal bread wheat flour, after acidic hydrolysis, at two concentration levels. The resulting recoveries ranged from 85.7% (lysine) to 121.8% (leucine) in the intra-day assay and from 86.8% (lysine) to 123.3% (leucine) for the intermediate precision (Table S2). The respective precision, expressed as relative standard deviation (RSD, %), ranged from 0.6% (glutamic acid) to 13.9% (proline) for short-term repeatability and from 1.4% (serine) to 13.7% (lysine) for intermediate precision. The analytical method presented adequate precision and accuracy.
Amino Acid Profile of Flour Samples
Statistical differences were identified between the AA mass fractions (Table 3) among the studied flour samples. From Table 3, it can be concluded that phenylalanine, threonine, glycine, and histidine did not show statistical differences among the various flours. Instead, significant differences were observed for isoleucine, serine, lysine, valine, methionine, proline, glutamic acid, and glutamine. The results were in accordance with previously reported work regarding the AA content of ancient cereal grain wheat cultivars [21].
Flour Quality Parameters
The results for all the studied quality characteristics of the flours are presented in Table 4. The gluten index and wet gluten were evaluated, and ANOVA showed that the tested flours presented significant differences relative to the "bread" wheat flour; it also showed that the flours differ in their protein content (%) (data obtained from the Kjeldahl method), gluten index, and wet gluten. By comparing the set of ANOVA results, it can be seen that the "bread" flours presented a large variation in total protein content, ranging from 12.3% (B-70) to 13.9% (B-90-O), while differences were also observed in their gluten index and wet gluten. In the case of spelt, the S-70 types, either organic or conventional, presented similar results for all the studied quality characteristics, which was also true for the "Emmer" type.
Acidic Hydrolysis
Regarding sample preparation and acidic hydrolysis, we studied the effect of the antioxidant agent, as specific AA, such as tryptophan, are susceptible to decomposition during acidic hydrolysis, while others, such as asparagine and glutamine, are converted to aspartic acid and glutamic acid, respectively. In this case, an antioxidant agent is needed to prevent the aforementioned decomposition or conversion reactions. We chose TGA, as it has been reported to be an effective antioxidant agent for amino acid analysis in food matrices [6,15,30]. Two levels of TGA were tested, 2% (v/v) [30] and 4% (v/v) [15]. The results indicated that the presence of TGA at a level of 4% (v/v) was more effective, and it was selected for the final protocol.
Even though it is common practice in acid hydrolysis to add an antioxidant agent to prevent the degradation of AAs, to our knowledge such a protective step has not previously been applied prior to HILIC-MS analysis in the case of cereal flour analysis. Moreover, no single set of conditions has been suggested so far for the effective prevention of the degradation of all AA in wheat flour samples.
Sample Preparation and Analytical Methods
With the present method, all AAs eluted within 11 min of run time. The chromatographic conditions, especially the mobile phase, were selected with the aim of providing good peak shapes for basic, neutral, and acidic AAs, taking into consideration the effect of pH on HILIC peak shapes [15,31]. As reported previously by Tsochatzis et al., it was noticed that the retention times and the respective peak shapes of the analytes in the extracts are slightly shifted relative to the respective AA standards. This behavior could be due to the highly acidic conditions during hydrolysis, which affect the pH of the final injection sample and the separation of the analytes. Due to the contribution of ionic interactions to the retention mechanism in HILIC, the charge state of the analyte affects its retention time [15,31]. To eliminate this effect, after trying different approaches, a 10-fold dilution of the final solution with ACN/water 95:5 (v/v), to obtain a final pH of 2.5-3, was selected before injection. The dilution did not present a limitation for the present method, since the concentration of amino acids obtained after hydrolyzing the proteins is significantly higher than that of the free amino acids present in most foods [15,32]. The Multiple Reaction Monitoring (MRM) transitions and the parameters of the tandem MS detection were selected after tuning for the optimum signal for each analyte.
Trueness was assessed by the reported recoveries from spiked cereal flour after acidic hydrolysis. An important source of slight variations is the acidic hydrolysis itself, especially the hydrolysis temperature, which needs to be controlled and precisely assessed [32]. The reported results for both trueness and precision regarding the quantification of AAs are similar to those previously reported for other food matrices [3,5,14,15].
With the current method, only 17 amino acids could be determined following acid hydrolysis with TGA as an antioxidant agent. The amino acids tryptophan, cysteine, cystine, and asparagine were affected by the acid hydrolysis; tryptophan was unstable, asparagine and glutamine tended to convert to aspartic acid and glutamic acid, respectively, while cysteine and cystine were oxidized to cysteic acid [9,15,33].
Comparative Study of Amino Acid Profile of Flour Samples
Regarding the stability and yields of amino acids during acidic hydrolysis, serine and tyrosine are generated in low yields, methionine is sensitive to oxidation and can be oxidized to its sulfone product under the acidic conditions, and valine hydrolyzes in poor yields (longer times and higher temperatures are needed) [1,34,35].
In the case of proline, Kapusniak et al. reported reactions of starch with α-amino acids, in which proline, alanine, isoleucine, and valine reacted most readily with starch, which could affect the hydrolysis [36]. Ito et al. (2006) highlighted the significant effect of hydrolysis time on the yields of all amino acids, with changes in isoleucine, lysine, and serine, while binding of amino acids (glycine, alanine, and partially lysine) to starch chains was observed [37].
In addition, partial losses of the amino acids tyrosine and serine were observed, while other amino acids (valine, leucine, and isoleucine) require longer hydrolysis times in order to obtain higher yields. Our results are in accordance with Rowan et al. [38].
It has been reported that spelt has a higher concentration of methionine compared to wheat [25,39,40]. This was confirmed in the present study, where all tested spelt flours presented significantly higher methionine content than emmer as well as bread wheat flours. The methionine values varied from 0.73 to 0.97 g/100 g protein (dry basis); spelt flours showed 10% higher values than the reference bread wheat flour. The aspartic acid content was significantly higher in all studied spelt flours compared to whole grain wheat flour, in accordance with the results of [21,25,38].
In the case of alanine, the results indicated that all flours have a significantly higher content compared to B-90, except for E-100-O, which showed a similar value. The results suggest that hydrolysis has a great impact on the release and analysis of the amino acids, while the cultivation technique and location may also play a role in the final concentration of specific amino acids.
It has been reported in the literature that wheat proteins have low mass fractions of certain AA, especially lysine and threonine [18][19][20][21]. In contrast, in the current study it was observed that the refined bread wheat flours and the whole grain wheat flour (reference) presented significantly higher contents of lysine (2.38 and 1.89 g/100 g protein) compared to the other types of studied flours (emmer, spelt), which had significantly lower mass fractions [21]. Escarnot et al. reported that the spelt amino acid profile differs from that of bread wheat, supported by limited evidence of higher contents of isoleucine, leucine, and glycine [25,[40][41][42][43]. Our results are in accordance with the reported literature on higher protein content in spelt than in wheat grains under low nitrogen fertilization [25,40], although this has not been proved to be statistically significant, as was also observed in our study. Pruska-Kedzior et al. found significantly higher protein content in spelt flour, but it should be considered that the genotype and the cultivation conditions highly affect the protein content [27].
Statistical analysis (ANOVA) performed for each AA indicated that there are significant statistical differences between all the studied flours (Table 3). The statistical analysis revealed some very interesting results about the interaction between the type of flour and the concentration of amino acids. In general, there was a significant interaction between the type of flour studied and the amount of AA.
Glutamine and proline are the functional amino acids in dough formation [21]. For glutamine, the organic B-90 presented the highest mass fraction among the flours studied, followed by S-70-O and the conventional spelt S-100a. Especially in the case of glutamic acid (Glu) and proline, the whole grain wheat flour showed a higher concentration than the B-90 (Table 3). White wheat flour and white whole grain flours showed lower proline and glutamic acid concentrations than the rest of the flours studied, which is in accordance with the results of Abdel-Aal and Hucl and Escarnot et al. [21,25]. The aforementioned authors reported that spelt is rich in proline, which is the major functional amino acid in dough formation. The data obtained showed that the organic bread wheat flour (B-90-O) had a much higher proline content than the two bread wheat flours (B-70 and B-90).
Interaction with Quality Parameters
A multivariate analysis was performed to study potential interactions of the TAA with the quality characteristics of the cereal flours under investigation. By developing this analytical tool that reveals the amino acid profile of flours, one could determine the amino acid profile of the proteins of different Triticum species and use it to choose the variety with a more balanced profile for cereal product development, in correlation with other quality characteristics. The multivariate analysis of the AA content and the flour quality characteristics showed that the studied cereal flours could be distinguished into groups based on their origin. The score plot, along with the respective loadings, is presented in Figure 2.
The results showed that the studied types of flours presented a specific AA content pattern and specific quality characteristics. From the principal component analysis (PCA) biplot and the respective score plots, a clear differentiation of the three different types of flours could be identified. Three different groups were identified, as expected: the emmer grains (blue dots), the spelt (brown dots), and the bread wheat (purple dots). The right part represents the bread wheat flours, with high protein content (%), high gluten index, and lower falling number (FN). The emmer-type flours presented lower protein content and lower gluten index, while the spelt-type flours lie in between these two categories, presenting quality characteristics intermediate between those of the other two types of flours.
Figure 2.
Principal components analysis of the studied amino acid profiles and quality characteristics; (A) score plot, (B) loading plot, and (C) 3D scatterplot (for the numbering of the studied cereal flour types, please see Table 1).
All the interactions and effects can be observed in the loading plot (Figure 2B). Briefly, the main effects resulted from the type of flour, and the protein content tends to be a significant factor in the clustering of the assessed grains, which also results in different quality characteristics and specific AA profiles, for which we have already identified statistical differences among the three types of flours. In addition, the organically cultivated grains tended to reach a higher protein content, which is also reflected in a higher content of certain AAs; conversely, a lower protein content was reflected in a lower content of certain AA. In conclusion, it seems that the type of cereal grain affects the AA profile as well as the quality characteristics of the flours, and there is an indication of a potential effect of the cultivation practice (organic, conventional). Flours from organically cultivated grains seemed to be close in protein content (%) and, consequently, in amino acids, but there is a clear differentiation in gluten index and wet gluten. In this case, the "spelt" flours are closer to "emmer" for the aforementioned characteristics, and all of them are differentiated from the bread flour, either type-70 or type-90. Spelt flours, either type-70 or type-100, presented similar characteristics for both organically and conventionally cultivated grains, while the "Emmer" types 70 and 100 also presented similar characteristics. The biggest differentiation was noticed in the "bread" flours, where the cultivation and the extraction rate had a significant effect on all the studied factors and the amino acid content (Table 3 and Figure 2).
Conclusions
A UHPLC-HILIC-tandem MS method has been developed and validated for the quantification of 17 amino acids in cereal flour samples after acid hydrolysis with HCl in the presence of a reducing agent. Tryptophan, cysteine, cystine, and asparagine could not be quantified, as they were degraded during hydrolysis due to the harsh acidic conditions applied. The method proved to be a fast and reliable tool for acquiring information on the amino acid profiles of cereal flours. The developed analytical method has been applied to different flours, such as spelt, dicoccum, whole grain wheat, and white wheat. Moreover, the multivariate analysis showed that the protein content and the type of flour have the main contribution and effect, interacting with both the AA profile and the studied quality characteristics. The type of flour thus shows significant interactions not only with the quality characteristics of the flours but also with the AA profile, and a clear differentiation was identified among the different flours studied.
Supplementary Materials: The following are available online at http://www.mdpi.com/2304-8158/8/10/514/s1, Table S1: Amino acids monoisotopic masses. Multiple Reaction Monitoring (MRM). Retention times (tR) and conditions in the mass spectrometer along with their respective molecular formulas, Table S2: Precision and trueness data for the analysis of AA in spiked B-90 bread wheat sample. | 2019-10-18T11:58:52.468Z | 2019-10-01T00:00:00.000 | {
"year": 2019,
"sha1": "bcb2e244d89025883f6fdfb59bf8a8ccbe78df63",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2304-8158/8/10/514/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bcb2e244d89025883f6fdfb59bf8a8ccbe78df63",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
26206368 | pes2o/s2orc | v3-fos-license | Discovery of Renal Tuberculosis in a Partial Nephrectomy Specimen Done for Renal Tumor
The association of renal cancer and renal tuberculosis is uncommon. While the incidental discovery of renal cell carcinoma in a tuberculous kidney is a classical finding, the discovery of tuberculous lesions after nephrectomy for cancer is exceptional. We report the case of a 60-year-old female patient who underwent a partial nephrectomy for a 5 cm exophytic kidney tumor. Pathological examination concluded renal clear cell carcinoma associated with caseo-follicular tuberculosis.
Introduction
The association of renal cancer and renal tuberculosis is uncommon. 1,2 While the incidental discovery of renal cell carcinoma in a tuberculous kidney is a classical finding, the discovery of tuberculous lesions after nephrectomy for cancer is exceptional. 1,2 We describe a case in which histological examination revealed associated tuberculosis.
Case presentation
A 60-year-old female patient without urological or medical history presented with a 5 cm exophytic kidney tumor discovered incidentally on ultrasound.
On CT, the lesion was isodense and enhanced after injection of contrast material (Figs. 1 and 2); there was no abnormality in the rest of the parenchyma or the urinary tract.
We performed a partial nephrectomy via lumbotomy. Macroscopically, the lesion had a polychrome appearance with hemorrhagic and pseudocystic rearrangements. The postoperative course was uneventful.
The final histological examination confirmed the diagnosis of the cancer suspected on imaging. This was a conventional cell adenocarcinoma, Fuhrman grade 2. The tumor was interspersed with numerous caseo-follicular tuberculoid granulomas (Fig. 3). The excision margins were clear of the cancerous growth but contained some epithelioid and giant cell granulomas. The removed perirenal fat was free of histological abnormality.
The search for tubercle bacilli performed on urine and sputum after surgery was negative. A chest radiograph showed no signs of tuberculosis.
On principle, a quadruple antituberculosis therapy was introduced for 2 months, followed by combination therapy for 4 months.
A CT scan performed 6 months after surgery showed no abnormalities.
Discussion
Since the late 1980s, the incidence of tuberculosis has increased in the US and Western Europe, where the disease had been virtually eradicated by the BCG vaccine. Three main factors explain this increase: immunosuppression related to the AIDS virus, the flow of immigrants living in communities, and the emergence of bacterial strains resistant to the usual treatments. 3 Renal localization of tuberculosis is the result of hematogenous spread of mycobacteria from a pulmonary focus and occurs in 8 to 10% of cases of primary tuberculous infection. 4 The causative agent is usually Mycobacterium tuberculosis. Other mycobacteria, Mycobacterium bovis or Mycobacterium africanum, are only exceptionally involved. 2 Less than 50 cases of tuberculous kidney cancers were reported in the first half of the 20th century. 5 Despite the recent resurgence of tuberculosis, very few cases have been described in the past 20 years. 1,2 The originality of our observation lies in the double incidental discovery of a kidney tumor and renal tuberculosis. While the incidental discovery of a kidney tumor on ultrasound or CT is frequent (more than 50% of kidney cancer diagnoses), 6 the incidental discovery of renal tuberculosis is exceptional. In most reported cases, tuberculosis was known to be the cause of kidney destruction, 1 and histological examination of the removed kidney highlighted the presence of an unsuspected cancer. 1 In our case, it was the histological examination of the renal tumor that revealed tuberculous lesions in a patient with no known history of mycobacterial infection. To our knowledge, only a few cases of incidental discovery of isolated renal tuberculosis have been reported in the literature. 1,2 In the literature, conventional (clear) cell adenocarcinoma is the most frequent histological form found. 1,2 The combination of tuberculous lesions with other histological tumor types, such as transitional cell carcinoma, has rarely been described. 7,8 Except in cases where the kidney is no longer functional, the treatment must be adapted to the characteristics of the cancer. In the case of a small polar tumor, a partial nephrectomy preceded or followed by antituberculosis treatment is possible. 1,2
Conclusion
The association of renal tuberculosis and cancer is rare. The originality of this observation lies in the mode of discovery of renal tuberculosis, on a partial nephrectomy specimen removed for a kidney tumor. Imaging allows the diagnosis of a renal mass, and it is the pathological examination that confirms the association of giant cell granulomas with caseous necrosis and malignant cells. | 2016-05-04T20:20:58.661Z | 2015-03-04T00:00:00.000 | {
"year": 2015,
"sha1": "845b2975499c19cbceea195d986f4667eb661f7b",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.eucr.2015.01.009",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "845b2975499c19cbceea195d986f4667eb661f7b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233921722 | pes2o/s2orc | v3-fos-license | Fuzzy Controller Structures Investigation for Future Gas Turbine Aero-Engines
The Advisory Council for Aeronautics Research in Europe (ACARE) Flight Path 2050 sets ambitious and severe targets for the next generation of air travel systems (e.g., a 75% reduction in CO2 emissions per passenger kilometer, a 90% reduction in NOx emissions, and a 65% reduction in the noise emission of flying aircraft relative to the capabilities of typical new aircraft in 2000). In order to meet these requirements, aircraft engines should work very close to their operating limits. Therefore, advanced control strategies that satisfy all engine control modes simultaneously while protecting the engine from malfunctions and physical damage are becoming ever more crucial. In the last three decades, fuzzy controllers (FCs) have been proposed as a high-potential solution for performance improvement of the next generation of aircraft engines. Based on an analytic review, this paper divides the trend of FC design into two main lines, pure FCs (PFC) and min–max FCs (MMFC). These two main architectures are then designed, implemented on hardware, and applied in a case study to analyze the advantages and disadvantages of each structure. The analysis of hardware-in-the-loop (HIL) simulation results shows that the pure FC structure is a high-potential candidate for improving maneuverability and response time indices (e.g., military applications), while the min–max FC architecture has great potential for future civil aero-engines, where fuel consumption and steady-state responses are more important. The simulation results are also compared with those of industrial min–max controllers to confirm the feasibility and reliability of the fuzzy controllers for real-world application. The results of this paper propose a general roadmap for fuzzy controller structure selection for new and next generations of aircraft engines.
Introduction
A gas turbine engine (GTE) is a type of continuous combustion engine. The main elements common to all GTEs are an upstream rotating gas compressor, a combustion chamber, and a downstream turbine on the same shaft as the compressor. This combination is usually called the gas generator (GG). By adding other elements to the GG, the GTE can be used for different applications. Nowadays, GTEs have many applications, such as surface vehicles (race cars, tanks, locomotives, etc.), aircraft and rotorcraft engines, ships and marine applications, heavy-duty gas turbines for power plants, and integrated renewable and gas-fired energy generation systems. Given this variety of applications and different operating conditions, different control modes should be defined and satisfied in GTEs. Therefore, as for any other mechanical system, a proper control strategy plays a vital role in the safe operation of GTEs. This control system should increase the engine performance efficiency while meeting structural and aerodynamic limitations (control modes). The first step in designing a proper control system is to know the dynamic behavior of the system and the limitations that should be satisfied during engine operation. Among the different types of GTEs, gas turbine aero-engine controller design should be taken particularly seriously because of the complexity of the engine. Recent designs of gas turbine aero-engines are increasingly complex to meet the severe limitations and targets of the future flight paths set by governments and organizations (e.g., Advisory Council for Aeronautics Research in Europe (ACARE) Flight Path 2050). Historically, the development of jet engine controllers can be divided into the following stages:
1. Hydro-mechanical fuel control, which consists of a simple mechanical actuator controlled by the operator. In other words, in this embodiment, GTEs are manipulated by hydro-mechanical control systems.
2. Hydro-mechanical/electronic fuel control, which is the former fuel flow controller with an added electronic control unit. This electronic unit performed the functions of thrust setting, speed governing, and acceleration and deceleration in response to power lever inputs.
3. Digital electronic engine control (DEEC). In this embodiment, the functions carried out after input data from the airframe and engine were processed by the DEEC computer included setting the variable vanes, positioning compressor start bleeds, controlling the gas generator, adjusting the augmenter segment sequence valve, and controlling the exhaust nozzle position.
4. Full authority digital engine (or electronic) control (FADEC), which works by receiving multiple input variables of the current flight condition, including air density, throttle lever position, engine temperatures, engine pressures, and many other parameters. The inputs are received by the electronic engine controller (EEC) and analyzed up to 70 times per second. Engine operating parameters such as fuel flow, stator vane position, air bleed valve position, and others are computed from these data and applied as appropriate. FADEC also controls engine starting and restarting procedures. The FADEC's basic purpose is to provide optimum engine efficiency for a given flight condition [1].
However, the principle of using only fuel flow for closed-loop speed control and limiting its amount during transients, as in the first hydro-mechanical systems, is still the main control strategy in many systems. Other control signals are often open-loop scheduled or used only for limiting engine parameters. As mentioned before, several control strategies have been proposed to deal with the above-mentioned requirements, dating back to 1952. Each of these algorithms has its advantages and disadvantages. Some of them are not capable of satisfying all engine control modes simultaneously, and some of them are weak in certain modes and strong in others. A comprehensive review and analysis of the history of GTE control strategies can be found in [2,3].
The more complicated the engine design, the more demands and limitations are placed on the control algorithms. So, this is the time to think differently about the control rules and strategies for gas turbine aero-engines in order to satisfy new advanced limitations and control requirements for GTEs. For this purpose, different control methods like model predictive control (MPC), linear-quadratic regulator (LQR), linear-quadratic-Gaussian (LQG), and fuzzy logic have been used in the recent literature. Among these algorithms, the fuzzy logic control method has many advantages, such as similarity to human reasoning, being based on a linguistic model, using simple mathematics for nonlinear problems, the ability to deal with integrated and complex systems, high precision, and rapid operation; it also has some disadvantages, such as needing more fuzzy grades for higher accuracy, the lack of feedback for implementing a learning strategy, and a restricted number of input variables [4]. Fuzzy logic control is also a heuristic approach that easily embeds the knowledge and key elements of human thinking in the design of nonlinear controllers. Qualitative and heuristic considerations, which cannot be handled by conventional control theory, can be used for control purposes in a systematic form by applying fuzzy control concepts. Fuzzy logic control does not need an accurate mathematical model, can work with imprecise inputs, can handle nonlinearity, and can present disturbance insensitivity greater than most nonlinear controllers. Fuzzy logic controllers usually outperform other controllers in complex, nonlinear, or undefined systems for which good practical knowledge exists [5]. Many pieces of evidence show the application of fuzzy logic theory in GTE controller design [6][7][8][9][10][11]. Fuzzy and fuzzy-PI controllers for GTE rotor speed control during the startup phase as well as normal operating conditions were designed in [8,9], where the fuel flow is manipulated as a control variable for power plant gas turbine rotor speed control. A fuzzy-PID controller that introduced a better transient response than a PID controller for a power plant gas turbine was proposed and simulated with a linear model in [8]. A fuzzy modified model reference adaptive controller (FMRAC) for GTE rotor speed control was developed in [9] and showed a better time response than the modified model reference adaptive controller (MRAC). A master controller for a micro gas turbine generator, using fuzzy control for online PID (proportional-integral-derivative) parameter setting, was designed in [10] and showed the advantages of fast response and small overshoot. A gas turbine aero-engine fuzzy controller optimized by genetic algorithms was also presented in [11]. Moreover, during the last three decades, different investigations have been carried out on controller design for GTEs based on fuzzy logic methods. The main milestones in the design and simulation of FCs for GTEs are listed in Table 1.
Table 1. Milestones in fuzzy controller design for gas turbine engines (GTEs).
Paper Title | Main Achievement | Publication Year
Fuzzy Computing for Control of Aero Gas Turbine Engines | Certain stipulations, rules, and fuzzy logic are suggested for the control of a single spool aero gas turbine (pure fuzzy) | 1994 [12]
Fuzzy Scheduling Control of a Gas Turbine Aero-Engine: A Multi-objective Approach | Combination of fuzzy logic and evolutionary algorithms (EA) to refine the control performance and to increase the flexibility of GTEs (pure fuzzy) | 2002 [13]
Fuzzy Fuel Flow Selection Logic for a Real-Time Embedded Full Authority Digital Engine Control | In order to achieve proper performance, typical control loops chosen by min-max theory are replaced by fuzzy logic loops (min-max fuzzy) | 2003 [14]
Advanced Control of Turbofan Engines | Different control loops for turbofan engine control modes are designed and analyzed based on the industrial min-max strategy and improved by fuzzy rules with respect to implementation considerations (min-max fuzzy) | 2012 [15]
Heavy-duty gas turbine monitoring based on adaptive neuro-fuzzy inference system: speed and exhaust temperature control | Using an adaptive neuro-fuzzy inference system (ANFIS) to maintain turbine operation at optimum performance; the results obtained, based on the use of the Rowen model, show the effectiveness of the proposed system (pure fuzzy) | 2017 [16]
Design of an Interval Fuzzy Type-2 PID Controller for a Gas Turbine Power Plant | Rowen's model is selected to represent the mechanical behavior of the gas turbine; the main goal is to improve the system dynamic performance; all gains for the conventional PID and the interval fuzzy type-2 PID are tuned using the social spider optimization (SSO) technique, and simulation showed the performance improvement of the interval fuzzy type-2 PID controller in comparison with the conventional PID (min-max fuzzy) | 2018 [17]
Turbojet engine industrial min-max controller performance improvement using fuzzy norms | The minimum and maximum functions in the industrial min-max strategy are replaced with different fuzzy norms to improve the performance of the GTE FADEC (min-max fuzzy) | 2018 [18]
Fuzzy modeling and fast model predictive control of gas turbine system | For achieving high tracking performance and disturbance rejection ability within less settling time under various operating conditions, an improved fuzzy modeling approach and a corresponding fast model predictive control algorithm were introduced and applied to a gas turbine system (pure fuzzy) | 2020 [19]
As can be seen in the above-mentioned studies, there are two main lines for fuzzy controller structure design, and two main architectures have been proposed for satisfying the control requirements of GTEs:
• The first structure uses pure fuzzy control (PFC) strategy in which all control rules and loops are replaced by fuzzy rules.
• In the second structure, the controller keeps the industrial min-max structure in which the winner of different control loops will be selected by a pre-defined min-max strategy. However, control loops will be replaced by a fuzzy logic controller to result in a min-max fuzzy controller (MMFC).
The main contribution of this paper is to investigate the advantages and disadvantages of these two structures and to discuss the possibility of using them for the next generations of GTEs towards ACARE Flight Path 2050 requirements. Therefore, this paper will compare two different GTEs fuzzy controllers' structures with each other by developing a framework for real-time hardware implementation. For this purpose, a PFC and an MMFC for a single spool gas turbine aero-engine, as a case study, are firstly designed and described in detail. A hardware-in-the-loop (HIL) stand simulation will then be designed and developed by choosing proper hardware for implementation. Besides, an advanced communication method for software and hardware is designed and implemented to obtain reliable realtime results and control all criteria and requirements. Finally, the simulation results for both implemented controllers are presented and discussed to introduce an initial road-map for the application of fuzzy controllers in gas turbine aero engines.
PFC and MMFC Design
Without loss of generality, a single-spool turbojet engine (Figure 1) was selected as a case study in this paper. This engine has an axial three-stage compressor, an annular combustion chamber, and an axial one-stage turbine. The geometry of this engine is fixed, and therefore the fuel flow applied to the combustion chamber is the only parameter that can be used as the control variable. The GTE control system is required to meet the engine thrust regulation and safety constraints simultaneously. As mentioned earlier, the two different controller structures selected for design, simulation, and hardware implementation in this paper are the PFC and the MMFC. The controllers will then be implemented on the hardware, and the results of the real-time simulation will be compared to identify the suitable fuzzy control algorithm for different applications.
Pure Fuzzy Controller (PFC) Design
The main idea in this strategy is to use fuzzy rules to satisfy all engine control modes. In other words, this structure benefits from the speed, the constraint satisfaction ability, and the flexibility of the fuzzy approach. Early GTEs were manipulated mainly through rotor speed management. This method could deal with engine limitations and be implemented by a hydro-mechanical controller. With the emergence of digital controllers, the rotor speed derivative (acceleration/deceleration) was added to the controller structure as a control variable. Simultaneous speed and acceleration control has led to transient response improvement while reaching the engine functional characteristics. For example, cold engine acceleration can lead to consuming more fuel for warming the engine components and to a reduced acceleration rate, while warm engine acceleration is smooth and quick. Rotor acceleration control provides stable acceleration that is independent of the engine temperature condition. Besides that, from the safety point of view, this method protects the engine from aerodynamic instabilities such as surge, rotor over-speed, and turbine blade overheating. Because the derivative part of a controller is more sensitive to rotor speed variation and has a better time response than the integral part, and because hardware implementation of the integrated signal was too time-consuming for HIL operation, the integral part of the controller was neglected without loss of generality and a PD controller was chosen for the controller design. There is much evidence in the literature illustrating the application of rotor speed and its derivative as control variables for GTE controllers [21].
The schematic of this type of controller is shown in Figure 2. As can be seen in this figure, the controller has two input variables: the error (the difference between the throttle command, i.e., power lever angle or desired rotor speed, and the actual rotor speed) and the derivative of this error. Both the K1 and K2 coefficients are used to normalize the membership function inputs, because the acceptable range for these inputs is between −1 and 1; they map the inputs onto the range of the membership function variables. Based on these inputs and the pre-defined fuzzy rules, the controller calculates the transient fuel flow as the output. Moreover, as shown in Figure 2, the fuel flow injected into the combustion chamber is obtained by adding the steady-state fuel flow and the transient fuel flow. The steady-state part of the fuel flow is calculated by a gain-scheduling controller as a function of the engine rotor speed. This function could be derived from engine performance simulation or experimental results. It should be mentioned that, for the controller design, the Mamdani fuzzy inference engine and the center of area (COA) defuzzification method are used. In addition, minimum stands for "AND" and maximum stands for "OR". Moreover, to reduce the number of variables, the computational effort, and the search time, Gaussian fuzzy membership functions were chosen. Each function has two tuning parameters, and for the PFC design seven linguistic variables were defined, as shown in Table 2. Defining the fuzzy rules requires knowledge of the interaction between engine components and awareness of the physical damage and probable aerodynamic instabilities that can occur while increasing and decreasing the engine rotational speed. As an example, at the start of a sharp acceleration (PB), the transient fuel flow must be Z or PS because the inertia of the engine components can cause turbine blade overheating. Also, at the start of a sharp deceleration (NB), the transient fuel flow must be Z or NS because, for the same reason, flame burnout could occur. Table 3 shows the fuzzy rules for the designed PFC. In Table 3, DeltaN stands for PLA − RPM (the normalized pilot lever angle minus the normalized shaft rotational speed), DeltaNdot stands for the error derivative, and FMF stands for the transient fuel flow change. All inputs received by the controller are normalized before entering the PFC to calculate the proper transient fuel flow, which is why all membership function inputs are between −1 and 1. The rules stated in Table 3 come from expert knowledge and publicly available control laws for jet engine control systems [22].
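The short Python sketch below illustrates how one step of such a Mamdani-type PFC could be computed. The Gaussian centers, the spread, and the antisymmetric rule map are illustrative assumptions, not the values of Tables 2 and 3, and the function names (mu, pfc_transient_fuel) are hypothetical.

```python
import numpy as np

TERMS = ["NB", "NM", "NS", "Z", "PS", "PM", "PB"]        # seven linguistic variables
CENTERS = dict(zip(TERMS, np.linspace(-1.0, 1.0, 7)))     # assumed Gaussian centers
SIGMA = 0.17                                               # assumed common spread

def mu(x, term):
    """Gaussian membership degree of x in a linguistic term."""
    return np.exp(-0.5 * ((x - CENTERS[term]) / SIGMA) ** 2)

def rule_output(i, j):
    """Assumed antisymmetric rule map: combine error and error-rate tendencies."""
    return TERMS[int(np.clip(i + j - 3, 0, 6))]

def pfc_transient_fuel(delta_n, delta_n_dot, n_points=201):
    """Mamdani inference (min for AND, max for aggregation) + centre-of-area output."""
    z = np.linspace(-1.0, 1.0, n_points)                  # discretized output universe
    aggregated = np.zeros_like(z)
    for i, e_term in enumerate(TERMS):
        for j, de_term in enumerate(TERMS):
            w = min(mu(delta_n, e_term), mu(delta_n_dot, de_term))   # firing strength
            aggregated = np.maximum(aggregated, np.minimum(w, mu(z, rule_output(i, j))))
    return float((z * aggregated).sum() / aggregated.sum()) if aggregated.any() else 0.0

# Large positive speed error with a small negative error rate -> positive fuel increment.
print(pfc_transient_fuel(0.8, -0.1))
```

In the actual controller, this normalized FMF value would then be scaled and added to the steady-state fuel flow obtained from the gain-scheduling map.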
MIN-MAX Fuzzy Controller (MMFC) Design
Each GTE has three different control modes.
1. Steady-state control mode to meet pilot thrust level requirement.
2. Transient control mode to reach the required thrust in a proper time.
3. Physical limitations control mode to prevent the engine from damages and malfunctions (e.g., over-speed, over-temperature, surge, stall, etc.).
The idea of the min-max controller, presented by Kreiner, A. and Lietzau, K., is to design different control loops for each control mode and to select the best control loop at each time step, based on a pre-defined strategy, so as to satisfy all engine control modes simultaneously [23]. The schematic of a min-max control loop for a turbojet engine is shown in Figure 3. As can be seen in this figure, the controller consists of four control loops as follows:
1. PLA control loop: this loop has to supply the pilot desired thrust in each situation.
2. Maximum speed limitation loop (MSLL): this loop prevents the rotor speed from exceeding the permissible value and thereby guarantees the integrity of the GTE.
3. Maximum acceleration limitation loop (MALL): at the start of a sharp acceleration, the control system must limit the rate of fuel flow increase, because the inertia of the engine components could otherwise lead to turbine blade overheating or surge.
4. Maximum deceleration limitation loop (MDLL): at the start of a sharp deceleration, the control system must prevent the fuel flow from reducing abruptly, because the rotor inertia could lead to flame burnout. Therefore, the rate of fuel flow reduction must be limited.
After designing the above-mentioned control loops, a pre-defined "min-max selection strategy" selects the transient fuel flow by using a selection algorithm between these four control loops to satisfy all engine control modes simultaneously. A simple min-max selection strategy for a single-spool turbojet engine is Wftr = max(Wfdec, min(WfPLA, Wfacc, WfNmax)), where WfPLA, Wfdec, Wfacc, and WfNmax stand for the fuel flow calculated by the PLA, MDL, MAL, and MSL loops, respectively. In addition, Wftr is the final transient fuel amount. In an acceleration operation, the min-select strategy protects the engine from surge and over-speed, whereas, in a deceleration process, the max-select strategy protects the engine from flameout. If the calculated pilot command transient (PLA) fuel does not exceed these limitations, it will be the winner of the min-max selection strategy. Otherwise, the fuel flow that imposes one of the limitations will be selected as the transient fuel flow in order to protect the engine against failure or malfunction. The min-max control strategy performs very well in satisfying all engine control modes simultaneously. However, given the nonlinear nature of the engine, applying a linear controller in the PLA loop reduces the command-tracking capability and the flexibility of the system. This led researchers to design another form of min-max controller, in which fuzzy controllers determine the transient fuel in the PLA, MALL, and MDLL loops. The variety of parameters in the fuzzy min-max controller makes it possible to meet multiple control objectives. Figure 4 shows the MMFC structure used in this study, where the K1, K2, and K11 coefficients normalize the membership function inputs (the acceptable range for these inputs being between −1 and 1) by mapping them onto the range of the membership function variables. Likewise, the loop outputs must be mapped before entering the min-max selection boxes, so K22 and K3 are used for this purpose. The MMFC design procedure is as follows. PLA control loop design: as mentioned before, the PLA control loop determines the transient fuel for pilot command tracking within a proper response time. As shown in Figure 4, this fuzzy controller is single-input, single-output (SISO). The input is the error between the current engine rotational speed and the desired rotational speed (the PLA translated to engine rotational speed using the thrust/rotational-speed map of the engine), and the output is the transient fuel flow. For the input and output, seven linguistic variables are defined with associated rules and membership functions, as shown in Table 4 and Figures 5 and 6 (DeltaN is PLA − RPM). The designed fuzzy rules for the acceleration/deceleration loops are similar to those of the PLA loop, as shown in Table 5. As in the PLA loop, the symmetry and uniformity of the membership functions and rules result in a linear relation between inputs and output in the MADLL (maximum acceleration and deceleration limitation) loops. Again, FMF is the transient fuel flow. It should also be mentioned that, as the MSLL loop simply limits the maximum rotational speed of the GTE (a value that is constant for each engine), there is no need to implement it with fuzzy logic rules.
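A minimal sketch of this selection logic, assuming the max/min ordering described above (the numeric fuel values are purely illustrative):

```python
def min_max_select(wf_pla, wf_acc, wf_nmax, wf_dec):
    # Min-select protects against surge/over-speed during acceleration;
    # max-select protects against flameout during deceleration.
    return max(wf_dec, min(wf_pla, wf_acc, wf_nmax))

# Pilot demand exceeds the acceleration limit, so the acceleration loop wins.
print(min_max_select(wf_pla=0.32, wf_acc=0.28, wf_nmax=0.35, wf_dec=0.10))  # -> 0.28
```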
Hardware Implementation
After designing the PFC and MMFC, the implementation procedure is described in this section in order to develop a HIL simulation platform and analyze the capabilities of the designed controllers in a real-time simulation setting. From an implementation point of view, selecting the proper hardware for a real-world application is the most important part of the controller manufacturing procedure. The chosen hardware must have acceptable input reading speed, processing time, and output resolution to be able to control the engine rapidly and correctly. For the GTE control problem, proper hardware must read the engine rotational speed and PLA signals with minimum error and fault, and produce the accurate fuel flow signal that should be injected into the engine to satisfy all engine control modes simultaneously.
Among different types of hardware, the AVR family has a lot of advantages including easy to program in C for most basic functions, adequate documentation, inexpensive, hobbyist friendly (parts in through-hole packages), nice peripherals (built-in oscillator, flash memory, onboard RAM, serial ports, ADC, EEPROM, etc.), low power consumption, and good cross-platform support. Therefore, the AVR Microcontroller (ATMEGA 32A) was chosen in this study. There is much evidence in the literature that confirms the use of ATMEGA family for controller hardware implementation because of the above-mentioned advantages [24][25][26][27][28][29][30]. The procedure of implementation is described in detail in this section:
Experimental Apparatus
In order to design proper hardware for the GTE fuel controller with a HIL test, reliable communication between the hardware and MATLAB Simulink is mandatory. For this purpose, the PC serial port was used, thus making it possible to send and receive data between the hardware and the MATLAB Simulink environment. The MAX232 is an integrated circuit that converts signals from a TIA-232 (RS-232) serial port to signals suitable for use in TTL-compatible digital logic circuits. The MAX232 is a dual transmitter/dual receiver that is typically used to convert the RX, TX, CTS, and RTS signals. Any communication between the PC and the hardware needs this chip. However, some modern computers do not have a serial port. Therefore, a converter from USB to serial port is also necessary. The converter used in this study is the FT232 module. The FT232R is one of the latest devices to be added to FTDI's range of USB UART interface integrated circuit devices. The FT232R is a USB to serial UART interface with an optional clock generator output.
The main processor unit is from the AVR family (ATMEGA32A). Each data value is received by the microcontroller and, after calculation and output generation, the result is returned to the software by this AVR. Figures 10 and 11 show the designed hardware schematic and its PCB (printed circuit board), respectively. Components are generally soldered onto the PCB to both electrically connect and mechanically fasten them to it. This board is designed to simply achieve all requirements. It has one main processor (ATMEGA32), an FT232 module for transferring data from the PC to the processor and vice versa, two capacitors, a resistor and regulator for power supply tuning, and a manual reset push button. After each process, the processor is automatically programmed for an internal reset and is ready for another process. For communication between the above-mentioned hardware and the PC (Simulink environment), the required code was written in the CodeVision environment. As mentioned earlier, after correct communication between hardware and software was achieved, each controller was programmed on the hardware and tested for proper programming, and the hardware implementation was validated with MATLAB Simulink. The following picture shows the GTE controller hardware. Figure 12 shows the manufactured PCB, ready to control the GTE in the real world. This PCB is the main GTE controller in our experiment and replaces the controller box in the Simulink environment in the hardware-in-the-loop (HIL) test.
Initial Preparation
As shown in the previous sections, the fuzzy controllers take PLA and RPM as inputs and, after calculation and execution of the fuzzy rules, return the fuel flow signal as the output. This number is applied to the servo that controls the engine fuel. The designed hardware can take PLA and RPM in the form of integer numbers (between 0 and 255), and the output is a digital number between 0 and 255. An internal microcontroller clock was activated to count the transmitted sensor pulses over a specified time for frequency calculation, and to read the serial port for receiving PLA and engine rotational speed. The time-step is important because the error derivative is calculated from it. The code written for time calculation and for receiving and transferring data is presented in detail in Appendix A [Appendix A.1, Appendix A.2].
PFC Controller Hardware Implementation
In order to implement the PFC, all seven Gaussian membership functions were programmed in the CodeVision environment. As an example, one of them is presented in Appendix A [Appendix A.3]. It should be mentioned that the weighted average method is used for defuzzification. Computing the degree to which the output belongs to the membership functions was done with the centre-of-batch method as well [Appendix A.4]. After obtaining the transient output, this value is added to the steady-state value read from a table programmed on the hardware, and the sum is converted to a digital number that is sent by the microcontroller.
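A sketch of what this per-sample pipeline might look like, written here in Python for readability rather than the CodeVision C actually used. It reuses the pfc_transient_fuel function from the earlier sketch; the lookup table, output gain, and scalings are illustrative assumptions, not the values programmed on the hardware.

```python
import numpy as np

# Assumed steady-state map: (rpm byte, fuel byte) breakpoints for linear interpolation.
STEADY_STATE_TABLE = [(0, 40), (64, 70), (128, 120), (192, 180), (255, 230)]

def steady_state_fuel(rpm_byte):
    """Piecewise-linear lookup of the steady-state fuel byte from the rotor-speed byte."""
    for (x0, y0), (x1, y1) in zip(STEADY_STATE_TABLE, STEADY_STATE_TABLE[1:]):
        if x0 <= rpm_byte <= x1:
            return y0 + (y1 - y0) * (rpm_byte - x0) / (x1 - x0)
    return STEADY_STATE_TABLE[-1][1]

def controller_step(pla_byte, rpm_byte, prev_error, dt):
    """One control sample: scale 0-255 inputs to [-1, 1], run the fuzzy rules, requantize."""
    error = (pla_byte - rpm_byte) / 255.0
    error_dot = (error - prev_error) / dt
    fmf = pfc_transient_fuel(np.clip(error, -1, 1), np.clip(error_dot, -1, 1))
    fuel = steady_state_fuel(rpm_byte) + 60.0 * fmf        # 60.0 is an assumed output gain
    return int(np.clip(round(fuel), 0, 255)), error         # fuel byte sent over serial, new error
```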
MMFC Hardware Implementation
Implementation of the fuzzy min-max controller has three separate steps. The first two steps concern the fuzzy controllers, and the final step is the programming of the min-max loops. The first fuzzy controller has one input and fifteen Gaussian membership functions (PLA control loop fuzzification). The second fuzzy controller has two inputs (the error and its derivative) and four membership functions (acceleration/deceleration control loop fuzzification) [Appendix A.5, Appendix A.6].
Hardware in the Loop Simulation
The hardware-in-the-loop simulation method has wide application in dynamic system simulations. In this method, some parts of the model are run in the software environment and other parts are programmed and implemented physically on hardware. The designed hardware must act synchronously with the software to guarantee the consistency of the results. For a control system test, there are two different options:
• The controller could be modeled in the software to be running on target computer hardware while it is connected to your physical plant or system. (The target computer hardware acts as the controller.)
• The other option is to implement the controller on the hardware, which can include production or embedded controls implementation, using a simulation of your plant or system. (Here, the target computer acts as a physical plant or system.)
In recent years, the hardware-in-the-loop method has been used in many pieces of research to test the accuracy of physical elements. Some of these studies are based on controller implementation [31][32][33][34], and in some of them the engine and sensors are real [35][36][37]. In this study, the controller is implemented as hardware in the real world, and all other parts, such as the engine and sensors, are programmed in software.
Moreover, from a simulation speed point of view, the simulation could be run in the following modes: • Simulation without time limitations; • Real-time simulation; • Simulation faster than real-time.
Real-time simulation refers to a computer model of a physical system that can execute at the same rate as actual "wall clock" time. In other words, the computer model runs at the same rate as the actual physical system. Real-time simulation and testing extend beyond simulation by verifying algorithmic design behavior while running models at required speeds, respecting precise timing requirements. The executing model is connected to sensors, actuators, and other hardware.
Results Analysis
In order to validate the accuracy of the hardware implementation, the results of the HIL simulations were first compared with a model-in-the-loop (MIL) run. A MIL simulation is a technique used to abstract the behavior of a system or sub-system in such a way that the model can be used to test, simulate, and verify that design. The implemented controllers were simulated in both the HIL platform and MIL with the engine model to investigate the effectiveness and performance of each controller. The engine model is a block-structured model created and validated against experimental results. All details about the engine modeling and validation procedure can be found in [18].
In order to simulate the dynamic behavior of the engine, the block-structured modeling approach has been used in this paper. These models consist of a linear dynamic part to simulate all engine lags and a nonlinear static part to simulate the relationship between the different engine parameters. The model parameters are usually tuned by the experimental results. There are three kinds of block-structure models, including Hammerstein, Wiener, and Wiener-Hammerstein models [38].
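As a generic illustration of the block-structured idea (not the NGDF model or the validated engine model of [18]), the sketch below chains a first-order linear lag with a static nonlinear map; the time constant and the nonlinearity are assumed placeholders.

```python
import numpy as np

def simulate_wiener(fuel, dt=0.02, tau=0.8):
    """Wiener-type block model: linear first-order lag followed by a static nonlinearity."""
    a = dt / (tau + dt)
    x = np.zeros(len(fuel))
    for k in range(1, len(fuel)):
        x[k] = x[k - 1] + a * (fuel[k] - x[k - 1])        # linear dynamic block (engine lag)
    # Static nonlinear block: illustrative map from lagged fuel flow to rotor speed (rpm).
    return 40_000.0 * np.sqrt(np.clip(x, 0.0, None)) + 20_000.0

fuel_step = np.concatenate([np.full(50, 0.2), np.full(200, 0.8)])  # step change in fuel flow
rpm_response = simulate_wiener(fuel_step)
```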
The novel generalized describing function (NGDF) is a recently proposed block-structured approach introduced by Lichtsinder et al. [38]. The NGDF is based on the error minimization concept, and the difference between the NGDF model and the models proposed by other researchers is that in the NGDF the transfer functions between the different inputs and outputs have an incremental form to enhance the accuracy of the model. The schematic of the NGDF model is shown in Figure 13. To show that this method has the highest accuracy among the different block-structured modelling approaches, the jet engine is modelled using different approaches: the transfer function described in [39], the Wiener block structure described in [40], and the NGDF described in [38]. The engine specification is shown in Figure 14. The results are compared with the experimental results of the transient behavior of the engine in Figure 15. As shown in Figure 15, the NGDF tracks the engine parameters with very high accuracy in both steady-state and transient operation. More details about the engine modelling procedure and the equations used can be found in [41,42]. In order to carry out the simulations, the PLA command has been varied with step changes as a function of time in order to test the capability of the controllers in dealing with sudden changes and difficult working conditions. Figures 16 and 17 compare the MIL and HIL results for the PFC, and Figures 18 and 19 compare the HIL and MIL results for the MMFC. As can be seen in these figures, both controllers were implemented accurately and replicate the simulation situations without any steady-state or transient errors. Moreover, the industrial min-max controller has also been implemented on the hardware in order to explore the effectiveness of the designed fuzzy controllers in satisfying the engine control modes. Figures 20 and 21 confirm the accuracy of the implemented min-max controller. After confirming the validity of the implementation procedure, the dynamic behavior of the three controllers is compared. Figures 22 and 23 compare the results of the HIL simulations for the PFC, the MMFC, and the min-max controller. These figures confirm that both fuzzy controllers are able to satisfy all engine control modes simultaneously without exceeding the engine physical limitations. Figure 22 shows that the PFC has a smaller response time than the MMFC. This means that this controller gives the engine better maneuverability, which is an important aspect for military and unmanned aerial vehicle (UAV) applications. The behavior of both controllers is in very good agreement with the industrial min-max controller, which confirms the feasibility of the designed controllers for real-world applications. On the other hand, Figure 23 shows that the MMFC has lower fuel consumption than the PFC. This introduces this structure as a high-potential candidate for applications where fuel burn and specific fuel consumption (SFC) are more important (e.g., civil aircraft engines). Both fuzzy controllers perform better than the conventional min-max controller in terms of fuel economy. Table 6 compares the response time and fuel consumption of the controllers in the one-minute simulation shown in Figures 22 and 23.
Conclusions
Different fuzzy controller structures for gas turbine aero-engines are investigated in this paper. Based on an analytic review, it is shown that pure fuzzy controllers and min-max fuzzy controllers are the two main architectures proposed for the next generation of aero-engine control system design. Both architectures are designed and described in detail with their associated fuzzy rules and membership functions. The hardware-in-the-loop platform is also developed, and the validity of the implemented controllers is confirmed by a model-in-the-loop approach. The simulation results for a step-change mission confirm that:
• The pure fuzzy controller structure performs better in terms of pilot command tracking and, therefore, it is an appropriate candidate for control of the next generation of military aero-engines.
• The min-max fuzzy controller structure performs better from the fuel consumption and economic points of view, which makes it a strong candidate for the next generation of civil aero-engines.
• Both fuzzy controller structures are feasible for real-world application and perform better than the conventional min-max controller in terms of fuel economy. | 2021-05-08T00:03:16.671Z | 2021-02-22T00:00:00.000 | {
"year": 2021,
"sha1": "f9cab3114e7f2673597b9de288832e6b78f31a2d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2504-186X/6/1/2/pdf?version=1614926297",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "635f041cd4d920e69ac0aa4cf7de7ab2f3352213",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
259288680 | pes2o/s2orc | v3-fos-license | Hypoparathyroidism-related health care utilization and expenditure during the first postoperative year after total thyroidectomy for cancer: a comprehensive national cohort study
Objectives Hypoparathyroidism is the most common complication of total thyroidectomy for cancer, and requires calcium and/or vitamin D supplementation for an unpredictable period of time. The additional cost associated with this complication has not hitherto been assessed. The aim of this study was to assess the economic burden of postoperative hypoparathyroidism after total thyroidectomy for cancer in France. Methods Based on the French national cancer cohort, which extracts data from the French National Health Data System (SNDS), all adult patients who underwent a total thyroidectomy for cancer in France between 2011 and 2015 were identified, and their healthcare resource use during the first postoperative year was compared according to whether they were treated postoperatively with calcium and/or vitamin D or not. Univariate and multivariate cost analyses were performed with the non-parametric Wilcoxon test and generalized linear model (gamma distribution and log link), respectively. Results Among the 31,175 patients analyzed (75% female, median age: 52y), 13,247 (42%) started calcium and/or vitamin D supplementation within the first postoperative month, and 2,855 patients (9.1%) were still treated at 1 year. Over the first postoperative year, mean overall and specific health expenditures were significantly higher for treated patients than for untreated patients: €7,233 vs €6,934 per patient (p<0.0001) and €478.6 vs €332.7 per patient (p<0.0001), respectively. After adjusting for age, gender, Charlson Comorbidity index, ecological deprivation index, types of thyroid resection, lymph node dissection and complications, year and region, the incremental cost of overall health care utilization was €142 (p<0.004). Conclusion Our study found a significant additional cost in respect of health expenditures for patients who had hypoparathyroidism after thyroidectomy for cancer, over the first postoperative year. Five-year follow-up is planned to assess the impact of more severe long-term complications on costs.
Introduction
Postoperative hypoparathyroidism, the most frequent complication after total thyroidectomy for cancer, is caused by post-surgical parathyroid gland failure. The resultant hypocalcemia generally requires calcium +/-vitamin D supplementation. This supplementation can be discontinued after recovery, in the case of 'temporary' postoperative hypoparathyroidism, but after six months to one year, the condition may be considered as 'permanent'. The rates reported in most recent large-scale and national cohort studies in different countries range from 20 to 40% for temporary hypoparathyroidism, and from 5 to 12% for permanent hypoparathyroidism (1)(2)(3)(4)(5).
In the course of acute and chronic supplemented postoperative hypoparathyroidism, complications may occur and lead to repeated medical visits, such as intravenous calcium extravasations (6), rehospitalizations for calcium disorders, as well as, in the long term, neurological, psychiatric, bone-related (7), cardiac and renal morbidity (8), and excess mortality (9).
The clinical burden of hypoparathyroidism has been studied, but without any estimate of the associated costs. The economic evaluations in postoperative hypoparathyroidism have been designed to compare calcium +/-vitamin D supplementation strategies (10)(11)(12)(13)(14)(15). Thus, to date, no large-scale study has assessed the economic impact of this complication. This impact can be directly due to specific patient care associated with the provision and monitoring of drug supplementation and with the management of its potential consequences, such as longer hospital stays, regular and repeated biological tests and/or medical consultations, but also with indirect care expenditure that cannot reasonably be predicted, such as increased daily allowances on account of sick leave, etc.
Since 2010, the French "Cancer Cohort" has compiled the exhaustive individual sociodemographic and medical characteristics, hospital stays, and outpatient healthcare consumption for all patients who have had cancer in France, using data from the French SNIIRAM/ SNDS repository, which is the national health insurance information system (16,17). The amounts reimbursed by the national health insurance for all these healthcare consumptions are also recorded in this information system. This cohort, whose data have already been used to trace the care pathway of patients treated for breast cancer (18) or assess access to palliative care in France (19,20), offers unique resources for an exhaustive evaluation, at a national level, of the health care expenditure of patients with thyroid cancer.
The aim of our study was to assess the economic burden of hypoparathyroidism for the first year post-total thyroidectomy for cancer in France, and to describe overall and specific health expenditures, as well as cost determinants.
Data sources
This observational study was based on the French national cancer cohort, which includes all patients diagnosed or treated for cancer since 2010. This cohort, described in detail elsewhere (17), is extracted from the large-scale French National Health Data System (SNDS) (21).
Briefly, SNDS is a national database collecting healthcare consumption and reimbursement claims covering 99% for the French population (i.e., 67 million people). It includes demographic data (e.g., sex, date of birth, date of death if applicable, health insurance scheme), hospitalization data (diagnoses, medical procedures, expensive drugs and medical devices), outpatient care data (drugs dispensed, lab tests, procedures and services). Sick leave allowances and disability allowances are also entered in SNDS.
The Diagnosis codes were recorded based on the International Classification of Diseases -10 th revision, ICD-10 (World Health Organization) (22). Procedures were recorded with the CCAM classification (classification commune des Actes Médicaux) (23). Medicinal products were identified with the Anatomical Therapeutic Classification (ATC) code (24). Laboratory assays were identified with the national laboratory test coding table (NABM) (25).
Study population/cohort
In this study, we included all adult patients who underwent, between 2011 and 2015, total thyroidectomy or completion thyroidectomy for cancer (ICD-10 codes: C73 or D093 or D440 or E070 or D448), with or without central and/or lateral lymph node dissection (CCAM codes: KCFA005 or KCFA007 or KCFA002 or KCFA003 or KCFA006 or KCMA001). We excluded patients who were taking calcium and/or active vitamin D up to the last 30 days before the procedure (except if there was only one drug delivery in the 90 days preceding the procedure), as well as patients with a previous history of parathyroid pathology or resection (see codes in appendix).
In this study we used the ATC codes A12AA (Calcium), A12AA01 (Calcium phosphate), A12AA02 (Calcium glubionate), A12AA03 (Calcium gluconate), A12AA04 (Calcium carbonate), A12AA20 (different Calcium salts in association), A12AX (combinations of Calcium with vitamin D and/or other drugs) to search for what we defined as « Calcium treatments ». Our definition of « Vitamin D treatment » was selective: we searched either for Calcitriol (ATC code A11CC03) or for Alfacalcidiol (ATC code A11CC04). We divided patients into two groups according to whether they could be considered as having had postoperative hypoparathyroidism or not. Group 1 included patients who started calcium and/or vitamin D supplementation within the first postoperative month and/or were hospitalized for severe hypocalcemia at any time in the first year (ICD-10 code E83.51: code for hypocalcemia). Group 2 included the other patients.
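A minimal pandas sketch of this group assignment, assuming a hypothetical claims extract with one row per dispensation (the column names and toy data are illustrative, not the SNDS schema):

```python
import pandas as pd

# Hypothetical dispensation records (one row per drug delivery) and surgery dates.
rx = pd.DataFrame({
    "patient_id": [1, 1, 2, 3],
    "atc": ["A12AA04", "A11CC03", "C03AA03", "A12AX"],
    "dispense_date": pd.to_datetime(["2013-02-10", "2013-06-01", "2013-03-05", "2014-02-20"]),
})
surgery = pd.Series(pd.to_datetime(["2013-02-01", "2013-02-20", "2014-01-05"]),
                    index=[1, 2, 3], name="surgery_date")

# ATC prefixes taken from the study definition of calcium / active vitamin D treatment.
CALCIUM_VITD = ("A12AA", "A12AX", "A11CC03", "A11CC04")

rx = rx.join(surgery, on="patient_id")
rx["days_post_op"] = (rx["dispense_date"] - rx["surgery_date"]).dt.days
early = rx[rx["atc"].map(lambda c: c.startswith(CALCIUM_VITD)) & rx["days_post_op"].between(0, 30)]

group1_ids = set(early["patient_id"])                       # supplementation started within 30 days
group = surgery.index.map(lambda pid: 1 if pid in group1_ids else 2)
print(dict(zip(surgery.index, group)))                      # {1: 1, 2: 2, 3: 2}
```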
Subgroups were also defined to better characterize potential hypoparathyroidism. Subgroup 1A consisted of patients treated during the first postoperative month and who continued the treatment continuously during the first postoperative year (patients treated for probable permanent hypoparathyroidism), subgroup 1B consisted of patients treated during the first postoperative month and who discontinued their treatment during the first postoperative year (patients treated for probable temporary hypoparathyroidism), and subgroup 1Z consisted of patients who were hospitalized with hypocalcemia <1.5mmol/l (ICD-10 code E83.51) without any treatment during the first postoperative month (patients with hypoparathyroidism probably untreated or undetected during the first postoperative month). Subgroup 2C consisted of patients never treated during the first postoperative year (patients without hypoparathyroidism), subgroup 2D consisted of patients untreated during the first postoperative month, but who started treatment in the second or third postoperative month (patients with indeterminate postoperative parathyroid status), and subgroup 2E consisted of patients untreated during the first postoperative month, who started treatment after the third postoperative month (patients treated for a reason probably unrelated to hypoparathyroidism).
Variables of interest and outcomes
The study population was described using the following criteria: age, gender, chronic comorbidities at any time in the last 365 days prior to thyroidectomy (list of 17 included in the Charlson comorbidity index) (27-30), French deprivation index (31), year of surgery, length of thyroidectomy hospital stay, geographical area of thyroidectomy hospital stay, type of thyroid resection (total thyroidectomy, completion thyroidectomy, complex total thyroidectomy), type of lymph node dissection performed (none, central compartment, lateral compartment), radioactive iodine (RAI) treatment, other complications (laryngeal, hemorrhagic, infectious and cutaneous complications), and rehospitalization for calcium disorders (at any time during the first postoperative year for the last three criteria).
Costs were estimated from the payer's perspective (French National Health Insurance).
All healthcare consumption reimbursed by the payer during the first postoperative year were identified in the database, i.e., from the end of surgery hospitalization up to day 365 after the date of surgery, or until the date of death, whichever occurred first.
Inpatient (hospitalization) care costs based on the Diagnosis-Related Group tariffs and drugs or medical devices billable in addition to the DRGs in public and private hospitals, outpatient care costs (drugs, biological tests, imaging procedures, total medical consultations, transportation and other procedures, such as physiotherapy or speech therapy, etc.), and indirect costs (allcause sick leave daily allowance and invalidity allowance) were included in the overall reimbursed healthcare expenditures.
Some health expenditure items thought to be specifically related to hypoparathyroidism were also described, and included some specific medications, medical consultations, bioassays, imaging and rehospitalization for calcium disorders (other than severe hypocalcemia). Specific drug prescriptions included conventional treatments prescribed to limit the long-term harm of hypoparathyroidism: calcium, active vitamin D, magnesium, thiazide diuretics, and phosphate binders. The specific medical consultations concerned doctors likely to have been directly involved in the treatment (general practitioner, endocrinologist, general/visceral surgeon, ENT surgeon) or its complications (plastic surgeon, rheumatologist, nephrologist, urologist, psychiatrist, cardiologist). Specific bioassays concerned tests associated with the monitoring of hypoparathyroidism and its potential renal complications: calcium, phosphorus, parathormone, 25-(OH)-vitamin D (D2+D3), plasma magnesium, urea, creatinine, urine calcium level, urine creatinine level, creatine clearance (32). Specific imaging tests were those associated with monitoring potential complications of treated hypoparathyroidism: urinary tract ultrasound, brain CT and/or MRI, skull X-ray and bone densitometry (32). Specific rehospitalizations for calcium disorders also provide information on monitoring of potential parathyroid-related complications. ATC, NABM, CCAM and ICD-10 codes used to collect drugs, bioassays, imaging and hospitalization are shown in the Appendix.
Statistical analysis
Categorical variables were described with frequency and percentage, continuous variables with mean and standard deviation ( ± SD) or median.
Univariate cost analyses were performed with the non-parametric Wilcoxon test, multivariate analyses were carried out using generalized linear models (GLMs) with log link and gamma distribution. Incremental expenditures associated with hypocalcemia and covariates were estimated from respective regression coefficients (independent differentials) in GLM including all the following covariates: age (1 st quartile to 4 th quartile), gender, Charlson comorbidity index (0, 1 or 2, > 2), French deprivation index (from 1 st quintile: the least deprived quintile to the 5 th quintile: the most deprived quintile), year of surgery (2011 to 2015), geographical area of thyroidectomy hospital stay (13 administrative areas), type of thyroid resection (total thyroidectomy, completion thyroidectomy, complex total thyroidectomy), type of lymph node dissection (none, central compartment, lateral compartment), laryngeal complications, hemorrhagic complications, infectious complications, cutaneous complications (complications at any time during the first postoperative year).
Statistical tests were two-sided, a p-value < 0.05 was considered as significant.
All statistical analyses were performed using World Programming System 4.02.
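For illustration, a gamma GLM with a log link of this kind could be specified as follows in Python with statsmodels; the synthetic data, variable names, and reduced covariate set are placeholders, not the study dataset or the WPS code actually used.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "cost": rng.gamma(shape=2.0, scale=3500.0, size=n),   # first-year expenditure (EUR), synthetic
    "hypopara": rng.integers(0, 2, size=n),                # 1 = treated postoperative hypoparathyroidism
    "age_q": rng.integers(1, 5, size=n),                   # age quartile
    "female": rng.integers(0, 2, size=n),
    "charlson": rng.integers(0, 4, size=n),                # comorbidity index category
})

model = smf.glm(
    "cost ~ hypopara + C(age_q) + female + C(charlson)",
    data=df,
    family=sm.families.Gamma(link=sm.families.links.Log()),
).fit()

# With a log link, exp(beta) is the multiplicative effect on the mean cost;
# an incremental euro amount can then be recovered at reference covariate values.
print(model.summary())
print("cost ratio, hypoparathyroidism vs none:", np.exp(model.params["hypopara"]))
```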
Selection of the study population
Between January 1st, 2011, and December 31st, 2015, 34,398 patients underwent total thyroidectomy for thyroid cancer in France. After taking the exclusion factors into account (preoperative calcium or vitamin D-calcium therapy for 2,040 patients, associated parathyroid pathology for 1,080 patients, age <18 years for 288 patients, and uninterpretable data for 28 patients), 31,175 patients were analyzed, as detailed in the flow chart, Figure 1.
Patient characteristics
In total, out of the 31,175 patients analyzed, 13,247 patients (42.49%) were deemed to have postoperative hypoparathyroidism on account of calcium or vitamin D+calcium replacement therapy started within the first postoperative month (n=13,224: 4,344 with calcium alone, and 8,880 with vitamin D+calcium) or hospitalization for severe hypocalcemia at any time in the first year (n=23). They were included in Group 1.
The remaining 17,928 patients (57.51%) were assigned to Group 2. The main characteristics of both groups were described (Table 1).
Among the 13,224 patients who started calcium and/or vitamin D supplementation within the first postoperative month, 2,855 (22%) were still treated at one year (Subgroup 1A). Of the other 10,369 patients who discontinued their treatment before 1 year (Subgroup 1B), 9,530 (92%) were treated for less than 6 months.
Among the 17,928 patients without any supplementation within the first postoperative month (Group 2), 17,030 patients (55% of the patients of this study) took no treatment during the first year (Subgroup 2C), 336 patients started supplementation in the second or third postoperative month (subgroup 2D, indeterminate postoperative parathyroid status), with 106 (32%) of these still treated at 1 year, and 562 patients started supplementation after the third postoperative month (subgroup 2E, indeterminate postoperative parathyroid status but probably not related to thyroid surgery), with 269 (49%) of these still treated at 1 year. Ultimately, 375 of these 898 patients with undetermined parathyroid status were still treated at 1 year.
A representation of the duration of supplementation (≥1 calcium ± vitamin D delivery) by time intervals (within the first month, >M1 and ≤M6, >M6 and <M12, or at M12) accounts for late recoveries (Figure 2). For instance, patients still treated 1 year after thyroidectomy, from 9.1% (2,855) to 10.4% (3,230) of patients, can be found in the peripheral circle of the figure.
Regarding health care resource utilization within the first postoperative year, almost all patients received outpatient care, and 70% of patients were hospitalized at least once: 9,523 patients (72%) in Group 1 versus 12,336 patients (69%) in Group 2.
Health expenditures
The overall health expenditure of the 31,175 patients during the first postoperative year was €95,809,106 in Group 1 and €124,315,208 in Group 2 (Table 2). The mean overall health expenditure per patient during the first year was €7,233 (±€9,256) in Group 1 and €6,934 (±€8,751) in Group 2, with an average difference of €298 per patient (p<0.0001). The average difference between the two groups in inpatient care costs was €23 (p<0.0001), the difference in outpatient expenses was €111 (p<0.0001), and the difference in indirect expenses was €164 (p<0.0001).
Specifically, hypoparathyroidism-related health care expenditures during the first year were €6,339,607, with an average of €478.6 (±€708.8) per patient in Group 1, versus a total of €5,964,120 and an average of €332.7 (±€461.0) per patient in Group 2 (Table 3). This resulted in a mean difference between the groups of €145.9 per patient (p<0.0001), mainly due to drugs and rehospitalizations for calcium disorders. To take into account potential confounders, a multivariate analysis was performed. After adjustment, the 1-year incremental cost in patients treated for postoperative hypoparathyroidism was estimated at €142 (p<0.004) (Table 4).
After adjustment for potential confounders, the incremental cost for patients treated for a probable permanent postoperative hypoparathyroidism (Subgroup 1A) compared to patients without hypoparathyroidism (Subgroup 2C) was estimated at €776 (p<0.0001) ( Table 4).
Discussion -conclusion
The data presented here provide information on patterns of hypoparathyroidism after total thyroidectomy for cancer during the first postoperative year, and assess the additional health care expenditures associated with the treatment of hypoparathyroidism on a nationwide scale in France.
Hypoparathyroidism was characterized by hospitalization for severe hypocalcemia at any time in the first postoperative year (a clear criterion of hypocalcemia) and/or by at least one delivery of calcium and/or vitamin D started within the first postoperative month, without any further administration requirement; this allowed us to capture different temporary patterns, in addition to permanent ones.
The rate of permanent hypoparathyroidism, measured at 9.1%, is of the order of the most recent large series (2-4), such as the 2021 national audit run by the British Association of Endocrine and Thyroid Surgeons, which also reported a 9% rate in the cancer group (5). We also found, similarly to Villarroya-Marquina et al., that it was still possible for 23% (839/3,694) of patients still treated for hypoparathyroidism at the sixth postoperative month to recover after that point, which therefore does not seem to be the optimal cut-off to define permanent hypoparathyroidism (33). Finally, we found a higher rate of cervical lymph node dissection in patients with hypoparathyroidism (43%) than in those without hypoparathyroidism (32%), which is consistent with other reports where lymph node dissection is considered a risk factor for hypoparathyroidism (34). In our study, after adjustment for potential confounders, the additional overall health expenditure for patients with hypoparathyroidism was €142 per patient from the payer's perspective. This sum remains relatively modest, and possible explanations may be that our study did not take into account the cost of the initial surgical stay, and that, in the short term, the difference in health care consumption between the groups pertains to inexpensive categories (drugs, laboratory tests, etc.). Health expenditures were estimated from the payer's perspective (French national health insurance), and therefore did not include possible out-of-pocket expenses for the patient; these out-of-pocket expenses were not estimated here. In any case, such an estimation would have been rough, as expenses not covered by the payer may be covered by complementary insurance not available in databases. Furthermore, drugs that are reimbursable but not presented for reimbursement, as well as over-the-counter treatments, represent out-of-pocket expenses but are not captured in the cohort. This unobserved part of consumption may vary according to the therapeutic class studied, as mentioned by Bertocchio et al. (35). We chose to assess the postoperative health care expenditure for all patients and for those suffering from hypocalcemia, regardless of duration, in order to gain an overall idea of what it represents on a national scale. As a result, we included patients with different hypoparathyroidism profiles, and therefore different expenditure profiles. The additional analysis of subgroups allowed a better description and understanding of these profiles.
It would seem, as a first approximation, that the majority of the expenditure in Group 1 originates from patients with permanent hypoparathyroidism (Subgroup 1A), but these results should be refined through a specific study of cost profiles based on the duration of hypoparathyroidism.
Finally, indirect costs (sick leave allowance and invalidity allowance) represented 55% of the additional cost in our study, which was unexpected, although the impact of postoperative hypoparathyroidism on work has previously been qualitatively assessed (4, 15).

Table legend: Subgroup 1A consisted of patients treated during the first postoperative month who continued treatment continuously during the first postoperative year; Subgroup 1B consisted of patients treated during the first postoperative month who discontinued treatment during the first postoperative year; Subgroup 1Z consisted of patients who were hospitalized with hypocalcemia <1.5 mmol/l (ICD-10 code E8351) without any treatment during the first postoperative month; Subgroup 2C consisted of patients untreated during the first postoperative year; Subgroup 2D consisted of patients untreated during the first postoperative month who started treatment in the second or third postoperative month; Subgroup 2E consisted of patients untreated during the first postoperative month who started treatment after the third postoperative month. 1A vs 1B: comparison between Subgroup 1A and Subgroup 1B on mean costs per patient; 1A vs 2C: comparison between Subgroup 1A and Subgroup 2C on mean costs per patient; D: average difference; univariate analysis: comparison using the non-parametric Wilcoxon test. Specific expenditures (after hospital discharge) were estimated for outpatients, except for rehospitalization for calcium disorder. Specific drugs included calcium, active vitamin D, magnesium, thiazide diuretics, and phosphate binders; specific medical consultations or procedures included general practitioner, endocrinologist, general/visceral surgeon, ENT, plastic surgeon, rheumatologist, nephrologist, urologist, and psychiatrist; specific bioassays included calcium, phosphorus, parathormone, 25-(OH)-vitamin D (D2+D3), plasma magnesium, urea, creatinine, urine calcium level, urine creatinine level, and creatinine clearance; specific imaging included urinary tract ultrasound, brain CT and/or MRI, skull X-ray, and bone densitometry. Rehospitalization for calcium disorder included ICD-10 codes E835, E8350, and E8358.
Strengths of the study
Based on comprehensive real-world data, our study makes it possible to describe observed health expenditure without the need for extrapolation. Furthermore, by providing a comparison with a control group of untreated patients, i.e., without hypoparathyroidism, our study makes it possible to estimate postoperative hypoparathyroidism-related health expenditure. Finally, the nature of the available data makes it possible to categorize health expenditure, and to estimate not only the specific costs directly associated with hypoparathyroidism, but also indirect health costs, thus making it possible to detect unforeseen expenditure. These three elements are novel compared with the existing literature. Wang et al. (10) and Nicholson et al. (12), in Markov cohort model studies, and Mercante et al., in a randomized trial (13), compared the cost of several drug supplementation regimens for hypoparathyroidism, while Fanget et al., in a retrospective series, estimated the approximate additional cost of managing patients with hypocalcemia, including the initial surgical stay, without specifying the perspective or the source of the cost data (11). These studies varied in the items chosen to estimate the cost, whether the price of drugs (10)(11)(12), biological assays (10,13), caregiver time (12,13), excess days of initial hospitalization (11), or rehospitalizations (10), but none of them included a control group of patients without hypoparathyroidism. Other studies did not include cost quantification, such as Chen et al., who reported health resource utilization (pills and medical visits) in a retrospective multicenter study (14), and Siggelkow et al., who conducted a survey assessing, among other things, medication use, caregiver burden, and impact on work in patients with hypoparathyroidism (15). These latter two studies did not specifically target postoperative hypoparathyroidism, and none of the studies mentioned above have specifically focused on cancer patients. Finally, Mathonnet et al., in a large study describing the complete care pathway of total thyroidectomy patients in France, also quantified the rate of hypoparathyroidism after total thyroidectomy for cancer, but did not provide any cost estimation (4).
Limitations of the study
The inability to access patient records, which were anonymized, made it difficult to confirm the diagnosis of postoperative hypoparathyroidism. Rather than searching the SNDS database for patients with the ICD-10 coding 'hypoparathyroidism', we preferred to use a more reliable definition of hypoparathyroidism based on automatically recorded drug reimbursement. With ICD-10 codes, cases without hospitalization would not have been identified and only stays with hypoparathyroidism would have been located; moreover, these codes do not provide sufficient clinical information and do not indicate the duration of the condition. Within the first postoperative month, these codes were found for only 21 patients: 19 in Group 1 and 2 in Group 2 (data not shown); therefore, no bias was introduced. Even though laboratory tests are entered into the database (blood calcium, etc.), their results are not available, which implies the use of a proxy. We assumed that the initiation of vitamin D and/or calcium treatment immediately after total thyroidectomy, in previously untreated patients, left little doubt about the causal link between the treatment and the disease. However, we were interested to find that a group of patients (Subgroup 2D, n=336) started their treatment after the first postoperative month. A further analysis of this small subgroup showed that one third of these patients (representing only 106 patients) started treatment after the first month but continued it for the whole first year. These were most probably patients treated for permanent hypoparathyroidism whose initial treatment was not recorded, for example. Some patients started treatment after the third postoperative month, a timeframe that is inconsistent with primary hypoparathyroidism (Subgroup 2E, 562 patients, of whom 269 were still being treated at 1 year). Therefore, these were patients who were most likely treated for a reason other than postoperative hypoparathyroidism (renal osteodystrophy, for instance), although it is theoretically possible that hypocalcemia requiring treatment could remain untreated for 3 months. However, given the low proportion of potentially misclassified patients, we consider that the use of the chosen proxy did not affect the findings of our study. Finally, our study only considered the first postoperative year, because it corresponds to the first phase of the disease. This phase makes it possible to discriminate between temporary and permanent hypoparathyroidism. After the first year, the occurrence of serious complications due to permanent hypoparathyroidism should allow the identification of the few potentially misclassified cases. These long-term complications can be expected to cause potentially significant additional costs. Therefore, a 5-year update of the current data has been planned.
In conclusion, our study found that 42.5% of patients who underwent total thyroidectomy for cancer in France were treated for postoperative hypoparathyroidism, and 9.1% were still treated one year post-surgery. During the first postoperative year, health expenditures were significantly higher for patients treated for postoperative hypoparathyroidism, even though the difference was small. A 5-year follow-up is planned to assess the more burdensome long-term complications and their costs.
Data availability statement
The datasets presented in this article are not readily available because access to data from the SNDS requires permission in accordance with the Public Health Code (Articles L.1461-1 to L.1461-7) and the French Data Protection Act (loi n°78-17 du 6 janvier 1978). Requests to access the datasets should be directed to lesdonnees@institutcancer.fr.
Ethics statement
The studies involving human participants were reviewed and approved by the French Data Protection Agency (Commission nationale de l'informatique et des libertés, CNIL (26)), authorizations n°2019-082 and 2019-083. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
Author contributions
FB, CB, SR, P-JB, and EB contributed to conception and design of the study. EB organized the database and performed the statistical analysis. FB wrote the first draft of the manuscript. EB and SR wrote sections of the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version. | 2023-06-30T13:07:26.282Z | 2023-06-28T00:00:00.000 | {
"year": 2023,
"sha1": "1f47ddbf0e20e1b0d247772943ffa436dceac775",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3389/fendo.2023.1193290",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "1f47ddbf0e20e1b0d247772943ffa436dceac775",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
270310395 | pes2o/s2orc | v3-fos-license | Nirmatrelvir/ritonavir or Molnupiravir for treatment of non-hospitalized patients with COVID-19 at risk of disease progression
Background In randomized controlled trials, Nirmatrelvir/ritonavir (NMV/r) and Molnupiravir (MPV) reduced the risk of severe/fatal COVID-19 disease. Real-world data are limited, particularly studies directly comparing the two agents. Methods Using the VA National COVID-19 database, we identified previously uninfected, non-hospitalized individuals with COVID-19 with ≥1 risk factor for disease progression who were prescribed either NMV/r or MPV within 3 days of a positive test. We used inverse probability of treatment weights (IPTW) to account for providers’ preferences for a specific treatment. Absolute risk difference (ARD) with 95% confidence intervals were determined for those treated with NMV/r vs. MPV. The primary outcome was hospitalization or death within 30 days of treatment prescription using the IPTW approach. Analyses were repeated using propensity-score matched groups. Results Between January 1 and November 30, 2022, 9,180 individuals were eligible for inclusion (6,592 prescribed NMV/r; 2,454 prescribed MPV). The ARD for hospitalization/death for NMV/r vs MPV was -0.25 (95% CI -0.79 to 0.28). There was no statistically significant difference in ARD among strata by age, race, comorbidities, or symptoms at baseline. Kaplan-Meier curves did not demonstrate a difference between the two groups (p-value = 0.6). Analysis of the propensity-score matched cohort yielded similar results (ARD for NMV/r vs. MPV -0.9, 95% CI -2.02 to 0.23). Additional analyses showed no difference for development of severe/critical/fatal disease by treatment group. Conclusion We found no significant difference in short term risk of hospitalization or death among at-risk individuals with COVID-19 treated with either NMV/r or MPV.
In December 2021, two novel oral antiviral agents, Nirmatrelvir/ritonavir (NMV/r) and Molnupiravir (MPV), were granted Emergency Use Authorization (EUA) by the Food and Drug Administration (FDA) for treatment of early symptomatic patients with mild to moderate COVID-19 at high risk of progression to severe disease [12][13][14][15]. NMV/r is an inhibitor of the SARS-CoV-2 3CL protease, the enzyme that the coronavirus needs to replicate. It inhibits viral replication intracellularly at the proteolysis stage, before viral RNA replication. NMV has to be administered with low-dose ritonavir, a potent CYP3A inhibitor with no activity against SARS-CoV-2 on its own, which slows the metabolism of NMV and prolongs its half-life. In randomized controlled trials and real-world studies, NMV/r has been associated with reduced mortality, hospitalization, and hospital length of stay [16][17][18][19][20]. While MPV treatment was also associated with a significant reduction in hospitalization or death by day 29 compared with placebo in randomized controlled trials [21], real-world studies of MPV have shown mixed results, with some studies reporting a reduction in mortality and hospitalizations and others showing no benefit [22][23][24]. Observational studies including both treatments have shown both to be beneficial compared with untreated controls, though these studies have generally not compared the two agents against each other [25][26][27]. An observational study compared NMV/r and MPV against untreated controls in hospitalized patients and found a survival benefit associated with both drugs, but no reduction in intensive care unit (ICU) admission or the need for ventilatory support [28]. Of note, the European Medicines Agency did not recommend approval of MPV, noting the absence of compelling evidence of benefit of MPV in patients with COVID-19, leading the manufacturer to withdraw its application for marketing authorization in Europe in June 2023 [29]. A randomized controlled trial directly comparing NMV/r to MPV is very unlikely due to logistic, financial, and ethical constraints. In the absence of such trials, rigorous observational studies can provide real-world evidence of their comparative effectiveness in patients with COVID-19. We undertook this study to determine the comparative effectiveness of NMV/r vs. MPV treatment upon the risk of hospitalization or death in a previously uninfected, non-hospitalized population at risk for disease progression.
Study setting
The Veterans Health Administration of the Department of Veterans Affairs (VA) created a national COVID-19 Shared Data Resource, which contains detailed demographic, clinical, laboratory, vital status, and episodes-of-care information on all Veterans with a laboratory-confirmed diagnosis of COVID-19 infection and recipients of a COVID-19 vaccine within the VA. Veterans who are tested or vaccinated outside VA are captured by patient self-report (presentation of a vaccination card) or through insurance claims data. The VA COVID-19 Shared Data Resource is updated regularly in real time with information derived from multiple validated sources [30][31][32][33][34].
Study population
We used a matched cohort design for the current study, using the two approaches described below. Eligible individuals were those in the VA COVID-19 Shared Data Resource with at least two episodes of care in the VA healthcare system within the last 2 years, who had a first confirmed SARS-CoV-2 infection between January 1 and November 30, 2022, had at least one risk factor for progression to severe disease, and received either NMV/r or MPV within 3 days of their COVID-19 diagnosis. Those who were hospitalized or died before or within 24 hours of receiving NMV/r or MPV, those who received both NMV/r and MPV, and those who received monoclonal antibody for COVID-19 or remdesivir were excluded, as were those who received treatment more than 3 days after the index diagnosis. We used an inverse probability of treatment weights (IPTW) based approach for our primary analysis, as detailed in our previous publications [35,36]. Briefly, we fitted a logistic regression model for NMV/r or MPV prescription using age (5-year blocks), race, sex, body mass index, VA facility where the diagnosis was made, vaccination status, and presence of diabetes, hypertension, cardiovascular disease, chronic kidney disease, chronic lung disease, and cancer diagnoses. The estimated probabilities from this model were used to compute inverse probability of treatment weights, which were applied in subsequent analyses. To account for potential replications caused by IPTW, we used a robust (sandwich) variance estimator in the Cox regression model, which yielded conservative 95% confidence intervals. Adequacy of weighting was tested by calculating the standardized mean difference for each variable after applying the weights; a value of <0.2 indicates good matching for the variable tested.
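As a concrete illustration of the IPTW construction described above, the following is a minimal Python sketch (pandas/scikit-learn); the data frame and all column names are hypothetical placeholders, not the VA dataset or the study's actual code.

```python
# Sketch of inverse probability of treatment weighting (IPTW): fit a
# propensity model for the treatment received, then weight each patient by
# the inverse probability of that treatment. Column names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def iptw_weights(df: pd.DataFrame, treat_col: str, covariates: list) -> pd.Series:
    X = pd.get_dummies(df[covariates], drop_first=True).astype(float)
    t = df[treat_col].to_numpy()
    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    w = np.where(t == 1, 1.0 / ps, 1.0 / (1.0 - ps))
    return pd.Series(w, index=df.index, name="iptw")

def standardized_mean_diff(x1, x0, w1=None, w0=None):
    """Weighted SMD; values below 0.2 are usually taken to indicate balance."""
    m1, m0 = np.average(x1, weights=w1), np.average(x0, weights=w0)
    v1 = np.average((np.asarray(x1) - m1) ** 2, weights=w1)
    v0 = np.average((np.asarray(x0) - m0) ** 2, weights=w0)
    return (m1 - m0) / np.sqrt((v1 + v0) / 2.0)
```

The second helper corresponds to the balance diagnostic mentioned in the text: computing the standardized mean difference for each covariate before and after weighting.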
We conducted additional analyses to determine the validity of our primary results. Among the eligible population, we used propensity-score matching to identify those prescribed NMV/r and 1:1 matched controls prescribed MPV. Propensity-score matching was done on age, race, sex, body mass index, multiple comorbidities, site of diagnosis, and vaccination status. We used matching without replacement, using a caliper of 0.2 SD. We calculated the ARD and 95% confidence intervals for hospitalization or death within 30 days overall, and for various subgroups.
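The matching step itself could look like the following sketch, which implements greedy 1:1 nearest-neighbor matching without replacement with a 0.2-SD caliper on the logit of the propensity score; this is an illustrative implementation under those assumptions, not the algorithm actually used by the authors.

```python
# Greedy 1:1 nearest-neighbor propensity-score matching without replacement,
# with a caliper of 0.2 SD on the logit of the propensity score.
import numpy as np

def caliper_match(ps, treated, caliper_sd=0.2):
    ps = np.asarray(ps, dtype=float)
    logit = np.log(ps / (1.0 - ps))
    caliper = caliper_sd * logit.std()
    controls = list(np.flatnonzero(np.asarray(treated) == 0))
    pairs = []
    for i in np.flatnonzero(np.asarray(treated) == 1):
        if not controls:
            break
        d = np.abs(logit[controls] - logit[i])
        j = int(np.argmin(d))
        if d[j] <= caliper:                     # accept only matches inside the caliper
            pairs.append((i, controls.pop(j)))  # pop enforces "without replacement"
    return pairs
```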
Vaccination status was categorized, based on the status at the time of COVID-19 diagnosis, into individuals who were unvaccinated or did not complete a primary series, those who completed a primary series, and those who completed a primary series and received at least one booster dose after that. Body mass index was calculated using the average of the two most recent height and weight values. Comorbidities were retrieved from the VA National COVID-19 database, where they are identified based on International Classification of Diseases, 10th edition (ICD-10) codes.
Primary outcome measure
Our primary outcome measure was hospitalization or death within 30 days among those prescribed NMV/r vs. those prescribed MPV.Time-at-risk started from the date of treatment prescription in each group.
Statistical analysis
We calculated the absolute risk difference (ARD) and associated 95% confidence intervals between the groups overall, and for sub-strata of the population by age, sex, body mass index, presence of various comorbidities, vaccination status, and presence of symptoms. Kaplan-Meier curves were generated to demonstrate the difference in outcomes over time among those treated with NMV/r or MPV. The logrank test was used to calculate p-values between groups. A p-value of <0.05 was considered statistically significant.
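For illustration, an unadjusted ARD with a Wald-type 95% CI (expressed in percentage points, as in the Results) can be computed as in the sketch below; the event counts are made up, and this generic formula is not the study's exact weighted variance estimator.

```python
# Absolute risk difference (ARD) with a Wald-type 95% CI, in percentage points.
import numpy as np

def ard_ci(events1, n1, events0, n0, z=1.96):
    p1, p0 = events1 / n1, events0 / n0
    ard = p1 - p0
    se = np.sqrt(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)
    return 100 * ard, 100 * (ard - z * se), 100 * (ard + z * se)

# Illustrative (made-up) counts for two treatment groups:
print(ard_ci(events1=120, n1=6592, events0=50, n0=2454))
```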
Additional analyses
We repeated all analyses comparing NMV/r vs. MPV for the development of severe, critical, or fatal disease. Severe or critical disease was defined as the need for intensive care unit admission or mechanical ventilation, or death. In addition, we determined the hazard ratios for the risk of developing the primary outcome using Cox proportional hazards analysis.
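A weighted Cox model with a robust (sandwich) variance estimator, as used for the IPTW analysis, might be fitted as in this sketch (Python lifelines); the data frame `df` and its columns (time, event, nmv_r, iptw) are hypothetical and would come from steps like the IPTW sketch above.

```python
# Weighted Cox proportional hazards model with a robust (sandwich) variance
# estimator; `iptw` holds the inverse probability of treatment weights.
import pandas as pd
from lifelines import CoxPHFitter

def fit_weighted_cox(df: pd.DataFrame) -> CoxPHFitter:
    cph = CoxPHFitter()
    cph.fit(df[["time", "event", "nmv_r", "iptw"]],
            duration_col="time", event_col="event",
            weights_col="iptw", robust=True)
    return cph  # cph.print_summary() shows the HR for NMV/r vs. MPV
```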
Ethics statement
The study was granted an exempt status by the Institutional Review Board at the VA Pittsburgh Healthcare System (Study Number 1617395-6).Since there was no contact with any of the participants, and due to its exempt status, the informed consent requirement was not applicable.
Results
Among 105,502 individuals who tested positive during the study period, 9,180 were eligible for inclusion in the final analyses (Fig 1). Among those, the primary analyses were conducted on 6,592 individuals prescribed NMV/r and 2,454 prescribed MPV, using the inverse probability of treatment weights (Fig 1). The standardized mean difference values before and after inverse probability of treatment weighting are provided in S1 Fig in S1 File. The median age in the IPTW groups was 67 years, 87% were male, and 23% were Black. Median body mass index was 30 kg/m2, median Charlson comorbidity index was 2, and approximately 16% were unvaccinated against COVID-19 (Table 1). The median number of days from diagnosis to prescription, and from onset of symptoms to prescription among symptomatic individuals, was 0 days (IQR 0,1). The absolute risk difference (ARD) for hospitalization or death within 30 days among patients who received NMV/r versus those who received MPV was -0.25 (95% CI -0.79 to 0.28) (Fig 2, Panel A). There was no statistically significant difference in ARD among strata by age, race, comorbidities, or symptoms at baseline (Fig 2, Panel A). The absolute risk difference for hospitalization or death among NMV/r-treated versus MPV-treated individuals was -2.3 (95% CI -3.8 to -0.79) for those who were unvaccinated or did not complete a primary series, and 0.9 (95% CI -0.19 to 1.99) for those who had completed a primary series but not received a booster dose. There was no significant difference among those who had received a booster dose after completing a primary series. Kaplan-Meier curves depicting the proportion of individuals without hospitalization or death among those treated with NMV/r or MPV are shown in Fig 3, Panel A, and did not demonstrate a difference between the two groups (logrank p-value = 0.6).
Additional analyses
The main analyses were repeated on propensity-score matched groups that included 2,453 matched pairs. The standardized mean difference values before and after propensity-score matching are shown in S2 Fig in S1 File, indicating good matching on the variables tested. The baseline characteristics of the study population before and after propensity-score matching are shown in S1 Table in S1 File. There was no difference in the primary outcome among the two groups (ARD -0.9, 95% CI -2.02 to 0.23) (Fig 2, Panel B). Subgroup analyses by age, race, sex, comorbidities, vaccination status, or presence of symptoms also did not demonstrate any difference among those treated with NMV/r or MPV. Kaplan-Meier curves depicting the proportion of individuals without hospitalization or death among those treated with NMV/r or MPV also did not demonstrate a difference between the two groups (logrank p-value = 0.1) (Fig 3, Panel B). We repeated all analyses with severe, critical, or fatal disease within 30 days of treatment initiation as the primary outcome. These results mirrored the corresponding primary analyses and are presented in S3 and S4 Figs in S1 File. We also determined the hazards of developing the primary outcome of interest using Cox proportional hazards analysis, which also confirmed the results of the primary analysis (S5 Fig, panels A and B in S1 File).
Data access
The data for this study were accessed over an extended period during 2022 and 2023.The study was considered exempt from review by the Institutional Review Board at VA Pittsburgh Healthcare System.The authors did not have access to information that could directly identify participants included in the analyses during or after data collection.
Discussion
Data comparing NMV/r vs. MPV are scant. Our comparison of the two antivirals against COVID-19 demonstrates that they have a comparable effect in reducing the risk of hospitalization or death in non-hospitalized individuals with at least one risk factor for progression of disease.
Recently, a small observational study noted that both drugs demonstrated effectiveness against hospitalization or death, and time to first negative COVID-19 test [37]. Pivotal randomized clinical trials of NMV/r and MPV demonstrated efficacy of both antivirals compared with placebo in reducing the risk of hospitalization or death among non-hospitalized individuals at risk of disease progression when administered early in the course of COVID-19 [16,21]. However, subsequent observational studies have shown mixed results, particularly for MPV. The PANORAMIC trial was a large, multicenter, open-label, platform adaptive randomized controlled trial, which failed to show any benefit of MPV in reducing hospitalization or death among high-risk individuals [24]. In another study emulating a target trial comparing either NMV/r or MPV versus non-initiation of these treatments, both agents reduced all-cause mortality among hospitalized patients; however, there was no reduction in the need for intensive care unit admission or mechanical ventilation [28]. The use of NMV/r has been associated with more consistent results in improving clinical outcomes [20]. To our knowledge, no published studies have directly compared these two antivirals in the same eligible population. Since a gold-standard randomized controlled trial comparing these two agents is extremely unlikely due to logistic and financial constraints, a rigorously conducted observational study may provide clinically meaningful information. We used several analytical approaches to match the groups receiving NMV/r or MPV, to reduce selection bias in the assignment of one treatment over the other. All analyses demonstrated no significant difference in the risk of hospitalization or death among non-hospitalized individuals with COVID-19 who were treated with NMV/r or MPV. Our study population included non-hospitalized patients with at least one risk factor for progression to severe disease. Furthermore, no difference between the two antivirals was observed for the development of severe, critical, or fatal disease. Some important differences between NMV/r and MPV should be considered when prescribing these agents. NMV/r is not indicated in individuals with severe renal impairment (eGFR < 30 mL/min), while dose reduction is recommended in those with eGFR between 30-60 mL/min. No dose adjustment is recommended in individuals with mild to moderate hepatic impairment (Child-Pugh Class A or B). Since Nirmatrelvir must be co-administered with ritonavir, extreme caution must be observed in individuals taking other drugs metabolized by CYP3A. No dose adjustments or drug interactions are listed for MPV in the prescribing information (package insert), based on the limited data available.
Several limitations should be considered when interpreting these results. Since the treatment assignment was not randomized, there is a risk of selection bias and residual confounding. Treatment assignment was dependent upon the choice of individual prescribers. Information on SARS-CoV-2 variants was not available. There is a possibility of previously undiagnosed infection among the study population, which may have conferred varying levels of immunity. There is a small possibility of incomplete capture of hospitalizations if care was provided outside the VA healthcare system. Some comorbidities, like chronic kidney disease, chronic lung disease, and diabetes, have a wide spectrum of severity which may affect outcomes differently. For example, an individual with stable stage 3 chronic kidney disease may be affected quite differently than an individual with stage 5 disease who is on chronic hemodialysis. Such variations in disease severity were not considered in our study due to a lack of sufficient data for accurate disease severity classification. Finally, we did not determine the effectiveness of either antiviral vs. no treatment.
We employed several strategies to mitigate the limitations noted above. To minimize bias due to non-random selection of the antiviral agent, we used inverse probability of treatment weights. We also used a propensity-score matched approach to balance the two groups based on their baseline characteristics. Both approaches yielded study groups that were well matched on multiple demographic and clinical characteristics. We included in our study only individuals who had at least one VA encounter within the previous two years, to minimize the inclusion of persons who receive care outside the VA healthcare system. It should be noted that our study population was predominantly male, which is reflective of the population served by the VA healthcare system.
In summary, we found no significant difference in the short-term risk of hospitalization or death, or of severe, critical, or fatal disease, in non-hospitalized individuals with COVID-19 at risk of disease progression who were treated with either NMV/r or MPV.

The authors acknowledge the use of facilities at the VA Pittsburgh Healthcare System, the Veterans Health Foundation of Pittsburgh, and the central data repositories maintained by the VA Information Resource Center, including the Corporate Data Warehouse. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs or the funding agencies.
Table 1. Baseline characteristics of the Nirmatrelvir/ritonavir (NMV/r) and Molnupiravir (MPV) analysis cohort.
25. Wan EYF, Wang B, Mathur S, Chan CIY, Yan VKC, Lai FTT, et al. Molnupiravir and nirmatrelvir-ritonavir reduce mortality risk during post-acute COVID-19 phase. J Infect. 2023;86(6):622-5. https://doi.org/10.1016/j.jinf.2023.02.029. PMID: 36822409.
"year": 2024,
"sha1": "138842a3f7f405132b336def753490b2907d5519",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "138842a3f7f405132b336def753490b2907d5519",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
115236966 | pes2o/s2orc | v3-fos-license | Comment on"Analytical Structure of One Dimensional Localization Theory: Re-Examining Mott's Law"
In Phys. Rev. Lett. 84, 1760 (2000), A. O. Gogolin has challenged the established point of view that Mott's law for the dynamical conductivity of a one-dimensional insulator is correct. We present the result of a numerical solution of Berezinskii's recursion relations that demonstrates unambiguously that the earlier solutions are indeed valid, in contrast to Gogolin's claim.
In a recent Letter, A. O. Gogolin [1] has challenged the established point of view that Mott's prediction for the dynamical conductivity of a localized electron system is correct. The intuitive argument [2] leads in one dimension to a dynamical conductivity of the form σ(ω) ∝ ω² ln²ω.
Later, the precise asymptotic result has been derived by several authors using different methods, see e.g. Refs. [3][4][5]; here C denotes the Euler-Mascheroni constant 0.5772... and σ₀ = e²v_F τ/π per spin. (We are not aware of any analytical prediction for the constant C.) Gogolin presents a purely formal calculation which instead yields a ln³ω contribution (eq. (22) in Ref. [1] with σ₀ = 4). In view of the mentioned variety of works corroborating Mott's conclusion, this is quite unexpected. If Gogolin were right, then one of the thought-to-be most profound chapters in localization theory would have to be rewritten. In fact, however, as we will demonstrate below, he is not. Gogolin's analysis starts from the famous recursion equations first derived by Berezinskii [3]. The equations can be solved in a standard manner by mapping them to a differential equation. Gogolin's claim is that the previous solution of this equation is incorrect, and hence also the conductivity law ω² ln²ω derived thereof. He argues that previous authors have not properly taken into account the discreteness of the spectrum of the equation.
A simple method to check Berezinskii's result is to solve the recursion equations for the conductivity numerically (for details see Ref. [6]). The algorithm is very stable and has been used down to frequencies ν = 5·10⁻⁶, where M = 10⁸, in a calculation with 40 digits (fixed) precision. For even larger M = 2·10⁸, or more digits, e.g. 60, σ does not change, implying that rounding errors are irrelevant. Fig. 1 shows our result. The agreement of the numerical data with the Mott/Berezinskii solution is perfect over more than 3 decades, while the data are completely incompatible with Gogolin's ln³ω term.
One may ask where Gogolin's approach fails. We believe that the problem stems from the "leading logarithmic approximation", the only step in the calculation which is not exact. The expression in eq. (19) of Gogolin's paper, derived within this approximation, may be sufficient for obtaining the leading term ∝ iω. However, the real part of σ is of higher order in ω, and presumably to this order corrections exist that have been ignored by Gogolin and that cancel the ln³ω term. We also mention that, in contrast to Gogolin's statements, the length scale ℓ ∝ ln(1/ν) has been identified and discussed in the literature as a relevant scale, e.g. in Ref. [7].
We thank A. Mildenberger, A. D. Mirlin, D. G. Polyakov, L. Schweitzer and P. Wölfle for stimulating discussions. Support from the SFB 195 der DFG is gratefully acknowledged. | 2019-04-14T01:56:45.445Z | 2000-10-31T00:00:00.000 | {
"year": 2000,
"sha1": "2dab014f620cdb146baab27b265841aa748aecb3",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "fb2eb186541b9bc393d5a6ee1055bdede12f655a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
16549640 | pes2o/s2orc | v3-fos-license | Hybrid Dark Sector: Locked Quintessence and Dark Matter
We present a unified model of dark matter and dark energy. The dark matter field is a modulus corresponding to a flat direction of supersymmetry, which couples, in a hybrid type potential, with the dark energy field. The latter is a light scalar, whose direction is stabilized by non-renormalizable terms. This quintessence field is kept `locked' on top of a false vacuum due to the coupling with the oscillating dark matter field. It is shown that the model can satisfy the observations when we consider low-scale gauge-mediated supersymmetry breaking. The necessary initial conditions are naturally attained by the action of supergravity corrections on the potential, in the period following the end of primordial inflation.
Introduction
In the last few years cosmology has experienced a dramatic influx of observational data [1,2,3,4]. Analysis of these observations has resulted in the emergence of the so-called 'concordance' model of cosmology. According to the data, we live in a spatially flat Universe, whose content is composed predominantly of unknown substances, not accounted for by the standard model of particle physics. In particular, roughly 1/3 of the energy density of our Universe behaves as pressureless matter (dust), with little or no interaction with the usual baryonic matter. These weakly interacting massive particles (WIMPs) do not correspond to the luminous matter of galaxies and, hence, have been named dark matter. The remaining 2/3 of the Universe's content at present is attributed to an even more exotic substance, whose properties are such that it affects the global geometry of the Universe and causes the observed current accelerated expansion of spacetime. In analogy with dark matter, this substance has been named dark energy. Therefore, despite the substantial progress of particle cosmology in the last decade, cosmologists are forced to admit that the bulk of the content of the Universe corresponds to an unknown dark sector, on whose origin and theoretical justification we can only speculate.
Modern particle physics indeed offers a number of possibilities for the explanation of this dark sector. In particular, supersymmetric theories include a zoo of unobserved particles corresponding to the superpartners of standard model particles. This so-called hidden sector may soon be accessible to collider experiments. Other candidates are offered by modified gravity theories (e.g. the Brans-Dicke scalar of scalar tensor gravity), string theories (e.g. string axions/moduli, Kaluza-Klein particles) and theories of large extra dimensions (e.g. the radion). Therefore, it seems that particle physics has available options capable of addressing the problem of the dark sector.
Since the dark matter issue has been present for some time, a number of successful candidate WIMPs exist to explain it, the most prominent of which are the lightest supersymmetric particle (e.g. the neutralino) and the axion (i.e. the phase-field of the Peccei-Quinn field, used to solve the strong CP-problem). On the other hand, the problem of dark energy is quite recent and more difficult to address, because the properties of dark energy are quite bizarre and may even threaten some of our 'fundamental prejudices' such as the vanishing of the cosmological constant or even the dominant energy condition (for a review see [5]).
The simplest form of dark energy is a non-zero cosmological constant Λ. Phenomenologically, this is the most appealing choice, since it provides a very nice fit to the data (the so-called ΛCDM model, whose minimal form is the famous 'vanilla model'). However, the value of the cosmological constant has to be fine-tuned to the incredible level of Λ ∼ 10⁻¹²³ M_P², compared to its natural value, given by the Planck mass M_P. Moreover, a constant non-zero vacuum density inevitably leads to eternal accelerated expansion. This results in the presence of future causal horizons, which inhibit the construction of the S-matrix in string theory and are, therefore, most undesirable [6].
For these reasons theorists have attempted to formulate alternative solutions to the dark energy problem, while keeping Λ = 0 as originally conjectured. The most celebrated such idea is the introduction of the so-called quintessence field [7]: the fifth element, after cold dark matter (WIMPs), hot dark matter (neutrinos), baryons and photons. Quintessence is a light scalar field Q, which has not yet reached the minimum of its potential and, therefore, is responsible for the presence of a non-vanishing potential density V₀ today. This density currently dominates the Universe, giving rise to an effective cosmological constant Λ_eff = 8πG V₀, which causes the observed accelerated expansion. Eventually, the quintessence field will reach the minimum of its potential (corresponding to the true vacuum), thereby ending the accelerated expansion. Hence, quintessence dispenses with the future horizon problem of ΛCDM.
However, despite its advantages, the quintessence idea suffers from certain generic problems [8]. For example, in order to achieve the correct value of V₀, one usually needs to fine-tune the quintessence potential accordingly. Also, on fairly general grounds it can be shown that the value of quintessence at present is Q ∼ M_P (if originally at zero), with a tiny effective mass m_Q ∼ 10⁻³³ eV. In the context of supergravity theories such a light field is difficult to understand, because the flatness of its potential is lifted by excessive supergravity corrections or due to the action of non-renormalizable terms, which become important at displacements of order M_P. Finally, quintessence introduces a second tuning problem, that of its initial conditions.
In this paper we attempt to address the dark sector problem in a single theoretical framework. Other such attempts can be found in Ref. [9]. We assume that the dark matter particle is a modulus Φ, corresponding to a flat direction of supersymmetry. The modulus field is undergoing coherent oscillations, which are equivalent to a collection of massive Φ-particles, that are the required WIMPs. Coupled to the dark matter is another scalar field φ. This can be thought of as our quintessence field and it corresponds to a flat direction lifted by non-renormalizable terms. Even though the φ-field is a light scalar, it is much more massive than the m Q mentioned above, so as not to be in danger from supergravity corrections to its potential. Our quintessence field is coupled to our dark matter in a hybrid manner, which is quite natural in the context of a supersymmetric theory. Due to this coupling, the oscillating Φ, keeps φ 'locked' on top of a potential hill, giving rise to the desired dark energy. When the amplitude of the Φ-oscillations decreases enough, the dark energy dominates the Universe, causing the observed accelerated expansion. Much later, when the oscillation amplitude is reduced even further, the 'locked' quintessence field is released and rolls down to its minimum. Then, the system reaches the true vacuum and accelerated expansion ceases. Our model accounts successfully for the observations using natural mass-scales (corresponding to low-scale gauge-mediated supersymmetry breaking). In order to explain the required initial conditions we explore in detail the history of our system during the early Universe, when the supergravity corrections to the scalar potential are essential.
Our paper is organized as follows. In Sec. 2 we present and analyze the dynamics of our model, while we also determine the value of the model parameters. In Sec. 3 we demonstrate that the required initial conditions for our dark matter field Φ may be naturally attained due to the action of supergravity corrections on the scalar potential. We also investigate the disastrous possibility of the decay of the oscillating dark matter condensate into quintessence quanta. In Sec. 4 we show that the supergravity corrections may also ensure the locking of the quintessence field φ. Additionally, we elaborate more on the value of the tachyonic mass of φ and its vacuum expectation value. Finally, in Sec. 5 we discuss our results and present our conclusions.
We assume a spatially flat Universe, according to the WMAP observations [1]. Throughout our paper we use natural units such that ħ = c = 1, and Newton's gravitational constant is 8πG = m_P⁻², where m_P = 2.4 × 10¹⁸ GeV is the reduced Planck mass.
The model
Consider two real scalar fields Φ and φ with a hybrid-type potential of the form

V(Φ, φ) = ½ m_Φ² Φ² + ½ λ φ² Φ² + ¼ α (φ² − M²)² ,   (1)

where λ ≲ 1. From the above we see that the tachyonic mass of φ is given by

m_φ = √α M .   (2)

The above potential has global minima at (Φ, φ) = (0, ±M) and an unstable saddle point at (Φ, φ) = (0, 0). Now, since the effective mass-squared of φ is

m_eff²(φ) = λΦ² − m_φ² ,   (3)

the origin φ = 0 is stable in the φ-direction as long as Φ exceeds the critical value

Φ_c ≡ m_φ/√λ .   (4)

Suppose that originally the system lies in the regime where Φ ≫ Φ_c and φ ≃ 0. With such initial conditions the effective potential for Φ becomes quadratic:

V(Φ) = V₀ + ½ m_Φ² Φ² .   (5)

Hence, when φ remains at the origin, Φ oscillates on top of a false vacuum with density

V₀ = ¼ α M⁴ = ¼ m_φ² M² .   (6)

The oscillation frequency is ω_Φ ∼ m_Φ and the time interval (Δt)_s that the field spends on top of the saddle point (|ΔΦ| ≤ Φ_c) is

(Δt)_s ∼ Φ_c/(m_Φ Φ̄) ,   (7)

where Φ̄ is the amplitude of the oscillations. Originally this amplitude may be quite large, but the expansion of the Universe dilutes the energy of the oscillations and, therefore, Φ̄ decreases, which means that (Δt)_s grows. However, as long as the system spends most of this time away from the saddle, and until (Δt)_s becomes large enough to be comparable to the inverse of the tachyonic mass of φ, the latter has no time to roll away from the saddle [10,11]. Hence, the oscillations of Φ on top of the saddle can, in principle, continue until the amplitude decreases down to

Φ_s ∼ (m_φ/m_Φ) Φ_c ,   (8)

at which point φ has to depart from the origin and roll down toward its vacuum expectation value (VEV) M. However, the roll-down of φ can occur earlier, if Φ_c > Φ_s, even though the period of oscillation is smaller than m_φ⁻¹. Indeed, when Φ_s < Φ̄ < Φ_c, φ ≃ 0 is not possible because, were it otherwise, it would mean that the field would have had to remain on top of the saddle for the entire period of oscillation. Hence, φ departs from the origin at Φ̄_end, where

Φ̄_end ∼ max{Φ_c, Φ_s} .   (9)

From Eqs. (4) and (8) we find that Φ̄_end is decided by the relative magnitude of the masses of the scalar fields, because

Φ_s/Φ_c ∼ m_φ/m_Φ .   (10)

During the oscillations the density of the oscillating Φ is

ρ_Φ = ½ Φ̇² + ½ m_Φ² Φ² ∼ ½ m_Φ² Φ̄² ,   (11)

where the dot denotes derivative with respect to the cosmic time t. Comparing this with the overall potential density given in Eq. (5), we see that the overall density is dominated by the false vacuum density given in Eq. (6) when the oscillation amplitude is smaller than

Φ̄ ≲ √(2V₀)/m_Φ .   (12)

The above model can be used to account for both the dark matter and the dark energy in the Universe. Provided the initial conditions for the two fields are appropriate, it is possible that the oscillating field Φ constitutes the dark matter, whereas the field φ is responsible for eliminating the dark energy in the future, so as to avoid eternal acceleration and future horizons. The dark matter field Φ oscillates on top of the false vacuum V₀ in the same manner as in 'locked' inflation [10,11]. The false vacuum is not felt until today, when the accelerated expansion begins. Eventually, at some moment in the future, the amplitude of the Φ-oscillation reaches Φ̄_end and φ rolls away from the origin, terminating the accelerated expansion. We want our model to explain the dark energy responsible for the currently observed accelerated expansion of the Universe. Hence, the false vacuum density V₀ of our model should be comparable to the density ρ₀ of the Universe at present:

V₀ ∼ ρ₀ .   (13)

In view of Eq. (6), this implies the condition

α M⁴ ∼ ρ₀ .   (14)

We also want our model to explain the dark matter, by means of the oscillating scalar field Φ.
Indeed, it is well known that a scalar field oscillating in a quadratic potential has the equation of state of pressureless matter [12] (it corresponds to a collection of massive Φ-particles) and, therefore, Φ can account for the dark matter necessary to explain the observations. For this, the oscillating Φ has to satisfy certain requirements. One of these is the obvious requirement that Φ should not have decayed until today. This means that the decay rate of Φ should satisfy the condition

Γ_Φ ≲ H₀ ,   (15)

where H₀ ∼ √ρ₀/m_P is the Hubble parameter at present. Using that Γ_Φ ∼ g_Φ² m_Φ, we obtain

m_Φ ≲ (H₀ m_P²)^(1/3) ,   (16)

where we used that the coupling g_Φ of Φ with its decay products lies in the range m_Φ/m_P ≤ g_Φ ≤ 1, with the lower bound corresponding to the gravitational decay of Φ, for which Γ_Φ ∼ m_Φ³/m_P². From the above bound we see that we require Φ to be a rather light field, with mass ≲ 10 MeV. We choose, therefore, to use a modulus field, corresponding to a flat direction of supersymmetry, whose mass is estimated as

m_Φ ∼ M_S²/m_P ,   (17)

where M_S is the supersymmetry breaking scale, ranging between m_3/2 ≤ M_S ≤ √(m_P m_3/2), where m_3/2 ∼ 1 TeV is the electroweak scale (gravitino mass); the upper bound corresponds to gravity-mediated supersymmetry breaking, while the lower bound corresponds to gauge-mediated supersymmetry breaking, which can give M_S as low as (few) × TeV. Eqs. (16) and (17) suggest

m_3/2²/m_P ∼ 10⁻³ eV ≲ m_Φ ≲ 10 MeV .   (18)

If φ were a modulus too, then the natural value of α would have been

α ∼ (m_φ/m_P)² ,   (19)

with M ∼ m_P.

Figure 1: Illustration of the scalar potential V(Φ, φ). Originally, Φ ∼ m_P and φ ≃ 0. The field Φ begins oscillating with amplitude Φ_osc. The Universe dilutes the energy of the oscillations until the amplitude decreases to Φ̄_end ∼ Φ_c, when the system departs from the saddle and rolls toward the minimum at (Φ = 0, φ = ±M).
The above, however, in view of Eq. (14), results in the condition

m_φ M² ∼ √ρ₀ m_P ∼ H₀ m_P² ,   (20)

which, combined with the range in Eq. (18) (with m_φ taken in that range), results in the following range for the vacuum expectation value (VEV) of φ:

10 MeV ≤ M ≤ 1 TeV .   (21)
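As a quick numerical sanity check of the above range, the following Python sketch evaluates the relations as reconstructed above, m_Φ ∼ M_S²/m_P and m_φ M² ∼ √ρ₀ m_P, at the two ends of the mass range of Eq. (18), taking m_φ ∼ m_Φ; all order-one factors are dropped and the input values are assumptions for illustration only.

```python
# Order-of-magnitude check of the VEV range quoted in Eq. (21), using the
# relations as reconstructed in the text (all O(1) factors dropped).
eV = 1.0
GeV = 1e9 * eV
m_P = 2.4e18 * GeV              # reduced Planck mass
H_0 = 1.4e-33 * eV              # Hubble parameter today
sqrt_rho0 = 3**0.5 * H_0 * m_P  # sqrt(rho_0) from the Friedmann equation

for m_phi in (1e-3 * eV, 1e7 * eV):       # ends of the range in Eq. (18)
    M = (sqrt_rho0 * m_P / m_phi) ** 0.5  # Eq. (20) solved for M
    print(f"m_phi = {m_phi / eV:.0e} eV  ->  M = {M / GeV:.1e} GeV")
# Output: roughly M ~ 4e3 GeV and M ~ 4e-2 GeV, i.e. the TeV-to-10-MeV range.
```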
Thus, we see that M ≪ m_P, in contrast to expectations. However, there are ways to reduce the VEV of φ, provided the tachyonic mass m_φ remains roughly unmodified. Hence, in the following we retain the value of α shown in Eq. (19). We discuss the small VEV of φ in Sec. 4.2. In view of Eqs. (18) and (21) we make the following choice:

m_Φ ∼ m_φ ∼ M_S²/m_P  and  M ∼ M_S ∼ m_3/2 .   (22)

With this choice, the number of parameters of the model in Eq. (1) is minimized to two natural mass scales, m_P and M_S ∼ m_3/2, and a coupling λ ≤ 1. Before concluding this section we notice that, from Eqs. (3), (19) and (22), we find

Φ_c ∼ m_φ/√λ ∼ M_S²/(√λ m_P) ,   (23)

which, in view of Eqs. (9) and (10), gives

Φ̄_end ∼ Φ_c .   (24)

Now, since we need the oscillation of Φ on top of the false vacuum to continue until today, when the amplitude satisfies Eq. (12), we require

Φ̄_end ≲ √(2V₀)/m_Φ ,   (25)

where we also used Eqs. (4) and (12).
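As a compact recap of the locking mechanism of this section, the following LaTeX sketch spells out the saddle-point timing estimate behind Eqs. (7) and (8); the normalization is our reconstruction, with all order-one factors dropped, not the paper's verbatim formulas.

```latex
% Time spent near the saddle per oscillation, and the amplitude at which
% the locking of phi fails (order-one factors dropped).
\begin{align*}
  \Phi(t) \simeq \bar\Phi \sin(m_\Phi t)
    &\;\Rightarrow\; |\Phi| \lesssim \Phi_c
      \ \text{for a time}\ (\Delta t)_s \sim \frac{\Phi_c}{m_\Phi \bar\Phi}
      \ \text{per crossing,} \\
  (\Delta t)_s \sim m_\phi^{-1}
    &\;\Rightarrow\; \bar\Phi_s \sim \frac{m_\phi}{m_\Phi}\,\Phi_c
      \quad \text{(release of the locked field } \phi\text{)}.
\end{align*}
```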
Dark matter requirements
3.1 The value of Φ at the onset of the oscillations

Another important requirement for Φ, if the latter is to account for the dark matter in the Universe, is that it has the correct energy density. This requirement is determined by the initial amplitude Φ_osc of the oscillations of the field. When the oscillations begin we have [cf. Eq. (11)]

(ρ_Φ)_osc ∼ ½ m_Φ² Φ_osc² ,   (26)

where the subscript 'osc' denotes the onset of the Φ-oscillations. According to the Friedmann equation we have ρ = 3H²m_P², where H(t) ≡ ȧ/a is the Hubble parameter and a(t) is the scale factor, parameterizing the Universe expansion. Hence, since the oscillations begin when

H_osc ∼ m_Φ ,   (27)

using Eq. (17) and also that, during the radiation era, ρ ∼ T⁴ (with T being the temperature), we obtain

H_osc/H_eq ∼ m_Φ m_P/T_eq² ∼ (M_S/T_eq)² ,   (28)

where 'eq' denotes the time t_eq of equal matter and radiation densities, at which T_eq ∼ 1 eV. Hence, we see that H_osc ≫ H_eq, which means that the oscillations begin during the radiation dominated period. During this period the density of the Universe scales as ρ ∝ a⁻⁴, while the density of the oscillating scalar field scales as ρ_Φ ∝ a⁻³ [12]. Hence we have ρ_Φ/ρ ∝ a ∝ H⁻¹ᐟ². Therefore, the density of the oscillating scalar field eventually dominates the Universe. Since we want Φ to be the dark matter, we require that its density dominates at t_eq. Consequently, Eq. (27) suggests

Φ_osc ∼ √(T_eq/T_osc) m_P ,  with T_osc ∼ √(m_Φ m_P) ∼ M_S ,   (29)

where we also used Eq. (17). Putting the numbers in the above we find Φ_osc ∼ 10⁻⁶ m_P. This is substantially smaller than the natural expectation for a modulus, which corresponds to an original misalignment (i.e. displacement from its VEV) of order m_P. However, below we attempt to explain this reduced misalignment by means of supergravity corrections. These corrections are expected to lift the flatness of the Φ-direction and enable Φ to begin rolling down long before H ∼ m_Φ.
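A one-line numeric check of the last estimate (a sketch; T_osc ∼ M_S ∼ 1 TeV is the assumed gauge-mediated value):

```python
# Phi_osc / m_P ~ sqrt(T_eq / T_osc), with T_eq ~ 1 eV and T_osc ~ 1 TeV.
T_eq, T_osc = 1.0, 1e12        # in eV
print((T_eq / T_osc) ** 0.5)   # ~1e-6, matching Phi_osc ~ 1e-6 m_P in the text
```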
The effect of supergravity corrections
Supergravity corrections to the potential generate an effective mass term proportional to the Hubble parameter [13]. Thus, the effective potential in Eq. (5) becomes V(Φ) ≃ ½(m_Φ² ± cH²)Φ² [Eq. (30)], where c is a positive constant and we ignored the false vacuum contribution V_0, which is negligible at times much earlier than the present time.
We assume that, in the early stages of its evolution, the Universe underwent a period of cosmic inflation. During and after inflation, until reheating, the Universe is dominated by the density of the inflaton field. The minimum of V(Φ), in general, is expected to be shifted by ΔΦ ∼ m_P at the end of inflation. Hence, at the end of inflation we expect Φ_inf ∼ m_P. After the end of inflation and until reheating V(Φ) is given by Eq. (30) with c ∼ O(1) [13]. However, after reheating, when the Universe becomes radiation dominated, one expects c → 0 and the supergravity correction vanishes 1 [14]. Now, the reheating temperature is T_reh ∼ √(Γ_inf m_P), where Γ_inf is the decay rate of the inflaton field. Using this and Eq. (17) it is easy to obtain the relation of Eq. (31), where the subscript 'reh' denotes the time of reheating. Typically, baryogenesis mechanisms require T_reh > 1 TeV. Therefore, reheating occurs before the onset of the quadratic oscillations, which we discussed in the previous subsection. As a result, after reheating, the motion of the field is overdamped by the excessive friction of a large Hubble parameter (compared to its mass), and so Φ freezes until H is reduced enough for the quadratic oscillations to commence.
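A quick way to see why reheating precedes the onset of the oscillations (a sketch using the estimates quoted above, not the original relation verbatim):

$$\frac{H_{\rm reh}}{m_\Phi} \sim \frac{T_{\rm reh}^2/m_P}{M_S^2/m_P} = \left(\frac{T_{\rm reh}}{M_S}\right)^{2} \gtrsim 1 \qquad\text{for } T_{\rm reh}\gtrsim M_S\sim 1\ \mathrm{TeV},$$

so for any reheating temperature compatible with baryogenesis the Hubble parameter at reheating exceeds m_Φ, and the quadratic oscillations cannot yet have begun.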
To understand this freezing, consider the Klein-Gordon equation of motion of the field, which, in view of Eq. (30), takes the form Φ̈ + 3HΦ̇ + (m_Φ² ± cH²)Φ = 0 [Eq. (32)]. If, after reheating, Φ is dominated by its kinetic density ρ_kin ≡ ½Φ̇², then only the first two terms on the left-hand side (LHS) of Eq. (32) are important, which results in ρ_kin ∝ a⁻⁶. Thus, the kinetic density is soon depleted away and the field becomes potential-density dominated. 2 When this happens the first term on the LHS of Eq. (32) becomes negligible. Then, considering that c = 0 after reheating, it is easy to find the solution of Eq. (33). From this it is evident that, in the interval m_Φ < H < H_reh, the field remains frozen. Consequently, Φ_osc ≃ Φ_reh [Eq. (34)]. Hence, the required value of Φ_osc, given in Eq. (29), may be explained by the evolution of Φ during the period after the end of inflation until reheating. Below we discuss this evolution assuming that the sign of the supergravity correction is positive. The evolution of a scalar field under the influence of supergravity corrections has been thoroughly studied in Ref. [15], where it was found that, during a matter dominated period (such as the one after the end of inflation and before reheating, when the Universe is dominated by massive inflaton particles), the value of the field is given by separate expressions for the cases c > 9/16, c = 9/16 and c < 9/16 [Eq. (38)], where ĉ ≡ min{c, 9/16}.
Footnote 1: The supergravity corrections during radiation domination are due to Kähler couplings of the scalar field with the thermal bath dominating the density of the Universe. For example, consider a scalar field Ψ, which is part of the thermal bath. Then the supergravity corrections arise through the kinetic density, due to terms in the Kähler potential of the form K ∼ Ψ²Φ²/m_P². The kinetic term L_kin ≡ (∂_m∂_n K)∂_μφ_m∂^μφ_n (with ∂_n ≡ ∂/∂φ_n) includes a contribution of the form δL_kin ∼ (Φ/m_P)²∂_μΨ∂^μΨ. Now, naively one expects (∂Ψ)² ∼ ρ_Ψ ∼ T⁴, because Ψ is part of the thermal bath. Since T⁴ ∼ ρ ∼ (Hm_P)², we find that δL_kin ∼ H²Φ², i.e. supergravity corrections seem to result again in an effective mass of order H. However, a more careful examination of the above shows that this is not so. Indeed, ∂_μΨ∂^μΨ = Ψ̇² − (∇Ψ)² = 0, because Ψ is a relativistic (effectively massless) field, whose modes correspond to plane waves of the form Ψ_k = Ψ⁰_k e^{±ikt}. Similar results are obtained with fermions. Hence, the supergravity correction vanishes in the radiation dominated period. KD wishes to thank T. Moroi for clarifying this point.
Footnote 2: We make the conservative assumption that the value of Φ is not dramatically reduced until its density becomes potential-density dominated.
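The origin of the three regimes in c quoted above (before the footnotes) can be seen with a standard power-law ansatz. This is a sketch assuming matter domination, H = 2/3t, with m_Φ neglected against √c H; it is not a reproduction of the exact expressions of Ref. [15]:

$$\ddot\Phi + \frac{2}{t}\dot\Phi + \frac{4c}{9t^2}\Phi = 0,\qquad \Phi\propto t^{\,s}\ \Rightarrow\ s^2+s+\frac{4c}{9}=0\ \Rightarrow\ s=-\frac{1}{2}\left[1\mp\sqrt{1-\frac{16c}{9}}\,\right].$$

For c < 9/16 the exponents are real and Φ decreases as a mild power law; for c > 9/16 they are complex, so Φ oscillates with envelope |Φ| ∝ t^{−1/2} ∝ a^{−3/4}; c = 9/16 is the degenerate boundary, which is why ĉ ≡ min{c, 9/16} appears in Eq. (38).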
Using Eqs. (29) and (38) [cf. also Eq. (34)] one finds the relation of Eq. (39), where H_inf is the Hubble parameter at the end of inflation and we considered Φ_inf ∼ m_P. From the above, it is easy to obtain Eqs. (40) and (41), where we used that T_eq/M_S ∼ 10⁻⁶ and also that V_inf^{1/4} ∼ √(H_inf m_P) and T_reh ∼ √(Γ_inf m_P). The amplitude of the density perturbations (given by the COBE satellite observations), if they are due to the amplification of the quantum fluctuations of the inflaton field, determines the energy scale of inflation V_inf as given in Eq. (42) [16], where ε is one of the so-called slow-roll parameters, associated with the rate of change of H during inflation. Typically, ε ∼ 1/N, where N ≃ 60 is the number of the remaining e-foldings of inflation when the cosmological scales exit the causal horizon. Hence, we see that the energy scale of inflation is determined by the COBE observations to be given by the energy of grand unification: V_inf^{1/4} ∼ 10¹⁶ GeV. Inserting this value into Eq. (41) we find that, for √c > 3/4, we obtain T_reh ∼ 10¹⁰ GeV. A reheating temperature this high is in danger of violating the well-known gravitino constraint, which requires T_reh ≤ 10⁹ GeV. Enforcing this constraint we find the corresponding allowed range for c. This is a rather narrow range for the value of c, albeit quite realistic. However, this does not necessarily imply any tuning. Indeed, different values of c result in different values of Φ_osc, which, with Φ being the dark matter, would give different values of T_eq. The latter is determined observationally and has no fundamental origin. Hence, one can view the above result as an observational determination of c. Still, we can expand the allowed range of c even above 9/16 if we break loose from the COBE condition in Eq. (42). This is possible if we consider alternative scenarios for structure formation. For example, if we assume that the primordial spectrum of density perturbations is due to the amplification of the quantum fluctuations of some curvaton field other than the inflaton, as suggested in Ref. [17], then the COBE constraint on V_inf^{1/4} becomes relaxed into an upper bound [18]. Assuming c ≥ 9/16, Eqs. (39) and (41) give a relation which, for the allowed range of T_reh, yields the corresponding range for c. From the above it is clear that the necessary initial conditions for the quadratic oscillations of Φ, in order for the latter to be the dark matter particle, can be naturally attained by considering the action of supergravity corrections on V(Φ) after the end of inflation and until reheating.
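As a cross-check of the quoted inflationary energy scale (an estimate added here using the present-day normalization of the primordial spectrum, A_s ≈ 2 × 10⁻⁹, instead of the original COBE numbers):

$$V_{\rm inf}^{1/4}\simeq\left(24\pi^2\,\epsilon\,A_s\right)^{1/4} m_P\simeq\left(24\pi^2\times\tfrac{1}{60}\times 2\times10^{-9}\right)^{1/4}\times 2.4\times10^{18}\ \mathrm{GeV}\approx 2\times10^{16}\ \mathrm{GeV},$$

consistent with the grand-unification value quoted in the text for ε ∼ 1/N ∼ 1/60.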
Avoiding the decay of Φ into φ-particles
One final requirement for our dark matter field Φ is that it should not decay into φ-particles until the present time. Indeed, the coupling between the two fields suggests that, in principle, such a decay is possible. Here we find the appropriate constraint on the coupling constant λ, which ensures that such a decay does not take place.
Let us consider first the perturbative decay of the Φ condensate. The decay rate for the decay Φ → φφ is estimated in Eq. (46). In order to avoid the decay we need to have Γ_{Φ→φφ} < H until today. Now, since Φ̄ ∝ a^{−3/2}, it is easy to find the relation of Eq. (47), where w is the barotropic parameter corresponding to the equation of state of the dominant component of the content of the Universe (w = 0 {w = 1/3} for the matter {radiation} dominated epoch). Since w ≤ 1, we see that the constraint on Γ_{Φ→φφ} relaxes with time. Hence, the tightest constraint corresponds to the earliest time when the decay Φ → φφ can occur. Now, this decay is possible only when m_Φ ≥ 2m_φ^eff ∼ √λ Φ̄, where we considered that, during most of the oscillation period, Φ ∼ Φ̄. Hence, the decay can take place only after the amplitude of the oscillations becomes Φ̄ < Φ̄_m, where Φ̄_m ∼ m_Φ/√λ [Eq. (48)]. From Eqs. (46) and (48) we find Eq. (49). Therefore, the constraint for the avoidance of the decay Φ → φφ reads as in Eq. (50), where we used that m_Φ Φ̄_eq ∼ √ρ_eq ∼ T_eq² ∼ m_P H_eq.
Using Eqs. (49) and (51) and enforcing the constraint in Eq. (50), we obtain the bound of Eq. (52). Apart from the perturbative decay of Φ, it is possible that φ-particle production occurs in an explosive manner due to parametric resonance effects [19]. This process takes place during the small fraction of each oscillation when Φ is close enough to the origin that m_Φ ≥ 2m_φ^eff ≃ 2√λ Φ(t), even though Φ̄ > Φ̄_m. The efficiency of the resonance is determined by the so-called q-factor of Eq. (53). When q ≫ 1 we are in the broad resonance regime and the production of φ-particles is quite efficient. However, despite this fact, their energy is only a fraction of the total energy in the oscillating Φ. Consequently, the evolution of Φ is hardly affected by the resonant production of φ-particles. The produced φ-particles are expected to eventually thermalize and become a (negligible) component of the thermal bath. The resonance becomes narrow when q ≲ 1, which occurs deep into the radiation epoch. Soon afterwards, backreaction and rescattering effects are expected to shut down the resonance and terminate the non-perturbative production of φ-particles. Hence, the resonant decay of Φ does not really impose any additional constraints. 3 In view of Eqs. (25) and (52) we see that the allowed range for λ is that of Eq. (54). Such a small coupling between flat directions can be naturally realized through being determined by the Planck-suppressed expectation value of some other field.
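For orientation, for an interaction of the form ½λΦ²φ² the resonance parameter entering the Mathieu equation for the φ modes is usually defined as (a standard definition quoted here as an assumption; the q-factor of Eq. (53) may differ by factors of order unity):

$$q \equiv \frac{\lambda\,\bar\Phi^2}{4\,m_\Phi^2}\,,$$

so the resonance is broad (q ≫ 1) while Φ̄ ≫ m_Φ/√λ ∼ Φ̄_m and becomes narrow once the amplitude has redshifted below this value, consistent with the statement that the narrow regime is reached deep into the radiation epoch.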
Dark energy requirements
4.1 Locking conditions for φ
Our results also depend on the initial conditions for our quintessence field φ, which has to find itself near the origin in order to become locked when the Φ oscillations begin. The required condition, in fact, is φ_osc ≪ M.
Recall, here, that the subscript 'osc' denotes the onset of the oscillations of the field Φ and not of φ.
Firstly, due to the interaction term in Eq. (1), if V_inf is small one may not be able to have both Φ and φ of the order of m_P after the end of inflation (despite the fact that λ < 10⁻¹⁹ [cf. Eq. (54)]), because we need V(Φ, φ) ≪ V_inf; otherwise the inflationary dynamics would be disturbed. As we have chosen Φ_inf ∼ m_P, we obtain the bound of Eq. (56) for the value of φ at the end of inflation. 4 Assuming that φ is also subject, like Φ, to supergravity corrections, which provide a contribution c′H² to its mass-squared, we can estimate φ_reh using the analog of Eq. (38). Using Eq. (40) it is easy to obtain the resulting ratio φ_reh/φ_inf. If both c, c′ ≥ 9/16, then the above gives φ_reh ∼ 10⁻⁶ φ_inf. However, one can achieve a substantially smaller φ_reh. For example, with c′ ≥ 9/16 and c ≈ 0.4 one finds φ_reh^min ∼ 10⁻¹³ φ_inf. As in the case of Φ, the supergravity corrections disappear (they cancel out) after reheating. Consequently, during the radiation dominated epoch, the effective mass of φ, according to Eq. (3), is given by m_φ^eff ≃ √λ Φ_reh, where we considered that Φ_reh ∼ Φ_osc ≫ Φ_c [cf. Eq. (34)]. Since, in the interval m_Φ < H < Γ_inf, Φ remains frozen, the above effective mass remains constant after reheating and until the oscillations of Φ begin. Comparing this effective mass with Γ_inf one finds the corresponding condition, where we also used Eq. (40). In view of Eq. (25) we find λ^{1/4} × 10⁻³ m_P > 10⁸ GeV. Hence, considering the gravitino constraint T_reh ≤ 10⁹ GeV, we expect that m_φ^eff > Γ_inf and, therefore, the oscillations of φ begin immediately after reheating.
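A sketch of how the bound on φ at the end of inflation can arise (assuming the coupling between the two fields is of the hybrid form ½λΦ²φ², as it is used elsewhere in the text; the exact expression of Eq. (56) is not reproduced): requiring that the interaction energy not disturb the inflationary dynamics,

$$\tfrac{1}{2}\lambda\,\Phi_{\rm inf}^2\,\phi^2 \ll V_{\rm inf}\,,\qquad \Phi_{\rm inf}\sim m_P\ \Rightarrow\ \phi\ll\frac{\sqrt{2V_{\rm inf}}}{\sqrt{\lambda}\,m_P}\,.$$

For V_inf^{1/4} ∼ 10¹⁶ GeV and λ below the bound of Eq. (54), the right-hand side exceeds m_P, which is why φ_inf ≲ m_P is sufficient in the example below.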
During these oscillations of φ we have φ ∝ √ρ_φ ∝ a^{−3/2} ∝ H^{3/4}, which results in Eq. (62). For the allowed range of T_reh the above corresponds to 10⁻⁹ ≤ φ_osc/φ_reh ≤ 1. Putting Eqs. (56), (59) and (62) together we obtain Eq. (63). The first factor on the right-hand side of the above can be as low as 10⁻⁹, the second one can be as low as 10⁻¹³, while the last factor in front of m_P cannot be larger than unity. Hence, it is evident that the requirement in Eq. (55), which demands φ_osc < 10⁻¹⁵ m_P, may well be satisfied. Let us demonstrate this with a small example. Suppose that V_inf^{1/4} ∼ 10¹⁶ GeV and we choose c ≈ 0.4 and c′ ≥ 9/16. Using this, and in view also of Eq. (54), Eq. (56) suggests that φ_inf ≲ m_P. Hence, Eq. (63) suggests that φ_osc < 10⁻¹⁵ m_P can be achieved if T_reh ≥ 10 M_S, which allows almost the entire range of T_reh.
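The quoted range for φ_osc/φ_reh follows from the scalings above (a back-of-the-envelope estimate; Eq. (62) itself is not reproduced):

$$\frac{\phi_{\rm osc}}{\phi_{\rm reh}} \sim \left(\frac{H_{\rm osc}}{H_{\rm reh}}\right)^{3/4} \sim \left(\frac{m_\Phi\,m_P}{T_{\rm reh}^2}\right)^{3/4} \sim \left(\frac{M_S}{T_{\rm reh}}\right)^{3/2},$$

which spans 1 down to 10⁻⁹ as T_reh ranges from M_S ∼ 1 TeV up to the gravitino bound of 10⁹ GeV.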
As a result of the above, our assumption φ ≃ 0 in Sec. 2 is well justified.
The mass and VEV of φ
In order to achieve a false vacuum density as small as ρ_0, we not only require a small tachyonic mass for our locked quintessence field φ but also a small VEV, according to Eq. (22). One way to achieve this is to stabilize the φ-direction by means of some high-order non-renormalizable term of order 2n in φ, where n > 2 and Q is an appropriate large cut-off scale linked to the VEV M (here we also considered Eq. (2)). The most natural choice is Q = m_P, which gives n = 4. Hence, the action of non-renormalizable terms may well reduce the VEV of φ naturally 5. The important issue here is that we need to preserve the smallness of the tachyonic mass. For a flat direction one expects the dominant contribution to the mass to be of the form (M_S/m_P)^p M_S with p = 1. This is the case, for example, of the dark matter field Φ, as shown in Eq. (17). However, in the case of φ, we need to suppress this contribution and consider p = 2 instead, according to Eq. (23). It is conceivable that this may occur due to accidental cancellations in the Kähler potential, or due to some symmetry which protects m_φ. In any case, even if this requirement corresponds to a certain level of fine-tuning, this tuning is much less stringent than what is required in most quintessence models (with typical effective mass m_Q ∼ H_0), because m_φ ∼ 10¹⁵ H_0. Moreover, since m_φ ∼ 10⁹ H_eq, supergravity corrections during the matter era after t_eq are negligible, in contrast to the usual quintessence models [8]. Note, however, that there exist some dark energy models corresponding to particles with mass much larger than H_0. For example, this is possible in scalar-tensor theories of gravity [21], which can account for both quintessence and dark matter (e.g. see Refs. [22] and [23] respectively). Another recent such example is dark energy from mass-varying neutrinos [24].
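A sketch of why such a term can give M ∼ M_S (an illustrative estimate assuming a potential of the form V ⊃ −½m_φ²φ² + φ^{2n}/Q^{2n−4} with order-unity coefficients; this is an assumption about the form of the stabilizing term, not a quotation of the paper's equations):

$$V'(\phi)=0\ \Rightarrow\ M\sim\left(m_\phi^2\,Q^{2n-4}\right)^{\frac{1}{2n-2}};\qquad n=4,\ Q=m_P,\ m_\phi\sim\frac{M_S^3}{m_P^2}\ \Rightarrow\ M\sim\left(\frac{M_S^6}{m_P^4}\,m_P^4\right)^{1/6}=M_S\,,$$

consistent with the choice M ∼ M_S in Eq. (22) and with the statement in the conclusions that an 8th-order term stabilizes φ.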
A marginal increase of m_φ may be achieved if we consider that the VEV of φ is reduced by the action of loop corrections (instead of non-renormalizable terms). These are of a form involving a coefficient C ≪ 1, in which case the relation between the VEV and the cut-off is modified accordingly. The above setup can increase the tachyonic mass by a factor 1/√C. The best case, however, corresponds to Q = m_P, which gives C ≃ 0.015, i.e. m_φ is increased at most by a factor of 8.
Finally, the smallness of the tachyonic mass of our locked quintessence may result in the appearance of a fifth force [25], because of the associated Compton wavelength. From the latter we see that such a fifth force cannot bias the formation of large structures like galaxies and galactic clusters. It is conceivable, though, that it may affect the generation of population III stars and stellar formation in general. However, the fifth force is strongly constrained by the solar-system tests of the equivalence principle [26]. Hence, we require that φ is some hidden-sector field with suppressed interactions with ordinary baryonic matter.
Discussion and conclusions
We have analyzed a unified model for the dark matter and the dark energy. As dark matter we used a modulus field Φ, which corresponds to a flat direction of supersymmetry. The field undergoes coherent oscillations that correspond to massive particles (WIMPs), constituting pressureless matter. Our Φ field is weakly coupled with another scalar φ, through a hybrid-type potential, very common in supersymmetric theories. The scalar φ corresponds to a flat direction lifted by non-renormalizable terms. Due to the above coupling the oscillating Φ keeps φ 'locked' on top of the saddle point of the potential, resulting in a non-zero false vacuum contribution V_0. The amplitude of the oscillations decreases in time due to the expansion of the Universe. Below a certain value Φ_Λ the Universe becomes dominated by the false vacuum density and a phase of accelerated expansion begins. Acceleration continues until the amplitude of the oscillations decreases down to a critical value Φ_c, when the 'locked' quintessence field φ is released and rolls down to its VEV. At this point the system reaches the true vacuum and the accelerated expansion ceases.
We have shown that it is possible to explain both dark matter and dark energy by taking the supersymmetry breaking scale to be M_S ∼ TeV, which corresponds to low-scale gauge-mediated supersymmetry breaking. The VEV of φ has to be given also by M_S, which is possible to achieve by stabilizing its potential with the use of a non-renormalizable term of the 8th order. Hence, using only two natural mass scales, m_P and M_S ∼ m_{3/2}, we are able to achieve cosmic coincidence, in the sense that we manage to obtain comparable densities for dark matter and dark energy at present without severe fine-tuning. In order to successfully account for both dark matter and dark energy our scalar fields need to have the correct initial conditions. By studying the dynamics of our scalar fields in the early Universe, we have demonstrated that the required initial conditions are naturally attained when considering the action of supergravity corrections to the scalar potential during the period following the end of primordial inflation and until reheating.
The advantages of our model are the following. Firstly, it uses a theoretically well motivated framework to address, in a unified manner, both the open issues of dark matter and dark energy. Also, coincidence is achieved with the use of only natural energy scales and initial conditions. The observational consequences of our model are similar to those of ΛCDM because, during most of the evolution of the Universe, the model is reduced to Eq. (5) (i.e. it corresponds to a collection of Φ-particles (WIMPs) plus an effective cosmological constant Λ_eff = V_0/m_P²). Hence, our model enjoys all the successes of ΛCDM but it does not suffer from its disadvantages, namely the extreme fine-tuning of Λ and also the conceptual blunders of eternal acceleration and future causal horizons. Our model avoids eternal acceleration because the 'locked' quintessence field terminates false vacuum domination when it is released from the origin. The only tuning problem that our model suffers from is the smallness of the tachyonic mass m_φ of our quintessence field, which may be due to some approximate symmetry. Still, we have m_φ ≫ H_eq ≫ H_0, which means that the supergravity corrections to the φ-direction are negligible even after t_eq, in contrast to the generic problem of most quintessence models, which have m_Q ∼ H_0.
It is interesting to estimate how long the late period of accelerated expansion lasts. After domination by the false vacuum we have H_0 ≃ √V_0/(√3 m_P) = constant. Hence, a phase of (quasi) de Sitter expansion begins, with a ≃ a_0 exp(H_0 Δt), where Δt = t − t_0. Now, for the oscillating Φ we have Φ̄ ∝ √ρ_Φ ∝ a^{−3/2}. Thus, we obtain the corresponding estimate of the duration, where we have used Eqs. (4), (12) and (17). In view of Eqs. (22) and (54) we see that the period of acceleration may last up to 8 Hubble times (e-foldings) depending on the value of λ. Another interesting point regards the coupling g_Φ of the dark matter particle to its decay products. Eqs. (15), (17) and (22) suggest that g_Φ should lie in the range 10⁻³⁰ ≤ g_Φ < 10⁻¹⁵, with the lower bound corresponding to gravitational decay, when g_Φ ∼ m_Φ/m_P. Thus, Φ is truly a WIMP [cf. also Eq. (54)].
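A rough version of the e-folding estimate (a sketch under the assumptions that acceleration starts when m_Φ²Φ̄² ∼ V_0 ∼ m_φ²M², that it ends when Φ̄ ∼ Φ_c ∼ m_φ/√λ, and that Φ̄ ∝ e^{−3H_0Δt/2} during the quasi de Sitter phase; these identifications are inferred from the text rather than quoted from the omitted equation):

$$N = H_0\,\Delta t \simeq \frac{2}{3}\ln\frac{\bar\Phi_\Lambda}{\Phi_c} \sim \frac{2}{3}\ln\!\left(\sqrt{\lambda}\;\frac{m_P}{M_S}\right) \approx 8\text{ to }9 \quad\text{for } \lambda\sim10^{-19},$$

in line with the statement that acceleration may last up to about 8 e-foldings for the largest allowed coupling.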
We should also point out here that our oscillating Φ-condensate does not necessarily have to be the dark matter. Indeed, it is quite possible that φ remains locked on top of the false vacuum while ρ_Φ is negligible at present. That way, our model can account for the dark energy requirements even if the initial conditions for Φ (e.g. the value of c) are not appropriate for the latter to be the dark matter. Indeed, the locking of quintessence requires that ρ_Φ(t_0) ≥ ρ_Φ^min, where ρ_Φ^min ∼ m_Φ²Φ_c² corresponds to the minimum energy for the oscillations. In view of Eqs. (4), (13), (17) and (22) it is easy to find ρ_Φ^min/ρ_0 ∼ 10⁻³⁰ λ⁻¹. Hence, depending on λ, Φ may contribute only a small fraction of the dark matter, while still being able to lock quintessence and cause the observed accelerated expansion at present. However, we feel that using Φ to account also for the dark matter renders our model much more effective and economical, without any additional tuning requirements (in the sense that the required value of c is natural).
To summarize, we have presented a unified model of dark matter and dark energy in the context of low-scale gauge-mediated supersymmetry breaking. Our model retains the predictions of ΛCDM, while avoiding eternal acceleration and achieving coincidence without significant fine-tuning. The initial conditions of our model are naturally attained due to the effect of supergravity corrections to the scalar potential in the early Universe, following a period of primordial inflation. | 2014-10-01T00:00:00.000Z | 2004-01-29T00:00:00.000 | {
"year": 2004,
"sha1": "76b5cddb9ad3281de9e7f41dd2438fede2caa751",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-ph/0401238",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "76b5cddb9ad3281de9e7f41dd2438fede2caa751",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
202553812 | pes2o/s2orc | v3-fos-license | Role of fruit juice in achieving the 5-a-day recommendation for fruit and vegetable intake
Abstract Although there is strong evidence that consumption of fruit and vegetables is associated with a reduced rate of all-cause mortality, only a minority of the population consumes 5 servings a day, and campaigns to increase intake have had limited success. This review examines whether encouraging the consumption of fruit juice might offer a step toward the 5-a-day target. Reasons given for not consuming whole fruit involve practicalities, inconvenience, and the effort required. Psychologically, what is important is not only basic information about health, but how individuals interpret their ability to implement that information. It has been argued that fruit juice avoids the problems that commonly prevent fruit consumption and thus provides a practical means of increasing intake and benefitting health through an approach with which the population can readily engage. Those arguing against consuming fruit juice emphasize that it is a source of sugar lacking fiber, yet juice provides nutrients such as vitamin C, carotenoids, and polyphenols that offer health-related benefits. Actively encouraging the daily consumption of fruit juice in public health policy could help populations achieve the 5-a-day recommendation for fruit and vegetable intake.
INTRODUCTION
In many countries, although not all, half a glass or 1 glass of unsweetened 100% fruit or vegetable juice counts toward the 5 portions a day. Yet, some have claimed that fruit juice is little more than a source of sugar, predisposing consumers to weight gain and obesity, 1,2 and have proposed that fruit juice should not be included. The veracity of this argument has been examined, and given the advantages in terms of the ease of consumption of juice, it was considered whether a recommendation to consume fruit juice might be a simple and effective means of moving toward the goal of 5 servings a day. It was concluded that 100% fruit juice should be distinguished from juice sweetened with sugar but that daily consumption would benefit large sections of the population.
In this area, the argument has tended to be based on a particular isolated nutrient, usually sugar, but to a lesser extent fiber. There is a need, however, to remember that 100% juice, the topic of this review, does not have added sugar and should be distinguished from sweetened juices and fruit cordials. Nevertheless, the World Health Organization 3 suggested limiting the levels of free sugars in the diet with the implicit assumption that, for public health purposes, all sources can be added together and treated as one. However, Public Health England, 4 when they considered the definition of free sugars, commented that current definitions reflected a "limited understanding of the extent to which the cellular structure of different types of processed foods containing naturally occurring sugars is broken down and the differences in the physiological response to sugar consumed in different forms." This includes the sugars found in fruit.
The first objective of this review was to establish the factors that make increasing fruit and vegetable consumption so difficult and to consider the extent to which juice can solve these problems. Then the alleged benefits and alleged negative consequences of drinking juice were examined. To date the debate has centered on sugar and fiber, although meta-analysis finds that additional portions of fruit and vegetables, but also fruit juice, decrease the risk of coronary heart disease and all-cause mortality. 5 The wider context is that only a small minority of the population consumes the recommended amount of fruit and vegetables, 6,7 and campaigns to increase intake have had little success. If fruit juice offers the means of adding to efforts to consume 5 servings of fruits and vegetables per day in a more acceptable way than preparing and consuming intact items, should this become a recommended approach, albeit one that does not replace the existing intake of intact fruit and still favors the majority of intake being intact fruit?
FACTORS INFLUENCING WHETHER INDIVIDUALS ACHIEVE 5 SERVINGS A DAY
In one sense, 5-a-day campaigns have been extremely successful. It is widely known that the eating of fruit and vegetables is an essential aspect of a healthy diet. Supermarket shelves have many food items labeled with the banner "one of your five-a-day." There is no need to say this refers to 1 portion of the recommended fruit and vegetable servings or why it is beneficial: it can be assumed that people understand the implicit message. However, because the majority of the population fails to consume the required amounts, it is apparent that it is insufficient to merely convey basic information to consumers. After identifying 50 trials with children, a Cochrane review concluded that "the evidence for how to increase fruit and vegetable consumption of children remains sparse." 8 To increase consumption there is a need to understand the basic motivations that prevent the choice of fruit and vegetables and to offer solutions that minimize their impact. In this context, it was argued that fruit juice successfully addresses many problems and therefore has the potential to help achieve the recommended intake.
The likelihood of consumption varies with a range of social and psychological parameters. A survey in New York found that 50% of the population ate <1 portion a day 6 : low levels of education and being male were associated with a lower intake. Other problems included no convenient access to fresh produce and the high cost of produce. In another US study, those trying or not trying to increase intake were compared. 7 Those not trying to change their diet were more likely to be male, younger, and have a higher body mass index (BMI). Barriers to consumption included the impressions that fruits and vegetables take time to prepare, do not stay fresh for long (so are not readily available in the home), are costly, and do not satisfy hunger. 7
Psychosocial factors
A review of psychological and social factors that influence consumption concluded that there was strong evidence that the eating of fruit and vegetables was influenced by self-efficacy, social support, and knowledge. 9 There was weaker evidence for the influence of attitudes/beliefs, perceived barriers to consumption, the intention to change, and autonomous motivation.
According to social cognitive theory, 10 self-efficacy is a major factor in determining the setting of goals and the resulting effort that is expended. There is a distinction between "outcome expectancy" (the estimation that a certain outcome will result following a given behavior) and "efficacy" (the belief that one can successfully carry out that behavior). What is important is not only basic information, but how the individual interprets the relevance of that information to himself or herself. That is, although it might be fully accepted that fruit and vegetable intake results in better health, an individual will have little motivation if there is also a perception that he or she is unable to perform the relevant behavior.
As an example, Kreausukon et al 11 compared interventions aimed at increasing intake, based on either giving basic information or, in addition, enhancing self-efficacy. The control group received information concerning general health and nutrition education, whereas a second group, in addition, received a program that focused on self-efficacy and planning. For example, they planned when, where, and how they intended to consume fruit and vegetables. Both the intention to consume and the amount consumed were greater in those who received self-efficacy training.
Self-determination theory distinguishes autonomous from controlled motivation. When a person fully endorses a behavior and experiences choice, this is said to reflect "autonomous (intrinsic) motivation." When a person feels coerced and experiences pressure and obligation, this is said to reflect "controlled (extrinsic) motivation." McSpadden et al 12 reported that, whereas autonomous motivation was positively related to intake, there was a negative association with controlled or extrinsic motivation. An example of a question related to autonomous motivation is "I eat fruit and vegetables because I want to take responsibility for my own health," whereas a controlled-motivation question is ". . . because I want others to approve of me." This type of finding is important because, when trying to change human behavior, some basic information is necessary, although this can be provided relatively easily; the difficult part is generating the motivation to act on that information.
The nature of motivation is particularly relevant when considering health-related behavior. Many such behaviors reflect extrinsic motivation: that is they are not carried out for current enjoyment but for some subsequent reward such as social approval. In contrast, intrinsic motivation results in freely chosen behavior that reflects core beliefs and values, and carrying it out generates spontaneous rewards and satisfaction.
It has been found that intrinsic motivation is more likely to result in better outcomes than extrinsic motivation. 13 There is a greater likelihood that a behavior will be continued when it is intrinsically motivated, 14 that is, when it is carried out for its own sake and thus there is engagement and effort. Deci and Ryan 15 identified 3 factors that lead to intrinsic motivation and hence the initiation of behavior: competence (one must have the ability to do the job); autonomy (one must be free to make one's own decisions without coercion); and relatedness (one should feel cared for, supported, and connected to others).
Efficacy and social support 9 have parallels with the factors that benefit intrinsic motivation, competence and relatedness. 15 In addition, knowledge 9 and autonomy have been mentioned. 15 Below it is argued that fruit juice, rather than intact fruit, is more likely to generate feelings of self-efficacy and hence is more likely to be associated with long-term changes in diet.
Motivation to drink fruit juice
The major reasons for not consuming 5 servings of fruit and vegetables a day relate to practicalities, convenience, and the effort required. Fruit juice offers a solution to many of these problems and hence has the potential to increase consumption. Juice is convenient, easily transportable, and requires no preparation. It can be stored in bulk because it has a longer shelf life than fresh products, and because it can be stored at home, additional visits to the supermarket are unnecessary. It can be consumed on the move, and its taste makes it attractive to many. Juice is also a cheaper way to purchase fruit: it can take about fifteen oranges to make a liter of juice, yet in a supermarket the juice costs about a quarter of the price of the intact fruit.
From a psychological perspective, the perceived problems related to fruit and vegetable consumption will reduce feelings of efficacy 10 and competence, 15 and thereby intrinsic motivation and the likelihood of regular consumption. Because consuming juice can overcome these problems, the likelihood of consumption is increased. "Efficacy" is central: there has to be a belief that one can successfully carry out the necessary actions. The juiced rather than intact variety is associated with feelings of efficacy and competence, with the possibility that this will generate intrinsic motivation. Few people will feel unable to open a carton or bottle.
POSSIBLE NEGATIVE CONSEQUENCES OF CONSUMING JUICE
Even if one accepts the argument that it is easier to encourage the consumption of juice rather than intact items, any recommendation to consume juice requires evidence that it produces a benefit similar to that of whole fruit. Those arguing that fruit juice should not have a role in the 5 servings a day typically claim that the sugar in juice results in obesity and that the removal of fiber reduces its nutritional quality. For brevity, a brief overview of the literature has relied on the quoting of systematic reviews and meta-analyses. Such balanced independent summaries prevent the cherry-picking of papers to support a preexisting point of view and, when considering public health, indicate the most common response.
Fruit juice, caloric intake, and obesity
When the French government reassessed their dietary guidelines in 2016, it was decided that fruit juices should be classified as a sugary drink rather than being placed in the fruit category. Although in France it is still possible to count juice as part of the 5 a day, there was clearly a concern about sugar and obesity. The medical campaigning group Action on Sugar 16 goes as far as recommending the removal of fruit juice from the diet to help reduce obesity. Such a view gained support from Jebb, 17 who, when she was the advisor on obesity to the United Kingdom government, suggested that orange juice should no longer be 1 of the 5 a day. She stated that orange juice contains as much sugar as fizzy drinks and is absorbed so quickly that by the time it gets to one's stomach one's body doesn't know whether it's Coca-Cola or orange juice. However, should sugar that is intrinsic to fruit be viewed in a similar manner to the sugar added to soda?
There is considerable evidence that the consumption of sugar-sweetened beverages (SSBs) is a risk factor for obesity, and because fruit juice contains intrinsic sugar, the 2 have often been lumped together. Sugar increases palatability and thus increases intake without there being sufficient calorie compensation at the next meal. 18 In addition, the reduced level of fiber in juice has been viewed as reducing satiety. However, an interesting approach was to give orange juice 3 times a day, either with or between meals 19 ; when consumed with a meal, the juice resulted in lower fat mass and gamma-glutamyl transferase levels, a measure of liver functioning. In contrast, consumption between meals increased body fat and decreased insulin sensitivity. However, preload studies should be viewed only as hypothesis-generating because energy compensation can take place over a longer period than these studies monitor. 20 The important question is whether, over time, drinking fruit juice increases body weight to the extent that there is an increased incidence of disease.
A meta-analysis illustrates a common problem. 21 When 20 cohort studies of children and 7 studies of adults were summarized, the consumption of SSBs was associated with weight gain over the course of a year. However, the definition of SSBs involved adding together soda, sweetened beverages, sports drinks, and generic fruit drinks. As such, nothing can be concluded about 100% fruit juice. There is a need to distinguish 3 types of drink; those sweetened with sugar that contain no fruit juice; drinks with less than 100% fruit juice that are sugar sweetened; and 100% fruit juice to which no extrinsic sugar has been added. Is 100% fruit juice only guilty by association, or does it predispose to obesity in its own right?
Children
Consumption by children has attracted particular attention. The American Academy of Pediatrics 22 acknowledges potential benefits but also the possibility of detrimental consequences of drinking fruit juice. In those aged <6 years, there is the possibility of diarrhea, dental caries, and the development of an allergic reaction to orange. After this age, fewer issues arise, although the possibility of weight gain has been a concern for some. The consumption of whole fruit rather than a drink was encouraged, although it was unclear why these were viewed as alternatives: drinks quench thirst and prevent dehydration, whereas there are different motivations to eat whole fruit. A systematic review found that in those aged <12 years, SSBs were associated with total and central adiposity, whereas drinking 100% fruit juice was not. 23 However, in those aged <5 years, there was some evidence that juice was associated with greater body weight. A later review concluded, "Consumption of 100% fruit juice is associated with a small amount of weight gain in children ages 1 to 6 years that is not clinically significant, and is not associated with weight gain in children ages 7 to 18 years." 24 Again, it is essential to distinguish 100% fruit juice from sugar-sweetened products. The recommendation of the American Academy of Pediatrics is that no more than 4-6 fluid ounces (approximately 120-180 mL) of juice per day be consumed by young children.
Adults
The 2 Nurses' Health Studies and the Health Professionals Follow-Up cohort were integrated to produce a cohort of 120 877 participants, 25 and over 4 years changes in the intake of foods were related to changes in weight. An increased intake of french fries was associated with an annual increase in weight of 0.84 lbs (380 g); potato chips (crisps) with 0.42 lbs (192 g); SSBs with 0.25 lbs (113 g); and fruit juice with 0.08 lbs (35 g). In contrast, an increased fruit intake decreased weight by 0.12 lbs a year (55 g). It was suggested that the small effect of fruit juice on body weight reflected its consumption in small quantities and a tendency to have 1 rather than multiple servings. These findings assumed that the portion of fruit juice was 240 mL; if instead the recommended portion of 150 mL had been consumed, the annual increase in weight would have been 0.05 lbs (22 g) a year. In fact, the findings cannot be taken at face value because the studies were not able to distinguish 100% fruit juice from sweetened beverages, and inevitably, in part at least, the results reflected added sugar.
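The portion-scaling arithmetic behind that last figure (a simple pro-rata illustration added here; the assumption of linear scaling with portion size is ours, not part of the original analysis):

$$0.08\ \mathrm{lb/year}\times\frac{150\ \mathrm{mL}}{240\ \mathrm{mL}} = 0.05\ \mathrm{lb/year}\approx 22\ \mathrm{g/year}.$$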
In fact, many studies have found that the consumption of fruit juice was associated with lower body weight. Pereira and Fulgoni 26 used National Health and Nutrition Examination Survey data (NHANES) to examine the influence of 100% fruit juice. Although only 28% of participants consumed fruit juice, they were more insulin sensitive, had a lower risk of obesity, and were less likely to have metabolic syndrome. Similarly, a sample of 13 971 from the 2003-2006 NHANES survey found that those adults who consumed 100% orange juice had a lower BMI, waist circumference, and percentage body fat, 27 although these effects of orange juice were not found in children and adolescents. More recently, a Canadian study examined 26 340 individuals and found that fruit intake was inversely related to BMI, waist circumference, and the percentage of fat mass, 28 with a broadly similar pattern being associated with 100% fruit juice but not vegetables.
Celis-Morales et al 29 collected data in 7 European countries and identified factors that predicted adult obesity. A greater BMI was associated with a higher energy intake, eating red meat, and consuming SSBs. In contrast, a lower BMI was associated with a greater intake of whole grains, fruits and vegetables, and a Mediterranean diet, as well as consumption of fruit juice.
The picture is that at any point in time the consumption of fruit juice is associated with a lower BMI. [26][27][28][29] However, if you change your diet and consume additional calories, over time your body weight will rise 25 ; this is true regardless of the source of additional calories. As such, fruit juice is no different than other food and is better than many because it produces a relatively small increase in weight. The Harvard studies 25 found that juice increased weight by only 0.05 lbs (22 g) a year, although there had been no correction for the increase in calorie intake that may well explain the finding. Similarly, when, over 3 years, weight was monitored in 49 106 postmenopausal women, 30 each additional daily serving of 100% fruit juice was associated with an annual gain of 0.13 lbs (59 g).
Hebden et al 31 reviewed studies that related fruit consumption to adiposity. Four of 6 cross-sectional studies found that fruit consumption was associated with a lower level of at least 1 measure of adiposity. However, none of 11 intervention trials found that increased consumption was influential in this respect, although the trials tended to be small and lasted 4-12 weeks.
Hebden et al 31 also concluded that fruit juice increased weight over the long term; that is, fruit and fruit juice allegedly had opposite effects. This conclusion was based on 8 mostly large prospective cohorts, of which only 1, the smallest, had distinguished fruit juice with or without added sugar. All other surveys used the Harvard Willett Food Frequency Questionnaire, which simply asks how frequently fruit juices were consumed but does not distinguish juice to which sugar has and has not been added. When the consumption of fruit juice is reported generically, it is inevitable that 100% fruit juice and sugar-sweetened juice will be treated as one. Because much of what is reported will be extrinsically sweetened juice, nothing can be concluded about 100% fruit juice. Given that the outcomes differ when these types of juice are distinguished, 23,[26][27][28][29] this is a serious problem. The hypothesis that needs testing is that adding sugar to juices results in a heavier weight, whereas 100% fruit juice does not.
When Rampersaud and Valim 32 more specifically reviewed the relationship between orange and grapefruit consumption and anthropometric measures, they found a lower BMI in those who consumed juice, although intervention studies found no influence. They concluded that moderate consumption of citrus juices "do[es] not appear to negatively impact body weight, body composition, or other anthropometric measures in children and adults." 32 The picture is that a dietary style commonly associated with a lower BMI includes both intact fruit and juice. However, in terms of increasing energy intake, since fruit juice is a minor problem compared with a range of more calorific foods, 25 it should be consumed in moderation to ensure that calorie intake does not markedly increase.
It is noteworthy that, rather than playing a major role in calorie intake, fruit juice consumption decreases throughout life. 33,34 A systematic review found that Germans consumed the most fruit juice and Italians the least, but in a range of countries consumption decreased with age. 34 Thus, if a reduction in the consumption of fruit juice is recommended, it will affect a minority who drink juice, and this minority decreases consumption with age. In addition, for many years the juice industry has bemoaned the fact that overall consumption has continually declined. Between 2013 and 2016 consumption declined by 10% in Germany and Spain, 12% in the United Kingdom, and 15% in France. 35 Between 2008 and 2012, there were falls of 8% in Australia and 6% in New Zealand. 36 Between 1994 and 2016, consumption of orange juice in those under 20 years of age in the United States declined by 10.5 pounds a year, and in adults there was a fall of 6.1 pounds. 37
Fructose
One viewpoint has been to emphasize the influence of fructose, as in 2004 it was hypothesized that it was a primary cause of the obesity epidemic 38 : in the United States the increased consumption of high-fructose corn syrup paralleled the rapid increase in the incidence of obesity.
One reason for being concerned about fructose consumption is that it has been proposed that the nature of its metabolism in the liver predisposes towards lipogenesis. A meta-analysis of 14 trials that considered the impact of exchanging fructose in the diet for other carbohydrates concluded that it did not increase postprandial triglycerides. 39 However, when fructose consumption resulted in excessive energy intake, the levels of postprandial triglycerides increased. That is, raising energy intake increases triglycerides, but the form of the sugar consumed is unimportant. Other meta-analyses of human data have come to similar conclusions, i.e., "fructose does not cause biologically relevant changes in triglycerides or body weight." 40 In 11 studies carried out for up to 13 weeks, the levels of triglycerides increased in only 1, and in this case baseline values were unusually low, possibly resulting in regression to the mean. The triglyceride levels did not increase in women consuming up to 133 g/day and men consuming 136 g/day of fructose; that is, even intakes vastly in excess of normal levels of consumption were without effect.
In this context, the use of stable isotopes is particularly informative because it allows researchers to see, in the intact individual, where fructose ends up. The hypothesis is clear: if fructose is especially associated with fat deposition, then the isotope should end up in fat. A review of human isotopic tracer studies concluded that <1% of fructose was converted to plasma lipids. 41 The World Health Organization 3 report on sugar intake commissioned a meta-analysis that considered 30 randomized controlled trials (RCTs) and 38 cohort studies. 42 It concluded that when sugars were exchanged for other carbohydrates, while maintaining a constant energy intake, body weight was not affected. In contrast, with ad libitum diets, a higher sugar intake was associated with increased weight. Thus, if food intake is modified, any change in body weight appears to reflect a change in total energy intake rather than the provision of fructose. Again, when 31 isocaloric and 10 hypercaloric trials were examined, in isocaloric trials fructose had no influence on body weight. However, when large amounts of fructose were added to the diet, body weight increased. 43 Because those who are overweight may differ metabolically (for example, in terms of insulin resistance or glucose tolerance), this group was considered separately. A similar conclusion resulted: "There is no evidence which shows that the consumption of fructose at normal levels of intake causes biologically relevant changes in triglycerides or body weight in overweight or obese individuals." 43 A consistent picture has emerged. When fructose replaces other carbohydrates that provide a similar number of calories, there is no specific influence on body weight. Rather, weight gain reflects the general overconsumption of calories from any dietary source. As such, the fructose provided by fruit juice, other than the calories it provides, does not predispose to obesity. A major problem for those seeing sugar as the major cause of obesity is that the intake of added sugars has progressively decreased. In the United States between 1999-2000 and 2007-2008, the intake of added sugars was reduced from 100.1 g/day to 76.7 g/day, 44 yet the incidence of obesity continued to rise.
Fiber
Fiber plays an important role in a balanced diet because it helps to prevent heart disease, diabetes, and some cancers. 45 Thus, for some, a major problem associated with the consumption of juice is the removal of fiber. When juicing, the skin and pulp, both good sources of fiber, are often not included. However, when juice is made by pulping the whole fruit, fiber is provided.
In Europe, the recommended fiber intake is 25-32 g/day for women and 30-35 g/day for men. 45 Yet when data from 14 European countries were considered, the average intake was less than that recommended. In males aged 20-35 years, the highest average intake was 26 g/day in Germany, Norway, and the United Kingdom. The lowest average was 14 g/day in Belgium; whereas in France, Italy, and Austria it was 16 g/day. 46 In the United States the average intake was 16 g/day, 47 a value that compares with a recommendation of 38 g/day for men and 25 g/day for women. 48 In this context, the drinking of juice has been viewed as reducing the intake of fiber. There appears, however, to be an implicit assumption that if an individual stops drinking juice they will start eating whole fruit. The basis for this unlikely expectation is unclear because the dietary roles played by fruit and a beverage are very different. Why should juice and intact fruit be viewed as alternatives? If an individual drinks juice at breakfast, its removal will result in consuming an alternative beverage. Throughout the day thirst will lead to having a drink, or individuals drink because it is appropriate in a social setting where fruit is not available or when consumption would be socially inappropriate.
There is also another implicit assumption-that fiber can be viewed in a homogeneous manner. However, there are many different types of fiber with different physiological consequences; for example, they impact differently on the gut microbiota. 49 Thus, too much emphasis should not be placed on increasing the intake of any 1 type of fiber, but rather it should be ensured that it is obtained from a range of fruit, vegetables, cereals, bran, nuts, and seeds. A more sophisticated recommendation is required than suggesting the removal of fruit juice from the diet. Although the findings need to be treated cautiously, a meta-analysis 50 found that cereal and, to a lesser extent, vegetable fiber were associated with lower total mortality, although fruit fiber was not.
Although the logic appears muddled, one argument for not consuming fruit juice is that it contains less fiber than intact fruit. Yet nobody would suggest stopping eating meat or fish because they contain no fiber: rather, the overall contribution made to the diet justifies their consumption. Similarly, any recommendation to consume fruit juice must be made in the context of the entire diet, such that the addition would have a positive impact.
In fact, there have been many reports that the consumption of juice is associated with a better quality diet, including a greater consumption of intact fruit and vegetables and hence more fiber. A roundtable of experts concluded that removing juice from the diet would reduce daily fruit consumption and increase the drinking of SSBs. 51 In the United Kingdom those drinking 100% fruit juice were 42% more likely to achieve 5 a day because there was also a greater intake of whole fruit and vegetables. 52 In a French representative sample, the consumption of juice was associated with a diet of higher quality with a higher intake of fruits and vegetables than with nonconsumers. 53 Importantly, fruit juice had a greater influence on the fruit and vegetable intake of those with lower socioeconomic status, such that the removal of juice from the diet of the financially less well-off would have a greater impact on the healthiness of their diet.
Possible negative influences of fruit juice
In summary, the fructose associated with fruit sugar is no more likely to be stored as body fat than other sugars. Although when juiced, less fiber is consumed than with intact fruit, there is no reason to believe that intact and juiced fruit should be viewed as alternatives; they are consumed in different circumstances. In fact, those consuming juice tend to consume more intact fruit and vegetables with an associated greater intake of fiber.
BIOACTIVE MOLECULES
When considering the role of fruit juices, a major factor is that they are a rich source of vitamin C, carotenoids, and polyphenols. However, to demonstrate a parallel between juice and intact fruit, there is a need to show that they act in a similar manner.
A first question is whether processing influences the properties of juice. Commercial rather than domestic squeezing has been reported to extract 25% more vitamin C, and pasteurization slightly increased the levels extracted from solids. Pasteurization, concentration, and freezing did not affect the total antioxidant capacity due to vitamin C. 54 Similarly, when 7 juices were compared, fresh and commercially processed juices (no extrinsic sugar) were similar in terms of total phenolic content and antioxidant activity. However, the levels were lower when fruit drinks were sweetened with sugar and contained a smaller proportion of fruit juice. 55 The importance of polyphenols in juices is indicated by their high correlation with total antioxidant activity: it is associated with the total phenolic rather than the vitamin C content, 56 although levels will differ with the method of preparation and may decline depending on the nature of storage.
A second question is whether juice consumption impacts human physiology. Because fruit juice delivers compounds that improve antioxidant status, Crowe-White et al 57 subjected the topic to systematic review. The summated evidence from 10 trials suggested that 100% fruit juice improved a range of antioxidant biomarkers and the levels of other nutrients. For example, adding orange juice to the normal diet 3 times a day for 3 weeks increased the blood levels of vitamin C (59%), folate (46%), carotenoids (22%), and flavanones (8-fold). 58 Silveira et al 59 asked normal-weight individuals to consume red orange juice each day for 8 weeks. Low-density lipoprotein (LDL) cholesterol and C-reactive protein levels decreased, whereas antioxidant activity in serum increased and insulin resistance and systolic blood pressure declined.
Vitamin C
Water-soluble vitamin C fulfills a wide range of functions. Having antioxidant properties, it protects against the damage caused by free radicals that have a role in aging and the development of cancer, heart disease, and arthritis. It is also involved in healing wounds and maintaining connective tissues and the structure of blood vessels, skin, and bone. Various fruit juices are good sources of vitamin C, including guava, kiwi, blackcurrant, strawberry, orange, and papaya.
Sixty-two percent of British children aged 4-10 years consumed fruit juice, something true of only 37% of those aged >65 years. In children, beverages provided 32%-39% of their vitamin C: 14%-19% came from fruit juice and 8%-20% came from soft drinks. Although fruit provided 22% of the vitamin C of those aged 4-10 years, the percentage fell to only 12% in those aged 11-18 years. However, the contribution of fruit juice increased in adults to 19%-24%. 60 Thus, fruit juices make a substantial contribution to the total intake of vitamin C, although there are subgroups who are at risk of deficiency. When 15 769 individuals aged 12-74 years in the United States were studied, although the average intake and blood levels of vitamin C were above the suggested levels, 14% of males and 10% of females were deficient, with smokers being particularly at risk. 61 Another meta-analysis considered 29 trials and concluded that supplementation with vitamin C reduced blood pressure. 62 When nurses were followed for 16 years and the development of coronary heart disease was monitored, 63 a higher intake of vitamin C was associated with a lower risk. When serum ascorbate concentrations were assessed during the second NHANES survey, mortality 12-16 years later was greater among men, but not women, with a low intake of vitamin C. 64 Ashor et al 65 pooled the data from 44 clinical trials where orange juice had been added to the diet. They found improvements in endothelial functioning in those with atherosclerosis, diabetes, or heart failure. The effects were stronger in those with a higher risk of cardiovascular disease.
A meta-analysis 66 considered the influence of vitamin C on the glycemic response. Vitamin C reduced the levels of blood glucose in those with type 2 diabetes mellitus (T2DM) if the intervention was >30 days. The effect on fasting insulin levels was greater after a meal and in those who were older.
A British study that considered those with a low income found that 25% of men and 16% of women had blood levels of vitamin C indicative of deficiency, with a further 20% having levels in the depleted range. 67 Although a poor diet is the most usual cause of poor vitamin C status, smoking, pregnancy, a low income, and strenuous exercise may play a part. When those of normal weight were compared with those who were obese, the latter had a lower intake of vitamin C and were more likely to be deficient. 68 In summary, a good vitamin C status is associated with a range of health benefits. Because juices make a substantial contribution to the level of vitamin C intake, their removal from the diet may reduce vitamin C status with the risk of an adverse influence on health.
Carotenoids
More than a thousand carotenoids fall into 2 classes, xanthophylls and carotenes, that are found in fruit and vegetables that are colored orange, red, or yellow. Humans do not synthesize carotenoids, so to benefit they must be part of the diet. Carotenoids act as antioxidants, and beta-carotene can be converted into vitamin A. Vegetables, such as carrots and spinach, are a good source of carotenoids, but carotenoids are also found in plums, apricots, mangoes, colored melons, peaches, nectarines, sour cherries, and some citrus fruits such as red grapefruits, tangerines, and oranges. Particular attention has been directed to lycopene, a bright red carotenoid found in tomatoes, guavas, watermelon, papaya, and pink and red grapefruit.
The levels of carotenoids in the human skin are an indicator of the antioxidant status of the body. Although there are richer sources of carotenoids, adding orange juice to the diet increases the bodily levels. Drinking orange juice for 25 days increased carotenoid status by 10%-15%, depending on starting values. 69 Values, however, returned to baseline levels within 3 days of stopping drinking the juice.
As needed, beta-carotene is converted into vitamin A, which has important roles in the functioning of the immune system, mucus membranes, vision, and the skin. Given the clear evidence that the consumption of fruit and vegetables reduces the incidence of various diseases, 5 the powerful antioxidant properties of carotenoids make it likely that they are part of the underlying mechanism.
Due to the antioxidant properties of carotenoids, their consumption has been related to the incidence of head and neck cancer. A meta-analysis of epidemiological studies found that, when compared with the lowest level of intake, higher levels of beta-carotene were associated with a relative risk of 46% for developing oral cancer and 57% for developing laryngeal cancer. In addition, lycopene, alpha-carotene, and beta-cryptoxanthin were all associated with a reduction in the rate of oral and pharyngeal cancer of at least 26%. 70 Similarly, a meta-analysis 71 considered prospective studies that had related blood concentrations of carotenoids to the development of breast cancer. An inverse association was found between the total level of carotenoids and the subsequent development of cancer; more specifically, there was a negative association with beta-carotene. Intakes of beta-carotene and alpha-carotene have also been associated with the incidence of gastric cancer. 72 In addition, those consuming higher rather than lower levels of alpha-carotene, beta-carotene, and lutein/zeaxanthin had a reduced risk of developing non-Hodgkin's lymphoma. 73 However, the evidence is not all positive; no association was found in meta-analyses between carotenoids and the risk of colorectal cancer. 74 Similarly, a review 75 found no association between carotenoids or vitamin A and the development of Parkinson's disease.
Particular attention has been paid to lycopene, the most powerful antioxidant found in food. A review of a total of 692 012 individuals found relationships between lycopene consumption and the risk of prostate cancer. The risk decreased by 1% for every additional 2 mg of lycopene consumed and by 3.5% for each additional 10 mg/dL of circulating lycopene. 76 Similarly, a meta-analysis found that greater exposure to lycopene was associated with a lower risk of cardiovascular disease. 77 The above series of meta-analyses provides evidence of a beneficial association between both the intake and blood levels of carotenoids and various disease states. There have been, however, relatively few intervention studies. Forty elderly men with benign prostate hyperplasia received either 15 mg/day lycopene or a placebo for 6 months. The level of serum prostate-specific antigen was reduced in those taking lycopene, but not the placebo, and prostate enlargement continued in the placebo group but not the lycopene group. 78 However, in a similar trial, men with high-grade prostatic intraepithelial neoplasia consumed either lycopene or a placebo for 6 months. No differences in prostate-specific antigen resulted, and the prevalence of cancer, atrophy, and inflammation were similar. 79 In African Americans who had been recommended for a prostate biopsy, lycopene supplementation for 3 weeks did not change the DNA oxidation product 8-oxo-deoxyguanosine or the lipid peroxidation product malondialdehyde in prostate tissue. 80 While accepting that RCTs offer the strongest evidence, there are good reasons not to place too great a reliance on the present intervention data. The studies are few in number and of short duration, whereas disorders such as cancer may develop over many years, even decades. To date, the intervention studies are best seen as pilot studies because they have used small sample sizes and therefore had little chance of picking up anything but dramatic improvements, when any effect is more likely to be subtle. Future intervention studies that look for changes in the risk of developing the disease will need to be on a large scale and over an extended period.
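The dose-response figures quoted above can be read as an approximately linear relative-risk gradient. The following is a minimal sketch of that reading, treating the reported 1% reduction per additional 2 mg/day as a linear slope over a modest intake range; the linearity is an illustrative assumption, not the dose-response model actually fitted in the meta-analysis.

```python
# Linear reading of the quoted lycopene dose-response: the relative risk of
# prostate cancer falls by ~1% for each additional 2 mg/day consumed. The
# linearity is an assumption for illustration only; the meta-analysis used
# fuller dose-response models.
def relative_risk(extra_mg_per_day: float, baseline_rr: float = 1.0) -> float:
    return baseline_rr * (1 - 0.01 * (extra_mg_per_day / 2.0))

for dose in (0, 5, 10, 20):
    print(f"+{dose:>2} mg/day lycopene -> RR ~ {relative_risk(dose):.3f}")
# +0 -> 1.000, +5 -> 0.975, +10 -> 0.950, +20 -> 0.900
```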
Because epidemiology tracks the development of disease over many years, it is appropriate to use the criteria of Bradford-Hill 81 that allow a causal relationship to be presumed. In the same way that such an approach was necessary to relate cigarette smoking and lung cancer, the influence of carotenoids may well be insidious, impacting only slowly over time. Bradford-Hill 81 suggested that the larger the association, the more likely it was to be causal: several of the above meta-analyses reported large effect sizes. That the conclusion is supported by a series of meta-analyses satisfies the second criterion: there must be consistency, with similar findings being reported by different researchers using different samples. The cause must occur before the effect: in this instance, either dietary intake or blood levels were related to the subsequent development of disease. A biological gradient is also supportive, and some of the above meta-analyses related a range of intakes of carotenoids or blood levels to a lower incidence of disease. 76 A consistency of epidemiological and laboratory findings was also supportive, and in this instance, lycopene has, in vitro, been reported to reduce the growth of cancerous cells. 82 Finally, plausibility: are there mechanisms that would be expected to relate cause to effect? In the present instance, the antioxidant effect of carotenoids provides a plausible mechanism. In summary, the existing evidence looks promising, although in an ideal world, clinical trials would confirm the beneficial influence of carotenoids. However, if the influence of these phytochemicals is long term, their effects will be difficult to demonstrate using this approach.
Polyphenols
Fruits high in polyphenols include blueberries, cherries, cranberries, pomegranates, apples, apricots, grapefruits, oranges, red grapes, raspberries, strawberries, blackberries, gooseberries, plums, and kiwis. The anthocyanins, a group of flavonoids that give plants their red, purple, and blue color, are attracting increasing interest for their antioxidant, anti-inflammatory, and antiviral properties. They are found particularly in purple grapes, cherries, raspberries, blueberries, and plums.
Silveira et al 83 examined the pharmacokinetics of the main flavanone glycosides found in oranges, hesperidin and narirutin: they found no differences in the dynamics of freshly squeezed and processed juice. After a single drink, levels peaked 2 hours after consumption and approached 0 after 12 hours, with none being measurable after 24 hours. Manach et al 84 calculated the time scale of plasma polyphenol levels following consumption. Gallic acid and isoflavones were the most readily absorbed, followed by catechins, flavanones, and quercetin glucosides, although the kinetics differed. When part of the diet, the half-lives of anthocyanins have been found to vary from 1 to 4 hours; flavonols, 1-13 hours; flavanones, 1-3 hours; flavanols, 1-3 hours; and isoflavones, 4-8 hours. 84 Generalizations are difficult given the thousands of phenolic compounds that exist; there is, however, an impression that to maintain a high plasma concentration of polyphenols they need to be consumed regularly. For example, the flavanones in orange juice, hesperidin and naringin, have a half-life of about 2 hours in the blood. There remains a need for a greater understanding of the relationship between metabolic outcomes and consuming 100% fruit juice. 57
A particular interest is the role played by polyphenols in preventing cancer, heart disease, osteoporosis, and neurodegenerative disorders. A meta-analysis of 22 prospective studies related flavonoid consumption to all-cause mortality 85 : a high intake reduced the risk of all-cause mortality (relative risk, 0.74). More specifically, when 143 studies that considered cancer were integrated, in prospective studies the consumption of isoflavones was associated with a decreased risk of lung and stomach cancers, and an effect on breast and colorectal cancers just missed statistical significance. 86 Some types of cancer appear to be less susceptible: no influence was found with either esophageal or colorectal cancer. 87 A large prospective cohort of health professionals 88 reported that the intake of anthocyanidins, but not other flavonoids, was associated with a reduced risk of T2DM. Similarly, in Finland, consuming blueberries, a rich source of anthocyanidins, was related to a lower risk of T2DM. 89 A meta-analysis considered the association between the intake of flavonoids and the risk of T2DM in 312 015 individuals followed up for up to 28 years. A higher intake of flavonoids reduced the risk of the disorder (relative risk, 0.89), which declined by 5% for each additional 300 mg that was consumed each day. 90 Because there are suggestions that flavonoids may influence the immune system, a systematic review related consumption to the incidence of upper respiratory tract infections and indices of immune functioning. 91 Flavonoid supplementation decreased the incidence of respiratory infection by 33%, although measures of immune functioning were unaffected. Polyphenols did not, however, delay the onset of cognitive decline in older adults. 92
Summarizing the influence of the bioactive molecules supplied by fruit and fruit juice, meta-analyses of prospective cohort studies (but not RCTs) have found the consumption of carotenoids to be associated with a decreased incidence of a range of cancers. In particular, lycopene consumption has been found, in a dose-response manner, to be associated with a reduced rate of prostate cancer.
Similarly, a higher flavonoid intake results in lower all-cause mortality, and the intake of isoflavones reduces the incidence of lung and stomach cancer. The intake of flavonoids, in a dose-dependent manner, reduces the incidence of T2DM. An increased intake of vitamin C is associated with lower blood pressure and better control of blood glucose in T2DM.
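The pharmacokinetic point above, that short plasma half-lives mean a single drink's polyphenols largely disappear within hours, can be made concrete with a minimal one-compartment sketch. It assumes simple first-order elimination with the roughly 2-hour half-life quoted for orange-juice flavanones and ignores absorption, metabolites, and repeated dosing.

```python
# One-compartment exponential-elimination sketch for a plasma polyphenol
# with a ~2 h half-life (the figure quoted above for orange-juice
# flavanones). Purely illustrative: absorption and metabolites are ignored.
import math

HALF_LIFE_H = 2.0
k = math.log(2) / HALF_LIFE_H          # first-order elimination constant

def remaining_fraction(hours: float) -> float:
    """Fraction of the peak plasma level remaining after `hours`."""
    return math.exp(-k * hours)

for t in (2, 4, 8, 12, 24):
    print(f"after {t:>2} h: {remaining_fraction(t):.2%} of peak remains")
# After 12 h only ~1.6% of the peak remains, consistent with the report
# above that levels approach zero by 12 h and are unmeasurable at 24 h.
```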
FRUIT JUICE AND DISEASE
A series of prospective cohort studies and RCTs have considered the impact of consuming fruit juice (as opposed to particular bioactive molecules) on the risk of disease.
Cardiovascular disease
Aune et al 5 considered 22 studies where the consumption of fruit had been related to aspects of cardiovascular disease. Benefits were found with up to 10 portions a day, when the relative risk was 27% lower than when no fruit was consumed. In particular, there was evidence that a high rather than low intake of either apples/pears or citrus fruits was protective.
However, when considering juice, interventions lasted >4 weeks in only a minority of studies. A cohort study 93 related the intake of fruit juice over a 20-year period to the risk of cardiovascular disease. A higher consumption of juice was associated with a lower incidence of hypertension, although the levels of LDL, high-density lipoprotein (HDL), and triglycerides were not influenced. Consuming 330 mL of pomegranate juice each day for 4 weeks reduced both systolic and diastolic blood pressure. 94 Similarly, after drinking 500 mL of orange juice each day for a month, the diastolic blood pressure of overweight men was reduced. 95 Again, when men with mild hypertension consumed Concord grape juice, both diastolic and systolic blood pressure were reduced. 96 However, other studies did not find that juice affected blood pressure. [97][98][99][100]
Vascular functioning has also been considered. In a cross-over trial, those at an increased cardiovascular risk received 500 mL of red orange juice or a placebo for 7 days. 101 Flow-mediated dilatation, a measure of the risk of cardiovascular disease, improved after consuming orange juice. However, there are other reports that fruit juice did not influence vascular functioning. 94,99,100
Similarly, blood lipids have been examined. Overweight women who engaged in aerobic training for 12 weeks either did or did not drink 500 mL of orange juice each day. The consumption of juice resulted in lower total cholesterol and LDL, higher concentrations of HDL, and a better LDL/HDL ratio. 102 In another RCT, those with metabolic syndrome consumed 300 mL a day of a mixture of citrus-based juices for 6 months. 103 Again, the levels of total cholesterol, LDL, and HDL declined. However, in other trials, fruit juice did not affect blood lipids. 95,98,99
In summary, a series of reports found that fruit juice reduced the risk of disease, although the findings are inconsistent and the amount of juice consumed tended to be in excess of the currently recommended level. In addition, most trials have been for a short period and have involved a small number of participants. Positive findings have, however, been reported often enough to suggest there should be further study of the risk of cardiovascular disease.
Diabetes
Systematic reviews have consistently reported that consumption of SSBs is associated with a greater risk of T2DM. 104,105 The evidence concerning fruit juice is less clear, although there are reports that, although SSBs increased the risk of T2DM, 100% fruit juice did not. [106][107][108] Imamura et al 105 considered 17 cohorts and found an association between SSBs and diabetes. Similarly, drinking artificially sweetened beverages and fruit juice was also associated with a greater incidence of diabetes. The authors, however, commented that the findings were likely to involve bias, including the misclassification by participants of sugar-sweetened fruit-based drinks as 100% fruit juice. Importantly, the findings differed depending on whether the incidence of diabetes was based on self-report rather than medical records or biochemical assay. In the former instance, fruit juice was associated with the incidence of diabetes, although the relationship disappeared when more reliable medical or biochemical information was the basis for diagnosis.
Xi et al 104 carried out a larger review. They considered 7 prospective studies that allowed the comparison of sugar-sweetened fruit juice and 100% fruit juice. Four studies provided 191 686 participants and 12 375 cases of T2DM in those who consumed sweetened fruit juice. They were contrasted with 137 663 participants who drank 100% fruit juice, who provided 4906 instances of T2DM: meta-analysis found that consuming sugar-sweetened fruit juice increased the risk of T2DM, whereas drinking 100% fruit juice did not.
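A back-of-the-envelope check of the pooled counts above helps make the contrast concrete: dividing cases by participants gives the crude incidence in each exposure group. The sketch below ignores person-time, dose, and adjustment for confounders, so it only roughly illustrates why the two beverage categories were analyzed separately.

```python
# Crude T2DM incidence computed from the pooled counts quoted above for
# the Xi et al review. This ignores follow-up time and confounding, so it
# is a rough illustration only, not a re-analysis.
groups = {
    "sweetened fruit juice": {"participants": 191_686, "cases": 12_375},
    "100% fruit juice": {"participants": 137_663, "cases": 4_906},
}

for name, g in groups.items():
    incidence = g["cases"] / g["participants"]
    print(f"{name}: {incidence:.1%} crude incidence")
# sweetened fruit juice: 6.5%; 100% fruit juice: 3.6%
```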
Given the unavoidable difficulties that arise when interpreting this type of epidemiological study, it is important to consider RCTs. An obvious hypothesis is that the sugar in fruit juice has consequences for glycemic control that, over time, predispose to diabetes. A meta-analysis of 18 RCTs 109 summarized the effect of juice on insulin and the control of blood glucose. After consumption for at least 2 weeks, 100% fruit juice was found to have no effect on fasting blood glucose, fasting insulin, the Homeostatic Model Assessment for Insulin Resistance, or glycated hemoglobin; that is, fruit juice had no effect on glycemic control. This finding is similar to those of randomized intervention trials that found little or no impact of increasing fruit and vegetable intake on biomarkers associated with metabolic diseases. 110
Cancer
Eating 5 servings a day of fruits and vegetables is part of the dietary recommendations of the World Cancer Research Fund/American Institute for Cancer Research. A protective effect is said to be "probable" for mouth, pharynx and larynx, esophagus, stomach, and lung cancer, whereas it is "limited suggestive" for nasopharynx, lung, ovary, endometrium, pancreas, and liver cancer. 111 The role of dietary fiber in colorectal cancer was rated as "convincing." Because fruits and vegetables are associated with only a few types of cancer, their potential may be limited in this area. However, dietary flavonoids, of which citrus fruits and their juices are a rich supply, interfere with "carcinogen activation, stimulate carcinogen detoxification, scavenge free radical species, control cell cycle progression, induce apoptosis, inhibit cell proliferation, oncogene activity, angiogenesis and metastasis as well as inhibit hormones or growth-factor activity." 112 Cirmi et al 113 produced a systematic review that considered the potential of citrus juices to act as anticancer agents, looking at animal, in vitro, and human observational studies. It was concluded that citrus juices were a "potential resource against cancer." They, however, found only 2 human studies, and no conclusion is warranted until a range of longitudinal studies becomes available. The review of Aguilera et al 114 found that, although some studies found that juice had a positive influence, others did not, and there was a need for studies that examined possible confounding variables such as an interaction with other dietary bioactive substances. In addition, emphasis should not be placed solely on diet, as a list of other risk factors needs to be considered. At present, the role of fruit juice is unclear.
Gout and nonalcoholic fatty liver disease
A meta-analysis that considered 2 studies including 125 299 participants and 1533 cases of gout concluded that fructose consumption was a risk factor. For example, Choi and Curhan 115 studied 46 393 men for >12 years and related the intake of soft drinks to the development of gout. The consumption of SSBs, but not diet soft drinks, increased the risk. It was suggested that fructose intake was involved because both fructose-rich fruits and fruit juices were risk factors.
It has also been proposed that fructose is a risk factor for nonalcoholic fatty liver disease. However, because it is known that obesity, insulin resistance, and T2DM are risk factors, any specific effect of fructose should be distinguished from any associated calories and change in body weight. Chiu et al 116 integrated the findings from 7 isocaloric trials where fructose intake was compared with similar amounts of other carbohydrates. The effect of fructose was similar to other forms of carbohydrate; fructose as such did not predispose to the disorder. In contrast, when additional energy was added to a diet (21%-35% increase), changes in both intra-hepatocellular lipids and alanine aminotransferase suggested liver damage. Thus, only when fructose is consumed at levels greatly in excess of the typical level of intake was there a problem, an effect that may well reflect consuming excess energy rather than specifically the consumption of fructose.
DISCUSSION
There are varying views concerning the impact on health of consuming 100% fruit juice. At one extreme it is viewed as a source of sugar that leads to obesity, and at the other it is seen as providing a range of bioactive molecules that benefit health. Yet, if one looks at the views of the national bodies setting food policy, after considering the full range of evidence, the majority give the same advice: fruit juice may contribute to the 5-a-day recommendation for fruit and vegetable consumption. For example, the US Dietary Guidelines state that 100% fruit juice may be consumed, but in moderation. 117 The potential benefits are considerable because there is overwhelming evidence that the consumption of fruits and vegetables benefits health. 5,118,119 Because juice contains the same molecules as intact fruit, it is likely to have a contribution to make, although the emphasis needs to be on 100% fruit juice rather than juices that are sugar sweetened. Unfortunately, to date, most of the literature has not distinguished 100% fruit juice, and therefore conclusions and recommendations confuse it with sugar-sweetened juice. However, both with weight gain in children 23 and diabetes in adults, 104 meta-analyses have found that sugar-sweetened juices are similar to other SSBs, although both differ from 100% juice. The negative consequences of SSBs were not found in those consuming 100% juice.
Yet, it is a minority of the population who achieve the recommended intake of fruits and vegetables, 5,6,46,120 and for practical reasons and convenience, there is resistance to making dietary changes. 6,7 However, if it is acceptable to count juice as 1 of the 5 servings a day, given the low level of fruit and vegetable intake, the recommendation that it be consumed on a daily basis is a small step to take. In the absence of alternatives, encouraging the daily drinking of fruit juice offers a simple and convenient way to increase the intake of fruit.
Those who advocate reducing the drinking of juice tend to worry that it leads to obesity; as with all food items, excessive consumption has the potential to result in weight gain. However, the majority of the population do not drink any fruit juice, 26,60,61 and the total amount consumed by the population has been declining for many years. [35][36][37] The frequency of consumption declines with age, 33,34 and when consumed, the average intake is small. Particularly in poorer sections of the population, juice may be the main source of fruit, and individuals would suffer if it was removed from the diet; it is this group that would be predicted to benefit most from adding juice to the diet. 52 However, any decision to encourage consumption would need to be accompanied by appropriate health education. Infants should not have a bottle that they can suck at will; there should be appropriate dental hygiene; consumption is better with a meal; there should be only moderate consumption; and juice should not replace existing fruit and vegetables in the diet.
Although public health policy naturally considers the entire diet, there has been a tendency at a particular time to pay disproportionate attention to a specific nutrient. In the late 1970s the reduction of fat intake became a major objective. Salt reduction has been emphasized more recently. Currently the reduction of sugar is a major priority: for example, many of those wishing to reduce fruit juice in the diet emphasize the sugar content. Yet food items should be eaten in moderation as part of a wide-ranging and varied diet. The risk is that, if the emphasis is disproportionately on 1 nutrient, other dietary considerations will not attract the attention they deserve.
Should concentrating on the sugar content of fruit juice justify playing down the importance of its other contributions? When 14 dietary factors were related to the burden from disease, the factors most highly associated included diets high in sodium and diets low in vegetables, fruit, whole grains, and nuts and seeds. 120 The authors commented, "A policy focus on the sugar and fat components of diets might have a comparatively smaller effect than that of promotion of increased uptake of vegetables, fruit, whole grains, nuts and seeds." When it comes to public health, it is only possible to make the best guess given the current state of knowledge. If it is assumed that the obvious molecular similarities between intact and juiced fruit result in a similar health benefit, the effect of a portion of fruit juice can be estimated from the data of Aune et al. 5 When 1 portion of fruit was eaten rather than none, all-cause mortality declined by 5.5%. Two portions of fruit a day reduced the risk by 11%, and three portions by 15.6%. The context of these findings is that an American survey found that only 12.2% of adults ate the recommended amount of fruit, whereas 9.3% met the recommendation for vegetables. 121 Clearly, an increase of 1 portion a day would benefit the health of the vast majority. Similarly, a survey by the European Union found that most Europeans do not meet the recommended level of intake. In fact, only 4 of 19 countries had an average intake that reached that level. 46 As an example, in the United Kingdom the average intake of fruit and vegetables, excluding the intake of juice, was 258 g/day, which amounts to 3.2 portions. 46 As judged by the analysis of Aune et al, 5 adding 1 portion to the existing average diet in the United Kingdom would decrease the risk of all-cause mortality by 3.9%.
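The cumulative figures quoted from Aune et al imply diminishing returns per additional portion, which is how adding a single portion to a 3.2-portion diet can yield a smaller (3.9%) reduction than the first portion does. The sketch below simply differences the reported cumulative reductions; it is an illustration of the arithmetic, not a re-analysis of the underlying dose-response curve.

```python
# Incremental reduction in all-cause mortality per extra daily fruit
# portion, differenced from the cumulative figures quoted above
# (5.5%, 11%, 15.6% versus eating none). Illustrative arithmetic only.
cumulative_reduction = {0: 0.0, 1: 5.5, 2: 11.0, 3: 15.6}  # percent

portions = sorted(cumulative_reduction)
for prev, curr in zip(portions, portions[1:]):
    step = cumulative_reduction[curr] - cumulative_reduction[prev]
    print(f"portion {curr}: +{step:.1f} percentage points of reduction")
# The marginal benefit shrinks (5.5 -> 5.5 -> 4.6 points), so an estimate
# of roughly 3.9% for a portion added beyond 3.2 is consistent with a
# flattening dose-response curve.
```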
Such findings should generate little surprise because, since 2003, the World Health Organization/Food and Agriculture Organization of the United Nations has advocated the consumption of fruit and vegetables to improve health. However, although it is an easy decision to recommend increased consumption, implementing such a recommendation has proved extremely difficult. There is considerable public resistance. Many countries have run public information campaigns, but their lack of impact can be judged by the current low levels of consumption. The critical question is, "How can this be increased?" The reasons for not eating fruits and vegetables were discussed above. Essentially, there is the perception that it is often impractical and demands time and effort. Many of the problems reflect the inherent nature of preparing and eating fruit and vegetables, problems that cannot be easily overcome. However, the convenience of fruit juice is an exception; it avoids most of the stated reasons for choosing not to consume.
Psychologically, a major factor is the feeling of self-efficacy: the perception that one can succeed is basic to initiating and maintaining a behavior. Because consumption of juice is convenient and involves minimal effort, there is little reason to believe one could not drink juice. As such, the active encouragement to drink 1 portion of fruit juice a day stands a good chance of rapidly increasing fruit intake, when it is difficult to suggest alternative strategies. Given the association between increasing intake by only 1 portion and a decreased risk of death and disease, 5 there is the potential for a policy of actively encouraging the drinking of 1 portion of fruit juice a day to benefit health.
However, the decline in the consumption of fruit juice over recent years [35][36][37] is probably in part due to the emphasis on the sugar content, creating the perception that juice is unhealthy. A positive aura, emphasizing the bioactive molecules provided, will be needed to develop a perception that juice is a health food, rather than the current emphasis on its sugar content, which implies it is a junk food. Care would need to be taken to ensure that 100% fruit juice is consumed in moderate amounts and that it does not replace intact fruit. Although by no means solving the entire problem, a recommendation to consume juice offers a step in the right direction. Because it is convenient and avoids most of the reasons for not consuming, there is a good chance that dietary behavior will change.
Thus, the question is, "Should daily juice consumption be recommended in a world where it has proved very difficult to increase the intake of fruit and vegetables?" The argument is that the relatively attractive nature of juice and its convenience make it "low-hanging fruit." There is a second question: "Is there any alternative that has as good a chance of increasing intake and therefore having such a large impact on health?" The conclusion will need to be considered separately for children and adults, and the debate will need to examine the consumption of polyphenols, carotenoids, and vitamin C. Any recommendation will need to be made in the context of the entire diet, such that the full range of foods consumed amounts to a balanced diet.
CONCLUSION
The take-away message is that only a small proportion of the population achieves the 5-a-day recommendation for fruit and vegetable intake and it has proved difficult to increase consumption because of the practicalities, inconvenience, and the effort required when consuming whole fruit. Psychologically, believing that one can successfully carry out the behavior (self-efficacy) is fundamental to changing behavior. Because fruit juice avoids many of the reasons for not consuming intact fruit, it can promote self-efficacy and thus encourage consumption. This review does not suggest that fruit juice should replace intact fruit; in fact, the opposite is most likely, as, in practice, juice and intact fruit are consumed in different circumstances and are rarely alternatives. Because fruit juice offers a source of vitamin C, carotenoids, and polyphenols, increasing consumption has the potential to benefit health.
Acknowledgments
Author contributions. D.B. drafted the initial version of the manuscript and H.Y. offered critical input and evaluation. Both authors approved the final version.
Funding. No external funding was received to support this work. The support of Swansea University while this paper was developed is gratefully acknowledged.
Declaration of interest. D.B. has served on the scientific advisory board of the European Fruit Juice Association although they had no role in any aspect of this work including its conception, review, or approval. The choice of topics and the views expressed are solely those of the authors. H.Y. has no relevant interests to declare. | 2019-09-12T13:06:30.588Z | 2019-09-04T00:00:00.000 | {
"year": 2019,
"sha1": "323382e321c78c4d9bd0328437516e4b83f1643a",
"oa_license": "CCBYNCND",
"oa_url": "https://academic.oup.com/nutritionreviews/article-pdf/77/11/829/30096176/nuz031.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "38e345483fed9b38d5f86f12896ed9765241b2bd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Business",
"Medicine"
]
} |
25264966 | pes2o/s2orc | v3-fos-license | Interleukin-2-inducible T Cell Kinase (Itk) Network Edge Dependence for the Maturation of iNKT Cell*
Invariant natural killer T (iNKT) cells are a unique subset of innate T lymphocytes that are selected by CD1d. They have diverse immune regulatory functions via the rapid production of interferon-γ (IFN-γ) and interleukin-4 (IL-4). In the absence of the signaling nodes Itk and Txk, Tec family non-receptor tyrosine kinases, mice exhibit a significant block in iNKT cell development. We now show here that although the Itk node is required for iNKT cell maturation, the kinase domain edge of Itk is not required for the continued maturation of iNKT cells in the thymus, compared with Itk-null mice. This rescue is dependent on the expression of the Txk node. Furthermore, this kinase domain independent edge rescue correlates with increased expression of the transcription factor T-bet and the IL-2/IL-15 receptor β chain CD122, and suppression of eomesodermin expression. By contrast, α-galactosyl ceramide-induced cytokine secretion is dependent on the kinase domain edge of Itk. These findings indicate that the Itk node uses a kinase domain independent edge, a scaffolding function, in the signaling pathway leading to the maturation of iNKT cells. Furthermore, the findings indicate that phosphorylation of substrates by the Itk node is only partially required for the maturation of iNKT cells, while functional activation of iNKT cells is dependent on the kinase domain/activity edge of Itk.
Natural killer T cells (NKT cells) are a unique population of innate-like T lymphocytes that play important functions in diverse immune responses. Invariant NKT (iNKT) cells are the dominant subset of NKT cells that express an invariant TCRα chain (Vα14-Jα18). When activated, iNKT cells rapidly produce large amounts of IL-4 and IFN-γ, along with a number of other cytokines and chemokines. iNKT cells also regulate other immune cells, such as NK cells and B cells, thus playing multiple roles in immune responses. iNKT cells develop from the DP (CD4+CD8+ double-positive) thymocytes and are selected by the MHC class I-like molecule CD1d expressed on DP cells (1). After selection, the iNKT precursors (CD44low NK1.1−) further develop through more mature CD44hi NK1.1− iNKT to finally mature CD44hi NK1.1+ iNKT cells. A number of signaling molecules, transcription factors and cytokine receptors have been identified to be important for iNKT cell development through different stages. For example, SLAM, SLAM-associated protein (SAP), and Src-family kinase FYN and NF-κB have been suggested to form a signaling pathway that controls iNKT cell development at a very early stage (2)(3)(4)(5)(6)(7)(8)(9)(10)(11). The cytokine IL-15, the vitamin D receptor, PTEN, SLP76 and transcription factors T-bet and AP-1 are all involved in the final maturation of iNKT cells (12)(13)(14)(15)(16)(17)(18).
Itk and Txk are two Tec family non-receptor tyrosine kinases expressed in T cells. Itk-null mice show impaired Th2 cell development, as well as cytokine secretion. In addition, both Itk and Txk regulate the development of naive phenotype CD4+ and CD8+ T cells (19-22). Itk-null mice have also been shown to have reduced iNKT cell number, impaired maturation and cytokine secretion, which is exaggerated in Itk/Txk DKO mice (23,24). Thus, signals through Itk and Txk nodes are important for T cell development and function.
Signal transduction pathways travel via nodes along the pathway. Each node has at least 2 edges, an input edge and an output edge (25). Multi-domain signaling proteins may contain more than 1 input or output edge (26). Itk has at least 2 input edges, coming from upstream Src family kinases and from PI3 kinase (27). At least 1 output edge has been identified for Itk, tyrosine phosphorylation of PLC-γ1 and subsequent calcium influx, leading to the activation of NF-κB, NFAT, and Ras (27). Whether the Itk node has a single edge leading to PLC-γ1 activation, or whether it has multiple edges leading to other signaling pathways, is not clear. There is evidence for other edges with which Itk can connect to downstream signaling pathways independent of the edge leading to PLC-γ1 phosphorylation. Itk can interact with the adaptor protein SLP-76 via its SH2 and SH3 domains (28-31). This interaction is not dependent on its kinase activity and could represent additional edges that emanate from the Itk signaling node. Indeed, kinase-deleted or kinase-inactive Itk has been shown to activate the transcription factor serum response factor or induce antigen-induced actin polarization in T cell lines (29,32,33). These data suggest that additional edges emanate from the Itk signaling node, independent of the edge leading to tyrosine phosphorylation of PLC-γ1. However, whether such edges are important in vivo is not clear.
We and others (29,32) have shown that the Itk node has edges that can activate specific pathways in a kinase-independent manner. We have also shown that conventional T cells require the kinase domain edge of Itk for development (22). We show here that a kinase domain mutant of Itk partially rescues the maturation defect observed in Itk-null iNKT cells. This rescue correlates with increased expression of CD122 and T-bet, as well as a reduction in the elevated levels of eomesodermin observed in Itk-null iNKT cells. However, cytokine secretion by iNKT cells is dependent on the ability of Itk to phosphorylate downstream substrates. These data suggest that the Itk node downstream of the TcR has edges that are both kinase-dependent and kinase-independent, which function in iNKT cell development.
Cell Sorting and Real-time PCR Analysis-Total iNKT cells (TCRβ+ PBS-57/CD1d tetramer+) were sorted from the thymi of mice using a Cytopeia Influx Cell Sorter (Cytopeia, Seattle, WA), and RNA was extracted using the RNeasy Mini kit (Qiagen). cDNA was generated with the You-Prime First-Strand Beads kit (GE Healthcare), and quantitative real-time PCR was performed using primer/probe sets for T-bet, PLZF, and eomesodermin, with GAPDH as a housekeeping gene (Applied Biosystems). Data were analyzed using the comparative threshold cycle (ΔΔCT) method, normalized to the expression of GAPDH, and the values were expressed as fold increase compared with WT iNKT cells, which was set as 1.
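The comparative ΔΔCT calculation described above reduces to the standard 2^(−ΔΔCT) fold-change formula. The following is a minimal sketch of that arithmetic, not the authors' analysis code; the Ct values used are hypothetical.

```python
# Minimal sketch of the comparative threshold cycle (delta-delta-CT)
# method used above: expression is normalized to GAPDH and then expressed
# as fold change relative to WT iNKT cells (set to 1). Ct values here are
# hypothetical placeholders, not data from this study.
def fold_change(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    """Return the 2^-(ddCt) fold change of a sample vs. a reference."""
    d_ct_sample = ct_target - ct_gapdh            # normalize sample to GAPDH
    d_ct_ref = ct_target_ref - ct_gapdh_ref       # normalize reference
    dd_ct = d_ct_sample - d_ct_ref
    return 2 ** (-dd_ct)

# Example: a hypothetical target gene in Itk-null vs. WT iNKT cells.
wt = fold_change(24.0, 18.0, 24.0, 18.0)   # reference vs. itself -> 1.0
ko = fold_change(26.5, 18.0, 24.0, 18.0)   # higher Ct -> lower expression
print(f"WT fold change: {wt:.2f}")          # 1.00 by construction
print(f"Itk-null fold change: {ko:.2f}")    # ~0.18 with these values
```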
In Vivo BrdU Incorporation Assay-WT, Itk-null, and Tg(Lck-ItkΔKin)/Itk−/− mice were injected intraperitoneally with 1 mg of BrdU and placed on drinking water containing 0.8 mg/ml BrdU for the next 6 days. BrdU-containing drinking water was changed every 2 days. Six days after injection, mice were sacrificed, and thymocytes were isolated and stained with PE-PBS57/CD1d tetramer, followed by fixation and permeabilization. BrdU was detected using the APC-BrdU kit from BD Biosciences.
Data Analysis-Statistical evaluation was conducted for all repetitions of each experiment using Student's t test, with a probability value of p ≤ 0.05 considered statistically significant.
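As an illustration of the evaluation described above, the sketch below runs a two-sample Student's t test against the p ≤ 0.05 threshold. The measurements are invented placeholders, not data from this study.

```python
# Two-sample Student's t test with p <= 0.05 taken as significant, as
# described above. The values are invented placeholders (e.g., percent
# stage 3 iNKT cells per mouse), not data from this study.
from scipy import stats

wt_mice = [52.1, 48.7, 55.3, 50.9]        # hypothetical replicates
itk_null_mice = [21.4, 25.2, 19.8, 23.5]  # hypothetical replicates

t_stat, p_value = stats.ttest_ind(wt_mice, itk_null_mice)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value <= 0.05:
    print("Difference is statistically significant at the 0.05 level.")
```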
The Kinase Domain Edge of Itk Is Required for Generating Normal Numbers of Thymic iNKT Cells-To determine if the kinase domain edge of Itk is required for the development of iNKT cells, we compared the iNKT population in WT, Itk−/−, Itk/Txk double knock-out (DKO) mice, and mice carrying a mutant Itk lacking its kinase domain instead of WT Itk in Itk-null or Itk/Txk DKO mice (Tg(Lck-ItkΔKin)/Itk−/− and Tg(Lck-ItkΔKin)/Itk/Txk DKO (22)). For simplicity, we refer to these as Itk−/−/ΔKin and DKO/ΔKin, respectively. The expression level of the transgene in iNKT cells is similar to endogenous Itk as determined by quantitative RT-PCR (data not shown). We found that the absence of Itk results in a reduction in the percent of iNKT cells in the thymus, and expression of the kinase-deleted mutant of Itk could rescue this percentage to WT levels (Fig. 1A). By contrast, while the percentage of thymic iNKT cells in Itk/Txk DKO mice was similar to that seen in the Itk−/− mice (i.e. reduced compared with WT mice), this percentage was not rescued by expression of the kinase-deleted mutant (Fig. 1A).
The numbers of thymic iNKT cells were also significantly reduced in Itk−/− mice compared with WT mice, as previously reported (Fig. 1B and Refs. 23, 24). Expression of the kinase domain-deleted mutant in Itk−/− mice did not rescue the numbers of thymic iNKT cells, as there was no significant difference between Itk−/−/ΔKin and Itk−/− mice (Fig. 1B). Similarly, the numbers of thymic iNKT cells were significantly reduced in Itk/Txk DKO mice compared with Itk−/− mice (Fig. 1B), and these numbers were not rescued by expression of the kinase domain-deleted mutant (Fig. 1, A and B). Together, these results suggest that the generation of proper numbers of iNKT cells requires the kinase activity of Itk and the expression of Txk.
Role of Kinase Domain Edge of Itk in iNKT Cell Maturation-Analyzing the three stages of iNKT cell maturation in the thymus, we found that the absence of Itk results in a significant accumulation of iNKT cells at stage 2 (Fig. 2A). By contrast, the percentage of thymic iNKT cells at stage 2 was reduced and the percentage at stage 3 was significantly increased in the Itk−/−/ΔKin transgenic mice compared with Itk−/− mice (Fig. 2A). This suggests that the kinase domain independent edge of Itk is sufficient to allow maturation of iNKT cells through the immature stage 2 to the more mature stage 3. However, because the percentage of thymic stage 3 iNKT cells in the Itk−/−/ΔKin mice is still lower than in the WT mice, the kinase activity of Itk is required for the full maturation of iNKT cells to stage 3. The percentage of stage 2 iNKT cells in Itk/Txk DKO mice was similar to that seen in the absence of Itk, and expression of the kinase domain-deleted mutant did not rescue the percentage, suggesting that Txk may be involved in the non-kinase domain edge of Itk in iNKT cell maturation (Fig. 2A).
There are two defined subsets of murine iNKT cells, CD4+ or CD4−CD8− (DN). Analyzing these two populations, we found that about 50% of thymic iNKT cells in WT mice were DN, which was significantly decreased to around 20% (p < 0.05) in Itk−/− mice, demonstrating that Itk is important in iNKT subset development (Fig. 2B). However, analysis of iNKT cell subsets in Itk−/−/ΔKin mice revealed that the percentage of thymic DN iNKT cells was similar to that in Itk−/− mice, indicating that the kinase activity edge of Itk is important for the generation of these two subsets of iNKT cells (Fig. 2B). Because the kinase domain deletion mutant partly rescues the maturation of iNKT cells, this also suggests that the effect of Itk on the distribution of iNKT cell subsets may not be due to larger numbers of immature iNKT cells in these mice.
Thymic NK1.1− iNKT cells have higher proliferative rates than NK1.1+ iNKT cells. Labeling with BrdU would therefore result in higher incorporation in the former population over the same time period. Compared with WT iNKT cells, a much higher percentage of Itk−/− iNKT cells labeled with BrdU (Fig. 2C), and as expected most of the BrdU+ iNKT cells in Itk−/− mice were NK1.1−, and the percentage of BrdU+ Itk−/−/ΔKin iNKT cells was lower than those from Itk−/− mice, consistent with the higher percentage of mature NK1.1+ iNKT cells in the Itk−/−/ΔKin mice (Fig. 2C). These data add further support for a role for Itk in iNKT development and maturation that is partly independent of its kinase activity.
We also examined whether expression of CD1d and the SLAM family receptors, SLAM and Ly108, on DP thymocytes is affected by Itk. We found that these receptors were expressed at similar levels on DP thymocytes from WT, Itk−/−, and Itk−/−/ΔKin mice, indicating that differential expression of these molecules is not responsible for the reduced iNKT cell numbers in these mice (Fig. 2D).
Itk is suggested to form either intramolecular folded monomers or intermolecular dimers in cells, which may maintain Itk in the inactive state (36-39). Deletion of the kinase domain may disrupt the conformation of Itk, allowing for easier interactions between Itk and other signaling proteins, and making that edge more active. To test this possibility, we generated mice carrying an Itk mutant, K390R, which is defective in kinase activity, instead of the WT Itk (Tg(CD2-ItkK390R)/Itk−/−, Itk−/−/K390R for simplicity). This mutant of Itk has all domains intact and instead carries a single point mutation in the kinase domain. We have shown that such a mutant likely folds in a similar fashion as the WT kinase (37). Analysis of these mice revealed that the numbers of thymic iNKT cells were similar to those found in Itk−/− and Itk−/−/ΔKin mice, and significantly lower than in WT mice (Fig. 3A). More importantly, there was a significant increase in the percentage of more mature stage 3 iNKT cells in the Itk−/−/K390R mice, but no change in the altered CD4/DN ratio of iNKT cells in the thymus compared with Itk−/− mice (Fig. 3, B and C). These data are very similar to the data from the Itk−/−/ΔKin mice, further supporting the conclusion that maturation of iNKT cells is partly independent of the Itk kinase activity edge.
Examination of the maturation status of peripheral iNKT cells reveals that expression of the Itk K390R mutant (as well as the ItkΔkinase mutant, data not shown) was not able to rescue the defect in maturation or numbers of splenic iNKT cells (Fig. 4A). By contrast, liver iNKT cell maturation is not affected by the absence of Itk (2,22), and expression of the Itk K390R mutant does not affect this process (Fig. 4B).
The Kinase Domain Edge of Itk Is Required for Cytokine Production of iNKT Cells-In the absence of Itk, iNKT cells are defective in secreting IL-4 and IFN-γ in response to α-GalCer stimulation (23,24). To determine whether the kinase activity of Itk is important in iNKT cell function, WT, Itk−/−, and Tg(Lck-ItkΔKin)/Itk−/− mice were injected with α-GalCer and serum samples collected 2 h after injection. Analysis of the serum revealed that Itk−/− mice secreted significantly lower amounts of IL-4 and IFN-γ than WT mice, consistent with previous reports (23,24). Expression of the kinase domain-deleted mutant in Itk-null mice did not rescue this defect in cytokine production, indicating that the kinase activity of Itk is required for the function of iNKT cells (Fig. 4C). Because Itk-null mice (and the Itk−/−/ΔKin mice) have reduced numbers of iNKT cells, we also stimulated purified hepatic iNKT cells in vitro with anti-CD3 and CD28 for 3 days to confirm the reduction in cytokine secretion. We found that Itk−/− and Itk−/−/ΔKin iNKT cells secreted comparable amounts of IL-4 and IFN-γ, which were significantly lower than that secreted by WT iNKT cells (Fig. 4D). Altogether, these data indicate that the kinase activity edge of Itk is important for IL-4 and IFN-γ production of iNKT cells.
Kinase Domain Edge Independent Rescue of CD122 and T-bet, and Suppression of Eomesodermin Expression in Developing iNKT Cells-Our analysis indicates that the population of thymic CD122+ (IL-2/IL-15 receptor β chain) iNKT cells in Itk-deficient mice was reduced compared with WT mice (Fig. 5A). By contrast, we found that this CD122+ iNKT cell population was significantly higher in the Itk−/−/ΔKin mice than in Itk−/− mice, indicating that the increased expression of CD122 may contribute to the increased maturation of iNKT cells in the mice carrying the mutant Itk. These data also suggest that Itk may regulate CD122 expression via a kinase independent edge.
The transcription factor T-bet can regulate the expression of CD122, and we found that Itk−/−/ΔKin iNKT cells had significantly increased levels of T-bet mRNA and protein compared with Itk−/− iNKT cells (Fig. 5, B and C). In addition, the expression level of CXCR3, another target of T-bet (40), was also rescued by the expression of the equivalent K390R Itk mutant (Fig. 5D). More dramatically, eomesodermin, another transcription factor of the T-box family that also regulates CD122, was not detected in WT iNKT cells but was highly expressed in Itk−/− iNKT cells (Fig. 5B). Pointedly, the expression of the Itk kinase-deleted mutant significantly reduced eomesodermin expression in Itk−/− iNKT cells (i.e. iNKT cells that develop in the Itk−/−/ΔKin mice), suggesting that kinase domain independent edge signals may affect signaling pathways leading to T-bet and eomesodermin expression in iNKT cells.
Two recent studies have shown that the transcription factor PLZF is important for iNKT cell development at an early stage (10,11), and we found that PLZF mRNA levels were significantly elevated in Itk-null iNKT cells; this was not normalized by expression of the Itk mutant (Fig. 5B).
DISCUSSION
We show here that the Itk node in T cell receptor signaling regulates the maturation of iNKT cells in part via an edge that is kinase-independent. The partial rescue of iNKT cell maturation depends on the continued expression of the related kinase node Txk, and occurs primarily by signaling the maturation of these cells through the immature stage 2 to the more mature stage 3. This correlates with increased expression of T-bet and CD122, and decreased expression of eomesodermin. Our data suggest that signals emanating from the noncatalytic domains of Itk can act as an edge in the signaling pathway that regulates the expression of these factors, thus modulating iNKT cell development.
Our analysis revealed that the number of thymic iNKT cells cannot be rescued by the expression of the kinase domain mutants of Itk, indicating that the kinase activity edge is critical for transducing signals that lead to WT numbers of these cells. This could be intrinsic, or could be related to the reduced numbers of total thymocytes observed in the Itk−/− and Itk−/−/ΔKin mice, because the overall numbers of thymocytes, and in particular DP thymocytes, play critical roles in iNKT cell development and numbers (1). Indeed, while there is a slight increase in the percentage of iNKT cells in the thymus of Itk−/−/ΔKin mice, the total number of thymocytes is not rescued in these mice, and this translates into reduced numbers (although slightly higher) of iNKT cells in these mice.
We also tested whether the kinase-deleted mutant would behave differently from a full-length kinase that has little to no kinase activity. We compared these two mutants because it is possible that the folding of the kinase-deleted mutant may be different from the WT kinase. The structure of full-length Itk is not known, but based on a number of experiments using isolated domains, and other approaches in cells, we and others have proposed one of two models for the folding of this protein: either an intramolecular folded monomer or an intermolecular folded dimer (36,37,41). Deletion of the kinase domain in both models could potentially result in enhanced interactions with the SH2 and SH3 domains. However, both the kinase-deleted mutant and the kinase activity point mutant behaved in the same fashion with regard to the generation of WT numbers of iNKT cells, as well as in their development and maturation, suggesting that any potential alterations in the structure of Itk do not explain our data.
The related kinase Txk makes some contribution to the development of iNKT cells because Itk/Txk DKO mice have a significant reduction in thymic iNKT cell numbers compared with both WT and, in particular, Itk−/− mice (24). We find that the kinase domain independent edge of Itk can drive the maturation of a significant percentage of stage 2 iNKT cells to the more mature stage 3. However, in the absence of Txk, this does not happen. We suggest that the function of the kinase domain independent edge may be dependent on the expression of Txk. Whereas these data suggest a genetic interaction, we have not been able to obtain enough purified iNKT cells to examine this biochemically. These findings suggest that the kinase activity edge of Itk may be rescued by Txk; however, this may be less efficient due to the lower levels of expression of Txk in these cells (24). In the absence of both Itk and Txk, the kinase domain independent edge cannot drive maturation of these cells. These findings suggest that there may be some cooperation between these two nodes, Itk and Txk, such that expression of a kinase activity edge from another Tec kinase may be able to cooperate with a kinase domain independent edge of Itk in these functions.
Several studies have shown that IL-15 is required for the final maturation of iNKT cells and that the IL-2/IL-15 receptor β-chain (CD122) is important for NKT cell development (13,14). Thus the defect of final maturation in Itk−/− iNKT cells may be due to the lower CD122 expression in these cells. In support of this, Felices and Berg (24) have reported that Itk-deficient iNKT cells express lower levels of CD122 than WT iNKT cells. Our finding that CD122 and T-bet expression are independent of the kinase domain edge of Itk suggests that the IL-15 signaling pathway may contribute to the defect of Itk-null iNKT cells in maturing from stage 2 to stage 3.
More dramatically, the significant increase in expression of the T-bet-related transcription factor eomesodermin in Itk−/− iNKT cells, and its reduction upon expression of the kinase domain-deleted mutant of Itk, suggest that this kinase domain independent edge is critical in suppressing the expression of this transcription factor. It is likely that overexpression of eomesodermin alters iNKT cell maturation, and that the T-bet:eomesodermin ratio is critical in iNKT cell maturation. T-bet and/or eomesodermin have been shown to regulate the expression of CD122, although the exact nature of their contributions is not clear, particularly in iNKT cells (42)(43)(44). It is possible that CD122 expression is strictly dependent on T-bet expression in iNKT cells, and not on eomesodermin. Nevertheless, our data show that the kinase domain independent edge of Itk can partially restore this ratio or balance, thus partially restoring CD122 expression in iNKT cells. By contrast, CXCR3 has been demonstrated to be a prominent target of T-bet (40), and we observed a reduction in its expression in the absence of Itk, and prominent rescue upon expression of the K390R Itk mutant, indicating that, as seen for CD122, rescue of T-bet expression by the kinase domain independent edge has functional consequences in these cells.
Two recent studies have shown that the transcription factor PLZF is important at an early stage of iNKT cell development, with arrest at stage 1 in PLZF-null mice and arrest at stage 2 in PLZF-overexpressing transgenic mice (10,11). This suggests that proper expression of PLZF is critical for normal iNKT cell development and maturation. We found that PLZF mRNA levels were significantly elevated in Itk-null compared with WT iNKT cells, suggesting that elevated expression of PLZF may contribute to the defect in Itk−/− iNKT cells. Indeed, transgenic overexpression of PLZF results in defects in iNKT cell development, with arrest in maturation at stage 2, similar to that seen in the absence of Itk (45). In addition, iNKT cells from the Itk−/−/ΔKin mice have levels of PLZF that are closer to those seen in Itk−/− iNKT cells, suggesting that the interaction between the kinase domain independent edge and the regulation of expression of these transcription factors may modulate the maturation of these cells. Of course, we cannot exclude the possibility that other transcription factors and signaling pathways may also be involved downstream of Itk signals in iNKT cell development.
Perturbation of T cell receptor signaling pathway nodes often results in different effects on iNKT cell maturation and development compared with conventional T cell development (46). Indeed, the absence of the Itk node affects the development of naïve or conventional CD4+ and CD8+ T cells, while leaving development of non-conventional or innate memory phenotype T cells intact (19-21). These non-conventional or innate memory phenotype T cells have properties of iNKT cells, with the presence of preformed cytokine message, the ability to rapidly produce cytokines, as well as the requirement for both SAP and Itk nodes for their development (47). However, while the kinase domain independent edge of Itk cannot rescue conventional T cell development (21,22), it can partially rescue maturation of iNKT cells (this work). Conventional T cells express low levels of T-bet, eomesodermin, and PLZF, while non-conventional or innate memory phenotype T cells express high levels of these transcription factors (compared with naïve or conventional T cells) (19-22, 47). The ability of the kinase domain edge of Itk to regulate these factors also differs between iNKT cells and non-conventional or innate memory phenotype T cells. Taken together, this suggests that while iNKT cells share some characteristics with non-conventional or innate memory phenotype T cells, there are clear differences in the signaling networks that control their development.
Upon TcR stimulation, Itk is recruited to the cell membrane through its PH domain binding to PIP3 in the cell membrane, where Itk is phosphorylated and activated, as well as interacts with other signaling proteins, including SLP-76, LAT, GADS, PLC-γ1, and Vav, to assemble the productive signaling complex and subsequently initiate the downstream signaling pathways. Itk interacts with other proteins mainly through its SH2 and SH3 domains, and this adaptor function is very important for the downstream signaling pathways. Itk can regulate Vav localization and TCR-induced actin polarization independent of its kinase activity edge, but requires its PH and SH2 domains (29,33). In addition, the SH3 domain, but not the kinase activity edge of Itk, is required for antigen receptor-induced activation of the transcription factor SRF (32). This kinase domain independent edge can partially rescue antigen receptor-induced activation of Erk in Tec kinase-null DT40 cells (32). Itk may therefore utilize this edge, with its scaffolding function, in regulating signaling pathways that contribute to the maturation of iNKT cells. These pathways regulate the expression level of CD122, which in turn may be regulated by the balance of T-bet and eomesodermin expressed in these cells. The related kinase node Txk may play a part in regulating the function of this kinase independent edge. Our data indicate that the Itk kinase domain/activity edge may be targeted to affect iNKT cell function, while leaving some iNKT maturation intact.
"year": 2010,
"sha1": "93f48069960f91940081deaa496ccc0a99b394b2",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/286/1/138.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "f46337b808270ee8d1c56e0c4517407b99d2a028",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
245874576 | pes2o/s2orc | v3-fos-license | Sensitive Metal-Semiconductor Nanothermocouple Fabricated by FIB to Investigate Laser Beams with Nanometer Spatial Resolution
The focused ion beam (FIB) technique was used to fabricate a nanothermocouple (with a 90 nm wide nanojunction) based on a metal–semiconductor (Pt–Si) structure, which showed a sensitivity up to 10 times larger (with Seebeck coefficient up to 140 µV/K) than typical metal–metal nanothermocouples. In contrast to the fabrication of nanothermocouples which requires a high-tech semiconductor manufacturing line with sophisticated fabrication techniques, environment, and advanced equipment, FIB systems are available in many research laboratories without the need for a high-tech environment, and the described processing is performed relatively quickly by a single operator. The linear response of the manufactured nanothermocouple enabled sensitive measurements even with small changes of temperature when heated with a stream of hot air. A nonlinear response of the nanothermocouple (up to 83.85 mV) was observed during exposure to an argon-laser beam with a high optical power density (up to 17.4 Wcm−2), which was also used for the laser annealing of metal–semiconductor interfaces. The analysis of the results implies the application of such nanothermocouples, especially for the characterization of laser beams with nanometer spatial resolution. Improvements of the FIB processing should lead to an even higher Seebeck coefficient of the nanothermocouples; e.g., in the case of the availability of other suitable metal sources (e.g., Cr).
Introduction
Thermocouples are widely used as components of infrared sensors, thermal probes, motion sensors, energy generators, complex systems based on MEMS/NEMS structures, and many others [1][2][3][4][5][6]. The main advantage of thermoelectric measurement systems based on nanostructures is their very high spatial and temporal resolution in comparison to conventional macro- or microsystems, because larger structures average the results both in space and time. This is particularly important in scientific applications and especially useful for the characterization of laser radiation, where usually microscale devices have been used so far [1,7]. For thermoelectric energy conversion, in which thermal energy (heat flux) is transformed into electricity, the maximum efficiency is determined by the dimensionless figure of merit (ZT) of the chosen thermoelectric materials, given by ZT = S²σT/κ, where S is the Seebeck coefficient, σ is the electrical conductivity, κ is the thermal conductivity, and T is the temperature. Therefore, ZT is increased by a high Seebeck coefficient as well as low thermal conductivity and low electrical resistivity of the materials used for thermocouples [8].
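To make the quoted relation concrete, here is a minimal sketch computing ZT; the material parameters below are illustrative assumptions (placeholders of roughly the right order for a doped-silicon-like material), not values measured in this work:

```python
# Minimal sketch: thermoelectric figure of merit ZT = S^2 * sigma * T / kappa.
def figure_of_merit(S, sigma, kappa, T):
    """S: Seebeck coefficient [V/K], sigma: electrical conductivity [S/m],
    kappa: thermal conductivity [W/(m*K)], T: absolute temperature [K]."""
    return S ** 2 * sigma * T / kappa

# Illustrative placeholder values (assumed, not measured here):
S = 140e-6      # 140 uV/K, the order reported for the Pt-Si nanojunction
sigma = 1.0e4   # S/m
kappa = 30.0    # W/(m*K); bulk Si is ~150, nanostructured Si can be far lower
T = 300.0       # K

print(f"ZT = {figure_of_merit(S, sigma, kappa, T):.4f}")  # ~0.002 for these inputs
```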
Recent studies have shown that the nanostructuring of thermocouples can be an effective method of increasing ZT [9][10][11]. It is believed that this can be related to the reduction of the lattice thermal conductivity due to increased phonon scattering at interfaces of thermoelectric nanomaterials. From a practical point of view, the commercial thermoelectric materials commonly applied in contemporary devices usually have a relatively low Seebeck coefficient because, typically, they are metals or their alloys [2,12]. Therefore, in this study, we investigate the possibility of application of thermoelectric materials with high values of the Seebeck coefficient (e.g., semiconductors), shaped in the form of nanostructures.
In contrast to bulk Si, silicon nanowires and other silicon-based nanostructures have gained much attention for use in sensitive nanoscale thermoelectric systems due to their low thermal conductivity, large Seebeck coefficient, and the excellent spatial resolution of measurements. For example, n- and p-type silicon nanowires demonstrated Seebeck coefficients equal to 127.6 and 141.8 µV/K, respectively, at room temperature [13] and to 170.0 and 152.8 µV/K in the temperature range from 200 to 300 K [14]. However, the application of silicon wires has not always resulted in a device with spatial nanoresolution; for example, for Si nanowire arrays with reported high Seebeck coefficients [15,16], the measurement results were obtained not from a single nanowire but from the whole matrix of many wires, i.e., not at the nanoscale. In addition, for a thermocouple device for bolometric applications, with poly- and single-crystalline silicon wires, the reported Seebeck coefficient was high; however, none of its dimensions (100 µm long and 1 µm wide) was at the nanoscale [17]. For nanothermocouples based on a Cr thin film deposited on silicon, the Seebeck coefficients were equal to 924 µV/K and 515 µV/K, respectively, for nanothermocouples fabricated on a Si wafer and on a flexible Si substrate [18]. The fabrication process in the case of the above nanothermocouples [13,14,18] requires a high-tech semiconductor manufacturing line with sophisticated fabrication techniques and environments, and many pieces of specialized and advanced equipment, based on, e.g., cleanrooms, high-resolution photolithography, e-beam lithography for nanopatterns, etc. In contrast, our work shows the results of the fabrication of nanothermocouples, which lasts for a few hours, based on processing with FIB, which is available in many research laboratories (i.e., without cleanrooms and other high-tech environments) and can be performed by a single operator.
Typically, nanothermocouples are manufactured in a multi-step process using a large amount of advanced equipment, which for obvious reasons makes the manufacturing process expensive and unavailable for many potential applications. However, the use of the FIB method in the fabrication of nanothermocouples brings new possibilities. The FIB technique is primarily dedicated to carrying out various types of technological processes at the micro- and nanoscale (i.e., the etching of various materials or the deposition of metals and insulators), enabling the fabrication of unique structures or the modification of existing structures [19][20][21][22][23]. The simultaneous observation due to imaging with electrons and ions during FIB machining allows for the direct and precise quality control of performed FIB processes. In one experiment (without breaking vacuum), the technique allows the production of various nano- or microstructures, which may be an advantage in relation to other known technologies. Moreover, FIB systems are dedicated only to the production of small series of structures or devices, e.g., prototypes of highly specialized applications, and such instruments are particularly useful for research purposes.
The Seebeck nanojunction made of a Pt-W nanostrip prepared with the FIB technique has already been used to monitor the local temperature rise in the processed material during ion beam irradiation in FIB [24]. Although it was based on a metal-metal nanostructure, and therefore exhibited low sensitivity to a temperature gradient, it reportedly generated a linear response of up to 3.5 mV thermovoltage with a temperature increase of about 250 °C (i.e., with a Seebeck coefficient equal to about 14 µV/K) [24].
We decided to develop a nanothermocouple with unique features and performance; for this, we used FIB fabrication and a metal-semiconductor structure. In the approach proposed in our paper, the very high sensitivity of the produced metal-semiconductor structure (not available for typical metal-metal structures) enables us to apply the nanostructure to detect even small gradients of temperature while simultaneously using sensors with much reduced sizes. This results in a significant improvement in both the spatial resolution of the nanostructure in comparison to microstructures and the sensitivity of the metal-semiconductor nanothermocouple in comparison to metal-metal thermoelectric sensors.
It is worth mentioning that optical methods, when performed for nanosized material objects, also result in a remarkable spatial resolution and sensitivity in detecting local temperature changes; e.g., for single silicon nanoparticles [25] or single defects in diamond [26]. Furthermore, research into laser beams can be performed using a near-field scanning optical microscope (NSOM) [27,28].
Fabrication of the Base Structure
The manufacturing of the nanosensor consists of the base-structure fabrication and its modification in the FIB system (Helios NanoLab 600 DualBeam) using a gallium-ion beam, which forms the nanothermocouple. The base structure was manufactured on an n-type (R_SH ≈ 2 Ω per square) silicon substrate covered by a 500 nm thick SiO2 layer. The rectangular 100 nm thick platinum contact pads located in close proximity to each other (with a 2 mm × 5 mm size each) were produced on the surface of this oxide using the photolithography technique (Figure 1a). Separate microwires were attached to the contact pads (one for each contact pad) as electrical connections using a conductive silver paste, and those microwires were connected to separate standard electrical cables (Figure 1b).
Fabrication of Thermoelectric Nanostructures in the FIB/SEM System
Two thermoelectric junctions (one of the nanosize width) were manufactured in the FIB chamber using the FIB processing. These places are marked in Figure 1 as FIB processing areas of the left and right junctions. One of them is a "cold" junction (i.e., the unheated one, still at ambient temperature, also called a reference junction), and the second one is a "hot" junction (i.e., the heated one) during the measurements. They were located close to the outer edges of the contact pads, thus enabling maximum distance between the hot and cold junctions.
The idea of thermocouple fabrication is based on the concept in which an FIB-deposited platinum stripe (with its width narrowed by the FIB processes) is used as the metallic material for the thermocouple junction, while the monocrystalline silicon substrate is used as the semiconductor material. The views after consecutive FIB operations are schematically shown in Figure 2 and presented as top-view SEM images in Figure 3. In the first step, a square 20 µm × 20 µm microhole (marked as Si-window in Figure 3a) was milled (etched) through the SiO2 layer to the silicon substrate using FIB. The hole was located at a distance of about 30 µm from one contact pad. In this way, access to the silicon (with one layer of the thermoelement) was provided. The next step was the FIB deposition of the platinum micropath (145 µm long, 32 µm wide, and about 1 µm thick) linking the silicon in the square microhole with the contact pad (Figure 3b). The width of the deposited Pt stripe was larger than the size of the square microhole, covering the whole area of the exposed silicon in the hole. Afterwards, the Pt layer in the region of this hole was additionally thickened using platinum deposition by FIB (about 1 µm thick Pt with a size equal to 21 µm × 21 µm, Figure 3b). In this way, one thermoelectric Pt/Si junction was fabricated. Simultaneously with these operations at one thermojunction, the other Pt/Si junction was also manufactured at the outer edge of the other contact pad using the same FIB operations and procedures described above (starting from milling the hole, as in Figures 2b and 3a). The next steps led to the fabrication of the thermoelectric micro- or nanostructure using gallium-ion beam milling in the area of the square hole of the left junction (Figure 3c-f). At the beginning, the process led to the fabrication of a 5 µm wide strip in the middle part of the hole area (Figure 3c). In subsequent processing steps, the width of the thermoelectric Pt/Si junction was consecutively reduced in a similar way to 2 µm, 1 µm, 500 nm (Figure 3d), 200 nm, and 90 nm (Figure 3e), respectively. The right junction was the reference junction, and its initial size was not reduced.
Measurements of thermoelectric voltage (ThV) described below were performed after each iteration of decreasing the width of the left junction. All FIB processes leading to the fabrication of a Pt/Si micro- or nanostructure were performed using an ion-beam energy of 30 kV and an ion-beam current ranging from nanoamperes to picoamperes.
[Figure 2 caption: (a-c) The cross-section schemes showing consecutive steps of the FIB processing leading to the formation of a junction (e.g., the left junction) between platinum and silicon substrates. Figures (b,c) correspond to Figure 3a,b, respectively. The drawings are schematic, and the sizes are not to scale.]
Thermoelectric Measurements Using Hot-Air Stream
After the FIB processing of the micro- or nanojunction, the structure was electrically tested in the FIB chamber with the use of Kleindiek probe manipulators. This procedure was performed to check that the structure was electrically active. On the other hand, thermoelectric measurements of the thermovoltage ThV (generated due to a temperature difference between both junctions) were performed outside the FIB system using the Keithley K617 source-meter; the voltage readings were averaged over five measurements (with an error of ±0.05 mV). Firstly, a flow of hot air was used for heating one thermojunction (the "hot" junction; i.e., the left junction). The hot air coming out of an air-heater (equipped with a temperature controller) was the source of heating. This source was chosen because it allowed precise control of the temperature.
A hot airstream (coming out of a nozzle with a 1 mm inner diameter) was directed at the left thermojunction. The distance between the heated structure and the nozzle outlet was about 1 mm. The temperature of the airstream coming from the nozzle was checked before each measurement using a Pt 100 temperature sensor. The measurements were carried out at hot-air temperatures ranging from 37.5 °C to 100 °C in increments of 12.5 °C (with an error of ±0.5 °C). During the heating of the thermojunction with the hot air from the 1 mm wide nozzle outlet, the other thermojunction was located more than 10 mm away; i.e., at the ambient temperature (which during the measurements was equal to 22 °C). The other thermojunction (the right junction) was used as a reference junction and was not heated.
When the temperature of the airstream increased, a linear rise in the recorded voltage (ThV) was observed for all junction widths (Figure 4a). For the prepared micro- or nanostructures, the highest Seebeck coefficients were up to 150 µV/K. This is equivalent to a 10 times larger sensitivity to temperature gradients than obtained from the metal-metal (Pt-W) nanothermocouple described in [24]. The Seebeck coefficients showed a slight increase (Figure 4b) for the case of the 90 nm wide nanojunction after milling the air gap under the nanojunction in the way shown in Figure 3f. This effect was due to the reduced volume of the heated material and reduced thermal contact due to the removal of a silicon layer.
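Numerically, the Seebeck coefficient reported here is just the slope of the ThV-versus-temperature-difference line. A minimal sketch of that estimate is shown below; the data points are invented placeholders chosen to lie on a ~140 µV/K line, not the measured values:

```python
import numpy as np

# Hypothetical ThV readings [mV] at the hot-air temperatures used in the text;
# the cold junction is assumed to stay at the ambient 22 deg C.
T_hot = np.array([37.5, 50.0, 62.5, 75.0, 87.5, 100.0])   # deg C
ThV = np.array([2.2, 3.9, 5.7, 7.4, 9.2, 10.9])           # mV (invented values)

dT = T_hot - 22.0                           # temperature difference [K]
slope, offset = np.polyfit(dT, ThV, 1)      # linear fit: ThV = S * dT + offset
print(f"Seebeck coefficient ~ {slope * 1000:.0f} uV/K")   # ~140 uV/K
```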
All the fabricated Pt/Si junctions (micro- and nanojunctions) provided excellent thermoelectric signal detection and a linear response, even for the 90 nm wide junction, which enabled nanometer spatial resolution. Moreover, their sensitivity was up to 10 times larger than for typical metal-metal junctions [2,12,24].
Exposure to Laser Beam
To improve the quality of the metal-semiconductor interface of the 90 nm wide nanojunction (in the form of the bridge structure presented in Figure 3f), it was exposed to a powerful beam of an argon-ion laser (with a beam diameter approximately equal to 0.7 mm). The continuous wave (CW) argon laser beam (λ = 514 nm, from an H230NDL001 laser unit with variable beam power, produced by National Laser Company) was directed at the thermoelectric nanostructure (the left junction). The optical power density of the laser beam was determined using a ThorLabs PM100D optical power meter with the S130C sensor. The incident laser beam also caused heating of the structure to high temperatures (i.e., laser annealing of the nanojunction), as well as the measured voltage changes at different laser beam powers. The measured voltages for the nanojunction were determined using a Keithley 2100 multimeter and averaged over 5 measurements (Figure 5).
The experiment with this laser was used to improve the quality of the junctions by their annealing and, at the same time, to determine the response of the structure to the incident laser beam, even for a laser beam with an optical power density as high as 17.4 Wcm−2, with a signal response equal to 83.85 mV. The observable nonlinearity in Figure 5 can be attributed, among other factors, to significant heat spreading occurring at high gradients of temperature.
Thermoelectric Measurements Using Hot Airstream after Laser Treatment
After experimenting with the laser illumination, the 90 nm wide nanojunction with an air gap was tested again using a hot airstream. The conditions and parameters of the hot airstream were the same as they were before the laser experiment.
The obtained ThV results improved when both junctions (left and right) had been previously exposed to the powerful laser beam; i.e., when laser annealed (Figure 6). The improvement was related to higher Seebeck coefficients (and therefore also ThV values). The Seebeck coefficient for the laser-heated nanojunction increased to 140 µV/K, while it was equal to 135 µV/K before the laser heating. Generally, it is important to apply high-temperature annealing with the aim of increasing the quality of metal-semiconductor electrical contact after the FIB deposition of a metal layer on a semiconductor. As shown, the application of laser annealing as described above was an easy and effective method to fulfil this need.
For different cases of FIB deposition (various metals, semiconductors, and details of the deposition process), the improvement can be even larger due to annealing. This improvement was most likely related to the quality improvement of the metal-semiconductor interface in the nanojunctions during the laser heating. The annealing does not influence the linearity of the obtained results (as shown in Figure 6), and the measurements give reproducible results.
The voltage response of each nanojunction may be measured when applying preprogrammed temperatures (e.g., with a hot airstream). Thus, such calibrated nanothermocouples can also be effectively used in the case of a multijunction set, enabling precise measurements even if small differences between their individual responses (i.e., their Seebeck coefficients) occur. Then, the "cold" (unheated) junction is common for all nanojunctions.
Application of Thermoelectric Nanothermocouples Fabricated by FIB
The use of the thermoelectric nanostructure presented in this work to study the laser beam may be particularly useful for a highly sensitive analysis of the optical near-field distribution and of the mode structure of laser beams with nanometer resolution and over a wide spectrum of wavelengths. The point-to-point scanning of the whole area of the laser beam in the near-field using thermoelectric nanostructures can determine the nature of the laser beam; in particular, the spatial intensity distribution of the emitted radiation. Such an approach would allow for the experimental study of the phenomenon of the fibrous structure formation within the laser beam or enable the control of the beam-steering phenomenon [29,30].
An important advantage of such research is the possibility of designing structures with intentionally increased losses for unwanted modes in a laser beam. Such an example for a mid-infrared semiconductor laser has been presented [31] where the higher order modes were removed. The available measurements, however, were performed and shown only for the far-field pattern of the device (and with 1 mm intervals between the measurement points). Other potential applications of thermoelectric nanostructures in near-field research are, e.g., optimizations of the threshold current and the wall-plug efficiency and improvements of the optical power or the luminance quality for almost all types of lasers.
Conclusions
The research aimed at manufacturing a metal-semiconductor (Pt-Si) nanothermocouple by the FIB technique and determining the level of its thermoelectric response. The obtained results confirmed that the nanothermocouple can be suitable for the study of laser beams with a nanometer spatial resolution and high sensitivity (10 times better than for metal-metal nanostructures). A Seebeck coefficient up to 140 µV/K was measured, with a highly linear response to heating temperatures. An experiment with the junctions exposed to a powerful argon-laser beam was performed to improve the quality of the junctions (by laser annealing) and to determine the response of the structure to such a high-power incident laser beam. The measurements performed after the abovementioned process revealed an improvement of the nanothermocouple parameters, related to the quality increase of the metal-semiconductor interfaces in the junctions. Our nanothermocouple based on a metal-silicon nanojunction (Pt-Si) was intended for the study of low-energy laser radiation, which causes small changes in temperature; therefore, the operating temperature range of our thermocouple should be from ambient temperatures up to 100 °C.
The obtained results show the potential usefulness of nanothermocouples made with the FIB technique. They can be used particularly in specialized thermoelectric sensors/detectors with nanometer spatial resolution for detecting the heat and radiation of various spectral ranges and intensities, especially for the near to far infrared range.
In contrast to the fabrication of thermometric structures, which requires a high-tech semiconductor manufacturing line with sophisticated fabrication techniques, environments, and many pieces of advanced equipment, our work shows the results of the fabrication of nanothermocouples (with excellent parameters) using the FIB method, which is available in many research laboratories (i.e., without cleanrooms and other high-tech environments) and can be performed by a single operator. Improvements of the FIB technique should lead to an even higher Seebeck coefficient of the nanothermocouples fabricated with this method; e.g., in the case of the availability of other suitable metal sources (e.g., of chromium, as applied in [18]). | 2022-01-12T06:18:24.970Z | 2021-12-31T00:00:00.000 | {
"year": 2021,
"sha1": "f6e82763e2f25463205586b166645219572aed3d",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "7414cced27c4930b41f80a1be7b2119af5a5de3d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
15304512 | pes2o/s2orc | v3-fos-license | The Ionizing Continuum of Quasars
The ionizing continuum shape of quasars is generally not directly observable, but indirect arguments, based on photoionization models and thin accretion disk models, suggest that it should peak in the extreme UV, and drop steeply into the soft X-ray regime. However, recent observations of very soft X-ray emission in low z quasars, and far UV emission of high z quasars, suggest that the ionizing continuum of quasars does not peak in the extreme UV, and may extend as a single power law from ~1000 Å to ~1 keV. If true, that has interesting implications for photoionization models and for accretion disk models. The proposed revised continuum shape will be tested directly in the near future with FUSE.
Introduction
What is the shape of the ionizing continuum (1 to ∼10 Rydbergs) of quasars? This question is interesting because this is where quasars' continuum emission peaks (e.g. Sanders et al. 1989; Elvis et al. 1994), and it therefore provides an important clue to the nature of the continuum emission mechanism. It is also interesting because this spectral shape controls the H to He photoionization ratio, and the heating per ionization of the ionizing continuum, and is thus important for understanding the physical parameters of the photoionized gas in quasars.
The best guess mechanism for the continuum emission in quasars is accretion of gas into a massive black hole (e.g. Rees 1984). The most detailed models calculated are for thin accretion disks. One can thus use theoretical accretion disk spectra to predict the ionizing continuum shape based on the observed optical-UV emission. The implied shape tends to peak in the extreme UV (EUV, e.g. Laor 1990). One can also constrain the EUV spectral shape using the He II λ1640 equivalent width. This line is presumably a pure recombination line, and thus it can be used as a rather accurate measure of the number of He II ionizing photons. Mathews & Ferland (1987) used this argument to deduce an ionizing continuum which peaks at a few Rydbergs. Thus, it appeared that both accretion disks and photoionization models indicate that the ionizing continuum peaks in the EUV.
However, these arguments are indirect, and one would like to get the best possible observational constraints on the ionizing spectral shape. To address this question we carried out a large program with the ROSAT position sensitive proportional counter (PSPC). The results were surprising, as further described below (for more details see Laor et al. 1994, 1997).
How to Observe?
The large Galactic opacity prevents a direct observation of the EUV in quasars. One alternative is to observe the UV spectra of very high redshift quasars. The other alternative, adopted in our study, is to go to the other side of the Galactic opacity barrier and to observe low redshift quasars in very soft X-rays.
X-ray observations below 1 keV prior to ROSAT indicated a spectral steepening, or equivalently an excess emission, relative to the flux predicted by an extrapolation of the hard X-ray power-law (e.g. Arnaud et al. 1985; Wilkes & Elvis 1987; Turner & Pounds 1989). In some objects the excess could be described as a very steep and soft component, which is consistent with the Wien tail of a hot thermal component dominating the UV emission. However, these studies were limited by the low signal to noise ratio (S/N) and energy resolution of the EINSTEIN IPC and the EXOSAT LE detectors, in particular in the crucial energy range below 0.5 keV. This prevented an accurate determination of the soft X-ray emission spectrum of quasars. Furthermore, the objects studied do not form a complete sample, and these results are likely to be biased by various selection effects which were not well defined a priori. In particular, most studied objects are nearby, intrinsically X-ray bright, AGNs.
The PSPC detector aboard ROSAT had a significantly improved sensitivity, energy resolution [E/ΔE = 2.4 (E/1 keV)^1/2 FWHM], and spatial resolution below ∼2 keV, compared with previous detectors (Trümper 1983). We used this detector to make an accurate determination of the soft X-ray properties of a well defined, complete, and otherwise well explored, sample of quasars.
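As a quick numerical illustration of the quoted resolution formula (a sketch using only the expression above; the sample energies are arbitrary):

```python
# PSPC resolving power: E/dE = 2.4 * (E / 1 keV)**0.5 (FWHM),
# so the FWHM width is dE = E / (2.4 * sqrt(E / 1 keV)).
for E_keV in (0.2, 0.5, 1.0, 2.0):
    resolving_power = 2.4 * (E_keV / 1.0) ** 0.5
    dE_eV = 1000.0 * E_keV / resolving_power
    print(f"E = {E_keV:3.1f} keV -> E/dE = {resolving_power:.2f}, FWHM ~ {dE_eV:.0f} eV")
```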
What to Observe?
We found the BQS sample, a subset of the PG survey defined by Schmidt & Green (1983), to be particularly suitable for our purpose for the following reasons:
1. These objects are selected only by their optical properties, thus they are not directly biased in terms of their X-ray properties.
2. This sample has already been studied extensively, and in a uniform manner, in other parts of the spectrum, including the radio (Kellerman et al. 1989; Miller, Rawlings & Saunders 1993), the mid- to far-infrared (Sanders et al. 1989), and the near-infrared to optical (Neugebauer et al. 1987). High quality optical spectroscopy was obtained by Boroson & Green (1992), HST FOS spectroscopy was obtained by Wills et al. (1998, these proceedings), and ASCA and SAX X-ray spectra were obtained by George et al. (1998) and Fiore et al. (1998). These studies of the PG quasars provide us with the most complete and coherent picture of the emission properties of bright AGNs, and allow us to make a detailed study of possible correlations between the soft X-ray properties and various other emission properties.
3. This sample includes a large fraction of the brightest known quasars, thus rather high S/N spectra could be obtained within a reasonable amount of spacecraft time.
The complete PG sample includes 114 AGNs, of which 92 are quasars (i.e. M_B < −23). We selected a subsample of the PG quasars which is optimally suitable for soft X-ray observations by the following two selection criteria:
1. z ≤ 0.400. This prevents the rest-frame 0.2 keV from being redshifted beyond the observable range.
2. N_HI(Gal) < 1.9 × 10^20 cm^−2, where N_HI(Gal) is the H I Galactic column density as measured in 21 cm. This low N_HI(Gal) cutoff is critical for minimizing the effects of Galactic absorption. This cutoff implies an upper limit on the Galactic optical depth in our sample of τ(0.2 keV) < 1.6 (Morrison & McCammon 1983). Even with this low N_HI(Gal) cutoff, no photons below 0.15 keV can be detected. This is because the opacity of the Galaxy increases as ∼E^−3, giving τ(0.1 keV) = N_H/(1.77 × 10^19 cm^−2), while the effective area of the PSPC drops rapidly below 0.15 keV. As a result practically no photons below 0.15 keV can be detected from the quasars (although the formal lower limit of the usable channels on the PSPC is 0.1 keV).
These criteria limited our sample to 23 quasars, which should be representative of the low-redshift, optically-selected quasar population.
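A sketch of how the two quoted relations combine for the sample's N_H cutoff is given below; note that the ∼E^−3 scaling is the rough approximation quoted above, not the full Morrison & McCammon (1983) cross-section, so the numbers are only indicative:

```python
import math

def tau_ism(E_keV, N_H):
    """Approximate Galactic optical depth: tau(0.1 keV) = N_H / 1.77e19,
    scaled to other energies with the rough ~E^-3 dependence quoted in the text."""
    return (N_H / 1.77e19) * (E_keV / 0.1) ** -3

N_H = 1.9e20  # the sample's Galactic column cutoff [cm^-2]
for E in (0.10, 0.15, 0.20, 0.50):
    t = tau_ism(E, N_H)
    print(f"E = {E:4.2f} keV: tau ~ {t:5.2f}, transmission ~ {math.exp(-t):.3f}")
# At 0.1 keV essentially nothing gets through, consistent with the text.
```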
Accurate values of N_HI(Gal) are crucial, even for our low-N_HI(Gal) sample, in order to make an accurate determination of the intrinsic soft X-ray spectrum. The N_HI(Gal) values were taken from Elvis, Lockman & Wilkes (1989), Savage et al. (1993), Lockman & Savage (1995), and the recent extensive measurements by Murphy et al. (1996). All these measurements of N_HI(Gal) were made with the 140 foot telescope of the NRAO at Green Bank, WV, using the "bootstrapping" stray radiation correction method described by Lockman, Jahoda, & McCammon (1986), which provides an angular resolution of 21', and an uncertainty of ΔN_HI(Gal) = 1 × 10^19 cm^−2 (and possibly lower for our low-N_H quasars). This uncertainty introduces a flux error of 10% at 0.2 keV, 30% at 0.15 keV, and nearly a factor of 2 at 0.1 keV. Thus, with our accurate N_HI(Gal), reasonably accurate fluxes can be obtained down to ∼0.15 keV.
The Soft X-ray Continuum
All the objects in our sample were detected with the PSPC, and high quality spectra were obtained for most objects. The spectra of 22 of the 23 quasars are consistent, to within ∼30%, with a single power-law model at rest-frame 0.2–2 keV. There is no evidence for significant soft excess emission with respect to the best fit power-law. We place a limit (95% confidence) of ∼5 × 10^19 cm^−2 on the amount of excess foreground absorption by cold gas for most of our quasars. The limits are ∼1 × 10^19 cm^−2 in the two highest S/N spectra.
Significant X-ray absorption (τ > 0.3) by partially ionized gas ("warm absorber") in quasars is rather rare, occurring for ≤ 5% of the population, which is in sharp contrast to lower luminosity Active Galactic Nuclei (AGNs), where significant absorption probably occurs for ∼ 50% of the population.
A significantly flatter α_ox is obtained when the three X-ray weak quasars (see Laor et al. 1997), and the absorbed quasar PG 1114+445, are excluded.
Thus, "normal" RQQ quasars in our sample have α ox = −1.48 ± 0.10, α x = −1.69 ± 0.27, while for the RLQ α ox = −1.44 ± 0.12, α x = −1.22 ± 0.28, where the ± denotes here and above the dispersion about the mean, rather than the error in the mean. Zheng et al. (1997) have constructed a composite quasar spectrum based on HST spectra of 101 quasars at z > 0.33. They find a far-UV (FUV) slope (1050-350Å) of α FUV = −1.77 ± 0.03 for RQQs and α FUV = −2.16 ± 0.03 for RLQs, with slopes of ∼ −1 in the 2000-1050Å regime. The Zheng et al. mean spectra, presented in Figure 6, together with the PSPC mean spectra, suggest that the FUV power-law continuum extends to the soft X-ray band. In the case of RQQs there is a remarkable agreement in both slope and normalization of the soft X-ray and the FUV power-law continua, which indicates that a single power law continuum component extends from ∼ 1000Å to ∼ 1 − 2 keV. RLQs are predicted to be weaker than RQQs at ∼ 100 eV by both the FUV and the PSPC composites. It thus appears that there is no extreme UV sharp cutoff in quasars, and no steep soft component below 0.2 keV. This implies that the fraction of bolometric luminosity in the FUV regime may be significantly smaller than previously assumed.
The EUV Continuum
The UV to X-ray continuum suggested in Figure 1 is very different from the one predicted by thin accretion disk models and suggested by photoionization models. In particular, it implies an approximately four times weaker FUV ionizing continuum compared with the Mathews & Ferland continuum, which was deduced based on the He II λ1640 recombination line equivalent width.
What does it Mean?
Photoionization models
Korista, Ferland & Baldwin (1997) discuss possible ways to reconcile the revised ionizing spectral shape with photoionization models. They find that there is no way to adjust the BLR parameters to produce the observed strength of He II together with the other UV emission lines, and so they conclude that either the ionizing continuum is anisotropic, and the BLR sees a harder continuum than what we see, or that the interpolation between the FUV and soft X-ray emission is wrong, and there is an EUV peak near 4 Rydbergs.
An anisotropic ionizing continuum is naturally produced by thin accretion disks, as the radiation from the hottest inner parts of the disk is deflected towards low inclination angles by the combined effect of Doppler beaming and gravitational deflection (e.g. fig.8 in Laor & Netzer 1989). Unified models of AGNs (e.g. Antonucci 1993), as well as X-ray spectroscopy of the Fe Kα line (Nandra et al. 1997), indicate that quasars are generally seen not too far from face-on. Thus, the BLR is most likely spread at high inclination angles, together with the rest of the obscuring gas, and the observed continuum will always be softer than the one incident on the BLR.
The other possibility of an EUV peak may have a physical explanation as a bound-free He II emission edge produced in the disk's atmosphere (Hubeny & Hubeny 1997, 1998). The problems with this explanation are that a strong He II emission edge requires fine tuning of the disk model parameters. It also requires fine tuning of the spectral shape so that the emission shortward of the EUV peak will look like a smooth extension of the emission longward of the peak.
[Figure 1 caption: Composite optical-soft X-ray spectrum for the RQQs and RLQs in the Laor et al. sample (thick solid line). Note that despite the fact that RLQs are brighter at 2 keV, they are fainter at 0.2 keV. The Mathews & Ferland (1987) spectral shape assumes a hard X-ray power law down to 0.3 keV and a very steep component below 0.3 keV. This spectral shape is inconsistent with the PSPC results. The Zheng et al. composites for RLQs and RQQs are plotted in a thin solid line. They suggest that the FUV power law extends into the soft X-ray regime, with no extreme UV spectral break and no steep soft component below 0.2 keV.]
Accretion disk models
Accretion disks inevitably produce a spectral shape which rises slowly with frequency and then drops steeply above some cutoff frequency. This just reflects the fact that the disk is powered by gravity, and that the dissipated energy is radiated locally (and that it has an inner edge). The revised ionizing continuum drops much more slowly than possible with any form of simple thin accretion disk models (e.g. Fig.7 in Laor et al. 1997). This slow drop may reflect a drop in the disk radiation efficiency at small radii, either due to the disk becoming optically thin, so that the viscous infall time is shorter than the gas cooling time, or if the optical depth remains large, the radiative efficiency may drop due to trapping of the outgoing radiation in the disk, and its advection beyond the black hole event horizon. Alternatively, part of the dissipation may occur in a warm corona above the disk, which will turn the disk exponential tail EUV emission passing through it into a power law tail (e.g. Czerny & Elvis 1987).
Is the PSPC well calibrated at low energy?
Both EINSTEIN and ASCA observations of quasars generally suggest a flatter soft X-ray power-law emission. This raises the possibility that the PSPC may be badly calibrated, and that the FUV -soft X-ray match may just be a coincidence. A proper evaluation of the calibration of these X-ray telescopes is well beyond the scope of this contribution. It is sufficient to say that there is no consensus about this issue in the X-ray community.
However, the following result suggests that the PSPC is most likely well calibrated below 0.5 keV. Figure 2 compares the Galactic N_H deduced from the accurate 21 cm measurements with the best fit X-ray column deduced using N_H as a free parameter (Laor et al. 1997). In most objects the two columns agree to within ∼1σ. Two objects, PG 1116+215 and PG 1226+023, have a very high S/N PSPC spectrum, and for these N_H(X-ray) is very well determined (to within 0.8–1 × 10^19 cm^−2), yet this column is still consistent with N_H(21 cm), indicating that both methods agree to 5–7%. This remarkably good match implies that the PSPC is very unlikely to have a significantly biased calibration below ∼0.5 keV, where the ISM absorption becomes significant. The N_H(X-ray) vs. N_H(21 cm) match also has various interesting physical implications, as further discussed in Laor et al. (1997).
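The consistency test described here amounts to asking whether the two column measurements agree within their combined uncertainties. A sketch with invented numbers of the order quoted above (these are not the actual fitted values):

```python
import math

# Hypothetical columns [10^19 cm^-2] and 1-sigma errors for one high-S/N quasar:
N_21cm, err_21cm = 1.8, 0.10   # 21 cm measurement
N_xray, err_xray = 1.7, 0.09   # free-N_H power-law fit to the PSPC spectrum

sigma_diff = math.hypot(err_21cm, err_xray)          # combined 1-sigma error
n_sigma = abs(N_xray - N_21cm) / sigma_diff
print(f"difference = {N_xray - N_21cm:+.2f} x 10^19 cm^-2  ({n_sigma:.1f} sigma)")
```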
It is also likely that the PSPC is well calibrated above 0.5 keV. Laor et al. (1994, 1997) fitted each object above 0.5 keV, to look for spectral curvature by comparing this fit to the fit to the whole PSPC band (their fit 3). They found that the mean spectral slope above 0.5 keV is not significantly different from the mean slope over the whole PSPC band. This suggests that the PSPC is also well calibrated above 0.5 keV. Otherwise, the PSPC calibration above 0.5 keV would need to be biased in such a way as to just compensate for an intrinsic slope change above 0.5 keV.
Is it dust reddening?
Zheng et al. corrected their individual quasar FOS spectra for Galactic extinction and also made a statistical correction for absorption by the Lyman forest. However, no correction was applied for reddening intrinsic to the quasars. The Galactic extinction curve rises steeply below 2000Å and relatively small extinction in the optical may induce significant reddening in the FUV. If quasars have dust with a Galactic extinction curve one may worry that the observed steepening below 1000Å is induced by dust.
The dust opacity for a variety of grain compositions peaks at ∼700–800 Å and drops steeply at shorter wavelengths, to about 1/3 of the peak opacity at ∼300 Å (see Fig. 6 in Laor & Draine 1993). Thus, if the observed steepening below 1000 Å were due to dust extinction, then the spectrum at λ < 700 Å should have flattened back due to the decreasing extinction. Since the observed composite does not show such a recovery, it is not likely that the steepening is due to intrinsic dust absorption, whether it is Galactic dust or dust of other compositions.
Are we comparing apples and oranges?
The Zheng et al. sample includes only z > 0.33 quasars, and their composite FUV slope is based mostly on z ≥ 1 quasars, while our sample is limited to z ≤ 0.4 quasars only. Thus the two samples are practically disjoint. If the FUV to soft X-ray spectral shape is redshift dependent then we are not comparing similar objects, and the apparent agreement of the FUV and soft X-ray composites would be just a coincidence.
One clearly needs to explore the FUV to soft X-ray spectral shape in samples with similar redshifts. A stronger test is to explore whether the mean FUV and soft X-ray continua agree in a given sample, and the strongest test is to explore whether they agree for each object in the sample. A large program was recently approved for the FUSE mission (PI Anuradha Koratkar) to obtain high quality FUV spectra for all our sample of 23 quasars. The spectra will be obtained down to the Galactic Lyman limit cutoff, i.e. typical rest frame λ ∼ 750Å, which is well below the 1000Å break. This will allow us to clearly determine for each object whether the FUV and soft X-ray continua agree.
The EUV in Other Types of AGNs
Seyfert galaxies show flatter α_ox than quasars, but they also show a flatter α_x than in quasars. This raises the possibility that these objects may also have a single power law component extending from the FUV to ∼1 keV, as was suggested by Zheng et al. (1995) for Mrk 335. FUSE observations of a large sample of Seyfert galaxies having high quality PSPC spectra are required in order to address this possibility, although given the low z of Seyferts, it will be possible to probe their continuum slope only down to rest frame λ850 Å.
[Figure 2 caption: The H I column determined by 21 cm measurements versus the best fit H I column determined by a power-law fit to the quasars' PSPC spectra. The assumption that both values are equal, indicated by the straight line, is acceptable at the 8% level. Note, in particular, the two highest S/N objects, which deviate from the straight line by less than 1 × 10^19 cm^−2. The agreement between the two measures of N_H indicates a lack of intrinsic cold gas absorption in quasars, and that H I/He I ≃ H/He in the ISM at high Galactic latitudes.]
BALQSOs appear to have a steep FUV slope (Korista et al. 1992; Arav et al. 1998), and they also generally show a steep α_ox (Green & Mathur 1996), most likely due to strong X-ray absorption (Mathur, Elvis, & Singh 1995). Again, this raises the possibility that their FUV extrapolates to the soft X-ray flux level. HST and FUSE observations of a larger sample of z ∼ 1–2 BALQSOs can address this possibility. If the FUV of BALQSOs is indeed generally very steep, that would exacerbate the energy budget problem, which is already significant in normal quasars. Since the weakness of the soft X-ray emission is most likely due to absorption, the steep FUV spectra may also be due to (a wavelength dependent) absorption.
Some AGNs must clearly have an EUV continuum which is very different from a simple power law. In particular, Puchnarewicz et al. (1994, 1995a) find a number of AGNs with an extremely strong soft X-ray component, much above the UV component. However, these AGNs were selected from the most luminous soft X-ray sources known, and are thus most likely extreme cases. Other AGNs have a rather steep UV continuum, but flat α_ox (Puchnarewicz & Mason 1998), possibly due to extinction of the optical continuum and absorption of the soft X-ray continuum. | 2014-10-01T00:00:00.000Z | 1998-10-15T00:00:00.000 | {
"year": 1998,
"sha1": "8579a54586e6f3226dfa3cf5392c9792bb860b8c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "d50cf723cc080b80b01fb6b0029796487c6fe5b0",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
233620340 | pes2o/s2orc | v3-fos-license | Evaluating Occupational Stress Levels of the Railway Workers
AIM This study aimed to determine the levels of stress that are experienced by railway workers. METHOD This descriptive, cross-sectional study was conducted at the facilities of Turkish State Railways. The study sample included 322 male workers. The study data were collected between February and April 2015. A personal information form and the Doetinchem Organizational Stress Questionnaire were used to collect data. RESULTS It was determined that the employees are influenced by all sub-dimensions of the groups of stressors, social changes, psychological tensions, and complaints about health, and they have a medium level of stress. The study found that the workers were mostly influenced by their responsibilities and occupational uncertainty about the future. CONCLUSION Descriptive characteristics of the workers and work-related and occupational characteristics showed statistically significant differences in mean scores of the subscales of stressors, social variables, psychological variables, and health complaints. In the future, defining stress-related factors by determining the stress levels of employees will guide the initiatives intended to reduce work-related stress.
Introduction
Employee health is substantially affected by work, and work is affected by employees' health as well. This relationship should be explored to protect employee health and improve the quality of work (Bilir & Yıldız, 2014).
Most people spend a major part of their adult life working in an environment in which they face many physical and psychological challenges, requiring them to cope with varying degrees of stress. Physical (for example, temperature, lighting, pressure, ventilation, radiation, and noise) as well as chemical conditions (for example, lead, benzene, and mercury exposure) have negative effects on health. A working environment includes both physical/chemical and social/psychological environments. Work-related stress has potential health effects on the workers (Capasso, 2018). According to the International Labor Organization, stress is the harmful physical and emotional response owing to an imbalance between the perceived demands and the perceived resources and abilities of individuals to cope with those demands. Work-related stress is determined by work organization, work design, and labor relations and occurs when the job demands do not match or exceed the capabilities, resources, or needs of the worker or when the knowledge or abilities of an individual worker or a group to cope are not matched with the expectations of the organizational culture of an enterprise (International Labor Organization, 2016).
People can experience stress in different aspects of their lives; one of them is working life, which is a stressful environment (De Sio et al., 2017). Each employee and each job have unique sources of stressors, which vary by personal characteristics, technology, work environment, and interpersonal communication (Motowidlo et al., 1986).
It is important to determine the situations that generate stress and how they affect employees. Instead of being controlled by stress, workers should control their own stress (Potter & Perry, 2009). Work-related stress can be managed by changing personal characteristics, attending social activities, or implementing time management (Aydın, 2016; Potter & Perry, 2009). In addition, work-related stress can be reduced by some changes in the work environment, in which common decisions are made by employees, roles in the workplace are defined, conflicts are reduced, work conditions are improved, and social support is provided (Aydın, 2016; Garcia-Herrero et al., 2017).
The share of railway services in transportation networks is over 10% in developed countries; it is 1.5% in Turkey (İnan & Demir, 2017). Railways were rapidly developed by reconstruction after 2003 in Turkey (Sarı et al., 2011). This development made railway workers' problems with their work and workplace a current issue. Railway workers may be exposed to high levels of stress because they are assigned to shift work, seasonal work, and long-distance road work, and factory and workshop workers are assigned to hazardous and very hazardous work. Altundaş et al. (2010) studied railway workers and found that their job satisfaction was low and their risk of exposure to high-voltage transmission lines, noise, and work accidents was high. They described the negative aspects of their work life as physically demanding work conditions, irregular work hours, low pay, and poor work and rest facilities. They said that they experienced sleep disorders owing to shift work, worked in extreme cold and hot weather, and had musculoskeletal problems. Canpolat (2006) found that railway workers experience stress concerning their relationships with superiors (58%), low pay (52%), the complex structure of the workplace (43.5%), poor-quality food (21.7%), a high risk of work accidents (20.3%), excessive work hours (20.3%), relationships with peers (18.8%), lack of break time (8.7%), and the work environment (5.8%).
Managing workplace stress is an important area of work-related health and safety. One of the important tasks of an occupational health nurse is to organize the interventions to manage workplace stress. The occupational health nurse performs nursing interventions to manage work stress in employees. They identify the source of stressors in the workplace, determine which employees have the highest levels of stress, and intervene to reduce current sources of stressors. They take preventive measures to protect the employees' health against the negative effects of stress and help the individuals cope with the harmful outcomes of stress. They intervene to help the employees to adapt to stress (Clemen-Stone et al., 2002; Usca, 2013). They perform evidence-based implementations to improve the quality of life and health of the employees (Rogers, 2012). Assessing employees' stress levels can be a guide to plan stress management interventions.
In workplaces with a high number of employees, it may be difficult for individuals to adapt to the work, colleagues, and organization. This may increase the number of factors that create stress in the workplace and increase employees' perception of these factors. There are many studies in the literature examining the causes, consequences, and ways of coping with work stress (Usca, 2013; Smith et al., 2019; Yang et al., 2019). However, in our country there is no large-scale study in the field of occupational health, conducted by the nurse as a core member of the occupational health team, that defines the stress level of employees in crowded workplaces covering different occupations. Determining the stress levels and job stressors of workers working together in different job areas can guide the prevention and elimination of these stressors. Such initiatives can contribute positively to employee health and a safe work environment.
From this perspective, this study was conducted to evaluate the stress levels of employees working in a public institution.
Research Questions
1. Which personal descriptive characteristics affect the mean scores of the Doetinchem Organizational Stress Questionnaire (VOS-D) stressor, social changes, psychological tensions, and complaints on health?
2. Which work and workplace characteristics affect the mean scores of VOS-D stressor, social changes, psychological tensions, and complaints on health?
Study Design
This was a descriptive, cross-sectional study.
Sample
The data were collected from 5 factories and workshops. Before the study, the researcher performed a power analysis to calculate the sample size, using the mean scale scores obtained in a similar earlier study (Çınar, 2010). With an alpha (α) of 0.05, a power (1−β) of 0.90, and a deviation of 0.05, the analysis indicated that at least 300 individuals should participate in this study. The sample comprised 322 employees who communicated openly with the researcher and agreed to participate. The study data were collected during personal interviews performed during break times within normal working hours.
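A calculation of this kind can be reproduced with a standard normal-approximation formula. The sketch below is illustrative only: the alpha of 0.05 and power of 0.90 come from the text, whereas the standard deviation and detectable mean difference are hypothetical placeholders, since the exact scale means taken from Çınar (2010) are not reported here.

```python
from math import ceil
from scipy.stats import norm

def n_for_mean_difference(sigma: float, delta: float,
                          alpha: float = 0.05, power: float = 0.90) -> int:
    """Minimum sample size to detect a mean difference `delta` when the
    standard deviation is `sigma`, using a two-sided normal approximation."""
    z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # ~1.28 for power = 0.90
    return ceil(((z_alpha + z_beta) * sigma / delta) ** 2)

# sigma and delta are hypothetical placeholders, not values from the article;
# with these inputs the formula returns roughly 313 participants.
print(n_for_mean_difference(sigma=0.60, delta=0.11))
```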
Data Collection
The data for this descriptive study were collected between February and April 2015, during workers' break times within working hours. The researcher distributed the data collection tools to the workers in groups in the restroom or canteen and collected them after they were completed.
Data Collection Tools
The researcher used a personal information form and VOS-D to collect the data.
Personal Information Form
This form included 21 questions about personal descriptive information as well as workplace and work information. The questions were prepared in accordance with the literature (Aydın, 2016; Bilir & Yıldız, 2014; Canpolat, 2006; Motowidlo et al., 1986; Potter & Perry, 2009). The form included questions about: gender, age, educational status, marital status, work unit, staff status, work experience, work order, physical workplace conditions (noise, inadequate/extreme illumination, inadequate ventilation, extreme cold and hot weather, dust, smoke, radiation, extreme humidity, vibration, pressure, inadequate equipment, insufficient working area, run-down building, badly designed/inadequate furniture, insufficient toilets, insufficient restroom/canteen), ergonomics in the workplace, relationships with coworkers and superiors, exposure to work-related violence, job health and safety measures, status of encountering job accidents, perception of work conditions, perception of work stress, job satisfaction, thoughts of changing jobs, smoking and alcohol consumption habits, disease, and average income level.
The Doetinchem Organizational Stress Questionnaire
The original questionnaire was created in Dutch and adapted into Turkish by Türk (1997). The VOS-D is an 81-item Likert-type scale used to identify and estimate the levels of organizational stress factors. It comprises the dimensions of stressors, psychological tensions, complaints on health, and social changes, each with its own sub-dimensions. Depending on the objective of the research, some scales may be omitted or new scales may be added, and the scales may be evaluated independently. Stressors include the following sub-dimensions: excessive workload, uncertainty of roles, responsibility, conflict of roles, not being able to leave the workplace, lack of participation in work-related decision making, lack of belief in the necessity of work, and uncertainty about the future of work. Psychological tensions include the following sub-dimensions: lack of job satisfaction, feeling worried about work, and psychological complaints. Complaints on health include occasional and continuous illnesses. Social changes include lack of support by the chief and by coworkers (Türk, 1997).
All VOS-D dimensions and their sub-dimensions were used in this study. The total Cronbach's alpha (α) coefficient of the VOS-D was 0.81 in the original scale and was calculated as 0.87 in this study. To evaluate the obtained scores, the study used a conversion table based on the 5%, 25%, 75%, and 95% percentile values. Table 1 presents the percentiles with their average scores (Türk, 1997).
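As a rough illustration of how an internal-consistency coefficient such as the one reported above can be computed, the sketch below implements the standard Cronbach's alpha formula with NumPy. The item matrix is synthetic and the function is generic, not specific to the VOS-D or to this study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of scale scores."""
    k = items.shape[1]                               # number of items
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Synthetic data: 322 respondents, 81 items driven by one latent trait plus noise,
# so the items are correlated and alpha comes out high (illustration only).
rng = np.random.default_rng(0)
trait = rng.normal(size=(322, 1))
items = trait + rng.normal(scale=1.0, size=(322, 81))
print(round(cronbach_alpha(items), 2))
```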
Statistical Analysis
Evaluation of categorical variables was performed using descriptive statistics. Suitability of the data for normal distribution was examined by the Kolmogorov-Smirnov or Shapiro-Wilk test, and homogeneity of variance was examined by the Levene test. Student's t test was used to compare 2 groups, and one-way analysis of variance (ANOVA) was used for comparison of 3 or more groups when parametric test conditions were met. When parametric test conditions were not fulfilled, the Mann-Whitney U test was used for comparing 2 groups and the Kruskal-Wallis variance analysis for comparing 3 or more groups. The Scheffe multiple comparison test and the Bonferroni-corrected Mann-Whitney U test were used to determine between which groups the differences occurred. The threshold for significance was p<0.05.
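The analysis plan above follows a common decision rule: use parametric tests when normality and variance homogeneity hold, and non-parametric equivalents otherwise. The sketch below mirrors that logic with SciPy; it uses the Shapiro-Wilk and Levene tests only (not Kolmogorov-Smirnov) and synthetic group data, so it is an illustration of the approach rather than the authors' actual code.

```python
from scipy import stats

def compare_groups(groups, alpha=0.05):
    """Pick Student t/ANOVA or Mann-Whitney U/Kruskal-Wallis, following
    Shapiro-Wilk normality and Levene variance-homogeneity checks."""
    normal = all(stats.shapiro(g)[1] > alpha for g in groups)
    homogeneous = stats.levene(*groups)[1] > alpha
    if normal and homogeneous:
        if len(groups) == 2:
            return "Student t test", stats.ttest_ind(*groups)[1]
        return "One-way ANOVA", stats.f_oneway(*groups)[1]
    if len(groups) == 2:
        return "Mann-Whitney U test", stats.mannwhitneyu(*groups, alternative="two-sided")[1]
    return "Kruskal-Wallis test", stats.kruskal(*groups)[1]

# Synthetic sub-dimension scores for two hypothetical groups (illustration only).
group_a = [3.1, 2.8, 3.4, 3.0, 2.9, 3.3, 3.5, 2.7]
group_b = [2.6, 2.9, 2.4, 2.8, 2.5, 2.7, 2.3, 2.6]
print(compare_groups([group_a, group_b]))
```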
Ethical Considerations
The researcher obtained official consent from all plants and directorates of TCDD in addition to ethical approval for the study. An authorization dated January 12, 2015, was obtained from Turgut Özal University's human research ethics committee (Decision No: 63) for ethical compliance of this study. The researcher also informed all workers that participation was voluntary and obtained their written informed consent.
Results
The mean age of the workers was 47.0±7.4 years, with a minimum age of 21 and a maximum age of 60 years. The mean duration of work in this job was 20.0±9.6 years. Personal descriptive characteristics of workers and workplace and work-related characteristics are presented in Tables 2 and 3. Table 4 presents the percentile distributions and descriptive statistics of workers' mean scores on each sub-dimension of the stressor, social change, psychological tension, and health complaint groups. The mean scores of employees for all sub-dimensions of the VOS-D were at the medium level. In the group of stressors, employees were the most influenced by "responsibility" (score: 3.21) and "uncertainty of the future of work" (score: 3.20). In the group of social changes, employees were the most influenced by "lack of support by chief" (score: 2.35). In the group of psychological tensions, they were influenced by "lack of job satisfaction" the most (score: 2.20). In the group of complaints on health, employees were influenced by "complaints about illness occurring occasionally" the most (score: 9.46) (Table 4).
The sub-dimensions with significant differences were summarized after evaluating the participants' sub-dimension mean scores on stressors, social changes, psychological tensions, and complaints on health on the basis of personal descriptive characteristics as well as work and workplace characteristics. Table 5 presents this summary.
Excessive Workload
The mean score for excessive workload was significantly higher than that of the other participants for workers who were younger than 39 years (χ²=11.73, p=0.003), who were single (z=−2.802, p=0.005), who had a university degree (χ²=7.85, p=0.005), whose income was less than their expenses (F=5.25, p=0.006), who worked in the Wagon Maintenance and Repair Workshop Directorate (F=15.02, p<0.001), who perceived the work environment as non-ergonomic (F=8.90, p<0.001), who had good relationships with coworkers (t=2.82, p=0.005), and who had poor relationships with superiors (F=8.44, p<0.001). The mean score of this sub-dimension was also significantly higher for participants who were exposed to job violence (t=4.31, p<0.001), found job health and safety precautions insufficient (F=12.46, p<0.001), thought they worked in poor working conditions (F=24.93, p<0.001), and described their work as very stressful (Table 5).
Uncertainty of Roles
The mean score on uncertainty of roles was significantly higher for the participants who had an income equal to their expenses (χ²=25.38, p<0.001), did not smoke (F=3.69, p=0.02), had no idea about the ergonomics of the work environment (χ²=15.99, p<0.001), did not have good relationships with their coworkers (t=−3.32, p=0.001), and had medium-level relationships with superiors (χ²=11.58, p=0.003). These participants with higher mean scores were also exposed to job violence (z=−2.44, p=0.014), had low employee satisfaction (t=9.17, p=0.010), wanted to change their job (t=−2.83, p=0.005), and did not feel well at work (χ²=9.17, p=0.010) (ANOVA: F value, Kruskal-Wallis test: χ² value, Student's t test: t value, Mann-Whitney U test: z value) (Table 5).
Responsibility
The individuals who worked in the Factory Directorate of Rail Welding and Track Machines Repair (χ²=17.94, p=0.001), were included in the permanent staff (t=2.25, p=0.025), perceived the work environment as non-ergonomic (F=3.42, p=0.034), thought that they had bad working conditions (F=7.04, p=0.001), and described their work as very stressful (χ²=6.59, p=0.002) obtained significantly higher mean scores on responsibility than the other participants (ANOVA: F value, Kruskal-Wallis test: χ² value, Student's t test: t value, Mann-Whitney U test: z value) (Table 5).
Not Being Able to Leave the Workplace
The mean score of this sub-dimension was significantly higher for employees working in the Loco Maintenance Workshop Directorate (F=3.32, p=0.011) than for those working in other departments (ANOVA: F value, Kruskal-Wallis test: χ² value, Student's t test: t value, Mann-Whitney U test: z value) (Table 5).
Lack of Participation in Work-Related Decision Making
The mean score of this sub-dimension was significantly higher for the participants who were 39 years old or younger (Table 5).
Lack of Believing the Necessity of Work
The mean score of this sub-dimension was significantly higher for the individuals who were single (Table 5).
Uncertainty of the Future of Work
The individuals who were 50 years old or younger (χ²=19.67, p<0.001), were married (z=−2.79, p=0.005), graduated from high school (χ²=8.70, p=0.003), had more than 30 years of work experience (F=6.11, p<0.001), did not have any idea about the ergonomics of the work environment (F=3.98, p=0.020), maintained poor relationships with their superiors (F=5.41, p=0.005), felt a low level of employee satisfaction (F=13.38, p<0.001), wanted to change their job (t=−5.59, p<0.001), and did not feel well at work (F=7.65, p=0.001) obtained significantly higher mean scores on this sub-dimension than the other participants (ANOVA: F value, Kruskal-Wallis test: χ² value, Student's t test: t value, Mann-Whitney U test: z value) (Table 5).
Lack of Support by Chief
The mean score of this sub-dimension was significantly higher for those who had an income less than their expenses (F=4.21, p=0.016), did not have any idea about the ergonomics of the work environment (F=9.83, p<0.001), maintained poor relationships with their coworkers (t=−2.028, p=0.028), did not have good relationships with their superiors (F=29.69, p<0.001), thought that job health and safety precautions in the workplace were insufficient (χ²=18.48, p<0.001), believed that they were working in bad working conditions (F=4.81, p=0.009), described their work as very stressful (F=8.80, p<0.001), had a low level of employee satisfaction (F=50.50, p<0.001), wanted to change their job (t=−3.14, p=0.002), and did not feel well at work (F=14.51, p<0.001) (ANOVA: F value, Kruskal-Wallis test: χ² value, Student's t test: t value, Mann-Whitney U test: z value) (Table 5).
Lack of Support by Coworkers
The mean score of this sub-dimension was significantly higher for the individuals who had an income equal to their expenses (χ²=18.13, p<0.001), consumed or had given up consuming alcohol (χ²=6.28, p=0.043), worked in the Factory Directorate (χ²=32.76, p<0.001), did not have any idea about the ergonomics of the work environment (χ²=11.49, p=0.003), did not have good relationships with their coworkers (z=−6.51, p<0.001), maintained medium-level relationships with their superiors (χ²=28.52, p<0.001), were exposed to job-related violence (t=2.21, p=0.027), did not have any idea about job health and safety precautions in the workplace (χ²=16.28, p<0.001), had medium-level employee satisfaction (F=9.58, p<0.001), and had fair feelings about their job (χ²=21.01, p<0.001) (ANOVA: F value, Kruskal-Wallis test: χ² value, Student's t test: t value, Mann-Whitney U test: z value) (Table 5).
Lack of Job Satisfaction
The individuals who were single (t=−3.09, p=0.002), had an income equal to their expenses (χ²=7.57, p=0.023), and no longer consumed alcohol (χ²=7.80, p=0.02) obtained a higher mean score on this sub-dimension (Table 5).
Feeling Worried About Work
The mean score on feeling worried about work was significantly higher for those who had university degrees (χ²=11.44, p=0.010), had an income less than their expenses (F=5.88, p=0.003), worked in the Railway Mechanical Workshop Directorate (F=3.23, p=0.013), perceived the work environment as non-ergonomic (F=6.79, p=0.001), maintained poor relationships with their superiors (F=7.66, p=0.001), and were exposed to job-related violence (t=4.55, p<0.001). These individuals also believed that job health and safety precautions in the workplace were insufficient (F=10.75, p<0.001), described their work as very stressful (F=22.22, p<0.001), had a low level of employee satisfaction (F=13.08, p<0.001), wanted to change their job (t=−4.55, p<0.001), and did not feel emotionally well at work (F=7.74, p=0.001) (ANOVA: F value, Kruskal-Wallis test: χ² value, Student's t test: t value, Mann-Whitney U test: z value) (Table 5).
Psychological Complaints
The mean score of psychological complaints was significantly higher for the individuals who were single (Table 5).
Complaints about Occasionally Occurring Illness
Individuals who had an illness (z=−3.36, p=0.001), perceived the work environment as non-ergonomic (χ²=10.03, p=0.007), had poor relationships with their superiors (χ²=14.86, p=0.001), and believed that job health and safety precautions in the workplace were insufficient (χ²=11.63, p=0.003) obtained a significantly higher mean score on this sub-dimension. These individuals also had a work accident (χ²=9.16, p=0.01), believed that they were working in poor conditions (χ²=6.02, p=0.049), described their work as a little stressful (χ²=9.91, p=0.012), and felt a low level of employee satisfaction (χ²=8.20, p=0.017) (ANOVA: F value, Kruskal-Wallis test: χ² value, Student's t test: t value, Mann-Whitney U test: z value) (Table 5).
Discussion
This study was conducted to determine the stress levels of railway workers. The researcher thinks that the study findings will guide future initiatives that aim to reduce work-related stress.
All of the workers in this study were male. This may indicate that the harsh working conditions of the units where the study was performed were considered unsuitable for women and that men work under harsher conditions. In a study by Canpolat (2006) on factory workers' sources of workplace stress, 95.7% of the workers were male. Şahin (2017) found work stress at the level of "physical and mental stress indicators present" in a study of 285 male workers in an iron and steel plant, a heavy industrial enterprise.
The mean age of participating workers was 47.0±7.4 years. The mean age of the workers was high because of TCDD's reduced worker recruitment and the workers' long experience of working in these units.
Of the participants, 77.6% worked in shifts. A study by Okutan & Tengilimoğlu (2002) of 242 managers and 362 workers at the Ankara Regional Directorate of the State Railways of the Republic of Turkey determined that 70% of them worked in shifts and felt uncomfortable about it. Work hours affect the stress levels of workers, and shift workers' lack of a consistent sleeping pattern can cause physical fatigue, psychological burnout, and deterioration of social life and diet.
Of the workers, 74.2% either found the ergonomics of the workplace inadequate or had no opinion about them. Of the workers in an industrial factory studied by Çınar (2010), 37.3% found the workplace ergonomic, 23.8% did not find it ergonomic, and 38.9% had no idea about ergonomics; 96.8% of the participants were male. The findings of this study resemble those of Çınar (2010).
Workers in this study felt discomfort primarily about noise (67.1%) and secondarily about dust or smoke (59.6%). Workers in the study by Çınar (2010) felt discomfort primarily about dust or smoke (53.6%) and secondarily about noise (43.4%). The results of this study resemble those of Çınar (2010). Negative physical conditions in the workplace affect the workers in many ways. Improvement of negative physical conditions may prevent job accidents and illnesses.
Of the participating workers, 82.3% said that they were satisfied with the job, 6.2% were somewhat satisfied, and 11.5% were not satisfied. The findings of this study resemble those of Canpolat (2006), in which 79.9% of workers were satisfied with the job and 20.3% were not. Aazami (2015) determined that job satisfaction is a significant factor affecting the psychosocial status of workers.
This study found that all sub-dimensions caused medium-level stress, which is consistent with the relevant literature (Clemen-Stone et al., 2002). For example, the study conducted by Çınar (2010) in an industrial workplace found that stressors, social changes, and psychological tensions caused medium-level stress. Another study found that all sub-dimensions of stressors and social changes cause medium-level stress (Türk & Çakır, 2006). The findings of this study are consistent with the literature, which is also a major indicator that there has been no positive improvement in working conditions, job security, and job safety in Turkey in the last 15 years. In this study, employees at the 95% and 75% percentile levels were most affected by responsibility and uncertainty of the future of work, respectively, whereas those at the 25% level were most affected by uncertainty of the future of work and those at the 5% level by excessive workload. A related study found that employees at the 5%, 25%, 75%, and 95% percentile levels were most affected by excessive workload (Türk & Çakır, 2006). Türk (1997) found that employees at the 95% and 75% levels were most affected by responsibility, while those at the 25% and 5% levels were most affected by excessive workload. Another study observed that employees were most affected by responsibility and excessive workload. This study found that employees were most affected by responsibility, which is consistent with the other study findings. However, the influence of uncertainty of the future of work is not observed in other studies. This may be because employees do not feel secure: working conditions varied across the facilities where this study was conducted, and these state-guaranteed factories are considered candidates for privatization.
In the literature, other studies similar to this one have indicated that personal characteristics and work and workplace-related characteristics affect the stress levels of employees (Türk & Çakır, 2006; Couser, 2008; Çınar, 2010; Özçay, 2011; Özen, 2012; Yeşil, 2013; Hu et al., 2014; Smith et al., 2019). Consistent with the findings of this study, the mean score for excessive workload was higher for single employees with postgraduate degrees in a study conducted with nurses (Özen, 2011). This study showed that the perception of excessive workload decreased as employees became older, which may be because employees who have worked in the same units for many years feel more experienced. This study also found that workplace and work-related characteristics significantly affected the excessive workload score. Gillespie et al. (2001) have stated that excessive workload and work-related stress are positively related: work-related stress increases as workload increases and decreases as workload decreases. Karabağ & Özgen (2008) found significant relationships between workload and stress levels.
The findings of this study are in line with the literature in that the score of the uncertainty of roles sub-dimension was higher for employees who did not have good relationships with their superiors and coworkers (Çınar, 2010). The mean score of uncertainty of roles was also found to be significantly higher for those with a low economic status (Özçay, 2011). The author assumed that employees with poor relationships with their superiors and coworkers may experience uncertainty of roles because of a lack of effective communication in the workplace. Studies in the relevant literature have highlighted that uncertainty of roles may cause lack of self-confidence and loss of motivation, which may in turn create more stress (Gümüştekin & Gültekin, 2009). Başaran (2008) has concluded that an increase in the uncertainty of roles decreases employees' job satisfaction. Similar to this study, Özen (2011) has found that the mean score of uncertainty of roles was higher in nurses who do not smoke.
In this study, the mean responsibility score was high for staffed employees who perceived the work environment as non-ergonomic, thought that they had bad working conditions, and felt highly stressed at work. The higher responsibility score of the staffed employees may be associated with their status being better than that of contracted/temporary employees owing to their higher work responsibilities. Özçay (2011) has found that perceived responsibility was significantly higher in staffed employees than in contracted employees. Given that the majority of contracted/temporary personnel have to work under these terms, their perceived responsibility will probably be lower, assuming that they do not have a sense of belonging to the work and workplace. A study conducted with forensic science experts and their assistants found that taking responsibility causes more stress as individuals become older, and that employees between the ages of 25 and 29 years have the lowest responsibility scores. This study, in contrast, found that age did not affect responsibility scores, and the participants had the lowest score on responsibility.
In the literature, and similar to this study, the mean score of conflict of roles was higher for single employees than for married employees (Narin, 2010; Özçay, 2011; Yeşilyurt, 2009). This difference may be explained by married employees' ability to prevent role conflict by managing the multiple roles they take on in their marriages. Also in line with this study, Özen (2011) has found the mean score of conflict of roles to be higher in alcohol consumers. Because alcohol consumers experience strong conflict in their roles, alcohol appears to be an ineffective way of coping. Similar to the findings of this study, the literature reports a higher mean score of conflict of roles for workers who find their income insufficient (Özen, 2012). Özçay (2011) has found that the mean scores on conflict of roles were significantly lower in company employees than in contracted personnel. Conflict of roles mostly influences middle-level employees (Baltaş & Baltaş, 2010). The mean score on conflict of roles is higher in company employees because they believe that they have a higher work status than contracted/temporary workers. Başaran (2008) has stated that an increase in conflict of roles reduces job satisfaction.
In this study, locomotive maintenance employees had a higher score on not being able to leave the workplace than employees in other units. This may be because, owing to the nature of their work, the probability of leaving the workplace is lower for these employees. Being unable to leave the workplace because of the nature of the work is an important source of stress for workers. Stordeur & Wanderberne (2001) have argued that the organizational structure of the workplace should allow changes aimed at defining, preventing, and removing factors that cause stress.
Similar to the findings of this study, the literature suggests that employees' participation in the decision-making process increases as they become older and gain more experience at their work (Türk, 1997; Türk, 2006; Çınar, 2010). Being older and more experienced is assumed to lead to greater participation of employees in decision-making processes.
Young employees with fewer working years participate less in work-related decision-making processes, which may be a result of their lack of experience and weak loyalty to the work and workplace. In this study, the mean score on lack of participation in work-related decision-making process was significantly higher for employees who did not have any work accidents. Canpolat (2006) has found a significant difference between employees' work accident experiences and stress levels and stated that employees with this experience have higher stress levels than the other employees. Negative experiences of employees with work accident experience and thoughts of being at risk for another work accident may increase their desire to have more control on the work and to participate in decision-making process.
In this study, many personal as well as work and workplace-related characteristics significantly influenced the mean score on the lack of belief in the necessity of work. These findings are compatible with those in the literature (Çınar, 2010;Özen, 2011;Ross & Altmaier, 1994;Türk, 1997;Türk, 2006). Ross & Altmaier (1994) have emphasized that lack of belief in the necessity of the work was one of the reasons for work-related stress. Employees who do not believe in the necessity of their work perceive going to work as an obligatory task and think that they do not have any reasons to do their work.
Çınar (2010) has found a significant difference between the sub-dimension of uncertainty of future work and working years and stated that employees who worked for at least 21 years have higher scores on uncertainty of future work. Türk & Çakır (2006) have found that employees who are aged at least 40 years and primary school graduates and had at least 21 years of work experience have higher scores for uncertainty of future work. Employees who are older than 50 years and have at least 30 years of work experience are getting closer to their age of retirement, and university graduates are preferred over the high school graduates at work; therefore, high school graduate employees are afraid of losing their jobs, which may lead to high scores on uncertainty regarding the future of work. Uncertainty about the future of work is the lowest for university graduates because employees with a high education status work in more qualified management positions.
Employees who experience difficulties owing to work and workplace-related problems and who do not receive support from their chiefs and friends can be expected to find job health and safety precautions insufficient, think they have bad working conditions, not feel well at work, want to change their jobs, and experience stress at work. Employees who maintain positive relationships with their coworkers and superiors may be more motivated and more willing to engage with their work and make valuable contributions. The literature suggests an association between work-related stress and organizational loyalty. Başaran (2008) has expressed that job satisfaction increased and job-related stress decreased as the level of satisfaction with coworkers increased. Chang (2006) has stated that work-related stress is low when loyalty to the organization and colleagues is strong.
In this study, many personal characteristics and work and workplace-related characteristics significantly affected the mean scores on lack of job satisfaction. Job satisfaction indicates how much employees care about their work and reflects their work-related knowledge, beliefs, and pleasures. Employees expect proper working conditions and motivational support in order to obtain job satisfaction. Therefore, employees who have poor relationships with their coworkers and superiors, are not happy with their work, and feel bad at work can be expected to have low job satisfaction. Studies in the literature have found that age and relationships with coworkers affect job satisfaction, which is consistent with this study (Çınar, 2010; Türk, 1997). Employees make progress in their careers as they become older; however, becoming older also implies a decrease in physical strength. As a result, employers and organizations expect less from these employees, which may in turn affect their job satisfaction. In contrast, the increase in commitment and performance that comes from spending many years at work may lead to high job satisfaction.

Work and work-related negative situations may cause psychological complaints. Studies in the literature have emphasized that psychological complaints involving negative feelings, such as concern, fear, helplessness, and hopelessness, may increase employees' work-related stress (Maslach, 2018). In this study, the mean score of psychological complaints was higher for the employees who had poor relationships with their coworkers and found their workplaces stressful. Similar to the findings of this study, studies in the literature have determined that married employees have more psychological complaints than single employees, and a significant difference was found among psychological complaints, work-related stress, and relationships with coworkers (Çınar, 2010; Özen, 2011).
This study found that many personal descriptive characteristics and work and workplace characteristics did not influence the mean score of complaints on health. The literature reports similar results. Aagestad et al. (2014) have stated that psychosocial risk factors influence general health conditions. Owen (2000) has stated that negative workplace conditions have negative influences on the physical and psychosocial health of employees. Çınar (2010) has found significant differences between complaints about health and work accident experience, work stress, and job satisfaction. A history of work accidents, insufficient job health and safety precautions, bad working conditions, and not feeling good at work may trigger complaints about health; this may lead to permanent injuries and conflicts that affect the workplace as a whole.
Conclusion and Recommendations
This study showed that the employees were influenced by all sub-dimensions of stressors, social changes, psychological tensions, and complaints about health and that they experienced medium-level stress. Therefore, this study recommends that education programs be organized in workplaces to prevent, reduce, and manage stress. In addition, organizations should develop new strategies to periodically evaluate workplace stressors and to better control stress-related factors at work. Working environments and conditions should be improved; the assignments, authorities, and responsibilities of all workers should be clearly defined; and counseling units to help workers cope with work stress should be established. Measures should also be taken to support workers' careers and promotions, balance workloads, increase social interaction in the work environment, include workers in decision making, encourage teamwork, set shift hours according to workers' individual characteristics, regulate working hours and workloads, prevent workplace violence, and develop team spirit.
The occupational health nurse should identify employees at risk of stress and take appropriate action. They should help protect individuals experiencing stress from its harmful consequences and should intervene to help them adapt to and reduce stress. They should guide employees to manage their time or to participate in social and cultural activities. Studies that assess the work stress levels of workers in various fields should be conducted to define field-specific stressors. Intervening in the management and control of work stress is the responsibility of the occupational health nurse. | 2021-05-05T00:09:48.722Z | 2021-02-01T00:00:00.000 | {
"year": 2021,
"sha1": "f4a22b36bff3e98fc3aec7d7ccd313a5cecda832",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.5152/fnjn.2021.19082",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d071e6c018f4d78d2d42f50610433d76d4984010",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
227315045 | pes2o/s2orc | v3-fos-license | Transesophageal echocardiography-associated tracheal microaspiration and ventilator-associated pneumonia in intubated critically ill patients: a multicenter prospective observational study
Background Microaspiration of gastric and oropharyngeal secretions is the main causative mechanism of ventilator-associated pneumonia (VAP). Transesophageal echocardiography (TEE) is a routine investigation tool in intensive care unit and could enhance microaspiration. This study aimed at evaluating the impact of TEE on microaspiration and VAP in intubated critically ill adult patients. Methods It is a four-center prospective observational study. Microaspiration biomarkers (pepsin and salivary amylase) concentrations were quantitatively measured on tracheal aspirates drawn before and after TEE. The primary endpoint was the percentage of patients with TEE-associated microaspiration, defined as: (1) ≥ 50% increase in biomarker concentration between pre-TEE and post-TEE samples, and (2) a significant post-TEE biomarker concentration (> 200 μg/L for pepsin and/or > 1685 IU/L for salivary amylase). Secondary endpoints included the development of VAP within three days after TEE and the evolution of tracheal cuff pressure throughout TEE. Results We enrolled 100 patients (35 females), with a median age of 64 (53–72) years. Of the 74 patients analyzed for biomarkers, 17 (23%) got TEE-associated microaspiration. However, overall, pepsin and salivary amylase levels were not significantly different between before and after TEE, with wide interindividual variability. VAP occurred in 19 patients (19%) within 3 days following TEE. VAP patients had a larger tracheal tube size and endured more attempts of TEE probe introduction than their counterparts but showed similar aspiration biomarker concentrations. TEE induced an increase in tracheal cuff pressure, especially during insertion and removal of the probe. Conclusions We could not find any association between TEE-associated microaspiration and the development of VAP during the three days following TEE in intubated critically ill patients. However, our study cannot formally rule out a role for TEE because of the high rate of VAP observed after TEE and the limitations of our methods.
Introduction
Ventilator-associated pneumonia (VAP) is the most common acquired infection in critically ill patients under mechanical ventilation [1] and is often associated with significant morbidity [2,3]. VAP is mainly precipitated by microaspiration of contaminated gastric and oropharyngeal secretions [4]. Microaspiration is defined as the leakage of oropharyngeal secretions accumulated above the tracheal cuff into the lower respiratory tract [5,6]. The gold standard test for the diagnosis of microaspiration uses technetium-99m [7]. However, applying this technique in intubated patients in the intensive care unit (ICU) is thwarted by the difficulty of transporting patients to the radiology department, which is required to avoid radioactivity in the ICU [8]. Pepsin derives from pepsinogen, which is secreted by the chief cells of the stomach, whereas amylase is a digestive enzyme secreted by the salivary glands and the pancreas. Because they are not normally present in the respiratory tract, pepsin and salivary amylase have been proposed for diagnosing microaspiration of gastric content and oropharyngeal secretions, respectively [9][10][11][12]. Their use in intubated critically ill patients is rapid, easy to perform routinely, cheap, and only requires tracheal secretions.
Over the past decade, transesophageal echocardiography (TEE) has emerged as a common, minimally invasive, bedside examination in ICU [13], with a low complication rate in intubated patients [14,15]. TEE-induced bacteremia is extremely rare; thus, TEE is not an indication for antibiotic prophylaxis [16]. Nevertheless, potential microaspiration associated with TEE has never been evaluated in intubated ICU patients. TEE could indirectly trigger microaspiration of oropharyngeal and gastric contents in mechanically ventilated patients via factors such as loss of integrity of the esophageal sphincter, gastroesophageal reflux, displacement of tracheal tube, and modification of tracheal cuff inflation.
The main objective of this study was to evaluate the role of TEE in triggering microaspiration of gastric contents and oropharyngeal secretions, and VAP in intubated critically ill patients.
Study design and participants
We performed a multicenter prospective observational study in four French medical ICUs of university hospitals between March 2017 and September 2018. Consecutive adult patients who had been intubated and mechanically ventilated for more than 24 h prior to enrollment and who required TEE were included. Exclusion criteria were pregnancy, tracheostomy, and contraindications to TEE. This study was conducted in compliance with the amended Declaration of Helsinki. The protocol was approved by the ethics committee CPP Ile de-France III (EUDRACT number: 2016-A01488-43, approval number: S.C.3457). The protocol was considered a component of standard care, and patient consent was waived. Written and oral information about the study was given to patients or their families.
Procedures and definitions
All included patients were subjected to endotracheal suction just before TEE and within the two hours after. For quantitative analyses, endotracheal aspirates were drawn without the addition of saline beforehand. The collected endotracheal aspirates were stored at − 20 °C in each center and sent to a central laboratory (Lille University Hospital) at the end of the study. All measurements of pepsin and amylase were performed by biologists who were blinded to the chronological status of TEE samples (before vs. after TEE). Pepsin was quantitatively measured by ELISA technique, and salivary amylase activity was calculated as the difference between total and pancreatic amylase activities [12,17]. The tracheal cuff pressure was manually checked before and after TEE. For some patients included in the Henri Mondor center, Creteil, the tracheal cuff pressure was continuously and mechanically assessed from five minutes before TEE until five minutes after. For those patients, the tracheal cuff pressure signal was recorded using differential pressure transducer TSD160D (Biopac Systems, Goleta, CA, USA) connected to analog/numeric data acquisition system (MP150, Biopac systems, Goleta, CA, USA) and stored on a computer to be analyzed with AcqKnowledge software version 5.0 (Biopac systems, Goleta, CA, USA).
Microaspiration of gastric contents and oropharyngeal secretions is usually confirmed upon detecting significant pepsin (> 200 μg/L) [17] and salivary amylase (> 1685 IU/L) [12] concentrations in the tracheal secretions, respectively. TEE-associated microaspiration of gastric contents (or oropharyngeal secretions) was defined by the association of: (1) pepsin (or salivary amylase) concentration which is ≥ 50% higher in the post-TEE sample than in the pre-TEE sample and (2) a significant post-TEE concentration of pepsin of > 200 μg/L (or salivary amylase of > 1685 IU/L).
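The two-part definition above translates directly into a simple classification rule. The sketch below encodes it for a single patient; the threshold values and the ≥ 50% increase criterion come from the text, while the function and variable names are ours.

```python
PEPSIN_THRESHOLD = 200.0     # µg/L, significant pepsin concentration
AMYLASE_THRESHOLD = 1685.0   # IU/L, significant salivary amylase activity

def biomarker_microaspiration(pre: float, post: float, threshold: float) -> bool:
    """TEE-associated microaspiration for one biomarker:
    (1) post-TEE value at least 50% higher than the pre-TEE value, and
    (2) post-TEE value above the significance threshold."""
    increased_50pct = post >= 1.5 * pre
    significant_post = post > threshold
    return increased_50pct and significant_post

def tee_associated_microaspiration(pepsin_pre, pepsin_post,
                                   amylase_pre, amylase_post) -> bool:
    """Gastric and/or oropharyngeal microaspiration: either biomarker suffices."""
    gastric = biomarker_microaspiration(pepsin_pre, pepsin_post, PEPSIN_THRESHOLD)
    oropharyngeal = biomarker_microaspiration(amylase_pre, amylase_post, AMYLASE_THRESHOLD)
    return gastric or oropharyngeal

# Illustrative values only, not patient data from the study.
print(tee_associated_microaspiration(120.0, 260.0, 900.0, 1200.0))  # True via the pepsin criterion
```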
Data collection
All data were prospectively collected starting with the inclusion data: age, gender, body mass index, simplified acute physiology score II (SAPS II) at ICU admission [20], comorbidities, history of acute respiratory distress syndrome, shock, and VAP prior to TEE, date and cause of intubation, tracheal tube characteristics (type, diameter, position), Sequential Organ Failure Assessment (SOFA) score, Richmond Agitation and Sedation Scale (RASS), duration of mechanical ventilation prior to TEE, time between last oral decontamination and TEE, ventilator parameters, tracheal cuff pressure before and after TEE, gastric tube and enteral feeding management, evaluation of residual gastric volume, concomitant treatments, probe type and introduction (duration, number of attempts, method, patient position), TEE characteristics (date, duration, indication, use of transgastric view), and complications. The following data were collected during ICU stay: length of stay, mechanical ventilation duration, VAP, and mortality.
Outcomes
The primary endpoint of this study was the percentage of patients with TEE-associated microaspiration of gastric contents and/or oropharyngeal secretions. The secondary outcomes were the percentage of patients who developed VAP within three days after TEE and the evolution of tracheal cuff pressure throughout TEE procedure.
Statistical analysis
Statistical analysis was performed using JMP software (version 9; SAS Institute Inc, Cary, NC) and GraphPad Prism 5 software (GraphPad Software Inc., La Jolla, CA, USA). The number of patients required to estimate the prevalence of microaspiration during TEE was estimated at 75, considering a theoretical prevalence of 75% (previous studies reported the presence of microaspiration at baseline in at least 50% of intubated patients) [12,21,22], a precision of ± 10%, a confidence interval of 95%, and a type I error rate of 5%. We anticipated a 25% failure rate for sample processing and analysis and therefore decided to include a total of 100 patients.
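The sample size quoted above follows the usual normal-approximation formula for estimating a proportion with a given precision. The short sketch below reproduces the calculation; the 25% inflation step mirrors the anticipated failure rate, and rounding conventions may explain the small difference from the 75 quoted in the text.

```python
from math import ceil
from scipy.stats import norm

def n_for_proportion(p: float, precision: float, conf: float = 0.95) -> int:
    """Sample size to estimate a prevalence p within +/- precision."""
    z = norm.ppf(1 - (1 - conf) / 2)        # ~1.96 for a 95% confidence interval
    return ceil(z ** 2 * p * (1 - p) / precision ** 2)

n = n_for_proportion(p=0.75, precision=0.10)   # ~73, close to the 75 reported
n_total = ceil(n / (1 - 0.25))                 # inflate for the 25% anticipated failures
print(n, n_total)                              # 73 98, rounded up to 100 patients in the study
```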
Normality of variables was evaluated by the Shapiro-Wilk test. Continuous variables were expressed as mean (± standard deviation) or median (first quartile-third quartile) according to their Gaussian or non-Gaussian distribution, respectively. We compared patients who developed VAP within the three days following TEE with their counterparts using the Student t test for Gaussian continuous variables, the Mann-Whitney test for non-Gaussian continuous variables, and the Chi-square or Fisher exact tests for categorical variables, as appropriate. We compared concentrations of pepsin and salivary amylase before and after TEE using the paired Wilcoxon test. We evaluated the change in tracheal cuff pressure throughout the TEE procedure using one-way ANOVA and the Dunnett multiple comparison test. For all tests, a two-tailed P < 0.05 was considered statistically significant.
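For the paired pre-/post-TEE biomarker comparison mentioned above, a paired Wilcoxon signed-rank test can be run as in the sketch below. The concentration vectors are synthetic placeholders, not the study data.

```python
from scipy.stats import wilcoxon

# Synthetic pre- and post-TEE pepsin concentrations (µg/L) for 8 hypothetical patients.
pepsin_pre  = [150, 80, 210, 95, 300, 60, 180, 120]
pepsin_post = [160, 75, 260, 90, 310, 85, 175, 140]

stat, p_value = wilcoxon(pepsin_pre, pepsin_post)  # paired, two-sided by default
print(f"Wilcoxon statistic = {stat}, p = {p_value:.3f}")
```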
The study population
A total of 310 patients who underwent TEE were screened during the study period in the participating centers, of whom 242 met the eligibility criteria; however, only 100 patients (35 females) were retained in this study (Fig. 1), with a median age of 64 (53-72) years. The majority of eligible patients were excluded for logistical reasons (absence of the investigator when TEE was performed at night or on weekends) or because of a lack of sufficient tracheal secretions. During the TEE examination, most patients were already sedated (93%) and sedation was increased in many of them (62%), but only a few (n = 12, 12%) received an additional neuromuscular blocking agent.
Altogether, 19/100 patients (19%) were diagnosed with VAP within three days after TEE. Patients' characteristics at baseline and throughout TEE procedure with comparison between those who developed VAP and those who did not are shown in Tables 1 and 2, respectively. VAP patients had a larger tracheal tube size, endured more attempts of TEE probe introduction, and were more often on anticoagulants than no-VAP patients. TEE complications were scarce and similar in both groups. Among the 19 VAP episodes, three had no bacteriological documentation and six were polymicrobial. The causative microorganisms identified were Pseudomonas aeruginosa (seven cases), Klebsiella pneumoniae (six cases), Staphylococcus aureus (three cases), Enterobacter cloacae (three cases), Stenotrophomonas maltophilia (three cases), Escherichia coli (two cases) and Proteus mirabilis (one case).
Microaspiration
It was possible to assess pepsin and salivary amylase concentrations (i.e., a sufficient amount of tracheal secretions was available) in 82 patients before TEE, 83 patients after TEE, and 74 patients at both time points (Fig. 1). We detected 17/74 patients with TEE-associated microaspiration (prevalence of 23%, 95% confidence interval 15-34%), and this prevalence did not differ between the four participating centers. The concentrations of pepsin and salivary amylase were not different between VAP and no-VAP patients (Table 3). Moreover, median pepsin and salivary amylase levels were not significantly different before and after TEE (Figs. 2 and 3). No association was found between the occurrence of VAP within three days of TEE and TEE-associated microaspiration (Table 3). A sensitivity analysis assessing patients who developed VAP within 5 days following TEE (22/100, 22%) found similar results (Additional file 1: Table S1).
Continuous monitoring of tracheal cuff pressure
Continuous monitoring of tracheal cuff pressure throughout TEE process was performed in 20 patients, of whom six had TEE-associated microaspiration and three had VAP. Overall, as compared with baseline (2 min before TEE start), TEE induced an important increase in tracheal cuff pressure, especially during insertion and removal of the TEE probe (Fig. 4).
Discussion
To the best of our knowledge, this is the first study conducted to evaluate the impact of performing TEE on the occurrence of microaspiration and VAP in intubated critically ill patients. Although a substantial number of patients could be characterized as having TEE-associated microaspiration (23%), according to an ad hoc definition, the changes in pepsin and salivary amylase levels throughout the TEE procedure showed huge interindividual variability. We detected no association between TEE-associated microaspiration and the development of VAP during the three days following TEE. However, because of the high rate of VAP observed after TEE and the limitations of our methods, our findings cannot formally rule out a role for TEE in the occurrence of VAP. TEE generated a transient variation of tracheal cuff pressure, especially upon inserting and removing the TEE probe.
Microaspiration is a well-known causative factor of VAP [23]. Pepsin and salivary amylase are reliable markers of microaspiration and are tightly linked to the development of VAP [21,24,25]. These markers have been used as surrogates in studies evaluating the efficacy of various devices in preventing VAP, such as tracheal tubes [26], subglottic secretion drainage systems [27], and mechanical devices controlling tracheal cuff pressure [17]. In such studies, microaspiration assessment relied on several tracheal aspirates drawn over a wide timeframe (1 or 2 days), and its definition considered the percentage of tracheal aspirates with higher levels of pepsin (> 200 μg/L) and/or salivary amylase (> 1685 IU/L). In our study, it was not possible to evaluate TEE-associated microaspiration using the same approach given the limited number of tracheal aspirates available in our protocol (only two per patient). Moreover, we relied on commonly reported thresholds for salivary amylase and pepsin [8,12,26].
The continuous monitoring of tracheal cuff pressure throughout the TEE procedure showed a significant elevation of cuff pressure, especially during insertion and removal of the TEE probe. Persistent underinflation (< 20 cmH2O) of the tracheal cuff has been shown to be an independent risk factor for microaspiration and VAP [28], whereas cuff leakage was inversely correlated with cuff pressure [29]. The hypothesis that acute variations of tracheal cuff pressure during TEE might be associated with microaspiration and VAP warrants further research. The relatively high rate of VAP found in this study can reasonably be attributed to the severe cases we included. Of note, 38% of patients presented with acute respiratory distress syndrome. However, we cannot formally exclude a role of microaspiration in this high rate. The fact that patients who developed VAP had larger tracheal tubes than those who did not may suggest more leakage in the former group. Moreover, patients who developed VAP were more often on anticoagulants, a therapy that has potential anti-inflammatory effects beyond anticoagulation and may be beneficial in acute respiratory distress syndrome [30]. For instance, nebulized heparin was proposed for lung injury but with contradictory results [31] and was not effective in preventing VAP [32]. We did not identify any dreaded clinical complication associated with TEE, nor did TEE significantly impact salivary amylase and pepsin concentrations. However, the substantial levels of pepsin and amylase observed in some patients and the fact that VAP patients had endured more attempts of TEE probe introduction might represent a good incentive to implement VAP prevention measures before and/or during TEE. Such measures may involve deep oropharyngeal suctioning [33], subglottic suctioning [34], semi-recumbent positioning [35], continuous control of tracheal cuff pressure [17], or using higher PEEP levels [29].
Table 3 Microaspiration indicators and outcomes stratified by VAP incidence within 3 days after TEE. Values are expressed as mean (± SD) or median (IQR), as appropriate.
This multicenter study was conducted in four tertiary university ICUs where TEE is routinely used in intubated critically ill patients. The major strengths of the study are the comprehensive search for risk factors for microaspiration, its prospective design, the combined use of salivary amylase and pepsin for microaspiration documentation, and the continuous assessment of tracheal cuff pressure to scrutinize VAP pathophysiology. Our study also has several limitations. First, the cohort included a relatively small number of patients with no control arm. Second, pepsin and salivary amylase and continuous monitoring of tracheal cuff pressure were not assessed in all patients. Third, the definition of TEE-associated microaspiration may be questionable, as previously discussed: it used a single assessment of biomarkers and an arbitrary cutoff. We did not correct for baseline concentration of biomarkers in the digestive tract, but these biomarkers are not normally found in the respiratory tract and previous studies did not use such corrections. Fourth, the use of three days as a cutoff point to define VAP after TEE is also questionable, but results were similar upon using a five-day cutoff point. Fifth, we focused on direct microaspiration during TEE and did not assess other mechanisms that may cause pneumonia, such as dysphagia or swallowing dysfunction [36]. Sixth, we did not assess the change in tracheal bacterial colonization; the amount of bacterial inoculum could be used as a closer surrogate for VAP [37]. Seventh, VAP would have been more relevant as a primary endpoint from a clinical point of view. However, if TEE has a potential impact on VAP, it is likely to be small given the multiple factors influencing VAP occurrence. We therefore used microaspiration as the primary endpoint because microaspiration is considered the main mechanism of VAP. Lastly, the limitations of the methods used to identify TEE-associated microaspiration and the high rate of VAP observed after TEE do not allow us to rule out a role for TEE.
Fig. 4 Evolution of tracheal cuff pressure throughout TEE procedure. * and ** denote a significant difference compared with baseline (i.e., tracheal cuff pressure 2 min before probe introduction), with a p value < 0.05 and < 0.01, respectively.
Conclusion
In this multicenter prospective observational study, we detected no association between TEE-associated microaspiration and the development of VAP during the three days following TEE. However, because of the high rate of VAP observed after TEE and the limitations of the methods used, our findings do not allow us to formally rule out a role for TEE in the occurrence of VAP.
Additional file 1: Table S1. Microaspiration indicators and outcomes stratified by VAP incidence within 5 days after TEE. | 2020-12-07T14:43:29.774Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "107631d276923b3698df47f531ed540ee54c6b3b",
"oa_license": "CCBY",
"oa_url": "https://ccforum.biomedcentral.com/track/pdf/10.1186/s13054-020-03380-w",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "107631d276923b3698df47f531ed540ee54c6b3b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
33248727 | pes2o/s2orc | v3-fos-license | Hashimoto’s Thyroiditis
One hundred years ago, a 31-year-old Japanese surgeon named Hakaru Hashimoto at Kyoto Imperial University published a description of goiter based on the thyroid resection specimens of four women in the German journal Archiv für Klinische Chirurgie, in 1912, just before World War I. The paper contained two Latin words in the title (struma lymphomatosa) and five microphotographs. The histological appearance of these goiters, characterized by a lymphoplasmocytic infiltrate with diffuse inflammatory alterations of the thyroid parenchyma and fibrosis, was very different from the colloid goiters he was familiar with, as well as from Graves' disease, infectious thyroiditis (especially related to tuberculosis or syphilis), and the fibrous thyroiditis described by Riedel in 1896. He emphasized the similarity with the histological findings observed in the lacrimal, salivary, lymph node, and splenic involvement of Mikulicz's disease (now Sjögren's syndrome). Mikulicz was the teacher of Hayari Miyake, the head of Hashimoto's department.
Hashimoto's discovery was not exactly ignored for the next few decades, but recognition of its existence was certainly slow, possibly related in part to the publication of the paper in German. In 1931, Graham and McCullagh used the term "Hashimoto" for the first time in the title of an article, strongly arguing that struma lymphomatosa was indeed distinct from Riedel's thyroiditis. The description of lymphocytic thyroiditis was rediscovered in the United States in 1936, and the disease was labeled Hashimoto's thyroiditis (chronic lymphocytic thyroiditis) in medical textbooks. In 1939, the prominent British thyroid surgeon Cecil Joll coined the term "Hashimoto disease" and used it in the title of a review he wrote about this condition. Since then, Hashimoto's thyroiditis has gone from being a rarity to one of the most common autoimmune diseases, as well as the most common endocrine disease.
An essential step was the characterization in 1956 by Noel Rose and Ernest Witebsky of experimental thyroiditis, created by the injection of thyroid extracts and Freund's adjuvant in rabbits, which showed histological features similar to those described by Hashimoto. In the same year, Ivan Roitt and Deborah Doniach reported in the Lancet the presence of autoantibodies directed against thyroglobulin. Until then it had seemed inconceivable that an individual would develop antibodies directed against his own body ("horror autotoxicus"). This was the first time that the possibility of autoimmune diseases had been suggested, and Hashimoto's disease established itself as the model of organ-specific autoimmune diseases. It is a good example of a situation in which a brilliant clinician provided the clinical and anatomopathological description of a disease, leaving it to future generations to understand the mechanism and, in the present case, to establish the far-reaching concept of autoimmune diseases, whether organ-specific, polyglandular, or general.
With regard to history, there is agreement that the hypertrophic forms of lymphocytic thyroiditis, which can progress to atrophy although not inevitably, would be labeled Hashimoto's thyroiditis. From the pathogenic, histologic, and biologic perspectives, these forms do not differ greatly from atrophic lymphocytic thyroiditis causing myxoedema, asymptomatic autoimmune thyroiditis, or thyroiditis occurring with nodules or cancer. These aspects are the subject of the article by Orgiazzi. Even before Hashimoto's description of lymphocytic thyroiditis, the simultaneous occurrence of hypothyroidism with other endocrinopathies had been demonstrated in Germany in 1904 by Ehrlich, and in France in 1908 by Claude and Gougerot. In 1980, these conditions of polyglandular failure were divided into four types by Neufeld, and then grouped into two varieties. Childhood-onset type 1 autoimmune polyendocrine syndrome has autosomal recessive transmission linked to a mutation of the AIRE gene, which controls the production of antibodies at the thymic and peripheral levels. This can be differentiated from type 2 (or 2/3), which is much more frequent, starts in adulthood, and is polygenic and multifactorial; its characteristics are presented by Kahaly. Whether isolated or occurring with polyendocrinopathies, autoimmune involvement of the adrenal glands alters quality of life and is life-threatening; it and its care are described in the article by Napier and Pearce. Understanding of the pathogenesis and assessment of autoimmune involvement of the parathyroid and pituitary are still in the early stages, while the immunologic and pathogenic data and management perspectives are better understood in diabetes mellitus, as confirmed in the article by Boitard. The description of the histological changes of Hashimoto's thyroiditis, only just a century ago, gained credibility over several decades. Hashimoto's thyroiditis is now considered the most prevalent autoimmune disease.
Prevalence over time
Its incidence is about 1 case per 1,000 persons per year. The prevalence is 8 cases per 1,000 when estimated from a review of published articles, and 46 cases per 1,000 when estimated from biochemical evidence of hypothyroidism and thyroid autoantibodies in subjects participating in the Third National Health and Nutrition Examination Survey. Caturegli and colleagues (Caturegli, De Remigis et al. 2013) marked the centenary of Hashimoto's seminal paper by reviewing the extensive surgical pathology archives of the Johns Hopkins Hospital for cases of Hashimoto thyroiditis, spanning an extremely long period from 1889 to 2012. The results are fascinating and a fitting way to mark this important anniversary. The study reveals a remarkable change in incidence, with very few cases for the first half of the period, then a significant increase between 1943 and 1967, a constant incidence up to 1992, and then another significant increase over the last two decades.
Of course, this retrospective pathological analysis can take little account of the clinical reasons for thyroid surgery, which will have a fundamental impact on the incidence, but the results are so striking that a rapid increase in the frequency of Hashimoto's thyroiditis in the second half of the 20th century seems an unavoidable conclusion. A review of 1,050 Austrian patients who had surgery for benign goiter between 1979 and 2009 supported this: there was a significant increase in the incidence of both lymphocytic thyroiditis and Hashimoto's thyroiditis in resection specimens over this time. In addition, a striking rise in Hashimoto's thyroiditis has been reported recently in Italy. Between 1975 and 2005, there was a 10-fold rise in incidence: patients have become relatively younger, are more likely to be male, and have lower autoantibody responses. While some of this change in incidence could be the result of increased thyroid function testing and earlier detection of disease, the overwhelming conclusion is that environmental factors must be responsible. Increased iodine intake is certainly one possible influence: a recent study in China showing increases in subclinical hypothyroidism and autoimmune thyroiditis in an area of more-than-adequate iodine intake is the latest among many such reports. But not all studies have shown an adverse effect of excess dietary iodine (possibly related to genetic factors and the rate of the increase in iodine intake in a population), and when it does occur, the effect of iodine may be transient. Nor is thyroid autoimmunity alone in this regard; celiac disease, type 1 diabetes, and multiple sclerosis have all increased in incidence over the last three decades. It is likely that aspects of urbanized living, such as higher standards of hygiene, increased prosperity, and increased exposure to environmental toxins, are responsible for this generalized trend, perhaps by altering the balance between T helper cell subtypes.
Pathogenesis and etiology
The pathogenesis of Hashimoto's thyroiditis has elicited interest since it was first reported. Dr. Hashimoto himself speculated on possible explanations of what he saw under the microscope, eventually concluding ''at present we cannot say anything definite about the cause''. Initial theories postulated that the disease was due to infection, understandably so, since infections were quite common and a large focus of clinical investigation, but no clear link with microorganisms was ever found. Other theories considered the Hashimoto goiter a premalignant condition. Some scholars believed the thyroid itself possessed a lymphogenic secretory capability that became hyperactive in these patients. Others viewed the goiter as secondary to constant anxiety and emotional unrest. In 1951, Hellwig proposed the colloidophagy theory, based on rodent studies performed in the late 1920s and his own observations in humans that macrophages exist in the thyroid gland and are capable of ingesting colloid. He postulated that thyroid macrophages that have engulfed colloid degenerate and release colloid, which then attracts lymphocytes into the thyroid. Finally, in the early 1950s, the field of autoimmunity began to take shape; animal models were being developed in which injection of a tissue extract was capable of reproducing a lymphocytic infiltration of that particular organ. This experimental approach was applied to the thyroid when, in 1956, lymphocytic infiltration of the rabbit thyroid was induced by injection of rabbit thyroid extracts. The horror autotoxicus dogma was dismantled, and autoimmunity became recognized as an important mechanism of disease. In the ensuing five decades, numerous studies have greatly expanded our understanding of the pathogenesis of Hashimoto's thyroiditis and helped translate research findings into clinical practice. We have known since the mid-1980s that thyroperoxidase is a dominant protein antigen targeted by the patient's immune system in Hashimoto's thyroiditis, and, as a result, antibodies to thyroperoxidase are now considered the most sensitive and specific biomarkers for establishing this diagnosis. They also have predictive value, since their presence precedes a clinical diagnosis of Hashimoto's thyroiditis by at least 7 years. We have also known since 1971 that Hashimoto's thyroiditis, like other autoimmune diseases, has a genetic basis. Substantial efforts have been devoted to identifying the genes that predispose to Hashimoto's thyroiditis, but results have been less fruitful than expected. Genome-wide association studies and candidate gene approaches have identified a handful of confirmed susceptibility genes (MHC class II region, CTLA-4, PTPN22, and ARID5B), each making, however, only a small contribution to the disease phenotype and through mechanisms that remain to be discovered.
Human leukocyte antigen (HLA) genes
The first gene locus identified in association with autoimmune thyroid disease was the major histocompatibility complex (MHC) region on chromosome 6p21, which encodes the human leukocyte antigens (HLAs). The HLA region, which is highly polymorphic, comprises several immune response genes. The HLA molecule, located on the antigen-presenting cell (APC), binds and presents an antigenic peptide and in this way enables T-cell recognition of and response to an antigen. Presumably, specific HLA alleles have a higher affinity for autoantigenic thyroidal peptides and are thus likely to contribute to the development of autoimmune thyroid disease. Nevertheless, in order to initiate thyroid autoimmunity, an autoantigen must first appear within the thyroid or thyroid-draining lymph nodes and then be presented by HLA. In HT, aberrant expression of HLA class II molecules on thyrocytes has been demonstrated. Presumably, such thyrocytes may act as APCs capable of presenting thyroid autoantigens and initiating autoimmune thyroid disease. In Caucasians, associations of different forms of HT with various HLA alleles have been reported, including DR3, DR5, DQ7, DQB1*03, DQw7, and the DRB1*04-DQB1*0301 haplotype. In Japanese, associations with DRB4*0101, HLA-A2, and DRw53 were demonstrated, while in Chinese patients an association with DRw9 was observed (Hawkins, Lam et al. 1987).
Cytotoxic t lymphocyte antigen-4 (CTLA-4) gene
The CTLA-4 gene, the second major immunoregulatory gene related to autoimmune thyroid disease, lies on chromosome 2q33. The expression of CTLA-4 on the surface of T cells, induced by activation of the T-cell receptor, results in suppression of T-cell activation. CTLA-4 gene polymorphisms may reduce the expression or function of the CTLA-4 antigen and may therefore contribute to reduced inhibition of T-cell proliferation, subsequently increasing susceptibility to an autoimmune response. Several polymorphisms of the CTLA-4 gene have been studied in HT patients. Among them, the initially reported (AT)n microsatellite CTLA-4 polymorphism in the 3' untranslated region (UTR) was found to be associated with HT in Caucasian and Japanese patients, but not in an Italian population. The 49A/G single nucleotide polymorphism (SNP) located in exon 1, resulting in a threonine-to-alanine substitution, was associated with HT; however, certain other studies have not confirmed this observation. A large meta-analysis, including both published and unpublished data on 866 HT patients, indicated a significant association with 49A/G (summary OR 1.29; 95% CI, 1.11-1.50). Another CTLA-4 polymorphism is the 6230A/G SNP, which is located in the 3'-UTR and designated CT60. The initial observation of an association with HT was not confirmed by later studies; however, the results of a meta-analysis based on six published and unpublished studies of 839 HT patients indicated a significant association with the CT60 SNP (summary OR 1.64; 95% CI, 1.18-2.28). Nevertheless, the exact mechanism conferring susceptibility to HT has not yet been elucidated, and further studies are needed to determine which CTLA-4 polymorphism is causative.
Protein tyrosine phosphatase nonreceptor-type 22 (PTPN22) gene
PTPN22 is the most recently identified immunoregulatory gene associated with autoimmune thyroid disease and is located on chromosome 1p13. PTPN22, which is predominantly expressed in lymphocytes, acts as a negative regulator of T-cell activation, much like CTLA-4. The 1858C/T SNP of the PTPN22 gene, resulting in an arginine-to-tryptophan substitution at codon 620 (R620W), was demonstrated to be a risk factor for many autoimmune diseases. The mechanism is not clear, since the disease-predisposing T allele has been demonstrated to enable even more efficient inhibition of T-cell activation. Presumably, weaker T-cell signalling may lead to impaired thymic deletion of autoreactive T cells, or increased PTPN22 function may result in inhibition of regulatory T cells (Tregs), which protect against autoimmunity. An early study in HT patients demonstrated a significant association with the 1858C/T SNP (OR 1.77; 95% CI, 1.56-3.97). Afterwards, this observation was confirmed neither in German, Tunisian, and Japanese populations nor in Slovenian patients. In a small group of patients with both HT and autoimmune diabetes, the T allele was found in 50% compared with only 14% of healthy controls (OR 6.14; CI, 2.62-14.38); however, another study examining the same polymorphism did not confirm this association. Recently, 5 other PTPN22 SNPs were tested in Japanese patients, showing no relation with HT, but a novel protective haplotype containing those SNPs was observed.
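As an aside, the odds ratios and confidence intervals quoted throughout these genetic-association paragraphs come from simple 2x2 carrier tables. The sketch below shows the standard odds-ratio and Wald-CI computation; the counts are hypothetical, chosen only to mirror the reported 50% vs. 14% carrier frequencies, not the study's actual group sizes.

```python
import math

def odds_ratio_wald_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% Wald CI from a 2x2 table:
    a/b = carriers/non-carriers among cases, c/d = carriers/non-carriers among controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 15/30 carriers among patients (50%), 14/100 among controls (14%).
print(odds_ratio_wald_ci(15, 15, 14, 86))  # OR ~ 6.14, Wald CI roughly (2.5, 15.3)
```

With the study's actual group sizes, the interval would narrow or widen accordingly, which is why the reconstructed CI here only approximates the reported 2.62-14.38.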
Thyroglobulin gene
Tg is an important thyroid-specific antigen, also present in the circulation, which makes it an easy target of the autoimmune response. The gene for Tg is located on chromosome 8q24, and linkage of this region with HT and autoimmune thyroid disease was first identified by Japanese and American whole-genome studies. Subsequent fine mapping of this region exposed the Tg gene as one of the major thyroid-specific susceptibility genes, linked and associated with autoimmune thyroid disease. Later, different alleles of various microsatellite markers and different SNPs of the Tg gene were related to HT, possibly affecting its expression, antigenicity, iodination, or binding to HLA. The association of the Tgms2 microsatellite marker in intron 27 with HT was confirmed in Japanese as well as Caucasian populations. Sequencing of human Tg revealed 14 SNPs, among which four, including an exon 10-12 SNP cluster and an exon 33 SNP, were associated with HT. However, this observation was confirmed neither in a larger data set of United Kingdom Caucasian patients nor in a Chinese population.
Vitamin D receptor gene
Vitamin D, which acts via the vitamin D receptor (VDR), possesses immunomodulatory properties, and its deficiency has been implicated in the development of autoimmune diseases. Many immune cells express VDR, dendritic cells in particular, where VDR stimulation has been shown to enhance their tolerogenicity. Tolerogenic dendritic cells promote the development of Tregs with suppressive activity and therefore peripheral tolerance. The VDR gene is located on chromosome 12q12, and its polymorphisms have been related to different autoimmune disorders such as type 1 diabetes and Addison's disease. A decade ago, an association between the VDR-FokI SNP in exon 2 and HT was identified, which was later confirmed in Taiwanese Chinese patients. In the Croatian population, VDR gene 3' region polymorphisms were related to HT, possibly affecting VDR mRNA expression. A significant relation has also been discovered between HT and both promoter and intron 6 gene polymorphisms of CYP27B1 hydroxylase, which is located on chromosome 12q13 and catalyses the conversion of 25-hydroxyvitamin D3 to its active form.
Cytokine genes and other immune-related genes
Lately, several genes encoding different inflammatory cytokines have been studied in HT, some of them also influencing the severity of the disease. Interferon (IFN)-γ, produced by T-helper type 1 (Th1) cells, promotes the cell-mediated cytotoxicity that underlies thyroid destruction in HT. The T allele of the +874A/T IFN-γ SNP, which causes increased production of IFN-γ, was associated with the severity of hypothyroidism in HT patients. A higher frequency of severe hypothyroidism was also observed in patients carrying the CC genotype of the -590C/T interleukin 4 (IL-4) SNP, which leads to a lower production of IL-4, one of the key Th2 cytokines suppressing cell-mediated autoimmunity. A gene polymorphism of transforming growth factor (TGF)-β, an inhibitor of cytokine production, was also associated with HT: the T allele of the +369T/C SNP, leading to lower secretion of TGF-β, was more frequent in severe than in mild hypothyroidism. Similarly, a more severe form of HT was associated with the -2383C/T SNP of the gene for forkhead box P3 (FoxP3), an essential regulatory factor for Treg development. Unlike the severity of hypothyroidism, the development of HT itself was associated with the C allele of the tumor necrosis factor (TNF)-α -1031T/C SNP; namely, C-allele carriers present with higher concentrations of TNF-α, which acts as a stimulator of IFN-γ production.
Female sex
As indicated by numerous epidemiological studies, females present with positive thyroid autoantibodies (TAbs) up to three times more often than males. The large NHANES III study showed that females were positive for TPOAbs and TgAbs in 17% and 15.2% of cases, respectively, compared with only 8.7% and 7.6% in males. According to estimates from a study of Danish twins, the genetic contribution to TPOAb and TgAb susceptibility in females was 72% and 75%, respectively, while in males it was only 61% and 39%, respectively. A possible explanation for the high female predominance in thyroid autoimmunity might be the X chromosome, which contains a number of sex- and immune-related genes of key importance in the preservation of immune tolerance. Increased immunoreactivity might therefore be related to genetic defects of the X chromosome, such as structural abnormalities or monosomy. Accordingly, a higher incidence of thyroid autoimmunity was reported in patients with a higher rate of X chromosome monosomy in peripheral white blood cells and in patients with Turner's syndrome. Another potential mechanism of impaired immunotolerance in females is skewed X-chromosome inactivation (XCI), leading to the escape of X-linked self-antigens from presentation in the thymus with subsequent loss of T-cell tolerance. Skewed XCI was associated with a higher risk of developing autoimmune thyroid diseases. Frequencies of skewed XCI in HT recently reported in separate studies were 31%, 34.3%, 25.6%, and 20%, significantly higher than in healthy controls, where the respective prevalences were only 8%, 8%, 8.6%, and 11.2%. Furthermore, a study of Danish twins demonstrated a significant association of skewed XCI with TPOAb serum concentrations in dizygotic but not in monozygotic twin pairs, indicating that shared genetic determinants of XCI pattern and TPOAb production are more likely than a causal relationship.
Pregnancy and postpartum period
The tolerance of the fetal semi-allograft during pregnancy is enabled by a state of immunosuppression resulting from hormonal changes and trophoblast expression of key immunomodulatory molecules. The pivotal players in regulation of the immune response are Tregs, which rapidly increase during pregnancy. Consequently, both cell-mediated and humoral immune responses are attenuated, with a shift towards the humoral immune response, resulting in immune tolerance of the conceptus tissues and suppression of autoimmunity. Accordingly, a decrease of both TPOAb and TgAb concentrations during pregnancy has been reported, reaching the lowest values in the third trimester. The rapid postpartum decrease of Tregs and re-establishment of the immune response to the pre-pregnancy state may lead to the occurrence or aggravation of autoimmune thyroid disease. The increase of TPOAb concentrations occurs as soon as 6 weeks after delivery, reaching the baseline level at approximately 12 weeks and the maximum level at about 20 weeks after delivery. In up to 50% of females with positive TPOAbs in early pregnancy, thyroid autoimmunity exacerbates in the postpartum period in the form of postpartum thyroiditis. It may occur within the first year after delivery, usually presenting clinically as transient thyrotoxicosis and/or transient hypothyroidism, while permanent hypothyroidism may develop in about a third of affected females.
Fetal microchimerism
The term fetal microchimerism refers to the presence of fetal cells in maternal tissues, transferred into the maternal circulation during pregnancy. Several years after delivery, chimeric male cells can be detected in the maternal peripheral blood as well as in maternal tissues, such as the thyroid, lung, skin, or lymph nodes. Fetal immune cells settled in the maternal thyroid gland may become activated in the postpartum period when immunotolerance ceases, representing a possible trigger that may initiate or exaggerate autoimmune thyroid disease. In HT, fetal microchimeric cells were detected in the thyroid in 28% to 83% of cases, significantly more often than in the absence of autoimmune thyroid disease. Furthermore, a recent study of twins supported the putative role of microchimerism in triggering thyroid autoimmunity, showing a significantly higher prevalence of TAbs in opposite-sex twins compared to monozygotic twins. Additionally, euthyroid females who had been pregnant presented with positive TPOAb significantly more often than females with no history of pregnancy. However, the relation between parity and autoimmune thyroid disease was not confirmed by large population-based studies, arguing against an essential contribution of fetal microchimerism to the pathogenesis of autoimmune thyroid disease.
Iodine intake
Excessive iodine intake is a well-established environmental factor for triggering thyroid autoimmunity. Several large population-based studies demonstrated a higher prevalence of TAbs in areas with higher iodine supply: the estimated prevalence was approximately 13% under iodine deficiency, 18% under sufficient iodine intake, and about 25% in areas with excessive iodine intake. Moreover, up to a four-fold increase in the prevalence of TAbs was demonstrated after exposure to higher iodine intake due to the improvement of iodine prophylaxis in previously iodine-deficient areas. According to an intervention study, deliberate exposure to 500 μg of iodine provoked thyroid autoimmunity in 20% of previously healthy individuals. Valuable evidence was also provided by experimental animal models of autoimmune thyroiditis, where the prevalence and severity of thyroid autoimmunity significantly increased when dietary iodine was added. Several putative mechanisms by which iodine may promote thyroid autoimmunity have been proposed. Firstly, iodine exposure leads to higher iodination of Tg and thus increases its immunogenicity by creating novel iodine-containing epitopes or exposing cryptic epitopes. This may facilitate presentation by APCs and enhance the binding affinity of the T-cell receptor, which may lead to specific T-cell activation. Secondly, iodine exposure has been shown to increase the level of reactive oxygen species in the thyrocyte, generated during TPO oxidation of excessive amounts of iodine. These enhance the expression of intracellular adhesion molecule-1 (ICAM-1) on thyroid follicular cells, which could attract immunocompetent cells into the thyroid gland. Thirdly, iodine toxicity to thyrocytes has been reported, since highly reactive oxygen species may bind to membrane lipids and proteins, causing thyrocyte damage and release of autoantigens. Fourthly, iodine excess has been shown to promote follicular cell apoptosis by inducing an abnormal expression of tumor necrosis factor-related apoptosis-inducing ligand (TRAIL) and its death receptor (DR)-5 in the thyroid. Fifthly, in vitro evidence also suggests an enhancing influence of iodine on cells of the immune system, including augmented maturation of dendritic cells, an increased number of T cells, and stimulated B-cell immunoglobulin production.
Drugs
Furthermore, certain drugs have been reported to trigger or exacerbate thyroid autoimmunity in susceptible individuals (Barbesino 2010, Hamnvik, Larsen et al. 2011). Interferon-α (IFN-α) is extensively used to treat chronic hepatitis and is frequently associated with thyroid autoimmunity: TAbs were observed in up to 40%, and clinical disease in 5-10%, of patients treated with IFN-α. Presumably, IFN-α has both thyroid-toxic effects, with consequent autoantigen presentation, and immune effects, such as a switch to a Th1 immune response, suppression of Treg function, activation of immune cells, stimulation of cytokine release, and expression of MHC class I on thyroid cells. Similarly, IL-2 treatment, used for melanoma and renal carcinoma, seems to act via immune and toxic mechanisms, leading to both TAb positivity and hypothyroidism.
In patients with known autoimmune thyroid disease, lithium may increase the risk of hypothyroidism. According to some studies, treatment with lithium has also been shown to increase TAb titres and the prevalence of thyroid autoimmunity. Among putative mechanisms, direct toxicity of lithium on the thyroid or toxicity of the increased intrathyroidal iodine resulting from lithium treatment have been discussed. Similarly, amiodarone itself, as well as its high iodine content, may act cytotoxically, which may lead to thyroid autoantigen presentation and provoke thyroid autoimmunity. Tyrosine kinase inhibitors (TKIs) such as sunitinib and sorafenib have also been implicated; Bianchi and colleagues (Bianchi, Rossi et al. 2013) suggested that sunitinib may exert these effects via multiple receptor tyrosine kinases, including the vascular endothelial growth factor receptor (VEGFR) and the platelet-derived growth factor receptor (PDGFR). In patients treated with sunitinib or sorafenib, routine thyroid function testing at baseline and measurement of TSH on day 1 of every new treatment cycle is recommended. Levothyroxine is the standard treatment for overt hypothyroidism and is recommended in some patients with subclinical hypothyroidism; overt or subclinical hypothyroidism per se does not justify the withdrawal of TKI therapy. Thyroid function tests should be included in the routine toxicity assessment of TKIs under clinical evaluation (Torino, Corsello et al. 2009, Hamnvik, Larsen et al. 2011, Eisen, Sternberg et al. 2012); however, the clinical relevance of early diagnosis of hypothyroidism in patients receiving TKIs is still controversial.
Infections
Not only IFN-α treatment but also hepatitis C infection itself has reportedly been associated with thyroid autoimmunity and hypothyroidism. Among possible mechanisms, molecular mimicry between viral and self-antigens has been suggested, and the release of proinflammatory mediators caused by viral infection may lead to activation of autoreactive T cells. Besides, several other putative triggering viruses have been implicated in HT, such as parvovirus, rubella, herpes simplex virus, Epstein-Barr virus, and human T-lymphotropic virus type 1. A recent study of sera from pregnant women has also indicated an association between prior infection with Toxoplasma gondii and an increase of TPOAbs. Nevertheless, the evidence is scarce, and further studies are required to confirm the role of infections as causative agents.
Chemicals
Exposure to environmental toxicants such as polyaromatic hydrocarbons or polyhalogenated biphenyls, both commonly used in a variety of industrial applications, has been shown to provoke thyroid autoimmunity not only in experimental animals but also in humans. Recently, a significantly higher prevalence of HT and TAb (9.3% and 17.6%, respectively) was demonstrated in residents living in the area of a petrochemical complex in Sao Paolo compared to a control area (3.9% and 10.3%, respectively). In Slovakia, exposure to polychlorinated biphenyls was associated with TAb and hypothyroidism. Although there is strong evidence attesting to the contribution of chemicals to thyroid autoimmunity, the exact mechanisms of their action are yet to be established (Langer, Tajtakova et al.).
Hashimoto's thyroiditis and papillary thyroid carcinoma
There has long been controversy in the literature about a possible link between HT and papillary thyroid carcinoma (PTC), and conflicting reports continue to emerge. Some suggest a positive correlation between the two, and even a cause-and-effect relationship, whereby the activated inflammatory response present in HT creates a favorable setting for malignant transformation. The inflammatory response may cause DNA damage through formation of reactive oxygen species, resulting in mutations that eventually lead to the development of PTC. Nevertheless, it remains unclear whether HT predisposes patients to develop PTC or is merely an incidental finding with concurrent PTC. In population-based studies where the specimens were obtained by fine-needle aspiration biopsy (FNAB), the average prevalence of PTC in patients with HT was 1.20%, with an average risk ratio of 0.69. Conversely, in studies of archival thyroidectomy specimens, the average prevalence and risk ratio were as high as 27.56% and 1.59, respectively. This variability could be a result of different methods of obtaining specimens and heterogeneity in the populations under investigation in terms of ethnic, geographic, and gender differences.
The prevalence and the risk ratio of PTC in patients with HT compared to those without HT are significantly higher in studies of thyroidectomy specimens than in studies of patients undergoing FNAB. In studies that mentioned the indications, thyroidectomy was reserved for patients not responding to thyroid suppression therapy, those with symptoms of compression, worrisome or inconclusive FNA cytology, and historical or physical findings warranting further workup and treatment (e.g., irradiation, nerve paralysis, pain, or cervical lymph node enlargement). It should be noted that the vast majority of patients with HT do not require surgery. Hence, the patients who require thyroidectomy are already at higher risk for malignancy compared to the general population with HT.
There have been a number of proposed hypotheses to explain the linkage between the two diseases. From a histological perspective, Tamimi (Tamimi 2002) assessed the prevalence and severity of thyroiditis among three types of surgically resected thyroid tumors and found a significantly higher rate of lymphocytic infiltrate in patients with PTC. However, PTC with concurrent HT is associated with female gender, young age, less aggressive disease (such as small tumor size, less frequent capsular invasion, and nodal metastasis), and better prognosis. Furthermore, these patients are also less likely to develop recurrence and have a higher survival rate. In the study by Eisenberg et al. (Eisenberg and Hensley 1989), none of the patients with a thyroid carcinoma and HT developed relapse or metastases after 74 months of follow-up. Kebebew (Kebebew, Treseler et al. 2001) demonstrated that chronic lymphocytic thyroiditis (CLT) correlates with improved survival in patients with PTC but is not an independent prognostic factor. Boi et al. investigated the relationship between thyroid autoimmunity and thyroid cancer in a series of FNAB of unselected and consecutive thyroid nodules. This study revealed that the positive predictive values for thyroid carcinoma in antithyroid antibody-positive and -negative nodules do not differ significantly for class III (indeterminate risk) and class IV (suspected malignancy) cytology. It is important to distinguish between diffuse and focal lymphocytic infiltration around the tumor. In the studies that described the histological findings, HT was defined as diffuse lymphocytic infiltration, rather than peritumoral lymphocytic infiltration alone. The significance of this is that HT does not represent a reaction to the tumor alone but is an independent chronic process. In chronic inflammation, there are reactive alterations of stroma brought on by injury from chemokines, cytokines, and growth factors that cause damage to stromal cells. This in turn may cause malignant transformation in epithelial cells, thereby resulting in tumor development. In contrast, the lymphocytic infiltrate of HT may be an immunological response with a cancer-retarding effect, contributing to a favorable outcome of PTC compared to other thyroid cancers. Moreover, the relatively high prevalence of PTC in autopsy series may represent host immune control. Interestingly, lymphocytic infiltration within or surrounding the tumor was found to correlate with the existence of CLT. This may explain the "protective" effect of CLT in PTC.
Another hypothesis for a causal relationship between HT and PTC is that the elevated levels of TSH found in hypothyroid patients with HT stimulate follicular epithelial proliferation, thereby promoting the development of papillary carcinoma (Jankovic, Le et al. 2013). A subset of studies that adjusted for autoimmune thyroiditis did not find this relationship between TSH and a heightened odds ratio for thyroid cancer. Conversely, several authors identified a few biomolecular markers, including RET/PTC rearrangements, p63 protein, and loss of heterozygosity of hOGG1, that are potentially involved in neoplastic transformation from HT to PTC. So far, no causal genetic linkage has been confirmed.
In conclusion, the existing data provide inconsistent evidence favoring a causal relationship between HT and PTC. Population-based studies using FNAC show no significant increase of PTC in patients with HT, whereas surgical series using thyroidectomy show a heightened risk for coexistent PTC, possibly related to selection bias. Prospective studies involving a large number of subjects and long-term follow-up are needed to further elucidate the relationship. Several studies also suggest that HT appears to confer a better prognosis in patients with PTC, but more research is necessary to further investigate this. At the present time, there is no valid established criterion to identify those patients with HT at a higher risk of developing PTC. Careful observation and follow-up of HT patients is recommended, especially those with nodular variants.
Laboratory tests
Elevated anti-TPO or anti-Tg antibody titers are the most specific laboratory findings for establishing the diagnosis of autoimmune thyroid disease (AITD) or HT, typically making biopsy unnecessary. The 24-hour thyroid radioactive iodine-123 or -131 (123I or 131I) uptake (RIU) is also helpful to distinguish Hashitoxicosis from Graves' disease (GD); the RIU is low in patients with Hashitoxicosis, whereas it is elevated in those with GD. 123I is preferred over 131I because it has a shorter half-life (13 hours for 123I versus 8 days for 131I), allowing quicker dissipation of background radiation. Since radioactive iodine is secreted in breast milk, and 123I has a short half-life, it is recommended for diagnostic thyroid studies in nursing mothers. Breast milk must be pumped and discarded for 2 days after the intake of 123I used for either thyroid uptake or thyroid scanning.
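The preference for 123I rests on simple exponential decay. A minimal sketch of the arithmetic, using the half-lives quoted above (13 hours for 123I, 8 days for 131I), shows how much activity would remain after the 2-day pump-and-discard window; the function and printed figures are illustrative only.

```python
def fraction_remaining(half_life_hours: float, elapsed_hours: float) -> float:
    """Fraction of a radioisotope's activity left after exponential decay."""
    return 0.5 ** (elapsed_hours / half_life_hours)

# Activity remaining 48 hours (2 days) after intake:
print(f"123I: {fraction_remaining(13, 48):.1%}")       # ~7.7% remains
print(f"131I: {fraction_remaining(8 * 24, 48):.1%}")   # ~84.1% remains
```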
Scintigraphy reveals inhomogeneous activity throughout the gland in 50% of patients and a pattern suggestive of either hot or cold nodules, or a combination of both, in 30%. Twenty percent of patients with HT have normal findings on scintigraphic thyroid imaging.
Clinical pictures
Clinical manifestations of HT are variable and commonly include diffuse or nodular goiter with euthyroidism, subclinical hypothyroidism (a combination of elevated serum TSH concentrations with normal free T4 and T3 concentrations), and permanent hypothyroidism. Occasionally, HT causes acute destruction of thyroid tissue and release into the blood of stored thyroid hormones, causing transient thyrotoxicosis. This condition has been termed "Hashitoxicosis" or "painless sporadic thyroiditis", or "painless postpartum thyroiditis" when it occurs in women after delivery. In Hashitoxicosis, serum TSH is suppressed, and total and free T3 and T4 are elevated. Also, serum T4 is proportionally higher than T3, reflecting the ratio of stored hormones in the thyroid gland, whereas in GD and in toxic nodular goiter, T3 is preferentially elevated. Rarely, a hypofunctioning gland in HT may become hyperfunctioning with the onset of coexistent GD. In patients with GD, HT is usually present concurrently.
Treatment
If overt hypothyroidism is present, the treatment of choice for HT is the administration of L-thyroxine (L-T4) in the usual replacement doses. We also use L-T4 to treat patients with HT who have subclinical hypothyroidism and high serum thyroid antibody concentrations, because in these cases progression to overt hypothyroidism is common, and hyperlipidemia and atherosclerotic heart disease may develop. L-T4 may mildly and indirectly suppress serum concentrations of autoantibodies owing to decreased stimulation of thyroid tissue by TSH, with a consequent reduction of antigen production. The goal of treatment is to restore a euthyroid state, both clinically and biochemically. For that, free T4 levels must be within the reference range and TSH in the lower half of the reference range. The usual dose of L-T4 is 1.6-1.8 μg/kg per day and is patient dependent. Elderly patients usually require a smaller dose of L-T4, sometimes less than 1 μg/kg per day. The initial dose and the optimal time needed to establish the full replacement dose should be individualised relative to age, weight, and cardiac status.
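To make the weight-based dosing concrete, here is a trivial sketch of the 1.6-1.8 μg/kg/day arithmetic cited above. It is illustrative only, not clinical guidance: as the text stresses, the dose must be individualised for age, weight, and cardiac status.

```python
def lt4_replacement_dose_ug(weight_kg: float, ug_per_kg: float = 1.6) -> float:
    """Daily full-replacement L-T4 dose in micrograms (usual range 1.6-1.8 ug/kg/day)."""
    return weight_kg * ug_per_kg

# A 70-kg adult at the lower and upper ends of the usual range:
print(f"{lt4_replacement_dose_ug(70):.0f} ug/day")        # 112 ug/day
print(f"{lt4_replacement_dose_ug(70, 1.8):.0f} ug/day")   # 126 ug/day
```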
In HT patients with a large goiter and normal or elevated serum TSH, we believe L-T4 may be given in doses sufficient to suppress serum TSH in an effort to shrink the thyroid, although randomized studies are needed to verify the long-term safety of this approach with respect to potential cardiovascular and skeletal side effects. Suppressive doses of L-T4 tend to shrink the goiter by an average of 30% over 6 months. If the goiter does not regress, the L-T4 doses are lowered. Goiters that are hard and fibrotic do not respond to L-T4 treatment. If the thyroid gland is only minimally enlarged, the patient is euthyroid, and TSH levels are normal, the patient should remain under medical supervision, since hypothyroidism may develop years later. Also, patients should be informed about the importance of compliance with replacement treatment and instructed to report any symptoms suggesting hyperthyroidism, which could be due to an overdosage of L-T4. The intake of L-T4 should be separated by at least 4 hours from that of other drugs such as calcium carbonate, ferrous sulfate, cholestyramine, sucralfate, iron-containing multivitamins, antacids containing aluminum hydroxide, phenytoin sodium, carbamazepine, and amiodarone HCl, all of which impair the absorption/metabolism of L-T4.
Selenium (Se) supplementation in patients with AITD, including HT, seems to modify the inflammatory and immune responses, probably by enhancing plasma glutathione peroxidase (GPX) and thioredoxin reductase (TR) activity and by decreasing toxic concentrations of hydrogen peroxide (H2O2) and lipid hydroperoxides resulting from thyroid hormone synthesis. When Se intake is adequate, the intracellular GPX and TR systems protect the thyrocyte from these peroxides, considering that oxidative stress induces TR1 and GPX. The current recommended dietary intake of selenium in humans to achieve the maximal activity of GPX in plasma or in erythrocytes is between 55 and 75 μg per day. It must be considered that organic forms of Se, such as Se-methionine and yeast-bound Se, have a much lower toxicity and a much higher effectiveness and safety than inorganic Se such as sodium selenate. Several studies have revealed a significant reduction of anti-TPO concentrations in patients with AITD treated with 200 μg Se per day for three, six, or nine months. One study found an overall decrease of 46% at 3 months (P<0.0001) and of 55.5% at 6 months (P<0.05) of treatment with L-selenomethionine plus L-T4. Others found a decrease of 26.2% at 3 months (P<0.001) and an additional 23.7% at 6 months (P<0.01) after L-Se-methionine treatment. A significant decrease in mean serum anti-TPO levels was also noted after the daily intake of 200 μg sodium selenite for 3 months. This decrease amounted to 36.4% in the selenium-taking group of patients versus 12% in the control group (P=0.013) (Gartner, Gasnier et al. 2002). A recent study in 80 Greek women with HT showed a significant reduction of serum anti-TPO levels during the first 6 months of L-Se-methionine treatment (P<0.0001). Anti-TPO decreased by 5.6% and by 9.9% after 3 and 6 months of L-Se-methionine treatment, respectively. The extension of L-Se-methionine supplementation for 6 more months resulted in an additional 8% decrease, while cessation of treatment resulted in a 4.8% increase in anti-TPO concentrations.
A systematic review and meta-analysis by Toulis (Toulis, Anastasilakis et al. 2010) provided evidence that selenomethionine at a dose of 200 μg once per day is effective in reducing TPOAb titers in patients with HT after a 3-month period, compared with placebo. In absolute numbers, this reduction equals ~300 IU/mL. Efficacy of Se supplementation for >3 months could not be supported, because of a lack of evidence from randomized, placebo-controlled trials. Patients assigned to Se supplementation also had a threefold higher chance of reporting an improvement in well-being and/or mood, compared with controls. No serious adverse effects were recorded after Se supplementation, with the exception of a limited number of gastric discomfort complaints associated with selenomethionine use. In general, there are no data demonstrating that Se treatment has any impact on the natural course of the disease.
Further controlled and extensive studies are needed to clarify the exact mechanisms by which Se exerts effects on anti-TPO production, and to investigate the long-term clinical effects of Se treatment.
In a study involving 21 patients with HT and subclinical hypothyroidism, simvastatin at a daily oral dose of 20 mg for a period of eight weeks improved thyroid function, inducing an increase in serum free T3 and free T4 levels and a decrease in TSH levels, possibly by stimulating apoptosis of certain types of lymphocytes; decreases in anti-TPO and anti-Tg antibodies were not statistically significant. Further controlled and extensive studies are needed to investigate the effectiveness of statin treatment on the course of HT.
Patients with Hashitoxicosis may have only mild thyrotoxicosis and may not require treatment. Antithyroid treatment with thiourea drugs is contraindicated, because there is no excess thyroid hormone production. Patients who have more symptoms should undergo a 24-hour thyroid RIU test and a radioiodine scan to determine whether GD may be present, and may be treated with beta-blockers. In symptomatically thyrotoxic patients with a low thyroid 123I uptake, propranolol treatment is continued, and sodium ipodate or iopanoic acid may also be given in doses of 500 mg daily orally until the patient is euthyroid. Sodium ipodate and iopanoic acid are iodinated oral cholecystographic contrast agents that inhibit peripheral 5'-monodeiodination of thyroxine, thereby blocking its conversion to active T3. Patients with a low thyroid RIU do not respond to thiourea medications.
Patients with HT and a large goiter with pressure symptoms such as dysphagia, voice hoarseness, stridor and respiratory distress, may require surgical care. Also, in HT the presence of a malignant nodule or of a thyroid lymphoma diagnosed by histology after a fine-needle aspiration is an absolute indication for thyroidectomy. | 2019-01-23T00:11:41.922Z | 2014-05-21T00:00:00.000 | {
"year": 2014,
"sha1": "8c0bb0267ba3a6e5009964ed388b9cd8b60a7ed7",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/46420",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "017fb35f89e8abedcf53c80915d80f79fa1da820",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
7629573 | pes2o/s2orc | v3-fos-license | Enzootic Angiostrongyliasis in Shenzhen, China
To the Editor: Angiostrongylus cantonensis is a zoonotic parasite that causes eosinophilic meningitis in humans after they ingest infective larvae in freshwater and terrestrial snails and slugs, paratenic hosts (such as freshwater fish, shrimps, frogs, and crabs), or contaminated vegetables. With the increase in income and living standards, and the pursuit of exotic and delicate foods, angiostrongyliasis has become an important foodborne parasitic zoonosis in populations around the world (1–9).
Shenzhen municipality is situated in the most southern part of mainland People’s Republic of China between the northern latitudes of 22°27′ to 22°52′ and eastern longitudes of 113°46′ to 114°37′; it shares a border with the Hong Kong Special Administrative Region, China, in the south. The climate is subtropical, with an average annual temperature of 23.7°C. The city is 1,952.84 km2 and has a population of 10 million.
Since 2006, thirty-two sporadic cases of human eosinophilic meningitis caused by consumption of undercooked aquacultured snails have been documented in Shenzhen (Shenzhen Center for Disease Control and Prevention, unpub. data). To identify the source of these infections and assess the risk for an outbreak of eosinophilic meningitis, we conducted a survey to investigate whether A. cantonensis occurs in wild rats and snails in Shenzhen.
To examine A. cantonensis infection in intermediate host snails, 302 terrestrial snails (Achatina fulica) were collected from 10 investigation sites across Shenzhen, and 314 freshwater snails (Pomacea canaliculata) were sampled from 6 investigation sites. We examined the snails for A. cantonensis larvae by using standardized pepsin digestion procedures (3). To survey the prevalence of adult A. cantonensis in definitive host rats, we collected 187 Rattus norvegicus rats and 121 R. flavipectus rats from 4 sites where snails positive for A. cantonensis had been found. These rats were examined for the presence of adult A. cantonensis in their cardiopulmonary systems.
A. cantonensis larvae were found in 96 (15.6%) of 616 examined snails. Of these, P. canaliculata had an average infection rate of 20.7% (65/314), significantly higher (p<0.01) than that of A. fulica (10.3%, 31/302), an indication that P. canaliculata may be the principal intermediate host for A. cantonensis in Shenzhen. A. cantonensis adults were recovered from the cardiopulmonary systems of 37 (12%) of 308 examined rats. Infection rate for R. norvegicus rats was 16.6% (31/187), significantly higher (p<0.01) than that for R. flavipectus (4.9%, 6/121), an indication that R. norvegicus may be the principal definitive host for A. cantonensis in Shenzhen, possibly due to the rat’s preference for eating snails. Infection rates were higher for female rats (25.6% for R. norvegicus and 7.8% for R. flavipectus) than for male rats (8.9% for R. norvegicus, 2.9% for R. flavipectus), possibly because female rats eat more snails to supply proteins for reproduction. This report of enzootic A. cantonensis infection in wild rats and snails in Shenzhen demonstrates the existence of natural origins of infection with A. cantonensis for humans in this city.
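For readers who want to reproduce the significance tests behind these infection-rate comparisons, the sketch below runs a chi-square test on the 2x2 table implied by the reported snail counts (65/314 vs. 31/302); this is one standard way to obtain such a p value, though the letter does not state which test the authors used.

```python
from scipy.stats import chi2_contingency

# 2x2 table (infected, uninfected) built from the counts reported above:
# P. canaliculata: 65/314 positive; A. fulica: 31/302 positive.
table = [[65, 314 - 65],
         [31, 302 - 31]]

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.5f}")  # p < 0.01, as reported
```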
Persons in Shenzhen eat raw or undercooked freshwater and terrestrial snails and slugs. This practice provides opportunities for infection with A. cantonensis, particularly given that P. canaliculata has been aquacultured intensively for human consumption. The prevalence of A. cantonensis in wild rats and snails in Shenzhen poses substantial risk for future outbreaks of human eosinophilic meningitis. Moreover, public health officials, epidemiologists, researchers, clinical technicians, medical practitioners, parasitologists, and veterinarians, as well as the general public, should be aware of such risks, and integrated strategies should be taken to reduce or eliminate such risks.
(1). Four of these deaths occurred in Turkey in 2006. Understanding gaps in the public's knowledge about avian influenza risks and transmission provides guidance on which issues future public health information campaigns may wish to focus. From a public health perspective, a more informed general public will be less likely to unnecessarily alter their travel and food consumption behavior and more likely to take appropriate preventive actions.
A 2006 Eurobarometer survey asked 29,170 residents of the 27 countries in the European Union, Croatia, and Turkey about their knowledge of avian influenza risks (2). Eurobarometer surveys are undertaken by the European Commission to monitor the EU public's social and political opinions. The survey was conducted on a multistage random sampling basis. Therefore, the sample is representative of the whole territory surveyed. Each country's population was randomly sampled according to rural, metropolitan, and urban population densities. A cluster of addresses was selected from each primary sampling unit by using country-dependent resources such as electoral registers. Addresses were chosen systematically by using standard random route procedures, beginning with a randomly selected initial address. The survey was conducted by face-to-face interviews in respondents' homes.
Data were collected from March 27 through May 1, 2006. This period is especially interesting when looking at Europeans' knowledge about avian influenza risk because the first European cases of avian influenza (H5N1) were found in October 2005 in Turkey; additional cases were found later that month in Romania, Croatia, and the United Kingdom. Therefore, the period would have included media coverage about avian influenza as well as any targeted public health efforts to inform residents about avian influenza risks. By the end of this survey's fieldwork period, 17 of the 29 countries surveyed had reported influenza virus (H5N1) in birds, 3 in mammals, and 1 in humans (3).
Respondents were asked 7 questions about their knowledge of the risks humans face regarding avian influenza (Table). When we looked at these results with the aim of setting future public health information campaign objectives, we considered incorrect or "don't know" responses to indicate public health information campaign failures. Uncertainty regarding avian influenza risks appeared to involve consumption of eggs and vaccinated, cooked poultry and whether the virus can be transmitted between humans. However, for all questions asked, more than half of the respondents answered correctly except when asked about eating poultry that had been vaccinated against avian influenza. This question also had the highest number of "don't know" responses. Respondents are most knowledgeable about the preventive measure of culling chickens, perhaps because of the media attention these events attract. The large percentage of correct answers for some questions points to successes of previous information campaigns and media coverage, but the 40% of respondents answering incorrectly or "don't know" to questions about poultry and egg consumption | 2014-10-01T00:00:00.000Z | 2008-12-01T00:00:00.000 | {
"year": 2008,
"sha1": "2417dc3d66564442cf5e1f3fd17b3cbf7ef2d80a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3201/eid1412.080695",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b70dfbae3285e87aa563febf2cf9cd4320901bbb",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
201666066 | pes2o/s2orc | v3-fos-license | Camera Pose Correction in SLAM Based on Bias Values of Map Points
Accurate camera pose estimation is essential for visual SLAM (VSLAM). This paper presents a novel pose correction method to improve the accuracy of the VSLAM system. Firstly, the relationship between the camera pose estimation error and the bias values of map points is derived from the objective function optimized in VSLAM. Secondly, the bias value of each map point is calculated by a statistical method. Finally, the camera pose estimation error is compensated according to the relationship derived in the first step. After the pose correction, procedures of the original system, such as bundle adjustment (BA) optimization, can be executed as before. Compared with existing methods, our algorithm is compact and effective and can be easily generalized to different VSLAM systems. Additionally, the robustness to system noise of our method is better than that of feature selection methods, because all original system information is preserved in our algorithm, whereas only a subset is employed in the latter. Experimental results on benchmark datasets show that our approach leads to considerable improvements over state-of-the-art algorithms for absolute pose estimation.
I. INTRODUCTION
VSLAM can estimate the camera trajectory and reconstruct the environment; it is therefore very important in many applications, such as mobile robot navigation and augmented reality (AR). To improve accuracy, the parallelism [1] and orthogonality [4] of lines or planes have been utilized. However, since no prior structural information is available when exploring new environments, these methods cannot always be used. Unlike the above methods, optimization-based methods [5], [21] and the matrix-theory-based approach [13] are not limited by the environment, but most of their pipelines are complex and difficult to combine with different SLAM systems. Multi-sensor fusion can compensate for the drawbacks of individual sensors; therefore, the IMU is widely used in VSLAM to improve system robustness [20], [24], [25]. However, accurate IMU bias estimation is difficult, and large estimation errors may affect the localization performance of the SLAM system. This paper presents a novel pose correction method, which is compact and effective and preserves all original system information. The work most related to this paper is [14], where map points with small bias values are chosen to estimate the camera pose while the other map points are abandoned. Unlike [14], which chooses a subset of map points to reduce the estimation error, our method compensates the pose estimation error based on the bias values of all map points.
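To make the three-step pipeline above concrete, here is a minimal sketch of how such a computation might look, assuming per-point reprojection residuals and residual-to-pose Jacobians are available from the tracking front end. The bias estimator (a per-point mean residual) and the least-squares compensation step are illustrative assumptions; the paper's actual derivation of the error-bias relationship is not reproduced here.

```python
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix built from a rotation increment w."""
    return np.array([[0., -w[2], w[1]],
                     [w[2], 0., -w[0]],
                     [-w[1], w[0], 0.]])

def se3_exp_approx(delta):
    """First-order SE(3) exponential map for a small increment delta = [t; w]."""
    T = np.eye(4)
    T[:3, :3] += skew(delta[3:])
    T[:3, 3] = delta[:3]
    return T

def estimate_point_biases(residuals):
    """One plausible 'statistical method': the bias of a map point is the mean
    of its 2-D reprojection residuals over all of its observations.
    residuals: {point_id: (N_obs, 2) array}."""
    return {pid: r.mean(axis=0) for pid, r in residuals.items()}

def correct_pose(T_est, biases, jacobians):
    """Compensate the pose using ALL map points' biases (no feature selection):
    stack the biases, solve a least-squares problem for a 6-DoF increment,
    and retract it onto the estimated pose.
    jacobians: {point_id: (2, 6) residual-w.r.t.-pose Jacobian}."""
    ids = sorted(biases)
    J = np.vstack([jacobians[pid] for pid in ids])   # (2N, 6)
    b = np.hstack([biases[pid] for pid in ids])      # (2N,)
    delta, *_ = np.linalg.lstsq(J, -b, rcond=None)
    return T_est @ se3_exp_approx(delta)
```

Because every map point contributes a row to the stacked system, all original information is retained, which is the property this paper contrasts with the feature selection strategy of [14].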
Our algorithm has two advantages compared with existing methods. First, it is compact and effective and easy to integrate into different SLAM systems. Second, its robustness is better than that of feature selection methods such as [14]: since only a subset of features is chosen in feature selection methods, they are more sensitive to system noise, as demonstrated by the experimental results.
II. RELATED WORK
Structural regularity based methods: A monocular SLAM system that leverages structural regularity in a Manhattan world and contains three optimization strategies is proposed in [1]. However, to reduce the estimation error of the rotational motion, multiple orthogonal planes must be visible throughout the entire motion estimation process. Unlike [1], which uses only planes, the rotational motion is estimated jointly from lines and planes in [2]. Once the rotation is found, the translational motion can be recovered by minimizing the de-rotated reprojection error. In [3], the accuracy of BA optimization is enhanced by incorporating feature scale constraints into it. Structural constraints between nearby planes (e.g., right angles) are added to the SLAM system to further correct drift and distortion in [4]. Since structural regularity does not exist in all environments, the application scope of this category is limited.
Optimization-based methods and matrix-theory-based methods: A new initialization method for the orientations in the pose graph optimization problem is proposed in [5]. In this method, the orientation values are calculated by an iterative approach, and the relative orientation mismatches of the cost function are approximated by a quadratic cost function. In [6], the photometric and depth errors over all pixels are employed to reduce the estimation error in an RGB-D system. However, this method is time-consuming, and real-time performance is difficult to achieve. Different from using all pixel depth information as in [6], a monocular camera combined with sparse depth information from LiDAR is employed in [7], and three optimization strategies are carefully designed considering both accuracy and computational cost. Similar to [6] and [7], a new approach that densely fuses several stereo depths in a local neighborhood establishes a locally dense and globally sparse map. Rao-Blackwellized particle filter (RBPF) methods are employed in [8] and [9]; the difference between them is that [8] presents a new RBPF method, while the drawbacks of the RBPF are overcome in [9] by the scaled unscented transformation. How to obtain an accurate map in large or scale-uncertain environments is studied in [10]-[12]. To achieve good accuracy without sacrificing speed, new matrix decomposition methods are proposed in [13] and [15]. Different from the above methods, where all system information is employed, a good feature selection algorithm is introduced in [14]. By selecting the map points with smaller errors, it reaches a balance between the error expectation and the covariance. Most of these pipelines are complex; therefore, it is difficult to integrate them into different SLAM systems.
Methods of integration with IMU: In [16], four cameras and an IMU are tightly fused on a Micro Aerial Vehicle (MAV). A new approach that tightly combines visual measurements with IMU measurements is proposed in [17]; the novelty is that the IMU error term is integrated with the landmark reprojection error in a fully probabilistic manner. In [18], IMU information is employed in ORB-SLAM [19] to solve the scale problem of a monocular system. The performance of such SLAM systems is easily affected by the bias estimation results and IMU noise.
The main contributions of this paper can be summarized as follows. (i) A new camera pose correction method is proposed. (ii) A bias calculating method for the map points is integrated into our framework; thanks to this method, our system can operate in real time. (iii) Experimental results demonstrate that our method outperforms state-of-the-art SLAM systems.
III. RELATIONSHIP BETWEEN POSE ESTIMATION ERROR AND MAP POINT BIAS
The commonly used SLAM system is shown in Fig. 1. In this system, the main objective is to estimate the camera pose $x$ and the map points $p_i$ by minimizing the errors between the observed and estimated values, which can be written as
$$\{x^*, p_i^*\} = \arg\min_{x,\,p_i} \sum_{i=1}^{n} \left\| z_i - h(x, p_i) \right\|^2, \qquad (1)$$
where $n$ is the number of matched image feature points $z_i$ at the camera pose $x$, and $h(x, p_i)$ is the camera projection model. To simplify the description, the subscripts of symbols whose dimension can be easily determined are omitted. This problem can be solved by the Gauss-Newton method or the Levenberg-Marquardt (LM) method. Since $h(x, p_i)$ is a nonlinear function, it must be linearized to fit these optimization methods. The first-order approximation to $h(x, p_i)$ about the initial guess $x^{(s)}$ can be written as
$$h(x, p_i) \approx h(x^{(s)}, p_i) + H_x\,(x - x^{(s)}), \qquad (2)$$
where $H_x$ is the Jacobian matrix with respect to $x$. Similar to (2), the first-order approximation to $h(x^{(s)}, p_i)$ about the initial guess $p_i^{(s)}$ is
$$h(x^{(s)}, p_i) \approx h(x^{(s)}, p_i^{(s)}) + H_{p_i}\,(p_i - p_i^{(s)}), \qquad (3)$$
where $H_{p_i}$ is the Jacobian matrix with respect to $p_i$. According to the Gauss-Newton method, (1) and (2), the pose update can be written as
$$x^{(s+1)} = x^{(s)} + H_x^{+}\,\big(z - h(x^{(s)}, p)\big), \qquad (4)$$
where $H_x^{+}$ is the pseudo-inverse of $H_x$. Substituting (3) into (4), the pose estimation error becomes
$$e_x = H_x^{+}\,\big(\epsilon_z + H_{p_i}\,\epsilon_{p_i}\big), \qquad (5)$$
where $\epsilon_{p_i}$ represents the map point error. Suppose the image observation error follows a zero-mean Gaussian distribution, i.e. $\epsilon_{z_i} \sim N(0, \Sigma_{z_i})$, and the error of the map point follows a non-zero-mean (biased) Gaussian distribution, i.e. $\epsilon_{p_i} \sim N(\mu_{p_i}, \Sigma_{p_i})$. This assumption is reasonable, because the bias of the map point can be introduced by the BA optimization or by image measurement error. Under this assumption, the expectation of the camera pose estimation error is
$$E[e_x] = H_x^{+} H_{p_i}\, \mu_{p_i}. \qquad (6)$$
It is obvious that the pose estimation error can be reduced if the bias value of the map point is known. However, obtaining an accurate bias value is difficult due to system errors.
In this paper, a bias calculating expression proposed in [29] is employed, which will be introduced in the next section.
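To make the correction implied by (4)-(6) concrete, the following minimal numpy sketch (hypothetical variable names; not the authors' implementation) applies one Gauss-Newton update and then removes the expected bias-induced error term, with the sign convention of (5)-(6):

import numpy as np

def corrected_pose_update(x, z, h_pred, H_x, H_p, mu_p):
    """One Gauss-Newton pose step with map-point-bias compensation.

    x      : current pose estimate (e.g., a 6-vector)
    z      : stacked feature observations (2n-vector)
    h_pred : stacked projections h(x, p_i) at the current estimate
    H_x    : Jacobian of h w.r.t. the pose (2n x 6)
    H_p    : Jacobian of h w.r.t. the map points (2n x 3n)
    mu_p   : stacked map-point bias values (3n-vector)
    """
    H_x_pinv = np.linalg.pinv(H_x)            # pseudo-inverse H_x^+
    x_new = x + H_x_pinv @ (z - h_pred)       # standard update, cf. (4)
    x_new = x_new - H_x_pinv @ (H_p @ mu_p)   # subtract E[e_x], cf. (6)
    return x_new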
A. Bias Calculation
The basic idea of the bias calculating method is introduced in this subsection; more details are available in [29].
Let v and ω represent the translation vector and the rotation vector of the camera. According to the optical flow, the 3D camera motion and the scene depth, the velocity fields of the image point z i can be expressed as in (8), where p(x i , y i ) and q(x i , y i ) are the horizontal and vertical velocity fields, d(x i , y i ) = v z /z(x i , y i ) is the scaled inverse scene depth, and the meanings of the other parameters are listed in Table I. For N matched feature points in two consecutive frames, normalizing linear distances with respect to the focal length, (8) can be written in the matrix form (9), where the meanings of the symbols are again shown in Table I. The aim is to calculate z from u. For VSLAM, since the camera motions corresponding to these two frames are known, Ω is known, and (9) can be rewritten as (10)-(11), where the meanings of r i and s i are listed in Table I. The least-squares solution ẑ of (10) is then given by the standard pseudo-inverse expression; see the sketch after this paragraph. To simplify the description of the bias expression, some shorthand is introduced; according to [29], the bias of the inverse depth estimate of the i-th feature point is given by (12), where σ i ² is the variance of the image coordinate measurements, r ix and s ix are the derivatives of r i and s i with respect to x, and r iy and s iy have similar meanings.
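A minimal sketch of the least-squares depth solve of (10), assuming the system has been assembled into a linear form u = A z (the matrix layout is schematic; the exact construction follows [29]):

import numpy as np

def inverse_depths_two_frame(A, u):
    """Least-squares inverse depths from the linear two-frame system (10).

    A : coefficient matrix built from r_i, s_i and the known camera motion
    u : stacked normalized image velocities
    """
    z_hat, *_ = np.linalg.lstsq(A, u, rcond=None)  # pseudo-inverse solution
    return z_hat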
Since the depth result is easily affected by noise in two-frame reconstruction, L two-frame reconstruction results are employed to reduce the depth error; the averaged depth result and bias, d̄ and μ d̄ , are given by (13) and (14). For the biased estimate, the unbiased value can be recovered as d̄ c = d̄ − μ d̄ . In VSLAM, map points can be observed by multiple frames, as shown in Fig. 2. Therefore, map points are jointly recovered by these frames, and μ d̄ may not be the bias value of a map point, because it is derived from two-frame reconstruction. To solve this problem, we first reconstruct map points using multiple two-frame pairs, obtaining d̄ and μ d̄ according to (13) and (14), from which d̄ c is computed. Since d̄ c is an unbiased estimate, supposing the actual value acquired by multi-frame reconstruction is d, the bias value of the map point is μ d = d − d̄ c . According to the bias value μ d and (6), the camera pose estimation error can be corrected.
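A minimal sketch of this multi-two-frame procedure, assuming plain means over the L two-frame results in (13)-(14) (the exact weighting follows [29]; variable names are hypothetical):

import numpy as np

def map_point_bias(d_hats, mu_hats, d_multi):
    """Bias of a map point from L two-frame reconstructions.

    d_hats  : L two-frame inverse-depth estimates
    mu_hats : L bias values of those estimates, from (12)
    d_multi : depth value actually used by the multi-frame back end
    """
    d_bar = np.mean(d_hats)     # averaged two-frame estimate, cf. (13)
    mu_bar = np.mean(mu_hats)   # averaged two-frame bias, cf. (14)
    d_bar_c = d_bar - mu_bar    # unbiased two-frame value
    return d_multi - d_bar_c    # mu_d = d - d_bar_c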
B. Anomaly Detection Strategy
In practical applications, the bias values of some map points may be very large due to mismatches or camera motion errors, which can make the system unstable. To avoid this negative effect, we propose a heuristic strategy to determine which map points can be used to compensate the pose estimation error. Since the bias value is negatively correlated with the image parallax, the principle of our strategy is to choose map points recovered with large parallax. In this article, our method is combined with VINS-Mono [20], where the camera motion is detected by the IMU; therefore, the parallax can be replaced by the camera angular velocity. The strategy compares the bias value of each map point against a threshold that depends on ∥ω cam ∥ 2 , the l 2 -norm of the angular velocity. If the bias value of the map point is larger than the threshold value, the corresponding map point is not used to correct the pose error. These threshold values are set based on test results on different datasets, which makes our algorithm stable and accurate.
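The gate can be sketched as follows; the banded-threshold table is a hypothetical stand-in for the dataset-tuned threshold values mentioned above:

import numpy as np

def usable_for_correction(mu_d, omega_cam, banded_thresholds):
    """Heuristic gate: keep a map point for pose correction only if its
    bias is small relative to the current camera angular velocity.

    mu_d              : bias value of one map point
    omega_cam         : camera angular velocity from the IMU (3-vector)
    banded_thresholds : list of ((lo, hi), limit) angular-velocity bands
                        (hypothetical structure; limits are tuned per dataset)
    """
    speed = np.linalg.norm(omega_cam)      # ||omega_cam||_2, parallax proxy
    for (lo, hi), limit in banded_thresholds:
        if lo <= speed < hi:
            return abs(mu_d) <= limit      # small-bias points pass the gate
    return False                           # out-of-band motion: be conservative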
The full pipeline of our camera pose correction method is shown in Fig. 3. In this figure, the camera pose obtained by the front end is used to calculate the depth and bias of map points according to (11) and (12). To make full use of all frames that observe the same map points, multiple two-frame reconstructions are processed. Based on the reconstruction results, the bias value is calculated by (13) and (14), and the camera pose error in the front end is corrected. Finally, the results of the front end are optimized by the back end. Our algorithm only modifies the results obtained by the front end and is independent of the back end; therefore, it is easy to integrate into different VO/SLAM systems.
Remark 1: In the abstract and Section I, we emphasize that one characteristic of our method is that all original system information is preserved. This does not contradict the fact that some map points may be excluded from the pose correction, because the pose correction is a separate procedure. The steps of the original system are not changed; therefore, all system information can still be used after the pose correction.
V. EXPERIMENTS
The effectiveness of our method is verified by integrating it into the VINS-Mono framework. Our algorithm is activated after initialization is finished. The pose error correction is applied once feature points are successfully tracked and triangulated, and finally BA optimization is executed. The experiments are performed on an Intel Core i7-2670QM computer with 8 GB RAM, and we evaluate the accuracy on the EuRoC dataset [23]. To make the results more reliable, we run each sequence five times for every compared method and use the mean value as the final result.
A. Comparison with VINS-Mono Based Methods
The accuracy of our method is compared with the original VINS-Mono and the good feature selection method (GF) [14] mentioned at the end of Section I. Based on the published source code of [14], which is implemented on top of ORB-SLAM, we combine it with VINS-Mono and set the number of selected map points in the sliding window to 50. This value is large enough for the estimation result to reach a good balance between accuracy and confidence. If the number of map points in the sliding window is less than 50, all of them are used in BA optimization to avoid the side effects caused by too few map points. The other parameters in these methods are the same. The translation Root Mean Square Error (RMSE) and the median errors of the keyframe trajectory for each sequence are shown in Table II and Table III, respectively. The first three columns give the results without loop detection, and the results with loop detection are given in the last three columns.
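For reference, the two statistics reported in the tables can be computed from aligned keyframe positions as in the sketch below (a similarity alignment such as Umeyama's is assumed to have been applied beforehand):

import numpy as np

def ape_translation_stats(est_xyz, gt_xyz):
    """RMSE and median of the absolute translation error of a trajectory.

    est_xyz, gt_xyz : (N, 3) arrays of aligned keyframe positions
    """
    err = np.linalg.norm(est_xyz - gt_xyz, axis=1)   # per-keyframe error
    return np.sqrt(np.mean(err ** 2)), np.median(err)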
It can be seen that our method performs best on at least seven of the ten sequences, whether or not loop detection is enabled. Especially for the RMSE results with loop detection, the accuracies on nine sequences are largely improved, e.g., from 0.12 m to 0.06 m on the sequence MH 01 easy and from 0.16 m to 0.088 m on the sequence V2 02 medium. The difference with respect to the best system on the other sequences is small. For the GF method, because only a subset of map points is employed in BA optimization, the robustness of the estimation result to IMU noise is reduced; that is the reason why the performance of VINS-Mono decreases after being combined with the GF method. The comparison with the GF algorithm demonstrates the advantage of our method, i.e., robustness to system noise. It guarantees that our system can still achieve a good performance in systems that contain an IMU or few map points. To show the details of the pose estimation error, we visualize the absolute pose error (APE) w.r.t. the translation part of the sequences MH 01 easy, MH 05 difficult and V2 02 medium in Fig. 4 and Fig. 5. According to Fig. 4, it is obvious that our method has the smallest translation error, especially at the beginning and the end of the trajectory. This result shows the superiority of our method in suppressing the accumulated error. Fig. 5 shows the box plot of the translation error. In this figure, the dark line represents the mean value of the error, and the height of the box represents the variance. From this figure, the mean value for our method is less than 0.1 m on the sequences MH 01 easy and V2 02 medium and about 0.1 m on the sequence MH 05 difficult, while the mean values of the compared methods are more than 0.1 m on MH 01 easy and V2 02 medium and more than 0.15 m on MH 05 difficult. Meanwhile, the variance of our method is also the smallest. This result further demonstrates the effectiveness and stability of our method in reducing the pose estimation error.
We also show the trajectory of the sequence MH 01 easy with loop detection in Fig. 6. In this figure, due to the influence of IMU noise, the GF method cannot return to the original point at the end of the trajectory.
B. Comparison with Other VIO Methods
The accuracy of our method is also compared with four other state-of-the-art visual inertial odometry (VIO) algorithms: R-VIO [24], ROVIO [25], MSCKF [27] and OKVIS [28]. Since loop detection is not available in some of them, only the results without loop detection are compared, as shown in Table IV. In this table, our method performs best on six of the ten sequences compared with R-VIO and on at least nine of the ten sequences compared with the other methods. The difference with respect to the best system on the other sequences is small. Our method also has the smallest average error (0.1671 m for ours, 0.3955 m for R-VIO, 0.232 m for ROVIO, 0.342 m for MSCKF and 0.225 m for OKVIS).
C. Runtime Results
The runtime of our method is compared with that of the original VINS-Mono. Our method is integrated into the solveOdometry function in the vins_estimator estimator.cpp file, and the other files are not modified. Therefore, it is enough to test the time difference of this function, which is more convenient and accurate than testing the whole system. We run each sequence five times for every method and use the mean value as the final result. The average time spent by the solveOdometry function to solve one keyframe is shown in Table V. In this table, the time differences of the sequences V1 02 medium, V1 03 difficult and V2 01 easy are smaller than those of the other sequences. Since bias calculation occupies most of the time of our algorithm, different map point numbers lead to different runtimes. The texture of the Vicon Room sequences is simpler than that of Machine Hall, and the camera rotation in these small-time-difference sequences is faster than in the other sequences; therefore, fewer map points are recovered, and the time added to the solveOdometry function is small.
According to Table V, the average runtime of our method is 9.81 ms longer than that of VINS-Mono. This overhead is acceptable for practical applications.
VI. CONCLUSION
This paper proposes a camera pose correction method that is compact and effective and preserves all system information. The relationship between the pose estimation error and the bias value of the map point is derived. Based on this relationship and the bias calculating method, the pose estimation results are corrected. We verify the effectiveness and efficiency of our method by comparing it with other state-of-the-art algorithms. Future work will integrate our method into visual SLAM systems that do not contain IMU information. | 2019-08-24T02:07:51.000Z | 2019-08-24T00:00:00.000 | {
"year": 2019,
"sha1": "6c69483a07601bb793603f7f606d47f193f9247b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c47edefd4eb6da61a0febfade52dd72e01f083ec",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
116974647 | pes2o/s2orc | v3-fos-license | Stuckelberg Axions and the Effective Action of Anomalous Abelian Models 2. A SU(3)_C x SU(2)_W x U(1)_Y x U(1)_B model and its signature at the LHC
We elaborate on an extension of the Standard Model with a gauge structure enlarged by a single anomalous U(1), where the presence of a Wess-Zumino term is motivated by the Green-Schwarz mechanism of string theory. The additional gauge interaction is anomalous and requires an axion for anomaly cancelation. The pseudoscalar implements the Stückelberg mechanism and undergoes mixing with the standard Higgs sector to render the additional U(1) massive. We consider a 2-Higgs doublet model. We show that the anomalous effective vertices involving neutral currents are potentially observable. We clarify their role in the case of simple processes such as $Z^*\to \gamma \gamma$, which are at variance with respect to the Standard Model. A brief discussion of the implications of these studies for the LHC is included.
Introduction
Among the possible extensions of the Standard Model (SM), those where the SU(3) C × SU(2) W × U(1) Y gauge group is enlarged by a number of extra U(1) symmetries are quite attractive for being modest enough departures from the SM that they are computationally tractable, but at the same time predictive enough to be interesting and perhaps even testable at the LHC. Of particular popularity among these have been models where at least one of the extra U(1)'s is "anomalous", that is, some of the fermion triangle loops with gauge boson external legs are non-vanishing. The existence of this possibility was noticed in the context of the (compactified to four dimensions) heterotic superstring, where the stability of the supersymmetric vacuum [1] can trigger in the four-dimensional low energy effective action a non-vanishing Fayet-Iliopoulos term proportional to the gravitational anomaly, i.e. proportional to the anomalous trace of the corresponding U(1). The mechanism was recognized to be the low energy manifestation of the Green-Schwarz (GS) anomaly cancellation mechanism of string theory. 1 Most of the consequent developments were concentrated around exploiting this idea in conjunction with supersymmetry and the Froggatt-Nielsen mechanism [2] in order to explain the mass hierarchies in the Yukawa sector of the SM [3], supersymmetry breaking [4], inflation [5] and axion physics [6], in all of which the presence of the anomalous U(1) is a crucial ingredient. In the context of theories with extra dimensions, the analysis of anomaly localization and of anomaly inflow has also been at the center of interesting developments [7], [8]. The recent explosion of string model building, in particular in the context of orientifold constructions and intersecting branes [10,11] but also in the context of the heterotic string [12], has enhanced the interest in anomalous U(1) models even more. There are a few universal characteristics that these vacua seem to possess. One is the presence of U(1) gauge symmetries that do not appear in the SM [13,14]. In realistic four-dimensional heterotic string vacua the SM gauge group comes as a subgroup of the ten-dimensional SO(32) or E 8 × E 8 symmetry [15], and in practice there is at least one anomalous U(1) factor that appears at low energies, tied to the SM sector in a particular way, which we will summarize next. For simplicity and reasons of tractability we concentrate on the simplest non-trivial case of a model with gauge group SU(3) C × SU(2) W × U(1) Y × U(1) B , where Y is the hypercharge and B the anomalous gauge boson, with the fermion spectrum that of the SM. The mass term for the anomalous U(1) B appears through a Stückelberg coupling [14,16,17] and the cancellation of its anomalies is due to four-dimensional axionic and Chern-Simons terms (in the open string context see the recent works [14,18,19,20]).
Footnote 1: Conventionally in this paper we will use the term "Green-Schwarz" (GS) to denote the mechanism of cancelation of the anomalies, to conform to the string context, though the term "Wess-Zumino" (WZ) would probably be more adequate and sufficient for our analysis. The corresponding counterterm will be denoted GS or WZ, with no distinction.
Despite all this theoretical insight from both the top-down and bottom-up approaches, the question that remains open is how to make concrete contact with experiment. However, as mentioned above, in models with anomalous U(1)'s one should quite generally expect the presence of a physical axion-like field χ, and in fact in any decay that involves a non-vanishing fermion triangle, such as Z*, Z′* → γγ or Z, Z′ → Zγ, one should be able to see traces of the anomalous structure [19,20,22,23]. In this paper we will mostly concentrate on the gauge boson decays which, even though hard to measure, contain clear differences with respect to the SM (as in the case of the Z* → γγ decay) and in addition with respect to anomaly-free U(1) extensions (as in the Z′* → γγ decay).
In [19] a theory which extends the SM with this minimal structure (for essentially an arbitrary number of extra U(1) factors) was called "Minimal Low Scale Orientifold Model" or MLSOM for short, because in orientifold constructions one typically finds multiple anomalous U(1)'s. Here, even though we discuss the case of a single anomalous U(1) which could also originate from heterotic vacua or some field theory extension of the SM, we will keep on using the same terminology keeping in mind that the results can apply to more general cases. We finally mention that other similar constructions with emphasis on other phenomenological signatures of such models have appeared before in [18,24,26,25]. A perturbative study of the renormalization of these types of models is in [27]. Other features of these models, in view of the recent activity connected to the claimed PVLAS result [28], have been discussed in [23].
Our work is organized as follows. In the first sections we specialize the analysis of [19] to the case of an extension of the SM containing one additional anomalous abelian U(1), with an abelian structure of the form U(1) Y × U(1) B , which we analyze in depth. We determine the structure of the entire lagrangean and fix the counterterms in the 1-loop anomalous effective action which are necessary to restore the gauge invariance of the model at quantum level. The analysis that we provide is the generalization of the one discussed in [23], which was devoted primarily to anomalous abelian models and to the perturbative organization of the corresponding effective action. After determining the axion lagrangian and discussing Higgs-axion mixing in this extension of the SM, we focus our attention on an analysis of the contributions to a simple process (Z → γγ). Our analysis, in this case, aims to provide an example of how the new contributions included in the effective action, in the form of one-loop counterterms that restore the unitarity of the effective action, modify the perturbative structure of the process. A detailed phenomenological analysis is beyond the scope of this work, since, to be practically useful for searches at the LHC, it requires a very accurate determination of the QCD and electroweak background around the Z/Z′ resonance. We hope to return to a complete analysis of trilinear gauge interactions in this class of models in the near future.
2 Effective models at low energy: the SU(3) C × SU(2) W × U(1) Y × U(1) B model
We start by briefly recalling the main features of the MLSOM, starting from the expression of its lagrangean, eq. (1), where we have summed over the SU(3) index a = 1, 2, ..., 8, over the SU(2) index j = 1, 2, 3 and over the fermion index i = 1, 2, 3 denoting a given generation. We have denoted with F G µν the field strength of the gluons and with F W µν the field strength of the weak gauge bosons W µ ; F Y µν and F B µν are the field strengths related to the abelian hypercharge and to the extra abelian gauge boson B, which has anomalous interactions with a typical generation of the Standard Model. The fermions in eq. (1) are either left-handed or right-handed Dirac spinors f L , f R and they fall in the usual SU(3) C and SU(2) W representations of the Standard Model. The additional anomalous U(1) B is accompanied by a shifting Stückelberg axion b. The c i , i = 1, 2, are the coefficients of the Chern-Simons trilinear interactions [19,20] and we have also introduced a tree-level mass term M 1 for the B gauge boson, which is the Stückelberg term. As usual, the hypercharge is anomaly-free and its embedding in the so-called "D-brane basis" has been discussed extensively in the previous literature [13,24,16]. Most of the features of the orientifold construction are preserved, but we do not work with the more general multiple-U(1) structure, since our goal is to analyze this model as closely as possible, making contact with direct phenomenological applications, although our results and methods can be promptly generalized to more complex situations.
Before moving to the more specific analysis presented in this work, some comments are in order concerning the possible range of validity of effective actions of this type and the relation between the value of the cutoff parameter Λ and the Stückelberg mass M 1 . This point has been addressed before in great detail in [21] and we omit any further elaboration, quoting only the result. Lagrangeans containing dimension-5 operators in the form of a Wess-Zumino term may have a range of validity constrained by M 1 ≥ g 1 g 2 /(64π³) a n Λ, where g 1 is the coupling at the chiral vertex where the anomaly a n is assigned and g 2 is the coupling constant of the other two vector-like currents in a typical AVV diagram. More quantitatively, this bound can reasonably be assumed to be of the order of 10^5 GeV by a power-counting analysis. Notice that the arguments of [21], though based on the picture of "partial decoupling" of the fermion spectrum, in which the pseudoscalar field is the phase of a heavier Higgs, remain fully valid in this context (see [21] for more details). The actual value of M 1 is left undetermined, although in the context of string model building there are suggestions to relate it to specific properties of the compactified extra dimensions (see for instance [13,16]).
3 The effective action of the MLSOM with a single anomalous U(1)
Having derived the essential components of the classical lagrangean of the model, we now extend our study to the quantum level, determining the anomalous effective action both for the abelian and the non-abelian sectors and fixing the D, F and C coefficients in front of the Green-Schwarz terms of eq. (1). Notice that the only anomalous contributions to S an in the Y-basis before symmetry breaking come from the triangle diagrams depicted in Fig. 1.
Since the hypercharge is anomaly-free, the only relevant non-abelian anomalies to be canceled are those involving one boson B with two SU(2) W bosons or two SU(3) C bosons, while the abelian anomalies are those containing three U(1) bosons, with the Y³ triangle excluded by the hypercharge assignment. These BSU(2)SU(2) and BSU(3)SU(3) anomalies must be canceled, respectively, by Green-Schwarz terms of the form b Tr[F^W ∧ F^W] and b Tr[F^G ∧ F^G], with coefficients F and D to be fixed by the conditions of gauge invariance. In the abelian sector we have to focus on the BBB, BYY and YBB triangles, which generate anomalous contributions that need to be canceled, respectively, by Green-Schwarz terms of the form b F^B ∧ F^B, b F^Y ∧ F^Y and b F^Y ∧ F^B. Denoting by S YM the anomalous effective action involving the classical non-abelian terms plus the non-abelian anomalous diagrams, and by S ab the analogous abelian one, the complete anomalous effective action is given by S = S 0 + S YM + S ab , with S 0 being the classical lagrangean. The corresponding 3-point functions are built from correlators of the chiral currents, and similarly for the others. The non-abelian W current, being chiral, forces the other currents in the triangle diagram to be of the same chirality, as shown in Fig. 7.
4 Three gauge boson amplitudes and gauge fixing
4.1 The non-abelian sector before symmetry breaking
Before we get into the discussion of the gauge invariance of the model, it is convenient to elaborate on the cancelations of the spurious s-channel poles coming from the gauge-fixing conditions. These are imposed to remove the ∂b − B mixing in the effective action. We will perform our analysis in the basis of the interaction eigenstates, since in this basis recovering gauge independence is more straightforward, at least before we enforce symmetry breaking via the Higgs mechanism. The procedure that we follow is to gauge-fix the B gauge boson in the symmetric phase by removing the B − ∂b mixing (see Fig. 2 (C)), so as to derive simple Ward identities involving only fermionic triangle diagrams and contact trilinear interactions with gauge bosons. For this purpose we add to the Stückelberg term a gauge-fixing term that removes the bilinear mixing, so that the propagator of the massive B gauge boson separates into a gauge-independent part P 0 and a gauge-dependent one P ξ . We will briefly illustrate here how the cancelation of the gauge dependence due to b and B exchanges in the s-channel works in this (minimally) gauge-fixed theory. In the exact phase there is no mixing among the Y, B, W gauge bosons, and the gauge dependence of the B propagator is canceled by the Stückelberg axion. In the broken phase things get more involved, but essentially the pattern continues to hold. In that case the Stückelberg scalar has to be rotated into its physical component χ and the two Goldstones G Z and G Z′ , which are linear combinations of G 0 1 and G 0 2 . The cancelation of the spurious s-channel poles takes place, in this case, via the combined exchange of the Z propagator and of the corresponding Goldstone mode G Z . Naturally the GS interaction will be essential for this to happen.
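For reference, a Stückelberg term plus a gauge fixing of this type has the standard schematic form (a sketch; our normalization here is only indicative):
$$\mathcal{L}_{St} = \frac{1}{2}\left(\partial_\mu b + M_1 B_\mu\right)^2, \qquad \mathcal{L}_{gf} = -\frac{1}{2\,\xi_B}\left(\partial_\mu B^\mu - \xi_B M_1\, b\right)^2,$$
which removes the $B^\mu \partial_\mu b$ mixing and assigns the axion a gauge-dependent mass $\sqrt{\xi_B}\, M_1$.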
For the moment we simply work in the exact symmetry phase and in the basis of the interaction eigenstates. We gauge-fix the action to remove the B − ∂b mixing, but for the rest we set the vevs of the scalars to zero. For definiteness, let us consider the process WW → WW mediated by a B boson, as shown in Fig. 3. We denote by a bold-faced V the BWW vertex, constructed so as to be gauge invariant on the W lines. This vertex, as we are going to discuss next, requires a generalized CS counterterm to have such a property on the W lines. Gauge invariance on the B line, instead, which is clearly necessary to remove the gauge dependence of the gauge-fixed action, is obtained at the diagrammatic level by the axion exchange (Fig. 3). The expressions of the two diagrams, using the anomaly equations and the correct value of the Green-Schwarz coefficient F given in eq. (62) (which we will determine in the next section), combine so that the gauge-dependent parts cancel; this can be easily shown after substituting the value of the GS coefficient given in relation (77).
In Fig. 5 we have depicted the anomalous triangle diagram BYY (A), which has to be canceled by the Green-Schwarz term (C YY /M) b F^Y ∧ F^Y that generates diagram (B). In this case the two diagrams give the triangle amplitude and the counterterm contribution, respectively.
Figure 5: Unitarity check in the abelian sector for the MLSOM.
The condition of unitarity of the amplitude requires the validity of an identity between these two contributions, which can be easily checked by substituting the value of the GS coefficient C YY given in relation (78). We will derive the expressions of these coefficients and the factors of all the other counterterms in the next section. The gauge dependences appearing in the diagrams shown in Fig. 6 are analyzed in a similar way and we omit repeating the previous steps, but it should be obvious by now how the perturbative expansion is organized in terms of tree-level vertices and 1-loop counterterms, and how gauge invariance is checked at higher orders when the propagators of the B gauge boson and of the axion b are both present. Notice that in the exact phase the axion b is not coupled to the fermions, and the pattern of cancelations ensuring gauge independence is, in this specific case, simplified.
At this point we pause to make some comments. The mixed anomalies analyzed above involve a non-anomalous abelian gauge boson and the remaining gauge interactions (abelian/non-abelian). To be specific, in our model with a single non-anomalous U(1), which is the hypercharge U(1) Y gauge group, these mixed anomalies are those involving triangle diagrams with the Y and B generators, or the B accompanied by the non-abelian sector. Consider, for instance, the BYY triangle, which appears in the YB → YB amplitude. There are two options that we can follow. Either we require that the corresponding traces of the generators over each generation vanish identically, which can be viewed as a specific condition on the charges of the model, or, if this is not the case, we require that suitable one-loop counterterms balance the anomalous gauge variation. We are allowed, in other words, to fix the two divergent invariant amplitudes of the triangle diagram so that the corresponding Ward identities for the BYY vertex and similar anomalous vertices are satisfied. This is a condition on the parameterization of the Feynman vertex rather than on the charges and is, in principle, allowed. It is not necessary to have a specific determination of the charges for this to occur, as long as the counterterms are fixed accordingly. For instance, in the abelian sector the diagrams in question are the BYY and YBB triangles. In the MLSOM these traces are, in general, non-vanishing and therefore we need to introduce defining Ward identities to render the effective action anomaly-free.
5 Ward Identities, Green-Schwarz and Chern-Simons counterterms in the Stückelberg phase
Having discussed the structure of the theory in the basis of the interaction eigenstates, we now come to identify the coefficients needed to enforce the cancelation of the anomalies in the 1-loop effective action. In the basis of the physical gauge bosons we will be dropping, with this choice, a gauge-dependent term (the B − ∂b mixing) that vanishes for physical polarizations. At the same time, for exchanges of virtual gauge bosons, the gauge dependence of the corresponding propagators is canceled by the associated Goldstone exchanges.
Starting from the non-abelian contributions, the BWW amplitude, we separate the charge/coupling-constant dependence of a given diagram from the rest of its parametric structure; in the SU(2) case this defines the generation trace D^(L), with T^{λμν} the 3-point function in configuration space, with all the couplings and the charges factored out, symmetrized in μν. A similar factorization holds for the coupling of B to the gluons, while the abelian triangle diagrams carry the corresponding abelian traces (see also the discussion in the Appendix). The T vertex is given by the usual combination of vector and axial-vector components, and we denote by Δ(k 1 , k 2 ) its expression in momentum space.
Figure 7: All the anomalous electroweak contributions to a triangle diagram in the non-abelian sector in the massless fermion case.
We denote similarly with Δ^{λμν}_{AVV}, Δ^{λμν}_{VAV} and Δ^{λμν}_{VVA} the momentum-space expressions of the corresponding x-space vertices T^{λμν}_{AVV}, T^{λμν}_{VAV} and T^{λμν}_{VVA}, respectively. As illustrated in Fig. 8, the complete structure of T is built from the relation between the Δ_{AAA} (bold-faced) vertex and the usual Δ vertex, which is of the form AVV. Notice that the latter are the usual vertices with a conserved vector current (CVC) on two lines and the anomaly on a single axial vertex.
The AAA vertex is constructed by symmetrizing the distribution of the anomaly over each of the three chiral currents, which is the content of (30). The same vertex can be obtained from the basic AVV vertex by a suitable shift, with β = 1/6, repeating the same procedure on the other indices and external momenta with a cyclic permutation. The corresponding anomaly equations are those typical of a symmetric distribution of the anomaly.
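For reference, with a completely symmetric distribution each Ward identity of the Δ_{AAA} vertex carries one third of the total anomaly $a_n$ (a standard result, quoted here up to conventions for momenta and signs):
$$k_\lambda\, \Delta^{\lambda\mu\nu}_{AAA} = \frac{a_n}{3}\,\varepsilon^{\mu\nu\alpha\beta} k_{1\alpha} k_{2\beta}, \qquad k_{1\mu}\, \Delta^{\lambda\mu\nu}_{AAA} = \frac{a_n}{3}\,\varepsilon^{\lambda\nu\alpha\beta} k_{\alpha} k_{2\beta}, \qquad k_{2\nu}\, \Delta^{\lambda\mu\nu}_{AAA} = \frac{a_n}{3}\,\varepsilon^{\lambda\mu\alpha\beta} k_{\alpha} k_{1\beta},$$
with $k = k_1 + k_2$.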
These identities are obtained from the general shift relation for the loop momentum. Vertices with conserved axial currents (CAC) can be related to the symmetric AAA vertex in a similar way. At this point we are ready to introduce the complete vertices for this model, which are given by the amplitude (29) with the addition of the corresponding Chern-Simons counterterms, where required. These will be determined later in this section by imposing the conservation of the SU(2), SU(3) and Y gauge currents. Following this definition for all the anomalous vertices, the amplitudes are then the anomalous vertices of the effective action, corrected when necessary by suitable CS interactions in order to conserve all the gauge currents at 1 loop.
Before we proceed with our analysis, whose goal is to determine explicitly the counterterms in each of these vertices, we pause for some practical considerations. It is clear that the scheme we have followed in order to determine the structure of the vertices of the effective action has been to assign the anomaly only to the chiral vertices and to impose conservation of the vector current. There are regularization schemes in the literature that enforce this principle, the most famous one being dimensional regularization with the 't Hooft-Veltman prescription for γ 5 (see also the discussion in part 1). In this scheme the anomaly is equally distributed for vertices of the form AAA and is assigned only to the axial-vector vertex in triangles of the form AVV and similar. Diagrams of the form AAV are zero by Furry's theorem, being equivalent to VVV.
We could also have proceeded in a different way, for example by defining a given V, say V BYY , to have an anomaly only on the B vertex and not on the Y vertices, even if Y has both vector and axial-vector components at tree level and is, indeed, a chiral current. This implies that at 1 loop the chiral projector has to be moved from the Y to the B vertex "by hand", no matter whether it appears on the Y current or on the B current, rendering the Y current effectively vector-like at 1 loop. This is also what a CS term does. In both cases we are anyhow bound to define the 1-loop vertices separately as new entities, unrelated to the tree-level currents. However, having explicit Chern-Simons counterterms renders the treatment compatible with dimensional regularization in the 't Hooft-Veltman prescription. It is clear, however, that one way or the other the quantum action is not fixed at the classical level, since the counterterms are related to quantum effects, and the corresponding Ward identities, which force the cancelation of the anomaly to take place in a completely new way with respect to the SM case, are indeed defining conditions on the theory.
Having clarified this subtle point, we return to the determination of the gauge invariance conditions for our anomalous vertices.
Under B-gauge transformations the effective action develops the variations given by the singlet anomalies which, with the chosen normalization, involve the entire non-abelian field strengths F W i,µν and F G a,µν , as required by the covariantization of the anomalous contributions. The covariantization of the right-hand side (rhs) of the anomaly equations takes place via higher-order corrections, involving correlators with more external gauge lines. It is well known, though, that the cancelation of the anomalies in these higher-order non-abelian diagrams (in d = 4) is only related to the triangle diagram (see [23]).
Under the non-abelian gauge transformations we have analogous variations, where the "hat" field strengths F̂ W and F̂ G refer to the abelian parts of the non-abelian field strengths of W and G. Introducing a compact notation for these variations, we then add the Chern-Simons counterterms for the non-abelian gauge variations, built from the standard non-abelian CS forms; their variations under non-abelian gauge transformations allow us to choose the coefficients in front of the CS counterterms so as to cancel the non-abelian anomalous contributions. The variations of the same CS counterterms under B-gauge transformations are then fixed as well, with the coefficients c i given in (57). The variations under the B-gauge transformations of the SU(2) and SU(3) Green-Schwarz counterterms, finally, determine F and D through the requirement that the anomalous contributions coming from the B-gauge transformations cancel. There are some comments to be made concerning the generalized CS terms responsible for the cancelation of the mixed anomalies. These terms, in momentum space, generate standard trilinear CS interactions, whose momentum structure is exactly the same as that of the abelian ones (see the appendix of part 1 for more details), plus additional quadrilinear (contact) gauge interactions. These will be neglected in our analysis, since we will be focusing in the next sections on the characterization of neutral trilinear interactions. In processes such as Z → γγγ they re-distribute the anomaly appropriately in higher point functions.
Figure 8: All the anomalous contributions to a triangle diagram in the abelian sector for generic vector-axial-vector trilinear interactions in the massless fermion case.
For the abelian part S ab of the effective action we first compute the gauge variations under B and then those under Y. Also in this case we introduce the corresponding abelian Chern-Simons counterterms and fix their coefficients so as to obtain the cancelation of the Y-anomaly. Similarly, the gauge variation under B of the corresponding Green-Schwarz terms, combined with the B-variations of the CS counterterms already fixed, yields the conditions that cancel the anomalous contributions from the abelian part of the effective action. Regarding the Y-variations, proportional to Tr[q B q Y ²] and Tr[q B ² q Y ], in general these traces are not identically vanishing and we introduce the CS and GS counterterms to cancel them. Having determined the factors in front of all the counterterms, we can summarize the structure of the one-loop anomalous effective action plus counterterms, with S 0 the classical action. At this point we are ready to define the momentum-space expressions of the vertices introduced in eq. (36), denoted by V, where for the generalized CS terms we consider only the trilinear CS interactions, whose momentum structure is the same as in the abelian case, as already discussed in section 5. The overall factor of 1/2 in the non-abelian vertices comes from the trace over the generators. These vertices satisfy standard Ward identities on the external Standard Model lines, with an anomalous Ward identity only on the B line; the B-currents contain the total anomaly a n = −i/(2π²). The same anomaly equations given above for V^{λμν}_{BYY} hold for the V^{λμν}_{BGG} and V^{λμν}_{BWW} vertices, but with an overall factor of 1/2. The anomaly equations for the YBB vertex require the chiral current Y to be conserved, so as to render the 1-loop effective action gauge invariant, while in the BBB case the anomaly is distributed symmetrically over the three vertices. A study of the issue of gauge dependence in these types of models can be found in [23]. Clearly, in our case this study is more involved, but the cancelations of the gauge-dependent terms in specific classes of diagrams can be performed both in the exact phase and in the broken phase, similarly to the discussion presented in our companion work, having re-expressed the fields in the basis of the mass eigenstates. The approach that we follow is then clear: we enforce the cancelation of the anomalies in the exact phase, having performed a minimal gauge fixing to remove the B mixing with the axion b; then we rotate the fields and re-parameterize the lagrangean around the non-trivial vacuum of the potential. We will see in the next sections that with this simple procedure we can easily discuss simple basic processes involving neutral and charged currents, exploiting the invariance of the effective action under re-parameterizations of the fields.
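For concreteness, the abelian Chern-Simons counterterms introduced in this section have the generic trilinear structure (a schematic form; the normalizations are those fixed by the conditions quoted in the text):
$$\mathcal{L}_{CS} \sim c_1\, \varepsilon^{\mu\nu\rho\sigma} B_\mu Y_\nu F^{Y}_{\rho\sigma} + c_2\, \varepsilon^{\mu\nu\rho\sigma} Y_\mu B_\nu F^{B}_{\rho\sigma},$$
whose effect is to shift the anomaly between the $B$ and $Y$ vertices of a given triangle.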
6 The neutral currents sector in the MLSOM
In this section we move toward the phenomenological analysis of a typical process which exhibits the new trilinear gauge interactions at 1-loop level. As we have mentioned in the introduction, our goal here is to characterize this analysis at a more formal level, leaving a numerical study to future work. It should be clear, however, from the discussion presented in this and in the next sections, how to proceed in a more general case. The theory is well-defined and consistent, so that we can foresee accurate studies of its predictions for applications at the LHC in the future.
We proceed with our illustration starting from the definition of the neutral current in the model, which we express in the two bases, the basis of the interaction eigenstates and that of the mass eigenstates. Clearly, in the interaction basis the bosonic operator in the covariant derivative involves the charge Q = T 3 + Y. Rotating to the photon basis identifies the electromagnetic current, written in the usual way, together with the definition of the electric charge; similarly, for the neutral Z current we obtain the corresponding vector and axial-vector couplings. We can easily work out the structure of the covariant derivative interaction applied to a left-handed or a right-handed fermion, and for this purpose it is convenient to introduce some notation for the Z and, similarly, for the Z′ neutral current. We can then identify the generators in the (Z, Z′, A γ ) basis, which will be denoted as Q̂ p = (Q, Q̂ Z , Q̂ Z′ ). To express a given correlator, say Z A γ A γ , in the (W 3 , A Y , B) basis we proceed as follows. We denote with Q̂ p = (Q, Q̂ Z , Q̂ Z′ ) the generators in the photon basis (A γ , Z, Z′) and with ĝ p = (e, g Z , g Z′ ) the corresponding couplings. Similarly, Q p = (T 3 , Y, Y B ) are the generators in the interaction basis (W 3 , A Y , B) and g p = (g 2 , g Y , g B ) the corresponding couplings, so that a correlator can be converted from one basis to the other, as shown below.
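Schematically, and in a notation consistent with the rotation matrix O^A used later in the text, the change of basis reads
$$\begin{pmatrix} A_\gamma \\ Z \\ Z' \end{pmatrix} = O^A \begin{pmatrix} W_3 \\ A_Y \\ B \end{pmatrix},$$
so that a vertex in the physical basis is obtained by contracting each leg of the interaction-basis correlators with the corresponding entries of $O^A$.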
The Zγγ vertex in the Standard Model
Before coming to the computation of this vertex in the MLSOM, we first review its structure in the SM.
We show in Fig. 9 the Zγγ vertex in the SM, where we have separated the QED contributions from the remaining corrections R W . This vertex vanishes at all orders when all three lines are on-shell, due to the Landau-Yang theorem. A direct proof of this property for the fermionic 1-loop corrections has been included in an appendix, where we show the on-shell vanishing of the vertex.
The QED contribution contains the fermionic triangle diagrams (direct plus exchanged), and the contributions in R W include all the remaining ones at 1-loop level. In this case the separation between the pure QED contributions (due to the 2 fermionic diagrams) and the remaining corrections, which are separately gauge invariant on the photon lines, is rather straightforward, though this is not the case, in general, for more complicated electroweak amplitudes. Specifically, as shown in Fig. 10, R W contains ghosts, Goldstones and all other exchanges. An exhaustive computation of all these contributions is not needed for this discussion and will be left for future work. We have omitted diagrams of the type shown in Figs. 11, 12. These are removed by working in the R ξ gauge for the Z boson. Notice, however, that even without a gauge fixing these decouple from the anomaly diagrams in the massless fermion limit, since the Goldstone does not couple to massless fermions. In Fig. 13 we show how the anomaly is re-distributed in an AAA diagram by a CS interaction, generating an AVV vertex.
To appreciate the role played by the anomaly in this vertex we perform a direct computation of the two anomaly diagrams, including the fermionic mass terms. The result can be cast in a compact form upon introducing the g f Z,A and g f Z,V couplings of the Z. This form of the amplitude is obtained if we use the standard Rosenberg definition of the anomalous diagrams, and it agrees with [29]. In this case the Ward identities on the photon lines are defining conditions for the vertex.
Figure 11: Z − G 0 Z mixing in the broken phase in the SM.
Figure 12: Same as in Fig. 11 but for the MLSOM.
Naturally, with the standard fermion multiplet assignment the anomaly vanishes, since the relevant trace over each generation is zero. Because of the anomaly cancelation, the fermionic vertex is zero also off-shell if the masses of all the fermions in each generation are degenerate, in particular if they are massless. Notice that this is not a consequence of the Landau-Yang theorem.
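For completeness, the Rosenberg parameterization invoked here writes the triangle in terms of six invariant amplitudes (standard form, with photon momenta $k_1, k_2$):
$$\Delta^{\lambda\mu\nu}(k_1,k_2) = A_1\, \varepsilon^{\lambda\mu\nu\sigma} k_{1\sigma} + A_2\, \varepsilon^{\lambda\mu\nu\sigma} k_{2\sigma} + A_3\, k_1^{\mu}\, \varepsilon^{\lambda\nu\alpha\beta} k_{1\alpha} k_{2\beta} + A_4\, k_2^{\mu}\, \varepsilon^{\lambda\nu\alpha\beta} k_{1\alpha} k_{2\beta} + A_5\, k_1^{\nu}\, \varepsilon^{\lambda\mu\alpha\beta} k_{1\alpha} k_{2\beta} + A_6\, k_2^{\nu}\, \varepsilon^{\lambda\mu\alpha\beta} k_{1\alpha} k_{2\beta},$$
where the divergent amplitudes $A_1$ and $A_2$ are fixed by the Ward identities in terms of the finite amplitudes $A_3, \dots, A_6$.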
Let us now move to the Ward identity on the Z line. A direct computation gives eq. (116); the presence of a mass-dependent term on its right-hand side constitutes a breakdown of axial-current conservation for massive fermions, as expected.
Stückelberg phase
The presence of anomalous generators in a given vertex renders some trilinear interactions non-vanishing also for massless fermions. In fact, as we have shown in the previous section, in the SM the anomalous triangle diagrams vanish if we neglect the masses of all the fermions, and this occurs both on-shell and off-shell. The only left-over corrections are related to the fermion mass, and these will also vanish (off-shell) if all the fermions of a given generation are mass degenerate.
Figure 13: Re-distribution of the anomaly via the CS counterterm.
The on-shell vanishing of the same vertices is a consequence of the structure of the amplitude, as we show in the appendix. The extraction of the contribution of the anomalous generators to the trilinear vertices can be obtained starting from the 1-particle irreducible effective action, written in the basis of the interaction eigenstates, and performing the rotations of the trilinear interactions that project onto the Zγγ vertex.
In order to appreciate the differences between the SM result and the analogous one in the anomalous extensions that we are considering, we start by observing that only in the Stückelberg phase (M 1 ≠ 0 and v u = v d = 0) do the anomaly-free traces vanish, because of the charge assignment. A similar result is valid also in the HS phase if the Yukawa couplings are neglected. Coming to the extraction of the Zγγ vertex, we rotate the anomalous diagrams of the effective action into the mass eigenstates, being careful to separate the massless from the massive fermion contributions.
Hence, we split the YYY vertex into its chiral contributions and, performing the rotation of the fields, we obtain the contributions projecting onto Zγγ, where the dots indicate all the other projections, of the type ZZγ, Z′γγ, etc. Here LLL, RLR, etc., indicate the (clockwise) insertion of L/R chiral projectors on the λµν vertices of the anomaly diagrams.
For the YWW vertex the structure is simpler, because the generator associated with W 3 is left-chiral. The BYY vertex works in the same way as YYY and, finally, the BWW vertex is similar to YWW. In these expressions we have defined the products of rotation-matrix elements that project the anomalous effective action from the interaction eigenstate basis onto the Z, γ gauge bosons.
We have expressed the generators in their chiral basis, and their mixing is due to mass insertions on each fermion line in the loop. The ellipsis refers to additional contributions which do not project onto the vertex that we are interested in, but which are present in the analysis of the remaining neutral vertices, ZZγ, Z′γγ, etc. The notation O^AT indicates the transpose of the rotation matrix O^A from the interaction to the mass eigenstates. To obtain the final expression of the amplitude in the interaction eigenstate basis, one can easily observe that in the helicity-conserving amplitudes LLL and RRR the mass dependence in the fermion loops is entirely contained in the denominators of the propagators, not in the Dirac traces. The only diagrams that contain a mass dependence in the numerators are those involving chirality flips (LLR, RRL), which contribute terms proportional to m f ². These terms contribute only to the invariant amplitudes A 1 and A 2 of the Rosenberg representation [23] and, although finite, they disappear once we impose a Ward identity on the two photon lines, as requested by CVC for the two photons. A similar result is valid for the SM, as one can easily figure out from eq. (112). Therefore, the amplitudes can be expressed just in terms of the LLL and RRR correlators and, since the mass dependence sits in the denominators of the propagators, a simple relation holds for any fermion mass m f . Defining ∆ LLL as the massless limit ∆^{λμν}_{LLL}(m f = 0), we can express the only independent chiral graph as the sum of the massless contribution and an m f -dependent correction. A second contribution to the effective action comes from the 1-loop counterterms containing generalized CS terms. There are two ways to express these counterterms: either as separate trilinear interactions or as modifications of the two invariant amplitudes A 1 , A 2 of the Rosenberg parameterization, which depend linearly on the momenta of the vertex [23]. We use the second option, which allows us to absorb the CS term completely, giving conserved Y/W 3 currents in the interaction eigenstate basis. In this case we move from a symmetric distribution of the anomaly in the AAA diagram to an AVV diagram. These currents interpolate with the vector-like vertices (V) of the AVV graph.
Notice that once the anomaly is moved from any vertex involving a Y/W 3 current to a vertex with a B current, it is then canceled by the GS interaction. The extension of this analysis to the complete m f -dependent case is quite straightforward. In fact, after some rearrangements of the Zγγ amplitude, we are left with contributions in the physical basis in the broken phase, expressed in terms of anomalous chiral asymmetries. The conditions of gauge invariance fix the coefficients in front of the CS terms, which have been absorbed and do not appear explicitly, while the SM chiral asymmetries are defined analogously. As we have already pointed out, the amplitude for the Zγγ process is expressed in terms of 6 invariant amplitudes that can be easily computed.
as one can easily check by a direct computation, obtaining the corresponding parametric integrals. The computation of these integrals can be done analytically, and the various regions 0 < s < 4m f ², m f ≫ √s/2, and m f → 0 can be studied in detail; for both photons on-shell and s > 4m f ², for instance, a closed-form result is obtained. Notice that when the two photons are on-shell and light fermions run in the loop, the evaluation of the integral requires particular care because of infrared effects which render the parametric integrals ill-defined. The situation is similar to the case of the coupling of the axial anomaly to on-shell gluons in spin physics [30], where the correct isolation of the massless quark contributions is carried out by moving off-shell on the external lines and then performing the m f → 0 limit.
qq̄ → γγ with an intermediate Z
In this section we are going to describe the role played by the new anomaly cancelation mechanism in simple processes which can eventually be studied with accuracy at a hadron collider such as the LHC. A numerical analysis of processes involving neutral currents can be performed along the lines of [9] and we hope to return to this point in the near future.
Here we intend to discuss briefly some of the phenomenological implications which might be of interest. Since the anomaly is canceled by a combination of Chern-Simons and Green-Schwarz contributions, the study of a specific process, such as Z → γγ, which differs from the SM prediction, requires, in general, a combined analysis both of the gauge sector and of the scalar sector.
We start from the case of a quark-antiquark annihilation mediated by a Z that later undergoes a decay into two photons. At leading order, this process is described at parton level by the annihilation of a valence quark q and a sea antiquark q̄ from the two incoming hadrons, both collinear and massless. In Fig. (14) we have depicted all the diagrams through which the process can take place at lowest order. Radiative corrections from the initial state are accurately known up to next-to-next-to-leading order and are universal, being the same as for the Drell-Yan cross section. In this respect, precise QCD predictions for the rates are available, for instance around the Z resonance [9].
In the SM, gauge invariance of the process requires both a Z gauge boson exchange and the exchange of the corresponding Goldstone boson G_Z, which involves diagrams (A) and (B). In the MLSOM, a direct Green-Schwarz coupling to the photon (which is gauge dependent) is accompanied by a gauge-independent axion exchange. If the incoming quark-antiquark pair is massless, then the Goldstone has no coupling to the incoming fermion pair, and therefore (B) is absent, while gauge invariance is trivially satisfied because of the masslessness of the fermion pair in the initial state. In this case only diagram (A) is relevant. Diagram (B) may also be set to vanish, for instance in suitable gauges, such as the unitary gauge. Notice also that the triangle diagrams depend on m_f, the mass of the fermion in the loop, and show two contributions: a first contribution which is proportional to the anomaly (mass independent), and a correction term which depends on m_f. As we have shown above, the first contribution, which involves an off-shell vertex, is absent in the SM, while it is non-vanishing in the MLSOM. In both cases, on the other hand, we have m_f-dependent contributions. It is then clear that in the SM the largest contribution to the process comes from the top quark circulating in the triangle diagram, the amplitude being essentially proportional to the heavy top mass alone. On the Z resonance and for on-shell photons, the cross section vanishes in both cases, as we have explained, in agreement with the Landau-Yang theorem. We have checked these properties explicitly, but they hold independently of the perturbative order at which they are analyzed, being based on the Bose symmetry of the two photons. The cross section, therefore, has a dip at Q = M_Z, where it vanishes, Q² being the virtuality of the intermediate s-channel exchange.
An alternative scenario is to search for neutral exchanges initiated by gluon-gluon fusion. In this case we replace the annihilating pair with a triangle loop (the process is similar to Higgs production via gluon fusion), as shown in Fig. 15. As in the decay mechanism discussed above, the production mechanisms in the SM and in the MLSOM are again different. In fact, in the MLSOM there is a contribution appearing already at the massless-fermion level, which is absent in the SM. The production mechanism by gluon fusion has some special features as well. In ggZ production and Zγγ decay, the relevant diagrams are (A) and (B), since we need the exchange of a G_Z to obtain gauge invariance. As we probe smaller values of the Bjorken variable x, the gluon density rises, and the process becomes sizable. On the other hand, in a pp collider, although the quark annihilation channel is suppressed, since the antiquark density is smaller than in a pp̄ collision, this channel still remains rather significant. We have also shown in this figure one of the scalar channels, due to the exchange of an axi-Higgs.
Other channels, such as those shown in Fig. 16, can also be studied; these involve a lepton pair in the final state, and their radiative corrections also show the appearance of a triangle vertex. This is the classical Drell-Yan process, which we briefly describe below. In this case, both the total cross section and the rapidity distributions of the lepton pair, and/or an analysis of the charge asymmetry in s-channel exchanges of W's, would be of major interest in order to disentangle the anomaly inflow. At the moment, errors on the parton distributions and scale dependences induce uncertainties which, just for the QCD background, are around 4% [9], as shown in a high-precision study. It is expected, however, that the statistical accuracy on the Z resonance at the LHC is going to be a factor of 100 better. In fact, this is a case in which the experiment can do better than the theory.
7.3 Isolation of the massless limit: the Z* → γ*γ* amplitude
The isolation of the massless from the massive contributions can be analyzed in the case of resolved photons in the final state. As we have already mentioned, in the prompt-photon case the amplitude, on the Z resonance, vanishes because of Bose symmetry and angular momentum conservation. We can, however, be on the Z resonance and produce one or two off-shell photons that undergo fragmentation. Needless to say, these contributions are small. However, the separation of the massless from the massive case is well defined. One can increase the rates by asking for just a single resolved photon and one prompt photon. Rates for this process in pp collisions have been determined in [31]. We start from the case of off-shell external photons of virtualities s_1 and s_2 and an off-shell Z (Z*). Following [32], we introduce the total vertex V^{λµν}(k_1, k_2, m_f), which contains the full massive m_f dependence (corresponding to the triangle amplitude ∆^{λµν}). Its massless counterpart, V^{λµν}(0) ≡ V(k_1, k_2, m_f = 0), is obtained by sending the fermion mass to zero. The Rosenberg vertex and the V vertex are trivially related by a Schouten transformation, moving the λ index from the Levi-Civita tensor to the momenta of the photons, with k − k_1 − k_2 = 0 and s_i = k_i² (i = 1, 2), and being the usual Mandelstam function, and where the analytic expressions for ∆_i and C_0 are given by and For m_f = 0 the two expressions above become ∆_i = ln(t_i/t_3), (i = 1, 2). These can be inserted into (137) and (138), together with m_f = 0, to generate the corresponding V^{λµν}(0) vertex needed for the computation of the massless contributions to the amplitude.
With these notations we clearly have
Extension to Z → γ*γ
To isolate the contribution to the decay on the resonance, we keep one of the two photons off-shell (resolved). We choose s_1 = 0 and s_2 virtual. We denote by Γ^{λµν} the corresponding vertex in this special kinematical configuration. The Z boson is on-shell. In this case, at 1-loop the result simplifies considerably [33], Γ^{λµν} = F_2(s_2) ǫ[λ, µ, ν, ...], with F_2 expressed as a Feynman parametric integral, and for vanishing m_f (r_f = M_Z²/(4m_f²) → ∞) the corresponding massless contribution is expressed as F(z, ∞), with, in general, where The m_f = 0 contribution is obtained in the r_f → +∞ limit, for z > 0. In these notations, the infinite-fermion-mass limit (m_f → ∞, or r_f → 0) gives F(z, 0) = 0, and we find which can be used for a numerical evaluation. The decay rate for the process is given by We have denoted by Q* the virtuality of the photon. A complete evaluation of this expression, to be of practical interest, would need the fragmentation functions of the photon (see [31] for an example). A detailed analysis of these rates will be presented elsewhere. However, we will briefly summarize the main points involved in the analysis of this and similar processes at the LHC, where the decay rate is folded with the (NLO/NNLO) contribution from the initial state using QCD factorization.
Probably one of the best ways to search for neutral current interactions in hadronic collisions at the LHC is lepton pair production via the Drell-Yan mechanism. QCD corrections are known for this process up to O(α_s²) (next-to-next-to-leading order, NNLO), and can be folded with the NNLO evolution of the parton distributions to provide accurate determinations of the hadronic pp cross sections at the 4% level of accuracy [9]. The same computation for Drell-Yan can be used to analyze the pp → Z → γγ* process, since the W_V (hadronic) part of the process is universal, with W_V defined below. An appropriate (and very useful) way to analyze this process would be to perform this study defining the invariant mass distribution where τ = Q²/S, which is separated into a pointlike contribution σ_{Z→γγ*} and a hadronic structure function W_Z. This is defined via the integral over parton distributions and coefficient functions ∆_{ij}, where µ_f is the factorization scale. The choice µ_f = Q, with Q the invariant mass of the γγ* pair, removes the log(Q/M) from the computation of the coefficient functions, which is, anyhow, arbitrary. The non-singlet coefficient functions are given by with C_F = (N_c² − 1)/(2N_c), and the "+" distribution is defined by while at NLO a q-g sector also appears Other sectors do not appear at this order. Explicitly one gets where the sum is over the quark flavours. The identification of the generalized mechanism of anomaly cancelation requires that this description be extended to NNLO, which is now a realistic possibility. It involves a slight modification of the NNLO hard scatterings known at this time, and an explicit computation is in progress.
Figure 14: Two-photon processes initiated by a qq̄ annihilation with a Z exchange.
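The "+" distribution entering the non-singlet coefficient function is defined by its action under the integral sign, ∫₀¹ dz [g(z)]₊ f(z) = ∫₀¹ dz g(z)[f(z) − f(1)]. The short Python sketch below checks this prescription numerically for the NLO kernel g(z) = ln(1−z)/(1−z) against a toy test function; the test function and numerical settings are illustrative, not taken from the computation described here.

```python
import math
from scipy.integrate import quad

def plus_integral(g, f):
    """int_0^1 dz [g(z)]_+ f(z)  =  int_0^1 dz g(z) * (f(z) - f(1))."""
    val, _ = quad(lambda z: g(z) * (f(z) - f(1.0)), 0.0, 1.0,
                  points=[1.0 - 1e-9], limit=200)
    return val

g = lambda z: math.log(1.0 - z) / (1.0 - z)   # NLO log-plus kernel
f = lambda z: z**2                            # toy smooth test function

# Analytic value for this pair: int_0^1 ln(u) (u - 2) du = 7/4 = 1.75
print(plus_integral(g, f))
```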
Conclusions
We have presented a study of a model inspired by the structure encountered in a typical string theory derivation of the Standard Model. In particular, we have focused our investigation on the characterization of the effective action and worked out its expression in the context of an extension containing one additional anomalous U(1). Our analysis specializes and, at the same time, extends a previous study of models belonging to this class. The results that we have presented are generic for models where the Stückelberg and the Higgs mechanisms are combined and where an effective abelian anomalous interaction is present. Our analysis has then turned toward the study of simple processes mediated by neutral current exchanges, and we have focused, specifically, on one of them, involving the Zγγ vertex. In particular, our findings clearly show that new massless contributions are present at the 1-loop level when anomalous generators are involved in the fermionic triangle diagrams, and that the interplay between massless and massive fermion effects is modified with respect to the SM case. The typical processes considered in our analysis deserve special attention, given the forthcoming experiments at the LHC, since they may provide a way to determine whether anomaly effects are present in some specific reactions. Other similar processes, involving the entire neutral sector, should be considered, though the two-photon signal is probably the most interesting one phenomenologically.
Given the high statistical precision (0.05% and below on the Z peak, for 10 fb−1 of integrated luminosity) which can easily be obtained at the LHC, there are realistic chances to prove or disprove theories of this type. Concerning the possibility of discovering extra anomalous Z′ bosons, although there are stringent upper bounds on their mixing(s) with the Z gauge boson, it is of utmost importance to bring this type of analysis even closer to the experimental test by studying in more detail the peculiarities of anomalous gauge interactions for both the neutral and the charged sectors, along the lines developed in this work. This analysis is in progress and we hope to report on it in the near future.
Appendix. The model with a single anomalous U(1)
We summarize in this appendix some results concerning the model with a single anomalous U(1) discussed in the main sections. These results specialize and simplify the general discussion of [19], to which we refer for further details. We will use the hypercharge values The covariant derivatives act on the fermions f_L, f_R as with l = Y, B an abelian index, where A_µ is a non-abelian Lie algebra element, and we write the lepton doublet as We will also use standard notations for the SU(2)_W and SU(3)_C gauge bosons, with the normalizations The interaction Lagrangian for the leptons becomes As usual we define the left-handed and right-handed currents Writing the quark doublet as we obtain the interaction Lagrangian We also define cos The mass matrix in the mixing of the neutral gauge bosons is given by where The orthonormalized mass-squared eigenstates corresponding to this matrix are given by One can see that these results reproduce the analogous relations of the SM in the limit of very large M_1. Similarly, for the other matrix elements of the rotation matrix O_A we obtain whose asymptotic behavior is described by the limits These mass-squared eigenstates correspond to one zero mass eigenvalue for the photon A_γ, and two non-zero mass eigenvalues for the Z and for the Z′ vector bosons, corresponding to the mass values The mass of the Z gauge boson gets corrected by terms of order v²/M_1, converging to the SM value as M_1 → ∞, with M_1 the Stückelberg mass of the B gauge boson; the mass of the Z′ gauge boson can grow large with M_1.
The physical gauge fields can be obtained from the rotation matrix O_A
which can be approximated at first order as The mass-squared matrix (176) can be diagonalized as It is straightforward to verify that the rotation matrix O_A satisfies the proper orthogonality relation
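Both checks stated here, the diagonalization of the mass-squared matrix and the orthogonality of O_A, are easy to reproduce numerically. The NumPy sketch below does so on a schematic Stückelberg-extended neutral mass matrix in the (W_3, Y, B) basis, M² = (v²/4) a aᵀ + M_1² diag(0, 0, 1) with a = (g_2, −g_Y, −g_B x_H); since the paper's explicit matrix (176) is not reproduced in this text, this form and all numerical inputs are illustrative assumptions. The rank-deficient Higgs block guarantees one exactly massless photon, and numpy.linalg.eigh returns an orthogonal O_A by construction.

```python
import numpy as np

# Illustrative inputs (placeholders, not the paper's parameters)
g2, gY, gB = 0.65, 0.36, 0.20
v, M1, xH = 246.0, 2000.0, 1.0

a = np.array([g2, -gY, -gB * xH])
M2 = 0.25 * v**2 * np.outer(a, a)   # rank-1 Higgs contribution
M2[2, 2] += M1**2                   # Stueckelberg mass for the B boson

masses2, OA = np.linalg.eigh(M2)    # columns of OA are the mass eigenstates

print("eigenvalues (GeV^2):", masses2)         # first one ~ 0 -> photon
print("||OA^T OA - 1|| =", np.linalg.norm(OA.T @ OA - np.eye(3)))
print("M_Z  =", np.sqrt(masses2[1]),
      "vs SM limit", 0.5 * v * np.hypot(g2, gY))  # agree up to O(v^2/M1^2)
print("M_Z' =", np.sqrt(masses2[2]), "which grows with M1")
```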
Rotation matrix O_χ on the axi-Higgs
This matrix is needed in order to rotate into the mass eigenstates of the CP-odd sector, relating the axion χ and the two neutral Goldstone bosons of this sector to the Stückelberg field b and the CP-odd phases of the two Higgs doublets. We refer to [19] for a more detailed discussion of the scalar sector of the model where, in the presence of explicit phases (PQ-breaking terms), the axion, which would otherwise be massless, acquires a mass. The PQ-symmetric contribution is given by while the PQ-breaking terms are where b_1 has mass-squared dimension, while λ_1, λ_2, λ_3 are dimensionless.
c_χ = 4 4λ_1 + λ_3 cot β + b_1 v^2 2 sin 2β + λ_2 tan β, and using v_d = v cos β, v_u = v sin β together with from the scalar potential [19] one can extract the mass eigenvalues of the model for the scalar sector. The mass matrix has 2 zero eigenvalues and one non-zero eigenvalue that corresponds to a physical axion field, χ, with mass The mass of this state is positive if c_χ < 0. Notice that the mass of the axi-Higgs is the result of two effects: the presence of the Higgs vevs and the presence of a PQ-breaking potential, whose parameters can be small enough to drive the mass of this particle to be very light. We refer to [23] for a simple illustration of this effect in an abelian model. In the case of a single anomalous U(1), O_χ can be simplified as shown below.
Introducing N given by and defining O_χ as the following matrix where we have defined (201)
9.2 Appendix: Vanishing of the amplitude ∆^{λµν} for on-shell external physical states
An important property of the triangle amplitude is its vanishing for on-shell external physical states.
The vanishing of the amplitude ∆ for on-shell physical states can be verified once we have assumed conservation of the vector currents. This is a simple example of a result that, in general, goes under the name of the Landau-Yang theorem. In our case we use only the expression of the triangle in Rosenberg parametrization [34] and its gauge invariance to obtain this result. We stress this point here since if we modify the Ward identity on the correlator, as we are going to discuss next, additional interactions are needed in the analysis of processes mediated by this diagram in order to obtain consistency with the theorem.
We introduce the 3 polarization four-vectors for the λ, µ, and ν lines, denoted by e, ǫ_1 and ǫ_2 respectively, and we use the Sudakov parameterization for each of them, using the massless vectors k_1 and k_2 as a longitudinal basis on the light-cone, plus transverse (⊥) components which are orthogonal to the longitudinal ones. We have where we have used the transversality conditions e·k = 0, ε_1·k_1 = 0, ε_2·k_2 = 0, the external lines being now physical. Clearly e_⊥·k_1 = e_⊥·k_2 = 0, and similar relations hold also for ε_{1⊥} and ε_{2⊥}, all the transverse polarization vectors being orthogonal to the light-cone spanned by k_1 and k_2. From gauge invariance on the µν lines in the invariant amplitude, we are allowed to drop the light-cone components of the polarization vectors for these two lines, ∆^{λµν} e_λ ε_{1µ} ε_{2ν} = ∆^{λµν} e_λ ε_{1µ⊥} ε_{2ν⊥}, and a simple computation then gives (introducing e_⊥ ≡ (0, e) and similarly for the others) ∆^{λµν} e_λ ε_{1µ⊥} ε_{2ν⊥} = a_1 ǫ[k_1 − k_2, ε_{1⊥}, ε_{2⊥}, e] = a_1 ǫ[k_1 − k_2, ε_{1⊥}, ε_{2⊥}, α(k_1 − k_2) + e_⊥] ∝ (ε_{1⊥} × ε_{2⊥}) · e_⊥ = 0, since the three transverse polarizations are linearly dependent. Notice that this proof shows that Z → γγ with all three particles on-shell does not occur. As usual, extreme care is needed when massless fermions are running in the loop. The situation is analogous to that encountered in spin physics in the analysis of the EMC result, where the puzzle was resolved [30] by moving to the massless-fermion case starting from off-mass-shell external lines.
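The final step, the vanishing of ε[k_1 − k_2, ε_{1⊥}, ε_{2⊥}, e_⊥] because three transverse polarizations in the two-dimensional plane orthogonal to the light-cone are necessarily linearly dependent, can be checked numerically. In the sketch below (illustrative kinematics), the Levi-Civita contraction is evaluated as a 4×4 determinant; relaxing the transversality condition e·k = 0 restores a generically non-zero result, as expected.

```python
import numpy as np

def eps4(a, b, c, d):
    """eps_{mu nu rho sigma} a^mu b^nu c^rho d^sigma: det of rows a, b, c, d."""
    return np.linalg.det(np.array([a, b, c, d], dtype=float))

E = 45.0
k1 = np.array([E, 0.0, 0.0,  E])   # light-cone photon momenta
k2 = np.array([E, 0.0, 0.0, -E])

rng = np.random.default_rng(0)
def transverse():
    """Random polarization vector lying purely in the x-y plane."""
    x, y = rng.normal(size=2)
    return np.array([0.0, x, y, 0.0])

e_p, eps1_p, eps2_p = transverse(), transverse(), transverse()
print(eps4(k1 - k2, eps1_p, eps2_p, e_p))   # 0: three transverse vectors are dependent

# Relaxing transversality (e.k != 0) restores a generically nonzero contraction
bad = e_p + np.array([0.3, 0.0, 0.0, 0.0])
print(eps4(k1 - k2, eps1_p, eps2_p, bad))
```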
Appendix. Massive versus massless contributions
Here we briefly discuss the computation of the mass contributions to the amplitude. We start from the massless fermion limit. The anomaly coefficient in rel. (20) can be obtained starting from the triangle diagram in momentum space. For instance we get and, isolating the four anomalous contributions of the form AAA, AVV, VAV and VVA, we obtain Similarly we obtain Tr[γ^λ P_L (q̸ − k̸) γ^ν P_L (q̸ − k̸_1) γ^µ P_L q̸] / [q² (q − k_1)² (q − k)²]; the other coefficients reported in Eq. (27) are obtained similarly.
Appendix. CS and GS terms rotated
The rotation of the CS and the GS terms into the physical fields and the Goldstone boson gives These vertices appear in the cancelation of the gauge dependence in s-channel exchanges of Z gauge bosons in the R_ξ gauge. The dots refer to the additional contributions, proportional to interactions of χ, the axi-Higgs, with the neutral gauge bosons of the model. | 2014-10-01T00:00:00.000Z | 2007-03-12T00:00:00.000 | {
"year": 2007,
"sha1": "3f0ea9eecce191a2d3c9137e64143a7abf5c98be",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-ph/0703127",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "ff84322dd7e972d36cae62ae41680dd829f59385",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
52832480 | pes2o/s2orc | v3-fos-license | Geographic Variation in the Peptidome Fraction of the Venom of Naja naja naja (Indian Cobra) Species as Analysed by MALDI-TOF; Implications on Antivenin Development
Purpose: Several studies have shown that the commercially available antivenoms are ineffective in neutralizing the toxic and lethal effects. In this study, we aimed at analysing the comparative peptidome fractions of venoms of Indian Cobra (Naja naja) species from three distinct geographical (western, southern and eastern) regions of India. MALDI spectra of the regional venoms were recorded in positive ion mode using a Bruker Daltonics Ultraflex TOF/TOF spectrometer. Results: It was observed that the peptidome fractions of Naja naja species from the different regions varied greatly in MALDI-TOF spectral profile in the 1-40 kDa window. The peptide patterns of the three regional venoms had similarities in containing 27, 20, 13 and 6.7 kDa toxin components. Further, it was interesting to note that the eastern regional venom sample contained an abundant ~5.7 kDa peptide which was completely absent in the other regional venoms, which might be the reason for its high toxic and lethal effect. Conclusion: The geographical venom variability in the peptidome fraction reported here may have an impact on the selection of specimens for antivenom production and highlights the necessity of using pooled venoms as the representative venom for antivenom production.
Introduction
Snake venom constitutes a diverse and synergistic cocktail of biologically active molecules responsible for various pharmacological effects [1,2]. In this respect, venoms of snakes are known to exhibit marked variation in their potency and in the extent of the toxic and lethal effects they induce; this variation of toxins has been addressed at different levels (sex, diet, seasonal, geographical, etc.), and also in terms of the composition and relative abundance of toxins [3,4]. The variation in venom composition is known to be one of the main reasons for the inefficiency of antivenoms, which are the preferred choice for treating snakebite victims all over the world [5]. It has been observed that the variable composition of venom influences the effectiveness of antivenom, as antivenom prepared against a particular regional venom is reported to be ineffective or only partially effective against the toxicity/lethality of venom from other regions [6][7][8]. Hence, understanding the intra-specific variability of venom components is gaining much attention, with the intention of producing efficacious therapeutic antivenom and thus helping in the management of snakebite [8]. Although variation in the venom proteome is a well-documented phenomenon, variation in the venom peptidome is poorly understood [9,10]. Knowledge of peptidome variation is of prime importance, as these are the most potent toxins and are less immunogenic [10,11]. Recently, it was shown that the commercially available antivenoms are ineffective in neutralizing the toxic and lethal effects of the peptide fraction, and the antibody raised against this fraction was found to cross-react with all the regional venoms [11]. Further, peptides are known to be more potent in their toxic and lethal effects [9,12]. Therefore, studies on variation in the venom peptidome are of prime importance for the development of efficacious antivenom. Several approaches and techniques have been employed for studying the variability of venom components that influences the pharmacological effects. Mass spectrometry is one of the major investigative tools, known to give access to a wealth of information in a short working time frame and with minute amounts of sample, as in the case of snake venoms [13]. Several studies have used MALDI-TOF to understand the influence of variation on snake venom induced pathophysiological effects [10,[13][14][15][16].
Indian cobra (Naja naja), one of the medically important snakes, is endemic and distributed all across the country. It is responsible for a large number of morbidity and mortality cases in India [17]. It has been observed that variation in the venom composition of Naja naja species was the main reason for the severity of pathogenesis in victims in three districts of West Bengal (eastern India), and the available polyvalent antivenom manufactured in western India was hardly effective in neutralizing the pathobiological manifestations of the venom [7]. The venoms of Naja naja species from different geographical regions are known to vary in their biochemical and pharmacological activities, and the eastern regional venom is known to be the most toxic and lethal among the three regional venoms [18,19]. It was concluded that the variability in toxicity and lethality was due to the presence of peptides and phospholipases A2 (low molecular weight factors) in the venom [7,19]. Although intra-specific variability in Naja naja species venom has been observed [7,8,18,19], its influence on antivenom production is poorly understood, due to the lack of characterization of the major toxins involved in the pharmacological effects. It is believed that an antiserum raised against fractionated venom containing such major toxins could yield better protection [20]. Therefore, the aim of the study is to obtain general information on the peptidome (low molecular weight fraction), the main toxic/lethal component of the venom of Naja naja species from different regions of India, which would serve as a reference for the development of efficacious therapeutic antivenom. Here we report for the first time the comparative peptidomic analysis of the different regional venoms using MALDI-TOF, for its potential use in efficacious therapeutic antivenom development.
Materials
All the reagents used were of proteomic grade. The lyophilized venoms of Naja naja from different regions of India were a gift from Dr. T. Veerabasappa Gowda (Professor, University of Mysore). The MALDI mass spectrometry matrices, α-cyano-4-hydroxycinnamic acid for peptides and sinapinic acid for proteins, were from Sigma-Aldrich (St. Louis, USA). Mass spectrometry calibration standards were from Sigma-Aldrich (St. Louis, USA).
Extraction of the low molecular weight fraction from Naja naja venoms
To obtain the peptidome fraction (low molecular weight fraction), the three regional Naja naja venom samples were separately subjected to G-50 column chromatography, with the column equilibrated and eluted with 0.1 M phosphate buffer (pH 7.0) containing 0.5 M NaCl. The elution resulted in two peaks, of which the first contained high molecular weight proteins (>50 kDa) and the second contained low molecular weight proteins (<40 kDa). The samples were lyophilized until mass spectrometric analysis was performed.
MALDI mass spectrometry of low molecular weight fractions
MALDI spectra were recorded in positive ion mode using a Bruker Daltonics Ultraflex TOF/TOF spectrometer. The matrix used for positive ion mode detection was α-cyano-4-hydroxycinnamic acid in 60% acetonitrile containing 0.1% TFA. Routinely, 0.5 µl of matrix was mixed with 0.5 µl of the peptide sample (1 μg dissolved in 10 μl of water) on a MALDI plate for mass spectral analysis. Each sample was spotted twice, and spectra were recorded for each spot.
Results and Discussion
The venom of the Indian Cobra (Naja naja) obtained from three different regions, western (Mumbai, Maharashtra), southern (Chennai, Tamil Nadu) and eastern (Kolkata, West Bengal) (Figure 1), varied greatly in the MALDI-TOF spectral profile of the peptide (low molecular weight) fractions in the 1-40 kDa window (Figure 2). The observed variations were in the presence/absence of different molecular weight peptides and in their abundance across the different regional venoms (Figure 2). The peptide patterns of the three regional venoms had similarities in containing 27 kDa, 20 kDa, 13 kDa and 6.7 kDa toxin components (Figure 2). In Elapidae venoms it is usually observed that the 12-14 kDa members are PLA2s, while the <7 kDa members are generally represented by neurotoxins, cardiotoxins or ion-channel blockers [20]. Further, it is known that the ~5.7-7.3 kDa members are potential three-finger toxins (3FTxs) [20]. Proteins from the 3FTx and PLA2 families are generally responsible for the major biological effects, mainly neurotoxicity and death by respiratory arrest, which is the predominant clinical manifestation of Naja naja venoms and of elapid venoms in general [21][22][23].
Although many isoforms of PLA2 (in the range of 12-14 kDa) have been isolated from these three regional venoms of Naja naja species, only a few have been characterized [24][25][26][27]. A highly lethal cytotoxic peptide (~6.9 kDa) has been reported from the eastern regional venom [28]. Recently, a potent cardiotoxin with a molecular weight of 6.7 kDa has also been reported [29].
The eastern and western regional venoms contained 12 kDa, 24 kDa and 33 kDa peptides, which were completely absent in the southern regional sample of Naja naja. However, it is interesting to note that the eastern regional venom sample contained an abundant ~5.7 kDa peptide which was completely absent in the other regional venoms. This component might be a cardiotoxin or neurotoxin, toxins which predominantly belong to the 3FTx family and are the least studied in Indian venoms. Supporting this is the observation that the eastern regional venom predominantly causes damage to cardiac muscle [8]. Further, a distinct PLA2 enzyme is known to be present in the eastern venom which is absent in the southern and western venom samples [8]. Therefore, the presence of highly potent low molecular weight components like the isoforms of PLA2s and 3FTxs might be the reason for the observed highly toxic and lethal effects of the eastern regional venom when compared to the other regional venoms [7,18,19]. Further, it can be argued that the observed ineffectiveness of the commercial polyvalent antivenom (manufactured in western India) in neutralizing the pathobiological manifestations of venom samples from eastern India might be due to its ineffectiveness against low molecular weight components like 3FTxs and neurotoxins [7].
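The presence/absence comparison made above is, operationally, a matching of peak lists within a mass tolerance. The short Python sketch below illustrates that bookkeeping on peak lists assembled from the masses quoted in this section; the lists, and the 0.3 kDa tolerance, are illustrative reconstructions rather than the measured spectra.

```python
# Hypothetical peak lists (kDa) assembled from the masses quoted above
peaks = {
    "eastern":  [33.0, 27.0, 24.0, 20.0, 13.0, 12.0, 6.7, 5.7],
    "western":  [33.0, 27.0, 24.0, 20.0, 13.0, 12.0, 6.7],
    "southern": [27.0, 20.0, 13.0, 6.7],
}

def matched(m, plist, tol=0.3):
    """True if mass m (kDa) matches any peak in plist within +/- tol."""
    return any(abs(m - p) <= tol for p in plist)

# Peaks shared by all three regional venoms
shared = [m for m in peaks["eastern"]
          if all(matched(m, peaks[r]) for r in ("western", "southern"))]
print("shared:", shared)                       # 27, 20, 13, 6.7 kDa

# Peaks unique to a single region
for region, plist in peaks.items():
    others = [p for r, p in peaks.items() if r != region]
    unique = [m for m in plist if not any(matched(m, o) for o in others)]
    print(f"unique to {region}:", unique)      # e.g. 5.7 kDa only in eastern
```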
The absence of the 12 kDa, 24 kDa and 33 kDa peptides in the southern venom compared to the other regional venoms, and the abundant presence of the ~5.7 kDa peptide only in the eastern regional venom of Naja naja species, could well be utilized for the selection of specimens in the production of antivenom, and thus in the treatment of snakebite patients. A similar study demonstrated and discussed the implications of mass spectrometry analysis for the production of Micrurus antivenoms [20]. Further, it concluded that an antiserum raised against fractionated venom containing such major toxins could yield better protection [20].
Conclusion
In conclusion, the MALDI-TOF analysis revealed qualitative and quantitative peptidome variation in Naja naja venom of distinct geographical origin. Further, this study contributes towards the understanding of intra-venom variability, which is useful for the production of efficacious and region-specific therapeutic antivenoms, an immediate medical concern for researchers.
Figure 1: Schematic representation of the location of Indian N. naja naja venom samples obtained from western (Mumbai, Maharashtra), southern (Chennai, Tamil Nadu) and eastern (Kolkata, West Bengal) regions of the Indian peninsula. | 2018-09-19T03:39:07.038Z | 2015-01-01T00:00:00.000 | {
"year": 2015,
"sha1": "6c49377049adefb42b557f8ce599e6845487ec51",
"oa_license": "CCBY",
"oa_url": "https://www.avensonline.org/wp-content/uploads/JTOX-2328-1723-02-0008.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6c49377049adefb42b557f8ce599e6845487ec51",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": []
} |
254398798 | pes2o/s2orc | v3-fos-license | Refractory Metals and Oxides for High-Temperature Structural Color Filters
Refractory metals have recently garnered significant interest as options for photonic applications due to their superior high-temperature stability and versatile optical properties. However, most previous studies only consider their room-temperature optical properties when analyzing these materials’ behavior as optical components. Here, we demonstrate structural color pixels based on three refractory metals (Ru, Ta, and W) for high-temperature applications. We quantify their optical behavior in an oxygenated environment and determine their dielectric functions after heating up to 600 °C. We use in situ oxidation, a fundamental chemical reaction, to form nanometer-scale metal oxide thin-film bilayers on each refractory metal. We fully characterize the behavior of the newly formed thin-film interference structures, which exhibit vibrant color changes upon high-temperature treatment. Finally, we present optical simulations showing the full range of hues achievable with a simple two-layer metal oxide/metal reflector structure. All of these materials have melting points >1100 °C, with the Ta-based structure offering high-temperature stability, and the Ru- and W-based options providing an alternative for reversible color filters, at high temperatures in inert or vacuum environments. Our approach is uniquely suitable for high-temperature photonics, where the oxides can be used as conformal coatings to produce a wide variety of colors across a large portion of the color gamut.
■ INTRODUCTION
Structural color refers to any process where hue is generated utilizing micro- or nanostructured surfaces. These surfaces interact with incident light, changing its reflection or adding absorption peaks, which can result in the production of vibrant colors. 1−4 The shades formed by this process are often far more stable than traditional ink printing options and can offer further printing precision, given the microscopic or nanoscopic scale of the fabrication. Many modern attempts at creating artificial structural color can produce vivid, robust shades but rely on complex metasurfaces 5 or many-layer geometries designed to exploit Fabry−Perot resonances. 6,7 Structural colors are quickly growing in their usage and have applications in sensing, 8−11 anticounterfeit technology, 12−15 solar selective absorbers for photovoltaics, 16,17 and heat-resistant coatings. 18 In an all-thin-film design, both metallic and dielectric materials are needed to fulfill the thin-film interference conditions required for forming reflective color filters [DOI: 10.1002/adom.202200159]. Yet, most previous structural color designs would not be capable of withstanding high-temperature treatment because the materials commonly used present limited thermal properties (e.g., low melting point, high thermal expansion, etc.). However, several refractory metals and their oxides offer melting points above 1100°C, representing, thus, a promising platform for generating structural colors that can be used under extreme high-temperature conditions. As an example, prior works using W and Mo oxides for this purpose have used nonstoichiometric metal oxides fabricated via sputtering on a glass substrate 17 or on a different metallic substrate like Al or Cu. 19 In this work, we circumvent the thermal limitations imposed by the modest melting point of coinage metals (Au, Ag, Cu) by realizing a scalable geometry utilizing refractory metals and their oxides for proof-of-concept structural color printing that can operate at high temperatures. Our material selection entails refractory metals with melting points >1100°C, significantly superior to those of the coinage metals. While this class of material has been underexplored for photonics thus far, we show that their optical behavior (i.e., permittivity) is very suitable for devices in the visible range of the electromagnetic spectrum. By controlling in situ the oxidation of Ru, Ta, and W thin films, we attain an alternative route for tailoring the spectrum. We fabricate structural color filters that produce vivid colors ranging from dark yellow to light pink and cyan by performing a controlled heating treatment while measuring the samples in situ with ellipsometry. The hues result from interference between the incoming and outgoing light, which changes depending on the thickness of the MOx layer and the dielectric function of the metal. The colors are obtained by submitting each refractory metal to a thermal treatment at 600°C in an oxidizing environment. Oxygen diffusion within these refractory metals leads to a dual-layer dielectric/metal structure that enables light interference, which, in turn, gives rise to the primary printing colors. These hues are angle-insensitive up to 75° for RuO2, and up to 65° for Ta2O5 and WO3. Furthermore, optical simulations of similar device structures show that a large portion of the color gamut can be reached simply by changing the thickness of the metal oxide layer.
The permittivity of all metals and their oxides has been consistently modeled using general oscillators, and these data are made fully available to enable other researchers to use them when designing optical building blocks for additional high-temperature applications. Our results illustrate how refractory metals can be implemented for color printing, with the flexibility of selecting either static or reversible responses at temperatures beyond 1000°C, depending on material and environment. Given the thermal stability of Ta2O5 in inert environments, 20 these structural color systems would be ideal optical coatings for space applications. Alternatively, using further oxidation of all three refractory-metal-based structural color systems in an oxygen-rich environment, these structures could be implemented as simple, yet highly sensitive, oxygen sensors. Materials that present suitable optical properties (low loss) and are chemically controllable at high temperatures have been increasingly sought after recently due to their potential usage in ultrahigh-temperature, extreme conditions. In turn, these findings are launching refractory-metal oxides as a class of material for ultrahigh-temperature photonics.
■ RESULTS AND DISCUSSION
To obtain refractory metal oxides, we heat the samples to 600°C in an oxidizing environment (a mixture of air and Ar) while measuring their optical properties using in situ spectroscopic ellipsometry. We use a ramping rate of 3°C min−1, stopping at each 100°C point for 22 min, with additional steps of 50°C above 400°C, to allow the samples to thermalize (see Figure S1 in the Supporting Information for the temperature profile). Figure 1 shows the in situ ellipsometry measurements of the refractory metals from room temperature throughout the high-temperature cycling process. The ellipsometric parameters Ψ and Δ refer, respectively, to the ratio of the amplitudes of the reflected s- and p-polarized light and the phase difference between the reflected s- and p-polarized light. 21,22 Together, they characterize the reflection behavior of the surface of our system, and with these two parameters we can characterize the optical properties of these materials as they change with increasing temperature. All three films show stark changes in their reflective properties beginning at 500°C, the temperature at which oxygen begins to diffuse into the bulk of the three metals. 23−26 Clear peaks develop at 500°C and continue to increase in magnitude for the remainder of the temperature ramp, coinciding with reflective interference due to the growth of the corresponding dielectric layers. 27,28 This alteration is evidenced by a color change in the reflection spectrum (see Figure S1 in the Supporting Information for sample photographs). The location of these peaks shifts slightly toward higher wavelengths as the temperature increases, due to the increasing thickness of the dielectric layer. Given this knowledge, the ability to perform an in situ characterization of the samples via ellipsometry allows for precise control of the thickness of the oxide layer and its optical properties.
Figure 1. In situ optical measurements through the high-temperature cycle for (a, d) Ru, (b, e) Ta, and (c, f) W on Si substrates. All curves shown are at an angle of 70° from normal incidence. The black arrow shows the order of measurements.
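For intuition about what Ψ and Δ encode, the minimal Python sketch below computes them for a single ambient/film interface from the complex Fresnel coefficients via ρ = r_p/r_s = tan(Ψ)e^{iΔ}; the complex refractive index used is an illustrative metal-like placeholder, not one of the dielectric functions fitted in this work.

```python
import numpy as np

def psi_delta(n1, n2, theta_deg):
    """Ellipsometric (Psi, Delta) in degrees for a single n1 -> n2 interface."""
    th1 = np.radians(theta_deg)
    th2 = np.arcsin(np.complex128(n1 * np.sin(th1) / n2))  # complex Snell's law
    rp = (n2 * np.cos(th1) - n1 * np.cos(th2)) / (n2 * np.cos(th1) + n1 * np.cos(th2))
    rs = (n1 * np.cos(th1) - n2 * np.cos(th2)) / (n1 * np.cos(th1) + n2 * np.cos(th2))
    rho = rp / rs                      # rho = tan(Psi) * exp(i * Delta)
    return np.degrees(np.arctan(np.abs(rho))), np.degrees(np.angle(rho))

n_metal = 3.0 + 3.5j                   # illustrative metal-like complex index
for angle in (50, 60, 70):
    psi, delta = psi_delta(1.0, n_metal, angle)
    print(f"{angle} deg:  Psi = {psi:6.2f},  Delta = {delta:7.2f}")
```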
We use X-ray photoelectron spectroscopy (XPS) to discern the specific composition of the oxide layer and to analyze any change in surface chemistry with heat treatment. Figure 2 shows the XPS spectra before and after temperature treatment, where the measured and fitted data are presented as black solid lines and gray dashed lines, respectively. In Figure 2a−c, we see the signature of thin native oxide layers in all samples before high-temperature treatment. This oxide layer is less than 10 nm thick, given the known penetration-depth limitation of XPS. 29 This aligns well with previous literature, which found the native oxide thicknesses for all three materials to be less than 2 nm at room temperature. 23,26,30 For the pristine samples, the XPS data are fitted by a combination of the metals and their oxides, shown in blue and red, respectively. Upon temperature cycling, the intrinsic oxide layers develop by slowly consuming the metals. In Figure 2d−f, the pure elemental peaks are no longer present, indicating a metal oxide thickness of at least 10 nm. Compared to the literature, we determine the stoichiometry of the oxide layers to be RuO2 31,32 for Ru, Ta2O5 33 for Ta, and WO3 34 for W.

Given the change in surface composition identified by XPS and the predicted modification in optical behavior from Figure 1, we analyze the newly formed metal oxide layers by measuring their dielectric functions at room temperature before and after the heating cycle. Figure 3 presents the optical properties of the three refractory metals (Ru, Ta, and W), measured via spectroscopic ellipsometry. The dielectric function of all three metals is determined using the general-oscillator model (tables of model parameters for the metals and oxides are available in Tables S1 and S2 in the Supporting Information, respectively). We fabricate the thin films by sputtering onto a standard Si wafer and onto a reference glass substrate. By measuring transmission data from the glass reference sample included in the thin-film deposition (Figure S2 in the Supporting Information), we verify that all three metal thin films are optically thick prior to high-temperature treatment, given that the intensity of transmitted light is less than 5% in all cases. All three materials exhibit strongly metallic behavior in the visible region, as evidenced by their mostly negative ε1, and begin silver in color, as shown in the insets of Figure 3b. We observe limited oxidation prior to high-temperature treatment, as evidenced by Figure 2a−c, although its effect on the samples is negligible given that their behavior is still strongly metallic and reflective. These results are comparable to previous literature examples of each metal. 35−37 As presented in Figure 3c−d, the oxide layers display an overall dielectric optical behavior, exemplified by their transparency across a wide wavelength range and their positive ε1. The dielectric functions line up well with previous literature sources for these oxides. 38−40 Our model indicates the presence of a remaining metallic layer underneath two of the MOx layers (Ru and W); therefore, we obtain the dielectric functions of both the metallic and oxide layers of these structures after high-temperature treatment. Here, we observe a three-layer structure, with the newly formed metal oxide acting as a top dielectric film, a metal intermediate layer, and a bottom dielectric Si substrate.
For the Ta sample, oxygen diffused throughout the entire metallic layer, such that a Ta2O5/Si system is formed (see Figure S3 in the Supporting Information for fits to the ellipsometric data, along with the calculated thicknesses of each layer). Since Ta was fully oxidized, the dielectric function for Ta presented in Figure 3a−b is determined using the pre-high-temperature reflectivity data for the sample (see Figure S4 in the Supporting Information for the fit to the pristine Ta/Si ellipsometric data). The relevance of accurately determining the dielectric functions of these oxides lies in using this information to realistically design structural color pixels for printing in high-temperature settings, not possible with conventional coinage metals. Overall, tuning the thickness of both the metal and MOx films controls the light interference within the structure, which produces vivid coloration in all three samples, enabling vibrant reflected colors as displayed in the inset of Figure 3d.
Important features for color pixels are chromaticity and angular insensitivity. Thus, we quantify the changes in hue as a function of angle of incidence for all pixels by measuring the reflection of each heat-treated sample every 10°. We plot the reflectivity for each system in Figure 4, from 15 to 85° from normal incidence, for the visible wavelength range (see Figure S5 for a full-range comparison and Figure S6 for full reflection maps). The data are normalized at each angle such that each curve has a minimum at 0 and a maximum at 1. All three samples show bright coloration for a wide range of angle values. The reflectivity of all three structures is angle-insensitive up to at least 65°, as has been previously demonstrated for thin-film-interference-based structural color or superabsorber systems, 2 demonstrating the potential for these materials as wide-angle visible reflectors for structural color applications.
With the dielectric function of each metal and its MOx counterpart, we simulate the expected reflection performance for different values of oxide layer thickness on top of a 20 nm metal layer using the transfer matrix method (TMM) (see the top row of Figure 5 for schematics). 41 Figure 5a−c shows the calculated normal-incidence reflection spectra for different thicknesses of the metal oxide layer t_ox, varying from 10 to 100 nm in steps of 10 nm. For all three metals, the reflection characteristics reliably shift to longer wavelengths as the oxide thickness increases, suggesting that pixels across a wide range of the color gamut should be fabricable simply by changing the oxide layer thickness, which can be controlled by varying the length of time a sample is held at 600°C. Figure 5d−f shows the chromaticity diagrams for the simulated structures for our three samples as the thickness of the refractory metal oxide layer varies from 0 to 100 nm in steps of 5 nm. As one can observe, the color ranges across a large region of the color gamut simply by increasing the thickness of the oxide layer. The highly tailorable reflectivity and chromaticity achievable with a three-layer reflector geometry, as demonstrated in Figure 5, highlight these materials' promise as photonic active components for high-temperature applications.
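As a concrete illustration of the TMM calculation behind Figure 5, the Python sketch below computes the normal-incidence reflectance of an oxide/metal/Si stack and sweeps the oxide thickness t_ox. The constant, non-dispersive refractive indices are illustrative placeholders (the actual simulations use the wavelength-dependent dielectric functions determined by ellipsometry), so only the qualitative thickness-dependent interference behavior should be read from the output.

```python
import numpy as np

def reflectance(n_layers, d_layers, lam, n_in=1.0, n_sub=3.9):
    """Normal-incidence transfer-matrix reflectance of a layered stack."""
    def interface(na, nb):
        r, t = (na - nb) / (na + nb), 2 * na / (na + nb)
        return np.array([[1, r], [r, 1]], dtype=complex) / t
    def layer(n, d):
        phi = 2 * np.pi * n * d / lam
        return np.array([[np.exp(-1j * phi), 0], [0, np.exp(1j * phi)]])
    ns = [n_in, *n_layers, n_sub]
    M = np.eye(2, dtype=complex)
    for i, (n, d) in enumerate(zip(n_layers, d_layers)):
        M = M @ interface(ns[i], n) @ layer(n, d)
    M = M @ interface(ns[-2], ns[-1])
    return abs(M[1, 0] / M[0, 0]) ** 2   # |r|^2

n_ox, n_metal = 2.2, 3.0 + 3.5j          # illustrative: Ta2O5-like oxide, metal
lam = 550e-9                              # green light
for t_ox in range(10, 101, 10):          # oxide thickness sweep, nm
    R = reflectance([n_ox, n_metal], [t_ox * 1e-9, 20e-9], lam)
    print(f"t_ox = {t_ox:3d} nm -> R(550 nm) = {R:.3f}")
```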
Next, we calculate the color of different possible pixels by varying both the metal and the oxide thickness, using the simulated reflectivity at normal incidence. 42 Figure 6 shows the simulated color for our three materials for film thicknesses ranging from 0 nm to 200 nm in steps of 5 nm. When varying both thicknesses, we can achieve very vivid coloration across a large portion of the color gamut. As seen in Figure 6a, Ru-based samples present overall pastel shades, as a direct consequence of their wider reflectance spectra, as shown in Figure 4a. Conversely, Ta2O5/Ta/Si and WO3/W/Si both offer bright color options throughout most of the visible spectrum due to the narrower peaks in the visible region of their reflectivity spectra. These simulations show the promise of refractory metal oxides for industry-scalable structural color pixels with controllable high-temperature behavior (offering either static or reversible response, depending on material selection), with options ranging from pale to bright colors across the majority of the visible color spectrum. The sharp changes in color with very small changes in thickness also suggest one possible use for this structure: in high-temperature applications requiring very low levels of oxygen, these quickly oxidizing samples can serve as highly sensitive oxygen sensors, in which a color change could quickly detect the presence of oxygen.
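Converting a simulated reflectance spectrum into the chromaticity coordinates plotted in diagrams such as Figure 5d−f involves weighting R(λ) by the CIE 1931 colour-matching functions and an illuminant. The sketch below uses compact piecewise-Gaussian fits to the 2° observer (after Wyman et al.) and a flat illuminant for simplicity; the fit, the illuminant choice, and the toy spectrum are all illustrative assumptions rather than the paper's procedure.

```python
import numpy as np

def g(lam, mu, s1, s2):
    """Piecewise Gaussian: width s1 below the peak, s2 above it."""
    s = np.where(lam < mu, s1, s2)
    return np.exp(-0.5 * ((lam - mu) / s) ** 2)

def cmf(lam):
    """Approximate CIE 1931 2-deg colour-matching functions (Wyman et al.)."""
    xb = (1.056 * g(lam, 599.8, 37.9, 31.0)
          + 0.362 * g(lam, 442.0, 16.0, 26.7)
          - 0.065 * g(lam, 501.1, 20.4, 26.2))
    yb = 0.821 * g(lam, 568.8, 46.9, 40.5) + 0.286 * g(lam, 530.9, 16.3, 31.1)
    zb = 1.217 * g(lam, 437.0, 11.8, 36.0) + 0.681 * g(lam, 459.0, 26.0, 13.8)
    return xb, yb, zb

def chromaticity(lam, R):
    """CIE (x, y) of a reflectance spectrum R(lam) under a flat illuminant."""
    xb, yb, zb = cmf(lam)
    dl = lam[1] - lam[0]
    X, Y, Z = (np.sum(R * b) * dl for b in (xb, yb, zb))
    return X / (X + Y + Z), Y / (X + Y + Z)

lam = np.linspace(380.0, 780.0, 401)
# Toy spectrum: reflective background with a dip near 450 nm (yellowish hue)
R = 0.8 - 0.6 * np.exp(-0.5 * ((lam - 450.0) / 40.0) ** 2)
x, y = chromaticity(lam, R)
print(f"CIE chromaticity: x = {x:.3f}, y = {y:.3f}")
```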
While all three structures are formed of materials with melting points >1100°C, the oxides present distinct thermochemical properties. Ta2O5 has previously been demonstrated to remain stable in inert environments at temperatures beyond 1000°C, 20 while the other two oxides (RuO2 and WO3) have been shown to reduce to their pure constituent metals beyond 800°C. 43−45 Thus, the unique material chemistry of each metal oxide is a feature: RuO2 or WO3 can be implemented in situations that require color reversibility, while Ta2O5 is the best choice to attain high-temperature stability. For applications in oxygen sensing, reusability is a highly desirable trait. Given the reversibility of the oxidation process for RuO2 and WO3 thin films, oxygen sensors formed using Ru and W oxide thin films would be fully reusable after reannealing the oxidized film in an inert environment (Ar or N2). In contrast, the irreversible oxidation of Ta to Ta2O5 presents benefits for applications requiring stable coloration, for example as conformal coatings for space applications.
■ CONCLUSIONS
In summary, we realized a platform for structural color filters that can operate at temperatures beyond 1100°C, based on refractory metals and their oxides. We validated the suitability of these materials by determining the changes in their optical properties upon heating treatments in an oxidizing environment. As an example, we demonstrated vibrant hues across a wide portion of the color gamut by submitting Ru, Ta, and W to identical thermal treatments at 600°C. The development of a metal oxide dielectric layer produced interference that led to vivid colors. The refractory dielectric layers required for interference are achieved using in situ oxidation, a reaction that can be reversible or not, depending on the metal and medium. A distinctive aspect of our approach is the promise of these structures at high temperatures: given their high melting points and differing thermochemical behavior, these structures offer tailorable chromaticity and material-dependent reversibility (RuO2, WO3) or static optical behavior (Ta2O5) upon high-temperature treatment in inert environments. Furthermore, our oxide growth method allows for very precise control of the dielectric layer thickness via in situ optical measurements, which can determine the thickness in real time as the oxide layer grows. Overall, these results show the potential of refractory metals for photonics under extreme conditions and how oxidation can be implemented as a powerful route to attain dielectric layers in situ, which can work as optical markers at elevated temperatures.

■ EXPERIMENTAL METHODS

Sample Fabrication. Samples were fabricated via DC magnetron sputtering on a Kurt J. Lesker PVD 200 sputterer. All depositions were performed in an inert environment (Ar). Deposition parameters for each material are shown in Table S3. The metals were deposited onto a standard Si wafer and onto glass as a reference.
In Situ Ellipsometry. In situ ellipsometry results were measured on a J. A. Woollam VASE ellipsometer, with a Linkam RC-2 heating stage providing high-temperature control up to 600°C. Samples were heated from room temperature (25°C) to 600°C with a ramping rate of 3°C min−1, with holds at every 100°C to allow the sample to thermalize and to allow for detailed ellipsometric measurements. Above 400°C, we also stop every 50°C to allow for finer visualization of the high-temperature behavior of the samples. The full temperature profile is shown in Figure S1 in the Supporting Information, along with real-color photographs of each sample before and after high-temperature treatment.
Ex Situ Optical Measurements and Simulations. The ex situ ellipsometry measurements were taken on a J. A. Woollam M-2000 ellipsometer (193−1688 nm). Dielectric functions were determined by fitting the ellipsometric parameters Ψ and Δ with general-oscillator models for both the pure metals and the oxides after high-temperature treatment, using the CompleteEASE software. The individual oscillators used for each model are shown in Table S1 in the Supporting Information, using the standard equations for each given in CompleteEASE. 28 To confirm that the samples were optically thick prior to high-temperature treatment, transmission and reflection data were measured from samples deposited on glass in the same deposition run; transmission measurements on each sample were compared to a straight-through baseline in air. Reflectivity measurements were taken on a J. A. Woollam W-VASE ellipsometer (290−2440 nm). The optical simulations showing the reflection as a function of changing oxide thickness, and the simulated color as a function of changing metal and oxide thicknesses, were performed in CompleteEASE using the thicknesses and dielectric functions determined by ex situ ellipsometry.
X-ray Photoelectron Spectroscopy (XPS). XPS measurements were taken on a Kratos SUPRA Axis XPS with a monochromated Al Kα source (1486.6 eV). The chamber's base pressure was 2 × 10−8 Torr, with a 7 mA emission current and a scan size of 450 × 900 μm. Peaks were fitted using Kratos ESCApe; normalization and Shirley background subtraction were performed after fitting.

■ ASSOCIATED CONTENT
Tabulated dielectric functions for metals and oxides from paper, as well as Mo and MoO 3 (TXT) Real-color photographs of samples, temperature profile of high-temperature treatment, parameters for dielectric functions of metals and oxides, transmission measurements for thin films deposited on glass, experimental Ψ and Δ before and after high-temperature treatment compared to model fit, extended reflectivity plots and reflection maps, extended in situ ellipsometry measurements, sputter deposition parameters, in situ ellipsometry plots for Mo presented as the fourth sample of study (not included in the main text due to lower melting point of MoO 3 ), XPS narrow scan for Mo and MoO 3 , reflection spectra for Mo, reflection and chromaticity simulations for Mo, and extended plots for Mo (PDF) | 2022-12-08T16:18:54.508Z | 2022-12-06T00:00:00.000 | {
"year": 2022,
"sha1": "1b39aa40217503df222d6f1ee3827864a660cc8e",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "3b8f908bc0a6b6fdbb7698797ea172d3e5b49b8f",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
} |
211100934 | pes2o/s2orc | v3-fos-license | Influence of Molecular Design on Radical Spin Multiplicity: Characterisation of BODIPY Dyad and Triad Radical Anions.
A strategy to create organic molecules with high degrees of radical spin multiplicity is reported in which molecular design is correlated with the behaviour of radical anions in a series of BODIPY dyads. Upon reduction of each BODIPY moiety radical anions are formed which are shown to have different spin multiplicities by electron paramagnetic resonance (EPR) spectroscopy and distinct profiles in their cyclic voltammograms and UV-visible spectra. The relationship between structure and multiplicity is demonstrated showing that the balance between singlet, biradical or triplet states in the dyads depends on relative orientation and connectivity of the BODIPY groups. The strategy is applied to the synthesis of a BODIPY triad which adopts an unusual quartet state upon reduction to its radical trianion.
The synthesis of organic radicals has received extensive focus over recent years with diverse interests in areas including redox catalysis, imaging, magnetochemistry and molecular spintronics. 1-3 A wide variety of organic moieties are suitable for the creation of stable multicentred radical species, including phenothiazine, 4 nitroxide, 5,6 aminyl, 7 viologen, 8 dithiazolyl, 9 verdazyl, 10 and macrocyclic polyradicaloids. 11 Many of these systems exist as triplet states; examples of higher spin multiplicities are known but are significantly less common. Such higher spin multiplicities require strong coupling between the spins, typically as a result of a defined arrangement, and this approach has led to the identification of stable quartet 12-17 and quintet 18,19 states in multi-centred organic radicals.
In parallel, main group elements, 20,21 including boron, have been incorporated into free radical manifolds, and this strategy has led us to investigate the use of BODIPY (4,4-difluoro-4-bora-3a,4a-diaza-s-indacene) compounds to create radical species. Our previous investigations of BODIPY molecules 22 reveal that the electrochemical reduction of the BODIPY core leads to the formation of a radical anion (BODIPY•−) which can be characterised by electron paramagnetic resonance (EPR) spectroscopy. BODIPY compounds have been developed for many applications, predominantly based on their fluorescent properties, and as a result the synthetic approaches to functionalised BODIPY species are advanced. 23-25 We anticipated that if molecules with multiple BODIPY components could be found which exhibited spin coupling, leading to higher or unusual spin multiplicities, then the rich chemistry of BODIPY compounds would allow the development of new families of organic radicals.
Synthetic strategies for the formation of BODIPY dyads 26,27 and a small number of triads 28 with various geometries have been reported. Thus, we decided to explore, for the first time, the behaviour of reduced BODIPY dyads and triads and in particular the multiplicity of these radical anion species. In this study we report a series of five BODIPY dyads and a triad in which the relative geometry and spacing of the BODIPY cores is systematically controlled (Fig. 1a). We then studied the electrochemical behaviour of these systems and probed the behaviour of the di-reduced ([1]²⁻, [2]²⁻, [4]²⁻, [5]²⁻, [6]²⁻) or tri-reduced ([3]³⁻) states in order to correlate structure with multiplicity. Our experimental observations, supported by density functional theory (DFT) and second-order multireference perturbation theory calculations, indicate that the formation of singlet, open-shell singlet, triplet and quartet states depends on the molecular architecture.
Results and Discussion
Dipyrromethane precursors for all five BODIPY dyads (1, 2, 4-6) and triad (3) were synthesised according to a modified literature procedure (see SI for details). 29 In all instances a di-aldehyde, or tri-aldehyde for 3, was condensed with pyrrole using trifluoroacetic acid as catalyst to afford the target dipyrromethanes or tripyrromethane in 70-90% yield, with the exception of the durene-bridged dipyrromethane which was isolated in 20% yield. Target BODIPY species were prepared using a modified literature procedure. 30 Dipyrromethanes were oxidised using 2,3-dichloro-5,6-dicyano-1,4-benzoquinone (DDQ) and subsequently reacted with BF3·OEt2 in the presence of a base, N,N-diisopropylethylamine (DIPEA). In all cases low yields were observed for the BODIPY formation reactions, which was attributed to the decomposition of oxidised dipyrromethane species through polymerisation of pyrrole subunits at the alpha position, 31 observed through the formation of a black polymeric residue. It was noted that oxidation time and reaction solvent affected yields significantly. Indeed, shortening the reaction time for the oxidation step from 1 hour to less than 5 minutes increased yields to ca. 5-10% from less than 1%. Furthermore, yields were improved by using anhydrous toluene as solvent rather than CH2Cl2. Syntheses of 1 and 2 in similar yields have been reported previously using InCl3 as catalyst. 26 It should be noted that alternative synthetic strategies have been used to prepare related BODIPY dyad and triad systems in slightly higher yields, replacing the aldehyde precursor with the corresponding acyl chloride. 28 The single crystal structures of 1, 2 and 3 were determined from crystals grown by the slow diffusion of MeOH into solutions of the compounds in CH2Cl2. Attempts to grow single crystals of 4-6 using analogous conditions were unsuccessful. The structures of 1 and 2 have been reported previously 26 and our measurements confirm similar structural arrangements. In 1 the BODIPY moieties are found to be co-planar but sit at an angle of 54.1° with respect to the phenyl linker. In contrast to 1, the BODIPY moieties in 2 are offset, in a propeller conformation with respect to each other across the phenyl linker and at an angle of 52.1° with respect to the phenyl linker. 3 adopts a similar arrangement to 2, with the BODIPY moieties arranged in a propeller configuration across the central phenyl linker (Fig. 1b). The relative orientation of the BODIPY units with respect to each other and to the linking unit in each compound is notable as this may influence communication through the molecule, but it should be noted that crystal structures are not necessarily indicative of solution phase behaviour.
All six compounds exhibited similar fluorescence, typical of BODIPY species functionalised at the meso-position with aryl groups. 22,27,28 Spectra exhibited small Stokes shifts (~20-30 nm) and quantum yields less than 0.03 in all cases except compound 4, which exhibited the smallest Stokes shift, 18 nm, and the longest fluorescence lifetime of 9 ns; this was considered to result from the inhibition of free rotation at the meso position in 4 compared to others in the series. UV-visible spectroscopy is discussed in more detail below, particularly in comparison to the reduced radical species. In order to probe the nature of the reduced states for 1-6, cyclic and square wave voltammetry studies were conducted at room temperature in CH2Cl2 with [nBu4N][BF4] as supporting electrolyte. In each case reversible reduction processes are observed for the compounds, as anticipated for phenyl-substituted BODIPY moieties. 22,26 Whilst each BODIPY unit is expected to undergo a single one-electron reduction, the number of reduction processes observed for 1-6 varies depending upon the geometry and length of the linker, and more particularly the substitution of the linker unit, which affects communication between the redox centres (Table S1). Thus, whilst the para-substituted dyad, 1, displayed a single well resolved reduction (E1/2 = -1.03 V) that showed scan rate dependence (Fig. 2a), the meta-substituted dyad, 2, displayed two overlapping reductions in the cyclic voltammetry (E1/2 = -1.10, -1.22 V) (Fig. 2b) and the triad, 3, displayed three overlapping reductions (E1/2 = -1.02, -1.14, -1.27 V) (Fig. 2c). Where more than one process was observed, potentials were resolved by square wave voltammetry. The observations are consistent with 1 undergoing a single, two-electron reduction with attractive interactions between the added electrons. 32,33 2 exhibits two successive one-electron reductions separated by 0.12 V, and 3 undergoes three successive one-electron reductions, separated by 0.11 and 0.13 V. For both 2 and 3 the separation between the reduction processes suggests limited communication involving repulsive (Coulombic) interactions between added electrons. The cyclic voltammograms for 1 and 2 have been reported previously 26 and our experimental data are in good agreement with the previous study. Interestingly, studies of 4 revealed two closely overlapping reversible reductions which were resolved by square wave voltammetry (-1.22, -1.31 V) (Fig. 2d). The separation of the two reduction processes, in contrast to the single reduction associated with 1, is indicative of repulsive interactions between electrons located on the two terminal BODIPY moieties in 4. Cyclic voltammetry studies of 5 and 6 both revealed single, well resolved reductions (5: E1/2 = -1.17 V; 6: E1/2 = -1.16 V) involving two electrons, similar to the behaviour of 1 but at more negative potentials, consistent with greater electron density donation to the BODIPY centres by the bi- or ter-phenyl linkers. The reduction potentials for 5 and 6 are consistent with that of N,N′-difluoroboryl-5-(phenyl)dipyrrin (E1/2 = -1.19 V), which contains a single BODIPY unit and a phenyl group on the dipyrrin meso-carbon, and indicate that an acceptor orbital of similar nature and energy is common to these species. 22 The shift to a less negative potential for the reduction of 1 is consistent with the electrons entering a more stabilised molecular orbital than those found in 5 and 6 (see below).
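The separations between successive one-electron reductions quantify the degree of communication discussed above: they map onto a comproportionation constant via the textbook relation Kc = exp(nFΔE1/2/RT). The sketch below applies this relation to the separations reported here; it is an illustrative calculation, not an analysis carried out in the original study.

```python
import math

F = 96485.0   # Faraday constant, C mol^-1
R = 8.314     # gas constant, J mol^-1 K^-1
T = 298.0     # room temperature, K

def comproportionation_K(delta_E, n=1):
    """K_c for two successive one-electron reductions separated by delta_E (V)."""
    return math.exp(n * F * delta_E / (R * T))

# Separations between successive reduction waves quoted in the text
for label, dE in [("2 (meta dyad)",        0.12),
                  ("3 (triad, 1st-2nd)",   0.11),
                  ("3 (triad, 2nd-3rd)",   0.13),
                  ("4 (duryl dyad)",       0.09)]:
    print(f"{label}: dE = {dE:.2f} V -> K_c ~ {comproportionation_K(dE):.0f}")
```

Separations of roughly 0.1 V correspond to Kc of order 10² (weakly interacting redox centres), consistent with the limited-communication interpretation above.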
Whilst the cyclic voltammetric profiles of 1, 5 and 6 all appear similar, analysis of the effect of scan rate on the peak separations (ΔE = Epa − Epc) suggests that upon reduction the BODIPY moieties interact differently in each case. Interactions in 1, as discussed above, show a decrease in peak separation at slower scan rates, indicating a two-electron process involving attractive interactions, likely resulting from significant structural change, such as a rearrangement or a large change in solvation upon reduction. For 5 the peak separations are consistently larger than expected and show little dependence on scan rate relative to that observed for ferrocene under these conditions (Table S2). This suggests a regime in which weak Coulombic interactions exist between the added electrons but, unlike 2-4, these are insufficient to result in the resolution of two separate reduction waves, i.e. communication between the redox orbitals is present but weak. Extending the linker from biphenyl in 5 to terphenyl in 6 increases the distance between redox centres and further decreases the extent of interaction (Figs S3, S4). 6 shows peak separations that are comparable to those of ferrocene, suggesting that this increase in distance between redox centres results in little interaction between the added electrons. The nature of the reduced, radical species was probed further by EPR spectroscopy in order to evaluate the influence of molecular structure, and in particular the geometry and length of the spacer separating the BODIPY moieties, upon the spin multiplicity of the radical species (Table 1). The results illustrate that the nature and arrangement of the spacer has a significant effect upon the EPR spectra observed for the different species. All radical dianionic, or trianionic, species gave EPR signals in both fluid and frozen EPR samples. In all cases fluid EPR spectroscopy showed a single resonance (Fig S5) at giso = 2.0026–2.0028; however, analysis of frozen samples gave more complex spectra in a number of cases (Fig 3). Interesting contrasts are observed between the various radicals, with spin multiplicity clearly affected by the geometrical arrangement of the spacer (Fig. 3). The EPR spectrum of [1]²⁻ gives rise to a weak signal in both fluid and frozen samples. When integrated against the stable radical DPPH under the same conditions, the observed spectrum represents less than 2% of the expected spin. Therefore, the observed spectrum that arises from the reduction of 1 is considered to arise from a small amount of a paramagnetic impurity, and [1]²⁻ exists in the EPR-silent closed-shell singlet state. In contrast, the spectrum observed for [2]²⁻ is consistent with an open-shell singlet diradicaloid as a fluid solution, but on cooling to 77 K additional features are resolved in the full-field (g ≈ 2) and half-field (g ≈ 4) regions consistent with a triplet species. The triplet nature of the frozen EPR spectrum of [2]²⁻ was confirmed by simulation of the observed four-line spectrum in the g = 2 region using a zero-field splitting energy (|D/hc|) of 71 × 10⁻⁴ cm⁻¹ and axial geometry (i.e. E = 0) for the ΔMs = ±1 transitions. The presence of a weak half-field signal (g = 4.008), corresponding to the forbidden ΔMs = ±2 transition, helped confirm our assignment of [2]²⁻ as a triplet radical (Fig 3b). The feature in the centre of the spectrum, not reproduced by the simulation, most likely arises from a paramagnetic by-product; such features are often reported in studies of biradical species. 34,35
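To put the fitted zero-field splitting on a concrete field scale, the sketch below converts |D/hc| = 71 × 10⁻⁴ cm⁻¹ into first-order turning-point fields for an axial triplet. The X-band microwave frequency (9.5 GHz) is an assumed value, as it is not quoted here, and a real simulation would use a dedicated package; this is only a back-of-the-envelope check of the expected four-line splitting.

```python
import math

h   = 6.62607e-34   # Planck constant, J s
c   = 2.99792e10    # speed of light in cm/s (converts cm^-1 to J)
muB = 9.27401e-24   # Bohr magneton, J T^-1

g    = 2.0027       # g_iso from the fluid-solution spectra above
D_cm = 71e-4        # fitted |D/hc| for [2]2- in cm^-1
nu   = 9.5e9        # assumed X-band frequency, Hz (not stated in the text)

B0 = h * nu / (g * muB)          # centre field of a g ~ 2 resonance
Dp = (D_cm * h * c) / (g * muB)  # zero-field splitting expressed as a field

# First-order dMs = +/-1 turning points for an axial triplet (E = 0)
print(f"centre field B0         : {B0 * 1e3:6.1f} mT")
print(f"z orientation           : {(B0 - Dp) * 1e3:6.1f} and {(B0 + Dp) * 1e3:6.1f} mT")
print(f"xy orientation          : {(B0 - Dp / 2) * 1e3:6.1f} and {(B0 + Dp / 2) * 1e3:6.1f} mT")
print(f"half-field (dMs = +/-2) : ~{B0 / 2 * 1e3:6.1f} mT")
```

With these numbers the outer (z) lines sit about 15 mT apart and the inner (xy) lines about 8 mT apart, i.e. a four-line pattern of roughly the width implied by the fitted D value.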
Differences between isomers [1]²⁻ and [2]²⁻ indicate that the arrangement of the BODIPY moieties, para vs. meta, is critical in determining the spin multiplicity of the radical anions. The effect of geometry upon EPR multiplicity is further demonstrated by the spectrum of a frozen sample of [3]³⁻, which shows five lines at X-band frequency (Fig. 3c) together with a signal at half field, consistent with, and simulated as, a quartet state with g = 2 (Fig. 3d). This illustrates the influence of conformation, not distance, on magnetic properties, since the duryl spacer restricts rotation with respect to the BODIPY moieties. The longer fluorescence lifetime observed for 4 is perhaps indicative of this, since hindered rotation inhibits non-radiative decay processes. 36 Comparison of [1]²⁻ with [5]²⁻ and [6]²⁻ probes the effect of increasing the separation between the two terminal BODIPY moieties by increasing the length of the spacer from phenyl, [1]²⁻, to biphenyl, [5]²⁻, to terphenyl, [6]²⁻. Strong coupling was observed between electrons in [1]²⁻ to produce a closed-shell singlet state, but cyclic voltammetry, as discussed above, shows that subtle differences exist in the electronic interactions for the dianions in this series. As anticipated, larger separation between the BODIPY moieties, on which reduction is predominantly based, leads to weaker interactions between electron spins in [5]²⁻ (Fig. 3d). Five lines are observed at full field but no half-field signal was detected. We attempted the simulation of this spectrum using individual components from an open-shell singlet diradicaloid (single resonance at the centre of the spectrum) and a triplet (four lines), which leads to a good match. This result suggests more complex spin behaviour for [5]²⁻, although we note that a decomposition product with doublet character, as observed for [1]²⁻, [2]²⁻ and [4]²⁻, may contribute to the EPR spectrum but would appear unusually large if this were the sole origin of the signal. The possibility of both open-shell singlet diradicaloid and triplet states being observed for the same molecule may arise due to subtle changes in conformers in the frozen solution. 37 In contrast to [5]²⁻, [6]²⁻ gives a frozen solution EPR spectrum consisting of what initially appears to be a single resonance (Fig. 3e). However, modelling of this spectrum indicates triplet character at 77 K, although we cannot rule out the presence of a minor component of open-shell singlet diradicaloid character.
In order to understand the behaviour observed for the radical anions we performed DFT calculations (see SI for computational details). The magnetic behaviour is best interpreted through comparison of compounds 1, 2 and 3. The molecular orbitals of the neutral species, 1, were calculated and reveal that the highest four occupied orbitals comprise two pairs of degenerate orbitals which correspond to in-phase and out-of-phase combinations of π molecular orbitals on the BODIPY moieties (Fig 4a). The two lowest unoccupied orbitals are the π* orbitals of the BODIPY moieties and the next two are π* orbitals associated with the central benzene ring linker, with a HOMO-LUMO energy gap of 0.247 a.u. Addition of two electrons to the molecule can result in singlet, open-shell singlet or triplet states. In our discussion singlet refers to the lowest energy closed-shell (all electrons paired) singlet state unless specified otherwise. DFT is a single-reference determinant-based method and is not suited to describe open-shell singlet states, which have multi-determinant character. In order to determine the energy of the open-shell singlet states relative to the singlet and triplet states, second-order multireference perturbation theory calculations have also been performed (see SI for details). For [1]²⁻ the calculations confirm the singlet state to be lower in energy than the triplet state (Table 2). This is consistent with the orbital energy diagram for the neutral form of 1, as there is a reasonably large energy difference between the LUMO and LUMO+1 orbitals (0.013 a.u.); thus, the LUMO+1 orbital is less accessible. The orbital energy diagram for the singlet state of [1]²⁻ (Fig. 4b) confirms that the LUMO from the neutral molecule is now doubly occupied, giving rise to a singlet state. 2 exhibits a similar pattern to that of 1 with a similar HOMO-LUMO energy gap (Fig 5a), although for 2 the orbitals correspond to π orbitals localised on a single BODIPY moiety. As with 1, addition of two electrons can give rise to singlet or triplet states, but in the case of 2 the triplet state is the lowest in energy (Table 2). This can be partially rationalised by the smaller energy difference between the LUMO and LUMO+1 orbitals for 2 (0.003 a.u.) in comparison to 1 (0.013 a.u.). Therefore, the additional stability associated with the parallel-spin electrons compensates for an electron lying in a slightly higher energy orbital. Analysis of the MOs for [2]²⁻ shows that the orbitals associated with the BODIPY moieties are singly occupied (Fig. 5b), whereas the LUMO orbitals in the triplet dianion are associated with the π orbitals of the m-phenyl linker, giving rise to a larger HOMO-LUMO gap (Table 2). The lower energy of the triplet state for [2]²⁻ is consistent with the observed EPR spectrum (Fig. 3b). As 3 undergoes a three-electron reduction process, neither a singlet nor a triplet state is accessible for [3]³⁻. Similar to 1 and 2, the highest molecular orbitals calculated for the neutral form of 3 are energetically close combinations of π molecular orbitals located on the BODIPY moieties. For 3 these three orbitals constitute the HOMO, HOMO-1 and HOMO-2 orbitals (see SI). The three lowest unoccupied virtual orbitals are made up of energetically close combinations of π* molecular orbitals located on the BODIPY moieties.
Addition of three electrons gives [3]³⁻, which can adopt either a doublet or quartet state, the energies for which suggest that the quartet state is more favourable as the ground state for the trianion. 4 differs from 1 in that the central linking phenyl ring in 1 is replaced by a duryl moiety. Similarly to 1, DFT calculations for 4 confirm that the four highest occupied molecular orbitals comprise the almost degenerate in-phase and out-of-phase combinations of π molecular orbitals on the BODIPY moieties, which make up the HOMO and HOMO-1 orbitals (see SI). There are also two π orbitals located on the duryl linker making up the HOMO-2 and HOMO-3 orbitals. The four lowest unoccupied virtual orbitals are analogous to those observed for 1. The presence of tetramethyl substitution on the central phenyl ring gives a calculated HOMO-LUMO energy gap of 0.114 a.u., larger than that calculated for either 1 or 2. In contrast to [1]²⁻, the DFT and multireference perturbation theory calculations for [4]²⁻ indicate that the triplet state is lower in energy than the possible singlet or open-shell singlet states, consistent with the observed EPR spectrum (Fig 3d). For both [5]²⁻ and [6]²⁻, calculations suggest that the triplet state is also lower in energy than the closed-shell singlet state (see SI). However, in the case of [5]²⁻ the EPR spectrum (Fig 3e) shows features consistent with a triplet state and a signal that could be consistent with an open-shell singlet diradicaloid. The multireference perturbation theory calculations for [5]²⁻ and [6]²⁻ indicate small energy gaps, of 6.1 × 10⁻⁴ a.u. and 1.5 × 10⁻⁴ a.u. respectively, between the triplet and open-shell singlet diradicaloid states. We propose that both states are apparent in the frozen solution EPR spectrum of [5]²⁻, whereas for [6]²⁻ the frozen solution EPR spectrum suggests that the triplet state is favoured. Although the calculations suggest that the open-shell singlet is favoured for [6]²⁻, the calculated energy gap to the triplet is smaller in this case than for [5]²⁻. (Figure caption fragment: orbital energy diagrams showing the gaps between occupied and unoccupied orbitals; for the triplet state of [2]²⁻ the α-spin orbital energies are shown.)
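These gaps are small enough to compare directly with thermal energy at the experimental temperatures (77 K for the frozen EPR spectra, 243 K for the spectroelectrochemistry discussed below). A back-of-the-envelope check, assuming a simple two-state Boltzmann picture, supports the idea that both states can be populated:

```python
import math

HARTREE_J = 4.359744e-18   # 1 a.u. (hartree) in joules
kB = 1.380649e-23          # Boltzmann constant, J K^-1

# Triplet / open-shell-singlet gaps from the multireference calculations
gaps_au = {"[5]2-": 6.1e-4, "[6]2-": 1.5e-4}

for label, gap in gaps_au.items():
    E = gap * HARTREE_J
    print(f"{label}: gap = {gap * 27.2114 * 1000:.1f} meV = {gap * 2625.5:.2f} kJ/mol")
    for T in (77.0, 243.0):
        # relative Boltzmann weight of the higher-lying state
        print(f"    exp(-dE/kT) at {T:5.0f} K = {math.exp(-E / (kB * T)):.2f}")
```

Even at 77 K the weight of the higher state comes out at roughly 0.1 to 0.5, so observing contributions from both states in a frozen solution is entirely plausible.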
Spectroelectrochemical measurements were employed to further investigate the behaviour of 1-6 by UV-visible spectroscopy, with spectra recorded whilst reducing the sample in CH2Cl2 at 243 K, with [nBu4N][BF4] as supporting electrolyte. All compounds exhibited a sharp absorption band at approximately 500 nm with a large extinction coefficient of 90,000-120,000 dm³ mol⁻¹ cm⁻¹, featuring a shoulder to higher energy (Fig 6, Table S1), and a broader absorption band at approximately 330-400 nm with a smaller extinction coefficient of 24,000-33,000 dm³ mol⁻¹ cm⁻¹. These bands are consistent with other BODIPY species, the most intense absorption band being associated with S0→S1 transitions and the lower intensity process being due to S0→S2 transitions. 22 Electrochemical reduction with concurrent monitoring of the UV-vis spectrum of each compound revealed some clear trends, with three distinct sets of features observed across the range of radicals generated. Each dianion, or trianion, exhibits two bands at ca. 550 and 500 nm. In all cases these two bands are clearly defined, with the exception of [1]²⁻, for which the bands are not clearly resolved. These features are also observed for the monoradical anion of N,N′-difluoroboryl-5-(phenyl)dipyrrin, indicating a common chromophore is generated across the series, except for [1]²⁻. 21 A further higher energy feature is seen for all radicals at ca. 300-400 nm, although for [1]²⁻ this feature is less well defined and is lower in intensity (Fig 6a, Table S1). In addition, dianions [1]²⁻, [5]²⁻ and [6]²⁻ also display a low energy feature between 600 and 700 nm (Fig 6a, e; Fig S18), as observed for [5]²⁻ in the EPR spectrum. Although the open-shell singlet state of [6]²⁻ is not apparent in the X-band EPR spectrum recorded as a frozen solution at 77 K, we note that the UV-visible spectra were recorded at 243 K, a temperature difference that may perturb the equilibrium between the triplet and open-shell singlet states.
The UV-visible spectrum of [1]²⁻ is quite different from those observed for the other species studied herein, and TDDFT calculations appear to overestimate the intensities of transitions < 400 nm. As the EPR spectrum for this species gave no evidence for either a triplet or open-shell singlet state and is strongly indicative of a closed-shell singlet, we focussed our efforts on this system. TDDFT calculations of the closed-shell singlet state (Fig S17) indicated a low energy band at ca. 900 nm which was not observed experimentally; however, enforcing planarity between the dipyrrin moieties and the linking phenyl ring increased the energy of this band to ca. 750 nm (Fig S19), in better agreement with the observed experimental spectrum. However, bands at higher energy were not modelled well, in particular their lower experimental intensity. Despite the difficulties in modelling the spectrum observed for [1]²⁻, in general the calculations are consistent with the radical states observed by EPR spectroscopy (Table 1).
In this study we demonstrate that it is possible to link BODIPY moieties into dyads and a triad, each of which can be reduced to form radicals with varying spin multiplicities including closed- and open-shell singlet, triplet and quartet species. The diradical species can be considered as BODIPY analogues of the classic Thiele and Chichibabin hydrocarbons, 38 which exist either as a quinoidal or a biradical benzenoidal form, and the Schlenk diradical, which due to the 1,3-substitution at the phenyl linker is restricted to a biradical benzenoid configuration (see SI, Scheme S1). 39 We show that the geometrical arrangement of the linked BODIPYs influences the magnetic behaviour of the reduced BODIPY anions containing multiple BODIPY centres. Significant differences in the electrochemical behaviour, paramagnetism and UV-visible spectra of the reduced dyads, or triad, are observed. Thus, whereas a single reduction process was observed for para-substituted BODIPYs 1, 5 and 6, two processes were observed for the meta-substituted 2 and duryl-linked 4, and three for triad 3. The EPR spectra of the reduced forms reflect the similar redox behaviour of dyads 2 and 4 and triad 3, with the radical anions forming triplet ([2]²⁻, [4]²⁻) or quartet ([3]³⁻) states, consistent with formalisation as bi- (or tri-) radical benzenoids. Our experimental observations are supported by DFT calculations and multireference perturbation theory, which reveal the preferential adoption of triplet, or quartet, states for these three compounds in contrast to [1]²⁻, [5]²⁻ or [6]²⁻. In contrast, EPR-silent [1]²⁻ is consistent with its formalisation as a diamagnetic quinoidal arrangement, and this differs from the behaviour of its paramagnetic tetramethyl-substituted phenyl-bridged analogue, [4]²⁻. Adopting the quinoid/biradical benzenoid formalisation, this difference illustrates the effect of spacer orientation restricting conjugation between redox centres, thus imposing a biradical benzenoidal configuration. The para-substituted phenyl ([1]²⁻), biphenyl ([5]²⁻) and terphenyl ([6]²⁻) bridged compounds are all capable of conjugation (see SI) that extends between BODIPY centres and all exhibit bands lower in energy than 550 nm in their UV-Vis spectra, which would be consistent with this interpretation 11,35,38,40 and would imply a preference for the quinoidal form in this series (see SI). This interpretation is reasonable for EPR-silent [1]²⁻, but both [5]²⁻ and [6]²⁻ are paramagnetic, implying that a biradical benzenoidal configuration is preferred, either as a triplet or open-shell singlet state, as a result of the larger separation between the reduced BODIPY moieties. Hence, by increasing the distance between these BODIPY centres, both triplet states, observed for [5]²⁻ and [6]²⁻, and open-shell singlet states, observed for [5]²⁻, play a significant role in the magnetic behaviour.
Our strategy gives insight into how BODIPY dyads and triads can be reduced to afford radical anions with varying multiplicities and how both molecular geometry and the number of reduced BODIPY groups can lead to systems with high multiplicity. Thus, our study reveals BODIPY species as a fascinating alternative for the formation of radicaloids with controllable spin states.
Supporting Information Available: Full experimental details and additional figures for electrochemistry, spectroelectrochemistry, EPR spectra, DFT and multireference perturbation theory calculations. Details of structural refinement and CIF files for 1, 2 and 3. CCDC 1935057-1935059 contain the supplementary crystallographic data for this paper. These data can be obtained free of charge from the Cambridge Crystallographic Data Centre via www.ccdc.cam.ac.uk/data_request/cif.
Conflicts of interest
There are no conflicts to declare. | 2020-02-06T09:09:32.571Z | 2020-02-13T00:00:00.000 | {
"year": 2020,
"sha1": "a529b1c2d902e6fec01c90acd2f5461c623b3177",
"oa_license": "CCBY",
"oa_url": "https://nottingham-repository.worktribe.com/preview/4051020/PCCP%202020%2022%204429.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "828c264d049bdb4b0e6dd3d46ee04b0bf7f12591",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
40461791 | pes2o/s2orc | v3-fos-license | The Influence of Hospital Admission on Long-Term Medication of Elderly Patients
There is a vast literature on the pharmacokinetics and pharmacodynamics of drug therapy in the elderly but the effectiveness of medication also depends on the patient taking the tablets. Poor drug compliance by the elderly is well recognised [1-3] and the reasons for it are not difficult to understand. The multiple pathology of old age encourages polypharmacy which in turn is associated with poor compliance [4,5]. For this reason most geriatricians claim that they cure as many patients by stopping drugs as by starting medication, although there is, in fact, little evidence that this is true. The elderly also tend to have impaired memory and often live alone, both factors affecting compliance. However, even when an effort has been made to educate the elderly about their drug regime it has been found [6] that not only did the older patients have less knowledge about drugs than their younger counterparts but they also did not benefit from tuition.
There is some evidence [3] that poor drug compliance is not entirely unintentional. It seems that many patients accept prescriptions to please their doctor rather than with the intention of taking the tablets. This, of course, does not apply solely to the elderly and probably accounts for the vast quantities of unwanted drugs recovered from patients' homes [7-9]. Geriatricians take pride in sending patients home on as few drugs as possible, therefore providing the optimal conditions for good compliance. What we do not know is whether the patient, or for that matter the general practitioner, accepts the modified medication. Does the patient continue on the hospital prescribed drugs or does he return to his original prescription? If he does, is this his own or the GP's decision? This study set out to assess the influence of admission to hospital on the long-term medication patterns of a group of elderly people.
Method and Results
Thirty-five patients (10 males and 25 females) discharged from two geriatric units in a large district general (teaching) hospital were included in the study. They were selected by random allocation, the only criterion for inclusion in the study being that they were returning to their own home or that of a relative. Prior to discharge, one of us (Q.A.) explained to the patients the nature of the tablets, the dosage and how long the medication should be taken. The patients were followed up at 2-3 weeks and then again at 6-8 months after discharge from hospital. By the time of the second follow-up, 13 had either died or were no longer living at home. Particular interest was taken in the number of tablets as well as the number of drugs being taken, who dispensed the tablets at home, which drugs had been stopped and why, and whether new drugs had been started. Table 1 shows the number of drugs being taken at different stages of the study. It is of note that although four patients were admitted to hospital without drugs, all the patients were discharged on some form of medication, and by the first follow-up two were again on no medication. As many as one-fifth of the patients were taking five or more drugs at the time of admission. There were still five patients (14 per cent) taking this number at the time of discharge, and the number had increased to eight patients by the first follow-up. Table 2 shows the number of patients receiving individual drugs at different stages and indicates which were started or discontinued. The four commonest drugs stopped on admission to hospital were sedatives, diuretics, analgesics and antibiotics. Diuretics and analgesics were the drugs most commonly being used by patients at the time of admission, and these drugs were stopped in about half of the cases. The drugs most likely to be discontinued were those which had an adverse effect on cerebral function, such as sedatives and anti-nauseants. The drugs most likely to be started in hospital were very similar to those which were stopped, the commonest being diuretics, analgesics and night sedation along with haematinics, digoxin, potassium and purgatives.
Drug Alteration in Hospital
Drugs after Discharge: First Follow-up
Of the patients, 22 (63 per cent) returned home to live alone and an additional seven (20 per cent) lived with an equally elderly spouse. The rest lived with other members of the family. In most cases (80 per cent) the patients were responsible for their own medication, five (14 per cent) had the medication dispensed by a relative living in the home and two were supervised by supporters from outside the home.
By the follow-up visit at 2-3 weeks after discharge 15 drugs had been discontinued: one (digoxin) by the GP, two (antibiotics) on advice from the hospital, but the other 12 by the patients themselves. In no case did it appear that the patient had forgotten to take the tablets. Two were stopped because the patient felt that they were of no use (sedative and purgative) and five because the patient felt that he or she no longer needed them (analgesic, bronchodilator, sedative and antacid), one because of adverse effects (diuretic) and two because they ran out of tablets (diuretic and digoxin). In two cases, both on haematinics, the reason was not clear.
By this first follow-up visit 15 new drugs had been started. Nine of these (diuretic, analgesic, sedative, purgative and antibiotic) were started by the GP, but six (sedative, trinitrin, Sinemet and bronchodilator) were restarted by the patient.
The patients were questioned about the dosage and timing of their drugs. Only 19 (54 per cent) understood all of the drugs, nine (26 per cent) understood some but not all, and five (14 per cent) did not understand any. It is of interest that the two patients with the most tablets (14-16 daily) understood all, while the five who did not understand any were receiving fewer than six tablets a day.
Between those who understood all of the drugs and those who did not, no statistically significant differences (t-test) were found for the age of the patient, mental test score, days in hospital, number of drugs on discharge or the number of tablets being taken.
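For illustration, a comparison of this kind corresponds to an independent-samples t-test. The sketch below uses entirely hypothetical ages, since the patient-level data are not published; the group sizes (19 who understood all of their drugs, 14 who did not) follow the counts above.

```python
import numpy as np
from scipy import stats

# Hypothetical data: the actual patient-level values were not published.
rng = np.random.default_rng(0)
age_understood_all = rng.normal(78, 6, 19)  # 19 patients understood all drugs
age_did_not        = rng.normal(80, 6, 14)  # 9 understood some + 5 understood none

t_stat, p_value = stats.ttest_ind(age_understood_all, age_did_not)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p > 0.05: no significant difference
```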
Drugs at Second Follow-up
By the second follow-up at 6-8 months following discharge, 10 patients had died and a further three were no longer at home. Compared to the first follow-up, 18 new drugs had been started in 13 (59 per cent) of the remaining 22 patients. Of the 18 drugs, 14 had been started by the GP, one by the hospital and three by the patient. All the drugs started by the patient were ones they had been taking at the time of admission but which had subsequently been stopped. Thirteen drugs had been stopped by 10 (45 per cent) of the patients: four by the GP, two by the hospital and seven by the patients. The reasons patients gave for stopping their own drugs were: no benefit in three (two analgesics and one sedative), adverse effects in one (diuretic) and 'no longer needed' in three (haematinic, analgesic and antacid).
Discussion
Drugs are obviously a major part of the management of any patient but there are problems in ensuring that the elderly receive the most appropriate medication and that they take it correctly. Although geriatricians preach the gospel of moderation in medication, and the geriatricians in the study hospital were no exception, it is obviously more difficult than is often thought to achieve this in practice. It is true that 48 per cent of the drugs patients were receiving on admission were discontinued, but they were replaced by an equal number of drugs which were possibly considered more 'appropriate'. The major emphasis was on stopping drugs likely to disturb cerebral function; most of the drugs started were related to the cardiovascular system or anaemia, merely emphasising the reason for admission.
Within three weeks of discharge 12.5 per cent of the drugs the patient was sent home on had been discontinued and an equal number had been started. Doctors played only a part in controlling this change in medication, the patient being responsible for 80 per cent of the discontinued drugs and 40 per cent of the drugs started. As patients are only likely to take drugs from which they feel they benefit, this finding emphasises the need to treat the patient rather than the pathology. A problem might arise if patients stopped taking drugs given for asymptomatic disorders, but we found no evidence of this. Patients were more likely to stop taking the drugs for conditions with obvious symptomatology, e.g. sedatives, purgatives, analgesics and bronchodilators.
Although none of the patients seemed to have stopped drugs because they forgot to take them, a large proportion of patients did not understand the dosage or timing of at least some of their drugs. Whether this can be overcome is debatable. Certainly other studies [6] have shown how difficult it can be to train elderly patients to understand their medication. This difficulty can only be overcome if supervision of medication, either by members of the family or with professional help, is improved. One of the problems is that district nurses are reluctant to visit the patient only to supervise medication, an impossibility when drugs are to be taken several times a day. It would seem obvious that drug therapy should be kept simple, with as few tablets as possible being taken. It is, however, of note that all of those who did not understand any of their drugs were those taking the fewest tablets, and both of those taking the largest number of tablets understood all of them. This may simply indicate that efforts had been made to control the number of tablets in those most at risk of non-compliance.
The high mortality at the late follow-up is an indication of the genuine degree of illness present and probably explains the high medication rate. There continued to be changes in drug medication among the survivors which, even at this late stage, were still largely influenced by the patient. New drugs were mainly prescribed by the GP, although some patients had reverted to drugs they had been taking at the time of admission to hospital. Drugs which were discontinued were as likely to have been stopped by the patient as by doctors.
It appears that geriatricians may not be as successful at decreasing the number of drugs being taken by elderly patients as they might think. Even within a very short period following discharge from hospital, patients may either discontinue or start drugs and therefore much well intentioned modification of medication is of limited value. With such a large amount of drug modification taking place over a relatively short period, it would seem important that hospital doctors and GPs have good methods of communication. It would also seem that closer contact with the patient, possibly by a health visitor, is required if drug compliance is to be achieved. | 2018-04-03T04:50:45.697Z | 1984-10-01T00:00:00.000 | {
"year": 1984,
"sha1": "95be69af6e94626ef8b1013f78238e5bb857eedb",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "34f0df56bb43377a0d9e37caf747cfa19e6e5eb8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
39718044 | pes2o/s2orc | v3-fos-license | Reproductive Outcome Following Hysteroscopic Treatment of Uterine Septum
Background: Septate uterus is the most common uterine anomaly and a cause of miscarriage and infertility. Existing data suggest a better reproductive outcome following hysteroscopic resection of a uterine septum. Objective: The current study was conducted to share our experience with hysteroscopic septum resection, examining reproductive outcome following hysteroscopic treatment of uterine septum and specifically focusing on different treatment protocols after hysteroscopic septum resection. Methods & materials: This study was a cross-sectional study based on secondary data obtained from medical records of infertile women who had undergone transvaginal hysteroscopy and used different treatment protocols after hysteroscopic correction of uterine septum at the Infertility and Reproductive Health Research Center between April 2005 and February 2014. Results: The total number of infertile women who underwent hysteroscopic uterine septoplasty was 106. Hysteroscopic septoplasty resulted in an overall pregnancy rate of 67% and a live birth rate of 57.5%. The pregnancy rate for patients without male-factor infertility was 92.1%. The chi-square test did not reveal any statistically significant difference in side effect, pregnancy, live birth, abortion, preterm delivery or term delivery rates between patients receiving consistent hormone therapy plus IUD insertion and those receiving alternate hormone therapy plus IUD after hysteroscopic metroplasty. Conclusion: The findings of the present study indicated that hysteroscopic septum resection to remove a uterine septum in women with infertility is safe and may be an efficacious procedure. Following hysteroscopic septum resection, both the consistent and the alternate treatment protocols are beneficial in improving the pregnancy rate.
In spite of the comprehensive study of infertility, little data are available concerning the benefit of different treatment protocols after septum resection on fertility outcomes. Our interest was to assess the efficacy of hysteroscopic septum resection on pregnancy rate and the benefit of various postoperative methods in infertile women after septum resection.
MATERIALS AND METHODS
This study was approved by the ethics committee of Babol University of Medical Sciences. A compilation sheet was developed for the present study after obtaining permission from the general director of the Center to inspect the information in the medical records of infertile women. The research design was cross-sectional. The study was based on secondary data from Fatemezahra Infertility and Reproductive Health Research Center. Inclusion criteria were infertile women who received different treatment protocols after hysteroscopic correction of a septate uterus between April 2005 and February 2014. A total of 106 patients were selected and reviewed based on the inclusion criteria; 28 of these had a male factor. The initial diagnosis of intrauterine septum was made by hysterosalpingography (HSG). Women with a septate uterus of variable length (class Va: complete; class Vb: partial) according to the American Fertility Society classification of Müllerian duct anomalies, who agreed to undergo hysteroscopic septoplasty at the infertility center, were included in the study (5, 13, 31-33).
All patients were hospitalized and underwent surgery 2 to 5 days after the menstrual period, in the early proliferative phase. A misoprostol suppository was placed in the posterior vaginal fornix on the night before surgery for cervical ripening, to facilitate an easier and uncomplicated procedure.
After anesthesia was administered, a vaginal speculum was placed after cleaning the external uterine ostium with a gauze soaked in iodine solution, and the surgeon used cervical dilators to dilate the cervical os to 10-10.5 mm. A 3.5-mm mini-hysteroscope (KARL STORZ, Germany) was used for endoscopy to preserve hymen integrity; normal saline was used as the distending medium. Endoscopic vaginal exploration confirmed the presence of one uterine cervix at the open vaginal side, and the uterine distention pressure was set at 150 mm Hg.
An electroresectoscope was inserted to confirm the size, extent and location of the septum. The septum was then incised with a needle electrode, and ultrasonography was applied for monitoring throughout the operation. After hysteroscopy, different regimens were used postoperatively to decrease the formation of intrauterine adhesions in the denuded area of the septal incision. Postoperatively, all patients had an intrauterine device (IUD) inserted to maintain the patency of the uterine cavity and to prevent further adhesions. Antibiotic treatment was not given to the patients. Laparoscopy was also done to rule out a bicornuate uterus. All patients were discharged after regaining consciousness and receiving hormone therapy instructions (26, 29, 33-40).
In 71 women, after septum resection an alternate hormone regimen (first protocol) was used, consisting of conjugated estrogens (Premarin; Wyeth-Ayerst, Montreal, Canada) at a dose of 1.25 mg daily for twenty-five days after surgery; medroxyprogesterone acetate (Provera; Pharmacia and Upjohn, Kalamazoo, USA) at a dose of 10 mg twice per day was added from day sixteen of this period, in combination with the conjugated estrogens at 1.25 mg daily, until day 25 of the cycle. After 25 days, treatment was stopped, menstruation occurred, and after the menstrual period the same treatment cycle was repeated for a further 2 months.
A total of 35 patients received constant hormone therapy (second protocol): conjugated oestrogen at a dose of 1.25 mg twice daily was given for 50 days, with medroxyprogesterone acetate at a dose of 10 mg twice daily for the last 10 days of this period, in combination with the conjugated oestrogen. After withdrawal bleeding, on the second day of menstruation, the IUD was removed. Patients were followed up for pregnancy and delivery for 12 months. If pregnancy did not occur spontaneously after 6 months of treatment, an ART procedure was started at the surgeon's discretion (41-47).
Statistical analysis
Data were analyzed using SPSS version 18.0. Comparisons between the treatment protocols after septum resection and the characteristics of the variables were made using the t-test. A P value of <0.05 was considered statistically significant. Qualitative data are presented as numbers and percentages, and comparisons between groups were made using the Chi-square and Fisher's exact tests.
RESULTS
Out of 106 infertile women with septate uterus who underwent hysteroscopic septum resection, 71 (67%) of the patients received the first protocol after septum resection and 35 (33%) received the second protocol. The mean duration of infertility was 4.5±3.3 years. The mean BMI and menarche age of the subjects were 27±4.3 kg/m² and 12.9±1.0 years, respectively. Table 1 shows some of the characteristics for the different treatment protocols after hysteroscopic septum resection. There was no significant difference in age, menarche age, BMI, occupation, infertility type or duration of infertility between the two treatment protocols. Table 2 presents reproductive outcomes and side effects after septum resection with the different treatment protocols, showing that the pregnancy rate after hysteroscopic septum resection was enhanced: 44 (62.0%) of patients receiving the first treatment protocol had a positive pregnancy outcome, while 27 (77.1%) of patients receiving the second treatment protocol had a positive pregnancy outcome.
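The protocol comparison reported here can be reproduced from the counts above (44/71 pregnancies with the first protocol vs. 27/35 with the second) using a standard chi-square test of independence. The sketch below is illustrative only; it is consistent with the study's finding of no significant difference between protocols.

```python
from scipy.stats import chi2_contingency

#                pregnant  not pregnant
table = [[44, 71 - 44],   # first (alternate) protocol, n = 71
         [27, 35 - 27]]   # second (constant) protocol, n = 35

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")  # p > 0.05 here
```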
DISCUSSION
Most researchers have shown a better reproductive outcome after hysteroscopic resection of the uterine septum in women with septate uterus; however, there is no evidence on the postoperative management of hysteroscopic septum division (27,48). Most authors also reported that hysteroscopic metroplasty in patients with uterine septum improved the pregnancy rate (10,42). In this study, the pregnancy rate and live birth rate after hysteroscopic septum resection in women with septate uterus were high. Two studies found that the pregnancy rate after hysteroscopic metroplasty (around 40%) was lower than our result (67.0%) (3,15,43,49). Other observational studies have also reported similar findings (50).
In a retrospective, matched, controlled study, the role of septate uterus in the reproductive performance of patients requiring in vitro fertilization (IVF) was evaluated. The pregnancy rate before metroplasty was lower than after metroplasty, and the abortion rate was higher. The authors suggest that the presence of a septate uterus may decrease the pregnancy rate and increase the abortion rate after embryo transfer for IVF/ICSI (51). In the present study, 63 patients who had hysteroscopic resection of the uterine septum did not conceive naturally; of these, 28 (44.5%) became pregnant by ART.
While some authors used only estrogen after hysteroscopic metroplasty, reproductive performance still improved significantly (26, 45, 71-73). We found that the intrauterine device and estrogen plus progesterone (HRT) had similar effects on reproductive outcome, and there was no significant difference between the two protocols in pregnancy rate (41).
In our infertility center we used the two treatment protocols described above, selected according to the surgeon's preference. We were able to show that, although the postoperative treatment after septum resection was chosen at the surgeon's discretion, the reproductive outcome was similar with both methods and there were no significant differences in delivery rates. The present findings showed that hysteroscopic resection of uterine septa increases the odds of clinical pregnancy in infertile women, but the evidence is not conclusive at present. Therefore, it is suggested that in women with a septate uterus and a history of infertility, hysteroscopic septoplasty is a safe and efficient procedure resulting in a higher pregnancy rate. However, more randomized controlled trials and prospective studies with sufficient samples and consistent follow-up data are needed, which could provide the highest level of evidence and substantiate the effectiveness of hysteroscopic removal of the uterine septum in infertile women and of the various postoperative treatments. Further research should focus on specific populations with clear indications, to draw reasonable and meaningful conclusions about the outcomes of hysteroscopic metroplasty. Adequate time after the procedure should be allowed so that subjects have ample time to attempt conception and also to give birth, to allow for accurate live-birth rate calculations.
Limitation: Because of some limitations, we did not have access to all surgical reports and therefore lacked detailed data on the diameter of cervical dilatation and intraoperative findings in some cases; we were also not able to calculate the exact time interval between the hysteroscopic intervention and the beginning of the pregnancies. A short interval between hysteroscopic intervention and conception might be an additional risk factor for preterm birth. Second, some of the infertile patients who underwent hysteroscopic septum resection in the course of infertility assessment at our clinic may have conceived naturally after the procedure but were lost to follow-up or turned to another clinic for ART.
CONCLUSION
Treatment following hysteroscopic septum resection, whether the consistent or the alternate protocol, is beneficial in improving the pregnancy rate. There is no meaningful advantage of one postoperative hormone therapy over the other in terms of pregnancy rate. We have shown that hysteroscopic septum resection to remove a uterine septum in women with infertility is safe and may be an efficacious procedure. However, the need remains for larger randomized controlled trials and prospective studies with sufficient samples and consistent follow-up data to address the effectiveness and safety of adjunct therapy with hysteroscopic septum resection. | 2016-05-12T22:15:10.714Z | 2014-12-01T00:00:00.000 | {
"year": 2014,
"sha1": "1e947d807209a144394ab1b469ac54c5924bfeb5",
"oa_license": "CCBYNCSA",
"oa_url": "https://europepmc.org/articles/pmc4314157?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "1e947d807209a144394ab1b469ac54c5924bfeb5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
53491857 | pes2o/s2orc | v3-fos-license | Factors Influencing the Strategic Implementation of Change Management in the Devolved Public Health Services in Kenya: A Case of Nakuru Provincial General Hospital
Many organizations, including those in the public health sector, are experiencing and managing change which may be either planned or emergent. The performance of Nakuru Provincial General Hospital has been criticised, particularly in the wake of go-slows and strikes by the health labour force in recent times. The broad objective of the study was to assess the factors influencing the strategic implementation of devolved health services in Nakuru Provincial General Hospital. The study specifically examined the effect of budgetary support and health policy on strategic implementation of health services. The study was guided by Ansoffian theory. The study adopted a cross-sectional survey research design. The target population was the 736 employees working with Nakuru Provincial General Hospital. A sample of 89 respondents was drawn from the target population using the stratified random sampling method. Primary data were collected using a structured questionnaire. The instrument was pilot tested before its use to collect data for the main study. The study assessed both validity and reliability of the instrument. The collected data were processed and analyzed with the aid of the Statistical Package for the Social Sciences (SPSS) software. The data were analyzed using both descriptive and inferential statistics. The results (β = 1.093, p < 0.05) suggested that budgetary support has a positive significant effect on strategic implementation of health services. Health policy had a significant positive effect on strategic implementation of health services (β = 0.431, p < 0.05). It was concluded that budgetary support and health policy determine strategic implementation of devolved healthcare services.
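For illustration, effects of this form are typically estimated with a multiple regression of the implementation score on the two predictors. The sketch below uses synthetic data, since the questionnaire responses are not available; the variable names and Likert-style scales are assumptions, and the fitted coefficients will not reproduce the reported betas exactly.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic illustration only: raw survey data from the study are unavailable.
rng = np.random.default_rng(1)
n = 89                                  # sample size used in the study
budget = rng.normal(3.5, 0.8, n)        # assumed Likert composite: budgetary support
policy = rng.normal(3.2, 0.9, n)        # assumed Likert composite: health policy
impl = 1.0 + 1.093 * budget + 0.431 * policy + rng.normal(0, 0.5, n)

X = sm.add_constant(np.column_stack([budget, policy]))
result = sm.OLS(impl, X).fit()
print(result.params)   # intercept and the two slope estimates
print(result.pvalues)  # significance of each coefficient
```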
Introduction
All over the world, there is pressure on the government of the day to offer more efficient, effective and satisfactory health services to its citizenry. Reform initiatives have swept through governments, bringing efforts to reinvent, transform, or reform government health agencies [2]. In the developing world, change management has also emerged in all spheres of the public sector. Health sector change initiatives have been run under health reform programs. Africa in particular has had a turbulent change environment that has shaped the change management process in the health sector over the past half century or so.
The government of Kenya approved the Kenya Health Policy Framework (KHPF) as a roadmap for developing and managing health facilities [16]. The framework outlines the long-term strategic imperatives and the agenda for Kenya's health sector. In Kenya, there has been a series of major health sector reforms over the last three decades based on the principles of decentralization, community participation and intersectoral collaboration.
In 2010, a new constitution was promulgated through a nationwide public referendum. The new constitution provides for the devolution of some government functions from the national level to semi-autonomous counties countrywide. These are managed by elected county leaders. Counties have the authority to set priorities and allocate resources received from the national level, levy local taxes and undertake other forms of local resource mobilization to strengthen service provision. This initiative has significantly changed government operations across the devolved sectors, including health in the public sector. The new constitution also created a maximum number of ministries for the country, and therefore the coordination of health services has reverted to one Ministry of Health. The government has also pledged to abolish the current user fee policy, making services free in health centres and dispensaries, and to introduce free maternity care throughout the health system, although if, when and how this will happen remains unclear.
These health sector governance changes have important implications for the design, implementation and impact of service delivery to the Kenyan citizenry. These changes have also come with daunting challenges. A review of implementation experience to date should assist in future planning of the changes envisioned in the health sector, including identification of key factors affecting strategic implementation in the public health sector.
Nakuru Provincial General Hospital is a level V public hospital in Kenya. The hospital acts as a referral facility for level IV hospitals. It provides specialized and general care to patients. The hospital is charged with the provision of health services in Nakuru County and its environs. In addition, it receives referrals of critical patients from the nearby county hospitals.
Statement of the Problem
Many organizations, including the public health sector, are experiencing and managing change which may be either planned or emergent. How effectively change is managed highly determines the strategic implementation of these institutions. There have been efforts since independence to make the health system more efficient, effective and cost-friendly to Kenyan citizens. This has culminated in various strategies being formulated and implemented, including devolution of the health sector. Devolution of health system structure and management has been and continues to be a key issue for many countries in the achievement of health for all, and in the development of primary health care.
Achieving integrative health care services is a key policy objective of Kenyan devolved governance in the health sector and is intended to reduce the frustration, delay, inefficiency, and gaps that frequently existed in the previously centralized health system management. The health system has had long-standing problems, originating in the way policy has been made; in the way different services are funded, planned, and managed; in weaknesses in budgetary and information systems; in communication failures; and in organizational and individual behaviours. Central to the creation of a health care system is the devolved authorities' ability to use these governance tools to rationalize, integrate and coordinate previously autonomous and sometimes competing services.
Kenya has operated a devolved healthcare system since county governments came into power on 4 March 2013; however, very little has been done to establish the factors that influence its implementation, more so in Nakuru County. Health staff unrest has been witnessed since the advent of county governance, affecting service delivery, posing health risks to residents, and scaring away potential investors. Both the national and county governments, together with the various development stakeholders, have paid little attention to this situation, despite the fact that, if it remains unchecked, it could jeopardize service delivery. It was against that backdrop that this study was conceived, so as to fill the knowledge gap.
General Objective of the Study
To assess the factors influencing the strategic implementation of devolved health services in Nakuru Provincial General Hospital
Conceptual Framework
A conceptual framework is a diagrammatic representation of study variables and how they relate. This is shown in Figure 1.
Figure 1: Conceptual Framework
Figure 1 clearly indicates two sets of variables: independent and dependent. The independent variables are County Government budgets and County Government health policy, while strategic implementation is the dependent variable. It was hypothesized that the aforementioned independent variables affect the strategic implementation of public health facilities, particularly Nakuru Provincial General Hospital.
Literature Review
Theories and empirical studies touching on factors influencing strategic implementation are reviewed.
Theoretical Review
Theories of strategic implementation are reviewed and discussed in the context of health facilities. The study reviews the Ansoffian theory, pioneered by Ansoff, who was born in 1918 and died in 2002. He was an applied mathematician and business manager whose mathematical foundation and acumen enabled him to analyze strategic management techniques.
The theory has widely been employed to explain optimal strategic implementation position [7]. The author stated that the implementation of components of Ansoff's strategic success paradigm has proven to enhance an organization's probability of strategic success.
The Ansoffian theory was premised on balancing the external characteristics of the product-market strategy and creating an internal fit between strategy and organization resources [1]. In the health sector's context a particular hospital, say Nakuru Provincial General Hospital, may seek to evaluate resources at its disposal in order to address the ever rising demands of patients. The external characteristics in this perspective, include the socio-economic, geopolitical, and health situations that have exacerbated the vulnerability of people to injuries and diseases. This means that the demand for health services will ever be on a rising trajectory. Needless to say, the foregoing calls for an assessment of resources at the hospital's disposal to address these health needs.
It is noted that Ansoff's theory divided the environment into two large categories which are historic and discontinuous. Historic perspective indicates that decisions about the future are founded on past and present events that can be extrapolated into the future. This implies that change is incremental, predictable and visible. On the other hand, discontinuous environments indicate that the future is partially visible and predictable, therefore, change is possible by employing weak signals from the environment. More so, the future could be absolutely unpredictable and invisible which interpretatively means that changes are based on building scenarios utilizing weak environmental signals [7]. The health situation is ever changing which means that the health sector should also be dynamic. It is the responsibility of the sector's management to put in place and implement strategies that would enable the sector to address not only present but also future predictable and unpredictable eventualities.
Empirical Review
This section covers a review of empirical studies that have hitherto been carried out in respect of strategic implementation particularly in the public health sector. Specifically, the study puts into perspective previous findings on the themes of County Government budgets and health policy.
County Government Budgets
A study on the impact of budget participation on managerial performance via organizational commitment among the top 500 Turkish firms revealed that budget participation and organizational commitment had a significant influence on managerial performance [15]. Budget participation was noted to improve management performance in the surveyed firms. It was further deduced that high interaction between budget participation and organizational commitment provided an appropriate environment under which managerial performance would essentially be improved.
An assessment conducted in Yolo County in 2012 by Government Finance Officers Association found that county budgets that embraced public participation provided a means by which performance would be improved. Public participation was meant to increase knowledge about the budget and fiscal situation among the public in order to have a more informed and responsive local budget. Another study found that some local governments prioritize the most critical goals by reviewing their strategic and tactical plans while budgeting [5]. This priority-based budgeting was noted to increase quality and service delivery among cities such as San Jose, California and Florida.
More so, a study [22] looked into the budgeting strategies in selected polytechnic libraries in Nigeria. The author observed that for any organization to perform creditably, the budget and the budgeting process should facilitate effective utilization of available funds, improve decision making, and provide a benchmark to measure and control performance, in addition to increasing communication within the organization and establishing understanding between managers about goals and objectives. Enterprises make plans using budgets in a systematic or unsystematic way while having some form of budgetary control and budgetary control practices. In a study of budgeting control practices by commercial airlines operating at Wilson Airport, it was noted that budgets formed a platform for business performance evaluation. The author further noted that the challenges faced by the airlines were budget evaluation deficiencies, lack of full participation of all individuals in the preparation of budgets, and lack of management support. These were noted to have a negative impact on the performance of the business.
Another study examined the effects of budgets on the financial performance of manufacturing companies in Nairobi County [19]. Although the study did not show how budgets influenced the strategic implementation of the companies, it revealed that budgets indeed influenced the companies' financial performance. The study recommended that the county should effectively implement the budget through capacity building, robust systems and processes, prioritization, close monitoring and evaluation, and stakeholder engagement. In addition, best financial management practices should be enhanced and a strong link established between planning and budget processes to ensure prudent management of funds.
County Government Health Policy
It is argued that health policy entails the course of action or inaction that affects institutions, organizations, services and funding arrangements of the health system, and includes both the policies made by the government and those of the private sector [4]. On the other hand, policy has been observed as a process of decision making rather than the output of that process [9]. While looking into the health systems in low- and middle-income countries, it was noted that health policy actors must negotiate and engage with a range of other actors at national and international levels and outside the national health system in order to enhance health system development [21].
An empirical study delved into the diagnosis of health policy in poor countries [6]. The authors focused on the evidence showing weak links in the chain between government spending for services to improve health and actual improvements in health status. They noted that institutional capacity was vital for the provision of effective health services, since lack or inadequacy of that capacity may lead to below-par provision of health services. According to the World Health Organization [25], global health policies have been put in place to deal with all types of diseases through health systems, surveillance, treatment, and working with national governments to promote global health. In addition, the organization, in collaboration with the world community, has been improving ways to curb the key health threats. It is further noted that the organization, in conjunction with the international health regulations, has promoted cooperation between developed and developing countries on emerging health issues of global significance [24].
It is posited that in 2005, the regional East African community health policy initiative was established. This policy aimed at accessing, synthesizing, packaging and communicating evidence required for policy and practice. In addition, the policy was aimed to influence policy-relevant research agendas for improved population health and health equity. Further, the policy sought to improve people's health and health equity in the East Africa region through the effective utilization and implementation of knowledge to enhance health policy and practice. With the collaboration with the respective governments of East Africa, WHO enabled health promotion among the population in addition to countries establishing policies to improve health [24].
In a review of the health policies and the new constitution for Vision 2030, it is noted that the Kenya Health Policy 2012 offered guidelines to ensure the improvement of health status in Kenya [16]. The policy was noted to emphasize the Kenyan health sector's obligation, under the supervision of the government, to ensure that the country attains the highest possible standards of health. The Kenya Health Policy 2014-2030, as a progression of the Kenya Health Policy 2012, seeks to halt and reverse the rising burden of non-communicable conditions, provide essential healthcare, and strengthen collaboration with the private and other health-related sectors, with the main goal of attaining the highest standard of health in a manner responsive to the needs of the Kenyan population.
Strategic Implementation
A report prepared on strategies for improving health care delivery provides that measuring and improving organizational performance is difficult because of the diversity and dynamism of such organizations [3]. The report therefore provides that any strategy an organization chooses should be informed by the identified root causes of the problem, the implementation capabilities of the organization, and the environmental conditions faced by the organization. In addition, the report identified the intermediate outcomes that lead to organizational performance. These included the quality of service provided, efficiency in the provision of services, utilization (the volume of services delivered or clients served), and sustainability in providing and delivering needed and valued services.
A study revealed that it is imperative for policy makers to formulate suitable strategies to ensure service quality of the health care centres in India [14]. In addition, in order to improve the operational performance of the Health care centres, systematic mechanisms for supervision, monitoring and review of the functioning of the health care centres should be put in place. Further, administrative system should be instituted to ensure optimal utilization of the available resources and improve the service quality of the health care centres.
Another study examined comparatively how democracy improved health in Ghana and Senegal [8]. The study found that Ghana experienced greater improvements in skilled attendance at birth, childhood immunization, treatment of children with various ailments, and reduction in the infant mortality rate. The improvements were attributed to the adoption of the national health insurance scheme and universal health coverage. Another study estimated the technical efficiency and productivity of sampled hospitals in South Africa; it found that the hospitals operated at non-optimal scale and with decreasing returns to scale, suggesting that they were technically inefficient [26].
In a study of the determinants of public health care expenditure in Kenya, it was noted that financing health care was crucial to the performance of the health sector [20]. Another study also noted that health care financing was a key determinant of health system performance [18]. This was because the financing would provide the necessary resources and incentives for running the health systems in the country. It was further argued that knowledge of health care financing would inform government policies by providing a closer look at the effects of those policies on health care delivery systems and the overall standards of a country. It was found that there was a recognizable increase in efficiency among the surveyed hospitals as a result of the major reforms carried out by the Ministry of Health in Kenya. However, the study recommended that more efforts by the ministry should be channelled towards reducing inefficiency in service provision, in addition to maintaining a central database to facilitate measurement of efficiency and upgrade service quality in areas found deficient [12].
Moreover, a study on the factors affecting provision of service quality in the public health sector in Kenya was carried out [23]. The study focused on employee capability, communication and financial resources at Kenyatta National Hospital. The study found that lack of flexibility and budgetary autonomy, and lack of performance-based incentives, led to poor health outcomes and inefficiency. Underfunding of public centres, coupled with a weak health system, was noted to affect delivery of quality service in the hospital. The study concluded that fixed budgets in hospitals led to failure to respond to emergencies, while centralized budgets contributed to technical inefficiency by preventing staff from optimizing the deployment of inputs, thereby leading to poor service quality in the hospitals. The study recommended that delivery of quality health services could be improved through effective allocation of financial resources in the public sector in order to promote functions that contribute to service delivery and reduce bureaucracy in financial management.
Research Methodology
Research methodology outlines the procedure that was followed to arrive at findings that conformed to study objective. It outlines the research design, target population, sample and sampling technique, data collection instrument, data collection procedure, data analysis and presentation.
Research Design
A research design is the roadmap for conducting a research study [13]. The study adopted a cross-sectional survey design. The choice of this design is based on the argument that most studies undertaken for academic purposes are time-constrained, and therefore a cross-sectional study would be feasible for this purpose. Moreover, respondents cutting across various departments of Nakuru Provincial General Hospital participated in the study.
Target Population
The target population is simply the population to which the study findings will be generalized. The target population constituted the 736 employees working with Nakuru Provincial General Hospital.
Sampling Frame
A sampling frame is defined as a list of the target population from which the sample is selected; for a cross-sectional survey, a sampling frame usually consists of a finite population. The sampling frame for this study consisted of all departments in Nakuru Provincial General Hospital. This is as captured in Table 1.
Sample Size and Sampling Technique
A sample is a subset of the study population. In other words, a sample is extracted from the target population. In the context of this study, sampling is necessitated by the fact that, the population is large and it would be, needless to say, constraining in terms of financial and time resources to include all members of the target population in the study.
The study employed Nassiuma's formula to derive the sample size [17]: n = NC² / (C² + (N − 1)e²), where N = 736 is the target population, C = 0.5 is the coefficient of variation, and e = 0.05 is the margin of error. Substituting these values gives n = 736(0.5)² / (0.5² + (736 − 1)(0.05)²) = 88.14, rounded up to 89 respondents. The sampled respondents were drawn from the target population using the stratified random sampling method, since there are different departments (strata) from which the respondents were drawn. As such, this sampling method ensured fair and equitable distribution of respondents.
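For illustration, a minimal Python sketch of this sample-size computation (the function name is ours; C = 0.5 and e = 0.05 are the coefficient of variation and margin of error assumed above):

import math

def nassiuma_sample_size(N, C=0.5, e=0.05):
    # Nassiuma's formula: n = N*C^2 / (C^2 + (N - 1)*e^2)
    return N * C**2 / (C**2 + (N - 1) * e**2)

n = nassiuma_sample_size(736)
print(round(n, 2), math.ceil(n))  # 88.14 and 89 respondents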
Data Collection Instrument
Data were collected using a structured questionnaire. The questionnaire was preferred because the use of questionnaires provides a platform for each sampled respondent to be asked to respond to the same set of questions. In doing this, it provided an efficient way of collecting responses prior to requisite quantitative analysis.
Pilot Testing
Pilot testing is essential in that it offers an opportunity of detecting any probable weaknesses in the research instrument. A pilot study was conducted among a small sample of employees working with Naivasha General Hospital who were selected randomly. The data collected in the pilot study were analyzed with the object of determining both reliability and validity of the research questionnaire used in the final study.
Validity is posited to be the extent to which the interpretations of the results of a test are warranted. The foregoing is argued to depend on the specific use the test is intended to serve [11]. The study sought to determine the content validity by liaising with assigned University supervisors.
Reliability is asserted to be a measure of how consistently an instrument can collect similar data when administered to different populations and/or at different times. Reliability estimates are used to evaluate the stability of measures administered at different times to the same individuals or using the same standard. Reliability coefficients range from 0.00 to 1.00 with higher coefficients indicating higher levels of reliability. The study used the Cronbach alpha to assess reliability of the research instrument. The reliability threshold was alpha ≥ 0.7. The three study constructs returned alpha values equal to 0.781, 0.764, and 0.810 respectively.
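As an illustration of this reliability check, a minimal Python sketch of the Cronbach's alpha computation (the input array of item scores is hypothetical; the study's actual questionnaire items are not reproduced here):

import numpy as np

def cronbach_alpha(item_scores):
    # item_scores: (n_respondents x k_items) array of Likert-scale responses
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)          # variance of each item
    total_var = x.sum(axis=1).var(ddof=1)      # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)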
Data Collection Procedure
The data were collected using a structured questionnaire. Relevant consents from Kenya Methodist University and the management of Nakuru Provincial General Hospital were sought prior to collecting data from the sampled respondents. The questionnaires were issued to the respondents through their respective heads of department. The filled questionnaires were collected after approximately five working days from their date of issuance. The collected data were processed and analyzed using the Statistical Package for Social Sciences (SPSS) software. The raw data were edited and coded before being analyzed with the aid of SPSS for descriptive and inferential statistics. The findings of the study were presented in tables of frequencies, percentages, descriptive statistics and inferential statistics. The study sought to show the influence of various factors (County Government budgets and County Government health policy) on strategic implementation. This was captured by the following econometric model: Y = β0 + β1X1 + β2X2 + ε, where Y = strategic implementation of health care delivery, X1 = budgetary support, and X2 = health policy.
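The analysis itself was run in SPSS; purely as an illustration of the same model, a minimal Python sketch using statsmodels (the file name and column names below are hypothetical stand-ins for the per-respondent composite questionnaire scores):

import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("questionnaire_scores.csv")  # hypothetical composite scores per respondent
X = sm.add_constant(df[["budgetary_support", "health_policy"]])  # X1, X2 plus intercept
fit = sm.OLS(df["strategic_implementation"], X).fit()
print(fit.summary())  # beta coefficients and p-values, analogous to Table 3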
Research Finding
The data collected regarding budgetary support, health policy, and strategic implementation were analyzed using correlation and regression analysis. The first section presents the response rate, followed by the correlation and regression results.
Response Rate
Out of the targeted 89 respondents, 54 successfully filled the questionnaires. This was an equivalent of 61% response rate. The response rate was considered adequate.
Correlation Analysis
Correlation analysis was done to determine relationships between the study variables. The Pearson product-moment correlation coefficient was used. The results of the correlation analysis are presented in Table 2. A correlation coefficient greater than 0.9 indicates problems of multicollinearity. Since the highest correlation coefficient was 0.719, which is less than 0.9, there was no serious problem of multicollinearity. The results presented in Table 2 show a strong significant positive correlation between budgetary support and strategic implementation (r = 0.666; p < 0.01). There was also a significant strong positive correlation between health policy and strategic implementation (r = 0.719; p < 0.01). Therefore, it could be deduced that budgetary support and health policy were some of the most fundamental factors affecting strategic implementation of devolved healthcare services in Nakuru General Hospital.
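A minimal sketch of the corresponding Pearson correlation and multicollinearity screen (using the same hypothetical data file and column names as in the earlier regression sketch):

import pandas as pd

df = pd.read_csv("questionnaire_scores.csv")
cols = ["budgetary_support", "health_policy", "strategic_implementation"]
corr = df[cols].corr(method="pearson")
print(corr.round(3))
# the study treats |r| > 0.9 between predictors as a multicollinearity warning
r = abs(corr.loc["budgetary_support", "health_policy"])
print("multicollinearity concern" if r > 0.9 else "no serious multicollinearity")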
Regression Analysis
The effect of budgetary support and health policy on strategic implementation of devolved healthcare services was tested using multiple regression analysis. The results (β = 1.093, p < 0.05) suggested that budgetary support has a positive significant effect on strategic implementation of health services in Nakuru Provincial General Hospital. Hence hypothesis H01 was rejected. The results suggest that as the level of budgetary support increases, so does the level of strategic implementation of health services. The results further indicated that health policy had a significant positive effect on strategic implementation of health services in Nakuru Provincial General Hospital (β = 0.431, p < 0.05). Therefore, hypothesis H02 was rejected. The results suggested that as the level of health policy increased, strategic implementation of health services also increased. The multiple regression results are summarized in Table 3.
Summary, Conclusions and Recommendations
The study has summarized the research findings, drawn relevant conclusions and then suggested recommendations. All these aspects are in line with the objective of the study.
Summary
The study revealed that budgetary support has a positive significant effect on strategic implementation of health services in Nakuru Provincial General Hospital. These results support earlier findings that organizations need to adequately fund devolved health functions for effective strategic implementation of devolved healthcare services. It was further indicated that health policy had a significant positive effect on strategic implementation of health services in Nakuru Provincial General Hospital. Health policy acts as the framework for the operationalization of devolution of health functions. The findings mirror a previous study that found a relationship between health policy and strategic implementation of devolved healthcare services.
Conclusions
The study concluded that there exists a positive and significant correlation between budgetary support and health policy on one hand, and strategic implementation of health services on the other. Therefore, managers should consider the aforementioned factors so as to enhance the strategic implementation of health services in the health sector. This is important because, if these factors were to be overlooked, the strategic implementation of health services would likely be undermined.
Recommendations
The study recommends that county government should prioritize budgetary support and health policy as one crucial way of enhancing strategic implementation of health services within their respective jurisdictions. The public health managers are advised to work in liaison with county government administration in order to bolster delivery of health services.
"year": 2016,
"sha1": "e6d08b68e9338d9fd8c5e16868eb60636943fd45",
"oa_license": null,
"oa_url": "https://doi.org/10.21275/v5i4.nov162844",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "c222c1432afebee7005baddeefeb7f057cd63e84",
"s2fieldsofstudy": [
"Medicine",
"Business"
],
"extfieldsofstudy": []
} |
Simultaneous monitoring of stable oxygen isotope composition in water vapour and precipitation over the central Tibetan Plateau
This study investigated daily δ18O variations of water vapour (δ18Ov) and precipitation (δ18Op) simultaneously at Nagqu on the central Tibetan Plateau for the first time. Data show that the δ18O tendencies of water vapour coincide strongly with those of associated precipitation. The δ18O values of precipitation affect those of water vapour not only on the same day, but also for the following several days. In comparison, the δ18O values of local water vapour may only partly contribute to those of precipitation. During the entire sampling period, the variations of δ18Ov and δ18Op at Nagqu did not appear dependent on temperature, but did seem significantly dependent on the joint contributions of relative humidity, pressure, and precipitation amount. In addition, the δ18O changes in water vapour and precipitation can be used to diagnose different moisture sources, especially the influences of the Indian monsoon and convection. Moreover, intense activities of the Indian monsoon and convection may cause the relative enrichment of δ18Op relative to δ18Ov at Nagqu (on the central Tibetan Plateau) to differ from that at other stations on the northern Tibetan Plateau. These results indicate that the effects of different moisture sources, including the Indian monsoon and convection currents, need to be considered when attempting to interpret paleoclimatic records on the central Tibetan Plateau.
Introduction
The Tibetan Plateau is a natural laboratory for studying the influences of different moisture sources, which include polar air masses from the Arctic, continental air masses from cen-tral Asia, and maritime air masses from the Indian and Pacific Oceans (Bryson, 1986), and for reconstructing paleoclimate variations (An et al., 2001).The stable oxygen isotope (δ 18 O) provides an important tracer for understanding atmospheric moisture cycling, especially by using the δ 18 O records in all three phases of water (Dansgaard, 1964;Lee et al., 2005).Oxygen isotopes also act as important indicators for reconstructing paleoclimates by using their records preserved in ice cores (Thompson et al., 2000), speleothems (Cai et al., 2010), tree rings (Treydte et al., 2006;X. Liu et al., 2014), and lake sediments (Zech et al., 2014).Variations of δ 18 O result from different isotope fractionation processes that may be influenced by temperature, humidity, and vapour pressure (Dansgaard, 1964;Jouzel and Merlivat, 1984;Rozanski et al., 1992), and from different moisture sources (Breitenbach et al., 2010;Pang et al., 2014).
To better understand atmospheric moisture transport to the Tibetan Plateau and surrounding regions, the Chinese Academy of Sciences (CAS) established an observation network in 1991 to continually survey δ 18 O variations in precipitation on the plateau (the Tibetan Plateau Network of Isotopes in Precipitation, TNIP) (Tian et al., 2001;Yu et al., 2008;Yao et al., 2013).Previous studies have shown that δ 18 O variations in precipitation on the southern Tibetan Plateau differ distinctly from those on the northern Tibetan Plateau (Tian et al., 2003;Yu et al., 2008;Yao et al., 2013).In addition, many scientists have investigated the roles of various climatic factors, especially the Asian monsoon's influence on δ 18 O in precipitation (Aizen et al., 1996;Araguás-Araguás et al., 1998;Posmentier et al., 2004;Vuille et al., 2005;J. Liu et al., 2014;Yu et al., 2014a).Recent stud-ies have also investigated δ 18 O in river water (Bershaw et al., 2012), lake water (Yuan et al., 2011), and plant water (Zhao et al., 2011;Yu et al., 2014b).In comparison, only a few studies have focused on δ 18 O from water vapour over the Tibetan Plateau (Yatagai et al., 2004;Yu et al., 2005;Kurita and Yamada, 2008;Yin et al., 2008).Moreover, a gap exists in the studies regarding the relationship between δ 18 O of water vapour and of precipitation, and on the relative enrichment of δ 18 O from precipitation relative to that from water vapour over the Tibetan Plateau (In this study, the "relative enrichment" was defined as the difference of the δ 18 O values of precipitation (δ 18 O p ) and vapour (δ 18 O v ), δ 18 O = δ 18 O p − δ 18 O v ).An improved understanding of δ 18 O as tracers of water movement in the atmosphere and as indicators of climate change requires detailed knowledge of the isotopic compositions in all three phases of water (Lee et al., 2005).In contrast to liquid or solid precipitation, measurements of δ 18 O in water vapour can be taken across different seasons and synoptic situations, and are not limited to rainy days (Angert et al., 2008).Hence, δ 18 O in water vapour has become an important topic in the fields of paleoclimatology, hydrology (Iannone et al., 2010), and ecology (Lai et al., 2006), especially for understanding different moisture sources in order to describe different patterns of circulation and to evaluate water resources.
With this background, we launched a project in the summers of 2004 and 2005 to collect simultaneous water vapour and precipitation samples at Nagqu (31 • 29 N, 92 • 04 E, 4508 m a.s.l.) on the central Tibetan Plateau (the first such study), despite the difficultly of collecting water vapour samples at this high elevation.Based on the δ 18 O data sets from these samples, this paper discusses the relationship between δ 18 O from water vapour and from precipitation, considers the effects of various meteorological parameters on the δ 18 O of water vapour and precipitation, and attempts to explain the relationships between the isotopic compositions of samples and moisture sources.
Sampling sites, materials, and methods
The Nagqu station lies in the middle of a short grass prairie, in a sub-frigid, semi-humid climate zone between the Tanggula and Nyainqentanglha mountains (Fig. 1).The annual average temperature at this station was recorded as −2 • C, with an annual mean relative humidity of 50 %, and average annual precipitation of 420 mm.Most of the rainfall at this site occurred during May through August and accounted for about 77 % of the annual precipitation.
This study collected water vapour samples at Nagqu during the periods of August-October 2004 and July-September 2005.Based on an earlier study, if the condensation temperature falls below −70 • C, the sampling method diminishes the correction factor (−0.07 ‰) to below the typical error value quoted for 18 O analyses by modern mass spectrometers (Schoch-Fischer et al., 1984).Our study extracted water vapour cryogenically from the air, by pumping it slowly through a glass trap immersed in ethanol, which was continuously maintained at a temperature as low as −70 • C with a set of electric cryogenic coolers driven by a compressor (Yu et al., 2005).Thus the captured water vapour should precisely reflect the water vapour in the atmosphere and minimize fractionation during the sampling.Moreover, the cold trap was made in a linked-ball shape to increase the surface area for condensation (Hübner et al., 1979), and to ensure complete removal of all the water vapour, in order to avoid isotope fractionation during sampling (Gat et al., 2003).In addition, the validity of the cold trap operation was rechecked by connecting an extra glass trap to the outlet of the original trap.No visible condensed vapour was found within, reconfirming the validity of the water vapour sampling method.A flow meter controlled the air flow rate.For about 24 h, air was drawn at a rate of about 5 L min −1 (Gat et al., 2003) through a plastic tube attached to the rooftop of the Nagqu station (the height of the roof is about 6 m).At the end of each sampling, the two ends of the cold trap were sealed, and the samples melted at room temperature.Water was mixed across the trap before decanting it into a small vial and sealed.One sample of about 10 mL was collected each day.In addition, rainfall from each precipitation event at the Nagqu Meteorological Station (close to the vapour sampling site) was collected immediately and sealed in clean and dry plastic bottles.A total of 153 water vapour samples and 90 precipitation samples were collected.All the samples were stored below −15 • C until analysed.During the sampling period, some meteorological parameters, such as temperature at 1.5 m, temperature near ground, relative humidity, surface pressure, and precipitation amount were recorded.The Key Laboratory of Tibetan Environment Changes and Land Surface Processes, Institute of Tibetan Plateau Research (Chinese Academy of Sciences, Beijing) performed the measurements of the oxygen isotopic compositions of all samples, using a MAT-253 mass spectrometer, with a precision of 0.2 parts per mil (‰) for the oxygen isotope ratios (δ 18 O).The H 2 O-CO 2 isotopic exchange equilibration method was adopted for the oxygen isotope ratios (δ 18 O) measurements.This study expresses the measured oxygen isotope ratios (δ 18 O) as parts per mil (‰) of their deviations, relative to the Vienna Standard Mean Ocean Water (VSMOW).Unfortunately, deuterium data at Nagqu were not available for this project.
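For reference, the reported δ18O values follow the standard delta notation relative to VSMOW; a minimal sketch of that conversion (the 18O/16O ratio of VSMOW below is the commonly cited literature value, not a number taken from this study):

R_VSMOW = 0.0020052  # commonly cited 18O/16O ratio of Vienna Standard Mean Ocean Water

def delta18O_permil(r_sample, r_standard=R_VSMOW):
    # per-mil deviation of the sample 18O/16O ratio from the standard
    return (r_sample / r_standard - 1.0) * 1000.0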
To identify the moisture transport paths and interpret δ 18 O variability further in the time series, our study determined 120 h back trajectories for air parcels during the entire sampling period, using the NOAA HYSPLIT model (Draxler and Rolph, 1998) and NCEP reanalysis data sets (available at: ftp://arlftp.arlhq.noaa.gov/pub/archives/reanalysis).The origin of air masses as diagnosed from the back trajectory analysis appears to approximate the moisture source direction for the water vapour and for the precipitation at the study site (Guan et al., 2013).The trajectories originated at 1000, 2000, and 3000 m above ground level (a.g.l.), respectively.
Relationship between δ 18 O p of precipitation and δ 18 O v of water vapour
In this study, the isotopic composition of precipitation correlated positively with that of water vapour.Similar close relationships between δ 18 O v and δ 18 O p also exist at Heidelberg (Jacob and Sonntag, 1991) and at Ankara (Dirican et al., 2005).During the process of precipitation, the δ 18 O values of water vapour are primarily influenced by isotopic equilibrium fractionation (Bonne et al., 2014).As the raindrop falls, the content of the raindrop contributes to the ambient water vapour, due to the re-evaporation effect.In that case, water vapour rapidly interacts with raindrops and tends to move toward isotopic equilibrium as the humid approaches to saturation (Deshpande et al., 2010).As a result, the isotopic composition of raindrops contributes to that of the ambient water vapour.Consequently, the isotopic composition of precipitation has a direct effect on the isotopic composition of water vapour.We show that the isotopic composition of precipitation affects that of water vapour, not only on the same day, but also for the next 4 days, resulting in correlation coefficients of 0.69, 0.64, 0.59, and 0.41 (within a 0.01 confidence limit), respectively (Table 1).Clearly, the correlation coefficients and the slopes also decrease gradually over time, with the correlation coefficient for the fifth day decreasing even further (as low as 0.35) and correlated only within a 0.05 confidence limit (Table 1).Correspondingly, the slopes decreased gradually from 0.72 to 0.34.This may partly be the result of surface water evaporation from recent precipitation contributing to the isotopic composition of the local water vapour in the days following the rainfall event.In addition, part of surface water vapour isotopes comes from local evapotranspiration that was affected by the previous precipitation.The decreasing correlations between the δ 18 O p and lagged δ 18 O v with time indicate that the contribution of the event precipitation to evaporation becomes smaller.
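A minimal sketch of how such lagged correlations can be computed (assuming a daily pandas DataFrame with hypothetical columns d18O_p and d18O_v; the coefficients in Table 1 come from the study's own data):

import pandas as pd

def lagged_correlations(df, max_lag=5):
    # correlate precipitation d18O with vapour d18O observed 0..max_lag days later
    rows = []
    for lag in range(max_lag + 1):
        pair = pd.concat([df["d18O_p"], df["d18O_v"].shift(-lag)], axis=1).dropna()
        rows.append({"lag_days": lag, "r": pair.corr().iloc[0, 1], "n": len(pair)})
    return pd.DataFrame(rows)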
Clearly, there exists an interaction between the local evapotranspiration and boundary layer entrainment.Moreover, the boundary layer entrainment can interact with the water vapour in the high altitude, due to the intensive convection over the central Tibetan Plateau.Consequently, the local water vapour can have a part influence on the precipitation, via affecting the water vapour beneath the cloud base.Pfahl et al. (2012) found that microphysical interactions between rain drops and water vapour beneath the cloud base exist by using COSMO iso model.As a result, the δ 18 O values of local water vapour in our study may have an indirect effect on those of precipitation.
The relative enrichment of δ 18 O p relative to δ 18 O v
As reported above, the average relative enrichment of δ 18 O p relative to δ 18 O v in our study was 8.2 ‰.In comparison, the average relative enrichment of δ 18 O p relative to δ 18 O v at the Delingha station (37 • 22 N, 97 • 22 E, 2981 m; see Fig. 1) on the northern Tibetan Plateau ( δ 18 O = 10.7 ‰) (Yin et al., 2008) was higher.This is because Indian monsoon and convection activities at Nagqu are more intense when compared with those at Delingha.Due to the combined impact of these activities, the summer δ 18 O p values at Nagqu were more depleted than those at Delingha (Yu et al., 2008).As a conse-quence, the δ 18 O value at Nagqu fell below that at Delingha.Further south, the relative enrichment of δ 18 O p relative to δ 18 O v at the Bay of Bengal (Fig. 1) was 8.6 ‰ (Midhun et al., 2013), similar to that at Nagqu.While the Indian monsoon at the Bay of Bengal exceeds the intensity of that at Nagqu, the oceanic moisture does not rise to the same degree as at Nagqu.We note that the relative enrichment of δ 18 O p relative to δ 18 O v at the Nagqu station differs from that at the northern station (Delingha), but resembles that of the southern station (Bay of Bengal), apparently because of its unique location, which is affected by both the Indian monsoon and convection.The next section discusses the influences of those activities on water vapour/precipitation δ 18 O changes in detail.
The effects of meteorological and environmental factors on δ 18 O of water vapour and precipitation
A number of meteorological parameters affect the δ 18 O variations of water vapour and precipitation.In particular, different processes dominate the relative humidity variations in different regions, resulting in different isotope ratios in the water vapour (Noone, 2012).In general, water vapour δ 18 O is positively correlated with local surface humidity, consistent with Rayleigh distillation processes.The data from Palisades (USA) show that stable isotopic compositions of water vapour correlate positively with relative humidity (White and Gedzelman, 1984).Wen et al. (2010) Apparently, those results are consistent with Rayleigh distillation in which air parcels become dry and isotopically depleted through condensation during air mass advection.Interestingly, the tendencies of δ 18 O v and δ 18 O p in our study oppose those of relative humidity (Fig. 2).Hence, at Nagqu the δ 18 O values of water vapour and precipitation correlate negatively with relative humidity (RH) (Fig. 4b, Table 2).Moreover, the tendencies of δ 18 O v and δ 18 O p in our study clearly differed from those of surface temperature at 1.5 m or ground temperature at 0 m during the entire sampling period (Fig. 2).No positive correlation was found between the δ 18 O values and temperature (Fig. 4a, Table 2).Thus, the changes in the δ 18 O values of water vapour and precipitation did not depend on changes in temperature, and did not experience a "temperature effect".However, on the northern Tibetan Plateau, the δ 18 O composition of water vapour and precipitation correlated positively with tem- perature (Yin et al., 2008).A positive correlation between the isotope record of water vapour and temperature (T ) was also found at Heidelberg (Germany), western Siberia, southern Greenland, and Minnesota (USA) (respectively, Schoch-Fischer et al., 1984;Bastrikov et al., 2014;Bonne et al., 2014;Welp et al., 2008).Clearly, the relationships between δ 18 O-T and δ 18 O-RH at our station differ from those at other stations.This and the δ 18 O depletion during the summer monsoon period (Fig. 2a and f) may reflect the influences of the Indian monsoon (Yu et al., 2008) and increasing convection (Tremoy et al., 2012).Due to an uplift effect of the massive mountains (such as the Himalayas), warm oceanic moisture transported by the Indian monsoon from the Indian Ocean onto the Tibetan Plateau rises to very high elevations, where very low temperatures prevail (Tian et al., 2003;Yu et al., 2008).This rise results in more depleted δ 18 O values recorded in summertime water vapour and precipitation at Nagqu.Moreover, the intense convection raises the oceanic moisture to higher elevations.Hence, the convection effect for the oceanic moisture increases the more depleted δ 18 O in water vapour and precipitation in our study region (Yu et al., 2008).However, during the monsoon period, the corresponding surface air temperature, relative humidity, and the summer rainfall greatly exceed those during the pre-monsoon and post-monsoon periods (Fig. 
2).Accordingly, an inverse correlation exists between δ 18 O in water vapour/precipitation and surface air temperatures, relative humidity, and rainfall, respectively, indicating the lack of a "temperature effect" on δ 18 O in water vapour/precipitation in this study region (Table 2).Particularly, mixing processes related to convection and reevaporation of rainfall over the central Tibetan Plateau play a significant role in controlling the water vapour distribution.That is why the δ 18 O values of water vapour over the central Tibetan Plateau deviate a Rayleigh model.Lee et al. (2011) also found the free tropospheric vapour over tropical oceans does not strictly follow a Rayleigh distillation.Furthermore, the δ 18 O trends coincide with surface pressure (Psfc) during the entire sampling period (Figs. 2, 4c, Table 2).In particular, different pressures at a large spatial scale are associated with different weather systems and thus different moisture sources.For example, the low geopotential height at 500 hPa on 6 August 2005 over the Nagqu station indicated that a low pressure system prevailed in the study region.However, a high pressure system was posed over the Bay of Bengal and the Arabian Sea (Fig. 5a).The marine moisture was transported to the Tibetan Plateau by the Indian monsoon.That is to say, the source vapour for precipitation is predominantly external to the study area in summer monsoon season.As a result, the δ 18 O values of water vapour and precipitation are as low as −32.1 and −21.7 ‰, respectively (Fig. 2f).The corresponding precipitation amount was as high as 25.9 mm (Fig. 2j).In contrast, a high geopotential height at 500 hPa was observed on 5 September 2005 over Nagqu.This indicates that the study region was controlled by the high pressure system and the coastal regions were dominated by a low pressure system, which relates to the westerlies and continental circulation (Fig. 5b).Hence, the δ 18 O values of water vapour and precipitation are as high as −17.5 and −10.4 ‰, respectively (Fig. 2f).The corresponding precipitation amount is only 0.4 mm (Fig. 2j).High precipitation amounts correspond to depleted isotope compositions of water vapour and precipitation, and low precipitation amounts correspond to enriched isotope compositions (Fig. 2).Specifically, the isotope compositions of water vapour exhibit relatively high values, during non-rainy periods (P [precipitation amount] = 0) (Fig. 2a, f, e and j).During non-rainy periods, climate type is considered as the main factor that dominates the temporal variability of the δ 18 O values of water vapour.This demonstrates that precipitation amount also affects the δ 18 O variations of water vapour and precipitation at Nagqu (Fig. 4c, Table 2).During precipitation events, the water vapour generally maintains a state of equilibrium with falling raindrops (Lee et al., 2006).During heavy precipitation events, the isotope ratios of water vapour and condensate decrease as saturated air rises, because of continued fractionation during condensation (Gedzelman and Lawrence, 1982), and the δ 18 O values of precipitation tend to become more depleted (Fig. 2a and f).Correspondingly, heavily depleted δ 18 O values of residual water vapour occur, due to the rainout effect.During periods without precipitation, water vapour dominated by the local evapotranspiration deviates far from saturation, i.e., it may exhibit low relative humidity.In these circumstances, the δ 18 O values of water vapour become highly enriched (Fig. 
2a and f).Okazaki et al. ( 2015) also found that the main driver of the more depleted δ 18 O v from Niamey was a larger amount of precipitation at the Guinea coast.
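For context on the Rayleigh-model comparisons above, a minimal sketch of the classical Rayleigh distillation relation for the residual vapour, δ = (δ0 + 1000)·f^(α−1) − 1000 (the fractionation factor in the example call is an assumed illustrative value, not one derived from this study):

def rayleigh_delta(delta0_permil, f, alpha):
    # delta of the residual vapour when a fraction f remains, with
    # equilibrium fractionation factor alpha (liquid/vapour, > 1)
    return (delta0_permil + 1000.0) * f ** (alpha - 1.0) - 1000.0

# e.g. vapour starting at -12 per mil, ~40 % remaining, alpha ~ 1.0098 for 18O near 20 C
print(round(rayleigh_delta(-12.0, 0.4, 1.0098), 1))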
To further reveal the relationships between the δ 18 O values and various meteorological parameters, our study modeled δ 18 O as a function of temperature, relative humidity, surface pressure, and precipitation amount, using a simple multiple regression model. Using a stepwise method and based on the output of this model, the variable of temperature was excluded. The resulting functions express δ 18 O v and δ 18 O p as linear combinations of relative humidity, surface pressure, and precipitation amount. The multiple correlation coefficients (R) between all of the independent variables (relative humidity, surface pressure, and precipitation amount) and the dependent variables (δ 18 O v and δ 18 O p ) are 0.60 and 0.56, and the F statistics are significant at the 0.001 and 0.001 levels, respectively. In brief, the δ 18 O changes in water vapour and precipitation at Nagqu relate closely to the joint contributions of relative humidity, pressure, and precipitation amount.
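A minimal sketch of the kind of stepwise screening described here, written as a simplified backward-elimination variant (the DataFrame and its column names are hypothetical; the study's fitted coefficients are not reproduced):

import statsmodels.api as sm

def backward_stepwise(y, X, p_threshold=0.05):
    # repeatedly drop the least significant predictor until all p-values pass
    X = sm.add_constant(X)
    while True:
        fit = sm.OLS(y, X, missing="drop").fit()
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= p_threshold or len(pvals) == 1:
            return fit
        X = X.drop(columns=worst)

# e.g. fit = backward_stepwise(df["d18O_v"],
#                              df[["temperature", "rel_humidity", "pressure", "precip"]])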
In addition, land surface characteristics and processes such as evaporation and transpiration may also have affected the isotopic ratios of water vapour.During dry periods, the land surface dries due to evapotranspiration, and the moisture in soil and grass (characterized by relatively enriched isotopic values) evaporates into the atmosphere.Therefore, the isotopic ratio of water vapour becomes relatively enriched (Fig. 2a and f).That is why the isotope compositions of water vapour become more enriched during days with no rainfall, compared to during days with rainfall.During heavy rain events, however, local evapotranspiration is extremely weak (Huang and Wen, 2014), because clouds and precipitation cool the surface and moisten the boundary layer, leading to high relative humidities (Fig. 2c and h) (Aemisegger et al., 2014).Therefore, effects of local evapotranspiration on the changes in water vapour δ 18 O can be ignored during such rainy periods, and the corresponding δ 18 O values in water vapour become more depleted (Fig. 2a and f).On cessation of the rain, clouds clear, the ground heats up again, and relative humidity decreases, partly due to warming, partly due to reduced humidity (Aemisegger et al., 2014).In this case, local evapotranspiration will contribute to changes in water vapour δ 18 O, which will quickly return to relatively enriched values (Fig. 2a and f) (Deshpande et al., 2010).Another short-term study by Kurita et al. (2008), undertaken not far from this study area, also demonstrated that water vapour increased gradually, accompanied by an increased contribution of evapo-transpired water that had relatively enriched isotopic values.
δ 18 O changes in water vapour and precipitation related to different moisture sources
Synoptic weather circulation (especially moisture sources) strongly affects the variations of stable isotopic compositions of water vapour and precipitation (Strong et al., 2007;Pfahl and Wernli, 2008;Deshpande et al., 2010;Guan et al., 2013).This study used the NOAA HYSPLIT model to calculate 120 h back trajectories of air parcels for each day of the entire sampling period.Figure 6 shows a subset of the results of the atmospheric trajectories.The results of 12 July, 6 August, 26 August, and 5 September 2005, represent the weak monsoon, the active monsoon, the late monsoon, and the post-monsoon period conditions, respectively.During the weak monsoon period, moisture over Nagqu at 1000 m a.g.l.appears to derive predominantly from the coastal regions of Bengal in the south, which might have been transported earlier by the Indian monsoon and lingered there.In this way, the coastal regions of Bengal act as a moisture reservoir during the weak monsoon period.Clearly, moisture from 2000 and 3000 m a.g.l.recycles from the westerlies (which are associated with enriched surface waters that re-evaporate and with evaporated surface water under lower humidity conditions), and this contributes to the moisture over Nagqu during the weak monsoon period (Fig. 6a).Therefore, δ 18 O v and δ 18 O p values show relative enrichment (such as −17.8 and −14.7 ‰ observed on 12 July 2005) (Fig. 2f).
Compared to the weak monsoon period (Fig. 6a), the contribution of moisture from the westerlies and regional circulation decreased during the active monsoon period (Fig. 6b) (the specific humidity fells to 2 g kg −1 over Nagqu).Due to the dominant Indian monsoon circulation during this period, most moisture at the 1000 m a.g.l. of the trajectories came from this direction.As a result, specific humidity over Nagqu from this pathway increased to 7 g kg −1 (Fig. 6b).In addition, the trajectories of the 2000 m a.g.l.airflow came from the southern slope of the Himalayas (Fig. 6b).The moisture from both of those two paths was uplifted by the high mountains.Moreover, convection over the Tibetan Plateau often occurs in the region between the two major east-west mountain ranges, the Nyainqentanglha Mountains and the northern Himalayas (Fujinami et al., 2005).As mentioned above, intense convection over the Tibetan Plateau, combined with uplift caused by the high mountains, causes oceanic moisture to rise to very high elevations.Obviously, convection of marine and continental air masses not only causes isotopic variations of water vapour (Farlin et al., 2013), but also significantly affects the isotopic composition of the precipitation (Risi et al., 2008).In particular, the time period when convection significantly affects the isotopic composition of precipitation relates to the residence time of water within atmospheric reservoirs (Risi et al., 2008).This results in more depleted δ 18 O values of water vapour and precipitation at Nagqu, such as −32.1 and −21.7 ‰ on 6 August 2005 (Fig. 2f).The corresponding maximum precipitation amount of 25.9 mm over Nagqu was observed during this sampling period in 2005 (Fig. 2j).Purushothaman et al. (2014) also reported the highly depleted nature of water vapour at Roorkee (northern India) during rainy periods, due to the intense Indian monsoon.
Although moisture over Nagqu that derived from the Bay of Bengal decreased during the late monsoon period, some of the trajectories continued to originate in the coastal regions.Figure 6c details one selected event on 26 August 2005, during which the trajectories came from the coastal regions of western India (near the Arabian Sea).The specific humidity over Nagqu from those pathways decreased to 2-6 g kg −1 , compared with those during the active monsoon period.Moisture from those paths was uplifted by the high mountains, via the Indian continent, and also contributed to the relatively depleted δ 18 O values of water vapour and precipitation (−32.6,−25.0 ‰) (Fig. 2f).
Trajectories after the rainy season (such as 5 September 2005, accompanying the Indian monsoon retreat) show that all the moisture had been recycled from the continent (Purushothaman et al., 2014): (1) moisture from the regional circulation dominated the moisture sources in the study area, and (2) moisture from the westerlies also affected the Nagqu region (Fig. 6d).During this period, no contributions from the Bay of Bengal or the coastal regions of Bengal/western India appeared to have significantly enriched δ 18 O values of water vapour (such as −17.5 ‰ on 5 September 2005) (Fig. 2f).During the dry season, specific humidity over Nagqu from those pathways decreased below 3 g kg −1 , and isotopic re-equilibration of rain droplets with surrounding water vapour appear to have affected the δ 18 O variations of precipitation (Sturm et al., 2007).Consequently, the δ 18 O values of precipitation increased rapidly during the postmonsoon period (to −10.4 ‰) (Fig. 2f).
Implication of δ 18 O in water vapour and precipitation for paleoclimatic records
Our study indicates that, during the summer period, moisture over the Nagqu region of the central Tibetan Plateau originates primarily from the southern portion of the Tibetan Plateau, as well as the southern slope of the Himalayas, the coastal regions of Bengal/western India, and the Bay of Bengal, all strongly influenced by the Indian monsoon and convection.In contrast, convection on the northern Tibetan Plateau is weaker than that on the central Tibetan Plateau, and the westerlies prevail on the northern Tibetan Plateau, almost without any influence of the Indian monsoon (Tian et al., 2003;Yu et al., 2008).That is to say, different sampling locations result in different moisture sources, resulting in different climate information preserved in ice cores.In particular, different moisture sources cause different effects on the δ 18 O values of water vapour and precipitation at the two stations of Nagqu and Delingha, located on the central and northern Tibetan Plateau, respectively.This results in different δ 18 O characteristics of water vapour and precipitation from the central and northern Tibetan Plateau and may explain the different δ 18 O characteristics of ice cores from the central and northern Tibetan Plateau.In the northern Tibetan Plateau, due to the moisture sources being fairly simple, isotopic fractions in ice cores from the northern Tibetan Plateau have not been changed by many of the factors discussed here, and the δ 18 O records can be used as a good proxy of temperature.For example, the δ 18 O record preserved in the Dunde ice core from the northern Tibetan Plateau provides a reasonable proxy of summer temperature (Thompson et al., 1989).However, the interpretations of ice core records is more complicated than that in the northern Tibetan Plateau, because of the various moisture sources on the central Tibetan Plateau, especially during the period of the intensive Indian monsoon activities.As a result, the δ 18 O record in the Tanggula ice core from the central Tibetan Plateau shows no correlation between average δ 18 O values and temperature (Joswiak et al., 2010).Accordingly, our findings indicate that the influences of different moisture sources and the activities of the Indian monsoon and convection may be significant when reconstructing paleoclimate variations on the central and northern Tibetan Plateau.Certainly, ice core (or other proxy) δ 18 O records do not reflect day-to-day changes of δ 18 O in water vapour/precipitation.In order to disprove the presence of a temperature effect over the central Tibetan Plateau, multiple years of data and data that span the entire year will be needed for future studies.Hence, the authors have launched a new project to survey a longer time series of isotopic compositions of water vapour and precipitation (δ 18 O and δD), which should provide greater confidence in our findings and gain a better understanding of the links between water vapour and precipitation δ 18 O/δD values and paleoclimatic records.
Conclusions
This study represents the first simultaneous water vapour and precipitation δ18O time series for the central Tibetan Plateau.
In the study region of Nagqu, the isotopic composition of precipitation has a direct relationship to that of water vapour.
In comparison, the δ18O values of local water vapour may only partly contribute to those of precipitation. The δ18Ov and δ18Op variations at Nagqu appear mainly controlled by the joint influences of relative humidity, pressure, and precipitation amount, but did not demonstrate a "temperature effect". Moreover, the different δ18O characteristics of water vapour and precipitation at Nagqu appear to relate to different moisture sources, especially involving the influences of the Indian monsoon and convection. The enrichment of δ18Op relative to δ18Ov at Nagqu (on the central Tibetan Plateau) is similar to that at the southern station (Bay of Bengal), but differs from that at the northern station (Delingha), due to intense Indian monsoon and convection activities. These results may explain the different δ18O characteristics obtained from ice cores from the central and the northern Tibetan Plateau. Our findings may provide a basis for reinterpretation of the δ18O records in ice cores from the central Tibetan Plateau, and suggest that the impacts of different moisture sources, the Indian monsoon, and convection activities all need to be considered.
Figure 1. Map showing the sampling site at Nagqu on the central Tibetan Plateau, with the locations of the Delingha and Bay of Bengal stations, and the city of Lhasa.
Figure 3. Relationships between δ18Op of precipitation and δ18Ov of water vapour at Nagqu. Values in 2004 are shown as red open circles; values in 2005 are shown as blue solid dots.
A previous study also found a positive correlation between water vapour δ18O and relative humidity at Beijing (China). At a northern Greenland site, both diurnal and intra-seasonal variations show strong correlations between changes in local surface humidity and water vapour isotopic composition (Steen-Larsen et al., 2013). Bonne et al. (2014) also found a positive correlation between water vapour δ18O in southern Greenland and the logarithm of local surface humidity. In addition, water vapour δ18O trends from the Bermuda Islands (North Atlantic) also resemble those of relative humidity (Steen-Larsen et al., 2014).
Figure 4. Relationships between δ18O and meteorological factors (a: temperature; b: relative humidity; c: surface pressure; d: precipitation amount) at Nagqu. Values of δ18Ov are shown as pink open circles; values of δ18Op are shown as green solid dots.
Figure 5. Distributions of the geopotential height (unit: m) at 500 hPa on 6 August (a) and 5 September (b) 2005 over the Tibetan Plateau and adjacent regions, representing the conditions of low pressure (a) and high pressure (b) over the Nagqu station (white dots).
Figure 6. Back trajectories calculated by HYSPLIT at 1000 (red lines), 2000 (blue lines), and 3000 m (green lines) a.g.l. on 12 July, 6 August, 26 August, and 5 September 2005, representing the conditions during the weak monsoon (a), active monsoon (b), late monsoon (c), and postmonsoon (d) periods, respectively, over the Nagqu station. Changes in specific humidity (g kg−1) along the air parcel pathways are also shown.
Table 2. Correlations between stable oxygen isotopes (δ18Ov and δ18Op) and meteorological factors (temperature, relative humidity, surface pressure, and precipitation amount) at Nagqu. | 2015-06-01T23:46:22.000Z | 2015-09-16T00:00:00.000 | {
"year": 2015,
"sha1": "61d6b69b7e16764e20016ec2a2a2b2634ebc92fb",
"oa_license": "CCBY",
"oa_url": "https://www.atmos-chem-phys.net/15/10251/2015/acp-15-10251-2015.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "61d6b69b7e16764e20016ec2a2a2b2634ebc92fb",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
6219366 | pes2o/s2orc | v3-fos-license | Precocious Gauge Symmetry Breaking in $SU(6) \times SU(2)_R$ Model
In the $SU(6) \times SU(2)_R$ string-inspired model, we evolve the couplings and the masses down from the string scale $M_S$ using the renormalization group equations and minimize the effective potential. This model has the flavor symmetry including the binary dihedral group $\tilde{D}_4$. We show that the scalar mass squared of the gauge non-singlet matter field possibly goes negative slightly below the string scale. As a consequence, the precocious radiative breaking of the gauge symmetry down to the standard model gauge group can occur. In the present model, the large Yukawa coupling which plays an important role in the symmetry breaking is identical with the colored Higgs coupling related to the longevity of the proton.
Introduction
In the minimal supersymmetric standard model (MSSM), it is well known that the spontaneous breaking of the gauge symmetry $SU(2)_L \times U(1)_Y \to U(1)_{em}$ is caused around the electroweak scale by the radiative effect due to the large top Yukawa coupling. [1] On the other hand, in many supersymmetric GUT models it is assumed that, by taking the wine-bottle type of Higgs potential by hand, the spontaneous breaking of a large gauge symmetry such as $SU(5)$ or $SO(10)$ takes place via the Higgs mechanism at high energies around $O(10^{16}\,{\rm GeV})$. In order to clarify whether or not the spontaneous breaking of the large gauge symmetry occurs at such a large energy scale, we need to address the underlying string theory which yields the GUT-type models. The radiative breaking of the large gauge symmetry occurs if the mass squared of a gauge non-singlet scalar field goes negative precociously as one evolves down from the string scale. Then it is of importance to study whether or not the radiative effect due to the large Yukawa couplings resulting from the underlying theory causes the scalar mass squared to be driven negative at a large energy scale. In the extra $U(1)^2$ string-inspired model it has already been found that the radiative effect due to the large Yukawa couplings possibly breaks down one of the extra $U(1)$ gauge symmetries around $O(10^{15}\,{\rm GeV})$. [2] In this paper we consider the $SU(6) \times SU(2)_R$ string-inspired model, which contains many phenomenologically attractive features. [3,4,5,6,7,8,9] In this model we evolve couplings and masses down from the string scale $M_S$ using the renormalization group (RG) equations and minimize the effective potential. The purpose of this paper is to explore whether or not the gauge symmetry breaking occurs at a very large energy scale. Studying the RG evolution from the string scale $M_S$, we show that the scalar mass squared of the gauge non-singlet matter field possibly goes negative slightly below the string scale. This implies that the precocious breaking of the gauge symmetry $SU(6) \times SU(2)_R$ can occur due to the radiative effect. In this model the large Yukawa coupling which plays an important role in the symmetry breaking is identical with the colored Higgs coupling related to the longevity of the proton. This symmetry breaking triggers off the subsequent symmetry breaking. [10] Thus we obtain the sequential symmetry breaking $SU(6) \times SU(2)_R \to SU(4)_{PS} \times SU(2)_L \times SU(2)_R \to G_{SM}$, where $SU(4)_{PS}$ and $G_{SM}$ represent the Pati-Salam $SU(4)$ [11] and the standard model gauge group, respectively.
In the framework of the string theory we are prohibited from adding extra matter fields by hand. In the effective theory from string, the matter contents and the Lagrangian are strongly constrained due to the topological and symmetrical structure of the compact space. This situation is in sharp contrast to the conventional GUT-type models. For instance, in the perturbative heterotic string we have no adjoint or higher representation matter (Higgs) fields. Also, in the context of the brane picture, matter fields belong to the bi-fundamental or the anti-symmetric representations under a gauge group such as $SU(M) \times SU(N)$. In the present model, under $SU(6) \times SU(2)_R$, gauge non-singlet matter fields consist of $(15, 1)$, $(6^*, 2)$ and their conjugates. Within this rigid framework we have to find out the path from the string scale physics to the low-energy physics. From this point of view we study the RG evolution of couplings and masses from the string scale and explore the hierarchical path of the gauge symmetry breaking.
To the $SU(6) \times SU(2)_R$ string-inspired model we introduce the flavor symmetry $Z_M \times Z_N \times \tilde{D}_4$. The cyclic group $Z_M$ and the binary dihedral group $\tilde{D}_4$ have R symmetries, while $Z_N$ has not. Introduction of the binary dihedral group $\tilde{D}_4$ is motivated by the phenomenological observation that the R-handed Majorana neutrino mass for the third generation has nearly the geometrically averaged magnitude of $M_S$ and $M_Z$. Further, the binary dihedral flavor symmetry $\tilde{D}_4$ is an extension of the R-parity. In Ref. [9], solving the anomaly-free conditions under many phenomenological constraints coming from the particle spectra, we found a large mixing angle (LMA)-MSW solution with $(M, N) = (19, 18)$, in which appropriate flavor charges are assigned to the matter fields. In Refs. [8,9] we have assumed that the scalar mass squared of the gauge non-singlet field goes negative slightly below $M_S$. The results are in good agreement with the experimental observations on fermion masses and mixings and also on hierarchical energy scales including the GUT scale, the $\mu$ scale, and the Majorana mass scale of the R-handed neutrinos. We therefore carry out the present analysis of the RG evolution of the scalar masses squared on the basis of the $SU(6) \times SU(2)_R$ model with the flavor symmetry $Z_{19} \times Z_{18} \times \tilde{D}_4$. This paper is organized as follows. In section 2, after explaining the main features of the $SU(6) \times SU(2)_R$ string-inspired model with the flavor symmetry $Z_{19} \times Z_{18} \times \tilde{D}_4$, we exhibit the superpotential. We point out that if the soft scalar mass squared is driven negative, the spontaneous breaking of the gauge symmetry $SU(6) \times SU(2)_R$ down to $G_{SM}$ occurs in two steps sequentially. In section 3 we study the RG evolutions of couplings and masses down from $M_S$. It is found that the scalar mass squared of the gauge non-singlet matter field possibly goes negative slightly below the string scale. The final section is devoted to a summary and discussion.
(i). The gauge group $G = SU(6) \times SU(2)_R$ can be obtained from $E_6$ through the $Z_2$ flux breaking on a multiply-connected manifold $K$. [12,13,14] To be more specific, the nontrivial holonomy $U_d$ on $K$ takes a form written in terms of $I_{3R}$, where $I_{3R}$ represents the third direction of the $SU(2)_R$. The symmetry breaking of $G$ down to $G_{SM}$ can take place via the Higgs mechanism without matter fields of adjoint or higher representations.

(iii). As the flavor symmetry, we introduce the $Z_{19} \times Z_{18}$ and the $\tilde{D}_4$ symmetries and regard $Z_{19}$ and $Z_{18}$ as the R and the non-R symmetries, respectively. Since the numbers 19 and 18 are relatively prime, we can combine these symmetries as $Z_{19} \times Z_{18} \simeq Z_{342}$. Solving the anomaly-free conditions under many phenomenological constraints coming from the particle spectra, we obtain a LMA-MSW solution with the $Z_{342}$ charges of matter superfields as shown in Table 1. [9] In this solution we assign to the Grassmann number $\theta$, which has the charge $(-1, 0)$ under $Z_{19} \times Z_{18}$, the charge 18 under $Z_{342}$. The assignment of "$\tilde{D}_4$ charges" to matter superfields is given in Table 2, where $\sigma_i$ $(i = 1, 2, 3)$ represent the Pauli matrices. The $\sigma_3$ transformation yields the R-parity. Namely, the R-parities of the matter superfields for the three generations are all odd, while those of the Higgs-like fields $\Phi_0$, $\bar{\Phi}$, $\Psi_0$, and $\bar{\Psi}$ are even.
(iv). There are two types of gauge invariant trilinear combinations of the matter fields, and the flavor symmetry requires that in the superpotential these trilinear combinations are multiplied by some powers of $\bar{\Phi}\Phi_0$ or $\bar{\Psi}\Psi_0$. Concretely, the superpotential terms take the forms given in Eqs. (7) and (8). Thus, in order to minimize the scalar potential, it is sufficient for us to confine ourselves to the R-parity even sector. In the R-parity even sector we have the superpotential $W_1$. The scalar potential consists of the supersymmetric contribution derived from $W_1$, the soft supersymmetry breaking terms

$V_{\rm soft} = \tilde{m}^2_{\Phi_0} |\Phi_0|^2 + \tilde{m}^2_{\bar{\Phi}} |\bar{\Phi}|^2 + \tilde{m}^2_{\Psi_0} |\Psi_0|^2 + \tilde{m}^2_{\bar{\Psi}} |\bar{\Psi}|^2 +$ (A-terms),

and the one-loop correction $V_{\rm 1\hbox{-}loop}$, which takes the form given in Ref. [17]. Thus, if $\tilde{m}^2_{\Phi_0} + \tilde{m}^2_{\bar{\Phi}} < 0$, the minimum point of the scalar potential moves to non-zero field values of order $\tilde{m}$ in a feasible parameter region of the coefficients, [10] where $\tilde{m} = \sqrt{|\tilde{m}^2_{\Phi_0} + \tilde{m}^2_{\bar{\Phi}}|}$. Since we obtain $|\langle \Phi_0 \rangle| > |\langle \Psi_0 \rangle|$, the gauge symmetry breaking occurs in two steps as $SU(6) \times SU(2)_R \to SU(4)_{PS} \times SU(2)_L \times SU(2)_R \to G_{SM}$. When the gauge symmetry $SU(6) \times SU(2)_R$ is broken down to $SU(4)_{PS} \times SU(2)_L \times SU(2)_R$, the field $\Phi_0(15, 1)$ is decomposed as $\Phi_0(15, 1) \to \Phi_0(6, 1, 1),\ \Phi_0(4, 2, 1),\ \Phi_0(1, 1, 1)$. Needless to say, the field $\Phi_0(1, 1, 1)$ develops the non-zero VEV $|\langle \Phi_0 \rangle|$. In addition, the field $\Psi_0(6^*, 2)$ is decomposed as $\Psi_0(6^*, 2) \to \Psi_0(4^*, 1, 2),\ \Psi_0(1, 2, 2)$. A question arises as to which field of $\Psi_0(4^*, 1, 2)$ and $\Psi_0(1, 2, 2)$ develops the non-zero VEV $|\langle \Psi_0 \rangle|$. As seen in Eq. (7), the field $\Psi_0(1, 2, 2)$ has the coupling $\Phi_0(1, 1, 1)\,\Psi_0(1, 2, 2)^2$, which is the third term of Eq. (8). Below the scale $|\langle \Phi_0 \rangle|$, this term induces the $\mu$-term. The F-flat condition for $\Psi_0(1, 2, 2)$ then requires its VEV to vanish. Therefore, for the present model to be consistent, it is necessary that $\tilde{m}^2_{\Phi_0} + \tilde{m}^2_{\bar{\Phi}}$ is driven negative slightly below $M_S$. In the next section, evolving couplings and masses down from the string scale using the RG equations, we show that $\tilde{m}^2_{\Phi_0} + \tilde{m}^2_{\bar{\Phi}}$ possibly goes negative slightly below the string scale.
The RG evolutions of scalar masses
These equations are easily solved in closed form, with the variable $u$ defined in terms of the running gauge coupling and normalized so that $u = 1$ at the string scale. The constants $g_0$ and $M_{1/2}$ represent the values of the gauge coupling and the gaugino mass at the string scale $M_S$, respectively. At the string scale the renormalizable term of the superpotential takes a simple form with a single Yukawa coupling $z_0$, and this term induces the soft breaking A-term with coefficient $A_0$. Here we assume that the Yukawa coupling $z_0$ and the soft breaking parameter $A_0$ are real. In this case the RG equations for $z_0$, $A_0$, $\tilde{m}^2_{\Phi_0}$ and $\tilde{m}^2_{\bar{\Phi}}$ are given by [19]

$(4\pi)^2 \, \frac{dz_0^2}{dt} = \left( -56\, g_6^2 + 18\, z_0^2 \right) z_0^2, \qquad (4\pi)^2 \, \frac{dA_0}{dt} = 3 A_0 z_0^2 + 56\, M_6\, g_6^2, \qquad (30)$

together with the corresponding equations for the soft scalar masses. Concretely, the RG evolutions are expressed in terms of three dimensionless parameters $r_0$, $r_1$, $r_2$ and three functions of $u$. Since we are interested in the precocious breaking of the gauge symmetry, the RG evolution of the couplings and the masses is carried out in the energy region ranging from $M_S$ to $M_S/5$. Since $4\pi/g_0^2$ takes a value around 15 in the present model, [3] we have $g_0 \simeq 0.9$, and then the region of the variable $u$ considered here becomes 1.0–0.95. We now proceed to the numerical study as to whether or not $\tilde{m}^2_{\Phi_0} + \tilde{m}^2_{\bar{\Phi}}$ is driven negative in the region $u = $ 1.0–0.95. As a typical example, in Fig. 1 we show the calculation of $(\tilde{m}^2_{\Phi_0}(u) + \tilde{m}^2_{\bar{\Phi}}(u))/\tilde{m}_0^2$ for the parameter set $(r_0, r_1, r_2)$ = (3.0, 3.0, 3.0–4.0). We find that $\tilde{m}^2_{\Phi_0}(u) + \tilde{m}^2_{\bar{\Phi}}(u)$ is driven negative at $u \simeq 0.98$ if the value of $r_2$ is larger than 3.3. In Fig. 2, $(\tilde{m}^2_{\Phi_0}(u) + \tilde{m}^2_{\bar{\Phi}}(u))/\tilde{m}_0^2$ for the parameter set $(r_0, r_1, r_2)$ = (3.0, 2.5–4.0, 3.5) is also shown. From these figures it turns out that the precocious breakdown is realized in the parameter region of the Yukawa coupling $z_0(1) \gtrsim 2.7$ and $r_1, r_2$ = 3.0–4.0.
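To see how the precocious breakdown can be probed numerically, the following Python sketch integrates the reconstructed one-loop equation for $z_0$ together with a generic one-loop gauge RGE. Only the structure of the $z_0$ equation is taken from the text; the gauge beta coefficient b6 and the use of scipy are illustrative assumptions, while $g_0 = 0.9$ and $z_0(1) = 2.7$ are the values quoted above.

# Minimal sketch: one-loop running of the Yukawa coupling z0 below the
# string scale, following the reconstructed RG equation
#   (4*pi)^2 dz0^2/dt = (-56*g6^2 + 18*z0^2) * z0^2,  t = ln(mu/M_S) <= 0.
# b6 is a hypothetical placeholder for the one-loop gauge beta coefficient.
import numpy as np
from scipy.integrate import solve_ivp

FOUR_PI_SQ = (4.0 * np.pi) ** 2
b6 = -10.0  # placeholder gauge beta coefficient (assumption)

def rhs(t, y):
    z0_sq, g6_sq = y
    dz0_sq = (-56.0 * g6_sq + 18.0 * z0_sq) * z0_sq / FOUR_PI_SQ
    dg6_sq = b6 * g6_sq ** 2 / (8.0 * np.pi ** 2)  # generic one-loop running
    return [dz0_sq, dg6_sq]

g0, z0 = 0.9, 2.7                 # string-scale values quoted in the text
sol = solve_ivp(rhs, (0.0, -np.log(5.0)), [z0 ** 2, g0 ** 2])  # M_S -> M_S/5
print("z0(M_S/5) =", float(np.sqrt(sol.y[0, -1])))

The run reflects the qualitative statement made below: $z_0(u)$ shrinks as one evolves down from $M_S$, so the perturbative treatment becomes safer away from the string scale.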
In the present choice of $z_0(1)$ we have $z_0(1)^2/(4\pi) \simeq 0.6$, which does not seem to be small enough to serve as the perturbative expansion parameter. However, $z_0(u)$ diminishes in magnitude with decreasing $u$. The present analysis is sufficient to show that, in the feasible parameter region, $\tilde{m}^2_{\Phi_0}(u) + \tilde{m}^2_{\bar{\Phi}}(u)$ possibly goes negative slightly below $M_S$.
Summary and discussion
In the $SU(6) \times SU(2)_R$ string-inspired model with the flavor symmetry $Z_{19} \times Z_{18} \times \tilde{D}_4$, we evolve couplings and masses down from the string scale $M_S$ using the RG equations. In the feasible parameter region of a Yukawa coupling and the soft supersymmetry breaking masses, the scalar mass squared of the gauge non-singlet matter field possibly goes negative slightly below the string scale. This implies that the precocious radiative breaking of the gauge symmetry $SU(6) \times SU(2)_R$ can occur due to the radiative effect. This symmetry breaking triggers off the subsequent symmetry breaking. [20] Our result is consistent with the present experimental data [21]. The longevity of the proton is connected with the precocious gauge symmetry breaking through the common large Yukawa coupling.

Figure 2: The $u$- and $r_1$-dependences of $(\tilde{m}^2_{\Phi_0}(u) + \tilde{m}^2_{\bar{\Phi}}(u))/\tilde{m}_0^2$. The parameter $r_1$ varies from 2.5 to 4.0, while the parameters $r_0$ and $r_2$ are fixed at 3.0 and 3.5, respectively. In the white region $(\tilde{m}^2_{\Phi_0}(u) + \tilde{m}^2_{\bar{\Phi}}(u))/\tilde{m}_0^2$ is positive, and the region from gray to black has negative values down to $-3.0$.
"year": 2002,
"sha1": "f7e573f21b9299b2b61d6d335d6a72ab29e61ab7",
"oa_license": null,
"oa_url": "https://academic.oup.com/ptp/article-pdf/109/4/651/5305916/109-4-651.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "f7e573f21b9299b2b61d6d335d6a72ab29e61ab7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
225102976 | pes2o/s2orc | v3-fos-license | Quadratic speedup for simulating Gaussian boson sampling
We introduce an algorithm for the classical simulation of Gaussian boson sampling that is quadratically faster than previously known methods. The complexity of the algorithm is exponential in the number of photon pairs detected, not the number of photons, and is directly proportional to the time required to calculate a probability amplitude for a pure Gaussian state. The main innovation is to use auxiliary conditioning variables to reduce the problem of sampling to computing pure-state probability amplitudes, for which the most computationally-expensive step is calculating a loop hafnian. We implement and benchmark an improved loop hafnian algorithm and show that it can be used to compute pure-state probabilities, the dominant step in the sampling algorithm, of up to 50-photon events in a single workstation, i.e., without the need of a supercomputer.
In addition to experimental implementations, significant progress has also been made in developing classical simulation algorithms. For boson sampling, a first method was an approximate Markov chain Monte Carlo algorithm [27]. This was improved in Ref. [28], where an exact sampling algorithm was introduced such that the complexity of generating one sample with N photons is equivalent to that of calculating an output probability amplitude. This in turn is equivalent to computing the permanent of an N × N matrix, which requires O(N 2^N) time using the best known methods [29].
An effort to obtain simulation techniques for GBS has also been pursued. An exact algorithm has been reported and implemented for GBS with threshold detectors [30,31], but it suffers from exponential memory requirements. Two algorithms were also proposed in Ref. [32] for a restricted version of GBS. The first one has polynomial space complexity and O(poly(N) 2^{8N/3}) time complexity; the second has exponential space complexity and O(poly(N) 2^{5N/2}) time complexity. Recently, an exact sampling algorithm for GBS was presented that requires only polynomial memory [33]. This algorithm shows an improved complexity proportional to O(N^3 2^N) for generating one sample with N photons. These GBS algorithms mark a crucial difference with respect to boson sampling: in GBS with pure states, the probability of observing an outcome with N photons is proportional to the loop hafnian of an N × N matrix, for which the best algorithms require O(N^3 2^{N/2}) time. In other words, there is a quadratic gap between the complexity of generating a sample and the complexity of computing an output probability. This suggests the possible existence of a better sampling algorithm with complexity matching that of calculating an output probability amplitude, similar to what has been achieved for boson sampling.
We present such an algorithm. The main insight is to introduce conditioning auxiliary variables obtained by performing a virtual heterodyne measurement in all modes and iteratively replacing the heterodyne outcomes with photon-number measurements. The photon-number outcomes conditional on the heterodyne measurements are given by pure-state probabilities. We develop a chain of conditional probabilities where, at each step, only probabilities of a pure Gaussian state need to be calculated. For outputs with N photons, these are proportional to loop hafnians of N × N matrices, which can be calculated in O(N^3 2^{N/2}) time [34]. In general, the number of output probabilities that must be calculated is proportional to the number of modes m, which leads to a time complexity upper bounded by O(m N^3 2^{N/2}). This corresponds to a quadratic speedup over the previous state of the art. It also suggests that, compared to boson sampling, roughly twice as many photons are needed in GBS to reach the regime where classical simulations become intractable. We implement an improved version of a loop hafnian algorithm and use it to compute pure-state probabilities of events with up to 50 photons using a workstation with 96 CPUs.
In what follows, we begin by giving a short overview of GBS in Sec. II. We then describe our simulation algorithm in Sec. III, which constitutes our main result. In Sec. IV, we benchmark an algorithm for computing loop hafnians, which determine the most expensive step for simulation, and finally present a discussion of our findings in Sec. V.
II. GAUSSIAN BOSON SAMPLING
The quantum state of a collection of m optical modes can be uniquely described in terms of its Wigner function [35,36]. Gaussian states are states whose Wigner function is a Gaussian distribution. They can be described by a covariance matrix V and a vector of means R̄ = (q̄, p̄), where q̄ and p̄ are the mean canonical position and momentum vectors. It is also possible to express the state in terms of the complex amplitudes α = (q + ip)/√2 ∈ C^m, which are described by a complex-normal distribution with mean ᾱ = (q̄ + ip̄)/√2 ∈ C^m and a covariance matrix Σ_Q determined by V [37]. Gaussian Boson Sampling (GBS) is a form of photonic quantum computing where a Gaussian state is measured in the photon-number basis. If ᾱ = 0, the probability of observing the output sample S = (s_m, . . . , s_1), where s_i is the number of photons in mode i, is given by [11]

Pr(S) = haf(A_S) / (s_1! · · · s_m! √(det Σ_Q)),

where A = X(1 − Σ_Q^{-1}), with 1 the identity matrix and X = [[0, 1], [1, 0]] in block form, and A_S is the matrix obtained as follows: if s_i = 0, rows and columns i and i + m are deleted from A; if s_i > 0, the rows and columns are repeated s_i times. The hafnian of a square symmetric matrix of even dimension n is defined as

haf(A) = Σ_{σ ∈ PMP(n)} Π_{j=1}^{n/2} A_{σ(2j−1), σ(2j)},

where PMP(n) is the set of perfect matching permutations [34].
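For concreteness, the hafnian can be evaluated directly from this perfect-matching definition, as in the Python sketch below. The recursion runs in (n − 1)!! time, so it is only a cross-check for small matrices, not a substitute for the fast algorithms discussed in this paper; the function name is ours.

# Minimal sketch: hafnian as a sum over perfect matchings, by pairing
# index 0 with every possible partner and recursing on the remainder.
import numpy as np

def hafnian_bruteforce(A):
    n = A.shape[0]
    if n == 0:
        return 1.0
    if n % 2 == 1:
        return 0.0
    rest = list(range(1, n))
    total = 0.0
    for k, j in enumerate(rest):
        others = rest[:k] + rest[k + 1:]
        total += A[0, j] * hafnian_bruteforce(A[np.ix_(others, others)])
    return total

B = np.array([[0., 1., 2., 3.],
              [1., 0., 4., 5.],
              [2., 4., 0., 6.],
              [3., 5., 6., 0.]])
print(hafnian_bruteforce(B))  # 1*6 + 2*5 + 3*4 = 28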
Defining N := Σ_i s_i as the total number of photons, the submatrix A_S has dimension 2N, meaning that the best algorithm for calculating its hafnian requires O(N^3 2^{2N/2}) = O(N^3 2^N) time. However, when the Gaussian state is pure, it is possible to write A = B ⊕ B* and the probability distribution reduces to [11]

Pr(S) = |haf(B_S)|^2 / (s_1! · · · s_m! √(det Σ_Q)),

where B_S is constructed analogously: if s_i = 0, rows and columns i are deleted; if s_i > 0, the rows and columns are repeated s_i times. Since the matrix B_S has dimension N, computing its hafnian requires only O(N^3 2^{N/2}) time.
An analogous formula can also be derived for GBS with displacements, i.e., when ᾱ ≠ 0. In this case one additionally defines a vector γ that encodes the displacement (its explicit form is given in Refs. [34,38]), and the output probabilities are given by [34,38]

Pr(S) ∝ lhaf(filldiag(A_S, γ_S)),

where lhaf(·) is the loop hafnian [34], γ_S is obtained from γ by repeating the i and i + m entries of γ a total of s_i times, and filldiag(A_S, γ_S) replaces the diagonal of A_S with the vector γ_S. The loop hafnian of an arbitrary matrix is defined analogously to the hafnian in Eq. (6), but replacing the set PMP(n) by the set of single-pair matchings SPM(n) [34]. When the Gaussian state is pure, it is possible to express the output probability as

Pr(S) ∝ |lhaf(filldiag(B_S, γ̃_S))|^2,

where γ̃_S is obtained from γ̃ by repeating its i-th entry s_i times. Up to constant prefactors, the best known algorithms for calculating loop hafnians of generic matrices have the same complexity as for hafnians, namely O(N^3 2^{N/2}) for matrices of dimension N. This implies that the complexity of computing output probabilities with N photons for pure Gaussian states with displacements also scales as O(N^3 2^{N/2}). We make use of this result in the next section.
So far we have focused only on describing the photon-number statistics of a multimode Gaussian state. For the sampling algorithm we describe in the next section it will also be necessary to recall some basic properties of continuous-output measurements on a Gaussian state. Of particular relevance here are so-called heterodyne measurements. The outcomes of an m-mode heterodyne measurement are specified by a vector of complex numbers α = (q + ip)/√2 ∈ C^m. Generating heterodyne samples from a Gaussian state is straightforward. For a Gaussian state with vector of means R̄ and covariance matrix V, we sample from the multivariate normal distribution µ ∼ N(R̄, V_Q), where µ = (q, p) and V_Q = V + 1/2 is the Husimi Q-function covariance matrix [39] in the quadrature basis, with 1 the identity matrix. Explicitly, we obtain outcome µ with probability

p(µ) = exp(−(µ − R̄)^T V_Q^{−1} (µ − R̄)/2) / ((2π)^m √(det V_Q)).   (14)

A partial heterodyne measurement can be performed when measuring only a subset of the modes. This can be done either by sampling from the reduced matrix of V_Q, formed by selecting only the rows and columns of the included modes, or by sampling all modes and discarding the outcomes for modes we don't wish to sample. It is also useful to write the conditional state of a subset of the modes, A, when the other modes, B, are measured using heterodyne detectors. For this it is convenient to write the covariance matrix V in block form with modes in A and B in separate blocks, and similarly to group the modes in the vector of means. This can be done by permuting the rows and columns in the covariance matrix and the elements of the vector of means, keeping the ordering in both consistent. So we write both the covariance matrix and vector of means in the ordering (q_A, p_A, q_B, p_B):

V = [[V_A, V_AB], [V_AB^T, V_B]],   R̄ = (R̄_A, R̄_B).

If the outcome µ_B is obtained by measuring the modes in the set B, we can write the resulting covariance matrix and vector of means for A as [36]

V_A^{(B)} = V_A − V_AB (V_B + 1/2)^{−1} V_AB^T,   R_A^{(B)} = R̄_A + V_AB (V_B + 1/2)^{−1} (µ_B − R̄_B).   (17)

This update rule together with the sampling in Eq. (14) will be useful in the next section, where we introduce our algorithm.
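A small numpy sketch of a partial heterodyne measurement and of this conditional update follows. The vacuum term 1/2 assumes the α = (q + ip)/√2 convention used above, and the helper name and toy two-mode state are ours.

# Minimal sketch: heterodyne modes B of a Gaussian state (R, V) and return
# the conditional state of modes A. Quadrature ordering here is
# (q1, q2, p1, p2); idx_A and idx_B list the quadrature indices of A and B.
import numpy as np

rng = np.random.default_rng(seed=7)

def heterodyne_and_condition(R, V, idx_A, idx_B):
    I_B = 0.5 * np.eye(len(idx_B))            # vacuum noise in this convention
    V_AA = V[np.ix_(idx_A, idx_A)]
    V_AB = V[np.ix_(idx_A, idx_B)]
    V_BB = V[np.ix_(idx_B, idx_B)]
    mu_B = rng.multivariate_normal(R[idx_B], V_BB + I_B)  # heterodyne draw
    gain = V_AB @ np.linalg.inv(V_BB + I_B)
    R_A = R[idx_A] + gain @ (mu_B - R[idx_B])  # conditional mean
    V_A = V_AA - gain @ V_AB.T                 # conditional covariance
    return mu_B, R_A, V_A

V = 0.5 * np.eye(4)                  # two-mode vacuum (placeholder state)
R = np.array([0.3, -0.1, 0.0, 0.2])
mu_B, R_A, V_A = heterodyne_and_condition(R, V, idx_A=[0, 2], idx_B=[1, 3])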
III. ALGORITHM
We now describe an algorithm which samples the modes sequentially but which, unlike the one from Ref. [33], never requires the calculation of mixed-state probabilities. In this algorithm we introduce partial heterodyne measurements so that only pure-state probabilities need to be evaluated at each step. Given a partial heterodyne measurement on the modes in B, the photon-number probability of a pattern S in modes A is given by Eq. (12), where B and γ are now found for the state with covariance matrix V_A^{(B)} and vector of means R_A^{(B)} as given in Eq. (17). Note that if the global covariance matrix V corresponds to a pure state, the conditional covariance matrix V_A^{(B)} also corresponds to a pure state.
If we wish to sample in the photon-number basis in modes A without measuring the other modes, we can make use of the observation in the paragraph above: the marginal photon-number probabilities can be obtained by integrating the joint probabilities over the set of possible heterodyne outcomes α_B:

p(s_A) = ∫ dα_B p(α_B) p(s_A | α_B).

Hence we can sample from the marginal probabilities by sampling from p(α_B), followed by the conditional probabilities p(s_A | α_B), and then simply ignoring the heterodyne outcome. This is justified because one can always sample from a marginal distribution by considering additional virtual variables and then sampling from the correspondingly enlarged probability distribution, as long as one then forgets the values of the added variables. The outcomes of the heterodyne measurements are precisely these added variables; they do not correspond to real measurements in an experimental setup but are rather virtual measurements introduced as convenient conditioning auxiliary variables.
We now want to apply this directly to sampling the modes sequentially. The objective is to sample mode k given that the previous modes 1, . . . , k − 1 have already been sampled, so we set A = {1, . . . , k} and B = {k + 1, . . . , m}. We assume that we have already sampled from p(s_1, . . . , s_{k−1}, α_{k+1}, . . . , α_m). We wish to sample s_k conditional on this outcome. To do this we can calculate the relative probabilities for all s_k and sample from that distribution. This results in a sample drawn from p(s_1, . . . , s_k, α_{k+1}, . . . , α_m). By discarding α_{k+1}, we are left with a sample from p(s_1, . . . , s_k, α_{k+2}, . . . , α_m) and are ready to sample the (k + 1)-th mode. We work progressively from k = 1 to m, essentially replacing the virtual heterodyne measurements with photon-number measurements, until we are left with a sample from p(s_1, . . . , s_m).
This algorithm can sample from a pure multimode state, yet in realistic experiments the full state may not be pure, for example if loss is included. To address this, we express the mixed Gaussian state as a convex combination of pure states. The Williamson decomposition [35,40] of a quantum covariance matrix V states that it can be split as

V = T + W,

where T = (1/2) S S^T is the covariance matrix of a pure state, S is a symplectic matrix, and W is positive semidefinite. In Hilbert space, this implies that a mixed Gaussian state with covariance matrix V = T + W and a vector of means R̄ = (q̄, p̄) can be expressed as [36,41]

̺ = ∫ dR p(R) |ψ_{R,T}⟩⟨ψ_{R,T}|,

where p(R) is the probability density function of a multivariate normal distribution with mean R̄ and covariance matrix W, and |ψ_{R,T}⟩ is a pure Gaussian state with vector of means R and covariance matrix T. Using this, the probability of observing an outcome (s_1, s_2, . . . , s_m) when performing a measurement on a mixed state ̺ on m modes is

Pr(s_1, . . . , s_m) = ∫ dR p(R) |⟨s_1, . . . , s_m | ψ_{R,T}⟩|^2.

Hence to sample from a mixed state we can sample the displacement vector from Eq. (21) and then sample from the resulting pure state.
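As a small illustration of this reduction, the sketch below draws the random displacement that converts the mixed-state problem into a pure-state one. It assumes the Williamson factors T and W have already been obtained from a library routine, which is not re-implemented here; the toy numbers are placeholders.

# Minimal sketch of the mixed-to-pure reduction: given V = T + W, draw
# R ~ N(R_bar, W) and hand the pure state (R, T) to a pure-state sampler.
import numpy as np

rng = np.random.default_rng(seed=11)

def displaced_pure_state(R_bar, T, W):
    R = rng.multivariate_normal(R_bar, W)  # Eq. (21)-style Gaussian draw
    return R, T

T = 0.5 * np.eye(4)   # pure two-mode covariance (placeholder)
W = 0.1 * np.eye(4)   # positive semidefinite classical-noise part
R, T_pure = displaced_pure_state(np.zeros(4), T, W)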
We can now outline the full algorithm for simulating Gaussian boson sampling. Algorithm: to sample from a state given by covariance matrix V and vector of means R̄ in m modes: 1. If the Gaussian state to be sampled is mixed, calculate the matrices T, W from the Williamson decomposition such that V = T + W. Sample a vector R from the multivariate normal distribution p(R) as in Eq. (21). This can be done in cubic time in the size of the matrix [42]. Continue the algorithm with the pure state given by covariance matrix T and vector of means R.
2. Generate a sample from the probability distribution p(α_2, . . . , α_m) resulting from heterodyne measurements on modes 2 to m using Eq. (14). This can be done in cubic time in the number of modes.
After repeating steps 3-6 for each k = 1, 2, . . . , m, we are left with an outcome sampled from p(s_1, . . . , s_m). The correctness of the algorithm follows directly from the definition of conditional probabilities and integration over the auxiliary variables α_2, . . . , α_m, as shown in Appendix A.
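Steps 3-6 are not reproduced above; the following Python sketch reconstructs the per-mode loop from the surrounding prose. Both helpers are hypothetical placeholders: sample_heterodyne stands for the Step-2 draw of Eq. (14), and pure_state_photon_prob for the pure-state probability of Eq. (12).

# Minimal sketch of the chain-rule sampler of Sec. III. alphas[j] holds the
# virtual heterodyne outcome of mode j+2, and pure_state_photon_prob(...)
# returns p(s_1..s_k, s | alpha_{k+2}..alpha_m) up to a common factor.
import numpy as np

rng = np.random.default_rng(seed=3)

def sample_gbs(R, T, m, cutoff, sample_heterodyne, pure_state_photon_prob):
    alphas = sample_heterodyne(R, T, modes=range(1, m))  # Step 2
    sample = []
    for k in range(m):
        # Conditioning set alpha_{k+2}..alpha_m is alphas[k:]; once mode
        # k+1 is sampled, alphas[k] is implicitly discarded.
        weights = np.array([
            pure_state_photon_prob(sample + [s], alphas[k:], R, T)
            for s in range(cutoff)
        ])
        probs = weights / weights.sum()
        sample.append(int(rng.choice(cutoff, p=probs)))
    return sample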
Overall, the algorithm reduces sampling from the GBS distribution to computing pure-state probabilities p(s_k, s*_{k−1}, . . . , s*_1 | α_{k+1}, . . . , α_m). When a total of N photons are detected, calculating the largest such probability amplitude requires O(N^3 2^{N/2}) time, which results from computing the loop hafnians in Step 4. This is scaled up by at most the total number of modes, giving a total sampling complexity of O(m N^3 2^{N/2}). This is a quadratic improvement over the algorithm in Ref. [33], which has complexity O(m N^3 2^N).
It is worthwhile to compare to the algorithm of Ref. [28] for boson sampling, whose complexity is O(N 2^N). Up to polynomial factors, our algorithm suggests that the time required to simulate boson sampling with N photons is roughly the same as simulating GBS with 2N photons. We note that our algorithm can also be used to simulate GBS with threshold detectors: once a photon-number sample is generated, simply set s_i = 1 if s_i > 0, where s_i = 1 denotes that the detector measured one or more photons.
Finally, note that each random variable s_k has support over the non-negative integers; thus in principle one needs to calculate all the (infinitely many) conditional probabilities. However, we can choose some cutoff number of photons d in each mode such that the probability of getting a photon number above this value is negligible. We confirm the accuracy of the algorithm in Fig. 1, where we show that the total variation distance lies within the expected range for the sample size as compared to a brute-force sampler with the same number of samples. This shows that the chosen cutoff was sufficiently high not to introduce any notable error.
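The accuracy check relies on the total variation distance between empirical distributions, which for two sets of samples can be estimated as in the short sketch below (our own helper, not code from Ref. [43]).

# Minimal sketch: empirical total variation distance,
# TVD(p, q) = (1/2) * sum_x |p(x) - q(x)| over the observed outcomes.
from collections import Counter

def tvd(samples_a, samples_b):
    pa, pb = Counter(samples_a), Counter(samples_b)
    na, nb = len(samples_a), len(samples_b)
    support = set(pa) | set(pb)
    return 0.5 * sum(abs(pa[x] / na - pb[x] / nb) for x in support)

# Photon patterns must be hashable, e.g., tuples:
print(tvd([(0, 1), (1, 1), (0, 1)], [(0, 1), (0, 1), (2, 0)]))  # 1/3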
IV. BENCHMARKING
In this section, we test the performance of a new implementation of the loop hafnian algorithm of Ref. [34], which is available in The Walrus [43]. The evaluation of loop hafnians is delegated to multi-threaded C++ code which uses the La Budde algorithm [44] for calculating the characteristic polynomial of a matrix. This gives a speedup of about three times with respect to previous algorithms based on diagonalization but, more importantly, significantly improves the accuracy of the calculation. In the original implementation of the loop hafnian algorithm, which uses double precision and eigenvalue methods for calculating power traces, it was found that a relative error of ~10^{−1} is present when computing the loop hafnian of the 54 × 54 all-ones matrix [34]. To get around this issue, we use the aforementioned La Budde algorithm to significantly improve the accuracy of the calculation of the characteristic polynomial of a matrix [44]. Moreover, this method allows us to use long double complex data types. With these changes we lower the relative error in the calculation by three orders of magnitude compared to the previous implementation. We can then achieve a precision of one part in ten thousand for the computation of loop hafnians of matrices with dimension 56, as shown in Fig. 2.
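For reference, the implementation is exposed through The Walrus's Python interface. The call below assumes the documented hafnian function with a loop flag; this matches the library's interface as we understand it but should be verified against the installed version.

# Illustrative call into The Walrus for a loop hafnian of a random
# symmetric complex matrix (loop=True is an assumed keyword; verify it).
import numpy as np
from thewalrus import hafnian

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
A = A + A.T  # (loop) hafnians are defined for symmetric matrices
print(hafnian(A, loop=True))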
As shown in the previous section, the runtime of the algorithm scales exponentially with the number of photons and linearly with the number of modes. Since the number of photons is the dominant parameter, we benchmark the time taken to calculate the probability of the largest event. If N photons are detected at the end of the algorithm, probabilities involving at most N + d photons need to be calculated, where d is the cutoff.
In Fig. 3 we benchmark the time it takes to calculate the loop hafnian of a random symmetric complex matrix as a function of its size. For a physical system with m modes, an upper bound on the runtime can be obtained by multiplying the runtime of the largest loop hafnian calculation by m. However, typically the N photons are spread out across different modes, so we only calculate the largest loop hafnians for a fraction of the modes. The worst case occurs when photons are detected in the first N modes, resulting in expensive calculations in the remaining m − N modes. For example, for m = 100 and N = 50, we can estimate a runtime of roughly two weeks for generating a single sample on the workstation.
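The two-week figure follows from the simple upper-bound arithmetic below; the per-call time is a placeholder standing in for the value read off Fig. 3.

# Back-of-envelope estimate: one sample costs at most (number of modes)
# times the largest loop-hafnian evaluation. t_largest is a placeholder.
m = 100                      # modes
t_largest_hours = 3.4        # hypothetical time for one 50-photon call
print(m * t_largest_hours / 24.0, "days per sample")  # roughly two weeks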
Finally, the loop hafnian implementation benchmarked here is written for portability and reproducibility. This implies that our implementation of the algorithm is not highly optimized. In the future, we expect to achieve a speed increase of one or two orders of magnitude by low-level optimization of the C++ backend coupled with the use of a modern task-based parallelism library [45] for more efficient load-balancing.
V. DISCUSSION
We have introduced an algorithm for the simulation of Gaussian boson sampling for which the complexity of generating a sample scales, up to polynomial prefactors, like the complexity of calculating a probability amplitude. This results in a quadratic speedup compared to the previous state-of-the-art. The algorithm is exact up to a small error induced by a cutoff dimension, runs in polynomial space, and simulates the most general forms of Gaussian boson sampling.
A remarkable consequence of our result is that Gaussian boson sampling requires roughly twice as many photons as standard boson sampling to reach the same regime of classical simulation. This has potential implications for experimental efforts at demonstrating an advantage over classical simulators. Indeed, for a number of photons N and a number of modes m, the complexity of the best algorithm for boson sampling scales as O(N 2^N), whereas our algorithm has runtime O(m N^3 2^{N/2}). A possible interpretation is that in Gaussian boson sampling, where photons are generated through squeezing, it is the number of photon pairs that determines complexity, not the number of photons.
"year": 2020,
"sha1": "16d69ff98498604802beb665a9282b155d3110b9",
"oa_license": "CCBY",
"oa_url": "http://link.aps.org/pdf/10.1103/PRXQuantum.3.010306",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "16d69ff98498604802beb665a9282b155d3110b9",
"s2fieldsofstudy": [
"Computer Science",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
231880230 | pes2o/s2orc | v3-fos-license | Visceral adipose tissue-directed FGF21 gene therapy improves metabolic and immune health in BTBR mice
Fibroblast growth factor 21 (FGF21) is a peptide hormone that serves as a potent effector of energy homeostasis. Increasingly, FGF21 is viewed as a promising therapeutic agent for type 2 diabetes, fatty liver disease, and other metabolic complications. Exogenous administration of native FGF21 peptide has proved difficult due to unfavorable pharmacokinetic properties. Here, we utilized an engineered serotype adeno-associated viral (AAV) vector coupled with a dual-cassette design to selectively overexpress FGF21 in visceral adipose tissue of insulin-resistant BTBR T+Itpr3tf/J (BTBR) mice. Under high-fat diet conditions, a single, low-dose intraperitoneal injection of AAV-FGF21 resulted in sustained benefits, including improved insulin sensitivity, glycemic processing, and systemic metabolic function and reduced whole-body adiposity, hepatic steatosis, inflammatory cytokines, and adipose tissue macrophage inflammation. Our study highlights the potential of adipose tissue as a FGF21 gene-therapy target and the promise of minimally invasive AAV vectors as therapeutic agents for metabolic diseases.
INTRODUCTION
Fibroblast growth factor 21 (FGF21) is a peptide hormone that acts on various tissues to maintain energy homeostasis. While FGF21 production predominantly occurs in the liver, adipocytes are the main target of FGF21 action. [1][2][3][4][5] In white adipose tissue (WAT), FGF21 stimulates glucose uptake in an insulin-independent manner, 6 modulates lipolysis, 3 regulates mitochondrial activity, 7 and regulates adaptive thermogenesis. 8 Furthermore, the antidiabetic actions of FGF21, resulting in improvements in obesity-induced hyperglycemia, hypertriglyceridemia, and peripheral insulin resistance, are thought to occur primarily within WAT. 1,2 The therapeutic potential of FGF21 is well recognized by the metabolism, pharmaceutical, and gene therapy communities. Increasingly, FGF21 is viewed as a potential therapeutic agent for type 2 diabetes (T2D), fatty liver disease, and other metabolic complications. 9 Exogenous administration of recombinant FGF21 protein to ob/ob, db/db, and high-fat diet (HFD) mice reduces adiposity, lowers blood glucose and triglycerides, and improves insulin sensitivity. 6,10,11 Despite early successes, use of the native FGF21 peptide was found to be unfavorable due to its short half-life and biophysical deficiencies. 12 While the development of FGF21 analogs and mimetics is ongoing and provides promise, some limitations exist. The use of FGF21 protein analogs or mimetics may require repeated administrations for maintained clinical benefit, which raises concerns about immunological reactions associated with exogenous protein administration, patient comfort, and treatment non-compliance. [13][14][15] Recent work in the field has investigated the use of adeno-associated viral (AAV) vectors for FGF21 gene therapy to treat obesity and insulin resistance. 16,17 AAV vectors offer one solution to the challenges of long-term therapeutic protein administration, as they require only a single administration for long-term transgene production. AAV vectors are predominantly non-integrative, and their genomes persist as episomes in non-dividing cells. 18 Variations in AAV capsids yield tissue tropism, making AAVs adaptable for various therapeutic-, tissue- and cell-specific applications. 19 WAT is a dynamic endocrine and secretory organ, serving as much more than a mere vessel for energy storage. 20 Visceral adipose tissue (VAT) is a subtype of WAT that surrounds inner organs in the abdominal cavity and is thought to contribute to local/systemic inflammation and metabolic function. 21,22 Mature adipocytes in WAT depots are terminally differentiated, 23 making them an attractive target for primarily non-integrative gene expression vectors such as AAVs. Recently, we characterized a novel engineered hybrid serotype, Rec2, which achieves superior transduction of adipose tissue when compared to naturally occurring AAV serotypes. 24 We and others have applied Rec2 serotype vectors to manipulate adipose depots of interest in various mouse models. [24][25][26][27][28][29][30][31][32] Furthermore, we recently designed a dual-cassette vector to transduce VAT in a highly selective manner, while severely restricting off-target transduction of the liver during non-invasive intraperitoneal (i.p.) administration. 25,31,32 Here, we combine these unique delivery systems to investigate the potential for WAT-targeted FGF21 gene therapy in obese, insulin-resistant BTBR T+Itpr3tf/J (BTBR) mice. 33
RESULTS
To investigate the potential of WAT-directed FGF21 gene therapy to improve metabolic dysregulation, we utilized an adipose-targeting recombinant AAV (rAAV)-Rec2 vector containing two expression cassettes: one drives transgene expression from a constitutive promoter, while the second uses a liver-specific promoter to express a microRNA targeting the woodchuck post-transcriptional regulatory element (WPRE) sequence present in the first cassette (Figure 1A). i.p. injection of the Rec2 dual-cassette vector achieves selective transduction of visceral adipose depots while severely restricting off-target transgene expression in the liver. 25,31,32 This VAT-directed FGF21 gene therapy was applied to insulin-resistant BTBR mice under normal chow diet (NCD) and diet-induced obesity (DIO) conditions.
Intraperitoneal administration of Rec2-FGF21 sustains FGF21 overexpression in VAT and alters hypothalamic gene expression under NCD conditions

NCD-fed mice were injected i.p. with 2.0 × 10^10 vg of either Rec2-FGF21 or a control vector containing no transgene, Rec2-Empty (Figure 1A). Mice were subjected to various metabolic tests over a 21-week period (Figure 1B). No differences in body weight (Figure 1C) or food intake (Figure 1D) were observed over 21 weeks. An in vivo echoMRI at 4, 7, and 21 weeks post-injection revealed no significant differences in body fat (Figures 1E, S1A, and S1B) or lean mass percentage (Figures 1F, S1C, and S1D). At 5 weeks post-AAV injection, mice were subjected to a glucose tolerance test (GTT) to assess systemic glycemic processing; no significant differences were observed (Figures 1G and 1H). At 11 weeks post-AAV injection, an open-field (OF) test was performed; no differences in anxiety-like behaviors and locomotion were observed (Figures S1E-S1G). From 12 to 14 weeks post-AAV injection, indirect calorimetry was performed. No significant differences between Rec2-FGF21 and Rec2-Empty groups were observed across various metrics, including VO2, VCO2, respiratory exchange ratio (RER), and ambulation (Figures S2A-S2H). At 20 weeks post-AAV injection, an insulin tolerance test (ITT) revealed no significant differences in non-fasting glucose levels or insulin sensitivity between the two groups (Figures 1I and 1J). At 21 weeks post-injection, tissues and serum were collected. Rec2-FGF21 and Rec2-Empty mice displayed no significant differences in relative tissue mass of brown adipose tissue (BAT), inguinal WAT (iWAT), gonadal WAT (gWAT), retroperitoneal WAT (rWAT), liver, or pancreas (Figure 1K).
Rec2-FGF21-treated mice exhibited an approximately 10-fold elevation of serum FGF21 (Figure 1L). No change in serum leptin (Figure 1M) or chemokine (C-C motif) ligand 2 (CCL2, also known as MCP-1) was observed (Figure 1N). Consistent with the serum data, a robust 100-fold upregulation of Fgf21 expression was observed in the gWAT (Figure 1O). Together, these data indicate that Rec2-FGF21 stably upregulated local and systemic FGF21 levels over the duration of the 21-week study, although no functional changes in systemic metabolism were observed under NCD conditions. Given that FGF21 acts centrally, [34][35][36] we additionally profiled hypothalamic tissue to assess gene-therapy-induced alterations in neuroendocrine and inflammation markers (Figure S3). FGF21 acts on a receptor complex consisting of the ubiquitously expressed FGF receptor 1 (encoded by Fgfr1) and a co-receptor, β-klotho (encoded by Klb), that is restricted to specific metabolic tissues including adipose tissue, liver, and particular areas of the brain. 37,38 No changes in Fgfr1 or Klb were observed. The Rec2-FGF21-treated group exhibited a trend of upregulation of Crh (encoding corticotropin-releasing hormone), consistent with peripheral upregulation of FGF21. 34 A trend of downregulation of Insr (encoding insulin receptor) was observed in the Rec2-FGF21-treated group. No changes in other neuropeptides or receptors involved in energy balance, including Obrb (encoding leptin receptor), Npy (encoding neuropeptide Y), Pomc (encoding proopiomelanocortin), or TrkB-FL (encoding full-length tropomyosin receptor kinase B), were observed. Interestingly, a set of inflammatory cytokines and immune modulatory genes, including Ccl2, Il1b (encoding interleukin-1β), Ikbkb (encoding inhibitor of nuclear factor kappa-B kinase subunit beta), Tnfa (encoding tumor necrosis factor alpha), Il33 (encoding interleukin-33), and H2Ab1 (encoding histocompatibility 2, class II antigen A, beta), was collectively downregulated in the hypothalamus of the Rec2-FGF21 group.
VAT-directed overexpression of FGF21 improves systemic metabolism in DIO mice

Following 4 weeks of HFD feeding, a separate DIO cohort of mice was injected i.p. with 2.0 × 10^10 vg of either Rec2-FGF21 or Rec2-Empty. Mice were subjected to various metabolic tests over a 16-week period (Figure 2A). No differences in absolute body weight were observed over the course of the experiment (Figure 2B). While Rec2-Empty mice continued to gain weight, we observed a moderation in DIO-induced weight gain in Rec2-FGF21-injected mice (Figure 2C). In tandem with these observations, Rec2-FGF21 mice exhibited increased relative food consumption calibrated to body weight, suggesting an increase in energy expenditure (Figure 2D). An in vivo echoMRI revealed Rec2-FGF21 mice to have a reduced body fat percentage (Figures 2E and S4A) and increased lean mass percentage (Figures 2F and S4B) when compared to Rec2-Empty controls. At 8 weeks post-AAV injection, mice were subjected to an ITT. Rec2-FGF21 mice showed significantly lower non-fasting blood glucose levels (at t = 0) and an improved overall response to the ITT (Figures 2G and 2H), indicative of alleviation of insulin resistance in the obese BTBR mouse model. At 10 weeks post-AAV injection, an OF test was performed; no differences in anxiety-like behaviors and locomotion were observed (Figures S4C-S4E). At 10 weeks post-AAV injection, a GTT was performed. Rec2-FGF21 mice cleared an i.p. glucose bolus in a more efficient manner, indicating an improvement in glycemic processing following FGF21 gene therapy (Figures 2I and 2J). Indirect calorimetry further revealed increases in VO2 and VCO2 in Rec2-FGF21 mice (Figures 3C and 3D), indicative of elevated energy expenditure consistent with previous observations following FGF21 gene therapy. 16 No differences were observed in RER (Figures 3E and 3F) or ambulation (Figures 3G and 3H). Together, these data indicate improved systemic metabolic function following VAT-directed FGF21 gene therapy. Consistent with the reduced whole-body adiposity, Rec2-FGF21 mice displayed reductions in adipose tissue weight (Figure 4A) and relative tissue weight (Figure 4B). No differences were observed in the tissue weight or relative tissue weight of the virally treated gWAT (Figures 4A and 4B). Rec2-FGF21 mice additionally displayed a significant increase in relative tissue weight of the gastrocnemius (Figure 4B). A large reduction in liver absolute weight and relative tissue weight was observed in the Rec2-FGF21 group (Figures 4C and 4D). Hepatic steatosis was alleviated in Rec2-FGF21 mice as measured by liver H&E staining (Figure 4E) and triglyceride quantification (Figure 4F). Of note, Fgf21 overexpression was not observed in the liver, consistent with the liver-restricting nature of the dual-cassette AAV vector design (Figure 4G).
VAT-directed FGF21 gene therapy alters serum adipokine and inflammation markers in DIO mice
As expected, FGF21 was increased in the serum of Rec2-FGF21 mice ( Figure 5A). Given this observation, we profiled various serum markers of metabolic function and inflammation to assess changes following FGF21 gene therapy.
Adiponectin is an adipokine that has been shown to connect FGF21 action in adipocytes to liver and skeletal muscle, thus improving insulin sensitivity, glucose homeostasis, and systemic metabolism. 2,39 Accordingly, there was a trend toward increased total adiponectin (p = 0.08) ( Figure 5B) and significantly increased high-molecular-weight (HMW) adiponectin in the serum of Rec2-FGF21 mice (Figure 5C). The ratio of HMW adiponectin to total adiponectin has been described as an advanced marker of systemic metabolism and cardiac health. 40 Rec2-FGF21 mice displayed an increase in the HMW:total adiponectin ratio ( Figure 5D).
Leptin is predominantly secreted by adipocytes and serves as a central-peripheral messenger to maintain energy homeostasis. Leptin production is positively correlated with adipose tissue mass and has additionally been described as a proinflammatory link between immune and neuroendocrine systems. 41 No changes in several additional serum markers were observed (Figure 5H). A trending, but not significant (p = 0.08), decrease in serum insulin was observed (Figure 5I), indicative of a trend toward improved insulin sensitivity. No difference was observed in the HOMA-IR (homeostasis model assessment of insulin resistance) index of the two groups after a 4-h fast (Figure 5J).
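For readers unfamiliar with the index, HOMA-IR is computed from fasting glucose and insulin. The sketch below uses the standard formula with glucose in mg/dL and insulin in uU/mL; this is an assumption, as the authors' exact calculation is not shown.

# Minimal sketch of the HOMA-IR index (standard formula; an assumption):
#   HOMA-IR = fasting glucose [mg/dL] * fasting insulin [uU/mL] / 405
def homa_ir(glucose_mg_dl, insulin_uU_ml):
    return glucose_mg_dl * insulin_uU_ml / 405.0

print(homa_ir(150.0, 12.0))  # -> ~4.4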
We additionally profiled several serum proinflammatory cytokines and chemokines. Serum amyloid A (SAA) serves as a marker of inflammation and is thought to be tied to macrophage-related immunologic pathways. 42,43 We observed a strong trend toward reduction of SAA in the Rec2-FGF21 group as compared to controls (Figure 5K). Plasminogen activator inhibitor-1 (PAI-1) is involved in fibrinolysis, and its elevation is thought to contribute to vascular disease and inflammation in obese states. 44,45 PAI-1 levels were reduced in the Rec2-FGF21 group (Figure 5L), indicative of improved metabolic function and reduced inflammation. Serum CCL2 levels were reduced in the Rec2-FGF21 group (Figure 5M), indicative of decreased inflammation; these findings are additionally consistent with the observed improvements in insulin sensitivity. 46 No change in interleukin-1 beta (IL-1b) serum levels was observed (Figure 5N).
We assessed gene expression in an additional VAT depot, the rWAT ( Figure 6B). A robust 120-fold overexpression of Fgf21 was observed in the Rec2-FGF21-treated rWAT. Despite this robust overexpression, no changes were observed in various adipokine and inflammation markers. In the iWAT-a nontargeted subcutaneous adipose depot-no changes in Fgf21 or its receptors were observed (Figure 6C), consistent with the viral administration technique. In the iWAT, adipokine and mitochondrial markers remained unchanged. Two markers of inflammation, Ccl2 and Pai1, were downregulated in the Rec2-FGF21 group ( Figure 6C). In the BAT, gene expression of Fgf21 and its receptors remained unchanged ( Figure 6D). Rec2-FGF21 mice displayed a reduction of Lep gene expression in the BAT, with no change in Adipoq. In the BAT, various markers of mitochondrial function, thermogenesis, and fatty acid synthesis/oxidation-including Ppargc1a, Ucp1, Dio2 (encoding iodothyronine deiodinase 2), and Ppara (encoding peroxisome proliferator activated receptor alpha)-remained unchanged ( Figure 6D). No transgene expression was found in pancreas or skeletal muscle. No changes in Fgf21, Fgfr1, or Klb were observed in the pancreas (data not shown) or the gastrocnemius ( Figure S5B).
It is well documented that FGF21 acts centrally, through CRH, to induce energy expenditure, thermogenesis, and sympathetic nerve activity. [34][35][36] Given the observed improvements in systemic metabolic function, we measured hypothalamic gene expression of several neuroendocrine and inflammatory markers in the HFD cohort (Figure S5C). No changes in Fgfr1 or Klb were observed. Hypothalamic Crh was upregulated, consistent with peripheral upregulation of FGF21. 34 Insr was upregulated, consistent with the observed serum insulin reduction in the Rec2-FGF21-treated group. No changes were observed in additional neuroendocrine markers, including Obrb, Npy, Pomc, and TrkB-FL. In contrast to Rec2-FGF21 treatment in NCD mice, no changes in a myriad of inflammation markers, including Ccl2, Il1b, Ikbkb, Il18, Il33, and H2Ab1, were observed in HFD mice. Interestingly, we observed upregulation of Tnfa in the Rec2-FGF21 group.
VAT-directed FGF21 gene therapy reduces ATM inflammation in DIO mice
Under obese conditions, adipose tissue macrophages (ATMs) accumulate in VAT and exhibit a proinflammatory M1 polarization (CD11c+, CD206−), contributing to insulin resistance. 48 In contrast, lean animals present primarily with an M2-polarized state (CD11c−, CD206+), which is thought to protect adipocytes from inflammation. 48 As such, we isolated the stromal vascular fraction (SVF) from gWAT and performed fluorescence-activated cell sorting (FACS). Administration of Rec2-FGF21 resulted in distinct changes in ATM polarization within the VAT (Figure 7A). While no percentage changes in the total population of ATMs (Figure 7B) and M1-polarized populations (Figure 7C) were observed, a significant percentage increase in M2 polarization was observed in VAT of Rec2-FGF21 mice (Figure 7D). This change was accompanied by a significant percentage decrease in double-positive (CD11c+, CD206+) ATMs (Figure 7E). ATMs with this signature have been identified as sources of proinflammatory cytokines and are thought to be drivers of insulin resistance. 49 These data indicate that FGF21 gene therapy to the VAT reduced ATM inflammation and is associated with the observed improvements in insulin sensitivity and systemic metabolism. We additionally profiled other immune cell populations in the gWAT. No changes were observed in populations of natural killer T cells (Figure 7F), T cells (Figure 7G), or subpopulations of CD4+ T cells (Figure 7H) and CD8+ T cells (Figure 7I).
DISCUSSION
The present work provides evidence of a novel adipose-targeting, liver-restricting rAAV vector for long-term, specific transgene expression within VAT. Here, VAT depots were targeted using the adipotrophic Rec2 serotype in tandem with a dual-cassette rAAV utilizing a liver-restricting element. 25 Combined, these techniques allow for a minimally invasive delivery system that is equivalent to direct fat injections. By utilizing the VAT as an FGF21 "factory" or "pump" to induce FGF21 in circulation, insulin resistance and obesity were reversed in BTBR mice. In addition, local and systemic obesity-associated inflammation was reduced.
Here, we report i.p. Rec2-FGF21 administration promotes a robust VAT-specific overexpression of FGF21 with no change in liver transgene expression. This technique builds upon an extensive report that highlighted the potential for FGF21 gene therapy to counter obesity and insulin resistance in HFD and ob/ob murine models. 16 Our technique differs in two important manners. First, the previous report used AAV8 serotype vector and target sequences for miR-122a and miR-1 to limit transgene expression in the liver and heart. 16 We used an engineered hybrid serotype Rec2 vector-which transduces adipose tissue more efficiently than the naturally occurring AAV8 24 -in combination with a dual-cassette design to restrict off-target liver transduction. 25 This study adds to the literature that characterizes the efficacy of this vector system to selectively transduce adipose tissue. 25,31,32 Second, Jimenez and colleagues 16 performed a laparotomy to directly administer their AAV8 vectors to VAT, whereas the technique presented here allows for non-invasive i.p. injections. In theory, our technique is clinic friendly and stresses the importance of developing minimally invasive administration techniques for widespread use of AAVs as therapeutic agents.
Functionally, our vectors performed similarly. VAT-specific overexpression of FGF21 in ob/ob 16 and BTBR mouse models resulted in increased serum FGF21, increased serum adiponectin, improved glycemia and insulinemia, improved insulin sensitivity, and reduced hepatic steatosis. The previous work found that FGF21 gene therapy to VAT reduced immunostaining of a macrophage marker, Mac2, and expression of F4/80. 16 Our work expands upon these observations, as we performed FACS to more comprehensively observe the immune populations residing in the AAV-treated gWAT, thus profiling ATM polarization and T cell subsets. Importantly, we discovered Rec2-FGF21 gene therapy altered ATM polarization toward a less-inflammatory state, characterized by an increase in anti-inflammatory M2 polarization (CD11c−, CD206+) and a decrease in proinflammatory double-positive (CD11c+, CD206+) ATMs. These ATM phenotypes have been shown to have causal links to lean states and improved insulin sensitivity. 48,49 Consistent with the favorable ATM polarization, Rec2-FGF21 administration downregulated the expression of proinflammatory cytokines, chemokines, and inflammasome components in treated gWAT. This gene signature was associated with significant reduction of cytokine and chemokine levels in circulation, suggesting alleviating VAT inflammation is sufficient to lessen systemic chronic inflammation, which, importantly, is implicated in various diseases beyond obesity and T2D. These observations warrant further investigation of VAT-targeted FGF21 gene therapy to treat cardiovascular diseases, non-alcoholic fatty liver disease, and certain types of cancer.
Adiponectin has been proposed as a messenger that links FGF21 actions in local adipocytes to liver and skeletal muscle. 2,39 FGF21 treatment increases adiponectin secretion from adipocytes, 2,39 and adiponectin has been shown to promote M2 macrophage polarization. 50 Additionally, adiponectin has been shown to confer the effects of FGF21 on hepatic fatty acid oxidation and lipid clearance. 2 Interestingly, we observed reduced hepatic steatosis following VAT-directed gene therapy; this change was not due to increased hepatic FGF21 expression. We observed increased serum adiponectin following VAT-directed FGF21 gene therapy, providing one potential explanation for the observed reduction in hepatic triglycerides. While not the primary focus of this work, these findings highlight the importance of investigating tissue crosstalk following tissue-directed gene therapy and understanding mechanistic players in such processes.
In the BAT of the Rec2-FGF21-treated group, we observed no change in Ucp1 expression or associated genes. Previous work has shown that FGF21 increases whole-body energy expenditure in ablated BAT and UCP-1 knockout mouse models, 51,52 suggesting FGF21 plays a role in UCP-1-independent thermogenic processes. In contrast to our data, the previous report showed that liver-directed administration of FGF21 gene therapy induced UCP-1 and browning in BAT. 16 Further work is needed to understand the mechanisms of FGF21 in UCP-1-dependent/-independent thermogenesis and to delineate whether tissue source of FGF21 (e.g., liver or adipose tissue) matters in such biological processes.
WAT depots vary in function and their responses to metabolic stimuli. WAT is broadly classified into two depots: VAT and subcutaneous adipose tissue. The former surrounds internal organs, is associated with insulin resistance and metabolic disease, and is thought to contribute to local/systemic inflammation. 21,53,54 The latter is found predominantly around the thighs and is associated with insulin sensitivity. 55 In the present work, we target VAT preferentially with Rec2-FGF21. Intraperitoneal administration of the Rec2 vector did not result in transgene expression in subcutaneous iWAT. Interestingly, mild reductions in inflammation and relative tissue weight were observed in iWAT in the absence of increased Fgf21, Fgfr1, and Klb expression. These results raise questions regarding adipose-adipose crosstalk and FGF21's role in mediating overall metabolic function. FGF21 has differential actions on various WAT depots; in subcutaneous adipose tissue, FGF21 regulates PGC1-α and browning in adaptive thermogenesis. 8 Furthermore, a recent report suggests that FGF21 induces transcriptomic changes associated with reduced subcutaneous adipose tissue weight. 56 Data on FGF21's specific roles in VAT are less conclusive and warrant further investigation. The AAV technology presented here provides one such method to further probe these depot-specific roles; two visceral adipose depots, gWAT and rWAT, displayed robust transgene expression with negligible transgene expression in subcutaneous iWAT. Notably, growing evidence suggests incongruences between functional aspects of human and murine WAT depots; 57-59 careful experimental design and depot-specific techniques must be used to aid in translation of murine findings to human health.
BTBR mice are an inbred strain often used as a model of autism spectrum disorder (ASD). HFD is shown to exacerbate social deficiencies and cognitive rigidity in BTBR mice. 60 Moreover, BTBR mice display aberrant immune responses compared to more sociable C57BL/6 mice, characterized by higher anti-brain antibodies, elevated expression of cytokines in the brain (particularly IL-33 and IL-18), and an increased proportion of MHC II-expressing microglial cells. It is proposed that this constitutive neuroinflammation indicates an autoimmune profile contributing to their aberrant behaviors. 61 Rec2-FGF21 treatment led to downregulation of a cluster of immune-modulatory genes in the hypothalamus (Il33, Il1b, Ccl2, Tnfa, Ikbkb, H2Ab1; Figure S3) under NCD conditions, although metabolic outcomes were unremarkable. The impact of adipose-targeted FGF21 treatment on neuroinflammation, behaviors, and the underlying mechanisms warrants future investigation. Another unexpected finding is that the HFD-induced hypothalamic neuroinflammation seen in C57BL/6 mice was absent in the BTBR mice, although BTBR mice remain more prone to DIO. These observations warrant further work on (1) the aberrant neuroendocrinological and neuroimmunological differences between the insulin-resistant, ASD-like BTBR model and sociable strains, and (2) assessments of the potential benefits of FGF21 treatment beyond metabolic outcomes.
From a therapeutic standpoint, FGF21 AAVs provide several advantages to administration of native FGF21 peptide, analogs, and/or mimetics; such therapeutics require repeated administration, patient adherence, and may be subject to immunological concerns stemming from use of exogenous proteins. In contrast, FGF21 gene therapy via AAV constructs would require but a single administration for long-term transgene persistence. AAV-FGF21 vectors have the additional advantage of producing the wild-type protein, which is easily recognized by canonical FGF21 signaling pathways and has a reduced likelihood of inducing peptide-related adverse immune responses.
It is important to consider the use indications for FGF21 gene-therapy vectors, analogs, and mimetics. Importantly, we observe limited alterations in systemic metabolism following VAT-directed FGF21 gene therapy in mice on NCD. At this time, the use of AAV FGF21 techniques would not be indicated for use in non-obese individuals like the ones reported in the NCD study (Figure 1). Indeed, the overwhelming majority of preclinical and clinical trials for FGF21 gene therapy, analogs, and mimetics are for obese and/or diabetic individuals. 62 Some have considered aging as an indication for FGF21-related therapeutics. Davidsohn and colleagues 17 recently reported a combination gene therapy based on 3 longevity-associated genes, including FGF21, to treat multiple age-related diseases. Adipose tissue dysfunction is thought to be a key driver of aging, leading to a systemic proinflammatory state and multi-organ dysfunction. The related pathophysiology of age-related systemic functional decline often mirrors the pathologies related to obesity. 63 Accordingly, we have initiated a long-term study to examine the effects of VAT-directed FGF21 gene therapy on healthspan and lifespan in middle-aged mice.
In summary, AAV-mediated gene therapy is increasingly attractive as a strategy to fight obesity and metabolic diseases. 64,65 Excessive adiposity is a risk factor for T2D, metabolic syndrome, inflammation, and certain types of cancer. [66][67][68][69][70] VAT is a prime therapeutic target due to its nature as a secretory organ; adipokines can be harnessed to induce local and systemic improvements in metabolic and immune health. [71][72][73] Our study combines an engineered AAV serotype, liver-restricting design, and i.p. administration techniques to provide an example of VAT-targeted gene therapy. Currently, the vast majority of peripheral gene therapies target liver or muscle; the advantages and drawbacks of using these tissues as targets are well characterized. 74 In contrast, the advantages and disadvantages of adipose tissue as a target tissue remain largely unknown and warrant further investigation. The recent development of AAV vectors with improved adipose tropism and restriction of off-target transduction paves the way to investigate the long-term transgene expression, local and systemic immune responses, therapeutic efficacy, and safety profile of these vectors in adipose tissue. 75 New AAV administration techniques and bioengineering projects will be essential to increase the specificity and efficacy of targeted gene therapies. 74,76
MATERIALS AND METHODS
Animals
BTBR (Jackson Laboratory #002282) mice were obtained and bred in-house. Mice were housed in temperature- (22°C-23°C) and humidity-controlled (30%-70%) rooms under a 12-h:12-h light:dark cycle. All animal experiments were in accordance with the regulations of The Ohio State University's Institutional Animal Care and Use Committee.
NCD mice
Adult male BTBR mice (16-20 weeks old) were placed on NCD (11% fat, caloric density 3.4 kcal/g, Teklad). At baseline, mice were randomized to create two groups (n = 4, Rec2-Empty; and n = 8, Rec2-FGF21) that had no significant differences in age, body weight, fat mass percentage, or lean mass percentage. Following randomization, mice were administered rAAV vectors as described below. Mice were maintained on NCD for the remainder of the study, having ad libitum access to food and water. Body weights were monitored on a weekly basis. In vivo measurements occurred according to the timeline in Figure 1B.
DIO mice
Adult male BTBR mice (13-19 weeks old) were placed on HFD (60% kcal from lard; Research Diets #D12492). After 4 weeks of HFD, mice were randomized to create two groups (n = 7, Rec2-Empty; and n = 7, Rec2-FGF21) that had no significant differences in age, body weight, fat mass percentage, or lean mass percentage. Following randomization, mice were administered rAAV vectors as described below. Mice were maintained on HFD for the remainder of the study, having ad libitum access to food and water. Body weights were monitored on a weekly basis. In vivo measurements occurred according to the timeline in Figure 2A.
rAAV vectors
One cassette of the dual-cassette rAAV encoded the FGF21 transgene (Rec2-FGF21; Figure 1A). The second cassette encoded a microRNA targeting the WPRE sequence driven by a basic albumin promoter to limit transgene expression in the liver. This liver-restricting dual cassette was previously described and verified. 25 The empty control vector (Rec2-Empty) lacked a transgene insertion in the multiple cloning sites. Rec2 serotype specificity, transduction efficacy in adipose tissue, and packaging were previously detailed elsewhere. 24,77,78 Rec2-Empty and Rec2-FGF21 rAAV vectors (2 × 10^10 vg) were administered to mice via i.p. injections (in 150 μL AAV buffer).
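For readers reproducing the dosing, the short Python sketch below illustrates how an injection plan can be derived from the stated dose (2 × 10^10 vg per mouse) and injection volume (150 μL). The stock titer and overage factor in the example are hypothetical assumptions and are not taken from this study.

```python
# Sketch: computing an rAAV dilution for a fixed dose and injection volume.
# Only the dose (2e10 vg) and injection volume (150 uL) come from the text;
# the stock titer below is a hypothetical assumption.

TARGET_DOSE_VG = 2e10          # vector genomes per mouse (from the text)
INJECTION_VOLUME_UL = 150.0    # i.p. injection volume in microliters (from the text)
STOCK_TITER_VG_PER_ML = 5e12   # hypothetical stock titer (assumption)

def dilution_plan(n_mice: int, overage: float = 1.2) -> dict:
    """Return volumes of stock and buffer needed to dose n_mice, with overage."""
    required_titer = TARGET_DOSE_VG / (INJECTION_VOLUME_UL / 1000.0)  # vg per mL
    total_volume_ul = n_mice * INJECTION_VOLUME_UL * overage
    stock_fraction = required_titer / STOCK_TITER_VG_PER_ML
    stock_ul = total_volume_ul * stock_fraction
    return {
        "required_titer_vg_per_mL": required_titer,
        "total_volume_uL": total_volume_ul,
        "stock_uL": stock_ul,
        "buffer_uL": total_volume_ul - stock_ul,
    }

print(dilution_plan(n_mice=7))  # e.g., the n = 7 Rec2-FGF21 HFD group
```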
EchoMRI
EchoMRI was utilized to measure body composition of fat and lean mass in live mice without anesthesia. Body composition analysis was performed with an echoMRI 3-in-1 analyzer at the Small Animal Imaging Core of the Dorothy M. Davis Heart & Lung Research Institute, The Ohio State University. Fat, lean, free water, and total water mass were measured by the echoMRI machine and then normalized to total body weight as measured 10 min prior to the scan.
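The normalization described above is a simple ratio; the following minimal sketch shows the calculation, with made-up numbers rather than study data.

```python
# Sketch: normalizing EchoMRI fat and lean mass to total body weight,
# as described in the text. The values below are illustrative only.

def body_composition_percent(fat_g: float, lean_g: float, body_weight_g: float) -> dict:
    """Express fat and lean mass as a percentage of total body weight."""
    return {
        "fat_pct": 100.0 * fat_g / body_weight_g,
        "lean_pct": 100.0 * lean_g / body_weight_g,
    }

print(body_composition_percent(fat_g=12.3, lean_g=22.1, body_weight_g=42.0))
```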
ITT
Mice were injected i.p. with an insulin solution (1.5 U insulin per kg body weight) under non-fasting conditions. Blood was obtained from the tail at baseline, 15, 30, 60, 90, and 120 min after insulin injection. Blood glucose concentrations were measured with a portable glucose meter (Bayer Contour Next).
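A minimal sketch of the per-mouse dosing arithmetic for the ITT is given below; the body weights and the working-solution concentration are assumptions used only for illustration, since neither is stated in the text.

```python
# Sketch: per-mouse insulin dose for an ITT at 1.5 U per kg body weight
# (dose from the text). The working-solution concentration is an assumption
# used only to convert the dose into an injectable volume.

DOSE_U_PER_KG = 1.5
WORKING_SOLUTION_U_PER_ML = 0.3   # assumption (not stated in the text)

def itt_injection(body_weight_g: float) -> dict:
    dose_u = DOSE_U_PER_KG * body_weight_g / 1000.0
    volume_ml = dose_u / WORKING_SOLUTION_U_PER_ML
    return {"dose_U": dose_u, "volume_mL": volume_ml}

for bw in (35.0, 42.0, 50.0):   # illustrative body weights in grams
    print(bw, itt_injection(bw))
```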
OF test
Mice were individually placed into the center of an open square arena (60 × 60 cm, enclosed by walls of 48 cm). Each mouse was allowed to explore the arena for 10 min, during which locomotion in the center and the periphery of the OF was recorded and analyzed via TopScan (CleverSys) software. Between each trial, the arena was cleaned with Opticide to remove odor cues.
GTT
Mice were injected i.p. with glucose solution (1.0 g glucose per kg body weight) after a 17-h overnight fast. Blood was obtained from the tail at baseline, 15, 30, 60, 90, and 120 min after glucose injection. Blood glucose concentrations were measured with a portable glucose meter (Bayer Contour Next).
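The sketch below shows the glucose-dose arithmetic (1.0 g/kg, from the text) together with a trapezoidal area-under-the-curve summary. AUC is a common way to summarize a GTT, but it is offered here only as an illustration; the study itself analyzed the timepoints with mixed-model ANOVA, and the glucose values shown are invented.

```python
# Sketch: glucose dose for a GTT at 1.0 g per kg body weight (from the text),
# plus an area-under-the-curve (AUC) summary computed by the trapezoidal rule.
# The glucose readings below are made-up example values, not study data.

def gtt_dose_g(body_weight_g: float, dose_g_per_kg: float = 1.0) -> float:
    return dose_g_per_kg * body_weight_g / 1000.0

def auc_trapezoid(times_min, glucose_mg_dl) -> float:
    auc = 0.0
    for (t0, g0), (t1, g1) in zip(zip(times_min, glucose_mg_dl),
                                  zip(times_min[1:], glucose_mg_dl[1:])):
        auc += (g0 + g1) / 2.0 * (t1 - t0)
    return auc  # units: mg/dL x min

times = [0, 15, 30, 60, 90, 120]            # sampling times from the text
glucose = [110, 320, 280, 220, 180, 150]    # illustrative values
print("dose (g):", gtt_dose_g(42.0))
print("AUC (mg/dL x min):", auc_trapezoid(times, glucose))
```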
Indirect calorimetry
Mice underwent indirect calorimetry using a comprehensive laboratory animal monitoring system (CLAMS; Columbus Instruments, Columbus, OH, USA). Mice were singly housed and had ample access to HFD and water. In our experience, BTBR mice have difficulty using novel water lixits. As such, mice were additionally supplemented with HydroGel cups (Clear H 2 O #70-01-5022). Mice were allowed to habituate for 16-18 h and then various physiological and behavioral parameters (VO 2 , VCO 2 , RER, heat, and ambulation) were recorded at room temperature for 24 h. Mice were returned to their home cage after indirect calorimetry was performed.
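For orientation, the sketch below shows how RER and heat are typically derived from the gas-exchange parameters listed above. The heat formula is the Lusk-type equation commonly used by CLAMS software; treat its coefficients as an assumption rather than the exact configuration used in this study, and the VO2/VCO2 values are invented.

```python
# Sketch: deriving RER and heat from the gas-exchange measurements listed in
# the text (VO2, VCO2). Coefficients in the heat equation are a common
# CLAMS-style convention (assumption); input values are illustrative.

def rer(vo2_ml_per_hr: float, vco2_ml_per_hr: float) -> float:
    """Respiratory exchange ratio = VCO2 / VO2."""
    return vco2_ml_per_hr / vo2_ml_per_hr

def heat_kcal_per_hr(vo2_ml_per_hr: float, vco2_ml_per_hr: float) -> float:
    r = rer(vo2_ml_per_hr, vco2_ml_per_hr)
    vo2_l_per_hr = vo2_ml_per_hr / 1000.0
    return (3.815 + 1.232 * r) * vo2_l_per_hr

vo2, vco2 = 3200.0, 2750.0   # made-up hourly values (mL/hr)
print("RER:", round(rer(vo2, vco2), 3))
print("Heat (kcal/hr):", round(heat_kcal_per_hr(vo2, vco2), 3))
```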
Food intake
For the NCD experiment mice, weekly food intake was measured at the cage level and normalized to body weight and the number of mice per cage. Due to worries of food loss stemming from the physical consistency of HFD, the HFD mice were singly housed for 72 h. Food intake was measured every 24 h to provide three replicate measurements of daily intake per mouse. Measurements were normalized to body weight. HFD mice were returned to their home cages following food intake assessments.
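The two normalizations described above reduce to simple ratios; a minimal sketch with invented numbers is shown below.

```python
# Sketch: the two food-intake normalizations described in the text.
# NCD: weekly cage-level intake divided by the number of mice and by body weight.
# HFD: daily single-housed intake divided by body weight.
# All numbers below are illustrative, not study data.

def ncd_intake_per_g_bw(cage_intake_g: float, n_mice: int, mean_bw_g: float) -> float:
    per_mouse = cage_intake_g / n_mice
    return per_mouse / mean_bw_g          # g food per g body weight per week

def hfd_daily_intake_per_g_bw(daily_intakes_g, bw_g: float) -> float:
    mean_daily = sum(daily_intakes_g) / len(daily_intakes_g)
    return mean_daily / bw_g              # g food per g body weight per day

print(ncd_intake_per_g_bw(cage_intake_g=95.0, n_mice=4, mean_bw_g=38.0))
print(hfd_daily_intake_per_g_bw([2.6, 2.8, 2.5], bw_g=45.0))
```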
Serum and tissue collection
Mice were euthanized and tissues were collected at 21 weeks post-injection (NCD mice) and 16 weeks post-injection (HFD mice). Trunk blood was collected at 10:00 following a 4-h fast. Blood was allowed to clot on ice for at least 30 min before centrifugation at 10,000 rpm for 10 min at 4°C. The serum component was collected and stored at −20°C until further analysis. Tissues were either fixed as described below or flash frozen and stored at −80°C until further analysis. Fat depots were identified, collected as described elsewhere, 79 and normalized to body weight as measured 10 min prior to euthanasia.
Histology
At sacrifice, portions of liver and adipose tissues were fixed in 10% formalin (w/v) for 48-72 h and then dehydrated with 70% ethanol. Tissues were embedded in paraffin, sectioned, and H&E stained by the Comparative Pathology and Mouse Phenotyping and Histology/Immunohistochemistry (CPMPSR) core of The Ohio State University Comprehensive Cancer Center. Tissue sections were imaged at 20× magnification using an Olympus BX43 microscope with an Olympus SC30 color camera attachment and Olympus cellSens software.
Isolation of adipose SVF and FACS
Samples of gWAT were minced into small pieces in Krebs-Ringer HEPES buffer (pH 7.4). Collagenase (1 mg/mL, Sigma #C6885) was added and incubated for 40 min at 37°C with shaking. The mixture was centrifuged to separate the floating adipocytes from the SVF. The SVF pellet was treated with ammonium chloride solution to lyse the red blood cells, then washed and resuspended in FACS buffer. 70-μm strainers were used to obtain a single-cell suspension. SVF cells were stained with fluorescent-dye-conjugated antibodies for 20 min. The antibodies used for flow cytometry immunophenotyping are listed in Table S2. Cell events were acquired using an LSRII flow cytometer (BD Biosciences), and the results were analyzed using FlowJo v10 software (Tree Star).
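As a brief illustration of how gated event counts are converted into the population percentages reported in Figure 7, a minimal sketch follows. The gating hierarchy and counts below are hypothetical; only the marker definitions (CD11c, CD206) follow the text.

```python
# Sketch: turning gated event counts (e.g., exported from FlowJo) into
# percent-of-parent values. All counts below are hypothetical.

def percent_of_parent(child_events: int, parent_events: int) -> float:
    return 100.0 * child_events / parent_events

# Hypothetical ATM gate and its CD11c/CD206 subsets for one sample
atm_events = 12000
subsets = {"M1 (CD11c+ CD206-)": 2100,
           "M2 (CD11c- CD206+)": 5400,
           "double-positive (CD11c+ CD206+)": 1800}

for name, events in subsets.items():
    print(name, round(percent_of_parent(events, atm_events), 1), "% of ATMs")
```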
Statistical analysis
Data are expressed as mean ± SEM. GraphPad Prism 7 software (GraphPad, La Jolla, CA, USA) and SPSS Statistics v25.0.0.0 (IBM, Armonk, NY, USA) were used to analyze data. Student's t tests were performed for all data except time course data. Mixed-model ANOVAs were used to analyze time course data (weekly body weights, GTTs, ITTs, and indirect calorimetry time measurements). Results of the between-group analyses are reported in the associated figures for weekly body weights and indirect calorimetry time measurements. Results of the pairwise comparisons are reported in the GTT and ITT graphs. | 2020-12-31T09:05:44.038Z | 2020-12-25T00:00:00.000 | {
"year": 2020,
"sha1": "78b6cd42aec82c1b55c3f43e97733e47203ba630",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.omtm.2020.12.011",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c6016e701aab09e7939f5a13d6488ed8d142df86",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
19002937 | pes2o/s2orc | v3-fos-license | Flesh Shear Force, Cooking Loss, Muscle Antioxidant Status and Relative Expression of Signaling Molecules (Nrf2, Keap1, TOR, and CK2) and Their Target Genes in Young Grass Carp (Ctenopharyngodon idella) Muscle Fed with Graded Levels of Choline
Six groups of grass carp (average weight 266.9 ± 0.6 g) were fed diets containing 197, 385, 770, 1082, 1436 and 1795 mg choline/kg, for 8 weeks. Fish growth and muscle nutrient (protein, fat and amino acid) content of young grass carp were significantly improved by appropriate dietary choline. Furthermore, muscle hydroxyproline concentration, lactate content and shear force were improved by optimum dietary choline supplementation. However, the muscle pH value, cooking loss and cathepsin activities showed an opposite trend. Additionally, optimum dietary choline supplementation attenuated muscle oxidative damage in grass carp. The muscle antioxidant enzyme activities (except catalase and glutathione reductase, which did not change) and glutathione content were enhanced by optimum dietary choline supplementation. Muscle cooking loss was negatively correlated with antioxidant enzyme activities and glutathione content. At the gene level, transcripts of these antioxidant enzymes, as well as of the target of rapamycin, casein kinase 2 and NF-E2-related factor 2, were consistently up-regulated in fish muscle by suitable choline. However, suitable choline significantly decreased Kelch-like ECH-associated protein 1a (Keap1a) and Kelch-like ECH-associated protein 1b (Keap1b) mRNA levels in muscle. In conclusion, suitable dietary choline enhanced fish flesh quality, and the decreased cooking loss was due to the elevated antioxidant status that may be regulated by Nrf2 signaling.
Introduction
Fish meat, which could provide balanced amino acids, is an important protein source for humans [1]. Moreover, fish meat is rich in polyunsaturated fatty acids and vitamins, and is well accepted by consumers [2]. Currently, an inconvenient factor for consumer acceptance is fish flesh quality. Flesh quality deterioration may cause problems for the industry [3], which leads to tremendous economic losses for producers and poor consumption [4]. Cooking loss and firmness are technologically important characteristics of flesh quality [5]. Minimal weight loss after cooking and a firmness of flesh are desired by consumers; soft flesh leads to reduced acceptability [6]. Therefore, we have chosen cooking loss and firmness as focus points. Several studies demonstrated that fish flesh quality could be improved by dietary nutrients [7]. One study reported that vitamin E supplementation was effective at improving the flesh quality of rainbow trout [8]. Choline was discovered to be an essential vitamin for fish, which acts as an important methyl donor and component of acetylcholine [9]. Studies have shown that feeding a choline-deficient diet induced growth retardation and poor feed efficiency in juvenile Jian carp (Cyprinus carpio var. Jian) [10]. Fish growth primarily depends on the growth of muscle [11]. Craig and Gatlin [12] reported that dietary choline elevated muscle growth and muscle lipid content in juvenile red drum (Sciaenops ocellatus). Meanwhile, Mai et al. [13] showed that dietary choline increased muscle lipid content in juvenile cobia. However, there is no study on how dietary choline affects flesh quality in animals. Choline was recognized as a component of phospholipids that maintain cell membrane integrity [14]. Asghar et al. [15] reported that reduced phospholipid content increased the drip loss in pork muscle. Meanwhile, betaine is an important metabolite of choline [14]. Matthews et al. [16] showed that dietary betaine supplementation increased pH and decreased cooking loss and the firmness of pig muscle. A previous study found that dietary choline deficiency could significantly decrease vitamin E content in rat liver [17]. Moreover, the muscle choline concentration of juvenile cobia was significantly increased with dietary choline [13]. These observations indicated that dietary choline might have a positive effect on flesh quality, which warrants further investigation.
Firmness is an important flesh quality trait that influences acceptance of meat purchases in fish [18]. It was implied that the flesh firmness is associated with the collagen content in fish [19,20]. Johnston et al. showed that high collagen content contributed to muscle firmness in Atlantic salmon [21]. Meanwhile, Martinez et al. [22] reported that fish flesh firmness was negatively correlated with cathepsin activity in Atlantic salmon. However, less attention has been paid to the effects of dietary choline on flesh firmness through regulating muscle collagen content and cathepsin activity in animals. In rats, choline was found to increase liver vitamin C content [17], which improved collagen synthesis in cultured vascular smooth muscle cells [23], thereby improving muscle firmness. Moreover, in mice, choline significantly reduced lung interleukin-4 (IL-4) secretion [24], which down-regulated cathepsin B activity in mice macrophages [25]. The above studies indicated that choline may improve flesh firmness by increasing collagen content and decreasing cathepsin activity in fish muscle, which is worthy of more investigation.
Muscle pH is another important flesh quality parameter, and the development of flesh quality is positively correlated with post-mortem pH in fish muscle [26]. However, no studies have focused on the effects of choline on the pH of fish muscle. Betaine, a metabolic product of choline, increased the serum lactic acid content in newborn piglets [27]. Based on these studies, we speculate that dietary choline may influence the fish flesh pH, which warrants further investigation.
Water-holding capacity (WHC) is another key flesh quality characteristic that can affect the yield and quality of processed meat [28]. Several studies have demonstrated that meat WHC is associated with muscle oxidation damage. Oxidation damage significantly increased the loss of WHC in beef muscle [29]. A previous study in our laboratory demonstrated that optimum zinc supplementation improved grass carp muscle WHC through attenuating muscle oxidative damage [30]. However, no research has focused on the effects of choline on flesh WHC, and whether dietary choline can affect WHC by affecting the oxidation damage in animal muscle is unknown. A study has shown that dietary choline deprivation could increase oxidative damage of rat kidney [31]. These findings indicated that choline might improve flesh WHC through decreasing muscle oxidation damage in fish. In addition, the oxidative damage of fish was induced by reactive oxygen species (ROS) [32]. To scavenge ROS, fish have developed antioxidant systems that, in general, are composed of the non-enzymatic compound glutathione (GSH) and antioxidant enzymes such as superoxide dismutase (SOD), glutathione peroxidase (GPx) and glutathione S-transferases (GST) [33]. However, no studies have explored the effects of dietary choline on ROS clearing capacity and whether it can affect flesh WHC in animal muscle. It has been reported that choline can increase GPx activity in rat liver [34]. A previous study demonstrated that dietary choline decreased SOD, GPx, GR and GST activities in the head kidney and spleen of juvenile Jian carp [35]. These studies demonstrated that dietary choline may increase the activities of antioxidant enzymes to improve muscle WHC in fish. This possibility is worth investigating.
Antioxidant enzyme activities were partly dependent on antioxidant enzyme gene mRNA levels in mice liver [36], which are regulated by NF-E2-related factor 2 (Nrf2) in fish [37]. When exposed to oxidative stress, Nrf2 dissociates from Kelch-like ECH-associated protein 1 (Keap1), translocates to the nucleus and induces transcription of antioxidant enzyme genes in terrestrial animals [36]. A study reported that fish had two types of Keap1 (Keap1a and Keap1b) [37], and previous study demonstrated that dietary choline could modulate Nrf2 and Keap1a mRNA expression in the head kidney and spleen of juvenile Jian carp [35]. Furthermore, a study demonstrated that the target of rapamycin (TOR) could regulate Nrf2 expression in rat liver [38]. In mammals, mTOR has emerged as a critical nutritional and cellular energy checkpoint sensor and regulator of protein synthesis by enhancing the activities of positive regulators of translation factors [39]. Moreover, protein kinase casein kinase 2 (CK2) emerged as a ubiquitous cellular signaling molecule that regulates TOR in human glioblastoma cells [40]. However, little information has been produced on the effects of choline on the Nrf2 pathway in fish muscle. In rats, choline could significantly increase serum insulin content [41], which improved CK2 expression in adipocytes [42] and activated TOR signaling in rainbow trout [43]. These studies indicated that choline may regulate fish muscle antioxidant enzyme activities through modulating their gene expression, which may relate to the CK2-TOR-Nrf2 signaling pathway. This possibility warrants further investigation.
Grass carp (Ctenopharyngodon idella) is an important economic freshwater species that is widely cultured around the world [44]. The production of cultured grass carp in China was estimated to be 4.78 million tons in 2012, ranking second among domestically cultured freshwater fish [45]. The dietary choline requirement has been evaluated in fingerling grass carp [46], but the choline requirements of fish may vary with growth stage. In rainbow trout, based on weight gain, the choline requirement at 0.12 g initial weight (4000 mg/kg diet) [47] was higher than that at 3.2 g initial weight (714 mg/kg diet) [48]. In addition, the requirements of nutrients may vary with the different indicators. For example, the dietary myo-inositol (MI) requirement of juvenile Jian carp based on muscle protein carbonyl content was estimated to be 853.8 mg MI/kg diet [49], which was higher than that based on percent weight gain (518.0 mg MI/kg diet) [50]. Therefore, it is necessary to study the choline requirements of grass carp.
Our current research aimed to evaluate the influence of choline on fish growth and for the first time to explore the influence of choline on fish flesh quality. Furthermore, we measured the antioxidant enzyme gene mRNA levels, as well as the signaling molecules TOR, CK2, Keap1a, Keap1b and Nrf2 gene mRNA levels of fish muscle. The results from this experiment may provide partial explanation for choline-enhanced fish growth and flesh quality. The suitable choline requirements for young grass carp growth and flesh quality were also evaluated, which may be used in formulating commercial feeds for the intensive culture of grass carp.
Experimental diets and design
The formulation of the basal diet is presented in Table 1. In order to achieve the maximum growth of grass carp, the isonitrogenous basal diet contained 30% crude protein [51]. The basal diet was mixed with various amounts of choline chloride to provide graded levels of 0 (un-supplemented), 400, 750, 1100, 1450 and 1800 mg choline/kg diet. The final choline concentrations were 197, 385, 770, 1082, 1436 and 1795 mg choline/kg diet, as analyzed by the method of Venugopal [52]. After preparation, the diets were stored at −20°C as described by Wu et al. [10].
Feeding management
The procedures used in this study were approved by the Animal Care Advisory Committee of Sichuan Agricultural University. Before the experimental period, fish were acclimated to the experimental system for 4 weeks [53]. Then, fish with an average initial body weight of 266.9 ± 0.6 g were stocked in 18 experimental cages (1.4 × 1.4 × 1.4 m) with 30 fish per cage. In accordance with Tang et al. [54], each diet was randomly assigned to triplicate cages; fish were fed 4 times daily for eight weeks, and uneaten feed was collected. During the experimental period, the treatment groups were kept under a natural light cycle, dissolved oxygen was not less than 6.0 mg/L and pH was maintained at 7.0 ± 0.5 according to Tang et al. [54]. The water temperature was 26 ± 2°C.
Sample collection and analysis
At the end of the experiment, after being starved for 12 h, fish were weighed and anaesthetized in a benzocaine bath (50 mg/L) as described by Deng et al. [55]. After sacrifice, they were manually filleted, and the muscle samples were obtained from the left side, frozen in liquid N2, and then stored at -80°C until analysis as described by Salmerón et al. [56]. Meanwhile, muscle samples were obtained from the right side of the same fish for analysis of flesh quality parameters as reported by Wu et al. [19]. The muscle cooking loss was measured as described by Brinker and Reiter [5]. The hydroxyproline concentration was assayed by hydrolysis in hydrochloric acid as described in Wu et al. [30]. The method of Zhou et al. [57] was used to measure muscle moisture content (oven drying to a constant weight), and the protein (N-Kjeldahl × 6.25) and lipid contents (solvent extraction with petroleum ether) were analyzed as described by Geurden et al. [58]. Muscle amino acid composition was analyzed using high-performance liquid chromatography (HPLC) as reported by Gan et al. [59]. Lactate content was measured using enzymatic colorimetric analysis according to Hultmann et al. [60]. Additionally, the fluorimetric method of Li et al. [61] was used to measure the activity of cathepsins. The malondialdehyde (MDA) concentration was measured using the method of Alirezaei et al. [62] with the thiobarbituric acid reaction. Protein carbonyl (PC) concentration was evaluated by the method of Armenteros et al. [63] through the formation of protein hydrazones using 2,4-dinitrophenylhydrazine. The anti-hydroxyl radical (AHR) capacity was analyzed using the Fenton reaction as described in Jiang et al. [64]. The anti-superoxide anion (ASA) capacity was determined using the Superoxide Anion Free Radical Detection Kit: superoxide radicals were generated by the action of xanthine and xanthine oxidase; with an electron acceptor added, a coloration reaction is created using the Griess reagent, and the coloration degree is directly proportional to the quantity of superoxide anion in the reaction, as described by Jiang et al. [64]. The GSH content was determined by measuring the formation of 5-thio-2-nitrobenzoate (TNB) according to Tang et al. [54]. The activity of Cu/Zn-SOD was assayed by measuring the decrease in the rate of cytochrome c reduction in a xanthine-xanthine oxidase superoxide generating system [59]. The GPx and GST activities were determined by measuring the rate of NADPH oxidation and monitoring the formation of an adduct between GSH and 1-chloro-2,4-dinitrobenzene, respectively [49]. Catalase activity was determined by the decomposition of hydrogen peroxide according to Wu et al. [30]. GR activity was assayed as previously described by Wu et al. [30]. Total thiol-containing compound (T-SH) content was determined by the formation of 5-thio-2-nitrobenzoate followed by spectrophotometry at 412 nm. In addition, the muscle ROS content was measured using 2,7-dichlorodihydrofluorescein diacetate, which is oxidized to fluorescent dichlorofluorescein (DCF), as described in Rhee et al. [65].
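For the cooking-loss measurement, the weight-based percentage below is the conventional definition; the exact protocol in this study follows Brinker and Reiter [5], and the sample weights in the sketch are invented.

```python
# Sketch: cooking loss expressed as a percentage of the pre-cooking fillet
# weight. This is the conventional weight-based formula (an assumption about
# the exact protocol); the weights are illustrative.

def cooking_loss_percent(raw_weight_g: float, cooked_weight_g: float) -> float:
    return 100.0 * (raw_weight_g - cooked_weight_g) / raw_weight_g

print(cooking_loss_percent(raw_weight_g=20.00, cooked_weight_g=17.45))  # ~12.8%
```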
Real-time quantitative PCR analysis
Total RNA was isolated from grass carp muscle using an RNAiso Plus Kit and electrophoresed on a 1% denaturing agarose gel to test its integrity [59]. The RNA was then treated with DNase I to remove contaminating DNA, and the purified RNA was reverse transcribed to cDNA. Specific primers for the Cu/Zn-SOD, CAT, GPx, GR, GST, GCL, Nrf2, Keap1a, Keap1b, TOR and CK2 genes, as well as the appropriate annealing temperatures, are presented in Table 2. Based on results from our previous experiments (data not shown), we chose β-actin (the most stable of the reference genes analyzed) as the reference gene. The amplification efficiencies of the reference gene and target genes were approximately 100% [59]. The expression results were analyzed using the 2^(-ΔΔCT) method after verification, as described by Salmerón et al. [56].
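The 2^(-ΔΔCT) calculation reduces to a few arithmetic steps; a minimal sketch with β-actin as the reference gene (as in the text) is shown below. The Ct values are made-up illustrations, not study data.

```python
# Sketch of the 2^-ddCT (Livak) fold-change calculation, with beta-actin as
# the reference gene as stated in the text. Ct values below are illustrative.

def fold_change_ddct(ct_target_sample: float, ct_ref_sample: float,
                     ct_target_control: float, ct_ref_control: float) -> float:
    dct_sample = ct_target_sample - ct_ref_sample       # normalize to reference gene
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_sample - dct_control                      # relative to the control group
    return 2.0 ** (-ddct)

# Example: a target gene in a choline-supplemented group vs. the 197 mg/kg control
print(fold_change_ddct(ct_target_sample=22.1, ct_ref_sample=16.0,
                       ct_target_control=23.4, ct_ref_control=16.2))
```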
Calculations and statistical analysis
Statistical analyses were carried out using SPSS 18.0 (SPSS Inc., Chicago, IL, USA); all data are presented as the means ± SD and were analyzed by ANOVA [59]. Significant differences were considered at the P < 0.05 level, and Tukey's test was used to identify differences among experimental groups. We calculated the dietary choline requirement as described by Feng et al. [66].
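To illustrate the requirement estimation, the sketch below fits a quadratic curve to a response variable (e.g., PWG) against dietary choline and takes the vertex of the fitted parabola as the optimum, which is the general logic behind estimates such as 1136.5 mg/kg. The response values in the example are placeholders, not the study data.

```python
# Sketch of the quadratic-regression approach used to estimate the dietary
# choline requirement: fit response = a*x^2 + b*x + c and take the vertex
# (-b / 2a) as the optimum. Response values below are illustrative only.

import numpy as np

choline = np.array([197, 385, 770, 1082, 1436, 1795], dtype=float)  # mg/kg diet
pwg = np.array([120.0, 150.0, 178.0, 180.0, 168.0, 150.0])          # placeholder PWG values

a, b, c = np.polyfit(choline, pwg, deg=2)   # coefficients, highest power first
optimum = -b / (2.0 * a)                    # vertex of the parabola = estimated requirement
print("Estimated requirement (mg choline/kg diet):", round(optimum, 1))
```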
Growth performance, muscle protein, lipid and moisture content
As shown in Table 3, fish fed diets with 770 mg/kg and 1082 mg/kg choline had the highest final body weight (FBW) and percentage weight gain (PWG) (P < 0.05), followed by fish fed diets with 1436 mg/kg choline, and these values were lowest in fish fed diets with 197 mg/kg choline. The feed efficiency (FE) significantly increased when diets were supplemented with 770-1082 mg/kg choline (Table 3). Feed intake (FI) significantly increased with the increase of choline from 197 to 770 mg/kg diet and declined with further supplementation (P < 0.05) (Table 3). The optimum dietary choline requirement level was estimated at 1136.5 mg/kg by quadratic regression on the basis of PWG (Fig 1). As shown in Table 3, muscle composition was markedly affected by choline levels. Fish fed choline-supplemented diets had a higher content of protein and lipid but a lower content of moisture. The content of protein was observed to be the highest in fish fed the diet with 770 mg/kg choline, followed by fish fed diets with 1082 mg/kg and 1436 mg/kg choline, and was lowest in fish fed diets with 197 mg/kg choline. The lipid content of fish muscle was the highest in the 770 mg/kg diet and 1082 mg/kg diet, whereas the moisture content of fish muscle was lowest in the 770 mg/kg diet and the 1082 mg/kg diet.
Amino acid composition of muscle
As presented in Table 4, dietary choline significantly affected amino acid composition in fish muscle. Threonine, cysteine, methionine, leucine and lysine contents were highest in the groups fed the 770 mg/kg and 1082 mg/kg diets (P < 0.05). Glutamic acid content was decreased by supplementation with 385-1083 mg/kg dietary choline (P < 0.05). Alanine content was highest in the group fed 1795 mg/kg choline and lowest in the group fed 385 mg/kg dietary choline (P < 0.05). However, dietary choline did not impact muscle aspartic acid, serine, glycine, valine, tyrosine, arginine, phenylalanine or histidine contents (P > 0.05).
Flesh quality parameters
As presented in Table 5, the muscle cooking loss was higher in fish fed the choline deficient and excess diets, and the lowest in fish supplemented with 770 and 1082 mg/kg choline (P < 0.05). In contrast to cooking loss, muscle shear force was the highest in groups fed 770-1436 mg choline/kg diet (P < 0.05) ( Table 5). The hydroxyproline content was also improved by 770-1082 mg/kg choline supplementation (P < 0.05) ( Table 5). Cathepsin B and cathepsin L activities were the lowest with 770-1082 mg/kg choline supplementation (P < 0.05). With choline levels from 190 mg/kg to 770 mg/kg, the pH value decreased (P < 0.05), and the lowest pH value was found in the 770 mg/kg and 1082 mg/kg groups (P < 0.05) ( Table 5). The lactate content significantly increased with choline levels from 197 mg/kg to 770 mg/kg (P < 0.05) ( Table 5).
Muscle antioxidant parameters
The ROS, MDA, PC and GSH contents, as well as the activities of ASA, AHR, Cu/Zn-SOD, CAT, GPx, GST and GR, in fish muscle are presented in Table 6. The ROS content decreased
with 197-1082 mg/kg choline supplementation, and the lowest ROS was found in the 770 mg/ kg and 1082 mg/kg groups (P < 0.05). MDA and PC contents notably declined with choline supplementation from 197 mg/kg to 1082 mg/kg, and the lowest content was found in the 770 mg/kg and 1082 mg/kg groups (P < 0.05). However, fish fed diets with 770 mg/kg and 1082 mg/kg choline achieved the highest ASA and AHR capacities. The activity of Cu/Zn-SOD was the highest when choline level was 770 mg/kg or 1082 mg/kg (P < 0.05). The GPx activity had a similar pattern to Cu/Zn-SOD. The GST activity and GSH content were enhanced incrementally with choline supplementation from 197 mg/kg to 1082 mg/kg and then declined with further supplementation (P < 0.05). However, the activities of CAT and GR were not influenced by dietary choline (P > 0.05). As presented in Fig 2, quadratic regression analysis on muscle PC content estimated the dietary requirement for choline to be 1210.7 mg/kg diet for grass carp (266.5-787.1 g) under current experimental conditions.
Gene expression in fish muscle
As presented in Fig 3, with increasing choline up to 770 mg/kg, relative expression of Cu/Zn-SOD, GPx and GST genes in the muscle significantly increased, and reached a maximum in the 770 mg/kg and 1082 mg/kg groups (P < 0.05), but then declined with dietary choline greater than 1082 mg/kg (P < 0.05). With diets supplemented with 1082 mg/kg choline, fish had the highest GCL mRNA levels, followed by the 770 mg/kg and 1436 mg/kg groups, and then followed by the other diets (P < 0.05) (Fig 3). However, the relative expression of CAT and GR were not significantly different among these treatment groups (P > 0.05) (Fig 3). With increasing choline supplementation, relative expression of Nrf2 significantly increased, and reached a maximum in 770 mg/kg and 1082 mg choline/kg diet groups (P < 0.05). Relative expression of Keap1a and Keap1b had an opposite pattern compared to Nrf2 (P < 0.05) (Fig 4). With 197-770 mg/kg choline supplementation, relative expression of TOR significantly increased and reached a maximum in the 770 mg/kg and 1082 mg/kg groups, and then decreased with dietary choline levels higher than 1082 mg/kg (P < 0.05) (Fig 5). The CK2 mRNA expression was similar to TOR, and was the highest with 770 mg/kg and 1082 mg/kg choline supplementation (P < 0.05) (Fig 5).
Choline improved growth performance and muscle nutrient composition of fish
In our current study, grass carp fed diets with 385-1795 mg/kg choline had significantly higher growth performance, which suggested that suitable dietary choline could improve fish growth. On one hand, enhancement of growth performance in fish may be a consequence of feed intake. Our result showed that the optimum dietary choline enhanced FI, and correlation analysis showed the PWG of young grass carp was positively correlated with FI (r FI = +0.974, P < 0.01), indicating that choline stimulates feed intake thereby improving fish growth. On the other hand, another study showed that fish growth is an accurate and important tool in studying fish feed efficiency [67]. Meanwhile, when feed intake increases, a lower percentage of the feed intake is used for fueling basal metabolism and this improves feed efficiency in fish [58].
The present study has demonstrated that optimal dietary choline significantly improved FE, and correlation analysis showed that FE was positively correlated with FI (r = +0.809, P = 0.051), suggesting that choline-elevated FE may occur through stimulating FI in fish. In addition, the growth of fish is partly attributed to muscle nutrient deposition [66]. The main chemical components of fish muscle are water, protein, and lipids, which make up approximately 98% of the total mass of the flesh [68]. Additionally, the lipid content in muscle has been recognized as a determinant of flavor, juiciness and texture for terrestrial animals as well as fish, which impacts consumer perceptions [13]. Furthermore, an increase in feed intake stimulates lipid deposition in fish tissues, muscle included [58]. Our current study also demonstrated that optimal dietary choline could improve the muscle fat content of young grass carp, which was higher in the groups fed the 770 mg/kg diet and the 1082 mg/kg diet. As stated above, our data together corroborate the growth-promoting effects of choline, caused primarily by higher feed intake. Moreover, choline-supplemented diets significantly improved protein accretion in the juvenile shrimp Penaeus monodon [69]. The present study demonstrated that optimal dietary choline significantly improved muscle protein content, as well as increased the muscle threonine, methionine, cysteine, leucine and lysine contents (Table 4). In addition to the nutrient composition, the firmness, pH value and WHC are important flesh quality characteristics [70]. Thus, we next assayed the effects of dietary choline on the firmness, pH value and WHC of young grass carp muscle.
4.2. Choline improved flesh quality of fish
4.2.1. Choline improved flesh firmness of fish. Flesh firmness is an important flesh quality parameter. A decline in flesh firmness makes fish meat unappealing to consumers [70]. Shear force is a reliable biomarker that represents the flesh firmness of fish [71]. In the current study, the shear force of muscle markedly increased with dietary choline from 197 to 1082 mg/kg and decreased thereafter in young grass carp (Table 5). Another study indicated that flesh firmness was positively correlated with the collagen content of muscle in Atlantic salmon [19]. In addition, the muscle collagen content could be quantified by the hydroxyproline concentration [30]. Meanwhile, Engel and Bächinger [72] reported that hydroxyproline had a positive effect on collagen stability. As shown in Table 5, supplementation with 385-1436 mg/kg choline markedly improved the hydroxyproline content in fish muscle. There is a positive correlation between muscle shear force and hydroxyproline concentration (r = +0.974, P < 0.01), indicating that choline-enhanced flesh firmness may be partly due to increased collagen concentration. Additionally, the enhanced flesh firmness in this study may be because choline decreased cathepsin activity in fish muscle. Cathepsins (such as cathepsin B and L, two important proteolytic enzymes) are one of the enzymatic system components involved in fish muscle degradation [73]. Elevated cathepsin activity results in faster muscle degradation in Atlantic salmon [22]. Fish muscle degradation could reduce the firmness of the fillet in rainbow trout [20]. In the current study, optimum dietary choline supplementation decreased cathepsin B and L activities (Table 5). Correlation analysis indicated that muscle shear force was negatively related to cathepsin B (r = -0.964, P < 0.01) and L (r = -0.972, P < 0.01) activities, suggesting that the increase in flesh firmness may be partly attributable to the decrease in cathepsin B and L activities with optimum dietary choline. In conclusion, optimal dietary choline could increase fish flesh firmness, possibly through increasing collagen content and inhibiting cathepsin B and L activities. In addition, the development of muscle firmness is likely to be influenced by muscle pH in fish [74]. Thus, we next investigated the pH of the fish muscle.
4.2.2. Choline decreased flesh pH value of fish. Post-mortem muscle pH is another important flesh quality parameter in fish [75]. A high pH value makes the fish flesh more sensitive to spoilage and decreases shelf-life [26]. It was reported that deterioration of Atlantic cod flesh quality is partly due to proteolysis of muscle protein, and proteolytic activities were significantly lower at a pH of 6.0 [76]. Our study showed that the grass carp post-mortem muscle pH was higher in the choline-deficient group (6.18) and choline-excess group (6.19) and significantly decreased with dietary choline levels from 385 mg/kg to 1082 mg/kg (6.06) (Table 5), which indicated that appropriate dietary choline decreased the fish muscle pH to prevent flesh quality deterioration in fish. The decrease in muscle pH with optimal choline may be attributable to increased anaerobic metabolism-induced lactate production in fish muscle. Another study has demonstrated that lactate accumulation in Atlantic salmon post-mortem muscle resulted from anaerobic metabolism, which induced a pH decline [77]. In our current study, the lactate content was increased with appropriate dietary choline supplementation in fish muscle (Table 5). Additionally, there was a negative correlation between the pH and lactate concentration (r = -0.995, P < 0.01), which suggested that optimal dietary choline might decrease the pH value partly through increasing lactate content in fish muscle. Apart from firmness and pH value, water-holding capacity (WHC) is another key flesh quality characteristic that affects consumers' perception [28]. It was reported that the flesh WHC was correlated to antioxidant status in fish muscle [30]. Due to this possibility, we next explored how muscle WHC and antioxidant status varied with choline levels.
4.2.3. Choline improves fish flesh WHC and is partly attributable to elevated muscle antioxidant status. Water-holding capacity (WHC) is a key flesh quality characteristic, which can be evaluated by cooking loss [78]. Decreased cooking loss was observed from elevating the WHC of cod (Gadus morhua) muscle [78]. In this study, we first demonstrated that optimal dietary choline significantly decreased muscle cooking loss, which indicated that optimal choline could improve flesh WHC (Table 5). Another study showed that the improvement in muscle WHC could be attributed to protection of muscle structural integrity, which resulted from decreasing oxidative damage in chicks [79]. In fish, PC and MDA contents are widely used as indices for protein and lipid oxidation damage, respectively [80]. However, there is no information about how dietary choline affects fish muscle oxidation damage. In our current research, optimal choline supplementation significantly reduced the MDA and PC content of young grass carp muscle (Table 6). There is a positive correlation between muscle cooking loss and the MDA and PC contents (r MDA = +0.993, P < 0.01; r PC = +0.768, P = 0.075), indicating that optimal choline improved WHC, which may be partly due to reduced muscle oxidation damage in fish muscle. Oxidation damage is caused by ROS, and the hydroxyl radical (OH·) and superoxide anion (O2·−) are two toxic ROS in fish [33]. In this study, optimal dietary choline significantly decreased muscle ROS content, and increased ASA and AHR activities (Table 6). There was a negative correlation between muscle ROS content and ASA and AHR activities (r ASA = -0.961, P < 0.01; r AHR = -0.900, P < 0.05). Our data indicated that appropriate choline could increase the O2·−-clearing capacity and OH·-clearing capacity in fish muscle. GSH is an effective non-enzymatic antioxidant that neutralizes ROS [33]. In the present study, optimal choline significantly elevated GSH content (Table 6) and decreased cooking loss (Table 5) in the fish muscle. There was a negative correlation between muscle cooking loss and GSH content (r GSH = -0.980, P < 0.01), indicating that optimal dietary choline significantly improved flesh WHC, perhaps because of enhanced GSH content in fish muscle. However, there has been no study conducted to research how choline impacts fish muscle GSH content. Another study in fish demonstrated that muscle GSH content was positively correlated with GR activity [81]. However, in our present study, GR activity was not affected by dietary choline (Table 6). In addition, increased GSH content was attributed to the synthesis of new glutathione molecules [82]. It has been reported that glutamate-cysteine ligase (GCL) is a key enzyme involved in GSH synthesis [71]. In our study, optimal choline supplementation increased the mRNA levels of GCL (Fig 3). Moreover, muscle GSH content was significantly correlated with GCL expression (r GCL = +0.944, P < 0.05). The data implied that choline increased GSH content partly by increasing GCL gene expression in fish muscle. In addition, ROS can be eliminated by antioxidant enzymes (such as SOD, GPx and GST) in fish [33]. However, there is little information about how dietary choline affects antioxidant enzyme activity in fish muscle. In our current study, optimal dietary choline significantly increased Cu/Zn-SOD, GPx and GST activities (Table 6). The improvement of antioxidant enzyme activities by choline was perhaps because choline can elevate muscle methionine content.
In a study in rats, methionine proved to improve kidney SOD, GPx and GST activities [83]. In rat liver, choline re-methylated homocysteine to methionine [84]. In the present study, optimal dietary choline significantly improved muscle methionine content (Table 4). However, dietary choline failed to influence the activity of CAT in fish muscle (Table 6). The insignificant change in the CAT activity may be attributed to the increase in other antioxidant enzymes, such as GPx [85], because H2O2 is eliminated by CAT and GPx also participates in the reduction of H2O2 [82]. There was a negative correlation between muscle cooking loss and antioxidant enzyme activities (r Cu/Zn-SOD = -0.950, P < 0.01; r GPx = -0.974, P < 0.01; r GST = -0.929, P < 0.01) in young grass carp. Therefore, we conjectured that choline-enhanced muscle WHC may be due to choline up-regulating Cu/Zn-SOD, GPx and GST activities. Therefore, our current study demonstrated that the enhancement of flesh WHC by choline was partly attributed to the improvement in antioxidant status. In addition, the improvement in antioxidant status was positively correlated to the expression of antioxidant enzyme genes [55].
Choline regulated antioxidant enzyme gene expression: a link to Nrf2 signaling pathways in fish muscle
In the current study, optimal choline supplementation significantly elevated the mRNA levels of Cu/Zn-SOD, GPx and GST in fish muscle (Fig 3). Correlation analysis showed that Cu/ Zn-SOD, GPx and GST activities were positively related to their respective mRNA levels (r Cu/Zn-SOD = +0.918, P = 0.01; r GPx = +0.879, P < 0.05; r GST = +0.972, P < 0.01). These data suggested that suitable choline elevated antioxidant enzyme activities partly due to up-regulating their gene expression in fish muscle.
Nrf2 has been demonstrated to be a critical transcription factor that regulates antioxidant enzyme gene expression [37]. Chen et al. [36] reported that the up-regulation of Nrf2 expression increased SOD and GPx mRNA expression in mouse liver. In the present study, optimal dietary choline significantly up-regulated muscle Nrf2 mRNA levels (Fig 4). Correlation analysis showed that the expression of Cu/Zn-SOD, GPx and GST were positively correlated with the gene expression level of Nrf2 (r Cu/Zn-SOD = +0.953, P < 0.01; r GPx = +0.904, P < 0.05; r GST = +0.978, P < 0.01), implying that choline up-regulated the expression level of Cu/Zn-SOD, GPx and GST partly by increasing the Nrf2 gene expression in fish muscle. In addition, another study demonstrated that the promotion of Nrf2 nuclear translocation could elevate the expression of antioxidant genes in mice liver [86]. Keap1 was identified as an Nrf2-binding protein, which depresses Nrf2 translocation to the nucleus [87]. Moreover, another study reported that fish had two types of Keap1, Keap1a and Keap1b [37]. In the current study, the mRNA levels of Keap1 a and Keap1b in fish muscle were down-regulated by optimal choline levels (Fig 4), suggesting that optimal choline may have increased the Nrf2 activity to up-regulate Cu/Zn-SOD, GPx and GST gene mRNA levels through down-regulating both Keap1a and Keap1b mRNA level in fish muscle.
Moreover, Nrf2 expression could be activated by TOR [88]. A previous study demonstrated that elevated TOR expression could up-regulate Nrf2 expression in endothelial cells [89]. In our present study, optimal choline up-regulated TOR mRNA levels in fish muscle (Fig 5). There was a positive correlation between the expression of Nrf2 and the expression of TOR in young grass carp muscle (r = +0.997, P < 0.01), suggesting that choline up-regulating Nrf2 mRNA levels may be partly through elevating the mRNA levels of TOR in fish muscle. Moreover, choline-enhanced TOR mRNA levels may be partly due to the up-regulation of CK2. In vitro, the up-regulation of CK2 caused the up-regulation of the expression of TOR in human glioblastoma cells [40]. The data presented here showed that optimal dietary choline significantly improved CK2 mRNA levels in fish muscle (Fig 5). Correlation analysis showed that the relative gene expression of TOR was significantly correlated with CK2 mRNA expression (r = +0.961, P < 0.01), showing that choline up-regulating TOR mRNA expression may be partly through up-regulating CK2 mRNA expression in fish muscle. For the above-mentioned results, it is apparent that optimal choline up-regulated TOR expression to enhance Nrf2 gene expression through increasing the expression of CK2 mRNA in fish muscle. However, the exact mechanisms through which choline regulates Nrf2-related signaling molecules remains largely unknown and needs additional investigation.
Differential effect of dietary choline on antioxidant status among muscle and immune organs in fish
In our current study, optimal dietary choline significantly increased ASA, AHR, Cu/Zn-SOD, GPx and GST activities and glutathione content in fish muscle (Table 6). However, our previous study in juvenile Jian carp found that dietary choline could decrease antioxidant enzyme activities and glutathione content in the spleen and head kidney [35]. Those observations indicated that the regulatory effects of choline on muscle antioxidant status were different from that in immune organs. The reasons for the discrepancies remain unknown but might be attributable to three reasons. First, lipid content of the spleen (4.05±0.14) and head kidney (3.67±0.17) are higher than muscle (2.59±0.13) in young grass carp, which indicated that fish spleen and head kidney is more susceptible to oxidative damage than muscle. Second, a study in rabbits demonstrated that the kidney was one of the most important organs involved in choline metabolism and was the organ primarily affected in choline deficiency relative to muscle [90]. Third, choline deficiency could induce production of ROS in rat kidney [31]. A study demonstrated that ROS was adaptive to up-regulation of antioxidant enzyme activities [91]. Therefore, as important functioning organs, head kidney and spleen are crucial organs for the survival of the fish and may be able to compensate to improve antioxidant enzyme activities when dietary choline is deficient.
Choline requirements of young grass carp
The above data clearly demonstrate that optimum dietary choline can improve fish growth and flesh quality. The dietary choline requirements of young grass carp (266.5-787.1 g) based on PWG and muscle PC content were established to be 1136.5 mg choline/kg and 1210.7 mg choline/kg diet, respectively. These results indicate that the choline requirement of fish for antioxidant capacity (muscle PC content) is higher than that for growth (PWG). Similar results for other nutrients, such as myo-inositol, have been reported in juvenile Jian carp [49].
Conclusions
In summary, the present study showed that optimal dietary choline elevated flesh firmness, which may be related to collagen content and cathepsin activities. Moreover, optimal dietary choline reduced pH value partly by increasing lactate concentration in fish muscle. Optimal dietary choline improved fish flesh WHC partly through enhancing muscle antioxidant status by a) increasing muscle GSH content, which might be partly due to up-regulation of GCL gene expression, and b) up-regulating Cu/Zn-SOD, GPx and GST activities, which might be partly ascribed to up-regulation of their gene expression. Moreover, this gene expression may be regulated by several signaling molecules (Nrf2, Keap1a, Keap1b, CK2 and TOR) that are involved in the Nrf2 signaling pathway. Interestingly, the effect of choline on antioxidant status in fish muscle differed from that previously observed in immune organs [35]. Together, these results provide a partial mechanism for the positive effect of dietary choline on flesh quality, although the exact mechanisms of choline's effects require further investigation. In addition, the dietary choline requirements of young grass carp (266.5-787.1 g) based on PWG and muscle PC content were 1136.5 mg/kg and 1210.7 mg/kg diet, respectively.
Author Contributions
Conceived and designed the experiments: HFZ LF XQZ. Performed the experiments: HFZ WDJ YL JJ. Analyzed the data: HFZ WDJ PW JZ. Contributed reagents/materials/analysis tools: LT SYK WNT YAZ. Wrote the paper: HFZ WDJ XQZ PW. Assisted with the gene expression work: YL.
"year": 2015,
"sha1": "28aac48d76196dcb2643b9cb6d818864434f91aa",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0142915&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "28aac48d76196dcb2643b9cb6d818864434f91aa",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Eradication Of Corruption By Tracing Money Laundering As An Integral Legal System That Cannot Be Separated
ABSTRACT
INTRODUCTION
The legal system has specific goals and objectives. These goals and objectives may concern persons who actually act against the law, the legal acts themselves, and even the instruments or state apparatus that enforce the law.
The legal system has a certain mechanism that ensures the implementation of its rules in a fair, definite and firm manner, and that benefits the realization of public peace and order. The working of this system is a form of law enforcement (Ariman Sitompul, P. Hasibuan, M. Sahnan, 2021).
Corruption crimes are closely related to money laundering: today, money laundering is very often carried out with money obtained from corruption crimes. The practice of money laundering is one way to disguise or conceal the proceeds of corruption. Money laundering is then used as a shield for the proceeds of corruption (Ariman Sitompul, 2020).
In addition to taking away social and economic rights, which is certainly very detrimental to the community, the authorities also find it very difficult to track the proceeds of corruption, because corruptors often resort to money laundering. Money laundering is often done by entering the proceeds of crime into the financial system. The crime of money laundering is a special form of criminal act connected with various kinds of crimes. Money laundering is considered a follow-up crime: an attempt by the perpetrator to disguise the results of a crime committed previously in order to enjoy those results without being tracked, including the proceeds of corruption (Kusbianto, 2022).
The Indonesian government has not remained silent about its reputation as a country that is not serious in handling the problem of money laundering. To that end, the government has tried to deal with the problem by issuing the TPPU law, which should have been the moment for the Indonesian government to suppress TPPU problems, namely by forming a Financial Transaction Reporting and Analysis Center (PPATK), whose task is to collect and process information related to suspicions or indications of money laundering. Problems arise in law enforcement when someone becomes a TPPU suspect: it must first be proven, by tracing the origin of the assets, that a predicate crime such as embezzlement, corruption or bribery has occurred. Law enforcement in TPPU cases whose initial crime is corruption, as carried out by the officers of the criminal justice system, cannot be said to be optimal.
The reason reverse proof is difficult to apply in enforcing the TPPU law when the initial crime is corruption is that it contradicts principles of Indonesian criminal law, starting with the presumption of innocence as contained in Article 8 paragraph (1) of Law Number 48 of 2009 on Judicial Power, and the principle against self-incrimination as contained in Article 66 of the Criminal Procedure Code, which states that "suspects or defendants are not burdened with the obligation of proof", as well as various international human rights conventions ratified by Indonesia. The reverse process of proof thus reduces the protection of the defendant's rights in court, and the concept also contradicts or overlaps with other laws and regulations, such as Article 37A of the corruption law, which states that: 1) the defendant is obliged to provide information about all his property and the property of his wife or husband, children, and of any person or corporation suspected of having a relationship with the case; 2) in the event that the defendant cannot prove that his wealth is balanced with his income or the source of his additional wealth, the information referred to in paragraph (1) is used to strengthen the existing evidence that the defendant has committed corruption. Based on this background, the problem addressed in this study is how the eradication of corruption and money laundering are always related as one unified legal system that cannot be separated.
METHOD
This study employs normative legal methodology, using both primary and secondary legal resources. Legal materials were gathered through the study of literature. Normative research begins with das sollen (law on paper) and ends with das sein (law in action). This research is classified as normative legal research based on a literature review, i.e., a review of secondary sources only. It is said to be normative because the law is assumed to be an autonomous entity whose enforceability is determined by the law itself and not by external factors. The research employs the statute and conceptual approaches. Primary legal material is authoritative legal material in the form of laws and regulations relevant to this paper's discussion (Ariman Sitompul, 2022).
RESULTS AND DISCUSSION
In Indonesian positive law, the reverse proof system is adopted in two laws and regulations: Law No. 31 of 1999 on the eradication of corruption, as amended by Law No. 20 of 2001 (the corruption law), and Law No. 8 of 2010 on the Prevention and Eradication of money laundering (the TPPU law).
In accordance with the government's initial idea, the limited and balanced reverse proof in the corruption law can only be applied to two objects of proof (Nurasia Tanjung, 2016), namely corruption in the form of bribes or receiving gratification with a value of Rp 10,000,000.00 (ten million rupiah) or more (Article 12B paragraph (1) letter a and Article 37). In the TPPU law, a distinction is drawn between: 1. active money laundering (Articles 3 and 4): Article 3 uses active formulations such as exchanging for currency or securities, so money laundering as referred to in Article 3 is, in the money laundering literature, classified as active money laundering; Article 4 uses the phrases "hide" and "disguise", which are likewise active formulations, so money laundering as referred to in Article 4 is also classified as active money laundering (R. Wiyono, 2014); and 2. passive money laundering (Article 5): Article 5 uses the phrases "receive" and "master", which are passive formulations, so money laundering as referred to in Article 5 is classified in the literature as passive money laundering. The TPPU law does place a burden of proof on the defendant, but the framers of the law did not provide a comprehensive explanation of how the reverse arrangement of evidence works. Unfortunately, Articles 77 and 78 of the TPPU law regulate neither the procedure to be followed nor the consequences of the reverse proof. The law should strictly regulate the consequences of the reverse proof carried out by the defendant.
In the TPPU law, Article 77 is the opening article on the reverse proof provisions. Article 77 states: "For the purpose of the examination at the court hearing, the defendant is obliged to prove that his property is not the proceeds of a criminal offense".
The sentence in this article is the same as in the previous law, and under this provision the judge can also order the defendant or legal counsel to prove that the property owned by the defendant is not related to the criminal offense charged by the public prosecutor. This article is related to Article 78 of the TPPU law, which governs how the defendant or his legal counsel proves the origin of the defendant's property. Article 78 is divided into two paragraphs, which state that: 1. in the examination at the court session as referred to in Article 77, the judge orders the defendant to prove that the property related to the case is not derived from or related to the criminal act referred to in Article 2 paragraph (1); 2. the defendant proves that the property related to the case is not derived from or related to the criminal act referred to in Article 2 paragraph (1) by submitting sufficient evidence. This is related to the provisions on evidence in Article 73 of the TPPU law, which explicitly lists the forms of valid evidence in proving money laundering crimes; in accordance with the initial concept of reverse proof, the defendant or legal counsel, in proving in reverse that his wealth is not related to criminal acts, also uses evidence in accordance with Article 73 of the TPPU law.
Reverse proof of the origin of assets not reasonably owned by the defendant can be done with minimal intrusion on the defendant's human rights if the public prosecutor first proves the case concerning the defendant's property, followed by the defendant proving his property. Proof of the defendant's property is an obligation contained in the law, not a right that may or may not be exercised (Silva Da Rosa, 2018).
Reverse proof is the obligation of the defendant in money laundering to prove that the property he owns does not come from a criminal offense as referred to in Article 2 paragraph (1). The legal basis for this reverse proof is regulated in Articles 77 and 78 of the TPPU law. Article 77 states that, for the purpose of examination at a court hearing, the defendant is obliged to prove that his property is not the proceeds of a criminal offense. The reverse proof system in Articles 77 and 78 is for the purpose of examination in court hearings; therefore, reverse proof can only be applied at the time of examination at a court hearing.
The concept of reverse proof in TPPU is one of limited and balanced reverse proof. "Limited" means that the reverse proof is restricted to specific criminal offenses, while "balanced" means that the public prosecutor remains obliged to prove his charges (Lilik Mulyadi, 2018).
There are two possibilities, depending on whether the defendant can prove that the property he owns is not derived from a criminal offense. If the defendant cannot prove this, it can serve as an indication for the judge that the defendant's property is derived from, or is the result of, a criminal offense. Conversely, if the defendant can prove that his property does not come from the proceeds of a criminal offense, the prosecutor does not lose the right to prove the contrary: the prosecutor who brought the charges must still equip himself with evidence to prove them. In conditions where the defendant proves that he is innocent while the prosecutor proves that he is guilty, the assessment of the evidence at trial lies with the judge. Thus, in practice, reverse proof must be applied in the process of proving money laundering, including its predicate criminal offense (Nasir Sitompul, 2022).
Article 69 of the TPPU law states: "to be able to conduct investigations, prosecutions, and examinations in court hearings against money laundering crimes, it is not necessary to prove the origin of the criminal offense first". Based on this article, to investigate, prosecute and examine money laundering cases, the original crime does not have to be proven in advance.
To encourage a fair and targeted reverse proof process, both the investigator and the public prosecutor must coordinate with the Financial Transaction Reporting and Analysis Center (PPATK) to conduct a thorough tracing of the assets owned by the defendant. This process is carried out to prevent indiscriminate "blind confiscation" of the defendant's entire property. It is also undeniable that not all assets belonging to the defendant come from or are related to criminal acts, so in a fair and proper enforcement process the investigators and related agencies must be careful and thorough in separating assets resulting from criminal acts from assets that are not related to criminal acts (Ariman Sitompul, 2020).
Limited and balanced reverse proof does not provide much relief for the prosecutor: the prosecutor must still prepare evidence to strengthen the indictment of money laundering, and the public prosecutor is also obliged to prove that the defendant's property is the result of a criminal offense. The concept of reverse proof can even be used as a loophole by the defendant or legal counsel to attack the evidence presented by the public prosecutor. Without thorough preparation of evidence in the investigation process, the reverse proof process can backfire on the public prosecutor himself, because the defendant or his legal counsel can introduce new evidence that has not previously been verified with the public prosecutor. Therefore, it is also necessary to improve the professionalism and competence of law enforcement bodies, be it the National Police, the Attorney General, BNN, KPK, the Directorate General of Customs or the Directorate General of Taxes, so that the concept of reverse proof in the TPPU law can operate effectively and efficiently.
There is sufficient reason to open a money laundering investigation against someone suspected of corruption if, in the process of investigating the corruption, preliminary evidence of the alleged origin of money from corruption is obtained. For example, for perpetrators who are civil servants or state administrators obliged to report their assets as referred to in Article 5 of Law No. 28 of 1999, the data in the state administrator's asset report (LHKPN) submitted to the KPK can be used as a basis. If investigators find other wealth outside the data reported in the LHKPN, so that the person's lifestyle deviates far from his profile as a civil servant or state administrator, and especially if his wealth is held in someone else's name, then this fact is sufficient initial evidence to suspect the state administrator of corruption followed by money laundering. This has been the case since the implementation of Law No. 15 of 2002, and to this day many cases have been decided on this basis.
There is ample jurisprudence that investigating, prosecuting and examining money laundering cases does not require the original crime to be proven in advance. The provisions of Article 69 of the TPPU law were submitted for material review at the Constitutional Court by Akil Mochtar, the former chairman of the Constitutional Court, and the application was rejected. In other countries, such as the Netherlands, the United States and Australia, investigating, prosecuting and examining money laundering cases likewise does not require the crime of origin to be proven in advance; what matters is that a predicate criminal act must exist (Nasir Sitompul, 2023).
The application of a limited and balanced reverse proof system does not provide much relief for the prosecutor, because under this concept the prosecutor must still prepare evidence to strengthen the charge of money laundering and is also obliged to prove that the defendant's property is the result of a criminal offense. Technically, in money laundering crimes whose predicate crime is corruption, the public prosecutor currently applies the reverse proof system by first proving the money laundering charges; it is then the defendant's turn to prove that his property is not related to or derived from the crime as charged. Therefore, the indictment is usually drawn up in a combined (cumulative) form covering both the predicate offense and the money laundering offense, because the sequence of events must be explained from the predicate crime through to the money laundering.
After the examination of the witnesses, including expert witnesses, and of the defendant, and with reference to Article 78 paragraph (1) of the TPPU law, in the Jiwasraya case the panel of judges asked questions about the origin of the defendant's seized property. If during the trial the defendant can prove that his property is not the proceeds of a criminal offense, the defendant must be released from all claims; but if the defendant cannot explain and prove that the origin of the property is not the proceeds of a criminal offense, the property must be confiscated for the state. In the Jiwasraya case, however, the defendant Benny Tjokrosaputro could not prove the origin of his wealth, so proving the other elements of the actus reus of money laundering, such as placing, transferring and spending, and the element of the aim of hiding and disguising, remained the obligation of the public prosecutor, as set out in the indictment, in accordance with the applicable standard of proof.
There are three main factors that hinder the eradication of corruption through the prosecution of money laundering as an integral part of law enforcement:
A. The Rules of Evidence in the TPPU Law Are Not Clear
In principle, the reverse proof system in TPPU cases whose predicate crime is corruption is a procedure to assist the prosecutor in presenting evidence at trial. This proof system does not belong to the realm of legal substance or material law, but is formal, i.e., part of procedural law. Although it falls within procedural law, the framers of the TPPU law did not provide a comprehensive explanation of the reverse proof arrangement. Articles 77 and 78 of the TPPU law regulate neither the procedure to be followed nor the consequences of the reverse proof, not even in the elucidation of the articles. In the future, the TPPU law should firmly regulate the consequences of the reverse proof carried out by the defendant (Mokhammad Najih and Soimin, 2014).
This situation means that the application of reverse proof cannot run properly and measurably, because the TPPU law does not regulate the details that an ideal procedural law should contain: who has the right to request the application of this form of proof, who has the right to invoke it in a corruption trial, whether there are special evidentiary instruments intended for it, when is the right time to apply this reverse burden of proof, and various other questions. None of these questions can be answered, because no provision of the TPPU law regulates the reversed burden of proof clearly.
As a result, TPPU enforcement still uses the conventional burden of proof that generally applies under the Criminal Procedure Code, as in the examination of the Jiwasraya case. Because the rules are not enforced in detail, this form of proof can be exploited by the legal advisory team so that the procedure is not carried out, since vagueness of procedure will lead people into error, consciously or not.
However, it seems less effective to leave the procedural law of the reverse burden of proof system to judicial decisions, because a judge's decision cannot regulate procedural law comprehensively, while the reverse burden of proof must be set out in detail and clearly to make it easier to apply. For this reason, the author prefers that the procedure be regulated by statute.
B. Unbalanced Burden Of Proof
In practice it cannot be denied that not everyone understands the meaning of the reversed burden of proof, not even law enforcement officers themselves. This reverse burden of proof is treated as mere discourse or as a legal accessory, a second rather than a primary choice. Yet in countries that adhere to the continental European (civil law) legal system, such as Indonesia, the law must be applied as written, so if the TPPU law establishes the reversed burden of proof as a system of proof, it must be implemented in practice in the field and not treated merely as a second or last resort (Zainal Arifin Hoesein, 2014). The weakness of the limited and balanced proof system is the potential for rebuttal by the defendant. In the corruption case based on harm to state finances (the Jiwasraya case), the defendant Benny Tjokrosaputro denied and evaded the public prosecutor's indictment by saying that it was untrue and that he had never committed corruption harming state finances in any form. This defense meant that the reverse burden of proof could not operate effectively, because the defendant's statement was merely evasive rather than a detailed and clear proof of innocence, so the judge examining the Jiwasraya case could not impose the reverse burden of proof fully on the defendant. Therefore, in the future, the application of the reverse proof system in money laundering cases originating from corruption must be regulated firmly and specifically (Nurhayani, 2015).
C. The Judicial Mafia
Karl Marx, as a critical philosopher, once advanced the theory that the law actually serves the interests of those who hold power. This cannot be separated from his critical stance toward the many owners of capital who, in his time, acted arbitrarily against workers in the name of the law. Through law, certain economic classes exploit the classes below them so that their interests are always accommodated and never hampered. Marx's criticism was continued by contemporary Marxians, who developed instrumentalist theory. This theory holds that the law is in fact a tool of domination, a tool of oppression and a cause of suffering (Bernard, 2019).
As explained above, in the TPPU law the reverse proof arrangement is regulated in Articles 77 and 78. These provisions essentially regulate the defendant's obligation to prove that his property is not the proceeds of a criminal offense. As for the procedure, in the examination at the court hearing the judge orders the defendant to prove that the property related to the case is not derived from or related to a criminal offense, which the defendant does by submitting sufficient evidence. The application of the reverse proof system in money laundering cases as stipulated in Articles 77 and 78 of the TPPU law is set out without legal consequences if the reverse proof is not applied. This is one of the barriers to its implementation and the reason why reverse proof in money laundering cases whose predicate crime is corruption has never been optimally applied. In the future, the TPPU law should firmly regulate the consequences if reverse proof is not applied.
Money laundering is a new type of crime in international criminal law and in Indonesian criminal law. Although new, the enforcement process against money laundering is directly related to national economic policy and can have a wide impact on a country's financial and banking balance.
Money laundering has generally been classified as a white-collar crime, and is considered an extraordinary crime or even a serious crime, because it has a different modus operandi and is more dangerous than the conventional crimes known in Indonesian criminal law (Munir Fuady, 2011). Money laundering has a very detrimental impact on the economy, finance, society, and security; and because its scope is cross-border, it is considered a transnational crime that has become a world phenomenon and an international challenge (Roberts Kennedy, 2017).
In relation to the politics of anti-money-laundering law, the government of Indonesia has established various laws and regulations to counter money laundering, the most recent being Law Number 8 of 2010 on the Prevention and Eradication of Money Laundering (the TPPU law).
CONCLUSION
There are three main factors that impede the application of the reversed burden of proof in money laundering cases originating from corruption: 1) the reverse proof system has not been clearly regulated in the TPPU law; 2) the legal paradigm that the burden of proof is always placed on the public prosecutor; and 3) the judicial mafia, which inhibits the operation of the reverse proof system. Where reverse proof is not applied by law enforcers, Articles 77 and 78 of the TPPU law attach no legal consequences whatsoever. This is one of the barriers to the implementation of reverse proof and the reason why reverse proof in money laundering cases originating from corruption has never been optimally applied.
"year": 2023,
"sha1": "4d5bf4239c940c5e79294e70272e8296ea7ece89",
"oa_license": "CCBY",
"oa_url": "https://iaml.or.id/index.php/home/article/download/66/55",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "95c2a3cd507204985ff41702275dfd28756954d4",
"s2fieldsofstudy": [
"Law",
"Political Science"
],
"extfieldsofstudy": []
} |
Presolar stardust in asteroid Ryugu
We have conducted a NanoSIMS-based search for presolar material in samples recently returned from C-type asteroid Ryugu as part of JAXA's Hayabusa2 mission. We report the detection of all major presolar grain types with O- and C-anomalous isotopic compositions typically identified in carbonaceous chondrite meteorites: 1 silicate, 1 oxide, 1 O-anomalous supernova grain of ambiguous phase, 38 SiC, and 16 carbonaceous grains. At least two of the carbonaceous grains are presolar graphites, whereas several grains with moderate C isotopic anomalies are probably organics. The presolar silicate was located in a clast with a less altered lithology than the typical extensively aqueously altered Ryugu matrix. The matrix-normalized presolar grain abundances in Ryugu are 4.8$^{+4.7}_{-2.6}$ ppm for O-anomalous grains, 25$^{+6}_{-5}$ ppm for SiC grains and 11$^{+5}_{-3}$ ppm for carbonaceous grains. Ryugu is isotopically and petrologically similar to carbonaceous Ivuna-type (CI) chondrites. To compare the in situ presolar grain abundances of Ryugu with CI chondrites, we also mapped Ivuna and Orgueil samples and found a total of 15 SiC grains and 6 carbonaceous grains. No O-anomalous grains were detected. The matrix-normalized presolar grain abundances in the CI chondrites are similar to those in Ryugu: 23$^{+7}_{-6}$ ppm SiC and 9.0$^{+5.3}_{-4.6}$ ppm carbonaceous grains. Thus, our results provide further evidence in support of the Ryugu-CI connection. They also reveal intriguing hints of small-scale heterogeneities in the Ryugu samples, such as locally distinct degrees of alteration that allowed the preservation of delicate presolar material.
Introduction
Ancient stardust grains occur as trace components in primitive extraterrestrial materials. These tiny (mostly sub-µm) grains condensed in the outflows or explosions of evolved stars, prior to the formation of the Sun (i.e., "presolar"). They can be distinguished from other solar system materials by their highly anomalous isotopic compositions that reflect nucleosynthetic processes in their parent stars. Studying presolar grains in the laboratory allows unique insights into galactic, stellar, interstellar, and asteroidal evolutionary processes. Most presolar grains are either O-rich (oxides, silicates) or C-rich phases (e.g., SiC, graphite), although rare nitrides have also been identified (Nittler et al. 1995). Silicates, the most common presolar phase, can only be studied in relatively pristine samples as they are easily destroyed by secondary processes on asteroids or on Earth (Floss & Haenecour 2016; Nittler et al. 2021; Barosch et al. 2022a). SiC and oxide grains are much more resilient to aqueous alteration and were even detected in highly altered meteorites such as the Ivuna-type (CI) chondrite Orgueil (Huss & Lewis 1995). In Orgueil, a presolar SiC abundance of 14-29 ppm was estimated from noble gas analyses (Huss & Lewis 1995) and an abundance of ~34 ppm was found by NanoSIMS measurements of acid-resistant organic-rich residues. Most presolar oxides in Orgueil were found in acid-resistant residues and their abundances are not well-quantified (Dauphas et al. 2010; Qin et al. 2011; Nittler et al. 2018a; Liu et al. 2022). No in-situ study of presolar grain abundances in CI chondrites has been reported to date.
Based on their isotopic compositions, presolar grains can be classified into different groups that link them to their likely stellar sources. The origins of presolar grains and the interpretation of their isotopic compositions have been extensively discussed in the literature (Zinner 2014; Floss & Haenecour 2016, and references therein; Hoppe et al. 2021). JAXA's Hayabusa2 mission recently returned samples from asteroid Ryugu (Watanabe et al. 2017; Morata et al. 2020; Tachibana et al. 2022). Ryugu is a C-type (carbonaceous) asteroid with a mineralogical, bulk chemical and isotopic composition that closely resembles CI chondrites (Ito et al. 2022; Nakamura et al. 2022; Yada et al. 2022; Yokoyama et al. 2022). Any differences could reflect how Ryugu was modified on its parent body, or heterogeneities in the distribution of presolar material within and between samples.
Samples and Methods
We analyzed several samples collected by the Hayabusa2 spacecraft during its two touchdowns on the asteroid Ryugu (sample chambers A and C, respectively), as well as material from CI chondrites Orgueil and Ivuna. The following sample types were investigated: (i) Polished thin sections A0058-C1002 (from chamber A; Fig. 1a), C0002-C1001 (from chamber C; Fig. 1b) and Ivuna HK3 (abbreviated as "A0058-2", "C0002" and "Ivuna" in the following; see Yokoyama et al. 2022 for preparation details). (ii) Small Ryugu (A0108-13, C0109-2) and Orgueil grains. These <1 mm-sized grains were crushed between glass slides. A few dozen ~10-30 µm-sized particles were then extracted with a micromanipulator and pressed into annealed gold foil with quartz windows (Fig. 1c). Grains A0108-11 and C0109-8 were sectioned with an ultramicrotome into several ~250 nm thick slices and placed onto Si wafers (see Yabuta et al. 2022 for preparation details). (iii) Lastly, we received insoluble carbonaceous residues that were prepared by acid treatment of Ryugu grains A0106 and C0107 by Yabuta et al. (2022). The residues were deposited onto diamond windows. All samples were Au-coated.
The thin sections were documented with a scanning electron microscope (SEM; JEOL 6500F) and several fine-grained areas were selected for NanoSIMS analyses (see Fig. 1a).
Each area was analyzed for 25 sequential cycles with a resolution of 256×256 pixels and a 1500 µs counting time per pixel per cycle. The pressed particles and organic residues were primarily analyzed to characterize the microscale isotopic variations of organic matter (Barosch et al. 2022b; Yabuta et al. 2022) and were thus not analyzed for O isotopes. Instead, we used the same primary beam conditions to measure 12 C2 -, 12 C 13 C -, 12 C 14 N -, 12 C 15 N -, plus 16 O -, 28 Si -, and 32 S -or 24 Mg 16 O -, and secondary electrons. An electron gun was used for some measurements to compensate for sample charging. For pressed particles and organic residues larger than 20 µm, the resolution of the ion maps was increased to 512×512 pixels. The counting time was 1000 µs per pixel per cycle and 40 sequential cycles were recorded. To better distinguish C-anomalous grains from surrounding materials, some of them were remeasured for N isotopes and for 28 Si -, 29 Si -, and 30 Si -with a higher pixel resolution than used during the initial mapping.
We used the L'image software for data reduction following the protocol described by Nittler et al. (2018b). The ion images were corrected for a 44 ns dead time, for shifts between image frames, and for effects of quasi-simultaneous arrival (Slodzian et al. 2004; Ogliore et al. 2021). In each map, O, C and Si isotopic ratios were internally normalized to the average composition of each image. N isotopic ratios were normalized to atmospheric N and corrected for instrumental mass fractionation using synthetic SiC-Si3N4 (assumed to have atmospheric 15 N/ 14 N ratios). Presolar grain candidate regions of interest (ROIs) were identified in sigma images of δ 17 O, δ 18 O and δ 13 C, in which every pixel represents the number of standard deviations from the average values (see Table 1). Following Barosch et al. (2022a), we chose the following significance thresholds: 5σ for O- and C-anomalous grains with diameters <200 nm, 4σ for grains with diameters >200 nm and 3.5σ for C-anomalous grains that were clearly associated with 28 Si in the ion images (Fig. 1d, e). A 120 nm beam broadening correction was applied to grains with sizes below 250 nm (Barosch et al. 2022a).
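The sigma-image screening described above can be illustrated with a short sketch: for each pixel, the deviation of the delta value from the image average is expressed in units of the pixel's own Poisson counting error, and pixels beyond the chosen threshold become grain candidates. The synthetic count images and the error model below are simplified assumptions for illustration, not the L'image implementation.

```python
# Minimal sketch of a per-pixel sigma image for an isotope ratio map,
# assuming pure Poisson counting statistics (a simplification).
import numpy as np

def sigma_image(minor, major):
    """minor, major: 2-D arrays of raw ion counts (e.g. 13C- and 12C-)."""
    minor = np.maximum(minor, 1.0)                 # avoid divide-by-zero
    major = np.maximum(major, 1.0)
    ratio = minor / major
    mean_ratio = minor.sum() / major.sum()         # internal normalization
    delta = (ratio / mean_ratio - 1.0) * 1000.0    # permil deviation
    # 1-sigma error of delta from Poisson statistics of the raw counts
    err = (ratio / mean_ratio) * 1000.0 * np.sqrt(1.0 / minor + 1.0 / major)
    return delta / err

rng = np.random.default_rng(0)
c12 = rng.poisson(450.0, (256, 256)).astype(float)  # synthetic 12C counts
c13 = rng.poisson(5.0, (256, 256)).astype(float)    # synthetic 13C counts
sig = sigma_image(c13, c12)
print((np.abs(sig) > 5.0).sum(), "pixels beyond the 5-sigma cut")
```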
In the following, we classify C-anomalous presolar grains as SiC grains if 28 Si was detected in the ion image or carbonaceous grains if no 28 Si was detected. Carbonaceous grains could be presolar graphite, organic matter or tiny and extremely 13 C-rich SiC grains for which isotopic dilution has erased the intrinsic Si signal (Nguyen et al. 2007). We attempted to determine the mineralogical compositions of O-anomalous grains with the SEM using an Oxford Instruments energy-dispersive X-ray spectrometer (EDX; 5-10 kV accelerating voltage, 1 nA beam current).
Presolar grain abundances in Ryugu and CI chondrites
We detected a total of 3 O-anomalous presolar grains, 38 SiC grains, and 16 C-anomalous carbonaceous grains among all of the Ryugu samples. The search of the Orgueil and Ivuna samples resulted in the identification of 15 SiC grains and 6 carbonaceous grains.
No O-anomalous grains were found in Ivuna and no O isotopes were measured in Orgueil. All identified presolar grains are listed in Table 1.
Total areas of ~38,700 µm 2, ~25,300 µm 2, and ~46,500 µm 2 were analyzed in Ryugu chamber A samples, chamber C samples, and CI chondrites, respectively. However, presolar grain abundances were only determined from the thin-section and pressed-particle data. To calculate matrix-normalized presolar grain abundances (Table 2), the placement of the maps has to be considered. All NanoSIMS maps in Ryugu chamber A and the CI chondrite samples were placed randomly in the fine-grained matrix. In section C0002, about one-third of the NanoSIMS maps were placed in the fine-grained matrix (devoid of presolar grains) and the other two-thirds were placed in two areas with less altered lithologies detected by Kawasaki et al. (2022) (clasts 1+2 in Table 2).
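A matrix-normalized areal abundance of this kind is essentially the summed cross-sectional area of the detected grains divided by the total matrix area mapped, with asymmetric Poisson limits on the grain count. Below is a minimal sketch; the grain list is an illustrative placeholder, and the Gehrels (1986) expressions used are the standard 1σ approximations.

```python
# Minimal sketch of a matrix-normalized areal abundance estimate with
# asymmetric 1-sigma Poisson limits (Gehrels 1986, approximate formulas).
# The grain diameters are hypothetical, not the measured Ryugu grains.
import math

grain_diameters_nm = [240, 180, 300, 220, 260]   # illustrative detections
mapped_area_um2 = 38_700.0                        # chamber-A area from the text

grain_area_um2 = sum(math.pi * (d / 2000.0) ** 2 for d in grain_diameters_nm)
abundance_ppm = grain_area_um2 / mapped_area_um2 * 1e6

n = len(grain_diameters_nm)
upper = n + math.sqrt(n + 0.75) + 1.0             # 1-sigma upper limit on n
lower = n * (1.0 - 1.0 / (9.0 * n) - 1.0 / (3.0 * math.sqrt(n))) ** 3
print(f"{abundance_ppm:.1f} ppm "
      f"(+{abundance_ppm * (upper / n - 1):.1f} / "
      f"-{abundance_ppm * (1 - lower / n):.1f})")
```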
Presolar grain compositions
Two of the O-anomalous grains (HY2-O-01 and -02) are shown in Fig. 1f-h. Grain HY2-O-01 was identified among fine-grained matrix in section A0058-2 and is probably an oxide: it is associated with Al in the ion image, which is suggestive of an oxide grain, and silicates are unlikely to survive the extensive aqueous alteration seen in most of the Ryugu samples (Fig. 1f; Yokoyama et al. 2022). Grains HY2-O-02 and -03 are both from the same less altered lithology in clast 1 of section C0002 (cf. Section 4). Based on the presence of Si in the EDX map, and the absence of Al in the EDX map and the ion image, HY2-O-02 appears to be a silicate (Fig. 1g, h). Grain HY2-O-03 has the same Si -/O -and AlO -/O -ratios as the surrounding material and could also be a silicate, although the grain is very small (~120 nm) and it was not possible to cleanly measure its composition.
The C and N isotopic compositions of presolar C-anomalous grains are compared to literature data in Fig. 3b and Si isotopic compositions are displayed in Fig. 3c. Grains without N and Si isotopic data cannot be classified reliably (Table 1b, c). Most Ryugu SiC grains plot in the region of mainstream grains in Fig. 3b and are linked to AGB stars (Zinner 2014). At least two grains with 12 C/ 13 C ratios of ~5 and ~8 are most likely AB grains, and at least one Z grain (HY2-C-05) was identified by its location on the Si three-isotope plot (Fig. 3c). In the CI chondrite samples, most SiC grains are probably mainstream grains with 12 C/ 13 C ranging from 12 to 59, but at least two AB grains with 12 C/ 13 C ≈ 8 were identified (Ivuna-C-10 and -16). SiC grain diameters range from <100-450 nm, with an average diameter of ~240 nm.
Moderate C-anomalous isotopic compositions of approximately δ 13 C ≈ +300 to -300 ‰ ( 12 C/ 13 C ≈ 68-127), and a large diversity in δ 15 N, are typical for ~1% of the C-rich organic grains that are present ubiquitously in Ryugu (Barosch et al. 2022b; Yabuta et al. 2022). Most carbonaceous grains with similar compositions are probably organics. However, the majority of carbonaceous grains reported here have much more anomalous compositions, with the lowest 12 C/ 13 C value at 13 and the highest at 224. Some of these could be SiC grains, but at least two grains (HY2-C-27, -48) with very low Si -/C -ratios are most likely presolar graphites.
The C-anomalous grain sizes range from 110-750 nm with an average size of ~270 nm.
Discussion
Two presolar SiC grains in Ryugu samples were recently detected by Yabuta et al. (2022) and a presolar graphite grain was detected by Ito et al. (2022). Here, we significantly expand on this work. All major types of presolar grains typically found in situ in carbonaceous chondrites were identified in Ryugu: oxides, silicates, SiC, graphites, and C-anomalous organics (Table 1). The detection of at least one presolar silicate (Fig. 1g, h) in Ryugu was particularly unexpected, as silicates are easily destroyed during aqueous alteration (Floss & Haenecour 2016). Their occurrence is probably restricted to relatively rare clasts with less altered lithologies than typical Ryugu matrix, as described by Kawasaki et al. (2022) for clast 1 and seen in another Ryugu chamber C section. While much more analysis is required to determine reliable abundances, it is clear that these clasts were able to preserve delicate presolar grains, whereas typical Ryugu matrix was not. No O-anomalous grains were detected in situ in Ivuna, although oxides are known to exist in CI chondrites. Assuming an average presolar grain size of 0.25 µm, we estimate a 1σ upper limit for non-detection for the abundances of O-anomalous grains of ~4 ppm (Gehrels 1986). This upper limit is fully consistent with the abundances seen in the Ryugu matrix (~1.2 ppm) and the initial estimate of ~0.5 ppm by Hutcheon et al. (1994) based on a single large presolar Al2O3 grain from Orgueil. Recent studies by Morin et al. (2022) and Kawasaki et al. (2022) detected similar less altered clasts in Ivuna (Fig. 2b).
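The ~4 ppm non-detection limit quoted above can be checked with a back-of-envelope calculation: the exact one-sided 1σ (84.13%) Poisson upper limit for zero detected events is −ln(1 − 0.8413) ≈ 1.841 grains. The O-mapped Ivuna area is not given separately in the excerpt, so the value used below is an assumed placeholder chosen for illustration.

```python
# Sketch of the non-detection upper limit, assuming 0.25-um grains.
# mapped_area is an assumed placeholder, not a value from the paper.
import math

conf = 0.8413                                  # one-sided 1-sigma confidence
n_upper = -math.log(1.0 - conf)                # ~1.841 grains for n = 0
grain_area = math.pi * (0.25 / 2.0) ** 2       # um^2 for a 0.25-um grain
mapped_area = 23_000.0                         # um^2, assumed
print(f"upper limit ~ {n_upper * grain_area / mapped_area * 1e6:.1f} ppm")
```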
SiC grains were relatively homogeneously distributed across the Ryugu thin sections, with at least one SiC found in most Ryugu matrix regions measured (Fig. 1a). In contrast, their distribution in Ivuna seems much more heterogeneous: 60% of all grains were detected in relatively close proximity in a ~0.5×0.5 mm-sized matrix region which was indistinguishable by SEM from the other regions measured, with a factor of three lower number density in the other regions. The abundances of carbonaceous grains seem to be slightly higher in Ryugu chamber A samples compared to chamber C samples and CI chondrites (Table 2). The presolar grains detected in Ryugu have isotopic compositions consistent with those seen in primitive meteorites (Zinner et al. 2014, and references therein). Compared to the isotopic compositions of SiC grains detected in acid residues (cf. presolar grain database; Stephan et al. 2020), the N isotopic ratios of many Ryugu SiC grains are closer to solar and/or terrestrial values (Fig. 3b). Moreover, the 12 C/ 13 C ratios of five Ryugu SiC grains are between 10 and 20 (Table 1b), which is unusual, falling between the AB and mainstream SiC populations. Both observations can be explained by the ubiquity of organic matter in Ryugu (Barosch et al. 2022b; Yabuta et al. 2022). Indeed, it was not always possible to completely disentangle the C and N signals arising from organics from those intrinsic to the presolar grains.
The contribution of C and N from surrounding organic matter to the grains dilutes the isotopic signatures of presolar SiC grains and shifts them toward less anomalous compositions, as shown by the green model mixing curves in Fig. 3b. These curves indicate mixing between six select "true" SiC compositions and bulk organic matter in Ryugu ( 12 C/ 13 C ≈ 90, 14 N/ 15 N ≈ 260; Yabuta et al. 2022), and show how C and N contamination from the organic matter leads to a narrower range of SiC compositions compared to literature data. This could also lead to misclassification and/or non-identification of C-anomalous presolar grains with low to moderate anomalies in Ryugu and CI chondrites.
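A two-endmember mixing curve of this kind can be sketched as follows: a presolar SiC composition is progressively diluted with average Ryugu organic matter (12C/13C ≈ 90, 14N/15N ≈ 260, from the text). The SiC endmember values below are illustrative, and the same mixing fraction is applied to C and N (i.e., comparable C/N in both endmembers), which is a simplification relative to the published model.

```python
# Minimal sketch of a two-endmember isotopic mixing curve.
# SiC endmember ratios are assumed for illustration only.
import numpy as np

def minor_fraction(ratio):
    """Atom fraction of the minor isotope, given the major/minor ratio."""
    return 1.0 / (1.0 + ratio)

def mix(r_grain, r_om, f_om):
    """Mixed major/minor ratio for a fraction f_om of OM-derived atoms."""
    x = (1.0 - f_om) * minor_fraction(r_grain) + f_om * minor_fraction(r_om)
    return (1.0 - x) / x

f = np.linspace(0.0, 1.0, 6)                 # fraction of OM-derived atoms
c_curve = mix(50.0, 90.0, f)                 # 12C/13C; SiC endmember assumed
n_curve = mix(2000.0, 260.0, f)              # 14N/15N; SiC endmember assumed
for fi, c, n in zip(f, c_curve, n_curve):
    print(f"f_OM = {fi:.1f}:  12C/13C = {c:5.1f}, 14N/15N = {n:6.1f}")
```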
Conclusions
The samples returned from asteroid Ryugu by the Hayabusa2 spacecraft contain presolar stardust grains. Their abundances and compositions are similar to presolar material found in CI chondrites. Thus, our results provide further evidence that asteroid Ryugu is closely related to CI chondrites, a connection originally based on mineralogical and bulk chemical and isotopic data (Ito et al. 2022;Nakamura et al. 2022;Yada et al. 2022;Yokoyama et al. 2022).
Refractory O- and C-rich presolar phases survived the pervasive aqueous alteration that Ryugu has experienced, whereas delicate presolar silicates were likely destroyed. However, small regions of Ryugu escaped extensive alteration (Kawasaki et al. 2022; Nakamura et al. 2022) and allowed their preservation. Further analyses of less altered Ryugu lithologies would be highly beneficial to better characterize their inventory of preserved presolar material and compare it to more altered Ryugu matrix.
Petrographically and isotopically similar less altered clasts were recently detected in Ivuna (Kawasaki et al. 2022;Morin et al. 2022) and could be targeted in future studies for comparison with the Ryugu samples. These clasts might contain presolar silicates that have not yet been found in Ivuna. The presence or absence of presolar material in these clasts would provide important clues about their origin and their history of secondary processing.
Future NanoSIMS-based analyses of Ryugu samples will focus on particles with shallower 2.7 µm OH absorption features in their infrared reflectance spectra (cf. Yada et al. 2022). These may be less aqueously altered. Their study will allow us to better assess the scale of heterogeneity sampled on Ryugu, and to explore the effects of differing degrees of alteration on organics (cf. Yabuta et al. 2022) and presolar grains. Systematic searches for presolar grains in all Ryugu lithologies will provide a representative dataset of presolar grain abundances and characteristics in asteroid Ryugu and will extract the maximum scientific information from these precious samples.
team led by Prof. H. Yabuta. We thank Nico Küter for assistance during sample preparation.
This work was funded in part by NASA Grants NNX16AK72G and 80NSSC20K0340 to LRN.
[Partial Fig. 3 caption: Errors are 1σ (Gehrels 1986). Literature data from the presolar grain database (Stephan et al. 2020, and references therein). Green curves indicate mixing between select compositions and average Ryugu organic matter (OM; Yabuta et al. 2022), indicating that Ryugu presolar grain compositions, particularly N isotopes, have been somewhat contaminated by the ubiquitously present organic matter. c) Si isotopic ratios of Ryugu presolar grains compared to literature data for meteoritic SiC (cf. presolar grain database; Stephan et al. 2020). The solid line is the best-fit line to mainstream SiC.]
"year": 2022,
"sha1": "646ec6e2c9e678862f966995a77d4bcb17d98c2b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "71c74b50be34e43c8600fc7582a93db6c2fade0b",
"s2fieldsofstudy": [
"Physics",
"Geology",
"Materials Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
Tailor-made biofuel 2-butyltetrahydrofuran from the continuous flow hydrogenation and deoxygenation of furfuralacetone
In this work, we present the first continuous flow process to produce the tailored biofuel 2-butyltetrahydrofuran from renewable resources. In a two-step approach, lignocellulose-derived furfuralacetone is first hydrogenated and then deoxygenated over commercial catalysts to form the desired product. Both reactions were studied independently in batch conditions. The transition to a continuous flow system was done and various parameters were tested in the miniplant. Both reactions were performed in a two-reactor-concept approach to yield the desired 2-butyltetrahydrofuran in a high yield directly from furfuralacetone.
Introduction
Lignocellulose is the most abundant source of biomass available on our planet [2-5]. Besides gasification and pyrolysis, the third common strategy to process lignocellulose is based on the hydrolysis of cellulose and hemicellulose to produce monomeric C 5 or C 6 sugars. These sugars are subsequently converted into platform chemicals such as furfural, 5-hydroxymethylfurfural or succinic acid [6-8], which offer a wide range of possible target molecules [11-13]. The conversion of sugar-derived platform chemicals into diesel fuels requires an enlargement of the carbon chain and a reduction of the oxygen content [15,16]. Dumesic and coworkers were the first to suggest the condensation of furfural and HMF with acetone to form C 8 -C 15 aldol products which can be subsequently hydrodeoxygenated to yield hydrocarbon jet and diesel range fuels [17]. The resulting C 8 -C 15 hydrocarbons are comparable to conventional diesel fuels produced from crude oil, and can therefore be used as drop-in fuels without adjustment of the engines [19-22]. Nevertheless, the oxygen-rich feedstock for biofuels opens the possibility to design new types of fuels with improved properties by keeping some of the functionalities in the molecules [24,25]. In this context, Leitner et al. recently identified three molecules (2-butyltetrahydrofuran (BTHF), 1-octanol (1-OL) and dioctyl ether (DOE)) derived from furfuralacetone (FFA) as promising fuel candidates [26]. While 1-octanol has already been successfully tested as a blending component, the suitability of the other two components still has to be confirmed in real combustion engines.
The synthesis of BTHF, 1-OL and DOE was achieved via a two-step reaction concept. First, FFA was hydrogenated over a commercial Ru/C catalyst and then the resulting saturated alcohol was deoxygenated using a bifunctional catalytic system consisting of Ru nanoparticles stabilised in an acidic ionic liquid. While several catalysts have been reported for the one-pot hydrodeoxygenation of various aromatic ketones [27,28], the separation of the hydrogenation and the deoxygenation steps was necessary in this case in order to avoid the polymerization/degradation of FFA under acidic conditions [26]. Later, an improved catalytic system was developed, consisting of Ru nanoparticles immobilised on an acidic SILP (supported ionic liquid phase) material [29]. Depending on the reaction conditions, the selectivity of the hydrodeoxygenation reaction could be shifted to either one of the three products. However, the reaction conditions still involved the use of an ionic liquid as solvent, hindering large-scale and continuous flow applications. As a result, the hydrodeoxygenation of furfuralacetone was so far limited to high pressure (120 bar) batch conditions.
In this work we present the first continuous flow production of BTHF from FFA using commercial catalysts for both the hydrogenation and hydrodeoxygenation steps (Scheme 1). First, furfuralacetone was hydrogenated over a Ru/C catalyst, followed by the hydrodeoxygenation of the resulting alcohol (THFA) catalysed by a combination of Ru/C and an acidic ion-exchange resin. Both reactions were first independently optimised in batch experiments before being taken to a continuous flow miniplant. Finally, both reactions were sequentially performed in a custom-built continuous flow miniplant to validate the two-reactor concept in long-term operation. The two-step approach prevented the undesired formation of humins and allowed BTHF to be produced continuously in high yield and selectivity.
Safety warning
High-pressure experiments with compressed H 2(g) must be carried out only with appropriate equipment and under rigorous safety precautions.
General
Ru, 5% on carbon, was purchased from abcr and used without further pre-treatment. All ion-exchange resins were bought from Sigma Aldrich. The polystyrene-based ion-exchange resins were washed with deionised water and methanol and dried at 100 °C overnight prior to use. Hydrogen (5.0) was supplied by Westfalen. All batch experiments were carried out in 10 mL stainless steel high-pressure autoclaves with glass inserts.
Gas chromatography
Gas chromatography (GC) was used to determine the product yields in batch and continuous flow experiments, using tetradecane (99%, abcr) as an internal standard. GC samples were prepared by diluting around 250 mg of filtered product solution with pure solvent. The samples were measured on a Shimadzu Nexis GC-2030 chromatograph equipped with an FID detector and a CP-WAX-52CB column. The response factors of FFA, THFK, THFA, BTHF, 1-OL and DOE were determined by calibration with the pure components. Response factors of compounds that were not available as pure substances were estimated using Sternberg's effective carbon number method [30]. The mass balance values were between 95% and 102% for the hydrogenation of FFA and between 91% and 97% for the deoxygenation of THFA.
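Internal-standard quantification of this kind reduces to peak-area ratios, the weighed-in tetradecane mass, and a relative response factor; for compounds without a pure standard, the factor can be estimated from Sternberg's effective carbon number (ECN), where FID response per gram scales as ECN divided by molar mass. The sketch below illustrates this; the peak areas are placeholders, and the ECN of 7 assigned to BTHF (8 carbons, ether-O correction of −1 in the classic ECN scheme) is an assumption, not a value from the paper.

```python
# Minimal sketch of internal-standard GC-FID quantification with an
# ECN-estimated response factor. Peak areas and the BTHF ECN are
# illustrative assumptions.
def mass_from_gc(area_analyte, area_istd, mass_istd_mg, rrf):
    """Analyte mass from peak areas; rrf is response relative to the ISTD."""
    return (area_analyte / area_istd) * mass_istd_mg / rrf

def rrf_from_ecn(ecn_analyte, mw_analyte, ecn_istd=14.0, mw_istd=198.39):
    """Mass-based RRF vs. tetradecane (ECN 14, M = 198.39 g/mol)."""
    return (ecn_analyte / mw_analyte) / (ecn_istd / mw_istd)

rrf_bthf = rrf_from_ecn(7.0, 128.21)            # BTHF: C8H16O, assumed ECN 7
print(f"estimated RRF = {rrf_bthf:.2f}")
print(f"m(BTHF) ~ {mass_from_gc(5.2e5, 4.8e5, 20.0, rrf_bthf):.1f} mg")
```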
NMR spectroscopy
1 H-NMR and 13 C-NMR spectra of the isolated products were recorded on a Bruker AV400 ( 1 H 400.2 MHz, 13 C 100.6 MHz) spectrometer at room temperature. CDCl 3 was used as solvent, and its residual signal (7.2 ppm in 1 H, 77.1 ppm in 13 C) served as reference for the calibration of the spectra.
Synthesis of the starting material furfuralacetone
Furfural was distilled under reduced pressure and stored under argon atmosphere in the freezer prior to use. Furfural (58 g, 0.6 mol) and acetone (78 g, 1.3 mol) were dissolved in H 2 O (470 mL) and cooled to 10 °C. While stirring, a 33 wt% NaOH solution (13 mL) was added. After stirring at rt for 4 h, the mixture was acidified with 20 wt% H 2 SO 4 (25 mL). The product phase separated from the aqueous phase on standing and was removed. The aqueous phase was extracted with EtOAc (1 × 150 mL) and the organic phases were combined. After removal of the solvent under reduced pressure, the crude product was purified by vacuum sublimation. Furfuralacetone was obtained as a white solid and stored under argon at 5 °C.
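A quick stoichiometry check of the procedure above (molar masses: furfural C5H4O2 = 96.08 g/mol, acetone C3H6O = 58.08 g/mol) confirms the stated molar amounts and shows that acetone is used in roughly twofold excess, consistent with favoring the mono-condensation product over the double aldol adduct.

```python
# Stoichiometry check for the aldol condensation; masses from the text.
m_furfural, m_acetone = 58.0, 78.0            # grams, from the procedure
n_furfural = m_furfural / 96.08               # ~0.60 mol, as stated
n_acetone = m_acetone / 58.08                 # ~1.34 mol, as stated
print(f"furfural: {n_furfural:.2f} mol, acetone: {n_acetone:.2f} mol, "
      f"acetone excess: {n_acetone / n_furfural:.1f} equiv")
```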
Batch hydrogenation of furfuralacetone
In a typical experiment, FFA (102.1 mg, 0.75 mmol), 5 wt% Ru/C (7.6 mg, 3.75 µmol Ru), the internal standard tetradecane (20 mg) and cyclohexane (1.5 mL) were combined in a glass insert and placed in a 10 mL high-pressure autoclave. The autoclave was purged with H 2 and then pressurised to 40 bar. The reaction mixture was stirred at 100 °C for 2 h. Once the reaction was finished, the reactor was cooled to room temperature and carefully vented. The product mixture was diluted, filtered through a syringe filter and analysed via GC-FID.
Batch deoxygenation of 4-(tetrahydrofuran-2-yl)butan-2-ol
Pure THFA was obtained from hydrogenation product solutions by vacuum distillation. In a typical experiment, THFA (108 mg, 0.75 mmol), 5 wt% Ru/C (7.6 mg, 3.75 µmol Ru), Amberlyst 36 (9.3 mg, 0.05 mmol H + ), tetradecane (20 mg) and cyclohexane (1.5 mL) were combined in a glass insert and placed in a 10 mL high-pressure autoclave. The autoclave was purged with H 2 and then pressurised to 80 bar. The reaction mixture was stirred at 150 °C for 6 h. Once the reaction was finished, the reactor was cooled to room temperature and carefully vented. The product mixture was diluted with acetone, filtered through a syringe filter and analysed via GC-FID.
Monitoring the course of the reaction
To monitor the deoxygenation of THFA, the reaction was carried out in a 50 mL autoclave with a sampling device.For this, the reaction was scaled up to 15 mL.The volume of each sample was approximately 0.2 mL.
Continuous flow reactions
Continuous flow experiments were performed with a custom-built miniplant from Separex (Fig. 2). Depending on the configuration, the miniplant could be equipped with either one or two stainless steel tube reactors (35 cm long, 8.8 mm internal diameter) in series. The reactors were filled with alternating layers of the inert material SiC (46 grit, ≈3 g per layer) and catalyst. For the hydrogenation step, 1 g of Ru/C was used (125 mg per layer). In the case of the deoxygenation, different amounts of Ru/C and Amberlyst 36 were physically mixed to form the catalyst layers. The tube reactors were plugged with glass wool at both ends, and stainless steel frits before and after the reactors kept the catalyst bed in place. The dead volume of the filled tube reactors was approximately 11 mL. Heating jackets around the reactors provided the necessary heat for the reactions. The hydrogen flow was controlled by a mass flow controller (Bronkhorst F-230 M). The flow rate of the substrate solution was controlled by an HPLC pump (SSI Model 12-6 dual piston pump). Once the hydrogen and substrate flows were combined, the resulting stream passed through a tube filled with glass beads for pre-mixing of the two phases, as well as a pre-heater, before reaching the reactor. A back-pressure regulator controlled the pressure inside the miniplant (fluctuation <±1 bar). Samples of the product stream were taken periodically.
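From the reported dead volume, a rough upper bound on the liquid-phase residence time can be estimated for the flow rates used later; this simple sketch ignores the volume occupied by the hydrogen phase, so the true contact times are shorter:

# Rough residence time per filled tube reactor (sketch).
DEAD_VOLUME_ML = 11.0                        # from the text
for flow_ml_min in (0.25, 0.38, 1.0):        # substrate flow rates used below
    tau = DEAD_VOLUME_ML / flow_ml_min
    print(f"{flow_ml_min:.2f} mL/min -> tau <= {tau:.0f} min")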
Results and discussion
Step 1: Furfuralacetone hydrogenation in batch
Furfuralacetone is a quite complex molecule bearing three different types of unsaturated functionalities, namely a C-C double bond, a keto group and a furan ring. As a result, the hydrogenation of FFA leads to the formation of different products, depending on the reaction conditions. Scheme 2 gives an overview of all species that were observed during this work and the proposed pathways leading to those species. The reaction network is in accordance with the literature. 31,32 The rapid hydrogenation of the double bond in FFA gives the intermediate 4-(furan-2-yl)butan-2-one (FK). FK can be further converted to either 4-(furan-2-yl)butan-2-ol (FA) by hydrogenation of the keto group or to 4-(tetrahydrofuran-2-yl)butan-2-one (THFK), if the aromatic ring is hydrogenated first. In another hydrogenation step, FA and THFK can both be transformed to the saturated molecule 4-(tetrahydrofuran-2-yl)butan-2-ol (THFA). Two additional products that have been observed are 2-methyl-1,6-dioxaspiro[4.4]nonane (SP) and octane-2,5-diol (OD). The formation of SP from FA has been reported before. 33 It is formed via the partially hydrogenated intermediate 4-(4,5-dihydrofuran-2-yl)butan-2-ol (DHFA), which was not observed because of its high reactivity. The formation of OD requires an opening of the five-membered ring and most likely takes place while the aromatic ring is still intact, since the saturated ring is known to be more stable towards hydrogenolysis. 34,35

Initial experiments were performed in batch mode using stainless steel autoclave reactors (Table 1). Commercially available Ru/C served as the hydrogenation catalyst since it already gave successful results in previous works. 26 No acidic co-catalyst was added at this stage, since FFA is prone to form humins in the presence of acids. We selected cyclohexane as solvent for the reaction because it is inert under hydrogenation as well as deoxygenation conditions. However, the solubility of FFA in cyclohexane at room temperature is relatively low, so only a concentration of 0.5 mol L−1 was possible. The hydrogenation of FFA showed a minor dependence of the product distribution on the reaction temperature (entries 1.1-1.4). Nevertheless, the highest yield towards the formation of THFA (92%) was achieved at 50 °C (entry 1.1). Variation of the hydrogen pressure revealed that high pressures prevent the formation of the side product SP. While 14% of SP was formed at 20 bar hydrogen pressure, the amount was reduced to 4% at 80 bar (entries 1.5-1.6). An additional increase to 120 bar did not improve the THFA yield further, as 3% SP was still formed (entry 1.7). Using THF instead of cyclohexane as solvent led to a decrease in side product formation. However, due to the lower hydrogen solubility in THF, the reaction proceeded at a slower rate, and 8% of the intermediate THFK remained after the reaction (entry 1.8). Under solvent-free conditions, the THFA yield was similar to what was observed in cyclohexane (entry 1.9). These batch results demonstrate that Ru/C can rapidly and fully hydrogenate FFA in cyclohexane. Under optimised conditions (50 °C, 80 bar), full conversion and a THFA yield of 95% were reached (entry 1.10).
In order to verify the heterogeneous nature of the catalyst, we determined the ruthenium content in the product solution by ICP-MS and tested the catalytic activity of the filtrate after removal of the catalyst.A very low Ru concentration (7 ppb) and no activity after catalyst removal confirm that neither nanoparticles nor molecular species leach into the liquid phase.
Step 2: 4-(Tetrahydrofuran-2-yl)butan-2-ol hydrodeoxygenation in batch
The hydrodeoxygenation of THFA involves a complex reaction network of hydrogenation and hydrogenolysis reactions, which is outlined in Scheme 3. Through the desired pathway, THFA reacts under acidic conditions to give 2-butyltetrahydrofuran (BTHF) via elimination of the hydroxyl group. Side reactions include the reversible formation of THFA isomers and dimerisation. One THFA isomer can also react irreversibly to 2-methyl-5-propyltetrahydrofuran (MPTHF). Opening of the five-membered ring in BTHF can lead to either the cyclic isomerisation product 2-propyltetrahydro-2H-pyran (PTHP) or the linear 1-octanol (1-OL). Additional side products include dioctyl ether (DOE), octane and C7 molecules such as heptane and 2-methyl-5-ethyltetrahydrofuran. The main side products, PTHP and 1-OL, are potential biofuel molecules, too. Nevertheless, our aim was to maximise the yield of BTHF during the hydrodeoxygenation.
Preliminary optimisation experiments of the reaction parameters, based on a design of experiments approach, suggested a temperature of 150 °C, a hydrogen pressure of 80 bar, a reaction time of 6 h and a substrate to H+ ratio of 15 : 1 (see ESI Tables S2 and S3 †). With these optimised conditions in hand, we investigated the time profile of the reaction, which is displayed in Fig. 3. During the first three hours, BTHF and dimers are formed in parallel. Afterwards, as more BTHF is produced and THFA is withdrawn from the equilibrium, the amount of dimers decreases again. At four hours, BTHF reaches its maximum of 94%, before the consecutive reactions to PTHP and 1-OL become pronounced. Next, we compared different ion-exchange resins as solid acid components for the deoxygenation step. The reaction temperature of 150 °C is a limiting factor, because many resins such as Amberlyst 15 are only stable up to 120 °C. Thus, we selected four different ion-exchange resins with high thermal stability: Dowex 50WX8, Dowex Marathon MSC and Amberlyst 36 tolerate temperatures up to 150 °C, and Nafion NR50 up to 200 °C. As shown in Fig. 4, the resins Dowex Marathon and Amberlyst 36 gave the highest BTHF yields with 83% and 91%, respectively. Dowex 50 seems to be less active, because after 6 h still 23% of the starting material remains in the form of dimers. The lower activity of Dowex 50 is probably a consequence of its gel-type matrix. Gel-type exchange resins need to swell in the solvent for optimal access to the acid sites, which is why they are less suitable for apolar solvents such as cyclohexane. 36 Dowex Marathon and Amberlyst 36 are macroporous ion-exchange resins with good access to the acid sites even in non-swelling solvents and thus give better results. The ion-exchange resin Nafion NR50 showed the highest deoxygenation activity and led to high fractions of the consecutive products PTHP (19%) and 1-OL (11%). The reason for this is the higher acid strength of Nafion NR50 compared to the polystyrene-based exchange resins. 36 Although Nafion NR50 is also based on a gel-type matrix, the higher acid strength dominates in this case. The increased formation of side products and the resulting lower BTHF yield (70%) exclude Nafion as a suitable solid acid.
Ruthenium nanoparticles immobilised on a sulfonic acid-functionalised SILP material (Ru@SILP1.0), which showed excellent deoxygenation results in a previous work by Luska et al., 29 were tested as well. Surprisingly, under our modified reaction conditions, which differed from the previous ones mainly through the use of cyclohexane as solvent instead of the ionic liquid [EMIM][NTf2], hardly any BTHF or 1-OL formation was observed. Instead, only the dimerisation of THFA took place. These results indicate that the ionic liquid not only served as a solvent in this case, but actively interferes in the reaction mechanism. It is known that ionic liquids can influence catalytic reactions by stabilising intermediates such as carbenium ions or by enhancing the strength of acids, which could explain the strong differences in product distribution between solvents. 37 Finally, we tried p-toluenesulfonic acid monohydrate as a homogeneous acid catalyst. TsOH produced less BTHF than the ion-exchange resins, as mainly dimers were formed during the deoxygenation reaction, although the Dowex and Amberlyst resins contain the same sulfonic acid group as TsOH. Gates et al. also observed this trend in the dehydration of t-butyl alcohol. To explain this, the authors presented a concerted mechanism involving multiple -SO3H groups. This mechanism is more likely to take place in an exchange resin with a high local concentration of acid groups. 38 Overall, Amberlyst 36 showed the best performance with the highest yield of the target molecule BTHF (ca. 92%) and only small amounts of undesired isomeric side products. Therefore, Amberlyst 36 was selected as acid catalyst for the rest of the study.
Next, we studied the influence of the solvent on the product distribution (Fig. 5). While the hydrogenation of FFA proceeded smoothly in both cyclohexane and THF to give THFA in high yields (ca. 90%), THF was not inert under the acidic conditions used for the hydrodeoxygenation reaction. Significant amounts of 1-butanol, the hydrogenolysis product of THF, were indeed observed during the reaction. Ring opening of the solvent also took place in the case of 1,4-dioxane and 2-methyltetrahydrofuran. Under solvent-free conditions, BTHF was formed as the main product (76% yield). The lower yield, compared to the reactions in cyclohexane, goes along with a higher amount of 1-OL (19%). As 1-OL is a consecutive product of BTHF, better BTHF yields could probably be achieved with a shorter reaction time in this case. Surprisingly, a completely different product distribution was obtained when the deoxygenation of THFA was performed in the ionic liquid 1-ethyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide ([EMIM][NTf2]), which was used in earlier works. 26,29 Instead of BTHF, almost exclusively 1-OL and its etherification product dioctyl ether (DOE) were formed. As mentioned above, the ionic liquid influences the catalysis considerably. Although octanol is a valuable chemical, the use of an expensive ionic liquid seems neither justified nor suitable for large-scale applications. 39 While neat conditions would of course be beneficial compared to cyclohexane in terms of economy and green chemistry, the solvent-free implementation in our miniplant was not practical, because FFA is a solid at room temperature. Thus, cyclohexane was used for our continuous flow experiments.
Step 1 & 2: Hydrogenation and deoxygenation of furfuralacetone in batch
Finally, we carried out both reaction steps in a row in a one-pot approach, using the conditions determined in the previous optimisation (Fig. 6). For this, FFA, Ru/C and Amberlyst 36 were all combined in the beginning and pressurised with hydrogen. The mixture was then stirred for 2 h at 50 °C. At this temperature, no polymerisation of FFA took place despite the presence of the acidic Amberlyst 36. Afterwards, the temperature was raised to 150 °C and the mixture was stirred for another 6 h. The tandem reaction approach gave an overall BTHF yield of 91%. This was even higher than the combined yield of the two individual steps (95% · 92% = 87%), because some of the SP side product formed in the first step can react to THFA and further to BTHF under deoxygenation conditions.
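The comparison in the last sentence is simply the product of the two step yields, as this one-liner spells out:

# Yield expected from naively chaining the two isolated steps (sketch).
print(f"{0.95 * 0.92:.1%}")   # 87.4% -- below the 91% observed in the one-pot run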
Continuous flow experiments
After investigating both reaction steps under batch conditions, the reaction system was transferred to a continuous flow miniplant. The miniplant was equipped with two tubular reactors in series, so that two consecutive reactions can take place one after the other without workup or change of pressure in between.
Continuous operation step 1: hydrogenation
Following an approach similar to the batch experiments, we studied both reaction steps individually, starting with the hydrogenation of FFA. Based on the results obtained in batch, cyclohexane was used as solvent and the pressure was set to 80 bar. The temperature was investigated between 50 °C and 120 °C. Table 2 compares the yield of THFA at the different temperatures after 3 h on stream. Good THFA yields of 94%-97% were obtained at all investigated temperatures, whereby 50 °C gave the highest yield, in accordance with the batch experiments. However, at reaction temperatures of 80 °C or higher, these yields were only stable for the first couple of hours. After around 5-6 h on stream, the THFA yield decreased drastically as more of the intermediate THFK remained in the product solution (see ESI Fig. S1 to S3 †). It was concluded that the catalyst suffered from deactivation. Fortunately, at 50 °C the yield of THFA stayed stable at 96-97% throughout the entire experiment of 10 h (Fig. 7). To understand the nature of the deactivation, we characterised the Ru/C catalyst by transmission electron microscopy (TEM) and X-ray diffraction (XRD) analysis before and after deactivation took place. TEM pictures showed no clustering of the Ru particles, and the structure of the support did not change significantly during the reaction either. Furthermore, Ru leaching into the solution was negligible, as ICP-OES analysis showed only ppb amounts of Ru in the product solution. As a result, the most likely reason for the catalyst deactivation is poisoning or blockage of the Ru centres by deposition of an insoluble side product. This presumption is further supported by BET (Brunauer-Emmett-Teller) and thermogravimetric (TG) analysis. The BET data show a significant reduction of the surface area and pore volume of the catalyst after the reaction (see ESI, Fig. S7 †). In addition, TGA revealed a weight loss of the catalyst when heating it above 250 °C (see ESI, Fig. S6 †).
Continuous operation step 2: deoxygenation
We continued with the investigation of the hydrodeoxygenation step in a continuously operated system (Table 3). As evidenced during our study under batch conditions, the residence time must be chosen correctly to achieve a high yield of the desired BTHF. Therefore, we varied the substrate flow and the amount of Amberlyst 36, both of which influence the residence time. We started with a flow rate of 0.25 mL min−1 and 2 g of Amberlyst 36 for the first run. In addition to the side products observed during the batch experiments, significant amounts of octane, heptane and 2-methyl-5-ethyltetrahydrofuran were formed under these conditions. Additionally, 11% of dimers were left after the reaction, indicating that the residence time was too short (entry 3.1). To extend the contact time of substrate and catalyst, we increased the amount of the ion-exchange resin to 6 g for the next run. As a result, no dimers remained after the reaction. However, even more C7 species were observed compared to the previous run (entry 3.2). Surprisingly, all dimers were still converted when the substrate flow rate was increased to 0.38 or even 1.0 mL min−1 (entries 3.3-3.4). At the same time, the formation of undesired octane and C7 species was significantly reduced to 5%, thus leading to a yield of 85%. The amount of MPTHF and 1-OL was not significantly influenced by the change of the substrate flow rate.
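One way to rationalise why even 1.0 mL min−1 still converted all dimers is to compare the molar feed rate with the acid-site inventory of the bed. The sketch below assumes the 0.5 mol L−1 feed concentration from the hydrogenation step and the same roughly 5.4 mmol H+ per gram capacity as above; both are assumptions, not reported values.

# Substrate feed rate vs. acid-site inventory of the 6 g Amberlyst 36 bed (sketch).
CONC = 0.5                          # mol/L substrate in the feed (assumed)
n_h = 6.0 * 5.4                     # ~32 mmol H+ in the bed (assumed capacity)
for flow in (0.25, 0.38, 1.0):      # mL/min
    print(f"{flow:.2f} mL/min: feed {CONC * flow:.2f} mmol/min vs. {n_h:.0f} mmol H+")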
Continuous operation step 1 & 2
Finally, we combined both steps in the miniplant, using the optimised conditions found earlier. For the deoxygenation, we reduced the amount of Ru/C from 2 g to 1 g, because there were indications (see ESI, Table S4 †) that this might further reduce the formation of the C7 species. The results can be seen in Fig. 8. During the first 4.5 h, a good BTHF yield of 86-87% was achieved, no dimers were found in the product mixture and the amount of C7 species and octane was only 2-3%. In the remaining time, the amount of BTHF decreased steadily as the amount of dimers increased. After 10.5 h on stream, the yield of BTHF was reduced to 75% and the dimers had increased to 16%. We assume that the loss of activity is due to the formation of water, which is produced as a by-product during the deoxygenation. Since water is not miscible with cyclohexane, it is suspected that it stays adsorbed on the Amberlyst 36. This would be consistent with findings of other groups that water reduces the strength of Brønsted acids in deoxygenation reactions. 40,41 One approach to counteract the decrease in activity could be to reduce the flow rate steadily over time. However, this assumes that the catalyst deactivation reaches a saturation level, as the course of the yield in Fig. 8 suggests, rather than progressing indefinitely. Another idea would be to add a drying agent to the catalyst bed to remove the water temporarily from the catalyst and to change the used reactor from time to time. In any case, this issue needs further investigation and longer time-on-stream data in the future.
Conclusions
A two-step approach for the conversion of lignocellulose-based furfuralacetone to 2-butyltetrahydrofuran is presented. Complete hydrogenation of furfuralacetone is rapidly achieved at 50 °C over a commercial Ru/C catalyst with excellent yields of up to 97%. The hydrogenation product THFA is then deoxygenated to 2-butyltetrahydrofuran in the second step by combined metal and acid catalysis. A combination of the commercial catalysts Ru/C and the ion-exchange resin Amberlyst 36 was found to be best suited for this transformation. In comparison to other ion-exchange resins, Amberlyst 36 led to the highest BTHF yield of 92% in batch experiments. Furthermore, the combination of inexpensive heterogeneous catalysts and cyclohexane as solvent allowed the implementation of this reaction cascade in a custom-built miniplant. To the best of our knowledge, this is the first continuous flow synthesis of BTHF based on renewable resources. Both reaction steps were successfully performed in sequence in the miniplant with an initial BTHF yield of 86-87%. After 5 h on stream, the yield starts to decrease slowly, down to 75% after 10 h, due to deactivation of the ion-exchange resin. Further investigations should address this deactivation by water and include longer continuous flow experiments. Additionally, a continuous implementation of the reaction sequence under solvent-free conditions could be considered.
Fig. 2 Continuous flow miniplant used in this study. (a) Picture of the miniplant. (b) Simplified process diagram.
Table 1
Investigation of the product yields in the hydrogenation of FFA over Ru/C a
Table 2
Continuous flow hydrogenation of FFA in cyclohexane a (b: experiment lasted 10 h). | 2019-11-07T15:26:17.445Z | 2019-11-25T00:00:00.000 | {
"year": 2019,
"sha1": "7f124dda0a4a170e97ab52633a668b4bb434460d",
"oa_license": "CCBY",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2019/gc/c9gc02555c",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "cdeb75be589d9563bb8480cb07e72733e83e1fc9",
"s2fieldsofstudy": [
"Environmental Science",
"Chemistry",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
90860814 | pes2o/s2orc | v3-fos-license | Morphological Characterization of Some Wild Macrofungi of Gorakhpur District, U.P., India
Introduction
Mushrooms are seasonal fungi, which occupy diverse role in nature across the forest ecosystem. They predominantly occur during the rainy as well as spring season when the snow melts. Mushrooms are in fact the 'fruit' of the underground fungal mycelium. Macrofungi may be edible, inedible, medicinal and poisonous. Some macrofungi are not edible but they have some tonic and medicinal qualities (Chang and Miles, 2004). They are macromycetes forming macroscopic fruiting bodies such as agarics, boletes, jelly fungi, coral fungi, stinkhorns, bracket fungi, puffballs and bird's nest fungi. They are fleshy, subfleshy or sometimes leathery or woody and bear their fertile surface either on lamellae or lining the tubes, opening out by means of pores.
Fungi are the second largest biotic community after insects in the world (Sarbhoy et al., 1996). Out of the 1.5 million fungi estimated around the earth, only 50% have been characterised until now, and one third of the total fungal diversity exists in India (Butler and Bisby, 1960; Bilgrami et al., 1981, 1991; Manoharachary, 2002; Manoharachary et al., 2005). Mushrooms alone are represented by about 41,000 species, of which approximately 850 species are recorded from India (Deshmukh, 2004), mostly belonging to the Agaricales, also known as gilled mushrooms (for their distinctive gills), or Euagarics. Various workers have also identified and classified various types of macrofungi from different parts of India (Butler and Bisby, 1931; Vasudeva, 1960).
All types of mushrooms are important in decomposition processes because of their ability to degrade cellulose and other plant polymers (Arora, 2008). Though India has rich macrofungal biodiversity, most traditional knowledge about mushrooms comes from the Far Eastern countries. Most of these mushrooms grow abundantly in nature and their commercial harvest is undertaken for benefit in these countries. Mushrooms like Ganoderma, Lentinus, Grifola etc. have been collected and used since time immemorial. Recent reports show a tradition of wild mushroom picking, their consumption and sale in the market in other countries (Guzman, 2008; Sitta and Floriani, 2008).
The current area of survey is Gorakhpur, which is situated in the north-eastern part of Uttar Pradesh near the border of Nepal. It covers about 3483.8 square kilometres, between latitudes 26°13′ N and 27°29′ N and longitude 83°05′ E. The average annual temperature is 26 °C; it ranges from 30-40 °C in summer and 2-18 °C in winter. Annual rainfall is 1393.1 mm, and 87% of the rainfall is recorded during the period of June to September (Singh et al., 2014). The soil of the region is part of the trans-Sarju plain and comprises Gangetic alluvium brought down by rivers like the Ghaghara, Rapti, Rohin and Gandak from the Himalayas in the north. The texture is sandy loam and the pH is about neutral. The general vegetation of Gorakhpur district is interspersed with patches of forest, old fields, open pasture, uplands (mounds or dhus), lowlands, orchards, playgrounds and human settlements (Srivastava et al., 2015).
The Gorakhpur region is a rich reservoir of macrofungi. Much work has been done in this area by various workers to explore its macrofungal diversity (Srivastava et al., 2011; Vishwakarma et al., 2014; Chandrawati et al., 2014). An attempt was made here to explore the macrofungal diversity of the region, emphasising the morphological features of the collected samples.
Materials and Methods
Systematic and periodic surveys of different Tehsils (Sadar, Sahajanwa, Gola, Bansgaon, Khajni, Chauri-Chaura, Campierganj), associated forests and other organic matter-rich habitats of Gorakhpur district (U.P.) were carried out during January 2014 - July 2016. The ecological habitats, viz. humid soil, wood logs, leaf litter, wood, sandy soil, leaf heaps, wheat straw, paddy straw, calcareous soil, wet soil, troops of rotten wood, termite nests, decaying wood logs and humus, were taken into consideration for the presence of macrofungi. Regular field trips were made for the collection of macrofungi, and these were more frequent during the monsoon season (June to September).
The collected samples were wrapped in wax paper and brought to the laboratory for study and identification. Identification was made on the basis of macroscopic and microscopic characteristics using the relevant literature (Purakasthya, 1985; Alexopolous et al., 1996) and information available at www.mushroomexpert.com or www.mycokeys.com (Henry and Sullivan, 1969; Rapsang and Joshi, 2012). The soft-textured samples were preserved in 2% formaldehyde and leathery-textured samples in 4% formaldehyde. Alternately, samples were also oven dried at 80 °C for 5 consecutive days, wrapped in aluminium foil and packed in polythene bags with naphthalene balls for further study. Traditional knowledge of the wild mushrooms, such as their edibility and medicinal value, was also gathered from the local tribes. All identified and unidentified specimens were deposited in the herbarium of the Department of Botany, DDU Gorakhpur University, Gorakhpur, U.P., India.
Data Analysis
Frequency and density were analyzed following Gogoi and Sarma (2012).
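The formulas themselves are not reproduced in the paper; the sketch below shows the forms commonly used in such macrofungal surveys and attributed here to Gogoi and Sarma (2012), so treat the exact definitions as an assumption.

# Frequency and density as commonly computed in macrofungal surveys (sketch).
def frequency(sites_with_species, total_sites):
    # Percentage of surveyed sites/plots in which the species occurred.
    return 100.0 * sites_with_species / total_sites

def density(total_fruit_bodies, total_sites):
    # Mean number of fruiting bodies per surveyed site/plot.
    return total_fruit_bodies / total_sites

print(frequency(4, 10), density(12, 10))   # hypothetical survey counts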
Results and Discussion
A total of 20 macrofungi belonging to 12 genera in 9 families were identified, of which 10 species belong to the family Agaricaceae. The information regarding species name, family, edibility, host/substratum, frequency and density of the collected macrofungi is given in Table 1.
Agaricus arvensis
Family: Agaricaceae Description: Cap 5 cm broad, broadly convex, often with a low umbo, decurved to occasionally upturned in senescent specimens, margin incurved, surface dry and smooth, fibrillose to finely scaled in dry weather, white to ashy-grey in colour. Stipe 6 cm long, tapering to a pointed base, stuffed; veil thin, fragile, membranous, either leaving remnants on the young cap margin or forming a temporary ring. Gills close, pink, free, becoming blackish brown at maturity. Spores 6.71 x 4.32 µm, smooth; spore print blackish brown. Habitat: Found scattered in grassy areas.
Agaricus campestris
Family: Agaricaceae Description: Cap 5.5 cm broad, convex, often with a low umbo, margin incurved, dry and smooth surface, fibrillose to finely scaled in dry weather, white to ashy grey colour. Stipe 5 cm long, tapering to pointed base, veil thin, membranous, fragile, either leaving remnants on the young cap margin or forming a median to superior, evanescent ring. Gills close, pink, free, becoming blackish brown at maturity. Spore 6.5 x 4.31 µm.
Habitat: Scattered or forming arcs and rings in grassy areas.
Agaricus trisulpharatus
Family: Agaricaceae Description: Cap 3 cm broad, bell-shaped then umbonate and flat, margin distinctly lined in mature specimens, bright yellow to greenish yellow or pale yellow or white. Gills free, yellow or pale yellow, crowded. Stipe 5 cm long, dry or powdery, slender, smooth but slightly enlarged at the base, yellow. Veil yellow; the partial veil forms a small, collar-like ring on the upper stalk which may disappear. Flesh very thin, yellow. Spores ellipsoid, with an apical pore, smooth, 9.40 x 5.51 µm. Habitat: Found singly or in bunches on rich organic matter, decaying hay and leaf piles.

Several other workers have also studied the mushroom diversity of Gorakhpur. Chandrawati et al. (2014) collected 29 macrofungal species belonging to 12 families, in which Tricholomataceae was predominant; out of the 29 spp. collected, 4 were excellently edible, 6 edible, 18 inedible and 1 poisonous. As a result of extensive field surveys and microscopic studies in the laboratory, 12 taxa belonging to 8 families were identified earlier (Vishwakarma et al., 2014). In the present study, surveys were made between January 2014 and July 2016, and 20 different species of macrofungi belonging to 12 genera and 9 families were identified based on their morphological and microscopic characteristics. Out of the 20 species identified, 3 were excellently edible, 9 edible, 3 inedible, 4 medicinal and one was found to be poisonous. Termitomyces heimii, Tuber aestivum and Macrolepiota procera were edible and found to be abundant, but Agaricus arvensis was rarely found during the survey.
In conclusion, macrofungi play a vital role in maintaining the ecosystem; they have high nutritional and medicinal potential and also help in the biodegradation and recycling of organic matter. Termitomyces heimii, Tuber aestivum and Macrolepiota procera were found abundantly, are edible, and are also used for medicinal and cooking purposes by tribals living near the forest regions of Gorakhpur. The identification of some unknown wild macrofungi opens a new way for researchers and pharmaceutical companies to exploit them for food, medicines and other bioprospects, and to attempt their commercial cultivation. | 2019-04-02T13:02:40.970Z | 2016-12-15T00:00:00.000 | {
"year": 2016,
"sha1": "4e8f267fd10a14349ae665a1952371db235abdfb",
"oa_license": null,
"oa_url": "https://www.ijcmas.com/5-12-2016/Ravinder%20Pal%20Singh,%20et%20al.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "506b150c0b2dd4979555a206197e7f04088490a0",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
228927805 | pes2o/s2orc | v3-fos-license | The New Testament κύριος problem and how the Old Testament speeches can help solve it
Introduction
The translation of the Hebrew Scriptures into Greek was one of the biggest undertakings of its kind. It would have been impossible to accomplish such a mammoth task without some of the best minds the ancient Mediterranean society had to offer. They had to be competent on so many levels, skilled in various disciplines and had to have the ability to solve problems. But nothing could have prepared them for the translation of the divine name of a Hebrew deity; the most sacred linguistic characters the Hebrew language ever produced, יהוה (the Tetragrammaton). Translating such a significant and religiously sensitive Hebrew term turned out to be one of the biggest obstacles they faced, because translating this term is, by definition, a sacrilegious act. To be sure, the translation of the Tetragrammaton by default implied that the divine name is stripped of its sacred status. In fact, the mere idea to translate the divine name must have been regarded as preposterous. If this did not cause translation fever amongst the scribes, then the prohibition to utter the divine name, presumably from the third century BCE onwards, must have caused some sleepless nights. To add to this, the copying of the Hebrew Scriptures during this period produced multiple Hebrew terms and scribal practises to avoid uttering the divine name. All things considered, the translation of the Hebrew Scriptures into Greek demanded nerves of steel and all the beer Alexandria had to offer.
The New Testament (NT) κύριος problem is therefore not an isolated issue. It forms part of a much larger interconnected network of challenges, which has the divine name, יהוה (hereafter transcribed Yhwh) as the epicentre. The NT κύριος problem is pertinent for the NT and fundamental for its theology and Christology. To put it plainly, if the term κύριος is an equivalent for the divine name Yhwh, and if the term κύριος in the Yhwh sense is applied to Jesus, the implication is that Jesus is put on par with Yhwh. If translating the term Yhwh was a preposterous idea, equating Jesus with Yhwh is nothing short of blasphemy; punishable by death. To reiterate, the κύριος problem is not confined to the NT; it forms part of a matrix of interconnected issues in a constant push and pull relation. There is no easy way to address this problem, but one must start somewhere. This study will attempt to introduce, illustrate and explain the complexity of the NT κύριος problem to contribute to a deeper understanding of the problem and to appreciate its intricacies. The aim is therefore to illustrate the intricacy of the problem by showing where the NT κύριος problem might have originated and how it evolved. These intricacies will then be pulled into a singular focus made possible by the explicit κύριος citations. These citations, in turn, will be categorised as Theos', David's and Jesus' speeches and analysed in an attempt to contribute to a possible solution.
Defining the problem
A question that begs to be answered is whether a κύριος problem is not just an illusion. Is the so-called κύριος problem not the result of a cultural disconnection and misunderstanding of ancient texts, concepts and social contexts? Is this not a case of postulating a problem onto ancient Hebrew and Greek texts because of sociocultural and religious estrangement? The manuscript evidence, however, will reveal that a κύριος problem is not only plausible but a scribal reality. The NT κύριος problem is both conceptual and linguistic in nature; conceptually, it is a matter of who Jesus was perceived to be, predominantly in relation to a Hebrew deity and, albeit to a lesser extent, the Emperor as dominus. Linguistically speaking, it is a culmination brought about by the complexities of all the Hebrew and Aramaic terms used to reference a Hebrew deity and of finding a suitable Greek equivalent in translating these terms. 1 A few assumptions are made that form the basis for the NT κύριος problem: 1. a prohibition was in place from the third century BCE onwards prohibiting the pronunciation of the name of the Hebrew deity, Yhwh (cf. Tov 2020:49); 2. the rule of thumb is that the term κύριος is a suitable Greek equivalent for the Hebrew term יהוה; 2 3. the term κύριος in the sense of Yhwh, as a divine name, is not applied to Jesus, irrespective of its ambiguity. 3 These assumptions are problematic for primarily three reasons; firstly, there is no manuscript evidence of an uncontracted κύριος term as an equivalent representation of יהוה from the third century BCE to the second century CE. Certainly, the evidence only
1. Tov (2020:47-58) is of the view that κύριος is a standard equivalent for יהוה, which is a straightforward linguistic equation (יהוה = אדני = κύριος), p. 48. Tov also puts forward a counter scenario, suggesting that the Masoretic Qere perpetuum is a later Hebrew retroversion of the LXX equivalent of יהוה, κύριος. According to this version, this equivalent is not a straightforward linguistic one, but 'involves the theological rendering of the name of the God of Israel with a Greek noun designating the "master of the Universe,"' p. 49; cf. Bousset (1970:129).
2. Baudissen (2016:11-12), suggested in the light of the Hexapla and in particular Aquila and Theodotion, that the term κύριος was the preferred term.
3. Cf. Bousset (1970:125-128), who, like many others, acknowledged the complexity surrounding the use of the term κύριος as a title for Jesus. He wrote that the introduction and extensive use of the title without a first personal pronoun, as attested in the Pauline literature, marked a rapid development in Christianity; a development which saw the absolute ὁ κύριος ascribed to Jesus, a designation reserved for the 'exalted One' and not the historical Jesus. He goes on to say that the expectation was, within Hebraic Judaism at least, to use the term κύριος for a Hebrew deity who is 'Kyrios of the kings' and 'Kyrios of heaven'. This implies that the significant transition from the divine name 'Jahve' to the divine name 'Lord' did not take place in the region of Hebraic Judaism but is rather a peculiarity of Jewish Hellenism. Therefore, the use of ὁ κύριος for Jesus in the religious sense is only conceivable on Hellenistic soil, p. 128.
reveals an abbreviated form of the term κύριος, a practise known as the nomina sacra. 4 Secondly, there is extensive manuscript evidence suggesting alternatives to the nomina sacra as Greek equivalents for Yhwh. Thirdly, it is difficult to determine whether the term κύριος applied to Jesus was understood to be in the 'Yhwh' sense of the word. These 'so-called' Jesus-Yhwh equated occurrences are riddled with ambiguity, to say the least. Two theories will be put to the test in this regard: (1) whether the term κύριος as an equivalent for יהוה is a theological rendering designating 'master of the universe' (cf. Baudissen 2016:128; Tov 2020:49) and (2) whether the articulated κύριος, the absolute form, is understood to be Yhwh and ascribed to Jesus. As was stated before, the issue is a complex one; it forms part of an interconnected web of textual problems.
An interconnected web of textual problems
The history leading up to the formation of the Jesus movement and the production of written material relating to Jesus as the central figure of this movement reveals an intriguing web of interconnected issues, all contributing to what is referred to here as the NT κύριος problem. What the study wants to convey with this idea of an interconnected web is that no issue, irrespective of when it occurred in history or where and how it is situated in the process (conceptualisation, transmission and translation), ever reaches a static state; they all remain fluid and ever-evolving. Think of it as a circular web with lines cutting across, forming nodes (connections) where they cut across the circular lines. These nodes represent a κύριος problem, and when you address a certain problem (pushing and pulling the node), the entire web is impacted. The nodes closest to the one being pushed and pulled will be affected the most. Here are some of these nodes, of which only a few will be discussed in detail: 1. a Hebraic concept of Yhwh; 2. transmission of the term in the Hebrew tradition; 3. the translation of the term into Greek; 4. transmission of the term in the Greek tradition; 5. a Hellenistic/Graeco-Roman concept of the term κύριος; 6. the theology and kyriology of the NT: using the term κύριος for Jesus.
A Hebraic concept of Yhwh 5
It is beyond the scope of this study to responsibly deal with the term יהוה as it is conceptualised in the Hebrew Scripture.
4.
A practice whereby important religious and significant terms are abbreviated. Hurtado (2006:96), writes that 'the nomina sacra are so familiar a feature of Christian manuscripts that papyrologists often take the presence of these forms as sufficient to identify a fragment of a manuscript as indicating its probable Christian provenance; ' Heath (2010:517), states that the nomina sacra are the frequent abbreviations of certain words in early Christian manuscripts. Tuckett (2003:431-458), suggested that 52 (P. Rylands Gk [Greek papyri] 457), considered as one of the oldest text fragments of the NT, did not have the distinctive Christian abbreviations. He claimed that it may have significant ramifications upon widely held views about this scribal practise. This was later successfully rebutted by Hurtado (2003:1-14). 5. Rösel (2007:411-428), aptly responded to the problem whether the Masoretes vocalised the tetragrammaton as Adonai ֵי( נ֨ ֨ דֵ ֲ )א or as shema ( אְ ָמָ ְ ׁ .)שְ If the second assumption is correct, reading 'Lordʼ is to be regarded as a later tradition, 411. He is aware of the complexity when he observes that the tetragrammaton is vocalised as Elohim when used with י ֵ֨ נ֨ ֨ דֵ ֲ ,א 412. He also notes that the holem-dot is not written, although Elohim should be read, after which he concludes that qere of והָ ָ יהְ is ֵי נ֨ ֨ דֵ ֲ ,א not א ְמָ ,ׁשָ p. 413. Van Bekkum (2006:3-15), wrote that creation and name (either by the power of the name of God or by combining letters to names) were considered as parts of formative processes by which God succeeded to bring the world and its What can be said at this point is that the Hebrew concept of a deity, within its ancient near Eastern context, 6 was not as homogeneous as often suggested. 7 The evidence suggests a more differentiated and nuanced picture of how a Hebrew deity was conceptualised. The Hebrew Scriptures demonstrate different phases of how ancient Israel's religion developed, characterised by polytheism, monotheism and henotheism, none of which reached a static state. 8 These phases of development testify to variations, differentiations, fluctuations, and inconsistencies. 9 The different nuances in how ancient Israel perceived a deity, particularly their deity, to be and how they called upon such a deity, is under appreciated. For example, an El as a significant element of Elohim named and referred to as El Shaddai 10 and Yhwh, both being a type of El. The concept of a primordial deity (El) and Yhwh becoming the significant El for the Hebrew people augments the complexity factor. In other words, El is the primordial substance of what constitutes an Elohim, in some instances referred to as El Shaddai; while Yhwh is a type of El that became the Elohim of ancient Israel. 11 The variety and potential scope of deity concepts offered by the Hebrew text contributed significantly to transmission of these concepts in the Hebrew tradition. This is not to suggest that Yhwh is not pivotal and central for ancient Israel religion, but a conceptual variation of a Hebrew deity should be considered as a possibility. The transmission of the term יהוה seems to support such.
Transmission of the Yhwh term in the Hebrew tradition
The destruction of the temple in 587 BCE had a devastating impact on ancient Israel, 12 but the subsequent edict of Cyrus, king of Persia, in 539 BCE, which saw the elite return to the province of Yehud created renewed hope to restore and rebuild the temple and the city that cradled it, namely Jerusalem. The ambition of king Philip II of Macedonia, would in due course shatter this renewed hope as illustrated by the brutal war for Judean religious identity and independence, which came to be known as the Maccabean revolt during the second century BCE. 13 Philip II's son, creatures into being. He then goes on to write that paying respect to God's name in the cultic sense defines the relationship between God and Israel, p. 4. 7.See the very recent publication of Shechter (2018:6). Here Shechter states that YHWH is the standard name for the God of Monotheism and that this standard name harbours the authentic connotations of the monotheistic doctrine in Hebrew scripture. Also see Collins (1997) and Gnuse (1997:392).
8.Lynch (2014a:1), pointed out that there are two perspectives in scholarship regarding the development within monotheism; the one is moving away from an institutional expression of Yahwism to a more universal form thereof and the other is that even before the exile monotheism became inseparable and problematically wedded to particular institutions of authority like the monarchy and priesthood and post-exilic period merely continued along these lines.
9. Lynch (2014b:47-68), remarked that the range of texts and rhetorical modes inhabit Israel's monotheistic landscape. He then goes on to ask what the implications of biblical variation in monotheistic rhetoric are.
10. Tov (2020:53), noted that this divine name was not recognised by the translators of the Pentateuch, but it was in the later books. In most cases, according to him, שדי was rendered as a prenominal suffix in rabbinic Hebrew as 'my', kingdoms.
13. Jonker (2016:65), hypothesises that process of identity negotiation already took place during the Persian period because of four levels of sociohistorical existence Alexander III of Macadeon, 14 took over the reins after he (Philip II) was assassinated in 336 BCE. Alexander was subsequently awarded generalship of Greece and used this authority to embark on an unprecedented military campaign through Asia and northeast Africa where he created one of the largest empires of the ancient world. This saw his father's Panhellenic project to lead the Greeks in the conquest of Persia come to fruition. After the sudden death of Alexander in 323 BCE, his kingdom was divided amongst his generals who fought for control over the empire. This infighting caused the empire to be divided into several different kingdoms. 15 The political uncertainty, among other factors, were the impetus for different religious sects to form and for 'the Hebrew people' to reevaluate their religious and political identity as affirmed and shaped by the Hebrew Scriptures, or more specific, the Torah. 16 A need arose to preserve the Hebrew Scriptures by making various copies presumably from the third century BCE onwards. It is postulated that during the same period, 17 pronouncing the 'sacred name' of the Hebrew deity, Yhwh, was prohibited. The manuscript evidence seems to support these postulations; they attest to manifold possibilities in rendering 'the name' as demonstrated by the manuscript extracts from Qumran of which the 'Community Role' (1QS) will be referenced first. 18
Sectarian manuscripts
IQS (community role) 19 : In Figure 1, line 14 it is shaded where the scribe uses four dots when referring to Yhwh. He writes that whilst Israel was in the desert (indicative of the which prompted different power relations; cf. Johnson (2010:64). Gerstenberger (2005:355), points out that the notion that Yhwh is considered to be the (my personal emphasis) deity who made a pack with the Israelites, and by so doing wanted all other nations under is ruled started fading during the last three decades. The idea of Yhwh being the 'almighty' one had its origins in the fluctuating history of ancient Israel. Monotheism had its roots in the new constitution of the faith community during the Persian period.
14.Commonly referred to as Alexander the Great. second person pronoun 'you' in line 12) they abandoned the way of 'Yhwh' (shaded in Figure 1, line 13), followed by an Isaiah 40:3 quotation in line 14. The use of four dots 20 (1QS 8.14) for the reproduction of Yhwh is not repeated elsewhere in the community role, 21 and the use of the term הואהא in line 13 as a a rendition for Yhwh is a hapaxlegomina. 22 In fact, the use of the four dots to reproduce a Hebrew deity is not common among texts found in the Judean desert. The so-called 'War Scroll' (1QM) makes no reference to a Hebrew deity using any equivalent term for Yhwh. What is a unique characteristic in 1QM is the dominant use of אל in referencing a Hebrew deity. 23 The scribal practice used to reproduce Yhwh appears to be somewhat different in the temple scroll (11Q19). Figure 2, the name of the Hebrew deity, Yhwh, is rendered using square Hebrew script as can be seen in lines 13 and 14. 25 This scribal practise dominates this manuscript; as a matter of fact, it dominates most of the documents found in the Judean desert. 26 Tov (2004:207) listed a dicolon (:) used throughout 4QRP b (4Q364) and the uncommon and uncertain use of different colours of ink for לאלהיכ in 11QpalaeoUnidentified Text (11Q22) (Tov 2004:207). He concludes that the four types of special writing systems 20. Stegemann (1969:152), named this Tetraouncta. According to Tov (2004), the four dots in texts written in the square script represent the Tetragrammaton in eight nonbiblical and biblical texts written in the Qumran scribal practise, as well as in four additional Qumran texts (in one: strokes) and XH≥ev/SeEschat Hymn (XH≥ev/ Se 6) 2 7 (four diagonal strokes).
22.The reconstruction of 4Q258 6.7 also reads הואהא but it was obviously reconstructed as such based on 1QS 8.14.
23.1QM 10:4, 7; 18:4 are the exceptions; cf. Tov (2004:224-225). He states that in some instances the Tetragrammaton was replaced by אל (e.g. 4QpPs b ; 4QHos b ; in 1QH a vii אדןני replaces .)יהוה He further states that the preponderance of אל in the sectarian writings (pesharim, Hodayot, prayers, blessings, Rules) as opposed to the rare use of the Tetragrammaton in these writings provide ample evidence of this avoidance, especially in 1QS and 1QH a . Rösel (2007:413), also observed the predominance of the designation ל א for a Hebrew deity. ליהוה Stegemann (1969:157), suggests that the Tetrapuncta preceded the writing of the divine name in square characters.
24.See
for the divine names are closely connected to the Qumran scribal practise, and that no Hebrew texts of a non-sectarian nature or those clearly not written in the Qumran scribal practice containing any of the aforementioned scribal systems for the writing of the divine names have been preserved (Tov 2004:207-208).
A reasonable observation is that there were no standardised scribal practises adopted by scribes of the sectarian manuscripts when it came to the reproduction of the divine name, Yhwh. Reproducing the term remained a religious sensitive matter, often reflecting reverence for, and fear of, uttering the divine name. The יהוה was considered so sacred that it was not written with regular characters. 27 The challenges posed by the sacred divine name, Yhwh, was certainly not limited to the sectarian or non-biblical manuscripts but proved to be equally challenging for the scribes who copied the biblical content.
The peculiar use of palaeo-Hebrew script 28 to reproduce Yhwh was not as uncommon as one might think. According to Tov (2004:225) the writing in palaeo-Hebrew characters probably ensured the non-erasure of the divine name. 29 The extract from 1QpHab (commentary on Habakkuk) in Figure 3 27.Cf. Tov (2004). Rösel (2007:414), suggest that reading 'God' for the tetragrammaton was the normal custom at Qumran.
29. Tov (2004:225). He provides valuable data and insight into the use of palaeo-Hebrew and square Hebrew characters in combination. See also Table 1 in Tov (2004:227-228), for a list of manuscripts where the Tetragrammaton was written in palaeo-Hebrew characters, and http://www.hts.org.za Open Access is but one illustration. In fact, this scribal practise was not limited to manuscripts written in square Hebrew script, but also influenced manuscripts written in Greek. 30 The Nahal Hever finds (manuscripts dated to 50 BCE-50 CE) consistent use of palaeo-Hebrew script without exception (see Figure 4 as illustration). 32. Rösel (2011:38), states that both Adon and Adonaj were used in the Old Testament primarily to designate God. He also points out that these terms were also used is brought about by the Qere and Ketib tradition; what ought to be written and read, respectively, which is obviously not limited to 1QIsa a but can be observed elsewhere (cf. Tov 2020:49).
In Figure 6, a redactor inserted four dots above Adonaj, presumably indicating that this is what ought to be read, but the term יהוה was meant.
The three manuscript extracts reveal only the tip of the iceberg. The data from the Judean desert finds are overwhelming. An apparent inference is that, like the non-biblical manuscripts, here too there was no standardised scribal system for how one should render the 'name' of the Hebrew deity. These illustrations produce no fewer than five terms used to render the divine name Yhwh: 1. square Hebrew characters; 2. palaeo-Hebrew characters; 3. four dots; 4. Adonaj; 5. the term הואהא. To confirm, these are not the only possibilities, but they sufficiently illustrate the scribal variety in the reproduction of the divine name, Yhwh. These reproductions also created fertile soil for unearthing numerous complex challenges for the Greek translators of the Hebrew Scriptures.
Translating the term יהוה with a Greek 'equivalent'
The array of possible renderings of Yhwh within the Hebrew frame of reference must have caused endless translation nightmares for the scribes responsible for translating the Hebrew Scriptures. What follows are a few illustrations revealing some of the challenges faced by the Greek when reference is made to an authoritative 'Master', and that it is predominately used in combination with Yhwh; also see the monograph of Rösel (2000:49).
1QIsaa (Is 28:16)
In the case of Isaiah 28:16 (Figure 7), the text reads Yhwh in square Hebrew characters with a superscript Adonaj, which is characteristic of the 1QIsa a scroll as indicated earlier. In Codex Leningradensis, 36 the superscript presumably found its way into the text for it to read אדני יהוה, with Codex Sinaiticus reading a single κ̅ ς̅ . The point of contention is whether the term κ̅ ς̅ is an equivalent for the superscript Adonaj or for Yhwh. 37 The Hexapla recension 38 reads an additional κυριος, and the Lukian recension 39 attests to a plus reading ο θεος. A similar type of issue is present in Isaiah 28:22; in 1QIsa a it reads יהוה but in the MT it reads אדני יהוה, but in this case the Greek text tradition reads. In the latter it seems as if Adonaj found its way outside of the text. Another striking case is found in Lamentations 1:14-17.
4QLam (4Q111), (Lm 1:14-17)
In the short space of a few lines of text (Table 2) it is unclear which terms represent what and whom. It is uncertain 33.Tov (2020:54), offers a table of LXX Equivalents for the Divine Names in Genesis 1-11.
He then comments that the major problem lies with the combination אלהים יהוה with no standard equivalent. Both Tov (2020:48-49) and Rösel (2007:414) Greek text tradition in part adopted a practise of conflating it to a single term in various instances. One should therefore be cautious when postulating the idea that the term θεός is a Greek equivalent for Elohim and the term κύριος for Yhwh; it is far more complex and nuanced than that. 40 A few postulations should be in order at this point: 1. Linguistically speaking, the term κύριος might be an equivalent Greek term for ,יהוה but semantically, conceptually it is not Yhwh; it is a phonetic representation of the divine name. 2. Kyrios is therefore not a type of El as Yhwh is one. 3. Kyrios is not a name of a Hebrew deity, but a term used to translate the Hebrew term .יהוה 4. Kyrios represents a quality of a Theos as appose to a type. 5. It is therefore to be expected that finding a Greek equivalent for the divine name was not an easy matter, as is evident from the Greek manuscript data.
Transmission of the term κύριος in the Greek tradition
It is somewhat artificial to draw a distinction between 'translation' and 'transmission', as if an 'original' source and target text are available. Nevertheless, the aim here is to show how the term יהוה is rendered by the oldest Greek manuscripts testifying to 'biblical' content. Becoming aware of the variant possibilities reveals how they contribute to the NT κύριος problem. These manuscripts are Papyrus Rylands 458, 4Q122 (4QLXXDeut), 7Q1 (7QLXXEx), 7Q2 (7QLXXEpJer) and 4Q121 (4QLXXNum). Unfortunately, none of them offer any data on how the term יהוה was rendered. Manuscript 4QLXXDeut, however, does present an unusual open space. 40. Two prominent voices in this debate should be mentioned. The first voice is that of Rösel ('אדון'; Rösel 2000; Rösel 2007). The second voice is de Troyer (2008:143-172). De Troyer is of the view that the vocalisation for יהוה was such that one would pronounce it Elohim, whereas Rösel holds the view that Adonaj is to be pronounced.
4QLXXLevb (Lv 4:27)
This fragment (Figure 8) reads ΙΑW, which was the reason why the remainder of the 4Q120 fragments were 'reconstructed' with the term ΙΑW. Without going into detail, two remarks should suffice. Firstly, it is not totally impossible that the scribe left an open space between ΕΝΤΟΛWΝ and ΟY, with a later redactor inserting ΙΑW. Secondly, one should not assume based on this single occurrence that this term was used throughout the manuscript. 41 The point is that at a very early stage the term ΙΑΩ was considered an accepted transliteration of the term Yhwh, but whether it was the generally accepted term remains questionable. 42
P.Fouad 266b (848) (Dt 31:28-32:7)
Figure 9 is an extract taken from P.Fouad 266b (848). The transmission of the term יהוה in the Hebrew tradition, the translation of Yhwh with the term κύριος, and the subsequent reproduction and transmission of the latter term open an array of possibilities on how to render the divine name. The κύριος problem is a multilayered, multifaceted and complex problem with no easy solution. To add to all of this, the use of the term κύριος in specifically Hellenistic, Graeco-Roman literature offers another dimension to the κύριος problem.
A Hellenistic/Graeco-Roman concept of the term κύριος 44
It should be stated upfront that a general Graeco-Roman concept underlying the term κύριος is deliberately underplayed. The reason is the view that such a concept does not contribute significantly enough to the NT κύριος problem; for the sake of perspective, however, the study will briefly allude to some Graeco-Roman sources.
Pliny the Younger's letter to Emperor Trajan after a visit to Bithynia in 112 CE is insightful. The purpose of this letter was to report on how Christians conduct themselves. He wrote: They maintained, however, that the amount of their fault or error had been this, that it was their habit on a fixed day to assemble before daylight and recite by turns a form of words to Christ as a god.
Pliny's understanding of what qualifies and defines a ritual 'as if it is dedicated to a god' made the Christians' habit of singing hymns dedicated to Christ an act of treason. This issue is further amplified in the sense that, according to Pliny's investigator, some who claimed to be Christians denied it later: 'All these too both worshipped your statue and the images of the gods, and cursed Christ'. The issue for Pliny is that Christ is venerated and worshipped as a god, as opposed to venerating the Emperor and praying to the 'traditional' Roman gods. This, of course, resulted in certain punishment, according to the response from Trajan, should they not deny Christ and worship the (Roman) deities. From this vantage point there is no other dominus (κύριος) than the Emperor. The term κύριος was also used when reference is made to Graeco-Roman deities. Three text fragments relating to banquet invitations hosted by the god Sarapis offer a good illustration. Nikephoros extends an invitation: δειπνησαι εις κλεινην του κυριου Σαραπιδος εν τω λοχιω 'to dine at the banquet of the lord Sarapis in the birth-house'. Herais also extends an invite: δειπνησαι εν τω οικω του Σαραπειου εις κλεινην του κυριου Σαραπειον 'to dine in the house of the Sarapeion at the banquet of the lord Sarapis', and a third reads καλει σε ο θεος εις … 'the god invites you to …'. 43. For a detailed list of how the oldest Greek manuscripts reproduced Yhwh, see Nagel (2017a:129-130).
A document 45 testifying to the repayment of a loan states:
…τοῦ κυρίου… [In the 27th year of the deification of Hadrian, which is the first year of emperor Antoninus' lordship.]
Interestingly, what is noted here is the 'θεοῦ' of Hadrian and the 'τοῦ κυρίου' of Antoninus; the distinction is a deceased emperor (Hadrian) as opposed to the ruling emperor (Antoninus). The fluidity in the use of these terms suggests that in a Graeco-Roman context accounting for a 'sacred name' was not necessary, and that these terms were not influenced by the nomina sacra scribal practice. Pfeiffer's (2012:139-141) distinction between 'Imperial cult' and 'Emperor worship' is helpful at this juncture. He writes (in relation to the divine cult of the emperor being nothing more than an aspect of emperor worship): '…distinction of status between respective beings, rather than a distinction between their respective natures'. 46 Gradel (2002:26) concluded: 'the worshipped emperor was not a god in an 'absolute sense', but he had a divine status…in relation to the worshippers'. This enlightens the notion of the distinction between the terms θεός and κύριος. 47
The theology and kyriology of the New Testament 48
The NT κύριος problem draws a matrix of complexities into a focused singularity. The explicit Old Testament citations are one such singularity, to be sure, those citations that attest to the term κύριος. The exegetical and hermeneutical reworking of the citations not only contributes to the complexity of the problem, but simultaneously holds solution potential. Inferred from the cursory data covered under Section 'Transmission of the term κύριος in the Greek tradition', it is fair to say that the term κύριος found in the NT contains conceptual elements drawn from a general Hellenistic, Graeco-Roman context, but the true complexity of the matter lies in its equation potential with the term Yhwh. This makes the NT Kyrios problem a theologically intricate one, with implications for the Christology and kyriology 49 of the New Testament. The crux of the problem revolves around the sacred name Yhwh used for the Hebrew deity, the term κύριος used as a potential Greek equivalent, and the term κύριος used as a reference to Jesus. The NT κύριος problem is also an exegetical and hermeneutical one; it is an intra- and intertextual matter producing questions such as 'Should the use of the term κύριος in the citation be interpreted as referring to Yhwh?'; 'What is meant when Jesus is referred to as Kyrios?'; 'To what extent is the term κύριος re-interpreted in the NT?'; 'Was it even possible to put Jesus on par with Yhwh?'
Theos' speech
Hebrews 1:10a (Ps 101:26a) 50 : The reason for showing the manuscripts here (Figures 10-12) is to demonstrate the use of the nomina sacra scribal practice. Codex S (Ps 101:26a) does not read any κύριος term; it was added by a later redactor as a superscript (see the red block indicated on the manuscript directly above). The Hebrews 1:10a section is discussed below. 50. The theological effect and significance of this citation has been worked out in more detail in Nagel (2019:557-584).
51. Steyn (2009:341-359) noted that the LXX versions of these Psalms open up the possibility of a Christological interpretation (p. 341). Steyn (2010:82) also wrote that the third pair of quotations, Ps 44:7-8 (Heb 1:8-9) and Ps 101:26-28 (Heb 1:10-12), in the catena (with no traces in the tradition of such an existing combination prior to Hebrews), 'both deal with the theme of the eternal reign of the Son who is addressed as 'God' (if θεὸς is taken as a vocative in this instance)'.
52. Steyn (2010:82) is of the opinion that the Son is being addressed as θεὸς in Hebrews 1:8 and as κύριε in Hebrews 1:10. Intertextually speaking, the Psalmist calls upon κύριος, in the vocative case, to listen to his prayer (Ps 101:2). In Psalms 101:13 κύριος is again referenced using the vocative case; here the scribe affirms that κύριος remains κύριος through all the ages of time. In the vocative use in Psalms 101:26a, κύριος is recognised as the one who laid the foundation of the earth from the very beginning, whereas Psalms 101:26b acknowledges that it is because of the works of his (Kyrios') hands that the heavens exist. 53 Conceptually, there should be little doubt that the κύριος of Psalms 101:26 is the θεός of Genesis 1:1 and 2:4b (cf. Steyn 2010:110-111). From an intertextual point of view, Hebrews 1:2b-c suggests that the son is positioned as heir of everything, through whom θεός made all the ages. 54 This sounds somewhat different from what is presumably proposed in Hebrews 1:10a-b, namely that κύριος as in Yhwh is the creator: he is the one who laid the foundations of the earth (Heb 1:10a) and created the heavens (Heb 1:10b), whereas in the former (Heb 1:2b-c) the son is the heir, the medium through which all has been created. The author appears to be inconsistent if (1) one draws a distinction between θεός in Hebrews 1:2b-c and κύριος in Hebrews 1:10b and (2) κύριος in Hebrews 1:10b does not refer to the son. What seems more plausible is that the author did not think through the implication of the term κύριος in Hebrews 1:10b. 55 But how do these arguments hold up in the textual unit Hebrews 1:8-9 (Ps 44:7-8) and Hebrews 1:10-12 (Ps 101:26-28), both of which are presented as Theos' speech about the son (Heb 1:8a)?
53. It is interesting to note that both references to the term κύριος introduce two possible additions, verses 13-23 and verses 26-29 respectively, both defined as hymnic sections; cf. Steyn (2010:103-104).
In Hebrews 1:8b it is said about the son that his (second person, personal pronoun) throne is ὁ θεὸς and therefore a throne forever, and that his (implying the son's) kingdom is a rule that can be characterised by uprightness (Heb 1:8c). It is further declared that the son loves righteousness but hates lawlessness (Heb 1:9a). The second person speech in Hebrews 1:8b-9b implies that the Psalmist directs his Psalm to the son, the king, of the sons of Korah (cf. Ps 44:1-2). In Hebrews 1:9b it is ὁ θεὸς as the ὁ θεός of the king (cf. Psalm 44), and by implication of the son, who anoints the king and son. It is at this point that the author introduces the Psalm 101:26-28 citation. It seems rather obvious that σὺ (Heb 1:10a) also refers to the son, as do all the other second person pronouns in Hebrews 1:8-9. If this is the case, then the vocative use of the term κύριος can only refer to the son, but does it? The suggestion here is that the σὺ in Hebrews 1:10a, that is, the vocative use of the term κύριος, refers to ὁ θεὸς, as supported by the connection drawn in the source text (Ps 101). The reason for this suggestion is that whilst Psalm 44:7-8 (Heb 1:8-9) is addressed to the king (κύριος; cf. Ps 44:12) in relation to ὁ θεὸς, Psalm 101 (Heb 1:10-12) is a prayer directed at κύριος as in Yhwh. What does the author achieve with such a reading? How does it contribute to establishing the authority of the son? The answer is that in Hebrews 1:8-9 the kingship of the son in relation to ὁ θεὸς is brought into focus, so much so that the son's throne is ὁ θεὸς. But in Hebrews 1:10-12, the κύριος as in Yhwh is brought into play. The author had to account for the fact that his readers might have understood σὺ κατʼ ἀρχάς, κύριε, as referring to the son, but it is even more likely that the Yhwh characterisation of ὁ θεὸς is preferred as the more appropriate reading for a Hellenic-Judaic audience. 56 If Hebrews 1:10-12 awoke a sense of ambiguity, it would have been laid to rest with the citation taken from Psalm 109:1 (Heb 1:13b). 57 In Hebrews 1:13a, the author returns to the topic of angels, asking whether Theos has ever said to the angels κάθου ἐκ δεξιῶν μου 'be seated on my right hand' (Heb 1:13b). It is noteworthy that μου by implication refers to Theos in this case, which begs the question why it is not possible to interpret the σὺ in σὺ κατʼ ἀρχάς, κύριε (Heb 1:10a) as also referring to Theos. On the one hand the citation in Hebrews 1:10-12 (Ps 101:26-28) exemplifies the κύριος problem, but on the other hand it offers the theological perspectives necessary to find an amicable solution.
54. 4QPsb col. XXII frgs. 15 also does not make any reference to Yhwh, a tradition upheld by Codex S. The LXXGött (Ps 101:26) does, however, account for the term κύριος in its vocative form.
55. According to Church (2016:269-286), the exalted son in the Psalm context is now the 'Lord' in Hebrews 1:10, meaning the son.
Acts 2:20-21 (Joel 3:4-5a): The manuscript data appear intact, with no alternatives suggested for the term κύριος, but this does not mean that all is as it seems; it might just be a matter of scribal 'cover-up'. Be that as it may, the literary 'κύριος' context does not disappoint in offering support to the κύριος problem. The events that unfold in Acts 2:20-21 begin with Acts 1:6, when those who gathered around him addressed Jesus using the term κύριε. They wanted to know whether the time had come for the kingdom of Israel to be revealed. They kept referring to Jesus using the term κύριος even after Jesus ascended into heaven (cf. Acts 1:9-11). His followers also referred to his earthly ministry with the phrase ὁ κύριος Ἰησοῦς (cf. Acts 1:21). It did not end here; they prayed to him, calling upon him as κύριε (Acts 1:24). The second chapter of Acts introduces the 'day of Pentecost', followed by Peter's first speech in Acts 2:14-40. It is within this context that the Joel 3:1-5 citation 58 is implemented, of which Joel 3:4-5a (Acts 2:20-21) is of interest (cf. Table 4).
There is no obvious reason why one should not interpret the two κύριος references in Acts 2:20-21 as referring to the same entity as the term κύριος in Acts 1:6, 21 and 24, hence Jesus. This raises two questions. The first is whether the author was cognisant of the fact that the term κύριος in Acts 2:20-21 could imply Yhwh. The second, related question is whether the author re-interpreted the term κύριος to give it a more Hellenistic flavour. 59 What is clear is that the author cites Joel 3 extensively, which makes it highly likely that he had a fairly good understanding of the literary context. In the Hebrew version of Joel the term Yhwh is used throughout, and if one assumes that the MT text represents a possible Hebrew vorlage, then there is no reason not to interpret the term κύριος as a Greek equivalent for Yhwh. There is no evidence to suggest otherwise, except for the fact that some of the oldest Greek manuscripts attesting to Joel might have read palaeo-Hebrew script for Yhwh. Not only that, there is no hard evidence to prove that an uncontracted κύριος term was used to translate the term Yhwh. One should assume, for the sake of the argument, that conceptually at least the term κύριος represented Yhwh in the source text, but when the text was applied to the target text, it was re-interpreted. There are at least two arguments opposing a Yhwh meaning assigned to the term κύριος in these instances: 1. In Acts 2:17a it is ὁ θεός who says something about what will happen during these last days. If this is so, and if κύριος represents Yhwh, and if Yhwh in turn is considered the Elohim of the Old Testament, then it would imply that ὁ θεός is speaking about himself in the third person. To counter this circular reasoning, a new meaning is ascribed to the term κύριος, the same meaning it has in Acts 1:6, 21 and 24: that of an authoritative figure. 2. If Yhwh was meant with the two occurrences in Acts 2:20-21, why would it be necessary to use the term θεός elsewhere in the text? One can also argue in reverse order; if the two κύριος terms represented Yhwh as the Elohim of Israel, why would it be necessary to make it explicit that θεός wants to say something? Why introduce Acts 2:17-21 as Theos' speech if it was so obvious that the term κύριος called a Hebrew deity to mind? In fact, the textual construct ἐπικαλέσηται τὸ ὄνομα κυρίου 'to call on the name of the lord' is already a shift away from Yhwh as the divine name; the Hebrew text reads בשם יהוה 'in the name Yhwh'. The power of submission is to do it in the name, the 'divine name'. In Joel the term κύριος becomes a title of someone with a significant name, and this name is interpreted in Acts 2:20-21 as Jesus. The term κύριος in Acts 2:20-21 (Joel 3:4a-5) is not a reference to Yhwh, but to a lord with a meaningful name, reinterpreted as Jesus. The Theos' speeches make it extremely difficult to interpret the use of the κύριος term in Hebrews 1:10a and Acts 2:20-21 as referring to Yhwh. This helps to get rid of the ambiguity and, by doing so, contributes to finding a solution to the NT κύριος problem.
56. Steyn (2010:111) remarks that principally, with the inclusion of κύριος in the LXX, the activities in Psalms 101 could be transferred to Christ.
57. Psalm 109 presents itself as 'Yahweh's oracle'; cf. Steyn (2010:114).
59. Blumhoffer (2016:499-516) certainly seems to think so. He is of the opinion that Luke did not receive a text with the 'changes' compared with the Septuagint. What Blumhoffer (2016:502) is after is the deeper logic behind Luke's editorial moves; he wrote: 'these echoes suggest that Luke has quietly and intentionally evoked a correspondence between Old Testament prophecy and the Day of Pentecost in Acts.'
David's speech
Acts 2:25b (Ps 15:8a): The inverse dualistic (solution-problem) nature of the κύριος problem has been highlighted by way of the so-called Theos' speeches, but do David's speeches also hold solution potential? In Acts 2:25a, the introductory formula Δαυὶδ γὰρ λέγει εἰς αὐτόν 'David says about him' introduces a citation taken from Psalms 15:8-11; Psalm 15 is a stele inscription pertaining to David submitting himself to Kyrios and, by implication, Yhwh. It is doubtful that the author wanted his readers to hear that David is saying something about Yhwh; he probably wanted them to hear what the 'great' king David said about Jesus as the Kyrios. The author interprets David's foresight as referring to the resurrection of Χριστός (cf. Acts 2:31). 60 In Psalm 15 the Psalmist addresses Kyrios as in Yhwh throughout, but the author of Acts extracted verses 8-11 to serve his purpose of prophetic foresight (cf. Ps 15:8), the resurrection of the devoted one (cf. Ps 15:10), in this case Christ (cf. Acts 2:31). The Psalm, however, speaks of the devoted or pious one, such as king David, whose soul will not be abandoned by Kyrios to Hades. How are these linguistic 'transfigurations' and 'transformations' possible: of κύριος from being Yhwh to κύριος as in Jesus the Messiah, and of David from being the pious one to the one who speaks in the third person? This reimagination is made possible by the reference to the pious one in Psalms 15:10, the introduction of the Psalms 15:8-11 citation as Davidic speech, the term κύριος in its accusative form, and the literary context of Acts 2:17-28; both the Joel 3:1-5 and Psalms 15:8-11 citations form part of Peter's first speech. The term κύριος in Acts 2:25b (Table 5) should be interpreted as the same κύριος as in Acts 2:20 and 21, namely Jesus, the Messiah.
Mark 12:36 61 and Acts 2:34b (Ps 109:1a): The citation in Acts 2:34b in Table 6 follows the same trajectory as Acts 2:25b; the uttering of the term κύριος is placed on the lips of David. These words reflect Psalms 109:1a and are also introduced in Mark 12:36b 62 . One can deduce at least two text traditions from the data, the one reading the first κύριος term with, and the other without, a definite article. The reason for this is to draw a distinction between the first and second κύριος terms; the first is a reference to Yhwh and the second to a king. In the case of Mark 12:36b the author, through Jesus, argued against the notion that the Messiah is the son of David. Jesus' argument is: how can the Messiah be the son of David if David himself calls κύριος, κύριος (cf. Mk 12:37)? The problem with such an argument is as follows: 1. In the Psalm 109 context it is not David speaking, but the Psalmist. 2. The first κύριος refers to Yhwh, whereas the second κύριος refers to David (Ps 109:1a). 3. The direct speech in Psalms 109:1b is that of κύριος in terms of Yhwh.
60. Trull (2004:432-448). NA, Nestle-Aland. P45 does not account for these verses, and a corrector S1 'omitted' επιφανην και εστε πας ος εαν επικαληται το ονομα σωθησεται.
It is therefore not David calling κύριος, κύριος, but the Psalmist who refers to κύριος (as in Yhwh, the first κύριος term) as the κύριος who in turn calls David, the Psalmist's κύριος, κύριος. Said differently, the one who speaks in Psalm 109:1a is κύριος in the sense of Yhwh, and the one about whom Yhwh is saying something is also κύριος, in terms of David as king. This is written by the Psalmist, who allows Yhwh to say something about the Psalmist's king. 63
Jesus' speech
Mark 12:29b-30a (Dt 6:4c-5): The Greek text tradition once again appears to be intact, with only minor variations (Table 7). In the case of Mark 12:28-30, scribes came up to Jesus to ask him which is the first commandment of them all. The Markan Jesus then responded by quoting from Deuteronomy 6:4a-5. There is no clearer evidence that both Jesus and the author conceptually distinguish themselves from the one and only κύριος ὁ θεὸς in the sense of Yhwh, the one and only Elohim of Israel. Earlier in the narrative (cf. Mk 12:18-27) some Sadducees came to Jesus to challenge him on the resurrection. He responded to them by quoting from the book of Moses (Ex 3:6), which says that Theos is the Theos of the Patriarchs, the Theos of the living, not the dead. The fact that Deuteronomy 6:4b-5 is cited by Jesus simplifies the κύριος problem. There are no discrepancies, uncertainties or clarifications needed; the content cited is a living tradition, which Jesus simply repeats, as a law-abiding Judean does. Deuteronomy 5-6, amongst other texts, formed part of the Hebrew texts on vellum placed in a small leather box called a phylactery, worn by Jewish men at morning prayer as a reminder to keep the law. These texts were recited repeatedly and would have been known by heart. It is against this backdrop that one should interpret Jesus' response. He (Jesus) is merely reciting this morning prayer: that Yhwh, the Theos of Israel, is the one and only Kyrios.
The second reference to the term κύριος is to emphasise the dominion and authority of Yhwh as the one and only Theos.
The fact that this is Jesus' speech, together with the nature of the content of the speech, contributes to a better understanding of the term κύριος and the problem it might pose.
The explicit κύριος citations, in combination with the introductory formulae characterised as Theos', David's and Jesus' speech, are the most effective way to determine whether referring to Jesus as Kyrios is meant in the Yhwh sense or not. These speeches, and the content placed on the lips of Theos, David and Jesus respectively, cleared the NT κύριος problem of any ambiguity or vagueness.
Conclusion
The conclusion is that there is no decisive and final solution for the NT κύριος problem. In fact, there will never be a single solution for this multilayered and interconnected problem. The only option is to keep on addressing every single aspect of the κύριος problem against a multilayered, complex background. The NT κύριος problem should therefore never be just a NT problem; it will always be a NT-Old Greek (LXX) problem. This is precisely what the study set out to illustrate. In addition to illustrating the interwoven complexity and intricacies of the problem, the study also draws these multilayered complexities into a singular focus, namely the explicit κύριος citations. To be precise, the cited content was placed on the lips of Theos, David and Jesus, which the study refers to as speeches. The study shows that these respective speeches amplify the problem whilst taking a step towards a possible solution. The best possible inference to draw when arguing from the vantage point of these speeches is that (1) the term κύριος as an equivalent for יהוה is a theological rendering designating 'master of the universe', and (2) the articulated κύριος, the absolute form, is ascribed to Jesus not in the Yhwh sense, but is reinterpreted to mean the 'master of the NT universe'. What these speeches have revealed is that Yhwh as in κύριος is still the Theos of Israel, who is κύριος as in the ultimate master and ruler over all, and that Jesus becomes the κύριος, embodies the κύριος and rules as master of the NT world and beyond. When the term κύριος was used as a potential equivalent for Yhwh, the divine name of the Hebrew deity was stripped of its sacred character and lost the credentials of having a 'divine name'. The term κύριος on the lips of Theos, David and Jesus is a humble, non-deliberate attempt to make 'the name' divine again; it reignites its sacred character. By ascribing this κύριος to Jesus, Jesus becomes the new 'divine' name. | 2020-11-05T09:08:31.244Z | 2020-10-30T00:00:00.000 | {
"year": 2020,
"sha1": "e12ff0fe4d49a2edcc7326fc8b2dcaabc21e698d",
"oa_license": "CCBY",
"oa_url": "https://hts.org.za/index.php/hts/article/download/6134/16359",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f946cc904f8575f15cb7962403cec3f9bd105af1",
"s2fieldsofstudy": [
"Philosophy"
],
"extfieldsofstudy": [
"History"
]
} |
219313599 | pes2o/s2orc | v3-fos-license | A Social-Aware P2P Video Transmission Strategy for Multimedia IoT Devices
With the growing popularity of social network and video sharing services, video sharing behaviors have become increasingly social. People can produce and share high-quality videos at any time and place using high-speed mobile networks and multimedia IoT devices, e.g., Android Things and smart TV Boxes. Multimedia IoT devices require higher bandwidth, memory, and computing resources to process and transmit multimedia contents such as audio, video, and images. In this paper, we propose a social-aware P2P video transmission strategy for multimedia IoT devices. In the proposed architecture, multimedia IoT devices communicate over a peer-to-peer network. The peers in our work are represented by multimedia IoT devices with user interactions. Since users have their own friend lists, the social links can be classified into different priority classes in accordance with their social relationships, e.g., family, friends, and others. The proposed strategy adopts weighted fair queuing (WFQ) for P2P video transmission according to the different queuing priority classes. Each priority class is given a weighting factor according to social relationship, current download progress, and mutual resource sharing contributions. By leveraging the inherent trust associated with social links, the proposed strategy can reduce the impact of free riders and give users a good video sharing and watching experience with multimedia IoT devices.
I. INTRODUCTION
Multimedia IoT environments consist of heterogeneous devices, e.g., smartphones, tablets, and TV Boxes. Wireless mobile networks and multimedia IoT devices are becoming more and more popular; people can produce high-quality movies and share videos at any time and in any place. Users share user-generated media contents (UGCs), such as travel videos, through social network services (SNSs) with multimedia IoT devices. Video sharing has become more and more social due to the popularity of social networking services (SNSs) such as Facebook and Twitter. Through friend relationships, social media disseminates information more widely and quickly than traditional portal sites and news services.
The Internet of Things (IoT) consists of network-connected multimedia IoT devices, which enables devices to collect and exchange data with each other. Fig. 1 shows the flow chart of sharing user-generated media contents (UGCs) to social network services and video sharing websites with multimedia IoT devices. First, the source video provider, such as a multimedia IoT device user, uploads user-generated media content (UGC) to the video server. Then, the user utilizes social network services to share the video information. At the same time, the social network service (SNS) server actively notifies members of the user's community to share the video information. If the user's community members are interested in the video, then after clicking, the video transmission and viewing start on the multimedia IoT devices.
In a peer-to-peer (P2P) network, users can share network resources such as storage space and bandwidth with other users through multimedia IoT devices. Most P2P networks have greedy peers, called free riders, who obtain as many resources as possible without contributing anything back, i.e., they only download but do not upload resources to other users. This behavior impacts other contributors' willingness to share, resulting in low resource sharing efficiency in a P2P network [1].
In this paper, we propose a social-aware peer-to-peer video transmission strategy for multimedia IoT devices. The peers are represented by multimedia IoT devices with user interactions. Each peer has its own community friend list. When a peer submits a video watching request, the list of community relations is queried first in order to obtain parameters such as the social weight (SW), receiving contribution (RC), providing contribution (PC), and the estimated contribution value (CV). Then, the proposed video transmission strategy calculates the social weighted contribution value (SWCV) of each participating peer. After obtaining the download progress (DP) of the participating peers, the proposed algorithm adopts the weighted fair queuing (WFQ) and transfer finishing first (TFF) algorithms to perform video streaming transmission, as sketched below. Through the social connections and trust relationships between community users, the video streaming packets can be effectively transmitted to community users in a heterogeneous network environment, which reduces the impact of free riders, lowers the backbone load and network transmission cost, and provides good transmission quality to community members with multimedia IoT devices.
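To make this decision flow concrete, the following is a minimal sketch of how a request could be mapped to a transmission-queue level. The function and table names are illustrative; the thresholds (an RCThreshold of 100MB, a 90% download-progress level-up, and a cap at queue level 5) follow the descriptions in Section III, while everything else is an assumption, since the paper's exact equations are developed later.

```python
# A minimal sketch of the request-handling decision flow (illustrative names;
# the paper's exact scoring equations are introduced in Section III).

FRIEND_TABLE = {
    # name: (social weight SW in 1..5, receiving contribution RC in bytes,
    #        providing contribution PC in bytes)
    "Jerry": (4, 120_000_000, 30_000_000),
    "Tom":   (1,  10_000_000,  5_000_000),
}

RC_THRESHOLD = 100_000_000   # level-up threshold for generous peers (100 MB)
DP_THRESHOLD = 90.0          # level-up threshold for nearly finished downloads (%)
MAX_LEVEL = 5                # queue levels are capped at 5

def queue_level(requester: str, download_progress: float) -> int:
    """Map an incoming video request to a transmission-queue level (1..5)."""
    sw, rc, pc = FRIEND_TABLE[requester]
    level = sw                                  # base priority: social weight
    if rc >= RC_THRESHOLD:                      # SWCV level-up (Section III)
        level += 1
    if download_progress >= DP_THRESHOLD:       # TFF level-up (Section III)
        level += 1
    return min(level, MAX_LEVEL)

print(queue_level("Jerry", 50.0))  # 5: close friend, leveled up once for sharing
print(queue_level("Tom", 95.0))    # 2: online friend, leveled up once by TFF
```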
II. RELATED WORKS
A. P2P AND SOCIAL NETWORKS
A peer-to-peer network refers to an Internet system without a central server that relies on peers to exchange information. Each peer acts as both a client and a server. In a P2P network, any peer can join and leave the network at any time without limitation. P2P network technologies can be used for file sharing, but P2P file sharing systems are prone to greedy behavior, i.e., free riders who take advantage instead of sharing resources. In [2], J. Altmann et al. conducted research on the behavior of the Gnutella network, and the results showed that 70% of peers did not share any files at all and 85% of users were free riders. In [3], Liu et al. proposed a P2P network control system based on the upload/download ratio of each peer, which observes transmission variations and applies a threshold to adjust peers' transmission rates.
A social network is an online community established by a group of people who share the same interests and activities, and it provides users with various contact, communication and interaction services. In [4], a study conducted by Cheng et al. on Renren, the largest social networking site in China, shows that most users of social networking services (SNSs) are willing to share network resources and bandwidth with close friends to help them download videos. In [5], Pouwelse et al. proposed the Tribler file sharing mechanism, which improves the search, recommendation and download of multimedia contents by considering the social relationships and trust among users. In [6], Cheng et al. proposed a peer-assisted system for users to easily share personal recorded videos and to establish a video sharing website similar to YouTube by making use of the characteristics of the social community, such as the small-world network model. In [7], Liu et al. introduced a tit-for-tat mechanism for peer-to-peer content sharing, in which peers exchange resources through social relationship links. Each peer can limit the amount of resources provided to other peers. To exchange resources, the proposed mechanism establishes a connection according to the community relationship between the peers, and the content and resources are transmitted along the path of the community link. In [8], Wang et al. proposed a social media sharing system, which allocates video streaming resources and adjusts the transmission bandwidth to reduce server load in accordance with users' mutual contributions in the social community.
In [9], Li et al. examined the behavior of users on social networks by monitoring the behavior of more than 1 million users and crawling the data of 2,500 videos on Facebook. The results are as follows: most of the videos watched by users come from their close friends, most video watching behaviors are promoted through social interaction, and the rest are driven by interests. Based on these observations, the authors introduced the SocialTube system, which systematically explores the social relationships and common interests of Online Social Networks (OSNs) in order to enhance the transmission efficiency of video sharing. SocialTube is a social network-based P2P architecture built on a social video acquisition algorithm that increases the accuracy of video prefetching and reduces the initial playback delay of the video. In [10], Kang Chen et al. proposed a reputation system named SocialTrust to incentivize peer collaboration in P2P networks. The reputation of a peer is based on feedback from other peers, and a reputation threshold is used to distinguish selfless from selfish peers. Due to the high service cost of reputation inquiries, frequent inquiries can overload the system and reduce service quality. SocialTrust combines traditional reputation systems and social networks to overcome these shortcomings. SocialTrust has the following advantages over previous reputation systems: (1) it integrates social networks and reputation systems to save reputation query costs; (2) it considers the social grade and reputation of a peer to measure its trust level, making the reputation evaluation more accurate; (3) it encourages node cooperation, and the system uses service provision values and rating evaluation values to calculate trust reputation. In [11], Rajapaksha et al. presented a SocialiVideo approach that allows users to share their produced video content among existing social connections. SocialiVideo stores video content in users' networking devices and serves others using P2P connections. The authors implement a prototype based on the Facebook/Akamai content delivery approach and evaluate its performance. The results show that SocialiVideo provides benefits for multiple participants, including CDNs and ISPs, as well as better QoE for end users. In [12], Shahriar et al. proposed a time-based user grouping and replication protocol that guarantees content availability for decentralized sharing of online social media. The protocol discovers cyclic diurnal patterns in user uptime behaviors to ensure content persistence with minimal replication cost. The authors present a mathematical model for peer uptime duration and replication group size. Simulation results show that the proposed protocol reaches high content persistence without incurring substantial network and storage overheads. In [13], Cui et al. proposed an evolutionary game theory based framework to analyze the effect of resource allocation mechanisms on peers' contribution behaviors, such as donating. The authors found that allocation mechanisms can encourage the contribution behaviors of peers when the benefit of resources is larger than specific thresholds. The results also show that the impact on user behaviors is limited when the benefit of resources is large enough.
B. PACKET SCHEDULING MECHANISMS
In order to provide more efficient and real-time streaming services, many scholars have developed various packet scheduling mechanisms based on the different service requirements of each data stream. From the earliest first-in-first-out (FIFO) policy, the concepts of priority and queuing classes led to the priority queue (PQ). However, priority queues may cause starvation of low-priority data streams. In order to improve on this unfair mechanism, Bennett and Zhang [14] proposed the concept of fair queuing (FQ). Weighted round robin (WRR) evolved from fair queuing and is mainly used for fixed-size packet systems. In each cycle, WRR schedules packet transmissions in proportion to the queue weights [15], [16]. Weighted fair queuing (WFQ) is a scheduling mechanism proposed by Demers et al. [17]. Compared with weighted round robin (WRR), which uses the packet as its unit, WFQ uses the byte as its adjustment unit. First, packets are classified according to different bandwidth requirements. After that, the estimated transmission completion time of each packet is calculated according to the bandwidth usage, packet length, and arrival time of the packet. Then, packet transmission is ordered according to the estimated completion time.
Although WFQ improves on the unfairness of WRR priorities, it also increases the computational complexity. Therefore, Shreedhar and Varghese [18] proposed the deficit round robin (DRR) scheduling mechanism, which mainly adopts the round robin (RR) method. The DRR mechanism adds a deficit counter to each queue to track its accumulated transmission credit. The scheduler first gives each queue an amount of bandwidth that can be used per round (the quantum size). At the end of each cycle, each queue uses its remaining deficit counter plus the quantum size as the deficit counter for the next cycle. Each time DRR considers sending a packet, it first checks the deficit counter: if the deficit counter value is at least the packet length, the packet can be sent out, the packet length is subtracted from the deficit counter, and the scheduler then continues to the next queue in sequence [15], [16]; a minimal sketch follows below. In [19], Proskochylo et al. studied queue management mechanisms on routers. The effect of queue management mechanisms, e.g., FIFO, PQ, CQ, WFQ, and LLQ, on QoS for real-time traffic on a router in an IP network was examined. Except for FIFO and CQ, the simulation results revealed that most of the queuing mechanisms can provide high QoS for real-time network traffic.
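The following is a minimal, runnable sketch of the DRR round described above. The queue contents and the quantum size are illustrative values, not taken from [18].

```python
from collections import deque

def drr(queues, quantum):
    """Deficit round robin over queues of packet lengths (bytes).
    Yields (queue_id, packet_length) in service order."""
    deficit = [0] * len(queues)
    while any(queues):
        for i, q in enumerate(queues):
            if not q:
                deficit[i] = 0          # empty queues carry no credit forward
                continue
            deficit[i] += quantum       # grant this round's allowance
            while q and q[0] <= deficit[i]:
                pkt = q.popleft()       # head packet fits into remaining credit
                deficit[i] -= pkt
                yield i, pkt

# Two queues: large packets (900 B) vs. small packets (300 B), quantum 500.
queues = [deque([900, 900]), deque([300, 300, 300])]
print(list(drr(queues, 500)))
```

Note how the large-packet queue must accumulate credit over two rounds before it may send, which is exactly how DRR avoids the per-byte bookkeeping of WFQ while staying approximately fair.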
In this paper, through the trust relationships between people, we develop a social-aware P2P video transmission strategy to stimulate the sharing willingness of users of multimedia IoT devices. Finally, the download progress (DP) value is combined with weighted fair queuing (WFQ) and a transfer finishing first (TFF) algorithm.
III. ALGORITHMS
This section describes the proposed social-aware P2P video transmission strategy in detail.
A. SOCIAL RELATIONSHIP MANAGEMENT
In the proposed mechanism, the peers are represented by multimedia IoT devices with user interactions. Each peer has its own friend list and the corresponding social relationships. A peer can assign a designated relationship type to each friend. In the proposed mechanism, the friend relationships are divided into five levels in descending order: family (5), close friends (4), acquaintances (3), friends with similar interests (2), and online friends (1), where a higher number (weight) means a higher priority. The five levels of friend relationships are defined and influenced by [20], [21]. An example of the social relationship and resource contribution table is shown in Table I, containing five fields: friend name, relationship, social weight (SW), receiving contribution (RC) in terms of uploaded volume, and providing contribution (PC) in terms of downloaded volume. Each peer can upload and download data from other peers at the same time. When a peer requests videos from other peers, after the transmission succeeds, the requesting peer records the amount of data shared and transmitted by the sharing peer as the receiving contribution (RC). Meanwhile, the sharing peer records the amount of data it provided and transmitted to the requesting peer as the providing contribution (PC). A sketch of this table follows below.
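As a data model, the table above might be represented as follows. This is a minimal sketch; the class and field names are illustrative and not taken from the paper, while the relationship-to-weight mapping follows the five levels listed above.

```python
from dataclasses import dataclass

# Relationship levels as listed above; a higher weight means a higher priority.
RELATIONSHIP_WEIGHT = {
    "family": 5,
    "close friend": 4,
    "acquaintance": 3,
    "similar interests": 2,
    "online friend": 1,
}

@dataclass
class FriendEntry:
    """One row of the social relationship and resource contribution table."""
    name: str
    relationship: str
    rc_bytes: int = 0   # receiving contribution: data this friend transmitted to us
    pc_bytes: int = 0   # providing contribution: data we transmitted to this friend

    @property
    def social_weight(self) -> int:
        return RELATIONSHIP_WEIGHT[self.relationship]

jerry = FriendEntry("Jerry", "close friend", rc_bytes=120_000_000)
print(jerry.social_weight)  # 4
```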
B. TRANSMISSION QUEUE MANAGEMENT
In the proposed mechanism, the weights in WFQ are defined as bandwidth reservation values proportional to the different friend relationships, the current transmission status, and the resource sharing contributions. Fig. 2 illustrates an example of peer requests in the proposed social-aware P2P video transmission strategy. In Fig. 2, peers A, B, C, and D send requests to sharing peer E to download the desired video contents. Peer A has two data flows, A1 and A2; the corresponding data flows of peers B, C, and D are denoted B1, C1, and D1. The social weight (SW) value of peer A is 5; the SW value of peer B is 2; the SW value of peer C is 1, and the SW value of peer D is 1. The queue adjustment module adapts each data flow's position according to the current transmission status and resource sharing contributions, and finally the data is sent to the social-aware WFQ to wait for packet scheduling.
When the sharing peer E receives the requests of peers A, B, C, and D, peer E queries the social relationship and resource contribution table to get the corresponding values of the social weight (SW), the friend data contribution in terms of receiving contribution (RC), and the friend data consumption in terms of providing contribution (PC). After calculating the contribution value (CV), the social-aware classifier (SAC) calculates the social weighted contribution value (SWCV) of each peer. After calculating the SWCV value, the transfer finishing priority value (TFFV) is estimated. Then, the algorithm determines which classification range Classifier i the TFFV value falls in, assigns the data flow to the queues with different weights, and finally sends the data to the social-aware WFQ for packet scheduling. Fig. 4 illustrates the flowchart of the proposed social-aware P2P video transmission architecture. The social-aware classifier determines and arranges each packet into the corresponding priority queue. Then, the queue adjustment module assigns the packet to the corresponding transmission queue according to the current transmission status and resource sharing contributions. Finally, the social-aware WFQ scheduler services the packet according to the scheduled starting and finishing times.
C. SOCIAL-AWARE CLASSIFIER
When the social-aware classifier of the providing peer receives requests sent by the requesting peers, the classifier queries and obtains the social weights from the social relationship and resource contribution table. Then, the classifier calculates the contribution value (CV) of each requested data flow from its receiving and providing contributions. The social weighted contribution value (SWCV) is defined in equation (3.6), where RCThreshold denotes the threshold of a peer's data provision value for leveling up the classified priority queue to the (i+1)th level. For example, as shown in Table I, only Jerry has the opportunity to be leveled up to the (i+1)th level if RCThreshold is set to 100MB. If the receiving contribution (RC) value of a peer is high enough, then the peer will be leveled up. The best value of RCThreshold can be set based on empirical analysis and may vary in different networking environments. The social weighted contribution value (SWCV) is designed to encourage friends to share resources and contribute more data in order to get a higher transmission priority, hence mitigating the free-rider problem.
After obtaining the social weighted contribution value (SWCV), the classifier next determines which classification range Classifier i the SWCV value falls into. Then, the requesting peer's data flows are placed into the corresponding priority queues. When the source peer starts transmitting data streams to the requesting peers, the social-aware classifier sends packets in accordance with the weights assigned to the sending queues, and then performs the transfer finishing first (TFF) queue adjustment algorithm. A sketch of the classification step follows below.
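Equation (3.6) itself is not reproduced in the text above, so the scoring below is a stand-in rather than the paper's formula: the social weight dominates the score, the contribution value refines it, a peer whose RC crosses RCThreshold is promoted by one classifier range, and scores are capped at 100 (matching the cap used in equation (3.7)). All names and constants are assumptions.

```python
# A stand-in for the social-aware classifier; equation (3.6) is not
# reproduced in the extracted text, so the weighting here is illustrative.

RC_THRESHOLD = 100_000_000   # bytes; the Table I example uses 100 MB
RANGE_WIDTH = 20             # five classifier ranges over a 0..100 score

def contribution_value(rc_bytes: int, pc_bytes: int) -> float:
    """Placeholder CV: the peer's net sharing balance, capped for scoring."""
    return min(rc_bytes / (pc_bytes + 1), RANGE_WIDTH)

def swcv(sw: int, rc_bytes: int, pc_bytes: int) -> float:
    """Social weighted contribution value: SW dominates, CV refines,
    and crossing RCThreshold promotes the peer by one range."""
    score = RANGE_WIDTH * (sw - 1) + contribution_value(rc_bytes, pc_bytes)
    if rc_bytes >= RC_THRESHOLD:
        score += RANGE_WIDTH
    return min(score, 100.0)

def classifier_range(score: float) -> int:
    """Map a score to one of the ranges Classifier_1 .. Classifier_5."""
    return min(5, 1 + int(score // RANGE_WIDTH))

print(classifier_range(swcv(4, 120_000_000, 30_000_000)))  # Jerry: promoted to 5
```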
D. TRANSFER FINISHING FIRST QUEUE ADJUSTMENT
While transmitting videos, the source providing peer obtains the size of the video file being downloaded and the downloading progress of the requesting peer. The queue adjustment module considers the downloading progress achieved and then levels up the data streams whose downloading progress is greater than the predefined threshold DPThreshold. The transfer finishing first policy achieves early completion of data transmissions and speeds up the release of connection resources.
After the social weighted contribution value (SWCV) is obtained, the file size and the data transmission progress of the current requesting peer are known, and the transfer finishing first value (TFFV) is calculated as defined in formula (3.7). We use the download progress (DP) of the data to adjust the transmission queue: a data stream whose download progress (DP) is greater than DPThreshold is leveled up, increasing its queue weight by one step, up to queue level 5. The download progress (DP) is expressed as the percentage of the download that has been completed. This kind of adjustment benefits peers who can complete their transmissions early and release system resources. The best value of DPThreshold can be set based on empirical analysis and may vary in different networking environments. After obtaining the TFFV value, the algorithm determines which classification range Classifier i the TFFV value falls in, assigns the requesting peer to the corresponding transmission queue, and waits for the social-aware WFQ to schedule it.
TFFV = min(SWCV + CR, 100), if DP ≥ DPThreshold; TFFV = SWCV, if DP < DPThreshold. (3.7)
Fig. 3 shows an example of queue adjustment. In Fig. 3, after calculating each data flow's SWCV value, the data flows B1, C1, and D1 are placed into the corresponding social-weighted queues (SWQs) SWQ2, SWQ1, and SWQ1. The social-weighted queues are then adjusted with the transfer finishing first policy: since the downloading progress of B1 and C1 is greater than DPThreshold, data flows B1 and C1 are leveled up, i.e., data flow B1 is upgraded to the SWQ3 queue and data flow C1 is upgraded to the SWQ2 queue, waiting for the next packet scheduling. A sketch of equation (3.7) and this adjustment follows below.
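Equation (3.7) translates directly into code. The value of CR is not given in the extracted text, so one classifier-range width (20) is assumed here, and the flow scores and progress values mirror the Fig. 3 example only loosely.

```python
# Equation (3.7) as code. CR is assumed to be one classifier-range width (20),
# since its numeric value is not given in the extracted text.

CR = 20.0
DP_THRESHOLD = 90.0   # percent of the download completed

def tffv(swcv_score: float, dp: float) -> float:
    """Transfer finishing first value per equation (3.7)."""
    if dp >= DP_THRESHOLD:
        return min(swcv_score + CR, 100.0)
    return swcv_score

# Flows whose download progress passed the threshold move up one queue,
# loosely mirroring the Fig. 3 adjustment of B1 and C1 (values illustrative).
flows = {"B1": (25.0, 95.0), "C1": (5.0, 92.0), "D1": (5.0, 40.0)}
for name, (score, dp) in flows.items():
    print(name, tffv(score, dp))   # B1 -> 45.0, C1 -> 25.0, D1 -> 5.0
```

With a range width of 20, these TFFV values land B1 in SWQ3, C1 in SWQ2, and leave D1 in SWQ1, which matches the Fig. 3 narrative.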
E. SOCIAL-AWARE WFQ
We use the WFQ scheduler to dispatch data packets according to the reserved bandwidth assigned to each social weighted queue. The reserved bandwidth assigned to social weighted queue i is defined in equation (3.8), where SWB i denotes the reserved bandwidth assigned to social weighted queue i. Table 2 defines the symbols used in WFQ packet scheduling. Fig. 6 shows an example of social-aware WFQ packet scheduling. Assume there are five priority queues and the total bandwidth BW provided is 180 units; the allocated bandwidths SWB 5 , SWB 3 , SWB 2 , and SWB 1 are 100, 40, 20, and 20, respectively. The estimated finishing times of the packets according to equations (3.9) and (3.10) are A11 = 0.6, B11 = 1.0, C11 = 2.0, and D11 = 2.5. As a result, the sending order is A11, B11, C11, and D11. A sketch of this computation follows below.
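Equations (3.8)-(3.10) are not reproduced in the extracted text, so the sketch below uses the standard WFQ finish-time approximation: each queue receives bandwidth in proportion to its weight, and a packet's estimated finishing time is its per-queue start time plus packet_length / SWB_i. The weights 5:2:1:1 follow the Fig. 2 social weights and reproduce the 100/40/20/20 split of the 180-unit bandwidth; the packet lengths are back-derived so that the finishing times match the Fig. 6 example, and are therefore assumptions rather than values from the paper.

```python
import heapq

# Equation (3.8): each queue's reserved bandwidth is its weight's share of BW.
def reserved_bandwidth(total_bw: float, weights: dict) -> dict:
    total = sum(weights.values())
    return {q: total_bw * w / total for q, w in weights.items()}

def schedule(packets, swb):
    """packets: list of (queue_id, length). Serves packets in order of the
    estimated finishing time: per-queue start time + length / SWB_i."""
    heap, last_finish = [], {}
    for n, (q, length) in enumerate(packets):
        start = last_finish.get(q, 0.0)
        finish = start + length / swb[q]
        last_finish[q] = finish
        heapq.heappush(heap, (finish, n, q))
    return [heapq.heappop(heap) for _ in range(len(heap))]

# Weights 5:2:1:1 over BW = 180 give the 100/40/20/20 split of Fig. 6.
swb = reserved_bandwidth(180, {"A": 5, "B": 2, "C": 1, "D": 1})
order = schedule([("A", 60), ("B", 40), ("C", 40), ("D", 50)], swb)
print([(q, round(t, 2)) for t, _, q in order])
# [('A', 0.6), ('B', 1.0), ('C', 2.0), ('D', 2.5)] -- the A11, B11, C11, D11 order
```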
IV. PERFORMANCE EVALUATION
This section simulates and evaluates the performance of the proposed packet delivery strategy. The experiments are divided into four simulation environments, (a), (b), (c), and (d), to explore and analyze the results.
• The main purpose of simulation environment (a) is to compare the number of packets received by each peer under the same transmission time when the social-aware WFQ weight is set for each peer.
• In simulation environment (b), the request times of the requesting peers are staggered in order to observe the change in data throughput.
• The main purpose of simulation environment (c) is to observe the change in data throughput when a requesting peer's receiving contribution (RC) is greater than RCThreshold and the social weighted contribution value (SWCV) has reached the upgrade level; the SW value is then increased by one.
• The main purpose of simulation environment (d) is to observe the change in data throughput when a requesting peer's download progress (DP) is greater than 90% of the preconfigured DPThreshold and the transfer finishing first value (TFFV) has reached the upgrade level; the SW value is then increased by one. In the simulations, we use the network simulator NS-2, version 2.35, to simulate the network environments [22]. The proposed system follows the methods introduced by Stadtfeld [23] to establish the peers in the simulation environment. In the simulated environment, when a sharing peer that owns the video receives a video request, the sharing peer establishes a CBR connection with the requesting peer for data transmission. The resource sharing peer uses weighted fair queuing (WFQ) to schedule the data packet transmission.
The NS-2 simulation area is 1400 * 1400 meters; the MAC type is 802.11g; the routing protocol is AODV; the broadcast radius is 200 meters, and the bandwidth is 54Mbps. There are 10 peers in total: 5 are requesting peers, 3 are peers responsible for forwarding packets, and 2 are source peers providing the movies. The source peers use weighted fair queuing (WFQ) to schedule the data packets. The simulation environment settings are shown in Table 3.
A. SIMULATION ENVIRONMENT (A)
The simulation sets the CBR packet size to 1,000 bytes with transmission times of 10, 20, 30, and 50 seconds. Under the same transmission time, the social-aware WFQ weight is set for each peer both using and not using the social weight (SW). The simulation compares the packet reception and data throughput of the requesting peers. This simulation does not consider the download progress (DP).
In Fig. 7, the video source peers are n0 and n7; the source peer n0 has the files Video a, Video c, Video d, and Video e; peer n7 has the files Video b, Video c, Video d, and Video e; the requesting peers are n1, n3, n4, n8, and n9. Peers n1, n3, and n4 send video requests to peer n0 to watch Video a; peers n8 and n9 send video requests to peer n7 to watch Video b. The interval between transmission sequences is set to 0.2 seconds. After peers n0 and n7 receive the video requests from the other peers, the source peers examine the social relationship list of the requesting peers and categorize them with the social-aware classifier (SAC). The RCThreshold is set to 100MB, with 5 weight levels. Fig. 8 shows the simulation environment and the corresponding social relationship list. Fig. 9 shows the data received by peers n1, n3, and n4 in 50 seconds. Compared with the original setting (SW = 1), peer n1 using SAWFQ has a larger bandwidth share because its SW is 5, and the amount of received data is 17,218K bytes more than the original (SW = 1), an increase of 63.5% in transmitted data volume. Peer n3 using SAWFQ receives 17,257K bytes (66%) less, because its SW is 1 and its allocated bandwidth is smaller than that of peer n1. Fig. 10 shows the receiving data throughput of the peers served by source peer n7. The requesting peers are n8 and n9, with SWs of 4:1, respectively. With SAWFQ, peer n8 receives 82,040K bytes and peer n9 receives 20,488K bytes in 50 seconds. Compared with the original setting (SW = 1), peer n8 using SAWFQ has a larger bandwidth share because its SW is 4, and the amount of received data is 30,364K bytes more than the original (SW = 1), an increase of 58.7% in transmitted data volume. Peer n9 using SAWFQ receives 30,360K bytes (59.7%) less, because its SW is 1 and its allocated bandwidth is smaller than that of peer n8. These results show that peers with a higher SW achieve better data reception than other peers. This means that in social relationships, people who are relatively close, such as close friends, are given a higher weight, so that close friends can have more bandwidth.
B. SIMULATION ENVIRONMENT (B)
Simulation environments (b) and (a) are basically the same. The main distinction lies in the request time of each requesting peer. In simulation environment (b), the request time differs for each requesting peer, which is mainly used to observe the change in data throughput when peers join at different times. Fig. 11 shows that peer n1 requests Video a from peer n0 at 1.0 second, peer n3 requests Video a from peer n0 at 11.0 seconds, and peer n4 requests Video a from peer n0 at 21.0 seconds. Peer n8 requests Video b from peer n7 at 5.0 seconds, and peer n9 requests Video b from peer n7 at 15.0 seconds. All social weight (SW) values are unchanged, as shown in Fig. 8.
In Fig. 12, the x-axis denotes the simulation time in seconds; the y-axis denotes the data throughput in Mbps; the CBR packet size is 1,000 bytes, and the request transmission time of each peer is 50 seconds. The SW of peer n1 is 5, that of peer n3 is 3, and that of peer n4 is 1. Fig. 12 shows that peer n1 has the highest data throughput, about 16.8Mbps, in 0-11 seconds. When peer n3 sends a request to join the transmission at 11 seconds, the data throughput of peer n1 begins to drop. At 21 seconds, peer n4 sends a request to join the transmission; at 51 seconds, peer n1 finishes the video transmission, and the data throughputs of peers n3 and n4 increase. At 61 seconds, peer n3 also finishes the video transmission; all the bandwidth is then given to peer n4 and its data throughput goes straight up. In Fig. 13, the SW of peer n8 is 4 and that of peer n9 is 1. Peer n8 has the highest data throughput in 0-15 seconds, and peer n9 sends a request to join the transmission at 15 seconds. Fig. 13 shows that the data throughput of peer n8 starts to drop at this time. After peer n9 joins the transmission, peer n8 still keeps a certain level of bandwidth. At 55 seconds, peer n8 finishes the transmission; peer n9 then has all the bandwidth and its data throughput goes up in a straight line. This shows that the data throughput of peer n8 slowly decreases to the proportion of bandwidth allocated by the SW. Finally, the WFQ allocates the available bandwidth in proportion to the SW values.
C. SIMULATION ENVIRONMENT (C)
Simulation environment (c) mainly observes the changes in data throughput when a requesting peer's receiving contribution (RC) is greater than RCThreshold and the SW value is increased by one.
In Fig. 14, at 1.0 second, the requesting peer n1 sends a request to the source peer n0 and the corresponding SW is 3; at 1.5 seconds, the requesting peer n3 sends a request to the source peer n0 and the corresponding SW is 2; at 2.0 seconds, the requesting peer n4 sends a request to the source peer n0 and the corresponding SW is 1, which is raised to 2 at 30 seconds. At 1.25 seconds, the requesting peer n8 sends a request to the source peer n7 and the corresponding SW is 4; at 1.75 seconds, the requesting peer n9 sends a request to the source peer n7 and the corresponding SW is 1, which is raised to 2 at 30 seconds. In Fig. 15, the SW of peer n1 is 3, that of peer n3 is 2, and that of peer n4 is 1. At 30 seconds, peer n4's receiving contribution (RC) becomes greater than the RCThreshold value, and the corresponding SW of peer n4 is increased from 1 to 2. Meanwhile, the data throughput of peer n4 increases to 2.9Mbps. From then on, each transmission continues to transmit data packets according to the bandwidth allocated by the SW. In Fig. 16, the SW of peer n8 is 4 and that of peer n9 is 1. At 30 seconds, peer n9's receiving contribution (RC) becomes greater than the RCThreshold value, and the corresponding SW of peer n9 is increased from 1 to 2. At the same time, the data throughput of peer n9 slowly increases from 1.7Mbps and the data throughput of peer n8 decreases from 13.6Mbps. From then on, each transmission continues to transmit data packets according to the bandwidth allocated by the SW.
D. SIMULATION ENVIRONMENT (D)
The simulation environment (d) mainly observes the changes in data throughput when the requesting peer's download progress (DP) is greater than DPThreshold and the SW value is increased by one. At the 40th second, peer n4's and peer n9's receiving contributions (RC) exceed RCThreshold, and their SW values are increased after calculating the social weighted contribution value (SWCV). At the 90th second, the download progress (DP) of peer n4 and peer n9 exceeds the DPThreshold setting of 90%, and the SW value is increased by one after calculating the TFFV value. The social weight (SW) changes are shown in Table 4. In Fig. 17, at the 40th second, peer n4 reaches the SWCV level-up condition, and the SW of peer n4 is increased from 1 to 2; the data throughput of peer n4 increases. At the 90th second, peer n4 reaches the TFFV level-up condition, and the SW of peer n4 is increased from 2 to 3, which speeds up the transmission of peer n4. In the end, each transmission continues to transmit data packets according to the bandwidth allocated by the SW. In Fig. 18, at the 40th second, peer n9 satisfies the SWCV level-up condition, and the SW of peer n9 is increased from 1 to 2; the data throughput of peer n9 increases. At the 90th second, peer n9 satisfies the TFFV level-up condition, and the SW of peer n9 is increased from 2 to 3, which speeds up the transmission of peer n9. In the end, each transmission continues to transmit data packets according to the bandwidth allocated by the SW.
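The two level-up rules can be sketched as follows, assuming each condition is rewarded once per peer. The threshold values and field names below are illustrative assumptions, not the paper's exact parameters; only the 90% DP threshold comes from the text.

```python
# SW level-up rules from environments (c) and (d): +1 when receiving
# contribution (RC) exceeds RCThreshold (the SWCV condition), and +1 when
# download progress (DP) exceeds DPThreshold (the TFFV condition).
RC_THRESHOLD = 0.5   # assumed RCThreshold value
DP_THRESHOLD = 0.9   # DPThreshold of 90%, as stated in the text

def update_social_weight(sw, rc, dp, promoted):
    """Apply at most one level-up per condition; 'promoted' records
    which conditions have already been rewarded for this peer."""
    if rc > RC_THRESHOLD and "SWCV" not in promoted:
        sw += 1
        promoted.add("SWCV")
    if dp > DP_THRESHOLD and "TFFV" not in promoted:
        sw += 1
        promoted.add("TFFV")
    return sw

# Peer n4 in environment (d): SW goes 1 -> 2 at the 40th second (RC condition),
# then 2 -> 3 at the 90th second (DP condition).
promoted = set()
sw = update_social_weight(1, rc=0.6, dp=0.2, promoted=promoted)    # -> 2
sw = update_social_weight(sw, rc=0.6, dp=0.95, promoted=promoted)  # -> 3
print(sw)
```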
V. CONCLUSION
In this paper, a social-aware P2P video transmission strategy for multimedia IoT devices is proposed. We introduce a social-relationship-first video transmission policy that employs a weighted fair queue (WFQ) for P2P video transmission according to distinctive queuing priority classes derived from the different social relationships among friends. Each peer has its own friend list, and the social links are classified into different priority classes according to their social relationships, e.g., family, friends, and others. Meanwhile, the proposed strategy calculates the transmission priority according to the social relationship, the current download progress, and mutual resource-sharing contributions. By leveraging the inherent trust associated with social links, the proposed strategy can reduce the impact of free riders and give users a good video sharing and watching experience on multimedia IoT devices. | 2020-05-21T09:09:38.495Z | 2020-05-18T00:00:00.000 | {
"year": 2020,
"sha1": "310504dc971f03a323416ec86120a5ad77b29a7c",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/8948470/09094703.pdf",
"oa_status": "GOLD",
"pdf_src": "IEEE",
"pdf_hash": "4c66f8bd3e70ffc1ac335892c91ed5e5a6aa7baf",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
233396288 | pes2o/s2orc | v3-fos-license | Resveratrol Enhances Inhibition Effects of Cisplatin on Cell Migration and Invasion and Tumor Growth in Breast Cancer MDA-MB-231 Cell Models In Vivo and In Vitro
Triple-negative breast cancer (TNBC) is a refractory type of breast cancer for which there are not yet clinically effective drugs. The aim of this study was to investigate the synergistic effects and mechanisms of resveratrol combined with cisplatin on human breast cancer MDA-MB-231 (MDA231) cell viability, migration, and invasion in vivo and in vitro. In vitro, MTS assays showed that resveratrol combined with cisplatin inhibits cell viability in a concentration-dependent manner and produces synergistic effects (CI < 1). Transwell assays showed that the combined treatment inhibits TGF-β1-induced cell migration and invasion. Immunofluorescence assays confirmed that resveratrol upregulated E-cadherin expression and downregulated vimentin expression. Western blot assays demonstrated that resveratrol combined with cisplatin significantly reduced the TGF-β1-induced expression of fibronectin, vimentin, P-AKT, P-PI3K, P-JNK, P-ERK, Smad2, and Smad3 (p < 0.05) and increased the expression of E-cadherin (p < 0.05). In vivo, resveratrol enhanced tumor growth inhibition and reduced the body weight loss and kidney function impairment caused by cisplatin in MDA231 xenografts, and significantly reduced the expression of P-AKT, P-PI3K, Smad2, Smad3, P-JNK, P-ERK, and NF-κB in tumor tissues (p < 0.05). These results indicated that resveratrol combined with cisplatin synergistically inhibits the viability of breast cancer MDA231 cells, inhibits MDA231 cell invasion and migration through the epithelial-mesenchymal transition (EMT) pathway, and that resveratrol enhanced the anti-tumor effect and reduced the side effects of cisplatin in MDA231 xenografts. The mechanism may involve the regulation of PI3K/AKT, JNK, ERK, and NF-κB expression.
Introduction
Breast cancer is the most common malignant tumor in women, and its incidence is on the rise [1,2]. It has become a major threat to women's health. Triple-negative breast cancer (TNBC) is the most common and invasive breast cancer subtype in younger patients and is characterized by a lack of estrogen receptor, progesterone receptor, and human epidermal growth factor receptor 2 [3]. The lack of these receptors makes TNBC aggressive and does not respond to hormones and targeted therapies. In addition, TNBC is highly metastatic and can recur within three years [4].
In the past five years, TNBC treatment research has gradually focused on molecularly targeted drugs, including epidermal growth factor receptor antibodies, small-molecule single-target and multi-target tyrosine kinase inhibitors, anti-angiogenic agents, and DNA repair drugs [5]. Advances in research are expected to provide more treatment options for TNBC patients, improving cure rates and prognosis [6]. Although significant advances have been made in a variety of new drugs targeting HER2 or ER in recent years, progress against TNBC remains limited. Compared with other subtypes of breast cancer, TNBC patients have a survival rate of only 77% [7].
Resveratrol (trans-3,4′,5-trihydroxystilbene) is a non-flavonoid polyphenol derived from natural medicines such as Polygonum cuspidatum and Rheum palmatum, and from fruits such as grapes, blueberries, mulberries, and peanuts. It has been reported that resveratrol has anti-cancer effects and can inhibit the occurrence and metastasis of breast cancer [8]. Previous studies showed that the anti-breast cancer effects of resveratrol include inhibiting cell growth and proliferation by inducing autophagy [9] and apoptosis [10], reversing epithelial-mesenchymal transition (EMT) and decreasing metastasis [11,12], regulating the phase I and phase II detoxification systems [13], affecting epigenetic mechanisms [14], increasing the sensitivity [11] and reducing the cytotoxicity [15] of chemotherapy, suppressing multidrug resistance [16], and modulating the immune response [17].
Cisplatin, as a first-line drug for metastatic disease, has a response rate of more than 40% in breast cancer metastasis [18]. It is a DNA-damaging drug, especially effective in TNBC. Compared with other types of breast cancer, TNBC carries a higher risk of distant recurrence and death in the first five years [19]. Despite the heterogeneity of TNBC and the lack of clear molecular targets [20], the inherent genomic instability caused by deficient DNA repair in TNBC makes platinum drugs (such as cisplatin or carboplatin) effective in its treatment. In the clinic, the addition of platinum drugs can significantly improve the pathological complete response rate in the neoadjuvant therapy of TNBC. However, cisplatin, one of the most active cytotoxic drugs at present with therapeutic effects on a variety of malignant tumors [21], also produces serious side effects, including nephrotoxicity [22], neurotoxicity [23], gastrointestinal toxicity [24], peripheral neuropathy [25], ototoxicity [26], and hematological toxicity [27]. Therefore, it is particularly necessary to find drugs that can reduce the side effects of cisplatin while enhancing its therapeutic effects.
In this study, we investigated the synergistic effects of resveratrol combined with cisplatin on a TNBC model, MDA-MB-231 (MDA231) cells, in terms of viability, migration, and invasion in vivo and in vitro, and the underlying mechanisms were explored through the EMT pathway and the regulation of the PI3K/AKT, JNK, ERK, and NF-κB signaling pathways.
Resveratrol Combined with Cisplatin Synergistically Inhibits the Activity of MDA231 Cells
The effects of resveratrol combined with cisplatin on the viability of MDA-MB-231 cells were detected by MTS assay after treatment with 2-64 µM cisplatin (Figure 1A) and 12.5-250 µM resveratrol (Figure 1B,C). The CIs indicated synergy when 14 µM cisplatin was combined with 200 µM resveratrol (Figure 1D). When MDA231 cells were treated with 185 µM resveratrol combined with cisplatin at 4, 8, 16, 32, and 64 µM for 24 h, the survival rates were 80.2%, 61.6%, 24.2%, 17.3%, and 9.1%, respectively, compared to resveratrol alone (Figure 1E). The CIs indicated synergy when 175 µM resveratrol was combined with 16 or 32 µM cisplatin. These results indicated that cisplatin was sensitized by resveratrol, that high-dose resveratrol can enhance the efficacy of low-dose cisplatin in inhibiting tumor cell growth, and that cisplatin and resveratrol act synergistically on the viability of MDA231 cells.
Resveratrol Combined with Cisplatin Inhibits the Migration and Invasion of MDA231 Cells
The effects of resveratrol combined with cisplatin on the migration and invasion of MDA231 cells were detected by Transwell assay. As shown in Figure 2A,B, 12.5, 25, and 50 µM resveratrol combined with 4 µM cisplatin significantly inhibited the migration of MDA231 cells compared to the control group or the cisplatin group (p < 0.05 or p < 0.01), and the cell migration rates were 63.7%, 48.6%, and 28.3%, respectively. As shown in Figure 2C,D, 12.5, 25, and 50 µM resveratrol combined with 4 µM cisplatin significantly inhibited the invasion of MDA231 cells, and the cell invasion rates were 66.5%, 61.3%, and 42.8%, respectively. These results showed that resveratrol combined with cisplatin can inhibit the migration and invasion of MDA231 cells.
Effect of Resveratrol Combined with Cisplatin on TGF-β1-Induced Epithelial and Mesenchymal Molecular Markers in MDA231 Cells
In order to determine whether the effect of resveratrol combined with cisplatin on the migration and invasion of MDA231 cells occurs through the EMT pathway, TGF-β1-induced changes in the expression of epithelial and mesenchymal molecular markers were examined by western blot and immunofluorescence assays. As shown in Figure 3A, the expression of E-cadherin significantly decreased while the expression of vimentin and fibronectin increased upon TGF-β1 induction (5 ng/mL) in MDA231 cells, compared to the control group (p < 0.01). Moreover, the TGF-β1-induced decrease in E-cadherin expression and increase in vimentin and fibronectin expression were significantly reversed by resveratrol (12.5 µM, 25 µM, 50 µM), cisplatin (4 µM), and resveratrol combined with cisplatin in MDA231 cells, compared to the TGF-β1-treated group (p < 0.05).
Furthermore, the expression of the EMT markers was verified by immunofluorescence. As shown in Figure 3C, 5 ng/mL TGF-β1 reduced the expression of E-cadherin and increased the expression of vimentin. When 50 µM resveratrol was combined with TGF-β1, the epithelial marker E-cadherin increased and the mesenchymal marker vimentin decreased. There was no significant change in the expression of E-cadherin or vimentin with the combination of 4 µM cisplatin and TGF-β1. With the combination of resveratrol, cisplatin, and TGF-β1, the expression of E-cadherin increased and that of vimentin decreased, consistent with the control group. These results indicated that the effects of resveratrol combined with cisplatin on the TGF-β1-induced migration and invasion of MDA231 cells may involve the regulation of EMT. In Figure 3C, red represents E-cadherin or vimentin and blue represents DAPI; images were photographed under a laser confocal microscope (×625).
PI3K/AKT, Smad, NF-κB, JNK, and ERK Signaling Pathways May Be Involved in TGF-β1-Induced EMT under the Regulation of Resveratrol and Cisplatin in MDA231 Cells
In order to determine whether the effect of resveratrol and cisplatin on the migration and invasion of MDA231 cells occurs through the EMT pathway, 5 ng/mL TGF-β1-induced EMT was treated with 25 µM resveratrol and 4 µM cisplatin, with or without 10 µM LY290042 (PI3K inhibitor), 10 µM SB431542 (Smad inhibitor), 10 µM PDTC (NF-κB inhibitor), 10 µM SP600125 (JNK inhibitor), or 10 µM PD98059 (ERK inhibitor), for 24 h, and the expression of epithelial and mesenchymal marker proteins was then observed by western blot. As shown in Figure 4A, TGF-β1 decreased the expression of E-cadherin and increased the expression of vimentin and fibronectin compared to the control group (p < 0.01). Combining resveratrol and cisplatin with LY290042 or SB431542 reversed the TGF-β1-induced expression of these proteins. The results showed that the effects of resveratrol and cisplatin on TGF-β1-induced EMT in MDA231 cells may involve the regulation of the PI3K and Smad signaling pathways.
Furthermore, western blot demonstrated that resveratrol and cisplatin regulated the expression of EMT-related pathway proteins in MDA231 cells. As shown in Figure 4B, the expression of P-AKT, P-PI3K, Smad2, Smad3, P-JNK, and P-ERK was increased in cells induced by TGF-β1. The combination of 25 µM resveratrol with TGF-β1 and cisplatin reversed the expression of these proteins and was superior to treatment with cisplatin alone. The results indicate that the effects of resveratrol combined with cisplatin on TGF-β1-induced EMT in MDA231 cells may involve the regulation of PI3K/AKT and Smad, as well as of NF-κB, JNK, and ERK.
Resveratrol Enhances the Anti-Tumor Effect and Reduces the Side Effects of Cisplatin in MDA231 Xenografts
In order to demonstrate the anti-tumor effect of resveratrol and cisplatin, MDA231 xenografts were prepared and the effects of resveratrol combined with cisplatin were assessed. As shown in Figure 5A, the effect of 50 mg/kg resveratrol combined with 5 mg/kg cisplatin was superior to cisplatin alone, significantly reducing tumor weight compared to the model group (p < 0.01) and the cisplatin-treated group (p < 0.05) in MDA231 xenografts from 3 to 8 weeks, while resveratrol alone had no effect. As shown in Figure 5B, the body weights of MDA231 xenografts were significantly higher with the combination treatment than in the cisplatin-treated group (p < 0.05), whereas cisplatin alone significantly decreased body weight (p < 0.05). As shown in Figure 5C, serum BUN and Cr were lower with the combination treatment than in the cisplatin-treated group (p < 0.05), whereas cisplatin alone significantly increased serum BUN and Cr (p < 0.05). There were no significant changes in ALT or AST in any treatment group (p > 0.05). These results indicated that resveratrol can reduce the body weight loss and kidney function impairment caused by cisplatin.
Resveratrol Combined with Cisplatin Inhibits the Expression of P-AKT, P-PI3K, Smad2, Smad3, P-JNK, P-ERK, and NF-κB in Tumor Tissues of MDA231 Xenografts
In order to investigate the mechanisms of anti-tumor effect of resveratrol and cisplatin, the proteins expressions of tumor proliferation pathways in tumor tissues of MDA231 xenografts were demonstrated by western blot. As shown in Figure 6, the expressions of P-AKT, P-PI3K, Smad2, Smad3, P-JNK, P-ERK, and NF-κB were decreased by resveratrol and resveratrol combined with cisplatin compared to the model group, and the effects of the combination were better than in the resveratrol alone treatment. (p < 0.05), but there were no significant changes between cisplatin and model groups (p > 0.05). The results indicated that the regulations of P-AKT, P-PI3K, Smad2, Smad3, P-JNK, P-ERK, and NF-κB expressions may be involved in resveratrol enhances anti-tumor of cisplatin in MDA231 xenografts. tion treatments of resveratrol and cisplatin compared to cisplatin-treated group (p < 0.05), while the body weights were significantly decreased by cisplatin (p < 0.05). As shown in Figure 5C, serum BUN and Cr were decreased by the combination treatments of resveratrol and cisplatin compared to cisplatin-treated group (p < 0.05), while the serum BUN and Cr were significantly increased by cisplatin (p < 0.05). There were no significant changes of ALT and AST in each treatment group (p > 0.05). These results indicated that resveratrol can reduce body weight loss and kidney function impairment by cisplatin.
Resveratrol Combined with Cisplat Inhibits the Expression of P-AKT, P-PI3K, Smad2, Smad3, P-JNK, P-ERK, and NF-κB in Tumor Tissues of MDA231 Xenografts
In order to investigate the mechanisms of the anti-tumor effect of resveratrol and cisplatin, the expression of tumor proliferation pathway proteins in the tumor tissues of MDA231 xenografts was examined by western blot. As shown in Figure 6, the expressions of P-AKT, P-PI3K, Smad2, Smad3, P-JNK, P-ERK, and NF-κB were decreased by resveratrol and by resveratrol combined with cisplatin compared to the model group, and the effects of the combination were better than those of resveratrol alone (p < 0.05), while there were no significant changes between the cisplatin and model groups (p > 0.05). The results indicated that regulation of P-AKT, P-PI3K, Smad2, Smad3, P-JNK, P-ERK, and NF-κB expression may be involved in resveratrol's enhancement of the anti-tumor effect of cisplatin in MDA231 xenografts.
Reagents
MDA-MB-231 cells were purchased from the Shanghai Cell Bank of the Chinese Academy of Sciences (Shanghai, China). DMEM was purchased from Gibco (Brooklyn, NY, USA); MTS was purchased from Promega (Madison, WI, USA); Smad2 and Smad3 antibodies were purchased […]. The combination indices (CIs) were calculated using CompuSyn software (ComboSyn Inc., Paramus, NJ, USA). A CI value < 1 indicates a synergistic effect, a CI value = 1 an additive effect, and a CI value > 1 an antagonistic effect.
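For orientation, the following is a minimal sketch of the Chou-Talalay combination index that CompuSyn computes: CI = d1/Dx1 + d2/Dx2, where d1 and d2 are the doses used in combination and Dx1, Dx2 are the doses of each drug alone producing the same effect level. The example doses below are hypothetical placeholders, not the study's measured values.

```python
# Chou-Talalay combination index for a two-drug combination.
def combination_index(d1, d2, dx1, dx2):
    """CI < 1: synergy; CI = 1: additive; CI > 1: antagonism."""
    return d1 / dx1 + d2 / dx2

# e.g., if 4 uM cisplatin + 25 uM resveratrol reproduced an effect that would
# require 16 uM cisplatin or 150 uM resveratrol alone (assumed numbers):
ci = combination_index(d1=4, d2=25, dx1=16, dx2=150)
print(f"CI = {ci:.2f}")  # 0.42 -> synergistic under these assumed doses
```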
Invasion and Migration Assay
In the invasion experiment, 30 µg of Matrigel was applied to the upper layer of the chamber. The cell concentration was adjusted to 1 × 10⁵/mL, and 200 µL of cell suspension with or without drug was added to the upper layer of a 24-well Transwell chamber. After 24 h, the culture solution was aspirated, and cells that had not invaded or migrated were wiped off the upper layer of the chamber with a cotton swab. The cells were rinsed twice with PBS, fixed with 4% paraformaldehyde for 15 min, and then stained with 0.1% crystal violet for half an hour. Finally, the cells were photographed and counted for semi-quantitative analysis.
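The semi-quantitative read-out can be sketched as below: migrated or invaded cells are counted per field and expressed relative to the untreated control, giving the percentage rates reported in the Results. The counts used here are made-up placeholders.

```python
from statistics import mean

# Mean treated cell count as a percentage of the mean control count.
def relative_rate(treated_counts, control_counts):
    return 100.0 * mean(treated_counts) / mean(control_counts)

# Three fields per condition (hypothetical counts):
print(f"{relative_rate([130, 121, 128], [200, 195, 205]):.1f}%")  # ~63% of control
```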
Immunofluorescence Assay
MDA-MB-231 cells were adjusted to a density of 2.5 × 10⁵ cells/mL, cultured on confocal dishes, and treated with TGF-β1 (5 ng/mL), TGF-β1 + resveratrol (50 µM), TGF-β1 + cisplatin (4 µM), or TGF-β1 + resveratrol + cisplatin for 24 h, then fixed with 4% paraformaldehyde for 30 min and permeabilized in 0.5% Triton X-100 for 20 min. After three PBS washes and blocking with QuickBlock™ Blocking Buffer for Immunol Staining for 15 min, the cells were incubated with anti-E-cadherin (1:200) and anti-vimentin (1:500) antibodies overnight at 4 °C. After washing, the cells were protected from light, incubated with an anti-rabbit secondary antibody for 60 min, and counterstained with DAPI. The cells were observed and photographed with a confocal fluorescence microscope (LSM880, Zeiss, Jena, Germany).
Preparation, Administration, and Treatment of MDA231 Xenografts
Seven-week-old female BALB/c mice (18-23 g) were kept at the Laboratory Animal Center of Shanghai University of Traditional Chinese Medicine. The preparation and administration of MDA231 xenografts were as described in the previous literature [12]. The mice were injected intraperitoneally once every two days (every … days for cisplatin) with 100 mg/kg resveratrol (resveratrol group, n = 10), 50 mg/kg cisplatin (cisplatin group, n = 10), or 100 mg/kg resveratrol + 50 mg/kg cisplatin (combination group, n = 10); tumor-bearing controls received PBS (model group, n = 5), as did BALB/c mice without xenografts (normal group, n = 5). Body weights of the mice were measured once per week. Eight weeks after tumor cell inoculation, the mice were sacrificed, and the tumors were removed and weighed.
Liver and Kidney Function Tests
When the mice were sacrificed, 1 mL blood was collected from the eyes and then quickly centrifuged for 10 min at 3000 rpm to obtain the serum. The levels of serum ALT, AST, Cr, and BUN were detected according to the manufacturer's colorimeter testing kits (Jiancheng Bioengineering Institute, Nanjing, China).
Statistical Analysis
The experimental data are expressed as mean ± standard deviation (SD). Comparisons between two means were performed by t-test, and comparisons among multiple means were performed by analysis of variance (ANOVA). Differences were considered statistically significant at p < 0.05. Statistical analysis was performed with SPSS 19.0 software.
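The same comparisons can be sketched in code with SciPy, as below: an independent two-sample t-test for two group means and one-way ANOVA for several groups, with p < 0.05 taken as significant. The arrays are placeholder values, not the study's data.

```python
from scipy import stats

group_a = [0.82, 0.79, 0.85, 0.80]  # hypothetical viability ratios
group_b = [0.61, 0.66, 0.58, 0.63]
group_c = [0.41, 0.44, 0.39, 0.45]

t, p_t = stats.ttest_ind(group_a, group_b)          # comparison of two means
f, p_f = stats.f_oneway(group_a, group_b, group_c)  # comparison of multiple means
print(f"t-test p = {p_t:.4f}; ANOVA p = {p_f:.4f}")
```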
Ethics Approval and Consent to Participate
All animal procedures were conducted in accordance with the guidelines of the National Institutes of Health and were approved by the Ethical Committee of the Shanghai University of Traditional Chinese Medicine (approval ID PZSHUTCM18-101804).
Discussion
TNBC refers to breast cancer that does not express the estrogen receptor (ER), the progesterone receptor (PR), or the HER2/neu gene [28]. Although TNBC is sensitive to chemotherapy, and platinum drugs in particular can significantly improve its prognosis, the serious side effects of platinum drugs are a non-negligible fact in clinical applications [29]. Therefore, it is particularly necessary to find new drugs or synergistic combination strategies that are more effective against TNBC and can alleviate the side effects of chemotherapy drugs.
Chinese herbal medicines (a class of natural medicines) have long been used in cancer therapy in such synergistic combinations to enhance efficacy, reduce side effects, modulate the immune response, and abrogate chemotherapy drug resistance [30]. Resveratrol-based combinatorial strategies for cancer management have also been increasingly studied: resveratrol sensitizes antiestrogen-resistant breast cancer cells to tamoxifen [11], suppresses TNF-β-induced survival of 5-FU-treated colorectal cancer cells [31], increases arsenic trioxide-induced apoptosis in chronic myeloid leukemia cells [32] and lung cancer cells [33], decreases the cytotoxicity of doxorubicin in breast cancer cells [15], and, combined with piceatannol, upregulates PD-L1 expression in breast and colorectal cancer cells [34]. However, the effects of resveratrol combined with cisplatin on breast cancer are still unclear.
In this study, we demonstrated the effects of resveratrol combined with cisplatin using a TNBC model, MDA231 cells, in vivo and in vitro. Our results showed that, in vitro, resveratrol combined with cisplatin synergistically inhibits cell viability and inhibits TGF-β1-induced cell migration and invasion (Figures 1 and 2); in vivo, resveratrol enhanced tumor growth inhibition and reduced the body weight loss and kidney function impairment caused by cisplatin in MDA231 xenografts (Figure 5). These results indicated that resveratrol may potentiate the inhibitory effects of cisplatin on cell viability, migration and invasion, and tumor growth in MDA231 models in vivo and in vitro.
EMT is a process in which epithelial cells lose cell polarity and intercellular adhesion, gain migratory and invasive properties, and become mesenchymal cells. Epithelial cells express high levels of E-cadherin, while mesenchymal cells express N-cadherin, fibronectin, and vimentin; EMT thus causes morphological and phenotypic changes in cells [35]. EMT plays an important role in the progression of TNBC. It has been reported that resveratrol can reverse EMT [12] and sensitizes antiestrogen-resistant breast cancer cells, which display EMT, to tamoxifen [11]. In this study, we found that resveratrol potentiated the inhibition by cisplatin of TGF-β1-induced MDA231 cell migration and invasion by reducing the expression of vimentin and fibronectin and increasing the expression of E-cadherin, that is, by reversing EMT, whereas cisplatin alone had no obvious effect (Figure 3). This indicates that resveratrol confers on cisplatin the ability to reverse EMT.
Multiple dysregulated signaling pathways, such as Wnt/β-catenin, Notch, NF-κB, PI3K/Akt, Smad, MAPK (including p38, JNK, and ERK), and Hedgehog, operate in TNBC; these pathways are involved in the regulation of cell growth, proliferation, migration, EMT, metastasis, and the activation of apoptosis, and they are affected by natural compounds such as resveratrol or its combination with classical chemotherapeutic agents in TNBC [36]. To clarify which signaling pathways are involved in the regulation of EMT by the resveratrol-cisplatin combination in MDA231 cells, we screened the TGF-β1-induced EMT-related signaling pathways using inhibitors, including LY290042 for PI3K, SB431542 for Smad, PDTC for NF-κB, SP600125 for JNK, and PD98059 for ERK. Our results showed that the PI3K and Smad signaling pathways may be involved in the regulation of TGF-β1-induced EMT in MDA231 cells (Figure 4A). Moreover, further experiments showed that the combination of resveratrol and cisplatin downregulated the expression of P-JNK and P-ERK, which was increased in MDA231 cells induced by TGF-β1 (Figure 4B), indicating that the effects on EMT involve the regulation of the PI3K/AKT and Smad signaling pathways as well as of NF-κB, JNK, and ERK expression.
Moreover, previous studies have reported that resveratrol inhibits tumor growth by inducing apoptosis in MDA231 xenograft and HER-2/neu transgenic mouse models [37,38]. A tumor-inhibitory effect of resveratrol combined with quercetin and catechin, via regulation of cell cycle progression, has also been reported in MDA231 xenografts [39]. However, the tumor-inhibiting mechanisms of resveratrol combined with cisplatin in breast cancer are still unclear. In this study, we found that resveratrol enhanced the inhibitory effects of cisplatin on P-AKT, P-PI3K, Smad2, Smad3, P-JNK, P-ERK, and NF-κB expression in MDA231 xenografts (Figure 6), indicating that regulation of these proteins may be involved in resveratrol's enhancement of the anti-tumor effect of cisplatin in MDA231 xenografts. In addition, because resveratrol can inhibit tumor metastasis [12] and multiple signaling pathways are involved in the metastasis of breast cancer [8,36], further studies will investigate the effects and mechanisms of resveratrol combined with cisplatin on tumor metastasis.
Conclusions
In summary, resveratrol combined with cisplatin produced a synergistic inhibition of breast cancer cell viability and inhibited breast cancer MDA231 cell migration and invasion through EMT regulated by PI3K/AKT, Smad, NF-κB, JNK, and ERK. Moreover, resveratrol enhanced the anti-tumor effect and reduced the side effects of cisplatin in MDA231 xenografts, and the effective mechanism may involve the regulation of PI3K/AKT, JNK, ERK, and NF-κB expression. | 2021-04-27T05:14:40.086Z | 2021-04-01T00:00:00.000 | {
"year": 2021,
"sha1": "940cfc70092cf9a878abc2e7e55d8b7a728befd0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/26/8/2204/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "940cfc70092cf9a878abc2e7e55d8b7a728befd0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
164734544 | pes2o/s2orc | v3-fos-license | Early Results of the Attune Knee System: A Minimum 2 Year Follow Up Observational Study
Background: The ATTUNE Knee System was designed to improve patient outcome and satisfaction. The aims of this study were to assess patient outcomes after receiving an ATTUNE total knee replacement (TKR) and to ensure that early results were comparable to other TKR systems in Australia. Methods: 332 ATTUNE TKRs were performed locally; mean follow-up was 2.6 years (2.0 to 3.2). Revision data were collected on all ATTUNE TKRs. For ATTUNE TKRs performed at our university teaching hospital (n=162), patient-reported outcome was measured using the Multi-Attribute Arthritis Prioritisation Tool (MAPT) questionnaire. Results: The revision rate of the ATTUNE TKR was similar to national rates (1.6% vs. 2.1%) (p=0.508). Postoperative MAPT scores were significantly lower after TKR (n=87) (median 63.4 vs. 0.0) (p<0.001). A total of 82 (94.3%) people had an improved MAPT score post-TKR. Conclusion: Our findings suggest the ATTUNE TKR has revision rates comparable to other TKRs currently available in Australia. Furthermore, patient-reported outcome was good 2.4 years postoperatively.
INTRODUCTION
Total knee replacement (TKR) is performed in patients suffering from severe pain and functional limitation secondary to arthritis. The goals of a TKR are to reduce pain, restore function, correct mechanical malalignment, ensure ligamentous balancing, and restore the joint line [1]. By 2015, 494,571 primary TKRs had been performed and reported to the Australian Orthopaedic Association National Joint Replacement Registry (AOANJRR) [2].
The ATTUNE Knee System (DePuy Synthes, Warsaw) was designed in an effort to improve patient outcomes by providing more options for bearing and allowing easier kinematic balancing, as well as having a more anatomic patella and trochlear groove [6,11,12]. The ATTUNE Knee System has been used since 2011 and became widely available in 2013. While studies in the USA and Europe have shown good early outcomes there has been no data from Australian patients to date [11][12][13]. Therefore, the aim of this retrospective observational study was to investigate the early results of the ATTUNE TKR and ensure comparable results with other TKR systems used in Australia.
MATERIAL AND METHODS
An arthroplasty data system from a single institution was used to identify patients who had undergone TKR using the ATTUNE Knee System between 1 September 2014 and 31 December 2015, allowing capture of the initial surgeries performed and a minimum follow-up of 2.0 years. Patients were selected to receive an ATTUNE TKR if they fit our department's routine criteria for primary TKR surgery. The primary outcome was the need for revision surgery. Secondary outcomes included patient-reported outcome, short-term postoperative range of motion (ROM), and surgical complications. Patients were included in the primary analysis if their primary diagnosis was osteoarthritis of the knee, to allow comparison with AOANJRR data. All patients were required to be ≥ 18 years old. Exclusion criteria were revision surgery rather than primary TKR and a primary diagnosis other than osteoarthritis. Ethics approval was gained from the South Australian Local Health Network Human Research Ethics Committee. Patients were contacted either by mail or by phone after their surgery, and informed consent to be included in the study was obtained.
A total of 162 TKRs in 148 patients using the ATTUNE Knee System were included from our university teaching hospital. The surgeries were performed by one of 9 consultants or a supervised fellow using the medial parapatellar approach. All but one TKR used a cruciate-retaining technique, as this was our surgeons' preference; one TKR used a posterior-stabilising technique. All cases in our learning curve with the ATTUNE implant were included. Postoperatively, all patients were referred to standard rehabilitation. ROM was collected at the 6-to-12-week postoperative appointment and measured using a goniometer. Patient-reported outcomes were measured using the Multi-Attribute Arthritis Prioritisation Tool (MAPT), a standardised and validated patient-based score involving 11 multiple-choice questions asking patients how their knee has affected them over the previous three months [14,15]. The MAPT questionnaire was our routine patient-reported functional assessment tool at the time this study started; it is highly correlated with other questionnaires such as the Western Ontario and McMaster osteoarthritis index (WOMAC) and the Oxford Knee Score, and has been shown to be a good assessment of a patient's physical function and pain [14,15]. The MAPT assesses pain, limitations to daily activities, economic effects, recent deterioration, and psychosocial health effects [16]. Individuals are sent a preoperative MAPT as part of routine care to aid in prioritising patients awaiting TKR, and since it is a validated functional score, it is reasonable to compare preoperative and postoperative scores. The MAPT questionnaire produces a score ranging from 0 (least disease severity) to 100 (greatest disease severity) [14]. Previously, a MAPT score ≤ 20 has been considered low priority for joint replacement surgery because the patient has sufficient function [17]; in this study, a postoperative score of ≤ 20 was therefore considered a good outcome. Conversely, a score ≥ 60 has previously been considered high priority for joint replacement surgery [17]; in this study, a postoperative score ≥ 60 was therefore considered a poor outcome.
Patients completed a preoperative MAPT questionnaire as part of their standard care. Additionally, patients were sent a MAPT to complete in either June or December 2017, ensuring a minimum follow-up of 2.0 years; non-responders were telephoned. Age, gender, date of TKR surgery, and information regarding any surgical complication during or after surgery were collected from electronic patient records for patients who received an ATTUNE TKR at our university teaching hospital.
To assess revision rates, TKRs involving the ATTUNE Knee System performed by the same surgeons at our private institution were also included in the analysis; a total of 332 TKRs with the ATTUNE Knee System were performed across both institutions. Figure 1 highlights recruitment and the inclusion and exclusion criteria. To identify ATTUNE TKRs that had been revised, two local databases were reviewed in December 2017. Additionally, an ad hoc report was requested from the AOANJRR to identify any other revisions that had occurred in Australia [18]. A total of 10 knees were excluded from the primary analysis because they received the ATTUNE Knee System for reasons other than osteoarthritis: rheumatoid arthritis (5), osteonecrosis (3), and other inflammatory arthritis (2). Statistical analyses were performed using STATA 15.0. The z-score test statistic was used to test the difference between two population proportions of revision rates. MAPT scores are reported as mean, median, and standard deviation. The non-parametric Wilcoxon paired signed-rank test was used to test the difference in median MAPT scores before and after TKR, and the two-sample Wilcoxon rank-sum (Mann-Whitney) test was used to compare postoperative MAPT scores between two age groups and MAPT scores between genders. A p value of <0.05 was considered significant.
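The proportion comparison can be sketched as below with a pooled two-proportion z-test. The local counts (5 revisions of 322 TKRs) come from the text; the size of the national comparison cohort is an assumed placeholder, since the registry denominator is not given here.

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                    # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))   # pooled standard error
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))                # two-sided p value

# local: 5/322 (1.6%); national: 2.1% of an assumed 100,000 TKRs
z, p = two_proportion_z(5, 322, 2100, 100_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p is non-significant, as in Table 1
```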
RESULTS
The mean age of patients receiving TKR was 69.8 (46 to 86) years, and 101 (62.4%) were female. A total of 322 TKRs with the ATTUNE system were performed between September 2014 and December 2015 for osteoarthritis of the knee as the primary diagnosis. Five knees received revision surgery, a revision rate of 1.6% at a mean follow-up of 2.6 years (2.0 to 3.2). Compared with ATTUNE TKRs performed at other hospitals in Australia (0.7%), there was no statistical difference (p=0.090). Our revision rate was also lower than that of other TKRs nationwide (2.1%), again with no statistical difference (p=0.508) (Table 1). Figure 2 illustrates local ATTUNE TKR cumulative revision rates alongside both other ATTUNE and other TKRs in Australia [18]. Reasons for revision included infection (2 knees, 0.6%), with both individuals receiving a change of insert. One knee (0.3%) underwent tibial and femoral component revision secondary to a medial tibial plateau fracture; this occurred because one of the pin sites on the tibial component lay near the cortex of the bone, creating a stress riser and a weak point for fracture, with subsequent loosening. One knee (0.3%) underwent patella resurfacing secondary to patellofemoral pain, and one knee (0.3%) had a polyethylene exchange for instability. Of the 87 patients who returned a postoperative MAPT, 82 had an improved score.
A total of 5 of the 162 TKRs required manipulation under anaesthesia (3%). A further nine knees presented for investigation of knee pain; of the revised knees, two were revised for infection, one for fracture, one for patellofemoral pain, and one for instability.
DISCUSSION
The ATTUNE Knee System has been used in Australia since 2013, and few studies are available looking at the outcomes of this new system. We looked at a combination of both surgical outcomes and a patient-based outcome to determine the early results of the ATTUNE TKR.
In our sample of 322 TKRs involving the ATTUNE Knee System for osteoarthritis, a revision rate of 1.6% was found at a mean follow-up of 2.6 years (2.0 to 3.2). This was higher than the national ATTUNE TKR revision rate but lower than that of all other TKR systems available in Australia; however, no statistically significant difference was found in either comparison. Whether a true difference exists between the cohorts is unknown, because patient characteristics that might influence implant survivorship were unavailable for comparison owing to the study's retrospective design. Additionally, as the ATTUNE Knee System is relatively new and our series included all cases, including those in the learning curve, this may be one factor contributing to the higher revision rate in the local cohort compared with national ATTUNE rates.
An individual's osteoarthritis disease severity is measured by the Hip and Knee Multi-Attribute Prioritisation Tool, which aids surgeons in prioritising individuals requiring joint replacement surgery [14]. In this study, individuals were sent a preoperative MAPT as part of routine care, and the MAPT questionnaire was therefore used to compare each individual's reported outcome before and after TKR. Both the construct validity and the reliability of the MAPT questionnaire have been demonstrated previously [14]. The MAPT had seldom been used to assess patients' postoperative outcomes before; it was chosen for this study because it was our institution's routine patient-reported functional assessment tool at the commencement of the study. One study that used the MAPT postoperatively found a significant difference between preoperative and postoperative MAPT scores, with 94.8% of patients who had received a TKR showing improved MAPT scores 6 months postoperatively [19]. In comparison, this study showed that, at a minimum of two years, 94.3% of patients who received an ATTUNE TKR had an improved MAPT score. In this study, 86.7% of patients who returned a postoperative MAPT had a good outcome, indicated by a MAPT score of ≤ 20.
Other studies of patient satisfaction in individuals who received an ATTUNE TKR found no statistical difference in overall satisfaction compared with the PFC Sigma [12]. Additionally, at two-year follow-up only 2.1% were dissatisfied with their TKR, as determined by a satisfaction score of less than five on a visual analog scale. Ranawat et al. [12] discussed possible reasons for increased satisfaction with the ATTUNE TKR, including decreased anterior knee pain and crepitus compared with the PFC Sigma [12]. Furthermore, Indelli [13] used the Knee Society Score and the Oxford Knee Score to assess the clinical outcome of the ATTUNE TKR and found that 98% had good to excellent clinical outcomes, although this was not statistically different from the PFC Sigma. Also, Martin [8] found that patients implanted with a posterior-stabilized ATTUNE TKR had much less crepitus than with the PFC Sigma at two years postoperatively (0.8% vs. 9.4%).
At 6-12 weeks postoperatively, patients had a mean total knee ROM of 100.6°. Guild and Labib [20] found a similar total ROM of 103.0° in the NexGen LPS at 6 weeks. It should be noted that maximal postoperative ROM was not recorded and is expected to be higher at the two-year interval [21]. Studies that measured postoperative total knee ROM in patients who received the ATTUNE Knee System reported mean values of 117.0° [12], 123.0° [13], and 120.5° [22], similar to other TKR systems [13,22]. Additionally, owing to this study's retrospective nature, preoperative total ROM and BMI were not collected, and both have previously been recognised to affect outcome [23]. The overall complication rate was relatively low, with an infection rate of 0.6%, which is within the expected range.
CONCLUSION
In conclusion, our findings suggest the ATTUNE TKR has revision rates comparable to other TKRs currently available in Australia. Furthermore, patient-reported outcome was good 2.4 years postoperatively, with the majority of patients having a good outcome. All patients in our learning curve were included, and revision rates were cross-referenced with the AOANJRR. This study was the first to use the MAPT questionnaire to assess patient-reported outcomes post-TKR. Longer-term follow-up of the ATTUNE Knee System is required for comparison with other TKR systems available in Australia. | 2019-05-26T13:35:32.170Z | 2019-04-08T00:00:00.000 | {
"year": 2020,
"sha1": "729ef51c85e2d1aa47befedacafc28c4a63083ff",
"oa_license": "CCBYNC",
"oa_url": "https://biomedscis.com/pdf/OAJBS.ID.000201.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "fb757cc031e15f6f5d6fdd310f79407d624762f1",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
147706516 | pes2o/s2orc | v3-fos-license | Study on Precipitation and Growth of TiN in GCr15 Bearing Steel during Solidification
In this paper, the precipitation thermodynamics and growth kinetics of TiN inclusions in GCr15 bearing steel during solidification were calculated in more detail. A more reasonable formula for calculating the segregation of the solute elements was adopted and the stability diagram of TiN precipitation considering solidification segregation was given. By solving equations, the change of the solute element content before and after TiN inclusion precipitation was calculated, and the results were substituted into the kinetic formula of the inclusion growth, which made the kinetic calculation more accurate. Results showed that the most effective way to reduce the precipitation of TiN is to increase the cooling rate and decrease the contents of Ti and N in steel. The effect of Ti content on the size of TiN inclusions is greater than that of N content.
Introduction
Bearing materials require a high fatigue life, which is closely related to the purity of the steel. In particular, brittle oxide inclusions and punctiform non-deformable inclusions in steel are extremely harmful to the fatigue life of the bearing material [1]. With the development of clean-steel smelting technology, the cleanliness of steel has been greatly improved, which gradually weakens the influence of such inclusions, while the effect of TiN inclusions, which have higher hardness and brittleness, on the fatigue life of bearing steel has become more prominent. The effect of TiN inclusions on the fatigue life of bearing steel is much greater than that of oxide inclusions of the same quantity; the impact of a 6 µm nitride inclusion on fatigue performance is equivalent to that of an oxide with an average size of 25 µm [2].
The precipitation of TiN in bearing steel has been studied extensively. Zhou et al. [3] and Yang et al. [4] studied the precipitation behavior of TiN during solidification, and the formation of TiN was thermodynamically calculated and experimentally analyzed by Pak et al. [5] and Fu et al. [6], who pointed out that Ti content in high quality bearing steel should be properly controlled to reduce TiN formation.
However, in these studies, the model used for calculating the segregation of solute elements does not take into account the back diffusion of solute elements, so the predicted solute enrichment becomes infinite as the solid fraction approaches 1, which is unreasonable. In addition, in the calculation of inclusion particle growth, the change of solute element content with time was not considered, so the integration result is doubtful.
On the basis of previous studies, the precipitation thermodynamics and growth kinetics of TiN inclusions in GCr15 bearing steel during the solidification process were calculated in more detail.
A more reasonable formula for calculating the segregation of solute elements was adopted and the stability diagram of TiN precipitation considering solidification segregation was given. By solving the developed equations, the change of solute element content during the TiN inclusion precipitation was calculated. Then, the calculated results were substituted into the kinetic formula of the inclusion growth, which made the kinetic calculation more accurate. Meanwhile, the effects of Ti content, N content and cooling rate on the size of TiN are discussed, which provides a theoretical support for reducing the size of TiN inclusions in bearing steel and reducing the damage to fatigue life.
The Chemical Composition of GCr15 Bearing Steel and Temperature of Solidus and Liquidus Line
The chemical composition of GCr15 bearing steel is shown in Table 1. The liquidus temperature ($T_L$) and solidus temperature ($T_S$) of GCr15 bearing steel are calculated according to Equation (1):

$$T_{L(S)} = T_m - \sum_B \Delta T_{L(S)} \cdot w_B \qquad (1)$$

where $T_m$ is the melting point of pure iron, 1809 K, $w_B$ is the mass fraction of element B in the steel, and $\Delta T_L$ or $\Delta T_S$ is the decrease in the melting point of pure iron when the mass fraction of an element in steel increases by 1%. The values of $\Delta T_L$ and $\Delta T_S$ are shown in Table 2 [7]. According to this calculation, the liquidus temperature $T_L$ and solidus temperature $T_S$ of GCr15 bearing steel are 1726 K and 1608 K, respectively.
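The Equation (1) calculation can be sketched as below. The $\Delta T$ coefficients and the composition entries are placeholders standing in for the Table 1 and Table 2 values, which are not reproduced here; the structure of the sum is what the sketch illustrates.

```python
# Liquidus/solidus estimate: each solute lowers the melting point of pure iron
# in proportion to its mass fraction (Equation (1)-style calculation).
T_M = 1809.0  # melting point of pure iron, K

composition = {"C": 1.0, "Si": 0.25, "Mn": 0.35, "Cr": 1.5}   # mass %, assumed
dT_L = {"C": 65.0, "Si": 8.0, "Mn": 5.0, "Cr": 1.5}           # placeholder dT_L
dT_S = {"C": 175.0, "Si": 20.0, "Mn": 30.0, "Cr": 1.5}        # placeholder dT_S

def boundary_temperature(dT):
    return T_M - sum(dT[el] * w for el, w in composition.items())

print(boundary_temperature(dT_L), boundary_temperature(dT_S))
# the paper obtains 1726 K and 1608 K with its Table 2 data
```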
The Equilibrium Activity Product of TiN Precipitation
TiN forms by the reaction

$$[\mathrm{Ti}] + [\mathrm{N}] = \mathrm{TiN(s)} \qquad (2)$$

As reaction (2) reaches the equilibrium state, the equilibrium constant can be written in terms of the activities of dissolved Ti and N:

$$K = \frac{1}{a_{\mathrm{Ti}} \cdot a_{\mathrm{N}}} = \frac{1}{f_{\mathrm{Ti}}[\%\mathrm{Ti}] \cdot f_{\mathrm{N}}[\%\mathrm{N}]} \qquad (3)$$

In Equation (3), $f_{\mathrm{Ti}}$ and $f_{\mathrm{N}}$ are the activity coefficients of Ti and N at the temperature of the solidification front, respectively, which can be converted from their values at 1873 K by Equations (4) and (5) [6]. In Equations (4) and (5), $f_{\mathrm{Ti}}(1873\ \mathrm{K})$ and $f_{\mathrm{N}}(1873\ \mathrm{K})$ are the activity coefficients of Ti and N at 1873 K:

$$\lg f_{\mathrm{Ti}}(1873\ \mathrm{K}) = \sum_j e_{\mathrm{Ti}}^{j} \cdot [\%j] \qquad (6)$$

$$\lg f_{\mathrm{N}}(1873\ \mathrm{K}) = \sum_j e_{\mathrm{N}}^{j} \cdot [\%j] \qquad (7)$$

where $e_{\mathrm{Ti}}^{j}$ and $e_{\mathrm{N}}^{j}$ are the interaction coefficients of element j on Ti and N in molten steel at 1873 K, respectively. The interaction coefficients used in the present study are shown in Table 3 [7]. By substituting the composition of the bearing steel into Equations (2)-(7), the equilibrium solubility product of Ti and N in molten steel is obtained as a function of temperature (Equation (8)). In theory, TiN inclusions will precipitate when the actual solubility product in molten steel is larger than the equilibrium solubility product. According to Equation (8) and the solidus and liquidus temperatures, the stability diagram of TiN precipitation can be drawn, as shown in Figure 1.
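The Wagner-formalism sums in Equations (6) and (7) can be sketched as below. The interaction coefficients and the solute contents are placeholders standing in for the Table 1 and Table 3 data.

```python
# lg f_i(1873 K) = sum_j e_i^j * [%j] for i = Ti, N (first-order Wagner formalism)
e_Ti = {"C": -0.165, "Si": 0.05, "Cr": 0.022}   # placeholder e_Ti^j values
e_N = {"C": 0.13, "Si": 0.047, "Cr": -0.047}    # placeholder e_N^j values
steel = {"C": 1.0, "Si": 0.25, "Cr": 1.5}        # mass % of solutes, assumed

def lg_f(e, composition):
    return sum(e[j] * pct for j, pct in composition.items())

f_Ti = 10 ** lg_f(e_Ti, steel)
f_N = 10 ** lg_f(e_N, steel)
print(f_Ti, f_N)  # activity coefficients of Ti and N at 1873 K
```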
The Segregation of Solute Elements during the Solidification Process
It is defined that C 0 is the initial concentration of solute element. C S and C L are the concentrations of the solute element in the solid and liquid phase, respectively. f S is the volume fraction of solid. k is the equilibrium distribution coefficient of the solute element between the liquid and the solid phase. At equilibrium, it can be concluded from the lever law, Scheil's model [8] assumes that the solute elements diffuse completely in the liquid phase and non- As can be seen from Figure 1, the actual content of Ti and N in molten steel is lower than the equilibrium solubility product of TiN at the solidus temperature and much lower than that at the liquidus temperature, indicating that it is impossible for TiN to precipitate above liquidus temperature. Whether it can precipitate in the solidification process or not, it is necessary to consider the increase of solubility product caused by elements segregation.
The Segregation of Solute Elements during the Solidification Process
It is defined that C 0 is the initial concentration of solute element. C S and C L are the concentrations of the solute element in the solid and liquid phase, respectively. f S is the volume fraction of solid. k is the equilibrium distribution coefficient of the solute element between the liquid and the solid phase. At equilibrium, it can be concluded from the lever law, C L = C 0 (1− f S )+k· f S , C S = kC L . Scheil's model [8] assumes that the solute elements diffuse completely in the liquid phase and non-diffuse completely in the solid phase, and Scheil's equation C L = C 0 (1 − f S ) k−1 is given. When the volume fraction of solid f S is close to 1, the concentrations C S or C L in Scheil's equation will become infinite, which is obviously unrealistic. Brody and Flemings [9] assume that the solute diffuses completely in the liquid phase and partially in the solid phase, and the segregation equation is given. Clyne and Kurz [10] have revised the coefficients and proposed the C-K equation, which is widely used in most studies. On this basis, Ohnaka [11] has proposed a more elegant approximation to the microsegregation problem with back diffusion. Finally, Kobayashi [12] has developed an exact analytical solution to the microsegregation problem and has also provided some higher order approximations, as shown in Equation (9): With where D S is the diffusion coefficient of the solute element in the solid phase. t f is the local solidification time, which is calculated by the formula t f = . R C is the cooling rate and the value is set to 5 K/s. L is the secondary dendrite arm spacing, which can be expressed as a function of cooling rate and generally calculated by the formula L = 146 × 10 −6 R −0.39 C . Moreover, the temperature in front of the solid-liquid interface during solidification can be expressed by the following formula: The segregation of Ti and N during solidification can be calculated, based on the different segregation models, as shown in Figure 2a. The lever law describes the solidification process at equilibrium. Scheil's model is an extreme case, and it assumes that solute elements diffuse completely in the liquid phase and non-diffuse completely in the solid phase. The widely used C-K model has not considered the back diffusion of solute elements in the solid phase. Therefore, the more reasonable model proposed by Kobayashi [13] was adopted and its results were verified by previous studies. The segregation of Ti and N during solidification based on the segregation Equation (9) is illustrated in Figure 2b.
The Stability Diagram of TiN Inclusions Precipitation Considering Solidification Segregation
Due to the segregation of solute elements, such as Ti and N, during solidification, TiN will still precipitate even if the initial solubility product of Ti and N in molten steel is lower than the equilibrium solubility product of TiN at solidus temperature. By defining the segregation ratio of , and considering solidification segregation, the actual solubility product in molten The relationship between the product of the segregation ratio of Ti and N (p Ti p N ) and the volume fraction of solid (f S ) is shown in Figure 3. It can be seen from Figure 3 that the actual concentration Qʹ of Ti and N in molten steel has a maximum when solidification is proceeding. If the actual solubility product is always less than the equilibrium solubility product during solidification, TiN will not precipitate; on the contrary, TiN will precipitate during solidification. From Equation (8), it can be derived that: Based on the above calculation, the stability diagram of TiN inclusions precipitation considering solidification segregation is given, as shown in Figure 4.
The Stability Diagram of TiN Inclusions Precipitation Considering Solidification Segregation
Due to the segregation of solute elements, such as Ti and N, during solidification, TiN will still precipitate even if the initial solubility product of Ti and N in molten steel is lower than the equilibrium solubility product of TiN at solidus temperature. By defining the segregation ratio of element p = C L C 0 , and considering solidification segregation, the actual solubility product in molten steel The relationship between the product of the segregation ratio of Ti and N (p Ti p N ) and the volume fraction of solid ( f S ) is shown in Figure 3.
The Stability Diagram of TiN Inclusions Precipitation Considering Solidification Segregation
Due to the segregation of solute elements, such as Ti and N, during solidification, TiN will still precipitate even if the initial solubility product of Ti and N in molten steel is lower than the equilibrium solubility product of TiN at solidus temperature. By defining the segregation ratio of , and considering solidification segregation, the actual solubility product in molten The relationship between the product of the segregation ratio of Ti and N (p Ti p N ) and the volume fraction of solid (f S ) is shown in Figure 3. It can be seen from Figure 3 that the actual concentration Qʹ of Ti and N in molten steel has a maximum when solidification is proceeding. If the actual solubility product is always less than the equilibrium solubility product during solidification, TiN will not precipitate; on the contrary, TiN will precipitate during solidification. From Equation (8), it can be derived that: Based on the above calculation, the stability diagram of TiN inclusions precipitation considering solidification segregation is given, as shown in Figure 4. It can be seen from Figure 3 that the actual concentration Q of Ti and N in molten steel has a maximum when solidification is proceeding. If the actual solubility product is always less than the equilibrium solubility product during solidification, TiN will not precipitate; on the contrary, TiN will precipitate during solidification. From Equation (8), it can be derived that: Based on the above calculation, the stability diagram of TiN inclusions precipitation considering solidification segregation is given, as shown in Figure 4. The initial composition of Ti and N in molten steel is [%Ti] = 0.0030 , [%N] = 0.0060 , respectively. Although the initial solubility product is lower than the equilibrium solubility product of TiN at solidus temperature, the actual solubility product is higher than the equilibrium solubility product due to the segregation of elements, giving rise to the fact that the TiN will still precipitate during solidification. The red dotted line in Figure 4 indicates that unless the Ti and N contents in molten steel are controlled in the area below the red line, TiN inclusions will precipitate during solidification. That is to say, the contents of Ti and N on the red line are the critical values that determine whether TiN can precipitate during solidification or not. Therefore, in the production of bearing steel, the contents of Ti and N elements in molten steel must be well controlled in order to avoid the formation of TiN inclusions in the liquid and solid-liquid phases.
The Precipitation of TiN Inclusions during Solidification
As solidification proceeds, the contents of Ti and N in molten steel increase gradually due to segregation. When the actual solubility product in molten steel exceeds the equilibrium solubility product, TiN inclusions will precipitate. According to the initial composition of the molten steel and the above calculation, TiN inclusions begin to precipitate when the volume fraction of solid (f_S) is greater than 0.92, as shown in Figure 5. The masses of the Ti and N elements in the TiN inclusions must meet the stoichiometric ratio, that is, the mass ratio of the Ti and N removed from the liquid phase is 48/14. In addition, following the precipitation of TiN, the contents of Ti and N remaining in the liquid must satisfy the equation of the solubility product at equilibrium, as expressed by Equations (12) and (13). Given the above, the contents of Ti and N in molten steel during solidification are shown in Figure 6.
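The two conditions just stated, removal of Ti and N in the mass ratio 48/14 and the equilibrium solubility product for the contents left in the liquid, fix the post-precipitation composition through a simple mass balance. A minimal sketch is given below; the value of the equilibrium product is an assumed placeholder.

```python
import numpy as np

def remaining_Ti_N(Ti_L, N_L, K_eq):
    """Contents (mass %) left in the liquid after TiN precipitation.

    Ti and N are removed in the mass ratio r = 48/14 until
    (Ti_L - r*x) * (N_L - x) = K_eq, where x is the N removed."""
    r = 48.0 / 14.0
    a, b, c = r, -(Ti_L + r * N_L), Ti_L * N_L - K_eq
    disc = b * b - 4.0 * a * c
    x = (-b - np.sqrt(disc)) / (2.0 * a)   # smaller root keeps both contents positive
    x = max(x, 0.0)                        # no precipitation if already below K_eq
    return Ti_L - r * x, N_L - x

# example with arbitrary illustrative numbers
print(remaining_Ti_N(Ti_L=0.02, N_L=0.03, K_eq=1.0e-4))
```

Applying this balance at each solidification step, with the segregated contents as input, yields curves of the kind shown in Figure 6.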
The Basic Equation of TiN Inclusions' Growth Dynamics
According to the comparison of the diffusion coefficient of N in molten steel (D_L-N = 3.25 × 10^−7 · exp(−11500/(RT)) m²/s [7]) and that of Ti (D_L-Ti = 3.1 × 10^−7 · exp(−11500/(RT)) m²/s [13]), the diffusion of Ti is slower than that of N in liquid steel and the enrichment of Ti is easier than that of N at the solidification front. Therefore, the solute element N, which spreads faster, is the restrictive factor for the growth of TiN inclusions. Based on Fick's first law, the dynamic formula of TiN inclusions' growth is derived. For the formula of inclusions' growth, Hong and DebRoy [14,15] proposed a formula which is similar in form to the formula in this study, though with less detail of the subsequent calculations. The formula used by Goto et al. [13,16] is almost the same as that used in this study, but it appears to be incorrect in form.
As shown in Figure 7, C_L-N is the concentration of N in molten steel; C_S-N is the concentration of N at the inclusion-molten steel interface; C_e-N is the concentration of N at equilibrium; r_m is the radius of the imaginary molten steel balls and r is the radius of the TiN inclusions.
According to Fick's first law, Equation (14) can be obtained. Integrating Equation (14) gives the growth rate of the inclusion. Since the controlling step of TiN growth is the diffusion of N in molten steel, the concentration of N at the inclusion-molten steel interface is equal to that at equilibrium, i.e., C_S-N = C_e-N.
Equation (17) is then obtained. Assuming that r_m is infinite, the expression simplifies step by step into the final growth equation. In the above equations, n_N is the mole number of the N element and n_TiN is the mole number of the TiN inclusion; m_N is the mass of the N element and m_TiN is the mass of the TiN inclusion; c_N is the concentration of the N element; t is the growth time of the inclusion particles; [%N]_L is the mass percent concentration of N in molten steel, and [%N]_e is the mass percent concentration of N at equilibrium; D_L-N is the diffusion coefficient of N in molten steel; M_N is the atomic weight of N and M_TiN is the molecular weight of TiN, respectively; ρ_metal is the density of liquid steel and ρ_TiN is the density of TiN, valued at ρ_metal = 7070 kg/m³ and ρ_TiN = 5430 kg/m³, respectively [4,17,18].
The Maximum Size of TiN Inclusion
Due to the segregation of solute elements during solidification, [%N]_L in Equation (22) changes continuously with the solid fraction. According to the results obtained by solving the equations in Section 2.5, the relationship between [%N]_L − [%N]_e and the solidification time t_f can be obtained, as shown in Figure 8.
where r 0 is the initial radius of the inclusion particle, and r t is the radius of the inclusion particle at time t. According to the above calculation, the theoretical maximum size of TiN inclusions at the end of solidification is obtained, as shown in Figure 9. Although the initial radius of the inclusion particle r 0 is included in Equation (27), it is an integral result. According to the calculation in this study, TiN inclusions are formed during solidification. The criterion for judging whether TiN precipitates is that the actual solubility product is larger than the equilibrium solubility product, so the initial radius r 0 is set to 0. It is possible that TiN inclusions increase with oxides or other types of inclusions as a core, but the growth of composite inclusions is complex and beyond the scope of this study.
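The maximum-size estimate can be sketched as a simple numerical integration of the diffusion-controlled growth law, starting from r_0 = 0 as argued above. The parabolic form below is assembled from the quantities defined in the text (D_L-N, the densities, the molecular weights, and [%N]_L − [%N]_e); it stands in for Equations (14)-(27), whose exact expressions are not reproduced here, and every numerical input is an illustrative assumption.

```python
import numpy as np

R = 8.314                                  # gas constant (units assumed, J/(mol K))
rho_metal, rho_TiN = 7070.0, 5430.0        # densities from the text, kg/m^3
M_N, M_TiN = 14.0, 62.0                    # atomic/molecular weights, g/mol

def D_L_N(T):
    """Diffusion coefficient of N in liquid steel, m^2/s (form quoted in the text)."""
    return 3.25e-7 * np.exp(-11500.0 / (R * T))

def max_radius(delta_N, t_grow, T=1700.0, r0=0.0):
    """Integrate r dr = K * ([%N]_L - [%N]_e) dt over the growth period.

    delta_N : supersaturations [%N]_L - [%N]_e (mass %) on a uniform time grid
              spanning the growth time t_grow (s), e.g. read off Figure 8."""
    K = (rho_metal / rho_TiN) * (M_TiN / M_N) * D_L_N(T) / 100.0
    dt = t_grow / len(delta_N)
    r_sq = r0 ** 2 + 2.0 * K * np.sum(np.maximum(delta_N, 0.0)) * dt
    return np.sqrt(r_sq)

# Example: a constant 10 ppm supersaturation; a shorter growth time, i.e. a higher
# cooling rate R_C = (T_L - T_S)/t_f, gives a smaller maximum radius.
for t_grow in (2.0, 8.0, 32.0):
    print(t_grow, "s ->", round(max_radius(np.full(100, 0.0010), t_grow) * 1e6, 2), "um")
```

Sweeping the cooling rate through the growth time in this way reproduces the qualitative trend discussed in the next section: faster cooling leaves less time for growth and therefore smaller TiN inclusions.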
The Effect of Cooling Rate on the Maximum Size of TiN Inclusions
The effect of cooling rate on the segregation of the solute elements Ti and N is shown in Figure 10. The cooling rate has little effect on the segregation, while it affects the solidification time. The larger the cooling rate, the shorter the solidification time, which affects the maximum size of TiN inclusions.
The effect of cooling rate on the maximum size of TiN inclusions with different initial Ti and N contents is shown in Figure 11. The higher the cooling rate, the smaller the maximum size of the precipitated TiN inclusions. However, when the cooling rate is sufficiently high, the maximum size of TiN inclusions is less affected by the cooling rate. In order to avoid the formation of large-sized TiN inclusions, the cooling rate should be controlled to at least 20 K/s. Zhao [19] calculated the maximum size of TiN inclusions when [%Ti]_0 = 0.0035, [%N]_0 = 0.0053, and the cooling rate was 0.5 K/s, 5 K/s and 50 K/s, respectively. Zhao [20] calculated the maximum size of TiN inclusions when [%Ti]_0 = 0.0030, [%N]_0 = 0.0050, and the cooling rate was 4 K/s, 6 K/s, 8 K/s, 10 K/s, and 12 K/s, respectively. In this study, the maximum size of TiN inclusions was calculated based on the data reported by Zhao [19] and Zhao [20]. As can be seen from Figure 11, the results of this study are lower than those calculated by Zhao [19] and Zhao [20]. The reason is that different segregation models are used, which have a great influence on the contents of Ti and N in molten steel; using Scheil's model or the C-K model results in a larger segregation value. Regardless of the segregation model used, for the same segregation model, the size of TiN inclusions has little relationship with the segregation of the solute elements Ti and N. However, different segregation models result in different contents of Ti and N elements in molten steel, which affect the size of inclusions in two aspects. Firstly, different contents of Ti and N elements lead to a difference in the time when TiN begins to precipitate; the higher the concentration of the solute elements, the longer the time of inclusions' growth. Secondly, different segregation models lead to different values of [%N]_L − [%N]_e in Equation (22); the larger the concentration gradient, the larger the size of inclusions.
The Effect of Ti and N Contents in Molten Steel on the Maximum Size of TiN Inclusions
The initial contents of Ti and N in molten steel have a decisive influence on the precipitation and growth of TiN. The straightforward way to reduce the size of the precipitated TiN inclusions is to reduce the Ti and N contents in molten steel. The maximum size of precipitated TiN inclusions at different initial Ti and N contents is shown in Figure 12. When the N content is constant, reducing the Ti content in molten steel can effectively reduce the precipitation of TiN inclusions and the size of TiN inclusions. In particular, when the Ti content is low, the effect is more significant. Even when the content of N is 70 ppm, decreasing the content of Ti from 30 ppm to 20 ppm reduces the maximum size of the TiN inclusions from 5.7 µm to 1.0 µm. In contrast, when the Ti content is constant, the maximum size of TiN inclusions can be uniformly reduced by decreasing the N content in the steel. For every 10 ppm decrease in N content, the maximum size of TiN can be reduced by approximately 2 µm. Therefore, it is concluded that the effect of Ti content on the size of TiN inclusions is greater than that of N content.
In the steelmaking process, the key to controlling the N element is the strength of the vacuum treatment and the result of the atmosphere protection during casting. In general, it is difficult to reduce the N content in steel to less than 50 ppm. The key to controlling the Ti element lies in the usage of ferroalloys of different quality. Titanium is an element that cannot be readily removed in the steelmaking process. In the smelting of bearing steel, the main source of titanium is titanium-containing ferrochromium. The content of titanium in different grades of ferrochromium varies greatly, about 0.01%-0.5%. The use of high-quality ferroalloys with low Ti content has a direct effect on reducing the Ti element in steel, but it brings about an increase in cost. In order to produce high-quality bearing steel, the contents of Ti and N must be strictly controlled to avoid the precipitation of TiN during solidification and to keep the precipitated TiN inclusions at a small size, so as to reduce the adverse effect on the fatigue life of bearing steel.
Conclusions
1. The precipitation model of TiN in GCr15 bearing steel during solidification was established. At the solidification front, as the solid fraction increases, the Ti and N elements segregate. When the thermodynamic conditions of TiN inclusion formation are satisfied, the TiN inclusions precipitate in the solid-liquid zone; 2. Before the precipitation of TiN inclusions, the contents of Ti and N increase continuously with the increase of the solid fraction. After the precipitation of TiN inclusions in the liquid, the contents of Ti and N decrease with the increase of the solid fraction; 3. The cooling rate of molten steel has no significant effect on the segregation of the Ti and N elements at the solidification front, but it has a significant effect on the size of the precipitated TiN. As the cooling rate increases, the growth time of TiN inclusions decreases, and the size of TiN inclusions decreases accordingly; 4. The most effective way to reduce the precipitation of TiN is to increase the cooling rate and decrease the contents of Ti and N in the steel. The effect of Ti content on the size of TiN inclusions is greater than that of N content. | 2019-05-09T13:09:54.056Z | 2019-05-01T00:00:00.000 | {
"year": 2019,
"sha1": "83e1e63c30ddd09501c229c0f155e0e58335c553",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/12/9/1463/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "83e1e63c30ddd09501c229c0f155e0e58335c553",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
17025781 | pes2o/s2orc | v3-fos-license | WMicaD: A New Digital Watermarking Technique Using Independent Component Analysis
This paper proposes a new two-mark watermarking scheme that is based on the independent component analysis (ICA) technique. The first watermark is used for ownership verification while the second one is used as the copy ID of the image. Using a small-sized support image, the extraction is carried out on size-reduced level, bringing computational advantage to our method. The new method, undergoing a variety of experiments, has shown its robustness against attacks and its capability of detecting tampered area in the image.
INTRODUCTION
Digital watermarking, in which some information called the watermark is embedded directly and imperceptibly into original data (the so-called work), is one of the effective techniques to protect digital works from piracy [1,2]. Once embedded, the watermark is bound to the work and should be extractable to prove the ownership, even if the work is modified [3]. Besides, it is preferable if the watermark also contains the tracking information about the copies of the work, that is, the copy ID. Because of its importance in digital media, watermarking has been extensively studied in recent years, with many approaches such as Fourier transform, Wavelet transform, QIM (quantization index modulation), and ICA (independent component analysis).
The idea of applying ICA to watermarking has been introduced in several studies, such as in the works of Zhang and Rajan [4], Gonzalez et al. [5], Bounkong et al. [6], and some others [7][8][9]. The similarity between ICA and watermarking schemes and the blind separation ability of ICA are the reasons that make ICA an attractive approach for watermarking.
In this contribution, we develop a novel method called WMicaD (watermarking by independent component analysis with dual watermark) that aims for the two above-mentioned goals: verifying the ownership and tracking the copies. To do it, the WMicaD method employs a dual watermark embedding scheme and an ICA-based extraction scheme. While the two watermarks allow us to verify the ownership as well as to track the copy ID, the ICA algorithm and watermark modification scheme allow us to extract the watermark with a single small-sized support image, the key image, without any information about the embedding parameters. Moreover, since the watermark extraction is carried out on size-reduced images, WMicaD gains computational advantage. In summary, our proposed method has the following characteristics.
(i) The size of the key image is much smaller than the original image. Thus, we need less storage memory space. Besides, the watermarked image may be made public if necessary. (ii) The ICA-based extraction scheme does not require the original image and the watermarks. Also, the embedding parameters can be any arbitrary numbers. (iii) The extraction is carried out on the down-sized images. It provides computational advantage compared to the extraction scheme with original size of the test image. (iv) The proposed watermarking algorithm can serve for both ownership verification and image authentication. This paper is organized as follows. An overview of ICA and its similarity with watermarking is shown in Section 2. The WMicaD embedding and extraction schemes are detailed in Sections 3, 4, 5, and 6. We provide the computer simulations in Section 7. Finally, in Section 8, we conclude and discuss the issues related to the proposed algorithm.
WATERMARKING USING ICA
Independent component analysis (ICA) [10] is an important technique in signal processing whose goal is to unveil the hidden components from given observations. Assuming that the observed signals are mixtures of unknown independent sources, the ICA is carried out by finding a transform of the observation so that the new signals are as independent as possible [11]. Because of its blind extraction ability, many algorithms have been developed for ICA, for example, Infomax [12], FastICA [13], and ThinICA [14].
Shown in Figure 1 is the full ICA model, which includes a mixing scheme and a demixing scheme. In the mixing scheme, the observed signals are generated by an unknown linear combination of the unknown sources. The scheme can be represented mathematically as x = As, where s = [s_1, . . . , s_N]^T is a vector of original signals, and A_{N×N} is a mixing matrix representing the unknown combination. This mixing scheme is similar to a watermark embedding scheme if we consider the work and the watermarks as unknown sources, and the watermarked images as the observations. The goal of the ICA demixing scheme is to recover the hidden sources s_i, given the observations. It is similar to the watermark extraction scheme, where the watermarks are extracted from watermarked images. ICA carries out this task by maximizing the statistical independence criteria among the outputs y_1, . . . , y_N via a demixing matrix B: y = Bx. When converged, B will be an inverse of A up to some permutations and scales, and y_1, . . . , y_N will be a permutation of the unknown sources s_1, . . . , s_N. That is, if an ICA demixing scheme is applied on watermarked images, the outputs will be the embedded watermarks and the work.
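The mixing/demixing model can be illustrated with a few lines of generic ICA code. The snippet below mixes three independent signals with a random matrix and recovers them with FastICA; it demonstrates the blind-separation idea only, not the WMicaD pipeline itself, and the source signals are arbitrary examples.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n = 5000
s = np.c_[np.sign(rng.standard_normal(n)),        # binary-like source (a "watermark")
          rng.uniform(-1.0, 1.0, n),              # uniform source
          np.sin(np.linspace(0.0, 60.0, n))]      # deterministic source (the "work")
A = rng.standard_normal((3, 3))                   # unknown mixing matrix
x = s @ A.T                                       # observations: x = A s

y = FastICA(n_components=3, random_state=0).fit_transform(x)
# each recovered component matches exactly one source, up to order and scale
print(np.round(np.abs(np.corrcoef(s.T, y.T)[:3, 3:]), 2))
```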
Being interested in the potential of ICA, several authors have focused their studies on ICA-based watermarking [4,5,[7][8][9]. As ICA algorithms require enough number of mixtures to run (the number of mixtures has to be equal to or more than the number of sources), a common challenge for ICA-based watermarking methods is to create different observations from the watermarked images and additional data. In [4,5], the authors partitioned the original image into small blocks. The ICA algorithm was applied on these blocks to extract the independent components (ICs). Some of the less significant ICs were replaced by the watermarks. The watermarked image was then constructed from this new set of ICs. Major disadvantages of this approach, however, are the need of a large number of ICs and the high computational workload.
In [7], the authors used the original image and one of the two watermarks as the additional data. This is not preferable as the original image must be presented whenever ones want to proof the image ownership. In [9], the original image is not required but another watermarked image embedded by the same watermarks is needed. The extraction cannot be carried out without this large-size supporting image. Our proposed WMicaD method attempts to reduce the size of the supporting image by a watermark modification process. The modification is applied on the watermark so that it reveals different content on different image size.
WATERMARK MODIFICATION
In this paper, we treat a gray-level image, I of size M × N, as a matrix of M × N whose entries are the pixel intensity values.
The downsizing and upsizing operators
The downsizing operator, denoted by D, resizes an image of size M × N to a k-times smaller image, I_[k] of size (M/k) × (N/k), that is, I_[k] = D(I_{M×N}, k). The (m, n)th entry of the size-reduced image is the average of the pixel values inside a window of size k × k of the original image I_{M×N}, where k is a nonzero positive integer called the "resizing factor," m = 0, 1, . . . , M/k − 1, and n = 0, 1, . . . , N/k − 1. The upsizing operator U, in contrast, duplicates each element of I_{M×N} to every element in a window of size k × k.
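A minimal sketch of the two operators, assuming the image dimensions are multiples of the resizing factor:

```python
import numpy as np

def downsize(I, k):
    """D(I, k): k x k block averaging."""
    M, N = I.shape
    return I.reshape(M // k, k, N // k, k).mean(axis=(1, 3))

def upsize(I, k):
    """U(I, k): duplicate every pixel into a k x k block."""
    return np.kron(I, np.ones((k, k)))

I = np.arange(64.0).reshape(8, 8)
assert downsize(I, 2).shape == (4, 4)
assert np.allclose(downsize(upsize(I, 4), 4), I)   # D(U(I, k), k) returns the original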
Watermark modification
As introduced in Section 2, we aim to embed the two watermarks (W_1 and W_2) into the original image. Hence, in order to apply the ICA algorithm in the extraction scheme, we need at least three mixtures. However, we only have two available observations: a watermarked image and a small supporting image. A simple linear combination of these two images cannot create three independent mixtures. Therefore, our solution is to modify the watermarks under certain conditions so that they reveal different information at different image scales. The first watermark, W_1, is modified in such a way that when it is downsized by a factor k_1, it produces a small-sized watermark, W_1[k_1]; but when W_1 is downsized by a factor k_1·k_2, it produces a null matrix. Mathematically, these properties can be expressed as D(W_1, k_1) = W_1[k_1] and D(W_1, k_1·k_2) = ∅ (Equations (5) and (6)), where ∅ denotes a null matrix. The second watermark, W_2, is modified so that when we downsize and subsequently upsize it again with the same factor, the watermark remains unchanged, which can be expressed as U(D(W_2, k), k) = W_2 (Equation (7)). There are many ways to create watermarks that satisfy (5), (6), and (7). In the appendices of this paper, we introduce a simple modification method to create such watermarks. Also, in Section 5, we explain in detail the use of the watermarks W_1 and W_2.
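The modification schemes sketched in the appendices can be coded directly and the three properties verified numerically. In the sketch below the visual masks are omitted for clarity, the signature sizes and resizing factors are arbitrary examples, and k_2 is taken even so that the chessboard modulation averages to zero.

```python
import numpy as np

def downsize(I, k):
    M, N = I.shape
    return I.reshape(M // k, k, N // k, k).mean(axis=(1, 3))

def upsize(I, k):
    return np.kron(I, np.ones((k, k)))

def chessboard(M, N):
    m, n = np.indices((M, N))
    return np.where((m + n) % 2 == 0, 1.0, -1.0)

def make_W1(S1, k1, k2):
    """Appendix A scheme: upsize by k2, modulate with the +/-1 chessboard, upsize by k1."""
    Z1 = upsize(S1, k2)
    return upsize(Z1 * chessboard(*Z1.shape), k1)

def make_W2(S2, k1, k2):
    """Appendix B scheme: upsize the signature by k1*k2."""
    return upsize(S2, k1 * k2)

rng = np.random.default_rng(1)
k1, k2 = 4, 2
S1 = rng.integers(0, 2, (16, 16)).astype(float)
S2 = rng.integers(0, 2, (16, 16)).astype(float)
W1, W2 = make_W1(S1, k1, k2), make_W2(S2, k1, k2)

assert np.allclose(downsize(W1, k1 * k2), 0.0)           # null-matrix property
assert np.allclose(upsize(downsize(W2, k1), k1), W2)     # invariance under D then U
```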
Shown in Figure 2 is the detail of our WMicaD embedding scheme. A watermarked image I+ is generated by embedding the two watermarks W_1 and W_2 into the original image, I. At the same time, a small-sized key image, K, is generated as the supporting image, which will be used later in the watermark extraction.
We begin the embedding scheme by creating two visual masks V_1 and V_2 for the two watermarks. As discussed in [15], the visual masks help us to increase the embedding strength of the watermarks while maintaining the image's quality and the watermark's invisibility. Our visual masks are computed from the original image, I, using the NVF (noise visibility function) technique [15,16]. Now, we create the watermarks from given signatures, S_1 and S_2. Visual mask V_1 and a modification function M_1 (see the appendices) are applied on S_1 to generate the first watermark, W_1, that satisfies (5) and (6). Visual mask V_2 and modification function M_2 are applied on S_2 to generate the second watermark, W_2, that satisfies (7).
In the last step, W 1 and W 2 are inserted into I to produce watermarked image I + . Meanwhile, W 1 is combined with I and then downsized to produce the key image K. In summary, steps involved in the embedding scheme are given below.
(1) Create two visual masks V 1 and V 2 by NVF method.
The visual mask V_1 can be made different from V_2 by choosing different masking window half-lengths, L_1 ≠ L_2. (2) Create the watermarks using the modification functions M_1 and M_2, where k_1, k_2 are the resizing factors. (3) Create the watermarked image I+ and the key image K. Parameters α and β are called "embedding strengths" and γ is called the "key-image coefficient." These parameters can be any nonzero values in the range of [−1, 1].
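The exact combination used in step (3) is not spelled out in the text above, so the sketch below is only an illustrative reading: the two (already modified and masked) watermarks are added to the image with strengths α and β, and the key image is the downsized combination of the original image and W_1 with coefficient γ.

```python
import numpy as np

def downsize(I, k):
    M, N = I.shape
    return I.reshape(M // k, k, N // k, k).mean(axis=(1, 3))

def embed(I, W1, W2, alpha=0.05, beta=0.05, gamma=0.05, k1=4):
    """Hedged additive-embedding sketch; not the authors' exact equations."""
    I_plus = I + alpha * W1 + beta * W2        # watermarked image
    K = downsize(I + gamma * W1, k1)           # small key image
    return I_plus, K
```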
Shown in Figure 3 is the detail of our WMicaD extraction scheme. We extract the two watermarks from the watermarked image, I+, using an ICA-based technique with support from the key image, K. As discussed earlier, we first have to generate three mixtures and then apply an ICA algorithm on them to obtain the outputs. All of these processes are carried out on size-reduced images.
The steps involved in the WMicaD extraction scheme are given below.
(1) Downsize the watermarked image I+ to the size of the key image K with resizing factor k_1, giving I_1 = D(I+, k_1). (2) Create the image I_4 from I_1 and K by applying the upsizing and downsizing operators with a resizing factor k_2. (3) Create 1D signals from I_1, I_4, and K, where C_2→1 denotes a 2D-to-1D operator.
THE POSTPROCESSING SCHEME
As discussed in [11], one of the ambiguities of ICA is about the output order. In ICA, the outputs will be a permutation of the original sources. That is, we cannot say if the output y 1 corresponds to the source s 1 , or whether y 2 is an estimate of s 2 , and so on. Therefore, we develop a postprocessing scheme for our WMicaD method to identify the corresponding estimates, and to generate the estimates of the signatures from the estimated watermarks.
The postprocessing scheme is based on the correlation between each output Y_i, i = 1, 2, 3, and the watermarked image I+_[k1] (in its downsized version). To measure the similarity between two images, we use the absolute correlation coefficient (abCC). The absolute correlation coefficient between X and Y (both of size M × N) is the absolute value of the normalized cross-correlation of the two images about their means, given by Equation (24). The abCC will approach 0 when two images are uncorrelated, and 1 when the two images are very similar to each other. In the next step, we obtain the original signatures from the watermark estimates. Since the watermarks are created by replicating the owner's signature, S_1, and the copy ID number, S_2, we partition the image Y_i into l subimages, Y_i1, Y_i2, . . . , Y_il, each of size M_S × N_S, where M_S × N_S is the size of the owner's signature. Averaging these subimages yields the estimate of the signature.
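A minimal sketch of the postprocessing step is given below. The abCC is implemented here as the absolute Pearson correlation over all pixels, which is the standard reading of Equation (24) but should be treated as an assumption; tile sizes and the number of outputs are examples.

```python
import numpy as np

def abcc(X, Y):
    """Absolute correlation coefficient between two equally sized images."""
    x, y = X.ravel() - X.mean(), Y.ravel() - Y.mean()
    return abs(float(np.dot(x, y))) / np.sqrt(float(np.dot(x, x)) * float(np.dot(y, y)))

def order_outputs(outputs, I_plus_small):
    """Rank the ICA outputs by similarity to the downsized watermarked image."""
    scores = [abcc(Y, I_plus_small) for Y in outputs]
    return sorted(range(len(outputs)), key=scores.__getitem__, reverse=True)

def estimate_signature(Y, Ms, Ns):
    """Average the replicated (Ms x Ns) tiles of a watermark estimate."""
    M, N = Y.shape
    return Y.reshape(M // Ms, Ms, N // Ns, Ns).mean(axis=(0, 2))
```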
PERFORMANCE ANALYSIS
The robustness of the watermarked images was tested through various simulations under different attacks, including JPEG compression, gray-scale reduction, resizing, and noise addition. Besides, an authentication test was carried out to verify the WMicaD's ability of detecting the tampered area.
Simulation setup
Two binary images (16 × 64), a university name and a copy ID, as shown in Figure 4, were used as the signatures during the embedding scheme. Two well-known gray-scale Lena and Baboon images, each of size 512 × 512, were used as the original images in the simulations. The original images, watermarks, watermarked images, and key images generated by the WMicaD embedding scheme are shown in Figure 5. In the embedding process, the peak signal-to-noise ratio (PSNR) was chosen as the criterion to measure the quality of the watermarked image; the PSNR between an image I and its modification I′ is defined in the usual way from the mean squared error between the two images, where M × N is the size of the two images. For the extraction process, the absolute correlation coefficient (abCC, Equation (24)) between the estimated signature and its original one, |r_{S,Ŝ}|, is chosen as the performance index.
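For 8-bit images the standard PSNR computation looks as follows; the peak value of 255 is the usual convention and is an assumption here, since the exact formula is not reproduced above.

```python
import numpy as np

def psnr(I, I_mod, peak=255.0):
    """PSNR in dB between an image and its modification."""
    mse = np.mean((np.asarray(I, float) - np.asarray(I_mod, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```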
To maintain the quality of the watermarked image and the imperceptibility of the watermarks, the embedding coefficients α, β and the window half-length L used in the visual mask function V were monitored so that PSNR ≥ 43 dB in all experiments. The resizing factors k_1 and k_2 were also appropriately selected so that the key image K is small enough while the watermarks still have adequate details. Details of the parameters are provided in Table 1.
With the chosen parameters, there is no noticeable difference between the original and watermarked images (see Figure 5). Moreover, the size of the key image (128 × 128) is 16 times reduced from the original 512 × 512.
In the next step, test images were generated by applying different attacks/modifications on the watermarked images. The WMicaD extraction and postprocessing scheme were carried out on the test images to estimate the signatures. The estimated signatures were then compared with the original ones, using abCC as the performance index to evaluate the quality of the estimation. In addition, we repeated the simulations with different ICA algorithms, such as SOBI (second-order blind identification) [17], JADETD (joint approximate diagonalization of eigen matrices with time delays) [18], and FPICA (fixed-point ICA) [13], in order to get a more general evaluation. It turned out that their results are almost identical. Thus, in this paper, we only show those simulations that were carried out with SOBI.
Common modification test
In this simulation, we tested the WMicaD method with three common image processing techniques: JPEG compression, gray-scale reduction, and resizing. A JPEG compression tool was used to compress the watermarked images with quality factor ranging from 90% down to 10%. In gray-scale reduction, the gray level was reduced from 256 down to 128, 64, . . . , 8 levels. And in resizing tests, the images were rescaled from 512 × 512 down to 128 × 128, and up to 1024 × 1024.
The results of WMicaD on the three tests are shown in Figure 6 and some illustrations of the estimated signatures are shown in Figure 7. In the two figures, "Expt1" and "Expt2" denote the performance plots of our experiments on the Lena and Baboon images, respectively. The symbols "−W1" and "−W2" represent the results on the first and second watermarks, respectively. As we can see, WMicaD produced good performance on all experiences. The quality of the estimates, in terms of abCC with the original signatures, is high even when the JPEG quality factor or the gray level is reduced to low value. Among the three modifications, simulations on resizing yielded the worst performance. It is probably due to the destruction of the first watermark's properties (5) and (6) when the image is resized, that is, pixel values are interpolated.
For further investigation, we compared the proposed method with several well-known watermarking techniques that work on different processing domains [19]. These techniques include a discrete cosine transform algorithm Cox-DCT [20], a spatial domain algorithm Langelaar-spa [21], and a discrete wavelet transform algorithm Wang-DWT [22]. The Lena images (in Expt1) were used as the original images. Our copy ID signature (the number sequence) was chosen as the watermark. After the embedding process, the distortions of the watermarked images in terms of PSNR were found to be 38.4 dB , 34.2 dB, and 36.7 dB for the Cox-DCT, Wang-DWT, and Langelaar-spa, respectively. It may be noted that in our experiments, the PSNR is found to be 44.9 dB and 43.1 dB for Expt1 and Expt2 (see Table 1). The performance results of the watermark extraction were computed in term of the absolute correlation coefficient and they are shown in Figure 6. As it can be seen in Figure 6, WMicaD provided a competitive performance; it even yielded better results in JPEG and gray-level reduction tests. These are very encouraging results, considering that WMicaD uses two watermarks that are overlapped on each other.
Addition-of-noise test
From some points of view, an attack to the watermarked image can be considered as a noise being added to the image. Therefore, in this section, we investigate the performance of WMicaD under different types of noise, including Gaussian-noise, "salt and pepper" (S&P) noise, and multiplicative noise. Noise range and properties used in the simulations are presented in Table 2.
The simulation results of WMicaD on the noise tests are shown in Figure 8. The method provided good performance on the "S&P" noise and multiplicative noise experiments but not very impressive performance on the Gaussian-noise test. This can be explained from the ICA property. As discussed in [11], in order to get a good ICA estimation, the source signals should be non-Gaussian. Therefore, when the Gaussian-noise was added, it made the sources more Gaussian and hence, a poor performance of the ICA-based extraction scheme.
More simulations on image rotation, cropping, brightness and contrast adjustments, and filtering have been carried out to measure the performance of WMicaD [16]. The method produces very good result on the brightness and contrast adjustment attacks. In the desynchronization attacks, such as rotation and cropping, WMicaD performance is not as good as on the JPEG compression test, but it is better than in the Gaussian-noise attack. For example, in rotation attack, we assumed that the rotation angle was unnoticeable to the extractor, that is, no preinverse rotation operation was applied. The extraction is carried out directly on the rotated image. The results were encouraging, and the estimated signatures are still recognizable even when the image was rotated by 0.25 degree.
WMicaD for detection of tampered area
The previous section has shown the ability of WMicaD in verifying the ownership. In this section, another ability of WMicaD in image authentication is introduced. The following experiment will demonstrate how WMicaD method is able to detect the tampered area in the image. Shown in Figure 9 is Lena image that was tampered by a small portion of the image (the feather portion in the hat's tail area). This portion was copied and maliciously overwritten to another similar place in order to make it undetectable by naked eyes.
Detecting the tampered area
Now, we carry out the extraction scheme and carefully observe the three output images Y 1 , Y 2 , and Y 3 . As it is shown in Figure 10, the tampered area, even if small, is clearly noticeable in the watermark estimates, with the pixel values of the tampered area being much higher than the rest of the images.
Recovering the signatures
After successfully detecting the tampered area, WMicaD is still able to extract the signature from the tampered image by doing an additional step before carrying out the postprocessing scheme. Here, we replace the pixel values in the tampered area (the area where pixel values are significantly high) by the average values of the other pixels (the pixels that are not inside the tampered area). Next, we quantize all the pixels of the image to 256 gray level. Finally, we put the corrected image to the postprocessing scheme to estimate the signatures. And as it is shown in Figure 11, the estimated watermarks and signatures are clearly visible and easy to recognize.
DISCUSSION AND CONCLUSION
In this paper, we have proposed a novel watermarking method called WMicaD that embeds two watermarks into the host image. The unique two-watermark embedding scheme and the ICA-based extraction scheme have brought many interesting properties to WMicaD. Firstly, this dual watermark embedding scheme allows us to achieve two goals at the same time: verifying the ownership of the image and tracking the copy ID of the original image. Unlike other watermarking algorithms that use a sequence of numbers as a single watermark, we apply images as the watermarks. Hence, at the extraction side, the estimated signatures can be easily verified by visual inspection. In addition, overlapping of watermarks makes them harder to be recognized in the host image.
Secondly, utilization of specially tailored watermarks and ICA algorithm in the extraction scheme makes it possible to estimate the watermarks without the original image, and without any information about the embedding parameters. Please note that while ICA is considered as a blind separation method, our WMicaD extraction is not considered as a totally blind watermarking extraction, since it uses a small supporting key image. We can embed the watermark with different embedding strengths (the alpha and beta parameters), and different copy IDs (the second watermark) on different image copies. Since all of the three parameters (alpha, beta, and gamma) can be changed in every image, it is almost impossible for the attackers to know these parameters. Thus, it helps to prevent the watermarks from being discovered or removed.
Theoretically, carrying out the extraction on size-reduced images brings to WMicaD a computational advantage. As seen in the simulations, the size of images was reduced by 4 × 4 times, resulting in a much faster processing time in comparison with the extraction on the original images. Please note that if the other competitive algorithm also applies a down-sizing operation before carrying out the watermark extraction, then our WMicaD might not have clear computational advantages. However, not every algorithm can carry out the extraction on the down-sized images. And even if it is possible, the quality of the estimated signatures is another topic that needs further investigation. In addition, size-reduced images also prevent the attackers from removing the watermarks from the host image, since the small-size estimated outputs are much different from the original one.
Through the simulations, we have used several watermarking algorithms for performance comparison using the absolute correlation coefficient (abCC) as a performance index. It is a good but not a perfect measure. Sometimes, an estimate with a poor abCC is easier to observe than one with a higher abCC. Also, since we are using a two-watermark embedding scheme and carrying out the extraction on size-reduced images, it is hard to have an absolute comparison. The comparison used in the experiments should be considered as an illustration of our WMicaD performance. In addition, the performance varies, depending on the content of the two watermarks as well as the original image.
A. FIRST WATERMARK MODIFICATION
The goal of the first watermark modification function, M_1, is to generate a watermark, W_1, from the owner's signature so that the watermark satisfies (5) and (6). Details of the scheme are provided in the following paragraphs and shown in Figure 12. Let S_1 be an image of size (M/k_1k_2) × (N/k_1k_2) that represents the owner's signature. The scheme to construct the watermark W_1 from the owner's signature is as follows. First, the signature S_1 is upsized by a factor k_2 to create a matrix Z_1. Second, Z_1 is multiplied element by element with a "chessboard" matrix £ to produce Z_2. Finally, Z_2 is upsized by a factor k_1 to generate the watermark W_1. It can be seen that when W_1 is downsized by k_1k_2, the result is a null matrix, satisfying (5). In this scheme, the chessboard matrix £ is a matrix whose (m, n)th entry is defined by £_(m,n) = 1 if (m + n) is even, and £_(m,n) = −1 if (m + n) is odd.
B. SECOND WATERMARK MODIFICATION
The second modification function, M_2, is to create a watermark W_2 that satisfies (7). Beginning with a signature S_2 of size (M/k_1k_2) × (N/k_1k_2), we apply the upsizing operator U on S_2 with the resizing factor k_1k_2 to obtain the watermark W_2 = U(S_2, k_1k_2).
Shown in Figure 13 is an illustration of the second modification scheme, M 2 . The second watermark W 2 of size 8 × 4 is constructed from a signature S 2 of size 2 × 1 by an upsizing operator U with the resizing factors k 1 = 2 and k 2 = 2. It is easy to see that the generated watermark W 2 satisfies (7). | 2014-10-01T00:00:00.000Z | 2008-01-01T00:00:00.000 | {
"year": 2007,
"sha1": "25263b5d6d88eea8c2840d488e288522cc85f7ee",
"oa_license": "CCBY",
"oa_url": "https://asp-eurasipjournals.springeropen.com/track/pdf/10.1155/2008/317242",
"oa_status": "GOLD",
"pdf_src": "CiteSeerX",
"pdf_hash": "01ea1a16032a430d903735534512fc3e301caaae",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
258824722 | pes2o/s2orc | v3-fos-license | Metagenomic analysis reveals gut plasmids as diagnosis markers for colorectal cancer
Background Colorectal cancer (CRC) is linked to distinct gut microbiome patterns. The efficacy of gut bacteria as diagnostic biomarkers for CRC has been confirmed. Despite the potential to influence microbiome physiology and evolution, the set of plasmids in the gut microbiome remains understudied. Methods We investigated the essential features of gut plasmid using metagenomic data of 1,242 samples from eight distinct geographic cohorts. We identified 198 plasmid-related sequences that differed in abundance between CRC patients and controls and screened 21 markers for the CRC diagnosis model. We utilize these plasmid markers combined with bacteria to construct a random forest classifier model to diagnose CRC. Results The plasmid markers were able to distinguish between the CRC patients and controls [mean area under the receiver operating characteristic curve (AUC = 0.70)] and maintained accuracy in two independent cohorts. In comparison to the bacteria-only model, the performance of the composite panel created by combining plasmid and bacteria features was significantly improved in all training cohorts (mean AUCcomposite = 0.804 and mean AUCbacteria = 0.787) and maintained high accuracy in all independent cohorts (mean AUCcomposite = 0.839 and mean AUCbacteria = 0.821). In comparison to controls, we found that the bacteria-plasmid correlation strength was weaker in CRC patients. Additionally, the KEGG orthology (KO) genes in plasmids that are independent of bacteria or plasmids significantly correlated with CRC. Conclusion We identified plasmid features associated with CRC and showed how plasmid and bacterial markers could be combined to further enhance CRC diagnosis accuracy.
Introduction
Colorectal cancer (CRC) is the most common clinical malignant tumor of the digestive system and poses a huge threat to human health and society (Bray et al., 2018). Most CRC patients are diagnosed at an advanced stage and lose the opportunity for radical surgery (Di Nicolantonio et al., 2021). Prompt diagnosis of CRC is essential for effective treatment and favorable prognosis (Tomizawa et al., 2017). Colonoscopy and biopsy are currently considered the gold standard for the screening of CRC (Rex et al., 2006). Fecal occult blood test (FOBT) is non-invasive and the most commonly used method for colorectal cancer screening currently (Faivre et al., 2004;Lee et al., 2020). The specificity of FOBT for CRC detection was 92.4%, but the sensitivity was only 30.8% (Allison et al., 1996). Due to its dependence on tumor tissue bleeding, FOBT has limited sensitivity and accuracy for CRC (Hardcastle et al., 1996). Therefore, there is an urgent need for reliable and efficient biomarkers for the diagnosis of colorectal cancer.
With the development of metagenomic technology, an increasing number of recent studies have highlighted the vital role of the gut microbiome in regulating human health and disease (Ghaisas et al., 2016;Schmidt et al., 2018;Gurung et al., 2020). The gut microbiome may have an impact on the onset and development of CRC (Zamani et al., 2019), while some intestinal bacteria may slow the disease's progression (Chan et al., 2019). The efficacy of gut bacteria as diagnostic biomarkers for CRC has been confirmed (Dai et al., 2018;Liu et al., 2022).
Plasmids play important roles in the evolutionary events of microbial communities, and many plasmid genes are involved in bacterial survival and adaptation to environmental changes (Fondi et al., 2010;Dib et al., 2015). Many bacteria can exchange genetic material through horizontal gene transfer, which is facilitated by plasmids and transposable elements carried by plasmids (Smalla and Sobecky, 2002). It indicates that plasmids should not be disregarded in research. Plasmidomics refers to the whole plasmid DNA of the samples (Brown Kav et al., 2012;Bleicher et al., 2013). With the advancement of next-generation sequencing technology and the development of bioinformatics tools, numerous methods were developed for identifying plasmid sequences in metagenomic data, such as Plasflow (Krawczyk et al., 2018), Plasmidseeker (Roosaare et al., 2018), PlasmidFinder (Carattoli et al., 2014), SCAPP (Pellow et al., 2021), and cBar (Zhou and Xu, 2010). For short-read metagenomic sequencing, the deep neural network-based PlasFlow software currently offers the best trade-off between maximizing plasmid coverage and minimizing false positives (Hilpert et al., 2021). With the help of these techniques, we can examine how intestinal plasmids and plasmid genes change during diseases.
Many human diseases are closely associated with plasmids, particularly those involving antibiotic resistance genes and virulence genes (Cheung et al., 2004;Dolejska and Papagiannitsis, 2018). Enterotoxigenic Escherichia coli (ETEC) causes numerous cases of diarrheal disease worldwide, which is linked to the virulence plasmid pEntYN10 within ETEC (Ban et al., 2015). Emerging research points to the significance of other microbial kingdoms in gastrointestinal disease in addition to gut bacteria (Liu et al., 2022), but no studies on intestinal plasmids in CRC patients have been explored. The primary goal of this study is to examine the key characteristics of the plasmids in the gut microbiomes of CRC patients from eight cohorts worldwide. We seek to expand existing CRC diagnosis biomarkers and develop a more precise diagnosis model using newly discovered plasmid biomarkers.
Public data collection
We used the terms "Colorectal cancer" and "Human gut metagenomics" to search the NCBI database, 1 and we found a total of nine CRC gut metagenomic cohorts. We excluded the Italian 1 https://www.ncbi.nlm.nih.gov/ cohort (PRJNA447983) since we were unable to determine the casecontrol status that matched the sequencing data in that dataset. We selected an Asian cohort from China and a European cohort from Germany as independent validation datasets, and the other six cohorts as training datasets, to ensure the reliability and generalizability of the prediction model. We downloaded fecal metagenomic sequencing data of the eight cohorts in NCBI on CRC patients and healthy controls (Supplementary Table 1). For discovery cohorts (n = 1,123), Accession of China Cohort1 (CHN1) is PRJNA763023 , CRC, n = 100; and Control, n = 100. Accession of China Cohort2 (CHN2) is PRJNA731589 (Liu et al., 2022), CRC, n = 80; and Control, n = 86. Accession of Japan (JPN) is PRJDB4176 (Yachida et al., 2019), CRC, n = 218; and Control, n = 212. Accession of Austria (AUS) is PRJEB7774 (Feng et al., 2015), CRC, n = 46; and Control, n = 63. Accession of France (FRA) is PRJEB6070 (Zeller et al., 2014), CRC, n = 53; and Control, n = 61. Accession of the United States of America (USA) is PRJEB12449 (Vogtmann et al., 2016), CRC, n = 52; and Control, n = 52. For validation cohorts (n = 119), Accession of China Cohort3 (CHN3) is PRJNA514108 (Gao et al., 2022), CRC, n = 32; and Control, n = 44. Accession of Germany (GER) is PRJEB6070 (Zeller et al., 2014), CRC, n = 38; and Control, n = 5. The cohorts' characteristics are listed in Supplementary Table 1.
Microbial ecological analysis
For each sample, the Shannon index of plasmids was used to calculate alpha diversity, and the Bray-Curtis distance was used to calculate beta diversity. Using the "vegan" R package (v 2.6-2) in R software (Oksanen et al., 2022), Shannon's index for each sample and the Bray-Curtis distance between samples were both evaluated. Principal coordinates analysis (PCoA) of the Bray-Curtis dissimilarity index was used to visualize the microbial community structures. Permutational multivariate ANOVA (PERMANOVA) with 999 permutations was performed to reveal plasmid community differences between groups or cohorts (Anderson, 2001).
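The analysis described here was run with the vegan R package; the sketch below shows the same computations (Shannon index, Bray-Curtis distances, classical PCoA) as a Python stand-in, with a random abundance matrix in place of the real sample-by-plasmid table.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def shannon(counts):
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

def pcoa(D, k=2):
    """Classical principal coordinates analysis of a distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                 # double-centred Gower matrix
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:k]
    return V[:, order] * np.sqrt(np.maximum(w[order], 0.0))

abund = np.random.default_rng(0).poisson(5.0, size=(20, 100)).astype(float)  # samples x plasmids
alpha = np.array([shannon(row) for row in abund])
rel = abund / abund.sum(axis=1, keepdims=True)
D = squareform(pdist(rel, metric="braycurtis"))
coords = pcoa(D)                                # ordination coordinates for plotting
```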
Feature selection
Plasmid community batch effects among cohorts were corrected using the "adjust_batch" function of the MMUPHin R package (Ma et al., 2022). We identified differential plasmids as candidate features for the CRC diagnosis models with the "lm_meta" function of MMUPHin. Subsequently, feature selection was performed using the Boruta package (Kursa and Rudnicki, 2010; v7.0.0) with default settings (pValue = 0.01, mcAdj = T, maxRuns = 100). Differential EggNOG-annotated KO genes, CAZy genes, and bacterial species were selected with the same pipeline.
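A sketch of this selection pipeline in R might look as follows; the argument names reflect our reading of the MMUPHin documentation and may vary by version, and `plasmid_abd` (a features-by-samples matrix), `meta`, and its column names are hypothetical placeholders.

```r
library(MMUPHin)
library(Boruta)

# Correct cohort (batch) effects in the features-by-samples abundance table
adj <- adjust_batch(feature_abd = plasmid_abd,
                    batch = "study", data = meta)
abd_adj <- adj$feature_abd_adj

# Meta-analytic differential abundance testing across cohorts
fit <- lm_meta(feature_abd = abd_adj, exposure = "disease",
               batch = "study", data = meta)
diff_feat <- subset(fit$meta_fits, pval < 0.05)$feature

# Boruta feature selection on the differential features (default settings)
bor <- Boruta(x = t(abd_adj[diff_feat, ]),
              y = factor(meta$disease),
              pValue = 0.01, mcAdj = TRUE, maxRuns = 100)
getSelectedAttributes(bor)
```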
Prediction model construction and validation
Random forest prediction models were constructed using the "randomForest" R package with 500 trees (Breiman, 2001). Based on the differential plasmid and bacterial signatures, the random forest prediction model for CRC was trained with 10-fold cross-validation on the discovery cohorts. Model evaluation was performed with cohort-to-cohort transfer validation, leave-one-cohort-out (LOCO) evaluation, and independent validation. In cohort-to-cohort validation, the models were trained on a single cohort and their performance was evaluated on each of the other cohorts. In LOCO evaluation, the models were trained on five of the six cohorts in the discovery dataset and their performance was evaluated on the sixth cohort. Furthermore, an independent validation analysis was conducted to assess the reliability of the microbial features as CRC diagnostic markers, using the two additional datasets from CHN3 and GER.
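As an illustration, the LOCO scheme can be sketched in R as below; this is a minimal sketch rather than the authors' exact code, and `abd` (samples-by-features matrix), `meta` (with `study` and `disease` columns), and `features` are hypothetical placeholders.

```r
library(randomForest)
library(pROC)

# abd: samples-by-features matrix; meta: per-sample 'study' and 'disease' labels
# features: e.g., the Boruta-selected plasmid signatures
cohorts <- unique(meta$study)

# Leave-one-cohort-out: train on five cohorts, evaluate on the held-out sixth
loco_auc <- sapply(cohorts, function(co) {
  train <- meta$study != co
  rf <- randomForest(x = abd[train, features],
                     y = factor(meta$disease[train]),
                     ntree = 500)
  prob <- predict(rf, abd[!train, features], type = "prob")[, "CRC"]
  as.numeric(auc(roc(meta$disease[!train], prob, quiet = TRUE)))
})
loco_auc  # one AUC per held-out cohort
```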
Associations between species and function
Associations between bacteria, plasmids, and their KO genes were assessed by Spearman correlation using the "corAndPvalue" function of the "WGCNA" R package (Langfelder and Horvath, 2008).
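One workable way to obtain Spearman correlations with corAndPvalue is sketched below: since Pearson correlation of rank-transformed data equals Spearman's rho, the matrices are rank-transformed first. Whether the authors did exactly this is an assumption, and `bact_abd` and `plasmid_abd` are hypothetical samples-by-features matrices.

```r
library(WGCNA)

# Rank-transform each column; Pearson correlation of ranks equals Spearman's rho
res <- corAndPvalue(apply(bact_abd, 2, rank),
                    apply(plasmid_abd, 2, rank))
res$cor  # bacteria-by-plasmid correlation matrix
res$p    # corresponding p-values
```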
Statistical analysis
All statistical analyses were conducted in R software (v 4.1.2, the R Project for Statistical Computing). The Wilcoxon rank-sum test was used to compare two groups, and correlations were calculated using Spearman's rank correlation. The Benjamini-Hochberg method was used to adjust p values for multiple testing to control the false discovery rate (FDR). A p value <0.05 was considered statistically significant.
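In R, this per-feature testing and FDR correction reduces to a few lines; the `abd` matrix and `meta$disease` labels below are hypothetical placeholders.

```r
# Wilcoxon rank-sum test per feature (column), CRC vs. control
pvals <- apply(abd, 2, function(x)
  wilcox.test(x[meta$disease == "CRC"],
              x[meta$disease == "control"])$p.value)

# Benjamini-Hochberg adjustment to control the false discovery rate
fdr <- p.adjust(pvals, method = "BH")
head(sort(fdr))  # most significant features after FDR correction
```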
Characterization of CRC cohorts
We gathered metagenomic data from 1,242 samples across eight publicly available CRC cohorts worldwide (Supplementary Table 2). Six of these cohorts served as discovery cohorts for identifying gut plasmids as biomarkers for CRC diagnosis, consisting of 549 CRC patients and 574 tumor-free controls from five countries (China, CHN1 and CHN2; Japan, JPN; Austria, AUS; France, FRA; and the United States, USA). The remaining two cohorts formed the independent validation dataset, which comprised 70 CRC patients and 49 tumor-free controls from two countries (China, CHN3; and Germany, GER). The bioinformatics analysis of all raw shotgun sequencing data was conducted consistently to reduce technical bias.
Alteration of the intestinal plasmids in CRC patients
In the discovery cohorts, we identified a total of 12,515 plasmids using metagenomic approaches. Only 628 plasmids were present in all six cohorts, with more cohort-specific plasmids found in the CHN1, CHN2, and JPN cohorts (Figure 1A). The phyla Proteobacteria and Firmicutes accounted for the majority of the plasmid host taxa in each cohort, and these proportions did not differ between CRC patients and healthy controls. However, compared to the other cohorts, a greater percentage of plasmids in the USA cohort had hosts in the phylum Bacteroidetes (Figure 1B). Only a small portion of the identified plasmids were conjugative or carried antibiotic-resistance genes, and we found no discernible differences in these proportions between CRC patients and controls (Supplementary Figure 1).
We then assessed differences in intestinal plasmid alpha diversity between CRC patients and controls. Based on the Shannon index in the discovery cohorts, we observed increased plasmid alpha diversity in CRC patients (p = 0.015; Figure 1C). Meanwhile, geographic differences were visible in intestinal plasmid alpha diversity (Supplementary Figure 2). A difference in intestinal plasmid alpha diversity between CRC patients and healthy controls was only found in the CHN1 cohort (p = 0.03); in the other cohorts, the difference was not significant (Supplementary Figure 2). In the beta diversity analysis, the beta diversity of intestinal plasmids was not associated with CRC (p = 0.129; Figure 1D), nor was there a significant difference between cohorts (p = 0.697; Figure 1D).
Plasmid biomarkers for CRC diagnosis
We conducted a meta-analysis of the six datasets in the discovery cohort to find plasmids that could be used as diagnostic markers for CRC. We discovered 198 plasmids with differential abundance between patients with CRC and controls (Supplementary Table 3): 108 were more abundant in the guts of CRC patients (p < 0.05) and 90 were less abundant (p < 0.05). To screen out plasmid signatures for diagnosing CRC, we performed further signature selection on these 198 plasmids using Boruta. We thereby selected 21 plasmids, of which 13 (including NZ_CP036554.1) were more prevalent in CRC patients and eight (including NZ_AP023416.1) were less prevalent (Figure 2A). We first trained the random forest classifier with the 21 plasmid features in each dataset, using 20-times-repeated 10-fold cross-validation, to assess the diagnostic accuracy of the plasmid features for diagnosing CRC. The plasmid random forest classifier performed differently depending on the region: it demonstrated strong predictive power in the CHN1, CHN2, and FRA cohorts, with mean AUCs ranging from 0.75 to 0.80 across the repeated cross-validation, but performed worse in the JPN (AUC, 0.58), AUS (AUC, 0.67), and USA (AUC, 0.62) datasets (Figure 2B).
We conducted cohort-to-cohort validation and leave-one-cohort-out (LOCO) validation on the training cohorts to evaluate the geographical robustness of the plasmid signatures as a universal biomarker. In cohort-to-cohort validation, the mean AUC of the plasmid random forest model ranged from 0.51 to 0.75 (Figure 2C). The LOCO performance of the plasmid model ranged from 0.59 to 0.71 (Figure 2D). To further test predictive performance, the plasmid classifiers trained with within-study cross-validation were applied to two independent validation sets. In the CHN3 and GER cohorts, the model's average AUC was 0.79 and 0.66, respectively (Figure 2E).

FIGURE 2

Plasmid metagenomic classification models generalize across different cohorts. (A) Bar plot of the 21 plasmid features' effect sizes for the prediction of CRC diagnosis, as determined by MMUPHin and Boruta. The significance of the difference between patients with CRC and controls was determined via Wilcoxon rank-sum test: *p < 0.05. (B) CRC classification performances (AUC) calculated through the cohort-to-cohort model transfer for the random forest classifier trained on relative abundance profiles of plasmids. The values refer to an average value of 20-times-repeated 10-fold cross-validation. (C) CRC classification performances (AUC) calculated through 20-times-repeated 10-fold cross-validation within each study for the random forest classifier trained on relative abundance profiles of plasmids. (D) CRC classification performances (AUC) calculated through leave-one-cohort-out validation (LOCO; the model was trained using five of six cohorts and validated on the remaining one) for the random forest classifier trained on relative abundance profiles of plasmids. (E) Validation of the plasmid random forest classifier in two independent cohorts (CHN3 and GER). The CRC classification performances (AUC) of the plasmid random forest classifier trained with all the training cohorts were obtained in the CHN3 and GER cohorts.
Improved predictability based on a combination of plasmid and bacterial features
Using the same pipeline as for plasmids, 91 differential bacterial species were identified (p < 0.05), and 39 of them were selected as biomarkers for the diagnosis of CRC (Supplementary Figure 3A; Supplementary Table 4). Previous studies have demonstrated a strong link between gut bacteria and the occurrence and progression of CRC (Sang et al., 2020; Yinhang et al., 2022), and bacterial classifiers are effective at detecting CRC (Wirbel et al., 2019). In our study, the bacterial random forest classifier likewise performed well in diagnosing CRC, showing strong predictive power within cohorts, with mean AUCs ranging from 0.81 to 0.93, except in the JPN (0.68) and USA (0.63) cohorts, attributable to the distinct Japanese food culture and to the prolonged cryopreservation of fecal specimens in the USA cohort, respectively (Supplementary Figure 3B). Cohort-to-cohort validation (Supplementary Figure 3C) and LOCO validation (Supplementary Figure 3D) had similar outcomes. In independent validation, the average AUCs of the model in the CHN3 and GER cohorts were 0.84 and 0.86, respectively (Supplementary Figure 3E). We then investigated whether a diagnostic panel combining plasmids and bacterial species would perform better. Thirteen plasmids and 37 bacteria made up the panel after feature screening (Figure 3A), and the combined model performed well in all training cohorts (Figure 3B). The model also showed valuable prediction performance in cohort-to-cohort validation (Figure 3C) and LOCO validation (Figure 3D). The average AUCs of the combined model in the CHN3 and GER cohorts during independent validation were 0.87 and 0.81, respectively (Figure 3E).
Correlations between gut bacterial features and plasmids
We further investigated the correlations between the bacteria and plasmids, based on Spearman correlation analysis performed separately in the controls and in the patients with CRC, to gain insight into bacteria-plasmid interactions from an ecological perspective. Compared with CRC cases, the bacteria-plasmid correlation strength was stronger in controls. In the gut of CRC patients, NZ_CP041417.1 (Escherichia coli strain STEC711 plasmid pSTEC711_1) served as the hub of the correlation network, whereas in the control group, NZ_CP059935.1 (Escherichia coli strain 28.1 plasmid p4) was the hub of the corresponding network. Escherichia coli and plasmids were strongly associated in both CRC patients and controls. In addition, we found other bacteria that were closely related to plasmids only in controls, particularly Enterobacter cloacae and Atopobium parvulum (Figure 5).
Plasmid functional alterations in CRC
We examined plasmid functional alterations in Kyoto Encyclopedia of Genes and Genomes (KEGG) orthology (KO) genes and carbohydrate-active enzyme (CAZy) genes in order to investigate the plasmid metagenomic functions relevant to pathogenesis in CRC. From 9,514 plasmid KO genes, we first identified 613 differential KO genes (p < 0.05), including 333 KO genes with increased abundance and 280 with decreased abundance in CRC patients compared to controls (Supplementary Table 5).
FIGURE 5
Coabundance correlations between plasmids and bacterial species in patients with CRC and controls. Coabundance networks involving plasmids and bacterial species in the CRC and control samples, with absolute correlations above 0.7 and a significance cut-off of FDR < 0.05. The colors of nodes indicate plasmids (green) and bacterial species (deep pink).

FIGURE 6

Plasmid functional classification models generalize across different cohorts. (A) Bar plot of the 34 plasmid gene KO features' importance for the prediction of CRC diagnosis, as determined by MMUPHin and Boruta. The significance of the difference between patients with CRC and controls was determined via Wilcoxon rank-sum test: *p < 0.05. (B) CRC classification performances (AUC) calculated through the cohort-to-cohort model transfer for the random forest classifier trained on relative abundance profiles of plasmid KO genes. The values refer to an average value of 20-times-repeated 10-fold cross-validation. (C) CRC classification performances (AUC) calculated through 20-times-repeated 10-fold cross-validation within each study for the random forest classifier trained on relative abundance profiles of plasmid KO genes. (D) CRC classification performances (AUC) calculated through leave-one-cohort-out validation (LOCO; the model was trained using five of six cohorts and validated on the remaining one) for the random forest classifier trained on relative abundance profiles of plasmid KO genes. (E) Validation of the plasmid KO gene random forest classifier in two independent cohorts (CHN3 and GER). The CRC classification performances (AUC) of the plasmid KO gene random forest classifier were obtained by using 20-times-repeated 10-fold cross-validation in the CHN3 and GER cohorts.
Following feature screening, 35 KO genes (including K03561, K05595, and K06250), mainly related to metabolism, were identified as potential biomarkers for CRC prediction (Figure 6A). The plasmid KO random forest classifier showed strong predictive power within cohorts under 20-times-repeated 10-fold cross-validation, with mean AUCs ranging from 0.63 to 0.84 (Figure 6B). The mean AUC of the plasmid KO random forest model ranged from 0.63 to 0.81 in cohort-to-cohort validation (Figure 6C). The LOCO performance of the plasmid KO model ranged from 0.68 to 0.84 (Figure 6D). In the independent validation sets, the average AUC was 0.72 in the CHN3 cohort and 0.69 in the GER cohort (Figure 6E). To understand the relationship between the differential KO genes and the differential bacteria or plasmids, we carried out Spearman correlation analyses of the differential plasmid KO genes against the differential plasmids and bacteria; the differential plasmid KO genes had no significant correlation with either (Supplementary Figure 5). Plasmid KO genes might therefore serve as biomarkers for diagnosing CRC independently of bacteria and plasmids. From 414 plasmid CAZy genes, we first identified 43 differential CAZy genes (p < 0.05), including 16 CAZy genes with increased abundance and 27 with decreased abundance in CRC patients compared to controls (Supplementary Figure 6A; Supplementary Table 6). The plasmid CAZy random forest classifier showed moderate predictive power, with mean AUCs ranging from 0.61 to 0.71 in cross-validation (Supplementary Figure 6B). The mean AUC of the plasmid CAZy random forest model ranged between 0.61 and 0.63 in cohort-to-cohort validation (Supplementary Figure 6C). The plasmid CAZy model's LOCO performance ranged from 0.62 to 0.72 (Supplementary Figure 6D). In the independent validation sets, the average AUC of the model was 0.51 in the GER cohort and 0.76 in the CHN3 cohort (Supplementary Figure 6E). Plasmid CAZy genes were thus less effective as diagnostic indicators for CRC than plasmid KO genes.
Discussion
Plasmid-mediated horizontal gene transfer is regarded as a major driver of bacterial adaptation and diversification, as demonstrated by several studies (Smalla et al., 2015; Wein et al., 2020; Rodríguez-Beltrán et al., 2021). Plasmids can provide ecological benefits to their host bacteria (Di Venanzio et al., 2019). These plasmids may change the biological characteristics of their bacterial hosts, which may have an impact on human health (Rozwandowicz et al., 2018). However, little is known about the function of gut plasmids carried by disease-associated bacteria. In this study, we thoroughly analyzed the plasmidome across eight different CRC cohorts. This is the most comprehensive metagenomic sequencing-based gut plasmidomic study to date, in the largest sample of CRC patients. The bioinformatics pipeline allowed us to locate 12,515 intestinal plasmids in total. We observed that intestinal plasmid diversity was higher in CRC patients than in healthy controls, which may imply that the intestinal environment of CRC patients is more stressful than that of controls, requiring bacteria to carry more plasmids to adjust to changes. To the best of our knowledge, our study is the first to pinpoint differential intestinal plasmids in patients with colorectal cancer. For some of the 198 differential plasmids, including NC_012780.1 (Eubacterium eligens ATCC 27750 plasmid unnamed, complete), the corresponding bacteria were equally abundant in CRC patients and controls. Such bacteria may adapt to changes in the gut environment of colorectal cancer patients by increasing the abundance of their associated plasmids to increase their tolerance, rather than by changing their own abundance. The bacteria corresponding to other differential plasmids, such as NZ_CP036554.1 (Bacteroides fragilis strain DCMOUH0067B plasmid pBFO67_1, complete), also differed in abundance between CRC patients and controls. Although these bacteria influence the abundance of their associated plasmids, changes in the intestinal environment of colorectal cancer patients could also affect the abundance of these bacteria. In contrast to controls, the abundance of intestinal plasmids in CRC patients was more independent of the abundance of their gut microbiota. This suggests that bacteria-plasmid relationships may be relevant to the microbiome-mediated tumorigenesis of CRC. Intriguingly, the differential plasmid genes in our study were not associated with differential gut bacteria or differential gut plasmids, revealing an additional layer of information about the contribution of plasmid genes to host health that is independent of changes in bacterial abundance.
The prognosis of CRC is closely related to the stage at diagnosis (Bruni et al., 2020). Host gene variation (Schmit et al., 2019), RNAs (Wu et al., 2021), proteins, metabolites (Chen et al., 2022), and gut microbes (Liu et al., 2022) are some of the currently validated colorectal cancer markers; however, more work is needed to increase their predictive power. A non-invasive, effective, and efficient diagnostic method is urgently needed for asymptomatic colorectal cancer patients in order to lower CRC morbidity and mortality, and thereby the economic costs of CRC. We screened 21 plasmids, including NZ_CP036554.1 and NZ_AP023416.1, and created a colorectal cancer prediction model based on these intestinal plasmids for the first time, applying various validation techniques to demonstrate the robustness and accuracy of the model. Additionally, we observed that combining plasmid and bacterial markers could further improve the predictive power for CRC. In the external validation, the mean specificity and sensitivity of the plasmid and bacterial marker combination for CRC detection were 65.2 and 88.5%, respectively. Our plasmid and bacterial marker combination predicts CRC with high accuracy and is as non-invasive as the fecal occult blood test (FOBT). Our model has a relatively low predictive effect for the Japan cohort, which we suspect may be related to the regional heterogeneity of the gut microbiome. It has been shown that glycoceramides contained in the Japanese diet increase the abundance of Blautia coccoides in the intestine, which affects the composition of the intestinal flora (Hamajima et al., 2016). Meanwhile, glycoceramides inhibited the development of colorectal cancer in multiple intestinal neoplasia (Min) mice (Symolon et al., 2004). The regional heterogeneity of intestinal bacteria in the Japanese cohort is thus likely due to the Japanese diet, although further experimental verification of the specific mechanism is needed.
Several limitations of this study should be noted. First, identification of plasmids from short-read metagenomic sequencing data remains challenging. It can be difficult to detect and extract a complete plasmid, since plasmids can vary greatly in size, have high homology with other plasmids or with the host genome, often contain repetitive regions, or may be incomplete or missing key regions. We used filtering techniques to exclude less accurate plasmid contigs in light of these difficulties, but we cannot completely rule out the possibility of false positives. Long-read sequencing technology (Pacific Biosciences and Oxford Nanopore Technology) and future tool development may therefore enable us to fully resolve the structure of human gut plasmids (Suzuki et al., 2019). Second, tumor staging, gender, age, and other factors affecting the incidence of CRC were not taken into consideration. Third, the controls in the majority of cohorts were confirmed by colonoscopy not to have CRC, yet the controls in the CHN2 cohort were selected from the Taizhou Imaging Study and did not undergo colonoscopy, which could introduce detection bias. A fourth limitation is the cohort effect due to variations in the distribution of gut flora across regions and the use of different sequencing platforms, even though we corrected the batch effect with MMUPHin. Fifth, we were unable to determine the actual hosts of the plasmids because of plasmid horizontal transfer. Microbe-seq, a high-throughput technique created by Zheng et al. to examine individual bacterial cells in the microbiota, enables further exploration of plasmid horizontal transfer and of the host profile of plasmids (Zheng et al., 2022). Future prospective studies with large patient cohorts are needed to validate the results. Finally, we cannot establish a causal relationship between CRC and plasmids from the current data. We anticipate that long-read metagenomic sequencing and upcoming experimental research will clarify the causal relationship between CRC and plasmids.
In conclusion, we used plasmid-related sequences to identify the corresponding plasmids and found that they were able to distinguish between CRC patients and controls. We constructed a combined plasmid and bacteria panel, which performed better at predicting CRC than bacteria alone. Our study expands knowledge of the function of plasmids in CRC patients and may lead to further research into potential CRC diagnosis applications. Plasmids should be taken into account when studying the gut microbiota.
Data availability statement
Publicly available datasets were analyzed in this study. This data can be found at: https://www.ncbi.nlm.nih.gov/sra.
Ethics statement
Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
Author contributions
ML and ZC designed the research. ZC, PL, WZ, and JW collected the data. ZC, JL, XS, KL, and SL performed the statistical analysis. ML and ZC wrote the paper. All authors contributed to the article and approved the submitted version.
Funding
This study was funded by grants from the National Natural Science Foundation of China (grant number 82000628), and the Department of Science and Technology of Guangdong Province to the Guangdong Provincial Key Laboratory of Biomedical Imaging (2018B030322006). | 2023-05-22T13:13:39.872Z | 2023-05-22T00:00:00.000 | {
"year": 2023,
"sha1": "4d486eea31e4e2c82c6d8f5aa25f2bd7e823f99f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "4d486eea31e4e2c82c6d8f5aa25f2bd7e823f99f",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
10830487 | pes2o/s2orc | v3-fos-license | Effectiveness, cost-effectiveness and cost-benefit of a single annual professional intervention for the prevention of childhood dental caries in a remote rural Indigenous community
Background The aim of the study is to reduce the high prevalence of tooth decay in children in a remote, rural Indigenous community in Australia, by application of a single annual dental preventive intervention. The study seeks to (1) assess the effectiveness of an annual oral health preventive intervention in slowing the incidence of dental caries in children in this community, (2) identify the mediating role of known risk factors for dental caries and (3) assess the cost-effectiveness and cost-benefit of the intervention. Methods/design The intervention is novel in that most dental preventive interventions require regular re-application, which is not possible in resource constrained communities. While tooth decay is preventable, self-care and healthy habits are lacking in these communities, placing more emphasis on health services to deliver an effective dental preventive intervention. Importantly, the study will assess cost-benefit and cost-effectiveness for broader implementation across similar communities in Australia and internationally. Discussion There is an urgent need to reduce the burden of dental decay in these communities, by implementing effective, cost-effective, feasible and sustainable dental prevention programs. Expected outcomes of this study include improved oral and general health of children within the community; an understanding of the costs associated with the intervention provided, and its comparison with the costs of allowing new lesions to develop, with associated treatment costs. Findings should be generalisable to similar communities around the world. The research is registered with the Australian New Zealand Clinical Trials Registry (ANZCTR), registration number ACTRN12615000693527; date of registration: 3rd July 2015.
Background
Globally, of the 50 most prevalent chronic diseases, four are related to oral health: (1) dental caries of permanent teeth, (2) chronic periodontitis, (3) dental caries of deciduous teeth and (4) edentulousness (total tooth loss) [1,2]. Tooth loss is predominantly a consequence of dental caries in the permanent dentition and adult chronic periodontitis, and potentially results in substantial social and health consequences. Dental caries in the deciduous dentition is a significant predictor of dental caries in the permanent dentition [3]. With few exceptions, both dental caries and periodontitis are preventable conditions with adequate self-care and healthy lifestyle habits, supported by health promotion and preventive interventions at the health services level [4,5]. However, we continue to struggle to significantly reduce, and preferably eradicate, the burden of preventable oral health conditions. Appropriate self-care is largely dependent on the social and health capital of the community [6]. In disadvantaged, marginalised, rural and Indigenous communities, this is often absent. These communities also have the additional burden of limited health promotion and preventive services, carry a significant health burden, and suffer seriously reduced oral health-related and overall quality of life [7,8].
Most dental health promotion and preventive services require professionally trained personnel, equipment, and regular availability of the service for the re-application of interventions. The current evidence is that a number of dental preventive interventions require re-application 2 to 4 times a year [9,10]; in disadvantaged and especially remote communities, this is usually not possible, and the burden of the common oral diseases is never dealt with and, more importantly, never prevented. To our knowledge there is currently no evidence on the effectiveness, cost-effectiveness and cost-benefit of a less frequent and therefore sustainable dental preventive intervention strategy.
The proposed study will implement a novel dental caries preventive intervention in children to firstly reduce the pathologic bacterial load using an oral antiseptic, secondly seal the part of the tooth most prone to decay with a fissure sealant, and thirdly to strengthen the tooth structure with a fluoride varnish, all during a single annual visit, in a rural remote Indigenous community in Far North Queensland, Australia. While there are data on the effectiveness of the three specific interventions, we have developed a novel approach to assess a less frequent combined application of these interventions. Importantly, we will assess its effectiveness and its cost effectiveness to determine the value for money of the service.
The team believes that the currently suggested frequency of 2-4 applications per year for these preventive interventions is not possible, or sustainable, in poorly-resourced, remote communities, and proposes an alternative strategy requiring fewer resources to address the burden of dental decay in children.
Burden of dental caries in Indigenous communities in Australia
In Australia, dental conditions are especially prevalent in Indigenous communities, and are a significant health burden in rural remote communities [7,8]. A recent report of the dental caries status of Indigenous children in Australia showed that those located in rural and/or remote areas have a much higher mean number of decayed, missing and filled deciduous teeth (dmft) (~4 in 6-year-old children) compared to non-Indigenous children in metropolitan (dmft~1.5) and rural settings (dmft~1.8) as well as Indigenous metropolitan children (dmft~2.6) [11]. The situation is similar in the permanent dentition of older children. The National Survey of Adult Oral Health found that 57 % of Indigenous adults had untreated coronal dental caries, compared with 25 % of non-Indigenous adults [12]. The mean number of decayed teeth amongst Indigenous adults (>15 years of age) was 2.7 compared to 0.8 amongst non-Indigenous adults.
There are high social costs associated with poor dentition, and a diminished quality of life due to pain and discomfort [13][14][15][16], and especially because of long waiting lists for treatment [17]. Social costs include lack of sleep, lost time for school, behavioural problems, lack of cooperation and diminished learning. Lost working time for parents accompanying children to dental treatment sessions has been reported to lead to a loss of employment. Studies in this area need to appreciate the full impact of childhood caries on the child, family and society.
An oral health survey in 2004 (pre-water fluoridation) in the Northern Peninsula Area (NPA) of far north QLD found that dental caries experience of 6- and 12-year-old children was more than twice the state average and more than four times greater than the comparable figures for Australian children overall. Soon after this survey the reticulated water supply of the five small rural communities in this area was fluoridated. A follow-up oral health survey in the NPA conducted in November 2012 by this team, in which we examined over 70 % of known resident schoolchildren (n = 339), suggests that the dental caries status has improved significantly since the 2004 survey [18]. Few teeth had restorations in both surveys. Age-weighted overall caries prevalence and severity declined from 2004 to 2012 by 37.3 %. The effect was most marked in younger children, dmft decreasing by approximately 50 % for ages 4 to 9 years; at age six, the mean decayed score decreased from 5.20 to 3.43. DMFT levels decreased by half in 6- to 9-year-old children. However, significant unmet treatment needs exist at all ages. To address this, practical and affordable ways have to be found.
One of the reasons for the improved oral health status could be due to fluoridation of the local water supplies. Whilst the economic viability of water fluoridation for a small community such as this might be questioned, we posit the costs are outweighed by the significant caries reduction in both the deciduous and permanent dentitions as found in our study [18]. Moreover, the fluoridation plant has functioned erratically since being implemented and has been out of operation since April 2011 following a lightning strike. The likelihood that the water will again be fluoridated is uncertain due to budget constraints and because the 2012-2015 Queensland State Government legislated to give local governments across the State the power to decide to fluoridate or not. Dental caries rates may again increase in the absence of water fluoridation. It will be crucial to investigate alternative models to ensure improvements in dental caries incidence in this resource-constrained community. The envisaged dental prevention model will essentially reduce the microbial load with the topical disinfectant, povidone iodine [10]; inhibit biofilm adherence to susceptible sites by application of fissure sealants [19] and reduce the susceptibility of the tooth to demineralisation by acids generated in the microbial biofilm by the application of a fluoride varnish [9,20]. Measures of local fluoride concentration [20] and of oral mutans streptococci counts return to baseline after a few months [21], so that most reported studies have used regular reapplications (2-4 times per annum), which would be difficult to implement in remote settings lacking appropriately trained personnel and resources. Fissure sealants, on the other hand, have excellent longevity [19].
To date, the reported economic burden of childhood caries is likely to be underestimated, as previous reports did not capture the full scope of costs and missed the potential cost-savings of prevention programs. There are few studies involving child populations that evaluate the cost-effectiveness of prevention programs for childhood caries [22]. Savage et al. [23] concluded that pre-school aged children in the USA who had early dental prevention visits would experience lower dental-related costs over 5 years. Similarly, a second USA study by Ramos-Gomez et al. [24] looked at minimal, intermediate and comprehensive prevention programs and concluded that all three were cost-effective. They concluded that government health systems can save considerable resources by investing in early childhood caries prevention. A study by Lee et al. [25] in North Carolina, USA, found early dental visits were highly cost-effective for high-risk children. These studies reiterate the need for translating this evidence into policies for childhood caries prevention. Although the cost-savings of prevention programs are based on predictions, the potential economic benefits are encouraging.
Review of interventions to reduce the incidence of dental caries in children

Pit and fissure sealants

Dental decay most often occurs on the occlusal pits and fissures of permanent molar teeth. A pit and fissure sealant is defined as a material (both glass-ionomer and resin-based materials are widely used) that is introduced into the occlusal pits and fissures of caries-susceptible teeth [19]. Its application to newly-erupted posterior teeth is the best method to prevent pit and fissure caries, and/or to prevent the continued development of incipient caries into frank caries when the incipient lesion is sealed over. Sealing the occlusal surfaces of permanent molars in children and adolescents reduces the incidence of new carious lesions for up to 48 months when compared to no sealant. Recommendations and reviews of the evidence for preventing dental caries through school-based sealant programs suggest that this remains an important and effective public health approach [26].
Fluoride varnish
Fluoride varnishes are a liquid resin or synthetic base that contain a high concentration of fluoride and set quickly on contact with teeth, even in the presence of saliva [9]. Fluoride ions in the material are released when the pH drops in response to acid production in the biofilm on the tooth surface and these become available to promote remineralisation of damaged tooth enamel in early carious lesions (white spots). The fluorhydroxyapatite formed over time during the remineralisation process in an initial caries lesion is more resistant to future demineralisation. Children at moderate or high-risk to dental caries benefit from fluoride varnish programs. Varnish delivers a higher concentration of fluoride than other professionally applied fluoride gels and foams; therefore it is applied in smaller amounts.
The fluoride varnish layer slowly disappears over time and needs repeated application to maintain effectiveness as a primary prevention strategy. While one application of fluoride varnish may provide some benefit, the majority of studies of professionally applied fluoride demonstrate that at least two applications each year, for at least 2 years, are necessary to demonstrate effective reductions in dental caries, making this a difficult strategy in disadvantaged communities who are most in need of this intervention [20].
Povidone (PVP)-iodine addition to fluoride
Children with high rates of tooth decay are much more heavily colonized with pathogenic bacteria than children who experience less tooth decay [27]. PVP-iodine interferes with the ability of mutans streptococci to bind to tooth surfaces by disrupting the expression and production of glucosyltransferase [28]. A seminal in vitro study of anti-plaque agents demonstrated that 1 % iodine (10 % PVP-iodine is 1 % active iodine) was bactericidal to intact S. mutans biofilm [29]. A series of studies have led scientists to conclude PVP-iodine is an appropriate adjunctive antimicrobial for use on teeth to prevent tooth decay [10,21,30,31].
Topical treatment with PVP-iodine in conjunction with fluoride varnish is simple [32]. The iodine comes in a single-application swab. The total treatment time is 3 to 4 min and the material costs less than 20 cents. Clinically, the teeth are brushed to remove debris and disrupt the biofilm, then dried with gauze and painted with 0.2 ml PVP-iodine. After the iodine application, the teeth are dried again and coated with fluoride varnish at the same visit.
As a result of our recently conducted oral health survey in the NPA, the research team has established networks and support from the local community, health workers and schools, as well as from Queensland Health (QH), the State agency responsible for all public health services. The importance of sustainable dental prevention interventions is evident, especially if fluoridation of the water supplies is not to continue. This study will assess the effectiveness, cost-effectiveness and benefits of a novel single annual dental caries preventive intervention in a remote rural Indigenous community in north Queensland.
Aims and hypotheses
The study seeks to (1) assess the effectiveness of an annual oral health preventive intervention in slowing the incidence of dental caries in children in a remote, rural Indigenous setting, (2) identify the mediating role of known risk factors for dental caries and (3) assess the cost-effectiveness and cost-benefit of the intervention.
We hypothesise that 1 and 2 years after the intervention (1) the actual incidence of dental caries in children will be significantly lower than the expected incidence, based on modelling from the two oral health surveys conducted over the past 11 years in the same community, the current survey and (2) the intervention will be cost-effective and cost beneficial, and therefore feasible and sustainable for broader implementation across similar communities in Australia and internationally.
Study design
This is a longitudinal preventive intervention study. All school children in the NPA will be invited to participate. As it is unethical to withhold any proven intervention from any child, no control group will be created. Children who do not consent to participate may be natural controls if they consent to a dental examination at the end of the study. The actual caries increment in the children who participate will be compared to the expected caries increment modelled on oral health surveys carried out in this community in 2004 (pre-water fluoridation); in 2012 (by the Griffith University team post-partial water fluoridation) and 2015 (baseline survey for this study).
All consenting children will undergo a detailed head, neck and dental clinical examination and complete a questionnaire on their basic demography (gender and age), residential history (exposure to fluoridated drinking water), own general and oral health perceptions; oral health behaviours, attitudes and knowledge; dental visits; diet and oral health-related quality of life.
All active disease will be treated prior to implementing the dental caries preventive intervention. In years 2 and 3 of the study, all participating children will be invited to return for a dental examination, treatment of new incident disease and repeat of the prevention regime.
Study setting
The study will be conducted in a number of small towns in the remote Northern Peninsula Area (NPA) of Far North Queensland. In the 2011 Census the population of the NPA was estimated at 1046 and is comprised of 52.8 % females and 47.2 % males. The median/average age of the NPA population is 22 years, 15 years below the Australian average. 98.7 % of people living in these communities were born in Australia.
English is spoken as a first language by 22.7 % of the population, Yumplatok (Torres Strait Creole) by 64.9 %, and Kalaw Kawaw Ya/Kalaw Lagaw Ya by 1.8 and 0.3 %, respectively. Sixty-eight percent of residents are employed full time, and 21 % work on a part-time basis. The NPA has an unemployment rate of 7 %. The median individual income is AUD554.00 per week and the median household income is AUD1,179.00 per week (AUD = Australian Dollar).
Study participants
All children (approximately 600-650) attending the two primary and one secondary school campuses will be invited to participate in the intervention study. These children will be 4-17 years of age, and almost all are Indigenous. As all children in the community will be invited to participate, a sample power calculation was not performed.
Intervention
The proposed "Big Bang" preventive intervention is to firstly reduce the pathogenic bacterial load, then seal grooves on posterior (molar) teeth, and thirdly to strengthen the tooth structure.
Oral health promotion will essentially educate children regarding appropriate health behaviours to maintain oral health, with emphasis on the importance of toothbrushing with fluoridated toothpaste, healthy eating habits, emphasising the role of sugar in the tooth decay process, and the importance of reducing both the quantity and frequency of sugary products in the diet.
Prior to the annual intervention the research team will undertake clinical examination of all consenting schoolage children to assess dental caries experience. A team comprising of a dentist and/or oral health therapist will treat all existing tooth decay and other oral health problems. Each child will receive the preventive intervention when other treatments are completed.
Prior to the treatment of existing disease, we will investigate the saliva of the participants as a component of caries risk assessment. Saliva will be collected using commercially available test kits for measurement of flow rate, pH and buffering capacity, and then cultured for bacterial assessment [33]. The number of teeth, fillings and other retentive sites in the mouth influences the bacterial load, and a high count of bacteria in dental plaque correlates with salivary bacterial counts, making it possible to assess saliva for cariogenic microbes [34,35]. Such kits use selective media for mutans streptococci and for lactobacilli.
Primary outcome variable
The International Caries Detection and Assessment system (ICDAS-II) for clinical caries diagnosis will be used to record caries experience and to determine incidence [36]. This will be measured annually across the 3 years of the study, at baseline and after years 2 and 3 of the project.
ICDAS-II consolidates features of several caries classification systems into one universal system using a six-point ordinal scale ranging from non-cavitated to extensive cavitated lesions to describe any signs of past or present caries activity. Unlike World Health Organisation (WHO) caries detection criteria, ICDAS-II allows the detection of initial/non-cavitated carious lesions, and is thus considerably more sensitive, whilst allowing the extraction of data comparable to previous surveys which used WHO criteria [37].
Secondary outcome variables
General Child Quality of Life, the social impact of oral disorders and Oral Health-Related Quality of Life (OHR-QoL) will be measured at baseline and at years 2 and 3. Existing validated and reliable instruments will be used (CHU-9D [38], OHIP-14 [39] and Child-OIPD [40]), appropriately modified for our participants.
The retention of the fissure sealants at the follow-up periods will be assessed, and recorded. Saliva of the participants will be collected again in years 2 and 3. Findings will be compared to baseline to assess the impact of the less frequently applied anti-bacterial component of the intervention.
Resource use and costs of providing the intervention will be recorded throughout the intervention period. Resource use and costs to participants to receive the intervention (e.g., time off work to bring the child to the clinic) will be recorded, as well as any emergency treatment required between the team's annual visits. Emergency treatment includes travel to the nearest dental facility, Royal Flying Doctor Service call-outs, local GP visits for antibiotics and analgesia, and so forth.
Covariates
The relationship between the primary and secondary outcomes and the preventive intervention will be adjusted for known risk factors for dental caries -oral hygiene behaviours and diet (sugar consumption).
Analysis
All baseline socio-demographic characteristics will be described for the selected sample using counts and frequencies. Baseline and follow-up caries experience and questionnaire-related information will be reported. Dental caries increment (incidence) will be the main outcome measure used to determine the effectiveness of the preventive intervention. The expected caries increment will be modelled from the three oral health surveys conducted in this community (2004, 2012 and 2015) and compared with the actual caries increment for 2015-2016, 2016-2017 and 2015-2017. The mean caries increment will be compared between the expected (modelled) and actual findings, adjusted for known risk factors for dental caries. The hypothesis is that the caries increment observed in the period 2015 to 2017 is smaller than the modelled caries increments. An independent-samples t-test will be used for the analysis, with significance determined at p < 0.05.
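As a sketch of this comparison in R (the increment vectors below are hypothetical placeholders; in the study, the modelled increments would come from the 2004, 2012 and 2015 surveys):

```r
# observed: per-child caries increments measured after the intervention
# expected: per-child increments modelled from the earlier surveys
observed <- c(0, 1, 0, 2, 1, 0, 0, 1)   # placeholder values
expected <- c(1, 2, 1, 3, 2, 1, 2, 2)   # placeholder values

# one-sided two-sample t-test: is the observed increment smaller than modelled?
t.test(observed, expected, alternative = "less")
```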
Children who receive only a part of the intervention will be separately assessed: for example we will have children who fully participate, those with baseline and only a year 1 follow-up, those with baseline and only a year 2 follow-up. This will 'naturally' further inform us on the most appropriate frequency of this preventive strategy. Both a group and matched analysis will be conducted to account for children who receive only part of the intervention.
Development of Markov model
A health state transition Markov model will be developed using TreeAge Pro software (TreeAge Software Inc., Williamstown, Massachusetts, USA) to analyse the cost-effectiveness of the intervention. The model will be populated with the caries experience of the children in the NPA using the intervention caries data and compared with modelled baseline data for a non-intervention scenario. The model cohort will start at the age of 6 years, when the mixed dentition is emerging, and will track these children up to age 17 years using the data from the study for each year. Health states for the model will include "healthy" and "caries"; health states for conditions such as pulpal abscess are also anticipated. The model will be made sensitive to waiting periods, available treatment facilities, availability and costs of resident or fly-in/fly-out professional staff, and common practices of the local dental clinics.
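To make the model structure concrete, here is a minimal three-state cohort trace in R rather than TreeAge; every transition probability below is an illustrative placeholder, not a study estimate.

```r
# States: healthy, caries, pulpal abscess; one-year cycles from age 6 to 17
states <- c("healthy", "caries", "abscess")
P <- matrix(c(0.85, 0.13, 0.02,   # from healthy (placeholder probabilities)
              0.00, 0.90, 0.10,   # from caries
              0.00, 0.20, 0.80),  # from abscess
            nrow = 3, byrow = TRUE, dimnames = list(states, states))

n_cycles <- 11                    # 11 annual transitions: age 6 -> 17
trace <- matrix(NA, n_cycles + 1, 3, dimnames = list(NULL, states))
trace[1, ] <- c(1, 0, 0)          # whole cohort starts healthy
for (t in seq_len(n_cycles)) trace[t + 1, ] <- trace[t, ] %*% P
round(trace, 3)                   # state occupancy by cycle
```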
Costs calculations
The costs of providing the preventive intervention, the costs of all treatment for carious lesions, and the out-of-pocket costs in relation to caries experience will be assessed. These costs will be assigned to each child taking into account the number of surfaces treated. Intervention costs will include the sealants, the oral antiseptic application and the fluoride varnish application, as well as the costs of human resources and logistics. The costs of treating incremental caries will be estimated using government costs for treatments. Total out-of-pocket costs for parents of children with caries will be calculated based on the quantities of resource use reported in the surveys. Mean, median and interquartile-range costs will be presented for each major treatment category. All costs will be presented in 2015 AUD.
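A minimal R sketch of these per-category summaries follows; the `child_costs` data frame, with one row per treatment item, is a hypothetical placeholder.

```r
# child_costs: hypothetical data frame with columns child_id, category, cost (2015 AUD)
child_costs <- data.frame(
  child_id = c(1, 1, 2, 3, 3, 3),
  category = c("restoration", "extraction", "restoration",
               "restoration", "sealant", "extraction"),
  cost     = c(120, 95, 140, 110, 40, 90))

# Mean, median and interquartile range per major treatment category
aggregate(cost ~ category, data = child_costs,
          FUN = function(x) c(mean = mean(x), median = median(x), IQR = IQR(x)))
```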
Estimation of utility weights
The utility values for dental health states will be estimated from the CHU-9D data. Using the CHU-9D (Child Health Utility) multi-attribute utility instrument, quality-of-life scores (utility scores) for each caries severity level experienced by the children will be determined. A scoring algorithm for the CHU-9D that has been validated in the UK for children will be applied [38]. Utility scores will be presented by age group and gender. These values will be used to estimate Quality-Adjusted Life Years (QALYs). The CHU-9D will be validated in a similar Indigenous population prior to its application in the study population.
Transition probabilities
Caries increments prior to and post intervention will be used to calculate transition probabilities for the two scenarios examined by the Markov model (non-intervention and intervention, respectively). The caries increment rates for the intervention scenario will be directly observed; the rates for the non-intervention scenario will be estimated from the modelled data.
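Where annual caries increments are expressed as rates, a standard health-economics conversion to a per-cycle transition probability is p = 1 − exp(−rt); the rate value below is a placeholder, not a study estimate.

```r
# Convert a constant annual rate r to a transition probability over t years
rate_to_prob <- function(r, t = 1) 1 - exp(-r * t)
rate_to_prob(0.4)  # e.g., an annual caries incidence rate of 0.4 gives p ~ 0.33
```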
Cost utility analysis
The cost utility of the "Big Bang" prevention strategy will be estimated using the Markov model. The analysis will adhere to best modelling practices as given by the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) guidelines [41]. All costs will be presented in 2015 AUD. Costs and outcomes will be discounted at 5 % per year. The model will take the societal perspective. The incremental cost-effectiveness ratio (ICER) will be generated by dividing the incremental costs for caries treatment by the outcome (number of carious lesions prevented, and QALYs gained, calculated separately). The intervention group will be compared with the modelled values for a non-intervention scenario. To address the uncertainty in the cost and effectiveness estimates, univariate sensitivity analyses will be used: for all probabilities, the 95 % confidence intervals will be used, and for costs, high and low values will be estimated. A probabilistic sensitivity analysis will also be performed by re-sampling 1000 times at random from the probability distributions for each parameter. This procedure is similar to multivariate sensitivity analysis and addresses the uncertainty of all estimates simultaneously. Gamma distributions will be used for cost estimates and beta distributions for probabilities.
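The ICER calculation and probabilistic sensitivity analysis can be sketched in R as follows; all costs, utilities, and distribution parameters here are illustrative placeholders, not study estimates.

```r
set.seed(1)

# Placeholder point estimates per child (2015 AUD; QALYs over the model horizon)
cost_int <- 180; cost_none <- 300
qaly_int <- 10.9; qaly_none <- 10.6

# ICER: incremental cost per QALY gained (negative here: cheaper and more effective)
icer <- (cost_int - cost_none) / (qaly_int - qaly_none)

# Probabilistic sensitivity analysis: resample 1,000 times, with gamma
# distributions for costs and beta distributions for utility-type inputs
n <- 1000
d_cost <- rgamma(n, shape = 25, rate = 25 / cost_int) -
          rgamma(n, shape = 25, rate = 25 / cost_none)
d_qaly <- 11 * (rbeta(n, 109, 1) - rbeta(n, 106, 4))
quantile(d_cost / d_qaly, c(0.025, 0.5, 0.975))  # uncertainty around the ICER
```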
Ethics
Prior to seeking ethics approval for the project, support was obtained from the QH Chief Dental Officer (CDO), community Elders, Cape York Health Council (Apunipima), the Torres and Cape Health and Hospital Service (TCHHS) management of QH, the Northern Peninsula Area Regional Council (NPARC) and the principal of the Northern Peninsula Area State College (NPASC).
Ethics approval was primarily sought from the Griffith University Human Research Ethics Committee (GUHREC) through a National Ethics Application Form (NEAF) submission, taking into account the "National Statement, Values and Ethics: Guidelines for Ethical Conduct in Aboriginal and Torres Strait Islander Health Research" and the "Australian Institute of Aboriginal and Torres Strait Islander Studies (AIATSIS) Guidelines for Ethical Research in Indigenous Studies" [42,43]. Following approval from the GUHREC, a further ethics approval submission was made to the Far North Queensland Human Research Ethics Committee (FNQHREC). Both committees have approved the study protocol.
An information sheet as per the template of the GUHREC will accompany the informed consent form to the parents/guardians of potential participants for approval prior to examination, treatment and any preventive intervention being performed.
Discussion
Tooth decay in rural Indigenous children is unacceptably high, and their general and oral health-related quality of life is significantly compromised. There is an urgent need to reduce the burden of dental decay in these communities, by implementing effective, cost-effective, feasible and sustainable dental prevention programs. Expected outcomes of this study include improved oral and general health of children within the community; an understanding of the costs associated with the intervention provided, and its comparison with the costs of allowing new lesions to develop, with associated treatment costs. The work will benefit Indigenous children and reduce disparities. If found to be effective and cost-effective in reducing dental caries, this initiative could be implemented in similar communities elsewhere.
The study will provide longitudinal data on dental caries prevalence and incidence and OHRQoL for remote Indigenous children, a group often neglected or underrepresented in national oral health surveys. The study will further provide information on the oral hygiene practices and diet (sugar) of these children. Data on the impact of a less frequent antibacterial intervention on the presence of the main dental caries associated bacteria in the saliva, as well as the retention of fissure sealants, will be reported.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions

NWJ is Chief Investigator of the NHMRC Grant; all other authors are co-investigators. NWJ, RL, JK, OT and LJ were the principal designers of the study; SK and PS provided advice on statistics and health economics; VL and YC-J provide advice on Indigenous cultural aspects; RB advises on logistics and clinical matters; SF assists with clinical interventions. All authors read and approved the final manuscript. | 2017-04-03T17:08:32.526Z | 2015-08-29T00:00:00.000 | {
"year": 2015,
"sha1": "88ba81e536dfad5008c9ecd6b71908baf640e624",
"oa_license": "CCBY",
"oa_url": "https://bmcoralhealth.biomedcentral.com/track/pdf/10.1186/s12903-015-0076-9",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "88ba81e536dfad5008c9ecd6b71908baf640e624",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235454207 | pes2o/s2orc | v3-fos-license | Fruit bats adjust their foraging strategies to urban environments to diversify their diet
Background Urbanization is one of the most influential processes on our globe, putting a great number of species under threat. Some species learn to cope with urbanization, and a few even benefit from it, but we are only starting to understand how they do so. In this study, we GPS tracked Egyptian fruit bats from urban and rural populations to compare their movement and foraging in urban and rural environments. Because fruit trees are distributed differently in these two environments, with a higher diversity in urban environments, we hypothesized that foraging strategies would differ too. Results When foraging in urban environments, bats were much more exploratory than when foraging in rural environments, visiting more sites per hour and switching foraging sites more often on consecutive nights. By doing so, bats foraging in settlements diversified their diet in comparison to rural bats, as was also evident from their choice to often switch fruit species. Interestingly, the location of the roost did not dictate the foraging grounds, and we found that many bats chose to roost in the countryside but nightly commuted to and foraged in urban environments. Conclusions Bats are unique among small mammals in their ability to rapidly cover large distances. Our study is an excellent example of how animals adjust to environmental changes, and it shows how such mobile mammals might exploit the new urban fragmented environment that is taking over our landscape. Supplementary Information The online version contains supplementary material available at 10.1186/s12915-021-01060-x.
Background
Understanding the interactions between an animal and its environment and assessing how its behavior responds to changes in the environment is a major challenge in behavioral biology [1]. Movement is crucial for a large spectrum of behavioral processes and can serve as a measurable response to a combination of internal states and environmental changes such as shifts in resources, increase in conspecific competition, or changes in the habitat [2]. The analysis of movement is thus key for understanding how processes such as long-range navigation, orientation, and foraging strategies are affected by environmental changes, for example those induced by urbanization.
The ongoing massive growth of urban areas (i.e., urban sprawl) has resulted in the vanishing of vast natural habitats, thus affecting many different species in various ways [3][4][5][6][7]. Although most animals are affected negatively, a minority of species can adjust their behavior to the novel environment and adapt to life in the city [8,9]. Various behavioral adaptations have been reported in urban-dwelling animals. Studies conducted on birds, for example, show that individuals from urban populations were bolder and better at problem-solving than their rural counterparts [10][11][12][13]. Animals living in urban areas have also been reported to adjust their communication [14][15][16][17][18][19] and foraging [20][21][22][23]. Urban environments often offer new resources that differ in their distribution in comparison to the native environment of the animal. Urban foraging thus often requires strategy adjustments, and specifically movement adjustments. Indeed, there is accumulating evidence that human activity, and urbanization in particular, affect animals' movement and foraging patterns [23], but we are far from understanding the details of these effects.
Bats are among the most common mammals in cities [24,25]. Previous studies on bats found that their response to urbanization is highly species-specific [24,[26][27][28]. Some species profit from urban habitats and human settlements, roosting in buildings, drinking from swimming pools [29][30][31][32][33], and reproducing more successfully [26], while the presence of other species dramatically declines in response to habitat loss and disturbance [21,[34][35][36][37][38]. There is little work on bats' movement and foraging in urban environments. Geggie and Fenton suggested that Eptesicus fuscus bats in urban colonies spend more time out of their roosts (in comparison to rural conspecifics) and hypothesized that this might be a result of lower prey densities in the city, forcing these bats to fly farther [39]. Tomassini et al. suggested that changes in the cranial size of Pipistrellus kuhlii bats are a result of a diet shift due to anthropogenic activity [40].
Fruit bats (family Pteropodidae) are among the more prevalent families of bats in urban environments [41][42][43][44][45]. One hypothesized reason for this preference is the favorable urban micro-climate, characterized by warmer temperatures suitable for these bats [46]. Cities have also been hypothesized to provide fruit bats a refuge from predation [47], and perhaps also to ease navigation due to the abundance of landmarks [44,48]. Another suggested reason for fruit bat urban activity is fruit availability and diversity, which is often richer and more stable year-round in cities than in rural environments due to irrigation and planting [44]. Because the distribution of fruit trees and fruit species is much denser in urban environments than in rural or natural environments, we hypothesized that fruit bats will differ in their foraging strategy between these two environments. Specifically, we predicted that bats will fly less and visit trees near the roost. To test this, we documented and compared the foraging movement of Egyptian fruit bats (Rousettus aegyptiacus) in urban and rural environments.
The Egyptian fruit bat congregates in colonies of dozens to thousands of individuals and feeds on a wide range of fruit- and nectar-providing plants [49]. Many Egyptian fruit bats successfully exploit urban roosting sites that can host large colonies (e.g., roofed parking lots) and can be seen foraging in gardens and backyards in Israel and the surrounding region [49,50]. In parallel to their abundance in cities, many fruit bats still roost in rural environments. Fruit trees, the resource exploited by fruit bats, have very different distributions in urban and rural environments in Israel. Specifically, the diversity of fruit species per area is much larger in the city. This provides a fascinating opportunity to examine the differences in foraging between these two environments. We thus aimed to examine how Egyptian fruit bats that specialize in exploiting the city adjust their movement and foraging behavior. We used miniature onboard GPS devices to track the exact foraging behavior of fruit bats in urban and rural environments. We moreover reconstructed the bats' diet by localizing and identifying the trees they ate from. This approach revealed that the environment (urban or rural) significantly affects foraging patterns. We further discovered that the location of the colony (urban or rural) does not determine the foraging grounds of its inhabitants; that is, many bats that do not roost in settlements nightly commute to forage within them. We suggest that fruit bats exploit cities in order to diversify their diets.
Results
In total, we tracked 39 bats: 19 from two rural colonies and 20 from two urban colonies, for an average period of 8.1 ± 12.1 nights each. Because male and female bats can differ in their space exploitation [51], and because pregnant females might move differently, we narrowed our comparison to males only. Bats from urban colonies spent the great majority of their foraging time (72% on average) in urban areas, but interestingly, bats from rural colonies often foraged in urban environments, spending on average 45% of their time foraging in settlements. We used the Global Urbanization Footprint criterion (GUF, DLR 2016) [52][53][54] to distinguish between urban and rural foraging sites, and we quantified the percent of time each individual bat spent in each environment (see the "Methods" section).
Exploratory urban foraging vs. fixed rural foraging
Independently of their roost, bats were much more diverse when foraging in urban environments, namely, they switched foraging sites more often than in rural environments. Our results reveal a strong correlation between the amount of time a bat spends in urban environments and the number of foraging sites it visited. Bats visited up to three times more sites per night in urban environments (see Fig. 1a, b for bats' trajectories, and Fig. 1c; mixed effect GLM, P < 10^-3, with the number of sites set as the explained variable; the percent of time spent in urban environments and the location of the roost (urban/rural) set as fixed effects; and with roost ID, bat ID, and season set as random effect intercepts). Using only bats that were tracked for at least 5 nights (ca. 54% of the data) did not alter the results (Additional File 1: Fig. S1), suggesting that our result is not an artifact of the tracking periods. Bats from urban colonies tended to switch sites more than bats from rural colonies (compare the gray and black points in Fig. 1c), but this difference was only nearly significant (P = 0.06 for the effect of roost location), and the interaction between roost location and the time spent in urban environments was not significant (P = 0.50). The result remained the same when examining the number of sites visited per hour (rather than over the entire night) to control for the need of rural bats to fly farther to reach the cities (P < 10^-3, mixed effect GLM as above, but with the number of sites per hour set as the explained variable). Bats that routinely foraged in urban environments were also more prone to switch foraging sites on consecutive nights, while bats foraging in rural environments mostly returned to the same sites night after night (P = 0.02, mixed effect GLM as above, with the number of switches on consecutive nights set as the explained variable). The size of the settlement and the human population density where the bats foraged did not significantly affect the rate of site switching (P > 0.07 and P > 0.12 when adding either the population size or density as a fixed factor to the above mixed effect GLM, with the number of visited sites set as the explained variable). This suggests that, in the research area, bats behave similarly in urban environments independently of their characteristics (e.g., large or small).

Fig. 1 Roost and expertise shape foraging patterns. a The trajectories of 10 individuals: left, five bats from an urban colony who mostly forage in an urban area; right, five bats from a rural colony who mostly forage in a rural area (each individual is colored differently). b The movement of one rural- and one urban-roosting bat. In both a and b, yellow dots depict foraging sites and yellow squares depict the roosts. c The number of sites visited by the bats as a function of the percent of time they spent in urban areas. In c and d, each point represents one night and all bats are overlaid. d The number of tree species visited as a function of the number of sites visited by the bat. e The Shannon index as a function of the percent of time the bats spent in urban areas. Each point represents a bat, black for bats from urban colonies and gray for bats from rural colonies. See main text for the statistical analysis of panels c-e.
The season of the year did not have a significant effect on the bats' tendency to forage in urban or rural areas (P = 0.57, mixed effect GLM as above, with the percent of time spent in urban environments set as the explained variable, and the month of the year and the location of the roost (urban/rural) set as fixed factors; the interaction between season and roost location was also not significant, P = 0.31).
We hypothesized that two main possible reasons can explain these environment-dependent differences in foraging: (1) increased competition in the urban environment drives bats to leave foraging sites more often; or (2) bats switch sites in urban environments to gain some benefit, such as to diversify their diet. We next examined both hypotheses.
Competition
Although fruit bats sometimes attempt to scrounge food from each other [55], these attempts seem to be part of a complex system of sociality [56], and our vast observations of foraging fruit bats did not reveal territorial behavior aimed at defending a tree and removing competitors. This did not surprise us, because trees offer much more fruit than visitors can consume on a given night. To validate this, we quantified the amount of ripe fruit on one of the common tree species eaten by the bats (Ficus rubiginosa, see the "Methods" section). Our assessment (based on n = 41 trees visited by our bats) suggests that on an average night, any of these trees in the region offers ~27 kg of ripe fruit. Note that this is the available fruit mass on a given night, which is the important measurement for our purpose as it already takes consumption into account. That is, in the region of the study, every Ficus rubiginosa had on average ~27 kg of ripe fruit on any given night based on our assessments. This amount is enough to supply the nightly food demand of 185 bats even if this were their only source of food. In our hundreds of observations, we have never observed more than ten individuals on a tree (the average number of bats was 1.9 ± 1.3, mean ± SD, n = 100 observations). The abundance of unconsumed fruit on the trees is also supported by the fact that a lot of fresh ripe fruit can regularly be found on the ground under the trees, suggesting that depletion does not drive the bats to switch foraging sites. Moreover, we identified ~200 fruiting Ficus rubiginosa trees in the area, which should be able to feed more than 10,000 bats even if this were their only food source, while in reality, bats eat from dozens of other types of fruit (see Additional File 2: Table S1; because of Ficus seasonality, we assumed that ~25% of the trees have fruit at any moment). These are all rough estimates (see the "Methods" section), but in our calculations, we always tried to underestimate the number of trees and the amount of fruit, while we always overestimated the number of bats and their consumption, thus assuring that the conclusions are valid. We quantified the amount of fruit in one species that is easy to quantify (due to its relatively large fruit and spacious foliage), but it is important to note that settlements in the region are densely populated with fruit trees planted by the municipality or by private house-owners (Additional File 3: Fig. S2).
Moreover, an analysis of the bats' social interactions at the foraging sites also suggests that competition was not driving them to switch foraging sites. Fruit bats often interact at the foraging sites, and these interactions are accompanied by vocalizations (bats commonly land near a perching bat, a behavior that results in vocal communication [57]). We thus recorded audio continuously onboard nine bats (additional to the 39 above; see the "Methods" section) to estimate the density of vocal interactions, as a proxy for bat density at foraging sites. Acoustic monitoring is a common method to assess bat density [58,59], and we could estimate that ~90% of the social calls we recorded were emitted by conspecifics who were not interacting with the focal bat carrying the microphone. We could determine this because the intensity of a vocalization differs greatly depending on whether it is emitted by the individual carrying the microphone or by a remote conspecific. We could not determine whether one or more bats were calling, but it is unlikely that our result was driven by this. In light of our large sample size, it is unlikely that a local bias (e.g., one bat calling at one or a few locations) would generate the correlation that we observe (it could explain part of the noise that we see).
If competition drove the bats to switch foraging sites, we would expect a negative correlation between bat density and the time an individual bat spends at a foraging site. However, not only was this not the case, we actually found a positive correlation between the abundance of social calls and the bats' tendency to spend time on a foraging tree (P = 0.004, mixed effect GLM with the percent of time spent on the foraging tree set as the explained variable, the number of social calls set as a fixed factor, and the bat's ID and date set as random effect intercepts). This finding is in line with the highly social nature of this species, which causes them to seek conspecifics (even though bats are mostly alone on a tree). For example, when offered two food sources in captivity, most bats will aggregate at one source, consume it, and then move to the next one, even though this is the less efficient strategy.
Moreover, there was no correlation between the bats' mean propensity for tree switching and the mean bat density (Pearson correlation, R = 0.17, P > 0.6). While the GPS data showed that the bats switched trees more often in the first half of the night, the acoustic monitoring revealed that the number of bats on trees peaked in the middle of the night. Once again, this points against competition.
Using our onboard sensors, we also recorded continuous acceleration of these nine bats, which revealed that they spent the great majority of the time (> 80%) resting on the trees between occasional bouts of feeding (this is also the behavior we observe in the field). Once again, this contradicts competition, because if there were much competition over food, the bats would be expected to be feeding or moving elsewhere, not resting most of the time. Notably, bats from the same colony do not typically fly together to the same tree [60], so there is no group-defense incentive, as was suggested for other fruit-eating bats [61].
Dietary diversification
We hypothesized that the main benefit the bats could gain from changing their foraging strategy in urban environments is improving their diet, that is, acquiring more protein and perhaps other nutrients that are essential for the species but often low in fruit [62]. We predicted that an attempt to improve the diet would be expressed as a higher diversity of fruit species in the diet of urban bats. We thus mapped the foraging sites visited by the bats in both rural and urban environments and identified the fruit they ate. Urban areas in warm countries like Israel are characterized by plentiful fruit trees, planted by both municipalities and individuals. We quantified the average number of fruit species in ten random urban squares of 0.5 × 0.5 km, counting only species known to be consumed by fruit bats. Our estimates showed that such an area contains a mean of 29.8 ± 4.6 bat-edible fruit species, while in agricultural or wild rural environments, such an area will never contain more than a handful of fruiting tree species (often no more than one). The fact that in urban environments the bats almost always switched fruit species when moving between sites strengthened the diet diversification hypothesis (Fig. 1d; notably, the Pearson correlation between the number of sites visited and the number of tree species visited was significant, P < 10^-4, R^2 = 0.56. The abundance of tree types is far from uniform, so this correlation cannot be explained by random visitation). To quantify diet diversification, we compared diet diversity in urban and rural foraging environments using the Shannon and the Simpson diversity measurements. The diet diversity (measured by either parameter) was significantly correlated with the percent of time the bat spent foraging in urban environments (Fig. 1e, P = 0.0003 for the Shannon index; Additional File 4: Fig. S3, P = 0.0007 for the Simpson index; mixed effect GLM with the diversity set as the explained variable and the rest as above; results are reported for diversity estimates over 3 days, but they were significant for 4-5 days as well). In this analysis, there was also a significant correlation for the interaction term between the time spent in urban sites and the location of the colony, suggesting that bats that roost in the city might have some advantage in food diversification (P = 0.038 and P = 0.069 for the Shannon and Simpson indices, respectively, in the same GLM). Finally, if bats were driven by competition and did not actively try to diversify their diet, we would expect them to visit tree species according to their abundance, but this was clearly not the case. Comparing the distribution of available types of fruit in the area with the distribution of the trees actually visited reveals great differences between the two (Additional File 5: Fig. S4), suggesting that the bats are not simply hopping to the next available tree.
Discussion
Understanding animal behavior in a rapidly changing world is one of the main goals of modern ecology. Specifically, it is crucial that we collect better information on how animals deal with the fragmented urban environment, but in order to do so, we must acquire detailed information about the behavior of urban and rural populations, which is difficult to do, especially for small animals. Bats are of special interest due to their relative abundance in cities and their unique mobility among mammals. In this study, we GPS tracked the movement of fruit bats roosting and moving in urban and rural environments for the first time, allowing us to examine their precise foraging behavior. Our results demonstrate how fruit bats have become experts at exploiting the city. When foraging in urban environments, bats exhibit a different foraging strategy, switching foraging sites often and exhibiting much less stereotypical behavior than when foraging in rural environments.
We do not find evidence for competition or defense, and we hypothesize that switching foraging sites is a behavior adapted to the distribution of food in urban environments. The size or density of the settlement did not significantly affect the bats' switching behavior. This is reasonable, because in Israel, the availability of different types of fruit within a short range is much higher in urban environments, including small villages, than in the countryside, where there is often plentiful fruit (e.g., in agricultural plantations) but where bats must fly far from one type of plantation to another. Indeed, we demonstrate that bats in urban environments switch fruit types very often, and by doing so, they achieve a more diverse diet. While in the country ca. 8 species of trees account for 70% of the bats' diet, in the city more than twice as many (ca. 17) species account for the same percentage (Additional File 6: Fig. S5). The city diet is characterized by many introduced species that are not common in rural areas, such as Ficus rubiginosa. Bats turned to feeding in rural areas when a high-quality fruit such as Diospyros kaki was available in plantations. This was one of the only cases where bats from urban colonies exited cities to feed in rural areas. We did not directly measure how this behavior affects bats' diet in terms of nutrients, but as the bats seem to choose the species they visit, we hypothesize that food diversity is a proxy for diet quality. Switching foraging sites increases food diversity both directly, by switching fruit type, and probably also indirectly, by encountering new bats and potentially acquiring social information about additional resources. This idea of social information transfer was supported by our finding that bats seem to prefer trees with more conspecifics [55,56]. Another possible advantage of switching foraging sites in urban environments is exploration of new sites (and repeated examination of familiar sites), which is probably more important in a rapidly changing environment such as a city (e.g., trees can be removed or pruned).
Although we cannot completely exclude some effect of competition over resources, it does not seem to be the main factor explaining urban exploration behavior, as trees typically have enough fruit to support many bats over many nights. Differences in predation risk, which have been used to explain animal behavior in urban environments (usually asserting that reduced predation in cities makes animals bolder) [19,63,64], are also not likely to be the reason for the behavior that we observe. Although we cannot rule out such differences completely, common species of owls, which are the main predators of fruit bats at the foraging site, can be found in both rural areas and settlements, but are probably more abundant in rural areas (e.g., Bubo bubo and Tyto alba [65]); thus, we would expect bats foraging in rural areas to switch foraging sites more often.
Our results show that in a fragmented area, where human settlements are always available within a few kilometers, bats have to make two almost independent decisions: where to roost and where to forage. Of these two decisions, it is the bat's foraging area, and not its home, that seems to determine how a bat behaves. Interestingly, we found that a substantial percentage of the bats chose to live in rural colonies and commute to urban environments nightly to exploit them. Why these specific individuals do not roost inside urban environments is an open question. One possible hypothesis is the lack of stable roosts inside settlements. Urban fruit bats roost in roofed parking lots or abandoned buildings, but these roosts tend to be unstable due to human activity, and the bats are commonly driven out. An alternative explanation is that this choice reflects differences in behavioral types (often referred to as personality [66]) and that this might be another example of how personality shapes urban foraging behavior [67], where bolder individuals who are more susceptible to changes choose to roost inside settlements. Although urban-roosting bats occasionally forage in rural areas outside of the city, we observed very few urban-roosting bats that consistently foraged in the countryside. Even in the few cases that we observed, it seemed that this behavior was a result of the availability of highly attractive (ephemeral) fruit that cannot be found in cities: the bats flew to persimmon orchards.
Conclusions
Among mammals, bats are unique in their immense movement capacity relative to their size. Indeed, we find that many fruit bats live outside settlements and commute nightly to exploit them. This is an example of how animals with high motility can live outside urban areas and still exploit them on demand. The better we understand how animals move and exploit urban environments, the better we will be able to draw conservation conclusions, which are essential in our rapidly changing world. Our study also demonstrates how animals can behave dramatically differently depending on the environment, exemplifying the importance of comparing animal behavior across backgrounds and contexts.
Methods

Animal model
The study was performed under a permit of the Tel-Aviv University IACUC (Number: L-11-054). Adult male Rousettus aegyptiacus bats were captured with permission from the Israeli National Parks Authority. Bats were collected from four colonies: two urban colonies (Herzelia cave, n = 18 bats, and our in-house university colony, n = 2 bats [55]), both located deep inside the Tel-Aviv urban area, and two rural colonies (Beit Guvrin cave, n = 17, and Segafim cave, n = 2), both located in rural areas that are partially natural and partially agricultural. The two rural colonies are at least 10 km away from any city but are surrounded by small agricultural settlements. Data were collected between January 2012 and February 2018. We monitored bats in all colonies across all seasons. For the movement comparison, we used data for 39 bats for which we had at least two nights of tracking (see full details in Table 1). Because we examined bats throughout the year and did not want to interfere with pregnant or lactating bats (which might also move differently), we only tracked males. For the audio and acceleration recordings used to study competition, we used data from nine additional bats. We sampled bats from four colonies to avoid a strong colony bias, but it is important to note that all individuals were analyzed together as 39 independent individuals. Supporting the claim that these are independent individuals, we have shown in the past that the genetic relations of individuals in these fruit bat colonies are random (i.e., they are not more related than the average population [56]). We have also shown that fruit bats rarely follow each other in flight when emerging from the cave, supporting their independence as foragers [60]. Moreover, these bats sometimes move to nearby roosts, further strengthening our claim that treating the individual colony as a sampling unit is meaningless. The sample size used in this study is large in comparison to other studies that involve tracking small animals. We tracked the bats for a period of ~6.5 nights on average, which should be enough to detect environment-related differences. The fact that our data are spread over several years and over different seasons is an advantage, reducing the possibility of finding a difference in foraging that is a result of some transient difference between environments.
Animal tracking
Bats were caught in their roost using mist or hand nets. All bats were processed and tagged within 2 h and released at their cave. Tags were retrieved by collecting them from the ground after they fell off the animals. The tracking device was a GPS data-logger (Lucid Ltd., Israel; 30 × 20 × 4 mm). The device's total weight (including battery, coating, and a 0.3 g LB-2X telemetry unit; Holohil Systems Ltd., Carp, Ontario, Canada) was 11.8 g on average, which accounted for 6.9% ± 0.42% of the weight of the bats (mean ± SD). The telemetry unit was attached to the device to assist in finding it once it fell off the bats. The devices were wrapped in polymorph for waterproofing and were attached to the bats using medical cement glue (Perma-Type Surgical Cement, AC103000, USA). After attachment, bats were held for about 5 min to allow the adhesive to dry and then placed in a cloth bag for another 15 min before release (see refs. [68,69] for full details). GPS positions were sampled at 10-15-s intervals.
To analyze the bats' behavior in response to conspecifics, we tracked an additional nine bats from our open colony [70] for a period of four nights each using GPS devices that also include a microphone and a 3-DOF accelerometer (Vesper Inc., with a Knowles microphone, FG series, A.S.D-tech). The tag's average weight was 8.1 g, accounting for 4.7% ± 0.29% of the bat's body mass. GPS positions (sampled every 30 s) allowed us to extract the bats' flight trajectories and the foraging trees that they visited (all of which we later visited ourselves). Continuous audio recordings were analyzed manually to detect all social calls emitted at the foraging site. Note that due to their low frequency [71], social calls can be picked up from a relatively long distance (at least 50 m), and thus all calls emitted (by any of the bats present on the foraging tree) were likely to be recorded. Because fruit bats often interact and vocalize when perching on foraging trees, we used the number of social calls as a proxy for the density of bats on the tree. Finally, acceleration analysis allowed us to distinguish between flight and perching bouts, which were detected manually.
Foraging and commute segmentation
Whenever we refer to commuting in this study, we mean the accumulated parts of the movement classified as commuting (i.e., movement between foraging sites rather than foraging itself). Similar to our approach in a previous study [69], we used a combination of two indices, the straightness index [72] and the first passage time [73], to define foraging sites and to separate them from periods of commuting.
a) The straightness index (ranging between zero and one) is defined as the ratio between the minimal distance between two points and the length of the actual path traveled between these two points. Following our procedure in [69], the straightness index was calculated at each point along the trajectory with a window of 12 min. Values below 0.5 were defined as foraging (see [72] for details).

b) The first passage time is defined as the total duration the animal spends within a given circle centered around any location along the trajectory. The first passage time was estimated for each location along the trajectory with a radius of interest of 50 m. The minimum first passage time for defining a location as a foraging site was set to 50 s (see [73] for details).

These thresholds were motivated by the typical radius and time of flying around a foraging tree, also taking into account the GPS error; we used them successfully in a previous study [69]. Any point along the trajectory that crossed one of the two thresholds (a straightness index of less than 0.5 or a first passage time of more than 50 s) was defined as a moment of foraging. After identifying all potential foraging sites (i.e., connecting all locations at which foraging occurred), we omitted sites in which bats spent less than 30 s in total.
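To make the procedure concrete, here is a minimal Python sketch of the two-index segmentation, assuming projected coordinates x, y in meters and times t in seconds (numpy arrays). The first-passage value is approximated here as the total tracked time within the radius, and the 30-s minimum site duration and the grouping of flagged points into sites are omitted.

    import numpy as np

    def straightness_index(x, y, t, window_s=720.0):
        # Net displacement over path length within a 12-min window
        # centered on each fix; values below 0.5 flag foraging.
        si = np.ones(len(t))
        for i in range(len(t)):
            w = np.abs(t - t[i]) <= window_s / 2.0
            xs, ys = x[w], y[w]
            path = np.hypot(np.diff(xs), np.diff(ys)).sum()
            if path > 0:
                si[i] = np.hypot(xs[-1] - xs[0], ys[-1] - ys[0]) / path
        return si

    def first_passage_time(x, y, t, radius_m=50.0):
        # Coarse proxy: total tracked time spent within radius_m of
        # each fix; values above 50 s flag foraging.
        dt = np.diff(t)
        fpt = np.zeros(len(t))
        for i in range(len(t)):
            inside = np.hypot(x - x[i], y - y[i]) <= radius_m
            fpt[i] = dt[inside[:-1]].sum()
        return fpt

    def foraging_mask(x, y, t):
        # A fix counts as foraging if either criterion fires.
        return (straightness_index(x, y, t) < 0.5) | \
               (first_passage_time(x, y, t) > 50.0)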
Identification of foraging trees
The centers of the foraging sites were identified based on the bats' GPS data by taking the mean over the x and y positions of all the locations that were defined as a foraging event.
We then visited and identified the trees in most of the bats' foraging sites (72.5%). The tree closest to the center of the site was photographed, and leaf samples were taken for consultation with experts. If more than one species of tree was in close proximity, we could usually exclude non-relevant ones based on the season when the bat visited the site. Often, we could also find remains of chewed fruit or leaves under the trees (these are typically spat out by fruit bats). In total, we mapped and identified 872 trees of 62 species (see Additional File 2: Table S1).
Quantification of fruit on Ficus rubiginosa
We chose to quantify the amount of fruit on Ficus rubiginosa because it was one of the popular trees consumed by our bats, but not a tree so extremely common that counting would be difficult. Forty-one Ficus rubiginosa trees fed on by our bats were visited within 2 weeks of the bats' visits. A section of each tree was photographed, and the number of ripe fruits (which can be detected by their color; Additional File 7: Fig. S6) was counted manually. The area of the section was estimated per photo using the size of a typical fruit as a scale. This provided us with the number of fruits per square meter, which we extrapolated to the entire outer surface of the tree, assuming it was a hemisphere (2πR^2). By including only the surface (and not the internal branches), we underestimate the amount of fruit. Moreover, we chose R = 4 m, even though most of our trees had radii of at least 6 m, once again to underestimate fruit quantity. The number of fruits per tree was translated to mass using the average fruit mass (1.9 g, measured on 500 fruits). To estimate the average amount of fruit offered nightly by all trees, we estimated that only a third of them offer ripe fruit at any given moment [74,75]. The number of Ficus rubiginosa trees that we used for this estimate (200) is most likely a serious underestimate of the real number. This number was taken from the Tel-Aviv Municipality tree map, which covers only ca. 50% of the trees, and only those in the public domain, and is thus, again, an underestimate (https://gisn.tel-aviv.gov.il/iView2js4/index.aspx?extent=3871338,3774019,3871706,3774169&layers=628,865&back=0&year=2019&opacity=0.8&filters=).
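As a sanity check on this arithmetic, the short Python sketch below reuses only the figures quoted above (R = 4 m, 1.9 g per fruit, ~27 kg of ripe fruit per tree per night, 185 bats fed per tree, ~200 trees with roughly a third fruiting); the ~146 g nightly intake per bat is implied by dividing the quoted numbers, not a value stated in the text.

    import math

    def nightly_ripe_fruit_kg(fruits_per_m2, radius_m=4.0, fruit_mass_g=1.9):
        # Extrapolate the photographed fruit density to the tree's outer
        # hemisphere (2*pi*R^2); counting only the surface underestimates.
        return fruits_per_m2 * 2.0 * math.pi * radius_m**2 * fruit_mass_g / 1000.0

    per_tree_kg = 27.0   # ripe fruit per tree per night (reported)
    bats_fed = 185       # bats one tree can feed (reported)
    print(per_tree_kg / bats_fed * 1000)  # ~146 g per bat per night (implied)
    print(200 // 3 * bats_fed)            # ~1/3 of ~200 trees fruiting -> >12,000 bats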
Quantification of fruit species density in urban areas
To quantify the density of fruit species eaten by Rousettus in urban areas, we randomly chose 10 squares of 0.5 × 0.5 km. We used the Tel-Aviv tree map (see above) and our own tree mapping to calculate how many fruit species (including only species known to be consumed by fruit bats) exist in each square. This is, again, an underestimate of the actual number of fruit species because of the map's incompleteness (see above).
Urban vs. rural foraging sites
Urban foraging sites were defined as sites in urban regions, that is, sites that according to the GUF data are within a built-up area (a region featuring man-made building structures). The GUF data is a binary map (values of 255 for built-up areas and 0 for non-urban areas) generated from a global coverage of the Earth's surface with TerraSAR-X/TanDEM-X radar data at 3 m ground resolution.
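Classifying GPS fixes against such a binary raster is straightforward with the rasterio library; a minimal sketch, assuming the GUF layer has been saved locally as a GeoTIFF (the path "guf.tif" is a placeholder) and that the fix coordinates are already in the raster's coordinate reference system:

    import rasterio

    def urban_flags(fixes_xy, guf_path="guf.tif"):
        # GUF is binary: 255 = built-up (urban), 0 = non-urban.
        with rasterio.open(guf_path) as src:
            return [int(v[0]) == 255 for v in src.sample(fixes_xy)]

    # percent of a night's fixes in urban areas:
    # flags = urban_flags(list(zip(x, y)))
    # pct_urban = 100.0 * sum(flags) / len(flags)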
Use of urbanization
The time an individual spent in urban environments was defined as the average time it spent in urban foraging sites in a single night or across all of its nights.
Behavioral variability over consecutive nights
Behavioral variability over consecutive nights was defined as the number of foraging-site switches between consecutive nights. To this end, we estimated the number of changes required to transform one night's array of foraging sites into the array of the consecutive night. The sets of foraging events on two consecutive nights were compared, and a minimal number of operations was sought to transform the first set into the second without regard to order (i.e., the sets [1,2] and [2,1] are considered identical). The set of allowed operations includes insertion and deletion of sites, with both operations at a unit cost.
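Because the order of sites is ignored and insertions and deletions each cost one unit, this distance reduces to the size of the symmetric difference between the two nights' sets of sites. A minimal Python sketch, assuming each night has been reduced to a collection of site identifiers:

    def site_switches(night_a, night_b):
        # Minimal insertions + deletions turning one night's set of
        # foraging sites into the next night's (order ignored).
        return len(set(night_a) ^ set(night_b))

    print(site_switches([1, 2], [2, 3]))  # delete site 1, insert site 3 -> 2
    print(site_switches([1, 2], [2, 1]))  # identical sets -> 0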
Estimating diet diversity
Based on our monitoring of the fruit trees visited by the bats during their foraging, diversity indices were calculated to address the question of species diversity within the foraging sites of each community (city/country) and to compare the richness and evenness between them. Two of the most popular diversity indices are the Shannon index and the Simpson index, both of which use p_i, the proportion of fruit species i [76]:

Shannon index: H = − Σ_i p_i ln(p_i)

Simpson index: D = 1 − Σ_i p_i^2

We ran the same analysis on windows of 3-5 days and obtained a significant correlation with urbanity in all cases.
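A minimal Python sketch of both indices from a bat's list of visited fruit species; the complement (Gini-Simpson) form 1 − Σ p_i^2 is assumed here since, like Shannon's H, it increases with diversity:

    import math
    from collections import Counter

    def diet_diversity(visited_species):
        # p_i is the proportion of fruit species i among the visits.
        counts = Counter(visited_species)
        n = sum(counts.values())
        p = [c / n for c in counts.values()]
        shannon = -sum(pi * math.log(pi) for pi in p)
        simpson = 1.0 - sum(pi * pi for pi in p)
        return shannon, simpson

    print(diet_diversity(["ficus", "ficus", "mulberry", "persimmon"]))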
Statistics
Generalized linear mixed effect models were used (Matlab 2018) to assess the effect of different parameters on foraging and movement. The specific factors used in each analysis are described in the main text. All random effects were random intercepts. Count variables (e.g., the number of visits or tree species per night) were modeled using a Poisson distribution. To correct for possible seasonal biases, we added the month as an additional random effect in all statistical analyses.
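The models were fitted in Matlab; as a rough Python analogue, the sketch below fits a fixed-effects Poisson GLM with statsmodels to a hypothetical bat-night table (the column names and values are illustrative only, and the random intercepts of the full mixed-effect models are deliberately omitted to keep the sketch short and runnable):

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # One row per bat-night (made-up values for illustration).
    df = pd.DataFrame({
        "n_sites":   [3, 7, 2, 9, 4, 6],
        "pct_urban": [10, 80, 5, 95, 40, 70],
        "roost":     ["rural", "urban", "rural", "urban", "rural", "urban"],
    })

    # Poisson family for the count response, as in the paper.
    fit = smf.glm("n_sites ~ pct_urban + roost", data=df,
                  family=sm.families.Poisson()).fit()
    print(fit.summary())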
Additional File 1: Figure S1. Urban bats visit more sites per night. The number of sites visited by bats that were tracked for at least 5 nights as a function of the percent of time they spent in urban areas. Each point represents one night; black points are from bats from urban colonies and grey points from bats from rural colonies.
Additional File 2: Table S1. Fruit species visited by bats in our study.
Additional File 3: Figure S2. Fruit trees available in urban environments. We color-coded all trees around the Herzelia cave, where most of our urban bats came from (vegetation that was not color-coded is mostly comprised of fields). The great majority of these trees offer fruit that is consumed by fruit bats. Trees were identified using a green-color filter, while validating our classification in several patches with high-resolution images. We aimed to underestimate the number of identified trees in the image.
Additional File 4: Figure S3. Urban bats diversify their diets. The Simpson index as a function of the percent of time the bats spent in urban areas. Each point represents a bat.
Additional File 5: Figure S4. Urban bats select the tree types they visit. The graph shows the distribution of fruit trees in the city (blue line), with the trees ordered from the most to the least common species (left to right), while in red we present the actual visitation rate for each species. It is very clear that the visitation does not follow the distribution (we highlight a few of the most popular species).
Additional File 6: Figure S5. Urban bats visit a larger variety of fruit types. The accumulated percentage of feeding according to the number of fruit species in urban and rural bats (based on Additional File 2: Table S1).
Additional File 7: Figure S6. Fruit bats select ripe fruit. A fruit bat feeding on a Ficus rubiginosa tree. Only red fruit (like the one in the bat's mouth) was marked as ripe when estimating the amount of fruit.
Optical Studies of 8 AM Herculis-Type Cataclysmic Variable Stars
We report detailed follow-up observations of 8 cataclysmic variable stars (CVs) that are apparently AM Her stars, also called polars. For all, we either determine orbital periods for the first time, or improve on existing determinations. The seven for which we have spectra show the high-amplitude radial velocity curves and prominent HeII 4686 emission lines characteristic of strongly magnetic CVs, and their periods, which range from 81 to 219 minutes, are also typical for AM Her stars. Two objects from the Gaia-alerts index, Gaia18aot and Gaia18aya, are newly identified as CVs. Another, RX J0636.3+6554, eclipses deeply, while CSS080228:081210+040352 shows a sharp dip that is apparently a partial eclipse. The spectrum of Gaia18aya has a cyclotron harmonic near 5500 Angstroms that constrains the surface field to about 49 megagauss or greater.
INTRODUCTION
Cataclysmic variable stars (Warner 1995) are close binary systems in which a white dwarf accretes material from a more extended companion, usually resembling a main-sequence star, which overflows its Roche lobe (critical equipotential surface). The name arose because the first known examples underwent outbursts: classical nova explosions occur when nuclear fuel accumulated on the white dwarf's surface explodes, and the more common dwarf novae undergo outbursts when gas accumulated in an accretion disk becomes unstable and rapidly accretes onto the white dwarf.
Magnetic CVs, in which the white dwarf is strongly magnetized, can behave quite differently. They are usually much stronger X-ray emitters than non-magnetic CVs. If the magnetic field is not especially strong, an accretion disk can form far from the white dwarf, but the inner disk is disrupted and the field forces material to fall onto the poles of the white dwarf. Systems of this kind are called DQ Herculis stars (Patterson 1994), or intermediate polars, and they show pulsations in the X-ray and optical bands at the rotation period of the white dwarf and/or the orbital sidebands (e.g. the orbital-spin beat). A still stronger magnetic field can disrupt the formation of an accretion disk entirely; often, magnetic torques force the white dwarf to co-rotate with the orbit, though the coupling is weak enough that the white dwarf can sometimes be temporarily knocked out of co-rotation. In these systems, at least some of the matter lost from the companion threads onto the magnetic field and falls directly onto the white dwarf's magnetic poles via magnetically confined accretion columns. CVs of this kind are classified as AM Herculis stars, after their prototype, and are also called polars (Cropper 1990), because they often show strong circular polarization modulated at the orbital (= rotational) frequency.
It is often easy to recognize an AM Her star even without polarization measurements. When they are accreting actively, their spectra show strong emission lines with high excitation; the He II λ4686 emission is usually comparable in strength to Hβ. Because the emission lines arise largely in the accretion column, their radial velocities are often dominated by infall, which can reach velocities much higher than the white dwarf's orbital speed. The rotation of the white dwarf changes our viewing angle, leading to large variations in velocity (up to a few thousand km s^-1) periodic on the white dwarf rotation period, which in co-rotating systems is the same as the orbital period P_orb. The brighter parts of the accretion column can also disappear over the limb of the white dwarf as it rotates, causing the intensity of both the lines and the continuum to vary. As with any binary system, eclipses also occur if the inclination is high enough. Typically, most of the eclipsed flux arises from the bright base of the accretion column (or columns).

Table 1 note: Positions, mean G magnitudes, and distances from the Gaia Data Release 2 (DR2; Gaia Collaboration et al. 2016, 2018). Positions are referred to the ICRS (essentially the reference frame for J2000), and the catalog epoch (for proper-motion corrections) is 2015. The distances and their error bars are the inverse of the DR2 parallax π_DR2, and do not include any corrections for possible systematic errors.
We have been observing CVs, mostly spectroscopically, to characterize them and in particular to measure their orbital periods when possible. Here we present studies of 8 CVs that are apparently AM Her stars. Table 1 lists the stars discussed here.
In Section 2, we describe the instrumentation and techniques used for our observations, reductions, and analysis. Section 3 gives detailed information on the individual objects. Section 4 summarizes and draws attention to the results we think are most interesting.
TECHNIQUES
Nearly all of the data presented here are from MDM Observatory, on Kitt Peak, Arizona. Here we only summarize our observing protocols, data reduction, and analysis techniques, since they were mostly similar to those described in previous papers (e.g. Halpern et al. 2018; Thorstensen et al. 2016).
Spectroscopy
Most of our spectra are from the "modspec" spectrograph, usually mounted on the 2.4m Hiltner telescope, though occasionally on the 1.3m McGraw-Hill telescope. A 600 line mm^-1 grating gave 2 Å pixel^-1 with either of the two SITe CCD detectors (2048^2 or 1024^2 pixels) we used. We reduced these data with IRAF software driven by python scripts, but extracted the 2-dimensional spectra to 1-dimensional spectra using our own implementation of the optimal extraction algorithm described by Horne (1986). For wavelength calibration we derived a pixel-wavelength relation from comparison lamps taken in twilight, and then adjusted the zero point using the [O I] λ5577 airglow feature, since with this instrument a linear shift accurately compensated for the flexure of the Cassegrain-mounted spectrograph as the telescope moved.
The most recent observations are from the Ohio State Multi-Object Spectrometer (OSMOS; Martini et al. 2011) mounted on the 2.4 m, using the blue grism and 'inner' slit, which gave 0.7 Å pixel^-1 and ~3 Å resolution. While the reductions were generally similar to those for modspec, OSMOS required a more elaborate wavelength calibration procedure since the pixel-to-wavelength scale was less stable. To adjust the wavelength scale we either measured airglow features (tabulated by Osterbrock et al. 1996) or took short Hg and Ne lamp exposures adjacent to our science exposures.
We measured radial velocities, mostly of Hα, in the individual exposures by convolving the line profile with an antisymmetric function as described by Schneider & Young (1980). The choice of convolution function serves to emphasize different parts of the line profile (Shafter 1983). For the most part we chose the derivative of a Gaussian as the convolution function, which provides a measure of the 'overall' location of the line, including the line core.
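In outline, the measurement convolves the profile with an odd (antisymmetric) kernel and takes the zero crossing as the line position. A minimal Python sketch, assuming wavelength and flux arrays; the kernel width of 4 Å is an illustrative choice, not a value quoted here:

    import numpy as np

    def line_center(wav, flux, sigma=4.0):
        # Convolve with the derivative of a Gaussian; the zero crossing
        # nearest the line peak marks the measured line position.
        dx = np.median(np.diff(wav))
        nk = int(round(4 * sigma / dx))
        xk = np.arange(-nk, nk + 1) * dx
        kernel = -xk * np.exp(-0.5 * (xk / sigma) ** 2)
        conv = np.convolve(flux - np.median(flux), kernel, mode="same")
        peak = np.argmax(flux)
        crossings = np.nonzero(conv[:-1] * conv[1:] <= 0)[0]
        i = crossings[np.argmin(np.abs(crossings - peak))]
        frac = conv[i] / (conv[i] - conv[i + 1])  # linear interpolation
        return wav[i] + frac * dx

    # Radial velocity from the H-alpha rest wavelength (c in km/s):
    # v = 2.9979e5 * (line_center(wav, flux) - 6562.8) / 6562.8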
The emission lines of AM Her stars display complicated profiles that change through their orbits. We display these by creating two-dimensional images as follows. In most cases, we start by rectifying the spectra, that is, dividing them by a smooth function fitted to the continuum. Cosmic rays and other obvious artifacts are then edited out by hand. We compute the orbital phase of each spectrum, divide the orbital cycle into 100 phase bins, and average together spectra that fall within a window of each phase point, using a weighting function that is a truncated Gaussian in phase, centered on the phase point. Finally, we stack the averaged spectra into a two-dimensional image, repeating a cycle to avoid discontinuities. The sources studied here are rather faint, so some of our sources required exposure times of 720-900 s for adequate signal-to-noise; this resulted in some phase smearing. Even so, most of the trailed emission line spectrograms show a rather sharp component that is brightest as it swings from red to blue. This behavior is consistent with emission from the side of the companion star irradiated by the X-ray and ultraviolet flux from the white dwarf (see, e.g., Schwope et al. 1997, and for a very early example, Thorstensen et al. 1978).
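The stacking step can be sketched in a few lines of Python, for rectified spectra with known orbital phases; the Gaussian window width in phase (sigma) is an assumed parameter, since no value is quoted above:

    import numpy as np

    def trailed_spectrogram(phases, spectra, n_bins=100, sigma=0.03):
        # Each of 100 phase bins is a mean of the spectra weighted by a
        # truncated Gaussian in wrapped phase distance; one full cycle
        # is repeated at the end to avoid display discontinuities.
        spectra = np.asarray(spectra)       # shape (n_spectra, n_pixels)
        phases = np.asarray(phases) % 1.0
        img = np.zeros((n_bins, spectra.shape[1]))
        for j in range(n_bins):
            d = np.abs(phases - j / n_bins)
            d = np.minimum(d, 1.0 - d)      # wrap at phase 1
            w = np.exp(-0.5 * (d / sigma) ** 2) * (d < 3 * sigma)
            if w.sum() > 0:
                img[j] = w @ spectra / w.sum()
        return np.vstack([img, img])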
For one of our targets, CSS080228:081210+040352, we also obtained four spectra with the Southern African Large Telescope (SALT; Buckley et al. 2006); these are described in Section 3.4.
Photometry
The MDM time-series photometry is from the 1.3m telescope, mostly with an Andor IKON frame-transfer CCD camera. Some of the photometric data were taken with a 1024^2-pixel SITe CCD, cropped to a 256^2-pixel subarray to reduce the CCD readout time. The reduction script performed aperture photometry on the program star, a comparison star, and several check stars in each frame.
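The per-frame measurement can be sketched with photutils; the aperture radius, the fixed star positions, and the omission of sky subtraction and check stars are simplifications of this sketch rather than details of the actual reduction script:

    import numpy as np
    from photutils.aperture import CircularAperture, aperture_photometry

    def differential_light_curve(frames, target_xy, comp_xy, r=5.0):
        # Aperture photometry of the program and comparison star in
        # each frame, returned as differential magnitudes.
        apertures = CircularAperture([target_xy, comp_xy], r=r)
        mags = []
        for frame in frames:
            phot = aperture_photometry(frame, apertures)
            target, comp = phot["aperture_sum"]
            mags.append(-2.5 * np.log10(target / comp))
        return np.array(mags)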
We also include some time-series photometry from the 1-meter telescope at the South African Astronomical Observatory (SAAO), taken with the Sutherland High-speed Optical Camera (SHOC; Coppejans et al. 2013) and an Andor iXon 888 EM-CCD camera.
The OSMOS spectroscopic target acquisition procedure requires at least one direct exposure to place the slit on the target. We took these through a Sloan g filter, and developed an automated program to infer the target's g magnitude. The script detects the stars in the image, matches them to entries in the PAN-STARRS 1 Data Release 2 catalog, performs aperture photometry, establishes the offset between the instrumental magnitudes and the catalogued g magnitudes, and from this infers the g magnitude of the target just before the spectra were taken. Because the offset between instrumental and catalogued magnitude is common to the program and field stars, the procedure is differential. Given adequate signal-to-noise, it is accurate even in thin clouds and poor seeing.
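The zero-point step reduces to a single robust offset between instrumental and catalog magnitudes; a minimal sketch with made-up numbers (the use of a median here is an assumption for robustness, not a detail taken from the script):

    import numpy as np

    def target_g(inst_field, catalog_g, inst_target):
        # The offset between instrumental and PAN-STARRS g magnitudes is
        # shared by all stars, so clouds and seeing cancel out.
        zp = np.median(np.asarray(catalog_g) - np.asarray(inst_field))
        return inst_target + zp

    print(target_g([-8.1, -7.4, -9.0], [17.2, 17.9, 16.3], -7.0))  # -> 18.3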
Gaia18aot
This source was listed in the Gaia Transient Alerts on 2018 March 07, with an alerting magnitude of 17.32. Its Gaia light curve shows irregular fluctuations, mostly between 18th and 19th magnitude, but sometimes fainter than 20th. The Catalina Real-Time Transient Survey Data Release 2 light curve is similar, but also shows a brief flare on 2007 Nov. 02 that reached 16.0.
Most of our spectra are from 2018 September. The mean spectrum (Fig. 1) shows the strong emission in the Balmer, He I, and He II lines characteristic of magnetic CVs. The emission line velocities vary with P ~ 114 min, in a non-sinusoidal pattern (Fig. 1). We obtained additional velocities in 2018 November, 2018 December, and 2019 January. Combining these, we found P = 0.078830(2) d, with no ambiguity in cycle count.
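Period searches of this kind are commonly done with a Lomb-Scargle periodogram; a minimal Python sketch with stand-in data generated at the Gaia18aot period (in practice the input is the measured Hα velocities, and cycle-count aliases between observing runs must still be resolved by inspection):

    import numpy as np
    from astropy.timeseries import LombScargle

    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0.0, 120.0, 60))          # days, several runs
    v = 300 * np.sin(2 * np.pi * t / 0.078830) + rng.normal(0, 30, 60)

    freq, power = LombScargle(t, v).autopower(minimum_frequency=2.0,
                                              maximum_frequency=30.0)
    print(1440.0 / freq[np.argmax(power)], "min")     # best candidate period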
On the same observing runs, we obtained multiple orbits of time-series photometry, which are summarized in Fig. 2. The object remained in a similar photometric state through all our runs, and showed modulation at the orbital period, most notably a rapid decrease in flux around phase 0.2 in the radial-velocity ephemeris, and a more gradual recovery around phase 0.7. The small scatter in the phase of the rapid decrease over multiple observing runs corroborates the already-secure choice of cycle count.

Fig. 2. Photometry of Gaia18aot from seven nights spread over four observing runs. The data are folded on the spectroscopic orbital period, but with a new zero in phase defined for each night so as to maintain time ordering through the night. Phase zero corresponds to the blue-to-red crossing of the radial velocity. The earliest data are at the top, and each night's data are offset downward by 1 magnitude, as indicated in the legend. The vertical scale is correct for the lowermost trace. The large squares, from OSMOS setup images, were taken with a g filter; the rest of the data are essentially white-light, but adjusted to approximate g using the PAN-STARRS 1 magnitude of the comparison star.
PT Per
Watson et al. (2016) obtained spectra of PT Per with the William Herschel Telescope on three successive nights in 2015 April, at airmass > 3 and in evening twilight; these showed no strong emission or absorption features, but did show weak, Zeeman-split absorption at Hα and Hβ, consistent with a magnetic field of ~25 MG. They suggested that PT Per is a polar, and that their optical spectra were taken in a low state. Their observations indicated a relatively small distance, perhaps as nearby as 90 pc, and indeed its Gaia DR2 distance of 1/π = 185(+5, -4) pc makes it the closest object studied here. In 2019 January, we found PT Per in a much more active state and obtained spectra with OSMOS on two successive nights. The spectrum (Fig. 3, top) showed strong emission lines, in contrast to the Watson et al. (2016) spectra. Large, rapid radial velocity shifts were immediately apparent; an analysis of the two nights' velocities gives a period P = 81.00(4) min, with no cycle-count ambiguity, in reasonable agreement with the 81.7(4) min period found by Watson et al. (2016). The phase-resolved spectra in the middle panel of Fig. 3 show the large velocity shifts, as well as the asymmetric line wings characteristic of AM Her stars. Large blue-shifted velocity excursions are also seen near phase ~0.7, typical of polars. The Hα radial velocities (lower panel) are modulated almost sinusoidally. The velocity half-amplitude, K = 340 ± 14 km s^-1, is much too large to be plausibly orbital, so the infall velocity of the accretion column must cause most of the velocity shift.
The high-state data amply confirm that PT Per is an AM Her star, as suggested by Watson et al. (2016). They note that their optical data were taken in a remarkably low state, with no clear emission lines, whereas most AM Her stars continue to show some emission lines even in very low states.
On 2019 Jan 21 and 22, we used the 1.3 m telescope and Andor camera to obtain the time-series photometry shown in Fig. 4. The light curve shows two maxima per orbit, along with some flickering. There is no sign of an eclipse, so the correspondence between the orbital phase plotted and the locations of the stars in their orbits is not constrained. The double-humped light curve indicates that accretion likely occurs onto two poles.
3.3. RX J0636.3+6554

Appenzeller et al. (1998) discovered this star as the optical counterpart of a ROSAT X-ray source; they noted it was blue, variable on a timescale of hours, and that one of their spectra showed broad Hα emission at rest velocity. It was listed in the Downes et al. (2001) catalog, but apparently no follow-up studies have appeared. The CRTS Data Release 2 light curve (Drake et al. 2009) shows short-term variation of about 1 magnitude superposed on a gradual decline from ~17.6 mag in 2006 to about 20.0 mag in 2013.
We took spectra of this star in 2018 February. The mean spectrum (Fig. 5, top panel) shows strong emission lines on a blue continuum, with He II λ4686 nearly as strong as Hβ. Passing the fluxed spectrum through the V response function tabulated by Bessell (1990) gives V ∼ 17.9, so we caught the system in a relatively bright state. The emission lines immediately showed large velocity swings on a period just over 100 min (Fig. 5, middle and lower panels).
The star disappeared from time to time during the spectroscopy, so we obtained time-series photometry on the same observing run (see Fig. 6). This showed eclipses ~2 mag deep and lasting ~6 min on a ~103 min period, as well as out-of-eclipse flickering. Over the last two years we have observed 23 eclipses, including two that were generously observed by Karolina Bąkowska in 2018 April. Table 2 gives the times of eclipse center, along with the cycle count and residuals from the best linear ephemeris, which is

BJD mid-eclipse = 2458174.62129(5) + 0.071221298(9) E,   (1)

where E is an integer eclipse number and the time base is UTC. Nearly all our time-series photometry was relative to a star ~81 arcsec from the program object in position angle ~63°, for which Gaia DR2 lists α = 6:36:34.75 and δ = +65:54:51.7. The PAN-STARRS Data Release 2 gives g_PSF = 17.07 for this star, which we added to our differential magnitudes. During our 2018 February and April observations, the out-of-eclipse magnitude averaged g ~ 18.1, while for all our other time-series photometry it was much fainter, at ~20.1. Fig. 7 shows the brighter- and fainter-state eclipse light curves in greater detail. In both the brighter and fainter states, the egress is sharply defined and occurs at a very consistent phase. Our exposures (typically 20 sec) do not resolve the sharp rise at egress. In the fainter state, the ingress is also very consistent, but in the brighter state there is significant dispersion in the ingress phase. This suggests that in the bright state, a significant source of light lags behind the trailing side of the white dwarf; a natural candidate for this would be an accretion stream that fades away during the faint state. In some light curves, the ingress starts slightly earlier than in others, suggesting an extra source of obscuration, which might be the outermost parts of the accretion stream.
We also note that in the bright-state egress, following the initial rapid rise, the object consistently undergoes a slower, steady brightening. This may be explained by the gradual uncovering of the inner part of the magnetically threaded accretion stream.
The eclipse in the fainter state appears to be that of the white dwarf alone, and we conservatively estimate the full width as 412 ± 8 seconds. If the secondary star fills its Roche lobe, this implies a minimum q = M_2/M_WD of 0.11 to 0.13 for an edge-on orbit (Chanan et al. 1976). In non-magnetic systems, the relationship between P_orb and q has been calibrated at short periods (see e.g. Patterson 2011); dwarf novae at this period have q < 0.2, which if applicable here implies a lower limit for i of ~83°. Assuming an implausibly large q = 1 gives i ~ 73°.
Taking 83° ≤ i ≤ 90° constrains the dynamically important quantity sin³ i to an accuracy of better than 3 per cent, so if we did have a reliable measurement of the secondary's velocity amplitude K2, we could in principle determine MWD quite accurately. Although the secondary is likely to be extremely faint, similar systems often have a strong, narrow component in their emission line profiles that arises on the side of the secondary facing the white dwarf. Our spectroscopy shows a hint of this, but at our spectral resolution it is not cleanly defined, so we are unable to draw any useful conclusions.
(Figure caption) Photometry of PT Per from two successive nights. The phase is treated as in Fig. 2. Phase zero corresponds to blue-to-red crossing of the radial velocity. No vertical offset is applied. The large squares, from OSMOS setup images, were taken with a g filter; the time-series points were taken almost unfiltered and were adjusted to agree with the calibrated OSMOS magnitudes obtained simultaneously.
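A quick numerical check of the inclination constraint discussed above (a minimal sketch; the 83°–90° range is the one quoted in the text):

import math

# Verify that restricting i to 83-90 degrees pins down sin^3(i) to a few per cent.
lo = math.sin(math.radians(83.0)) ** 3
hi = math.sin(math.radians(90.0)) ** 3
print(lo, hi, 100.0 * (hi - lo) / hi)   # ~0.978, 1.000, ~2.2 per cent spread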
This object (abbreviated CSS0812+04) was detected in 2008 by the Catalina sky survey at a magnitude of 18.8, and listed as an eclipsing CV with a Sloan Digital Sky Survey (SDSS) magnitude of 22.4. However, the PAN-STARRS 1 survey consistently detects it with 18.0 < g < 19.1, suggesting that SDSS caught it in a state of low mass transfer. We selected this object in 2014 for further photometric and spectroscopic studies after it was identified by one of us, MM, as a candidate polar based on its long-term CRTS light curve; this is part of a study to identify candidate polars based on long-term photometric behavior. Independently, Oliveira et al. (2020) obtained a survey spectrum and classified it as a magnetic system.
Our mean spectrum from 2017 March (Fig. 8) shows strong He II λ4686 emission, and also emission at λ5411, both of which are typical of magnetic CVs. Four additional spectra were taken with the Robert Stobie Spectrograph (RSS; Burgh et al. 2003; Kobulnicky et al. 2003; Smith et al. 2006) on the Southern African Large Telescope (SALT; Buckley et al. 2006) on 4 & 5 January, 10 February and 1 March 2016. The RSS was used in long-slit spectroscopy mode with a slit width of 1.5". The PG900 VPH grating was used, set to an incidence angle of 14.75°, giving a spectral coverage of 4060−7100 Å and a mean resolution of 5.7 Å. On each night, 2×690 s exposures were taken. The wavelength calibration was done using Ar lamp exposures taken immediately following the observations, and relative flux calibration was achieved using the standard stars LTT 377 and LTT 4364, depending on the night of observation. The SALT spectra are shown in Fig. 9.
(Note to Table 2) Observed times of mid-eclipse. The first column gives the eclipse number E, and the second the barycentric Julian date minus 2,450,000, on the UTC system. The penultimate column gives the residual compared to the best-fit linear ephemeris (eqn. 1), and the last the calendar date in UT.
Radial velocities taken over two nights gave P = 162.0 ± 0.3 min. The velocity half-amplitude K = 131 ± 14 km s−1 is much smaller than expected for an AM Her star. Fig. 10 shows a sampling of our time-series photometry. Many of our light curves show a very short dip, resembling a partial eclipse. This feature appeared insignificant until we found the spectroscopic period, which made it evident that dips on successive nights were separated by integer multiples of the orbital period. We were able to connect dips found in 2014, 2017, and 2019 with a unique ephemeris, BJD sharp dip = 2457844.6379(2) + 0.11241902(2) E, (2) which we take to be orbital; Table 3 gives the observed dip times and their assigned cycle numbers. In many light curves, a broader minimum occurs shortly before the dip. The phase of this decline is consistent to better than ∼ 0.05 cycle, which corroborates our choice of period.
(Figure caption) Differential photometry of RX J0636.3+6554 on four nights during the brighter state, folded on the best-fit linear ephemeris (eqn. 1). The curves are offset vertically by 3 mag. The data were taken with a UV cutoff filter only; the g magnitude of the reference star has been added to convert to a rough g magnitude.
Fig. 11 is a close-up view of the dip, with data from 13 nights plotted. The dip appears to be stable in phase, about 250 seconds wide, and typically about 0.6 mag deep. Its consistency suggests it is caused by a grazing eclipse of the bright accretion column by the secondary star. A compact accretion column disappearing momentarily over the limb of a rotating white dwarf might, in principle, mimic the dip's appearance, but such events tend to have more gradual ingresses and egresses, and not to be as consistent.
SDSS J100516.61+694136.5
Wils et al. (2010) discovered this object (hereafter SDSS1005+69) by mining data from SDSS, Galaxy Evolution Explorer (GALEX), and various astrometric catalogs for dwarf nova candidates. They noted strong emission lines, including He II λ4686 comparable to Hβ, in the SDSS spectrum, and suggested that it is a magnetic CV, varying from 17.9 through 21.2 mag.
We obtained single spectra in 2012 January and 2015 April, but did not find the system bright enough to study. We enjoyed better luck in 2018 February and March and obtained spectra on three nights. The top panel of Fig. 12 shows the mean spectrum, which includes the He II emission characteristic of magnetic CVs. The Hα emission line velocities are strongly modulated at an unambiguous period of 218.6(4) min; the modulation is non-sinusoidal with a rapid rise and a more gradual decline in each cycle. The phase used in the lower panel of Fig. 12 is based on a sinusoidal fit to the velocity data, and is essentially arbitrary. Using the 1.3m telescope, we obtained time-series photometry contemporaneous with our 2018 spectroscopy, and also on two nights in 2020 January (see Fig. 13). The comparison star was at α = 10h 05m 14s.34, δ = +69° 43′ 23″.4, 108 arcsec from, and almost due north of, the target; the PS1 DR2 lists g = 15.53, r = 14.98 for this star. The light curves show a rise starting around phase zero, and a slower decline, but no definite eclipse. The spectroscopic orbital period is not precise enough to specify phase for the 2020 data; to prepare the figure, we assumed the minimum around phase zero is stable in phase and adjusted the period slightly to force its phase to align with the 2018 data.
(Note to Table 3) Observed times of the sharp dip. The first column gives the dip number E, and the second the barycentric Julian date minus 2,450,000, on the UTC system. The penultimate column lists the residual compared to the best-fit linear ephemeris (eqn. 2), and the last gives the UT date.
For the light curves taken on 26 February and 1 March 2018 (top panel of Fig. 13), we see evidence of possibly periodic fluctuations on ∼800 s timescales. We therefore produced periodograms of the 2018 light curves using Gatspy, a Python implementation of the Lomb-Scargle method (VanderPlas & Ivezić 2015). The results are shown in Fig. 14, which clearly show period peaks at 810 s and 771 s, respectively, both of which have formal false-alarm probabilities below 1 per cent. In addition, the periodogram of the four combined nights clearly shows the presence of the orbital period and its harmonic (Fig. 15). The fact that the two shorter period peaks are not separated by the orbital frequency would seem to rule out an intermediate polar interpretation, where the two frequencies could be due to the beat and spin modulations, respectively. We therefore conclude that the system probably exhibits quasi-periodic variability from time to time.
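For readers who want to reproduce this kind of search, the sketch below computes a Lomb-Scargle periodogram of an unevenly sampled light curve and a false-alarm probability for the strongest peak. It uses astropy's LombScargle rather than Gatspy (a substitution of convenience, not the code actually used here), and the input light curve is a synthetic placeholder rather than our data.

import numpy as np
from astropy.timeseries import LombScargle

# Placeholder light curve: 300 unevenly spaced points over 0.2 d with a
# weak 810 s modulation plus noise, standing in for a real time series.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 0.2, 300))                  # times in days
mag = 18.0 + 0.05 * np.sin(2.0 * np.pi * t / (810.0 / 86400.0)) \
      + rng.normal(0.0, 0.02, t.size)

ls = LombScargle(t, mag)
freq = np.linspace(24.0, 900.0, 20000)                   # cycles per day
power = ls.power(freq)

best_freq = freq[np.argmax(power)]
print("strongest peak at ~%.0f s" % (86400.0 / best_freq))
print("false-alarm probability:",
      ls.false_alarm_probability(power.max(),
                                 minimum_frequency=freq.min(),
                                 maximum_frequency=freq.max()))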
The photometric variations seen in polars have variously been characterized as flickering, fluctuations and sometimes quasi-periodic oscillations (QPOs). The latter are variations that show some degree of coherence over a number of cycles of the QPO period. The discovery of ∼Hz frequency QPOs in the visible light of polars is now over 30 years old (e.g. Middleditch 1982), and at the time resulted in a flurry of theoretical studies. The commonly held understanding is that they are due to plasma oscillations in the magnetically confined accretion columns. Until recently, only five systems were known to exhibit such QPOs, with the sixth (V379 Tel) being the only discovery in over two decades, despite attempts to find more examples (Breytenbach, H. et al., in preparation). Longer period QPOs were also seen in AM Her (Bonnet-Bidaud et al. 1991), at 250−280s, while more recently a ∼320 s QPO was detected in IGR J14536-5522 5.4 (Potter et al. 2010). The origin of these longer period QPOs is still debated, with proposed sites suggested near the L1 point (King 1989), the stream coupling region or within the magnetically confined flow, close to the white dwarf surface (Bonnet-Bidaud et al. 1991).
In polars the QPOs seem to occur with typically a few seconds period (accretion column oscillations), or with periods of many minutes. The latter are larger amplitude and are quite a common feature of polars, often seen by eye in the lightcurves, as seems to be the case for SDSS J100516.61+694136.5 (see Fig. 13), where they can appear to show some sort of coherence, but are not necessarily obvious in power spectra; they are often referred to as "QPO-like" (Potter et al. 2010).
3.6. SDSS J133309.20+143706.9 Schmidt et al. (2008) published time-series spectroscopy and polarimetry of this object not long after it was discovered in SDSS. The detection of circular polarization firmly established it as an AM Her star. The radial velocities of Hα varied with K ∼ 250 km s −1 on a period of 2.2 ± 0.1 hr. Southworth et al. (2015) obtained time-series photometry on three nights, but were unable to improve on the period.
On 5 consecutive nights in 2016 February, we obtained time series photometry with the 1.3m and Andor camera. The light curves (Fig. 16) consistently show a flat-topped brightening that recurs on a period of 0.08814(4) d, or 126.92(6) min, consistent with the radial-velocity period found by Schmidt et al. (2008). The daily cycle count is unambiguous.
We also have time series from 2016 June 10 and from 2017 June 21 and 22. The 2016 June time series shows a brightening toward the end that is similar to those seen in the other light curves, but does not cover the decline. The 2017 June light curves show clearly-defined brightenings similar to the others. Only one choice of long-term cycle count fits all the brightening ingress times comfortably, and it implies BJD of brightening = 2457434.8817(4) + 0.0881118(3) E (provisional). We label this as provisional because of the lack of redundant timings on the longer baselines; the less precise value from 2016 February is firmly established. One reason for caution is that the three brightenings seen in the Southworth et al. (2015) photometry arrive early in this ephemeris by ∼ 25 min, in contrast to the MDM timings, which all align to better than 2 min. 3.7. SDSS J134441.83+204408.3 Szkody et al. (2011) found this object (hereafter SDSS1344+20) in the SDSS data and noted its apparently magnetic nature. In a short series of spectra, they found the radial velocities of Hα and Hβ varying on a period of ∼ 115 min, with semi-amplitudes K ∼ 400 km s−1. Szkody et al. (2014) present further observations, including photometry and spectroscopy showing changes of photometric state.
We observed this star most intensively in 2016 February and March. In the mean spectrum (Fig. 17), He II λ4686 is less prominent than usual in AM Her stars, about half the strength of Hβ. The continuum is strong and blue. Hot continua usually show a smooth upward sweep toward the blue; this continuum may have a very broad hump from ∼ 5100−5700 Å. If real, this might be a cyclotron feature.
As Szkody et al. (2011) found, the radial velocities of Hα are strongly modulated, and with our more extensive data set we determine P orb = 101.652(6) min. The cycle count between nights and between the two observing runs is unambiguous; the relatively small uncertainty reflects the 21-day span of the time series. Figure 18 shows light curves taken on three different observing runs; the spectroscopic ephemeris used to compute the phases is only valid for the 2016 February data, so the phases in the top and bottom panels are arbitrary. Not all the runs used the same comparison stars, but the magnitude scales have been adjusted using the different stars' r magnitudes from PAN-STARRS. No periodic behavior is evident, though the intervals of rapid fluctuation seen in the middle panel are both centered on a brief interval before phase zero. We speculate that the V-shaped ∼0.5 magnitude dips seen near phase zero in the 2016 data, when the object was brighter than for the other observations, could be partial grazing eclipses of an accretion hot-spot. The data from 2018-02-26 show a brightening by ∼ 2 mag over less than one orbit; note that Szkody et al. (2014) reported similar changes of photometric state in this system.
(Figure caption) Light curves of SDSS1333+14 from three observing runs. The differential magnitudes have been adjusted using the r magnitudes of the comparison stars. The lowermost trace is plotted without a vertical offset, and successive traces above it are offset upward by 1.5 mag, as indicated by the color-coded tick marks on the left. The ephemeris used to compute phases is provisional, and is the epoch of the ingress into the bright phase.
Gaia18aya
The Gaia light curve for this source shows it varying between 18 and 19 mag, except for a few days in 2018 April when it triggered an alert at a magnitude of 17.52, and a pair of detections at 17.27 on 2018 May 25.
The mean spectrum (Fig. 19) shows the usual emission lines, but the most striking feature is a cyclotron emission harmonic centered around ∼ 5500 Å. The cyclotron harmonic clinches the AM Her classification. The Hα radial velocities from 2018 September establish an unambiguous orbital period near 120 min. We obtained more observations in 2018 November, December, and January which constrain the period uniquely to 120.165(3) min.
The cyclotron emission hump varies in strength with the orbital period. This can be seen in Fig. 20, which is similar to the middle panel of Fig. 19 but with wider wavelength range. In both these figures the spectra were not rectified (normalized to a continuum) before being averaged and stacked; rather, flux-calibrated spectra were used, so variations in flux can be seen. Fig. 21 shows time series photometry. During 2018 September, the variation is irregular without obvious periodicity, but in 2018 November the source was somewhat brighter and varied smoothly with the orbital period.
(Figure 18 caption) Light curves of SDSS1344+20 from three observing runs. The differential magnitudes have been adjusted using the r magnitudes of the comparison stars. The lower trace (red) in the lower panel is plotted without a vertical offset, and the upper trace (blue) is offset upward by 1.0 mag. The ephemeris used to compute phase is given in the axis label, but it is only valid for the middle panel, which was contemporaneous with the spectroscopy.
The wavelength of the nth cyclotron harmonic is λ_n ≈ 10710 Å / (n B_8), where B_8 is the magnetic field in units of 10^8 Gauss (10^4 Tesla). The ∼ 5500 Å feature is the only harmonic we clearly observe, which implies that the allowable magnetic field, for assumed cyclotron harmonics in the range n = 2−7, varies from 28−97 MG. If the cyclotron feature at ∼5500 Å is associated with the n = 6 (32 MG) or n = 7 (28 MG) harmonic, this implies that the shorter- and longer-wavelength harmonics at n ± 1 (∼4800 and ∼4700 Å, and ∼6400 and ∼6600 Å, respectively) should be detectable in our spectra. For n = 5 (B = 39 MG), we should also see the n = 4 harmonic at ∼6900 Å. The fact that we see no other cyclotron features corresponding to these wavelengths is evidence that n < 5. If we take n = 4, then the neighbouring harmonics should occur at ∼4400 Å (n = 5) and ∼7300 Å (n = 3), respectively. From Fig. 19 (top panel) we see the flux increases from ∼7000 Å to the red limit at 7400 Å, consistent with a broad cyclotron line at ∼7300 Å. Similarly, the flux also increases for wavelengths ≤4900 Å, to the blue limit of our spectra at 4550 Å, also consistent with the expected cyclotron line at ∼4400 Å. So this is all consistent with identifying the clearly observed hump at ∼5500 Å with the n = 4 cyclotron harmonic from a B = 49 MG magnetic white dwarf. Lower harmonics, at n = 2 or 3, are also admissible, with higher field strengths, though the n ± 1 harmonics are now well outside the wavelength range of our spectra. A good far-red or near-infrared spectrum could help determine the field strength by clearly identifying the lower harmonics and allowing for subtraction of the underlying secondary star flux, which is likely an M-type star given the ∼2 h orbital period.
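To make the harmonic bookkeeping above concrete, the following Python sketch evaluates the field strength implied by identifying the ∼5500 Å hump with each candidate harmonic number, and lists the wavelengths at which the neighbouring harmonics would then fall. It simply reproduces the arithmetic of the relation quoted above; the numbers are illustrative, not new measurements.

# Field implied if the ~5500 A hump is the n-th cyclotron harmonic,
# using lambda_n ~ 10710 A / (n * B8), with B8 the field in units of 1e8 G.
LAM_OBS = 5500.0   # observed hump wavelength in Angstroms

for n in range(2, 8):
    b8 = 10710.0 / (n * LAM_OBS)                   # implied field / 1e8 G
    neighbours = {m: round(10710.0 / (m * b8))     # adjacent-harmonic wavelengths
                  for m in (n - 1, n + 1)}
    print("n = %d  B ~ %2.0f MG  neighbouring harmonics (A): %s"
          % (n, b8 * 100.0, neighbours))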
4. CONCLUSIONS Figure 22 shows a histogram of the orbital periods of AM Her stars listed in the final release (version 7.24) of the Ritter & Kolb (2003) catalog of cataclysmic binaries. The periods of the stars discussed here are also indicated. They all have periods typical of the population. Table 4 summarizes our findings. We classify three objects (Gaia18aot, RX J0636.3+6554, and Gaia18aya) as AM Her stars for the first time; the two Gaia sources are also newly-recognized as CVs. For six of the objects we determine P orb for the first time, and for two more (PT Per and SDSS1344+20) we improve significantly on previous period determinations. We confirm that PT Per is a magnetic CV, as Watson et al. (2016) suggested.
Three of our objects have especially interesting light curves. RX J0636.3+6554 eclipses deeply. CSS0812+04 shows a sharp dip that is stable in phase and appears to be a partial eclipse. Finally, SDSS1333+14 persistently shows a distinctive bump consistent with the appearance of an otherwise self-occulted accretion spot.
The spectrum of Gaia18aya has an apparent cyclotron emission hump near 5500 Å, which constrains the magnetic field to be greater than ∼ 49 MG.
It is worth noting that magnetic CVs appear to be underrepresented in various listings. Pala et al. (2020) constructed a volume-limited sample of 42 CVs within 150 pc, as judged by Gaia DR2 parallaxes, and found that over 30 per cent were magnetic, and that 11 out of the 42 in the total sample were polars, including the prototypical polar, AM Her. High-cadence synoptic sky surveys have found very large numbers of new CVs (see, e.g., Breedt et al. 2014), but they are clearly biased toward dwarf novae, which show distinct, large-amplitude outbursts. The objects in this paper no doubt represent a very sizeable population of more subtly variable AM Her stars, as yet unrecognized.
(Table 4, summarizing the measurements presented here, is largely garbled in extraction; only row fragments survive, e.g. for SDSS J134441.83+204408.3: Spec., 0.070592(4), 57456.8785(7), 273(17), −50(12), "Improved per.", along with entries labelled "New per., firm", "Provisional", and "New CV".)
Note-A summary of the measurements presented here. Sinusoids, where fitted, are of the form v(t) = γ + K sin 2π(t − T0)/P . Epochs are barycentric Julian dates minus 2,400,000., in the UTC time system; these can be converted to TDB with sufficient accuracy by adding 69 s. a Non-sinusoidal velocity curve; parameters are formal best fits only. | 2020-06-16T01:00:44.776Z | 2020-06-15T00:00:00.000 | {
"year": 2020,
"sha1": "81631ab75b1da0bb4679c32327f685f0437e679b",
"oa_license": null,
"oa_url": "https://iopscience.iop.org/article/10.3847/1538-3881/ab9d1b/pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "81631ab75b1da0bb4679c32327f685f0437e679b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
259276105 | pes2o/s2orc | v3-fos-license | The clinical characteristics of patients with asthma exposed to different environmental risk factors: A cross‐sectional study
Abstract Background Smoking, biomass, and occupational exposure are the main environmental risk factors for asthma. The purpose of this study was to analyze the clinical characteristics of exposure to these risk factors in patients with asthma. Methods This cross-sectional study enrolled patients with asthma from an outpatient department according to the Global Initiative for Asthma. Demographics, forced expiratory volume in 1 s (FEV1), FEV1%pred, FEV1/forced vital capacity (FVC), laboratory tests, asthma control test (ACT), asthma control questionnaire (ACQ) scores, and the inhaled corticosteroid (ICS) dose were recorded. A generalized linear mixed model was used to adjust for potential confounders. Results A total of 492 patients with asthma were included in this study. Of these patients, 13.0% were current smokers, 9.6% were former smokers, and 77.4% were never smokers. Compared with never smokers, the current and former smokers had a longer duration of asthma; lower ACT scores, FEV1, FEV1%pred, and FEV1/FVC; and higher ACQ scores, IgE, FeNO, blood eosinophils, and ICS dose (p < .05). In addition, the patients exposed to biomass alone were older; had more exacerbations in the past year; a longer duration of asthma; and lower FEV1, FEV1%pred, FEV1/FVC, IgE, and FeNO compared with smoking or occupational exposure alone. Compared with smoking exposure alone, patients with occupational exposure alone had a longer duration of asthma and lower FEV1, FEV1%pred, FVC, IgE, FeNO, and ICS dose (p < .05). Conclusions There are significant differences in the clinical characteristics of patients with asthma depending on the smoking status. In addition, significant differences were also observed among smoking, biomass, and occupational exposure.
| INTRODUCTION
Asthma is a highly heterogeneous chronic respiratory inflammatory disease characterized by wheezing, coughing, and chest tightness. Asthma has a high morbidity and affects about 350 million people worldwide. 1 Therefore, treatment and prevention are urgent.
Environmental risk factors including smoking, biomass, and occupational exposure are adversely associated with asthma. 2 Smoking is one of the preventable factors of asthma: around half the adult patients with asthma are current or former smokers. 3 Studies have shown that long-term smoking significantly reduces the sensitivity of patients with asthma to inhaled corticosteroids (ICS) and causes poor outcomes. 4,5 In addition, exposure to cigarette smoke could induce the release of proinflammatory mediators by activated neutrophils, macrophages, and T cells to cause airway inflammation and to promote the progression of asthma. Compared with never smokers, exposure to cigarette smoke in patients with asthma is associated with the recruitment, activation, and altered function of macrophages, natural killer cells, and T and B cells. 3 Biomass including wood, charcoal, dried animal dung, and agricultural residues for cooking are the leading environmental risk factors for asthma in developing countries. 6 Several studies have found that the use of biomass for cooking is associated with an increased risk of severe asthma symptoms. 7 In addition, a study showed that biomass-derived particulate matter could cause immune disorders and inflammation to aggravate asthma. 8 Occupational exposure is another environmental risk factor for asthma. 9 An estimated 5%−20% of new cases of adult-onset asthma can be attributed to occupational exposure. 10 In addition, studies have shown that persistent occupational exposure is associated with worse asthma outcomes. 11 Compared with healthy controls, diseaserelated immune functions in blood cells, including leukocyte migration, inflammatory responses, and decreased expression of upstream cytokines such as tumor necrosis factor and interferon gamma, are suppressed in patients who develop asthma from occupational exposure. 12 Currently, numerous studies are focus on the diagnosis and pathogenesis of asthma caused by a single risk factor. However, there are no studies that have compared the clinical characteristics of asthma caused by different risk factors (i.e., smoking, biomass, and occupational exposure). Therefore, we investigated and compared the clinical characteristics of patients with asthma exposed to different environmental risk factors.
| Study participants
This was a cross-sectional study. All subjects were from the outpatient department of the Jiangxi Hospital of Integrated Traditional Chinese and Western Medicine between January 2020 and June 2022. Asthma was diagnosed according to the Global Initiative for Asthma (GINA) guidelines, with bronchodilation forced expiratory volume in 1 s (FEV1) change >200 mL and 12%; positive bronchial stimulation test; and symptoms of asthma (including wheezing, difficulty breathing, chest tightness, or coughing). 13 Patients with lung cancer, pneumonia; bronchiectasis; tuberculosis; and severe heart, liver, or kidney disease were excluded.
This study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the Jiangxi Province Hospital of Integrated Chinese and Western Medicine (Number: 202301). All patients provided their informed consent.
| Data collection
Data including age; sex; education level; body mass index (BMI); smoking status; FEV1; FEV1%pred; FEV1/forced vital capacity (FVC); asthma control test (ACT); and asthma control questionnaire (ACQ) scores; exacerbations in the past year; laboratory tests including IgE, FeNO, and blood eosinophils; and ICS dose were collected at the patient's first visit.
| Variable definition
A current smoker has had smoking exposure of ≥10 pack/years, while a former smoker has had ≥10 pack/ years but had not smoked for more than 6 months. A never smoker had never smoked or had smoked fewer than 100 cigarettes in their lifetime. 14 Biomass exposure was defined as using biomass fuels (wood, grass, charcoal, and crop residues) for cooking or heating at least 2 h per day for at least 1 year. Occupational exposure was defined as exposure to dust, gases/fumes, insecticides, chemical substances, paints, and metals at work for at least 8 h per day for more than 1 year. 15 The ACQ consists of seven items, each scored from 0 to 6.
In this study, the ACQ scores are the average of the seven items. 16
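To show how these definitions translate into the categorical variables used in the analysis, here is a minimal Python sketch; the function names and inputs are illustrative only, and cases that fall between the stated categories (for example, a patient with fewer than 10 pack-years but more than 100 lifetime cigarettes) are left unclassified because the definitions above do not cover them.

def smoking_status(pack_years, months_since_quit, lifetime_cigarettes):
    """Classify smoking status following the definitions in this section."""
    if lifetime_cigarettes < 100:
        return "never smoker"
    if pack_years >= 10:
        return "former smoker" if months_since_quit > 6 else "current smoker"
    return None   # not covered by the stated definitions

def acq_score(item_scores):
    """ACQ score: the average of the seven items, each scored 0-6."""
    assert len(item_scores) == 7
    return sum(item_scores) / 7.0

print(smoking_status(pack_years=15, months_since_quit=0, lifetime_cigarettes=5000))
print(acq_score([1, 2, 0, 1, 3, 2, 1]))   # ~1.43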
| Study procedures
The patients were divided into three groups according to the smoking status: current smoker, former smoker, and never smoker. The following criteria were used to distinguish the subgroups: smoking alone group, patients had only been exposed to cigarette smoke, including current and former smokers; biomass alone, patients had only been exposed to biomass; occupational exposure alone, patients had only been subjected to occupational exposure; and never smoking alone, patients had not been subjected to cigarette smoke, biomass, or occupational exposure. SPSS 25.0 (IBM) was used for statistical analysis of the data. Continuous variables are expressed as mean ± standard deviation or median and interquartile range. Continuous variables with a normal distribution and homogeneous variances were analyzed with analysis of variance; otherwise, nonparametric tests were used. The χ 2 test or Fisher's exact test was used to analyze categorical variables. A logistic regression was used to determine the relative factors for smoking cessation and calculate the adjusted odds ratio (aOR) and adjusted 95% confidence interval (a95% CI). A generalized linear mixed model was generated to control for potential confounders. p < .05 was considered to be statistically significant.
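As a rough illustration of the analysis pipeline described above, the sketch below sets up a between-group comparison and a covariate-adjusted model in Python with pandas, SciPy, and statsmodels. The data frame, column names, and values are placeholders, and an ordinary least-squares model stands in for the generalized linear mixed model; the actual analyses were run in SPSS 25.

import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Placeholder data: one row per patient; FEV1pp = FEV1 % predicted.
df = pd.DataFrame({
    "FEV1pp": [82, 75, 68, 90, 71, 66, 88, 79, 73, 64, 85, 77],
    "group": ["smoking", "biomass", "occupational", "never"] * 3,
    "age": [45, 60, 52, 38, 58, 63, 41, 49, 55, 61, 40, 47],
    "sex": ["M", "F", "F", "M", "F", "M", "F", "M", "M", "F", "F", "M"],
})

# One-way comparison of FEV1 % predicted across the exposure groups.
groups = [g["FEV1pp"].values for _, g in df.groupby("group")]
print(stats.f_oneway(*groups))

# Covariate-adjusted linear model, analogous in spirit to the adjusted
# comparisons reported in the paper.
fit = smf.ols("FEV1pp ~ C(group) + age + C(sex)", data=df).fit()
print(fit.params)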
| The clinical characteristics in the different smoking status groups
We included a total of 492 patients with asthma in this study (Figure 1). The mean age was 49.9 ± 13.2 years. Females accounted for 62.8% of the subjects. Of these patients, 77.4% were never smokers, 9.6% were former smokers, and 13.0% were current smokers (Table 1).
As shown in Table 2, the current and former smokers had a longer duration of asthma; lower ACT scores, FEV1, FEV1%pred, and FVC; and higher ACQ scores, IgE, FeNO, blood eosinophils, and ICS dose compared with never smokers (p < .05). Compared with former smokers, current smokers had a longer duration of asthma; lower ACT scores, FEV1, FEV1%pred, and FVC; and higher ACQ scores, IgE, FeNO, blood eosinophils, and number of exacerbations in the past year (p < .05).
| The clinical characteristics of smoking, biomass, or occupational exposure alone for patients with asthma
The patients with biomass alone were older and had more exacerbations in the past year, a longer duration of asthma, and a lower FEV1, FEV1%pred, FVC, IgE, and FeNO compared with the patients with smoking or occupational exposure alone (p < .05). Compared with smoking alone, the patients with occupational exposure alone had a longer duration of asthma and lower FEV1, FEV1%pred, FVC, IgE, FeNO, and ICS dose (p < .05) (Table 4).
(Figure 1 caption: Flow chart of the study.)
After controlling for potential confounders including sex, age, education level, BMI, and FEV1/FVC, the generalized linear mixed model showed that the biomass alone group had a longer duration of asthma, more exacerbations in the past year, and lower FEV1, IgE, and FeNO compared with the smoking and occupational exposure alone groups. In addition, the occupational exposure alone group had a lower FEV1, FEV1%pred, IgE, and FeNO compared with the smoking exposure alone group (p < .05) (Table 5).
| The clinical characteristics of never smoking, biomass, or occupational exposure alone for patients with asthma
Compared with the never smoking alone group, the biomass and occupational exposure alone groups had more exacerbations in the past year and a longer duration of asthma, as well as lower FEV1, FEV1%pred, and FEV1/FVC (p < .05) (Supporting Information: Table 1).
After controlling for potential confounders including sex, age, education level, and FEV1/FVC, the generalized linear mixed model showed that the biomass and occupational exposure alone groups had a longer duration of asthma, more exacerbations in the past year, and lower FEV1 and FEV1%pred compared with the never smoking alone group (p < .05) (Supporting Information: Table 2).
| DISCUSSION
Environmental risk factors including smoking, noxious chemicals, occupational exposure, and air pollution are triggers for asthma, especially in adults. In addition, they can lead to a poor outcome, including higher future exacerbations, mortality risk, and persistent airflow limitation. [17][18][19] In fact, according to the GINA guidelines, nonpharmacological interventions for patients with asthma, including smoking cessation and avoidance of occupational exposure and indoor air pollution, are important. 20 Smoking is one of the main risk factors for asthma in adults, especially in males. In this study, current and former smokers had a longer duration of asthma; lower ACT scores, FEV1, FEV1%pred, and FVC; and higher ACQ scores compared with never smokers. In addition, current smokers had more exacerbations in the past year. Several studies have found that smoking contributes to increased exacerbation risk and symptoms in patients with asthma. 21 In addition, smoking can lead to an accelerated decline in pulmonary function and increased severity of airflow obstruction. 22 In this study, we also found that smokers with asthma had worse pulmonary function. FeNO and IgE are both important biomarkers of airway eosinophilic inflammation. Blood eosinophils and FeNO have been shown to have comparable diagnostic accuracy, superior to that of total serum IgE, in adult asthma patients. 23 In fact, a study showed that severe asthma with a 10-pack/year smoking history was associated with a higher proportion of eosinophilic airway inflammation and autoimmunity toward eosinophils. 24 Our study found that smoking patients with asthma had higher values of FeNO, IgE, and blood eosinophils. Long-term smoking significantly reduces the sensitivity of patients with asthma to ICS: they require higher doses for treatment. 4 Consistently, we found that smoking patients with asthma had a higher ICS dose upon enrollment. Of course, quitting smoking is necessary to manage and prevent asthma. Research is needed to explore the factors related to successful smoking cessation so that clinicians can effectively guide patients with asthma to quit smoking. We identified several factors for successful smoking cessation including age, education level, ACT level, and well-controlled asthma. Our findings are consistent with previous studies. [25][26][27][28] In developing countries, most household energy is supplied by biomass, which can be a trigger for asthma. 29 However, researchers have confirmed that patients with chronic obstructive pulmonary disease (COPD) exposed to biomass have worse pulmonary function compared with those subjected to smoking and occupational exposure. 15,30 We also found that patients with asthma exposed to biomass had worse pulmonary function. In addition, patients with asthma exposed to biomass had lower levels of inflammatory biomarkers, a phenomenon that has also been seen in patients with COPD. 31 There are some limitations of this study. First, we only examined patients from a single center. Future studies should involve more centers and patients. Finally, we did not stratify the patients according to exposure to different environmental risk factors. This approach would have provided additional information.
(Table fragment displaced by extraction: Exacerbations in the past year, median (IQR): 1 (0−1), 0 (0−1), 0 (0−1), p = .003; Exacerbations in the past year, n (%), p = .008.)
(Table 4 caption: The clinical characteristics among biomass, occupational exposure, and smoking alone of asthma patients.)
| CONCLUSIONS
We identified significant clinical differences among patients with asthma depending on the smoking status. In addition, we noted significant clinical differences among patients with asthma subjected to smoking, biomass, and occupational exposure. The information provided here can help guide clinicians regarding the effects of these risk factors and to encourage them to take effective action to improve the targeted prevention of asthma.
T A B L E 5 Linear mixed models for the association among asthma patients exposed to smoking alone, biomass alone, and occupational exposure alone. | 2023-06-29T13:03:53.336Z | 2023-06-01T00:00:00.000 | {
"year": 2023,
"sha1": "2a8b3542f0209853767d63f19a2969c7ce817625",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Wiley",
"pdf_hash": "2a8b3542f0209853767d63f19a2969c7ce817625",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
199552972 | pes2o/s2orc | v3-fos-license | Evaluation of synergistic effect of tazobactam with meropenem and ciprofloxacin against multi-drug resistant Acinetobacter baumannii isolated from burn patients in Tehran
Background: Acinetobacter baumannii is an increasingly important cause of nosocomial infections worldwide. In addition to the intrinsic resistance of Acinetobacter baumannii to many antibiotics, available treatment approaches with older antibiotics are significantly associated with an increase in multiresistant strains. The aim of this study was to evaluate the synergistic effect of tazobactam with meropenem and ciprofloxacin against carbapenem- and drug-resistant Acinetobacter baumannii isolated from burn patients in a tertiary burn center in Tehran. Materials and methods: In this study, a total of 47 clinical isolates of A. baumannii were included from burn patients admitted to the Shahid Motahari Burns Hospital, Tehran, from June 2018 to August 2018. The disk diffusion method was used to determine resistance patterns. The synergistic effect of tazobactam with meropenem and ciprofloxacin was evaluated by determining the MIC. A PCR assay was performed to detect blaOXA-40-like, blaOXA-58-like, and blaOXA-24-like. Results: Antibiotic susceptibility testing revealed that all of the isolates were resistant to meropenem and ciprofloxacin. The MIC values decreased in the cases of combined use of ciprofloxacin and meropenem with tazobactam. The blaOXA-24-like gene was the predominant carbapenemase gene (93.6%), followed by blaOXA-40-like, which was detected in 48.9% of isolates. None of the A. baumannii isolates harbored the blaOXA-58-like gene. Conclusions: Based on in-vitro antimicrobial susceptibility in the current study, the MICs of tazobactam combined with meropenem or ciprofloxacin have been shown to be variable. Furthermore, the data acquired from such in vitro conditions should be confirmed by reliable results from sufficiently controlled clinical trials.
Background
Burn-wound infections are considered as one of the important causes of death in developing countries [1]. Patients with severe burns are at high risk of acquiring nosocomial pathogens and contracting numerous infections as a result of the immunocompromising effects of burns, cutaneous and respiratory tract injury, prolonged hospital stays, and invasive diagnostic methods and treatment procedures [2], [3]. The control and prevention of life-threatening infectious diseases among burn patients remains a major concern worldwide, as the environment in burn units can become contaminated with resistant opportunistic pathogens [3]. Acinetobacter baumannii is considered an important nosocomially acquired opportunistic pathogen causing a wide range of severe infections, including those of burnwounds, surgical wounds, the urinary tract (UTI), ventilator-associated pneumonia (VAP), as well as nosocomial meningitis and bacteremia [4], [5]. The bacterium is highly successful in persisting and spreading in the hospital environment, and thus can survive under dry, aharsh environmental conditions [6]. Additionally, A. baumannii can develop resistance to numerous antimicrobial agents using different mechanisms [7]. It is well documented that one of the most important factors contributing to the high mortality of A. baumannii infections is the ability to acquire a wide variety of antibiotic resistance genes and rapidly develop multidrug resistance (MDR), extensive drug resistance (XDR) and even pan-drug resistance (PDR) [8]. Dissemination of MDR A. baumannii strains has significantly limited the choice of therapeutic options available for the treatment of infections caused by this bacterium and the associated poor clinical outcome [9]. According to previously published data, carbapenems are considered as the "last-line" antibiotic against infections caused by MDR A. baumannii strains in patients and healthcare workers [10]. Due to a severely limited range of alternative therapeutic options, unfortunately, recent reports described an increasing trend of multi-drug resis-tance in A. baumannii in many parts of the world, so that carbapenem resistant A. baumannii strains have emerged as a major public health concern [11], [12]. However, OXA carbapenemases are significantly inhibited by clavulanic acid, sulbactam and tazobactam [13]. Thus, increasing meropenem and ciprofloxacin susceptibility in A. baumannii by considering the potential inhibitory effect of tazobactam on OXA enzymes was examined in this study. Acinetobacter species can acquire resistance against carbapenems by producing various carbapenemase enzymes, which are members of the molecular class A, B, and D β-lactamases. The class D carbapenemases, which consist of OXA-type β-lactamases (OXA) such as bla OXA-23-like , bla OXA-24-like , bla OXA-51-like , and bla OXA-58-like , are frequently detected in MDR A. baumannii strains [14]. Although clinical use of carbapenem agents in the treatment of infections has become well established, the use of this antibiotic alone must be limited due to concerns about the emergence and spread of resistant strains. Moreover, the high mortality rates of carbapenem-resistant A. baumannii infections highlight the importance of early prediction and appropriate control measures of this bacterium in healthcare settings [15]. However, little information is available on whether different treatment regimens should be used for carbapenem-resistant A. baumannii infections. 
Given the lack of novel antimicrobials available in the clinical setting in Iran, we investigated the effects of meropenem and ciprofloxacin alone and in combination with tazobactam on A. baumannii isolated from burn patients, in the attempt to more effectively employ available antibiotics. The aim of this study was to evaluate the synergistic effect of different concentrations of tazobactam with ciprofloxacin and meropenem, and also to detect bla OXA-24-like , bla OXA-40-like and the bla OXA-58-like genes.
Sample collection and bacterial strains
The current study was carried out on 47 clinical isolates of A. baumannii obtained from patients admitted to Shahid Motahari Burns Hospital, Tehran, in a two-month period from June 2018 to August 2018. The study protocol was approved by the Ethics Committee of the National Institutes for Medical Research Development (IR NIMAD REC 1396 223), Tehran, Iran. Strains were identified by conventional biochemical and microbiological methods, e.g. oxidase, TSI, SIM, etc. In addition, to confirm A. baumannii identification, amplification and sequencing of intrinsic bla OXA-51-like genes were carried out using specific primers, as previously described [16]. All strains were stored in Tryptic Soy Broth (TSB; Merck, Germany) containing 20% glycerol at -80°C for further analysis.
Antibiotic susceptibility testing
In-vitro susceptibility testing was performed using a panel of three antibiotics in the Kirby-Bauer disc diffusion method, according to the Clinical and Laboratory Standards Institute (CLSI 2018) [17] guidelines. The antimicrobial drugs tested included imipenem (10 µg), meropenem (10 µg) and ciprofloxacin (5 µg). Escherichia coli ATCC 25922 was used as a quality control strain in every test run. In this study, multi-drug resistance (MDR) was defined as non-sensitivity to ≥1 agent in ≥3 antimicrobial categories, as in the CDC report [18].
Minimum inhibitory concentration (MIC) assay
The minimum inhibitory concentrations (MIC) of meropenem and ciprofloxacin were determined alone and in combination with tazobactam against the A. baumannii isolates by a macro broth dilution according to the CLSI 2018 guideline [17]. Specifically, the following concentrations were used: meropenem: 256 µg/ml to 16 µg/ml; ciprofloxacin: 128 µg/ml to 16 µg/ml. All antimicrobials were purchased from Sigma-Aldrich (St. Louis, MO, USA).
Synergic effect of tazobactam and antibiotics assay
The minimum inhibitory concentration of each strain against meropenem and ciprofloxacin was determined in the presence of different concentrations of tazobactam; tazobactam was used at 10 µg/ml, 30 µg/ml, and 50 µg/ml.
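As a small illustration of the bookkeeping behind this combination assay, the Python sketch below builds the two-fold dilution series used for the MIC determinations (the concentration ranges are those given in the MIC assay section above) and expresses the effect of adding tazobactam as a number of two-fold MIC reductions. The example MIC readings are hypothetical placeholders, not data from this study.

import math

def dilution_series(high, low):
    """Two-fold dilution series from 'high' down to 'low' (ug/mL)."""
    series = []
    c = float(high)
    while c >= low:
        series.append(c)
        c /= 2.0
    return series

meropenem_range = dilution_series(256, 16)       # 256 ... 16 ug/mL
ciprofloxacin_range = dilution_series(128, 16)   # 128 ... 16 ug/mL

def fold_reduction(mic_alone, mic_with_tazobactam):
    """Number of two-fold steps the MIC drops when tazobactam is added."""
    return math.log2(mic_alone / mic_with_tazobactam)

# Hypothetical isolate: meropenem MIC 256 ug/mL alone, 64 ug/mL with tazobactam.
print(meropenem_range)
print(fold_reduction(256, 64))   # 2.0 two-fold reductions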
Detection of carbapenemase genes
DNA of the isolates was extracted using the boiling method as described previously [16]. The existence of class D carbapenemase genes (bla OXA-24-like , bla OXA-40-like , and bla OXA-58-like ) was determined using PCR via specific primers (Table 1). The PCR products were detected by agarose gel electrophoresis (1.5%), then they were stained with ethidium bromide and visualized under UV light (UVItec, Cambridge, UK).
Results
The results of the Kirby-Bauer disc diffusion test indicated that all of the tested isolates were resistant to meropenem, imipenem and ciprofloxacin. Therefore, all isolates were considered MDR and carbapenem-resistant A. baumannii. Table 2 shows the MICs (µg/mL) and the susceptibility ratios of the MDR and carbapenem-resistant A. baumannii isolates for meropenem and ciprofloxacin alone and in combination with tazobactam. The MICs decreased manifold when ciprofloxacin and meropenem were combined with tazobactam at 10 µg/mL, 30 µg/mL, and 50 µg/mL. In some cases, the reduction with 50 µg/mL tazobactam was more than one fold greater than with 10 µg/mL, although tazobactam alone had no inhibitory effect on A. baumannii, and all isolates grew. According to the results of the present study, bla OXA-24-like was the predominant carbapenemase gene (93.6%), followed by bla OXA-40-like, which was detected in 48.9% of isolates. None of the A. baumannii isolates harbored the bla OXA-58-like gene (Figure 1). Furthermore, the co-existence of bla OXA-24-like /bla OXA-40-like was detected in 48.9% of A. baumannii isolates.
Discussion
In recent decades, the emergence of MDR and carbapenem-resistant A. baumannii isolates with a high potential for acquiring resistance to various antibiotics has been described in health settings worldwide [12], [19]. Our results indicated that all A. baumannii isolates were MDR and carbapenem resistant. The high prevalence of MDR A. baumannii strains is in accordance with the findings reported by Farsiani et al. (97%) and Rynga et al. (85%) in Iran and India, respectively [20], [21]. The global spread of MDR clones in healthcare settings has raised a great deal of concern, because carbapenem agents are commonly the first choice in the treatment of A. baumannii infections [22], [23]. The high prevalence of MDR and carbapenem-resistant A. baumannii can be attributed to the indiscriminate use of antibiotics and poor implementation of measures. The spread of these resistant strains has impeded the successful treatment of A. baumannii infections, thus necessitating alternative treatment approaches. Among the recommended approaches, the use of a combination of antibiotics is currently the preferred treatment strategy [24]. Combination therapy is principally used to avoid the development of antimicrobial resistance, treat polymicrobial infections, and decrease dose-dependent side effects. Moreover, it is also used to treat severe infectious diseases with high mortality rates, as a combination of antimicrobial agents provides a synergistic effect against the multi-drug-resistant isolates [25]. However, the absence of antagonistic interaction among antibiotics in cases of combination therapy has clinical importance; thus, many studies have emphasized the need to determine the interactive effects of antibiotic combinations in vitro [26]. It has been previously described that the combined administration of aminoglycoside and carbapenem agents, which are the most frequently used combination in the empiric treatment of Acinetobacter infections, generally demonstrates an in vitro synergistic effect [27]. The present study attempted to investigate the in vitro interactions between tazobactam and two antibiotics, meropenem and ciprofloxacin, as possible treatment options given carbapenem-resistant A. baumannii isolates from burn patients. Although sulbactam alone has verified antibacterial activity against A. baumannii and has intrinsic bactericidal activity against MDR A. baumannii as it inhibits the penicillin-binding proteins, there are no welldocumented clinical practice guidelines for tazobactam and clavulanate [26]. Tazobactam has long been used in combination with ampicillin and piperacillin, and an additive effect against clinical isolates of A. baumannii was recently observed when tazobactam was combined with meropenem or colistin [28]. However, in this study, a significant reduction in MIC was observed for meropenem when combined with tazobactam. Moreover, the in vitro efficacy of ciprofloxacin/tazobactam combinations was evaluated against A. baumannii isolates. Our findings revealed a significant reduction in MIC when ciprofloxacin and meropenem were combined with tazobactam. These results are in accordance with data reported by several authors when sulbactam was combined with amikacin and ciprofloxacin [26], [29], [30]. Our finding is in accordance with the study by Rezaei et al. in [33], [34], [35]. The percentage of bla OXA-24-like genes, which encode acquired carbapenemases, was 93.6% in the present study, followed by bla OXA-40-like with 48.9%. 
Furthermore, bla OXA-58-like was not detected in our study. Accordingly, in a study in Iran, the percentage of the bla OXA-24-like gene among tested isolates was 62.1% and the bla OXA-58-like was not detected among the isolates in that study [31]. In contrast to our results, Taherikalani et al., [36], [12], [37]. Additionally, other studies in Turkey, China, Brazil, and France indicated the presence of the bla OXA-58-like gene in A. baumannii isolates [38], [39], [40], [41].
The results of the present study demonstrated that the co-existence of bla OXA-24-like /bla OXA-40-like in half of the A. baumannii isolates. In this regard, our results and those of others confirmed that the presence of multiple alleles of the bla OXA gene or a combination of them can be directly related to the reduction of the sensitivity or resistance to some antibiotics [42], [43].
Conclusions
The results of this first study in Tehran demonstrate a high level of MDR and carbapenem-resistant A. baumannii isolates from burn patients. From a molecular standpoint, the existence of class D carbapenemase genes was established among a majority of the A. baumannii strains. Based on the in vitro antimicrobial susceptibility results in the current study, the MICs of tazobactam combined with meropenem or ciprofloxacin have been shown to be variable. Given the different mechanisms of antibiotic resistance in clinical isolates of A. baumannii, a range of results with a given combination is to be expected among A. baumannii strains. Furthermore, the data acquired from such in vitro conditions should be confirmed by reliable results from sufficiently controlled clinical trials. Because previous studies confirmed the inhibitory effect of tazobactam on OXA enzymes, the synergistic effect of tazobactam with ciprofloxacin and meropenem, reflected in the decreased MICs, may be attributed to inhibition of the identified OXA enzymes in the tested bacteria. In this study, the bacterial MICs also remained within the antibiotic-resistance range, so several mechanisms may be involved in the emergence of these resistances. Further investigation is necessary.
Notes
Competing interests | 2019-08-11T08:12:36.212Z | 2019-08-02T00:00:00.000 | {
"year": 2019,
"sha1": "2871b4042fec865f2b4085b1cb4ddb5dea752774",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "2871b4042fec865f2b4085b1cb4ddb5dea752774",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
230641825 | pes2o/s2orc | v3-fos-license | AHLAKİ LİDERLİK TARZININ ÜRETİM SEKTÖRÜNDEKİ ÇALIŞANLAR ÜZERİNDE ETKİLERİNİN İNCELENMESİ
In today's competitive environment, both leadership styles and the attitudes and behaviors of employees have become critically important for the performance of companies. How successful companies are managed in their sector is the subject of many studies. In particular, some psychological factors that employees experience in the organization have positive or negative effects. Indeed, leadership style is one of the most important factors among these psychological effects. Within the scope of the study, the aim is to analyze the relationships among the moral leadership, creativity, effective communication, emotional exhaustion, and intrinsic motivation variables for engineers working in the manufacturing sector. When the data were analyzed, it was concluded that emotional exhaustion had a negative effect on performance, but intrinsic motivation had a positive effect on employees. The research was conducted by collecting questionnaires from 427 white-collar employees working in companies producing white goods in the manufacturing sector in Istanbul. The SPSS 25 program was used to analyze the data. Since the questions were asked on a Likert scale, first the factor and reliability analyses were performed; then correlation analysis, regression analysis, the Sobel test, and the Hayes PROCESS macro were used to analyze the mediating-variable effect.
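To make the mediation step mentioned above concrete, here is a minimal Python sketch of the Sobel test for an indirect effect (the coefficient values are hypothetical placeholders; the study itself ran these analyses in SPSS with the Hayes PROCESS macro, so this is only an illustration of the underlying calculation):

import math
from scipy import stats

def sobel_test(a, se_a, b, se_b):
    """Sobel z statistic and two-sided p-value for the indirect effect a*b.
    a: effect of the predictor on the mediator; b: effect of the mediator
    on the outcome, controlling for the predictor."""
    se_ab = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    z = (a * b) / se_ab
    p = 2.0 * (1.0 - stats.norm.cdf(abs(z)))
    return z, p

# Hypothetical coefficients, e.g. moral leadership -> intrinsic motivation (a)
# and intrinsic motivation -> creativity (b), with their standard errors.
print(sobel_test(a=0.42, se_a=0.08, b=0.35, se_b=0.07))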
INTRODUCTION
In 1976, Silin described moral leadership as a leadership style with unselfish behavior. Twenty years later, Westwood (1997) supported Silin's (1976) study and added the concept of "role model" along with "not behaving selfish". Moral leadership greatly contributes to the psychological response of the employees, not only showing their commitment to the organization but also showing better job performance (Cheng et al., 2004). Emphasizing the personal honesty and dedication of leaders, moral leadership creates a trust and supportive environment and contributes to the increase of psychological empowerment among employees (Chan et al., 2008). Moral leadership has the characteristic of constructive behavior in providing performance feedback that enables employees to evaluate their own competencies and improve themselves . Emotions of employees towards their organizations take the form of emotional reactions to events in the organization and as a result, it is stated that an employees' motivation, performance, loyalty, and long-term job satisfaction are affected (Weiss & Cropanzano, 1996). Since employees need to manage their emotions in the face of situations that may occur in the organization, it is impossible to define the behaviors of the employees without considering the emotional situation in the employees' behaviors (Sliter et al., 2010). However, when individuals express feelings that are incompatible with their personality, negative effects are likely to occur: individuals prone to negative emotions tend to be exhausted (Diefendorff et al., 2005). Hence, leadership plays an important role in the reduction or elimination of these negative emotions since employees are more likely to develop a strong identity with Moral Leadership , and in fact, has been shown to have a significant effect on respect for employees, giving them autonomy within their areas of responsibility. Thus, commitment to work and organization is stronger in employees. Moral leadership supports employees' participation in decision-making and allows them to increase their effectiveness on organizational results by management's ability to listen to their ideas. In eliminating emotional exhaustion and providing internal motivation and creativity, moral leadership creates a reliable, supportive, and encouraging climate within the organization. We assume that this sincere and supportive leadership enhances the effective communication between employees, supposing that employees share the self-concept and similar values with the leader. In order to establish effective communication among employees in organizations, it is necessary to share information in a timely and accurate way, either officially or informally (Sharma & Patterson, 1999). At the same time, the quality of the information exchange among employees is explained as effective communication (Sanzo et al., 2003). In terms of enhancing communication, creativity is a complex and detailed structure that has been defined in many ways. Creativity, according to the most widely accepted definition, includes the development and conceptualization of ideas, actions, and procedures developed by employees or a group of employees (Shalley et al., 2000). Creativity allows the original ideas of employees to be useful inventions for organizations and society. It may not always be possible to give both the creative ability and the opportunities to use this ability to the staff. 
However, in cases where there is an opportunity to benefit from this ability, it should be used as a job satisfaction means. It should be taken into consideration that employees can be more productive in an organizational environment where people can apply their thoughts for testing, try different things, find opportunities to work on their own, and do original projects. Based on both theoretical and empirical studies in the literature, in this study, research was designed to determine the effects of moral leadership, intrinsic motivation, and emotional exhaustion on creativity and effective communication. The research was carried out between white goods manufacturers in Istanbul. It is thought that researching a sample of engineers with some different characteristics constitutes the original aspect of this study. In addition, it is hoped that the results obtained will contribute to the literature and practitioners in business life.
Moral Leadership
Moral leadership gives employees confidence and strength, in other words, good motivation. Employees' assessment related to reliability of the leader within the organization depends primarily on the personal character of the leader. It is suggested that moral leadership can create a sense of trust in mutual relations with employees within the organization (Butz et al., 2001). Employees will not trust managers unless they are absolutely certain of the moral excellence of the manager. Moral leadership is seen by employees as ideal leaders who display honesty and behave with goodness rather than personal interest, who are respected and admirable (Niu et al., 2009), and are trusted. The moral evaluation of leaders (and their personality characteristics) depends on the confidence process aroused in the followers as a result of measuring their attitudes and behaviors (Colquitt et al., 2007). When Chinese family enterprises are examined in the literature, it can be seen that the concept of moral leadership is unique to the traditional family enterprises in China. It is emphasized that the characteristics of moral leadership are determined to meet the needs of contemporary Chinese family enterprises (Farh et al., 2008)": personal honesty, selfishness, work commitment and serving as a model. Moral Leadership is especially important for employees because of Confucianist ideology and morally focused values (Chen & Farh, 2009), and in addition moral leadership has an effect on the internal motivation and trust of the leader, which can affect the behavior and performance of the employees . Serving as a role model is an important aspect of moral leadership (Westwood, 1997), because it is a process in which employees shape their perceptions, beliefs, and behaviors. Chen et al. (2014) argue that moral leadership is an important leadership style in terms of the high sense of trust the employees have given. In a study conducted by Moye et al. (2005), it is explained that trust is the most important element in ensuring a positive working environment among employees. It is emphasized that employees are more willing to perform their duties from the moment they start to feel that they are respected and supported by their managers and that the trust towards their organizations' managers is strengthened. If employees trust their managers, they can behave more positively within the organization and feel that they receive more support from their managers. In this case, employees will tend to work more unselfishly, and, in terms of intrinsic motivation and mental fatigue, these characteristics will be lessened. If employees do not trust their managers, their behavior within the organization may be negative and they may feel that they receive less support from their managers. In the face of this situation, both the employees' internal motivation will be extremely low and they will also feel mentally tired. Therefore, we investigate the effects of moral leadership on employees. Examined and tested hypotheses;
Creativity
Every person has certain skills that can be expanded by exploring new areas. The fact that employees in organizations want to make new inventions and/or creative efforts makes them not only an organizational benefit but also a motivational factor within their environment. In this case, employees remain satisfied with the work, produce creative solutions for the problems they face, and, as a result, are able to deliver beneficial results not only to themselves but also to the organization and society (Eren, 1998). Organizational studies on leadership and creativity indicate that creative employees have the intention of quitting if this freedom is tamped down by the organization (Myatt, 2013). In a study conducted by Janssen et al. (2004), they emphasized the importance of creativity and stated that "this is an important concept that brings together individuals and groups, and in fact, these ideas form the basis of innovation along with discussions." Creativity is therefore very important to the long-term success of organizations in a highly competitive environment. Mayfield and Mayfield (2007) found, in a study of the relationship between creativity and intention to leave work, that if employees were encouraged to be creative within the organization and a creative environment was perceived, their intention to leave was reduced. In this case, both support for employees and a high level of intrinsic motivation increase creative activities. Creative employees tend to move to another organization, in other words leave the current organization, in search of an environment where they feel happy or can perform creative activities at the desired level; this depends on whether they are satisfied with the opportunities offered to them in organizations and whether they encounter career opportunities (Shih & Susanto, 2011). However, if we take a critical approach, we can state that this perspective is a very complex cycle when we consider the impact on real-life workers and the sector, tasks, responsibilities, organizational climate, and organizational culture in which they work (Rosso, 2014). Therefore, in this study, the relationships between them were examined by considering certain variables. Examined and tested hypotheses: H5: Intrinsic motivation in organizations has a positive effect on creativity. H7: Emotional exhaustion in organizations has a negative effect on creativity.
Effective Communication
In the literature, researchers have emphasized the importance of communication, and in particular stated that an effective approach to eliminating mutual doubt is timely, accurate, and useful communication (Yousafzai et al., 2005). Effective communication is important in terms of maintaining healthy communication between individuals and keeping messages clear and comprehensible (Olkkonen et al., 2000). One of the most important problems in organizations is the disruption of tasks that occurs due to an insufficient level of communication. These disruptions are also reflected in unrest and reduced performance within the organization. The quality of communication is one of the most important issues that managers should pay attention to in their relationship with employees. To achieve this, greater importance should be given to increasing the quality of the relationship between managers and employees through effective communication (Yen et al., 2011). The characteristics of effective communication are multiple: it is bi-directional, formal and informal, meaningful, and regular. These characteristics are very important in the relationships among employees in the organization and between manager and subordinate, because the culture of the organization can also become stronger through effective communication. The absence of conflict between managers and employees, or among employees in organizations, depends on high levels of effective communication. Through effective communication, conflict that may occur within the organization is kept to a minimum, uncertainty in the organization disappears, and, most importantly, a strong dialogue is established between all the stakeholders of the organization (Massey & Dawes, 2007). Thanks to healthy communication established within the organization, the definitions of duties and responsibilities in the manager-subordinate relationship can be comprehensively fulfilled. If effective communication cannot be established in conflicts between employees or internal stakeholders, productivity starts to decrease (Jehn & Mannix, 2001). In organizations with high levels of effective communication, it is expected that there will be minimal conflict of duties between employees, that the expectations of employees will be met, and that the level of satisfaction in the work they do will be higher. Conversely, a decrease in satisfaction in the attitudes and behaviors of employees is a sign that the positive feedback expected from managers is absent and that conflicts of duties have started to occur among employees. Therefore, the effects of moral leadership, intrinsic motivation, and emotional exhaustion on effective communication are examined within this research model. Examined and tested hypotheses: H6: Intrinsic motivation in organizations has a positive effect on effective communication. H8: Emotional exhaustion in organizations has a negative effect on effective communication.
Intrinsic Motivation
Decisions taken by managers within their organizations and insufficient investments can cause employees to be stressed because their expectations are not met (Hobfoll, 1989). This stress among the employees in the organization causes emotional exhaustion and may decrease their internal motivation. This situation encourages managers to find the sources of the stress and ways to prevent it from occurring again (Hobfoll, 1988). One of the most important factors in helping employees benefit their organizations is their intrinsic motivation (Wright & Cropanzano, 1998). Intrinsic motivation expresses the pleasure of the employee and the desire to work more (Amabile, 2018). Employees experiencing emotional exhaustion show less effort and lower intrinsic motivation. Similarly, the intention to leave is very high among such employees, as their emotional exhaustion weakens their commitment to the organization and to the work. The intrinsic motivation fostered by managers will ensure both the willingness of the employees and their commitment to the organization. It is an undeniable fact that internal motivation is important for employees to engage in creative activities (Elsbach & Hargadon, 2006). With the internal motivation fostered by leaders within the organization, significant improvement can occur in employees' desire to learn, interest in the job, and curiosity (Ryan & Deci, 2000). Particularly in terms of achieving the continuity principle, which is one of the aims of organizations, employees with high levels of motivation and creativity are needed. For this reason, it is important for the continuity of organizations to provide intrinsic motivation in the relations and communication between employees (Shalley et al., 2004). It is accepted that internal motivation is an important function in terms of the continuous development of creativity and a dynamic structure in employees (Amabile, 1988). Intrinsic motivation helps employees to tackle challenging and complex tasks, while encouraging the confidence and engagement needed to deliver greater concentration (Gagné & Deci, 2005). Accordingly, the purpose of this study is to investigate the mediation effect of intrinsic motivation, as indicated in the research model, between moral leadership, creativity, and effective communication. Examined and tested hypotheses: H9: Intrinsic motivation has a mediation variable effect on the relationship between moral leadership and effective communication in organizations. H10: Intrinsic motivation has a mediation variable effect on the relationship between moral leadership and creativity in organizations.
Emotional Exhaustion
It is emphasized that emotional exhaustion results in the onset of symptoms of stagnation in workers, cooling off from their work, feeling emotionally tired, a resulting decline in the working procedures of the organization, and an inability to perform designated tasks (McCarthy et al., 2016). With the occurrence of emotional exhaustion, a decrease in work desire and energy is observed, especially in employees (Hobfoll & Shirom, 2000). Leaders who attach importance to work in an organizational sense attach importance to the high energy of employees by offering continuous motivating activities that prevent employees from experiencing emotional exhaustion. Otherwise, with the emergence of emotional exhaustion, employees start to behave more slowly in performing their duties, show a decrease in commitment to the organization, and develop the intention to leave the organization (Bronkhorst & Vermeeren, 2016; Chi & Liang, 2013). In the event of emotional exhaustion among employees, their willingness to voluntarily help in fulfilling and achieving the goals of the organization is eliminated. Indeed, employees experiencing emotional exhaustion are less motivated to undertake behaviors that can be considered beneficial to the organization (Aryee et al., 2008). On the other hand, employees who do not experience emotional exhaustion and who have high levels of intrinsic motivation are more willing to help the organization achieve its goals (Hobfoll, 2001). Emotional exhaustion is defined as a psychological response to the intensity of work-related stress accumulated by employees (Cordes & Dougherty, 1993). According to this view, employees are faced with various demands in their organizations, and emotional exhaustion begins to occur when intense energy is used to deal with these demands. Therefore, employees need to make both physical and psychological efforts to meet the demands of their organizations. Researchers state that emotional exhaustion affects work performance and that emotionally depleted employees exhibit negative reactions to their organization (Rutherford et al., 2009). Therefore, it is useful to examine the creativity of employees experiencing emotional exhaustion and its relationship with communication within the organization. In a recent study, it was found that emotional exhaustion negatively affected job satisfaction in organizations (Hur et al., 2015). The creativity and communication quality of employees with low job satisfaction are analyzed through the hypotheses. Examined and tested hypotheses: H11: Emotional exhaustion has a mediation variable effect on the relationship between moral leadership and effective communication in organizations.
H12: Emotional exhaustion has a mediation variable effect on the relationship between moral leadership and creativity in organizations.
METHODOLOGY
The survey was conducted with a total of 427 white-collar employees (engineers) in white goods manufacturers in Istanbul. Factor analysis (exploratory and confirmatory) and reliability analysis were performed using the SPSS 25 and SPSS AMOS programs, followed by correlation and regression analysis to test the hypotheses; the Sobel test and the Hayes PROCESS procedure were then performed for the mediation variable analysis. A 6-question scale developed by Cheng et al. (2004) was used to measure moral leadership (Cronbach's alpha is 0.88 in the present study). Emotional exhaustion was measured with the questions developed by Maslach and Jackson (1981) (Cronbach's alpha in the current study is 0.87). Intrinsic motivation was measured with the questions developed by Kuvaas et al. (2017) (Cronbach's alpha in the current study is 0.87). To evaluate the creativity of the employees, the questions developed by Zhou and George (2001) (Cronbach's alpha is 0.97 in the current study) and Liao and Chuang (2004) (Cronbach's alpha is 0.92 in the current study) were used. For the effective communication scale, the scale used by Sharma and Patterson (1999) was adopted.
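Since a Cronbach's alpha value is reported for every scale, the following minimal Python sketch shows how the coefficient is computed from a respondents-by-items score matrix; the data here are simulated for illustration only and are not the study's survey responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1.0 - item_variances.sum() / total_variance)

# Simulated stand-in: 427 respondents answering a 6-item 5-point Likert scale.
rng = np.random.default_rng(0)
latent = rng.normal(size=(427, 1))                  # common trait driving the items
scores = np.clip(np.rint(3 + latent + rng.normal(scale=0.8, size=(427, 6))), 1, 5)
print(f"alpha = {cronbach_alpha(scores):.3f}")
```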
Research Aim
The research aimed to determine the effects of moral leadership as an independent variable, emotional exhaustion and intrinsic motivation as both independent and mediation variables, and creativity and effective communication as dependent variables among white-collar employees (engineers) in manufacturing companies (companies producing white goods). The manufacturing sector was selected in order to investigate the importance of fatigue, motivation, and creativity among its employees. The sample was chosen from white-collar employees because they take part in decision-making mechanisms and engage in creative activities, and because motivation and fatigue play a central role in this dilemma. Therefore, our research aim is to evaluate and analyze manufacturing companies in terms of leadership, mental fatigue, intrinsic motivation, creativity, and effective communication. To test the propositions, a field study was conducted using a questionnaire survey.
Findings
427 white-collar employees (engineers) answered our survey. 185 of the participants were female and 242 were male; 36.7% of them were between the ages of 30-40 and 49.7% between the ages of 41-50, while the proportion of participants above the age of 51 is 13.1%. The level of achievement of goals was stated by 61 participants as "Too Low", by 67 as "Low", by 147 as "Medium", by 106 as "High", and by 46 as "Very High". As for the areas of activity of the institutions where the participants work, 136 participants work in "National", 164 in "Regional", and 127 in "International" activities.
Research Framework
Based on the literature review, the independent variable (IV) is moral leadership, the mediation variables (MV) are emotional exhaustion and intrinsic motivation, and the dependent variables (DV) are creativity and effective communication. In this study, analyses were performed to determine the relationships between these constructs statistically, and thus a quantitative approach was adopted (Bell et al., 2018; Ghauri et al., 2020).
Analyses
Factor analysis is used to provide clues about the structure of the relationship between many variables which are thought to be related (İslamoğlu & Alnıaçık, 2014). Kaiser-Meyer-Olkin (KMO) and Bartlett tests are performed to test the suitability of the scales and the data representing the variables for factor analysis (Ural & Kılıç, 2013). A KMO value of 0.7-0.8 is considered good and 0.5-0.7 moderate; the value should be at least 0.5, and if it is less than 0.5, more data should be collected. Since the KMO value of 0.929 exceeds 0.50 and the Bartlett's test significance value of 0.000 is significant, the data set was found suitable for factor analysis. In the study, the scales consisted of 35 questions prepared in 5-point Likert format. As a result of the exploratory factor analysis, 9 questions did not show a clear factor distribution, and the remaining 26 questions were distributed across 5 factors. Confirmatory factor analysis is used to statistically define the questions representing the measured variables, or the multivariate models observed in large numbers, within the research model (Brown, 2015). As a result, the validity of the 5-variable structure was confirmed in the confirmatory factor analysis following the exploratory factor analysis. Reliability analyses test whether the survey questions form a homogeneous whole, and the Cronbach's alpha reliability coefficient must be greater than α = 0.70 for the social sciences (Hair et al., 2014; Nunnally & Bernstein, 1994).
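As an illustration of this screening step, the sketch below runs Bartlett's sphericity test, the KMO measure, and a five-factor extraction with the Python factor_analyzer package; the item matrix is a simulated stand-in for the 427 x 35 survey data, which are not reproduced here, so the printed statistics will not match those reported above.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

# Simulated stand-in for the survey: five latent factors generate 35 items.
rng = np.random.default_rng(1)
factors = rng.normal(size=(427, 5))
loadings = rng.normal(size=(5, 35))
items = pd.DataFrame(factors @ loadings + rng.normal(scale=0.7, size=(427, 35)))

chi2, p = calculate_bartlett_sphericity(items)   # H0: correlation matrix is identity
kmo_per_item, kmo_total = calculate_kmo(items)   # sampling adequacy
print(f"Bartlett chi2 = {chi2:.1f}, p = {p:.4f}; overall KMO = {kmo_total:.3f}")

fa = FactorAnalyzer(n_factors=5, rotation="varimax")
fa.fit(items)
print(fa.loadings_.shape)  # (35, 5); items without a clear loading would be dropped
```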
Reliability analysis results: moral leadership (5 questions, α = .895), creativity (5 questions, α = .892), effective communication (5 questions, α = .855), emotional exhaustion (6 questions, α = .850), intrinsic motivation (5 questions, α = .883). When the reliability table was examined, the Cronbach's alpha coefficients were found to be highly reliable for all factors. Since the reliability coefficients are very high, there is no need to remove any survey questions. In the correlation analysis shown in Table 3, one-to-one relationships between the variables were examined. The correlation analysis shows a significant inverse relationship between effective communication and emotional exhaustion: emotional exhaustion decreases in organizations where effective communication is present. At the same time, there is a significant negative relationship between emotional exhaustion and creativity: as emotional exhaustion increases, creativity decreases in employees. As a result of the regression analysis, judging by the significance values, the effects of all variables except emotional exhaustion were positive and significant, while the effect of the emotional exhaustion variable was negative and significant. The 8 hypotheses accepted apart from the mediation-variable effects are shown in Table 5. In the research model, to determine the effect of the mediation variables intrinsic motivation and emotional exhaustion, the role of the mediation variable between the moral leadership independent variable and the effective communication and creativity dependent variables was analyzed. To establish the effect of a mediation variable, the variable between the IV and the DV must be a measured variable. One of the tests measuring this mediation effect is the Sobel (1982) test, which is calculated using uncorrected regression coefficients and standard error values. There are two main versions of the Sobel test: Aroian (1944/1947) and Goodman (1960). After the Sobel test, the Hayes PROCESS analysis, developed by Hayes (2017), was also performed; the mediation analysis was run as Model 4 in the Hayes PROCESS macro. To determine whether there is a mediation effect in the Hayes PROCESS, one checks that the value "0" does not lie between BootLLCI and BootULCI. The mediation effects are supported by the hypotheses in Table 9.
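To make the two mediation checks concrete, here is a minimal Python sketch of the Sobel z statistic and of a percentile-bootstrap confidence interval for the indirect effect in the spirit of Hayes' Model 4; all coefficients and data below are hypothetical illustrations, not the study's results.

```python
import numpy as np
from scipy.stats import norm

def sobel(a, se_a, b, se_b):
    """Sobel z statistic and two-sided p-value for the indirect effect a*b."""
    z = (a * b) / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    return z, 2 * norm.sf(abs(z))

def bootstrap_indirect(x, m, y, n_boot=5000, seed=0):
    """Percentile bootstrap CI for the indirect effect, in the spirit of
    Hayes' Model 4: a from m ~ x, b from y ~ x + m."""
    rng = np.random.default_rng(seed)
    n, ab = len(x), []
    for _ in range(n_boot):
        i = rng.integers(0, n, n)                       # resample respondents
        a = np.polyfit(x[i], m[i], 1)[0]                # slope of m on x
        X = np.column_stack([np.ones(n), x[i], m[i]])
        b = np.linalg.lstsq(X, y[i], rcond=None)[0][2]  # slope of y on m given x
        ab.append(a * b)
    return np.percentile(ab, [2.5, 97.5])               # [BootLLCI, BootULCI]

# Hypothetical numbers: moral leadership -> intrinsic motivation -> creativity.
z, p = sobel(a=0.50, se_a=0.05, b=0.40, se_b=0.06)
print(f"Sobel z = {z:.2f}, p = {p:.4f}")
rng = np.random.default_rng(2)
x = rng.normal(size=427)
m = 0.5 * x + rng.normal(size=427)
y = 0.4 * m + 0.2 * x + rng.normal(size=427)
lo, hi = bootstrap_indirect(x, m, y)
print(f"BootLLCI = {lo:.3f}, BootULCI = {hi:.3f}")  # mediation if 0 is not inside
```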
Hypothesis results:
H11: Emotional exhaustion has a mediation variable effect on the relationship between moral leadership and effective communication in organizations (p < 0.001).
H12: Emotional exhaustion has a mediation variable effect on the relationship between moral leadership and creativity in organizations (p < 0.001).
In the research model where the mediation effects of intrinsic motivation and emotional exhaustion are measured, the hypotheses support a significantly positive contribution of the intrinsic motivation mediation variable and, conversely, a negative contribution of emotional exhaustion, as expected theoretically. When emotional exhaustion intervenes in the relationship between moral leadership and effective communication, the positive relationship becomes negative. Emotional exhaustion likewise undermines the creativity of the employees experiencing it, who can become inefficient in the organization. Therefore, the better the motivation and morale of the employees, the better the healthy functioning of the organization.
Discussion
In the study we conducted on white-collar employees in the manufacturing sector, it was concluded that emotional exhaustion has negative effects on employees' creativity and communication. At the same time, under the mediation of emotional exhaustion, the positive effect of moral leadership turns negative. We can conclude that the management styles of organizations should not ignore the emotional exhaustion that employees can experience. Working conditions should be prepared with consideration for both performance and efficiency criteria, also in the manufacturing sector. Performance and productivity criteria should not be met by intensive working hours but rather by conditions providing an organizational climate in which employees can feel comfortable in physical and mental terms. Moral leadership is especially important for employees because of Confucianist ideology and morally focused values (Chen & Farh, 2009). The reason why it is important is clear: employees who are managed with moral leadership feel better because healthy two-way communication can be established; such leadership also provides motivation for employees, and employees can be partners in the decision-making mechanism. Management style is important for institutions, especially in the manufacturing sector, so that there are energetic, highly motivated, and creative environments for employees who work at a busy pace (Brown & Treviño, 2006). The results of the analysis show that moral leadership is strongly associated with employee attitudes and behaviors (Farh et al., 2006). In working conditions where negative factors such as intention to leave, cynicism, and exclusion are involved, moral leadership has a significant impact on employees' internal task motivation and trust in their leader (Wu et al., 2012). Such a leadership style will positively affect employees' behavior and performance. Recent research has shown, both directly and indirectly, the impact of leadership with moral values on the creativity of employees (Rego et al., 2012; Tu & Lu, 2013). Since the study was conducted only on engineers in the manufacturing sector, there are certain constraints. Future studies on employees at the senior management level (experts, department managers/officials) in the service sector will make it possible to form a more general opinion. Better results will also be achievable not only among white-collar employees but also among blue-collar employees (workers), who are the backbone of the organization and carry its workload, with respect to the factors that play an important role in employee performance, such as creativity, emotional exhaustion, and motivation.
Conclusion and Recommendations
Moral leadership has an important place in leadership as understood in Chinese Confucianist ideology. The expectation of employees in organizations is that their leaders have self-discipline, are virtuous, and reflect this attitude to employees while preserving their moral values. Leaders with these characteristics are known as moral leaders. When the research on employees and moral leadership is examined, there is a close relationship between commitment, loyalty, and performance (Cheng et al., 2004; Liang et al., 2007). In the study conducted by Cheng et al. (2004), moral leadership provides respect and communication among employees and is suggested as the leadership style that organizations should have, especially because of its positive impact on business performance. As a result of the research carried out in the manufacturing sector, it is accepted as an important leadership style in terms of motivation and communication in the organizations to which employees are affiliated. Also, moral leadership positively impacts creative activities and ensures employees' commitment to the organization. We can state that the intention to leave is reduced or disappears among employees in organizations with high intrinsic motivation and effective communication. There is always a need for a leadership understanding that keeps employee motivation positive, because intrinsic motivation is an important fundamental mechanism connecting obligations and creativity (Shalley et al., 2004). In Amabile's (2018) study, it is stated that in an organizational structure with negative working conditions, negative thoughts towards the organization start to increase because the intrinsic motivation of employees decreases. For future studies to be better, to create qualitative studies, and to develop new theoretical concepts, it is necessary to focus more on the different effects of leadership styles and organizational structures on employees. By examining the differences between regions, it will be possible to introduce new concepts reflecting cultural effects, not only in the literature of management and organization but in the social sciences more broadly. The problems that arise in working life help to create new leadership and management styles as well as academically new concepts. Clearly, it is necessary to examine the problems experienced by employees and present solutions and suggestions, so that new insights and concepts may be gained that contribute to future literature.
"year": 2020,
"sha1": "13ad9dbba9708e88d7ab71409dd7fd8713ed35f3",
"oa_license": "CCBYNC",
"oa_url": "https://dergipark.org.tr/en/download/article-file/829787",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "1585e65ab6337e791b90d3797dac7f990f0adfe0",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
} |
Comparison Analysis of Open and Closed Proportional Election Systems in the Perspective of State Law in Indonesia
The development of elections in Indonesia has produced two systems, namely the closed proportional system and the open proportional system. Both electoral systems have their own advantages and disadvantages, and their implementation has significant legal and political implications in the context of state law in Indonesia. This background illustrates the existence of legal problems related to the electoral system in Indonesia and the legal implications of the application of each electoral system. The method used in this research combines a conceptual approach with a comparative approach, through which a comparative analysis of open and closed proportional electoral systems in the perspective of state law in Indonesia is conducted. It can be concluded that the implementation of an open or closed system cannot be viewed as a single solution to all legal problems in the country. Decisions on the most suitable electoral system for Indonesia must consider aspects of democracy, political participation, justice, and political stability. The selection of the right electoral system will have significant implications for building a democratic and effective political system.
INTRODUCTION
The position of President is the highest position in executive power in the Indonesian government system. Through the four amendments to the 1945 Constitution, the presidential institution has undergone a significant transformation affecting the election mechanism, position, authority, and termination. This change is in line with the importance of the electoral system in maintaining the principles of democracy and political participation in Indonesia. Election itself is a conditio sine qua non for a modern democratic country, meaning that the people choose someone to represent them in the context of popular participation in the administration of state government, as well as a series of political activities to accommodate the interests or aspirations of the people (Andayani et al.: 2017). However, elections are not just a series of political activities; they are also an effective means of measuring the level of legitimacy of power obtained by leaders based on the participation of all levels of society. The success of democracy can be assessed through the level of public participation in elections, which reflects the extent to which citizens are actively involved in determining the direction and goals of the country's politics. More than that, political participation has an important role in increasing awareness of rights and obligations, creating a sense of responsibility for both the rulers and the people, and strengthening the foundations of democratic governance by broadening political understanding and insight.
There are several reasons underlying the need to hold regular general elections, namely (Asshiddiqie: 2016):
1. First, people's views and aspirations towards various aspects of common life are dynamic and continue to evolve over time. Within a certain period, the majority of people may change their opinions regarding state policies.
2. Second, in addition to changes in people's views, people's living conditions can also change, both due to dynamics at the international level and due to internal and external factors within the country.
3. Third, changes in people's aspirations and opinions can also occur because of population growth and the coming of age of new generations. New voters or first-time voters, especially, may have different attitudes from previous generations, including their parents.
4. Finally, regular elections are important to ensure a change of leadership of the country, both at the executive and legislative levels.
In recent decades, the debate about open and closed proportional electoral systems has become a major concern in the context of state law, given the condition of Indonesian society, which is very plural or heterogeneous, with a fairly dense population of various backgrounds. To accommodate this, a wise and capable government is needed to be representative of a heterogeneous society, both geographically and ideologically. One way to obtain leaders in government, both in the executive and the legislature, who can represent the Indonesian people is to conduct general elections. General elections can also be an instrument to maintain the people's sovereignty as a form of developing democracy after Indonesia's reforms. In Indonesia's pluralistic situation, with its highly complex social life, elections are indispensable for finding leaders with integrity towards the people. The development of elections in Indonesia has produced two systems, namely the closed proportional system and the open proportional system. Indonesian elections have been held 12 times: the first election was implemented in 1955, after which elections were carried out regularly in 1971, 1977, 1982, 1987, 1992, and 1997. After the end of the Suharto era, elections were held again in 1999, 2004, 2009, 2014, and 2019. Currently the open proportional system is at the stage of examination in the Constitutional Court (judicial review), to be replaced again by a closed proportional system in legislative elections. However, it will be debated if the Constitutional Court approves the use of a closed proportional system in the 2024 general election, because the closed proportional system is considered a system that illustrates a setback, being a relic of the New Order. The government has announced the schedule for the election and regional elections to be held simultaneously in 2024. In accordance with PKPU No. 3 of 2022, voting will be held on Wednesday, February 14, 2024. This background illustrates the existence of legal problems related to the electoral system in Indonesia. Therefore, the researchers are interested in discussing several problem formulations, which are as follows.
Issues:
1. What is the fundamental difference between open and closed proportional electoral systems?
2. What are the legal implications of the application of each electoral system in the perspective of state law in Indonesia?
RESEARCH METHOD
The research method used in this paper is the normative legal research method. Normative legal research is legal research that treats law as a structured system of norms. The approach used in this research is a combination of a conceptual approach with a comparative approach. Secondary data obtained from literature studies are used to analyze and compare the open and closed proportional electoral systems. Thus, through the research conducted, the researchers can explain the impact of implementing an open or closed proportional election system in Indonesia.
RESEARCH RESULTS AND DISCUSSION
Differences between Open and Closed Proportional Electoral Systems
A closed proportional electoral system can also be referred to as a multi-member constituency electoral system or a balanced electoral system. A proportional electoral system, open or closed, is an electoral system in which the seats available in the central parliament to be contested in a general election are distributed to the political parties or socio-political organizations participating in the election according to the balance of votes obtained. For example, if the number of valid voters in a general election is 10,000,000 people and the number of seats in the people's representative body is set at 100 seats, then one representative requires 100,000 votes. The number of seats in each constituency is usually determined by the number of residents who take part in an election. For instance, in a densely populated constituency the number of representatives may be set at 10 people, where each seat must obtain 20,000 votes. If, after the election, it turns out that only 180,000 valid votes were cast, then one seat requires 18,000 votes (the seat arithmetic is illustrated in the sketch after this list). The minimum vote requirement therefore depends on the number of votes obtained by each political party that participates in the general election. The proportional electoral system has advantages including:
1. Few votes are wasted: in a proportional electoral system, if there is an excess of votes beyond the minimum set for one legislative candidate, the excess votes are transferred to another legislative candidate.
2. The proportional system is very democratic, because no vote is wasted; in other words, every vote cast counts.
3. All political parties or socio-political organizations participating in the general election will have representation in the national representative body.
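For readers who want to see the seat arithmetic above in executable form, here is a minimal Python sketch of the largest-remainder method with the Hare quota (the quota the article returns to later); the party names and vote counts are hypothetical, chosen to match the 180,000-vote, 10-seat example.

```python
from math import floor

def hare_quota_allocation(votes: dict[str, int], seats: int) -> dict[str, int]:
    """Largest-remainder allocation with the Hare quota (total votes / seats)."""
    quota = sum(votes.values()) / seats
    alloc = {p: floor(v / quota) for p, v in votes.items()}
    leftover = seats - sum(alloc.values())
    by_remainder = sorted(votes, key=lambda p: votes[p] - alloc[p] * quota, reverse=True)
    for p in by_remainder[:leftover]:   # leftover seats go to the largest remainders
        alloc[p] += 1
    return alloc

# Hypothetical constituency matching the numbers above: 180,000 valid votes
# and 10 seats give a quota of 18,000 votes per seat.
votes = {"A": 82_000, "B": 54_000, "C": 26_000, "D": 18_000}
print(hare_quota_allocation(votes, 10))  # {'A': 5, 'B': 3, 'C': 1, 'D': 1}
```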
Although the proportional system has advantages, it also has disadvantages:
1. The proportional system facilitates party fragmentation and the emergence of new parties, although for Indonesia this weakness appears to have been anticipated by Law Number 10 of 2008 concerning the General Election of Members of the People's Representative Council, Regional Representative Council, and Regional People's Representative Council.
2. The relationship between voters and their representatives in the people's representative body becomes less close, because in general elections voters only choose political parties and do not know who their representatives from their regions actually are. In other words, voters only choose the political party's symbol (closed proportional system).
3. The power of political parties is very large, because ultimately it is the central leadership of the political party that determines who the candidates are, so that legislative candidates selected by the central party leadership give their loyalty to that leadership rather than to the interests of the people, and the aspirations of the people cannot be channeled and fought for.
4. The vote count is very convoluted.
5. The cost is expensive.
6. People's representative institutions do not purely advance the interests of the people, because in one region there are three to four, or even four to five, people's representatives.
Open and closed proportional electoral systems have fundamental differences in terms of the mechanism of election and determination of elected candidates. In an open proportional electoral system, voters have complete freedom to choose candidates individually, without having to consider political parties. In this case, voters can vote directly to the candidate they deem most qualified or best suited to their political preferences. Meanwhile, in a closed proportional electoral system, voters cast ballots for a political party, and the elected representatives of the political party are then determined based on the order of candidates predetermined by that party.
The advantage of an open proportional electoral system lies in giving voters the freedom to choose candidates individually. In this system, voters can choose candidates based on their personal qualities, political track record, or the vision of the desired representative. In addition, an open proportional electoral system can also allow the emergence of independent candidates who are not affiliated with a particular political party. This can increase representation and political pluralism within representative institutions. However, the open proportional electoral system also has its drawbacks. Since voters have the freedom to choose individual candidates, the vote may be split among many candidates, which can reduce the proportionality of seat gains. In addition, with independent candidates, the risk of fragmentation and splintering of political parties also increases, which can reduce political stability.
On the other hand, a closed proportional electoral system gives a more dominant role to political parties in determining the elected candidates. In this system, political parties determine the order of candidates based on established party policies or internal processes. The opportunity to succeed in any struggle of highly diverse interests depends on the level of togetherness within an organization. That is why the advantage of this system is higher political stability, because political parties have greater control over the composition of their representatives. In addition, with this mechanism, political parties can more easily coordinate their political programs and platforms. However, the closed proportional electoral system also has its drawbacks. Reliance on political parties in determining elected candidates can reduce individual political participation and ignore voters' preferences for certain candidates. In addition, the system is also vulnerable to the policies of party elites, which can influence the democratic process. If political parties ignore equal representation or fail to reflect the interests of society, this system can reduce public trust in representative institutions. The two systems can be contrasted on several points:
- Voters' choice: in the open system voters elect candidates directly to the legislature, whereas in the closed system the choice of political party is not necessarily the choice of the voter.
- Degree of equality of candidates: in the open system it is possible for a cadre to grow from below and win because of the support of the masses, whereas the closed system is dominated by cadres who rise to the top because of their closeness to the political party elite, not because of mass support.
- Number of seats and list of candidates: the party gains seats proportional to the votes obtained, and each party presents a list of candidates larger than the number of seats allocated to a single constituency or electoral district.
Legal Implications of the Implementation of Open and Closed Proportional Election System in Indonesia
General elections have until now been recognized as legitimate democratic institutional instruments and as parameters for the working of a democratic political system, so that the voice or will of the people becomes the basis for determining legislative and executive public officials. Since 1955, elections in Indonesia have been implemented using a proportional system. Both the open proportional system and the closed proportional system have been implemented in Indonesia; the closed proportional system was used in the 1999 and 2004 elections. An important aspect of implementing an open proportional system is to limit the control of political parties in determining the structural circulation of the legislature: in order to achieve people's sovereignty, candidates for legislative membership can come into direct contact with the people, and the people can know and choose whom they expect to be their representatives in parliament. It is different with a closed proportional system, where the people can only see the symbol of the political party on the ballot and only choose which political party will field its legislative candidates to sit in parliament. The people therefore cannot know whom the party chooses as the people's representative based on the order of numbers determined internally by the political party, so that the loyalty of legislative candidates is inclined more towards the interests of the political party, because it is the head of the political party who has determined the order of candidates.
In the 2009 elections, it was hoped that the open proportional system could bring fair outcomes, so that the elected legislative candidates would have more representative integrity and much stronger legitimacy, because those entitled to seats in parliament would be the legislative candidates with the most popular support. But from 2009 to 2019 the open proportional system has not escaped various problems, with implications opposite to what was expected: campaign costs have become expensive, the integrity of candidates and voters is at stake with the rise of money politics, political polarization, and identity politics, and the costs incurred by the state are numerous. This system allows only candidates who have large capital to be competitive in elections; even those who are not party cadres close to their political party can compete for seats as long as they have capital.
Compared to the 2009 election, which used an open proportional system, the 2014 election is estimated to have required much higher campaign costs: the 2009 election cost around 3.3 billion, rising to 4.5 billion in 2014, and only the 'capable' can compete with such capital. These 'capable' people are not necessarily experts in the field of Indonesian statehood; the Center for Political Studies of the University of Indonesia (PUSKAPOL UI) noted that more than half of the legislative candidates who contested parliamentary seats in the 2014 election, around 58.86%, were businessmen or professionals. In 2019, the campaign costs of legislative candidates reached tens of billions for budgets, campaign tools, and other items, all with the aim of winning the people's hearts.
In the 2019 election, an open proportional system was implemented simultaneously with the presidential and vice-presidential elections. An election system regulated in the Law has implications and consequences for the technical implementation of each stage of the election, in terms of administrative requirements, procedures, time, implementing personnel, facilities, budgets, and support from other institutions. The hope in implementing an open proportional system is that people no longer buy a cat in a sack: people know the identities of the legislative candidates listed on the ballot, and from the disclosure of legislative candidates on the ballot, the people can also recall their track records. Thus, when candidates are elected, accountable political relationships can be established between the people and the people's representatives.
General elections will be held again in 2024, but legal certainty is still awaited from the Constitutional Court's decision on which proportional system will be used. Article 168 of Law Number 7 of 2017 concerning General Elections is being tested in the Constitutional Court, because it is considered inconsistent with Article 22E Paragraph (3) of the 1945 Constitution, which states that election participants are political parties.
The results of Kherid's (2021) research show that the closed proportional system is more ideal to apply, with several improvements:
1. In a closed proportional electoral system, voters vote only for political parties. The list of legislative candidates is not contained in the ballot, but is shown or displayed on the polling station board, so that people can see and consider the performance and track record of the candidates they want before choosing a political party, and thus do not buy a cat in a sack. This concept can also reduce logistics costs and simplify vote counting, and is a middle ground between an open proportional system and a closed proportional system. With this concept there is no more competition for votes between candidates in one political party, no more buying and selling of votes, and no chance for instant candidates relying on counter-democratic elements to win parliamentary seats.
2. Each legislative candidate needs to pass an open survey or public test regionally in each constituency. This can open opportunities for anyone who wants to run as a legislative candidate, close the oligarchy gap, and eliminate closed candidate determination by those who take advantage of closeness to political party officials. This concept creates transparency for the people regarding the performance of legislative candidates, so that voters can get to know their candidates beyond the campaign period alone.
3. The determination of legislative seats in parliament is left entirely to political parties: whether to use sequential numbers or rankings based on the qualities that political parties think are best, a track record of integrity, or certain qualifications. As Kherid (2021) reads the meaning of Article 22E Paragraph (3) of the 1945 Constitution, seat determination is the right of political parties, so that political parties, not candidates, should be in direct contact with voters. The public can also assess the extent to which democratic mechanisms work within a political party in determining who sits in parliamentary seats.
4. By choosing a political party, the legislator's responsibility falls entirely under the political party, so political parties will compete to field candidates who are truly electorally favorable to them. Otherwise, simultaneous elections, acting as electoral courts, will prove it: the people can "judge" corrupt and underperforming political parties by not voting for them.
5. The calculation method should be returned to the Hare quota, because it fits a closed proportional system in which voters select only political parties.
The implementation of an open and closed proportional electoral system has significant legal implications in the context of state law in Indonesia. In an open proportional electoral system, legal protections of individual political rights, such as the right to vote and be elected, must be strengthened. A transparent, honest, and fair candidate selection mechanism must be guaranteed to ensure public confidence in the electoral process. In addition, legal regulations governing elections need to provide clarity on candidacy procedures and electoral mechanisms applicable in open proportional election systems. Meanwhile, in a closed proportional electoral system, legal aspects relating to the internal policies of political parties become important. Clear and unequivocal political party laws should regulate the mechanism of determining candidates and the order of candidates applicable in this system. In this regard, legal protection of political parties and party members must also be observed to prevent abuse of power or violations of internal party democratic principles.
In addition, in both electoral systems, legal arrangements related to election campaigns, election supervision, and sanctions for electoral violations must also be considered. Strong and effective electoral laws can ensure electoral integrity, encourage active political participation, and safeguard democratic principles in the political system. However, keep in mind that the implementation of an open or closed proportional electoral system cannot be viewed as a single solution to all legal and political problems in Indonesia. There needs to be an in-depth study and discussion involving various stakeholders to evaluate the existing electoral system and find the best solution that suits the needs and characteristics of Indonesia.
CONCLUSION
Based on a comparative analysis of open and closed proportional election systems in the perspective of state law in Indonesia, it can be concluded that both electoral systems have their own advantages and disadvantages. Decisions on the most suitable electoral system for Indonesia must consider aspects of democracy, political participation, justice, and political stability. The selection of the right electoral system will have significant implications in building a democratic and effective political system in Indonesia.
"year": 2023,
"sha1": "84322ac7f326e90b7600835a9dfdcffa14de5931",
"oa_license": "CCBYNC",
"oa_url": "https://rayyanjurnal.com/index.php/aurelia/article/download/695/pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "2b11f95588ef6629a05b1aedaf2aad1fcf4a2652",
"s2fieldsofstudy": [
"Law",
"Political Science"
],
"extfieldsofstudy": []
} |
Emergent statistical-mechanical structure in the dynamics along the period-doubling route to chaos
We consider both the dynamics within and towards the supercycle attractors along the period-doubling route to chaos to analyze the development of a statistical-mechanical structure. In this structure the partition function consists of the sum of the attractor position distances known as supercycle diameters, and the associated thermodynamic potential measures the rate of approach of trajectories to the attractor. The configurational weights for finite, $2^N$, and infinite, $N \rightarrow \infty$, periods can be expressed as power laws or deformed exponentials. For finite period the structure is undeveloped in the sense that there is no true configurational degeneracy, but in the limit $N \rightarrow \infty$ this is realized together with the analog property of a Legendre transform linking entropies of two ensembles. We also study the partition functions for all $N$ and the action of the Central Limit Theorem via a binomial approximation.
Introduction
For thermal systems formed by particles interacting via standard forces the limit of validity of equilibrium statistical mechanics is, trivially, non-equilibrium. Thermal systems constitute the normal realm of the Boltzmann-Gibbs (BG) formalism, but there are other types of systems for which it has been known for some time that they accept a statisticalmechanical description of the BG type. These are multifractals and chaotic nonlinear dynamical systems [1], among which one-dimensional unimodal iterated maps, represented by the quadratic logistic map, are familiar model systems [2,3] that exhibit such properties. The chaotic attractors generated by this class of maps have ergodic and mixing properties and not surprisingly they can be described by a thermodynamic formalism compatible with BG statistics [1]. But at the transition to chaos, the period-doubling accumulation point, the so-called Feigenbaum point, these two properties are lost and this suggests the possibility of exploring the limit of validity of the BG structure in a precise but simple enough setting.
Recently a comprehensive description has been given [4,5] of the elaborate dynamics that takes place both inside and towards the Feigenbaum attractor. Amongst several conclusions, these studies established that the two types of dynamics are related to each other in a statistical-mechanical way, i.e. the dynamics at the attractor provides the 'microscopic configurations' in a partition function while the approach to the attractor is efficiently described by an entropy obtained from it. As we show below, this property conforms to q-deformations [4,5,6,7] of the ordinary exponential weight of BG statistics. This novel statistical-mechanical feature arises in relation to a multifractal attractor with vanishing Lyapunov exponent. Here we explore this property in more detail, with focus on how the statistical-mechanical structure develops along the period-doubling bifurcation cascade [2,3], i.e. out of chaos.
Deformed exponentials appear in studies of many physical systems. For instance, simulated velocity distributions of statistical-mechanical models resemble closely the so-called q-Gaussian expression [8,9], suggesting the occurrence of generalized statistical-mechanical structures under non-equilibrium conditions. Here, as an effort to provide a firm basis for a wider discussion, we chose to study a nontrivial archetypal system under ergodicity and mixing failure and precisely determine its properties independently of any method that assumes a statistical-mechanical formalism. After that, the results obtained can be analyzed in relation to generalized entropy expressions or properties derived from them.
Brief recall of the dynamics within and towards the Feigenbaum attractor
The trajectories associated with the period-doubling route to chaos in unimodal maps exhibit elaborate dynamical properties that follow concerted patterns. At the period-doubling accumulation points, periodic attractors become multifractal before turning chaotic. At these points the Lyapunov exponent λ vanishes as it changes sign [2,3]. There are two sets of properties associated with the attractors involved: those of the dynamics inside the attractors and those of the dynamics towards the attractors. These properties have been characterized in detail: the organization of trajectories and the sensitivity to initial conditions at the Feigenbaum attractor are described in Ref. [4], while the features of the rate of approach of an ensemble of trajectories to this attractor have been explained in Ref. [5].
We recall some of the basic features of the bifurcation forks that form the period-doubling cascade sequence in unimodal maps, often illustrated by the logistic map f_µ(x) = 1 − µx², −1 ≤ x ≤ 1, 0 ≤ µ ≤ 2 [2,3]. The knowledge of the dynamics towards a particular family of periodic attractors, the so-called superstable attractors [2,3], facilitates the understanding of the rate of approach of trajectories to the Feigenbaum attractor, located at µ = µ∞ = 1.401155189092..., and highlights the source of the discrete scale invariance of this rate [5]. The family of trajectories associated with these attractors, also called supercycles, of periods 2^N, N = 1, 2, 3, ..., are located along the bifurcation forks. The positions (or phases) of the 2^N-attractor are given by x_m = f^{(m)}_{µ_N}(0), m = 0, 1, ..., 2^N − 1. Associated with the 2^N-attractor at µ = µ_N there is a (2^N − 1)-repellor consisting of 2^N − 1 positions y_k, k = 0, 1, 2, ..., 2^N − 2. These positions are the unstable solutions y of f^{(2^{n−1})}_{µ_N}(y) = y with |df^{(2^{n−1})}_{µ_N}(y)/dy| > 1, n = 1, 2, ..., N. The first, n = 1, originates at the initial period-doubling bifurcation, the next two, n = 2, start at the second bifurcation, and so on, with the last group of 2^{N−1}, n = N, setting out from the Nth bifurcation [2,3]. Central to our understanding of the dynamical properties of unimodal maps is the following in-depth property: time evolution at µ∞ from τ = 0 up to τ → ∞ traces the period-doubling cascade progression from µ = 0 up to µ∞. There is an underlying quantitative relationship between the two developments. Specifically, the trajectory inside the Feigenbaum attractor with initial condition x_0 = 0, the 2^∞-supercycle orbit, takes positions x_τ such that the distances between appropriate pairs of them reproduce the diameters d_{N,m} defined from the supercycle orbits with µ_N < µ∞. See Fig. 1 in Ref. [5]. This property has been basic in obtaining rigorous results for the sensitivity to initial conditions of the Feigenbaum attractor [4], and for the dynamics of approach to this attractor [5]. Other families of periodic attractors share most of the properties of supercycles.
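Because the supercycle parameters µ_N and diameters are used repeatedly below, a minimal numerical sketch may be helpful. The Python fragment below locates µ_N by solving f^{(2^N)}_µ(0) = 0 with root bracketing and evaluates the orbit and its diameters; the diameter definition d_{N,m} = x_m − f^{(2^{N−1})}_{µ_N}(x_m) and the root-bracketing heuristic are our assumptions for illustration, not taken from the paper.

```python
import numpy as np
from scipy.optimize import brentq

def f(x, mu):
    """Logistic map f_mu(x) = 1 - mu * x^2 on [-1, 1]."""
    return 1.0 - mu * x * x

def iterate(x, mu, n):
    for _ in range(n):
        x = f(x, mu)
    return x

def supercycle_mu(N, lo, hi):
    """mu_N of the superstable 2^N-cycle, the root of f_mu^(2^N)(0) = 0.
    Assumes (lo, hi) brackets exactly one simple root."""
    return brentq(lambda mu: iterate(0.0, mu, 2 ** N), lo, hi, xtol=1e-14)

mu_inf = 1.401155189092
mus, lo = [], 0.9
for N in range(1, 7):                   # mu_1 = 1, mu_2 ~ 1.3107, ...
    muN = supercycle_mu(N, lo, mu_inf)
    mus.append(muN)
    lo = muN + 1e-6

N, muN = 6, mus[-1]
x = [0.0]                               # orbit positions x_m = f^(m)(0)
for _ in range(2 ** N - 1):
    x.append(f(x[-1], muN))
# Diameters taken as d_{N,m} = x_m - f^(2^{N-1})(x_m), m = 0, ..., 2^{N-1}-1.
d = [x[m] - iterate(x[m], muN, 2 ** (N - 1)) for m in range(2 ** (N - 1))]
print("mu_%d = %.12f" % (N, muN))
print("largest |d| = %.5f vs alpha^-(N-1) = %.5f"
      % (max(abs(v) for v in d), 2.50291 ** -(N - 1)))
```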
The organization of the total set of trajectories as generated by all possible initial conditions as they flow towards a period-2^N attractor has been determined in detail [5,10]. It was found that the paths taken by the full set of trajectories in their way to the supercycle attractors (or to their complementary repellors) are exceptionally structured. The dynamics associated to families of trajectories always displays a characteristically concerted order in which positions are visited, and this is reflected in the dynamics of the supercycles of periods 2^N via the successive formation of gaps in phase space (the interval −1 ≤ x ≤ 1) that finally give rise to the attractor and repellor multifractal sets. To observe this process explicitly, an ensemble of initial conditions x_0 distributed uniformly across phase space was considered and their positions were recorded at subsequent times [5,10]. This set of gaps develops in time beginning with the largest one associated with the first repellor position, then followed by a set of two gaps associated with the next two repellor positions, next a set of four gaps associated with the four next repellor positions, and so forth. The gaps that form consecutively all have the same width in the logarithmic scales [5], and therefore their actual widths decrease as a power law, the same power law followed, for instance, by the position sequence x_τ = α^{−N}, τ = 2^N, N = 0, 1, 2, ..., for the trajectory inside the attractor starting at x_0 = 0 (and where α ≃ 2.50291 is the absolute value of Feigenbaum's universal constant). The locations of this specific family of consecutive gaps advance monotonically toward the sparsest region of the multifractal attractor located at x = 0. See Refs. [4,5,10].
Sums of diameters as partition functions
The rate of convergence W_t of an ensemble of trajectories towards any attractor/repellor pair along the period-doubling cascade is a convenient single-time quantity that has a straightforward definition and is practical to implement numerically. A partition of phase space is made of N_b equally-sized boxes or bins, and a uniform distribution of N_c initial conditions is placed along the interval −1 ≤ x ≤ 1. The ratio N_c/N_b can be adjusted to achieve optimal numerical results [5]. The quantity of interest is the number of boxes W_t that contain trajectories at time t. This rate has been determined for the supercycles µ_N, N = 1, 2, 3, ..., and their accumulation point µ_∞ [5]. See Fig. 19 in that reference, where W_t is shown in logarithmic scales for the first five supercycles of periods 2^1 to 2^5 and where the following features can be observed: in all cases W_t shows a similar initial and nearly constant plateau, and a final well-defined decay to zero. As can be observed in the left panel of Fig. 19 in [5], the duration of the final decay grows approximately proportionally to the period 2^N of the supercycle. There is an intermediate slow decay of W_t that develops as N increases, with duration also roughly proportional to 2^N. For the shortest period 2^1 there is no intermediate feature in W_t; this appears first for period 2^2 as a single dip and expands with one undulation every time N increases by one unit. The expanding intermediate regime exhibits the development of a power-law decay with logarithmic oscillations (characteristic of discrete scale invariance). In the limit N → ∞ the rate takes the form of a power law modulated by a periodic function of the logarithm of time, W_t ≃ h(ln t/ln 2) t^{−B}. The rate W_t, at the values of time for period doubling, τ = 2^n, n = 1, 2, 3, ... < N, can be obtained quantitatively from the supercycle diameters d_{n,m}. Specifically, W_τ ∝ Z_τ ≡ Σ_{m=0}^{2^{n−1}−1} d_{n,m}. (1) In the above expression, τ = t − t_0 = 2^{n−1}, n = 1, 2, 3, ... < N. Eq. (1) expresses the numerical procedure followed in [11] to evaluate the exponent B, but it also suggests a statistical-mechanical structure if Z_τ is identified as a partition function where the diameters d_{n,m} play the role of configurational terms [5]. The diameters d_{N,m} scale with N for m fixed as d_{N,m} ≃ α_y^{−(N−1)}, N large, where the α_y are universal constants obtained from the finite discontinuities of Feigenbaum's trajectory scaling function σ(y) [2,5]. The largest two discontinuities of σ(y) correspond to the sparsest and densest regions of the multifractal attractor at µ_∞, for which we have, respectively, d_{N,m} ≃ α^{−(N−1)} and d_{N,m} ≃ α^{−2(N−1)}.
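The rate W_t is straightforward to reproduce numerically. The sketch below, with bin and ensemble sizes chosen arbitrarily for illustration, counts the occupied bins of a uniform ensemble iterated under f_µ; run at µ = µ_∞ it displays the plateau followed by the log-periodically modulated power-law decay, while at a finite µ_N it finally decays to a number of boxes of order 2^N.

```python
import numpy as np

def w_t(mu, n_boxes=10_000, n_per_box=10, t_max=1_000):
    """Number of phase-space boxes still occupied by the ensemble at each time t."""
    x = np.linspace(-1.0, 1.0, n_boxes * n_per_box, endpoint=False)  # uniform ensemble
    counts = []
    for _ in range(t_max):
        # bin index of each trajectory in a partition of [-1, 1] into n_boxes bins
        idx = np.floor((x + 1.0) * 0.5 * n_boxes).astype(np.int64)
        idx = np.minimum(idx, n_boxes - 1)          # guard the endpoint x = 1
        counts.append(np.unique(idx).size)
        x = 1.0 - mu * x * x                        # one map iteration
    return np.array(counts)

W = w_t(1.401155189092)   # at mu_inf: plateau, then power law with log-periodic wiggles
```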
A closer analysis of the partition functions for the supercycles
We proceed now to study the diameters d_{N,m} in more detail, so that we can evaluate the soundness of their association with configurational terms in a partition function. With this in mind we determined their values for the supercycles of periods 2^N from N = 1 to N = 12, that is, starting with the case of a single diameter d_{1,0} = 1 and following successively up to a set of 2048 diameters d_{12,m}, m = 0, 1, ..., 2^{11} − 1. This task required the precise evaluation of the control parameter values µ_N, N = 1, ..., 12.
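With the µ_N values at hand (the list mus from the first sketch above), the diameters follow directly from the supercycle orbit started at x = 0; here we take d_{N,m} as the distance between orbit positions half a period apart, the standard supercycle-diameter definition [2,3], and Eq. (1) is then just their (suitably normalized) sum.

```python
def diameters(mu_N, N):
    """d_{N,m} = |x_m - x_{m + 2^(N-1)}|, with x_m the m-th iterate of 0 at mu = mu_N."""
    period = 2 ** N
    orbit = [0.0]
    for _ in range(period - 1):
        orbit.append(1.0 - mu_N * orbit[-1] ** 2)
    half = period // 2
    return [abs(orbit[m] - orbit[m + half]) for m in range(half)]

# Sum of diameters, i.e. the partition function Z_tau of Eq. (1), for each N
for N in range(1, 13):
    d = diameters(mus[N - 1], N)
    print(N, len(d), sum(d))     # 2^(N-1) diameters; the sum decays roughly as tau^-B
```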
In Fig. 1 we show the lengths of these sets arranged in decreasing order, namely, we present the d_{N,m} as a function of their rank r, the size-rank distributions, in logarithmic scales, as is often done for this type of distribution, which frequently exhibits power-law behavior. We observe in Fig. 1 that the distributions have a downhill terraced (or multiple-plateau) structure: the diameters form well-defined size groups, and these sizes decrease on average by a fixed amount (in the logarithmic scales shown), equal to log₁₀ α ≃ 0.39844, from group to group. This amount reflects the well-known [2,3] power-law scaling of diameter sizes via the universal constant α. Their size-rank distributions satisfy a piecewise Zipf-like law. For example, for the largest diameter we have d_{N,0}/d_{N+1,0} ≃ α, whereas for the smallest we have d_{N,2^{N−1}−1}/d_{N+1,2^N−1} ≃ α². We observe clearly in Figs. 1 and 3 that the diameter lengths within each group are not equal, so that there is no degeneracy in them. However, the differences in lengths within groups diminish rapidly as N increases. There are two groups with only one member, the largest and the shortest diameters, and the numbers within each group grow monotonically from each end towards the middle-sized length group. The numbers of diameters forming these groups can be neatly arranged into a Pascal Triangle (see Fig. 2), and therefore we anticipate the action of the Central Limit Theorem, in a form reminiscent of the De Moivre-Laplace theorem, so that in the limit N → ∞ the middle-sized-length group of diameters dominates the partition function Z_τ and a situation similar to the saddle-point approximation occurs. Also, in the limit N → ∞ the lengths of the dominant group (as well as those of all other groups of diameters with smaller lengths) become closer in size (see the trend in Fig. 1), so that in the limit N → ∞ there appears a true degeneracy in the dominant partition-function configurations, which gives the statistical-mechanical structure the required characteristics for ensemble equivalence and the Legendre-transform property central to statistical mechanics.
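The terraced group structure and its Pascal-Triangle multiplicities can be checked directly with the diameters computed above; in this sketch we assign each diameter to the nearest terrace on the log α scale, a grouping rule of our own choosing that may misplace borderline members at small N.

```python
import math
from collections import Counter

alpha = 2.502907875                  # Feigenbaum's universal constant |alpha|
N = 12
d = sorted(diameters(mus[N - 1], N), reverse=True)   # size-rank distribution

# Terrace index: a diameter of length ~ alpha^-(N-1+l) sits on terrace N-1+l
terrace = Counter(round(-math.log(x) / math.log(alpha)) for x in d)
print(sorted(terrace.items()))                       # group sizes along the staircase
print([math.comb(N - 1, l) for l in range(N)])       # Pascal row: 1, 11, 55, 165, ...
```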
The above facts and understanding allow us to be more precise, and we now denote the diameters as d_{N,l,i}, where the subindexes l and i provide more specific information than the former subindex m. Subindex l = 0, 1, ..., N − 1 designates the group terrace (as in Fig. 1), and subindex i labels the individual diameters within each group, so that the partition function can be written as Z_τ = Σ_{l=0}^{N−1} Σ_i A_{N,l,i} exp_q(−βε_l). (2) Eq. (2) resembles a basic statistical-mechanical expression except for the presence of the amplitudes A_{N,l,i} and the fact that q-deformed exponential weights appear in place of ordinary exponential weights (which are recovered when Q = q = 1). To explore further we use a binomial approximation for Z_τ [5]. That is, we adopt the approximation of considering the diameter lengths in each group to be equal (A_{N,l,i} = 1) and assume that these common lengths are given by the binomial combination of the scale factors of those diameters that converge to the most crowded and most sparse regions of the multifractal attractor. Namely, the 2^{N−1} diameters at the N-th supercycle have lengths equal to α^{−(N−1−l)} α^{−2l} and occur with multiplicities Ω(N − 1, l) = C(N − 1, l), the binomial coefficients, so that Z_τ = Σ_{l=0}^{N−1} C(N − 1, l) α^{−(N−1−l)} α^{−2l} = (α^{−1} + α^{−2})^{N−1} ≡ τ^{−B}, (3) where τ = 2^{N−1}. We obtain B = 0.8386 and Q = 2.1924, a surprisingly good approximation when compared with the numerical estimates B = 0.8001 and Q = 2.2498 of the exact values [5]. Eq. (2) now reads Z_τ = Σ_{l=0}^{N−1} Ω(N − 1, l) exp_q(−βε_l), (4) where F/ε = (1 − q)/(1 − Q), Ω(N − 1, l) = C(N − 1, l), and α^{−(N−1−l)} α^{−2l} = 2^{−(N−1+l)(ln α/ln 2)} = exp_q(−βε_l). In the language of thermal systems Eq. (4) reads as follows: there are N − 1 degrees of freedom that generate 2^{N−1} configurations, and these occupy N energy levels with degeneracies Ω(N − 1, l), l = 0, 1, ..., N − 1. Under the binomial approximation the energies of the 2^{N−1} configurations become confined to the energy values ε_l = 2^{N−1+l}, l = 0, 1, ..., N − 1. In the generalized canonical partition function all the q-exponential weights acquire a fixed inverse temperature β = ln α/ln 2. When we extend the study of the quadratic map to the infinite family of unimodal maps with extremum of nonlinearity z, 1 < z < ∞, the inverse temperature β can be varied continuously, as the universal constant α(z) varies monotonically with z [12].
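Under the binomial approximation the sum in Eq. (3) closes to Z_τ = (α^{−1} + α^{−2})^{N−1}, and with τ = 2^{N−1} the identification Z_τ = τ^{−B} fixes B in closed form. The relation Q = 1 + 1/B used below is our own inference, adopted because it reproduces both quoted (B, Q) pairs; it is not a formula stated in the text.

```python
import math

alpha = 2.502907875              # |alpha|, Feigenbaum's universal constant
w = alpha ** -1 + alpha ** -2    # per-degree-of-freedom binomial weight

# Z_tau = w^(N-1) with tau = 2^(N-1)  =>  tau^(-B) = w^(N-1)  =>  B = -ln w / ln 2
B = -math.log(w) / math.log(2.0)
Q = 1.0 + 1.0 / B                # assumed B-Q relation; it also maps 0.8001 -> 2.2498
print(B, Q)                      # ~0.8387 and ~2.1924, consistent with the quoted values
```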
A limiting statistical-mechanical structure for the dynamics at the Feigenbaum point
According to our scheme, for finite N (the supercycle of period 2^N at µ_N) we can form N − 1 partition functions Z_τ, τ = 2^{n−1}, n = 1, 2, 3, ..., N − 1. The number of terms in these partition functions ranges from a single term, d_{1,0}, to 2^{N−1} terms, d_{N,m}, m = 0, 1, 2, ..., 2^{N−1} − 1. As explained, for uniform distributions of initial conditions −1 ≤ x_0 ≤ 1 at µ = µ_N, the partition functions Z_τ measure the fraction of ensemble trajectories still away from the attractor at times τ = 2^{n−1}, n = 1, 2, 3, ..., N − 1. These times coincide with the sequential process of phase-space gap formation by the trajectories [5]. The gaps correspond to the intervals in −1 ≤ x ≤ 1 located between the bifurcation forks in the period-doubling cascade when µ = µ_N; that is, the gap intervals are placed between consecutive diameters. As N grows, new smaller gaps proliferate while the diameters grow in number and each of them decreases in value. See Fig. 3, where the numbers of diameters are shown for each group formed in the case of the 12th supercycle. The number of groups into which the diameters distribute increases, as does the number of diameters within each group. As we have indicated, these increments obey the entries in the Pascal Triangle generated by a binomial. Although the diameters within each group are never equal, their differences decrease rapidly. The dominant term in Z_τ is that associated with Ω(N − 1, (N − 1)/2), N odd, and in the limit N → ∞ we have that Z_∞ = Ω(N → ∞, l = N/2 → ∞). We interpret this last equality as ensemble equivalence in the thermodynamic limit (here N → ∞ corresponds to the attractor at the transition to chaos).
It is more convenient to describe the ensemble equivalence in terms of the binomial approximation of the partition function Z_τ given by Eqs. (3) and (4), where Ω(N − 1, l) plays the role of a 'microcanonical' partition function representing the system configurations with fixed diameter length α^{−(N−1−l)} α^{−2l}, and Z_τ stands for the 'canonical' partition function formed by weighting the degenerate configurations Ω(N − 1, l) for each length group by the factor α^{−(N−1+l)} ≡ exp_q(−βε_l). According to the De Moivre-Laplace early form of the Central Limit Theorem, the growth of N drives the binomial distribution towards a Gaussian distribution, Ω(N − 1, l) α^{−(N−1+l)} ≃ δ^{N−1} (2πNρσ)^{−1/2} exp(−x²/2Nρσ), where δ = α^{−1} + α^{−2}, ρ ∼ α^{−2}, σ ∼ α^{−1} and x = l − Nρ. For large N the midpoint terms in the expansion of the binomial dominate, Z_τ = (α^{−1} + α^{−2})^{N−1} ≃ C(N, N/2) α^{−3N/2} ∼ 2^N α^{−3N/2}, and in the limit N → ∞ we have that Z_∞ = Ω(N → ∞, l = N/2 → ∞). For N fixed the 'energies' ε_l range from 2^{N−1} − 1 to 2^{2(N−1)} − 1. For large N the 'energy' that corresponds to the 'microcanonical' partition function that becomes the dominant term in Z_τ is ε_{N/2} ≃ 2^{3N/2}. A crossover to ordinary BG-type statistics takes place when µ > µ_∞ and the attractor becomes chaotic. For ∆µ ≡ µ − µ_∞ > 0 the attractors are made up of 2^N, N = 1, 2, 3, ..., bands, with N larger for ∆µ smaller, while the Lyapunov exponent scales as λ ∼ 2^{−N}. The trajectories consist of an interband periodic motion of period 2^N and an intraband chaotic motion. As explained in Ref. [5], the consideration of backward iterations in unimodal maps, together with the expansion of the separation of trajectories when λ > 0, can be invoked to write a partition function similar to that in Eq. (2) but now with ordinary exponentials as configurational weights.
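The De Moivre-Laplace convergence can be verified term by term: treating the binomial weights as a probability distribution with success fraction p = α^{−2}/(α^{−1} + α^{−2}), the normalized terms approach the corresponding Gaussian profile as N grows. The following sketch (with an arbitrary choice of N) compares the two.

```python
import math

alpha = 2.502907875
a, b = alpha ** -1, alpha ** -2
p = b / (a + b)                       # weight fraction carried by the alpha^-2 scale

n = 200                               # n = N - 1 degrees of freedom (arbitrary choice)
Z = (a + b) ** n
mean, var = n * p, n * p * (1 - p)
for l in range(0, n + 1, 25):
    exact = math.comb(n, l) * a ** (n - l) * b ** l / Z
    gauss = math.exp(-((l - mean) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)
    print(l, exact, gauss)            # the two columns agree near the peak
```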
As is well known [1], the so-called thermodynamic formalism for the description of the geometric properties of multifractal sets is built around a statistical-mechanical framework of the BG type. The partition function formulated to study multifractal properties, like the spectrum of singularities f(α̃), is written as Z(τ̃, β̃) = Σ_{m=1}^{M} p_m^{β̃} l_m^{−τ̃}, where the l_m in one-dimensional systems are M disjoint interval lengths that cover the multifractal set and the p_m are probabilities given to these intervals. The standard practice consists of demanding that Z(τ̃, β̃) neither vanishes nor diverges in the limit l_m → 0 for all m (notice that in this limit M → ∞). Under this condition the exponents τ̃ and β̃ define a function τ̃(β̃) from which f(α̃) is obtained via Legendre transformation [1]. When the multifractal is an attractor its elements are ordered dynamically, and for the Feigenbaum attractor the trajectory with initial condition x_0 = 0 visits its positions in a strictly ordered sequence.

[Fig. 3 caption: The figures indicate the number of diameters in each group. As N grows the length-rank distributions approach the binomial size-rank distribution and the De Moivre-Laplace theorem applies at the transition to chaos. See text. In the background we show the same diameters, before sorting, as a function of m.]
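For orientation, the condition that Z(τ̃, β̃) neither vanish nor diverge can be solved explicitly in a two-scale approximation of the Feigenbaum attractor, covering it with intervals of lengths α^{−1} and α^{−2} carrying equal measure; this covering is a textbook simplification on our part, not the computation performed in the text. At β̃ = 0 the condition reduces to Σ_m l_m^{−τ̃} = 1, whose solution gives −τ̃ = D₀, the box-counting dimension of the covering.

```python
import math

alpha = 2.502907875

def tau_of_beta(beta, lo=-30.0, hi=30.0):
    """Solve 2^(-beta) * (alpha^tau + alpha^(2*tau)) = 1 for tau by bisection
    (two scales alpha^-1 and alpha^-2, equal weights p = 1/2; moderate beta only)."""
    g = lambda t: 0.5 ** beta * (alpha ** t + alpha ** (2.0 * t)) - 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# beta = 0: -tau ~ 0.524, within a few percent of the attractor's known D_0 ~ 0.538
print(-tau_of_beta(0.0))
```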
Summary and discussion
The items we studied are the following: i) The partition function we considered is the sum of attractor position distances (the so-called diameters of the supercycles [2,3]) for each period 2^N along the bifurcation cascade that leads to the transition to chaos. ii) For uniformly-distributed sets of initial conditions x_0 the partition function is equal to the number of bins that still contain trajectories en route to the attractor at time τ = 2^n, n = 1, 2, 3, ..., N, where the supercycle period is τ = 2^N, N > 1. iii) For N fixed the values of the diameters distribute into well-defined groups with a size-rank structure that develops into a power law as N increases. These groups can be arranged into a Pascal Triangle when considering all N up to N → ∞, but the diameters within each group are not equal. Nevertheless, their differences diminish rapidly as N increases, so that a binomial approximation can be introduced in which the diameters within each group are considered equal for all N. iv) In the limit N → ∞ the diameter-group degeneracy imparts to the partition function the structure required to observe ensemble equivalence, and other familiar features of statistical mechanics, even though the configurational weights are not exponential. v) The visible or 'macroscopic' manifestation of the statistical-mechanical structure, the emergence of a power law with log-periodic modulation associated with the rate of approach of trajectories towards the Feigenbaum attractor, is linked to the sequential process of phase-space gap formation. vi) Beyond the transition to chaos, when the attractors become sets of chaotic bands, the configurational weights are converted into ordinary exponentials and the usual BG form is recovered.
The main advance presented here with respect to Ref. [5] is the determination of the terrace structure displayed by the diameters for finite N, shown in Figs. 1 and 3. This fact allowed us to write the partition function in Eq. (1) explicitly as Eq. (2). We were therefore able to study how the lack of configurational degeneracy gradually disappears as N → ∞, leading to ensemble equivalence.
Chaotic dynamics in nonlinear systems admits statistical-mechanical descriptions [1]. Unimodal maps, usually represented by the logistic map, offer a simple but nontrivial model system in which to explore the development of such a statistical-mechanical structure, to examine the gradual fulfilment of its basic elements and eventually the display of the full ordinary features of the BG formalism. A unimodal map is a well-defined and controllable numerical laboratory for observing the limit of validity of the BG formalism when the ergodic and mixing properties of chaotic dynamics break down. As has long been known, unimodal maps display two bifurcation cascades that take place in opposite directions in control-parameter space, one for µ < µ_∞, where periodic attractors double their periods, and the other for µ > µ_∞, where chaotic-band attractors split, doubling their number of bands. The two cascades meet at µ = µ_∞. Infinitely many reproductions of these inverse cascades appear within the windows of periodicity that interrupt the chaotic-band attractors for µ > µ_∞ [2,3].
As we have mentioned, the ergodic and mixing trajectories of chaotic-band attractors conform to a statistical-mechanical structure of the BG type [1]. We have described how the positions of periodic attractors can be used to define partition functions and how these capture information on the dynamics towards the attractors [5]. However, as we have explained, these partition functions lack some standard properties required in a thermodynamic formalism, such as the degeneracy of configurational states that manifests as ensemble equivalence and the correspondence of the respective thermodynamic potentials in the thermodynamic limit (which in the unimodal-map model is the limit N → ∞ of infinite period). For finite N the configurational terms (diameters d_{N,m}) separate into well-defined magnitude (length) groups, but they are not equal within each group. These groups of diameters are the prototypes of 'microcanonical' ensembles, while the consideration of all groups, all diameters for a given supercycle of period 2^N, is the candidate version of the 'canonical' ensemble. As we have seen, when N → ∞ the diameters seem to fulfill a binomial approximation such that the (vanishing) lengths within the dominant diameter groups (with divergent numbers) become equal, and the De Moivre-Laplace theorem establishes the equivalence between the 'microcanonical' and 'canonical' ensembles. The binomial approximation we presented for finite N allows for a conventional interpretation in the language of thermal systems.
"year": 2014,
"sha1": "bcf5e9ffb89695344bd84a330759159e0e21e85b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1403.0993",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "bcf5e9ffb89695344bd84a330759159e0e21e85b",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
New species and new records of camallanid nematodes (Nematoda, Camallanidae) from marine fishes and sea snakes in New Caledonia
Recent examinations of camallanid nematodes (Camallanidae) from marine fishes off New Caledonia, collected in the years 2003–2011, revealed the presence of the following five new species of Procamallanus Baylis, 1923, all belonging to the subgenus Spirocamallanus Olsen, 1952: Procamallanus (Spirocamallanus) dispar n. sp. from the common ponyfish Leiognathus equulus (type host) and the striped ponyfish Aurigequula fasciata (both Leiognathidae, Perciformes); Procamallanus (Spirocamallanus) bothi n. sp. from the leopard flounder Bothus pantherinus (Bothidae, Pleuronectiformes); Procamallanus (Spirocamallanus) hexophtalmatis n. sp. from the speckled sandperch Parapercis hexophtalma (Pinguipedidae, Perciformes); Procamallanus (Spirocamallanus) synodi n. sp. from the sand lizardfish Synodus dermatogenys (Synodontidae, Aulopiformes); and Procamallanus (Spirocamallanus) thalassomatis n. sp. from the yellow-brown wrasse Thalassoma lutescens (Labridae, Perciformes). These are described based on light and scanning electron microscopical (SEM) studies. An additional three congeneric nematodes unidentifiable to species are reported from perciform fishes and a shark: Procamallanus (Spirocamallanus) sp. 3 of Moravec et al., 2006, Procamallanus (Spirocamallanus) sp. 1, and Procamallanus (Spirocamallanus) sp. 2. Ten fish species are recorded as new hosts for Camallanus carangis Olsen, 1954. Two camallanids, Procamallanus (Spirocamallanus) sp. 3 (subgravid female) and Camallanus carangis (fourth-stage larva) were also found in the digestive tract of the New Caledonian sea krait Laticauda saintgironsi, serving apparently as postcyclic and paratenic hosts, respectively, for these fish nematodes.
Introduction
Nematodes of the family Camallanidae Railliet et Henry, 1915, characterized by a well-developed, usually orange-coloured buccal capsule and a life cycle involving a copepod intermediate host, are mostly gastrointestinal, blood-sucking parasites of marine, brackish-water and freshwater fishes and, less often, of amphibians and aquatic reptiles (turtles, snakes) [1,10,41]. Although camallanids are frequent parasites of Indo-Pacific fishes, where many species have been reported, data on these nematodes in New Caledonian waters are scarce. To date, only the following nominal species of camallanids have been recorded from New Caledonia: Camallanus cotti
Materials and methods
Fish were caught off New Caledonia by various means; those obtained from the fish market in Nouméa were very fresh and thus were probably caught in the near vicinity. For fish, we generally used the "wash" method [15]. For sea snakes, as these hosts are emblematic protected species, an indirect sampling method without any effect on individual survival was used: a gentle massage of the sea krait abdomen provided the stomach content by regurgitation [4]. The regurgitated contents might include recently swallowed fish, which are thus recognizable [43], or, if digestion has already occurred, no recognizable item, as was the case for the samples described in this study. The nematodes for morphological studies were fixed in hot 4% formalin or 70% ethanol. For light microscopical examination, they were cleared with glycerine. Drawings were made with the aid of a Zeiss microscope drawing attachment. Specimens used for scanning electron microscopical (SEM) examination were postfixed in 1% osmium tetroxide (in phosphate buffer), dehydrated through a graded acetone series, critical-point dried and sputter-coated with gold; they were examined using a JEOL JSM-7401F scanning electron microscope at an accelerating voltage of 4 kV (GB low mode). All measurements are in micrometres unless otherwise indicated. The fish nomenclature adopted follows FishBase [7].
Remarks
The present specimens from L. equulus and A. fasciata are considered to be conspecific because of their morphological and biometrical similarity and the fact that both of their host species belong to the same fish family. These nematodes belong to the subgenus Spirocamallanus of the genus Procamallanus in the conception of Moravec and Thatcher [32], namely to the group of Spirocamallanus species characterized by the presence of wide caudal alae, three pairs of pedunculate preanal papillae and two unequal spicules, which are mostly parasites of marine fishes [37]. Most species of this group are characterized by the presence of two caudal spikes, one dorsal and one ventral, on a digital projection in the female [9,46], whereas these are lacking only in a few species. According to Petter et al. [38], Rigby and Adamson [39] and Moravec et al. [27], the shape and structure of the female tail appear to be constant within a species of Procamallanus (Spirocamallanus).
By the shape of the female tail and the absence of any terminal spikes, P. (S.) dispar n. sp. resembles P. (S.) murrayensis, P. (S.) mexicanus [11] and P. (S.) sinespinis from the marine fish Pomadasys argenteus (Forsskål) (Haemulidae) off New Caledonia [26]. Vicente and Santos [45] did not report the presence of two terminal caudal spines in females of the inadequately described P. (S.) macaensis Vicente et Santos, 1972, a parasite of several species of marine fishes in Brazil [16], but these are present according to the later redescription of this species [42].
However, in contrast to the new species, the right spicule of P. murrayensis is distinctly shorter (290 µm vs 408-449 µm). The right spicules of P. mexicanus and P. sinespinis are only slightly longer (456-480 µm and 465-525 µm, respectively, vs 408-449 µm) than those of P. dispar, but the number of spiral ridges in their buccal capsules is 10-12 (vs 12-14); in addition, the male tail tip of these two species bears either a single conical cuticular spike (P. mexicanus) or a knob-like structure (P. sinespinis) (vs two, dorsal and ventral, terminal spikes); the female tail of P. mexicanus has a different shape, its anterior portion being narrow and conical (vs broad and posteriorly rounded). Moreover, P. mexicanus and P. murrayensis are parasites of freshwater fishes, whereas the hosts of P. dispar are marine fishes.
Moravec et al. [27] reported a nematode subgravid female, designated as Procamallanus (Spirocamallanus) sp. 3, from the marine fish Scolopsis bilineata (Bloch) (Nemipteridae) off New Caledonia. The shape of its tail is similar to that of P. dispar, and the number (13) of spiral ridges in the buccal capsule also corresponds to this species. However, the location of the deirids is different (at the level of the nerve ring vs midway between the buccal capsule and the nerve ring), so this specimen probably represents another species.

Prevalence, intensity and details about fish: 1 fish infected/1 fish examined; 12 nematodes. The infected fish (Fish number: JNC3310) was 234 mm in fork length and 128 g in weight.
Etymology: The specific name of this nematode relates to the genitive form of the generic name of the host.
Remarks
Nematodes of the present material belong to the morphological group of Procamallanus (Spirocamallanus) species characterized by the presence of wide caudal alae, three pairs of pedunculate preanal papillae, two unequal spicules and two caudal spikes on a digital projection in the female. According to Yooyen et al. [46], in the Indo-Pacific region this group contains 23 nominal species reported mostly from marine fishes. However, the great majority of them are poorly described and should be considered species inquirendae [see also 31,40].
The following nine species of this morphological group from the Indo-Pacific region can be considered valid: P. (S.) anguillae, P. (S.) gobiomori, P. (S.) guttatusi, P. (S.) istiblenni, P. (S.) monotaxis, P. (S.) pereirai, P. (S.) rigbyi, P. (S.) similis and P. (S.) variolae [27].
Of these, as compared with the new species, the right spicule is distinctly longer in P. gobiomori (318-348 µm vs 267-270 µm), P. pereirai (430 µm), P. rigbyi (315-360 µm), P. similis (435-492 µm) and P. variolae (327-357 µm); moreover, the spiral ridges in the buccal capsule are less numerous in P. gobiomori (8-10 vs 13-19), P. similis (10-12) and P. variolae (11-12), and all these five species also differ in the family and order of their fish hosts (Perciformes: Eleotridae, Serranidae, Sciaenidae and Sillaginidae, or Atheriniformes: Atherinidae vs Pleuronectiformes: Bothidae). The right spicule of P. anguillae is somewhat longer (289-384 µm vs 267-270 µm) than that of P. bothi n. sp., the spiral ridges are usually less numerous (10-15 vs 13-19), and the two species differ in the type of host (freshwater eel vs marine flatfish). The length of the right spicule in P. istiblenni and P. monotaxis is rather similar to that in the new species (263-302 µm and 279-315 µm, respectively, vs 267-270 µm), but the spiral ridges are mostly less numerous (12-15 and 10-17, respectively, vs 13-19); the deirids of P. istiblenni are located at 2/3 of the distance between the base of the buccal capsule and the nerve ring (vs at the mid-length of this distance) and the excretory pore at the level of the posterior end of the muscular oesophagus (vs somewhat posterior to this level), and the female tail of P. istiblenni is more conical than that of P. bothi n. sp.; the excretory pore of P. monotaxis is located a short distance anterior to the posterior margin of the muscular oesophagus (vs somewhat posterior to the anterior end of the glandular oesophagus). Moreover, the hosts of P. istiblenni and P. monotaxis belong to different fish families and orders (Perciformes: Blenniidae and Lethrinidae, respectively, vs Pleuronectiformes: Bothidae).

Prevalence, intensity and details about fish: 1 fish infected/2 fish examined; 4 nematodes (Fish number: JNC2736). The infected fish was 185 mm in fork length and 68 g in weight.
Deposition of type specimens: Helminthological Collection, Institute of Parasitology, Biology Centre of the Czech Academy of Sciences, České Budějovice, Czech Republic. ... Four small inner papillae present near margin of oral aperture, each accompanied by distinct proximal pore; pair of small lateral amphids present (Figs. 5D, 6A and 6B). Buccal capsule orange, thick-walled, longer than wide, with simple, well-developed basal ring. Maximum width/length ratio of buccal capsule 1:1.25-1.27. Inner surface of capsule provided with 11-12 spiral ridges in lateral view, of which three incomplete (Figs. 5B, 5C and 5E). Muscular oesophagus shorter than glandular oesophagus; both parts of oesophagus somewhat expanded near their posterior ends (Fig. 5A). Intestine brown, narrow. Deirids small, simple, with rounded end, situated just anterior to level of nerve ring (Figs. 5A, 5B, 5E, 5K and 6G). Excretory pore located at level of junction of both parts of oesophagus (Fig. 5A).
However, all these species, except for P. variolae, have deirids situated near the mid-point between the base of the buccal capsule and the nerve ring (vs deirids situated just anterior to the nerve ring). Based on this feature, the new species resembles only P. variolae, in which, however, the deirids are located slightly posterior (vs anterior) to the level of the nerve ring. Although P. variolae and P. hexophtalmatis n. sp. have the same number (11-12) of spiral ridges in the buccal capsule and the body lengths of their gravid (larvigerous) females are identical (approximately 24 mm), the new species differs from P. variolae in the length ratio of the muscular and glandular portions of the oesophagus (1:1.5-1.6 in males and 1:1.7 in the gravid female vs 1:1.1-1.3 in males and 1:1.3 in the gravid female), the location of the excretory pore at the level of the muscular-glandular oesophageal junction (vs somewhat posterior to this junction), and in that the vagina of the gravid female is directed anteriorly (vs posteriorly) from the vulva. Whereas the tail of both gravid and subgravid females in the new species is widely rounded (Figs. 5H and 5I), that of the gravid female of P. variolae is somewhat more conical. Moreover, the two species differ in the family of their fish hosts (Pinguipedidae vs Serranidae).
Moravec et al. [27] reported Procamallanus (Spirocamallanus) sp. 1 from P. hexophtalma in New Caledonia. However, although the general morphology of the only available specimen (a subgravid female) was similar to that of P. hexophtalmatis n. sp., the spiral ridges in its buccal capsule were more numerous (16), the deirids were located approximately at 2/3 of the distance between the base of the buccal capsule and the nerve ring, and its tail was more conical than that of the subgravid female of P. hexophtalmatis, thus resembling P. monotaxis [27]. Therefore, allocation of this specimen to P. hexophtalmatis is uncertain. Rigby and Adamson [39] reported P. monotaxis, originally described from a lethrinid fish from Hawaii [35], from Parapercis millepunctata (Günther) and members of several other fish families in French Polynesia, but this identification needs to be confirmed. In New Caledonia, P. monotaxis was recorded only from Lethrinus spp. [25].
The SEM examination of the first-stage larva of P. hexophtalmatis shows that the tail tip bears six digitiform processes (Figs. 6E and 6F). Similar caudal processes were previously observed in first-stage larvae of Camallanus cotti and C. lacustris [24] and in those of two Procamallanus species from African freshwater fishes [18]. Apparently, these caudal processes help the larva attach by its tail to the bottom after the larvae are released into the water [24].
Procamallanus (Spirocamallanus) synodi n. sp.

Prevalence, intensity and details about fish: 1 fish infected/4 fish examined; 5 nematodes. The infected fish (Fish number: JNC2756) was 120 mm in fork length and 20 g in weight.
Deposition of type specimens: Helminthological Collection, Institute of Parasitology, Biology Centre of the Czech Academy of Sciences, České Budějovice, Czech Republic (male holotype and female allotype, both mounted on SEM stub, N-1203). Female (two ovigerous specimens; allotype; measurements of paratype in parentheses; measurements of one nongravid specimen in brackets): Length of body 12.10 mm
Remarks
The present nematodes belong to the same morphological group of Procamallanus (Spirocamallanus) as the previous two species, P. bothi n. sp. and P. hexophtalmatis n. sp. By the length of the right spicule, they resemble nine very similar species occurring in the Indo-Pacific region, viz. P. anguillae, P. bothi n. sp., P. gobiomori, P. hexophtalmatis n. sp., P. guttatusi, P. istiblenni, P. monotaxis, P. rigbyi and P. variolae (see above). However, in having deirids located at or near the level of the nerve ring, they resemble only P. hexophtalmatis and P. variolae, whereas the deirids in the other species are situated approximately at the midpoint between the buccal capsule and the nerve ring (in P. istiblenni at 2/3 of this distance).
In contrast to the new species, the female tail of P. hexophtalmatis n. sp. is widely rounded (vs conical), deirids are located slightly anterior to the level of the nerve ring (vs deirids at or just posterior to this level), the excretory pore is at the level of the junction of both parts of the oesophagus (vs at a short distance posterior to the anterior end of the glandular oesophagus), the vulva is slightly pre-equatorial (vs equatorial or somewhat postequatorial), the vagina is directed anteriorly (vs posteriorly) from the vulva and the males and females are distinctly longer (male 15.5 mm, gravid female 24.0 mm vs males 9.1-9.4 mm, subgravid females 11.7-12.1 mm).
Procamallanus (S.) variolae differs from the new species in the shape of the female tail (rounded vs conical), in having a distinctly pre-equatorial vulva (vs vulva equatorial or postequatorial), a largely different length ratio of the muscular and glandular parts of the oesophagus (1:1.1-1.3 vs 1:1.3-1.9), and in that the males and females of this species are longer (males 10.5-12.7 mm, gravid female 24.5 mm vs males 9.1-9.4 mm and subgravid females 11.7-12.10 mm); the buccal capsule of P. variolae is larger (84-87 × 60-66 µm in males and 99 × 78 µm in the female vs 66-81 µm in males and 66-72 × 66-72 µm in subgravid females). Moreover, the hosts of both P. hexophtalmatis and P. variolae belong to other fish families than that of the new species (Pinguipedidae and Serranidae, respectively, vs Synodontidae).
Procamallanus (S.) synodi n. sp. is the first species of this genus reported from a fish of the family Synodontidae. Etymology: The specific name of this nematode relates to the genitive form of the generic name of the host.
Description
General: Medium-sized nematode with finely transversely striated cuticle. Mouth aperture oval, surrounded by 12 submedian cephalic papillae arranged in three circles, each formed by four papillae; papillae of outer circle larger; four small inner papillae present near margin of oral aperture, each accompanied by distinct proximal pore; pair of small lateral amphids present (Figs. 9D, 10A, 10B and 10C). Buccal capsule orange, thick-walled, longer than wide, with simple, well-developed basal ring. Maximum width/length ratio of buccal capsule 1:1.07-1.21. Inner surface of capsule provided with 11-12 spiral ridges in lateral view, 4-5 of them incomplete (Figs. 9B, 9C and 10C). Muscular oesophagus shorter than glandular oesophagus; both parts of oesophagus slightly expanded near their posterior ends (Fig. 9A). Intestine brown, narrow. Deirids small, simple, with rounded end, situated slightly anterior to level of nerve ring (Figs. 9B, 9G and 10D). Excretory pore located somewhat posterior to anterior end of glandular oesophagus (Fig. 9A).
Male (one specimen, holotype): Length of body 12.53 mm, maximum width 313. Buccal capsule including basal ring 87 long, its width 72; basal ring 12 long and 57 wide. Maximum width/length ratio of buccal capsule 1:1.21. Spiral ridges 12, of which 5 incomplete. Length of muscular oesophagus 435, maximum width 93; length of glandular oesophagus 748, maximum width 126; length ratio of muscular and glandular oesophagus 1:1.72. Length of entire oesophagus and buccal capsule representing 10% of body length. Deirids, nerve ring and excretory pore 279, 299 and 558, respectively, from anterior extremity. Posterior end of body ventrally bent, provided with wide, vesiculated caudal alae supported by pedunculate papillae; anteriorly, alae interconnected by mound, forming a kind of pseudosucker, and posteriorly reaching to caudal terminal spines (Figs. 9E, 9F, 11A, 11B, 11C and 11D). Preanal papillae: three pairs of subventral pedunculate papillae, of which the second and third pairs are closer to each other than to the first
Remarks
The nematodes from T. lutescens belong to the same morphological group of Procamallanus (Spirocamallanus) as the species P. bothi n. sp., P. hexophtalmatis n. sp. and P. synodi n. sp. (see above). Among the Indo-Pacific species of this group, P. pereirai and P. similis can be differentiated from P. thalassomatis n. sp. by possessing a distinctly longer right spicule (430 µm and 435-492 µm, respectively, vs 330 µm), and P. bothi by having a shorter right spicule (267-270 µm), whereas the length of this spicule in the remaining species (P. anguillae, P. gobiomori, P. guttatusi, P. istiblenni, P. monotaxis, P. rigbyi and P. variolae) is rather similar. However, in having deirids located near the level of the nerve ring, the present nematodes resemble only P. hexophtalmatis, P. synodi and P. variolae, whereas the deirids in the other species are situated approximately midway between the buccal capsule and the nerve ring (in P. istiblenni at 2/3 of this distance).
On the basis of the location of the deirids somewhat anterior to the level of the nerve ring, P. thalassomatis n. sp. resembles P. hexophtalmatis n. sp., whereas the deirids in P. variolae and P. synodi n. sp. are located at the level of the nerve ring or just posterior to this level. However, P. thalassomatis differs from P. hexophtalmatis in the vagina being directed anteriorly (vs posteriorly) from the vulva; although the male body of the former species is shorter than that of the latter (12.5 mm vs 15.5 mm), its buccal capsule is distinctly larger (87 × 72 µm vs 75-84 × 60 µm). The new species can be differentiated from P. variolae mainly by the length ratio of the muscular and glandular parts of the oesophagus (1:1.7-1.8 vs 1:1.1-1.3) and by the percentage of the length of the oesophagus and buccal capsule relative to the entire body length of gravid females (7-8% vs 5%), and from P. synodi mainly by the shape of the female tail (broadly rounded vs conical) and the larger buccal capsule (87 × 72 µm in the male and 96-99 × 90 µm in the gravid female vs 66-81 × 60-66 µm in the male and 66-75 × 66-72 µm in the subgravid female). Moreover, the hosts of P. hexophtalmatis, P. synodi and P. variolae belong to other fish families than that of the new species (Pinguipedidae, Synodontidae and Serranidae, respectively, vs Labridae).
Procamallanus (S.) thalassomatis n. sp. is the first species of this genus reported from a fish of the family Labridae. Deposition of voucher specimen: Muséum National d'Histoire Naturelle, Paris, France (1 specimen, MNHN JNC619A).
Remarks
Based on a single specimen (subgravid female) from S. bilineata off New Caledonia, Moravec et al. [27] described Procamallanus (S.) sp. 3, characterized by 13 spiral ridges in the buccal capsule and a broad tail with a short smooth projection. Two available specimens (also subgravid females) of the present material from the same host species, 14.12 and 18.84 mm long, are morphologically identical with that reported by Moravec et al. [27] and there is no doubt that both these forms belong to the same species. By the shape of the female tail and the absence of terminal spikes, this species is similar to P. (S.) dispar n. sp. and a few other congeners (see above). However, since conspecific males remain unknown, the species identification of these nematodes is impossible.
Remarks
Only a single male specimen of this nematode was available for study. Because some taxonomically important morphological features of this group of nematodes are found in females (e.g., the shape of the female tail), species identification was not possible. No species of Procamallanus has so far been reported from a carangid fish.

(Fig. 13B). Length of muscular oesophagus 490, maximum width 105; length of glandular oesophagus 843, maximum width 138 (Fig. 13A); length ratio of muscular and glandular oesophagus 1:1.18. Length of entire oesophagus and buccal capsule representing 12% of body length. Nerve ring 313 from anterior extremity. Deirids not located. Excretory pore a short distance posterior to posterior end of muscular oesophagus, at 734 from anterior end of body (Fig. 13A). Vulva postequatorial, 8.34 mm from anterior extremity, at 52% of body length. Vulval lips not elevated. Vagina directed posteriorly from vulva. Uterus filled with small number of eggs. Tail broad, rounded, its posterior end abruptly narrowed to form digital protrusion provided with one small terminal cuticular knob; length of entire tail 163; digital protrusion 39 long, 15 wide (Figs. 13C and 13D).
Remarks
Because only a single subgravid female was available, species identification based on morphology is impossible. This species is characterized by the broadly rounded tail with a short projection lacking terminal cuticular spines. In this case, the shark apparently served as a postcyclic host, which had acquired the infection by feeding on the true definitive hosts (fish) of this nematode.
Site of infection: Collected from regurgitated digestive content of snake.
Deposition of voucher specimen: Muséum National d'Histoire Naturelle, Paris, MNHN JNB011.

(Fig. 14C). Length of muscular oesophagus 598, maximum width 105; length of glandular oesophagus 1020, maximum width 192 (Fig. 14A); length ratio of muscular and glandular oesophagus 1:1.71. Length of entire oesophagus and buccal capsule representing 7% of body length. Nerve ring 367 from anterior extremity. Deirids small, situated 243 from anterior extremity, approximately midway between base of buccal capsule and nerve ring (Fig. 14B). Excretory pore located a short distance posterior to posterior end of muscular oesophagus, at 816 from anterior end of body (Fig. 14A). Vulva postequatorial, 11.90 mm from anterior extremity, at 52% of body length. Vulval lips not elevated. Vagina directed posteriorly from vulva. Uterus filled with many eggs. Tail broad, somewhat conical, its posterior end abruptly narrowed to form digit-like protrusion provided with one small terminal cuticular knob; length of entire tail 122; digit-like protrusion 27 long, 15 wide (Figs. 14D and 14E).
Remarks
The presence of a single subgravid female but no male makes species identification of this nematode impossible. As in the previous case, it is apparent that the actual definitive host is a fish and that the sea snake acts only as a postcyclic host, which acquired the infection by feeding on fish. The nematode was collected from the regurgitated digestive content of a snake induced by manipulation; no identifiable prey fish was recovered.
Camallanus carangis Olsen, 1954

[23,27]. The present survey considerably extends the range of hosts of C. carangis in New Caledonia, now including 15 fish species of the perciform families Carangidae, Lutjanidae, Mullidae, Nemipteridae and Serranidae, and a representative of the clupeiform family Chirocentridae. Of them, however, gravid (= larvigerous) females of this nematode have ... [20,21,44]. The present record of a C. carangis fourth-stage larva in the digestive tract of a sea snake indicates that the snake acquired this infection while feeding on fish.
To date, C. carangis is the only representative of Camallanus parasitizing marine fishes in New Caledonian waters. Another congeneric species, C. cotti, a parasite of freshwater fishes, was introduced into New Caledonia [24].
Discussion
All species of Procamallanus (Spirocamallanus) reported in this study belong to the morphological group of nematodes characterized by the presence of wide caudal alae, three pairs of pedunculate preanal papillae and two unequal spicules; as mentioned above, this group mostly includes parasites of marine perciform fishes [37,38]. Many species with these characteristics have been described, often inadequately, from different geographical zones, which makes a thorough comparison of them almost impossible. This situation is further complicated by the fact that some taxonomically important morphological features of these nematodes (e.g., the shape and position of the deirids, the excretory pore, or the number and distribution of postanal papillae) are not easily observed under the light microscope and, consequently, some insufficiently described species are reported from numerous, often unrelated hosts.
According to Petter et al. [38], Rigby and Adamson [39] and Moravec et al. [29], the shape and structure of the female tail of these nematodes appear to be constant within a species of Procamallanus (Spirocamallanus). Most species possess 2-4 terminal spines on the digital caudal projection in the female, whereas these are lacking only in a few species. Nevertheless, the morphology of all these species is rather similar. Although the division of Procamallanus (Spirocamallanus) species according to geographical zones by Andrade-Salas et al. [2] has been used by some authors [22,27,31,39,40] for the comparison of species, recent detailed morphological studies of some of these nematodes, including SEM, indicate a certain degree of host specificity (approximately at the level of the fish family), which should also be considered when evaluating these nematodes. This is supported by the present findings.
The situation is quite different for C. carangis, which, in New Caledonia, has been reported from 15 host species belonging to six fish families. This is not surprising, because a low degree of host specificity is well known for some other species of Camallanus, for example C. cotti, C. lacustris, C. oxycephalus or C. truncatus (Rudolphi, 1814). This is related to the circulation of these parasites in the environment, where different categories of hosts are employed during the development of these nematodes, i.e., different fishes may play the role of paratenic, definitive, paradefinitive or postcyclic hosts. Some aquatic snakes were found to be postcyclic hosts of the European species C. lacustris and C. truncatus [21].
Our records may be the first parasitological records for L. saintgironsi, a recently described species [6]; the species is endemic to New Caledonia [8] and sympatric with another species, L. laticaudata (Linnaeus) [5]. Its diet consists of non-spiny anguilliform fish, with the lipspot moray Gymnothorax chilospilus Bleeker representing about half of the prey [5]. The present records of a Procamallanus (S.) sp. 3 subgravid female and of a C. carangis fourth-stage larva in the sea snake L. saintgironsi indicate that these hosts acquired the infection with camallanids by feeding on the fish hosts (probably morays) of these nematodes. Camallanids may survive in the digestive tract of fish-eating snakes for a long period (up to several months), as observed in C. truncatus overwintering in the European colubrid snake Natrix tessellata (Laurenti) [19]; in that case, the snake served as the postcyclic host. Regarding the above-mentioned camallanids in L. saintgironsi, these snakes served as postcyclic and paratenic hosts, respectively.
"year": 2019,
"sha1": "58260393fc13d298ef01f1d119f2d3b2819bda2b",
"oa_license": "CCBY",
"oa_url": "https://www.parasite-journal.org/articles/parasite/pdf/2019/01/parasite190130.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cf0758aa0d1caa4874999da6ce71eaac4c5a4ff9",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
The Vitalistic Conception of Salvation in Japanese New Religions: An Aspect of Modern Religious Consciousness
This paper aims to clarify the structure of teachings presented by Japanese New Religions through an analysis of their conception of salvation. From the apparently diverse teachings of the New Religions, we have discerned a common structure to their teachings of salvation, one which may be called vitalistic. It regards the cosmos as a living body or the Original Life from which all living things emanate, and advocates the full realization of the growth and efflorescence of man's life through harmony with the Original Life. It includes the following characteristics: an idea of a primary religious Being who bears and nurtures all things, confidence in the inherent goodness of the world, exhortations to thank the deity for its beneficial bestowal of life force, and an optimistic view of a salvation easily attainable in this world. We conclude that the concepts of worldly benefit and salvation are conjoined without contradiction in Japanese New Religions. We also suggest some aspects of and relationships to the sociocultural background which seems to have been conducive to the emergence and penetration of the vitalistic conception of salvation. These include the inheritance of the idea of fertility transmitted from an agricultural society, the conception's foundation in folk religion, a functional division with "Funeral Buddhism," and a stimulus from the feeling of liberation and aggrandizement in modern Japan. Finally, we interpret the contemporary position of the vitalistic conception of salvation as being in a state of crisis, in that the vitalistic teachings of the established New Religions are weakening and undergoing a transformation and new modes of conceiving of salvation are becoming prominent in recently arisen active groups.
An analysis of the teachings presented by the New Religions¹ is important in understanding the modern religious consciousness of the Japanese people. This is so not simply because the number of members of the New Religions is large, but also because their religious outlook can be regarded as a systematic expression of the unarticulated religious consciousness of the Japanese people. This is especially true with respect to the New Religions' conception of salvation, for these views have had a great impact, even superseding Buddhism as the predominant religious influence in this regard. This paper aims to clarify the structure underlying the views of salvation presented by the New Religions.
The teachings and organizational bodies of the various New Religions seem, at first glance, highly diverse. Some have been classified as Buddhist, while others have had Shinto origins attributed to them. Thus the teachings of the New Religions seem to derive from absolutely different traditions. Moreover, between those groups founded before the Meiji Restoration (1868) and those founded after World War II, the differences seem great. These classifications, however, fail to recognize what is common to these groups. We shall argue that there is a common underlying structure to the teachings of the various New Religions. This is mainly because they have arisen from the same religious base, namely what is called folk religion, and because in the process of forming their respective teachings they have, for the most part, been influenced not by the doctrines of established religions but by folk religion or the teachings of other New Religions. Especially when dealing with the structure of the views of salvation, the similarities become remarkable.
Hitherto, the views of salvation in the New Religions have not been studied adequately, primarily because it was usually thought that the religious goal of the New Religions was only to acquire worldly benefit (genzeriyaku), and that this goal had nothing to do with salvation. Though it is true that most New Religions are interested in worldly benefit, the problem is that this aspect has been overemphasized. At the same time the concept of worldly benefit has been counterposed to the concept of salvation. From this point of view, salvation is supposed to be spiritual, transcendent, universalistic, this-world denying, and to require the total commitment of followers. And because the concept of worldly benefit is materialistic, human-centered, particularistic, instrumental, this-world affirming and requires only partial or temporary commitment, the New Religions have been considered to have no concept of salvation. Moreover, even when the salvational aspect of the New Religions has been recognized, the relationship between worldly benefit and salvation has not been considered, with the result that the New Religions have been viewed as containing a curious blend of contradictory religious objectives.²

2. B. Wilson's typology of religious movements may be instructive in this connection. Among seven types of "responses to the world" or conceptions of salvation, he includes two types which are chiefly interested in worldly benefit as well as salvation. He calls them the thaumaturgist and manipulationist types and includes most of the Japanese New Religions in these categories. He thus attempts to relate worldly benefit to salvation, although the results are not very successful. While his presupposition seems to be that salvation is a way to overcome "the world," which is evil, at the same time he admits that the thaumaturgist and manipulationist types do not regard "the world" as evil. Because he does not seem to be aware that his presuppositions about salvation and worldly benefit are contradictory, we are left with the impression that in these two types of religious movements there exists an anomalous combination of two contradictory religious objectives.

This attitude, which tends to use polarizing concepts such as religion vs. magic and value-oriented rationality vs. goal-oriented rationality, discloses the deeply-rooted bias of a modern culture
which has been heavily influenced by Christianity and other historical religions. To understand the concept of salvation in the New Religions, this bias must be removed, and a fresh approach, open to their religious outlook, must be adopted. What is meant by salvation in a particular religion must be considered in the context of its total religious outlook. Thus, with regard to the New Religions, it is important to have a deeper understanding of the overall structure of their teachings. With this, the relationship between worldly benefit and salvation within the New Religions will become clear.
THE VITALISTIC CONCEPTION OF SALVATION
We have suggested that the teachings of the New Religions have an identical structure despite multifarious forms of expression. In order to elucidate this structure we shall investigate what we call the vitalistic conception of salvation, because what integrates the teachings as a whole and characterizes a believer's view of salvation is the concept of life force. In this section we shall analyze and discuss the vitalistic conception of salvation from eight different viewpoints.³
1. The essence of the cosmos. The characteristics of a vitalistic conception of salvation can be observed in the way in which the universe, the world, nature, and the essence of all things are perceived. The cosmos is regarded as a living body or a life force with eternal fertility. Sometimes it is perceived as a deity. In the latter case, the deity is looked on both as the source from which all life emanates and the source which nurtures all life. Thus even when the cosmos is thought to be a deity, there is consistency with the view that "the cosmos is life." In this way, the whole universe is grasped as one living body. And from this stems the notion that all things are harmonious, interdependent, mutually sympathetic, and constantly growing. From the standpoint of each component of the cosmos, especially that of human beings, the universe or the world is seen as the source from which all living things spring. Hence, the universe will also be imaged as a beneficial and gracious entity which gives each individual being eternal and ultimate life. Of course, each of the New Religions employs this cosmology with a different degree of emphasis. While groups like Kurozumikyō, Seichō no Ie, Risshō Kōsei-kai, PL Kyōdan, Sekai Kyūseikyō, and Sōka Gakkai place a great deal of emphasis on these views, groups such as Reiyūkai attach little importance to the notion that "the cosmos is life." One reason for the lesser emphasis in Reiyūkai is that the idea of harmonious interdependency of life is regarded particularistically as pertaining to relations between an individual and his ancestors. Therefore, a wider view which encompasses the whole cosmos and inquires about the ultimate source of life is weak. In the case of those groups which await the millennium, for example Ōmoto and Tenshō Kōtai Jingūkyō in their early periods, the dark side of the world is stressed, and consequently the concept of vitalism is muted. The concept survives, however, in their portrayal of the coming world. Finally, in other groups like Tenrikyō and Konkōkyō where the concept of the primary religious Being is personified, the vitalistic cosmology tends to be absorbed into the image of the ultimate deity and becomes less prominent. In these cases, the personified deity is identified as the source and the nurturer of life and the focus of attention shifts away from an impersonal cosmos.
2. Primary religious Being. Each of the New Religions believes in a universalistic primary religious Being which is placed at the core of their teachings, although the universalistic tendency is weak in Reiyūkai. Despite the wide variety of names by which the various primary religious Beings are known, their characteristics and functions are remarkably similar. In the first place, they are considered to be the Original Life which bears and nurtures all living things. Moreover, they are perceived as "the Great Life" (daiseimei) of the universe, to which all living things are returned and unified. Concerning the creation of all things by a Being, the New Religions do not consider it to be a process wherein things are manufactured from materials, but see it, rather, as the consequence either of spontaneous germination from the Being or reproduction through sexual means. The Tenrikyō creation myth, Doroumi kōki, is the best example, as it is heavily colored by the image of fertility. The religious Being is thus thought of as a motherly being who affectionately nurtures all things, rather than controlling and ruling over them.
The Beings can be regarded as monotheistic and transcendent, existing outside all things since they produce them. In their nurturing function, however, they should also be recognized as pantheistic and immanent, omnipresent in all things and therefore providing an internal and undying life force. Finally, if we apprehend the religious Being as an ultimate savior, its function seems to be that of renewing the life force and enhancing it. In this sense, the salvation offered by the New Religions can be regarded as something that gives people the "benefit of life itself" (seimei riyaku) rather than "worldly benefit" (genze riyaku).
3. Human nature. Human beings, being a part of the universe, are naturally thought to have an existence deriving from and nurtured by the Original Life. The New Religions regard human beings either as separate bodies, children, and vessels, or as branch streams of the Original Life. In all cases the human being is considered an individualized manifestation of the Original Life or an existence which has been endowed with this life force. Thus both the idea that human nature is divine, unpolluted, pure, and perfect, and the idea that human beings can return to or unite with the Original Life arise from the recognition of human beings partaking in or comprising a part of the divine life. The continuity between deity and man precludes the idea that human beings are inherently sinful or diseased. Moreover, because all men derive from a common Original Life, the equality of man is suggested in that they can relate to each other harmoniously, mutually united by their participation in the divine life. This idea of equality, in turn, is one of the grounds for a universalistic world view. Finally, it is stressed that human beings are "kept alive" (ikasarete iru) and nurtured by the gracious and infinite benefit of the Original Life and that they cannot exist independent of it. For this reason, the duty of man is to thank the Original Life and also to conform to the deity's will, which leads to the growth and efflorescence of human life.
4. Life and death. In sharp contrast to the this-worldly pessimism of the other-worldly oriented or emancipatory conceptions of salvation, the vitalistic conception of salvation promises salvation in the here and now, amidst the everyday activities of man. Thus it is the growth and the efflorescence of life in this world which is the focus of its attention, not something to be attained in a world after death. The result is that the fruits of salvation are happiness in everyday life, longevity, the prosperity of offspring, and the realization of "heaven on earth" (chijō tengoku).
The view of salvation notwithstanding, there are a variety of views concerning man's destiny after death. While Konkōkyō shows almost no concern with this problem because death has nothing to do with the daily life of people in this world, Kurozumikyō denies death and insists that life is unending (ikidōshi). Similarly Seichō no Ie and PL Kyōdan profess the existence of an immortal soul which survives in unification with "the Great Life" after the death of the carnal body. Finally, Tenrikyō, Reiyūkai, Sōka Gakkai, and Tenshō Kōtai Jingūkyō all believe in the rebirth of human beings. From all the cases cited above, it is clear that far less value is placed on one's destiny after death than on life in this world. Moreover, it should be noted that for those groups that adhere to the idea of rebirth into this world, rebirth is tantamount to salvation. In this sense the views of the New Religions stand in marked contrast to the Buddhist conception of rebirth, which visualizes it as the prolongation of suffering in the present life. In addition, those groups which emphasize the holding of services for ancestors do so not in order to assure themselves and their ancestors of bliss in another world, but to enhance life in this world, "to make the rope of life thicker." Similarly their views about change differ from those of both Buddhism and Christianity which stress the "impermanence" or "temporality" of earthly existence, for the New Religions optimistically accept change as the process through which the energy of the Original Life constantly reasserts itself toward unlimited growth.
5. Evil and sin. As in all religions, one of the most important concerns of the New Religions involves the dichotomization of two contrasting states of affairs. In the New Religions the polarization is formulated around the vitalistic idea. While the good or positive state is that in which the cosmos is vital and harmonious, the bad or negative state is one in which the cosmos and all things in it lose their vividness, where the power of growth weakens or the Original Life's potential for germination, growth, and efflorescence is stultified. The expressions which describe the latter state, "poverty, sickness, and discord" (hin-byō-sō), "gloom" (inki), "drooping life" (shibonda seimei), reflect the vitalistic concern.
The question of how the negative state comes to appear in a perfect world, originally filled with life force, can be explained in terms of man's relationship to the Original Life. If man forgets that he is "kept alive" through the benefit of the Original Life, the relation between them will be destroyed. He will be severed from the life force which sustains him, and consequently, he will be unable to realize the full potential of his life. In addition, there is a secondary explanation which may account for the unhappiness which appears to lie beyond the control of followers. This is the view that our fates are determined by acts committed in past lives, both our own and others'. (The Buddhist term is innen, or "karmic connection.") This view rests on the presupposition that there is an inter- and intra-generational interdependency among all lives. This explanation thus derives from the primary explanation in that deviation from a proper relationship with Original Life in the past has an unavoidable impact on the present. This does not lead, however, to a fatalistic pessimism. Rather, the New Religions insist on the possibility of making a better life (shukumei tenkan) through efforts in this world. Underlying the vitalistic view of salvation is an optimistic confidence in the possibility of restoring the originally unsullied vital state; and from this emanates the great energy needed to struggle against difficulties in the world as it now is.
6. Means of salvation. If the evil state is one in which the growth and efflorescence of the life force is obstructed by disharmony with Original Life, liberation from this situation can be attained by harmonizing and restoring a good relationship with it. To attain this, men must repent and recover a pure heart (kokoro no irekae) by ridding themselves of selfishness and renewing their feeling of gratitude for the benefit given by the Original Life. The specific norms and practices differ from group to group. Generally speaking, they all emphasize gratitude, sincerity, and honesty as the moral bases for daily life, but do not require rigorous physical self-denial.
Since vitalism affirms a spontaneous natural lifestyle, it is incompatible with extreme asceticism. In addition to an emphasis on the maintenance of a general moral attitude in daily life, the New Religions have devised simple religious practices as direct and instantaneous means for the restoration of vitality. Through these practices, followers can, at least momentarily, contact, communicate with, and finally become one with the Original Life.
7. The saved state. To return to the Original Life, to recover resonance with it, to realize fertility and vitality in one's own life, and to live joyfully in unity with and sharing in the happiness of the deity: these are the dominant and most common views of how the saved state can be attained. Although salvation is this-worldly and can be experienced bodily in the here and now, it can only be attained in the absolute realization of life force. Seemingly partial worldly benefits are thought to be concrete manifestations of the efflorescence of the life force and are therefore inseparable from the total conception of salvation. It should be added that the conception of the saved state also varies in that some groups (for example, Tenrikyō, Ōmoto, etc.) emphasize the collective and joint realization of salvation, while others (such as Konkōkyō, Seichō no Ie, PL Kyōdan, etc.) are more concerned with the individual achievement of salvation. The former type reveals a heavy influence from village life, while the latter shows a greater influence from urban life.
8. Founders. Many of the founders of the Japanese New Religions are regarded not as mere instructors of new religious truths or as religious leaders, but as "Living Gods." Each "Living God" claims that through some special experience, such as possession or mystical unification, the primary religious Being, the Original Life, has entered their bodies and is residing there, and that he or she is the only person who has been given the mission and the power to reveal the divine will for universal salvation. They act as if they were the ultimate media or outlets for the welling forth of Original Life, while followers regard them as the embodiment of this life and also the model for and proof of the saved state. Moreover, they are sometimes perceived as saviors, with little distinction made between them and the primary religious Being.4

As we shall discuss later, the conception of salvation outlined above is basically the product of an animistic mind. This does not mean, however, that the Japanese New Religions are at a primitive stage of religious consciousness or that they have no conception of salvation. An apparently naive vitalism is the foundation upon which is based the idea of a primary religious Being who is the savior, a general and inclusive explanation for the cause of suffering, and a fairly clear view of the saved state. Thus the New Religions are salvational and can elicit the total commitment of their followers. In this elaboration of the inextricable relationship between vitalism and salvation lies the uniqueness of the New Religions: a unique religiosity which could only have been achieved by the common people.
Thus far, we have yet to define concisely what is meant by the vitalistic conception of salvation. Toward this end some comparisons with the concepts of salvation held by other religions may be helpful. In this context we shall analyze the other-worldly oriented soteriology of Christianity, on the one hand, and the emancipatory soteriology of Buddhism on the other. Firstly, because the New Religions profess that nature is blessed with the benefit of Original Life, nature is not thought to be spoiled by earthly imperfections; nor is it thought to be the stage for the endless repetition of illusory events. What man must do, then, is to be in resonance and harmony with nature. He need neither manipulate and control it nor detach himself from it. Secondly, "human nature," which includes the fulfilment of desires and pleasures, is affirmed as being a consequence of the benefit of the Original Life. It is not suppressed as "corporal pleasure," worldly passions (bonnō), or the root of sin and suffering. Thirdly, as for the ultimate religious Being, it is immanent in all living things and bears and nurtures them. It is, therefore, neither a transcendent ruler who deliberately creates and controls all things nor an abstract law which lies beyond the illusory events of this world. Finally, the ideal and the tarnished are not conceived in terms of a heavenly world ruled by spiritual principles as opposed to a mundane world ruled by physical principles. Nor is it expressed in terms of the state of nirvana which represents freedom from the suffering and distress of this world. Rather, what are polarized in the New Religions are the "vital world" wherein the life force develops endlessly and flourishes harmoniously, and the "gloomy world" which, obstructing the life force, has lost its vitality and harmony. Thus the New Religions believe that salvation can be realized in this world, not somewhere beyond it. And the proof of salvation lies in the increased activity of the life force. In addition, they are optimistic about the prospect of realizing salvation in this world since they do not regard evil and suffering as inherent in it.

4. In some groups, such as Sōka Gakkai, which claim Buddhist origins, the notion that the leader is a "Living God" is not clearly stated. In practice, however, the attitude of followers in such groups toward their "Presidents" does not differ greatly from that of followers in other groups toward their "Living Gods."
SOCIOCULTURAL BACKGROUND
Why was the vitalistic conception of salvation developed and accepted by so large a number of people in modern Japan? Space limitations prevent a full consideration of this problem, but we shall inquire into the sociocultural background of the New Religions, as this may provide some hints.
1. The cultural tradition of an agricultural society. The religious consciousness of the Japanese people has been nursed in the cultural tradition of an agricultural society which even imported religions, like Buddhism, could not eliminate. The core of this tradition is the idea of fertility. Moreover, the belief in gods and spirits thought to bring fertility has been widespread in folk belief. In many villages seasonal rituals are held to supplicate fertility, and sometimes these rites are connected with the ideas of sex and reproduction. At the level of folk belief, however, the idea of a life force as the bringer of fertility and propagation has been vague and fragmented. The ideas of fertility and propagation were not associated with salvation, except in the minds of some Buddhist or Shinto scholars who were isolated from the religious consciousness of the people. Therefore, vitalism remained at the stage of an unconscious receptivity. The New Religions, most of which were founded by farmers or ex-farmers, took up this receptivity and elaborated it to arrive at a systematically organized set of teachings on salvation.
2. Foundations in folk religion. The earlier New Religions, which for the first time developed the vitalistic conception of salvation, were not directly influenced by the traditional or imported established religions. The founders rarely learned these religions' doctrines or joined their organizational activities, and even if they did, this had little to do with their later religious activities. It was in the activities of folk religion led by semiprofessional and shamanistic religious practitioners that the founders acquired their religious experiences and deepened their faith. Most of the teachings and organizational devices of the New Religions can be regarded as developed forms of the loosely organized symbolism and the associations (kō) of folk religion.5 Although this argument cannot be applied as strictly to groups founded in more recent times, it can nevertheless be said that they derive most of their teachings and the forms of their activities from folk religion or from other New Religions.6 In addition, it should be noted that followers are usually converts from folk religion or other New Religions rather than from the traditional or imported established religions. Thus the established religions could only influence the New Religions indirectly, through the filter of folk religion, and this fact can at least account for the lack of the transcendent or this-world-denying aspect that remains characteristic of the traditional established religions.
3. Division of functions with "Funeral Buddhism." The penetration of Buddhism to the level of the common people (15th-17th centuries) was accomplished through its association with rituals for the dead. Buddhism was more a religion which could assure one of bliss after death (ōjō) and a means by which ancestors could be aided in finding permanent peace in the next world (ekō) than a religion of ethical teachings for the present world. This tendency became more and more pronounced over time until Buddhism finally came to be called "Funeral Buddhism." Because the New Religions were not reform movements against Buddhism, but developments emerging from a folk religion which showed no concern for rituals for the dead, they were from the beginning interested neither in those rituals nor in salvation after death. Most of their followers, however, continued to perform the traditional Buddhist rituals for the dead. One example is that of a second or third son who moves from his village to a large city to obtain work. He might return home during the Bon festival and practice enthusiastically the Buddhist rituals for ancestors. But on going back to the city, he would feel no contradiction at being a follower of a New Religion whose only concern is with the problem of life, not death. Thus a kind of functional division has existed between the New Religions and Buddhism, making it possible for the New Religions to leave the problem of salvation after death to Buddhism. This has allowed the New Religions to devote themselves to the problem of life in this world and to develop the vitalistic conception of salvation.

5. In this paper we distinguish between "folk belief" and "folk religion." "Folk beliefs" are fragmentary and limited customary beliefs and rituals which work within limited narrow communities. "Folk religion" is a loosely organized system of beliefs and rituals directed by shamanistic, semiprofessional practitioners (yamabushi, sendatsu, oshi) whose organizations, though loosely held together, extend beyond the confines of the small community. Of course we see different transitional forms between these two types.

6. Groups usually regarded as belonging to the Nichiren Buddhist tradition, such as Sōka Gakkai and Risshō Kōsei-kai, may be considered as based in traditional established religion. Even in these groups, however, most teachings and organizational devices derive from folk religion or other New Religions. For example, Risshō Kōsei-kai's founder and co-founder started their religious careers as followers of a folk religion cult and Tenrikyō.
4. Liberation from traditional authority. In Japanese feudalism the social order was maintained through an authoritarian system of political control legitimated by the Confucian emphasis on observing a proper relationship between those of superordinate and subordinate status. In the process of modernization, however, this authoritarian system was breached, and people came to feel it as unduly restrictive. As a result, they struggled with the system and were liberated. Through this struggle, the teachings and activities of the New Religions were devised. The teachings affirmed happiness in this world and the desire for it, and insisted on liberating the people from taboos and conventions thought to be barriers to the realization of happiness. Moreover, the New Religions advocated the equality of all people, affirmed the importance of women and youth, and offered them a role in the groups' activities. Also, ample opportunities for sensory pleasure in ritual, worship, and entertainment were allowed instead of insisting on ascetic self-denial. Finally they encouraged people to speak freely and to voice their own opinions in public. These ideas and activities indicate that the New Religions played an active role in the people's struggle with traditional authority and suggest that the vitalistic conception of salvation is an expression of the feeling that one has been freed and can achieve happiness in this world.
5. The sense of being homeless and a compensatory sense of aggrandizement. Industrialization deprived many people of their sense of bondedness to mother earth and to their rural communities as they streamed into the cities from their native villages. They lost the opportunity to share the joy of abundant harvest and to experience close relationships with fellow villagers, being forced, instead, to live poor and isolated lives as lower class urban dwellers. The New Religions provided new hopes and communities at that time; and a sense of rapidly increasing fortunes, aggrandizement, began to play an important and unique role in modern society. Though this sense of aggrandizement derived mainly from a general increase in wealth and a radical growth in the urban population, it had already been sensed by the founders of the earlier New Religions who had observed mass pilgrimages to such sacred places as the Ise Shrine toward the end of the Tokugawa period (mid-nineteenth century). The New Religions stimulated this sense of aggrandizement, and many followers found encouragement enough to improve their living conditions. In the earliest stage of a movement the awareness of a rapid increase in the number of fellow followers may have enhanced the feeling, while at a later stage this sense was probably stimulated by the expansion of the organization. Organizational growth could be perceived through such indicators as the construction of a gigantic headquarters building, the holding of mass assemblies, and the distribution of colorful magazines. In addition, opportunities for raising one's status within the organization may have contributed to the sense of rapidly increasing fortunes. Several aspects of the vitalistic conception of salvation, especially the view of endlessly increasing life force, can be regarded as expressing the feeling of aggrandizement.
The five different features of the sociocultural background discussed above have been persistent factors during the one hundred years between the mid-19th century and the end of the period of "rapid economic growth" sometime around 1970. However, the period from around the second to the fifth decades of this century was one in which economic and political crises dampened the optimistic attitude described in 4 and 5. In this period some groups, such as Ōmoto and Tenshō Kōtai Jingūkyō, departed from a strictly vitalistic conception of salvation. To our regret, lack of space precludes a discussion of this problem.
CONTEMPORARY CRISIS
The vitalistic conception of salvation still prevails among active members of the Japanese New Religions. Nevertheless, the New Religions are undergoing a considerable transformation, and this is especially true in their higher echelons. Moreover, in recent years it has become clear that the views of salvation held by some New Religions contain elements that do not accord with the vitalistic conception of salvation. These observations suggest the coming of a crisis in the popularity of the vitalistic conception of salvation. In the following, we shall give a general description of and also suggest some possible causes for the critical situation at hand.
1. Transformations of the vitalistic conception of salvation.
Partly because of institutionalization brought on by large organizations and partly because of changing social circumstances, the vitalistic conception of salvation has tended to weaken and become transformed. This tendency can be broken down into four types: Culturism, Expressionism, Moralism, and Social Reformism. What is meant by Culturism is the tendency for New Religions to attempt to make their vitalistic conception of salvation more sophisticated by introducing up-to-date scientific knowledge or the theories of famous and widely recognized scholars. While this attempt adds scientific or philosophical plausibility to the teachings of the New Religions, it introduces abstract and theoretical arguments which have little to do with the religious needs and experiences of followers. The result has been a general weakening in the attractiveness of teachings such as those on how to achieve salvation. For example, Sōka Gakkai has with great effort gathered relevant information from the theory of elementary particles, space science, biochemistry, and psychoanalysis in order to lay the foundations of a scientific "theory of life." In so doing, they have given an air of refinement and authority to their teachings. Their original claim, however, that worldly benefit can be gained through mystical unification with the mandala, "the machine for making happiness" (kōfuku seizōki), has receded into the background.
Expressionism is the tendency to place great emphasis on the expression of feelings through arts, music, dancing, and other amusements in group activities. Although this tendency has been used as an expedient to attract youth and integrate the group, it weakens the passion for salvation and diminishes the religiosity of the group as a whole. Reiyūkai's "Inner Trip" campaign, which tries to respond to the demand of youth for group recreation, and Sōka Gakkai's cultural activities, such as its "Cultural Festival" and "Chorus Festival," can be cited as examples of this tendency toward Expressionism.
Moralism is the tendency to emphasize daily ethics and reflective self-criticism. Generally speaking, the vitalistic conception of salvation affirms the existence of human beings and requires neither rigorous asceticism nor thoroughgoing self-reflection; but present trends emphasizing self-denial and spiritual rigorism have become prominent. There are, of course, great differences in the degree of severity from group to group, from the quite mild Moralism of Risshō Kōsei-kai to the strictness of Tenshō Kōtai Jingūkyō.
Social Reformism is the tendency to encourage followers to participate in activities for alleged social reform and social services (voluntary activities). Engaging in these activities, people have far less opportunity to experience a feeling of unity with the Original Life than through activities, usually the search for personal worldly benefit, directly related to salvation. Thus Sōka Gakkai, in accordance with the "mass welfare" policy put forward by its affiliated political party, the Kōmeitō, has strengthened its advocacy of social reform measures and performed various social services in addition to launching a moderate peace movement. To cite another example, Risshō Kōsei-kai has given seminars on volunteer activities, launched its "Better Society Movement" for the purposes of assisting and cooperating with local communities, and organized the "World Conference on Religion and Peace."
2. Appearance of new conceptions of salvation.
New conceptions of salvation are of two types, one reflecting an Eschatological Fundamentalism and the other Counterculturism.
Eschatological Fundamentalism holds a pessimistic view of the world and emphasizes a dualistic confrontation between right and wrong. The anticipation of "the end of the world" and the millennium to come stem from its rejection of secular society as corrupted. The world and human beings are seen as fundamentally evil, and it is thought that only serious reflection and inner faith can lead to salvation. Myōshinkō, a group vehemently antagonistic to Sōka Gakkai in spite of adherence to the same Nichiren Shōshū tradition, and Christian sects such as the Unification Church and the Watchtower movement, can be classified under this heading. It should be noted that recently these groups have been growing rapidly through active, albeit small-scale, propagation.
Counterculturism, developed in large part by youth, is a reaction to the dominant rationalistic culture of modern society. It attaches special importance either to sensitivity and nature or to supernaturalness and mystery. The emphasis on sensitivity and nature can be seen in the espousal of religious communes active after the setback of the student movement in the late 1960s, while the tendency toward supernaturalness and mystery can be seen in the activities of such groups as Shinreikyō, GLA, and Sekai Mahikari Bunmei Kyōdan. These groups regard faith healing and miracles as simply the manifestation of a force struggling against the stifling rationality of modern society, not as benefits from the Original Life. To be sure, the influence of vitalism is recognizable in both trends of Counterculturism, but the main motif is denial of rationalism and modern culture, a denial inconsistent with the vitalistic conception of salvation with its relatively optimistic view of modernity.
3. Sociocultural background of the crisis.
We should point out, in brief, the sociocultural factors that have brought about and promoted the above-mentioned crisis in the vitalistic conception of salvation. This will be related to what is held to be the last phase of modernization in Japan, the stage of "rapid economic growth" that occurred in the 1960s. Firstly, there was a profound transformation from a mentality based upon the cultural background of an agricultural society to one which reflected the rapid and wide urbanization produced by "rapid economic growth." An important outcome of urbanization was the marked decline in the appreciation of nature's fertility. Secondly, there was an improvement in the standard of living, medical treatment, and social welfare on the one hand, and a more thoroughgoing penetration of the mass media and a rise in the level of education on the other. The first set of changes has made naive belief in the deities of folk religion less plausible. Of the four previously mentioned tendencies indicating the weakening of vitalism, Culturism can be regarded as an attempt to restore plausibility to teachings, while Expressionism, Moralism, and Social Reformism may be understood as attempts to compensate for the general loss of interest in seeking worldly benefit through religion. Thirdly, the costs of "rapid economic growth" have been the pollution and destruction of nature and an increase in the feeling of alienation which resulted from the development of a bureaucratic society. Moreover, there was a gradual decrease in chances for upward mobility with the end of this period of growth. Observation of these phenomena has engendered the feeling that the progress and prosperity of society have come to a standstill. This dim view of society has counteracted the optimistic view of increasing aggrandizement and of liberation from authority, which were conducive to the spread of the vitalistic conception of salvation.
Eschatological Fundamentalism and Counterculturism, which reject rationalism and modern culture, can be regarded as responses to such a critical view. | 2018-12-21T02:25:39.310Z | 1979-05-01T00:00:00.000 | {
"year": 1979,
"sha1": "6f9ab209e3d88363a0c2260c80a29b9082a33246",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.18874/jjrs.6.1-2.1979.139-161",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "8895bf2f6148eb675ef93c0477768cb265005769",
"s2fieldsofstudy": [
"Philosophy"
],
"extfieldsofstudy": [
"Philosophy"
]
} |
254614415 | pes2o/s2orc | v3-fos-license | Small ankyrin 1 (sANK1) promotes docetaxel resistance in castration‐resistant prostate cancer cells by enhancing oxidative phosphorylation
Docetaxel (DTX) plays an important role in treating advanced prostate cancer (PCa). However, nearly all patients receiving DTX therapy ultimately progress to DTX resistance. How to address DTX resistance in PCa remains a key challenge for all urologists. Small ankyrin 1 (sAnk1) is an integral membrane protein in the endoplasmic reticulum. In the present study, we identified that sAnk1 is upregulated in PCa tissues and is positively associated with DTX therapy resistance in PCa. Further investigation demonstrated that overexpression of sAnk1 can significantly increase the DTX‐resistant ability of PCa cells in vitro and in vivo. In addition, overexpression of sAnk1 could enhance oxidative phosphorylation (OXPHOS) levels in PCa cells, which was consistent with the higher OXPHOS levels observed in DTX‐resistant PCa cells as compared to DTX‐sensitive PCa cells. sAnk1 was also found to interact with polypyrimidine‐tract‐binding protein (PTBP1), an alternative splicing factor, and suppressed PTBP1‐mediated alternative splicing of the pyruvate kinase gene (PKM). Thus, overexpression of sAnk1 decreased the ratio of PKM2/PKM1, enhanced the OXPHOS level, and ultimately promoted the resistance of PCa cells to DTX. In summary, our data suggest that sAnk1 enhances DTX resistance in PCa cells.
Metastasis remains a lethal feature of prostate cancer (PCa), which has the second highest cancer mortality among men in developed countries [1]. In recent years, the incidence of PCa in China has strikingly increased [2]. Patients diagnosed with early-stage PCa have a good prognosis, with a 5-year survival of nearly 100%, whereas only 30% of patients with metastatic disease achieve 5-year survival [3]. Despite the initial effectiveness of androgen-deprivation therapy (ADT) in treating advanced and metastatic PCa, nearly all patients finally progress to castration-resistant PCa (CRPC) [4]. Chemotherapy is the first-line treatment for CRPC, and patients can benefit from docetaxel (DTX) treatment. However, almost all patients receiving DTX treatment ultimately become refractory because of DTX resistance [5]. Therefore, investigating the molecular mechanisms underlying DTX resistance has great clinical significance and may yield novel strategies to treat DTX-resistant PCa.
To date, many studies have focused on the molecular mechanisms of DTX resistance in advanced PCa.
Abnormal overexpression of multidrug resistance genes in tumor cells represents one of the most extensively investigated mechanisms of chemotherapy resistance [6,7]. Multidrug resistance genes, such as ABCB1 and ABCC4, have been reported to be upregulated and may contribute to DTX resistance in PCa [7,8]. In addition, some studies indicate that β-tubulin isotypes may affect the response of cancer cells to microtubule-targeting drugs [9]. For example, βIII-tubulin has been reported to be elevated in DTX-resistant cells and showed an association with the response of PCa to DTX-based chemotherapy [10]. Although enormous progress has been achieved in the study of docetaxel-resistant PCa, few agents can be used in current clinical settings because of severe side effects [11]. Recent studies suggest that metabolomic changes unique to drug-resistant cancer cells may hold the key to reversing drug resistance in cancer [12]; however, further studies are required to probe the metabolic characteristics and reprogramming mechanisms of DTX-resistant PCa cells [12].
As an integral membrane protein of the ankyrin family, ankyrin 1 (ANK1) functions as a protein adaptor in the organization of specialized membrane domains [13]. In recent studies, ANK1 has been reported to regulate glucose uptake in skeletal muscle, and altered ANK1 expression may induce insulin resistance [14,15]. Small ankyrin 1 (sANK1), a small (~17 kDa) ankyrin isoform, is a short alternative splice variant of the ANK1 gene [16]. Our previous study found that miR-486-5p, which shares the same promoter with sANK1 [17], was upregulated in PCa and suppressed multiple tumor suppressor pathways, playing a critical role in PCa progression [18]. In the present study, we found that sANK1 was markedly upregulated in PCa tissues compared to adjacent normal tissues and was further upregulated in the PCa tissues of DTX-resistant patients relative to DTX-sensitive patients. Moreover, overexpression of sANK1 significantly increased DTX resistance and simultaneously enhanced oxidative phosphorylation (OXPHOS) levels in PCa cells. Therefore, this study focuses on the function and mechanism of sANK1 during the progression of DTX-resistant PCa.
Tissue samples and tissue microarray
The Ethics Committee of Drum Tower Hospital, Medical School of Nanjing University, approved this study, which was consistent with the principles of the Declaration of Helsinki (approval code: 2018-165-01). Written informed consent was obtained from each patient. Formalin-fixed prostate cancer tissues (N = 36) for immunohistochemistry, as well as frozen prostate cancer and adjacent normal tissues for quantitative real-time PCR (N = 20), were collected from patients undergoing prostatectomy from 2019 to 2020. All these patients underwent prostatectomy prior to receiving any adjunctive therapy. The frozen tissues were made into frozen sections, and their histopathological features were confirmed by an experienced pathologist. The DTX-sensitive tissues were obtained from patients undergoing a first biopsy before DTX treatment, and the corresponding DTX-resistant tissues were obtained from the same patients at a second biopsy after they had become resistant to DTX chemotherapy (n = 4). DTX chemoresistance was defined as progression in patients receiving ADT and docetaxel chemotherapy whose serum testosterone had reached a castration level (less than 50 ng·dL−1 or 1.7 nmol·L−1), with either of the following forms of progression: (a) biochemical progression: three consecutive PSA increases at intervals of more than 1 week, two of them 50% above the nadir PSA level, with a PSA level of more than 2 ng·mL−1; or (b) radiographic progression: more than two new bone metastases on bone scans, or the appearance of enlarged soft-tissue lesions evaluated by the Response Evaluation Criteria in Solid Tumors (RECIST). The clinical characteristics of the enrolled patients are listed in Tables 1 and 2. A tissue microarray (TMA) was constructed with one tissue core (2 mm in diameter) from a representative area of each sample identified by two experienced pathologists. Signed consent was obtained from each patient.
Cell lines and cell culture
Human PCa cell lines (Du145 and PC-3) were purchased from the National Collection of Authenticated Cell Cultures (Shanghai). Cells were cultured in RPMI 1640 medium with 10% fetal bovine serum (FBS), 100 U·mL−1 penicillin, and 100 μg·mL−1 streptomycin. All cells were maintained in a humidified atmosphere with 5% CO2 at 37 °C.
DTX-resistant cell generation
Du145 and PC-3 cells were cultured in medium containing 5 nM DTX for 24 h. The culture medium was then replaced with normal RPMI 1640 medium. When the cells resumed proliferation, 5 nM DTX was added again. Once the cells could proliferate in medium containing 5 nM DTX, they were treated with a higher concentration of DTX. DTX-resistant cells were defined as DTX-treated cells with a half-maximal inhibitory concentration (IC50) at least 10 times higher than that of the parental cells.
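To make the 10× IC50 criterion above concrete, the sketch below fits a four-parameter logistic curve to viability data and computes a resistance index (resistant-line IC50 divided by parental IC50). This is an illustrative analysis under our own assumptions, not the authors' code; the doses, viability values, and function names are all invented.

```python
# Illustrative sketch (assumed analysis, not the authors' pipeline):
# estimate IC50 from dose-response viability data with a four-parameter
# logistic fit, then compute the resistance index (resistant if >= 10x).
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ic50, hill):
    # Decreasing sigmoid: viability falls from `top` to `bottom` with dose.
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

def fit_ic50(doses_nM, viability):
    p0 = [0.0, 1.0, np.median(doses_nM), 1.0]  # rough starting guesses
    popt, _ = curve_fit(four_pl, doses_nM, viability, p0=p0, maxfev=10000)
    return popt[2]  # fitted IC50 in nM

doses = np.array([0.5, 1, 2, 5, 10, 20, 50, 100])  # nM DTX (example values)
parental = np.array([0.98, 0.95, 0.85, 0.55, 0.30, 0.15, 0.08, 0.05])
resistant = np.array([1.00, 0.99, 0.98, 0.96, 0.92, 0.85, 0.65, 0.45])

ri = fit_ic50(doses, resistant) / fit_ic50(doses, parental)
print(f"resistance index = {ri:.1f} (classified resistant if >= 10)")
```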
Immunohistochemistry (IHC) and immunofluorescence (IF)
Tissue microarrays and biopsy tissue specimens were used for IHC. Immunostaining was assessed independently by a pathologist in a blinded manner. The staining intensity was scored as 0 (negative), 1 (weak), 2 (intermediary), or 3 (strong), while the staining extent was scored as 0 (0%), 1 (1-25%), 2 (26-50%), 3 (51-75%), or 4 (76-100%); the two scores were combined to yield a composite score on a scale of 0-12. The staining intensity of the biopsy tissues was quantified with IMAGE-PRO PLUS 6.0 (Media Cybernetics, Inc., Bethesda, MD, USA). Immunofluorescence (IF) was performed to detect protein distribution in cells. Cells were cultured in a 48-well plate and fixed with 4% (w/v) paraformaldehyde (PFA). The cells were permeabilized in 0.3% (v/v) Triton X-100 (Sunshine Biotech, Nanjing, China) diluted in PBS, and then blocked with 3% (w/v) bovine serum albumin (BSA) (Sangon Biotech, Shanghai, China) for 1 h. The primary antibody was then added to the wells, and the plate was kept at 4 °C overnight. The cells were incubated with a fluorescence-labeled secondary antibody (CST, Danvers, Massachusetts, USA) for 1 h at room temperature and finally stained with 4′,6-diamidino-2-phenylindole dihydrochloride (DAPI; Sigma) for 2 min. All incubations were followed by three PBS washes.
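As a worked illustration of the scoring scheme just described, the snippet below combines the intensity score (0-3) and extent score (0-4) into the 0-12 composite. The multiplication rule is our inference from the stated 0-12 range, and the example reading is invented.

```python
# Minimal sketch (inferred scoring rule, not the pathologists' software):
# intensity (0-3) x extent (0-4) yields the 0-12 composite IHC score.
def ihc_composite_score(intensity: int, extent: int) -> int:
    """intensity: 0 (negative) to 3 (strong); extent: 0 (0%) to 4 (76-100%)."""
    assert 0 <= intensity <= 3 and 0 <= extent <= 4
    return intensity * extent

# Example: intermediary staining (2) over 51-75% of cells (3) -> score 6
print(ihc_composite_score(2, 3))  # 6 on the 0-12 scale
```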
Quantitative real-time PCR (qRT-PCR) analysis
qRT-PCR was performed as described previously [18]. Briefly, total RNA was extracted from adjacent normal tissue and prostate cancer tissue with TRIzol reagent and reverse transcribed to cDNA using PrimeScript RT Master Mix (TaKaRa Biotech, Nojihigashi, Japan). SYBR Master Mix (Vazyme, Nanjing, China) was used for qRT-PCR on the QuantStudio™ 6 Flex System (PE Applied Biosystems, Foster City, CA). Relative mRNA expression was normalized to ACTB by the 2^(−ΔΔCt) method. The primer sequences for sANK1 were F: 5′-GGAGACCATCTCCACCAGG-3′ and R: 5′-CCACCTTGCGAATGATCTTCT-3′; the primer sequences for ACTB were F: 5′-CATGTACGTTGCTATCCAGGC-3′ and R: 5′-CTCCTTAATGTCACGCACGAT-3′.
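The 2^(−ΔΔCt) normalization described above can be made explicit with a short sketch; the Ct values below are invented for illustration and do not come from the paper.

```python
# Sketch of the 2^(-ΔΔCt) method: the target (sANK1) Ct is normalized to
# the reference gene (ACTB), and the tumor sample is then expressed
# relative to the paired adjacent normal tissue.
def rel_expression(ct_target_sample, ct_ref_sample,
                   ct_target_control, ct_ref_control):
    d_ct_sample = ct_target_sample - ct_ref_sample      # ΔCt (tumor)
    d_ct_control = ct_target_control - ct_ref_control   # ΔCt (normal)
    dd_ct = d_ct_sample - d_ct_control                  # ΔΔCt
    return 2 ** (-dd_ct)

# Example: sANK1 Ct 24.1 / ACTB Ct 17.0 in tumor vs. 26.3 / 17.2 in normal
print(rel_expression(24.1, 17.0, 26.3, 17.2))  # 4.0, i.e. 4-fold higher
```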
Western blot analysis
Western blot (WB) analysis was performed as described previously [19]. Briefly, RIPA buffer containing protease and phosphatase inhibitors was used to lyse the cells, and the lysate was centrifuged at 16,000 g for 20 min. The supernatant was collected, boiled at 95 °C with loading buffer for 5 min, and used for WB analysis. Tubulin served as the loading control. Antibodies against sANK1 and PKM2 were purchased from Abcam (Cambridge, UK), and antibodies against PTBP1 and tubulin were purchased from Proteintech Group (Wuhan, China).
Plasmid construction, lentiviral infection and oligonucleotide transfection
The coding sequence of human sANK1 was amplified by PCR with the forward primer F: 5′-GACGATGACAAGCTTGCGGCCGCTATGTGGACTTTCGTCACCCAG-3′ and the reverse primer R: 5′-GATCGCAGATCCTTCGCGGCCGCTCACTGTTTCCCCCTTTTCAG-3′ and inserted into the multiple cloning site of the lentiviral vector pCDH-CMV-3 × FLAG-ZHX3-EF1-puro (pCDH). The vector was then cotransfected into HEK293T cells along with a three-plasmid expression system. Forty-eight hours after transfection, a 0.2 μm filter was used to filter the virus-containing supernatant.
Mass spectrometry
Cell lysates of Du145-sANK1 and PC-3-sANK1 cells were immunoprecipitated with an anti-FLAG antibody. Mass spectrometry data acquisition and analysis were performed at GeneChem. In brief, the protein solution was digested by protease into a mixture of peptides, which were subjected to an NSI source followed by tandem mass spectrometry (MS/MS) on a Q Exactive™ Plus (ThermoFisher Scientific, Waltham, Massachusetts, USA) coupled online to the UPLC. The resulting MS/MS data were processed using PROTEOME DISCOVERER 1.3 (ThermoFisher Scientific).
Co-immunoprecipitation (Co-IP)
Cell lysates of Du145-sANK1 and PC-3-sANK1 cells were first precleared with protein A/G agarose beads and then incubated overnight with an anti-FLAG antibody or control IgG together with protein A/G agarose beads. The complexes were washed three times with lysis buffer and resuspended in 2× SDS loading buffer. The immunoprecipitated proteins were eluted from the beads by incubation at 95 °C for 5 min and detected by WB.
Measurement of intracellular calcium
Cells were digested with trypsin without EDTA and washed twice with PBS. Then, the cells were resuspended and loaded with a fluorescent Ca2+ indicator for analysis.
Glucose uptake and deprivation assay
Glucose uptake and deprivation assays were analyzed on a flow cytometer (BD Biosciences). Glucose uptake was assessed with 2-NBDG, a fluorescent glucose analog. Cells were cultured in 12-well plates and, after 2 h of glucose deprivation, treated with 2-NBDG for 2 h. Flow cytometry was then used to analyze 2-NBDG uptake at excitation and emission wavelengths of 465 and 540 nm, respectively. For the glucose deprivation assay, cells were seeded into 24-well plates, and the medium was replaced with glucose-free medium (Gibco, ThermoFisher Scientific) for 12 h. Apoptosis was measured by flow cytometry with an Annexin V-FITC Apoptosis Detection Kit (Vazyme).
Metabolic assay
A Seahorse XF96 analyzer (Seahorse Biosciences, Billerica, Massachusetts, USA) was used to assess the oxygen consumption rate (OCR), which reflects the rate of OXPHOS. A total of 15,000 cells were seeded into a 96-well XF96 plate and cultured overnight. The Cell Mito Stress Test Kit (Agilent Technologies Inc., Santa Clara, California, USA) was used to measure cellular mitochondrial flux. Several inhibitors, namely oligomycin (1.25 μM, a mitochondrial ATP synthase inhibitor), carbonyl cyanide 4-(trifluoromethoxy)phenylhydrazone (FCCP) (2.5 μM, a protonophore that uncouples ATP synthesis from mitochondrial respiration), and rotenone (0.75 μM, an electron transport inhibitor), were added according to the manufacturer's instructions. ATP production and spare respiratory capacity were two important indicators of OXPHOS capacity. From basal respiration, the OCR decreases markedly after oligomycin is added to inhibit ATP synthase; this decrease in OCR represents ATP production. After the uncoupler FCCP is added, electron transport is freed from the constraint of the proton gradient and runs at maximum speed, raising the OCR sharply to the maximal oxygen consumption. The difference between this maximal oxygen consumption and basal respiration is the spare respiratory capacity.
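The two indicators defined above (ATP production and spare respiratory capacity) can be derived from an OCR trace as in the sketch below. This is an assumed post-processing scheme, not the analyzer's own software, and the OCR values and measurement phases are illustrative.

```python
# Sketch: deriving mito stress test metrics from an OCR trace
# (pmol O2/min). Phases: basal, +oligomycin, +FCCP, +rotenone.
import numpy as np

def mito_stress_metrics(ocr, basal_idx, oligo_idx, fccp_idx, rot_idx):
    non_mito = np.mean(ocr[rot_idx])                 # residual after rotenone
    basal = np.mean(ocr[basal_idx]) - non_mito       # basal mitochondrial OCR
    atp_linked = np.mean(ocr[basal_idx]) - np.mean(ocr[oligo_idx])
    maximal = np.mean(ocr[fccp_idx]) - non_mito      # FCCP-stimulated maximum
    return {"ATP_production": atp_linked,
            "spare_capacity": maximal - basal}       # maximal minus basal

# Three replicate measurements per phase (example values)
ocr = np.array([120, 118, 122, 45, 44, 46, 210, 205, 200, 20, 19, 21],
               dtype=float)
phases = [slice(0, 3), slice(3, 6), slice(6, 9), slice(9, 12)]
print(mito_stress_metrics(ocr, *phases))
```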
Animal study
Six-week-old male nude mice were purchased from the Model Animal Research Center of Nanjing University. A total of 5 × 10^6 cells per group were subcutaneously injected into the two flanks of each mouse (left: Du145-NC, right: Du145-sANK1, n = 6). The large (L) and small (S) diameters of the tumors and the weight of the mice were measured every 4 days, and the tumor volume was calculated using the formula S² × L / 2. Four weeks later, the mice were treated with DTX (5 mg·kg−1) once a week for 3 consecutive weeks. One week after the final treatment, the mice were euthanized, and the tumors were excised and weighed.
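As a quick check of the volume formula quoted above (V = S² × L / 2), the snippet below applies it to example caliper readings; the readings are invented.

```python
# Sketch of the modified ellipsoid tumor volume formula V = S^2 * L / 2.
def tumor_volume(large_mm: float, small_mm: float) -> float:
    return small_mm ** 2 * large_mm / 2.0  # volume in mm^3

print(tumor_volume(12.0, 8.0))  # 8^2 * 12 / 2 = 384.0 mm^3
```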
Statistical analysis
Data analysis was performed with GRAPHPAD PRISM 6 (San Diego, California, USA) and IBM SPSS STATISTICS 17.0 (International Business Machines Corporation, Armonk, New York). Count data were analyzed by the chi-square test. Normally distributed data are expressed as the mean ± SD and were compared by Student's t-tests. P < 0.05 was considered statistically significant.
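For readers who prefer open-source tools, the sketch below reproduces the two tests named above (Student's t-test and the chi-square test) with SciPy; the paper used GraphPad Prism and SPSS, and all numbers here are hypothetical.

```python
# Hedged sketch of the statistical comparisons with SciPy (illustrative
# data only; the two groups mimic tumor weights in grams).
import numpy as np
from scipy import stats

nc = np.array([0.42, 0.38, 0.45, 0.40, 0.36, 0.44])     # hypothetical NC group
sank1 = np.array([0.78, 0.82, 0.70, 0.85, 0.74, 0.80])  # hypothetical sANK1 group

t, p = stats.ttest_ind(nc, sank1)  # two-sided Student's t-test (equal variances)
print(f"t = {t:.2f}, p = {p:.4f}; significant at P < 0.05: {p < 0.05}")

# Chi-square on a 2x2 contingency table (e.g., resistant vs. sensitive
# by high vs. low staining); counts are hypothetical.
table = np.array([[3, 1], [1, 3]])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_chi:.4f}")
```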
sANK1 was upregulated in chemoresistant PCa
We first investigated sANK1 protein levels in PCa tissues and paired adjacent normal tissues by IHC staining. The IHC results revealed that PCa tissues had higher sANK1 levels than the adjacent normal tissues (Fig. 1A,B). We also analyzed the mRNA expression of sANK1 in 20 pairs of PCa and adjacent tissues. Higher sANK1 mRNA expression was observed in 18 of 20 cases (Fig. 1C). Then, we compared sANK1 protein levels in chemoresistant PCa tissues with those in chemosensitive PCa tissues and found that sANK1 was upregulated in chemoresistant PCa tissues (Fig. 1D,E). Next, we constructed two chemoresistant PCa cell lines (PC-3-DR and Du145-DR) and further investigated sANK1 expression in those cell lines. As shown in Fig. 1F, the chemoresistant cell lines had higher sANK1 expression levels than their parental cells. These results indicated that, compared to normal prostate tissue, sANK1 was overexpressed in PCa tissues and showed a positive correlation with the chemotherapy resistance of PCa.
Changing chemosensitivity in PCa through dysregulation of sANK1
Considering the overexpression of sANK1 in chemoresistant PCa tissues and cells, we hypothesized that sANK1 might play a role in the DTX resistance of PCa. To test this hypothesis, we constructed sANK1-overexpressing cell lines (Du145-sANK1 and PC-3-sANK1) with a sANK1-overexpressing lentivirus and knocked down sANK1 in the DTX-resistant cell lines (Du145-DR and PC-3-DR) using a sANK1-specific siRNA. WB results confirmed the efficiency of sANK1 overexpression and knockdown (Fig. 2A,B). MTT assays were performed to evaluate the impact of altered sANK1 expression on DTX sensitivity in PCa cells. Cells overexpressing sANK1 were more resistant to DTX treatment than the control cells, and knocking down sANK1 in Du145-DR and PC-3-DR cells markedly increased their sensitivity to DTX (Fig. 2C,D). Given the effect of sANK1 in PCa in vitro, we further investigated whether dysregulation of sANK1 affected the sensitivity of PCa to DTX in vivo. The two flanks of nude mice were subcutaneously injected with Du145-NC and Du145-sANK1 cells. After 4 weeks, these mice were treated with DTX intraperitoneally once a week for three consecutive weeks. Compared to the Du145-NC group, both tumor volumes (Fig. 2E,G,H) and tumor weights (Fig. 2I) in the Du145-sANK1 group were significantly greater after the 3-week DTX treatment. Beginning 10 days after implantation, the weight of the mice was measured every 4 days, and the mice showed no obvious weight loss, suggesting that the DTX dose was appropriate (Fig. 2F). These results demonstrated that sANK1 could significantly enhance the DTX resistance of PCa cells in vitro and in vivo.
Overexpression of sANK1 induced chemoresistance by restraining polypyrimidine-tract-binding protein 1 (PTBP1) localization
To further reveal the mechanism by which sANK1 promoted the DTX resistance of PCa cells, we set out to identify the downstream target of sANK1 in DTX-resistant PCa cells, guided by the localization of sANK1 on the sarcoplasmic reticulum/endoplasmic reticulum (SR/ER) as a membrane protein. It has also been reported that sANK1 can interact with the sarco(endo)plasmic reticulum Ca2+-ATPase (SERCA1) [20], which led to the hypothesis that sANK1 mediates DTX resistance in PCa via protein-protein interactions. Therefore, we performed a Co-IP assay and analyzed the potential target proteins by mass spectrometry. The mass spectrometry and subsequent WB results indicated that sANK1 could interact with PTBP1 (Fig. 3A,B). PTBP1 has been reported to mediate the alternative splicing of PKM mRNA to generate two different isoforms (PKM1 and PKM2) [21]. We therefore examined alterations in the expression of PKM2 and PKM1 in PC-3-sANK1 and Du145-sANK1 cells. The results showed that overexpression of sANK1 increased splicing toward the PKM1 isoform and inhibited splicing toward the PKM2 isoform without affecting the protein level of PTBP1 (Fig. 3C,D). Furthermore, we restored PKM2 expression in Du145-sANK1 and PC-3-sANK1 cells (Fig. 3E,F) and found that their sensitivity to DTX was markedly enhanced (Fig. 3G). These results suggested that sANK1 induced DTX resistance through PTBP1-PKM2 signaling. Considering that sANK1 is a membrane protein of the ER, we hypothesized that sANK1 might bind to PTBP1 and restrain its entry into the nucleus, where it mediates alternative splicing. Immunofluorescence showed that cytoplasmic PTBP1 increased in sANK1-overexpressing cells compared with control cells, while nuclear PTBP1 correspondingly decreased (Fig. 3H,I), indicating a blocking effect of sANK1 on the nuclear entry of PTBP1. These results show that sANK1 affected the PKM2/PKM1 ratio by regulating the distribution rather than the protein expression of PTBP1.
Overexpression of sANK1 enhanced OXPHOS in PCa cells
Considering the change in the PKM2/PKM1 ratio, we asked whether the metabolic pathway shifted when sANK1 expression was upregulated. The oxygen consumption rate (OCR) of the PCa cell lines was assessed with a Seahorse metabolic analyzer. Basal respiration was the OCR level before oligomycin treatment, including the oxygen consumption of mitochondrial oxidative phosphorylation and proton leakage. When cells were treated with oligomycin, the decrease in OCR represented the capacity for ATP production. When cells were treated with FCCP, the increase in OCR represented the maximal respiratory capacity and the respiratory potential of the mitochondria. Our results showed that both the spare respiratory capacity and ATP production of Du145-sANK1 and Du145-DR cells were increased compared with those of Du145-NC cells (Fig. 4A-C). Among the PC-3 cell lines, the spare respiratory capacity and ATP production of PC-3-DR cells were higher than those of PC-3-NC cells, but in PC-3-sANK1 cells only the spare respiratory capacity was increased, while ATP production was unchanged compared to PC-3-NC cells (Fig. 4D-F). These results revealed that, like DTX-resistant PCa cells, sANK1-overexpressing PCa cells showed enhanced mitochondrial OXPHOS compared to control cells, suggesting that sANK1 might promote DTX resistance by enhancing mitochondrial OXPHOS in PCa cells. In addition to regulating the PKM2/PKM1 ratio to enhance mitochondrial OXPHOS, sANK1 has been reported to interact with SERCA1, which can affect the level of Ca2+ in the myoplasm, and Ca2+ is believed to regulate mitochondrial oxidative phosphorylation [22]; we therefore asked whether sANK1 regulated OXPHOS by regulating Ca2+. As shown in Fig. 4G, the expression of sANK1 did not affect the level of intracellular Ca2+ in prostate cancer cells. Furthermore, by treating cells with 2-NBDG, we found that the glucose uptake of Du145-sANK1 and PC-3-sANK1 cells was markedly increased compared to the corresponding control cells (Fig. 4H). When cells were cultured in glucose-free medium for 12 h, a higher apoptosis percentage was also observed in both PC-3-sANK1 and Du145-sANK1 cells than in the corresponding control cells (Fig. 4I,J). These results showed that sANK1-overexpressing cells were more dependent on glucose metabolism. In summary, the above results demonstrated that overexpressing sANK1 could enhance OXPHOS in DTX-resistant PCa cells.
Discussion
Although novel hormonal drugs are now used in clinical practice, DTX still plays a crucial role in treating advanced PCa [23]. How to address DTX resistance in PCa remains a key challenge for all urologists. In this study, we found that sANK1 was positively associated with DTX resistance in tumor tissues from PCa patients. Further investigation demonstrated that overexpression of sANK1 can significantly induce PCa cell resistance to DTX in vitro and in vivo. Moreover, we revealed that overexpression of sANK1 markedly enhanced OXPHOS levels in PCa cells, in accordance with the higher OXPHOS levels observed in DTX-resistant PCa cells. We then explored the mechanism underlying sANK1-mediated DTX resistance and found that sANK1 regulated the PKM2/PKM1 ratio through PTBP1, thereby enhancing OXPHOS and mediating DTX resistance. Metabolic reprogramming is an important way for cancer cells to respond to external stress [24]. Cancer cells prefer to use aerobic glycolysis rather than mitochondrial OXPHOS for glucose metabolism and ATP production even in oxygen-rich conditions, a preference referred to as the famous "Warburg effect" [25]. However, cancer cells can also promote their adaptability and resistance to therapies by adapting their metabolism to different treatments [26]. Reports have shown that an increased OXPHOS level in tumor cells is an indication of cells resisting chemotherapy [27]. For instance, a recent study described a new cisplatin resistance mechanism in non-small-cell lung cancer in which increased OXPHOS function was the key to the chemotherapy resistance phenotype, proposing the therapeutic exploitability of OXPHOS inhibitors or PGC-1α downregulation [28]. In addition, another study revealed that targeting OXPHOS with ALDH inhibitors could reverse drug resistance by blocking autophagy recycling in mouse xenograft models of various cancers, including PCa [27]. However, the mechanism of metabolic reprogramming in DTX-resistant PCa cells has rarely been reported. In our current study, we found that overexpression of sANK1 markedly enhanced OXPHOS through PTBP1-mediated PKM alternative splicing, which promotes the resistance of PCa cells to DTX. PTBP1 is an alternative splicing factor regulating pre-mRNA processing and belongs to the family of heterogeneous nuclear ribonucleoproteins [29]. PTBP1 plays an oncogenic role in many cancers by regulating the alternative splicing of critical genes. In colorectal cancer, PTBP1 is often overexpressed, which leads to the alternative splicing of CD44, promoting colorectal cancer progression [30]. In glioblastoma, PTBP1 promotes progression by mediating annexin A7 exon splicing, eliminating its tumor suppressor functions [31]. Only one recent study has reported an association between PTBP1 and PCa, showing that PTBP1 is increased in PCa tissues and that its genetic variants affect patient response to androgen-deprivation therapy [32]. Despite the wide acknowledgment of the alternative splicing function of PTBP1, few studies have focused on the regulation of PTBP1 itself. Here, we found that sANK1, as an integral membrane protein of the endoplasmic reticulum, could interact with PTBP1 and tether it to the endoplasmic reticulum membrane, restraining its entry into the nucleus to mediate alternative splicing. In cancer cells, the most important process involving PTBP1 is glycolysis [29].
Pyruvate kinase (PK), a key rate-limiting enzyme of glycolysis, catalyzes the transfer of a phosphate group from phosphoenolpyruvate to adenosine diphosphate, producing pyruvate and ATP [33]. The PKM gene encodes two subtypes of PK (PKM1 and PKM2), which show distinctly different regulatory and catalytic features [33]. In some cancer cells, PTBP1 promotes PKM splicing toward PKM2 rather than PKM1, leading to a metabolic shift from OXPHOS to glycolysis [21,34]. The role of PTBP1 in PKM splicing is already known in several cancer types [21,34]. In this study, we revealed the relationship between PTBP1-mediated PKM splicing and DTX resistance in PCa. We demonstrated that sANK1 suppresses PTBP1-mediated PKM alternative splicing, and that the resulting enhancement of OXPHOS promotes the resistance of PCa cells to DTX.
Conclusion
In conclusion, our results revealed that sANK1 is overexpressed in PCa tissues and positively associated with the resistance of PCa cells to DTX. Further mechanistic investigation demonstrated that sANK1 interacts with PTBP1 and suppresses PTBP1-mediated PKM alternative splicing, thereby decreasing the PKM2/PKM1 ratio, enhancing OXPHOS, and ultimately promoting the resistance of PCa cells to DTX. These findings offer novel potential therapeutic targets for addressing DTX resistance in PCa.
MD, CJ, HG and WG designed the research. HQ collected the tissue samples and conducted part of the IHC analysis. HY and MC conducted the TMA. WC and WD prepared the paraffin sections. All authors read and approved the final manuscript. | 2022-12-14T16:05:57.061Z | 2022-12-12T00:00:00.000 | {
"year": 2022,
"sha1": "8ab7765322c76665622751e082d544796094bd24",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Wiley",
"pdf_hash": "c890eaae8951e6fc1c0f1ff96b9428831ef5f51d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254855329 | pes2o/s2orc | v3-fos-license | Unpacking rural tourism on household livelihood: Insights from Kakum Ghana
Abstract The influence of rural tourism on the socio-economic well-being of indigenous people is often ignored in scholarly literature. To address this gap in the literature, a qualitative case study research design was used to gain insights into how tourism strategies have, over the years, influenced the economic wellbeing of indigenous people living within the catchment area of the Kakum National Park (KNP) in the central region of Ghana. The participants of the study comprised residents of “Dwaso” and “Ahafo” within the central region of Ghana. The study’s findings revealed that the establishment of the tourist site has been a mixed blessing. On the one hand, it has made the names of the communities popular across the world. However, the negative side of the tourist site far outweighs the dividends, because it has impoverished the people by depriving them of their sources of livelihood. To worsen their plight, natives are subjected to various forms of harassment as they eke out a living, virtually rendering them paupers, which was not the case before the advent of the tourist site.
PUBLIC INTEREST STATEMENT
Recognizing the consequences of existing tourist attractions from the perspective of rural fringe communities is an essential component of sustainable tourism development, about which little is known. Tourism in Ghana is making a significant contribution to the country's development, but little is known about how the Kakum National Park is contributing to the livelihood improvement and wellbeing of the indigenous people through existing and newly created livelihood activities within the catchment area. This study interviewed participants who hail from the communities surrounding the tourist site on their opinions regarding how beneficial the site is to them, especially through the creation of livelihood activities. The results revealed that the site has been a mixed blessing: it has positively projected the name of their community worldwide, but poverty resulting from the loss of livelihood activities and displacement, along with maltreatment, has become their lot.
Introduction
Political and philosophical debates have emerged over the years regarding how the aims of conservation projects can be tailored to conform with the demands and desires of the people who are likely to be directly affected (Borchers 2004, as cited in Agyeman, 2005). In 1992, governments from different countries across the globe converged at the Earth Summit. The essence of the meeting was to deliberate on how tourist sites can be managed prudently to become sustainable and improve conservation so as to engender sustainable development. Unlike previous practices that perceived local communities as a threat to nature, the 1992 Earth Summit, the African Union Agenda 2063, and the United Nations Sustainable Development Agenda 2030 take into account not only conservation but also sustainable use and the directing of benefits from conservation projects to local people, including a high standard of living, quality of life and wellbeing (African Union Commission, 2015; Bergman et al., 2018; Colglazier, 2015; Gowreesunkar, 2019). In line with this, conservation agreements such as the Convention on Biological Diversity (CBD) were entered into by member countries (Paterson, 2011).
The convention appears to have been a blessing, because several developing countries have seen an increase in the number of national parks in the wake of the signing of the Convention on Biological Diversity (CBD) agreement by governments across the globe, and the influence of these parks on local livelihoods has been remarkable (Atisa, 2014; De Oliveira et al., 2011). African countries like South Africa, Kenya, Tanzania, Uganda, Zimbabwe, and Ghana are among the countries to have witnessed such an impact regarding the creation of national parks. Yet there still exist problems with the legal regime regulating the creation of the parks and the tourism activities therein. Even though benefits accrue to the beneficiary communities from the establishment of national parks and tourism, the sacrifices that they have to bear are enormous and painful (Amoah & Wiafe, 2012).
An effective way of developing tourism in local communities to improve livelihoods is said to be through nature-based tourism and the targeted distribution of the benefits tied to it (Mayaka et al., 2018). However, in a developing country such as Ghana, these reserves and national parks sit on the same land that local communities depend on for their livelihoods. Even though the establishment of nature reserves and national parks has been a priority of policymakers in the field of tourism and conservation since the 19th century (Agyeman, 2005), the creation of these parks and reserves restricts the livelihood activities of local communities on their land, as their activities are considered unsustainable (Adu-Ampong & Kimbu, 2019; Melubo & Lovelock, 2019). Livelihood activities of local communities, such as hunting and collecting Non-Timber Forest Products (NTFPs) as well as building materials, are considered activities that tend to degrade the natural environment, which in turn leads to the loss of biodiversity (Agyeman, 2005).
Tourism relies on local communities and on the natural and man-made environment. This explains why communities in most developing countries that support tourism have their development channeled through tourism development. Tourism is currently viewed by many practitioners as an instrument for preserving and conserving the natural and monumental resources of a country. Tourism is tied to development (Adu-Ampong & Kimbu, 2019; Asiedu, 2002) and has often been used in development studies as a way of addressing inequalities among communities through the equal distribution of benefits. There are identified means through which benefits from tourism projects can be channeled to local communities to address issues of inequality and to enhance the long-term viability of tourism projects. As a result, tourism practitioners use these means to ensure that local communities living close to tourist attraction centers benefit considerably from tourism in place of lost livelihood activities (Akyeampong & Asiedu, 2008).
Sharing revenues from nature-based tourism is among the strategies widely accepted and used by international, national, local, and private tourism organizations, including conservationists at all levels, to address inequalities and the loss of livelihoods among communities (Melubo & Lovelock, 2019). This claim validates campaigns for conservation alongside issues of poverty and restricted livelihood opportunities in less developed countries (Ahebwa et al., 2011). In less developed countries like Ghana, this strategy is used to channel benefits to local communities for development and livelihood enhancement. How these benefits are distributed and how they impact the livelihoods of individual households is the crux of this study. Tourism is linked with local involvement and the livelihoods of local communities. It is believed that the benefits from the park that are shared among communities are likely to reduce poverty (Novelli, 2015) and influence the lives of the local people in the various fringe communities. Direct and indirect tourism livelihood strategies in the communities are likewise vital for enhancing the livelihoods of the local people (Njana et al., 2013; Rakodi, 2014).
The question that arises is the extent to which communities living around the Kakum National Park (KNP) have benefited from the establishment of the park and the extent to which it has improved the lives of the people. Although anecdotal evidence suggests that the park has been a mixed blessing for some, research evidence is needed to confirm this claim, and this is the crux of the current study. In essence, the study seeks to address the following research objectives: (1) Assess the livelihood strategies available in the Conservation Area; (2) Determine the influence of livelihood strategies on households in the Conservation Area.
The following research questions guided the study: (1) What livelihood strategies are available to people in that locality?
(2) How have livelihood strategies influenced the lives of individual households?
Research context
Tourism in Ghana has been identified with exceptional performance and contribution to the country. Statistics indicate that tourism is the fastest-growing industry in the Ghanaian economy (Adu-Ampong, 2017; Asiedu, 2002). Tourism contributes greatly in terms of Gross Domestic Product (GDP) and both direct and indirect employment (Adu-Ampong, 2018; MBOKA, 2008). Ghana has always pursued strategies to increase both international and domestic tourist arrivals and to make the tourism industry the most important economic sector in Ghana (Cobbinah & Darkwah, 2016). Government policies were instituted to make the growth of the Ghanaian tourism industry possible. In the mid-1980s, tourism was recognized and given precedence in the investment code of Ghana (PNDC Law 116) as one of the five sectors prioritized for development and investment. The prospective nature of tourism as a long-term national development plan resulted in the formulation of national tourism development plans focused on the notion of sustainability in Ghana. Among these plans was the 15-year National Development Tourism Plan (NDTP) 1996-2010, which was followed by the Tourism Sector Medium Term Development Plan (TSMTDP) 2010-2013 and currently the Ghana National Tourism Development Plan (GNTDP) to 2027 (Adu-Ampong, 2019; Adu-Ampong, 2018; Teye, 2008). These plans propose the equal distribution of benefits and development among residents of the country, and specifically the local people of rural communities living close to attractions. In this case, the plan supports the development of rural tourism, which considers issues of rural development important. This is to make sure that resources are utilized sustainably, with tourism benefits equitably distributed throughout the country (Akyeampong & Asiedu, 2008; Asiedu, 2002).
Furthermore, the continuous formulation of policies for the Ghanaian tourism industry establishes that successive governments, in addition to pursuing the economic benefits of tourism, have plans to develop tourism and use it effectively to achieve other socio-economic needs such as regional development and poverty reduction in Ghana (Akyeampong, 2011). Tourism in this sense is anticipated to be one of the most important sources of reasonable employment for the residents of Ghana. Currently, one main aim of tourism, according to the Ghana National Development Planning Commission (2013), is to ensure the generation of more opportunities for local participation and for the acquisition of tourism benefits by local entrepreneurs in terms of employment, training, and awareness, including accessibility to better infrastructure.
Ghana is blessed with numerous rural tourism resources, and for that reason a strategic rural tourism plan has been formulated by the Ghana Wildlife Department, which is in charge of managing all protected areas and ensuring that these resources benefit residents living in surrounding communities. The main aim of the plan is to increase visitor arrivals and the revenues yielded from protected areas. Districts with rural tourism resources, such as Twifo-Hemang Lower Denkyira, Yilo Krobo, and Mpohor-Wassaw East, are currently implementing or formulating rural tourism development plans to enhance and manage their rural tourism resources.
Within the West African region, Ghana seems to possess much improved opportunities for establishing diverse game reserves. Some rural tourism resources in Ghana have been selected and classified by the Wildlife Department. Among the classifications are tropical rainforest, savanna woodland, coastal wetlands, outlier forests, sub-montane forests, wetlands, ancient groves and other cultural links to conservation, waterfalls, bird watching, and monkey and butterfly sanctuaries (Asiedu, 2002). However, the main rural tourism facilities that are legally protected by the Wildlife Department of Ghana are the Kakum, Mole, Bui and Bia National Parks, the Shai Hills, Kogyae and Bobiri reserves, the Paga and Agyambra crocodile ponds, the Tafi Atome and Buabeng Fiema monkey sanctuaries, Lake Bosomtwe and the Volta River estuary, including some other wildlife sanctuaries and wetlands. Apart from these are other unprotected areas, such as the Digya and Kyabobo Range national parks and the Kalakpa and Gbele resource reserves, that are important resources for rural tourism development.
Concept of sustainable rural livelihoods
The issue of development and poverty reduction in rural parts of the world has become important for development organizations and for non-governmental and governmental organizations across the world. Currently, prominence is given to the sustainability of rural livelihoods, also termed sustainable livelihoods. A livelihood is argued to be sustainable when it can develop present and future capabilities and handle stress and shock without damaging the natural resources upon which local people depend (Ashley & Hussein, 2000). This could mean food, health, family ties, income, properties, social relations, and the diversities as well as occupations of people. It could also be used to explain the diverse levels of resources that provide the physical and social wellbeing and quality of life of individuals and groups living in a particular location (Scoones, 2009, p. 2). Of significant value to livelihoods are systems and practices that are people-centered and focus on poverty reduction and rural development (IDS, 2011; McNamara, 2008).
Rural tourism projects exist to support conservation together with the development needs of local communities. According to Kumar (2005), these projects aim to encourage the coexistence of local people with nature through participation and enhanced livelihood opportunities. However, Bediako (2000) identified that the limited participation of local people in the management of parks has brought about the leakage of economic benefits meant for local communities. In many cases, studies done in rural communities have revealed issues of destruction of nature and poverty, lack of livelihood opportunities, and unequal contributions to livelihood conditions (Mbaiwa & Stronza, 2010; Nkhata & Breen, 2010). Also, the literature available on rural tourism indicates that most projects have contributed to human and financial capital wastage, with little importance given to households (Akyeampong, 2011). Few studies have used the SLF in analysing the influence of the KNP on local livelihood conditions, though Sey (2011) used the SLF focusing on poverty reduction and technology use in Ghana; Marchetta (2011) adopted the SLF with emphasis on strategies to cope with the changing economic, institutional and environmental conditions in the Northern region; and another study used the SLF focusing on residents' perceptions of the impacts of the Amansuri in the western region of Ghana.
Theoretical framework
The study was framed by the Sustainable Livelihood Framework (SLF) (DFID, 2007), which outlines principles and mechanisms for improving the livelihoods of vulnerable groups in the catchment areas of tourism projects. Improving the livelihoods of vulnerable groups is largely dependent on their access to livelihood resources and on the use of effective livelihood strategies, positive or negative and direct or indirect, for the production of different levels of livelihood outcomes.
The Sustainable Livelihood Approach serves as a tool that aids in exploring people's livelihoods while envisaging the main features of influence on those livelihoods (Rakodi, 2014). The approach investigates livelihood impacts from the viewpoint of local people and therefore adopts qualitative and participatory techniques of enquiry (Van Rijn et al., 2012). In this study, the approach allows for evaluating the various influences on livelihood strategies and the resulting outcomes for individual households. The identified strategies and influences also indicate the type of influence and outcome that rural tourism projects have on the lives of rural communities, and hence the sustainability of the project.
In effect, for tourist sites such as the Kakum National Park, the sustainable livelihood framework (SLF) offers a sound standpoint from which to look at the livelihoods of local communities and to answer questions about impacts. The livelihoods of households depend largely on the utilization of assets in their livelihood strategies to improve their lot. However, the use of assets by households is also shaped by individual preferences together with external influences such as policies, institutions, and processes. These activities can only yield the desired outcome if they contribute to improvements in the conditions of households in the beneficiary communities. It is therefore essential to evaluate the impact of rural tourism projects considering the diverse effects they have on local communities (Gubbi et al., 2008; Huluka & Wondimagegnhu, 2019).
The sustainable livelihood theoretical framework comprises five main components: the vulnerability context, livelihood assets, transforming structures and processes, livelihood strategies, and the associated livelihood outcomes (E. J. Mensah, 2011; Pandey et al., 2017). Figure 1 depicts the elements of the framework. The vulnerability context is considered the first point in the livelihoods analysis and borders the external environments in which local people live and operate. The context articulates that the livelihoods of local people, and their access to assets, are influenced by natural, demographic and economic situations, with trends, shocks, and seasonality being liable for increases in vulnerability. These can be positive and/or negative, therefore influencing communities' and households' decisions on the strategies to implement for their livelihood conditions.
Assets represent the strengths of local people. These are the assets that local people depend on to undertake livelihood activities. They are the fundamental livelihood foundation, and they establish local people's capability to break out of poverty. They comprise human, financial, social, physical, and natural assets (DFID, 2007).
Human assets are the skills, knowledge, and capability of individuals to work, as well as their health, which facilitate carrying out livelihood strategies to achieve livelihood outcomes.
Financial assets are the economic resources that individuals obtain to achieve their livelihood preferences. These may include savings in the form of cash and microcredit that can be obtained on both a formal and an informal basis. Others may include inflows such as gifts, remittances, etc.
Social assets are the various social resources that local people draw on in the quest for their livelihoods. These consist of trust, social norms, networks, and memberships of groups that usually lie behind profit-making ventures and activities.
Physical assets are the infrastructure and producer goods required by communities and individuals to sustain their livelihoods. They include roads, safe shelter, adequate water supply and sanitation, as well as clean and affordable energy and the equipment used to effectively achieve livelihood objectives.
Natural assets comprise land, water, forests, marine resources, air quality, erosion protection, and biodiversity. These assets are essential to individuals whose livelihoods partly or fully depend on resource-based activities such as farming, the collection of non-timber resources, and fishing (Kollmair & Juli, 2002).
Livelihood structures and processes include public, private and non-governmental institutions and policies, as well as the political, economic, social, legal and cultural mechanisms that control access to livelihood assets (Bannett et al., 1999; Duncombe, 2006).
Livelihood outcomes are the component that indicates the improved and better wellbeing of local people. Livelihood outcomes and improved wellbeing result from the implementation of livelihood strategies. Livelihood outcomes include increased incomes, reduced vulnerability, food security, and the sustainable use of natural resources resulting from livelihood strategies.
Focusing on the livelihood strategies and outcomes, this study considers how the park has impacted the livelihoods of local households through direct and indirect tourism strategies.
Methodology and study area
The Kakum National Park is located in the Twifo-Hemang-Lower Denkyira District in the central region of Ghana. It is one of the remaining nature conservation areas in Ghana and has existed as a forest reserve since 1934. The area was formally declared a national park on 5 March 1992 (Amoah & Wiafe, 2012; Caesar, 2010). The headquarters of the park is in Abrafo-Odumase, one of the communities that surround the park. The vegetation of the park is a tropical moist semi-deciduous forest of about 347 square kilometers. The park lies between latitudes 05° 20´ and 05° 40´N and longitudes 1° 18´ and 1° 26´W, approximately 30 km from Cape Coast, the central regional capital. The park is surrounded by about 60 fringe communities. These communities include villages such as Abrafo-Odumase, Mfuom, Ankaako, and Antwikwaa, as well as other small settlements like Gyaware, Mesomagor, and Anomakwaa, among others. The Kakum National Park and two surrounding villages (Abrafo-Odumase and Mfuom) were selected for the study on account of certain successes realized by the park. Also, Abrafo-Odumase is the headquarters and the entrance/gate to the Kakum National Park (KNP), while Mfuom is the next village, about 6.1 km from Abrafo-Odumase. Figure 2 shows the location of KNP together with Abrafo-Odumase and Mfuom.

To explore the various influences of both direct and indirect tourism strategies resulting from KNP on the livelihood conditions of individual households, a qualitative approach to data collection and analysis was used for the study. This was to allow local communities to make meaning of available strategies and associated influences based on their experiences (Boeije, 2010, p. 11; Ospina & Wagner, 2004). A significant principle regarding the use of qualitative study is to gather unexpected data by exploring the minds of individuals and groups for subjective and differing meanings based on experience. Also, this approach encourages probing deep into complex issues in the communities under study and helps in appreciating the motives of the individuals involved (James, 2007). Two of the roughly 60 fringe communities that surround the park and make livelihoods out of forest resources were selected (Abrafo-Odumase and Mfuom).
However, for the purpose of anonymity and confidentiality of the communities and participants of the study, the pseudonyms "Dwaso" and "Ahafo" were used in the analysis to represent the two communities.
Figure 2. Map showing the location of Abrafo-Odumase and Mfuom in the Kakum Conservation Area.
The data collection was completed within 7 weeks. Twenty-six households were purposively selected on the basis of being either participants or non-participants in tourism strategies in the communities. Both males and females were selected to participate in the study without particular attention to age or level of gender inclusion/participation. This is because household formation in these communities did not depend on age and gender, as people under 20 years of age were already married and living with their families, and others were single parents. For a successful data collection, contacts were first made through phone calls and e-mails to some lecturers, experts, and others who have insights into the phenomenon under investigation, and permission was sought from opinion leaders in the communities before data collection began (Ruth-mcswain, 2011). In-depth face-to-face interviews were conducted, and participants were given the free will to decide whether they preferred to participate in the study or not. Twenty-six households in total were interviewed from the two selected fringe communities around the KNP. The interviews covered the objectives and allowed the viewpoints of participants to shape what was to be followed up (Joffe & Yardley, 2004).
Data analysis is significant because it makes it possible to identify themes tied to the study (Boeije, 2010, p. 76). Thematic analysis was used to analyze the data. The recorded interviews were transcribed and coded into categories. The themes were drawn from words that occurred repeatedly in the responses (data) of the households interviewed. Thematic analysis is effective for qualitative data because themes and their corresponding sub-themes can be unearthed (Braun & Clarke, 2006; Ryan & Bernard, 2003). Coding the data into categories in turn made it possible for the themes to be identified (Boeije, 2010, p. 75). The supervisor listened to the audio recordings and compared them with the transcriptions that were done.
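As a rough illustration of the frequency-based step described above, where themes were drawn from words that occurred repeatedly across responses, the Python sketch below counts recurring content words over a set of transcripts to surface candidate codes. It is a hypothetical aid only, not the authors' procedure: thematic analysis in the sense of Braun and Clarke (2006) is interpretive, and the transcript snippets and stopword list here are invented placeholders.

```python
# Hypothetical sketch: surface candidate codes by counting recurring content
# words across transcripts. Real thematic coding is interpretive, not purely
# frequency-based; this only mimics the "repeated words" heuristic above.
import re
from collections import Counter

STOPWORDS = {
    "the", "a", "an", "and", "or", "of", "to", "in", "is", "it", "we", "our",
    "for", "on", "has", "have", "been", "this", "that", "with", "from", "not",
    "was", "are", "but", "no", "since", "even",
}

def candidate_codes(transcripts, top_n=10):
    """Return the most frequent content words across all transcripts."""
    counts = Counter()
    for text in transcripts:
        words = re.findall(r"[a-z']+", text.lower())
        counts.update(w for w in words if w not in STOPWORDS and len(w) > 2)
    return counts.most_common(top_n)

transcripts = [  # invented snippets, not actual interview data
    "We have lost our farmland since the park was established.",
    "The park brought no new jobs; our livelihood depends on the forest.",
    "Trading has not improved even though the park receives many tourists.",
]
print(candidate_codes(transcripts, top_n=5))  # e.g., [('park', 3), ...]
```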
Credibility
In this study, the four benchmarks for guaranteeing credibility, as outlined by Lincoln and Guba (1985), were the use of different data sources (triangulation), peer debriefing, member checking and thick description.
Findings and analysis
The pseudonyms of the participants of the study are used in the analysis below.
The following excerpts highlight the livelihood strategies that the two communities (Dwaso and Ahafo) have employed to eke out a living since the establishment of the tourist site. They capture the perceptions of the two communities regarding livelihood strategies that were in existence before tourism and are currently being practiced by the participants of the study.
Existing livelihood strategies
Dwaso: There are not many strategies for us to participate since the establishment of the park; we are struggling in taking care of our families. This is bad because we were given the assurance that, the existence of the park would add many more strategies to the existing ones but now look, we have lost even what we use to have.

Dwaso: Even petty trading has been what our women do long before the park came to existence but there has been nothing new about it even with the park just close to us. For now, we cannot point to anything new or even improvement in old strategies even though the park is receiving a lot of tourists.

Ahafo: The Park is a hindrance to meeting our livelihood needs. The majority of our youth have stopped participating in the few existing strategies to migrate to the cities for better jobs that can assist them to support their families. I wish I could go to the city myself because nothing is here to do and the few is not lucrative; employment opportunities we expected from the park have failed while much priority is given to people outside the community rather than us.

Ahafo: We have been in this trading strategy for a very long time but there has not been any form of improvement. Though we expected that the introduction of tourism will help us have access to more credit facilities to improve our trading business, our expectations had not come to reality.
It can be discerned from the excerpts that the establishment of the tourist site has brought nothing new in terms of improving the participants' lot through livelihood strategies implemented by the authorities. Their concerns appear to be genuine because the strategies they relied on in the past, such as farming, collection of NTFPs, petty trading, and basket weaving, are the very ones they currently rely on to eke out a living. In essence, the lofty promises made to them have not materialized, and their plight keeps worsening day by day.
Influence on livelihood strategies
The following excerpts capture the communities' perceptions of the tourism-related livelihood strategies employed by the authorities at the tourist site to improve the standard of living, and the associated pleasures and displeasures expressed by people living in the catchment area about the lack of new strategies and the siting of the national park on their land.
Ahafo: Only about ten of the community members are working in the park with few others who are also working in the craft centre; it is not even consistent. The hotel which was just put up by a private man has not started operation so only about five young men have been given temporary employment by the owner to take care of the place till it fully starts operation. We have not seen new strategies introduced in this community except the two men who are employed as tour guides. It is a cheat on this community; it is only Ahafo that has had some development and new livelihoods strategies. Hmmm, even Ahafo is the only community allowed to sell at the entrance of the park.
Ahafo: For this community, it hasn't brought anything good apart from just the two people who happen to be working there that maybe they get some money to take care of themselves, it hasn't brought anything good. There is not even one strategy from the park has been introduced in this community; they have not done anything good for us since they came. If you come to this community and you happen to mention this name (Kakum National Park) we might not even give you water to drink.

Dwaso: I would be very happy should the park collapse because we have not benefited from the park since its establishment, even as we sit here should we be asked to demonstrate against them, I would gladly join in the demonstration.

Dwaso: If it is money from the laborer work, then yes, with that even if I want to do it today, I can go and do it right now and come back with money for my family. For me, I am happy the park is here and I want the whites to come since it is a form of civilization for the community. If we are allowed into the forest, then the forest would be destroyed, and all the trees would be cut down, therefore I won't be happy if that happens. This is because when the park collapses, the people working there will become unemployed and it would bring problems into the community.
Ahafo: I cannot say the park has hurt my household rather than positive impact since our forefathers had money from the forest through the picking of snails, cutting of canes, etc. Though the establishment of the park has deprived us of all these benefits, I believe it is the changing situation in the world. As the world changes, things also change, therefore I cannot wholly say it hurts us; to a certain extent, it is good.

Ahafo: We do take pride in that since this community has become popular, for instance, when we go out and want to show someone where we come from, we tell the person we live at where the canopy walkway is and they easily locate us as a result of that. At the beginning there were problems, but we later adjusted and coped with it. Even with that, it was just at the beginning, for now, we have adjusted, though there are still poachers around, they are punished when arrested. For this community, they have not arrested any of us because we are law-abiding.
They have done nothing wrong. They are working and when you flout their laws, they make you pay, therefore since there are laws we also have to follow and obey them.
It can be inferred from the extracts that the siting of the park is a mixed blessing, because the town has become popular worldwide. This is not surprising, because tourists of diverse backgrounds from across the globe, in terms of race and social standing, have become regular visitors to the site, and this has placed the community on the map of the tourist world, thereby bringing dignity and honour to the indigenes, Ghanaians, and the black race across the world.
Influence on livelihoods
The following extracts illustrate the distress that the indigenous communities in the catchment areas are currently going through: day in, day out, a number of them endure hardship because they do not have any reliable means of eking out a living, the lands of their forefathers having virtually become part and parcel of the tourist site (the national park), thereby depriving them of their source of livelihood.
Dwaso: If you are caught in the forest, you would be arrested. As if that is not enough, the way they would even beat you before jailing you is disturbing. Ever since they established Kakum, they said we should not enter the forest ever again. Those of us who have landed there, should they come to meet you there, you would be arrested and beaten seriously to the extent that you may even die as they drag you through the forest to their office.
Dwaso: This community, we are scared because one old man who is suffering from illness happened to go into the forest, though the park management did not have any evidence on what he was doing, they beat and dragged him to the police station. It was serious because this man was bleeding in his old wounds. I think he is even dead now. The Park has made life difficult; we do not get the medicines from the forest any longer. Food prices in the market have even gone high. Also, because this community and other communities are competing for livelihoods in the same side of the forest given to us, food is scarce and expensive now than before.
We are suffering; no work for us and our children and all other companies have collapsed due to the establishment of the park. We have educated youth who can work at the park but as I am talking most of them are jobless and some few have gone to the city in search of greener pastures.
Ahafo: All the things we used to get from the forest and that we sold to support our families have ceased. This has resulted in most children dropping out of school since there is no money from any other strategies in this community.
No job has been created after the establishment of the park; the creation of the park rather destroyed the only form of employment we had in this community. They have taken our farmlands from us and also refused to employ our youth; they have also refused to ensure that we also benefit from the park. All this has affected our standard of living in the sense that previously, we could go into the forest to take whatever we needed to support our life but now we have been stopped and seized from entering the park, this means that all the things that we could have gone for to sell and support our life, we no longer have control over them.
Ahafo: With the state farm and the previous forest reserve, both men and women were given employment and the opportunity to work with them, it brought development to this community because they paid us at the end of every month and also gave us foodstuffs in the form of provisions. The government had a lot of local workers working there.
It can be discerned from the excerpts that life has become a living hell for people around the tourist site. Their sources of livelihood, such as traditional medicine, which plays a significant role in the health delivery system of the rural folks, are now a thing of the past, because the indigenes are not allowed to draw close to the forest, and those who dare defy the order of the officials who manage the site are subjected to diverse forms of inhuman treatment, virtually reducing almost everybody to a life of squalor, deprivation, and sorrow. This, in turn, has a detrimental effect on the education of their children, thereby exacerbating the poverty situation in that locality.
Discussion
Before the establishment of the tourist site, the indigenes of the community eked out a living by engaging in different activities. This is not surprising because they had competing needs, and whatever they earned from one livelihood strategy was not enough to cater for all their needs. This explains why they had to devise ingenious ways of earning extra income to supplement the mainstream of their income-generation mechanisms. The common element that runs through their mode of generating income and sustaining their livelihoods is the use of the natural resources they have been blessed with. Their survival, in the past and in the current dispensation, has always been driven by the forest that surrounds them and the elements therein. Scoones (2009) describes sustainable rural livelihoods in terms of the various levels of natural resources that improve the wellbeing and quality of life of individuals and groups residing in a specific locality. These natural elements have forged their lives in the past and into the future, because the area is endowed with a variety of natural resources that they can exploit to improve their lot. But the irony is that the establishment of the tourist site has, to some extent, changed their story, which can best be described as a mixed blessing. In line with this, Amoah and Wiafe (2012) echo the point that, although the diverse benefits associated with rural tourism development are no doubt attractive, the sacrifices to be endured by the communities involved are unavoidable. In one breath, the existence of the park has brought their community to the limelight, because tourists travel from far and near to visit the site to draw closer to nature and understand its essence through lived experience, thereby contributing to the sustenance of biodiversity, which is directly linked to human survival.
Despite the lofty ideals behind the creation of the tourist site, it has brought untold hardship to the indigenes of the community, because the officers manning the site mete out inhumane treatment to some of them. Different studies (Mbaiwa & Stronza, 2010; Nkhata & Breen, 2010) identify poverty, lack of livelihood opportunities, and unequal contributions to livelihood conditions as consequences of rural tourism for the livelihoods of rural folk. Novelli (2015) and Rakodi (2014) point to the development of national parks as a means of reducing poverty and improving the livelihoods of people living in communities close to these parks. The following excerpt succinctly captures the frustrations, pain, and sorrow that some of them have lived with over the years since its establishment: If you are caught in the forest, you would be arrested. As if that is not enough, the way they would even beat you before jailing you is disturbing.
Moreover, with the establishment of the site and its negative effect on the survival of the indigenes, the majority of them have to engage in multiple income-generation activities to survive, and this is taking a toll on their health. Those who were unable to afford the high cost of health delivery in the hospitals and clinics resorted to the use of herbal medicine in the past. But with the establishment of the tourist site, such people are still unable to afford the high cost of health care and are also no longer allowed to go to the sites where they used to harvest herbs with medicinal value. In essence, many people are suffering from different kinds of ailments but cannot treat them because they do not have the wherewithal. In line with Melubo and Lovelock (2019), Adu-Ampong and Kimbu (2019), and Agyeman (2005), the development of national parks and reserves usually ends up restricting the livelihood activities of local communities on their lands, because their activities are considered unsustainable and a threat to the environment.
Finally, the lives of innocent people are trampled upon by the officers manning the tourist site, because at the least suspicion innocent people are subjected to different forms of brutality. This is creating a state of fear and panic in several communities in and around the site. The following excerpts capture the plight of the people in their quest to survive.
Everybody is scared in this community because one old man who is suffering from illness happened to go into the forest, but the officers did not have evidence of what he was doing, he was beaten up and sent to the police station. I guess he even died.
In contrast, the long-term national tourism development plans, as described by Adu-Ampong (2019, 2018) and Teye (2008), aim to extend development to rural residents, especially those communities situated close to tourist attractions. According to Akyeampong (2011), the continuous development of national tourism plans by the various governments of Ghana is meant to effectively position tourism as a means of reducing poverty, which in turn enhances and promotes national development.
Conclusions
The study's findings revealed that the potential exists for the use of tourism as a catalyst to improve the livelihoods of residents around tourist sites. The study revealed that tourism can be a mixed blessing: on the one hand, it has projected the community in question into the limelight across the globe. But the question that arises is the extent to which the site has contributed to the wellbeing of the indigenes within the catchment area. Undoubtedly, tourism can be used as a vehicle for the socio-economic transformation of communities around tourist sites, and examples abound in several countries. But in the case of Kakum, the narrative is quite different, because brutality and arrests, poverty, and the loss of sources of livelihood have become a daily ritual etched in the lives of the people. This requires that pragmatic steps be taken to alleviate the distress and associated resentment of the people. The lessons learned from the Kakum example are likely to inform policy decisions regarding the siting of tourist sites in developing countries with socio-cultural contexts akin to that of Kakum. | 2022-01-08T16:06:07.049Z | 2022-01-05T00:00:00.000 | {
"year": 2022,
"sha1": "2ba0c099e513a6f1879fe8afd993f80d7deaf129",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/23311886.2021.2019453?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "2b2c60139214f983d42a447b3bb7c807764594c1",
"s2fieldsofstudy": [
"Sociology",
"Environmental Science",
"Geography"
],
"extfieldsofstudy": []
} |
233177051 | pes2o/s2orc | v3-fos-license | Parallel disease activity of Behçet’s disease with renal and entero involvements: a case report
Background Behçet’s disease (BD) is a systemic inflammatory vasculitis with both autoimmune and autoinflammatory properties. Renal involvement in BD and its spontaneous remission have been rare. We herein describe a case of parallel disease activity of BD with entero and renal involvements, followed by a spontaneous remission without corticosteroid treatment. Case presentation A 54-year-old woman who had a 4-year history of BD, maintained with colchicine treatment, suffered abdominal pain, hemorrhagic stool and diarrhea. Physical examination revealed strong tenderness in the entire abdomen. Laboratory test results showed increased levels of inflammation, and a computed tomography scan revealed edematous intestinal wall thickening with ascites. Blood and stool cultures showed no specific findings. Since she was suspected to have developed panperitonitis with acute enterocolitis, she started treatment with an antibacterial agent under bowel rest. Her abdominal symptoms gradually improved, while diarrhea and high levels of inflammatory reaction persisted. Colonoscopy revealed discontinuous abnormal mucosal vascular patterns and ulcerations in the whole colon except for the rectum, and histological analyses of the intestine demonstrated transmural mucosal infiltration of inflammatory cells without epithelioid granuloma or amyloid deposition. Based on these findings, she was diagnosed with entero BD. Meanwhile, pedal edema appeared during her hospitalization. Urinalysis results were consistent with nephrotic syndrome, thus a renal biopsy was performed. Light microscopy showed no obvious glomerular and interstitial abnormalities, whereas electron microscopy revealed foot process effacement without immune complex deposition or fibrillary structure, compatible with minimal change disease (MCD). Only with conservative therapy, her proteinuria decreased, followed by a complete remission in 3 weeks from the onset of edema. The coincident episode of MCD was finally diagnosed as renal BD that paralleled disease activity to entero BD. She started adalimumab administration, resulting in the further improvement of diarrhea and inflammatory levels. Conclusions This is the first report to demonstrate MCD as renal involvement of BD along with the disease activity of entero BD.
Background
Behçet's disease (BD) is a systemic inflammatory vasculitis characterized by oral aphthous ulcers, genital ulcers, nodular skin lesions, ocular lesions, and other atypical manifestations such as gastrointestinal, neurological and cardiovascular abnormalities [1]. Although the pathogenesis of BD is unclear, the activation of both innate and adaptive immunity plays an important role in the development of BD [2]. BD has been classified at the intersection of autoimmune and autoinflammatory syndromes, which show unprovoked exacerbation and remission of inflammatory episodes. Although renal involvement in BD is relatively rare, there has been an increasing number of reports showing a connection [3]. Among renal manifestations of BD, glomerular disease is relatively rare, and only a few reports describe a case of minimal change disease (MCD). Here we present a case of parallel disease activity of entero and renal BD, diagnosed as MCD, followed by a spontaneous complete remission without corticosteroid treatment.
Case presentation
A 54-year-old Japanese woman who had a 4-year history of BD suffered abdominal pain, hemorrhagic stool and diarrhea. BD was diagnosed based on the presence of oral and genital ulcers and erythema nodosum, and the carriage of human leukocyte antigen (HLA)-B51. She started taking prednisolone at 20 mg/day and colchicine, resulting in disease remission. Prednisolone was tapered down and discontinued within a year, while colchicine was continued as maintenance therapy. She was admitted to our department for the examination and treatment of abdominal symptoms.
Physical examination revealed strong tenderness in the entire abdomen. Laboratory test results (Table 1) showed elevated levels of white blood cell counts (WBC; 32,080 /μL) and C-reactive protein (CRP; 26.7 mg/dL), and decreased levels of serum albumin (2.9 g/dL). The interferon-gamma release assay was negative. A computed tomography scan revealed edematous intestinal wall thickening with ascites. Blood cultures from separate samplings showed no microbial growth, and stool culture did not yield any specific bacteria that cause enteritis, such as enteropathogenic Escherichia coli or Campylobacter. Based on the findings of the physical examination, laboratory test results and imaging studies, she was suspected to have developed panperitonitis with acute enterocolitis. Therefore, we started treatment with an antibacterial agent (meropenem) under bowel rest, and she required analgesics (acetaminophen) for 1 week. Her abdominal pain and hemorrhagic stool gradually improved; however, diarrhea and high levels of CRP persisted. The antibacterial agent was discontinued on day 18. Colonoscopy on day 24 revealed discontinuous abnormal mucosal vascular patterns and ulcerations in the cecum, ascending colon, transverse colon, descending colon and sigmoid colon (Fig. 1a). Histological analyses of the intestine showed transmural mucosal infiltration of inflammatory cells, including lymphocytes and neutrophils, without findings of epithelioid granuloma or amyloid deposition (Fig. 1b). Based on these findings, the patient was diagnosed with entero BD. Meanwhile, pedal edema had appeared and been exacerbated from day 7. Urinalyses showed high levels of proteinuria (urine protein to urine creatinine ratio of 7.42), consistent with nephrotic syndrome (Table 1). Given the concurrent development of proteinuria and entero BD and the absence of abnormal findings on immunoserological testing (Table 1), we suspected BD-associated glomerulonephritis and thus performed a renal biopsy. Light microscopy showed no obvious glomerular or interstitial abnormalities, whereas electron microscopy revealed foot process effacement without immune complex deposition or fibrillary structure (Fig. 1c and d), compatible with MCD. With conservative therapy only, her proteinuria began to decrease and her pedal edema gradually improved. Three weeks after the onset of pedal edema, the patient had achieved complete remission without any additional treatment.
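For orientation, nephrotic-range proteinuria is commonly defined by a urine protein-to-creatinine ratio above roughly 3.5 g/g together with hypoalbuminemia; the brief sketch below applies these general textbook cut-offs, which are our addition rather than criteria stated by the authors, to the values reported in this case.

```python
# Illustrative only: commonly cited laboratory thresholds for nephrotic
# syndrome (UPCR > 3.5 g/g, serum albumin < 3.0 g/dL); these cut-offs are
# general reference values, not criteria taken from the case report.

def nephrotic_range(upcr_g_per_g: float, serum_albumin_g_per_dl: float) -> bool:
    """Screen for nephrotic-range proteinuria with hypoalbuminemia."""
    heavy_proteinuria = upcr_g_per_g > 3.5
    hypoalbuminemia = serum_albumin_g_per_dl < 3.0
    return heavy_proteinuria and hypoalbuminemia

# Values reported in this case: UPCR 7.42; serum albumin 2.9 g/dL on admission.
print(nephrotic_range(7.42, 2.9))  # True
```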
The coincident episode of MCD was finally diagnosed as renal BD whose disease activity paralleled that of entero BD. To manage the activity of BD and maintain the remission of renal BD, she was administered adalimumab (initial doses of 120 mg and 80 mg on days 32 and 46, respectively, followed by 40 mg every other week), resulting in further improvement of the diarrhea and the serum levels of CRP. Both entero and renal BD remained in remission during a 3-month follow-up (Fig. 2).
Discussion and conclusions
The present case of MCD with a rare manifestation of renal involvement of BD showed spontaneous complete remission in 3 weeks from the onset of pedal edema without corticosteroid treatment, paralleled with the disease activity of entero BD. This report indicates the potential of renal BD to show remission without specific treatment depending on the activity of BD.
Although the pathogenesis of BD is obscure, genetic factors and immunological aberrations have been shown to play an important role in the development and progression of BD [2]. Various immune cells and cytokines released by activated innate and adaptive immune systems, including autoimmune regulatory T (Treg) cells and type 22 T helper (Th22) cells, have been reported to be involved in the immunopathogenesis of BD. The Th17/Treg balance is important in the regulation of inflammation in patients with active BD [4], while increased Th22-type cytokines and cells have been shown to be involved in the acute immune response in BD [5,6]. With respect to MCD, circulating mediators produced by abnormal T cells are thought to be related to its development. The overexpression of interleukin-13 (IL-13), which is produced mainly by Th2 cells and partly by Th22 cells [7], induced MCD-like disease with foot process effacement and proteinuria [8]. Moreover, a hypofunction of Treg cells has been shown to be crucial for the development of MCD [9]. Based on these reports, we hypothesize that a malfunction of Treg cells or increased levels of IL-13 produced in part by Th22 cells in BD might be related to the development of MCD as the common etiology.

Fig. 1 Findings of colonoscopy, intestinal biopsy and renal biopsy. a Discontinuous abnormal mucosal vascular patterns and ulcerations were detected in the cecum, ascending colon, transverse colon, descending colon and sigmoid colon by colonoscopy. b Histological analyses of the intestine showed transmural mucosal infiltration of inflammatory cells including lymphocytes and neutrophils. c Periodic acid-Schiff staining of the kidney showed no glomerular or interstitial abnormalities. Scale bar, 50 μm. d Electron microscopy of the kidney revealed foot process effacement (arrowheads) without immune complex deposition or fibrillary structure. Scale bar, 2 μm.
The frequency of renal manifestations in BD is reported to vary from less than 1% to 29%, with wide clinical and histological spectrums. The underlying pathological changes in the kidney are classified into five groups: (a) amyloidosis, (b) glomerulonephritis, (c) renal vascular involvement, (d) interstitial nephritis, and (e) others such as drug-induced nephrotoxicity. Treatment of renal BD depends on the pathological changes and other organ involvements; corticosteroids, colchicine, azathioprine and cyclophosphamide have been used in the management of glomerulonephritis in BD. The prognosis of patients with glomerulonephritis in renal BD is favorable, with only a few cases known to have developed into end-stage renal disease [3]. MCD is a rare manifestation of renal BD, and only a few cases have been reported in the literature to date [10,11]. These developed in an active state of BD along with oral and genital ulcers and venous thrombosis, followed by improvement with corticosteroid treatment.
In the present case, the patient was diagnosed with entero BD based on the findings of colonoscopy and histological analysis of intestinal biopsy; discontinuous ulcerations were observed throughout the colon including cecum with transmural mucosal infiltration of inflammatory cells, compatible with previous cases [12]. Infectious enteritis and other inflammatory bowel disease were excluded based on the clinical course and the findings of colonoscopy, histological analysis and cultivation tests. A coincident episode of MCD was diagnosed as renal BD that paralleled disease activity to intestinal involvement. No other secondary causes of MCD, such as infections, allergy, malignancies and drugs were detected [13].
Of note, our case of MCD in renal BD showed spontaneous remission within 3 weeks from the onset of symptoms without immunosuppressive therapy. There are a few cases of renal BD that showed spontaneous remission [14,15]. These reports described immunoglobulin A (IgA) nephropathy as a manifestation of renal BD that occurred during an inactive state of BD, followed by complete remission within 1 year from diagnosis. Although it may be difficult to rule out primary IgA nephropathy, the reports indicate the possibility of renal BD showing spontaneous remission. A randomized trial that explored the use of corticosteroids in MCD showed that about 60% of the patients with MCD in the control group experienced a spontaneous remission within 2 years, although proteinuria decreased only slightly during the first month among control patients compared with the corticosteroid-treated group [16]. On the other hand, a patient with MCD associated with influenza B infection showed spontaneous remission within 2 weeks after the onset of symptoms with conservative treatment only [17]. Though more evidential reports should be accumulated, it is worth keeping in mind that resolution of the cause of MCD might lead to early remission, as in our case.
Tumor necrosis factor-α (TNF-α) is a representative pro-inflammatory cytokine produced by a wide range of immune cells and plays an important role in the induction and maintenance of inflammation in the autoimmune response. TNF-α antagonists, such as infliximab, adalimumab and etanercept, have been shown to be an effective treatment for BD [1]. We have discussed the present case of renal biopsy-proven MCD as secondary to BD because (a) the disease peak matched that of entero BD, (b) no other secondary cause of MCD was detected, and (c) the disease reached early remission without corticosteroid therapy. Since BD and MCD share the common pathogenesis of immunological aberrations, adalimumab would be a useful treatment for MCD secondary to BD in our case to maintain remission and manage the activity of BD itself, as etanercept showed effectiveness in a case of nephrotic syndrome due to focal segmental glomerulosclerosis in renal BD [18]. This is the first case report to demonstrate that MCD as a renal involvement of BD showed spontaneous remission along with the disease activity of BD. Renal BD, having the properties of both autoimmune diseases and autoinflammatory syndromes, may have the potential to show unprovoked remission without specific treatment depending on its disease state.
"year": 2021,
"sha1": "fadafb7d01b2d11b1c485768bcdf27b316770635",
"oa_license": "CCBY",
"oa_url": "https://bmcnephrol.biomedcentral.com/track/pdf/10.1186/s12882-021-02327-9",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "59ba361039c5613732c52a946493747ebdabf259",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
53598006 | pes2o/s2orc | v3-fos-license | Extraction Meets Abstraction: Ideal Answer Generation for Biomedical Questions
The growing number of biomedical publications is a challenge for human researchers, who invest considerable effort to search for relevant documents and pinpointed answers. Biomedical Question Answering can automatically generate answers for a user’s topic or question, significantly reducing the effort required to locate the most relevant information in a large document corpus. Extractive summarization techniques, which concatenate the most relevant text units drawn from multiple documents, perform well on automatic evaluation metrics like ROUGE, but score poorly on human readability, due to the presence of redundant text and grammatical errors in the answer. This work moves toward abstractive summarization, which attempts to distill and present the meaning of the original text in a more coherent way. We incorporate a sentence fusion approach, based on Integer Linear Programming, along with three novel approaches for sentence ordering, in an attempt to improve the human readability of ideal answers. Using an open framework for configuration space exploration (BOOM), we tested over 2000 unique system configurations in order to identify the best-performing combinations for the sixth edition of Phase B of the BioASQ challenge.
Introduction
Human researchers invest considerable effort when searching very large text corpora for answers to their questions. Existing search engines like PubMed (Falagas et al., 2008) only partially address this need, since they return relevant documents but do not provide a direct answer to the user's question. The process of filtering and combining information from relevant documents to obtain an ideal answer is still time consuming (Tsatsaronis et al., 2015). Biomedical Question Answering (BQA) systems can automatically generate ideal answers for a user's question, significantly reducing the effort required to locate the most relevant information in a large corpus.
Our goal is to build an effective BQA system to generate coherent, query-oriented, non-redundant, human-readable summaries for biomedical questions. Our approach is based on an extractive BQA system (Chandu et al., 2017) which performed well on automatic metrics (ROUGE) in the 5th edition of the BioASQ challenge. However, owing to the extractive nature of this system, it suffers from problems in human readability and coherence. In particular, extractive summaries which concatenate the most relevant text units from multiple documents are often incoherent to the reader, especially when the answer sentences jump back and forth between topics. Although the existing extractive approach explicitly attempts to reduce redundancy at the sentence level (via SoftMMR), stitching together existing sentences always admits the possibility of redundant text at the phrase level. We improve upon the baseline extractive system in 3 ways: (1) re-ordering the sentences that are selected by the extractive algorithm; (2) fusing words and sentences to form a more human-readable summary; and (3) using automatic methods to explore a much larger space of system configurations and hyperparameter values when optimizing system performance. We hypothesize that the first two techniques will improve coherence and human readability, while the third technique provides an efficient framework for tuning these approaches in order to maximize automatic evaluation (ROUGE) scores.
Overview of Baseline System Architecture
In this section, we provide a brief layout of our baseline system, which achieved the top ROUGE scores in the final test batches of the fifth edition of the BioASQ Challenge (Chandu et al., 2017). This system includes baseline modules for relevance ranking, sentence selection, and sentence tiling. The baseline relevance ranker performs the following steps: 1) expand concepts in the original question using a metathesaurus, such as UMLS (Bodenreider, 2004) or SNOMEDCT (Donnelly, 2006); and 2) calculate a relevance score (e.g. Jaccard similarity) for each question/snippet pair (to measure relevance) and each pair of generated snippets (to measure redundancy). The baseline sentence selection model used the Maximal Marginal Relevance (MMR) algorithm (Carbonell and Goldstein, 1998), which iteratively selects answer sentences according to their relevance to the question and their similarity to sentences that have already been selected, until a certain number of sentences have been selected. The baseline sentence tiling module simply concatenates selected sentences up to a given limit on text length (200 words), with no attempt to model or improve the coherence of the resulting summary.
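For concreteness, the following is a minimal Python sketch of the MMR selection loop described above, assuming Jaccard similarity over token sets; the weighting scheme and stopping rule are illustrative simplifications, not the baseline system's exact implementation.

    def jaccard(a, b):
        # Jaccard similarity between the token sets of two strings.
        sa, sb = set(a.lower().split()), set(b.lower().split())
        return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

    def mmr_select(question, candidates, alpha=0.5, n_sentences=5):
        # Iteratively pick the sentence that best balances relevance to the
        # question against redundancy with already selected sentences.
        selected, pool = [], list(candidates)
        while pool and len(selected) < n_sentences:
            def score(s):
                relevance = jaccard(question, s)
                redundancy = max((jaccard(s, t) for t in selected), default=0.0)
                return alpha * relevance - (1.0 - alpha) * redundancy
            best = max(pool, key=score)
            selected.append(best)
            pool.remove(best)
        return selected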
The baseline system achieved high ROUGE scores, but performed poorly on the human readability evaluation in BioASQ 2017. In order to improve human readability, we first developed several post-processing modules, such as sentence reordering and sentence fusion, which will be discussed in detail in following sections.
3 Sentence Ordering
Motivation
As discussed in Section 1, we tried to improve upon the Soft MMR system (Chandu et al., 2017). This pipeline uses relevance as a proxy for ordering the selected sentences in the final summary; it does not take into account the flow and transitions between sentences needed to build a coherent answer. Since the maximum length of the answer is 200 words (as imposed by the guidelines of the competition), this system optimizes for selecting the most non-redundant, query-relevant sentences to maximize the ROUGE score. In this section, we focus on different types of sentence ordering that lead to more coherent answers.
Similarity Ordering
The intuition behind the Similarity Ordering algorithm is that sentences that have similar content should appear consecutively so that the generated answer is not jumping back and forth between topics. Our implementation is based on work by Zhang (2011), which discusses the use of similarity metrics at two levels -first to cluster sentences, and then to order them within a cluster -which can lead to big improvements in coherency and readability. We apply this approach to the BQA domain, where we cluster our set of candidate answers using k-means with k = 2. We then order the sentences within each cluster, starting with the candidate sentence nearest to the centroid of its cluster and working outward. The intuition is that the most central sentence will contain the largest number of tokens shared by all the sentences in the cluster, and is therefore likely to be the most general or comprehensive sentence in the cluster. This supports our goal of an ideal answer that begins with a broad answer to the question, followed by specifics and supporting evidence from the literature.
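A minimal sketch of this two-level scheme is given below; the TF-IDF representation and Euclidean distance to the centroid are assumptions for illustration, since the choice of sentence representation is not specified above.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    def similarity_order(sentences, k=2):
        # Cluster candidate sentences, then order each cluster from the
        # sentence closest to its centroid (the most "general" one) outward.
        X = TfidfVectorizer().fit_transform(sentences).toarray()
        km = KMeans(n_clusters=k, n_init=10).fit(X)
        ordered = []
        for c in range(k):
            idx = np.where(km.labels_ == c)[0]
            dists = np.linalg.norm(X[idx] - km.cluster_centers_[c], axis=1)
            ordered.extend(sentences[i] for i in idx[np.argsort(dists)])
        return ordered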
In Figure 1a we see that the order of the sentences that appear in the final answer is completely independent of their ordering in the original snippets.
Majority Ordering
The Majority Ordering algorithm (Barzilay and Elhadad, 2002) makes two main assumptions that are quite reasonable: sentences coming from the same parent document should be grouped together, and the most coherent ordering of a group of sentences is how they were presented in their parent document. Topically, it is logical that sentences drawn from the same parent document would be similar. Grammatically and syntactically, it is logical that the sentences may be structured in a way such that maintaining an invariant ordering would aid human comprehension.
Specifically, the Majority Ordering algorithm groups sentences by their parent document and then orders the blocks by the ranking of the highest ranked sentence in a block. Figure 1 illustrates the differences between Similarity Ordering, Majority Ordering, and Block Ordering. The color of each sentence unit indicates the document it was selected from, and the suffix indicates the relevance score of that unit within the document.
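A compact sketch of this grouping, assuming each candidate carries its parent document id, its position within that document, and its relevance score:

    def majority_order(candidates):
        # candidates: list of (doc_id, position_in_doc, relevance, text).
        # Group by parent document, keep the original in-document order
        # inside each block, and order blocks by their best-ranked sentence.
        blocks = {}
        for doc, pos, rel, text in candidates:
            blocks.setdefault(doc, []).append((pos, rel, text))
        ordered = sorted(blocks.values(),
                         key=lambda b: max(r for _, r, _ in b),
                         reverse=True)
        return [t for b in ordered for _, _, t in sorted(b)]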
Block Ordering
Intuitively, the Block Ordering algorithm is an amalgamation of the Similarity Ordering and Majority Ordering algorithms. The Block Ordering algorithm has two primary components. The first component involves grouping the sentences into blocks based on their parent document. This step is shared between the Block Ordering algorithm and the Majority Ordering algorithm. The second step involves ordering the grouped blocks of text.
The algorithm for ordering the blocks of text combines document heuristics with our Similarity Ordering algorithm. We first order the blocks by their length (the number of sentences in the block). For blocks of equal length, we calculate the similarity of each block with the last fixed sentence. Hence, given the last sentence of the preceding block, we select the next block first by its length, and then by the similarity of the block with the preceding sentence. If there is no single longest block to begin the answer, then we select the longest block that is most similar to the entire answer. This algorithm is tuned for specific goals with respect to human comprehension and readability. Grouping the sentences into blocks is done to maximize local coherence. The use of block length as an ordering heuristic is done to order topics by relevance. Finally, ordering blocks of equal length by similarity to the preceding sentence is done to maximize sentence continuity and fluidity.
In Figure 1c the green block is ordered first because it is the longest. The blue block is ordered second because it has the highest similarity score with sentence 3.4. The yellow block is ordered third because it has a higher similarity with sentence 2.2, and the red block is thus last.
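The following sketch captures the block-selection loop just described; the similarity function sim is left abstract (Jaccard over tokens would suffice), and comparing the preceding sentence against a block's joined text is an illustrative simplification.

    def block_order(candidates, sim):
        # candidates: list of (doc_id, position_in_doc, text) tuples.
        blocks = {}
        for doc, pos, text in candidates:
            blocks.setdefault(doc, []).append((pos, text))
        # Keep the original in-document order inside each block.
        blocks = [[t for _, t in sorted(b)] for b in blocks.values()]
        answer = []
        while blocks:
            longest = max(len(b) for b in blocks)
            ties = [b for b in blocks if len(b) == longest]
            if answer:  # pick the tied block most similar to the last sentence
                nxt = max(ties, key=lambda b: sim(answer[-1], " ".join(b)))
            else:       # opening block: take the first of the longest blocks
                nxt = ties[0]
            answer.extend(nxt)
            blocks.remove(nxt)
        return answer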
Quantitative Analysis
To evaluate our approaches, we performed a manual analysis of 100 different answers, ordered by each of our proposed ordering algorithms (see Table 1). We rate each ordering as 'reasonable' or 'unreasonable'. Note that this rating does not pass judgment on the correctness of the answer, since it is designed for a comparative analysis at the module level (i.e. to compare ordering approaches rather than content selection).
Qualitative Analysis
Because sentence ordering in the baseline system is based solely on question-answer relevance, we identified two major issues: global coherence and local coherence. The global coherence issue is generally a problem of layout and cohesiveness. An ideal answer would begin with a broad answer to the question and move into more specific details and any available evidence to support the answer. Further, an ideal answer should not hop back and forth between topics, and should stick to one before moving on to another. The baseline system did a decent job of beginning with a broad answer to the question because the input sequence is ordered by relevance score. However, after the first sentence, answers tended towards redundant information and divergent trains of thought.
The local coherence issue has more to do with the semantics of the sentence and grammatical restrictions of the language. For instance, language like 'There was also' should not appear in the first sentence of an answer, because this makes no sense logically. Additionally, certain words like 'Furthermore' indicate that the content of the sentence is highly dependent on the content of the preceding sentence(s), and this dependency is frequently broken by the baseline ordering approach.
Similarity Ordering
We found that the Similarity Ordering performed poorly; only 55 of 100 answers were deemed 'reasonable'. We believe that this is due to the high degree of similarity between the candidate sentences in our domain. Because the candidate sentences are so similar to each other, the results of clustering are highly variant and appeared almost arbitrary at times. All the sentences contain similar language and key phrases, which makes it difficult to create meaningful sub-clusters. Additionally, one of the biggest problems with our system arose from sentences beginning with phrases like 'However' and 'Furthermore', which place strict requirements on the content of the preceding sentence. This was particularly problematic for the Similarity Ordering algorithm, which has no mechanism for making sure that such sentences are placed logically with their dependent sentences. The Similarity Ordering algorithm does perform relatively well in creating logical groups of sentences that cut down on how often an answer jumps from one topic to another. Additionally, these groups are ordered well, beginning with the more general of the two and then finishing with specifics and a presentation of the supporting data. However, we note that the problems with local coherence greatly outweigh the strengths in global coherence, since a good answer can still be coherent even if the organization could be improved, whereas if local coherence is poor, then the answer becomes nonsensical.
Majority Ordering
The Majority Ordering algorithm proved to be a successful method for ordering sentences, where 71 out of 100 answers were deemed 'reasonable'. The Majority Ordering displayed very strong local coherence, which confirms the hypothesis that sentences should likely be kept in their original ordering to maximize human readability and coherence.
However, this algorithm faced issues with global coherence. It produced answers that start with a relevant topic more often than not; however, after the initial block, it struggled to smoothly transition from one block to the next. This is consistent with expectations for the Majority Ordering algorithm. The block with the highest-rated sentence is ordered first, which explains why the first block is frequently the most topically relevant. After the initial block placement, however, the algorithm makes no explicit attempt to manage or smooth transitions between blocks. Compared with the other two algorithms, this is where the Majority Ordering algorithm displays its poorest performance. It performs strongly when ordering sentences within a block, enforcing local coherence so that sentences beginning with language such as 'Finally', 'Lastly', 'Therefore', etc. follow a related sentence that satisfies the sequential dependency.
Block Ordering
The Block Ordering algorithm produced the best answers, with 75 out of 100 answers ranked as 'reasonable'. This is consistent with our expectations, as the Block Ordering algorithm effectively combines the strongest aspects of the Majority Ordering and Similarity Ordering algorithms. With respect to local coherence, this algorithm displays similar performance when compared to the Majority Ordering algorithm, while displaying stronger coherence between blocks (due to the use of a similarity metric to order blocks). This algorithm also displayed the strongest global coherence, which is likely due to first grouping the sentences into blocks and then ordering them. This algorithm displayed one core weakness, which is its inability to identify high-quality opening sentences. This is due to the usage of block length as a heuristic for topic relevance. While in the majority of cases this heuristic proved to be successful, accounting for these outliers may significantly improve the performance of the Block Ordering algorithm. We note that the Block Ordering algorithm performed well in producing high-quality, coherent answers; although the development of coherence models and measures is not the main focus of this paper, we can see that Block Ordering performs the best with respect to the simple coherence evaluation we conducted.
Sentence Fusion
An observed weakness of the original system is that the generated summaries often contain highly repetitive information. While MMR is added in the pipeline to deal with redundancy and maximize the diversity of covered information, extractive summarization still picks entire sentences that may partially overlap with a previously selected sentence. To tackle this problem, we introduce sentence fusion as a way to identify common information among sentences and apply simple abstractive techniques over the baseline extractive summaries.
Methodology
Given a set of candidate sentences generated by the pipeline for each summary, the sentence fusion module operates in two steps: 1) the candidate set is expanded to include fused sentences, and 2) sentences are selected from the expanded set to produce a new summary.
Expansion of Candidate Set
To generate fused sentences, we begin by building upon previous work on multiple-sentence compression (Filippova, 2010), in which a directed word graph is used to express sentence structures. The word graph is constructed by iteratively adding candidate sentences. All words in the first sentence are added to the graph by creating a sequence of word nodes. A word in the following sentence is then mapped onto an existing word node if and only if it is the same word, with the same part of speech. Our assumption is that a shared node in the word graph is likely to refer to the same entity or event across sentences.
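A compact sketch of the graph construction using networkx, assuming POS-tagged input; the node-mapping rule here is the simplified 'same word, same part of speech' criterion, without the ambiguity handling of the full algorithm.

    import networkx as nx

    def build_word_graph(tagged_sentences):
        # tagged_sentences: list of sentences, each a list of (word, pos) pairs.
        g = nx.DiGraph()
        for sent in tagged_sentences:
            path = ["<START>"] + [f"{w.lower()}/{p}" for w, p in sent] + ["<END>"]
            for node in path:
                if node not in g:
                    g.add_node(node, freq=0)
                g.nodes[node]["freq"] += 1   # how many sentences map onto this node
            for a, b in zip(path, path[1:]):
                g.add_edge(a, b)             # edge weights are assigned afterwards
        return g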
We then generate K possible fused sentences by searching for the K shortest paths within the word graph. The definition of the edge weights follows the original paper (Filippova, 2010):

w'(e_{i,j}) = (freq(i) + freq(j)) / Σ_{s∈S} diff(s,i,j)^{-1},    w(e_{i,j}) = w'(e_{i,j}) / (freq(i) × freq(j))

where freq(i) is the number of words mapped onto node i and diff(s,i,j) is the difference between the offset positions of words i and j in sentence s. Intuitively, we want to promote a connection between two word nodes with close distance, and between nodes that have multiple paths between them. We also prefer a compression path that goes through the most frequent non-stopword nodes, to emphasize important words. When applying the sentence fusion technique to the BioASQ task, we first pre-process the candidate sentences to remove transition words like 'Therefore' and 'Finally'. Such transition words may be problematic because they are not necessarily suitable for the new logical intent of the fused sentences, and may break the coherence of the final answer. We also constrain fusion so that the fused sentences are more readable. For instance, we only allow fusing pairs of sentences of proper length, in order to avoid generating overly complicated sentences. We also avoid fusing sentences that are too similar or too dissimilar. In the first case, the information in the two sentences is largely repetitive, so we simply discard the one containing less information. In the latter case, fusing two dissimilar sentences is more likely to confuse the reader with too much information than to improve readability. Finally, we add a filter to discard ill-formed sentences, according to hand-crafted heuristics.
Selecting Sentences from Candidate Set
The next step is to select sentences from the candidate set and produce a new summary. An Integer Linear Program (ILP) is formulated following Gillick and Favre (2009):

max Σ_i w_i z_i    subject to    Σ_j l_j y_j ≤ L;    A_{ij} y_j ≤ z_i ∀ i, j;    Σ_j A_{ij} y_j ≥ z_i ∀ i;    y_j, z_i ∈ {0, 1}

In this formulation, z_i is an indicator of whether concept i is selected into the final summary, w_i is the corresponding weight for the concept, l_j is the token length of sentence j, and L is the summary length budget. The goal is to maximize the coverage of important concepts in a summary. During the actual experiments, we assign diminishing weights so that later occurrences of an existing concept are less important. This forces the system to select a more diverse set of concepts. We follow the convention of using bigrams as a surrogate for concepts (Berg-Kirkpatrick and Klein, 2011; Gillick and Hakkani-Tür, 2008), and bigram counts as initial weights. Variable A_{ij} indicates whether concept i appears in sentence j, and variable y_j indicates whether sentence j is selected. Table 2 shows the results of different configurations of the ordering and fusion algorithms (Rows 1-4, Row 7, Row 9). Though the overall ROUGE score drops slightly from 0.69 to 0.61 after sentence fusion with the ILP-selection step, this is still competitive with other systems (including the baseline). The sentence re-ordering does not directly impact the ROUGE scores.
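A sketch of this concept-coverage ILP using the PuLP solver is shown below; the bigram extraction and weighting steps are omitted, and the variable names mirror the formulation above.

    import pulp

    def ilp_select(weights, occurs, lengths, budget=200):
        # weights: {concept: w_i}; occurs: set of (concept, sentence_index)
        # pairs where A_ij = 1; lengths: token length of each sentence.
        concepts, sents = list(weights), range(len(lengths))
        prob = pulp.LpProblem("summary", pulp.LpMaximize)
        z = pulp.LpVariable.dicts("z", concepts, cat="Binary")
        y = pulp.LpVariable.dicts("y", sents, cat="Binary")
        prob += pulp.lpSum(weights[i] * z[i] for i in concepts)
        prob += pulp.lpSum(lengths[j] * y[j] for j in sents) <= budget
        for i in concepts:
            covering = [j for j in sents if (i, j) in occurs]
            for j in covering:
                prob += y[j] <= z[i]   # selecting a sentence selects its concepts
            prob += pulp.lpSum(y[j] for j in covering) >= z[i]
        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        return [j for j in sents if y[j].value() == 1]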
Discussion
We manually examined the fused sentences for 50 questions. We found that our sentence fusion technique is capable of breaking down long sentences into independent pieces, and is therefore able to disregard irrelevant information. For example, given a summary containing the original sentence:
'Thus, miR-155 contributes to Th17 cell function by suppressing the inhibitory effects of Jarid2. (2014) bring microRNAs and chromatin together by showing how activation-induced miR-155 targets the chromatin protein Jarid2 to regulate proinflammatory cytokine production in T helper 17 cells.'
our fusion technique is able to extract important information and formulate it into complete sentences, producing a new summary containing the following sentence: 'Mir-155 targets the chromatin protein jarid2 to regulate proinflammatory cytokine expression in th17 cells.' The fusion module is also able to compress multiple sentences into one, with minor grammatical errors. For example:
Sentence 1: 'The RESID Database is a comprehensive collection of annotations and structures for protein post-translational modifications including N-terminal, C-terminal and peptide chain cross-link modifications[1].' Sentence 2: 'The RESID Database contains supplemental information on post-translational modifications for the standardized annotations appearing in the PIR-International Protein Sequence Database[2]'
our approach produces the fused sentence:
'The RESID Database contains supplemental information on post-translational modifications[1] is a comprehensive collection of annotations and structures for protein post-translational modifications including N-terminal, C-terminal and peptide chain cross-link modifications[2].'
However, the overall quality of fused sentences is not stable. As shown in Figure 2, around 25% of the selected sentences in final summaries are fused. Among the fused sentences, 47% improved the overall readability by reducing redundancy and repetition. 5% of the sentences improved readability but contained minor grammatical errors, such as a missing subordinate conjunction or superfluous discourse markers. 8% of the fused sentences did not have an appreciable effect on readability. However, a large number of fused sentences (around 26%) were not coherent and degraded the quality of the answer.
Further Improvements
In order to further improve the performance of our system, we made a few modifications to each module, and improved the overall architecture of the module pipeline:

• Modification of System Architecture: We intuited that the ILP process in the sentence fusion model could not handle a very large number of candidate inputs, producing many redundant, similar fused sentences. In order to resolve this problem, we removed the ILP model from the sentence fusion step, and moved the sentence fusion step before the sentence selection module (Rows 12-13), so that the MMR algorithm in the sentence selection module could take care of eliminating redundant fused sentences.
• Modifications to Sentence Selection Module and Relevance Ranker: For the sentence selection module, we modified the original MMR model. The original MMR model selected a fixed number of sentences, which naturally introduced repetition. In order to reduce repetition, we built a so-called 'Early-Stop MMR', which stops selecting sentences when the maximum overlap score grows beyond a certain threshold and the minimum relevance score drops below another threshold (Rows 4-8); a sketch of this variant follows this list.
For the relevance ranker, we explored an alternative similarity metric (Row 6). The Query Likelihood Language Model (Schütze et al., 2008) is widely used in information retrieval. We formulated the relevance ranking procedure as an information retrieval problem and used a language model, so that long sentences would receive a higher penalty.

• Post-Processing: To further reduce repetition, we add an additional filter before final concatenation by iteratively adding the selected sentences to the final output, and discarding a sentence if it is too similar to the existing summary (Rows 8, 11 and 13).
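Below is a sketch of the Early-Stop variant referenced in the list above; the threshold values, the Jaccard similarity, and treating the two stopping conditions independently are illustrative assumptions, not the tuned behavior of the actual system.

    def early_stop_mmr(question, candidates, alpha=0.5,
                       max_overlap=0.6, min_relevance=0.05):
        # Like plain MMR, but stop once the best remaining candidate is
        # either too redundant or too weakly relevant.
        def jac(a, b):
            sa, sb = set(a.lower().split()), set(b.lower().split())
            return len(sa & sb) / len(sa | sb) if sa | sb else 0.0
        selected, pool = [], list(candidates)
        while pool:
            best = max(pool, key=lambda s: alpha * jac(question, s)
                       - (1 - alpha) * max((jac(s, t) for t in selected),
                                           default=0.0))
            overlap = max((jac(best, t) for t in selected), default=0.0)
            if overlap > max_overlap or jac(question, best) < min_relevance:
                break
            selected.append(best)
            pool.remove(best)
        return selected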
Configuration Space Exploration
Configuration Space Exploration (CSE) is the technique of trying different combinations of module configurations to find the best-performing one (Yang et al., 2013; Yang, 2016). We used the BOOM framework to explore and optimize the space of hyperparameters and module configurations. We explored 2,268 unique configurations of three different hyperparameters, described below and followed by a sketch of the search loop: α, used for the MMR module; k, used for the clustering-based Ordering module; and a token limit, used in the Tiling module. Figure 3 shows the pipeline structure we used.
• Alpha: This parameter of the MMR module controls the trade-off between snippet similarity to the question and snippet similarity to already selected snippets. In our experiments, alpha was varied between 0 and 1 at intervals of 0.05. We found that the ideal value for alpha is 0.1.
• Number of Clusters: The k in the Ordering module controls the number of clusters used to order the snippets for the clustering-based Sentence Ordering algorithms. A small k value produces few, general clusters, while a large k value produces many highly specific clusters, at the risk of creating clusters that are meaningless or that contain only a single sentence. In our experiments, k was tested at values from 2 to 10. Although the effect on ROUGE score was very small, we found that the ideal value for k is 3. A caveat to this result is that we are measuring the effect that hyperparameter k has on the final ROUGE scores achieved by the system. Since the purpose of k is to assist in sentence ordering, not precision or recall, we would expect that adjusting k would have a negligible impact on the ROUGE score. Further parameter tuning is needed in cases like this, where the primary effect of the parameter is not easily captured by ROUGE.
• Token Limit: The token limit is used by the Tiling module to set a maximum number of allowed tokens in the answer. If the cumulative token count of the selected snippets exceeds the token limit then sentences will be removed from the end of the final answer until the token limit is satisfied. In our experiments the token limit was tested at values from 25 to 300 in increments of 25. We have found that the ideal value for the token limit is 100.
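The full grid implied by these ranges is 21 × 9 × 12 = 2,268 configurations, matching the count reported above. A minimal sketch of the exhaustive search follows, with run_pipeline as a hypothetical stand-in for one BOOM-managed pipeline run (not BOOM's actual API):

    from itertools import product

    def run_pipeline(alpha, k, token_limit):
        # Hypothetical stand-in for one end-to-end run of the summarization
        # pipeline; it should return the ROUGE score obtained with the
        # given configuration (stubbed here for illustration).
        return 0.0

    alphas = [round(i * 0.05, 2) for i in range(21)]   # 0.0 to 1.0, step 0.05
    ks = list(range(2, 11))                            # 2 to 10 clusters
    token_limits = list(range(25, 301, 25))            # 25 to 300 tokens

    best = None
    for alpha, k, limit in product(alphas, ks, token_limits):
        score = run_pipeline(alpha=alpha, k=k, token_limit=limit)
        if best is None or score > best[0]:
            best = (score, alpha, k, limit)
    print("best score and configuration:", best)       # 21*9*12 = 2,268 runs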
The two distinct clusters found in the histogram shown in Figure 4 are entirely explained by the token limit. All scores less than 0.27 were obtained by configurations where the token limit was set to 25. The rest of the scores, all above 0.28, were obtained by configurations where the token limit was greater than or equal to 50. In addition to the ROUGE score penalty for extremely low token limits, we observed a significant, though much smaller, penalty for token limits of 150 and greater. Table 2 shows the results of extensions to the baseline system. Two systems are highlighted (Rows 5 and 10), as they give the most balanced results between the quality of retrieved information and conciseness: one system performs sentence selection, then ranks sentences by relevance prior to ordering, and applies the additional post-processing step (Row 5); the other system performs sentence selection, then fusion, and then ranks sentences prior to ordering, without the post-processing step (Row 10).
Analysis: Effects of Individual Modules
Rows 5, 8, 11 and 13 show the effectiveness of the additional post-processing step. Overall, this procedure is able to reduce the answer length, while preserving important information. We observed that the post-processing step is less effective when fusion is performed after MMR. This is because in these settings, there is an additional sentence selection step in the fusion module using integer linear programming that forces the selected sentences to be diverse. In all other settings, including when fusion is performed prior to MMR, we only have one sentence selection step. Since MMR iteratively selects sentences according to both similarity and relevance, the last selected ones may be informative but repetitive. Row 6 shows our experiments with language modeling; the language model gives a higher penalty to longer sentences, which produces shorter but less informative results.
Analysis: Impact of System Architecture
Exploring the performance of systems using different architectures, we observed that systems with fusion prior to ordering can generate more logically coherent summaries. Table 3 shows an example. All underlined sentences express the same fact, that DVL1 is the cause of Robinow syndrome. In Row 1, where fusion is performed after ordering, there is a sentence that serves as an explanation between the underlined sentences, which breaks the logical coherence. In Rows 2 and 3, where ordering is performed after fusion, the generated answers demonstrate better coherence: all underlined sentences are placed together, followed by the explanation, and the opening sentences are also more concise and more directly related to the question.
We also experimented with architectures where the fusion module is run prior to MMR, and MMR is used as the only sentence selection step. In these systems, MMR receives many fused sentences that overlap and complement each other at the same time, because all similar sentences are fused prior to sentence selection. As a result, such architectures sometimes produce summaries that are more repetitive compared to others.
Conclusion and Future Work
Though extractive summarization techniques can be developed to maximize performance as measured by evaluation metrics like ROUGE, such systems suffer from the human readability issues mentioned above. In this paper we attempted to combine extractive techniques with simple abstractive extensions, by extracting the most relevant non-redundant sentences, then re-ordering and fusing them to make the resulting text more human-readable and coherent. Using an initial set of 100 candidate answer sets, we experimented with different ordering algorithms, such as Similarity, Majority and Block Ordering, and identified that Block Ordering performs better than the others in terms of global and local coherence. We then introduced an Integer Linear Programming based fusion module that is capable of not only fusing repeated content but also breaking down complicated sentences into simpler ones, thus improving human readability. The improved baseline system achieved a ROUGE-2 of 0.6257 and ROUGE-SU4 of 0.6214 on test batch 4 of BioASQ 4b. We acknowledge that providing immediate human feedback during the BioASQ competition is expensive in terms of manual effort, although this would greatly help in tuning our systems. We were able to perform a manual evaluation on a sub-sample of the data, in order to introduce the use of human evaluation during system development. We also incorporated an automatic evaluation framework (BOOM) which allowed us to test many different system configurations and hyperparameter values during system development. As BOOM is completely general and can be applied to any pipeline of Python modules, this adaptation was relatively straightforward, and allowed us to automatically test more than 2,000 different system configurations.
In the future, we would like to explore parameter tuning for sentence ordering using human evaluation metrics. There are several additional refinements (abstractions) of the extracted sentences that rely on simple post-processing or text cleaning methods, which could be performed before sentences are passed to the fusion module. Another interesting direction we would like to explore is the possibility of automatically predicting reasonable sentence orderings. | 2018-11-03T09:43:09.479Z | 2018-11-01T00:00:00.000 | {
"year": 2018,
"sha1": "84021bf07ae100f441656854f00e8e63333750f1",
"oa_license": "CCBY",
"oa_url": "https://www.aclweb.org/anthology/W18-5307.pdf",
"oa_status": "HYBRID",
"pdf_src": "ACL",
"pdf_hash": "84021bf07ae100f441656854f00e8e63333750f1",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
234339439 | pes2o/s2orc | v3-fos-license | Design of UWB Slot Antenna for WBAN Application
Introduction: The essential development of medical science and health care systems has paved the way for the application of the UWB (Ultra Wide Band) antenna. It is combined with material science engineering to provide its service in Body Centric Wireless Communication (BCWC). The standard operating range of the UWB antenna is from 3.1 GHz to 10.6 GHz. The UWB antenna satisfies most of the requirements of an antenna for bio-medical and bio-apparatus manufacturing. Objective: The UWB antenna was designed to be more suitable and appropriate for WBAN (Wireless Body Area Network) applications, with characteristics such as low profile, high reliability, high data rate and high efficiency. Such antennas can improve quality of life through their applications in bio-medical production. Methods: The proposed structure has a ground and a patch etched on the surface of a Roger Duroid-5870 substrate with 1.6 mm thickness. The structure is partially grounded, with meta-material used to reduce back radiation into the body. Result: A maximum efficiency of about 98%, high directivity, a minimal SAR (Specific Absorption Rate) value and a small size suitable for on-body application were achieved. Conclusion: Our proposed design yielded a UWB slot antenna with excellent characteristics, making it well suited for on-body communications.
INTRODUCTION
The drastic evolution of medical science and health care systems has paved the path for the application of UWB antennas. UWB technology is merged with material science engineering to provide its service in Body Centric Wireless Communication (BCWC). These systems have brought down costs and increased dependability because of their feasible features and applications, thereby improving people's quality of life through their use in the bio-medical industries. The UWB antenna satisfies most of the requirements of an antenna in the bio-medical and bio-equipment industries. These antennas operate at low power spectral densities (about −41.3 dBm/MHz), which support low to medium data rates for computing applications. Apart from low spectral densities, they are preferred because of their compact size, light weight and low radiated power, which avoids radiation risk. [1][2][3] The UWB antenna provides a broad range of frequencies with a low SAR value. As the transmission power is low, UWB antennas are well suited for Wireless Body Area Networks. Since communication occurs in short impulses, the antenna does not adversely affect the human body. Among the various types of antennas, the microstrip patch antenna is widely used in wireless applications due to its low profile, low cost, light weight and simple architecture. 4,5 The normal working range of the UWB antenna is from 3.1 GHz to 10.6 GHz. Also, characteristics like low profile, reliability and high performance make it suitable and appropriate for WBAN applications. 6,7 Roger Duroid-5870, which has a permittivity of 2.33, is used for the fabrication of the substrate. It is laminated uniformly from one plane to the other and provides a constant value over a wide range of frequencies. As discussed in the papers mentioned, the UWB antenna is applicable for bio-medical applications as it provides high fidelity, low bit rate and wide bandwidth. 8,9 A special material called meta-material is used in the fabrication of the antenna. Due to their macroscopic periodic nature, meta-materials are capable of providing low loss, better efficiency and effective bandwidth, thus making the antenna easy to handle. 10,11 The microstrip patch antenna can provide better performance in off-body radiation, making it user-friendly; this can also be demonstrated in an open environment in real-time applications. 12,13 For a wireless body area network, the SAR value for on-body radiation should be less than 1.6 W/kg as per the standard. 14 Different slotted or slit shapes have been designed, and it has been concluded that, among all shapes, the S shape is the best choice for overall size reduction of the microstrip antenna. 15 The requirements for a reliable on-body application, namely wide bandwidth, small size and low backward radiation, are met by slotting the structure and implementing a proportional partial ground. The proposed antenna is designed for the UWB range in a wireless body area network (WBAN) for on-body communication. This antenna uses a linearly plotted slit-shaped microstrip patch for size reduction and wider bandwidth. To reduce power consumption and avoid back radiation, the partial ground technique is employed.
The organization of the paper is as follows: the second section presents the materials used and the design and structure of the antenna; the third section gives simulated results in terms of antenna parameters and compares them with tested results; finally, the conclusion is presented in the fourth section.
MATERIALS AND METHODS
The basic design of this antenna is implemented using a linearly plotted slit structure to improve the bandwidth and reduce size. 18 The ground plane is partial, with dimensions of 28 × 7 mm², beneath the substrate. Roger Duroid 5870 is used as the substrate material to provide higher efficiency and a larger band range. The substrate spreads over a surface of 28 × 33 × 1.6 mm³, giving the antenna a compact nature. A bidirectional pattern is produced, which reduces power loss, thus producing minimal return loss. The edge feeding technique is used to attain impedance matching. Table 1 and Table 2 present the dimensions of the antenna and the antenna parameters, respectively. The antenna performance characteristics are discussed in the following section.
RESULTS AND DISCUSSION
The designed Roger Duroid 5870 based UWB antenna operates at a frequency of 5 GHz. It has been simulated using HFSS software. The simulated S11 plot is shown in figure 2. The return loss is a measure of how well the device or line is matched and of the power reflected from the antenna; at 4.5 GHz the return loss is found to be −16.92 dB. The fabricated antenna was tested, and the obtained return loss at the operating frequencies is −17.13 dB at 4.52 GHz, −10.76 dB at 7.56 GHz and −24.25 dB at 3.6 GHz. The extensive simulation and calibration process produced a wider bandwidth, trading off against directivity, as shown in figure 2. This plot covers the desired UWB band, operating from 4.13 to 5.18 GHz and providing 1.68 GHz of bandwidth.
Thus, better matching between the feed probe and the patch is achieved. It is also seen that, due to effective wave design and the feeding technique, the bandwidth achieved is 1.68 GHz. The energy absorbed by the human body when exposed to a radio frequency (RF) electromagnetic field is quantified by the Specific Absorption Rate (SAR). The regulatory SAR limit for the human body is 1.6 W/kg. The simulated SAR pattern for the designed antenna is shown in figure 3. There is a uniform distribution of deep blue over the patch of the antenna, which represents a negligible SAR value and makes it well suited for on-body application. Figure 4a shows the far-field radiation pattern at 0°. Figure 4b shows the far-field radiation pattern at 90°. Figure 4c shows the far-field radiation pattern at 180°. The term gain combines antenna directivity and electrical efficiency. The gain performance of the Roger Duroid UWB antenna is shown in figure 5. The result shows that the proposed UWB antenna provides a peak gain of 3.23 dB, which makes this antenna well suited for biomedical applications. The plot of the gain as a function of direction is bidirectional, and the antenna is hence capable of transmitting and receiving electromagnetic radiation in both directions. Directivity is the concentration of emitted radiation in a single particular direction. The directivity for the proposed UWB antenna is shown in figure 6; the directivity obtained for this antenna is 3.21 dB. As a result, better impedance matching has been obtained and more power is delivered to the antenna. The E-field 3-D radiation pattern of this antenna is shown in figure 8, and the overall simulated performance characteristics of this proposed antenna are given in table 3.
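For reference, point SAR in tissue follows the standard relation SAR = σE²/ρ. The sketch below uses assumed textbook-style values for muscle-like tissue near 5 GHz; they are illustrative only and are not taken from this paper's simulation.

    def sar(sigma, e_rms, rho):
        # Point SAR in W/kg: sigma = tissue conductivity (S/m),
        # e_rms = RMS electric field (V/m), rho = tissue density (kg/m^3).
        return sigma * e_rms ** 2 / rho

    # Assumed muscle-like values near 5 GHz (illustrative only):
    print(sar(sigma=4.0, e_rms=20.0, rho=1050.0))  # ~1.52 W/kg, below the 1.6 W/kg limit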
The simulated results presented in Table 3 evidence that the proposed antenna is highly appropriate for on-body bio-medical applications.
CONCLUSION
The optimally shaped patches and partially slotted ground led to a satisfactory wideband response of 1.68 GHz and the minimal SAR required for the antenna to be considered for bio-medical applications. Also, the proposed antenna radiates with an efficiency of 98% and a return loss of −16.4 dB. These crucial factors make this antenna ideal for on-body WBAN applications.
ACKNOWLEDGEMENT
Authors acknowledge the immense help received from the scholars whose articles are cited and included in references to this manuscript. The authors are also grateful to authors/ editors/publishers of all those articles, journals and books from where the literature for this article has been reviewed and discussed. | 2021-05-11T00:06:29.096Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "17ea3eaff20fb8df77d42de1bf3db06da36e2c2b",
"oa_license": null,
"oa_url": "https://doi.org/10.31782/ijcrr.2021.13613",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "3bb44cf746f691c90708a9a9096572459bc83a75",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
233282081 | pes2o/s2orc | v3-fos-license | Implications of the cut type and apex length of stem cuttings used for the production of plantlets of Conilon coffee
Producing plantlets of Conilon coffee within the specific recommendations and with a high level of quality is fundamental since it is capable of promoting the initial development of the crops. To identify the best protocol to prepare the stem cuttings is fundamental to the process of plantlet production of the species. In this context, this study aimed to evaluate the implications of the type of apex cutting and the length of the remaining apex of stem cuttings to produce plantlets of Conilon coffee (Coffea canephora). To this end, two trials were conducted in the Marilândia Experimental Farm (Instituto Capixaba de Pesquisa, Assistência Técnica e Extensão Rural, Marilândia-ES). The first trial evaluated the types of apex cutting (straight or bevel cut), and the second trial studied the different lengths of the remaining apex (0.5, 1.0, 1.5, 2.0, and 2.5 cm). Characteristics of the vegetative growth and photosynthetic traits of the plantlets of Conilon coffee were evaluated after 120 days of cultivation in a nursery. The biomass accumulation of the plantlets of Conilon coffee produced by stem cutting may be favored by the use of a bevel cut on the apex. The length of the remaining apex does not seem to have an expressive effect on the quality or growth of the plantlets; effects were observed only for leaf area and biomass accumulation.
INTRODUCTION
Conilon coffee (Coffea canephora Pierre ex A. Froehner) is a species that has enormous importance for agricultural activities in Brazil and, especially, for the economy of the state of Espírito Santo, as it is the largest producer of Conilon coffee in the country (Belan et al., 2011;Bernardes et al., 2012;Covre et al., 2013).
There is a considerable area of coffee plantation being annually renewed; in this context, there is a growing demand for coffee plantlets. Since 2013, it is estimated that approximately 110 million plantlets of Conilon coffee were produced in Brazil, of which only about 10% originated from seeds (Mauri et al., 2015).
Due to a mechanism of gametophytic self-incompatibility, Conilon coffee is a species of allogamous reproduction, a factor that leads to the formation of heterogeneous populations for traits such as plant height, vigor, season and maturity uniformity of fruits, shape, size, and weight of grains, susceptibility to pests and diseases, tolerance to drought and, especially, productive potential (Conagin; Mendes, 1961; Van Der Vossen, 1985; Carvalho et al., 1991; Fazuoli, 1993; Covre et al., 2013; Rocha et al., 2014; Ferrão et al., 2019).
The technique used to obtain uniform plants for crops of Conilon coffee is asexual propagation, mainly performed by the stem cutting method, which consists in cutting segments of orthotropic stems originated from serial buds, retaining a pair of leaves and two plagiotropic branches (Paulino; Matiello; Paulini, 1985; Bragança et al., 1995; Ferrão et al., 2007; Paiva et al., 2012; Partelli et al., 2014; Fonseca et al., 2019).
There are several advantages of using the propagation by stem cuttings compared to the use of seedlings (multiplication by coffee seeds), such as the faster formation of the plant canopy, higher crop uniformity, easier handling of pruning, precocity of production, higher crop yield, among others (Espindula;Partelli, 2011;Fonseca et al., 2019).
In this context, producing plantlets within the specific recommendations and with a high level of quality is fundamental, since they are capable of promoting the initial development of the crops, as well as the highest possible yield. Even though there have been significant advances in the cultivation of Conilon coffee and broad utilization of the technology of rooting stem cuttings, several aspects of the asexual propagation of this species remain to be elucidated. Paulino, Matiello and Paulini (1985) presented the first works detailing the processes involved in the production of clonal plantlets by cutting plagiotropic branches, mainly concerning the preparation of the clonal cutting. Until recently, a bevel cut on the base of the stem cutting was recommended to induce rapid rhizogenesis (Fonseca et al., 2005). However, a study by Verdin Filho et al. (2014) on Conilon coffee revealed that changing this traditional bevel to a straight cut at the base of the stem cutting resulted in higher-quality plantlets.
Refining the process of preparing stem cuttings is a strategy to improve the quality of plantlets of Conilon coffee and ensure the formation of more productive crops. Thus, it is necessary to expand the studies on the processes used in the asexual multiplication of the species. In this context, this study aimed to evaluate the implications of the cut type and the length of the remaining apex of stem cuttings of Conilon coffee on the growth and physiology of plantlets.
Experimental design
Two trials were conducted in the Marilândia Experimental Farm (Fazenda Experimental de Marilândia, FEM), an agricultural research base managed by INCAPER (Instituto Capixaba de Pesquisa, Assistência Técnica e Extensão Rural), located in the municipality of Marilândia, Northwest Region of Espírito Santo State, Southeast Region of Brazil, at the geographic coordinates 19º24'26.09"S and 40º32'26.83"W, and an altitude of 89 m above sea level. The trials were carried out concurrently in a nursery for the production of Conilon coffee plantlets, covered by a polyethylene screen providing 50% shade, under controlled conditions. The cultivar used in the trials was the clonal cultivar "Vitória Incaper 8142".
The first trial followed a completely randomized design, with two treatments referring to the cut type (straight or bevel cut), tested using 15 repetitions and experimental plots composed of four plantlets. The second trial also followed a completely randomized design, testing five treatments referring to the lengths of the remaining apex of the stem cutting (0.5, 1.0, 1.5, 2.0, and 2.5 cm from the insertion of the plagiotropic branches), using 15 repetitions and four plantlets per experimental plot.
Production of plantlets
Well-developed shoots were collected randomly from adult parent plants of the cultivar "Vitória Incaper 8142" grown in a clonal garden, cultivated with the technique of bending orthotropic branches to stimulate the emission of shoots. The parent plants were standardized regarding age and nutritional and phytosanitary aspects.
The clonal cuttings were extracted from the central part of the shoots, discarding the basal and apex regions of the stems, since these regions are excessively lignified (base) or soft (apex). In the preparation of the cuttings for the first trial, the type of apex cut was varied between the two treatments (Figure 1). For the second trial, the length of the apex of the stem cuttings was varied among 0.5, 1.0, 1.5, 2.0, and 2.5 cm from the insertion of the pair of plagiotropic branches; in this case, the bevel cut was used (Figure 2). For both trials, the base was standardized at 4 cm of length using a straight cut on the lower end (Verdin Filho et al., 2014), and a pair of leaves was left per stem cutting, each trimmed to about a third of its original area. The remaining stages of the propagation by stem cutting of Conilon coffee plantlets followed the recommendations of Fonseca et al. (2019).
After preparation, the stem cuttings had 2/3 of their length buried, in a vertical position, in plastic tubes (tubets) with a volume of 280 cm³, filled with a mixture of 70% commercial substrate and 30% coffee husk obtained from the processing of coffee harvested during the previous year (Verdin Filho et al., 2018). The plantlets were cultivated in a nursery for 120 days, and their nutrition, irrigation, and phytosanitary management were carried out according to the recommendations for the production of plantlets of Conilon coffee of Ferrão et al. (2012) and Fonseca et al. (2019).
Figure 1:
Illustrative scheme of the types of cutting (straight and bevel) at the apex of the stem cuttings adopted in the first trial and demonstration of the starting point for the measurements of the apex length (1 cm) and base length (4 cm) of stem cuttings of Conilon coffee, as well as the straight cut used at the base of the cuttings.
Evaluations
After 120 days of cultivation, the plantlets from the first trial were evaluated regarding the growth parameters: height of plantlet (HGT), using a graduated ruler; stem diameter (DIA), using a digital caliper; and leaf area of the plantlet (TLA), obtained by the non-destructive method of linear dimensions (Barros et al., 1973; Brinate et al., 2015). For the physiological parameters, gas exchange was evaluated in the first completely expanded pair of leaves from the apex, using a portable infrared gas analyzer (Licor, IRGA 6400XT), between 9:00 and 11:00 am on sunny days. An irradiance of 1,000 PAR and a CO2 concentration of 400 ppm were used. The net assimilation rate of CO2 (A) and the transpiration rate (E) were evaluated. In these same leaves and at the same time, the total chlorophyll content (CHL) was obtained using a portable chlorophyll meter (Falker, ClorofiLOG FL1030).
After these analyses, the plantlets were collected, separated into stems, leaves, and roots, adequately identified, and dried in a forced-air circulation oven at 65 ºC ± 2 ºC until a constant mass was obtained, and later weighed on a precision electronic scale (precision of 0.0001 g). The total dry matter of plantlets (TDM) was obtained by adding the leaves dry mass (LDM), stems dry mass (SDM), and roots dry mass (RDM). The Dickson's quality index (DQI) was calculated following the method proposed by Dickson et al. (1960), through the equation DQI = TDM / (RHD + RAR), where RHD represents the ratio between the height of the plantlet and its stem diameter, and RAR is the ratio between the mass of the aerial part and the mass of the root system.
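As a worked illustration of the DQI formula, the sketch below uses values close to the trial means reported later in the Results; the leaf/stem/root mass split follows the biomass proportions given there and is illustrative only.

    def dickson_quality_index(height_cm, diameter_mm, leaf_dm, stem_dm, root_dm):
        # Dickson et al. (1960): DQI = TDM / (RHD + RAR), where
        # RHD = height/diameter and RAR = aerial dry mass / root dry mass.
        tdm = leaf_dm + stem_dm + root_dm
        rhd = height_cm / diameter_mm
        rar = (leaf_dm + stem_dm) / root_dm
        return tdm / (rhd + rar)

    # Illustrative values close to the trial-2 means (9.17 cm, 3.60 mm,
    # 2.18 g total, split 56.6% leaf / 15.1% stem / 28.3% root):
    print(round(dickson_quality_index(9.17, 3.60, 1.23, 0.33, 0.62), 2))  # ~0.43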
The plantlets from the second trial were evaluated after 120 days of cultivation regarding the following variables: height of plantlet (HGT); stem diameter (DIA); leaf area of the plantlet (TLA); net assimilation rate of CO2 (A); transpiration rate (E); total chlorophyll content (CHL); leaves dry mass (LDM); stems dry mass (SDM); roots dry mass (RDM); total dry matter of plantlets (TDM); and Dickson's quality index (DQI), following the same methods as in the first trial.
In addition to these variables, the following parameters were analyzed: leaf mass ratio (LMR), obtained as the ratio between LDM and TDM; stem mass ratio (SMR), obtained as the ratio between SDM and TDM; root mass ratio (RMR), obtained as the ratio between RDM and TDM; and leaf area ratio (LAR), obtained as the relationship between TLA and LDM. The intercellular concentration of CO2 (Ci) was obtained during the gas exchange analyses.
Statistical analysis
The data were subjected to tests of normality and homogeneity, followed by analysis of variance (ANOVA). For the types of cut at the apex of the stem cuttings, the qualitative data were compared using the Tukey test (p≤0.05). For the length of the apex of the stem cutting, the quantitative data were subjected to regression analysis (p≤0.05), using the statistical software SISVAR version 5.6 (Ferreira, 2011).
Trial 1 -Implications of the cut type on the apex of the stem cutting of Conilon coffee
There was no statistical difference for the analyzed variables, except for LDM, SDM, and TDM, for which higher means were observed for plantlets produced using the bevel cut at the apex (Figure 3).
In this study, cuttings with a bevel cut at the apex showed an increase of 17.8% in the total biomass (TDM). Since the DQI considers the vegetative vigor and the distribution pattern of biomass in the plantlet, it is possible that, with advancing age, there may be a tendency of superiority for this variable in plantlets whose cuttings have a bevel cut at the apex compared to those with a straight cut, even though there was no significant difference for this variable at the evaluated age.

Figure 3: Height of plantlet (A), stem diameter (B), total leaf area (C), leaf dry mass (D), stem dry mass (E), root dry mass (F), total dry matter (G), Dickson's quality index (H), total chlorophyll content (I), net assimilation rate of CO2 (J) and transpiration rate (K) of plantlets of Conilon coffee produced using stem cuttings, at 120 days of cultivation, as a function of the cut type of the apex of the stem cutting, produced in Marilândia-ES (means followed by the same letter in the comparison between bars do not differ by Tukey's test, at 5% of probability).
In general, the bevel cut at the apex of the stem cuttings produced results statistically similar to those obtained using the straight cut, but with better biomass accumulation. Therefore, the bevel cut at the apex may be indicated for the production of clonal plantlets.
Trial 2 -Implications of the length of the apex of the stem cutting of Conilon coffee
For the growth variables, there was no significant effect (p≤0.05) of the treatments, except for TLA and TDM ( Figures 4C and 4D). As for HGT and DIA, there was no significant difference as a function of the length of the apex of the stem cutting, with average values of 9.17 cm and 3.60 mm, respectively ( Figures 4A and 4B).
Regarding TLA, there was a significant difference, with a quadratic fit of the regression model showing a tendency for leaf area to increase up to an apex length of 1.53 cm, reaching a maximum expansion of 226.33 cm² (Figure 4C). Beyond that length, TLA decreased.
The same occurred for TDM, for which the greatest accumulation of dry mass (2.18 g) was found at an apex length of 1.61 cm, as estimated by the quadratic fit of the regression (Figure 4D).
The proportion of biomass allocated to the different vegetative organs of the plant was not altered by the length of the apex of the stem cuttings. Of the total biomass accumulated, 56.57% was allocated to leaves (LMR), 15.11% to the stem (SMR), and 28.32% to roots (RMR) (Figures 4E, 4F, and 4G, respectively). There was also no change in the expansion of leaf tissue per unit of dry mass of the plantlets, with an average of 99.60 cm² of leaf tissue formed per gram of dry biomass produced (Figure 4H).
Regarding the DQI, there was no variation due to the apex lengths of stem cuttings, with an average quality index of 0.40 ( Figure 4I). However, it is suggested that better quality of plantlets may be obtained when the cuttings are prepared keeping the apex around 1.53 and 1.61 cm in length, due to the larger leaf area and higher accumulation of biomass observed in these lengths (respectivelly) being favorable after the period evaluated in this study. Figure 4: Height of plantlet (A), stem diameter (B), total leaf area (C), total dry matter (D), leaf mass ratio (E), stem mass ratio (F), root mass ratio (G), leaf area ratio (H) and Dickson's quality index (I) of clonal plantlets of Conilon coffee produced using stem cuttings, at 120 days of cultivation, as a function of the length of the apex of the stem cutting, produced in Marilância-ES ( * regression coefficient is significant, at 5% of probability, by the t-test).
DISCUSSION
The accumulation of dry mass is a robust attribute for complementing the growth assessment of plant species (Paiva et al., 2010; Covre et al., 2013). According to Dardengo et al. (2013), total dry matter and stem diameter are the most suitable variables for indicating the quality of young plants of Conilon coffee.
The TDM values observed in this study were higher than those found by Aquino et al. (2017) in a study of the production of clonal plantlets of Conilon coffee by the cutting method. Verdin Filho et al. (2014) found a higher TDM in Conilon coffee by using a straight cut at the base of the stem cuttings, which favored the development of the root system and, consequently, of the seedling as a whole. This result corroborates the hypothesis that different types of cut, at either the base or the apex of the clonal cutting, may influence the development of plantlets of Conilon coffee.
It was also observed that the bevel cut could promote a lower incidence of phytosanitary problems by favoring the runoff of water from the apex of the cuttings, while the straight cut may favor its accumulation. An excess of water at the apex of the cuttings, caused by irrigation, can lead to rot of the tissue in contact with the air. Fungi such as Fusarium xylarioides, indicated as the etiologic agent of "tracheomycosis" or vascular wilt of coffee trees (Steyaert, 1948), can become established under these conditions. The fungus invades the vascular system and, after a short incubation period, causes wilting and, finally, the death of the plant (Blittersdorff; Kranz, 1976). It is transmitted through the air by ascospores and conidia; the ascogenous phase is known as Gibberella xylarioides (Booth; Waterston, 1964).
The length of the cutting is a factor of great importance for the survival, rooting, and sprout emission of the plantlet, since it is related to the amount of carbohydrates and endogenous auxins stored in the tissues (Pontes Filho et al., 2014). Short cuttings may not have enough reserves to meet the energy demands of rhizogenesis, while excessively long cuttings may become more susceptible to dehydration, owing to the large surface exposed to the environment and the greater demand for water to keep the larger amount of tissue alive (Braga et al., 2006; Lima et al., 2006). The dehydration that may occur in long cuttings is due to the still-young orthotropic branches not being covered by a thick cuticle (Albiero et al., 2005), thus lacking an anatomical defense against water loss (Carvalho et al., 2001; Taiz; Zeiger, 2013).
Figure 5: Chlorophyll content (A), net assimilation rate of CO2 (B), transpiration rate (C) and intercellular concentration of CO2 (D) of clonal plantlets of Conilon coffee produced using stem cuttings, at 120 days of cultivation, as a function of the length of the apex of the stem cutting, produced in Marilância-ES (* regression coefficient is significant, at 5% of probability, by the t-test).
For the success of coffee crops, one of the critical points is the use of high-quality plantlets, as they contribute to the faster and more vigorous development of the plantation (Fonseca et al., 2019). Studies like this are of fundamental importance since they contribute to a better understanding of the factors that can limit the growth and quality of clonal plantlets of Conilon coffee.
In this study, apexes with lengths ranging from 0.5 to 2.5 cm were considered. Added to the length of the base (4.0 cm), the total lengths ranged from 4.5 to 6.5 cm. Thus, a short apex yields a reduced total cutting size, just as a long apex yields an excessive total length. Overall, the growth analyses show that the length of the apex does not affect most of the variables; gains were observed only for leaf area and biomass accumulation, for cuttings with apexes of 1.53 and 1.61 cm respectively, with no effect on the final quality of the plantlets.
CONCLUSIONS
The biomass accumulation of plantlets of Conilon coffee produced by stem cutting may be favored by the use of a bevel cut at the apex.
The length of the remaining apex does not seem to have an appreciable effect on the quality or growth of the plantlets; effects were observed only for leaf area and biomass accumulation.
"year": 2020,
"sha1": "36c8fca9a7ffa0082ff19a29e0d2f28880a32c47",
"oa_license": "CCBY",
"oa_url": "http://www.coffeescience.ufla.br/index.php/Coffeescience/article/download/1770/2297",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "36c8fca9a7ffa0082ff19a29e0d2f28880a32c47",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
Clitics: Lexicalization Patterns of the So-called 3rd Person Dative
Manzini and Savoia (1999, 2001, 2002, to appear) argue that the basic facts about the clitic string are best accounted for without having recourse to anything but a minimalist syntactic component, i.e. making no use of a specialized morphological component nor of optimality-type comparisons between derivations/representations. In particular, they assume that clitics correspond to specialized inflectional categories, and are merged directly into the positions where they surface; such categories are furthermore ordered in a universal hierarchy, as we will detail below. The aim of the present paper is to consider datives in the light of this framework. We will conclude that there is no evidence for the category dative in the Romance dialects we shall consider, while in fact there is evidence for categorizations of so-called dative clitics as quantificational elements or as deictic elements (locatives). In all cases, the relevant categorization relies entirely on referential properties, or more generally on interpretive properties intrinsic to the lexical items involved, calling into question the traditional notion of Case itself.
Theoretical background
Following Sportiche (1996), rather than Kayne (1975, 1989, 1991, 1994), we assume that clitics are inserted under specialized functional categories, i.e., in stricter minimalist terms, that their merger projects specialized functional categories. If clitics are generated in the ordinary argument positions and adjoined to verbal or inflectional positions, it is hard to predict that they appear in a fixed number, in a fixed order and with fixed cooccurrence (or mutual exclusion) patterns which do not necessarily correspond to the number, order and cooccurrence (or mutual exclusion) patterns of the corresponding arguments and adjuncts. To be more precise, the theory can derive the relevant properties of clitics in conjunction with a morphological component able to (re)order strings (Bonet 1995, Halle and Marantz 1993, 1994). To the extent that the (re)ordering operations match those of the syntax (Merge and Move), the resulting system is however highly redundant; vice versa, to the extent that the two sets of (re)ordering operations do not match, the resulting system is considerably more complex. Therefore we assume that a purely syntactic account is to be preferred, for reasons of simplicity of the theory.
We specifically assume that clitics are generated directly in the position where they surface, hence that there are clitic positions between I and C. Adopting a universal hierarchy of functional positions of the type in Cinque (1999), though not necessarily one containing the positions postulated there, we are led to propose a universal clitic string, within which positions can neither be reordered (contra Ouhalla 1991) nor packed and unpacked (contra Giorgi and Pianesi 1998). A first set of clitic categories relevant for the hierarchy is motivated in relation to subject clitics by Manzini and Savoia (2002), who argue in favor of a category P(erson) for 1st and 2nd person clitics, a category N(oun) for 3rd person clitics, a category Q(uantifier) for plural clitics, and a category D(efiniteness) for otherwise uninflected clitics. In their conception, therefore, clitic categories correspond to denotational properties. Thus P implies reference to the speaker (1st person singular), the hearer (2nd person singular) and the sets including them (1st and 2nd person plural); in turn, N identifies the so-called 3rd person simply with the predicative property N. As for the Q and D categories, they are to be understood exactly as in the analysis of noun phrases, i.e. as encoding weak quantificational properties (corresponding to numerals, existentials, etc.) in what concerns Q, and definite denotation in what concerns D.
To these categories, Manzini and Savoia (to appear) add three further categories characterized in broadly denotational terms, and specifically connected to their discussion of object clitics. In particular, they individuate the need for a Loc(ative) category lexicalizing reference to the spatial coordinates of discourse, and for an R(eferential) category corresponding to strong quantificational or specificity properties. They also add a category DOp associated with modal/intensional properties, to serve as a nominal counterpart to modal/intensional categories of the verb, generally represented by the complementizer system.
A natural order for the categories individuated so far is suggested by the observation that many, if not all, of these categories are independently postulated in current generative analyses of the internal structure of the noun phrase. In particular, the sequence D - R - Q - N constitutes the basic skeleton of the noun phrase, where N is associated with the nominal head, hence with the predicative content of the phrase, Q with indefinite quantifiers, R with specific quantifiers and D with the definite article. As for Loc, this position can be identified with demonstratives, on the grounds of the general spatial interpretation of these elements, and more specifically of the fact that in Romance dialects they surface coupled with overt locative pronouns; on the basis of the position occupied by the latter, the Loc position is relatively low within the noun phrase (Brugè 1996, Bernstein 1997), presumably between N and the quantificational projections. The P category, which like Loc is interpreted in terms of discourse-anchored reference, is naturally construed as occurring in the same area of the nominal tree; there it can correspond to the merger position for possessives. Finally, DOp can host prepositional elements which can equally well introduce a noun phrase or serve as complementizers of a sentence, such as Italian di ('of') and a ('to'); they close off the entire noun phrase, appearing in the position immediately above D. Therefore the hierarchy of nominal categories within the noun phrase takes the shape in (1), with the content of the different categories briefly summarized in (2).
(1) [DOp [D [R [Q [P [Loc [N

(2) a. P is associated with reference to the speaker and the hearer (1st and 2nd person)
    b. N is associated with the predicative property (so-called 3rd person)
    c. Q is associated with weak quantificational properties (plurals, existentials)
    d. D is associated with definite denotation
    e. R is associated with strong quantificational or specificity properties
    f. Loc is associated with Locative, i.e. reference to the spatial coordinates (demonstratives)
    g. DOp is associated with a nominal counterpart to modal/intensional categories of the sentence ('of', etc.)

Manzini and Savoia (to appear) argue that the string in (1) also defines the basic order of clitic categories within the sentential string, and assign the various descriptive classes of clitics to the categories in it. In keeping with the observation that subject clitics generally appear before object clitics, it seems natural to reserve the higher positions of the string for them; more specifically, we identify the subject clitic position with D. If Manzini and Savoia (forthcoming) are correct, furthermore, the highest DOp position is not associated with clitics at all, but rather hosts complementizers, including those of the that type (Italian che, etc.) as well as prepositional ones. As we anticipated, the basic aim of our discussion is to arrive at a characterization of datives, which we shall therefore leave for later discussion; here and throughout, reference to Case categories such as dative (as well as to person, number and gender ones) is purely descriptive.
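To make the ordering claim concrete, a toy sketch is given below; it is our illustration, not part of the analysis itself, and all names in it (HIERARCHY, linearize) are ours. It simply linearizes a clitic cluster from the category each clitic projects, following (1):

# Toy model of the hierarchy in (1): clitics surface in the order of the
# categories they project. All names here are ours, purely for illustration.
HIERARCHY = ["DOp", "D", "R", "Q", "P", "Loc", "N"]

def linearize(cluster):
    """Order (form, category) pairs by their category's position in (1)."""
    return [form for form, cat in sorted(cluster, key=lambda c: HIERARCHY.index(c[1]))]

# Standard Italian: the P clitic mi precedes the Loc clitic ci, cf. (3) below.
print(linearize([("ci", "Loc"), ("mi", "P")]))  # prints ['mi', 'ci']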
Obviously enough, the P category is lexicalized by 1st and 2nd person (non-subject) clitics. The observed behavior in Italian clitic systems (Manzini and Savoia 1998) supports the position already taken by typological approaches, which sharply differentiate the status of 1st and 2nd person (singular, and eventually plural) from that of the so-called 3rd person. In our grammar, therefore, the total membership of the Person category is constituted by the speaker and the hearer, whose denotation is fixed directly by the universe of discourse, and by the sets including them. When it comes to the dative, this entitles us to consider the 3rd person only, abstracting away from what may be described as 1st and 2nd person datives. Crucially, in contrast to what happens with the 3rd person, there is no morphological differentiation between P forms used as accusatives and as datives; nor do P forms have a different position in the clitic string depending on the dative/accusative divide.
We note next that the characterization of Loc in terms of spatial reference must be conceived in wide enough terms to include a whole series of possible interpretations associated with the locative clitic. Thus in a language like Italian, ci can have a strictly locative meaning, an instrumental one, a comitative one, etc. The purely locative interpretation itself can be seen to be internally articulated into several different meanings. Thus the locative can be associated with a stative interpretation or with a motion interpretation, under which the locative typically refers to the coordinates of the final point of the event. In general, the Loc category does not correspond to a point on the aspectual contour of the event. Rather it must be understood deictically, in connection with other elements whose denotation is fixed by the universe of discourse, namely the speaker ('I'), the hearer ('you') and the temporal coordinates of the discourse ('now').
A language like standard Italian provides evidence that, as suggested in (1), the P clitic is ordered before the Loc one, as in (3a). The relevant substructure is as in (3b); note that, here and throughout, a linear format rather than a tree one is used for structural representations, for purely practical reasons.
(3) a. mi ci
    b. [P mi [Loc ci
We already indicated that N corresponds to the 3rd person; the observation that accusative clitics appear, in many languages, in the lowest position of the string leads us to assign them to the N category. Thus in a language like standard Italian they follow both P and Loc clitics, as indicated in (4)-(5). In this framework, the total content of the so-called accusative clitic is reduced to its intrinsic N content; the interpretation of the N clitic as the internal argument of the verb is the result of the application of some interpretive principle. In other words, interpretive categories such as 'theme of', or 'Measure of' in aspectual terms, are interpretations available for the relevant syntactic structures at the LF interface, essentially along the lines adopted by Chomsky (1995). In particular, N is never lexicalized by anything but the internal argument, though it is obvious that the reverse does not hold. Thus in unaccusatives, the internal and only argument of the verb corresponds to the D clitic in subject clitic languages in virtue of some version of Chomsky's (1995)

The partitive clitic, ne in standard Italian, does not directly denote an argument either in the event structure or in the domain of discourse, but it contributes to the denotation of one such argument. For instance, in (6a) the denotational content of ne enables us to fix the reference of the internal argument of the verb, represented here by the numeral quantifier tre 'three'. Since in (2g) we analyze the di 'of' element introducing partitive noun phrases as belonging to the DOp category, we tentatively assign the partitive clitic to the category DOp as well. In other words, standard Italian ne is of category DOp on the evidence of the fact that it doubles phrases headed by a DOp element such as di. On the other hand, the evidence relating to the position of ne in the string indicates that it is lower than DOp and in fact corresponds to N in a language like standard Italian, where the partitive appears after P and Loc clitics, as in (6b). We take it that the N position of the DOp element corresponds to the fact that a DOp element cannot but be interpreted as a specification of a predicative N head. In particular, in an example like (6), merger of ne in the N position of the string corresponds to its interpretation as a DOp specification of the internal argument independently lexicalized in the string by the quantificational head tre 'three'. We shall return to this property of ne in the discussion in section 2.3 below. We tentatively assume that this property not only licenses the N merger position of ne, but also prevents it from surfacing in a position corresponding to DOp of the sentential string, where it would not be possible to assign to it the relevant relation to a predicative N head.
The one remaining clitic in a language like standard Italian is at this point si, associated with reflexives, impersonals, and passives. The only relevant point is that we take its merger position to be normally Q, in virtue of its denotational properties, which are essentially those of a free variable (Manzini 1986). As detailed by Manzini and Savoia (1999, 2001), an analysis along these lines derives the different construals of si. Interestingly, a Q merger point would predict that si precedes not only accusative and partitive clitics, but also P and Loc ones, as it does in a large number of Italian dialects (Manzini and Savoia 2001). In standard Italian, on the contrary, si normally precedes partitive and accusative, but follows P and Loc clitics, as in (7a). Our conception of the R position helps in this respect, since its specificity properties make it a potential host for the whole series of object clitics. As in (7b), we may therefore assume that R hosts the locative clitic, preceding si in Q. Naturally, the conception of R as a specificity category predicts that we should be able to find in the same R position not only a Loc clitic such as ci in (7), but also other types of clitics. Indeed, in section 2.1 we shall propose that the accusative series of the dialect of Olivetta can be hosted in R, and the same holds for the si-type clitic, i.e. a Q clitic, in the dialect of Vagli. For the dialects of Piobbico in section 2.2 and of Celle di Bulgheria in section 3.2 we shall propose that the dative is hosted by R. For the dialects of Nocara in section 2.3, of Làconi in section 3.1 and of Nociglia in section 3.2 we shall associate R with the partitive. In some of the cases just reviewed, it is interesting to note that R is a possible point of merger, rather than a necessary one. This is evident already from the comparison of standard Italian (7), where the locative is in R, with (3), where it is in Loc. We take it that in the case of (3) and (7) the Loc merger point is straightforwardly justified by the denotational content of the clitic; we conceptualize the R merger point in terms of scopal properties of the clitic itself. In other words, the specific nature of the locative denotation allows for the scopal R position as well as for the Loc one. On the basis of a scopal conception of R, we may equally expect that in some languages one or more clitics necessarily appear in the higher scopal position, as we shall see in particular for Olivetta in section 2.1. From a purely empirical point of view, of course, R serves as the one major source of reordering within the clitic string.
On the basis of the discussion at the outset, the clitic string in (1), hence the partial structures in (3)-(7), occupy the area of the sentence between the I position, where the finite verb normally appears in declarative sentences, and the C position, where it appears in main clause interrogatives. Empirical evidence, relating in particular to the doubling of clitics on either side of the verb in C, strongly argues in favor of a conception in which the clitic string repeats itself identically above C as well, as discussed in particular by Manzini and Savoia (1999). Generalizing this conclusion, we assume that a clitic string is generated above each of the three main verbal domains, i.e. immediately above V, I and C. This gives rise to the organization of the sentence in (8), where the dotted space is to be filled by the string in (1). Evidence for the lower string is provided by Manzini and Savoia (forthcoming); intuitively, however, it is clear that it corresponds to the main argumental domain of the sentence, so that we may provisionally assume that lexical arguments are merged in the (Spec of) the relevant positions.
(8) ... [C ... [I ... [V

As well as an analysis of the overall structure of the clitic string and of the categories it consists of, an account of cliticization in Romance dialects presupposes an analysis of the internal structure of clitics themselves. Previous approaches in the literature include both morphological and syntactic ones. Among the former, we find James Harris's (1994) account of the internal make-up of Spanish clitics, which recognizes a lexical base l- for the 3rd person series, as well as nominal class morphemes such as -a (traditionally the feminine) and a number suffix -s. A syntactic, rather than morphological, characterization of the internal structure of clitics is attempted in a few recent papers, including Kayne (2000) on 1st and 2nd person clitics as opposed to 3rd person ones, and Cardinaletti and Starke (1999) on clitics compared to weak and strong pronouns. The general idea of Cardinaletti and Starke (1999) is that clitics have the internal structure of a DP, albeit an impoverished one with respect to lexical DPs or even non-clitic pronouns. In their terms, the latter are associated with a full structure, equivalent to a sentential CP; clitics, on the other hand, are characterized by a deficient structure, reducing to the equivalent of a sentential IP projection. According to Kayne (2000), by contrast, 1st and 2nd person clitics and pronouns lack full DP structure, while the latter characterizes 3rd person clitics, as revealed by the presence of full agreement features.
The approach that we take to the internal structure of clitics relies on the idea that clitics are just ordinary noun phrases. As for the structure of the latter, we have already seen that (1) corresponds to the basic organization of nominal categories not only within the sentence, but also within the noun phrase itself. This idea needs to be made more precise in just one respect. Following work by Abney (1986) and Szabolcsi (1994), the structure of the noun phrase is organized along lines similar to those of the structure of the sentence. On the model of the sentence, the lowest position in the noun phrase, associated with predicative content, can be taken to coincide with N in (1); but we also need to identify an I and a C position. What we propose is that the string in (1) as a whole repeats itself within the noun phrase as within the sentence, yielding the basic noun phrase skeleton in (9). I and C are just labels for the N positions of higher strings, reflecting their scopal properties. The dotted space in (9) therefore encloses the functional subsequence of the string in (1), namely DOp - D - R - Q - P - Loc - N. We are now in a position to turn to the internal structure of clitic forms. We can translate the morphological analysis of Romance clitics proposed in works such as James Harris (1994) into the present syntactic model by identifying nominal class (gender) morphemes such as o, a, which accompany the l lexical base in many Romance dialects, with the I projection of a noun phrase. As for number morphemes, a natural analysis within our framework identifies them with the Q category. This analysis applies in particular to number morphemes added to nominal class ones, as is the case for s in Spanish. We then obtain structures of the type in (10), which account for instance for the as, ɔs observed in the plural nominal inflections of typical Sardinian dialects. This analysis introduces an asymmetry between the conceptual status of nominal class (gender) morphemes, which lexicalize the nominal category I, and that of number morphemes, which lexicalize the functional category Q. This result appears to be on the right track, since number is indeed a functional specification of the noun, while gender is an intrinsic property of a nominal category.
Inflections of the type in (10) can be added to adjectival or nominal bases, but what interests us here directly is that they can be added to the l morpheme of 3rd person clitics. Precisely the observation that nominal constituents of the type in (10) have an independent existence as agreement morphemes suggests that 3rd person clitics involve a nominal head l. As illustrated in (11), we take it that l lexicalizes the normal inflectional position, i.e. I, within its own noun phrase, embedding the separate noun phrases in (10).
The internal structures of clitics are directly relevant to an important question concerning the hierarchy in (1). In rejecting in particular the morphological model of Halle and Marantz (1993), we have adopted the point of view of the minimalist grammar of Chomsky (1995), where syntactic structures are directly projected by the insertion of lexical material. Thus there cannot be structures such as (1) produced by the syntactic component, to which lexical material is matched by lexical insertion. Rather, if hierarchies such as (1) hold, it must be because of independent constraints, which, as suggested by Manzini and Savoia (to appear), can ultimately be thought of in Full Interpretation terms. This puts a heavy constraint on our grammar, since we cannot simply insert a clitic in an already given position as a default lexicalization presenting no mismatch with the syntactic category. On the contrary, we must be able to show that in each case it is an internal category of the clitic that projects the relevant category of the sentential string. The discussion that follows will uphold this general conclusion; in many cases we shall be able to show that the category projected by the clitic on the sentential tree corresponds exactly to the category of the internal head of the clitic itself. Thus a clitic series such as (11) will typically project N on the sentential tree. For pure ease of reading, exactly as we describe clitics in terms of accusative, dative, 1st and 2nd person, i.e. of features that do not correspond to our actual categories, so we will speak of their insertion points. In all cases we will understand by the insertion point of a clitic the category that the clitic itself projects on the basis of its internal constituency.
Morphologically 3rd person datives
Manzini and Savoia (forthcoming), in considering the lexicalization of the so-called dative argument in several dozen Italian dialects, note that it is the exception, rather than the rule, for them to present a morphologically 3rd person clitic form for the 3rd person dative. Standard Italian is among the dialects which possess such a form, which furthermore combines with the accusative, preceding it. This pattern is normally found in Central Italian dialects, and emerges in the dialects of Lucania as well as in Tuscan dialects, including Vagli di Sopra in (12). The (a) example displays the isolation form of the dative, i.e. i. The (b) example shows the combination of dative and accusative, in this order. The ə morpheme that surfaces in the feminine plural lə is a phonological alternant of e, which surfaces for instance in sentence-final position, as in the enclitic camə-le 'call them!'; this yields an accusative clitic series l/la/i/le. As shown in (c), the dative is also followed by other clitics, such as the partitive. Note that, to economize on glosses, we have given the meaning of i as 'to him'; in fact, this is short for 'to him/to her/to them'. We have followed the same general principle throughout the article.
(12) Vagli di Sopra (Tuscany)
a. i ji La k'kweste
   he to.him gives this
   'He gives this to him.'
b. i ji l/la/ji/lə 'La
   he to.him it-m./it-f./them-m./them-f. gives
   'He gives it/them to him.'
c. i ji nə La d'doi
   he to.him of.them gives two
   'He gives two of them to him.'

Though the language chosen for exemplification has 3rd person subject clitics, we will disregard the shape taken by the latter. On the basis of the discussion that precedes, we take the series of 3rd person clitics illustrated in (12), i.e. l/la/ji/le, to correspond to noun phrases. Because a head Noun normally occupies the I position within the noun phrase, we take l in particular to occupy the I position within its nominal constituent, following the schema in (11). As for the morphemes combining with l, a lexicalizes gender, i.e. nominal class, in the I position of a separate nominal constituent, deriving the singular feminine form la, as in (13b). On the other hand, the feminine plural le appears to combine l with an e formative associated with the N position of a separate nominal constituent, as shown in (13d). The structure in (13d) implies that in the feminine, plurality is not lexicalized through a Q morpheme, but rather through the switch from the nominal inflection class a to what we take to be a pure N morphology, i.e. e. We justify this latter conclusion by observing that e is the nominal morphology that turns up on the participle in the absence of person, number and gender agreement with the object or subject; in the terms of Manzini and Savoia (forthcoming), this means that e corresponds to a pure N form. As for the l of the so-called masculine singular, it corresponds to uninflected l, as in (13a). A characteristic of the Vagli dialect to which we shall return is that the so-called masculine plural accusative ji coincides with the number- and gender-neutral dative. At least as part of the accusative paradigm, we can assume that ji consists of an i morpheme lexicalizing plurality in Q, while j corresponds again to the I head of its own nominal constituent, as in (13c).
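The decomposition in (13) can be pictured with a small sketch of the same toy kind as above; the parse table is ours and purely illustrative:

# Hypothetical rendering of (13): each Vagli clitic as a base plus an
# optional formative, with the category each piece lexicalizes per the text.
VAGLI_CLITICS = {
    "l":  [("l", "I")],              # (13a) uninflected masculine singular
    "la": [("l", "I"), ("a", "I")],  # (13b) l + nominal-class a (fem. sing.)
    "ji": [("j", "I"), ("i", "Q")],  # (13c) j + quantificational i (masc. pl. / dative)
    "le": [("l", "I"), ("e", "N")],  # (13d) l + pure N morphology (fem. pl.)
}

for form, parse in VAGLI_CLITICS.items():
    print(form, "=", " + ".join(f"{m}:{c}" for m, c in parse))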
According to our description, the language of Vagli furthermore has a dative clitic, ji, invariant for number and gender, which coincides with the form analyzed in (13c). More generally, the systematic study of Italian dialects conducted by Manzini and Savoia (forthcoming) reveals that what descriptive grammars treat as specialized 3rd person dative forms generally coincide with accusative forms, typically masculine plural ones. We shall see more evidence for this in what follows. In other words, case distinctions, at least between accusative and dative, are in fact not registered by pronominal systems. This confirms our tentative conclusion that only denotational properties are relevant to the definition of such systems. The lexicon in (13) provides the basis for predicting the insertion position of the relevant clitics. Thus, in virtue of its properties, which include in all cases one or more nominal bases in I, the whole series of clitics in (13) can be inserted in the N position of the string in (1). In virtue of its Q properties, on the other hand, the ji clitic can equally insert in Q. Therefore we are able to associate structural descriptions with the dative-accusative clusters in (12b), as illustrated in (14).
The dative interpretation of the ji clitic goes hand in hand with an interpretation of the Q property different from plurality, since, as we have seen, dative ji is ambiguous with respect to number as well as gender. In this connection, we note that the syntactic Q category is compatible with plurality, but it does not imply it; thus we expect plurality to be a possible interpretation of Q, but not a necessary one. Next, we observe that in the case of an accusative ji, the Q specification is part of its internal structure, but does not correspond to its position of insertion. Vice versa, in the case of a dative ji, Q represents both one of its internal specifications and its position of insertion in the clitic string. We propose that in the former case the plural reading of ji is determined by the fact that its Q specification remains purely internal to the clitic; its interpretation therefore is associated with the N predicative content of the nominal j head, corresponding to the traditional plural. In the other case, however, the internal Q specification of ji corresponds to a Q insertion position in the clitic string; this means that the dative interpretation, which is one of the possible interpretations attaching to the Q position of the clitic string, becomes available to ji. As it turns out, the plural reading and the distributive one are mutually exclusive, in the sense that the distributor is not necessarily plural. We take this to be an effect of scope; in other words, either i has scope internal to the clitic, in which case its reading is plurality, or it takes scope in effect over the sentential string, in which case its reading is dativity. One scope excludes the other.
It is worth stopping a moment to consider what this interpretation of the Q position of the sentential string may be, given that it cannot simply be reduced to plurality. The Q - N order seems to imply that Q hosts elements taking scope over N. In this perspective, the question regarding the nature of Q essentially reduces to which scopal properties the order Q - N instantiates. It is independently known from the literature that scopal phenomena are sensitive to the relative structural prominence of arguments. Thus Reinhart (1983) reads the relative scope of quantifiers off c-command relations in surface structure. May (1985), while introducing the Quantifier Raising operation in abstract syntax, notices further surface effects, such as the possibility for a wh-quantifier to commute in scope with a subject but not with an object.
One scope phenomenon that involves datives and accusatives in a particularly obvious way is that of distributivity; thus an appropriately quantified subject can distribute over an indefinite object, and a dative over an accusative, while the reverse is not true. This is in essence also the conclusion of Beghelli (1997). Exceptions involve the presence on the distributor of an each/every quantifier, or the presuppositional reading of the distributor; in the first case no correspondence to surface argument hierarchies holds, in the second case at least the indirect-direct object hierarchy breaks down. In both cases Beghelli (1997) argues that dedicated quantificational positions are involved. Some relevant examples of the normal case are provided in (15)-(16) from standard Italian:

(15) a. Loro hanno visto un uomo ciascuno.
        They have seen a man each
     b. *Un uomo li ha visti ciascuno.
        A man them has seen each

(16) a. Assegnai loro un compito ciascuno.
        I gave them an assignment each
     b. *Li assegnai a uno studente ciascuno.
        Them I assigned to a student each

Putting together these observations with the hierarchy of argumental positions postulated in (1), it is natural to hypothesize that the set of possible distributors corresponds to the set of arguments, i.e. datives or subjects, which have independently been motivated to occupy a position with quantificational properties, be it Q or D. Conversely, the accusative object does not have the properties of a distributor, in that it corresponds to the non-quantificational N category. In general, we agree with Beghelli (1997) and Beghelli and Stowell (1997) that quantificational properties are syntactically encoded; nor do they belong to the high C domain of the sentence, but can be found in the inflectional domain where arguments otherwise appear. However, in the present conception there are not two distinct series of argumental and quantifier positions, but a single series, which is partially defined in quantificational terms. Since the dative is associated with the Q position, we are led to conclude that the dative has quantificational properties, which can be construed as those of a distributor.
In some languages, which otherwise have properties comparable to those illustrated for Vagli, the 3rd accusative form precedes the 3rd dative form. This parametric possibility is illustrated by several dialects of Corsica and of Western Liguria, such as Olivetta S. Michele in (17), where (a) illustrates as before the dative form in isolation and (b) the combination of accusative and dative, in this order. By contrast, the 3rd dative generally precedes other clitics that it can cooccur with, for instance the partitive, as in (c).
(17) Olivetta S. Michele (Liguria)
a. el i 'duna a'ko
   he to.him gives this
   'He gives this to him.'
b. el u/i/a/e i 'duna
   he it-m./it-f./them-m./them-f. to.him gives
   'He gives it/them to him.'
c. el i n 'duna 'dyi
   he to.him of.them gives two
   'He gives two of them to him.'

On the basis of the discussion concerning Vagli, datives are associated with a high position in the sentential nominal string, and specifically with the Q position. This conclusion is confirmed by the empirical data of Olivetta, since, as shown in (17c), the dative clitic precedes the partitive (in N). If the dative is inserted under Q, the accusative, which precedes it, has at its disposal only the R position, where it can be preceded in turn by the subject clitic in D, as indicated in (18); as before, subject clitics are not our concern here. Explaining the parametrization between Vagli in (12) and Olivetta in (17) requires the lexicon of Olivetta to be accounted for. The i form, subsuming in descriptive terms the accusative masculine plural and the dative, has an internal structure comparable to that assigned to the ji clitic of Vagli in (13c); indeed, we propose that i lexicalizes a Q position within its phrase, as in (19d). As for the other forms of the accusative paradigm, we can assign u, a and e to the I category, treating them as nominal class markers, as in (19a)-(19c). We note that the systematic lack of an l formative in the structures in (19) makes the clitics of Olivetta identical to the inflections observed in the nominal and adjectival system.
What remains to be seen is how the lexicon in (19) relates to the structure in (18). To begin with, the Q specification of the i clitic makes it compatible with insertion in Q. What is more, the internal structure of all of the elements in (19) is evidently compatible with insertion in R; indeed, it is the general conclusion of section 1 that R is a normal insertion position for all elements that are associated with specificity properties. Thus we obtain the basic observed order, namely 3rd accusative followed by 3rd dative. In fact, nothing in the lexical entries in (19) prevents 3rd person clitics from inserting in N; we must assume that the fact that they take a scope position such as R, rather than the N position associated with aspectual properties, is what the Olivetta child learns as a parameter of the language.
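The Vagli/Olivetta parameter can be rendered in the same toy notation used earlier (again, a sketch of ours, not the paper's formalism): the same kind of dative-accusative cluster linearizes differently depending on whether 3rd person accusatives merge in N or in the scopal R position.

# Sketch of the insertion-position parameter: dative in Q in both dialects,
# accusative in N (Vagli) vs. in R (Olivetta). All names are ours.
HIERARCHY = ["DOp", "D", "R", "Q", "P", "Loc", "N"]

def linearize(cluster):
    return [form for form, cat in sorted(cluster, key=lambda c: HIERARCHY.index(c[1]))]

# Vagli (12b): dative ji in Q, accusative la in N -> dative before accusative.
print(linearize([("la", "N"), ("ji", "Q")]))  # ['ji', 'la']

# Olivetta (17b): accusative u in R, dative i in Q -> accusative before dative.
print(linearize([("i", "Q"), ("u", "R")]))    # ['u', 'i']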
Saying that in Vagli the accusative clitic appears in the N position associated with the internal argument interpretation, while in Olivetta it appears in the scopal position R for specific elements, is similar to saying that the wh-phrase appears in its thematic position in a language like Chinese, while it appears in scope position in a language like English. One may object that the position of the wh-phrase in English is the result of movement, not of merger. In fact, we take it that the idea that lexical material merges directly in the position where it surfaces holds not only for clitics but for all elements in grammar, hence for wh-phrases; one possible instantiation of this idea is the representational model of Brody (1995). More precisely, wh-phrases can be inserted in argumental position in English as well, in appropriate contexts. There is therefore a particularly close match between the properties of wh-phrases in a language like English and the properties described in section 1 for the clitic ci of standard Italian, which will insert either in Loc or in R according to the context. In general, we take it that the intrinsic denotational content of wh-phrases, as of clitics in the case at hand, determines their compatibility with several positions in the syntactic tree; their actual position will depend on other properties. These are identified by Chomsky (1995, 2000, 2001) with non-interpretable EPP properties of the landing site; but these are only notationally lexical properties, whilst in fact they stand for a syntactic parameter, which is fully comparable to the one given here for Vagli vs. Olivetta.
The same high position, R, that hosts the u, a, e clitics associated with the internal argument interpretation can also host the i clitic, including the case in which it is interpreted as a dative. This is shown by examples of the type in (20a), where the i clitic precedes the partitive n clitic and the impersonal hə clitic (corresponding to Italian si). While the partitive can be associated with the N position, the impersonal is naturally associated with the Q position, in virtue of its generic, i.e. quantificational, interpretation. Therefore the i clitic will itself appear in a position higher than Q, i.e. R, as illustrated in (21a). The relevant contrast is with a dialect like Vagli, where the dative actually occurs after the impersonal; in this case, we must assume that the relative order of the two elements is the reverse, with ji keeping the quantificational Q position and si being allowed in R, as in (21b).
Specialized ('opaque') forms for the combinations of 3rd dative and 3rd accusative
In what precedes we have considered in some detail languages where a morphologically 3rd person dative normally combines with 3rd person accusatives, in either one of the possible orders. Our data, on the other hand, also record the appearance of specialized clitic forms, endowed with 3rd person morphology, in connection with the clustering of 3rd person dative and accusative. This is reminiscent of the description provided by James Harris (1994) and Bonet (1995) for the Catalan dialect of Barcelona (Barceloní), where the cluster of accusative and dative (singular) does not surface as such but as a single form li, which corresponds to the dative in isolation. In the analysis of these authors, however, li is not simply the dative form but rather what they call an 'opaque' form (Bonet 1995), i.e. a specialized lexicalization of the cluster.
It is interesting to note that, contrary to what is implied by the Barceloní cases reported in the literature, the emergence of forms specialized for the 3rd dative - 3rd accusative context does not depend on the mutual exclusion between the two clitics, as can be seen in particular from several dialects of the Marche, such as Piobbico in (22). In the Piobbico dialect the accusative series is el/la/(l)i/lə, both in isolation and in combination with other clitics, for instance of the P series as in (22b); the i clitic furthermore represents the dative, in isolation as in (22a), but also in combination with clitics such as si, as in (22d). The lexicalization of dative and accusative in this language does not however lead to sequences i + el/la/(l)i/lə; rather, we find a specialized li form, preceded by i, as in (22c).

Let us begin by considering the internal structure of the accusative and dative series. Taking up again the analyses proposed in section 2.1, we assume that the l morpheme lexicalizes the I head of a nominal constituent, while a vocalic morpheme corresponding to inflectional class specifications such as a/ə appears in the I head of a separate nominal constituent embedded under l, yielding structures of the type in (23b)-(23c). The case of el in (23a) is analyzed, on the other hand, on a par with the pure l forms of section 2.1, i.e. as the I head of a nominal not embedding any inflectional specification. The i clitic in turn, corresponding both to the descriptive accusative masculine plural and to the isolation form of the dative, is amenable to a Q categorization, as in (23d). Finally, nothing in the grammar bars the combination of the Q morpheme, i.e. i, with the l base denoting definiteness. It is therefore natural to propose the internal structure in (23e) for the li form, which corresponds to the internal argument in the context of a dative, as in (22c), but also to the internal argument with (masculine) plural interpretation in other contexts, as in (22b).
The lexical properties of all the clitics in (23) are compatible with insertion in the N position of the clitic string, where the el/la/lə set receives the ordinary interpretation as internal arguments of the verb, as does the (l)i form, whose quantificational properties also induce a plural reading. As argued at length in section 2.1, the i clitic can furthermore lexicalize the Q position, or more correctly in this case the R position, since it precedes si in Q in examples such as (24a), receiving there a distributive interpretation, which corresponds to the descriptive label of dative. Remember that the i clitic of Olivetta could analogously lexicalize R, where however it could be interpreted not only as a dative (distributor) but also as a masculine plural accusative. We correlate this parameter between the two languages with the obvious fact that it is the whole 3rd person clitic series that merges in R in Olivetta, but only i in Piobbico. We maintain the proposal developed above that merger in the scopal R position corresponds to the specificity properties of the Olivetta 3rd person series; on the other hand, the exclusively dative interpretation of the i clitic in R in Piobbico is explained if what motivates it is specifically its scopal sentential properties as a distributor.
The problem that we need to consider is that in the case of a 3rd person argument distributing over a 3rd person internal argument, the latter is lexicalized by li. We note that the internal structure suggested for li in (23e) in fact consists both of a Q morpheme, i.e. in our terms a potential distributor, and of the l morpheme, which in terms of the present proposal lexicalizes 3rd person reference in the form of a pure definiteness property. Therefore it is reasonable to assume that li is the specialized lexicalization precisely for a 3rd person object in the scope of a distributor. The insertion position of li can in turn coincide with N, given the presence of the nominal l head; in this position it is of course preceded by i, which we can associate equally well with Q as with R, as indicated in (24b).
The types 'ci', 'ne', 'si' for the 3rd person dative
In many Italian dialects the so-called dative is represented not by a morphologically 3rd person form, but by one which coincides with a clitic of the language independently associated with the locative denotation, Loc, with the partitive denotation, i.e. DOp, or finally with the impersonal/reflexive one, i.e. a Q element of the si type.
At least this third type of lexicalization of the dative is known in the literature for contexts including a 3rd person accusative. In fact, in a language like Spanish, the incompatibility of 3rd accusative and 3rd dative leads to the apparent substitution of the dative by the se clitic (Perlmutter 1972, Bonet 1995, and in an optimality framework Grimshaw 1997). It is important to realize, however, that in this section we shall present cases where the lexicalization of the so-called 3rd person dative by a si-type clitic, a locative or a partitive is totally independent of the syntactic context; thus it holds in all clitic combinations, and in isolation as well.
To begin with, the dative coincides with the locative in the majority of Northern Italian dialects, as well as in many dialects of Central and Southern Italy. In a Northern Italian dialect such as Modena, g lexicalizes the 3rd person dative in isolation, as in (25a), and in combination with other clitics, as in (25b); at the same time it represents the locative form of the language, as in (25c). We describe the data in terms of a lexicalization of the dative by the locative, rather than the other way round, because forms such as g are unconnected to object or subject 3rd person morphology, contrary to the so-called datives considered in the previous sections. The g clitic not only precedes the accusative, as in (25b), and the partitive, but also follows all other clitics it can cooccur with, i.e. the P and si clitics, as in (25d)-(25e). Note that g is glossed 'there' or 'to him' (meaning 'to him/to her/to them', as above) in accordance with the translation; the same principle is followed in the glosses throughout this section.

(25) ... 'mεt dla 'rɔba
     it one there puts some stuff
     'One puts some stuff there/some stuff is being put there.'

The relative position of the g clitic with respect to P clitics, to the si clitic in Q and to accusative clitics confirms that it is associated with the locative denotation and inserted in the Loc position of the string, as in (26), which illustrates the position of the g clitic relative to the accusative clitic in N.
We can assign to the g clitic a lexical entry which directly reflects both its locative interpretation and its insertion point in Loc, by associating the g morpheme with the Loc position within the clitic noun phrase, as in (27).
While in sections 2.1-2.2 we have analyzed dialects in which the descriptive category of dative corresponds to a Q element, i.e. a distributor, in dialects of the Modena type the descriptive dative corresponds to locative properties, connected to the spatial coordinates of the discourse and the event. Thus, if possession is a sort of location (cf. Freeze 1992), the classical idea of Kayne (1984) that double object verbs embed a small clause, where the dative is the possessor of the accusative argument, amounts to a locative interpretation of the dative. More generally, typical dative-accusative verbs such as give can be described in terms of a change in the spatial location of the internal argument; thus John gave a book to Peter implies that the book, located at John at the beginning of the event, changed its location to Peter at the end of the event.
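Schematically, in our notation rather than the paper's, this construal of give can be written as:

\[ \textit{give}(x, y, z): \quad \mathrm{LOC}(y) = x \ \text{at} \ t_{\mathrm{start}} \ \wedge \ \mathrm{LOC}(y) = z \ \text{at} \ t_{\mathrm{end}} \]

so that the dative argument z names the location of the theme y at the end of the event.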
To complete our discussion, it is worth mentioning that in several Italian dialects the identification of dative and locative involves an i clitic which, like those considered in previous sections, is morphologically a 3rd person form, coinciding with the accusative (masculine) plural. A case in point is the Lombardy-type dialect of Casaccia, where in descriptive terms i is the masculine plural accusative, as in (28b), as well as the dative in (28a) and (28c), and the locative in (28d). On the basis of the discussion in sections 2.1-2.2 we are led to analyze the i clitic of the Casaccia dialect as a pure Q form, as in (29), merged in the Q position of the string. This means that a dialect like Casaccia, though superficially similar to dialects like Modena, turns out to be specular to them with respect to the lexicalization of dative and locative. A dialect like Modena never lexicalizes a distributor, but inserts a locative in contexts where other languages may have a distributor. Vice versa, we are led to claim that a dialect like Casaccia never lexicalizes a locative (not of the 'there' type, in any event), but rather inserts a distributor in contexts where other languages have a locative.
If our construal of the data is correct, we expect several consequences to follow. In particular, we predict that in languages that have both distributors and locatives, they should be licensed in the same eventive environments and therefore alternate to a large extent. This prediction is verified in standard Italian by examples of the type in (30a). The occurrence of locative hi and dative li in many of the same contexts is discussed for Catalan by Rigau (1982), who accounts for the alternation in terms of animacy. Both the distributor gli and the locative ci are however interpreted as inanimate in (30a). Another potential prediction of the account introduced here for Modena and Casaccia is that there will be eventive environments that, though compatible with Modena's g, will not be compatible with Casaccia's i, or vice versa, leading to no lexicalization of one of the two forms. It is indeed frequently the case that dialects of the Casaccia subgroup (including in particular Piedmontese dialects) present a reduced occurrence of i in locative contexts, where a specialized locative is lexicalized by other dialects. An example for Casaccia is provided in (30c), which represents the native speaker's translation of standard Italian (30b):

(30) a. Gliene/ Ce ne attacco due
        to.it-of.them/ there of.them I.stick two
        'I stick two of them to/on it.'
     b. Mi ci manda
        me there he.sends
        'He sends me there.'
     c. Casaccia
        a m 'manda
        he me sends
        'He sends me (there).'

That the same event (or state) can support different argumental series is well known from the literature on phenomena such as the locative alternation, whereby I loaded the wagon with hay alternates with I loaded the hay on the wagon. The approach often taken in the literature (Levin and Rappaport 1995) is that there are underlying arrays of arguments which can be differently linked to syntactic structures. Here we rather take the view that the superficially seen array is the only real one; thus it must be admitted that a verb such as load is compatible with a construal of the location as a location or as an internal argument (accusative). To take another example, Longa, Lorenzo and Rigau (1998) note that the same locative environments, with verbs of the 'to be' class, support in the different dialects of Iberian Romance either a locative clitic (Catalan) or an accusative neuter clitic (Galician, Asturian, Northwestern Spanish). The authors make the point that the appearance of the accusative neuter in the latter dialects is the implementation of a default strategy, the Clitic Recycling Strategy, requiring 'Use the (most) unmarked clitic to fill in gaps of the system'. Thus the appearance of the neuter accusative reflects the lack of a locative in the system, given presumably an underlying argumental array. The point of view taken here is of course different, namely that there are no underlying abstract arrays corresponding to either optimal or default surface lexicalizations, but rather that the same verbal environments can truly support different argumental arrays.
Several dialects of the Southern Lucania/Northern Calabria area (the so-called Lausberg area) and of the Salento again do not have a morphologically 3rd person form of the dative but lexicalize instead a ne clitic, which generally coincides with the partitive form. Thus for Nocara, (31a)-(31b) exemplify the lexicalization of the interpretation corresponding to a 3rd person dative by the nə clitic, both by itself and in combination with an accusative clitic. The example in (31c) illustrates the case in which nə lexicalizes the partitive; finally, (31d) shows that nə can occur twice in the string, giving rise to a combination of its two possible interpretations. The alternation between ða and ðaðə is phonologically determined.

The fact that two nə's cooccur, as in (31d), shows that there must be at least two points of insertion available to the clitic. On the basis of the discussion of partitives in section 1, the lower one can be identified with N; in such a case nə will receive what we call a partitive interpretation, connected to the internal argument of the verb, and will therefore appear after other clitics, such as the P clitics in (32a). The nə clitic that lexicalizes the 3rd dative interpretation in connection with accusative clitics can on the other hand correspond to a structure of the type in (32b), where nə is inserted in the higher R position. The cooccurrence of the two nə clitics is then predicted to be possible, as in (32c), with the lower nə inserted in N and interpreted as the partitive, and the higher one in R. On the other hand, we already suggested in section 1 that the DOp position of insertion is generally not available to a nə clitic, in that its DOp specification must be interpreted in connection with a predicative N head and cannot be interpreted as an intensional specification of the sentence.
If our characterization of the partitive in section 1 is on the right track, nə can be analyzed as in (33), where n represents a specialized DOp morpheme, while ə, like the vocalic inflections of Italian dialects in general, is associated with the head position of the clitic noun phrase. For reasons discussed by Manzini and Savoia (forthcoming), we assume that ə does not lexicalize a nominal class specification, i.e. I, but rather corresponds to an N element.
In section 1, we provided an explanation for how the so-called partitive interpretation comes about. The question posed by dialects such as Nocara is how the ne-type clitic becomes associated with the 3rd person dative interpretation. In particular, we proposed in section 1 that the ne-type clitic is not itself interpreted as an argument of the event but contributes to fixing the denotation of such an argument, namely the obligatory internal one. As in all of the cases that precede, we assume that the same basic characterization holds for contexts traditionally described in terms of a dative interpretation. Consider concretely the n-u cluster in (32b). While u in N is associated with the internal argument of the verb, n in R contributes to fixing its reference, by introducing a partitive specification or, in traditional Case terms, a genitive specification of the N argument itself. This strategy is particularly close to the one we have just described for dialects of the Modena type; in the latter case, the insertion of a Loc clitic fixes the coordinates of the internal argument of the verb, lexicalizing its possessor at the end of the event being described. Intuitively, languages like Nocara do the same thing, anchoring the reference of the internal argument of the verb to a possessor, which is however lexicalized as a DO p element; thus Nocara's (32c) corresponds roughly to 'I give it (and it is) of his'.
The final distribution of clitics to be considered in this section involves dialects of Calabria, where the si-clitic lexicalizes the impersonal and the 3rd person reflexive, exactly as described in section 1 for standard Italian (cf. Manzini and Savoia 2001), but also the 3rd person dative, both in isolation, as in (34a), and in combination with other clitics, as in (34b). Because of its general properties, we predict that, given the right context, both a reflexive reading and a 3rd dative one are equally salient and available. This is indeed the case in an example such as (34c), which thus also exemplifies the reflexive reading of si, meaning both 'he buys it for him' and 'he buys it for himself'.

(34) [S. Agata del Bianco examples not preserved in the source]

Taking up again the analysis of impersonal and reflexive si hinted at in section 1, we associate si of the relevant Calabrian dialects with a structure of the type in (35), where the s formative is associated with the Q specification internal to the clitic constituent. The i morpheme can in turn be identified with the I nominal head, since in the relevant dialects, which have a rather different inflectional structure from the other Italian dialects considered so far, it arguably corresponds to a nominal class specification, rather than to a Q morpheme itself.
(35) S. Agata del Bianco [structure not preserved in the source]

The Q categorization of the si clitic in (35) forms the basis for its insertion in the clitic string, which targets the Q position, preceding in particular accusative clitics in N, as illustrated in (36).
(36) S. Agata del Bianco [examples not preserved in the source]

Manzini and Savoia (2001) argue that the interpretive properties of impersonal and reflexive si can be naturally derived from its characterization as a quantificational variable (Manzini 1986). In particular, the so-called impersonal reading is the result of the binding of the si variable by a generic operator (Chierchia 1995), while the reflexive reading implies a pronominal interpretation dependent on an antecedent. The discussion of morphologically 3rd person datives in sections 2.1 and 2.2 above as Q elements, lexicalizing a distributivity property, suggests a similar treatment for the cases involving dative si such as (36). In other words, we are led to propose that in appropriate environments the quantificational properties of si can equally well receive a distributive reading in the relevant languages, hence conventionally a dative one.
Summary
It is worth stopping at this point to briefly summarize the conception of parametric variation emerging from the preceding discussion. In general, both traditional and generative analyses imply that there is a common nucleus of syntactic and semantic properties that are properly labelled together as a distinctive category of dative. In this perspective, the parametrization between languages would have to do with the particular way in which these same properties are morphologically realized, for instance by a specialized form (say of the i type) or by suppletion, typically construed as replacement by an underspecified form (say si).
As already noted at the outset, our theory programmatically avoids reference to what we consider to be theoretically expensive notions of underspecification or default; nor does it conceive of parameters in terms of the overt realization of the same underlying semantico-syntactic units by different lexical material. This is particularly evident if the proposals advanced here are compared with the model of Halle and Marantz (1993), in which syntactic operations manipulate features and lexical insertion is Late, meaning that it takes place at the end of the syntactic cycle. In the present model, as in the minimalist model of Chomsky (1995), syntactic structures are conceived as the result of applying the operation Merge to actual lexical material.
Therefore, we exclude the possibility that there is a 'dative' category, or a predefined 'dative' set of features, which remains constant in the face of superficial variation. On the contrary, where a language like standard Italian (or Vagli or Olivetta or Piobbico) lexicalizes a morphologically 3rd person distributor, another language such as S. Agata may lexicalize a si-type distributor. Another possibility is the lexicalization of a DO p specification of the N argument, i.e. ne as in the Nocara language, or of a Loc clitic specifying the spatial coordinates of the N argument, as in the Modena language. This latter case is interesting also in that it is equally possible to find languages, such as Casaccia, where a morphologically 3rd person element can be used not only in traditional dative contexts but also in locative ones.
The 'Spurious se' pattern
In sections 2.1 and 2.2 above we have considered several languages in which clusters of morphologically 3rd person datives and accusatives are possible in either order, while in section 2.3 we have illustrated several languages which lack a morphologically 3rd person dative independently of its clustering with other clitics. As we have already mentioned, in some Romance languages a morphologically 3rd person dative is excluded from clusters including a 3rd person accusative, though it surfaces in isolation or in combination with other clitics. This mutual exclusion between 3rd dative and 3rd accusative clitics has received wide attention in the literature, as have the suppletion phenomena to which it apparently gives rise. The best-known single instance of the dative-accusative mutual exclusion pattern in Romance languages is the so-called 'Spurious se rule' of Spanish. The discussion in section 2 is directly relevant to this complex question, in that the apparently suppletive pattern produced by the 'Spurious se rule' of Spanish, whereby se receives the dative interpretation in combination with an accusative clitic, is actually found in some languages (S. Agata) independently of any mutual exclusion. This amounts to saying that the pattern emerging from the apparent suppletion mechanism does not require any explanation beyond those provided above for languages where suppletion is not found.
Among Italian dialects, the spurious se pattern is attested by Sardinian ones. The essential data are reproduced in (37) for the dialect of Làconi. The language has a specialized dative form which emerges in isolation, as in (37a), and a full accusative paradigm, illustrated in (37b). In combination with an accusative, the dative interpretation is however conveyed by the si clitic, as in (37c). Both the accusative clitic and the dative clitic appear to follow all other clitics, such as the P clitic or the partitive in (37d)-(37e).

(37) [Làconi examples not preserved in the source]

The analyses that precede provide us with a basis for the systematization of both the clitic inventory and the insertion positions involved in a dialect like Làconi. The evidence concerning the position of both accusative and dative clitics is compatible with the conclusion that the insertion point of both clitics is N. This explains the fact that they surface to the right of all other clitics, including the P clitics which precede the accusative as in (38a), and the partitive clitic which precedes the dative as in (38b). Note that, in keeping with the conclusions of section 2, in (38b) the position of the partitive has been taken to be R; the argument that the dative is in N is particularly strong in cases of clusters such as (38b) since, if the dative could be inserted in a higher position in the string, we would expect the partitive itself to occur in N and thus to follow the dative. The si-accusative cluster can simply be assigned the structure in (38c), where si occupies the Q position, in consonance with the discussion in section 2.
As for the lexical entries of 3rd person clitics, we note that Sardinian dialects in general, and the Làconi one in particular, have fully specified sets of clitics corresponding to the accusative and dative paradigms. What is especially interesting is that, contrary to the other cases considered so far, there is no lexical overlapping of dative and accusative. Let us begin with the accusative paradigm. The l morpheme, which we analyze as an I head within the clitic constituent, combines with the u and a morphemes for the masculine and feminine singular respectively, which we in turn analyse as heads of an embedded nominal. To a and u can in turn be added the plural morpheme s; the latter will be identified with a lexicalization of Q, as in (39c)-(39d). The a morpheme is treated as an I, i.e. a nominal class (gender) morpheme, as in (39b), while considerations pertaining to agreement (in particular of the past participle) lead Manzini and Savoia (forthcoming) to analyse u as an N element, yielding (39a). It remains for us to consider the dative. In this case as well, we find the l morpheme, followed here by a morpheme i which we may take to be specialized for distributivity. Because of this we associate i with Q, as in (39e); the noteworthy property of the Làconi dialect in this respect is that it has two separate lexicalizations for plurality, i.e. s, and for distributivity, i.e. i. Nothing in principle prevents i and s from combining, and indeed they do combine in a dative plural clitic which takes the form illustrated in (39f). Note that in the case of Piobbico, we analyzed li as a noun phrase where i, compatible with the plural interpretation, is the Q inflection of the nominal embedded under the l head; similarly in the case of Làconi, though lacking the evidence for plural interpretation, we analyze i as a Q specification of a nominal embedded under l. Thus the whole series in (39) has a biphrasal structure.
The lexical properties of the clitics in (39), in particular the fact that they correspond to full noun phrases including a nominal head, induce insertion into the N position. This also holds for the li/lis forms, which embed a Q specification evidently not sufficient to induce insertion in Q. We can tentatively connect this to the fact that datives inserted in Q have either a pure i morphology (Olivetta, Piobbico, Casaccia) or a specialized consonantal head (Vagli) that effectively selects for the i morpheme itself. Clitics comprising an l-type head and an i morpheme lexicalize N, as in the Làconi dialect itself, in that of Piobbico and, below, in the Nociglia one. The R alternative is in principle open for both i-type and li-type clitics. In fact, there is evidence that the R insertion point can alternate with Q for i of Olivetta and possibly of Piobbico, though not for the Vagli or Casaccia forms. Similarly, we shall argue below that an R insertion point characterizes li of Celle di Bulgheria, though not the dative of Làconi. This is in keeping with the conclusion of section 2.1 that lexicalization in R represents an independent parameter; more precisely, merger in R corresponds to the lexicalization of a scope specificity position. What is directly relevant here is that, given the N insertion point for the Làconi dative, the mutual exclusion between accusative and dative can be attributed simply to the fact that they insert in the same N position. Either one can be inserted in N, but if the accusative is inserted then the dative is excluded, and vice versa.
The mutual exclusion between accusative and dative results in the apparent substitution of the dative by the si clitic. The basic lexical entry for si as a Q element straightforwardly predicts the existence of strings where si in Q is followed by the accusative inserted in N. The fundamental characterization of si as a quantificational variable implies the possibility of the impersonal interpretation, i.e. a generic interpretation, as well as of the reflexive interpretation, whereby the reference of si is fixed by an antecedent. In some dialects, as in the case of S. Agata in section 2.3, it also yields a distributive (dative) interpretation; the same holds for Làconi, when a cluster with the accusative is involved. That purely interpretive properties are involved, and not structural ones, is underlined by the ambiguity between the reflexive interpretation of si and the non-reflexive dative one, evident in contexts such as (40).
(40) Làconi
si az a ssamu'naðaza
to.him/to.himself them he.has washed
'He has washed them (e.g. his/his own hands).'

From the present perspective, the question is why the reflexive reading of si remains available in contexts where there is no accusative clitic, while the non-reflexive reading becomes impossible. We have seen in the course of the preceding discussion that the traditional 3rd person dative specification corresponds to the combination of two properties, namely the distributivity property and the property of 3rd person denotation. Indeed li combines both properties, namely the 3rd person property, lexicalized by the definiteness morpheme l, and the distributivity property associated with the i morpheme; si can be associated with distributivity given its Q nature, but does not have definite (3rd person) denotation. Therefore we propose that the 3rd person dative reading is available for purely quantificational si only in contexts in which definite denotation is independently lexicalized in the string, specifically by the clitic in N, corresponding to the argument over which the dative distributes. In other contexts it remains perfectly possible to have si, but only with its reflexive/impersonal reading, different from that of a definite pronoun.
Crucially, if what precedes is on the right track, the phenomena routinely described as the substitution of one clitic for another in a cluster are nothing of the sort. Two independent accounts are involved: one for the mutual exclusion of two clitics in a string, and another for the emergence of some other combination, such as si-accusative, as well as for the range of possible interpretations associated with it. It is important to emphasize that the analysis proposed at no point relies on the comparison between possible representations or derivations, differing in this respect from optimality approaches (Grimshaw 1997, 1999). Furthermore, no manipulation of features/categories is implied, either in the form of feature changing or in the form of feature fusion, fission and, in general, the operations introduced by Distributed Morphology (Halle and Marantz 1993). A particularly clear comparison is with Calabrese (1997), who also briefly considers the Sardinian examples. Indeed, Calabrese accounts for mutual exclusion on the basis of an ad hoc restriction on morphological feature clusters, while suppletion is produced by a repair rule changing one of the conflicting features.
Our proposal concerning Làconi, where the possibility of the non-reflexive dative reading for si in accusative contexts only is related to the lack of intrinsic 3rd person properties, is supported by the observation that in this dialect si also appears as the 1st and 2nd person plural reflexive, as illustrated by the reflexive paradigm in (41a). More generally, it lexicalizes reference to the 1st and 2nd person plural in non-reflexive contexts, as in (42). By contrast, a dialect such as S. Agata does not extend the denotation of si to the 1st or 2nd person plural in any context; even the reflexive paradigm in (41b) has distinct 1st and 2nd person plural forms, namely ndi and vi respectively.

(41)-(42) [paradigms not preserved in the source]

Given the patterns in (41)-(42), it is natural to conclude that the isolation use of si as the non-reflexive dative in the S. Agata dialect is connected to the fact that the clitic does not admit of what we have conventionally characterized as 1st or 2nd person readings, while the reverse is true in a dialect like Làconi. We may usefully begin by considering what a more precise characterization of the 1st and 2nd person plural readings may be, and how they can be made consistent with the basic nature of si assumed so far. A relevant consideration is that while there are several dialects which admit of si as the reflexive in 1st and 2nd person contexts, both in the singular and in the plural, none of the dialects reviewed by Manzini and Savoia (forthcoming) have si as the lexicalization of the 1st and 2nd person singular in non-reflexive contexts. Indeed, the so-called 1st and 2nd person singular correspond to individual denotations introduced directly by the universe of discourse, namely the speaker and hearer respectively. On the contrary, the denotation of the so-called 1st and 2nd person plural consists of a set including the speaker or hearer but also other individuals, whose reference is not necessarily anchored in the universe of discourse. Therefore we are led to conclude that while the usual non-reflexive, i.e. non-antecedent-determined, interpretation of si cannot subsume the speaker or hearer, nevertheless it can subsume reference to a set including the hearer or speaker.
Even in a language like standard Italian, which is comparable to S. Agata from the point of view of the properties of si, impersonal si can be associated not only with a generic interpretation, but also with a specific interpretation of sorts. Thus, while in (43a) si is most naturally interpreted as referring to human beings in general, the most natural interpretation of (43b) is one in which si refers to the restricted set of people belonging to the family. The two relevant interpretations are discussed by Chierchia (1995), who characterizes them as 'generic' and 'episodic' respectively. What is directly relevant for the present discussion is that one interpretation which is particularly salient in specific (or episodic) contexts is precisely the 1st person plural interpretation, i.e. 'we'; thus (43b) itself can be rendered as in questa famiglia siamo sempre scontenti 'in this family we're always unhappy'.
(43) a. [Italian original not preserved in the source]
'when one is good, one is happy'
b. In questa famiglia si è sempre scontenti.
'in this family one is always unhappy'

We propose that the ability of si to refer to a set restricted by contextual information forms the basis for its interpretation as the set contextually restricted by reference to the speaker, i.e. the so-called 1st person plural, or to the hearer, i.e. the so-called 2nd person plural. In this way, in keeping with the general program of a minimalist explanation of clitic systems, we account for the lexicalization of 1st and 2nd person plural reference, as in the Làconi dialect, without having recourse to ad hoc morphological mechanisms such as readjustment strategies (Bonet 1991).
On the other hand, since we derive the 1st and 2nd person plural denotations of si from the quantificational variable content which also gives rise to the impersonal and reflexive readings, we might expect dialects such as S. Agata (or standard Italian) to have them as well. To explain why in such languages si in fact has a 3rd person denotation only, we can slightly modify the lexical entry proposed for si in S. Agata as in (44a), where the s morpheme is associated with a nominal I head. On the contrary, we can associate the wider-ranging si clitic of Làconi with a lexical entry of the type indicated originally in section 2.3, where the s morpheme corresponds to a Q category, as in (44b); remember that in this dialect we also categorize i as Q.
(44) a. S. Agata del Bianco [lexical entries not preserved in the source]

The parameter in (44), taken together with our discussion of the 'Spurious se' pattern, suggests that only languages like S. Agata, which construe s as a nominal head, will lexicalize the distributor by means of si in all contexts; indeed the si clitic, by including a nominal head, can be said to have intrinsic 3rd person reference.
Other dialects, like Làconi, where s is a Q head, will be able to lexicalize the distributor by means of si only in contexts where 3rd person reference is independently lexicalized, in particular by the N argument that si distributes over.
Other suppletion patterns
According to the discussion that precedes, mutual exclusion between datives and accusatives and the emergence of apparent suppletion patterns are causally unrelated phenomena. As a consequence, we may expect that the ways of lexicalizing the dative in apparent suppletion contexts are exactly the same as those we have found for the dative in general, independently of suppletion. Thus, since in section 2.3 we have seen that the dative can be lexicalized in all contexts by morphologically non-3rd person forms, including the si-clitic but also the partitive or the locative, we may expect suppletion patterns to be possible not only with si, as in section 3.2, but also with the partitive or locative. In theories in which the insertion of si or some other clitic is caused by the mutual exclusion of accusative and dative, and dictated by criteria of underspecification, there is no particular reason to expect this result, i.e. that the patterns that we end up with are all and only those that are attested for dative contexts independently.
To begin with, we consider dialects where a partitive form is substituted for the morphologically 3rd person dative, which appears in isolation and in clusters with other clitics. These include some dialects of Calabria and Lucania as well as of Apulia, as illustrated in (45). As usual, (a) gives the isolation form of the dative, which is also found in clusters with non-accusative clitics, such as the partitive in (45d); in the latter case the dative appears at the end of the clitic sequence. In combination with accusative clitics, whose paradigm is provided in (b), the dative is substituted by a partitive-type clitic, as in (c). Note that example (45d) coincides with example (45c) on the string nε li; the latter is in other words ambiguous between the dative-accusative reading indicated in (45c) and the partitive-dative reading indicated in (45d).

(45) [Nociglia examples not preserved in the source]

The basic properties of the dialect of Nociglia, which account for the complementary distribution of the morphologically 3rd person dative and accusative, are not unlike those already considered in section 3.1 for Làconi. We associate the clitics of the accusative series lu/la/li/lε with the lexical entries in (46), where the l morpheme corresponds to a nominal I head while the vocalic morpheme that follows it occupies the I head of an embedded nominal or, in the case of u, its N head. The i morpheme, associated with the plural or distributive (dative) interpretation, is inserted in Q.
Because of their nominal properties, the clitics in (46) are inserted in the N position of the string. This holds in particular for the li form, which even as a distributor follows all other clitics it can co-occur with, including the partitive. If a high position were available to the dative, say Q, we would expect the partitive to occur in N, hence to the right of the dative. Instead the partitive is presumably inserted in R, as in (47). As discussed more than once, the availability of R to the partitive clitic but not to the dative depends on an independent parameter. Thus nε admits of lexicalization in the scopal specificity position corresponding to R, while the l series, including li, is constrained to the N position. The insertion of all the clitics in (46) in the N position means of course that they are in complementary distribution, excluding in particular the combination of li as a distributor with another clitic of the series.
The impossibility of combining two clitics of the set in (46) in a dative-accusative cluster gives rise to the apparent substitution of the dative by nε. According to the discussion in section 2.3 above concerning Nocara, nε can be analyzed as a DO p element, whose interpretation contributes to fixing the reference of the internal argument N of the event. This is true both of the partitive interpretation of nε and of its so-called dative one, whereby (45c) informally corresponds to 'they give it (and it is) of his'. As in the case of 'spurious se', the problem is why the partitive takes on this particular interpretation only in the presence of an accusative clitic.
In the traditional perspective, taken up and theorized by optimality accounts, inserting the more specialized form of the dative, i.e. li, is necessary when possible; insertion of nε in its place is just a last resort option for those contexts where insertion of li is not possible. The approach taken here, however, sees the alternation between the lexicalization of li and nε in a radically different light, since the two clitics effectively lexicalize different interpretive contents, which can only descriptively be imputed to a common label of dative. In this respect, it is crucial that li does not in any way represent a specialized dative, as optimality treatments would imply, since it is also the accusative masculine plural; it is not obvious, therefore, that li has more features in common with the gender- and number-neutral dative than nε does. In this sense neither li nor nε represents an optimal solution to some underlying 3rd person dative feature; or, more precisely, both of them are equally optimal solutions if, as Chomsky (1995) puts it, syntax is an optimal solution to the problem of interfacing LF and PF. We must therefore conclude that the child learning the Nociglia language learns a slightly more complex system than those considered so far, in which distributivity is lexicalized only in contexts where the internal argument is not a definite clitic pronoun. When the internal argument is such a clitic, what is lexicalized in the same contexts is a DO p specification.
The last typology to be considered here involves dialects of Central and Southern Italy where a morphologically 3rd person form of the dative, emerging in isolation and in non-accusative contexts, alternates with a Loc clitic in clusters with accusatives, as illustrated in (48). As before, (a) provides the isolation form of the dative, while (b)-(c) illustrate the accusative paradigm; note that the accusative plural form differs from the dative form in that the former, but not the latter, triggers gemination of the following consonant (a type of 'raddoppiamento fonosintattico'). As can be seen in (d)-(e), it is the Loc clitic that combines with the accusative in dative contexts, exactly as in locative ones, cf. (g). The combination of dative and partitive furthermore gives rise to the order li-ne, as illustrated in (f).

(48) [Celle di Bulgheria examples not preserved in the source]

The analysis of the clitic set in the dialect of Celle cannot abstract away from the fact that the relative order of clitics is compatible with a high insertion position for the dative itself. The latter in general precedes the clitics it co-occurs with, including the partitive and the si clitic; this suggests an R insertion position, as illustrated in (49).
An immediate consequence of the high insertion position of the dative in (49) is that the complementary distribution between datives and accusatives cannot be explained simply by the fact that they compete for the same N position. This situation, though not considered so far, is far from rare in Italian dialects; in other words, there are many dialects where mutual exclusion patterns are found even when two or more different positions in the string are available for insertion of the relevant clitics. Several such cases are considered in detail by Manzini and Savoia (to appear), who provide an explanation depending on the lexical properties of the clitics themselves. The idea is that the l morpheme of clitics whose insertion excludes that of other clitics of the same series lexicalizes all the properties it is associated with for the whole clitic string. In particular, then, the insertion of an l clitic prevents the re-lexicalization in the string of the nominal properties associated with the l morpheme, which are interpretively connected in our model to 3rd person reference.
Let us then consider the accusative series in the Celle di Bulgheria dialect, i.e. lu/la/li. As discussed above, we associate the l morpheme with the I head of a nominal constituent; in turn the u and a morphemes can be associated with their own nominal head, namely an I head in the case of a and an N head in the case of u. Since the plural li provokes phonosyntactic gemination of the following consonant, we are led to assume that its lexical entry includes an abstract final consonant. Following previous discussion, i is a Q formative; the fact that an abstract consonant enters into the interpretation of the clitic as plural suggests, however, an overall analysis on the model of Sardinian (39), in which it is the consonant that morphologizes plural. This yields a clitic paradigm of the type in (50). The idea that insertion in any position of the clitic string of an l clitic of the type in (50) succeeds in lexicalizing the relevant nominal properties for the whole string means that if the li clitic in (50d) is inserted in R, it prevents the insertion of an accusative clitic in N. Vice versa, insertion of an accusative clitic in N excludes that of the distributor li in the higher R position for the same reason. This explanation does not touch on the possibility of combining the clitics of the l series in (50) with other clitics, which do not have the relevant l-type properties; hence li can be combined with si and the partitive as in (49), and the accusative can of course be combined with the locative, as in the suppletion pattern in (48d).
As for the apparent suppletion pattern itself, its explanation follows already familiar lines. On the one hand, we have indicated in some detail in section 2.3 how the Loc clitic can come to lexicalize contexts which in other languages may be lexicalized by a 3rd person distributor (the so-called specialized dative). On the other hand, in a language like Celle it is only in combination with a 3rd person accusative that the Loc clitic takes on the so-called dative interpretation, i.e. one in which it provides the possessor coordinates of the internal argument of the verb. As in the case of the ne suppletion patterns, we will assume that this relatively complex distribution is learned by the native speaker.
The discussion concerning Celle di Bulgheria, in basing the mutual exclusion between dative and accusative on a lexical property of the l morpheme, implies that it is independent of the status of the clitic as a dative or as an accusative. Indeed, Manzini and Savoia (to appear) show that many Northern Italian dialects with subject clitics do not allow for the combination of a 3rd person subject clitic with a 3rd person object clitic. Interestingly, in the simplest case this mutual exclusion leads to the lexicalization of only one of the two clitics, namely the accusative; in many dialects the accusative takes a fixed form, reminiscent of the form taken by the accusative in the dative-accusative pattern of the Piobbico type. In no case that we know of can one of the two clitics actually be substituted by a different form altogether (ci, ne, si etc.). This further clinches the argument in favor of the conceptual and empirical separation of mutual exclusion and suppletion.
General summary
On the evidence of our discussion of the so-called dative, the traditional morphological category of Case is a spurious one. In some languages, indeed, reference to the dative reduces to reference to the spatial (Loc) or other (DO p) coordinates of the internal argument of the event. In other languages, reference to the dative is introduced by means of a category which appears to be associated with quantificational properties, interpreted both as distributivity and as plurality (as with the type i/li), or as genericity (in the case of si). What interests us directly is that all the empirical elements are in place for concluding that dative is crucially a descriptive category and does not correspond to a syntactic category. What is more, the categories that we adopt as alternatives to dative characterize intrinsic denotational content; thus Loc is interpreted with reference to the locative coordinates of discourse, and Q is interpreted as plurality, distributivity, genericity, etc. In no case are the relevant categories characterized by relational properties such as Case would be. Our discussion suggests, furthermore, that the conclusions just drawn for the dative hold for the Case categories of traditional grammar in general. Thus traditional accusative reduces to the internal argument interpretation (forcing a reanalysis of ECM), while nominative can be construed as another name for the EPP property.
The observation that in many dialects the dative coincides with the accusative (masculine) plural is in fact directly relevant not only to the status of the traditional feature of Case but also to that of number, which traditionally represents the distinction between singular and plural. In fact, the discussion that precedes supports the conclusion that there is no independent number category, but rather an all-purpose Q category underlying weak quantification, which encompasses plurality, as it does numeral quantification and more.
Nor do the other traditional phi-features survive a careful analysis. The gender category is in fact problematic even within the framework of Chomsky (1995), at least if we want to enforce the idea that agreement features are interpretable on nouns; for gender corresponds to a property with referential import (roughly feminine sex) only in a small subset of cases in the Romance languages. On the contrary, a characterization of gender that will hold true in all cases is that it corresponds to a nominal inflection class, as we have assumed throughout this article. Thus in a language like standard Italian the so-called masculine (-o) and feminine (-a) coincide with two separate inflectional classes, to which must be added a third (-e) class which can combine with either of the preceding (i.e., it is either feminine or masculine in traditional terms). Concerning Person, it is of course a category of our grammar; but its content is not that of traditional (and generative) treatments opposing speaker (1p), hearer (2p) and others (3p). Rather, we take it that P(erson) coincides with the 1st and 2nd person, whose distribution and general behavior differ from those of the traditional 3rd person.
Another respect in which the present approach differs from others found in the literature is that it does not introduce any form of comparison between derivations in the grammar to account for the 'preference' of one clitic over another according to context. That comparison between derivations (or representations) is involved is particularly evident in the recent optimality treatment of Romance clitics by Grimshaw (1997, 1999). In essence, according to Grimshaw (1997, 1999), lexical insertion takes place on the basis of the need to satisfy the maximum possible number of constraints defined by the grammar. This means that in isolation the closest match to a 3rd person dative, in some languages a dedicated form, is inserted. If for some reason the dedicated form is unavailable, the grammar provides for the insertion of a severely underspecified element such as se, other positively specified elements necessarily violating more constraints than it does. Essentially the same conceptual schema, based on the implicit or explicit comparison between derivations or representations, is in fact implied by morphological theories that use Elsewhere as the basic lexical insertion principle, effectively the main line of generative morphology down to current Distributed Morphology frameworks (Halle and Marantz 1993).
Our account of the relevant phenomena makes use of no Elsewhere principle, with the allied notions of underspecification or default, nor of comparison between derivations/representations. It seems to us that, to the extent that such notions represent an enrichment of the grammar, the present account has an edge over its competitors. As for notions of comparison of derivations or representations, recall that though they play some role in the earlier minimalist framework of Chomsky (1995), they have been shown to be not only unnecessary but to effectively derive the wrong results in more recent statements of the theory (Chomsky 2000, 2001), where they are altogether abandoned. As for notions of Elsewhere, and the attendant concepts of underspecification and default, we note that these notions have been discounted in the very phonological domain in which they first arose (cf. the government phonology literature, e.g. Harris 1994).
(25) Modena (Emilia)
a. a g 'dag kwas-'kε
I to.him give this
'I give this to him.'
b. a g al/la/i/li 'dag
I to.him it-m./it-f./them-m./them-f. give
'I give it/them to him.'
c. a g 'mεt kwas-'kε
I there put this
'I put this there.'
d. a m g la 'mεt
I myself there it put
'I put it there (for myself).'
e. a se g [form incomplete in the source]
to.it of.them puts inside two
'He puts two of them inside it.'
Post-error Slowing Reflects the Joint Impact of Adaptive and Maladaptive Processes During Decision Making
Errors and their consequences are typically studied by investigating changes in decision speed and accuracy in trials that follow an error, commonly referred to as “post-error adjustments”. Many studies have reported that subjects slow down following an error, a phenomenon called “post-error slowing” (PES). However, the functional significance of PES is still a matter of debate as it is not always adaptive. That is, it is not always associated with a gain in performance and can even occur with a decline in accuracy. Here, we hypothesized that the nature of PES is influenced by one’s speed-accuracy tradeoff policy, which determines the overall level of choice accuracy in the task at hand. To test this hypothesis, we had subjects performing a task in two distinct contexts (separate days), which either promoted speed (hasty context) or cautiousness (cautious context), allowing us to consider post-error adjustments according to whether subjects performed choices with a low or high accuracy level, respectively. Accordingly, our data indicate that post-error adjustments varied according to the context in which subjects performed the task, with PES being solely significant in the hasty context (low accuracy). In addition, we only observed a gain in performance after errors in a specific trial type, suggesting that post-error adjustments depend on a complex combination of processes that affect the speed of ensuing actions as well as the degree to which such PES comes with a gain in performance.
INTRODUCTION
We all make mistakes. For instance, many of us have experienced sending an email to the wrong person. After such an error, we typically write a second message to apologize and rectify. But when we send the second email, we usually take more time to check that the recipient is correct. We, therefore, adapt our behavior in order to avoid reproducing previous mistakes. Such an ability to adapt after an error is essential to achieving our goals.
Errors and their consequences are typically studied in two-choice reaction time tasks by investigating changes in decision speed and accuracy in trials that follow an error, commonly referred to as "post-error adjustments". Using such tasks, many studies have reported that subjects slow down following an error, a phenomenon called "post-error slowing" (PES; Fu et al., 2019; Dubravac et al., 2020; Nigbur and Ullsperger, 2020; Topor et al., 2021).
The functional significance of PES is, however, still a matter of debate (Wessel, 2018; Damaso et al., 2020; Kirschner et al., 2021). Because slowing down after an error is often associated with an increase in accuracy, PES is traditionally attributed to adaptive adjustments of decision policies, favoring a more cautious response style to improve performance in the subsequent trial (Rabbitt and Vyas, 1970; Smith and Brewer, 1995; Cavanagh et al., 2014; Siegert et al., 2014; Purcell and Kiani, 2016; Steinhauser and Andersen, 2019; Beatty et al., 2020). However, several recent studies have revealed that PES can also occur in a somewhat "maladaptive" way, as slowing does not necessarily lead to an improvement in accuracy; in fact, PES can even come with a decrease in decision accuracy (Ceccarini et al., 2019; Eben et al., 2020a; Schroder et al., 2020; Smith et al., 2020; Compton et al., 2021; Kirschner et al., 2021). These findings indicate that the functional significance of PES may vary according to the context in which it is observed.
A careful analysis of the literature reveals that the degree to which PES is adaptive (i.e., increases accuracy) or "maladaptive" (i.e., takes place without accuracy improvement) depends partly on the average level of accuracy of subjects in the task at play. That is, in studies reporting an adaptive PES, the overall level of choice accuracy is typically low (i.e., generally between 60% and 80% of correct choices) because the task is relatively complex and/or because the instructions require subjects to respond quickly within a given time limit (Siegert et al., 2014; Purcell and Kiani, 2016; Steinhauser and Andersen, 2019). In this situation, errors are clearly expected and slowing down after them has a positive effect on choice accuracy (Hajcak et al., 2003; Siegert et al., 2014; Dyson et al., 2018; Wessel, 2018; Damaso et al., 2020). By contrast, studies reporting a maladaptive PES rather use reaction time tasks that are quite simple, such that the overall level of choice accuracy is usually much higher (i.e., rather between 80% and 100% of correct choices; Notebaert et al., 2009; Nunez Castellar et al., 2010; Houtman et al., 2012; Eben et al., 2020a; Li et al., 2020; Kirschner et al., 2021; Compton et al., 2021). In such settings, errors represent infrequent and unexpected events that may catch attention, resulting in a maladaptive PES that deteriorates (rather than enhances) choice accuracy in the consecutive trial (Sokolov, 1963; Nunez Castellar et al., 2010; Houtman et al., 2012).
Thus, whether PES is adaptive or maladaptive might be partly influenced by choice accuracy. This in turn depends on task characteristics, such as the task's global difficulty, or on the speed-accuracy tradeoff (SAT) policy of the subjects performing the task. Indeed, most decisions require balancing speed and accuracy, making the SAT a universal property of behavior (Henmon, 1911; Rinberg et al., 2006; Salinas et al., 2014; Guo et al., 2020; Reynaud et al., 2020; Miletić et al., 2021). Humans and non-human animals are able to adjust their SAT depending on the context, favoring either hasty (i.e., high speed, low accuracy) or cautious (i.e., low speed, high accuracy) decision policies (Chittka et al., 2009; Heitz, 2014; Spieser et al., 2017; Thura, 2020). Hence, because choice accuracy varies depending on the SAT, it is plausible that PES can shift from being adaptive to being maladaptive depending on whether the emphasis is on speed or on accuracy when performing the same task in separate blocks.
In conclusion, past research suggests that errors can trigger PES of an adaptive or maladaptive nature (van Driel et al., 2012; Schiffler et al., 2017; Wessel, 2018). These two types of behavior have been evidenced in separate studies using distinct tasks or instructions where performance is characterized by either a low or a high level of choice accuracy, respectively. Here, we hypothesized that the nature of PES can also vary within a given task depending on whether the SAT context favors a hasty (i.e., high speed, low accuracy) or a cautious (i.e., low speed, high accuracy) decision policy. More precisely, we predicted that errors would be common and expected when the context favors choice speed, due to the promptness of the choices (Damaso et al., 2020), whereas they would be rare and unexpected when the context favors choice accuracy. Hence, we expected PES to be less adaptive (and potentially maladaptive) when the emphasis is on choice accuracy in a cautious SAT context compared to when the emphasis is on response speed. To test this hypothesis, we used a modified version of the "tokens task" (Cisek et al., 2009; Derosiere et al., 2019, 2022), involving choices between the left and right index fingers. In this task, incorrect choices led either to a low or a high penalty in two different SAT contexts, inciting subjects to implement either hasty or cautious decision policies, respectively. We predicted that PES would be more adaptive (i.e., associated with a higher increase in accuracy) in the low- than in the high-penalty context.
MATERIALS AND METHODS

Participants
A total of 43 healthy volunteers participated in this study (25 women; 23.5 ± 2.3 years old). All participants were right-handed according to the Edinburgh Handedness Inventory (Oldfield, 1971). None of them had any neurological disorder, history of psychiatric illness, or drug or alcohol abuse, and none was following any clinical treatment that could have influenced performance. Participants were financially compensated for their participation and could also receive extra compensation based on their performance in the task (see below). All gave written informed consent at the beginning of the experiment. The protocol was approved by the Ethics Committee of the Université catholique de Louvain (UCLouvain), Brussels, Belgium. The data presented here were also used (for a different purpose) in another article (Derosiere et al., 2022).
Tokens Task
Subjects were seated in front of a computer screen, positioned at a distance of 70 cm from their eyes. Both forearms were placed on the surface of a table, with the left and right index fingers resting on a keyboard turned upside down (Figure 1A). Subjects performed a variant of the "tokens task" (Cisek et al., 2009; Derosiere et al., 2021), which was implemented in LabView 8.2 (National Instruments, Austin, TX). In this decision-making task, participants had to continuously monitor the distribution of 15 tokens jumping one by one from a central circle to one of two lateral circles. The subjects were instructed to guess which lateral circle would ultimately receive the majority of the tokens; they had to indicate their choice before the last token jump, by pressing a key with the left or right index finger (i.e., an F12 or F5 key-press for the left or right circle, respectively).

FIGURE 1 | (A) Schematic of the tokens task. In each trial, 15 tokens jumped one by one every 200 ms from the central circle to one of the lateral circles. The subjects had to indicate by a left or right index finger keypress (i.e., the F12 and F5 keys, respectively) which lateral circle they thought would receive the majority of tokens at the end of the trial. For a correct response, the subjects won, in euro cents, the number of tokens remaining in the central circle at the time of the response. Hence, the reward earned for a correct response decreased over time, as depicted in (B). The right side of panel (A) depicts the monetary outcome in three exemplary cases. The upper inset represents the reward provided for a correct response between Jump 8 and Jump 9, that is, when seven tokens remain in the central circle at the moment the left circle is chosen; the middle inset represents the penalty for an incorrect response in the hasty context, fixed at −4 cents; the lower inset shows the penalty in a "Time Out" trial (no response), fixed at −4 cents regardless of the context. For representational purposes, the "Time Out" message is depicted below the circles in this example, while it was presented on top of the screen in the actual experiment. (B) Contexts. Incorrect responses led to a fixed negative score, which differed depending on the context. In the hasty context (shown on the left), the penalty was low, equaling only 4 cents (see red line), promoting fast decisions. In contrast, in the cautious context (shown on the right), the penalty was high, equaling 14 cents, thus promoting slower decisions.
As depicted in Figure 1A, in between trials, subjects were always presented with a default screen, consisting of three blue circles (4.5 cm in diameter each) displayed on a white background for 2,500 ms. Each trial started with the appearance of the 15 tokens, randomly arranged in the central circle. After a delay of 800 ms, a first token jumped towards the left or right circle, followed every 200 ms by the other tokens, jumping one by one to one of the two lateral circles. Subjects were asked to respond as soon as they felt sufficiently confident. The reaction time (RT) was calculated as the difference between the time at which subjects pressed the key to indicate their choice and the time of the first token jump (Jump 1). After subjects had pressed the corresponding key, the tokens kept jumping every 200 ms until the central circle was empty (i.e., 2,800 ms after Jump 1), so the feedback appeared only once all tokens were distributed. At this time, the chosen circle was highlighted either in green or in red, depending on whether the response was correct or not, respectively. In addition, a numerical score displayed above the central circle provided subjects with feedback on their performance (see the "Reward, Penalty, and SAT Contexts" section below). In the absence of any response before the last jump, the central circle turned red with a "Time Out" message and a "−4" score appeared on top of the screen. The feedback screen lasted for 500 ms and then disappeared at the same time as the tokens did (the circles always remained on the screen), denoting the end of the trial. From the appearance of the tokens in the central circle, each trial lasted 6,600 ms.
One key feature of the tokens task is that it allows one to calculate, in each trial, the "success probability" p_i(t) associated with choosing the correct circle i at each moment in time t. For example, for a total of 15 tokens, if at a particular moment in time the right (R) circle contains NR tokens, the left (L) circle contains NL tokens, and the central (C) circle contains NC tokens, then the probability that the circle on the left will ultimately be the correct one (i.e., the success probability of guessing left) is described as follows, where k runs over the possible numbers of remaining tokens that jump to the right circle:

p(L | NR, NL, NC) = (NC! / 2^NC) × Σ_{k=0}^{min(NC, 7−NR)} 1 / [k!(NC − k)!]

Although the token jumps appeared completely random to subjects, the direction of each jump was determined a priori, producing different types of trials according to specific temporal profiles of p_i(t). There were four trial types: ambiguous, obvious, misleading, and arbitrary. The majority of trials (60%) were ambiguous, as the initial jumps were balanced between the lateral circles, keeping p_i(t) close to 0.5 until late in the trial (i.e., p_i(t) remained between 0.5 and 0.66 up to Jump 10). Fifteen percent of the trials were "obvious", meaning that the initial token jumps consistently favored the correct circle (i.e., p_i(t) was already above 0.7 after Jump 3 and above 0.8 after Jump 5). Another fifteen percent of the trials were "misleading", where most of the first token jumps occurred towards the incorrect lateral circle (i.e., p_i(t) remained systematically below 0.4 until Jump 3; from then on, the following tokens jumped mainly in the other direction, that is, towards the circle that eventually turned out to be correct). Finally, we included 10% of trials that were completely arbitrary. These different types of trials were always presented in a randomized order.
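To make this concrete, the success probability can be computed in a few lines of Python. The sketch below is our own illustration (the function name and the example counts are not from the paper); it simply accumulates the binomial outcomes under which the left circle ends up with at least 8 of the 15 tokens, which is what the factorial expression above encodes.

```python
from math import comb

def p_left(n_left: int, n_right: int, n_center: int) -> float:
    """Probability that the left circle ends up with the majority
    of the 15 tokens, given the current token counts.
    Left wins if it finally holds >= 8 tokens, i.e. if at most
    7 - n_right of the n_center remaining tokens jump right."""
    max_right_jumps = min(n_center, 7 - n_right)
    if max_right_jumps < 0:          # right already holds >= 8 tokens
        return 0.0
    favorable = sum(comb(n_center, k) for k in range(max_right_jumps + 1))
    return favorable / 2 ** n_center

# Example: after 5 jumps, 4 tokens went left and 1 went right.
print(p_left(4, 1, 10))  # 0.828..., i.e. strong evidence for "left"
```

Because each remaining token is equally likely to jump either way, the sum of binomial coefficients divided by 2^NC is exactly the factorial expression given above.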
Reward, Penalty, and SAT Contexts
As mentioned above, at the end of each trial, subjects received a feedback score. Correct responses led to a positive score (i.e., a reward) while incorrect responses led to a negative score (i.e., a penalty). Subjects were told that the sum of these scores would turn into a monetary reward at the end of the experiment.
In correct trials, the reward corresponded to the number of tokens remaining in the central circle at the time of the response (in euro cents). Hence, the reward for a correct choice in a given trial gradually decreased over time (Figure 1B). For instance, a correct response provided between Jump 5 and Jump 6 led to a gain of 10 cents (10 tokens remaining in the central circle), but only to a gain of 5 cents when the response was provided between Jump 10 and Jump 11 (5 tokens remaining in the central circle). Hence, using a reward dropping over time increased time pressure over the course of a trial and pushed subjects to respond as fast as possible (Derosiere et al., 2022). The penalty provided for incorrect choices did not depend on the time taken to choose a lateral circle. Importantly though, it differed between the two contexts. In the first context, the cost of making an incorrect choice was low, as the penalty was only −4 cents, pushing subjects to make hasty decisions in order to get high reward scores (hasty context). Conversely, incorrect choices were severely sanctioned in the second context, as the penalty there was −14 cents, emphasizing the need for cautiousness (cautious context).
Moreover, not providing a response before Jump 15 (i.e., time-out trials) also led to a penalty, which was −4 cents in both the hasty and the cautious contexts. Hence, in the hasty context, providing an incorrect response or not responding led to the same penalty (i.e., −4 cents), further increasing the urge to respond before the end of the trial in this context. Conversely, in the cautious context, the penalty for making an incorrect choice was much higher than that obtained for an absence of response (i.e., −14 vs. −4 cents, respectively), further increasing subjects' cautiousness in this context. Hence, with these two contexts, we could consider post-error behavioral adjustments depending on whether the cost of errors was low or high, prompting the subjects to put the emphasis on decision speed (low accuracy) or on decision accuracy (high accuracy), respectively. As mentioned above, we expected to observe post-error slowing (PES) in both cases but predicted that it would be more adaptive in the hasty than in the cautious blocks.
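The whole payoff rule can be summarized compactly. The sketch below is a minimal illustration of the scoring scheme just described (the function and variable names are hypothetical, not taken from the task code), assuming the moment of the response is expressed as the number of tokens that have already left the central circle.

```python
def trial_score(context: str, outcome: str, jumps_elapsed: int) -> int:
    """Score (in euro cents) for one trial of the tokens task.
    context: 'hasty' or 'cautious'; outcome: 'correct', 'incorrect',
    or 'timeout'; jumps_elapsed: tokens that left the central circle
    before the keypress (0-15)."""
    if outcome == 'correct':
        return 15 - jumps_elapsed          # tokens still in the center
    if outcome == 'timeout':
        return -4                          # same penalty in both contexts
    return -4 if context == 'hasty' else -14  # incorrect choice

# A correct answer given between Jump 5 and Jump 6 earns 10 cents:
print(trial_score('hasty', 'correct', 5))       # 10
print(trial_score('cautious', 'incorrect', 5))  # -14
```

The asymmetry is visible in the last line: only in the cautious context is an error worse than not answering at all, which is what makes waiting worthwhile there.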
Sensory Evidence at RT
The tokens task also allowed us to assess the amount of sensory evidence (i.e., available information) supporting the subjects' choice at the RT. To estimate the level of sensory evidence at RT, we computed a first-order estimation as the sum of log-likelihood ratios (SumLogLR) of the individual token movements at this time (Cisek et al., 2009):

SumLogLR(n) = Σ_{k=1}^{n} log [p(e_k | S) / p(e_k | NS)]

In this equation, p(e_k | S) is the likelihood of a token event e_k (a token jumping into either the chosen or unchosen lateral circle) during trials in which the chosen circle S is correct, and p(e_k | NS) is its likelihood during trials in which the unchosen circle NS is correct; k indexes the different token jumps, and n is the number of jumps that occurred before the response. The SumLogLR is proportional to the difference between the number of tokens contained in each lateral circle; the larger the number of tokens in the chosen circle, as compared to the unchosen circle, the higher the evidence for the choice and thus the SumLogLR. We expected the latter to be overall higher in the cautious than in the hasty context, reflecting the higher evidence needed before committing to an accurate choice in the former context (Ratcliff, 2002; Heitz, 2014; Miletić et al., 2021).
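Assuming, for illustration, that each single jump toward the eventually correct circle has a fixed likelihood q (the value 0.75 below is our own placeholder, not a figure from the paper), the SumLogLR reduces to the count difference between the two circles scaled by log(q/(1−q)), which is why it is "proportional to the difference between the number of tokens". A minimal Python sketch under that assumption:

```python
import math

def sum_log_lr(jumps: list, chosen: str, q: float = 0.75) -> float:
    """First-order evidence estimate at response time.
    jumps: sequence of 'L'/'R' labels for the token jumps seen so far;
    chosen: the circle the subject picked ('L' or 'R');
    q: assumed likelihood of a single jump toward the correct circle
       (illustrative value, not taken from the paper)."""
    total = 0.0
    for jump in jumps:
        toward_chosen = (jump == chosen)
        p_s = q if toward_chosen else 1 - q        # p(e_k | S)
        p_ns = (1 - q) if toward_chosen else q     # p(e_k | NS)
        total += math.log(p_s / p_ns)
    return total

# 4 jumps left, 1 jump right, subject chose left:
print(sum_log_lr(['L', 'L', 'R', 'L', 'L'], 'L'))  # (4-1)*log(3) > 0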
Experimental Procedure
Subjects performed the task in the two contexts in two different experimental sessions conducted on separate days at a 24-h interval. The order of the two sessions (i.e., hasty and cautious) was counterbalanced across participants. As described below, each session involved the same structure, except for the addition of a familiarization block in the first session only, to allow subjects to become acquainted with the basic principles of the task (this was of course not necessary for the second session). Each session started with two short blocks involving a simple reaction time (SRT) task. This task was similar to the tokens task described above except that, here, all tokens jumped simultaneously into one of the two lateral circles. The subjects were instructed to respond as fast as possible by pressing the appropriate key (i.e., F12 or F5 for the left or the right circle, respectively). In a given SRT block, the tokens always jumped into the same circle, and subjects were informed in advance of the circle to choose within a block. This SRT task allowed us to estimate the sum of the delays attributable to the sensory and motor processes in the absence of a choice, as achieved in past studies (Cisek et al., 2009). Then, subjects performed a few practice blocks. The first one (10 trials) consisted of a version of the tokens task in which the feedback was simplified, indicating only whether the subjects' choice was correct or incorrect by highlighting the chosen circle in green or red, respectively; no reward or penalty was provided here. This first practice block served to familiarize subjects with the basic aspects of the task and was only used during the first session. The practice then continued with two blocks (20 trials each) where subjects performed the task in the context they would be involved in for the whole session (hasty or cautious blocks).
After that, the actual experiment involved eight blocks of 40 trials (320 trials per session; 640 trials per subject). Each block lasted about 4 min, and a break of 2-5 min was provided between blocks. Each session lasted approximately 150 min.
Statistical Analyses
The analyses comprised two parts: first, we ran some tests to check that our manipulation of the penalty indeed led the subjects to adopt different SAT policies in the two contexts. Second, and more related to the goal of the current study, we performed analyses to compare the post-error adjustments in the two contexts. Most of the statistical comparisons involved repeated-measures analyses of variance (ANOVA RM) run with the Statistica software (version 10.0, StatSoft, Oklahoma, United States). Post hoc comparisons were conducted using Tukey's Honestly Significant Difference (HSD) procedure. The significance level was set at p < 0.05. Moreover, for the analyses regarding post-error adjustments, we ran a Bayesian equivalent of the ANOVA RM (and t-tests) with JASP (Wagenmakers et al., 2018). In this case, the Bayes Factor (BF 10) quantifies the evidence for the alternative hypothesis against the null hypothesis, and the prior and posterior inclusion probabilities [P(incl) and P(incl|data)] refer to the importance of each parameter based on the prior and posterior probabilities of each model including it, respectively. All data are presented as mean ± SE.
Manipulation Check
In order to verify that our manipulation of penalty (−4 or −14 cents) successfully induced SAT adaptations, we considered the RT, the percentage of correct choices (%Correct), and the SumLogLR at RT in the two contexts. Overall, we expected to observe larger values for these variables in the high penalty (−14 cents) than in the low penalty (−4 cents) blocks, supporting a more conservative behavior in the cautious context compared to the hasty one. To address this directly, we analyzed each variable using two-way ANOVAs RM with CONTEXT (hasty or cautious) and TRIAL_TYPE (obvious, ambiguous, or misleading) as within-subject factors.
Post-error Adjustments
All analyses on post-error adjustments focused on behavior in ambiguous trials. This allowed us to characterize post-error adjustments in a homogeneous set of (ambiguous) trials. We investigated behavior in these trials, referred to as the ''n'' trials (trials n), according to whether they followed an error or a correct choice. The trials preceding trials n are referred to as trials n-1 and were separated according to whether they were ambiguous or misleading; there were too few errors in obvious trials to consider them as trials n-1. Thus, we considered post-error adjustments on ambiguous trials n according to the type of trials n-1 (ambiguous or misleading). For this analysis, we had to exclude 14 participants who had fewer than five trials n in at least one of the experimental conditions. As a result, statistical analyses were run on a total of 29 subjects (17 women; 23.4 ± 2.4 years old). On average, in the hasty context, we characterized adjustments following errors in 22 ± 8 ambiguous trials n-1 and 15 ± 6 misleading trials n-1 (corresponding to an error rate of 21 ± 7% and 44 ± 18%, respectively). In the cautious context, errors occurred in 13 ± 5 ambiguous trials n-1 and 10 ± 4 misleading trials n-1 (corresponding to an error rate of 13 ± 5% and 30 ± 12%, respectively).
There are different methods for quantifying post-error adjustments in trials n (Hajcak and Simons, 2002; Dutilh et al., 2012a). In the present study, we used a traditional approach consisting of calculating deltas (∆) for the RT (∆RT, ms) and for the %Correct (∆%Correct) in trials n as follows: ∆RT was obtained by calculating the difference between the RT in correct trials n that either followed an error or a correct choice in trials n-1 (Williams et al., 2016; Damaso et al., 2020; Smith et al., 2020). Similarly, ∆%Correct corresponded to the difference in %Correct between trials n following an error or a correct choice in trials n-1. Hence, PES manifests as a positive ∆RT. If this positive ∆RT is associated with a positive ∆%Correct, it means that the PES is adaptive (i.e., is associated with a gain in decision accuracy), while a null or negative ∆%Correct reflects a maladaptive PES (no gain or a drop in decision accuracy).
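A minimal sketch of this delta computation for one sequence of trials; it omits the paper's restriction of trials n to ambiguous trials and the split of trials n-1 by type, and the function name is ours:

```python
import numpy as np

def post_error_deltas(rt, correct):
    """Traditional post-error measures for one sequence of trials.

    rt:      reaction times (ms), one entry per trial
    correct: booleans, True where the trial was correct
    Returns (delta_rt, delta_pcorrect), comparing trials n that follow
    an error vs. a correct choice in trial n-1; assumes every cell is
    non-empty.
    """
    rt = np.asarray(rt, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    prev_error = ~correct[:-1]              # outcome of trial n-1
    rt_n, ok_n = rt[1:], correct[1:]        # trial n values
    delta_rt = (rt_n[prev_error & ok_n].mean()
                - rt_n[~prev_error & ok_n].mean())   # correct trials n only
    delta_pc = 100.0 * (ok_n[prev_error].mean() - ok_n[~prev_error].mean())
    return delta_rt, delta_pc
```

A positive `delta_rt` indicates PES; whether it is adaptive depends on the sign of `delta_pc`.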
These ∆RT and ∆%Correct were analyzed using two-way ANOVAs RM with CONTEXT (hasty or cautious) and TRIAL n-1_TYPE (ambiguous or misleading) as within-subject factors.
Manipulation Check
On average, subjects displayed RTs of 1866 ± 457 ms; they performed with a %Correct of 81 ± 18%, and did so for a level of evidence corresponding to 0.35 ± 0.72 (SumLogLR at RT value; a.u.). Importantly, as depicted in Figure 2 (upper panel), all these values were lower when the penalty was low (i.e., equal to −4 cents) compared to when it was high (i.e., equal to −14 cents), supporting a shift from a cautious to a hasty response policy when the penalty was low (all CONTEXT F 1, 28 > 7.3, all p < 0.05, see Table 1).
In addition, as shown on the lower panel of Figure 2, the ANOVA RM revealed an effect of the TRIAL_TYPE on all three parameters (all F 2, 56 > 222, p < 0.001). As expected, subjects responded faster and more accurately in the obvious trials than in the other trials (all p < 0.001). They were also faster in misleading than in ambiguous trials (p < 0.001) but showed a lower accuracy (i.e., lower %Correct) in the former trial type (p < 0.001), consistent with their misleading nature. Regarding the SumLogLR at RT, it was the highest in the obvious and the lowest in the ambiguous trials (all p < 0.001), consistent with the different predefined patterns of token jumps in these different trial types.
Finally, the RT and SumLogLR at RT did not display any significant CONTEXT × TRIAL_TYPE interaction (all F 2, 56 < 2.8, all p > 0.05). Yet, as depicted in Figure 3, this interaction was significant for the %Correct (F 2, 56 = 14.35, p < 0.001). Specifically, the %Correct was larger in the cautious context relative to the hasty one, but only in ambiguous and misleading trials (p < 0.01 and p < 0.001, respectively). In fact, the obvious trials were so easy that subjects did not make mistakes in this trial type regardless of the context.
Post-error Adjustments
Post-error adjustments (∆RT and ∆%Correct), calculated with the traditional approach (Dutilh et al., 2012a), are displayed in Figure 4 for trials n (always ambiguous), following either ambiguous or misleading trials n-1. Even if ∆RT values were positive in all conditions, which would be consistent with the occurrence of PES, Student's t-tests against 0 showed that this slowdown was only significant in the hasty context (∆RT significantly above 0 with a Bonferroni-corrected threshold of 0.05/4), regardless of the TRIAL n-1_TYPE (both t 29 > 3).
TABLE 1 | The mean square error (MSE), critical F-value, p-value, and partial eta-squared (η2 p) are provided for each factor (CONTEXT; TRIAL_TYPE) and their interaction (CONTEXT × TRIAL_TYPE), following the analysis of the reaction time (RT), percentage of correct choices (%Correct), and sensory evidence at RT (SumLogLR at RT). Significant p-values are highlighted in bold and blue.
FIGURE 3 | CONTEXT × TRIAL_TYPE interaction on the percentage of correct choices (%Correct). %Correct was lower in the hasty (blue bars) than in the cautious (red bars) context when considering Misleading (M) and Ambiguous (A) trials but not for the Obvious (O) trials. Note the absence of errors in these latter trials (%Correct = 100), whether in the hasty or cautious context. **p < 0.01, ***p < 0.001: significantly different.
Bayesian analyses (Wagenmakers et al., 2018) showed moderate to strong evidence for PES in the hasty context (BF 10 = [8.330, 98.677]), and moderate evidence for a lack of adjustment after an error in the cautious context (BF 10 = [0.275, 2.207]). Consistently, the ANOVA RM revealed a significant effect of CONTEXT on ∆RT (F 1, 28 = 6.26, p = 0.018), in the absence of a TRIAL n-1_TYPE effect (F 1, 28 = 0.49, p = 0.49) or a CONTEXT × TRIAL n-1_TYPE interaction (F 1, 28 = 1.39, p = 0.25). These results were supported by a Bayesian analysis showing moderate evidence for a context effect (BF 10 = 5.411, see Table 2). Figure 4 (lower panel) suggests a positive ∆%Correct in all conditions, which would indicate an increase in decision accuracy in trials n. Yet, the Student's t-tests showed that this effect was only significant in the hasty context; more surprisingly, it was only present following misleading trials n-1 (t 29 = 4.21, p < 0.001, Cohen's d = 0.781 and BF 10 = 118.923; Bonferroni-corrected threshold = 0.05/4, see Table 3 for more details). Note though that the variations in ∆%Correct between the different conditions were rather weak, as confirmed by the ANOVA RM analyses, which only revealed a marginal effect of TRIAL n-1_TYPE (F 1, 28 = 3.53, p = 0.07), with no effect of CONTEXT (F 1, 28 = 1.51, p = 0.23).
FIGURE 4 | Post-error adjustments of reaction time (∆RT; upper panel) and %Correct (∆%Correct; lower panel) depending on the context (hasty or cautious) and on whether trial n-1 was ambiguous (crosshatched bars) or misleading (empty bars). While the positive ∆RT in all conditions suggests the presence of PES, this slowing down was only significant in the hasty context. The latter PES came with a positive ∆%Correct but this effect was only significant following misleading trials n-1. Error bars represent SE. #: t-test against 0 (significant difference from 0). *p < 0.05: significantly different.
In conclusion, our data indicate that post-error adjustments varied according to the context in which subjects performed the tokens task, with PES being significant only in the hasty context, and a gain in performance being observed only after errors in misleading trials.
TABLE 2 | The prior inclusion probability P(incl), the posterior inclusion probability P(incl|data), and the change from prior to posterior inclusion odds (BF incl) are provided for each factor (CONTEXT; TRIAL n-1_TYPE) and their interaction (CONTEXT × TRIAL n-1_TYPE), following the analysis of reaction time and %Correct change in trials n (∆RT and ∆%Correct). In addition, the BF 10 grades the strength of evidence for the alternative hypothesis against the null hypothesis, and the partial eta-squared (η2 p) represents a measure of the effect size. BF 10 values revealing a significant factor effect are highlighted in bold and blue.
TABLE 3 | The critical t-value, the p-value, and the Cohen's d as a measure of the effect size are presented for each condition (after ambiguous or misleading trials n-1 in the hasty or cautious context), following the analysis of reaction time and %Correct change in trials n (∆RT and ∆%Correct). Significant p-values (with a Bonferroni-corrected threshold of 0.05/4) are highlighted in bold and blue.
DISCUSSION
The literature on post-error adjustments is quite diverse and controversial, especially regarding the nature of PES; although often adaptive, PES sometimes comes with a decline in accuracy, suggesting that it can be maladaptive in some instances. Here, we investigated whether the nature of PES can vary according to whether a subject behaves in a context favoring hasty or cautious decisions. To address this point, we had subjects perform the tokens task in separate blocks where errors were either poorly penalized, encouraging hasty responses (but low accuracy), or highly penalized, calling for more cautiousness (at the cost of speed). The results show that, overall, subjects slowed down after erroneous choices, supporting the presence of PES. Yet, despite the fact that ∆RT values were numerically positive in all conditions, this PES was only significant in the hasty context (after correction for multiple comparisons). Moreover, consistent with an adaptive adjustment in this context, we observed a significant improvement in performance, but only following misleading trials n-1; the positive ∆%Correct did not reach significance following ambiguous trials n-1.
The positive values of ∆RT in all conditions indicate that, if anything, subjects slowed down after an error. However, contrary to our expectation to observe PES in the two contexts, this ∆RT was only significant in the hasty context, suggesting that subjects only slowed down when they were in a context emphasizing speed (low accuracy) but not when the context promoted more accurate choices. PES, as observed in the hasty context, is usually associated with a cognitive control process recruited to prevent future errors (Smith and Brewer, 1995; Siegert et al., 2014; Beatty et al., 2020). Such a process is thought to operate at least in part at the level of the decision threshold, increasing its height with respect to baseline activity as a means to augment the amount of (neural) evidence accumulation required to reach the decision threshold (Dutilh et al., 2012b; Purcell and Kiani, 2016; Schiffler et al., 2017; Fischer et al., 2018; Derosiere et al., 2018, 2019, 2022; Alamia et al., 2019); this, of course, prolongs the decision time but increases the probability of choosing the right circle and therefore the reward rate.
Consistent with the occurrence of such adaptive adjustment maximizing the reward rate in the hasty context, the PES observed there was associated with positive ∆%Correct values (Botvinick and Braver, 2015; Thura, 2020; Vassiliadis and Derosiere, 2020). Yet surprisingly, this was only true after misleading trials n-1 but not after ambiguous trials n-1, as ∆%Correct did not reach significance following the latter trial type. Hence, PES in the hasty context led to a gain in performance following misleading trials n-1 but not after ambiguous trials n-1. Such a finding suggests that errors did not solely trigger shifts in decision thresholds. Indeed, if this had been the case, one would have expected PES to be accompanied by a consistent increase in accuracy regardless of the type of trial n-1 in which an error occurred. Alternatively, a non-exclusive possibility is that task engagement varied following errors in these two trial n-1 types. We believe this may be the case because post-error task engagement (or arousal) has been shown to vary with the level of confidence at the moment an error is made (Yeung and Summerfield, 2012; Purcell and Kiani, 2016; Desender et al., 2019), which itself depends on the amount of sensory evidence available to make the (incorrect) choice (Meyniel et al., 2015; Pouget et al., 2016; Sanders et al., 2016; Urai et al., 2017; Desender et al., 2019). Accordingly, past studies have shown that when errors are made based on poor sensory evidence (i.e., with a low confidence level, as in ambiguous trials), arousal decreases significantly in the following trial (Notebaert et al., 2009; Nunez Castellar et al., 2010; Navarro-Cebrian et al., 2013; Purcell and Kiani, 2016; Wessel, 2018; Desender et al., 2019), possibly precluding an initially adaptive PES from leading to a significant gain in performance. By contrast, as previously observed in cognitive interference tasks, when errors are related to the presence of high (conflicting) sensory evidence (as in misleading trials), arousal is found to increase in the following trial, an effect that may help dedicate attention to relevant sensory evidence (King et al., 2010; Danielmeier et al., 2011). Hence, it is plausible that in the current study, post-error task engagement was larger following misleading than ambiguous trials n-1, allowing PES to result in a performance gain following the former but not the latter trial type. Such a hypothesis could be tested in future work by investigating changes in pupil diameter following errors in our task (Kahneman and Beatty, 1966; Saderi et al., 2021).
Critically, Wessel proposed that PES arises from a sequence of processes including first a transient automatic response to the unexpected event (i.e., the error), which triggers a reorientation of attention, followed by an adaptive process increasing the decision threshold to prevent future errors (Wessel, 2018). Based on this adaptive orienting theory, the delay between the feedback on trial n-1 (indicating an error) and the start of trial n, which corresponds to the intertrial interval (ITI) duration, can influence the nature of PES and needs to be long enough to allow the second, adaptive process to take place. This was the case in our study, where the ITI duration was 2,500 ms, which, based on Wessel (2018), is long enough for the adaptive process to occur. Hence, because the ITI duration was also comparable between the PES conditions, it is unlikely that this aspect of the task affected our data.
Unexpectedly, in this study, we observed PES only in the hasty but not in the cautious context. One tempting explanation is that slowing down after errors could only effectively increase accuracy in the hasty but not in the cautious context. That is, because subjects emphasized speed in the hasty context, it is likely that a great proportion of errors were made because subjects responded too fast and not necessarily because the trial was difficult (Damaso et al., 2020). Hence, in this context, errors could easily be avoided by slowing down a bit in the following trial. In contrast, subjects were generally more cautious in the other context, and it is thus plausible that errors occurred when choices were complex rather than because responses were too hasty (Ratcliff and Rouder, 1998; Brown and Heathcote, 2008; Ratcliff and McKoon, 2008). Slowing down following these trials may not be effective, as it would not necessarily enhance accuracy; that is, even if subjects fail on the most complex choices, they are generally cautious enough to succeed on most trials, and slowing down further would not lead to any performance gain. Yet, we believe such an explanation does not hold here. Indeed, it is important to note that RTs in cautious blocks were around 2,017 ms, which falls between Jump 10 and Jump 11, thus coinciding with the moment sensory evidence in favor of the correct choice starts to increase greatly (see Section ''Materials and Methods''). This means that even if subjects were already generally cautious (and slower) in this context, slowing down would have been adaptive because it would have allowed providing responses based on more evidence.
A more plausible explanation is that the absence of PES in the cautious context is related to the way we promoted cautiousness in the current study. Indeed, changes in the SAT policy between the two contexts were engendered by manipulating the penalty size. However, even if error punishment is known to increase cautiousness (Potts, 2011; Derosiere et al., 2022), as desired here, monetary losses also generate an emotional response (Carver, 2006; Simoes-Franklin et al., 2010; Frijda et al., 2014; Eben et al., 2020b), a sense of frustration increasing with the size of the loss (Gehring and Willoughby, 2002; Holroyd et al., 2004; Yeung and Sanfey, 2004; Eben et al., 2020c). Importantly, such negative emotion has been shown to induce a post-error acceleration of RTs rather than a slowdown (Verbruggen et al., 2017; Dyson et al., 2018; Damaso et al., 2020; Eben et al., 2020c; Dyson, 2021). Accordingly, several studies have found that subjects act more impulsively after a loss or a non-rewarded trial than after a rewarded one (Gipson et al., 2012; Verbruggen et al., 2017; Eben et al., 2020c). Altogether, this literature suggests that the emotional response to monetary loss might have precluded us from observing PES in the cautious context. In other words, errors in the cautious context may have triggered opposite reactions counteracting each other; that is, a feeling of frustration due to the high penalty (speeding up behavior) and an adaptive adjustment to prevent hasty errors (slowing down behavior). In the future, it would be interesting to dissociate the manipulation of the context from that of the penalty. Moreover, as the level of punishment sensitivity impacts error monitoring (Unger et al., 2012; Laurent et al., 2018), it also seems relevant to add questionnaires measuring this personality trait, such as the behavioral inhibition system (BIS) scale.
Interpretation of the current data is limited by the fact that a large number of subjects were excluded from the analyses because of an insufficient number of trials, thus reducing the sample size and the statistical power. In addition, the low error rate in the cautious context and the presence of different trial types also impacted the calculation of PES by preventing the use of any method other than the traditional one. We recognize that this traditional method is prone to different biases, such as global fluctuations in subject performance or the number of post-correct trials outnumbering the number of post-error trials (Schroder et al., 2020). Note that even if some studies show that these biases can lead to an underestimation of post-error adjustments by decreasing effect sizes (Damaso et al., 2020; Schroder et al., 2020), others suggest that these biases do not radically change the results (van den Brink et al., 2014; Murphy et al., 2016).
In conclusion, our findings highlight a complex combination of processes that come into play following errors and that affect the speed of ensuing actions as well as the degree to which such post-error adjustment comes with a gain in performance or is rather maladaptive. The recruitment of these processes depends on several factors, including the context within which choices are made and the nature of erroneous trials, which affect altogether the subjects' strategy, their engagement in the task, and likely also their emotional reaction to the error.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Comité d'éthique Hospitalo-Facultaire Saint-Luc-UCL. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
FF designed the study, analyzed the data, and wrote the first draft of the manuscript. GD acquired the data. FV and JD contributed to the study design and data analyses.
All authors contributed to the article and approved the submitted version.
FUNDING
FF is a doctoral student supported by the Fund for Research Training in Industry and Agriculture (FRIA/FNRS: FC29718; Fonds pour la Formation à la Recherche dans l'Industrie et dans l'Agriculture). GD was a postdoctoral fellow supported by the Belgian National Funds for Scientific Research (1B134.18). FV is supported by a European Union Horizon 2020 research and innovation programme grant (ERC Consolidator grant 769595). JD was supported by grants from the Belgian FNRS (F.4512.14) and the Fondation Medicale Reine Elisabeth. This work was supported by a grant from the Belgian National Funds for Scientific Research (FRIA-B2; INHIBACTION). | 2021-12-25T16:11:29.574Z | 2021-12-23T00:00:00.000 | {
"year": 2022,
"sha1": "cd348fe390065619ab73b451d600e512fbf67502",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "f0b7cfb3cfc3f38c376fc984681d879cbeba7496",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Biology"
]
} |
216216023 | pes2o/s2orc | v3-fos-license | Dynamics of Psychological Status and Quality of Life Indicators in Patients with Diabetes Mellitus Type 2 and Chronic Gastritis Before and After the Treatment
Depression increases the risk of diabetes mellitus type 2 development and the subsequent risks of hyperglycemia, insulin resistance, and micro- and macro-vascular complications. The association between depression and diabetes mellitus type 2 may include autonomic and neurohormonal dysregulation, weight gain, inflammation, and structural changes in the hippocampus. Objective of the work. To evaluate the psychological status and quality of life indicators in patients with diabetes mellitus type 2 and chronic gastritis before and after treatment with the medicine Magnicum-Antistress. Materials and methods. Forty patients with an average age of 53.7 ± 4.1 years were examined at the Endocrinology Department of the Transcarpathian Regional Clinical Hospital named after A. Novak. All patients with diabetes mellitus type 2 and chronic gastritis were assessed for quality of life, psychological status, and stress levels using questionnaires, namely the SF-36, the "PSM-25 Psychological Stress Scale" methodology, and the Holmes and Rahe stress test. After the survey, all patients were treated with the medicine Magnicum-Antistress on the background of pathogenetic treatment. Results. After the 1-month course of treatment, the level of stress decreased: a high stress level was observed in 58.3% of male patients and in 35.8% of female patients. The level of stress-resistance also improved: a low stress-resistance level was observed in 66.7% of male patients and in 25% of female patients. After the course of treatment, according to the Quality of Life Assessment Scale (SF-36), patients showed a positive tendency in the indicators of the psychological and physical health components. Conclusions. The level of chronic stress in patients with DM type 2 and CG is mostly high (52.5%). The level of stress-resistance in more than half of patients with DM type 2 and CG is low (52.5%). Complex therapy with the use of the medicine Magnicum-Antistress in patients with DM type 2 and CG is pathogenetically justified and leads to an improvement in the quality of life and stress-resistance in these patients.
Depression is a disease in which a person feels depressed for a long time (at least two weeks), loses interest in activities that previously brought satisfaction, and cannot carry out daily activities. Clinically significant depression is present in one of every four people with type 2 diabetes mellitus. Depression increases the risk of diabetes mellitus type 2 development and the subsequent risks of hyperglycemia, insulin resistance, and micro- and macro-vascular complications. Conversely, a diabetes mellitus type 2 diagnosis increases the risk of incident depression and may contribute to more severe depression. The association between depression and diabetes mellitus type 2 may include autonomic and neurohormonal dysregulation, weight gain, inflammation, and structural changes in the hippocampus. [1] Magnesium is a vital electrolyte. Magnesium deficiency is associated with cardiovascular disease, arteriosclerosis, diabetes mellitus, and metabolic syndrome. Daily magnesium intake leads to a significant reduction in symptoms of depression and anxiety, regardless of age, gender, the initial severity of depression, or antidepressant use. Thus, magnesium is a fast, safe, and readily available alternative or supplement before starting antidepressants or increasing their dose. [2] One of the chronic diseases most studied with respect to magnesium deficiency is type 2 diabetes mellitus with metabolic syndrome. Magnesium plays a crucial role in glucose and insulin metabolism, mainly through its effect on insulin receptor tyrosine kinase activity, which transfers a phosphate from ATP to a protein. Magnesium can also affect the activity of phosphorylase b kinase by releasing glucose-1-phosphate from glycogen. Besides, magnesium can directly affect glucose transporter 4 (GLUT4) and helps regulate glucose translocation into the cell. [3] Magnesium also potentiates the phosphorylation of the cAMP-binding protein (CREB), increases BDNF expression in the prefrontal cortex, and enhances the activation of calcium/calmodulin-dependent protein kinase II (CaMKII). BDNF and CaMKII are reduced in certain brain regions in patients who suffer from different types of depression. [4] According to scientific sources, fasting plasma glucose, postprandial glycemia, and serum glycosylated hemoglobin (HbA1c) are higher in patients with hypomagnesemia than in patients with normomagnesemia. The percentage of adipose tissue is also significantly higher in patients with hypomagnesemia compared with patients with normomagnesemia. That is, a low concentration of magnesium in the blood of patients with type 2 diabetes is directly related to poor metabolic control. [5] Serum magnesium levels were found to decrease with increasing HbA1c levels and with the duration of type 2 diabetes. Hypomagnesemia is associated with poor control of diabetes mellitus type 2. Besides, serum magnesium depletion increases exponentially with disease duration. [6] Drugs such as proton pump inhibitors can impair the absorption of magnesium in the gastrointestinal tract. This effect may be the result of a drug-induced decrease in pH that alters the affinity of the transient receptor potential Melastatin-6 and Melastatin-7 (TRPM6, TRPM7) channels of the apical surface of enterocytes for magnesium. [7]
According to recent recommendations of the Magnesium Research Association, patients with diabetes mellitus benefit from magnesium intake in four ways: insulin sensitization, calcium antagonism, stress regulation, and stabilization of the endothelium. [8] The objective of the research. To evaluate the psychological status and quality of life indicators in patients with diabetes mellitus type 2 and chronic gastritis before and after treatment with the medicine Magnicum-Antistress.
Materials and Methods
Forty patients with an average age of 53.7 ± 4.1 years were examined at the endocrinology department of the Transcarpathian Regional Clinical Hospital named after A. Novak. This study was performed with the participation of 28 (70%) female and 12 (30%) male patients. All patients were diagnosed with diabetes mellitus type 2.
The diabetes mellitus type 2 diagnosis was made according to the International Diabetes Federation guidelines (IDF, 2005), namely by determining the serum glucose level in the fasting state and 2 hours after glucose intake, using the glucose oxidase test. The degree of diabetes mellitus compensation was assessed by the level of glycosylated hemoglobin (HbA1c, %), which was determined by chromogenic analysis using a Sysmex 560 apparatus (Japan) and Siemens reagents. All patients underwent fibroesophagogastroduodenoscopy (FEGDS, using a "Pentax FG-29V" endoscope, Japan) with a targeted biopsy (5 biopsy specimens were taken from the gastric mucosa). These specimens were submitted for further histological examination. HP was determined using a rapid urease test (CLO-test) and the determination of HP antigens in feces (CITO TEST H. Pylori Ag, Pharmasco, Ukraine). Quality of life, psychological status, and stress levels were evaluated in all patients with DM type 2 and CG using questionnaires, namely the SF-36, the PSM-25 Psychological Stress Scale, and the Holmes-Rahe Stress Assessment.
The "PSM-25 Psychological Stress Scale" aims to assess the level of stress feelings in somatic, behavioral and emotional indicators. Thus, the patients should evaluate their overall condition by selecting a number from 1 to 8 for each of the 21 statements, that most clearly expresses the patient's condition during the last days (4-5 days). Further, the results are processed and interpreted, if the patient scored less than 99 points -low stress, 100 -125 points -medium stress, more than 125 pointshigh stress.
The stress test (Holmes and Rahe) consists of a scale comprising 43 questions about important life events, each of which is answered with a certain number of points depending on the degree of stress. A high total score is an alarm signal warning of the risk of psychosomatic illnesses. If the total score is 150-199 points, the degree of resistance to stress is high; 200-299 points corresponds to the threshold level; and 300 or more points corresponds to low resistance.
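For clarity, a minimal sketch of the two scoring rules described above; the function names are ours, and the handling of a PSM-25 score of exactly 99 and of Holmes-Rahe scores below 150, which the text does not specify, are assumptions:

```python
def psm25_level(score):
    """Stress level from the PSM-25 total score, per the cut-offs above.
    The text leaves a score of exactly 99 unassigned; anything below
    100 is treated as low here (an assumption)."""
    if score < 100:
        return "low"
    return "medium" if score <= 125 else "high"

def holmes_rahe_resistance(score):
    """Stress-resistance band from the Holmes-Rahe total score."""
    if score >= 300:
        return "low"
    if score >= 200:      # threshold band (200-299 points)
        return "threshold"
    return "high"         # 150-199 points; scores below 150 are not
                          # banded in the text and are assumed high

print(psm25_level(118), holmes_rahe_resistance(210))  # -> medium threshold
```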
The "SF-36 Health Status Survey" refers to nonspecific quality of life (QOL) questionnaires. The 36 items of the questionnaire are grouped into eight scales: physical functioning, role-playing activity, physical pain, general health, vitality, social functioning, emotional state, and mental health. The indices for each scale vary between 0 and 100, where 100 represents full health, all scales form two indicators: the psychological and the physical component of health.
After the survey, all patients were treated with the Magnicum-Antistress medication on the background of pathogenetic treatment. Magnicum-Antistress was given at a dose of 2 tablets 2 times a day for 1 month.
The inclusion criterion for this study was a confirmed diagnosis of type 2 diabetes and chronic gastritis associated with HP; the exclusion criterion was type 1 diabetes. All studies were carried out with the consent of the patients, and their methodology was consistent with the 1975 Declaration of Helsinki and its 1983 revision.
This scientific research is a fragment of the DB theme # 851 "Mechanisms of formation of complications in diseases of the liver, methods of their treatment and prevention" (state registration number 0115U001103), as well as of the scientific theme of the Department of Internal Medicine Propedeutics "Polymorbid pathology in the diseases of digestive tract, pathogenesis peculiarities, possibilities of correction" (state registration number 0118U004365).
The analysis and processing of the results of the patients' examination were carried out using the computer program STATISTICA 10.0 (StatSoft Inc., USA).
Results and Discussion
The following results were obtained from the analysis of the PSM-25 psychological stress questionnaires. According to the results of the questionnaire, chronic stress was found in all patients before treatment. Also, among patients with DM type 2 and CG, a high level of stress was significantly more likely to be observed in male patients compared with female patients. The obtained data on stress levels are presented in Fig. 1.
As can be seen from Fig. 1, among male patients, 3 (25%) had a moderate stress level and 9 (75%) had a high stress level. After the treatment, male patients experienced an improvement in their psychological state, with 5 (41.7%) patients having an average stress level and 7 (58.3%) a high stress level.
As can be seen from Fig. 2, a low level of stress was found in one female patient (3.6%). There were 15 (53.6%) female patients who scored 100-125 points, indicating an average level of stress, and 12 (42.8%) patients scored more than 125 points, indicating a high level of stress. The level of stress decreased in female patients after the treatment: a low level of stress was detected in 5 (17.7%) patients, an average level in 13 (46.5%), and a high level in 10 (35.8%) patients. According to the data, a high level of stress-resistance was not found in patients of either gender with DM type 2 and CG. The results of the stress assessment test (Holmes and Rahe) questionnaire are presented in Fig. 3.
Before the treatment, a low stress-tolerance level was more commonly reported in male patients with DM type 2 and CG, namely in 10 (83.3%) patients, while 2 (16.7%) patients had a threshold level of stress tolerance. After the treatment, a low stress-resistance level was observed in 8 (66.7%) male patients and a threshold level in 4 (33.3%) patients. As can be seen from Fig. 4, before the treatment, female patients showed a low stress-resistance level in 11 (39.3%) cases and a threshold stress-resistance level in 17 (60.7%) cases. After the treatment, a low stress-resistance level was found in 7 (25%) female patients and a threshold stress-resistance level in 21 (75%) patients.
Based on the evaluation of the psychological health component according to the SF-36 quality of life rating scale in patients with DM type 2 and CG before and after the treatment, the following results were obtained (Fig. 5).
Therefore, according to Fig. 5, the psychological health component improved in patients with DM type 2 and CG after the treatment: patients rated their psychological health at 54 points, compared with 32 before the treatment; role-emotional functioning at 10 points before and 44 points after the treatment; social functioning at 43 points before and 50 points after the treatment; and vitality at 23 points before and 45 points after the treatment. Based on the evaluation of the physical health component according to the "SF-36" quality of life rating scale in patients with DM type 2 and CG before and after the treatment, the following results were obtained (Fig. 6).
Therefore, according to Fig. 6, the physical health component improved slightly in patients with DM type 2 and CG after the treatment: patients rated their overall health at 38 points, compared with 30 before the treatment; the pain factor at 40 points before and 58 points after the treatment; role-physical functioning at 25 points before and 35 points after the treatment; and physical functioning at 38 points before and 46 points after the treatment. Thus, after the 1-month course of treatment, the level of stress decreased: a high stress level was observed in 58.3% of male patients and in 35.8% of female patients. The level of stress-resistance also improved: a low stress-resistance level was observed in 66.7% of male patients and in 25% of female patients. After the course of treatment, according to the Quality of Life Assessment Scale (SF-36), patients showed a positive trend in the indicators of the psychological and physical components of health.
A sufficient amount of magnesium in the blood helps to improve glycemic control and to reduce the level of oxidative stress and insulin resistance, and at least every fourth patient with DM type 2 suffers from depression requiring correction of the psychological status; therefore, the use of the medicine Magnicum-Antistress in DM type 2 is justified. Magnesium is involved in the metabolism of carbohydrates, proteins, and fats, as well as in redox reactions, and in combination with pyridoxine it has a positive effect on diabetic polyneuropathy. This medicine also helped to normalize the function of the nervous system, reducing the feeling of irritability and fear. | 2020-04-02T09:12:21.417Z | 2020-03-27T00:00:00.000 | {
"year": 2020,
"sha1": "ce7d832ade6a590d3085f09e3b9394138f6fb0dc",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.21802/gmj.2020.1.10",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "614c972c5f21a8cd8f5aaa8e4d5bbb28c50d9399",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237770946 | pes2o/s2orc | v3-fos-license | Efficient and Accurate Hemorrhages Detection in Retinal Fundus Images Using Smart Window Features
Diabetic retinopathy (DR) is one of the diseases that cause blindness globally. Untreated accumulation of fat and cholesterol may trigger atherosclerosis in the diabetic patient, which may obstruct blood vessels. Retinal fundus images are used as diagnostic tools to screen abnormalities linked to diseases that affect the eye. Blurriness and low contrast are major problems when segmenting retinal fundus images. This article proposes an algorithm to segment and detect hemorrhages in retinal fundus images. The proposed method first performs preprocessing on retinal fundus images. Then a novel smart windowing-based adaptive threshold is utilized to segment hemorrhages. Finally, conventional and hand-crafted features are extracted from each candidate and classified by a support vector machine. Two datasets are used to evaluate the algorithms. Precision rate (P), recall rate (R), and F1 score are used for quantitative evaluation of segmentation methods. Mean square error, peak signal to noise ratio, information entropy, and contrast are also used to evaluate the preprocessing method. The proposed method achieves a high F1 score of 83.85% for the DIARETDB1 image dataset and 72.25% for the DIARETDB0 image dataset. The proposed algorithm adapts adequately when compared with conventional algorithms and hence can act as a tool for segmentation.
Introduction
The World Health Organization (WHO) estimates that diabetic retinopathy (DR) is the fifth leading cause of visual impairment and the fourth leading cause of blindness in the world. Globally, 800 million people have myopia, hypermetropia, or presbyopia. Of these, 100 million have moderate-to-severe distance vision impairment [1]. The increasing number of individuals suffering from diabetes mellitus (DM) has inevitably made the number of DR patients rise. Factors responsible for the prevalence of DM include obesity, a sedentary lifestyle, physical inactivity, and lack of awareness [2]. Early detection and prevention are important steps to avoid blindness from DR. Effective control of vision loss includes regular eye examination and management of risk factors (such as glycemia, hypertension, and hyperlipidemia) [3,4]. The American Diabetes Association (ADA) recommends that type 1 diabetes patients be screened three to five years after onset, and type 2 diabetes patients be screened one year after onset [5,6]. Abnormalities in DR are categorized as non-proliferative diabetic retinopathy (NPDR) and proliferative diabetic retinopathy (PDR). NPDR is the earliest stage of DR and changes the blood vessels of the eye. The changes in NPDR do not present any symptoms and are not visible to the naked eye. During the later stage of the pathology, microaneurysms turn into hemorrhages.
In past decades, eye examinations were performed sequentially. The best-corrected vision is first determined, and then a slit-lamp microscope is used to obtain the pressure of the eye. Next, ophthalmoscopy is used to assess the entire retina (macular area, retinal periphery, and ora serrata). This procedure requires the judgment of an experienced ophthalmologist [7]. Recently, refined techniques such as digital retinal imaging, fundus photographs, and fluorescein angiography have become popular methods for retinal examination [8]. However, these do not replace the primary examination of the eye. The new modalities advance the field of ophthalmology; hence they act as a second interpreter for the ophthalmologist to make findings plausible. Regular eye examination reduces the risk of permanent vision loss and alerts many people to serious health problems (such as high blood pressure, high cholesterol, diabetes, and cancer). Retinal imaging produces a clearer view into the eye for the doctor to see any early signs of health conditions [9]. Despite the numerous advantages associated with retinal imaging, the images are often blurry, poorly illuminated, and have a narrow field of view. The characteristics of hemorrhages are similar to some of the dark regions caused by lighting conditions and to the blood vessels; hence, segmenting hemorrhages in retinal images is difficult. Figure 1 is an example of retinal images.
The problems of retinal images necessitate an algorithm that can segment hemorrhages efficiently and accurately. The proposed algorithm first enhances the contrast of the fundus image with adaptive histogram equalization. Then the edge information adaptively drives the intensity transformation to make the hemorrhages appear more prominently. Next, a fuzzy-logic-based filter is used to sharpen the images. Finally, the candidate image is extracted, segmented, and classified. The major contributions of this paper are summarized as follows: 1. A method to overcome the problem of blurriness and to distinguish hemorrhages from the blood vessels; 2. A preprocessing and candidate extraction method for hemorrhage detection; 3. A smart window-based feature extraction procedure for segmentation of hemorrhages.
Related Work
Several methods [10][11][12][13][14][15][16][17][18][19][20][21][22] have been proposed in the literature for retinal image segmentation. Reference [10] provides a detailed review of retinal blood vessel segmentation. Several empirical and machine-learning-based image processing approaches have been employed for hemorrhage segmentation. Reference [13] is a survey of recent developments in the automatic detection of DR. References [14][15][16] used the k-nearest neighbor algorithm to classify lesions in fundus images. Kande et al. [17] and García et al. [18] used a support vector machine and compared the performance of different classifiers. Although these methods produce good results, they evaluated their output based on lesions and images.
Huang et al. [11] used a convolutional neural network (CNN) for hemorrhage segmentation. This method preprocesses, trains, and refines the data, and finally segments with a CNN. The technique uses a coarsely annotated bounding box for each hemorrhage; the box size is increased at random while still enclosing the hemorrhage. Next, a refining network is used to capture the data from the bounding-box procedure. Finally, the CNN is applied for the automatic detection of hemorrhages. Although this method gives accurate results, it requires a large amount of training data for effective segmentation. Rani et al. [12] combined edge detection, morphology, and connected components for the segmentation of blood vessels and hemorrhages. This method involves segmentation and classification stages. In the segmentation stage, blood vessels and lesions are extracted from the fundus image, and then local binary features are extracted. Finally, a Naïve Bayesian classifier is used to discriminate between hemorrhage and non-hemorrhage. This method may produce a high false-positive rate (common to edge segmentation approaches). Arun et al. [20] segmented hemorrhages using splat features. First, the images are segmented into different partitions (called splats). Then the watershed algorithm is used to extract boundaries. Next, pixels are grouped by an irregular grid algorithm, and splat boundaries are estimated from the magnitude of the gradient. Srivastava et al. [21] used a Frangi-based filter for red lesion detection in retinal fundus images, while Mohamed et al. [22] used mathematical morphology to detect non-proliferative diabetic retinopathy.
Most of these studies evaluate their output based on lesions or images. Output evaluation by image counts the number of lesions or images and then finds the sensitivity or specificity of the output. This paper proposes a different output evaluation method. Unlike much of the existing literature, we evaluate output based on pixels. It is believed that output evaluation using pixels instead of lesions/images provides a clearer assessment when detecting retinal images [19]. Our method detects hemorrhages (NPDR or PDR) in retinal fundus images.
Furthermore, previous algorithms perform segmentation of non-hemorrhage candidate regions based on estimations from morphology or connected component analysis. They reduce the false-positive rate by modeling hemorrhages as regions of arbitrary irregular shape. The proposed method, unlike previous methods, considers all candidates for analysis and detection.
The paper is organized as follows. The fundamental concepts and proposed method are reported in Section 2. The results of the experiment are illustrated in Section 3. Finally, a discussion is given in Section 4, and the paper is concluded in Section 5.
Dataset Setup
The proposed method was tested on two datasets (DIARETDB0 and DIARETDB1). The first dataset has 130 retinal fundus images, of which 110 contain signs of DR and 20 are normal [23]. The second dataset has 89 fundus images, of which 84 show mild non-proliferative signs and 5 are normal [24]. The images in both datasets have dimensions of 1152 × 1500 pixels and were captured with a digital fundus camera with a 50-degree field of view under various imaging settings.
Methodology
The proposed method involves three stages: preprocessing, segmentation, and feature extraction with classification. Figure 2 is the block diagram depicting the various steps of the proposed method.
Preprocessing Stage
Some retinal images have lower quality because they are taken under different lighting conditions. Common characteristics of these images include: (1) exposure to external light ruins some regions close to the rim; (2) the edges of the hemorrhage are blurry and the contrast is low. Therefore, we preprocess retinal fundus images with two approaches: (a) brightness and contrast enhancement and (b) image sharpening. These approaches are discussed subsequently.
Brightness and Contrast Enhancement
To increase the contrast of the color channels, contrast-limited adaptive histogram equalization (CLAHE) [25] is used. A sample of enhanced images is depicted in Figure 3c. Image brightness is adjusted using an adaptive gamma correction technique [26,27]. The gamma correction method is obtained by using an adaptive parameter [28]. The conventional transform-based gamma correction is given by:

T(P) = P_max (P / P_max)^γ

where P ∈ {0, 1, …, P_max}, P_max is the maximum gray value of the image, and γ is the correction parameter. Reference [26] obtained the gamma value based on an improved cumulative density function (CDF). The improved CDF produced a good result; however, images are over-enhanced (which may result in loss of vital information). A gamma in the range γ < 1 is used to brighten the dark regions. To create an adjustment that can withstand low-light images, we obtain the gamma value using the Sobel operator, whose gradient magnitude is given by:

G = √(G_x² + G_y²)

where G_x and G_y are the gradients in the x and y directions. The gamma value is applied to each individual color channel, as shown in Figure 3. The graph in Figure 3b explains the brightness correction for different values of gamma.
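As an illustration, a minimal sketch of this preprocessing step using OpenCV; the paper does not spell out the mapping from the Sobel gradient to γ, so the gradient-to-gamma scaling (and the CLAHE settings) below are placeholder assumptions:

```python
import cv2
import numpy as np

def enhance_channel(channel):
    """CLAHE followed by adaptive gamma correction on one uint8 channel."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    channel = clahe.apply(channel)

    gx = cv2.Sobel(channel, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(channel, cv2.CV_64F, 0, 1)
    grad = np.sqrt(gx ** 2 + gy ** 2)          # G = sqrt(Gx^2 + Gy^2)

    # Placeholder mapping: low-edge, dark images receive gamma < 1
    # (brightening); this is our assumption, not the paper's formula.
    gamma = float(np.clip(grad.mean() / (grad.max() + 1e-9), 0.4, 1.0))

    p_max = 255.0
    out = p_max * (channel / p_max) ** gamma   # T(P) = Pmax * (P / Pmax)^gamma
    return out.astype(np.uint8)
```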
Image Sharpening
The experiment suggests that images in the green channel provide better quality for the retinal components; hence, images in the green channel are adopted for this study [29]. Image sharpening is the inverse operation of image blurring. A blurred image does not contain meaningful texture and edge information and thus shows low quality. The purpose of image sharpening is to make objects clear. Linear image sharpening techniques are proven adequate for many applications, but they are more susceptible to noise. Non-linear techniques preserve the edges and image information efficiently. This research used fuzzy-logic-based non-linear unsharp masking [30] to refine the blurriness of fundus photographs. The benefit of adopting fuzzy logic is to enhance the edges while sharpening the smooth areas. In addition, it can better deal with noise, due to the physical randomness of the image acquisition system, by controlling the parameter λ. The method computes the fuzzy relationship between the intensities of the focused pixel and its neighbors in a moving window (W) of size 3 × 3.
For each pixel (x, y) in image f, the intensity value f(x, y) obtained in Equation (4) is adjusted linearly with a high-pass filter h(x, y), where λ is a constant, μ is the parameterized membership function that analyzes the intensity difference between two pixels, h is the output of the pseudo-high-pass filter, N is the set of neighboring pixels around (x, y), and a, b, and c are parameters. In this work, the membership function classifies the intensity difference ∆ = f(x, y) − f(m, n), with (m, n) ∈ N, into four levels through the parameters a, b, and c (0 < a < b < c). An intensity difference in the range |∆| ≤ a is considered noise that does not contribute to the membership function. A difference in the range a ≤ |∆| < b yields a strong sharpening effect to highlight image details. The sharpening effect is limited when the difference falls in the range b ≤ |∆| < c. Previous experiments suggested that setting the values of a, b, and c to 2, 5, and 50, respectively, provides effective results. Therefore, we set the values of a, b, and c to 2, 10, and 50, respectively, and λ is set to 1.5. A sample of image sharpening is depicted in Figure 4a.
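To illustrate the four-level classification, a small sketch of a membership weight over the intensity difference; the exact membership shape used in [30] may differ, so the linear ramp for the limited range is an assumption:

```python
def membership(diff, a=2, b=10, c=50):
    """Weight for an intensity difference, following the four levels above
    (0 < a < b < c); a linear ramp is assumed for the limited range."""
    d = abs(diff)
    if d <= a:                       # |diff| <= a: treated as noise
        return 0.0
    if d < b:                        # a < |diff| < b: strong sharpening
        return 1.0
    if d < c:                        # b <= |diff| < c: limited sharpening
        return (c - d) / (c - b)
    return 0.0                       # very large differences: left untouched
```

In use, this weight would be applied to each difference f(x, y) − f(m, n) inside the 3 × 3 window W before summing into the pseudo-high-pass output h.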
Segmentation Stage
Image segmentation of objects and patterns has become the hallmark of digital image processing. Obtaining good performance for many segmentation tasks is a great challenge. In this paper, we adopt three procedures to segment fundus images. They include hand-crafted procedures, seed points of candidates, and the SWAT. These procedures are discussed in subsequent subsections.
Hand-Crafted Image
A common procedure to achieve good accuracy in segmentation tasks is to extract the region of interest (ROI) before segmentation. The black background is unnecessary because the hemorrhages appear in the vitreous humour of the retina. The background is illuminated to reduce the search space and achieve automation. The retinal mask is constructed by binarizing the median-filtered green channel of the image. The eroded mask is subtracted from the retinal mask to obtain the boundary of the retina. The image with the illuminated background (Figure 4b) is used for feature extraction in subsequent steps.
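A minimal sketch of this mask construction with OpenCV; the median-filter and erosion kernel sizes, and the binarization threshold, are assumptions, not values from the paper:

```python
import cv2
import numpy as np

def handcraft_image(rgb):
    """Retinal mask, retina boundary, and background-illuminated image."""
    green = rgb[:, :, 1]                                  # green channel
    blurred = cv2.medianBlur(green, 25)                   # median filter
    _, mask = cv2.threshold(blurred, 10, 255, cv2.THRESH_BINARY)
    eroded = cv2.erode(mask, np.ones((15, 15), np.uint8))
    boundary = cv2.subtract(mask, eroded)                 # rim of the retina
    out = green.copy()
    out[mask == 0] = 255                                  # illuminate background
    return out, mask, boundary
```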
Seed Points of the Hemorrhage Candidates
To extract the seed points, Gray Level Co-Occurrence Matrix (GLCM)-based local cross-entropy thresholding [29] eliminates the low response of the matched filter [31]. Then, a morphological operation isolates the objects from the blood vessels, thereby producing the seed points.
Blood vessels and hemorrhages share intensity information because of their similar appearance (both look darker than the surrounding regions). Their edges are sharp compared with other retinal structures. These features are useful for creating a matched filter that enhances the edges of the retinal structures (for details about matched filters, see [32]). The method proposed by Fangyan et al. [29] excellently suppresses the weak response of the matched filter. Sample results of the matched filter and the GLCM-based local cross-entropy are shown in Figure 5a,b. We use mathematical morphology to analyze spatial structures. The morphological opening (erosion followed by dilation) is used to eliminate redundant objects and to break up retinal structures larger than the structuring element. The size of the structuring element used in this experiment is 11 × 11. A sample result of the morphology operation is shown in Figure 5c. The output points act as seed/initial points for the automatic segmentation of hemorrhages.
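A sketch of the final seed-extraction step, assuming the matched-filter response has already been binarized by the GLCM-based local cross-entropy thresholding; using connected-component centroids as the seed points is our simplification:

```python
import cv2
import numpy as np

def seed_points(binary_response):
    """Seed points from a 0/255 thresholded matched-filter response map."""
    kernel = np.ones((11, 11), np.uint8)   # 11 x 11 structuring element, as in the text
    opened = cv2.morphologyEx(binary_response, cv2.MORPH_OPEN, kernel)
    n_labels, _, _, centroids = cv2.connectedComponentsWithStats(opened)
    return opened, centroids[1:]           # drop the background centroid
```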
Smart Window-based Adaptive Thresholding for Segmentation (SWAT)
The main function of segmentation algorithms is to locate the precise contour of an object. This task becomes easy when the region to be segmented is simplified: there should be a significant dissimilarity between the surrounding regions and the objects. Since hemorrhages share intensity information with other retinal structures and can be located anywhere in the retinal region, a novel smart window-based adaptive thresholding (SWAT) is proposed to segment hemorrhages.
A hemorrhage lying at the rim of the retina sometimes blends into the darkish background, and detecting a hemorrhage of this category is a challenging task. The proposed segmentation method (SWAT algorithm) is automated with the help of a bounding box of the retinal mask (the binary mask obtained in the previous section). The bounding box is then stretched 80 pixels wide in each direction to achieve a sufficient search space for hemorrhage segmentation (see Figure 5d). The white portion of the image corresponds to the search region and is surrounded by the black region. In addition, we segment those hemorrhages that are attached to blood vessels. The segmentation of such hemorrhages is complicated by the homogeneity between the two retinal structures. This problem is resolved by using the effectiveness value method: the higher the effectiveness value, the better the object is separated from its surroundings. The appropriate selection of threshold levels yields the maximum effectiveness value and thus better segments the retinal structures from each other.
The SWAT initiates the segmentation process from the seed points that emerge from the blood vessels and hemorrhages. A seed is used to estimate retinal structures from the hand-crafted image (Figure 5c). The SWAT originates from each seed to capture the hemorrhages in the window. The window size increases after every iteration.
Otsu's method [33] finds the threshold values successively and maximizes the inter-region variance using an image histogram. This method utilizes the complete gray range and chooses the partition that provides the maximum inter-region variance. The normalized histogram of an image is given by

$$p_i = \frac{n_i}{\sum_j n_j},$$

where $n_i$ is the frequency of intensity $i$ occurring in the image. In our experiment, the number of threshold levels starts from one and goes up to twenty. The optimum threshold value $t^*$ is computed by taking the weighted variance between regions:

$$t^* = \arg\max \sigma_B^2, \qquad \sigma_B^2 = \sum_{k=1}^{K} \omega_k \,(\mu_k - \mu_T)^2,$$
where $\omega_k$ is the total probability and $\mu_k$ the mean value of the individual region $k$, $\sigma_B^2$ is the weighted variance, and $K$ is the total number of regions. The threshold levels split the image into regions. For instance, if $K$ is equal to one and the threshold value is $t$, then the intensities are divided into two regions, $R_0 = \{0, 1, 2, \ldots, t\}$ and $R_1 = \{t + 1, t + 2, t + 3, \ldots, L - 1\}$. It can be observed that the total mean $\mu_T$ does not depend upon the threshold value $t$. The effectiveness value $\eta$ is used to control the degree of thresholding. It provides information about how well the threshold value can distinguish the specified region from the rest of the regions in a window. It is computed by

$$\eta = \frac{\sigma_B^2}{\sigma_T^2},$$

where $\sigma_T^2$ is the total variance of the window. The proposed SWAT employs the optimum threshold $t^*$, the effectiveness value $\eta$, the seed points of the morphologically opened image, and the hand-crafted image for segmentation and feature extraction. The search process starts by computing the bounding box from a seed point. The vector of threshold levels is then selected according to Equation (11),
where $T$ is a vector that comprises the threshold levels; the maximum length of $T$ and the maximum number of iterations are both 20. Equation (11) assumes that the intensities of the hemorrhage are lower than the minimum threshold value of the vector $T$, while the rest belong to the non-hemorrhage regions. After the selection of $T$, the window is binarized using the minimum threshold of the vector, $\min(T)$. The conditions in Equation (11) adapt to the regional diversity between the hemorrhages and foregrounds. When a bright foreground encompasses a hemorrhage, the iterative process reaches the specified effectiveness value ($\eta = 0.8$) in fewer iterations and with a smaller number of threshold levels. When a dark region surrounds a hemorrhage, it reaches the effectiveness value in comparatively more iterations and threshold levels.
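The following sketch shows how the weighted between-region variance and the effectiveness value can be computed from a window histogram; the function names are hypothetical, and the region bookkeeping is one plausible reading of the description above.

```python
import numpy as np

def region_stats(hist, thresholds):
    """Split a normalized histogram at the given thresholds and return the
    weight (total probability) and mean intensity of each region."""
    p = hist / hist.sum()
    levels = np.arange(len(p))
    edges = [0] + [t + 1 for t in sorted(thresholds)] + [len(p)]
    stats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        mu = (levels[lo:hi] * p[lo:hi]).sum() / w if w > 0 else 0.0
        stats.append((w, mu))
    return stats

def effectiveness(hist, thresholds):
    """Otsu's effectiveness measure: between-region variance divided by
    total variance; values near 1 mean well-separated regions."""
    p = hist / hist.sum()
    levels = np.arange(len(p))
    mu_T = (levels * p).sum()
    sigma_T2 = ((levels - mu_T) ** 2 * p).sum()
    sigma_B2 = sum(w * (mu - mu_T) ** 2
                   for w, mu in region_stats(hist, thresholds))
    return sigma_B2 / sigma_T2
```

In a SWAT-style loop, threshold levels would be added until `effectiveness(...)` reaches 0.8 or 20 levels have been tried.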
A window can have more than one hemorrhage. Priority is given to hemorrhages with larger areas because they are more important than smaller ones in the diagnostic process. All objects except the largest ones are removed. This maneuver is performed so that dark shade does not mislead the segmentation and the actual hemorrhage is not eliminated. This rationale is adopted because a seed point may belong to a hemorrhage as well as to the intensity variations around dark shades. Furthermore, objects closer to the center of the window are retained because of the higher probability that they are hemorrhages. This probability criterion is proposed because the window emerges from the seed point, and the seed point is more likely to belong to a hemorrhage because its intensity profile is analogous to the matched filter. The distance between the center $(x_c, y_c)$ of the window and an object is computed using the Euclidean distance

$$d_i = \sqrt{\big(x_c - x^{(i)}\big)^2 + \big(y_c - y^{(i)}\big)^2},$$

where $x^{(i)}$ denotes the x spatial locations of pixels in the object and $y^{(i)}$ denotes the y spatial locations of pixels in the object, $i$ belongs to the set $\{1, 2\}$, and $d$ is a vector holding the two distances of the two different objects. From the vector $d$, the object with the longest distance from the center is eliminated. The pixels at the border of the window determine whether the window stops or keeps growing. If the hemorrhage is smaller than the window, then SWAT stops because no object pixels are found at the border of the window. When the object is bigger than the window, the size of the window is increased. This task is accomplished by checking the border pixels of the window. For every iteration, the size of the window is increased by updating the vertices in $V$ using the border vector $Q$. $Q$ is a vector that contains the information of the border pixels: the binary variables $q_L$, $q_T$, $q_R$, and $q_B$ represent the left, top, right, and bottom border pixels. Once all the variables in $Q$ are 0, the SWAT has segmented the object (no further iteration is required to grow the size of the window). If any variable in $Q$ has a value of 1, the corresponding border is not empty, and the object extends further in that direction. The size of the window is then increased, and the hand-crafted image is cropped using the updated vector $V$.
In addition, the white region is characterized as the search region (see Figure 5d); the vector $S$ represents the indices of the white region. If the window emerges from a seed point belonging to dark shade, the window may keep growing and can go beyond the image range. To achieve automation, the conditions in Equation (14) check whether the vertices in the vector $V$ lie in the search space or not. The segmentation process is iterated until all conditions are met (conditions: all variables in the vector $Q$ become zero, or $V \not\subset S$).
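A hedged sketch of the window-growth loop follows. The border-flag vector Q, the one-pixel growth step, and the use of plain image bounds in place of the stretched search region are assumptions made for illustration.

```python
import numpy as np

def grow_window(binary, seed, image_shape, step=1, max_iter=20):
    """Grow a window around a seed point: while any segmented pixel
    touches a window border, enlarge the window in that direction;
    stop when all borders are empty or the window leaves the image."""
    y, x = seed
    top, left, bottom, right = y - 1, x - 1, y + 1, x + 1
    for _ in range(max_iter):
        win = binary[max(top, 0):bottom + 1, max(left, 0):right + 1]
        # Q = (left, top, right, bottom) border occupancy flags
        Q = (win[:, 0].any(), win[0, :].any(),
             win[:, -1].any(), win[-1, :].any())
        if not any(Q):
            break                    # object fully inside the window
        left -= step * Q[0]
        top -= step * Q[1]
        right += step * Q[2]
        bottom += step * Q[3]
        if (top < 0 or left < 0 or
                bottom >= image_shape[0] or right >= image_shape[1]):
            break                    # window left the search region
    return top, left, bottom, right
```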
Feature Extraction
In our problem, features are extracted to classify hemorrhages and non-hemorrhages. Shape and geometric features are extracted using connected component analysis [34]. These features are effective in separating hemorrhages from blood vessels. Eleven shape features (area, major axis length, minor axis length, eccentricity, orientation, convex area, filled area, equivalent diameter of the circle, solidity, perimeter, and mean intensity) and 20 texture features (autocorrelation, contrast, correlation, cluster prominence, cluster shade, dissimilarity, energy, entropy, homogeneity, maximum probability, sum of squares, sum average, sum variance, sum entropy, difference variance, difference entropy, information measure of correlation 1, information measure of correlation 2, inverse difference normalized, and inverse difference moment normalized) are extracted.
Furthermore, CIE LAB and HSV color space features are used in this research. The CIE LAB space isolates the color information into lightness (L*) and color (a* and b*) channels, while HSV retains the information as hue, saturation, and value [35,36]. Apart from the features mentioned, four hand-crafted features are used. The SWAT can segment the hemorrhages completely; however, some other structures, such as blood vessels and dark shades, cannot be restricted within the windows, because the window stops when its vertices go beyond the search region. Hence, whether the contour of the object is open or closed is taken as a feature. Similarly, the blood or liquid at the vitreous humour spreads in each direction, which yields a hemorrhage shape with a regular contour and fewer corners compared with other segmented objects; therefore, the distance of the object's corners from its center is taken as a feature. Similarly, the sum of squares of the gray levels at the object's contour about their mean is used as a feature. The macula of the retina also has the same intensity profile as a hemorrhage, but the edges of the macula are blurrier than those of hemorrhages; to distinguish hemorrhage from the macula, one feature based on the Laplacian of Gaussian is extracted. Overall, forty-one (41) features are extracted for classification.
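The sketch below extracts a representative subset of these 41 features with scikit-image; it is illustrative only and omits the color-space and hand-crafted features. `gray` is assumed to be an 8-bit intensity window and `mask` the SWAT segmentation of the same window.

```python
import numpy as np
from skimage.measure import label, regionprops
from skimage.feature import graycomatrix, graycoprops

def shape_and_texture_features(mask, gray):
    """Return a few shape descriptors from connected-component analysis
    and a few GLCM texture descriptors for one segmented window."""
    props = regionprops(label(mask), intensity_image=gray)[0]
    shape = [props.area, props.major_axis_length, props.minor_axis_length,
             props.eccentricity, props.solidity, props.perimeter,
             props.mean_intensity]
    glcm = graycomatrix(gray, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    texture = [graycoprops(glcm, p)[0, 0]
               for p in ("contrast", "correlation", "energy", "homogeneity")]
    return np.array(shape + texture)
```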
Classification
In machine learning, the support vector machine (SVM) is a statistical learning algorithm for classification and regression problems. SVM maximizes the margin between the positive and negative classes by placing a hyperplane between them. A kernel function makes the SVM capable of learning adaptively from the features. The SVM is used to classify windows as hemorrhages or non-hemorrhages. A radial basis function (RBF) is used as the kernel to classify the two categories. The windows that belong to hemorrhages are labeled as the positive class; the rest of the windows form the negative class. The features are used to train the SVM, which is then tested for the classification of hemorrhages.
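A minimal scikit-learn sketch of this classification step is shown below; the hyperparameters C and gamma are left at their defaults because they are not reported above, and the feature scaling step is our assumption.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: (n_windows, 41) feature matrix; y: 1 = hemorrhage, 0 = non-hemorrhage.
clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", probability=True))
# clf.fit(X_train, y_train)
# y_pred = clf.predict(X_test)
```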
Experimental Setup
The proposed method was developed on a computer with a 3.40 GHz processor and 16 GB of RAM. Random subsets of images are used as the validation set during training. Segmentation results (region-based, pixel-by-pixel) are compared with the ground truths. Metrics such as precision, recall, and F1 score are used to evaluate segmentation methods. In addition, five evaluation parameters are used to analyze the performance of the preprocessing stage: mean squared error (MSE), peak signal-to-noise ratio (PSNR), information entropy (IE), contrast (C), and a combination of PSNR, IE, and C (denoted as S). The evaluation metrics are defined as follows:

A. Precision rate: $P = \mathrm{TP} / (\mathrm{TP} + \mathrm{FP})$
B. Recall rate: $R = \mathrm{TP} / (\mathrm{TP} + \mathrm{FN})$
C. F1 score: $F_1 = 2PR / (P + R)$
D. Mean squared error: $\mathrm{MSE} = \frac{1}{MN}\sum_{x,y}\big(I(x, y) - I'(x, y)\big)^2$
E. Peak signal to noise ratio: $\mathrm{PSNR} = 10 \log_{10}\big(255^2 / \mathrm{MSE}\big)$
F. Information entropy: $\mathrm{IE} = -\sum_i p_i \log_2 p_i$
G. Contrast (C)
H. Combination of PSNR, IE, and C (S)
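These metrics can be computed directly, as in the following sketch; the contrast measure and the combined score S are not implemented because their exact forms are not given above.

```python
import numpy as np

def precision_recall_f1(pred, truth):
    """Region-based pixel-by-pixel scores against a boolean ground truth."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return p, r, 2 * p * r / (p + r)

def psnr(original, enhanced, peak=255.0):
    mse = np.mean((original.astype(float) - enhanced.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def information_entropy(img):
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()
```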
Segmentation Result
The proposed segmentation algorithm (SWAT) is compared with five state-of-the-art segmentation techniques: spatial fuzzy clustering with level set (FCLS) [37], active contour mean separation (ACMS), active contour Chan-Vese (ACC-V) [38], k-means clustering (KMC) [39], and region growing (RG) [40]. All the methods are applied on the DIARETDB1 and DIARETDB0 datasets. The statistical results for the DIARETDB1 dataset are provided in Table 1 and depicted in Figure 6. The statistical results for the DIARETDB0 dataset are provided in Table 2, and a visual inspection can be seen in Figure 7. The proposed algorithm segments hemorrhages attached to blood vessels or blended with a black background. ACMS and FCLS perform well, with F1 scores of 67.45% and 78.23%, respectively; however, the proposed method performs better than the other methods in terms of precision rate, recall rate, and F1 score, achieving an F1 score of 83.85%.
Figure 7. Results of the different segmentation methods on the DIARETDB0 dataset.
Preprocessing Result
The proposed preprocessing technique is compared with other brightness enhancement methods, such as histogram equalization (HE) [41], adaptive gamma correction using weighting distribution (AGCWD) [28], brightness preserving dynamic fuzzy histogram equalization (BPDFHE) [42], and non-parametric modified histogram equalization (NMHE) [43]. The results of all algorithms, including the proposed gradient-based adaptive gamma correction method (GAGC), are provided in Table 3 and Figure 8. From Figure 8, we observe that HE introduces over-saturation in the smooth areas (because it distributes probability uniformly over all intensities), producing black-and-white smooth regions. Meanwhile, AGCWD performs better on the dark smooth regions; however, bright smooth regions are also enhanced: because the bright smooth regions contribute to the cumulative distribution function (CDF) of the image, over-saturation is introduced. The BPDFHE modifies the image histogram using fuzzy algorithms and does not produce over-saturation; however, dark shade cannot be brightened through this method. NMHE performs well for intensity variations and modifies histograms with a weighting factor computed from the local variance of pixels with the same intensity values; lower weights are assigned to smooth regions and higher weights to varying regions. The proposed preprocessing method (GAGC) does not suffer from over-saturation and performs well in dark regions, because the smooth areas (intensity peaks in the histogram) do not contribute much to the gradient of the image. Overall, the proposed preprocessing algorithm and AGCWD perform better than the other methods.
Discussion
This paper presents a three-stage technique to detect hemorrhages in retinal fundus images. The proposed method performs effectively on images obtained under different illumination conditions. Regardless of size and location, the proposed method effectively detects hemorrhages attached to blood vessels (see Figures 6 and 7). Separate estimations were performed for precision, recall, and F1 scores on both datasets. The precision, recall, and F1 scores of the proposed method were higher for the DIARETDB1 dataset than for DIARETDB0. For DIARETDB1, the precision, recall, and F1 score were estimated to be 83.97%, 83.74%, and 83.85%. For DIARETDB0, the results decreased to 70.51%, 74.08%, and 72.25%. A major reason for the decrease could be the quality of the images. The proposed method also offers a high level of preprocessing (contrast and enhancement), which gives the segmentation algorithm better visual and quantitative results. While the state-of-the-art segmentation techniques perform reasonably, their results are not fully satisfactory. For example, KMC, RG, ACC-V, ACMS, and FCLS produce F1 scores of 62.46, 63.53, 72.99, 67.45, and 78.23, respectively. Specifically, KMC and RG produced the worst performance, while ACMS and FCLS produced the best results among the competing methods. Overall, the proposed method performs better than all of these methods. Figures 9 and 10 depict the statistical comparison of all methods. The comparison of methods on the DIARETDB1 dataset (Figure 9) shows that KMC produces a good result for recall; however, its results for precision and F1 score are not satisfactory. The RG algorithm produces a good result for precision, but its results for recall and F1 score are not satisfactory. In addition, ACMS, ACC-V, and FCLS produce average results for all evaluation metrics. Overall, the proposed method produces good results for all evaluation metrics. On the DIARETDB0 dataset (Figure 10), all methods compete closely, especially on the precision metric. However, for recall and F1 score, the proposed method, KMC, ACC-V, and FCLS perform well. The good performance of the proposed method is due to the SWAT technique and the efficient preprocessing procedure. Figure 11 shows the plot of the curve between the true positive (TP) rate and the false positive (FP) rate for different probability thresholds. A higher area under the curve (AUC) value indicates a higher capability of the classifier. The behavior of the classifier changes significantly with the kernel function, so the selection of a suitable kernel is important; it was carried out by analyzing the ROC curves of the linear, radial basis function (RBF), and polynomial kernels. The AUC of the RBF kernel is the largest; therefore, we use it to classify hemorrhages. Visual comparison on cases with extreme difficulty further demonstrates that the proposed method produces accurate segmentation results. On the two datasets, when the proposed method was compared with the other competing methods, we found that it produces acceptable results despite some missing targets. We tried to test our research on a dataset collected from a local hospital; however, data availability and copyright restrictions were the major limitations of our experiment.
Conclusions
This paper presents an algorithm to detect and segment hemorrhages. The previous experiments performed by the research community suggest that two categories of hemorrhages are difficult to segment. The first category of hemorrhages is located at the retinal border blended with the black background. The second category of hemorrhages is attached to the blood vessels. The proposed algorithm segments both categories satisfactorily. The proposed algorithm preprocesses, segments, and classifies hemorrhages from retinal fundus images. Two well-known datasets (DIARETDB1 and DIARETDB0) are used in this research. State-of-the-art methods are used for benchmarking, while quality evaluation criteria are used to report the results. The SWAT algorithm (proposed method) segments hemorrhages efficiently and accurately. Our results suggest that the proposed method performs better than other methods in terms of quantitative and visual inspection (see Tables 1-3 and Figures 6-8). The preprocessing and enhancement techniques are used in the detection phase, while the SWAT algorithm isolates the hemorrhages from other pathological features and non-hemorrhage regions.
Finally, since the SWAT algorithm is adaptive and has better segmentation characteristics, it could be helpful in other ophthalmological conditions as well. The proposed model is expected to be useful in clinical medicine, such as surgery navigation and diagnosis. It can be promising to extend the proposed method by using information from the original image as a guide for preprocessing and segmentation. In the future, we plan to extend and combine the proposed method with a deep learning framework. Conflicts of Interest: All authors in this paper have no potential conflict of interest. | 2021-09-28T01:09:26.422Z | 2021-07-10T00:00:00.000 | {
"year": 2021,
"sha1": "5e95b34850dd1e67a8e7c9bfbdace6a2fad016c8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/11/14/6391/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "9c31c7c00db57dda6bd4526fa202864ef7e8e17a",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
13895122 | pes2o/s2orc | v3-fos-license | A note on 4-rank densities
For certain real quadratic number fields, we prove density results concerning 4-ranks of tame kernels. We also discuss a relationship between 4-ranks of tame kernels and 4-class ranks of narrow ideal class groups. Additionally, we give a product formula for a local Hilbert symbol.
Introduction
Let $F$ be a real quadratic number field and $O_F$ its ring of integers. In [4], the authors gave an algorithm for computing the 4-rank of the tame kernel $K_2(O_F)$. The idea of the algorithm is to consider matrices with Hilbert symbols as entries and compute matrix ranks over $F_2$. Recently, the author used these matrices to obtain "density results" concerning the 4-rank of tame kernels; see [6], [7].
Matrices
Hurrelbrink and Kolster [4] generalize Qin's approach in [8], [9] and obtain 4-rank results by computing $F_2$-ranks of certain matrices of local Hilbert symbols. Specifically, let $F = Q(\sqrt{d})$, $d > 1$ and squarefree. Let $p_1, p_2, \ldots, p_t$ denote the odd primes dividing $d$. Recall that 2 is a norm from $F$ if and only if all $p_i$ are $\equiv \pm 1 \bmod 8$. If so, then $d$ is a norm from $Q(\sqrt{2})$, thus $d = u^2 - 2w^2$ for $u, w \in Z$. Now consider the matrix $M_{F/Q}$ of local Hilbert symbols. If 2 is not a norm from $F$, set $v = 2$; otherwise, set $v = u + w$. Replacing the 1's by 0's and the $-1$'s by 1's, we calculate the matrix rank over $F_2$. Recall that our case is $Q(\sqrt{p_1 p_2 p_3})$ for primes $p_1 \equiv p_2 \equiv p_3 \equiv 1 \bmod 8$. In this case $a = a'$ and we may delete the last row of $M_{F/Q}$ without changing its rank (see the discussions preceding Proposition 5.13 and Lemma 5.14 in [4]). Also note that $v$ is a $p_1$-adic unit. Let us now prove Theorem 1.1.
Proof. The idea in [6] and [7] is to first consider an appropriate normal extension $N$ of $Q$ and then relate the splitting of the primes $p_i$ in $N$ to their representation by certain quadratic forms. The next step is classifying 4-rank values in terms of values of the symbols $(-d, v)_2$, $(-d, v)_{p_i}$. The values of these symbols are then characterized in terms of the $p_i$ satisfying the alluded-to quadratic forms. We then associate Artin symbols to the primes $p_i$ and apply the Chebotarev density theorem. In what follows, we classify the 4-rank values in terms of the symbols $(-d, v)_2$, $(-d, v)_{p_i}$ and give in parentheses the relevant densities in $X$ obtained by using the above machinery. Let us consider the following four cases (see Table III in [9]). Case 1: Suppose $p_2$ … The $(t - 1) \times t$ matrix $\tilde{R}_{F/Q}$ can be extended, without changing its rank, to a $t \times t$ matrix $R_{F/Q}$ by adding the last row $\big((-d, p_t)_{p_1}, (-d, p_t)_{p_2}, \ldots, (-d, p_t)_{p_t}\big)$.
$R_{F/Q}$ is known as the Rédei matrix of the field $F' := Q(\sqrt{d'})$ (see [5] or [10]). Its rank determines the 4-rank of the narrow ideal class group $C^+_{F'}$ of the field $F'$ via 4-rank $C^+_{F'} = t - 1 - \operatorname{rank}(R_{F/Q})$. Combining this information with Lemma 2.1, we have that if $(-d, u + w)_2 = -1$, then 4-rank $K_2(O_F)$ = 4-rank $C^+_{F'}$. Using Rédei matrices, Gerth [3] derived an effective algorithm for computing densities of 4-class ranks of narrow ideal class groups of quadratic number fields. It would be interesting to see whether density results concerning 4-class ranks of narrow ideal class groups (coupled with the product formula in the appendix) can be used to obtain asymptotic formulas for 4-rank densities of tame kernels.
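Since both the Hilbert-symbol matrix and the Rédei matrix are evaluated by their ranks over $F_2$, a short Gaussian elimination mod 2 suffices computationally. The routine below is a standard implementation (the function name is ours):

```python
import numpy as np

def rank_gf2(M):
    """Rank of a 0/1 matrix over F_2 by Gaussian elimination, as used to
    evaluate both the Hilbert-symbol matrix and the Redei matrix R_{F/Q}."""
    A = np.array(M, dtype=np.uint8) % 2
    rank, rows, cols = 0, A.shape[0], A.shape[1]
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if A[r, c]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]   # swap pivot row into place
        for r in range(rows):
            if r != rank and A[r, c]:
                A[r] ^= A[rank]               # eliminate column c
        rank += 1
    return rank

# 4-rank of the narrow class group: t - 1 - rank_gf2(R)
```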
Appendix: A product formula
Most of the local Hilbert symbols in the matrix $M_{F/Q}$ are calculated directly. Difficulties arise when $d$ is a norm from $Q(\sqrt{2})$. In this case, we need to calculate the Hilbert symbols $(-d, u + w)_2$ and $(-d, u + w)_{p_k}$. The local symbol at 2 is calculated using Lemmas 5.3 and 5.4 in [4]. In this appendix we provide a product formula which allows one to calculate $(-d, u + w)_{p_k}$ using two factors of $d$ at a time.

Let $d$ be a squarefree integer and assume that all odd prime divisors of $d$ are $\equiv \pm 1 \bmod 8$. Then $d$ is a norm from $F = Q(\sqrt{2})$ and we have the representation $d = u^2 - 2w^2$ with $u > 0$. Let $l$ be any odd prime dividing $d$. Note that $l$ does not divide $u + w$. Now let $r$ be an integer not divisible by $l$ which can be represented as a norm from $Q(\sqrt{2})$. Denote by $\pi_r = s + t\sqrt{2}$ an element such that $N_{Q(\sqrt{2})/Q}(\pi_r) = r$ with $s, t > 0$. Now let $u_r$ and $w_r$ be such that … By the choice of $x, y, s, t$, we have $u_r > 0$. Note that $Z[\sqrt{2}]/\mathfrak{l} \cong Z/lZ$; this allows us to work mod $l$ as opposed to mod $\mathfrak{l}$. From the above, $u_r + w_r = 2xs + 3tx + 3sy + 4yt$. Modulo $l$, we have … We may now reduce to the following cases of $d = rl$: $d = -l$, $d = 2l$, and $d = pl$, i.e., calculate the symbols $\left(\frac{u_{-1} + w_{-1}}{l}\right)$, $\left(\frac{u_2 + w_2}{l}\right)$, and $\left(\frac{u_p + w_p}{l}\right)$. The first two symbols can be calculated using the following two elementary lemmas [2]. Note that $\pi_l$ is well defined, and so Lemma 3.7 is applicable.
Let d be a squarefree integer and assume that all odd prime divisors of d are ≡ ±1 mod 8. Then d is a norm from F = Q( √ 2) and we have the representation d = u 2 − 2w 2 with u > 0. Let l be any odd prime dividing d. Note that l does not divide u + w and so = y l . Now let r be an integer not divisible by l which can be represented as a norm from Q( √ 2). Denote by π r = s + t √ 2 an element such that N Q( √ 2)/Q (π r ) = r with s, t > 0. Now let u r and w r be such that . By the choice of x, y, s, t, we have u r > 0. Note that N Q( ]/l ∼ = Z/lZ. This allows us to work mod l as opposed to mod l. From the above, u r + w r = 2xs + 3tx + 3sy + 4yt. Modulo l, we have We may now reduce to the following d = rl: d = −l, d = 2l, and d = pl, i.e. calculate the symbols u −1 +w −1 l , u 2 +w 2 l , and up+wp l . The first two symbols can be calculated using the following two elementary lemmas. [2]). Note that π l is well defined and so Lemma 3.7 is applicable. | 2014-10-01T00:00:00.000Z | 2004-09-01T00:00:00.000 | {
"year": 2007,
"sha1": "7066e2cf7f56c8aca337113eb4f2fc1aac7db3a2",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/math/0703325",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "905a228c48da2f5ac319682535527a0df6226947",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
13071603 | pes2o/s2orc | v3-fos-license | Structural Insights into Apoptotic DNA Degradation by CED-3 Protease Suppressor-6 (CPS-6) from Caenorhabditis elegans*
Background: CPS-6 (EndoG) degrades chromosomal DNA during apoptosis. Results: The crystal structure of C. elegans CPS-6 was determined, and the DNA binding and cleavage mechanisms by CPS-6 were revealed. Conclusion: The DNase activity of CPS-6 is positively correlated with its pro-cell death activity. Significance: This study improves our general understanding of DNA hydrolysis by ββα-metal finger nucleases and the process of apoptotic DNA fragmentation. Endonuclease G (EndoG) is a mitochondrial protein that traverses to the nucleus and participates in chromosomal DNA degradation during apoptosis in yeast, worms, flies, and mammals. However, it remains unclear how EndoG binds and digests DNA. Here we show that the Caenorhabditis elegans CPS-6, a homolog of EndoG, is a homodimeric Mg2+-dependent nuclease, binding preferentially to G-tract DNA in the optimum low salt buffer at pH 7. The crystal structure of CPS-6 was determined at 1.8 Å resolution, revealing a mixed αβ topology with the two ββα-metal finger nuclease motifs located distantly at the two sides of the dimeric enzyme. A structural model of the CPS-6-DNA complex suggested a positively charged DNA-binding groove near the Mg2+-bound active site. Mutations of four aromatic and basic residues: Phe122, Arg146, Arg156, and Phe166, in the protein-DNA interface significantly reduced the DNA binding and cleavage activity of CPS-6, confirming that these residues are critical for CPS-6-DNA interactions. In vivo transformation rescue experiments further showed that the reduced DNase activity of CPS-6 mutants was positively correlated with its diminished cell killing activity in C. elegans. Taken together, these biochemical, structural, mutagenesis, and in vivo data reveal a molecular basis of how CPS-6 binds and hydrolyzes DNA to promote cell death.
Several nucleases participate in apoptosis in C. elegans, including NUC-1, DCR-1, and cell death-related nucleases (CRN-1 to CRN-7) (14-17). CPS-6 interacts not only with CRN-1 and WAH-1 but also with CRN-3, CRN-4, CRN-5, and CYP-13, and these proteins, likely in the form of a multi-nuclease complex, work together to promote apoptotic DNA fragmentation (14). Inactivation of cps-6 (the gene encoding CPS-6) resulted in the accumulation of TUNEL-positive cells and delayed appearance of embryonic cell corpses during development, suggesting that CPS-6 is required for normal apoptotic DNA degradation (14). Knock-out of the EndoG gene in mice did not cause significant phenotypes either in embryogenesis or in apoptosis, possibly because of the presence of redundant apoptotic nucleases and various apoptotic DNA degradation pathways during apoptosis (18,19).
Investigation of the cellular functions of EndoG suggests that this nuclease not only has a pro-death role in apoptosis but also plays a pro-life role in mitochondrial DNA replication and recombination (20,21). The conflicting life versus death role of EndoG was clarified in budding yeast in which EndoG functions as a potent cell death inducer only under high respiration conditions in a caspase-and AIF-independent mechanism, whereas EndoG promotes cell viability under high cell division conditions (22). Thus yeast EndoG can either act as a crucial but uncharacterized molecule in mitochondria for cell proliferation or be switched to a death executioner, digesting chromosomal DNA in the nucleus during apoptosis in a caspase-independent pathway.
Mammalian, Drosophila, C. elegans, and yeast CPS-6/EndoG share sequence homology with Serratia nuclease, which contains a conserved DRGH sequence in the ββα-metal finger motif that has been identified in a number of other bacterial nonspecific nucleases (Fig. 1) (23,24). Yeast and mammalian EndoGs are Mg2+-dependent homodimeric proteins with a preference for single-stranded RNA and DNA substrates (3). Drosophila EndoG particularly has a nuclear inhibitor EndoGI that likely protects the cell against low levels of EndoG that leaks out from mitochondria (25). The crystal structure of Drosophila EndoG in complex with its inhibitor EndoGI has been reported, revealing how the monomeric EndoGI inhibits the activity of the dimeric EndoG by blocking its active site and oligonucleotide-binding groove (26).
Although the pro-death role of CPS-6/EndoG has been studied most extensively in C. elegans, the biochemical and structural information for CPS-6 remains unknown. To understand the intriguing life versus death role of CPS-6, we employed a range of biochemical assays in combination with x-ray crystallography to determine the crystal structure of CPS-6 bound with a Mg2+ cofactor in the active site at a high resolution of 1.8 Å. This structural information led to the identification of the critical DNA binding residues, which were verified through site-directed mutagenesis and several different in vitro assays and provided invaluable insights for in vivo functional assays. In particular, reduced DNase activities of the CPS-6 mutants positively correlated with their decreased pro-apoptotic activities in C. elegans. This study thus provides a molecular basis for the DNA binding and hydrolysis by CPS-6 during apoptosis. FIGURE 1. Sequence alignment of CPS-6/EndoG. Sequences of CPS-6/EndoG from C. elegans, Homo sapiens, Bos taurus, D. melanogaster, and Saccharomyces cerevisiae are aligned and listed. CPS-6 shares high sequence identities of 50, 49, 55, and 39% with human, bovine, fruit fly, and yeast EndoG, respectively. The amino acid residues found in H. sapiens, B. taurus, and D. melanogaster important for metal ion coordination and DNA binding are colored in pink and brown, respectively (22,23,26,30). The ββα-metal finger motif (β4-β5-α4) is marked in orange, and the conserved 145DRGH148 sequence is located in β4. The residues that are likely involved in DNA binding and that were subjected to site-directed mutagenesis in this study are colored in yellow. The secondary structures derived from the crystal structure of CPS-6 are depicted as green cylinders for α-helices and blue arrows for β-strands. MLS, mitochondrial localization sequence.
Single colonies of the Escherichia coli strain M15 transformed with pQE30-CPS-6-(63-305) or pQE30-WAH-1-(214-700) plasmids were inoculated into 10 ml of LB medium supplemented with 100 μg/ml ampicillin and grown at 37°C overnight. The overnight cultures were grown to an A600 of 0.6 and then induced with 0.8 mM isopropyl β-D-thiogalactopyranoside at 18°C for 20 h. The harvested cells were disrupted by a microfluidizer in a buffer containing 50 mM HEPES (pH 7.4), 300 mM NaCl, 10 mM imidazole, and 5% glycerol. The crude cell extract was passed through a TALON metal affinity resin column (BD Biosciences) followed by a gel filtration chromatography column (Superdex 200; GE Healthcare) in 50 mM Tris-HCl (pH 7.4), 500 mM NaCl, and 2.5 mM DTT. Purified protein samples were concentrated to suitable concentrations and stored at −80°C until use.
Filter Binding Assay-Single-stranded DNA substrates for the filter binding assay were 5′-end labeled with [γ-32P]ATP by T4 polynucleotide kinase. The 32P-labeled DNA (14 fmol) was incubated with serial dilutions of protein samples in a binding buffer containing 10 mM HEPES, pH 7.0, 100 mM NaCl, 2 mM DTT, and 2 mM EDTA for 30 min at room temperature. The reaction mixtures were then passed through the filter binding assay apparatus (Bio-Dot SF microfiltration apparatus; Bio-Rad). After extensive washing, the CPS-6-DNA complex-bound nitrocellulose membrane and the free DNA-bound nylon membrane were air-dried and exposed to a phosphorimaging plate. The intensities of the CPS-6-DNA complex and free DNA were quantified by the program AlphaImager IS-2200 (Alpha Innotech). The binding percentages were calculated and normalized. The apparent Kd values were estimated by one-site binding curve fitting using GraphPad Prism 4.
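A minimal reimplementation of this one-site binding fit, using SciPy in place of GraphPad Prism, is sketched below; the function names and initial guesses are our assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site_binding(conc, bmax, kd):
    """One-site specific binding: fraction of DNA bound at protein
    concentration `conc` (the model used in GraphPad Prism)."""
    return bmax * conc / (kd + conc)

def fit_kd(protein_conc, fraction_bound):
    """Fit normalized binding percentages to the one-site model and
    return the apparent Kd (same units as `protein_conc`)."""
    protein_conc = np.asarray(protein_conc, dtype=float)
    fraction_bound = np.asarray(fraction_bound, dtype=float)
    popt, _ = curve_fit(one_site_binding, protein_conc, fraction_bound,
                        p0=[fraction_bound.max(), np.median(protein_conc)])
    return popt[1]
```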
Immunoprecipitation-The His-tagged CPS-6 (22-308) H148A mutant (1 μg) was mixed with 1 μg of His-tagged WAH-1 (214-700) and incubated for 2 h at 4°C in buffer containing 50 mM Tris-HCl (pH 7.0), 300 mM NaCl, 10% glycerol, and 5 mM DTT. This reaction was followed by the addition of anti-CPS-6 (polyclonal rabbit; 1:100 dilution) and continued agitation at 4°C for 2 h. Protein G beads (Amersham Biosciences) were premixed with 1 μg of bovine serum albumin, washed three times with reaction buffer, and then added to the reaction mixture, followed by incubation for 2 h at 4°C. The protein beads were then centrifuged, and the supernatant was removed. After washing three times with reaction buffer, the beads were loaded onto a 10% SDS-PAGE, followed by immunoblotting detection using anti-His antibodies (monoclonal mouse).
C. elegans strains were maintained using standard procedures (27). The CPS-6 expression constructs (at 20 μg/ml) were injected into cps-6(sm116) animals as previously described (28), using the pTG96 plasmid (at 20 μg/ml) as a coinjection marker, which directs GFP expression in all cells in most developmental stages (29). The numbers of cell corpses in living GFP-positive transgenic embryos were determined using Nomarski optics as described previously (11).
Crystallization and Crystal Structure Determination-Crystals of CPS-6 were grown by the hanging drop vapor diffusion method at 4°C. The crystallization drop was made by mixing 0.5 μl of protein solution and 0.5 μl of reservoir solution. The CPS-6 H148A mutant (10 mg/ml in 50 mM Tris-HCl, pH 7.4, 500 mM NaCl, and 2.5 mM DTT) was crystallized using a reservoir solution containing 6% Tacsimate, 0.1 M MES (pH 6.0), and 25% PEG 4000. The diffraction data were collected at the BL44XU Beamline at SPring-8 (Japan) and were processed and scaled by HKL2000. The crystal structure was solved by the molecular replacement method using the crystal structure of Drosophila melanogaster EndoG (Protein Data Bank code 3ISM, chains A and B) as the search model by the program MOLREP of CCP4. The models were modified by Coot and refined by Phenix. The diffraction and refinement statistics are listed in Table 1. The structural coordinates and diffraction structure factors have been deposited in the RCSB Protein Data Bank with the Protein Data Bank code 3S5B for the CPS-6 H148A mutant.
RESULTS
Recombinant CPS-6 Is a Functional Homodimeric Endonuclease-To investigate the structure and biochemical properties of CPS-6, the recombinant wild-type His6-tagged CPS-6 (residues 63-305) was expressed in E. coli. The N-terminal residues (1-62) and the C-terminal residues (306-308) were not included for expression. However, the expression level of the wild-type CPS-6 was low because of its toxic DNase activity in E. coli. Therefore, the putative general base residue His148 was mutated to generate a CPS-6 mutant (H148A), which can be expressed in a large amount. The N-terminal sequence (residues 1-62) was removed because it was unstable and degraded with time. The wild-type and mutated CPS-6 (residues 63-305) proteins were expressed and purified by chromatographic methods using a Ni2+-nitrilotriacetic acid affinity column, followed by a Superdex 200 gel filtration column. They were purified to homogeneity as confirmed by SDS-PAGE (Fig. 2A). The recombinant CPS-6 appeared as a dimeric protein as determined by size exclusion chromatography on a Superdex 200 column (Fig. 2B), with an apparent native molecular mass of about 48 kDa (calculated molecular mass of CPS-6 monomer, 28.5 kDa).
The nuclease activity of purified wild-type CPS-6 was analyzed by incubation of CPS-6 with a linear 1.6-kb DNA fragment in a buffer containing 25 mM NaCl and 5 mM MgCl2 for 1 h. A relatively rapid degradation of DNA was observed with wild-type CPS-6, but not with the H148A mutant, as a function of increasing protein concentration (Fig. 2C). Furthermore, CPS-6 (0-1 μM) was incubated with DNA in the presence and absence of EDTA. CPS-6 had significantly reduced DNase activity in the presence of EDTA, suggesting that divalent metal ions are required for the enzyme activity of CPS-6 (supplemental Fig. S1). The optimal conditions for the nuclease activity of CPS-6 were further characterized over a wide range of pH values and salt concentrations. CPS-6 digested plasmid DNA most efficiently at neutral pH and at low salt concentrations (25-100 mM NaCl) in the presence of 2 mM MgCl2 (Fig. 2, D and E). A relatively more efficient nuclease activity for CPS-6 was found at pH values ranging from 7 to 10 rather than from 4 to 6 in the presence of 100 mM NaCl (Fig. 2D). Given the pKa value of ~6 for His148, acidic conditions might have caused its protonation and hence the loss of the general base function. Moreover, high salt buffers might interfere with the interactions between the positively charged CPS-6 and negatively charged DNA.
The recombinant CPS-6(H148A) mutant was further tested for its ability to interact with WAH-1 by immunoprecipitation. Anti-CPS-6 antibody was used to pull down the CPS-6/WAH-1 complex, which was detected by Western blotting using anti-His antibodies. This experiment showed that the purified recombinant His-tagged CPS-6 interacted directly with the purified His-tagged WAH-1 (Fig. 2F). Taken together, these results show that the recombinant CPS-6 was a fully functional protein, capable not only of DNA digestion but also of interaction with WAH-1.
CPS-6 Digests RNA and DNA with Preference for Binding G-tract DNA-To determine the substrate preference of CPS-6, pET28 plasmid DNA (25 ng) was used as the substrate in a concentration course experiment under the reaction conditions of 100 mM NaCl and 2 mM MgCl2 at pH 7. CPS-6 cleaved plasmid DNA in a concentration-dependent manner, steadily converting the supercoiled DNA into the open circular and linear forms, when the concentration of CPS-6 was increased from 0.03 to 2 μM (Fig. 3A). A significant amount of DNA digestion, visible as a broad smear, was observed when the protein concentration was increased to 2 μM (Fig. 3A). This result confirms that CPS-6 has endonuclease activity.
Linear ssDNA and dsDNA substrates (48-mers) were next used for digestion experiments with wild-type CPS-6 (residues 63-305). We observed that ssDNA was cleaved a bit more efficiently than dsDNA (Fig. 3B). A comparison between 11-mer ssDNA and ssRNA showed that wild-type CPS-6 (residues 63-305) digested 11-mer ssRNA more efficiently than 11-mer ssDNA (Fig. 3C). These results show that CPS-6 digests both DNA and RNA and prefers single-stranded nucleic acid substrates slightly over double-stranded ones, in agreement with earlier findings with yeast and mammalian EndoG (3).
To investigate the sequence preference of CPS-6, four 5′-end 32P-labeled 14-nucleotide single-stranded DNAs containing zero, two, four, and six consecutive G nucleotides were used for protein-DNA binding experiments (Fig. 3D). The inactive CPS-6 mutant H148A was used for the protein-DNA binding assays to avoid DNA digestion by CPS-6. The dissociation constants measured between CPS-6(H148A) and DNA increased hierarchically from (dG)6, (dG)4, and (dG)2 to (dG)0 DNA. Hence, this result shows that CPS-6 prefers to bind DNA with G-tract sequences.
Crystal Structure of Dimeric CPS-6 Reveals Basic DNA-binding Groove-To determine the crystal structure of CPS-6, the H148A mutant was used for crystallization screening experiments because the inactive H148A mutant can be expressed and purified in a larger quantity as compared with wild-type CPS-6. The H148A CPS-6 mutant was crystallized by the hanging drop vapor diffusion method. X-ray diffraction data up to a resolution of 1.8 Å were collected. The CPS-6 H148A mutant was crystallized in the space group P21 with one dimer per asymmetric unit. The crystal structure of CPS-6 (residues 63-305, H148A mutant) was determined by molecular replacement using the D. melanogaster EndoG structure (Protein Data Bank entry 3ISM) as the search model. The crystal structure was refined to an R factor of 16.6% for 41,201 reflections and an Rfree of 21.0% for 1,876 reflections from 37.6 to 1.8 Å (Table 1).
CPS-6 has a mixed αβ topology similar to that of Serratia nuclease (31) and EndA (32), with a central six-stranded β-sheet packed against the rest of the α-helices and β-strands (Fig. 4). The two ββα-metal finger motifs (shown in cyan), consisting of the conserved 145DRGH148 sequence in one of the β-strands, are located distantly on the two sides of the homodimer. A long β-strand (β8) from each protomer forms an antiparallel β-sheet at the dimeric interface. The two protomers are well packed, with a sufficient buried interface of 1392.2 Å2 to stabilize the dimeric structure.
Although magnesium ions were not present in the crystallization buffer, a Mg2+ was bound to Asn180 in the ββα-metal finger motif in the crystal structure of CPS-6. The omit map clearly shows that Mg2+ coordinated to Asn180 and five water molecules in an octahedral geometry (Fig. 4B). The conformation of the active site of CPS-6 is similar to that of other nucleases containing a ββα-metal finger motif, in which His148 in the conserved 145DRGH148 sequence functions as a general base to activate a water molecule, and the Mg2+-bound water molecule likely functions as a general acid to provide a proton for the 3′-phosphate leaving group (24). Moreover, the amide side chain of the metal-binding residue Asn180 formed a hydrogen bond (2.69 Å) with the carboxylate side chain of Asp145, showing that Asp145 in the conserved 145DRGH148 sequence is important for stabilizing the metal ion-binding residue in the CPS-6 active site. On the other hand, Arg146 within the conserved 145DRGH148 sequence is likely involved in DNA binding (see the mutagenesis results in the next section).
The electrostatic potential mapping onto the CPS-6 structure further shows a basic groove extending on the molecule next to the active site (Fig. 4C). It has been shown that endonucleases containing a ββα-metal finger motif, such as I-PpoI, Vvn, and ColE7, bind to DNA in a similar mode, i.e. the relative orientations between the ββα-metal finger motif and DNA are comparable (33). A model of the CPS-6-DNA complex was thus constructed by superimposition of the ββα-metal motif of CPS-6 (residues 144-155 and 170-182) with that of Vvn (residues 77-82 and 118-129) in the Vvn-DNA complex (Protein Data Bank code 1OUP). After removal of Vvn, the DNA was well fitted onto the surface of CPS-6 with one phosphate backbone bound to the basic groove around the active site (Fig. 4C). We therefore suggest that the phosphate backbone of DNA is likely bound and digested at this basic groove in CPS-6.
Critical DNA-binding Residues in CPS-6-The CPS-6-DNA complex model suggests that DNA binds to a site located in proximity to the active site in the dimeric CPS-6 (Fig. 5A). A closer look at the model revealed five amino acid residues located near DNA: the two aromatic residues Phe122 and Phe166, and the three basic residues Arg117, Arg146, and Arg156 (Fig. 5B). To investigate the influence of these residues on the nuclease activity of CPS-6, site-directed mutagenesis was employed to selectively mutate those amino acid residues located within the interfacial region of the CPS-6-DNA complex model. We found that the nuclease activity of all mutant proteins except R117A was reduced significantly, with F122A, R156A, and F166A digesting the least amount of DNA and R146A digesting less DNA as compared with wild-type CPS-6 (Fig. 5D).
To determine whether these residues were involved in DNA binding, the double mutants H148A/R117A, H148A/F122A, H148A/R146A, H148A/R156A, and H148A/F166A were constructed and purified to obtain inactive mutants that cannot digest DNA substrates (supplemental Fig. S2). Filter binding assays were employed to determine the dissociation constants between CPS-6 double mutants and single-stranded 48-nt DNA substrates. The dissociation constants had the same trend as those of the activity assays (Fig. 5, D and E). Residues at CPS-6 Catalytic Site or Involved in DNA Binding Are Important for Apoptosis Promoting Activity in C. elegans-To examine whether critical residues identified by structural predictions and in vitro nuclease assays are important for cps-6 cell killing activity in vivo, full-length CPS-6 or CPS-6 mutants were expressed in the cps-6-deficient strain (sm116) under the control of the dpy-30 gene promoter (P dpy-30 CPS-6), which directs ubiquitous gene expression in C. elegans (34). Compared with wild-type animals, the cps-6(sm116) mutant displayed a delay of cell death defect during embryonic development (4): fewer cell corpses were observed at early embryonic stages (comma and 1.5-fold stages), and more cell corpses were seen at later embryonic stages (2-, 2.5-, and 3-fold stages) (Fig. 6A). Expression of wild-type CPS-6 fully rescued the delay of cell death defect of the cps-6(sm116) mutant (Fig. 6A). In contrast, expression of CPS-6 harboring a catalytic site mutation (H148A) failed to rescue the cell death defect of cps-6(sm116) animals (Fig. 6B). Interestingly, expression of either CPS-6(R146A) or CPS-6(F166A), both of which showed significantly reduced nuclease activity (Fig. 5D), partially rescued the cps-6(sm116) mutant (Fig. 6, C and D). These results indicate that the DNA binding residues and the catalytic site of CPS-6 are important for its cell killing activity in vivo.
DISCUSSION
Catalytic Mechanism of CPS-6 in DNA Hydrolysis-In this study, we used CPS-6 as a model system for biochemical and structural analysis. CPS-6 shares a high sequence identity of 50, 49, 55, and 39% with human, bovine, fruit fly, and yeast EndoG, respectively (Fig. 1). We determined the high resolution crystal structure of CPS-6 and revealed that the highly conserved 145DRGH148 sequence is located within the ββα-metal finger motif. The geometry of the active site of CPS-6 is similar to that observed in other ββα-metal finger nucleases, all of which display a divalent metal ion bound to one or two amino acid residues and four or five water molecules, including I-PpoI (35), Hpy99I (36), T4 EndoVII (37), I-HmuI (38), Serratia nuclease (31), EndA (32), NucA (39), and Vvn (40).
The comparison of the active site between CPS-6 and Vvn shows clearly that several catalytic residues are located at similar positions, including the general base residue His148 (mutated to Ala) in CPS-6 and His80 in Vvn, and the metal-binding residue Asn180 in CPS-6 and Asn127 in Vvn (Fig. 7A). The mutation of His148 to Ala abolished the enzyme activity of CPS-6 (Fig. 2C), supporting the role of His148 as the general base residue. The general base residue His80 in Vvn is polarized by a hydrogen bond to the carbonyl group of Glu113, where similarly the general base residue His148 in CPS-6 can be polarized by a hydrogen bond to the carbonyl group of Thr165. The metal ion-bound residue Asn127 in Vvn is fixed by a hydrogen bond network to the side chains of Glu77 and Arg72, where similarly, Asn180 in CPS-6 makes a hydrogen bond network to Asp145 and Arg181. In summary, these two nonspecific ββα-metal finger nucleases share similar active site architectures.
A parallel hydrolysis mechanism analogous to that of Vvn is therefore proposed for CPS-6, with His148 functioning as a general base to activate a water molecule for the in-line attack on the scissile phosphate, and a Mg2+-bound water molecule functioning as a general acid to provide a proton to the 3′-oxygen leaving group (Fig. 7B). Structural modeling of the CPS-6-DNA complex further reveals a basic DNA-binding groove constituted by the basic residues Arg146 and Arg156 and the nearby aromatic residues Phe122 and Phe166 (Fig. 5B). Site-directed mutagenesis confirmed that these basic and hydrophobic residues (Arg146, Arg156, Phe122, and Phe166) are critical for the DNA binding and cleavage activity of CPS-6 (Fig. 5, C and D). Arginine residues are frequently located within protein-DNA interfaces and preferentially make hydrogen bonds to guanine (41). On the other hand, phenylalanine side chains often stack with DNA bases, with a preference for thymine, adenine, and cytosine, but not guanine (41,42). It is speculated that the preference of EndoG for the cleavage of poly(dG) tracts in DNA is linked in part to these basic and phenylalanine residues, particularly the preference of arginine residues for making hydrogen bonds with guanine bases.
Given the obvious impacts of these CPS-6 mutations on the nuclease activity and DNA binding affinity of CPS-6 (Fig. 5), it was of interest in this study to address the functional role of those mutants in vivo. The catalytic site CPS-6 mutant (H148A), which lacks DNase activity, displayed a delay of cell death defect identical to that of the cps-6-deficient animal, cps-6(sm116) (Fig. 6B). In contrast, expression of the CPS-6 DNA-binding site mutants (R146A and F166A), which still have residual DNase activity, can partially rescue the cell death defect of the cps-6(sm116) mutant (Fig. 6, C and D). Therefore, the reduced nuclease activity of CPS-6 is positively correlated with its diminished cell killing activity in C. elegans.
Different Dimeric Interfaces of CPS-6-The crystal structure of CPS-6 reveals an overall mixed αβ topology similar to that of Serratia nuclease (31), with a ββα-metal finger motif situated on one face of the central β-sheet. Apart from Serratia nuclease, a number of other ββα-metal finger nucleases are also homodimeric enzymes, including I-PpoI, Hpy99I, and T4 Endo VII. I-PpoI is a homing endonuclease that generates staggered products with four-nucleotide 3′ overhangs (43), whereas Hpy99I is a restriction endonuclease that generates staggered products with five-nucleotide 3′ overhangs (36). The reported crystal structures of I-PpoI-DNA and Hpy99I-DNA complexes show that the two ββα-metal finger motifs are oriented in a similar way, with close distances of 15.1 and 20.4 Å between the two Mg2+ ions in I-PpoI and Hpy99I, respectively (Fig. 8B). Moreover, the two ββα-metal finger motifs are located close to the DNA sugar-phosphate backbones so that each monomer can make one nick on one strand of the double-stranded DNA to produce precisely the staggered end products. On the other hand, the two ββα-metal finger motifs in the Holliday junction resolvase T4 Endo VII are arranged more distantly (26.1 Å between the two Mg2+ ions) in a different relative orientation for the binding and cleavage of a four-way DNA junction (44). Interestingly, in these three protein-DNA complexes, the 2-fold symmetry axis between the two protomers roughly coincides with the 2-fold axis of the DNA substrates (see the 2-fold axis marked in Fig. 8B). Therefore, the relative orientation and distance of the two ββα-metal finger motifs in these dimeric endonucleases are actually restrained by their substrates. On the other hand, CPS-6 is a nonspecific endonuclease, and it shares not only a similar fold but also a similar activity with Serratia nuclease. However, CPS-6 dimerizes in a way completely different from that of Serratia nuclease. Superimposition of one of the protomers of the dimeric structures of CPS-6 and Serratia nuclease shows that the dimeric interfaces are located in different regions in the two proteins (Fig. 8A). As a result, the relative orientation and distance between the two ββα-metal finger motifs are different in the two nonspecific endonucleases: 44.5 Å in CPS-6 and 54.4 Å in Serratia nuclease for the distance between the two Mg2+ ions. This result indicates that each monomer of CPS-6 and Serratia nuclease likely interacts with DNA substrates independently. Hence, the relative orientation of the two distant ββα-metal finger motifs in these nonspecific nucleases is irrelevant. Why CPS-6 and Serratia nuclease did not evolve into monomeric enzymes, such as the nonspecific nucleases EndA, NucA, and Vvn, is unknown. In the future, it will be necessary to cocrystallize CPS-6 with DNA to further elucidate its substrate binding mode and the basis of its sequence preference. FIGURE 6. The cell death assay in C. elegans. Transgenic cps-6(sm116) animals expressing wild-type CPS-6 (A), CPS-6(H148A) (B), CPS-6(R146A) (C), or CPS-6(F166A) (D) under the control of the dpy-30 promoter were generated, and the numbers of cell corpses were scored. For each construct, the data were collected from three independent transgenic lines. The stages of transgenic embryos examined were: comma and 1.5-, 2-, 2.5-, 3-, and 4-fold. The y axis represents the average number of cell corpses scored, and the error bars show the standard deviations. Fifteen embryos were counted for each developmental stage. The significance of differences was determined by two-way analysis of variance, followed by Bonferroni comparison. *, p < 0.001; **, p < 0.05. All other points had p values > 0.05.
On the other hand, CPS-6 is a nonspecific endonuclease, and it shares not only a similar fold but also a similar activity to that of Serratia nuclease. However, CPS-6 dimerizes in a way completely different from that of Serratia nuclease. Superimposition of one of the protomers of the dimeric structure of CPS-6 and Serratia nuclease shows that the dimeric interfaces are located in different regions in the two proteins (Fig. 8A). As a result, the relative orientation and distances between the two ␣-metal finger motifs are different in the two nonspecific endonucleases: 44.5 Å in CPS-6 FIGURE 6. The cell death assay in C. elegans. Transgenic cps-6(sm116) animals expressing wild-type CPS-6 (A), CPS-6(H148A) (B), CPS-6(R146A) (C), or CPS-6(F166A) (D) under the control of the dpy-30 promoter were generated, and the numbers of cell corpses were scored. For each construct, the data were collected from three independent transgenic lines. The stages of transgenic embryos examined were: comma and 1.5-, 2-, 2.5-, 3-, and 4-fold. The y axis represents the average number of cell corpses scored, and the error bars show the standard deviations. Fifteen embryos were counted for each developmental stage. The significance of differences were determined by two-way analysis of variance, followed by Bonferroni comparison. *, p Ͻ 0.001; **, p Ͻ 0.05. All other points had p values Ͼ 0.05. and 54.4 Å in Serratia nuclease for the distance between the two Mg 2ϩ ions. This result indicates that each monomer of CPS-6 and Serratia nuclease likely interacts with DNA substrates independently. Hence, the relative orientation of the two distant ␣-metal finger motifs in these nonspecific nucleases is irrelevant. Why CPS-6 and Serratia nuclease did not evolve into monomeric enzymes, such as the nonspecific nucleases EndA, NucA and Vvn, is unknown. In the future, it will be necessary to cocrystallize CPS-6 with DNA to further elucidate its substrate binding mode and the basis of sequence preference. . The active site and proposed catalytic mechanism for CPS-6. A, the active site of CPS-6 shares a similar conformational arrangement with that of Vvn. The ␣-metal finger motif is colored in cyan, with the conserved 145 DRGH 148 sequence (and the corresponding 77 EWEH 80 sequence in Vvn) displayed in marine blue. B, schematic diagram of the proposed DNA hydrolysis mechanism by CPS-6. His 148 acts as a general base to activate a water molecule, which in turn makes an in-line attack on the scissile phosphate. The magnesium ion stabilizes the phosphoanion transition state, and the Mg 2ϩ -bound water molecule functions as a general acid to provide a proton to the 3Ј-oxygen leaving group. FIGURE 8. Structural comparison of nonspecific and site-specific dimeric ␣-metal finger nucleases. A, the overall folds of the two nonspecific nucleases, CPS-6 and Serratia nuclease, are similar. However, the dimeric interfaces are located in different regions as revealed by the superimposition of one protomer of the two proteins (boxed in the right panel). B, the crystal structures of the three site-specific endonucleases Hpy99I, I-PpoI, and T4 Endo VII in complex with their DNA substrates show that the two ␣-metal motifs are positioned and oriented next to the DNA sugar-phosphate backbones. The 2-fold symmetry (displayed as an oval) of the dimeric proteins coincides roughly with the 2-fold axis of the DNA substrates. | 2018-04-03T05:08:49.603Z | 2012-01-05T00:00:00.000 | {
"year": 2012,
"sha1": "011e245db7b5fa83f7085c7630c9a86e8242afc8",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/287/10/7110.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "e3ef540482459e7de4b1141c20acb98e2602ad2f",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
248734337 | pes2o/s2orc | v3-fos-license | Genetic Diversity and Population Structure of Bullet Tuna (Auxis rochei) from Bali and Its Adjacent Waters
Bullet tuna (Auxis rochei) dominates the neritic tuna catch, especially from the purse seine fleet within the western and southern Indonesian waters. However, high catches can lead to stock depletion and lower genetic diversity due to possible inbreeding. Therefore, population genetic information is important for monitoring the sustainability of fish stocks and proposing an appropriate species-specific conservation strategy. This study aimed to analyze the genetic diversity, population structure, and kinship relationships of bullet tuna in Bali and its adjacent waters. Sampling was carried out in September 2020 at landing sites/ports representing the north, east, south, and west regions, with at least 30 samples acquired at each location. The DNA concentrations obtained produced DNA bands with allele lengths ranging from 94 to 260 bp. Observed heterozygosity (Ho) ranged from 0.440 to 0.627, while expected heterozygosity (He) was between 0.932 and 0.945. The genetic variation among populations, within populations, and among individuals was 0.36%, 41.04%, and 58.60%, respectively. The analysis of genetic diversity between individuals within populations showed very high genetic diversity. The bullet tuna landed in West Bali, East Bali, South Bali, and North Bali belong to the same population stock, and the kinship relationships indicate that the four populations are closely related genetically.
Introduction
The neritic tuna managed under the TCT RPP consist of four species: bullet tuna (Auxis rochei), frigate tuna (Auxis thazard), kawakawa (Euthynnus affinis), and longtail tuna (Thunnus tonggol) (Suryaman et al. 2017). Bullet tuna is the dominant species caught within coastal areas by small-scale or artisanal tuna fisheries (Naderi 2016). Neritic tuna are mostly found in the tropical waters of the Indo-Pacific. Even though they live in the ocean, these tuna prefer to stay near the coast, and even juveniles can be found in bays and harbors (Agus 2017).
According to Sastra et al. (2018), neritic tuna populations are widespread in almost all Indonesian waters, including Bali and its surrounding areas. The Bali Strait itself holds a potential supporting system of aquatic marine life for coastal fisheries communities (Syah et al. 2020). The best scientific estimate of the total catch of bullet tuna from the Indian Ocean in 2019 reached 24,000 tons, with a 2015-2019 average of around 19,000 tons (IOTC-WPNT11 2021). At least 34% (~6,000 tons) was contributed by the Indonesian fleet during the same period (IOTC-WPNT11 2021), of which a small part was generated from the Bali Strait (Prayoga et al. 2017). The high catch of bullet tuna reported in the last five years is shadowed by uncertainty in catch estimation and may not represent the actual condition (IOTC-WPNT11 2021). However, the significant rise in bullet tuna production indicates intensive fishing activity driven by increasing market demand, which could potentially cause overfishing or local depletion. Constant monitoring through biological parameters of the bullet tuna population is pivotal in keeping its resource in check. One of the tools is population genetics, one of whose main purposes is to investigate genetic diversity. According to Nugraha et al. (2016), genetic information can determine the right conservation strategy for a population. In addition to the conservation and management of fish stocks, genetic diversity is also a very important factor because the improvement of genetic quality is based on the genetic diversity possessed by a population (Sundari et al. 2018). It can reveal whether there is genetic transfer between populations and help assess the stock status of the population. Zedta and Setyadji (2019) visualized PCR products amplified with the Aro2-38 microsatellite DNA primer in bullet tuna and frigate tuna; DNA bands appeared in all successfully amplified samples across loci, although not all samples showed the same band thickness. This marker can thus be a useful tool for population genetic studies of bullet tuna and other fish of the same genus.
Therefore, given the lack of information regarding its genetic variability and population structure, especially in Indonesian waters, this preliminary study examined the genetic diversity, population structure, and kinship of bullet tuna, particularly those landed at landing sites/fishing ports scattered around the island of Bali. Such information is essential for designing a better harvest strategy in the future.
Sample Collection
Tissue sampling was carried out in September and October 2020 at four locations representing the waters around Bali, namely PPN Pengambengan in the west, TPI Karangasem in the east, TPI Bondalem Buleleng in the north, and PPI Kedonganan in the south (Figure 1). The journey from land to the fishing grounds takes only 1-2 hours, so the catches can be safely assumed to come from the waters of Bali and its surroundings. Samples consisted of slices of muscle tissue from the pectoral to dorsal region of the fish, 30 samples per location. The tissue was cut with a blade and placed into a vial filled with 96% alcohol.
DNA Extraction
DNA extraction was carried out using the DNeasy Blood and Tissue Mini Kit, following the manufacturer's instructions. The extracted DNA was then measured for its concentration using a nanophotometer. If the concentration of the extracted DNA was low, the extraction process was repeated.
Figure 1. Sampling locations of bullet tuna (Auxis rochei) around Bali and its adjacent waters.
DNA Amplification
Amplification of nuclear DNA was carried out using five microsatellite DNA primers (Catanese et al. 2007), as shown in Table 1. The PCR amplification mix combined Red Mix, nuclease-free water (NFW), forward primer, reverse primer, and DNA template in a total reaction volume of 25 µL. The thermocycling profile was as follows: pre-denaturation at 95°C for 2 minutes for one cycle, followed by 34 cycles of denaturation at 95°C for 30 seconds, annealing at the temperatures and times given by Catanese et al. (2007), and extension at 72°C for 45 seconds, with one final extension cycle at 72°C for 5 minutes.
Electrophoresis Using QIAxcel
The microsatellite locus polymorphism screening was conducted using the QIAxcel fragment analyzer. It uses a high-resolution DNA screening gel cartridge with a size marker ranging from 25 to 500 bp and an alignment marker of 15 bp/600 bp (Qiagen 2017). Band pattern data and electropherograms were analyzed using the QIAxcel BioCalculator software to score the alleles that emerged. The results were used to estimate several population genetic parameters, including the number of alleles, allele frequencies, heterozygosity (Ho/He), genetic distance, kinship, and population structure.
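As a rough illustration of how the scored alleles translate into the diversity statistics reported below, the following is a minimal Python sketch (not the BioCalculator/GenAlEx pipeline itself) that computes per-locus allele frequencies and observed versus unbiased expected heterozygosity; the genotype data are hypothetical.

    from collections import Counter

    # Per-locus allele frequencies, observed (Ho) and unbiased expected (He)
    # heterozygosity from diploid genotypes coded as (allele1, allele2).
    genotypes = [(94, 110), (94, 94), (110, 128), (128, 260)]  # hypothetical bp sizes

    alleles = [a for g in genotypes for a in g]
    n = len(alleles)  # 2N gene copies
    freqs = {a: c / n for a, c in Counter(alleles).items()}

    ho = sum(a != b for a, b in genotypes) / len(genotypes)
    he = (n / (n - 1)) * (1.0 - sum(p ** 2 for p in freqs.values()))  # Nei's small-sample correction
    print(f"freqs={freqs}  Ho={ho:.3f}  He={he:.3f}")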
Data Analysis
Allelic variation, population structure, and genetic diversity were calculated using Arlequin version 3.5 (Excoffier et al. 2005). Relationships between populations were determined from genetic distance parameters calculated following Slatkin (1995). Differences in genetic diversity among populations were estimated using Analysis of Molecular Variance (AMOVA) to determine genetic variation and population structure between population groups of bullet tuna (A. rochei). Meanwhile, GenAlEx software version 6.5 (Peakall and Smouse 2012) was used to determine the polymorphic loci.
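For readers unfamiliar with the F-statistics reported below, here is a minimal sketch of Wright's hierarchical F-statistics computed from mean heterozygosities; this illustrates the definitions only, not the AMOVA-based estimator that Arlequin uses, and the input values are hypothetical.

    # Wright's F-statistics from mean heterozygosities (definitions only):
    #   Ho: mean observed heterozygosity of individuals
    #   Hs: mean expected heterozygosity within subpopulations
    #   Ht: expected heterozygosity of the pooled total population
    def f_statistics(ho, hs, ht):
        fis = 1.0 - ho / hs  # inbreeding within subpopulations
        fst = 1.0 - hs / ht  # differentiation among subpopulations
        fit = 1.0 - ho / ht  # overall; (1 - FIT) = (1 - FIS) * (1 - FST)
        return fis, fst, fit

    fis, fst, fit = f_statistics(ho=0.55, hs=0.935, ht=0.938)  # hypothetical inputs
    print(f"FIS={fis:.3f}  FST={fst:.4f}  FIT={fit:.3f}")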
Population Structure
The fixation index (FST) analysis using Arlequin showed no significant genetic differentiation between populations (p > 0.05) at a 95% confidence interval (Tables 4 and 5). This indicates that the bullet tuna landed at the four fish landing sites in Bali still belong to the same population stock and come from the same parent population and migration pattern.
The average inbreeding coefficient within populations (FIS) was 0.41184, the overall inbreeding coefficient (FIT) was 0.41397, and the genetic differentiation (FST) was 0.00362 (very low). Genetic variation among bullet tuna populations from all locations was 0.36%, while the genetic variation among individuals within populations and within individuals was 41.03% and 58.60%, respectively (Table 6).
Kinship Analysis
Kinship can be assessed from the genetic distances derived from the DNA band profiles (Table 7). The smaller the genetic distance, the closer the kinship of the bullet tuna populations, and vice versa. The bullet tuna landed in eastern and northern Bali showed the closest kinship, while those from the south and north were the most distant. The low genetic distance values indicate that the four populations are closely related.
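As a sketch of how such a distance matrix can be derived from allele frequencies, the following computes Nei's (1972) standard genetic distance between two populations; the frequency vectors are hypothetical, and the actual study values come from Table 7.

    import math

    # Nei's (1972) standard genetic distance at one locus:
    # D = -ln( Jxy / sqrt(Jx * Jy) ), with Jx, Jy the within-population
    # homozygosities and Jxy the cross-population gene identity.
    def nei_distance(px, py):
        jx = sum(p * p for p in px)
        jy = sum(q * q for q in py)
        jxy = sum(p * q for p, q in zip(px, py))
        return -math.log(jxy / math.sqrt(jx * jy))

    east = [0.40, 0.35, 0.25]   # hypothetical allele frequencies
    north = [0.38, 0.37, 0.25]
    print(f"D = {nei_distance(east, north):.4f}")  # small D -> close kinship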
Discussion
Genetic diversity can act as an indicator of a population's future condition (Nozawa et al. 1982) and can be assessed through one of its attributes, heterozygosity (Tanabe et al. 1999). In this study, the genetic diversity of bullet tuna among individuals landed at the four fish landing sites in Bali is categorized as high: the values fell between 0.8 and 1.0, the range Nei (1987) classifies as high. This result is broadly similar to the findings of Catanese et al. (2007) in Mediterranean, Atlantic, and Pacific waters. Populations with high genetic diversity have a better chance of survival because individuals respond differently to environmental conditions. The higher the heterozygosity value, the higher the outbreeding, which increases the proportion of heterozygous genotypes (Noor 2000). Hendiari et al. (2020) explained that high genetic diversity of fish in a population can arise for two reasons: first, the size of the fish caught and the large number of fish in the waters; second, the high migratory ability of the species.
The mean observed heterozygosity (Ho) obtained in this study was smaller than the expected heterozygosity (He), indicating a genotypic imbalance in the population (Tambasco et al. 2003). Machado et al. (2003) added that it could be a sign of intensive selection and possible inbreeding. Based on these two explanations, it is suggested that all bullet tuna caught in the waters around Bali belong to a single population. The species has cosmopolitan habits and usually forms large schools, and its dispersal often follows the circulation of sea currents (Agus 2017). Kasim et al. (2020) added that seasonal migration patterns at the adult stage and during spawning create a high dispersal potential, so that genetic differentiation between populations becomes lower. The Bali Strait is a semi-enclosed body of water connecting the Bali Sea in the north and the Indian Ocean in the south (Priyono et al. 2008), and the circulation of water masses in the strait enters from the Indian Ocean (south-southeast) towards the Bali Sea (north-northwest) (Pranowo and Realino 2006). Migration can also allow crossbreeding and mixing of genes between populations (Agus 2017). Hartl and Clarke (1997) divided FST values into four levels: low (<0.05), moderate (0.05-0.15), high (0.15-0.25), and very high (>0.25). By these criteria, the bullet tuna in this study show low genetic differentiation, indicating a strong genetic relationship between the populations. This is further supported by the insignificant differences (p > 0.05) among the four bullet tuna populations around Bali. Unfortunately, the lack of comparable studies makes these results difficult to benchmark. Both the inbreeding coefficient within populations (FIS) and the overall inbreeding coefficient (FIT) were not significantly different from zero, implying no detectable inbreeding within or among the existing populations. Moreover, the genetic differentiation (FST) was close to zero, which illustrates that the genes in each subpopulation retain fairly high genetic diversity owing to the low inbreeding coefficient. In other words, mating still occurs randomly between subpopulations, with little mating between close relatives. Further, the AMOVA analysis (Table 6) confirmed that the genetic differences of bullet tuna across all landing sites were unlikely to be driven by differences between populations but rather by differences among and within individuals (41% and 58%, respectively). These results indicate that genetic diversity between individuals within populations was high, while diversity between populations was low, pointing to genetic mixing between populations that causes similarities in genetic structure. The similarity of gene structures between geographically distant populations can be caused by several factors, including a shared origin (ancestral refugia) (Tsuda et al. 2009).
The eastern Bali population has a close relationship with its northern counterpart, whereas the southern and northern populations are the most distantly related. Theoretically, bullet tuna migration could proceed in sequence from the north, toward the east, through the southern part, and back up through the Bali Strait in the west, following the Indonesian throughflow (Arlindo), which flows from the Pacific Ocean to the Indian Ocean (Gordon 2005). In addition, fishermen's behavior probably also influenced the mixing, since fishing grounds are not fixed in one area and can be combined depending on the season. The close kinship of the four bullet tuna populations suggests that they come from the same lineage group. High migratory mobility results in gene flow because of the greater chance of encounters between populations (Akbar et al. 2020). Populations with close kinship have genetic and morphological similarities, possibly due to environmental conditions (Saleky et al. 2016).
In conclusion, the bullet tuna population around Bali and its adjacent waters is suggested to be a single panmictic population. Panmixia refers to random mating, in which breeding occurs just as frequently between any two individuals in a group as between any two others; this type of mating is not constrained by environmental (e.g., geographic proximity), hereditary (e.g., spawning period), or social factors (Bahagiawati et al. 2006). Similar findings have been reported in several studies on tuna and tuna-like populations (Chiang et al. 2008; Akbar et al. 2014).
No distinct population structure was detected for bullet tuna in the western, eastern, northern, and southern parts of Bali's waters. Therefore, future stock assessments of bullet tuna, especially from Bali waters, should treat them as a single stock. Based on this research, the genetic conservation strategy for bullet tuna in Bali can consider the fish as one large population, so that the exploration of genetic material for ex situ and in situ conservation can be represented by a single population. Meanwhile, to maintain the stock status of the bullet tuna population in Bali waters, better fisheries management is needed, with the protection of the bullet tuna migration route as the main focus of conservation and of maintaining the fitness of the population. These steps are necessary to ensure the sustainability of the fishery resource. Furthermore, the application of Next-Generation Sequencing (NGS) techniques is suggested for higher-resolution insight into the population structure of this species. | 2022-05-13T15:16:40.646Z | 2022-04-14T00:00:00.000 | {
"year": 2022,
"sha1": "099e5bcad07076d80ede2b6527a0240721c7e35c",
"oa_license": "CCBYNC",
"oa_url": "https://journal.ipb.ac.id/index.php/hayati/article/download/39207/23303",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "53a52978aab91f81517472beabb1cd7d544743a9",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
206327108 | pes2o/s2orc | v3-fos-license | Unique Spin Dynamics and Unconventional Superconductivity in the Layered Heavy Fermion Compound CeIrIn_5: NQR Evidence
We report measurements of the ^{115}In nuclear spin-lattice relaxation rate (1/T_1) between T=0.09 K and 100 K in the new heavy fermion (HF) compound CeIrIn_5. At 0.4 K<T<100 K, 1/T_1 is strongly T-dependent, which indicates that CeIrIn_5 is much more itinerant than known Ce-based HFs. We find that 1/T_1T, after subtracting that for LaIrIn_5, follows a 1/(T+\theta)^{3/4} variation with \theta=8 K. We argue that this novel feature points to anisotropic spin fluctuations, due to the layered crystal structure, near a magnetic ordering. Bulk superconductivity sets in at 0.40 K, below which the coherence peak is absent and 1/T_1 follows a T^3 variation, which suggests unconventional superconductivity with a line-node gap.
The emergence of superconductivity near a magnetic instability in cerium (Ce)-based heavy fermion (HF) compounds is one of the most intriguing phenomena in strongly correlated electron systems. Except for CeCu_2Si_2, which is superconducting at ambient pressure with T_c = 0.65 K [1], the superconductivity emerges near the quantum critical point (QCP) where the magnetic ordering is suppressed by large applied external pressure in CeIn_3 [2], CeCu_2Ge_2 [3], CePd_2Si_2 [4] and CeRh_2Si_2 [5]. In spite of efforts and progress, however, knowledge about this class of superconductors is still limited because of difficult experimental conditions. The recently discovered new family of Ce-based heavy electron systems, CeMIn_5 (M=Rh, Ir), with M=Ir being a superconductor already at ambient pressure [6,7], provides new opportunities for studying the nature of the superconductivity in the vicinity of a magnetic instability, the interplay between magnetic excitations and superconductivity, etc. In particular, CeIrIn_5 is suitable for studies using microscopic experimental probes that can be applied more easily at ambient pressure.
CeMIn_5 (M=Rh, Ir) consists of alternating layers of CeIn_3 and MIn_2. CeRhIn_5 is an antiferromagnet with T_N = 3.8 K but becomes superconducting below T_c = 2.1 K under pressures larger than 1.6 GPa [6]. In CeIrIn_5, the resistivity is already zero at ambient pressure below 1.2 K, but the Meissner effect and the jump in the specific heat are found only at 0.4 K [7]. The electronic specific heat coefficient γ is found to be 750 mJ/mol K^2 [7], which suggests a large mass enhancement. Recent de Haas-van Alphen oscillation measurements in CeIrIn_5 also reveal a cyclotron mass that is ∼20 times larger than the band mass, consistent with the specific heat result [8].
In this Letter, we report a measurement using a local probe, the ^{115}In nuclear quadrupole resonance (NQR) technique, in CeIrIn_5 down to 90 mK at zero magnetic field.
From the temperature (T) dependence of the nuclear spin-lattice relaxation rate (1/T_1), we find that CeIrIn_5 is much more itinerant than known Ce compounds such as CeCu_2Si_2 [9], and show that this compound is located near a magnetic ordering with anisotropic spin fluctuations due to the layered crystal structure. No anomaly was found at 1.2 K in the NQR quantities, but 1/T_1 shows an abrupt decrease at 0.40 K, below which the NQR intensity also decreases, as does the ac susceptibility, confirming bulk superconductivity below T_c = 0.40 K. The lack of a coherence peak in 1/T_1 just below T_c = 0.40 K, followed by a power-law T-variation, 1/T_1 ∝ T^3, indicates that the superconductivity is of unconventional type with an anisotropic gap. Our results show that CeIrIn_5 bears some resemblance to itinerant, quasi-two-dimensional high-T_c copper oxides.
A single crystal of CeIrIn_5 was grown by the In-flux method as in Ref. [6]. X-ray diffraction indicated that the compound is single phase and forms in the primitive tetragonal HoCoGa_5-type structure. The resistivity already drops to zero at 1.2 K, in agreement with the reported property [7]. The single crystal was crushed into powder to allow maximal penetration of the oscillating magnetic field, H_1. The measurements below 1.4 K were performed using a ^3He/^4He dilution refrigerator. A small H_1 was used to avoid possible heating by the RF pulse. There are two inequivalent crystallographic sites of In in this compound, In(1) in the CeIn_3 plane and In(2) in the IrIn_2 plane. Two sets of In NQR lines corresponding to these two sites were observed, as shown in Fig. 1. The first set of NQR lines, which are equally spaced, is characterized by ν_Q = 6.065 ± 0.01 MHz and asymmetry parameter η = 0. The second set of lines, which are unequally spaced, was found at positions centered at 33.700, 38.350, 52.185 and 71.432 MHz, respectively, corresponding to ν_Q = 18.175 ± 0.01 MHz and η = 0.462 ± 0.001. Here ν_Q and η are defined as ν_Q ≡ ν_z = [3/(2I(2I−1)h)] e^2 Q (∂^2V/∂z^2) and η = |ν_x − ν_y|/ν_z, with Q being the nuclear quadrupole moment, I = 9/2 the nuclear spin, and ∂^2V/∂α^2 (α = x, y, z) the electric field gradient at the position of the nucleus [10]. The symmetric lines are assigned to the In(1) site and the asymmetric lines to the In(2) site, since crystallographically the In(1) site is axially symmetric but In(2) is not. This assignment is consistent with the observation in CeIn_3, where η is zero [11]. ν_Q for In(1) is smaller, but ν_Q for In(2) is larger, by about 10% than the respective values in CeRhIn_5 [12]. The transition lines are narrow, with a full width at half maximum (FWHM) of ∼50 kHz, which indicates good crystal quality. 1/T_1 measurements were done mostly on the 1ν_Q (±1/2 ↔ ±3/2) transition at low T, but at the 4ν_Q (±7/2 ↔ ±9/2) transition at high T, for the In(1) site. The value of 1/T_1 was obtained from the recovery of the nuclear magnetization following a single saturation pulse, with an excellent fit to the equations given by MacLaughlin [13]. The values of 1/T_1 measured at different transitions show excellent agreement. Figure 2 shows 1/T_1 as a function of T in the temperature range 0.09 K ≤ T ≤ 100 K. We discuss the normal-state properties above T_c first. Remarkably, 1/T_1 shows a strong T dependence up to the highest temperature that we have measured, 100 K. This is to be compared to other Ce compounds, such as CeCu_2Si_2, where 1/T_1 becomes T-independent above ∼10 K [9], which is assigned to the Kondo temperature T_K, below which the localized 4f moment is screened to produce a heavy quasiparticle state. This result indicates that CeIrIn_5 is much more itinerant than other known Ce compounds. In Fig. 3, we plot 1/T_1T as a function of T. The inset shows a log-log plot that displays more clearly the behavior just above T_c. For comparison, we also show the value 1/T_1T = 0.81 sec^{-1}K^{-1} for LaIrIn_5, where no 4f spins are present. The measurement for LaIrIn_5 was carried out at 4ν_Q = 23.77 MHz. It is seen that 1/T_1T of CeIrIn_5 is largely enhanced over that of LaIrIn_5 and increases strongly with decreasing T. Also note that the T_1T = const. relation that would be expected from Landau-Fermi liquid theory is not obeyed. These aspects indicate that 1/T_1 in CeIrIn_5 is dominated by itinerant spin fluctuations (SFs).
In fact, it was found more recently that substituting 40% of Rh for Ir results in an antiferromagnetic (AF) ordering of the system at T_N = 2.7 K [14], which also suggests that CeIrIn_5 is a nearly AF metal. In Fig. 4 we show T_1T as a function of T. The open circles above 0.4 K correspond to 1/T_1T = 1/T_1T(CeIrIn_5) − 0.81 sec^{-1}K^{-1} (LaIrIn_5), which represents the contribution due to the presence of Ce 4f spins alone. Note that this correction, subtracting the relaxation of LaIrIn_5, which represents other relaxation channels including the In orbital contribution, is negligibly small for 0.4 K ≤ T ≤ 15 K. As seen in the figure, the data can be fitted to the relation T_1T = C(T + θ)^{3/4} with θ = 8 K and C = 4.75 msec K^{1/4} (solid curve).
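As an illustration of this kind of analysis, the following is a minimal Python sketch that fits the relation T_1T = C(T + θ)^{3/4} to relaxation data with scipy; the data points and starting values here are invented for demonstration and are not the measured ones.

    import numpy as np
    from scipy.optimize import curve_fit

    # Model: T1*T = C * (T + theta)**(3/4)
    def t1t_model(T, C, theta):
        return C * (T + theta) ** 0.75

    # Hypothetical (T [K], T1*T [msec K]) points mimicking the reported trend
    T = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
    rng = np.random.default_rng(0)
    t1t = 4.75 * (T + 8.0) ** 0.75 * (1.0 + 0.02 * rng.standard_normal(T.size))

    (C_fit, theta_fit), _ = curve_fit(t1t_model, T, t1t, p0=[1.0, 1.0])
    print(f"C = {C_fit:.2f} msec K^(1/4), theta = {theta_fit:.2f} K")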
This unique T-dependence of 1/T_1T has never been observed in other HF compounds. We argue that this novel feature arises from anisotropic AF spin fluctuations, due to the layered crystal structure of CeIrIn_5. In d-electron, and also f-electron, weakly or nearly AF metals, many physical quantities can be explained by the self-consistent renormalization (SCR) theory of SFs [15]. In this theory, it was shown that the staggered susceptibility at the AF wave vector q = Q, χ_Q (or the squared magnetic correlation length), follows a Curie-Weiss (CW) variation above the Neel temperature in a weak AF magnet, due to mode-mode coupling of SFs, namely χ_Q ∝ 1/(T + θ). The value |θ| is just the Neel temperature (T_N) in this case. In a nearly AF metal that does not order at finite temperature, χ_Q is shown to also obey a CW variation, while in this case θ measures the closeness of the system to the magnetic ordering; θ decreases towards zero upon approaching the ordering. Now, for two-dimensional (2D) spin fluctuations one expects 1/T_1T ∝ χ_Q, so that 1/T_1T ∝ 1/(T + θ) [15]. Indeed, in many quasi-2D high-T_c cuprates it is found that 1/T_1T ∝ 1/(T + θ), where the value of θ decreases upon approaching the magnetic ordering. For example, in the so-called overdoped compound TlSr_2CaCu_2O_{6.8}, which is far away from the magnetic ordering, θ is 235 K [16]. In the less hole-doped system La_{2−x}Sr_xCuO_4, θ = 120 K for x = 0.24, while it decreases linearly with decreasing hole doping, reaching θ = 20 K at x = 0.075 [17]. In AF-ordered 3D HF compounds, on the other hand, 1/T_1T ∝ (1/(T + θ))^{1/2} is well obeyed [18]. The results predicted by the 2D and 3D models are shown in Fig. 4. As can be seen in the figure, although both models capture the low-T behavior, neither of them fits the data in the high-T range. Let us now consider a situation where the dispersion of the SFs is in between the 2D and 3D ones. If the SF dispersion in one direction is flat, namely, the magnetic correlation length (ξ) is much shorter in one direction than in the others, then by assuming χ(Q + q)^{-1} = χ_Q^{-1} + a_1(q_x^2 + q_y^2) + a_2 q_z^4 instead of an isotropic quadratic dispersion [19], it is shown that 1/T_1T ∝ χ_Q^{3/4}. This anisotropic SF model explained the dynamical susceptibility in the d-electron antiferromagnet YMn_2, which orders at T_N = 110 K, but whose ordering can be suppressed either by applying external pressure or by substituting Sc for Y. In paramagnetic Y_{0.97}Sc_{0.03}Mn_2, inelastic neutron scattering measurements found that ξ is shorter along the [001] direction (ξ_⊥ = 1.72 Å) than along the [110] direction (ξ_∥ = 2.86 Å), which is ascribed to the geometrical frustration of the magnetic interaction [20]. Indeed, the same T-variation as found here, namely 1/T_1T ∝ (1/(T + θ))^{3/4}, was observed in paramagnetic YMn_2 under pressure [21]. On the above basis, we propose that the T-variation of 1/T_1T, which can be fitted to (1/(T + θ))^{3/4} in the entire T-range except near T_c, is due to anisotropic spin fluctuations in CeIrIn_5. In fact, CeIrIn_5 has a layered crystal structure. Because of this 2D-like structure, a weaker magnetic correlation along the c-axis can be expected. Further investigation by inelastic neutron scattering would be interesting to confirm the SF dispersion in this compound. A more systematic NQR/NMR study is also underway to see whether the deviation of the low-T data from the anisotropic SF curve points to a possible crossover to a different SF regime upon lowering T. In any case, the small value of θ < 10 K indicates that CeIrIn_5 is located in close proximity to the magnetic ordering.
Finally, we remark that the strong SFs near the magnetic ordering may also make an appreciable contribution to the huge specific heat. Next, we discuss the superconducting (SC) state. First, as seen in Fig. 2, no anomaly was detected in 1/T_1 around 1.2 K, below which the resistivity is zero. We also carefully checked the intensity and linewidth of the NQR spectrum below 1.4 K; no anomaly is found when passing through 1.2 K. However, 1/T_1 decreases abruptly at T = 0.40 K, below which the NQR intensity decreases, as does the ac susceptibility due to the Meissner effect. These results indicate that bulk superconductivity sets in at 0.40 K, in good agreement with the specific heat measurement [7]. The behavior in the SC state below 0.40 K is remarkable. Namely, 1/T_1 shows no coherence peak just below T_c and decreases in proportion to ∼T^3 upon lowering T. This behavior is not compatible with an isotropic s-wave gap. Rather, our result is qualitatively similar to that in other HF superconductors, such as CeCu_2Si_2 [9] and UBe_13 [22], etc. [23,18], and also in high-T_c cuprate superconductors [24], which indicates that the SC energy gap is anisotropic. In terms of the density of states (DOS), T_1 in the SC state (T_{1s}) is expressed as T_{1N}/T_{1s} = (2/k_BT) ∫∫ C N_s(E) N_s(E') f(E)[1 − f(E')] δ(E − E') dE dE', where N_s(E) = E/(E^2 − Δ^2)^{1/2} is the DOS in the SC state, f(E) is the Fermi function, Δ is the energy gap, and C = 1 + Δ^2/EE' is called the coherence factor. In an isotropic s-wave superconductor, the divergence of N_s at E = Δ results in the coherence peak of 1/T_1 just below T_c, and 1/T_1 decreases as exp(−Δ/k_BT) at low T because N_s = 0 for E < Δ. By contrast, an anisotropic gap generally reduces the divergence of N_s and produces a finite DOS at low energy. For example, if we assume a line-node gap Δ(φ) = Δ_0 cos φ, then by angle-averaging C and N_s over the Fermi surface one finds that the DOS remains finite at E = Δ_0 and is linear in E below Δ_0. The finite value of N_s at E = Δ_0 removes the coherence peak, and the E-linear DOS below Δ_0 gives rise to a T^3 variation of 1/T_1 at low T. The curve below T_c in Fig. 2 depicts the calculated result assuming the above model with 2Δ_0 = 5.0 k_BT_c and a BCS T-dependence for Δ_0. This gap amplitude Δ_0 is about the same as that for CeCu_2Si_2, to which the same model has been applied. It is, however, substantially smaller than that in some uranium (U)-based HF superconductors, where 2Δ_0 would reach ∼10 k_BT_c for the same gap function [22,23], which may be related to the proximity of the present compound to a magnetic instability. In fact, a recent study found a reduced Δ_0 in Ce_{0.99}Cu_{2.02}Si_2 [25], which is believed to be located closer to the QCP than the stoichiometric compound [26]. Applying external pressure increases Δ_0 [25]. In CeIrIn_5, applying pressure increases T_c [7]. Further investigations by NQR under pressure are in progress in order to reveal how the gap amplitude evolves with pressure, and the intimate relation between the superconductivity and the magnetic excitations.
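To make the line-node argument concrete, here is a minimal numerical sketch (not from the paper) that evaluates the relaxation ratio for the angle-averaged DOS of a Δ(φ) = Δ_0 cos φ gap; for simplicity a fixed gap replaces the BCS temperature dependence, which is an assumption of the sketch.

    import numpy as np

    def ns_avg(E, D0, nphi=2000):
        # Angle-average of the BCS DOS for a line-node gap D(phi) = D0*cos(phi);
        # the gap changes sign over the Fermi surface, so the coherence factor
        # averages out and only Ns^2 enters the relaxation rate.
        phi = np.linspace(0.0, 2.0 * np.pi, nphi, endpoint=False)
        gap2 = (D0 * np.cos(phi)) ** 2
        vals = np.zeros(nphi)
        mask = E ** 2 > gap2
        vals[mask] = E / np.sqrt(E ** 2 - gap2[mask])
        return vals.mean()

    def t1_ratio(T, D0, nE=3000):
        # R = (1/T1s)/(1/T1N) = (2/T) * Integral Ns(E)^2 f(E)(1 - f(E)) dE, with kB = 1.
        E = np.linspace(1e-3 * D0, 8.0 * D0, nE)
        f = 1.0 / (np.exp(E / T) + 1.0)
        ns2 = np.array([ns_avg(e, D0) for e in E]) ** 2
        return (2.0 / T) * np.trapz(ns2 * f * (1.0 - f), E)

    D0 = 1.0  # fixed gap (assumption; the paper uses a BCS T-dependence)
    for T in (0.05, 0.10, 0.20):
        print(f"T={T:.2f}: R={t1_ratio(T, D0):.4e}")  # R ~ T^2, so 1/T1s ~ T^3 at low T

Since 1/T_{1N} ∝ T, a ratio R ∝ T^2 corresponds to the 1/T_1 ∝ T^3 behavior discussed above.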
In summary, we find from NQR 1/T_1 measurements that the new heavy fermion (HF) compound CeIrIn_5 is much more itinerant than known Ce-based HFs. We further find that 1/T_1T, after subtracting that for LaIrIn_5, follows a (1/(T + θ))^{3/4} variation. FIG. 2. T dependence of the ^{115}In nuclear spin-lattice relaxation rate. The solid curve is a calculation assuming a line-node gap Δ(φ) = Δ_0 cos φ with Δ_0 = 2.5 k_BT_c (see text for details). | 2018-04-03T04:31:37.612Z | 2001-02-27T00:00:00.000 | {
"year": 2001,
"sha1": "e70a63db883dc03d163af61c0ce9bab8f17f14d0",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0102487",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "10b2ad51ed7186077934e008b293a54ee26a9537",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
157542177 | pes2o/s2orc | v3-fos-license | Macroeconomic Effects of Oil Price Fluctuations in Colombia / Efectos macroeconómicos de las fluctuaciones de los precios del petróleo en Colombia
This research studies the effects of oil price changes on the Colombian economy during 2001:Q1 to 2016:Q2. A structural vector autoregression model in the spirit of Blanchard and Galí (2010) is estimated under a recursive identification scheme, in which unexpected oil price variations are exogenous relative to the contemporaneous values of the remaining variables. Drawing on impulse-response estimates, a 10% increase in the oil price generates the following accumulated orthogonalized responses: i) a contemporaneous 0.4% increase in GDP growth; the effect then reaches its maximum in the first quarter (a 1.7% increase) and starts to decay after two quarters; ii) a contemporaneous 1.2% decrease in unemployment; the effect then remains slightly negative and reaches its maximum after ten quarters (a 5.1% decrease); iii) a contemporaneous 0.9% decrease in inflation, followed by a 0.2% increase by quarter three, after which the effect remains slightly negative.
Leonardo Quero-Virla
INTRODUCTION
Oil is a key component of the global economy, and the relationship between its price and macroeconomic indicators has been addressed by economic researchers since the late 1970s, including Hamilton (1983, 1996), Rotemberg and Woodford (1996), Kilian (2009), and Blanchard and Galí (2010), among many others. However, most of the research on the subject has focused on advanced economies (especially the U.S.), which have historically been net importers of oil. For emerging and developing economies, the effects of oil price fluctuations have been explored to a much lesser extent in recent work, for example by Lorde, Jackman and Thomas (2009) for Trinidad and Tobago, and by Farzanegan and Markwardt (2009) for Iran.
Previous work has presented a variety of results, which suggest that responses to oil price fluctuations may be heterogeneous across countries, depending on the characteristics of the economy, including whether it is emerging or developed, and whether it is a net oil exporter or a net oil importer.
There were three main motivations for conducting this research. First, Colombia represents an interesting country for a case study, as it is an emerging market economy with relatively low oil reserves (compared to other oil giants such as Venezuela and Mexico) that has nevertheless earned a place among the largest oil exporters in the Latin American region in recent years (BP, 2016). Second, although the oil sector in Colombia is substantial, the country has a somewhat diversified economy, which is not common among major oil producers; for instance, oil revenues as a percentage of GDP were consistently less than 9% during 1970-2014 (World Bank, 2016). Third, as stated previously, the majority of journal articles on this subject have focused on the U.S. or other industrialized economies, and to the best of my knowledge, the Colombian case has not been widely explored in the empirical macroeconomic literature. That said, this work aims to make a contribution to the understanding of the oil prices-macroeconomy relationship, focusing on Colombia.
The empirical strategy relied on the standard structural vector autoregression (SVAR) methodology, a heavily used tool in modern macroeconomic research. I began the empirical analysis by examining the statistical properties of each time series and estimating a simple, unrestricted vector autoregression. After this model was tested and accepted, I proceeded to estimate a structural specification in the spirit of Blanchard and Galí (2010), under the identification assumption that unexpected variations in the nominal price of oil are exogenous relative to the contemporaneous values of the main macroeconomic variables for Colombia.
The conclusions are driven mainly by an impulse-response analysis with a time horizon of 10 periods (quarters) ahead, which also applies to the structural forecast error variance decomposition. A unit shock in the oil price (a 1% increase) generates a contemporaneous increase in GDP growth (which starts to decay after one or two quarters) and contemporaneous decreases in unemployment and inflation (thereafter both effects remain slightly negative). Such results are inspected and discussed at greater length in the last two sections. Additionally, oil price innovations do not explain a significant share of the forecast error variance of the remaining variables.
THE OIL SECTOR IN THE COLOMBIAN ECONOMY
Colombia's proved reserves of oil are not as large as those of other major oil producers, such as Venezuela or Mexico. Nevertheless, its production has increased sharply during the last decades. According to BP (2016), Colombia is the third largest oil producer in South America, after Venezuela and Brazil, and the fourth largest in Latin America if Mexico is included. Oil production in Colombia increased about 400% between 1965 and 2015, and about 92% between 2005 and 2015. Oil consumption has been increasing since 1965, but at a much slower pace; the gap between supply and demand reached its historic peak in 2015. The Americas Society (2010) called the post-2003 period an Energy Renaissance. It was preceded by an era in which production declined (mainly by the end of the 1990s and early 2000s) due to geological setbacks and security problems, as upstream activity was often located in remote places where the State had limited presence, increasing the likelihood of kidnappings, pipeline bombings, extortion, etc. Following the sharp decline in oil production, reforms took place in 2003 to revisit the regulatory and fiscal framework to account for Colombia's less competitive geology. Such reforms were accompanied by other actions in the security sphere. Moreover, since the 2003 reforms, the Colombian oil sector attracted a total of 38.8 billion USD of foreign direct investment (FDI) during 2003-2015 (Banco de la República de Colombia, 2016), and the share of oil investment out of total FDI increased. The years of high foreign direct investment flows in the petroleum sector coincide with years of rapid growth in crude production, as shown in Figures 1 and 3. Regarding oil revenues, they represented 6.4% of Colombian GDP in 2014 and consistently accounted for less than 9% of GDP during 1970-2014, which is not a common pattern among major oil producers; that fact makes Colombia less exposed to oil price risk than Venezuela, where oil revenues accounted for 38% of GDP in 2005 and 23% in 2012. The Colombian oil sector is dominated by Ecopetrol, a public stock-holding corporation, 88.5% state-owned and associated with the Ministry of Energy. According to its management report (Ecopetrol S.A., 2015), the company ranks 19th in the Platts Top 250 Global Energy Company Rankings and has a value of USD 2.0 billion. The company has the capacity to participate in every stage of the hydrocarbons chain, including both upstream (exploration and production) and downstream (trading, lubricants, petrochemicals) activities.
By the end of 2015, Colombia produced around one million barrels of oil per day and had 290,850 barrels per day of crude oil refining capacity at five refineries owned by Ecopetrol. The company aims to increase refining capacity and improve its ability to process heavier crude oils by expanding the Barrancabermeja refinery, located in Santander Department, and has just started operations at the new Cartagena refinery, located in Bolívar Department (U.S. Energy Information Administration, 2016).
Despite being a state-owned company, and unlike similar peers such as PDVSA in Venezuela and Petrobras in Brazil (which have been linked to corruption scandals by the international press), Ecopetrol is run in a business-oriented manner, with clear corporate strategies and values. It aims to increase its production by 1-2% up to 2020, maintain its current credit rating, invest 5 billion USD every year, and cut costs by 1 billion USD every year (Ecopetrol S.A., 2015).
Much of Colombia's crude oil production takes place in the Andes foothills and in the eastern Amazonian jungles. Meta Department, in central Colombia, is also an important production area where heavy crude oil predominates. The Llanos basin contains the Rubiales oilfield, the largest producing oil field in the country. It should also be noted that the number of operating oil rigs has declined recently, as Figure 5 shows, but that is a common trend among South American producers. The petroleum sector faces a number of limitations, including still-deficient infrastructure. Pipelines and other energy facilities have been the target of attacks by anti-government guerrillas for many years, which have caused a significant number of unplanned production disruptions: around 41,000 barrels per day (U.S. Energy Information Administration, 2016). Also, some local communities oppose energy projects on their lands because of spiritual beliefs about protecting natural resources, concern that oil-related activity will attract criminal or violent groups to their territory (Americas Society, 2010), or concern about the adverse effects of operating in environmentally sensitive areas.
METHODOLOGY
This analysis of the effect of oil prices on the macroeconomy follows the structural vector autoregression (SVAR) tradition, a well-known multivariate time series framework heavily used in modern empirical macroeconomics. The methodology was initially developed by Christopher Sims (1980, 1986) but has been extended by many other contributors. A full review of the estimation, identification strategies, benefits and drawbacks of SVARs can be found in Lütkepohl (2005) and Kilian (2013).
SVARs are data-driven but still incorporate meaningful elements from economic theory or intuition by setting a minimum number of restrictions, an appealing feature for establishing cause-effect relationships. According to Kilian (2013), despite the increased use of dynamic stochastic general equilibrium models, SVARs remain the main tool for empirical work in macroeconomics. That said, selecting the empirical strategy was not a difficult task, as it coincides with previous work on the same subject by Kilian (2009) and especially by Blanchard and Galí (2010), the main empirical reference for this paper.
Identification and Estimation
Although SVARs are structural models, they depart from reduced-form vector autoregressions (VARs). Hence, following Lütkepohl (2005), the empirical workflow of this paper begins with the estimation of a simple VAR, which is tested before proceeding to the structural analysis.
After that, a structural model is set up with Y_t = (OIL_t, GDP_t, UNEMP_t, CPI_t)', where OIL_t is the percent change of the WTI crude price in USD; GDP_t and CPI_t are percent changes of the (seasonally adjusted) GDP and the consumer price index, respectively; and UNEMP_t is the averaged-by-quarter unemployment rate. The SVAR representation is:

Y_t = a + A_1 Y_{t-1} + ... + A_p Y_{t-p} + e_t, with A e_t = B ε_t    (1)

where a is a vector of constants or intercept terms, A_i is a matrix of coefficients for period t-i, and ε_t is a four-dimensional vector of serially uncorrelated and mutually uncorrelated errors. It is assumed that A has a recursive (lower-triangular) structure, so that with B = I the reduced-form errors e_t decompose according to e_t = A^{-1} ε_t:

[ 1     0     0     0  ] [ e_t^OIL   ]   [ ε_t^OIL-shock   ]
[ a_21  1     0     0  ] [ e_t^GDP   ] = [ ε_t^GDP-shock   ]    (2)
[ a_31  a_32  1     0  ] [ e_t^UNEMP ]   [ ε_t^UNEMP-shock ]
[ a_41  a_42  a_43  1  ] [ e_t^CPI   ]   [ ε_t^CPI-shock   ]

Following this recursive structure, the restrictions placed on matrix A imply that unexpected variations in the nominal price of oil are exogenous relative to the contemporaneous values of the remaining macroeconomic indicators included in the SVAR, which is consistent with Blanchard and Galí (2010). They explain that such an identification assumption would clearly be incorrect if macroeconomic developments in the country under consideration affected the world price of oil contemporaneously, either because the economy is large or because developments in the country are correlated with world developments. Their research therefore explored alternative assumptions and obtained nearly identical results across them. In the case of Colombia, a small open economy, it is unlikely that national macroeconomic fluctuations have a direct and contemporaneous effect on the global price of oil. Note that matrix B was set to an identity matrix, as no restrictions were imposed on it.
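As a rough illustration only (not the author's original code), the following Python sketch estimates the reduced-form VAR with statsmodels and recovers orthogonalized impulse responses via a Cholesky decomposition, which coincides with the recursive identification above when the variables are ordered OIL, GDP, UNEMP, CPI; the input file name is hypothetical.

    import pandas as pd
    from statsmodels.tsa.api import VAR

    # Hypothetical input; columns reordered to match the recursive scheme.
    cols = ["OIL", "GDP", "UNEMP", "CPI"]
    data = pd.read_csv("colombia_quarterly.csv", usecols=cols)[cols]

    model = VAR(data)
    results = model.fit(4)  # VAR(4), as in the lag selection reported below

    # Cholesky factorization of the residual covariance gives the
    # orthogonalized IRFs, i.e., the recursive SVAR responses.
    irf = results.irf(10)                    # 10 quarters ahead
    structural_irfs = irf.orth_irfs          # orthogonalized responses
    cumulative_irfs = irf.orth_cum_effects   # accumulated responses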
As stated in equation (2), the model can account for four shocks; however, the empirical effort of this paper focuses on ε_t^OIL-shock, the oil price shock, as it is the most relevant to the purpose of the study. A priori, given the importance of the oil sector in Colombia and the relevance of the country in terms of crude production, oil price increases were expected to produce an increase in GDP growth, a decline in the unemployment rate, and a slight decrease in the inflation rate. Additionally, increases in oil prices raise oil exports, which is expected to strengthen the Colombian currency, causing a deflationary effect on domestic prices.
DATA
Preliminarily, the following time series were collected: (i) from Banco de la República de Colombia (2016), quarterly data on the real seasonally-adjusted gross domestic product, and monthly data on the consumer price index and the unemployment rate; (ii) from the International Monetary Fund (2016), quarterly data on the nominal West Texas Intermediate crude global price (period averages).
Subsequently, some transformations were applied to the time series in order to account for unit roots and to convert them to quarterly frequency where applicable. The resulting dataset covers the period 2001:Q1 to 2016:Q2 and includes: OIL_t, the percent change (%) of the nominal WTI crude price in USD; GDP_t and CPI_t, percent changes (%) of the real, seasonally adjusted GDP and the consumer price index, respectively; and UNEMP_t, the averaged-by-quarter unemployment rate (%). In the specific case of CPI_t, end-of-quarter values were used to construct the final time series.
The stationarity of every series was confirmed by means of the KPSS test developed by Kwiatkowski et al. (1992), whose null hypothesis is that the tested time series is stationary. Every test included the automatic lag selection procedure of Newey and West (1994). The results are shown in Table 1. There was no particular reason for selecting the 2001:Q1 to 2016:Q2 period other than the availability of reliable data, as both Banco de la República de Colombia (2016) and the International Monetary Fund (2016) offer easy access to these time series. Nevertheless, what coincidentally makes the selected period interesting is that it includes both low and high oil price sub-periods. For instance, the crude price went from 28.7 USD in 2001:Q1 to a peak of 123.9 USD in 2008:Q2, and some years later it had declined by around 50% by the end of 2014. That oil price roller coaster was accompanied by internal developments and reforms in Colombia that gave oil a more important role in the national macroeconomy; as seen in Figures 1 and 3 (previous section), oil production and foreign direct investment flows in the oil sector increased during the selected period, although, unlike other Latin American peers, oil revenues as a share (%) of GDP remained low and relatively stable.
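For reference, a stationarity check of this kind can be sketched in Python as follows, reusing the hypothetical data frame from the earlier sketch; statsmodels' nlags="auto" option implements an automatic, data-dependent lag choice in the spirit of the Newey-West-type procedure used here.

    from statsmodels.tsa.stattools import kpss

    # Null hypothesis of the KPSS test: the series is level-stationary.
    stat, pvalue, lags, crit = kpss(data["OIL"], regression="c", nlags="auto")
    print(f"KPSS stat={stat:.3f}, p={pvalue:.3f}, lags={lags}")
    # p > 0.05: the stationarity null is not rejected.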
RESULTS
The first part of this section addresses the evaluation process of a preliminary VAR estimation, while the second one covers the final SVAR results.
Preliminary VAR results
Given the frequency of the data and following the Akaike and Hannan-Quinn information criteria for lag order selection, a VAR of order 4 was estimated. The following checks were performed: a) the absence of serial correlation was confirmed by means of the Lagrange multiplier test; b) the model satisfied the stability condition, as all the roots of the companion matrix were inside the unit circle, i.e., less than one in modulus; c) the multivariate version of the Jarque-Bera test suggested normality of the residuals, although Lütkepohl (2011) explains that normality is not a sine qua non condition for the validity of statistical procedures related to VARs.
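These diagnostics map onto standard library calls; a minimal sketch continuing the hypothetical statsmodels example follows. Note one substitution: statsmodels exposes a Portmanteau whiteness test rather than the Lagrange multiplier test used in the paper, so it stands in for that check here.

    # Lag order selection and the diagnostics described above
    print(model.select_order(maxlags=8).summary())       # AIC / HQIC selection
    results = model.fit(4)
    print("Stable:", results.is_stable())                # roots inside the unit circle
    print(results.test_whiteness(nlags=10).summary())    # residual autocorrelation (Portmanteau)
    print(results.test_normality().summary())            # multivariate Jarque-Bera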
SVAR results
After testing and accepting the preliminary VAR model, it is possible to proceed with the underlying structural estimation, which follows the identification strategy explained previously. The estimated  matrix was obtained (Eq. (3)), together with the corresponding estimated contemporaneous impact matrix (Eq. (4)). Given the estimated matrix Â, the SVAR coefficients a_21 and a_41 (the contemporaneous effects of oil price changes on GDP growth and inflation, respectively) were negative, and the coefficient a_31 (the effect of oil price changes on unemployment) was positive. However, these coefficients were not statistically significant at conventional levels.
To better observe the effects of structural shocks across time, impulse-response functions are often more informative than the estimated structural parameters themselves (Breitung, Brüggemann & Lütkepohl, 2004). Both structural and cumulative orthogonalized impulse-response functions were estimated with a time horizon of 10 quarters ahead in order to visually inspect the effect of the ε_t^OIL-shock on the macroeconomic variables.
According to the structural impulse-response functions in Figure 8, following an oil price shock, the GDP growth rate declines immediately but increases after one quarter, and the positive effect reaches its peak around quarter two; after that, the effect starts to decay. The responses of the unemployment rate and the inflation rate are quite similar: following an oil price shock, both variables decline up to quarter three and then increase to a maximum around quarter four; the effects are time-varying and do not decay even after ten quarters.
Cumulative orthogonalized impulse-response functions are shown in Figure 9. The accumulated response of GDP growth reaches its maximum around quarter one and thereafter starts to decay. Once again, the responses of the unemployment rate and inflation are quite similar: the accumulated responses of these two variables remain close to zero, although slightly negative, even after ten quarters. Regarding the accounting of innovations, the structural forecast error variance decomposition was estimated. Note that in this case the forecast errors are decomposed not into contributions of the different variables (as in a regular forecast error variance decomposition) but into contributions of the structural innovations. Table 4 shows that ε_t^OIL-shock innovations do not explain a significant share of the structural forecast error variance of GDP_t, UNEMP_t, or CPI_t. In general, each variable's structural forecast error is mostly driven by its own structural shocks.
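A variance decomposition of this kind can likewise be sketched with statsmodels, again continuing the hypothetical example; under the recursive Cholesky ordering this corresponds to the structural decomposition.

    fevd = results.fevd(10)  # 10-quarter forecast error variance decomposition
    print(fevd.summary())    # share of each variable's forecast error variance
                             # attributable to each orthogonalized shock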
CONCLUSIONS
This analysis examined the effect of world oil price shocks on the Colombian economy. The identification assumptions were motivated by the work of Blanchard and Galí (2010) and are consistent with the well-known fact that Colombia is a small open economy; therefore, internal economic fluctuations are unlikely to generate an effect on the global economy or on the price of oil.
A priori, given the importance of the oil sector in Colombia and the relevance of the country in terms of crude production, oil price increases were expected to produce an increase in GDP growth, a decline in the unemployment rate, and a slight decrease in the inflation rate. The estimated effects were consistent with these expectations.
Figure 1. Oil Production and Consumption in Colombia.
Figure 3. Foreign Direct Investment Flows in the Colombian Oil Sector.
Figure 4. Oil Revenues as % of GDP in Colombia, Brazil and Venezuela.
Figure 5. Oil rigs in Colombia, Brazil and Venezuela.
Figure 7. Stability condition related to the VAR.
Table 2. Serial correlation test related to the VAR. Null hypothesis: no autocorrelation at lag order. Source: Author's elaboration.
Table 3. Normality test related to the VAR. Null hypothesis: skewness and excess kurtosis are zero, i.e., residuals follow a normal distribution. Source: Author's elaboration.
Figure 8. Structural impulse-response functions (SIRF) to an Oil Price Shock. Source: Author's elaboration.
Table 4. Structural forecast error variance decomposition (SFEVD) based on the identification scheme. Source: Author's elaboration. | 2019-05-19T13:04:37.516Z | 2016-12-02T00:00:00.000 | {
"year": 2016,
"sha1": "8e9e00c76de2b94d035bbd4319c79948efcaca64",
"oa_license": "CCBY",
"oa_url": "https://publicaciones.eafit.edu.co/index.php/ecos-economia/article/download/4181/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "6ee133d9f76a37742b03ba0b93d6fea84ec98929",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
270939139 | pes2o/s2orc | v3-fos-license | Battle of the Blocks: Which Pain Management Technique Triumphs in Gender-Affirming Bilateral Mastectomies?
Background Gender-affirming mastectomy, performed on transgender men and non-binary individuals, frequently leads to considerable postoperative pain. This pain can significantly affect both patient satisfaction and the overall recovery process. The study examines the efficacy of four analgesic techniques, the pectoral nerve (PECS) 2 block, erector spinae plane (ESP) block, thoracic wall local anesthesia infiltration (TWI), and systemic multimodal analgesia (SMA), in managing perioperative pain, with special consideration for the effects of chronic testosterone therapy on pain thresholds. Methods A retrospective analysis was conducted on patients aged 18 - 45 who underwent gender-affirming bilateral mastectomies at a New York City community hospital. The study compared intraoperative and post-anesthesia care unit (PACU) opioid consumption, postoperative pain scores, the interval to first rescue analgesia, and total PACU duration among the four analgesic techniques. Results The study found significant differences in intraoperative and PACU opioid consumption across the groups, with the PECS 2 block group showing the lowest opioid requirement. PACU morphine milligram equivalent (MME) consumption was highest in the SMA group. Postoperative pain scores were significantly lower in the PECS and ESP groups at earlier postoperative time points. However, by postoperative day 2, pain scores did not differ significantly among the groups. Chronic testosterone therapy did not significantly impact intraoperative opioid requirements. Conclusion The PECS 2 block is superior in reducing overall opioid consumption and providing effective postoperative pain control in gender-affirming mastectomies. The study underscores the importance of tailoring pain management strategies to the unique physiological responses of the transgender and non-binary community. Future research should focus on prospective designs, standardized block techniques, and the complex relationship between hormonal therapy and pain perception.
Introduction
In the realm of gender-affirming surgical procedures, top surgery, clinically known as chest masculinization surgery, stands as a pivotal intervention for transgender men and non-binary individuals assigned female at birth [1]. This surgical approach, primarily consisting of mastectomy, is a cornerstone in alleviating gender dysphoria and enhancing the concordance between an individual's physical appearance and gender identity, thereby significantly augmenting quality of life [2]. Nevertheless, the perioperative period is frequently accompanied by considerable pain, a factor that can profoundly impact patient outcomes and recovery. Recognizing the necessity for efficacious pain management strategies is paramount in the context of such procedures. Historically, a variety of analgesic techniques have been employed, including the pectoral nerve (PECS) 2 block, erector spinae plane (ESP) block, thoracic wall local anesthesia infiltration (TWI), and systemic multimodal analgesia (SMA), supplemented by additional methods such as the serratus anterior block and intercostal nerve block [3-6]. A critical aspect often overlooked in the analgesic paradigm is the influence of chronic testosterone therapy, which is commonly prescribed for the masculinization of transgender and non-binary individuals. Chronic administration of testosterone is known to modulate pain thresholds, thus potentially altering the analgesic requirements for these patients [7,8]. Considering this, our retrospective comparative analysis aims to elucidate the analgesic efficacy of four perioperative pain management strategies in gender-affirming bilateral mastectomies. Our primary outcome involves a comparison of intraoperative and post-anesthesia care unit (PACU) opioid consumption, quantified in morphine milligram equivalents (MME). Secondary outcomes encompass differences in pain scores between patients receiving chronic testosterone therapy and those not, postoperative pain scores, the interval to first rescue analgesia, and the total duration within the PACU. Through this investigation, we aspire to refine pain management protocols to better serve the transgender and non-binary community by acknowledging and integrating their unique physiological responses to chronic testosterone therapy.
Materials and Methods
This retrospective analysis was conducted upon approval by the Institutional Review Board (IRB committee approval number 22-12-226-182). The study was conducted in compliance with the ethical standards of the responsible institution on human subjects as well as with the Helsinki Declaration. It entailed a comprehensive review of electronic medical records for patients who underwent gender-affirming mastectomies at a community hospital in New York City from October 2021 to October 2022. Inclusion criteria were confined to patients aged 18 - 45 years who underwent gender-affirming bilateral mastectomies under general anesthesia and were managed with one of four perioperative pain strategies: PECS 2 block, ESP block, TWI, or SMA. It is imperative to note that thoracic infiltration refers to administering a local anesthetic solution along the incision lines; either 0.25% bupivacaine or 0.2% ropivacaine, 20 mL on each side, was used for both types of blocks. Additionally, systemic analgesia comprised the use of opioids and NSAIDs, including ketorolac and intravenous acetaminophen. Exclusion criteria encompassed patients with chronic pain disorders or opioid dependency, those who had revision or reconstructive surgeries in addition to the mastectomy, those with allergies or contraindications to any study analgesics, and those with significant comorbidities affecting the safety or efficacy of the pain management strategy. Importantly, as this was a single-center study, all surgical procedures were performed by the same surgeon, ensuring consistency in surgical technique and reducing variability in the surgical component of the study. Patients were presented with the choice of two primary nerve block techniques for their analgesic management: the ESP block and the PECS block. The ESP block was typically performed with the patient in a sitting position prior to the induction of anesthesia, whereas the PECS block was usually administered with the patient in a supine position after anesthesia had been initiated. The choice between these techniques was offered to patients based on their comfort with being conscious during the block procedure and their preference for body positioning. Those who opted for the nerve block while awake and in a sitting position were allocated to the ESP group. Conversely, patients who preferred to be anesthetized and thus unaware of the block procedure were assigned to the PECS block group. Both blocks were performed under ultrasound guidance. This patient-centered approach to the selection of analgesic technique allowed for individualized care but also introduced a potential selection bias into our study, as patients' preferences may have been influenced by factors not accounted for in our analysis, such as previous experiences with anesthesia, anxiety levels, and expectations of postoperative pain and recovery.
Sample size calculation
A power analysis using the effect size of Altiparmak et al [9] indicated that 16 subjects per group were needed to achieve 80% power at a 0.05 alpha level. Our study enrolled 22 per group, summing to 88, surpassing the recommendation and ensuring robust statistical power.
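As an illustration, a sample-size calculation of this kind can be reproduced in Python with statsmodels; the effect size below is a placeholder, not the value reported by Altiparmak et al.

    from statsmodels.stats.power import FTestAnovaPower

    # Solve for the total N of a 4-group one-way ANOVA; effect_size is
    # Cohen's f, and 0.4 here is a placeholder, not the published value.
    n_total = FTestAnovaPower().solve_power(effect_size=0.4, k_groups=4,
                                            alpha=0.05, power=0.80)
    print(f"total N = {n_total:.0f}, per group = {n_total / 4:.1f}")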
Statistical analysis
In this study, we extracted a wide range of data, including patient demographics, surgical specifics, pain management approaches, perioperative pain levels, analgesic use, and various time-related measures. Patients were categorized into four groups based on their pain management technique. Our primary evaluation focused on intraoperative and PACU opioid use, measured in MME, and secondary evaluations included numeric rating scale (NRS) pain scores, timing of initial rescue analgesic, and PACU duration. We used analysis of variance (ANOVA) and Kruskal-Wallis tests for our dual-faceted statistical approach, accommodating both parametric and non-parametric data, with a significance cut-off at P < 0.05. Analyses were performed using SPSS software 27.0.
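A minimal sketch of this dual parametric/non-parametric comparison follows. The study used SPSS; scipy is shown here purely for illustration, and the group values are invented rather than taken from the dataset.

# Toy example of the dual-track group comparison described above.
from scipy import stats

pecs = [2.0, 3.5, 1.0, 4.0, 2.5]   # made-up PACU MME values
esp  = [1.5, 2.0, 3.0, 2.5, 1.0]
sma  = [5.0, 6.5, 4.0, 5.5, 4.5]
twi  = [4.0, 5.0, 4.5, 6.0, 5.5]

f_stat, p_anova = stats.f_oneway(pecs, esp, sma, twi)   # parametric
h_stat, p_kw = stats.kruskal(pecs, esp, sma, twi)       # non-parametric
print(f"ANOVA p = {p_anova:.4f}, Kruskal-Wallis p = {p_kw:.4f}")
# Both are read against the study's significance cut-off of P < 0.05.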
Results
Our retrospective analysis showed that the demographic characteristics - age (mean 26.99 ± 5.588 years) as shown in Table 1, body mass index, and American Society of Anesthesiology physical status - were consistent across the groups, thus allowing meaningful comparison between them. The ANOVA suggests that there was a significant difference in the means of intraoperative MME across the groups (PECS 17.77 ± 8.986 vs. ESP 24.45 ± 11.887 vs. SMA 24.91 ± 8.372 vs. TWI 23.78 ± 7.804, P = 0.044), with the PECS group having the least and the SMA group having the maximum intraoperative MME requirement. However, the post-hoc Tukey honestly significant difference (HSD) test did not show any pairwise group comparison of statistical significance (Fig. 1).
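The post-hoc step can be sketched as follows using scipy.stats.tukey_hsd; the group values are invented for illustration and are not the study data.

# Toy Tukey HSD post-hoc comparison across the four groups.
from scipy import stats

groups = {
    "PECS": [15.0, 20.0, 12.5, 22.0, 18.0],   # illustrative MME values
    "ESP":  [22.5, 27.0, 20.0, 28.0, 24.0],
    "SMA":  [25.0, 30.0, 20.0, 24.0, 26.0],
    "TWI":  [22.0, 26.0, 21.0, 26.5, 23.0],
}
res = stats.tukey_hsd(*groups.values())
names = list(groups)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(f"{names[i]} vs {names[j]}: p = {res.pvalue[i, j]:.3f}")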
The ANOVA suggests that the PACU MME in the various groups was statistically significant (PECS group 2.95 ± 3.078 vs. ESP 2.18 ± 2.889 vs. SMA 5.09 ± 2.372 vs. TWI 4.87 ± 1.842, P < 0.05), with the SMA group having the highest requirement of analgesic in terms of MME in the PACU, and ESP the least. Pairwise group comparison showed a significant difference in PACU MME between PECS vs. SMA, ESP vs. SMA, ESP vs. TWI, and TWI vs. PECS (P < 0.05). There was no significant difference in PACU analgesic requirement between SMA and TWI (Fig. 2).
The study assessed postoperative pain levels at different time intervals (30, 60, 90, 120 min, and day 2) among the four groups: PECS, ESP, SMA, and TWI, as shown in Figure 3. At 30 min post-surgery, the SMA and TWI groups reported the highest NRS pain levels (8.13 and 7.96, respectively), much higher than PECS (4.27) and ESP (4.73). This trend continued at 60 min, where SMA reported an average pain of 5.04 and TWI 4.74, compared to PECS and ESP, which reported significantly lower values. By 90 min, pain levels had started to decline in all groups, with the highest mean pain reported by SMA (2.52). At the 120-min mark, pain levels across all groups were notably reduced, with SMA still reporting the highest mean pain of 1.35. ANOVA results corroborate these findings, with significant group differences observed at 30, 60, and 90 min (P < 0.05). However, by 120 min, the P-value suggests that the group differences are not statistically significant. Tukey HSD post-hoc analysis for pain at 30 and 60 min indicates significant differences between the SMA group and the PECS and ESP groups. At the 90-min mark, the SMA group's pain differs significantly from the PECS group, but other pairwise comparisons are not consistently significant. The non-parametric Kruskal-Wallis test also supports these observations, with significant differences in pain scores across the groups at 30, 60, and 90 min but not at 120 min. In summary, pain levels were significantly different across the groups at the earlier postoperative intervals (30, 60, and 90 min), particularly between SMA and the PECS/ESP groups; these differences diminished and were no longer statistically significant by 120 min post-surgery (Fig. 3).
The PECS group had the highest average time to rescue dose, and this was significantly different from all the other groups. However, no significant differences in time were observed between the ESP, SMA, and TWI groups, using both parametric (ANOVA) and non-parametric (Kruskal-Wallis) tests. Across the four groups, the time spent in the PACU in minutes was not significantly different (Fig. 6), and there was no inter-group significance either. The SMA and TWI groups had similar average times of 151.87 and 144.09 min, respectively. Looking at the data collectively, the same 88 patients were analyzed for pain scores based on their chronic testosterone therapy. Table 2 delineates two cohorts of patients differentiated by their chronic testosterone therapy status, where group A, constituting 54 patients, was on such therapy, and group B, consisting of 34 patients, was not. Group A's mean intraoperative MME was recorded at 18.45 mg, with a confidence interval (CI) spanning 16.0 to 20.9 mg. Group B demonstrated a higher mean MME of 21.54 mg, with a CI extending from 19.0 to 24.1 mg. In assessing the difference in intraoperative MME between the two groups, we applied Welch's t-test. This statistical method does not presume equal variances or sample sizes between the groups. This approach is particularly pertinent in our study, as group A (T therapy) comprised 54 patients while group B (no T therapy) had 34 patients. The Welch's t-test yielded a t-statistic of -1.76 and a P-value of 0.082. Although this P-value suggests a trend toward lower MME usage in group A, it did not reach the conventional threshold for statistical significance (P = 0.082). This finding indicates that, while there is a numerical difference in mean MME between the groups, the evidence is not strong enough to rule out the possibility that this difference is due to random variation rather than the effect of T therapy (Fig. 6).
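For concreteness, Welch's t-test for the two unbalanced cohorts can be run as below; the arrays are simulated stand-ins loosely matched to the reported group sizes and means, not patient data.

# Welch's t-test (unequal variances / unequal n), as used for groups A and B.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(18.45, 9.0, 54)   # T therapy, simulated MME values
group_b = rng.normal(21.54, 7.5, 34)   # no T therapy, simulated MME values

t_stat, p_val = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")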
Discussion
Opioid-sparing anesthesia and opioid-free anesthesia (OFA) favor multimodal analgesia to reduce or avoid opioids during perioperative care [9]. Benefits include less risk of side effects like nausea, vomiting, and respiratory depression, leading to higher patient satisfaction and possibly faster recovery. This approach is crucial amid the opioid crisis [9]. This study finds that the PECS 2 block is superior in reducing overall opioid consumption and providing effective immediate postoperative pain control, which aligns with the philosophy of opioid-sparing anesthesia. The lower PACU MME requirement and shorter PACU stays observed in patients who received the PECS 2 block further support its potential role in an OFA protocol. The PECS 2 block, due to its anatomical specificity, provides targeted analgesia to the pectoral nerves, intercostobrachial nerve, long thoracic, and thoracodorsal nerves, which are essential in the innervation of the breast and axillary regions [10,11]. This specificity starkly contrasts with the ESP block, where the local anesthetic is deposited deep to the erector spinae muscle, necessitating a greater diffusion distance through muscle and fascia to reach the ventral and dorsal rami of the spinal nerves [12,13]. The reduced diffusion distance required for the PECS 2 block means that the anesthetic can more rapidly and directly affect the intended nerves, leading to a faster onset of action and a reduction in the overall need for opioids. The anatomically targeted analgesic precision provided by the PECS block might be the underlying factor contributing to the observed superiority of the PECS 2 block in providing intraoperative analgesia, as indicated by the reduced requirements for opioids both during the surgery and in the PACU as measured by MME consumption. Conversely, the trans-fascial anesthetic diffusion effect associated with the ESP block could explain its comparatively slower onset. Our research demonstrated a notable reduction in intraoperative opioid use, with nerve blocks showing significantly lower MME consumption in the PACU compared to the SMA and TWI groups. The PECS block's anatomically targeted analgesic precision outperforms local anesthetic infiltration, which offers only limited and localized pain relief. The block achieves a more comprehensive blockade of superficial and deeper chest wall structures, leading to superior pain control [10]. Moreover, the PECS block's ability to employ higher volumes and concentrations of anesthetics extends the analgesic effect, enhancing the duration and quality of pain management. This extended analgesia likely contributes to the improved POD 2 pain scores observed with the PECS and ESP blocks relative to SMA and TWI, minimizing the need for additional analgesics in the critical initial recovery period.
Our study found that, at 30, 60, and 90 min post-surgery, the SMA group experienced significantly higher pain levels than the PECS and ESP groups. However, by 120 min, this difference had disappeared. These early benefits are crucial, as adequate pain control in this phase is vital for patient comfort and early mobilization. Interestingly, the significance of these differences diminished over time, possibly due to titration of opioids to pain levels. Higher pain scores and higher opioid consumption in the PACU suggest that SMA might not be the most effective method for managing immediate postoperative pain. All pain management techniques converged in efficacy, as evident from the lower pain scores at 120 min after reaching the PACU, but at the cost of higher opioid consumption in the SMA and TWI groups. Several studies [14][15][16][17][18][19][20][21] have indicated that PECS blocks are particularly effective for chest surgeries. Our results contribute to this body of work by suggesting that both PECS and ESP blocks may be more effective in the immediate postoperative period than SMA or TWI, which is invaluable for gender-affirming mastectomies. Another noteworthy observation was the reduced PACU MME requirement and shorter PACU duration for patients in the PECS group. This is consistent with studies that also observed reduced opioid requirements for PECS and ESP blocks. Lower opioid consumption in the PACU is laudable given the associated risks of respiratory depression, nausea, and pruritus. Additionally, shorter PACU stays indicate faster patient recovery and more efficient utilization of healthcare resources.
In the study, postoperative nausea and vomiting were experienced by six patients in the SMA group and three patients in the TWI group. This phenomenon can be linked to increased consumption of opioids. Meanwhile, four patients who received ESP blocks presented with hypotension in the PACU, a condition that was effectively managed with fluid boluses. The duration of observation for these four patients reached up to 4 h, significantly influencing the average recovery time calculated for the ESP group. The ESP block, when correctly performed, has a low incidence of complications due to the injection site being away from the pleura, major blood vessels, and the spinal cord. However, complications such as hypotension can occur, although it is not a commonly reported adverse effect. The ESP block can potentially lead to a sympathetic blockade, which may cause vasodilation and subsequently hypotension. It is important to note that comprehensive data on complications like hypotension are still limited, and more studies, such as randomized controlled trials, are needed to better understand the safety and complication rates of ESP blocks. The mechanism of action is likely related to the spread of local anesthetic to the nerve roots that affect sympathetic tone, but the exact pathways and extent of spread require further research to fully elucidate [21].
The safety profile of the PECS block, which is typically utilized in breast surgeries, may be favored due to its targeted approach that minimally affects systemic physiological responses such as blood pressure. This could contrast with the ESP block, which, although rarely, may cause hypotension through a sympathetic blockade leading to vasodilation. The use of PECS blocks could therefore offer a dual benefit of reducing the risk of certain complications while delivering effective intraoperative analgesia specific to the surgical site. Results of our research align with studies [11,22] which also favor the PECS block over the ESP block. Further research, like randomized controlled trials, could help clarify these advantages, enhancing our understanding of the relative safety and efficacy of these anesthetic techniques.
Chronic testosterone therapy has been associated with decreased pain levels through a confluence of physiological and neurobiological mechanisms [8]. It potentially enhances the analgesic effects of opioids by modulating opioid receptors and the endogenous opioid system, thereby increasing pain tolerance [23]. Additionally, testosterone exhibits anti-inflammatory properties, reducing pro-inflammatory cytokine production and directly contributing to pain alleviation in inflammatory conditions [24]. Its role in neuroprotection and neural growth further supports the healing of nerve tissues and maintenance of neural pathways critical for pain transmission, suggesting a direct impact on reducing pain perception [25]. Moreover, testosterone therapy may improve mood and psychological well-being, indirectly affecting pain perception by reducing anxiety and depression, which are common in chronic pain sufferers [26,27]. Lastly, its involvement in regulating vascular tone and endothelial function might also contribute to pain reduction, particularly in conditions where vascular dysfunction plays a role [28]. This multifaceted interaction underscores the complex relationship between testosterone and pain, highlighting the need for a nuanced understanding of hormonal therapy's role in pain management. In our study, a total of 88 patients were divided into two groups (groups A and B) according to chronic testosterone therapy status. Despite the substantial theoretical explanations supporting the notion that testosterone modulates pain, our study did not reveal any statistical significance between the groups, suggesting that the relationship between testosterone therapy and pain levels may not be straightforward and is influenced by a multitude of factors.
Our research is subject to several limitations that warrant acknowledgment. The inherent nature of its retrospective design lacks the control and randomization afforded by a prospective study, which may introduce biases such as recall or selection bias. Although the sample size was statistically adequate, it may limit the broader applicability of the findings. Variability in the administration techniques of regional anesthetic blocks by different providers might have led to inconsistent pain management efficacy. The absence of patient-reported outcomes in the study design means that critical data regarding the subjective experience of pain and treatment satisfaction are missing. Moreover, the study did not capture long-term pain outcomes or the development of persistent postsurgical pain, which are significant for assessing surgical recovery. This study also did not consider psychosocial factors that can substantially affect pain perception and recovery. In addition, the division of the entire cohort into groups A and B resulted in an unbalanced sample size, which may undermine the reliability of the significance levels ascertained, notwithstanding the use of Welch's t-test to adjust for unequal sample sizes and variances [29]. Furthermore, as our study exclusively involved individuals assigned female at birth who underwent gender-affirming mastectomy as part of their transition to male, it does not address sex-related differences in outcomes. This limitation is critical to note, as it confines the generalizability of our findings specifically to this demographic without broader implications for sex-based comparative analysis.
Conclusion
Despite the outlined limitations, our study offers meaningful insights into the comparative efficacy of various perioperative pain management strategies for gender-affirming bilateral mastectomies. The findings suggest that the PECS 2 block is a superior analgesic approach, with reduced opioid consumption in the intraoperative and immediate postoperative phase. Additionally, our results indicate that chronic testosterone therapy does not exert a significant influence on intraoperative opioid requirements. This research enhances the current literature [11,12,[14][15][16][17][18][19][20][21] by providing evidence to improve pain management protocols tailored to the transgender and non-binary community, with consideration for their distinct physiological responses to chronic testosterone therapy. To build on the findings of this study, future research should employ a prospective design with standardized regional block techniques. It should include long-term outcomes and patient-reported satisfaction measures. Delving deeper into the complex relationship between hormonal therapy and pain perception remains an intriguing and essential area for further exploration.
Figure 1 .
Figure 1. The mean intraoperative morphine milligram equivalent (MME) scores for each group (PECS, ESP, SMA, TWI) with 95% confidence intervals. Each bar represents the average MME score for a group, visually comparing the anesthesia requirements across the different groups. PECS: pectoral nerve; ESP: erector spinae plane; TWI: thoracic wall local anesthesia infiltration; SMA: systemic multimodal analgesia.
Figure 3 .
Figure 3. The mean pain scores at different time intervals (30, 60, 90, and 120 min) for each group (PECS, ESP, SMA, TWI) along with their 95% confidence intervals, allowing a direct comparison of how pain scores evolve for each treatment group. PECS: pectoral nerve; ESP: erector spinae plane; TWI: thoracic wall local anesthesia infiltration; SMA: systemic multimodal analgesia.
Figure 2 .
Figure 2. Bar graph representing the post-anesthesia care unit (PACU) morphine milligram equivalent (MME) for each group, complete with a legend indicating the mean PACU MME with standard error of the mean (SEM).
Figure 4 .
Figure 4. NRS pain scores on postoperative day 2 (POD 2) for the four groups. NRS: numeric rating scale.
Figure 5 .
Figure 5. The mean interval between arrival to the post-anesthesia care unit (PACU) and administration of the first rescue dose, with error bars representing the standard deviation for each group.
Figure 6 .
Figure 6. The average time spent in the post-anesthesia care unit (PACU) by group, with error bars representing the standard deviation for each group.
Table 2 .
Mean Intraoperative MME Scores for Both Group A (T Therapy) and Group B (No T Therapy) With 95% CIs

However, it is noteworthy that despite the higher intraoperative opioid use, this group demonstrated better analgesia in the PACU, which is evidenced by a lesser need for opioids in terms of PACU MME requirements. | 2024-07-04T15:10:14.864Z | 2024-06-01T00:00:00.000 | {
"year": 2024,
"sha1": "0decde4257af19543f4fc08b1956ff91192d7119",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.14740/jocmr5159",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "03793ef8c7b2a0c24931282071aef3e4257aacd7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
231943887 | pes2o/s2orc | v3-fos-license | Novel agents for atopic dermatitis in patients over 50 years of age: A case series
Lan et al recently highlighted the under-representation of older adults in clinical trials of systemic therapies for atopic dermatitis (AD). Late-onset AD is increasingly recognized in older adults. Spontaneous remission is uncommon with this phenotype. Existing drug treatments such as corticosteroids, methotrexate, ciclosporin, and azathioprine are complicated by adverse effects including increased malignancy risk, immunosuppression in the context of immunosenescence, and drug interactions in the setting of poly-pharmacy. A case series is presented of seven patients over 50 years of age with AD who were prescribed dupilumab or tofacitinib or upadacitinib for at least 6 months. All patients were clear or almost clear (investigator global assessment score 0/1) after 1 month of therapy. No significant adverse events were seen. This case series provides preliminary evidence about the safety and efficacy of these novel drugs for AD in older adults. Further studies with higher numbers of participants are needed to obtain real-world evidence for these drugs in older adults, given the limited data in clinical trials.
Lan et al recently highlighted the under-representation of older adults in clinical trials of systemic therapies for atopic dermatitis (AD). 1 Late-onset AD is increasingly recognized in older adults. 2 Spontaneous remission is uncommon with this phenotype. 3 Existing drug treatments such as corticosteroids, methotrexate, ciclosporin, and azathioprine are complicated by adverse effects including increased malignancy risk, immunosuppression in the context of immunosenescence, and drug interactions in the setting of polypharmacy. 4 We performed a single-center retrospective chart review of all patients over 50 years of age with AD who were prescribed dupilumab or tofacitinib or upadacitinib for at least 6 months. Patients were followed up 1 month following initiation, and three-monthly thereafter.
Seven patients with AD on dupilumab, tofacitinib, or upadacitinib were identified: three patients over 65 years and four patients between 50 and 64 years. Three patients were male and four were female (Table 1).
Four patients were prescribed dupilumab 300 mg fortnightly, two patients were prescribed tofacitinib 5 mg twice daily, and one was prescribed upadacitinib 15 mg once daily.
Three patients (43%) had a history of skin cancer. Four patients (57%) had a history of asthma. Two patients (29%) had a history of hypertension. One patient had a history of latent tuberculosis.
All patients had previously received phototherapy and at least one systemic medication. Conventional systemic agents (methotrexate, ciclosporin, mycophenolate mofetil, and azathioprine) had been ineffective for all patients. Ciclosporin had been stopped in two patients due to treatment-resistant hypertension. Conventional immunosuppressive agents were contraindicated in three patients due to previous skin cancer. Upadacitinib had been stopped in one patient, who had been part of a clinical trial, due to melanoma in situ.
Baseline eosinophilia was seen in two patients. Otherwise there were no aberrations in hematological or biochemical parameters, which were repeated every 3 months. Mild conjunctivitis was seen in two patients on dupilumab, which resolved with topical ocular lubricants.
One patient on dupilumab with a history of recurrent herpes simplex virus (HSV) keratitis was prescribed prophylactic valaciclovir, with no recrudescence of keratitis. All patients who were prescribed tofacitinib or upadacitinib were administered the shingles vaccine prior to initiation of therapy. No HSV or shingles infections were seen. Improvements in AD severity were rapid and profound. Six patients (86%) had a baseline investigator global assessment (IGA) score of four, indicating severe AD, and one patient had a baseline IGA score of three, indicating moderate AD. All patients (100%) were clear or almost clear (IGA 0/1) after one month of therapy, which was maintained at follow up visits. One patient declared that he had a "first decent night's sleep in 60 years" after his first dupilumab injection. Other patients noted life-changing improvements, with one patient able to return to work after 15 years of sick leave.
With a globally aging population, AD in older adults is a growing problem and, without intervention, often persists until the end of life.
Topical emollients and corticosteroids are frequently insufficient for disease control. Older patients are at higher risk of adverse drug effects due to declining hepatic and renal function, concomitant disease processes, and polypharmacy. Oral corticosteroids have multiple side effects which are amplified in the older population. Hepatic considerations limit use of methotrexate, and renal considerations and hypertension limit use of ciclosporin. Azathioprine is of concern in a population who may have received extensive ultraviolet radiation via phototherapy, given the increased risk of skin cancer. All these agents plus mycophenolate mofetil should be prescribed with caution in older patients at higher risk of neoplasia or infection.
Dupilumab is an IL-4Rα antagonist which modulates IL-4 and IL-13 signaling. It is highly targeted for treatment of AD and asthma and has an excellent safety profile. Janus kinase inhibitors interfere with JAK-STAT signaling and have a broad immunomodulatory effect. This case series provides preliminary evidence about the safety and efficacy of these novel drugs for AD in older adults. Further studies with higher numbers of participants are needed to obtain real-world evidence for these drugs in older adults, given the limited data in clinical trials.
CONFLICT OF INTEREST
The authors declare no conflicts of interest. | 2021-02-18T06:17:05.753Z | 2021-02-17T00:00:00.000 | {
"year": 2021,
"sha1": "f9276d342703c10753ea745f12064a0720a02810",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/dth.14890",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "aebbc0630700821583db2f7cf507006c7d5e8a5f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
7827505 | pes2o/s2orc | v3-fos-license | Search for atoxic cereals: a single blind, cross-over study on the safety of a single dose of Triticum monococcum, in patients with celiac disease
Background Cereals of baking quality with absent or reduced toxicity are actively sought as alternative therapy to a gluten-free diet (GFD) for patients with coeliac disease (CD). Triticum monococcum, an ancient wheat, is a potential candidate having no toxicity in in-vitro and ex-vivo studies. The aim of our study was to investigate on the safety of administration of a single dose of gluten of Tm in patients with CD on GFD. Methods We performed a single blind, cross-over study involving 12 CD patients who had been on a GFD for at least 12 months, challenged on day 0, 14 and 28 with a single fixed dose of 2.5 grams of the following (random order): Tm, rice (as reference atoxic protein) and Amygluten (as reference toxic protein) dispersed in a gluten-free pudding. The primary end-point of the study was the change in intestinal permeability, as assessed by changes in the urinary lactulose/rhamnose ratio (L/R ratio) measured by High Pressure Liquid Chromatography. We also assessed the occurrence of adverse gastrointestinal events, graded for intensity and duration according to the WHO scale. Variables were expressed as mean ± SD; paired t-test and χ2 test were used as appropriate. Results The urinary L/R ratio did not change significantly upon challenge with the 3 cereals, and was 0.055 ± 0.026 for Tm Vs 0.058 ± 0.035 for rice (p = 0.6736) and Vs 0.063 ± 0.054 with Amygluten (p = 0.6071). Adverse gastrointestinal events were 8 for Tm, Vs 11 for rice (p = 0.6321) and Vs 31 for Amygluten p = 0.0016), and, in all cases events were graded as “mild” or “moderate” with TM and rice, and as “severe” or “disabling” in 4 cases during Amygluten. Conclusions No definite conclusion can be drawn on the safety of Tm, based on no change in urinary L/R because even Amygluten, a toxic wheat protein, did not cause a significant change in urinary L/R indicating low sensitivity of this methodology in studies on acute toxicity. Tm was, however, well tolerated by all patients providing the rationale for further investigation on the safety of this cereal for CD patients. Trial registration EudraCT-AIFA n2008-000697-20
Background
Lifelong adherence to a strict gluten free diet (GFD) is at present the only treatment for patients with celiac disease (CD) [1] to reduce morbidity and mortality. Compliance to GFD is however difficult and affects the quality of life of patients because, besides economic and social factors [2,3], it involves the consumption of poorly palatable unleavened bakery products. This is the reason why alternative strategies are actively sought [4], which include the search for baking quality wheat that does not contain toxic gluten. This strategy takes advantage of the notion that there is natural variation in grain toxicity [5,6], and the old diploid grass-like species of the Triticum genus are potential candidates as grains with reduced or absent toxicity. In particular Triticum monococcum (TM) has been shown to contain a low number of stimulatory epitopes of T-cell lines obtained from small intestinal biopsies of CD patients [6], and to lack the genes encoding the immunodominant 33 mer fragment [5]. Furthermore, the presence of a "protective" peptide similar to the 10-mer peptide (QQPQDAVQPF) of Durum wheat [7] has been detected in Tm [8].
Preliminary in-vitro and ex-vivo studies have provided encouraging results. Absent toxicity has been reported from in vitro studies, where Tm was unable to agglutinate K562(S) cells [9] and had no effect on NO and TGII expression in Caco-2/TC7 cells [10]. Absent toxicity of Tm has also been reported by Pizzuti et al. [11] in an ex vivo study showing no morphological changes in duodenal biopsies cultured with a peptic-tryptic digest of Tm gliadin. Taken together, the studies reported above suggest a favourable safety profile of Tm for CD patients and provide the rationale for testing Tm administration for toxicity in CD patients. It is however noteworthy that, in contrast with previous studies, in vitro toxicity of Tm has been recently reported by Gianfrani et al. [12]; such information was not available when we planned our study.
The aim of our study was to assess the safety of Tm administration by challenging CD patients complying with a GFD with a single 2.5 g protein extract of Tm, compared in random order with an atoxic protein extract of rice and with a toxic gluten, Amygluten. We measured changes in urinary recovery of lactulose and rhamnose (L/R) as an experimental biomarker of intestinal permeability [13].
Methods
We selected 12 consecutive CD patients on GFD for at least one year, at follow-up in our Celiac Clinic, and meeting the following selection criteria: strict compliance with the GFD, absence of symptoms, reconstitution of villous structure and negative tissue transglutaminase (t-TG) and/or anti-endomysial (EMA) antibodies during GFD. Compliance with the GFD was assessed as previously described [14] using a 4 point Likert scale that includes no dietary indiscretions (score 1), 1 serving with gluten per month (score 2), < 4 servings per month (score 3) or = > 4 servings per month (score 4). We also selected 7 CD patients freshly diagnosed with villous atrophy and positive CD related serology and on a gluten containing diet. Twelve asymptomatic healthy subjects selected among the health care professionals in our Institution volunteered in the study as normal controls.
CD patients on GFD entered a single blind cross over study involving challenge with 3 proteins (random order): rice (MyProtein, Cent Ltd, Northwick, UK) as atoxic control, pure gluten (Amygluten, Tereos Syral, Marckolsheim, France) as toxic control, and Tm (Triticum monococcum ssp monococcum, cultivar "Monlis", CRA, Rome, Italy) as investigational protein. Challenge with the different proteins was carried out on 3 separate occasions on days 0, 14 and 28. The primary endpoints were the effect of challenge on the urinary recovery L/R ratio as a measure of intestinal permeability, and the effect on symptoms. Patients were instructed to report any symptom experienced during the challenge, which was graded for severity according to the WHO toxicity grading scale as mild (grade 1), moderate (grade 2), severe (grade 3) and life threatening (grade 4) [15]. We used a 2.5 g protein dose for the challenge, corresponding approximately to one slice of bread and to the dose used by others [16][17][18] for "proof of the concept" challenge studies. The timetable of the study was as follows: fasted patients were admitted to a day-case Unit, asked to empty the urinary bladder, and immediately after they were asked:

- at t = 0, to eat a gluten free pudding (BiAglut VAN, Heinz, Italy) with 2.5 g of cereal protein dispersed in it;
- at t = 2 h, to drink a solution of 5 g lactulose + 1 g rhamnose in 60 ml water;
- from t = 2 to 7 h, to collect urine and to record symptoms.

At t = 7 h urine volume was measured, and a sample was retained, frozen and stored until analysis. Urinary samples were analyzed by HPLC [19] for L and R concentration and for calculation of the L/R ratio in one batch for each patient. The normal urinary L/R for our laboratory, calculated in 40 healthy controls, is 0.045. All analyses were carried out at the Burlo-Garofolo Paediatric Hospital in Trieste (Italy) under the supervision of one of the authors (T.N.).
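For illustration only, the L/R ratio can be computed as the ratio of fractional urinary recoveries of the two ingested sugars (one common reading of this protocol); the HPLC concentrations and urine volume below are invented values, not the study's measurements.

# Sketch of the L/R ratio computation implied by the protocol above.
dose_lactulose_g, dose_rhamnose_g = 5.0, 1.0   # ingested doses
urine_volume_l = 0.4                           # 5-h collection (made up)
conc_lactulose_g_l = 0.15                      # hypothetical HPLC readings
conc_rhamnose_g_l = 0.60

recovery_l = conc_lactulose_g_l * urine_volume_l / dose_lactulose_g
recovery_r = conc_rhamnose_g_l * urine_volume_l / dose_rhamnose_g
print(f"L/R ratio = {recovery_l / recovery_r:.3f}")   # -> 0.050 here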
The same protocol used for CD patients on GFD challenged with cereal proteins was also adopted, with the exclusion of protein challenge, to measure intestinal permeability in 7 CD patients on gluten containing diet and in 12 healthy controls. Five CD patients on GFD and 5 healthy controls were also studied with the same protocol on 2 occasions, one day apart, to test for reproducibility of results.
Results were expressed as mean ± SD. Paired or unpaired t-test and χ 2 test were used as appropriate to compare continuous and categorical variables. The statistical analysis was carried out using GraphPad Prism 5 statistical package (GraphPad Software, San Diego, Ca, USA). The study was approved by the Ethics Committee of Spedali Civili of Brescia on February 5th, 2008 and was given the number n2008-000697-20 in our national registry of clinical trials (EudraCT-AIFA). Patients and controls signed a written informed consent to the study.
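As a concrete sketch of these tests (the study used GraphPad Prism; scipy is shown here only for illustration, with invented numbers):

# Paired t-test on within-patient L/R ratios, chi-squared on event counts.
from scipy import stats

lr_tm   = [0.05, 0.06, 0.04, 0.07, 0.05]   # illustrative L/R during Tm
lr_rice = [0.06, 0.05, 0.05, 0.06, 0.06]   # illustrative L/R during rice
t_stat, p_paired = stats.ttest_rel(lr_tm, lr_rice)

# Hypothetical 2 x 2 table of subjects with/without adverse events
table = [[8, 4],    # Tm
         [11, 1]]   # rice
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
print(f"paired t-test p = {p_paired:.3f}, chi-squared p = {p_chi2:.3f}")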
Results
Anthropometric and clinical characteristics of the subjects enrolled are reported in Table 1. All 12 CD patients had normal duodenal mucosa, and serology was negative during GFD in all but 1 patient who was t-TG negative and weakly positive at EMA testing. All seven CD patients on gluten containing diet had characteristics similar to those of the patients on GFD, and all had duodenal atrophy and tested positive at serology. Mean age was lower in the 12 healthy subjects than in CD patients, while the M/F ratio was similar.
Validation study
Urinary L/R was higher in CD patients on gluten containing diet (0.078 ± 0.022) than in controls (0.052 ± 0.031, p = 0.0345) and in patients on GFD (0.058 ± 0.034, p = 0.1852). Five control subjects and 5 CD patients entered the reproducibility study. Mean value of L/R was 0.046 ± 0.024 Vs 0.048 ± 0.021 (p = 0.5746) and 0.033 ± 0.016 Vs 0.031 ± 0.018 (p = 0.6228) on day 1 Vs day 2 in CD patients and control subjects, respectively, and the coefficient of variation of measurements was 5.4% and 5.3% (Figure 1).
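The day-1/day-2 coefficient of variation can be computed as below; the paired values are illustrative, not the measured ones.

# Within-subject coefficient of variation (CV) for repeated L/R measurements.
import numpy as np

day1 = np.array([0.046, 0.044, 0.050, 0.043, 0.047])   # made-up values
day2 = np.array([0.048, 0.045, 0.049, 0.046, 0.047])

pairs = np.vstack([day1, day2])
cv = np.std(pairs, axis=0, ddof=1) / np.mean(pairs, axis=0)
print(f"mean within-subject CV = {100 * cv.mean():.1f}%")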
Urinary recovery of R ranged between 92% and 98% in urinary samples of the 31 subjects cumulatively entering the study.
Cereal challenge
Results of urinary L/R in the 12 CD patients on GFD challenged with 3 cereals are shown in Figure 2. There was no consistent trend for L/R ratio to change during acute challenge with Tm, Amygluten or rice, and mean values were 0.055 ± 0.03 for Tm Vs 0.058 ± 0.035 for rice (p = 0.6736) and Vs 0.063 ± 0.02 for Amygluten (p = 0.6071).
The effect of cereal challenge on symptoms is shown in Table 2. Eleven and 8 adverse gastrointestinal events were reported during challenge with rice and with Tm (p = 0.6321), respectively, and 31 events were reported during Amygluten, a significantly (p = 0.0016) higher value than for the other 2 cereals. Severity of adverse events was graded as mild or moderate with rice and Tm, and as "severe" or "disabling" in 4 cases during Amygluten.
Discussion
The main objective of our study was to assess the effect of challenge with Tm in CD patients on GFD using the urinary L/R ratio as a method to measure changes in intestinal permeability, in order to test the in vivo safety and toxicity of a single low dose of Tm. Our results show that the urinary L/R ratio was unchanged during Tm challenge in comparison with the results obtained with rice, an atoxic cereal for CD patients. This potentially interesting observation is however of limited interest because, as with Tm, even challenge with the toxic reference protein Amygluten caused no significant change in the urinary L/R ratio relative to that measured during rice challenge. The reason for this lack of effect of Amygluten is uncertain. Our preliminary validation studies, indicating high reproducibility of results for urinary L/R recovery both in celiacs and in healthy controls, and the ability of the test to discriminate healthy controls from CD patients on gluten containing diet, support the validity of the methodology used for measurements. On the other hand, the lack of effect observed during challenge with toxic Amygluten indicates inadequacy of the experimental conditions for testing the working hypothesis. The most likely explanation is that the protein dose used for challenge, 2.5 g as a single dose, may be too low to cause the alteration of intestinal permeability that was reported by Greco et al. [13], which occurred using a 50 g protein challenge. Alternatively, the timing of urinary collection may be inadequate for detecting changes in the L/R ratio. Whatever the case, the methodology we used was clearly not sensitive enough to achieve the aims of our study.
Though results on urinary L/R ratio were disappointing, results on symptoms reported by patients during the challenge provided a clear-cut response, indicating that a single low dose of Tm is well tolerated by CD patients. Symptom incidence with Tm was similar to that observed during challenge with rice, the atoxic cereal, and symptoms were in all cases mild. In contrast, incidence of symptoms was 3 times higher during Amygluten than during Tm and rice challenge, indicating that the dose used for challenge was large enough to cause symptoms in case of toxicity. This clinical finding is in keeping with previous in vitro and ex-vivo observations suggesting no toxicity of Tm for CD patients, although we are well aware that our finding on symptoms cannot be taken as evidence of lack of toxicity of Tm.
Conclusions
In conclusion, our study indicates that a protocol involving short-term challenge with a single low dose of cereal protein, using urinary L/R recovery, is not sensitive enough to discriminate the effect of toxic and atoxic cereals on intestinal permeability to sugars. As a consequence, no conclusion can be drawn on the safety of acute administration of Tm in CD patients. However, the lack of side effects reported by patients during challenge with Tm encourages further exploration of the characteristics of this cereal as a potentially harmless wheat for CD patients, or as a cereal that may be tolerable for patients who are not celiac but do not tolerate wheat-based products because of gluten sensitivity.
"year": 2013,
"sha1": "39bc22c850efa9587f5f4191542dd935029440d4",
"oa_license": "CCBY",
"oa_url": "https://bmcgastroenterol.biomedcentral.com/track/pdf/10.1186/1471-230X-13-92",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0cfe88f830cb83cd91d1ea1395d5fec955568417",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256870424 | pes2o/s2orc | v3-fos-license | The truth of the matter: will immune-tolerant chronic hepatitis B patients benefit from antiviral treatment?
Immune-tolerant (IT) phase is the first stage of the natural history of chronic hepatitis B (CHB) infection, also known as HBeAg seropositivity with high HBV DNA level (typically > 7-8 log10 IU/mL) and normal ALT [below the upper limit of normal (ULN)] on more than 1 occasion over a 6-12-month period. This phase usually persists for 2-3 decades in CHB patients infected perinatally, followed by transition to the immune-active (IA) phase with increased ALT level (> 2× ULN) along with decreasing HBV DNA level, followed by HBeAg seroconversion. Since HBeAg seroconversion is usually followed by inactive infection and its occurrence at an earlier age leads to excellent prognosis, [1] HBeAg seroconversion is one of the major endpoints for HBeAg-positive patients receiving antiviral therapy. Current guidelines recommend that patients in the IT phase can be monitored and treatment initiated only if there is evidence of significant inflammation and/or fibrosis, persistence of the IT phase after the age of 30 or 40 years, or a family history of HCC. [2][3][4] Whether all CHB patients, including those in the IT or indeterminate phase, should be treated has been hotly debated in recent years.
The argument for expanding treatment indication to IT patients is based on 4 reasons: first, the REVEAL cohort reported a biological gradient of HCC development based on HBV DNA level: the higher the HBV DNA level, the higher the risk of HCC incidence. [5] Second, inflammation and fibrosis may exist in patients with normal ALT levels. In a recent meta-analysis of 9377 CHB patients, significant fibrosis/advanced fibrosis was found in 22.3% of IT patients though none had cirrhosis. [6] Third, HBV integration and clonal hepatocyte expansion could be seen in the IT phase, which may contribute to carcinogenesis, [7] and antiviral treatment may decrease the integrated HBV DNA. Last, HBV-specific T cells of IT patients could still proliferate and secrete Th1 cytokine by means of in vitro expansion as in IA patients, which challenges the classic definition of immune tolerance. [8] To date, there is no direct evidence based on randomized clinical trials that antiviral therapy improves clinically important outcomes such as mortality, end-stage liver disease, and HCC. Ex vivo HBV-specific T cell immune control has not been confirmed in IT patients. [9,10] IT patients generally have favorable outcomes with no/minimal risk of cirrhosis or HCC development after 5 to 10 years of follow-up. [11][12][13] However, several studies found that the HCC risk of untreated IT patients is not inconsequential and can be as high as that in immune-active patients.
Such conflicting findings are likely due to inadvertent misclassification of the IT phase, as some studies relied on only 1 or 2 HBV DNA or ALT assessments and may have included patients with unrecognized phase transition or immune-active CHB with fluctuating ALT levels. [14] Another major source of confusion is the cutoff used for the ULN for ALT, which may differ based on the analyzer/reagent used in the assay and the reference population. [15,16] Many studies use 40 U/L as the cutoff regardless of sex, though women's ULN for ALT is lower than that of men, and studies in blood donors and living liver donors who have undergone extensive screening for liver disease showed that normal ALT is lower, around 19-25 U/L for women and 30-35 U/L for men. Other studies have not explicitly excluded patients with cirrhosis, who should receive treatment if HBV DNA is detected regardless of ALT level, or patients with moderate/advanced fibrosis and fluctuating or mildly elevated (1-2× ULN) ALT. Thus, in Mason's study showing HBV DNA integration and clonal proliferation in IT patients, 4 had ALT of ~40 U/L, 2 had stage 2 fibrosis, 1 had stage 3 fibrosis, and 1 had a histologic activity score of 4. [7] When IT patients remain untreated, HBeAg seroconversion followed by the inactive phase can occur spontaneously or when patients enter the IA phase, and treatment is offered to those who fail to achieve HBeAg seroconversion after 3-6 months of observation. [10] Although older IT patients had similar characteristics and rates of transition to the IA phase as younger IT patients, [17] it is important to assess liver fibrosis and necroinflammation in HBeAg-positive patients who remain in the IT phase after age 35 or 40, as prolonged periods of high-level HBV replication may increase their risk of HCC. [14] Indeed, EASL, APASL, and AASLD guidelines recommend antiviral treatment of IT patients above the age of 30, 35, and 40 years, respectively.
Treating IT patients with nucleos(t)ide analogues and peginterferon alone or in combination leads to a very low rate of HBeAg seroconversion (0% to 5%) and HBsAg seroclearance (0% to 3%) after 1 to 4 years of treatment, and incomplete HBV DNA suppression (0% to 23%), whereas a high rate of virologic and clinical relapse occurs after stopping treatment. [9,[18][19][20] These observations support the lack of evidence for the benefit of treating patients in the IT phase unless novel, potent therapeutics that can achieve a high rate of functional cure of HBV are available. [21] IT patients should be monitored to determine when they transition to the immune-active phase and when treatment should be initiated.
The meta-analysis by Lee et al [22] argues that the risk of HCC and liver complications in patients who are truly in the IT phase and who do not have cirrhosis is low, and there is no evidence that treatment would be beneficial. This is contradictory to many Korean studies that found a high risk of HCC in untreated IT patients. The strength of this meta-analysis, which included 11,903 patients from 13 studies, is based on its stringent recruitment criteria: first, only studies with at least 2 HBV DNA and ALT tests during 6 to 12 months of observation to define the IT phase were recruited; second, CHB patients with cirrhosis were excluded even if HBV DNA and ALT met the criteria for the IT phase. The patients included were in line with the classic IT phenotype with a normal ALT and a high HBV DNA level [median (range): 8.1 (6.9-9.8) log10 IU/mL]. The major discrepancy between this meta-analysis and Kim's [23] study (which included 413 patients and found that the risk of HCC in untreated IT patients was higher than that of treated IA patients) is the potential misclassification of IA patients as IT in Kim's study, in which 26% of their IT patients had an HBV DNA level < 7 log10 IU/mL and 25% had a lower platelet count. The mean age of Kim's IT patients was similar to that of the IA patients (38 vs. 40 y), and many exceeded the age threshold for treatment according to EASL, APASL, and AASLD guidelines. Furthermore, the subset of typical IT patients (HBV DNA > 8 log10 IU/mL and age < 30 y) had a minimal risk of HCC. [23] This meta-analysis has certain limitations: first, the studies included in this meta-analysis were retrospective in design. Second, HBeAg-positive patients with significant/advanced fibrosis but not cirrhosis might have been included, since neither histologic nor noninvasive assessment of fibrosis was available in those studies. Third, the studies included some IT patients older than 30 or 40 who are recommended to undergo antiviral therapy even in the absence of significant inflammation and/or fibrosis. Fourth, the studies were heterogeneous, but the authors dismissed that concern even though visual inspection of their forest plot showed that heterogeneity might impact the conclusions.
To settle this hot topic of whether IT CHB patients should receive antiviral therapy, an adequately powered randomized controlled trial should ideally be conducted, but this will require a large number (hundreds or thousands) of patients to be followed for ≥ 10 years, making it unethical and unfeasible. Instead, going forward, researchers on this topic should strive to use stringent criteria for the IT phase: HBeAg seropositivity, HBV DNA > 7 log10 IU/mL, and normal ALT using sex-specific cutoffs, confirmed on 1 to 2 follow-up tests over 6 to 12 months; exclude patients with cirrhosis; and stratify outcomes by age at enrollment, for example, ≤ 30, 31-40, and > 40 years. With the increasing global prevalence of fatty liver, histologic or noninvasive assessment of hepatic steatosis and fibrosis may be necessary to differentiate CHB patients with mildly elevated ALT due to HBV or fatty liver, as antiviral therapy alone may not benefit the latter patients. In the absence of more concrete evidence to support a benefit in treating true IT patients, current guidelines for monitoring IT patients remain valid until novel therapies with a high rate of functional cure become available.
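These stringent criteria translate directly into a simple screening rule; the sketch below is one possible encoding, and the sex-specific ALT cut-offs (25 U/L for women, 35 U/L for men) are taken from the upper ends of the ranges quoted above, not from any guideline.

# One possible encoding of the stringent IT-phase criteria proposed above.
def is_immune_tolerant(hbeag_positive: bool, hbv_dna_log10: float,
                       alt_values: list, sex: str, cirrhosis: bool) -> bool:
    """Return True if a patient meets the stringent IT-phase definition."""
    alt_uln = 25.0 if sex == "F" else 35.0   # assumed sex-specific cut-offs
    return (hbeag_positive
            and not cirrhosis
            and hbv_dna_log10 > 7.0
            and len(alt_values) >= 2          # confirmed on follow-up tests
            and all(alt < alt_uln for alt in alt_values))

# Example: HBeAg-positive woman, HBV DNA 8.1 log10 IU/mL, ALT 22 and 24 U/L
print(is_immune_tolerant(True, 8.1, [22.0, 24.0], "F", cirrhosis=False))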
AUTHOR CONTRIBUTIONS
Both authors were responsible for the drafting, the critical revision of the manuscript for important intellectual content, and approval of the final version of the article.
CONFLICT OF INTEREST
Rachel Jeng has served as a speaker for Bristol-Myers Squibb and Gilead Sciences. Grace Wong has served as an advisory committee member for Gilead Sciences and Janssen, and as a speaker for Abbott, AbbVie, Ascletis, Bristol-Myers Squibb, Echosens, Gilead Sciences, Janssen, and Roche. She has also received a research grant from Gilead Sciences. | 2023-02-16T06:16:20.661Z | 2023-02-14T00:00:00.000 | {
"year": 2023,
"sha1": "169f6aaff506f983873c04beee49062d164f7d7f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1097/hc9.0000000000000060",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "86985c4fb8185a4763ebad6611037e27eb752098",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18015427 | pes2o/s2orc | v3-fos-license | Dye-Sensitized Solar Cells Based on the Principles and Materials of Photosynthesis: Mechanisms of Suppression and Enhancement of Photocurrent and Conversion Efficiency
Attempts have been made to develop dye-sensitized solar cells based on the principles and materials of photosynthesis: We first tested photosynthetic pigments, carotenoids (Cars), chlorophylls (Chls) and their derivatives, to find sensitizers showing reasonable performance (photocurrent and conversion efficiency). We then tried to introduce the principles of photosynthesis, including electron transfer and energy transfer from Car to Phe a. Also, we tried co-sensitization using the pheophorbide (Phe) a and Chl c2 pair which further enhanced the performance of the component sensitizers as follows: Jsc = 9.0 + 13.8 → 14.0 mA cm−2 and η = 3.4 + 4.6 → 5.4%.
Introduction
Bacterial photosynthesis has been studied extensively: the structures of pigment-protein complexes were determined by X-ray crystallography, and the excited-state dynamics of photosynthetic pigments, i.e., carotenoids (Cars) and bacteriochlorophylls (BChls), by time-resolved laser spectroscopies in relation to their physiological functions. The goal of the primary processes of photosynthesis is to generate the source of chemical energy, ATP, and the reductant, NADPH. However, the initial process of photosynthesis is to trigger the electron-transfer reaction by the use of harvested light energy. Therefore, the principles and the materials of photosynthesis can be used to fabricate dye-sensitized solar cells (DSSCs).
In this mini-review, we will try to reorganize the results of our eight years of investigation, and to present our recent results, as well. At the beginning, we will introduce photosynthetic pigments and the principles of bacterial photosynthesis to those readers who have been studying DSSCs but not familiar with photosynthesis.
Photosynthetic Pigments
Carotenoids. The physiological functions of Cars include light-harvesting and photo-protection. The light-harvesting function of Cars includes the absorption of light energy followed by singlet-energy transfer to BChl, which takes place in antennas including the peripheral LH2 and the central LH1 complexes. One of the photo-protective functions is the quenching of the lowest triplet (T 1 ) BChl, which can sensitize the generation of harmful singlet oxygen. The other photo-protective function is the reduction of the doublet ground-state radical-cation (D 0 •+ ) BChl to prevent its oxidative degradation. Figure 1 presents an energy diagram comparing the singlet-excited states of Cars (1B u +, 3A g -, 1B u -and 2A g -) and those of BChl a (Q x and Q y ). The energies of Car excited states decrease with the number of conjugated double bonds, n, as functions of 1/(2n + 1) [1]. There are two different kinds of BChls in LH2, absorbing at 800 and 850 nm (named 'B800' and 'B850'), while LH1 has only 'B880'. The relative heights of the singlet-energy levels show that the most efficient singlet-energy transfer from Car to BChl can take place in Cars (n = 9 and 10) through three different channels (1B u + → Q x , 1B u -→ Q x and 2A g -→ Q y ).
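The stated 1/(2n + 1) scaling can be made concrete with a small linear fit; the energies below are invented placeholders, not the values determined in [1].

# Fit E(n) = a + b/(2n + 1) to hypothetical 2Ag- state energies (cm^-1).
import numpy as np

n = np.array([9, 10, 11, 12, 13])                        # conjugation lengths
energy = np.array([14200, 13500, 12900, 12400, 12000])   # made-up values

x = 1.0 / (2 * n + 1)
b, a = np.polyfit(x, energy, 1)   # slope b, intercept a
print(f"E(n) ≈ {a:.0f} + {b:.0f}/(2n + 1) cm^-1")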
Thus, Cars can be used in DSSCs to facilitate (i) singlet-energy transfer to BChl, (ii) triplet-energy transfer from T1 BChl, and (iii) electron transfer to D0•+ BChl. In addition, Cars themselves can eject an electron when an electron acceptor is available [3]. Bacteriochlorophylls and Chlorophylls. The physiological functions of BChls include singlet-energy transfer and the ejection and transfer of electrons. The singlet energy that has been transferred from Car to BChl in LH2 through multiple channels can be transferred further to LH1 and then to the reaction center (RC) via the Qy excitation of BChl. When the singlet Qy energy reaches 'the special pair' BChl2 (P), an electron is ejected. The initiation of electron transfer by the use of the Qy energy of P is the most important event in bacterial photosynthesis.
Figure 1. A diagram comparing the energies of the singlet and triplet excited states of Cars with those of BChl a. The energies of the optically-allowed 1Bu+ state and the optically-forbidden 3Ag−, 1Bu− and 2Ag− states were determined by measurement of the resonance-Raman excitation profiles of crystalline mini-β-carotene, spheroidene, lycopene, anhydrorhodovibrin and spirilloxanthin (n = 9~13) [1]. The T1 levels of the Cars (n = 9~11) and of BChl a bound to LH2 antenna complexes were determined by highly sensitive emission spectroscopy [2] ([9]; reproduced by permission of The Royal Society of Chemistry).

Plants and algae use larger pigment-protein complexes, to which a large number of Car and chlorophyll (Chl) molecules are bound in more complicated ways. However, the basic principles of structural organization are similar to those of the pigment-protein complexes in photosynthetic bacteria. The uniqueness of these organisms is that they use different types of Chls, including Chl a, Chl b and Chl c, the structures of which are shown in Figure 2 together with that of BChl a. The three different Chls can be characterized by their location and function as follows: (i) Chl a is the most common plant Chl, taking part in the light-harvesting pigment-protein complexes of higher plants, algae, prochlorophytes and cyanobacteria, and mainly functioning as the primary electron donor in the photosystem (PS) I and II RCs and also as the first electron acceptor in the PS I RC. (ii) Chl b is much less ubiquitous than Chl a. It is only present in the light-harvesting complexes, not closely connected to the RCs, of higher plants, green algae, euglenophytes and prochlorophytes. The main differences of its electronic-absorption spectrum from that of Chl a are the red shift of the Soret absorption and the blue shift of the Qy absorption; the intensity of the latter relative to that of the former is much smaller in Chl b than in Chl a. (iii) Chl c was originally isolated from various marine algae as a mixture of closely-related pigments, Chl c1 and Chl c2. They have the porphyrin macrocycle (in contrast to Chl a and Chl b, which have the chlorin macrocycle) and have acrylic acid (instead of a propionic acid ester) attached to ring D and the carboxyl methyl ester attached to ring E. Chl c exhibits a very strong Soret absorption shifted to lower energy and a pair of very weak Qy absorptions shifted to higher energy (when compared to Chl a). Their main function is to mediate energy transfer from a carotenoid, fucoxanthin, to Chl a [4]. Thus, either Chl b or Chl c can transfer singlet energy to Chl a via the Qy state, and they function as supplementary light-harvesting pigments for Chl a, facilitating the initial electron-transfer reaction in the non-bacterial photosynthetic organisms.
How to Apply the Principles of Photosynthesis to DSSCs
Comparison between dye-sensitized solar cells and the bacterial photosynthetic system. It is worthwhile to compare or contrast a typical Grätzel-type DSSC with the primary processes of bacterial photosynthesis (Figure 3), pictorially summarizing what has been described above: (a) Photo-excitation and electron injection in DSSCs: The assembly and principle of a Grätzel-type DSSC are rather simple: A dye sensitizer is bound, through an anchoring group, to the surface of a semiconductor, e.g., sintered TiO2 nanoparticles, which can tremendously increase the area of the boundary surface. Upon photo-excitation of the sensitizer, an electron is injected into TiO2 (to be transferred to the cathode) and the resultant dye radical cation is neutralized by the I−/I3− redox couple (by transferring an electron from the anode). The dye molecules that are piled up above the first layer can collect the light energy and transfer their singlet excitation (functioning as an antenna) and, at the same time, dissipate the singlet and triplet energies (functioning as a self-quencher). (b) Cascade electron transfer in the bacterial reaction center (RC): After charge separation at the special-pair BChls (P), triggered by photo-excitation, the electron is transferred to accessory BChl (B), bacteriopheophytin (H), quinone A (QA) and eventually to quinone B (QB). The locations, orientations, and one-electron oxidation potentials of the series of electron-transfer components are finely tuned by intermolecular interaction with the apo-peptide(s) and other pigment(s). (c) Cascade energy transfer in the bacterial photosynthetic system: Cars harvest the light energy (in the 500 nm region) as supplementary light harvesters and transfer their singlet energy to BChls through plural channels. Then, the singlet energy of BChl is transferred in the order LH2 → LH1 → RC. In the Car → BChl energy transfer, the optically-allowed 1Bu+ and the optically-forbidden 1Bu− and 2Ag− states of Cars as well as the optically-allowed Qx and Qy states of BChls are involved (see Figure 1), whereas the latter BChl → BChl energy transfer proceeds only through the lowest Qy state of BChl.

Figure 3. (a) Photo-excitation followed by electron injection and electron transfer in a DSSC; (b) photo-excitation of the special-pair BChls (P) followed by cascade electron transfer in the sequence, special pair BChl2 (P) → accessory BChl (B) → bacteriopheophytin (H) → quinone A (QA) → quinone B (QB), in the bacterial reaction center (RC); and (c) photo-excitation of Car to the optically-allowed 1Bu+ state followed by energy transfer to the Qx and Qy levels of BChl, during the internal-conversion processes of the Car singlet states, in the LH2 antenna. Then, the Qy energy of BChl in the LH2 antenna is transferred to the LH1 antenna and eventually to P in the RC.

Strategies to be taken. We know that Cars and Chls (including their derivatives) have the potential of electron injection into TiO2, upon photo-excitation, when they are bound directly to the linear or cyclic π-conjugated systems through the anchoring carboxyl group. We started with Cars as sensitizers, because we have accumulated knowledge concerning their excited-state energetics and dynamics (vide infra). Then, we proceeded to Chl and its derivatives, for which the excited-state energy levels had been described by Gouterman [5,6] and the excited-state dynamics had been studied by other investigators [7].
We first tried to learn the mechanisms by which these photosynthetic pigments can function as sensitizers in DSSCs, by systematically changing the degree of π-conjugation, which determines the excited-state and redox properties; these have turned out to be the key parameters in suppressing or enhancing the photocurrent and conversion efficiency of DSSCs.
We also tried to introduce into the DSSC systems the first steps of the cascade electron transfer and energy transfer. We incorporated sequential co-sensitization, i.e., electron transfer and energy transfer from the Car moiety to the pheophorbide sensitizer, and also parallel co-sensitization by the use of pheophorbide and chlorophyll sensitizers, both having the anchoring carboxyl group.
In this review, we will let the figures illustrate the ideas and the experimental results by themselves, minimizing the length of the explanatory text. We briefly introduce each topic at the beginning and add a brief summary at the end to facilitate the readers' understanding. After Conclusion and Future Perspective, we briefly introduce "Relevant Work by Other Investigators" to help the readers evaluate our contribution.
Polyene Sensitizers
Polyenes are linear conjugated systems from which an electron can be injected into TiO2 when the carboxyl group is directly attached to facilitate binding and electron injection. As a set of sensitizers, we used retinoic acid (RA) and carotenoic acids (CAs) having n = 5~13 double bonds (Figure 4). The dependence of their excited-state energetics and dynamics on the conjugation length (n) has been well documented [8,9]. Their one-electron oxidation potential shifts systematically with n to the negative side (to higher energy) (vide infra). We first examined the conjugation-length (n) dependence of the photocurrent and conversion efficiency (sometimes collectively called 'performance') of solar cells using this set of sensitizers, and tried to explain the results in terms of the excited-state dynamics of RA and CAs free in solution and bound to TiO2 nanoparticles in suspension. The maximum performance was obtained in CA7; the decline of performance toward CA13 was explained by the initial electron-injection efficiency, whereas the decline toward RA5 was partially explained in terms of triplet generation at later stages after excitation.
Secondly, we examined the concentration dependence of the performance of the CA7-sensitized solar cell by diluting the sensitizer with a spacer, deoxycholic acid (DCA). Surprisingly, the performance was enhanced relative to that at 100% by the initial dilution to 70% and even after the later dilution to 30%. The concentration dependence of the IPCE profile and the electronic absorption spectrum suggested changes in the form of singlet excitation of the sensitizer on the TiO2 layer. We suspected that 'singlet-triplet annihilation' due to aggregate formation is the key factor suppressing the photocurrent and conversion efficiency before dilution.
Finally, we prepared a set of four sensitizers having different polarizabilities and, as a result, different tendencies of aggregate formation, and examined changes in the photocurrent and conversion efficiency of the fabricated solar cells as functions of the dye concentration and the light intensity. The most aggregate-forming dye exhibited an enhancement of performance upon lowering the concentration and the light intensity, supporting the idea of singlet-triplet annihilation. The details are described below.
Mechanisms of Electron Injection and Charge Recombination Generating Radical Cation and Triplet Species
Conjugation-length dependence of photocurrent and conversion efficiency of RA- and CA-sensitized solar cells. Figure 5a shows the I-V curves of solar cells using the set of sensitizers [10]. The short-circuit photocurrent density (Jsc) is in the order, RA5 < CA6 < CA7 > CA8 > CA9 > CA11 > CA13, whereas the open-circuit photovoltage (Voc) is in the order, RA5 > CA6 > CA7 > CA8, and CA8, CA9, CA11 and CA13 exhibit similar values.
Presumably, the coverage of the surface of the TiO2 layer should be better organized for the shorter-chain RA5, CA6 and CA7 sensitizers in the complete all-trans configuration; the longer-chain sensitizers tend to form cis isomers as well. The open-circuit photovoltage (Voc) in Figure 5 must reflect this situation. Figures 6a and b present the conjugation-length dependence of the short-circuit current density (Jsc, hereafter called 'photocurrent') and the solar energy-to-electricity conversion efficiency (η, called 'conversion efficiency'). Both photocurrent and conversion efficiency are at their maximum in CA7; they decline toward the shorter chain in the order, CA6 and RA5, and also toward the longer chain in the order, CA8, CA9, CA11 and CA13. The relevant parameters of the solar cells and the one-electron oxidation potentials of the sensitizers are listed in Table 1 in the Supporting Information of Ref. [10]. Figure 6. Conjugation-length (n) dependence of (a) the photocurrent (Jsc) and (b) the conversion efficiency (η) in solar cells using the RA and CA sensitizers, and (c) the electron-injection efficiency (Φ) of the RA and CA sensitizers bound to TiO2 nanoparticles in suspension (reprinted with permission from [11]).

The excited-state dynamics of RA and CAs bound to TiO2 nanoparticles in suspension. To understand the mechanism giving rise to the above dependence of photocurrent and conversion efficiency on n, we examined the excited-state dynamics of the set of sensitizers (except for CA13) bound to TiO2 nanoparticles in suspension by subpicosecond and submicrosecond pump-probe spectroscopy [11]: Figure 7 shows an energy diagram for the π-conjugated chains of RA and CAs with n = 5~13. The linear dependence of the optically-allowed 1Bu+ state, as a function of 1/(2n + 1), was determined by conventional electronic-absorption spectroscopy. The linear dependence of the optically-forbidden 1Bu−, 3Ag− and 2Ag− states was transferred from those of bacterial Cars (n = 9~13) determined by the measurement of resonance-Raman excitation profiles [1] (Figure 1); the energies for CA8~RA5 were extrapolations of the linear relations. According to this state ordering, after excitation to the 1Bu+ state by the absorption of a photon, (i) RA5, CA6, CA7 and CA8 are expected to internally convert in the order, 1Bu+ → 2Ag− → 1Ag− (the ground state), (ii) CA9 and CA10, in the order, 1Bu+ → 1Bu− → 2Ag− → 1Ag−, and (iii) CA11 and CA13, in the order, 1Bu+ → 3Ag− → 2Ag− → 1Ag−. On the basis of the above set of energy levels and internal-conversion processes, we analyzed, by means of singular-value decomposition (SVD) followed by global fitting, the time-resolved data matrices for the set of RA5~CA11 sensitizers free in solution and bound to TiO2 nanoparticles in suspension. In the resultant species-associated difference spectra (Figure 8), transient absorptions are seen in the near-infrared region; their spectral patterns agreed with those of the 1Bu− and 3Ag− states of neurosporene (n = 9) and lycopene (n = 11), respectively [12]. The time-dependent changes in population for CA9 show an extremely rapid 1Bu+ → 1Bu− transformation followed by a slower 1Bu− → 2Ag− transformation, whereas those for CA11 show an extremely rapid 1Bu+ → 3Ag− transformation followed by a slower 3Ag− → 2Ag− transformation.
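The SVD-plus-global-fitting procedure used here can be sketched on synthetic data as follows; the sequential two-state model, the time constants and the spectra are all invented for illustration and are not the measured matrices of Refs. [11,12].

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic wavelength/time grids standing in for a pump-probe data matrix.
t = np.linspace(0.0, 10.0, 200)                    # delay times (ps)
wl = np.linspace(800.0, 1100.0, 120)               # probe wavelengths (nm)

# Sequential two-state model A -> B -> ground (hypothetical time constants).
def populations(t, tau1, tau2):
    pA = np.exp(-t / tau1)
    pB = tau2 / (tau2 - tau1) * (np.exp(-t / tau2) - np.exp(-t / tau1))
    return np.vstack([pA, pB])                     # shape (2, n_t)

specA = np.exp(-0.5 * ((wl - 900.0) / 30.0) ** 2)  # invented SADS shapes
specB = np.exp(-0.5 * ((wl - 1000.0) / 40.0) ** 2)
data = np.vstack([specA, specB]).T @ populations(t, 0.3, 2.0)
data += 0.01 * np.random.default_rng(0).normal(size=data.shape)

# Step 1: SVD gives a crude estimate of the number of significant components.
U, s, Vt = np.linalg.svd(data, full_matrices=False)
n_comp = int(np.sum(s > 10 * s[5]))

# Step 2: global fit of the time constants; the SADS follow by projection.
def residual(taus):
    P = populations(t, *taus)
    sads = data @ np.linalg.pinv(P)                # best linear spectra
    return (sads @ P - data).ravel()

fit = least_squares(residual, x0=[0.5, 3.0])
print("significant components:", n_comp, " fitted taus (ps):", fit.x)
```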
The results for RA5~CA11 bound to TiO2 nanoparticles in suspension are also shown in Figure 8 (the second and fourth panels): The singlet excited states generated by the photo-excitation of the sensitizers bound to TiO2 were basically the same as those generated free in solution. The most conspicuous difference in the excited-state dynamics in the bound state is that the transient absorptions of the triplet (T1) and radical-cation (D0•+) states appear immediately after electron injection. The former transient absorptions agree in energy with those of the T1 states obtained by anthracene-sensitized photo-excitation, whereas the latter agree with the stationary-state absorptions of the radical cation obtained electrochemically (see the spectral lines shown in the second panels). The generation of the apparent D0•+ + T1 state immediately after electron injection points to charge recombination, as discussed below. Figure 9 presents the internal-conversion and electron-injection pathways and the relevant time constants for the free and bound states. Table 1 lists the electron-injection efficiencies (in %) through the 1Bu+ and 2Ag− channels, and their sum, for the set of RA and CAs, calculated by the use of those time constants. The conjugation-length dependence of the total electron-injection efficiency (Φ) is depicted in Figure 6c. The highest efficiency in CA7 (almost unity) and the decline toward CA11 can be explained nicely in terms of the electron-injection efficiency. The results definitely indicate that the decline toward the longer chain, i.e., CA7 > CA8 > CA9 > CA11, reflects the intrinsic excited-state dynamics of the Car conjugated chain. However, the decline toward CA6 and RA5 is left unexplained. Table 2 shows that the one-electron oxidation potential systematically lowers with n, a trend which predicts an electron-injection efficiency monotonically increasing with n all the way from n = 5 to 11, contrary to the observation.

We applied submicrosecond pump-probe spectroscopy to examine the later stages after excitation. Figure 10 shows the results of the SVD and global-fitting analysis of the submicrosecond time-resolved data matrices for the four shorter-chain RA and CAs. Here, a relaxation mechanism including the splitting of a combined D0•+ + T1 state into a pair of independent D0•+ and T1 states has been nicely established. The first SADS (upper panels) show that the T1/D0•+ population ratio in the combined D0•+ + T1 state increases toward RA5. Consistently, the time-dependent changes in population (lower panels) show that the ratio of the split T1/D0•+ species also increases toward RA5. Table 3 lists the quantum yields of the D0•+ and T1 species (φD and φT) calculated by the use of the relevant time constants. The efficiency of electron injection (φD) gradually declines toward RA5. This trend partially resolves the above-mentioned contradiction in the dependence on n shown in Figure 6, i.e., (a) and (b) vs. (c). Finally, we propose the mechanisms of charge separation and charge recombination, which generate the radical-cation and triplet species of RA and CAs on the surface of TiO2 nanoparticles: Figure 11 presents the energies of the singlet, triplet and redox states of RA5 and CA6~CA11 in reference to the conduction-band edge (CBE) of TiO2.
Importantly, the energy gap between the CBE and the T1 levels is the smallest in RA5 and systematically increases toward CA11, which explains the decreasing order of the triplet generation mentioned above. Figure 12 proposes the excited-state dynamics of a typical CA bound to TiO2: (i) Process 0 → 1: Upon absorption of a photon, an electron is promoted to a higher singlet level (S1). (ii) Process 1 → ¹2: Electron injection takes place to generate a charge-separated state having a singlet character at the boundary. (iii) Process ¹2 → 6: The electron is transferred further into TiO2 to form a stable charge-separated state. (iv) Process 6 → 0: The reverse electron transfer followed by charge recombination takes place to relax into the ground state. This is a series of changes among the singlet-excited and redox states having a singlet character. Now, we consider the generation of the triplet-excited and radical-cation states, both having a triplet character: (v) Process ¹2 → ³3: When there is a strong spin-orbit coupling in the charge-separated state having the singlet character, it can transform, by the inversion of spin, into the charge-separated state having a triplet character. When the energy gap between the CBE and the T1 levels is small, the resultant charge-separated state can transform further into a charge-transfer complex. (In Figure 11, the redox (S0/D0•+) levels were drawn based on the one-electron oxidation potentials listed in Table 2, and the excited-state levels were taken from those shown in Figure 7. Here, the T1 energy is assumed to be 1/2 of the 2Ag− energy [13]. On the other hand, the energy of the CBE was calculated according to [14], where the pH value was assumed to be 3.0; reprinted with permission from [11] © 2005, American Chemical Society.)
In ³3, the relative contribution of the T1-state CA becomes larger when the energy gap between the CBE of TiO2 and the T1 state of CA becomes smaller (see Figure 11); this is actually evidenced by the SADS of the D0•+ + T1 state (see Figure 10). This charge-transfer complex can then split into two independent components, i.e., (vi) the separate D0•+ and T1 species. Thus, the mechanisms of charge recombination of the electron-injected pair to form triplet Car, after the intersystem crossing and the formation of the charge-transfer complex, have been revealed by the analysis of the ps and μs time-resolved data obtained by pump-probe spectroscopy of RA and CAs bound to TiO2 nanoparticles in suspension. The conjugation-length (n) dependence of the initial excited-state dynamics has nicely explained the photocurrent and conversion efficiency of solar cells using the RA and CA sensitizers, i.e., the maximum at n = 7 and the decline toward n = 11. On the other hand, the decline toward n = 5 has been explained partially in terms of triplet generation at later stages.

Figure 12. Excitation, electron-transfer and relaxation dynamics of a typical Car bound to TiO2 nanoparticles in suspension. Mechanisms of electron injection as well as charge recombination, following intersystem crossing and exciplex formation, to generate the triplet (T1) and radical-cation (D0•+) species of the Car sensitizer. Each numbered state is expressed by a combination of TiO2 and CA in the ground, redox or excited states (reprinted with permission from [11] © 2005, American Chemical Society).
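Two of the bookkeeping steps behind Table 1 and Figure 11 can be sketched numerically. The sketch below uses placeholder numbers, not the measured values: it computes a channel's electron-injection efficiency from its free and bound lifetimes, the TiO2 CBE from a commonly quoted Nernstian relation (the exact expression of Ref. [14] may differ), and the T1 level from the assumption T1 ≈ (1/2) × E(2Ag−) stated in the Figure 11 caption [13].

```python
# Placeholder numbers, not the measured values of Refs. [11,13,14].
def injection_efficiency(tau_free_ps, tau_bound_ps):
    """Phi = k_inj / (k_inj + k_IC) = 1 - tau_bound/tau_free, where
    k_inj = 1/tau_bound - 1/tau_free (lifetime shortening upon binding)."""
    return 1.0 - tau_bound_ps / tau_free_ps

print(f"Phi = {injection_efficiency(1.0, 0.2):.2f}")  # 1.0 ps -> 0.2 ps: 0.80

def tio2_cbe_vs_nhe(pH):
    """Commonly quoted Nernstian form, E_CBE ~ -0.1 - 0.059*pH (V vs NHE);
    the exact expression used in Ref. [14] may differ."""
    return -0.1 - 0.059 * pH

def t1_energy_cm(e_2ag_cm):
    """T1 level taken as half the 2Ag- energy, per the Figure 11 caption [13]."""
    return 0.5 * e_2ag_cm

print(f"CBE at pH 3.0: {tio2_cbe_vs_nhe(3.0):.3f} V vs NHE")
print(f"T1 for E(2Ag-) = 14000 cm^-1: {t1_energy_cm(14000.0):.0f} cm^-1")
```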
Mechanisms of Singlet-Triplet Annihilation Suppressing Photocurrent and Conversion Efficiency
Dependence of photocurrent and conversion efficiency on the dye concentration in CA7-sensitized solar cells: a possible mechanism of singlet-triplet annihilation. Figure 5b (shown at the beginning of Section 2.1) presents the I-V curves of CA7-sensitized solar cells in which the sensitizer was diluted with a spacer, deoxycholic acid (DCA) [10]. Table 2 in the Supporting Information of Ref. [10] lists the relevant parameters showing the performance of CA7-sensitized solar cells at different dye concentrations. Figure 13a shows the concentration dependence of Jsc and η. Both parameters exhibit consistent but unique concentration dependence, which can be characterized as follows: (i) At 100%, these values are intermediate among the values at all the different concentrations. (ii) On going from 100% to 90%, the values exhibit a sudden drop. (iii) Then, they increase up to a maximum at 70%. (iv) From 70% down to 30%, the values gradually decrease. (v) Below 30%, they decrease steeply toward the values at 10%.
The consistent changes not only in the photocurrent and conversion efficiency shown in Figure 13 but also in the IPCE profile (action spectrum) and the electronic absorption spectrum (see Ref. [10]) strongly suggest changes in the form of singlet excitation, with turning points at 90%, 70% and 30%. We propose four different forms of excitation based on Figure 14, where the dye molecules (○) are diluted with the spacer molecules (•): (i) At 100%, a coherent excitonic excitation takes place in an aggregate of dye molecules (we call this 'coherent delocalized excitation'). (ii) At 90%, this excitation is destroyed by a small number of spacer molecules that function as defects. (iii) At 70%, a localized excitation on a single molecule can migrate from one molecule to another. This 'migrating excitation' must become most efficient when the dye concentration becomes around 2/3, because branched routes for the migrating excitation are formed. (iv) At 30%, the dye molecules become isolated, being separated by a larger number of spacer molecules. This 'isolated excitation' must become the largest in number when the dye concentration becomes around 1/3. Based on the above three different types of singlet excitation on the TiO2 layer and the generation of the triplet state as an intrinsic property of CAs bound to TiO2 (see Section 2.1), we propose a possible mechanism to explain the unique concentration dependence of photocurrent and conversion efficiency in the fabricated CA7-sensitized solar cell (see Figure 13a): (i) In the coherent delocalized excitation at 100%, there is a good chance that such widely-expanded excitation reaches a dye molecule in the T1 state to cause singlet-triplet annihilation. (ii) In the partially-destroyed delocalized excitation at 90%, the advantage of the widely-expanded coherent excitation in electron injection is lost, suppressing electron injection, but there is still a chance of collision between an expanded delocalized excitation and a localized triplet excitation, annihilating the former. (iii) In a localized excitation migrating along one of the branched routes at 70%, there is much less chance of collision with a triplet excitation unless it is located on that particular route. (iv) In an isolated singlet excitation, there is no chance of collision with an isolated triplet excitation. Then, the photocurrent and conversion efficiency decrease linearly with the decreasing number of excited dye molecules. Figure 13. Effects of dilution of the CA7 sensitizer with a spacer, deoxycholic acid (DCA; the structure is shown in Figure 18), on (a) the photocurrent (Jsc) and conversion efficiency (η) and (b) the relative photocurrent (rJsc) and conversion efficiency (rη) of CA7-sensitized solar cells. To obtain rJsc(X) at a mole fraction X, Jsc(X) was first scaled by the concentration, and then a ratio was taken in reference to the value with no dilution.
Thus, rJsc(X) = Jsc(X)/[X · Jsc(X = 1)]; by the same token, rη(X) = η(X)/[X · η(X = 1)] (reprinted from [10], Copyright (2005), with permission from Elsevier). The relative photocurrent (rJsc) and conversion efficiency (rη) are depicted in Figure 13b (see the caption for their definition). Their concentration dependence indicates that the changes in the singlet excitation take place continuously, and the relative performance (rJsc and rη) becomes systematically enhanced, by a factor of 9~10, on going from the first to the last form of singlet excitation.
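As a worked example of the scaling just defined, the sketch below evaluates rJsc for a few invented (X, Jsc) pairs; values above 1 flag dilution-enhanced performance.

```python
# rJsc(X) = Jsc(X) / (X * Jsc(1)); the Jsc values below are invented.
def relative_performance(x, value_at_x, value_at_1):
    return value_at_x / (x * value_at_1)

jsc = {1.0: 2.0, 0.7: 1.9, 0.3: 1.2, 0.1: 0.5}  # mole fraction -> mA cm^-2
for x in sorted(jsc):
    r = relative_performance(x, jsc[x], jsc[1.0])
    print(f"X = {x:>4}: rJsc = {r:.2f}")  # > 1 means dilution-enhanced
```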
Figure 14. Typical arrangements of the dye (○) and spacer (•) molecules on the TiO2 surface formed during the process of diluting the former with the latter (reprinted from [10], Copyright (2005), with permission from Elsevier).
Summary: The dependence of the photocurrent and conversion efficiency of the CA7-sensitized solar cell on the dye concentration has been explained in terms of changes in the form of singlet excitation of the sensitizer molecules on the surface of the TiO2 layer, i.e., the coherent delocalized excitation → the localized migrating excitation → the isolated excitation. There is a good chance of substantial enhancement of performance if only the localized excitation can be achieved while keeping the total number of excited-state dye molecules the same.
The substantially reduced performance at the 100% dye concentration is ascribable to the singlet-triplet annihilation reaction. Therefore, the decrease in the photocurrent and conversion efficiency of solar cells from the CA7 sensitizer toward the RA5 sensitizer (see Figures 6a and b) can now be explained by the effect of singlet-triplet annihilation among the sensitizer molecules on the surface of the TiO2 layer, in addition to the effect of the increasing triplet generation described in Section 2.1. Dependence of conversion efficiency on dye concentration and light intensity in solar cells using polyene sensitizers having different polarizabilities. Figure 15 shows the structures of the four polyene sensitizers that were used for fabricating the solar cells [15]. The common skeleton of the sensitizers is a benzene ring connected to a polyene (n = 6), to the end of which the carboxyl group is attached (φ-6-CA); to the opposite end of the benzene ring, the MeO-, (MeO)3- or Me2N- electron-donating group is attached to realize an electron push-pull relation in the latter set of sensitizers.
The set of polyene sensitizers is named φ-6-CA, MeO-φ-6-CA, (MeO)3-φ-6-CA and Me2N-φ-6-CA, as shown in the figure; the polarizability of the polyene, which enhances the van der Waals intermolecular interaction to form aggregates, is supposed to increase in this order. Actually, the transition-dipole moment calculated by the use of the molar extinction coefficient (ε) increased in this order. Figure 16a shows the dependence of the photocurrent on the dye concentration: In the least-polarizable sensitizer, φ-6-CA, the photocurrent is the highest at 100% and monotonously decreases toward lower concentration. In the most-polarizable sensitizer, Me2N-φ-6-CA, on the other hand, the photocurrent is the lowest at 100% and monotonously increases toward lower concentration. The latter change is contrary to our expectation, and can be explained only in terms of singlet-triplet annihilation. At 100%, a delocalized excitonic excitation should be generated due to aggregate formation, which can be readily annihilated by collision with the triplet species within the expanded, excitonically-excited region. The chance of this singlet-triplet annihilation must become smaller on lowering the dye concentration. Figure 16b shows the dependence of the I-V curves of the solar cells on the light intensity at two different dye concentrations (5% and 100%). In the least-polarizable sensitizer, φ-6-CA, the photocurrent decreases with lowering light intensity. On the other hand, in the most-polarizable sensitizer, Me2N-φ-6-CA, the photocurrent increases instead. The latter change is contrary to our expectation, and can be explained in terms of singlet-triplet annihilation, because the generation of both the singlet and the triplet excitation must become suppressed at lower light intensity. Figure 17a plots the concentration dependence of the conversion efficiency (η) for the set of polyene sensitizers. In the least-polarizable sensitizer, φ-6-CA, the conversion efficiency monotonously decreases, while in the most-polarizable sensitizer, Me2N-φ-6-CA, it monotonously increases with lowering dye concentration. In the second-least polarizable sensitizer, MeO-φ-6-CA, the conversion efficiency exhibits a maximum at 70%, while in the second-most polarizable sensitizer, (MeO)3-φ-6-CA, it exhibits a maximum at 5%. Table 2 in the Supporting Information of Ref. [15] lists the values of (i) the conversion efficiency (η), (ii) the conversion efficiency scaled to the concentration (sη), and (iii) the ratio of the scaled conversion efficiency in reference to that at 100% (rη). The concentration dependence of the rη values is depicted in Figure 17b. Interestingly, the relative conversion efficiency (rη) at 5% increases in the order of the polarizability of the sensitizers, i.e., φ-6-CA < MeO-φ-6-CA < (MeO)3-φ-6-CA < Me2N-φ-6-CA.
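The transition-dipole ranking quoted above can be estimated in outline from an absorption band using the standard relation |μ|² (D²) ≈ 9.19 × 10⁻³ ∫ ε(ν)/ν dν (ε in M⁻¹ cm⁻¹, ν in cm⁻¹); the Gaussian band below is synthetic, not a spectrum from Ref. [15].

```python
import numpy as np
from scipy.integrate import trapezoid

# Synthetic Gaussian absorption band; position, width and peak are invented.
nu = np.linspace(18000.0, 28000.0, 2000)          # wavenumber grid (cm^-1)
eps = 1.2e5 * np.exp(-0.5 * ((nu - 23000.0) / 1200.0) ** 2)

# Standard relation: |mu|^2 (Debye^2) ~ 9.19e-3 * integral eps(nu)/nu d(nu)
mu_sq = 9.19e-3 * trapezoid(eps / nu, nu)
print(f"|mu| ~ {np.sqrt(mu_sq):.1f} D")
```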
Summary:
The absence or presence of singlet-triplet annihilation has been demonstrated by lowering the dye concentration and the light intensity in solar cells using the four sensitizers of increasing polarizability and, as a result, increasing tendency of aggregate formation. The least polarizable (least aggregate-forming) sensitizer gave rise to a decreasing conversion efficiency with decreasing dye concentration and light intensity, whereas the most polarizable (most aggregate-forming) sensitizer gave rise to an increasing conversion efficiency with decreasing dye concentration and light intensity. The four different patterns in the dependence on the dye concentration and the light intensity can be used as a standard to examine the degree of aggregate formation and the absence or presence of singlet-triplet annihilation of a new sensitizer.
Pheophorbide Sensitizers Combined with Polyene Spacers
While searching for a sensitizer among Chl a derivatives having a cyclic conjugated system, we found that pheophorbide a (Phe a), having the chlorin skeleton, gave rise to reasonably high photocurrent and conversion efficiency. Since, as described in the previous section, a spacer is useful in preventing singlet-triplet annihilation due to aggregate formation of dye sensitizers, and since polyenes have a high potential of electron injection, we tried to use Phe a as the sensitizer and bacterial and plant Cars as redox spacers. Electron transfer from a neutral Car to the Phe a radical cation (Phe a•+) should prevent charge recombination and stabilize the TiO2−-Car•+ charge-separated state. Actually, the Car spacers enhanced the photocurrent and conversion efficiency, and the above picture was confirmed by subpicosecond pump-probe spectroscopy of Phe a and each bacterial Car bound to TiO2 nanoparticles in suspension.
We found no signs of singlet-energy transfer in the above experiments, even with the shortest-chain Cars having singlet energies higher than those of Phe a. We suspected that direct van der Waals contact and the correct orientation of the transition dipoles of the Car and Phe a moieties may be necessary to facilitate efficient singlet-energy transfer. We then synthesized an adduct sensitizer consisting of Phe y (modified from Phe a) and Car, which actually realized singlet-energy transfer from the Car to the Phe moiety in addition to electron transfer, enhancing the photocurrent and conversion efficiency. Further, the Car moiety, connected by single bonds to Phe y, prevented aggregate formation and the resultant singlet-triplet annihilation, which was evidenced by the suppression of performance upon lowering the light intensity. The details are described below.
Mechanisms of Electron Transfer from Carotenoid Spacers to Pheophorbide a Sensitizer
Phe a-sensitized solar cells using bacterial Cars as redox spacers. Figure 18 presents the sensitizer, methyl 3-carboxyl-3-devinyl-pyropheophorbide a (hereafter abbreviated as 'Phe a'), and the spacers, deoxycholic acid (DCA) and bacterial Cars including neurosporene, spheroidene, lycopene, anhydrorhodovibrin and spirilloxanthin (note the three-letter abbreviations) having n = 9, 10, 11, 12 and 13 conjugated double bonds. The sensitizer consists of the chlorin conjugated macrocycle, to which the carboxyl group is directly attached to facilitate binding and electron injection to TiO2 nanoparticles. DCA is a frequently-used saturated spacer with a carboxyl group, while the Cars have no anchoring groups. Here, 10% of each spacer was added to the sensitizer solution, in which the TiO2-deposited optically transparent electrode (OTE) was soaked overnight [16]. Figure 19 presents (a) the incident photon-to-current conversion efficiency (IPCE) profiles and (b) the I-V curves of Phe a-sensitized solar cells using Car redox spacers having different chain lengths (n); a solar cell with no spacer was also examined for comparison. Importantly, the patterns of the IPCE profiles with and without Car spacers are basically the same, and no contribution of Car absorption is seen at all. Therefore, there is little chance of Car to Phe a singlet-energy transfer. The IPCE profile and the photocurrent in the I-V curve increase monotonously with the conjugation length (n) of the Car spacer. To obtain spectroscopic evidence for the Car to Phe a•+ electron transfer, we performed subpicosecond pump-probe spectroscopy of the Phe a sensitizer and each Car spacer, both bound to TiO2 nanoparticles in suspension [17]. The time constants of the Phe a•+ generation, as a result of electron injection to TiO2, are listed in Table 4; they were determined by the SVD and global-fitting analysis of the data matrices in the 0.00-0.50 ps time region. Figure 21a shows the results of SVD and global fitting in the 15 ps-1 ns region. The transient absorption of each Car•+ obtained as an SADS nicely agrees with its stationary-state absorption obtained by opto-electrochemistry (the line spectra). Thus, the assignment of Neu•+, Sph•+, Lyc•+, Ahr•+ and Spx•+ has been established. Each pair of time-dependent changes in population (Figure 21b) evidences electron transfer from the neutral Car to Phe a•+ to generate the Car radical cation (Car•+). The time constants of electron transfer from each Car to Phe a•+ are listed in Table 4 as 'Phe a•+ decay'; they are in the 200-240 ps region. Figure 22 shows a mechanism of the electron transfer: Spirilloxanthin, having the lowest one-electron oxidation potential (the highest energy), most effectively promotes the electron transfer from Car to Phe a•+ and suppresses the reverse electron transfer, in comparison to neurosporene, having the highest one-electron oxidation potential (the lowest energy). On the other hand, the rate of resonance electron transfer is the highest in neurosporene, where the energy gap to the S0/D0•+ level is the smallest. The reason why no Car-to-Phe a singlet-energy transfer took place will be considered with the energy diagram of Figure 23 below.

Phe a-sensitized solar cells using plant Cars as redox spacers. Figure 24 presents the structures of the plant Cars used as redox spacers, including neoxanthin, violaxanthin, lutein and β-carotene (note the three-letter abbreviations) with n = 8, 9, 10 and 11, respectively [18]. The former three have polar peripheral groups, while the last one is a symmetric hydrocarbon.
Figure 25 shows (a) the IPCE profiles and (b) the I-V curves of solar cells using the Phe a sensitizer and the set of Car spacers. Importantly, the IPCE profile and the photocurrent (Jsc) systematically shift to higher values in the order, n = 9 < n = 8 < n = 10 < n = 11. Again, there is no clear indication of Car to Phe a energy transfer, even for the shortest-chain Cars (n = 8 and 9). Table 3 of Ref. [18] lists the relevant parameters concerning the performance of the solar cells, and the Eox values are listed in Table 1 of Ref. [18]. Note the reversed order between neoxanthin (n = 8) and violaxanthin (n = 9). As seen in their structures shown in Figure 24, the reversed order originates from the fact that violaxanthin, having two electron-withdrawing epoxy groups, has a higher one-electron oxidation potential than neoxanthin, having only one epoxy group. This evidences that the enhancement of the photocurrent and conversion efficiency is determined not by the number of conjugated double bonds but by the one-electron oxidation potential of the relevant Car spacer. Finally, we discuss why no singlet-energy transfer was seen even in the present set of plant Cars: The energy diagram in Figure 23 indicates that the 1Bu+ → Qx, 1Bu− → Qx and 2Ag− → Qy singlet energy-transfer pathways should be open for neoxanthin (n = 8), and only the 1Bu+ → Qx pathway for violaxanthin (n = 9). The results strongly suggest that the 20% Car added here as a conjugated spacer may not be enough, or that the effective distance between the Phe and Car may not be short enough, for efficient singlet-energy transfer. Most probably, however, the correct orientation of the transition dipoles between the Car and Phe a moieties is necessary. We therefore proceeded to synthesize a Phe-Car adduct so designed.

Summary: A method has been found to enhance the photocurrent and conversion efficiency of the Phe a-sensitized solar cell by the addition of a Car as a redox spacer, which facilitates the Car to Phe a•+ electron transfer and prevents immediate charge recombination in the TiO2−-Phe a•+ state. The enhancement increases with the shift of the Car one-electron oxidation potential to the negative side. Subpicosecond pump-probe spectroscopy of Phe a and each bacterial Car bound to TiO2 nanoparticles in suspension proved that the Car to Phe a•+ electron transfer actually took place. No clear signs of Car-to-Phe a singlet-energy transfer were seen even in the shortest-chain Cars (n = 8).

Pheophorbide-Car Adduct: Energy Transfer and Electron Transfer from the Car to the Phe Moiety

Figure 27 presents the structures of the 'Phe y' sensitizer, i.e., methyl 3²-carboxy-3²-cyano-pyropheophorbide a, and the 'Phe-Car adduct', i.e., 3²-carboxy-3²-cyano-17²-(β-apo-8'-carotenoyl)oxymethyl-17²-decarboxy-pyropheophorbide a. Phe y has a structure similar to Phe a, in which the carboxyl group attached to ring A is replaced by the ethenyl-cyano-carboxyl group, which was expected to enhance electron injection. The Phe-Car adduct consists of the Phe y and β-apo-8'-carotenoyl (n = 9) moieties. The π-conjugated systems of the two moieties are connected loosely through several single bonds so that their electron clouds can overlap with each other to facilitate efficient electron transfer, and the 1Bu+ transition moment of the Car moiety and the Qx transition moment of the Phe moiety can be set parallel to facilitate the 1Bu+ to Qx singlet-energy transfer. When the adduct is bound to the TiO2 surface, the intervening bulky Car group may prevent the formation of Phe y aggregates and, as a result, suppress the singlet-triplet annihilation reaction.
Figure 28a compares the IPCE profiles of solar cells using the Phe y and Phe-Car adduct sensitizers [19]. In the longer-wavelength region (500-800 nm), we see a shift of basically the same IPCE profile from the former to the latter, similar to the cases of the bacterial and plant Car spacers (see Figures 19 and 25). In the shorter-wavelength region (370-470 nm), a bump is observed in the IPCE profile of the Phe-Car adduct. Definitely, this is ascribable to singlet-energy transfer from the Car to the Phe moiety. The shift of the IPCE profile in this region is ascribable to electron transfer from the Car to the Phe y moiety. Figure 28b compares the I-V curves for the two sensitizers: the Phe y sensitizer gives rise to a higher Voc value, while the adduct sensitizer gives a higher Jsc value. The former observation presumably reflects the better packing of the Phe y sensitizers on the TiO2 surface, because the bulky Car moiety in the Phe-Car adduct must prevent ordered surface coverage. The latter observation must reflect the larger photocurrent due to the electron transfer and energy transfer from the Car to the Phe moiety, as mentioned above. Table 1 of Ref. [19] lists the relevant parameters concerning the performance of solar cells using the pair of sensitizers. The introduction of the Car moiety enhances Jsc by 1.6 times and η by 1.3 times. The Eox values of the Phe-Car adduct reflect those of the Car moiety (0.95 V) and the Phe y moiety (1.17 V), which supports the idea of electron transfer from the Car to the Phe moiety. Figure 29 compares the light-intensity dependence of the I-V curves of solar cells using the Phe y and Phe-Car adduct sensitizers. In the former, no clear change in Jsc is seen even on lowering the light intensity to 1/5, whereas in the latter, a systematic decrease in Jsc is seen, as expected. The changes are somewhat comparable to the case of the polyenes (see Figure 16): the light-intensity dependence of Phe y is similar to that of (MeO)3-φ-6-CA, whereas that of the Phe-Car adduct is similar to that of φ-6-CA. The results indicate that some aggregation causing singlet-triplet annihilation is formed in the Phe y sensitizer, whereas practically no aggregates are formed in the Phe-Car adduct sensitizer. Figure 30 pictorially proposes the mechanisms of enhancement in photocurrent and conversion efficiency on going from the Phe y to the Phe-Car adduct sensitizer, which include (i) electron transfer and (ii) singlet-energy transfer from the Car to the Phe y moiety, as well as (iii) suppression of the singlet-triplet annihilation reaction by preventing aggregate formation by means of the bulky Car moiety.
Summary: Both singlet-energy transfer and electron transfer from the Car to the Phe moiety have been realized in the Phe-Car adduct. The photocurrent (Jsc) was enhanced 1.6 times, the photovoltage (Voc) was lowered to 0.9 times and, as a result, the conversion efficiency (η) was enhanced 1.3 times. The π-conjugated chain of the Car moiety prevented aggregate formation of the Phe moiety, so that no sign of singlet-triplet annihilation was seen. Therefore, the Phe-Car adduct is potentially an excellent sensitizer to be used in a more refined way; for example, short polyene spacers could be added to improve the coverage of the TiO2 layer and to enhance the photovoltage (Voc).
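These factors are tied together by the standard cell relation η = Jsc · Voc · FF / Pin; the quick check below uses the 1.6, 0.9 and 1.3 ratios from this summary, while the implied fill-factor ratio is an inference, not a reported value.

```python
# Consistency check via eta = Jsc * Voc * FF / P_in: the Jsc, Voc and eta
# ratios are those stated in the summary above; the fill-factor ratio is
# inferred, not a reported value.
r_jsc, r_voc, r_eta = 1.6, 0.9, 1.3
r_ff = r_eta / (r_jsc * r_voc)
print(f"implied fill-factor ratio: {r_ff:.2f}")  # ~0.90
```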
Bacteriochlorin, Chlorin and Porphyrin Sensitizers
As the number of conjugated double bonds in the macrocycle increases one by one, in the order bacteriochlorin → chlorin → porphyrin skeleton (see Figure 2), the Soret absorption shifts to the red and the Qy absorption to the blue, while the relative intensity of the Soret vs. Qy absorptions increases in this order. The structural and spectral changes are shown in Figures 31 and 32, respectively. Thus, the excited-state dynamics after photo-excitation can vary depending on the type of macrocycle. We first examined the photocurrent and conversion efficiency of solar cells using a set of pheophorbide (Phe) sensitizers (with no central metals) having these different types of macrocycle, and found that the performance increased monotonously in the order, Phe c2 < Phe c1 < Phe b < Phe x < Phe a ≈ BPhe a (i.e., porphyrin < chlorin ≤ bacteriochlorin), with decreasing one-electron oxidation potential and increasing Qy absorption. Next, we examined the photocurrent and conversion efficiency of solar cells using Chl c1, Chl c2 and their oxidized derivatives, and found that the introduction of Mg, i.e., Phe c2 → Chl c2 (Mg-Phe c2), for example, substantially enhanced the performance. The results were ascribed to the negative shift of the one-electron oxidation potential and also to the disappearance of the Qx level, which otherwise promotes rapid internal conversion from the Soret level.
Finally, we succeeded in further enhancing the performance by co-sensitization, combining the two most efficient sensitizers we had found so far, i.e., Phe a and Chl c2 (Mg-Phe c2). We have also tried to reveal the mechanisms by which co-sensitization suppresses or enhances the performance of the component sensitizers. The details are described below.
Pheophorbide Sensitizers Having Bacteriochlorin, Chlorin and Porphyrin Skeletons
Dependence of photocurrent and conversion efficiency on the one-electron oxidation potential and the Qy absorption. Figure 31 shows a set of Phe sensitizers having different types of skeleton: (a) the bacteriochlorin skeleton in 3-deacetyl-3-carboxy-bacteriopyropheophorbide a (BPhe a); (b) the chlorin skeleton in methyl 3-carboxy-3-devinyl-pyropheophorbide a (Phe a), 3-devinyl-3-ethyl-8-deethyl-8-carboxy-pyropheophorbide a (Phe x) and methyl 7-deformyl-7-carboxy-pyropheophorbide b (Phe b); and (c) the porphyrin skeleton in pheophorbides c1 and c2 (Phe c1 and c2). We fabricated solar cells using the above set of Phe sensitizers and compared their photocurrents and conversion efficiencies; by using a set of sensitizers with similar structures, we tried to find the key parameters that systematically influence the performance. Figure 33 shows the IPCE profiles of solar cells using the above set of sensitizers [20], which can be characterized as follows: (i) In BPhe a, the IPCE profile extends to the near-infrared region, a unique property of this sensitizer. (ii) The IPCE profiles in the Qy region are broader in BPhe a, Phe c1 and Phe c2 than in Phe a, Phe x and Phe b. (iii) The IPCE profiles in the Soret region, relative to those in the Qy region, are higher in Phe x, Phe b, Phe c1 and Phe c2 than in BPhe a and Phe a. All these characteristics stem from the electronic-absorption spectra of the sensitizers in solution (Figure 32). Figure 34 shows the I-V curves of solar cells using the same set of sensitizers. Table 2 of Ref. [20] lists the relevant parameters derived from the IPCE profiles and I-V curves shown above; the one-electron oxidation potentials of the sensitizers are given in Table 1 of Ref. [20]. Now, we discuss why the photocurrent and, as a result, the conversion efficiency of the solar cells depend on the Qy absorption and the one-electron oxidation potential of the Phe sensitizer. Figure 36 proposes parallel flows of electrons in the ground and excited states after the excitation of the Phe a sensitizer: (i) Upon excitation of the dye sensitizer, an electron (e−) is promoted to an excited state and, as a result, a hole (h+) is generated in the ground state. (ii) The electron is injected into TiO2, whereas the hole is transferred to the I−/I3− redox couple. (iii) The latter generates an electron flow from the I−/I3− couple to the radical cation (D0•+) of the sensitizer. (iv) Upon UV excitation of TiO2, electron transfer from the valence-band edge (VBE) to the conduction-band edge (CBE) can take place. (v) Thus, parallel flows of electrons, one via the excited state and the other via the ground state, can be generated in principle.
(A) Dependence on the Qy absorption: Figure 37 presents a proposed mechanism of electron injection from the excited states of the Phe sensitizer into the conduction band of TiO2 by tunneling through a barrier. The efficiency of electron injection via each excited state, i.e., Soret, Qx or Qy, should be determined by competition among (i) electron injection, (ii) internal conversion and (iii) energy transfer to the sensitizer molecules stacked in the upper layers, where it is dissipated. The key parameters are the rates of internal conversion, which can be assumed to be on the order of (0.1 ps)−1, (0.01 ps)−1 and (1 ns)−1 for the Soret, Qx and Qy states, respectively. When the one-electron oxidation potential is high (left-hand side), the barrier is relatively high in energy and, as a result, the rates of internal conversion and energy transfer via the Soret or Qx state can be faster than that of electron injection. Then, only the electron injection via the Qy state plays the major role, owing to its much longer lifetime. Thus, the ground-state → Qy absorption mainly determines the photocurrent. (B) Dependence on the redox potential: Figure 36 shows that the one-electron oxidation potential determines the relative heights of the Soret, Qx and Qy levels with respect to the barrier for electron injection. However, since the details of the barrier for electron injection via the excited states are not known at this moment, we will just consider the effect of the one-electron oxidation potential on the electron injection via the ground state (see Figure 36).
As described in Ref. [20], an expression for the photocurrent can be derived by the use of the Marcus theory; combining Equations (1) and (2) of that reference yields an expression for Jsc as a function of the Qy absorption and Eox. By the use of this equation, we fitted the observed values of Jsc as functions of the Qy absorption and Eox. The numerical fitting results are shown in Table S-1(a) of the Supporting Information of Ref. [20]; here, they are presented graphically in Figure 38.
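The fitting step can be sketched as follows; since Equations (1) and (2) of Ref. [20] are not reproduced in this excerpt, the additive form with a Marcus-type Gaussian term, the reorganization energy and all data points below are assumptions made only to illustrate the procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative fit of Jsc(A_Qy, E_ox); functional form and data are assumed.
KT = 0.0257        # eV at room temperature
LAMBDA = 0.5       # assumed reorganization energy (eV)

def jsc_model(X, a, b, e0):
    A_qy, E_ox = X
    j_excite = a * A_qy                                     # Qy-driven term
    j_redox = b * np.exp(-((E_ox - e0) ** 2) / (4 * LAMBDA * KT))
    return j_excite + j_redox

A_qy = np.array([0.9, 0.8, 0.6, 0.5, 0.4])                  # invented data
E_ox = np.array([1.05, 1.10, 1.18, 1.25, 1.33])             # V, invented
jsc = np.array([9.0, 8.1, 6.9, 6.3, 6.0])                   # mA/cm^2, invented

popt, _ = curve_fit(jsc_model, (A_qy, E_ox), jsc, p0=[8.0, 2.0, 1.0])
print("fitted a, b, e0:", np.round(popt, 2))
```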
The fitting results in Figure 38 apparently indicate that the one-electron oxidation potential plays the predominant role. However, this does not mean that the electron transfer through the ground redox state is more effective than the electron injection through the Qy state. As mentioned above, the dependence on the one-electron oxidation potential of the electron injection via the excited states can be more important than that via the ground state. A comparison of the redox (S0/D0•+), Qy, Qx and Soret levels of the set of Phe sensitizers with the valence-band-edge (VBE) and conduction-band-edge (CBE) levels of TiO2 makes it readily understood that the higher the one-electron oxidation potential (the lower the energy), the less efficient the electron injection via the higher excited states by the electron-tunneling mechanism through the barrier (see also Figure 37).
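The tunneling-versus-internal-conversion competition of Figure 37 can be made quantitative with the branching ratio Φ = kinj/(kinj + kIC); the sketch below uses the internal-conversion rates quoted in the text, while the two injection rates are assumed values.

```python
# Internal-conversion rates follow the orders of magnitude quoted above for
# the Soret, Qx and Qy states; the injection rates k_inj are assumed.
k_ic = {"Soret": 1 / 0.1e-12, "Qx": 1 / 0.01e-12, "Qy": 1 / 1e-9}  # s^-1

def branching(k_inj):
    """Phi = k_inj / (k_inj + k_IC) for each excited state."""
    return {state: k_inj / (k_inj + kic) for state, kic in k_ic.items()}

for k_inj in (1e11, 1e13):  # slow vs fast injection (assumed values)
    phi = branching(k_inj)
    print(f"k_inj = {k_inj:.0e}: " +
          ", ".join(f"{s}: {p:.2f}" for s, p in phi.items()))
# Even for slow injection, Phi(Qy) ~ 1 thanks to the ~1 ns Qy lifetime,
# whereas Soret/Qx injection requires a much faster injection rate.
```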
Summary: A clear dependence of the photocurrent and conversion efficiency on the Qy absorption and the one-electron oxidation potential has been found for the set of Phe sensitizers having the chlorin and porphyrin skeletons, including Phe a, Phe x, Phe b, Phe c1 and Phe c2. A fit to Jsc (Figure 38) has been obtained, in which Jexcite reflects the electron injection from the Qy state of the Phe sensitizer, and Jredox must reflect not only the redox electron transfer in the ground state but also the electron injection from the Qy state through the tunneling mechanism. It is suggested that electron injection proceeding only through the Qy state originates from the high one-electron oxidation potential (S0/D0•+) of the Phe c2 sensitizer.

Solar Cells Sensitized by Chls c1 and c2 and Their Oxidized Derivatives Chls c1' and c2'

Figure 40 presents the chemical structures of the Chl c and Chl c' pairs extracted from a seaweed called Undaria pinnatifida (Wakame). The structures were determined by mass spectrometry and ¹H-NMR spectroscopy (including rotating-frame Overhauser effect spectroscopy (ROESY) measurements to determine the nuclear Overhauser effect (NOE) correlations) [21]: Chl c1 (Chl c1') and Chl c2 (Chl c2') have an ethyl group and a vinyl group, respectively, attached to ring B in different conformations. Chl c1 and Chl c2 (Chl c1' and Chl c2') have a hydrogen (a hydroxyl group) attached to ring E, and also the carboxyl group attached to ring D through the vinyl group in the trans (cis) conformation with respect to a single bond attached to ring D. Thus, Chl c1' and Chl c2' can form an intramolecular hydrogen bond between the hydroxyl and carboxyl groups. Importantly, the chemical-shift values of the vinyl H suggest that the electron density is in the order, Chl c2 > Chl c1 > Chl c2' > Chl c1' [21]. Figures 41a and b show the IPCE profiles and the I-V curves, respectively, of solar cells using the set of four sensitizers; Table 3 lists the relevant parameters for Chl c1, Chl c2, Chl c1' and Chl c2'.
Concerning the Chl c2-sensitized solar cell, Figures 42a and b show that the photocurrent (Jsc) and conversion efficiency (η) monotonously decrease toward lower dye concentration, and Figure 42c shows that both the Jsc and Voc values decrease toward lower light intensity. There is no sign at all of a singlet-triplet annihilation reaction due to aggregate formation in this particular sensitizer. Chl c2 has exhibited the highest photocurrent (Jsc = 13.8 mA·cm−2) and conversion efficiency (η = 4.6%) among all the sensitizers we have tested. This is rather surprising, because Phe c2 showed one of the lowest photocurrents (Jsc = 6.0 mA·cm−2) and conversion efficiencies (η = 1.1%), although the absorption spectrum of Chl c2 (Mg-Phe c2), shown in the next section (Figure 45e, dotted line), is not very different from that of Phe c2 shown in the previous section (Figure 32, bottom). An important difference, however, is the absence and presence of the Qx absorption in the former and the latter, respectively. Most importantly, the one-electron oxidation potential of Chl c2 (1.06 V) is much lower than that of Phe c2 (1.33 V).
Here, we try to explain why the photocurrent and conversion efficiency of the solar cell using the Chl c2 sensitizer are much higher than those of the solar cell using the Phe c2 sensitizer in terms of (i) the much lower one-electron oxidation potential and (ii) the absence of the Qx level in the former: Figure 37 shows the effect of lowering the one-electron oxidation potential (on going from the left-hand side to the right-hand side); the excited-state electronic levels then shift to higher energy relative to the barrier, and electron injection via the Soret level becomes tremendously enhanced, taking advantage of the very high light-absorption efficiency of the Soret band. Further, the absence of the Qx level must lengthen the lifetime of the Soret level, enhancing the efficiency of electron injection. In addition, the central Mg atom seems to prevent aggregation of the sensitizer molecules and the resultant singlet-triplet annihilation.
Summary: One of the most efficient sensitizers, Chl c2, has been found, and a mechanism giving rise to its high performance has been proposed, as mentioned above.

Solar Cells Sensitized by Pheophorbide Sensitizers without and with the Central Metal, Mg or Zn

Figure 44 exhibits (a) the IPCE profiles and (b) the I-V curves for the five pairs of sensitizers, which can be classified into three different types of co-sensitization, i.e., a-type + a-type, a-type + b-type and a-type + c-type. In the present co-sensitization experiments, Phe a was used as the reference sensitizer. The IPCE profiles and the I-V curves pictorially demonstrate that the co-sensitization of a-type + a-type gives rise to suppression, whereas those of a-type + b-type and a-type + c-type give rise to enhancement, of the photocurrent and conversion efficiency. Table 5 summarizes the Jsc, Voc, FF and η values of the solar cell using the reference Phe a sensitizer, as well as of the pairs of solar cells singly sensitized by the individual co-sensitizer or co-sensitized together with Phe a. Concerning co-sensitization, the three different pairs of sensitizers give rise to suppression or enhancement in reference to the average performance of the component sensitizers (Table 5): (i) The a-type + a-type co-sensitization gives rise to suppression of performance; the relative performance values decrease for both sensitizers, i.e., Mg-Phe a (rJsc = 0.83, rη = 0.76) and Phe y (rJsc = 0.96, rη = 0.88), the averaged ratios being ~0.8 and ~0.9, respectively. (ii) The a-type + b-type co-sensitization with the co-sensitizer Phe b shows remarkably high enhancement (rJsc = 1.60, rη = 1.65), the averaged ratio being 1.6. (iii) The a-type + c-type co-sensitization causes large enhancement with the sensitizers Zn-Phe c1 (rJsc = 1.23, rη = 1.35) and Mg-Phe c2 (rJsc = 1.47, rη = 1.50), the averaged ratios being ~1.3 and ~1.5, respectively. Importantly, the combination of the chlorin (Phe a) and porphyrin (Mg-Phe c2) sensitizers, each showing the two highest individual performances (concerning the maximum values ever exhibited), gives rise to the highest enhancement of the Jsc value (9.0 and 9.9 → 14.0 mA·cm−2) and the η value (3.4 and 3.8 → 5.4%). Figure 45 shows the electronic-absorption spectra of the pairs of sensitizers in ethanol solution, which can be characterized as follows. Individual sensitizers: (i) The chlorin sensitizers of both a-type (Phe a and Phe y) and b-type (Phe b) clearly exhibit the Soret, Qx and Qy absorption peaks, whereas the metal-porphyrin sensitizers of c-type (Zn-Phe c1 and Mg-Phe c2) exhibit only the Soret and Qy absorption peaks, the latter of which is split into two. Therefore, completely different internal-conversion processes are expected, i.e., the stepwise Soret → Qx → Qy internal conversion in the a-type and b-type sensitizers, and the direct Soret → Qy internal conversion in the c-type sensitizers.
(Figure caption: Solar cells sensitized by pheophorbide sensitizers without and with the central metal, Mg or Zn.)
(ii) Phe a is characterized by a sharp, blue-shifted Soret absorption, whereas the rest of the chlorin sensitizers (Mg-Phe a, Phe y and Phe b) are characterized by a broad, red-shifted Soret absorption. The metal-porphyrin sensitizers (Zn-Phe c 1 and Mg-Phe c 2 ) exhibit a sharp, red-shifted Soret absorption. A pair of co-sensitizers: Depending on whether the absorption peaks of the pair of sensitizers overlap or are split, competitive or complementary light absorption is expected to take place. Concerning the overlap of co-sensitizer absorption peaks, (iii) the 'a-type + a-type' co-sensitizer pair exhibits overlaps of the Soret, Q x and Q y absorptions in a complicated way. (iv) The 'a-type + b-type' pair, i.e., Phe a and Phe b, exhibits split Soret absorptions, but strongly overlapped Q x and Q y absorption peaks.
(v) The 'a-type + c-type' pair exhibits no overlaps in either the Soret or the Q y absorptions. To evaluate the overlap over the spectral region, we have defined the spectral separation (S); the values are listed in Table 5. Importantly, S is the smallest for the a-type + b-type pair and the largest for the a-type + c-type pair.
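The defining equation for S did not survive in this copy of the text. A minimal sketch of one plausible overlap-based definition, stated here purely as an assumption (one minus the overlap integral of the two area-normalized spectra, so that S → 1 for fully separated spectra and S → 0 for identical ones), is:

import numpy as np

def spectral_separation(wl, A1, A2):
    # Assumed overlap-based separation of two absorption spectra.
    # wl: wavelength grid (assumed uniform); A1, A2: absorbance arrays.
    dwl = wl[1] - wl[0]
    a1 = A1 / (A1.sum() * dwl)          # area-normalize spectrum 1
    a2 = A2 / (A2.sum() * dwl)          # area-normalize spectrum 2
    return 1.0 - np.minimum(a1, a2).sum() * dwl

Under such a definition, the non-overlapping Soret and Q y bands of the a-type + c-type pair would indeed yield the largest S, consistent with the trend reported in Table 5.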
We examined the effects of the type of macrocycle and the position of the carboxyl group on the molecular orbitals by means of time-dependent density-functional-theory (TD-DFT) calculations: Figure 46 shows the four major calculated molecular orbitals, including HOMO-1, HOMO, LUMO and LUMO+1 (here, HOMO and LUMO stand for the highest-occupied and the lowest-unoccupied molecular orbital, respectively). The shapes of the four molecular orbitals differ depending on the type of macrocycle, chlorin or porphyrin. The LUMO and LUMO+1, which are expected to play the key role in electron injection into TiO 2 , are found to be extended toward the carboxyl group; in other words, the electron density is shifted toward the carboxyl group, ready for electron injection (see the regions shown in dotted circles). Also, the electronic transitions are mainly determined by the combination of {HOMO-1, HOMO} → {LUMO, LUMO+1} transitions and, therefore, all the Soret, Q x and Q y transitions are expected to be strongly influenced by the position of the carboxyl group (or, in other words, by the direction of polarization). The results of the DFT calculations shown in Figure 46 provide strong support for the ideas that (a) the type of macrocycle, chlorin or porphyrin, and (b) the position of the carboxyl group, on the y-axis or the x-axis, strongly affect (a) the state energies and the rates of internal conversion and (b) the directions of electron injection and transition-dipole moment, respectively.
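As an illustration of this type of calculation (not the computation actually used for Figure 46), a minimal TD-DFT sketch with the open-source PySCF package, run on water simply as a light stand-in molecule, would look like:

import numpy as np
from pyscf import gto, dft, tddft

# Stand-in molecule (water); the actual study used chlorin/porphyrin macrocycles.
mol = gto.M(atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587", basis="6-31g")
mf = dft.RKS(mol)
mf.xc = "b3lyp"        # assumed functional, for illustration only
mf.kernel()

homo = mol.nelectron // 2 - 1
print("HOMO/LUMO energies (Hartree):", mf.mo_energy[homo], mf.mo_energy[homo + 1])

td = tddft.TDA(mf)     # Tamm-Dancoff TD-DFT for the lowest excitations
td.nstates = 4
td.kernel()
print("excitation energies (eV):", np.asarray(td.e) * 27.2114)

For a porphyrin or chlorin, the same workflow would yield the orbital energies and the Soret/Q excitation characters discussed in the text.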
The suppression or enhancement of performance in co-sensitization can be explained in terms of the light absorption (competitive or complementary), the direction of transition-dipole moment (parallel or orthogonal) and the singlet-energy transfer (interactive or independent) between the pair of sensitizers: (i) The absorption spectra of the sensitizers (in Figure 45) show that the major light absorption through the Soret bands is highly competitive in the a-type + a-type pair, complementary rather than competitive in the a-type + b-type pair, and absolutely complementary in the a-type + c-type pair. Therefore, the highest enhancement in the a-type + b-type co-sensitization and the next highest enhancement in the a-type + c-type co-sensitization can be rationalized in terms of complementary absorption not by the Q x and Q y levels but by the Soret levels.
(ii) The combination of the a-type sensitizer having the carboxyl group in the y-direction and the b-type or c-type sensitizer having the carboxyl group in the x-direction should give rise to the highest enhancement of photocurrent and conversion efficiency, because of the minimum interference of the transition dipoles between the pair of co-sensitizers. Polarization and electron injection along orthogonal directions must prevent the interference between the intermolecular transition-dipole-transition-dipole interactions that can trigger intermolecular energy transfer and the resultant dissipation of the singlet energy (see the sketch following this list).
(iii) The different pathways of internal conversion, Soret → Q x → Q y in the a-type sensitizer and Soret → Q y in the b-type or c-type sensitizer may also prevent interaction in the internal-conversion processes because of the different time scales of internal conversion.
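The geometric intuition behind point (ii) can be illustrated, under the point-dipole (Förster) approximation (an assumption introduced here, not an analysis from the original work), by the orientation factor κ = μ̂_D·μ̂_A − 3(μ̂_D·r̂)(μ̂_A·r̂), whose square scales the dipole-dipole coupling:

import numpy as np

def kappa(mu_d, mu_a, r):
    # Forster orientation factor for donor/acceptor transition dipoles.
    d = np.asarray(mu_d, float)
    a = np.asarray(mu_a, float)
    rh = np.asarray(r, float)
    d, a, rh = d / np.linalg.norm(d), a / np.linalg.norm(a), rh / np.linalg.norm(rh)
    return float(d @ a - 3.0 * (d @ rh) * (a @ rh))

r = [1.0, 0.0, 0.0]                             # donor-acceptor axis
parallel = kappa([0, 1, 0], [0, 1, 0], r)       # side-by-side parallel dipoles
orthogonal = kappa([0, 1, 0], [0, 0, 1], r)     # mutually orthogonal dipoles
print("kappa^2 parallel:", parallel ** 2)       # -> 1.0, strong coupling
print("kappa^2 orthogonal:", orthogonal ** 2)   # -> 0.0, coupling switched off

Orthogonal transition dipoles thus switch off the Förster-type coupling that would otherwise channel singlet energy between the co-sensitizers.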
Summary: By co-sensitization using the best and the second-best sensitizers, i.e., Chl c 2 (Mg-Phe c 2 ) and Phe a, we have achieved the maximum enhancement in photocurrent (J sc = 14.0 mA·cm⁻²) and conversion efficiency (η = 5.4%), the enhancement factors being 1.47 and 1.50 in reference to the averaged performance of the component co-sensitizers. The enhancement is ascribed to the complementary light absorption, the orthogonal transition-dipole moments and the different pathways of internal conversion.
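As a consistency check (ours, not part of the original analysis), these factors can be recomputed from the rounded values quoted above:

J sc : 14.0 / [(9.0 + 9.9)/2] = 14.0/9.45 ≈ 1.48; η: 5.4 / [(3.4 + 3.8)/2] = 5.4/3.6 = 1.50,

in agreement with the reported 1.47 and 1.50 once rounding of the tabulated values is allowed for.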
Conclusions
(i) By the use of a set of RA and CA sensitizers (n = 5~13), the dependence of photocurrent and conversion efficiency of DSSC on the conjugation-length of the sensitizer was determined to be, in the order, RA5 < CA6 < CA7 > CA8 > CA9 > CA11 > CA13. For comparison, the electron-injection efficiencies for RA5-CA11 bound to TiO 2 nanoparticles in suspension were determined by means of subpicosecond time-resolved pump-probe spectroscopy. The maximum for CA7 and the decline toward CA11 were explained in terms of excited-state dynamics of the sensitizers. On the other hand, the decline toward RA5 was explained by the increasing efficiency of triplet generation and, as a result, the enhanced singlet-triplet annihilation due to the aggregate formation of the dye sensitizers on the TiO 2 surface.
(ii) Excited-state dynamics including the formation of a charge-transfer complex, what we call 'the combined D 0 •+ + T 1 state', consisting of a charge-separated (TiO 2 − -CA (D 0 •+ )) and a neutral (TiO 2 -CA (T 1 )) state, and its subsequent splitting into the D 0 •+ plus T 1 Car species, were identified by subpicosecond and microsecond time-resolved pump-probe spectroscopy, respectively. (iii) The mechanism of singlet-triplet annihilation suppressing the photocurrent and conversion efficiency was first identified through their dependence on the dye concentration in the CA7-sensitized solar cell. This mechanism was confirmed by the use of sensitizers having increasing transition-dipole moments and, as a result, an increasing tendency toward aggregate formation. The least polarizable (least aggregate-forming) sensitizer gave rise to decreasing conversion efficiency, whereas the most polarizable (most aggregate-forming) sensitizer gave rise to increasing conversion efficiency, both with decreasing dye concentration and light intensity.
(iv) Sets of bacterial (n = 9~13) and plant (n = 8~11) Cars were used as redox spacers for the Phe a-sensitized solar cell. The idea behind this attempt was to induce electron transfer from Car to the Phe a radical cation (Phe a •+ ), stabilizing the charge-separated TiO 2 − -Car •+ state and preventing immediate charge recombination of the TiO 2 − -Phe a •+ pair. Rapid electron injection into TiO 2 to generate Phe a •+ (20-40 fs), followed by electron transfer from bacterial Cars to Phe a •+ (200-240 ps), was evidenced by subpicosecond pump-probe spectroscopy of each Phe a-bacterial Car pair bound to TiO 2 nanoparticles in suspension. Among the two sets of Cars, β-carotene, having the lowest one-electron oxidation potential (E ox = 0.61 V), exhibited the maximum enhancement of conversion efficiency (η = 3.4 → 4.2%). In the above mixture of Car and Phe a, no singlet-energy transfer was observed. However, in the Phe-Car adduct sensitizer, both singlet-energy transfer and electron transfer from the Car to the Phe moiety were identified in the solar cell. No sign of singlet-triplet annihilation due to aggregate formation was seen in this particular sensitizer.
(v) In a set of Phe sensitizers having the chlorin and porphyrin macrocycles, the photocurrent (J sc ) was found to be a function of the integrated Q y absorption and the one-electron oxidation potential (E ox ). Phe c 2 , having the highest one-electron oxidation potential (E ox = 1.33 V), exhibited the lowest conversion efficiency (η = 1.1%) among the Phe sensitizers. On the other hand, Chl c 2 (Mg-Phe c 2 ), having a low one-electron oxidation potential (E ox = 1.06 V), exhibited the highest conversion efficiency (η = 4.6%) among all the sensitizers we have tested. The extremely low conversion efficiency of Phe c 2 was ascribed to the high E ox value and electron injection via the Q y level, whereas the high conversion efficiency of Chl c 2 was ascribed to the low E ox value and electron injection via the Soret level, which is stabilized by the absence of the Q x level.
(vi) By co-sensitization using the Phe a and Chl c 2 sensitizers of the second-best and the best performance, we have succeeded in enhancing the photocurrent and conversion efficiency to 14.0 mA·cm⁻² and η = 5.4%, respectively. The enhancement was ascribed to the complementary light absorption, the orthogonal directions of the transition dipoles and the independent internal-conversion processes of the pair of sensitizers.
Future Perspective
(1) Pump-probe subpicosecond time-resolved spectroscopy of the single Car sensitizer as well as the Chl a sensitizer plus Car redox spacer, both bound to TiO 2 nanoparticles in suspension, has turned out to be very powerful in elucidating the initial electron-injection and electron-transfer mechanisms, respectively. This technique should be applied to determine the mechanisms of excitation, energy transfer and electron injection in each chlorin or porphyrin sensitizer as well as in the pairs of these sensitizers used for co-sensitization.
(2) In the case of the well-characterized CA and RA sensitizers, it is time to start pump-probe time-resolved spectroscopy of fabricated DSSCs, in various time regions, to elucidate the real electron-flow processes in the cell.
(3) To establish the mechanism of singlet-triplet annihilation, which is a key issue for enhancing the performance of DSSCs in general, other spectroscopic methods such as time-resolved fluorescence should be applied.
Time-resolved studies by other groups have likewise invoked T 1 states, as we assumed for the state '³3' mentioned above; their conclusions are in general agreement with ours except for the assignment of SADS.
In summary, our unique contribution seems to be the identification of the T 1 -D 0 •+ charge-transfer complex (³3) by the SVD and global-fitting analysis of spectral data in the μs time range.
Car to radical cation electron transfer in DSSC and photosynthetic systems. We believed that the addition of Cars to Phe a-sensitized solar cells as redox spacers to stabilize the charge-separated state was our own idea, but now we realize that Car to Chl a •+ (BChl a •+ ) electron transfer is actually one of the principles of photosynthesis: Noguchi et al. [28] identified, by FTIR spectroscopy, the generation of the β-carotene radical cation in photosystem (PS) II membranes at 80 K under oxidizing conditions. Hanley et al. [29] studied the oxidation of β-carotene in Mn-depleted PS II by means of EPR and electronic-absorption spectroscopy. They proposed possible electron-transfer pathways among the Car, P680, Cyt b 559 and Chl z. On the other hand, Polívka et al. [30] identified the spheroidene radical cation in the LH2 complex from Rba. sphaeroides. The detailed mechanisms and function in the photosynthetic systems are still not clear (see [31] for a review), although there is a good chance of Chl a •+ and BChl a •+ generation in the special pair of the PS II RC and in the B850 aggregate of LH2. In this relation, we should point out that we actually observed excimer formation of Phe a in our system prior to the generation of Phe a •+ (see Ref. [17]).
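The component-counting step of such an SVD/global-fitting analysis can be illustrated with a minimal sketch on synthetic data (all spectra, lifetimes and noise levels below are invented for illustration; they are not the measured data):

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10e-6, 200)                    # microsecond time axis
wl = np.linspace(400.0, 700.0, 150)                 # wavelength axis (nm)
s1 = np.exp(-((wl - 480.0) / 30.0) ** 2)            # assumed spectrum, species 1
s2 = np.exp(-((wl - 560.0) / 40.0) ** 2)            # assumed spectrum, species 2
c1 = np.exp(-t / 1e-6)                              # assumed 1 us decay
c2 = np.exp(-t / 4e-6)                              # assumed 4 us decay
D = np.outer(s1, c1) + np.outer(s2, c2) + 0.01 * rng.standard_normal((150, 200))

U, S, Vt = np.linalg.svd(D, full_matrices=False)
print("leading singular values:", np.round(S[:5], 2))
# Two singular values stand far above the noise floor, signalling two
# independent spectral species; global fitting to a kinetic model then
# refines their species-associated difference spectra (SADS) and lifetimes.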
Singlet-energy transfer in Car-Phe adducts. Debreczny et al. [32] studied singlet-energy transfer in adducts where two Cars, i.e., fucoxanthin (n = 7) and zeaxanthin (n = 11), were covalently attached to each of five different pyropheophorbides. In all five compounds containing fucoxanthin, energy transfer was found to occur from the higher-lying fucoxanthin S 1 state to the lower-lying pyropheophorbide S 1 state with 12-44% efficiency. In contrast, all five zeaxanthin-containing compounds showed no clear evidence for energy transfer from the zeaxanthin S 1 state to the pyropheophorbide S 1 state.
Macpherson et al. [33] prepared a model photosynthetic antenna system, consisting of a Car moiety covalently linked to a purpurin, to study singlet-energy transfer by means of fluorescence up-conversion spectroscopy. The S 2 lifetime of 150 ± 3 fs in the isolated Car and of 40 ± 3 fs in the Car-purpurin dyad leads to an energy-transfer efficiency via the S 2 state of 73 ± 6%. On the other hand, the S 1 lifetime of the Car (7.8 ps) was not changed at all after the formation of the dyad. Taken together, the S 2 state of the Car moiety is concluded to be the sole donor state in the singlet-energy transfer.
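The quoted efficiency follows from the standard lifetime-quenching relation:

Φ_ET = 1 − τ_dyad / τ_isolated = 1 − 40/150 ≈ 0.73,

consistent with the reported 73 ± 6%; the unchanged S 1 lifetime gives, by the same relation, Φ_ET ≈ 0 for that channel.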
Those model antennas can be used as a guide for designing the dyad sensitizers in the future, after adding the carboxyl group for the binding and electron injection to TiO 2 .
Chlorin and porphyrin sensitizers. In pioneering work on the use of Chl derivatives and related porphyrins, Kay and Grätzel [35] found that compounds containing copper as the central metal gave rise to the highest IPCEs. Cu mesoporphyrin IX exhibited an IPCE value as high as 83% at the Soret absorption, i.e., a unit quantum yield of charge separation once the loss of light energy by reflection and scattering is taken into account. On the other hand, Cu chlorophyllin gave rise to performance with a J sc value of 9.4 mA cm -2 , a V oc value of 0.52 V, and a resultant η value of 2.6%. It was found that conjugation of the carbonyl group with the π-electron system of the chromophore was not absolutely necessary, and that cholanic acids as co-adsorbates were useful to improve the photocurrent and photovoltage of solar cells using those sensitizers.
Nazeeruddin et al. [36] showed that Zn porphyrins exhibit much better performance than Cu porphyrins. Tetraporphyrinato Zn(II) ethenyl benzoic acid showed the best performance as the sensitizer, i.e., J sc = 9.7 mA cm -2 , V oc = 0.66 V and η = 4.8%. Campbell et al. [37] compared the performance of a wide variety of porphyrins to reveal the structural dependence, identifying the compound that performed best as a sensitizer.
The performance of our DSSCs may have been underestimated owing to our fabrication technique. The conversion efficiency of the Phe a-sensitized cell was η = 3.4% when fabricated by us, but a DSSC fabricated by Dr. Nazeeruddin with the same sensitizer showed a value as high as η = 5.1% (personal communication). Assuming a 'technical factor' of 5.1/3.4 = 1.5, the conversion efficiencies for the solar cells sensitized by Chl c 2 and co-sensitized by Phe a + Chl c 2 turn out to be 3.8 × 1.5 = 5.7% and 5.4 × 1.5 = 8.1%, respectively. Obviously, we need to improve our technique of solar-cell fabrication to determine the conversion efficiency for each sensitizer correctly. | 2014-10-01T00:00:00.000Z | 2009-10-27T00:00:00.000 | {
"year": 2009,
"sha1": "30dd1c0653282268bcee57a3dbc309b615f36961",
"oa_license": "CCBY",
"oa_url": "http://www.mdpi.com/1422-0067/10/11/4575/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "30dd1c0653282268bcee57a3dbc309b615f36961",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
7010918 | pes2o/s2orc | v3-fos-license | The effect of participatory community communication on HIV preventive behaviors among ethnic minority youth in central Vietnam
Background In Vietnam, socially marginalized groups such as ethnic minorities in mountainous areas are often difficult to engage in HIV research and prevention programs. This intervention study aimed to estimate the effect of participatory community communication (PCC) on changing HIV preventive ideation and behavior among ethnic minority youth in a rural district of central Vietnam. Methods In a cross-sectional survey conducted after the PCC intervention, 800 ethnic minority youth were approached for face-to-face interviews using a structured questionnaire. The propensity score matching (PSM) technique was then used to match these participants into two groups, intervention and control, for estimating the effect of the PCC. Results HIV preventive knowledge and ideation tended to increase with the level of recall of the intervention messages. The campaign had a significant indirect effect on condom use through its effect on ideation or perceptions. When the intervention and control groups were made statistically equivalent in terms of individual and social characteristics by PSM, the proportions displaying HIV preventive knowledge, ideation and condom use were significantly higher in the intervention group than among matched control counterparts, with net differences of 7.4%, 12.7% and 5%, respectively, which translate into 210, 361 and 142 ethnic minority youth in the population. Conclusions The study offers theoretical and practical public health implications to guide effective HIV control programs for marginalized communities in resource-constrained settings such as rural Vietnam and similar contexts in developing countries.
Background
The overall picture of HIV/AIDS in Vietnam continues to be worrying over time. Since the first case of sexually transmitted HIV infection in 1990 [1], the virus has spread dramatically through both risky sexual behavior and drug use across the country, with 160,019 cases reported nationwide [2]. To date, researchers and policy makers remain uncertain about the further development of this epidemic as well as the progression of prevention efforts. In previous research and literature, HIV transmission has been studied largely in injecting drug users (IDU) and female sex workers (FSW) owing to their unsafe behaviors such as sharing injecting equipment and non-condom sex [1,3-6]. However, so much attention to such high-risk groups may restrict our understanding of other groups that may also be at risk of being affected by the epidemic [7,8]. Several subpopulations in Vietnam, such as migrants, militants, and rural communities, are in fact also believed to be affected. Studies in developed and developing countries have associated migration [9-11], social contexts [12], deficits in HIV prevention knowledge [8], the social vulnerabilities of marginalized groups [13,14], and lack of access to, as well as insufficient and fragmented approaches to, HIV preventive information and programs with an intensified epidemic [15].
Although the fight against the spread of HIV/AIDS in Vietnam has been comparatively successful in several respects, there are high-risk groups that have not been engaged in HIV-related research and prevention programs. Recent data show that HIV cases have been reported in all 63 provinces and cities of Vietnam, in almost 98% of districts, and in more than 70% of wards, communes, and towns [2]. This means that the HIV epidemic has affected not only high-risk groups in urban areas, but also other communities, especially ethnic minority youth in rural settings, owing to many individual and social factors [16].
In the Mekong river sub-region, the East-West Economic Corridor (EWEC) is one of the three routes, 1,600 km in length, connecting the Indian and Pacific Oceans. The corridor has been built on the route known during the Vietnam War as "Route Number Nine", running through two mountainous districts, Dakrong and Huong Hoa, the very poor districts of Quang Tri province in central Vietnam. These districts are home to two main ethnic minorities, Pahco and Vankieu, and border Laos through the Lao Bao border gate. The completion of this route enabled regional mobility and helped generate trade opportunities as well as facilitate cultural exchange for the region, especially between Laos, Thailand and Vietnam. The development of road infrastructure in rural residential areas has led to an increase in trade and accessibility and has therefore spurred economic and social development. Local people are increasingly able to access previously inaccessible services and trade opportunities. The development of routes, however, has influenced the social habits of affected communities in unexpected ways, such as prostitution, drug use, and sexual abuse and harassment. Many of these changes have the potential to affect public health negatively, especially by increasing unprotected groups' risks of contracting HIV and sexually transmitted diseases (STDs). In the previous baseline survey of the project "Building a community-based pilot model for preventing HIV/AIDS in two mountainous districts of Quang Tri province", it was found that, in parallel with the process of migration, the facilitators and risk factors that increased HIV transmission included changed lifestyles, social norms in favor of pre-marriage sex, the local tradition "di sim" of seeking sex partners, non-condom sex due to limited perceptions of HIV prevention, and lack of access to HIV prevention services [17]. Using the results of that survey, we implemented a variety of PCC activities aimed at promoting perceptional and behavioral change for preventing HIV among ethnic minority youth in some communes of Dakrong district, Quang Tri province.
In evaluating the impact of a communication campaign, program officers and researchers would like to be able to estimate the effect of an intervention designed to change a behavior. It is well recognized among researchers and evaluators that calculating effectiveness is the most important part, but sometimes the most challenging [18]. One cannot claim a particular amount of behavior change without a causal attribution. A causal inference must be reached that attributes the net change in behavior to exposure to the intervention and not to other influences or, worse yet, to changes that occurred before the intervention was implemented. Because in this study we launched the intervention across the villages of two communes, Dakrong and Ango, and because residents, including respondents, were highly mobile between intervention and control villages, designing a randomized control-group study was not feasible.
The objective of this study was to estimate the net effect of participatory community communication (PCC) on ideational and behavioral change for HIV prevention among ethnic minority youth in a mountainous district of central Vietnam, using propensity score matching (PSM).
In the literature, PSM is highly recommended as one of the stronger statistical techniques [18-20]. This method can help reduce selection bias, as it allows for quasi-experimental contrasts between subjects receiving "treatment" and those in "control" groups based on their observed characteristics. Proper use of PSM should also allow for rigorously derived and relatively unbiased estimates of communication effects on participants' behavior [21]. Because of its ability to reduce selection bias, PSM has become increasingly used in the fields of education [22], communication [20], medicine and epidemiology [23], policy evaluation [24], economics [25], and psychology [26]. Although commonly applied across diverse disciplines and settings, it has rarely been used to measure the effect of PCC on HIV preventive behavior among ethnic minority youth in marginalized areas of developing countries like Vietnam.
The theoretical framework for this study was based on a comprehensive conceptual model by Kincaid [19,20] (Figure 1), adapted from a wide range of literature sources. Under this theory, psychological influences including knowledge, attitude, social norms, intention, self-efficacy, and others can be combined as ideation. Specific communication interventions may be designed to influence only one or several types of psychological processes. All such psychological processes are expected to affect behavior even if communication is designed to influence only one of them. Communication affects behavior indirectly by providing information that changes one or all of these processes. Exogenous determinants, including demographic, socioeconomic and contextual characteristics, affect endogenous variables such as recall of communication messages, ideation and behavior.
Study design and settings
A cross-sectional sample survey was conducted after the PCC intervention had been completed, using a face-to-face structured questionnaire, in some villages of Dakrong and Ango communes of Dakrong district in Quang Tri, central Vietnam.
Intervention campaigns and settings
A multi-campaign intervention was designed and engaged with local communities, most of whom were ethnic minority youth, in two communes, Ango and Dakrong, of Dakrong district, Quang Tri province. Before the intervention, a group of local ethnic minority youth was recruited to participate in the intervention. They were encouraged to visit local households and attend social events in their home villages to collect local stories reflecting messages on HIV/AIDS prevention, and then discussed these with the research team to choose the relevant messages. In the final analysis, there were nine key messages conveying content on HIV/AIDS prevention: 1) Be faithful with one wife and one husband, 2) Practice safe sex, 3) Don't have sex with sex workers, 4) Use condoms correctly when having sex, 5) Be friendly with condoms, 6) Don't inject drugs, 7) Don't share syringes and needles, 8) Take a test for HIV, and 9) Care for pregnant women.
In total, there were also nine communication campaigns, each of which incorporated several or all of these messages in order to promote HIV/AIDS preventive behaviors among ethnic minority youth. The first campaign, "HIV/AIDS prevention drama", was run from October 2009 to July 2010. Using local stories, 38 ethnic minority youths took part in dramas, technically guided and supported by a local director. The second campaign, "Women's health clubs", aimed to encourage open dialogues and interactive talks with ethnic minority women on HIV prevention topics from October 2009 to July 2010. In April 2009, an "HIV/AIDS knowledge contest" was carried out with the participation of close to 1000 ethnic minority youth across the two communes. Another competition, with the topic "Be friendly with condoms", took place in December 2009 and engaged 600 ethnic minority youth. In July 2010, the next contest, named "Typical role models", attracted 900 ethnic minority youth. Meetings of 400 ethnic minority youth and a live show for 1600 ethnic minority youth were held to promote "Be friendly with condoms" during December 2009. During this time, community-based clinics were also established to provide examinations and consultations on STD prevention and treatment for more than 1800 local women of reproductive age. Finally, a total of 64 community initiatives for preventing HIV/AIDS, each of which mobilized 70 to 100 ethnic minority youth, were successfully designed and promoted by the ethnic minority youth themselves from October 2009 to July 2010.
Participants and sample size
A representative sample of 800 ethnic minority youth aged 15-45 years from the two communes, Ango and Dakrong, in Dakrong district, Quang Tri province, was selected and interviewed in August 2010. This sample was obtained by cluster sampling with 40 randomly identified clusters. A cluster is defined as a number of households or families living fairly close to one another in a group of residents in each village. To increase representativeness, villages with larger populations had a higher chance of being included in the sample.
Data collection
As a procedure, this intervention project was approved by the Human Research Ethics Committees at Hanoi Medical University. After the intervention, participants in the survey were verbally informed about the study, that participation was voluntary, that they had the right to withdraw at any point, and that data would be handled confidentially. After obtaining informed consent, a structured questionnaire was administered to participants in face-to-face interviews. The survey was anonymous, without documenting participants' names in any research materials. The survey was conducted by well-trained interviewers, both men and women, so as to conduct gender-matched interviews (men with men and women with women). These interviewers were community health workers experienced in face-to-face interviews from many previous community research surveys. Households within about 10 metres of each other were identified as a cluster. The first household interviewed was randomly selected. When data collectors entered a household, they interviewed all youth aged 15-45 years, and thereafter moved to another household until they reached roughly 20 youth in each cluster and 800 participants in total, in a door-to-door approach.
Exogenous variables
The socio-demographic characteristics were measured with age (number of years), gender (male, female), ethnicity (Pahco, Vankieu), marital status (married, unmarried), mobility (migrant, non-migrant), socioeconomic status [SES] (a composite of education level, Kinh language competency (listening, speaking, reading and writing) and number of valuable properties; Cronbach's alpha = .65), with higher scores reflecting higher SES, social network (a composite of number of friends and relatives in other areas, number of visits to relatives during the past three months, distance to the closest friend or relative, and membership of local social organizations; Cronbach's alpha = .55), and access to HIV/AIDS preventive information. Access to HIV/AIDS preventive information was assessed with 7 items regarding sources of information accessed, STDs listed, local projects and programs known, AIDS prevention activities of our project known, times watching or joining AIDS preventive dramas, and times joining community initiatives for preventing HIV (Cronbach's alpha = .85); item scores were summed, with higher scores reflecting a higher level of access to HIV/AIDS preventive information.
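The internal-consistency coefficients quoted above follow the standard Cronbach's alpha formula; a minimal sketch of its computation (with synthetic data, purely for illustration) is:

import numpy as np

def cronbach_alpha(items):
    # items: array of shape (n_respondents, n_items), one row per respondent.
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(800, 1))                       # shared trait, 800 respondents
items = latent + rng.normal(scale=1.0, size=(800, 7))    # 7 noisy indicators
print(round(cronbach_alpha(items), 2))                   # moderately high alpha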
Recall
Actual exposure to PCC for HIV prevention may be measured as a state of whether a youth has participated in an intervention of any kind. However, a good method of communication evaluation requires a valid, reliable measure of recall of key messages or contents at the interval level of measurement [18,27]. In this study, a category-coded question was added asking participants to remember all possible communication messages. In total, 9 key messages could be recalled by participants. This question formed a continuous scale measuring the level of recall, ranging from 0 to 9 messages (Cronbach's alpha = .80). To simplify this measurement and accommodate the logistic regression and propensity score analysis, the scale was classified into 1 and 0 (recall versus no recall).
Ideation
HIV preventive knowledge was assessed with 15 true/false/don't know items adapted from previous measures [28,29]. Scoring the information scale was accomplished by dichotomizing each item into a value of 1 (correct) or 0 (incorrect or don't know) and then summing the item values to form a composite score, with higher scores reflecting increased knowledge about HIV prevention (Cronbach's alpha = .83).
To determine respondents' attitude toward HIV prevention such as condom use, respondents rated their performance of 4 prevention acts on a 2-point semantic scale (agree and disagree) from 0 (negative evaluation) to 1 (positive evaluation) [e.g., "How good or bad would it be if you talked about condom use (to keep from getting HIV/AIDS) with your sex partner(s) before having sex with them?"] [29]. A composite score was obtained by summing responses to items with higher scores indicating higher levels of attitude toward HIV prevention (Cronbach's alpha = .75).
The social norms scale assessed respondents' subjective norms of social support for their HIV prevention practice (e.g., "Do people in your village think you should talk about condom use with your partner(s) before having sex with them?") [29]. A composite score was obtained by summing responses to the 5 yes/no items on this scale, with higher scores indicating higher levels of social norms toward HIV prevention (Cronbach's alpha = .40).
Perceived risk of HIV infection consisted of 3 yes/no items asking if respondents thought they were at high risk of HIV infection (e.g., "Are you worried at risk of HIV infection if you had sex with commercial sex workers?") [29]. A composite score was computed by summing responses to items with higher scores indicating higher levels of perceived risk (Cronbach's alpha = .78).
Self-efficacy of HIV prevention was assessed with 4 yes/no items tapping the perceived difficulty of HIV prevention practices such as condom use, on a scale from hard (0) to easy (1), e.g., "How hard would it be for you to advise or persuade your sex partners to use a condom when having sex with them?" [29]. A composite score was obtained by summing responses to the items, with higher scores reflecting higher levels of self-efficacy (Cronbach's alpha = .76).
These five related sub-constructs, representing the cognitive and social-interaction components of ideation, were used to construct the latent measure of ideation for HIV prevention (Cronbach's alpha = .77). For the logistic regression analysis, using a cut-off of 50%, the measure was split into 1 and 0, corresponding to higher and lower levels of ideation.
HIV prevention behavior or condom use
Protected sexual behavior combined 3 items asking whether participants used condoms at the last sex with their sex partner(s), whether they used condoms for HIV prevention or other purposes, and whether they used condoms before or after insertive sex. Combining these 3 items into a single outcome variable has two advantages: it makes the measurement more valid and reliable, and it allows analysis of the impact of the intervention [18,27]. The levels of protected sexual behavior met the minimal requirements of order with respect to ideation, supported by a one-sided test of significance [30] with P = .01. This level of probability indicated rejection of the null hypothesis of equality of levels, supporting the alternative hypothesis of order. To make the logistic regression analysis and estimation of the impact possible, the single sexual behavior scale was classified as 1 and 0, reflecting safer and riskier levels of sexual behavior (Cronbach's alpha = .67).
Data analysis
Trend and Jonckheere's one-sided test
These statistics tested whether there was an increasing trend in outcomes according to the level of exposure (recall) to the intervention.
Simple proportion differences
We used Chi-square tests to determine whether the differences in proportions of knowledge, ideation and condom use between youth receiving and not receiving PCC were statistically significant. We used a P value of .05 for these analyses. We considered the results obtained from the Chi-square tests of these proportion differences as a "benchmark" for the further analyses using PSM, referred to below.
Logistic regression modeling
The model of direct and indirect effects used for this analysis requires three equations, one for each endogenous variable:

logit(Y_1i) = β_1 Y_2i + β_2 Y_3i + β_3 X_i + μ_i
logit(Y_2i) = γ_1 Y_3i + γ_2 W_i + ν_i
logit(Y_3i) = σ Z_i + ξ_i

where Y_1i is safer sexual behavior for subject i, Y_2i is ideation, Y_3i represents exposure to communication for subject i, X_i, W_i and Z_i are matrices of exogenous socioeconomic and demographic control variables, the three β, two γ and one σ coefficients are parameters to be estimated from the data, and μ_i, ν_i, and ξ_i are the disturbance (residual) terms. Because safer sexual behavior, ideation, and exposure are measured on a binary scale, logistic regression is used to estimate the parameters of the equations. The differentiation among the X, Z, and W matrices of exogenous control variables indicates that each endogenous variable should be determined by exogenous variables not included in the other two equations. However, some overlap of exogenous variables is acceptable, but each endogenous variable must have at least one exogenous control variable that is excluded from the equations for all other endogenous variables [31].
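A minimal sketch of fitting such a three-equation logistic system (with synthetic data; the variable names and data-generating coefficients are assumptions, not the study's values) could look like:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 800
age = rng.integers(15, 46, n).astype(float)
ses = rng.normal(5.45, 2.0, n)                 # SES composite, assumed scale
access = rng.normal(0.0, 1.0, n)               # access to HIV information

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical data-generating process mirroring the model structure:
recall = rng.binomial(1, expit(0.6 * access - 1.2))                      # Y3
ideation = rng.binomial(1, expit(1.2 * recall + 0.1 * ses - 1.0))        # Y2
behavior = rng.binomial(1, expit(0.9 * ideation + 0.2 * recall - 1.0))   # Y1

Z = sm.add_constant(np.column_stack([access, ses]))
W = sm.add_constant(np.column_stack([recall, ses, age]))
X = sm.add_constant(np.column_stack([ideation, recall, age]))
print(sm.Logit(recall, Z).fit(disp=0).params)    # sigma equation
print(sm.Logit(ideation, W).fit(disp=0).params)  # gamma equation
print(sm.Logit(behavior, X).fit(disp=0).params)  # beta equation; the indirect
# path runs recall -> ideation -> behavior, as described in the text.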
Creating the matched sample using propensity score matching
A propensity score is the probability of being exposed to a treatment or an intervention given a set of observed covariates, X. The method was developed as a means of balancing the treatment and control units so that a direct comparison would support a valid conclusion. The technique has been found robust, as in practice it is difficult, if not impossible, to match on more than two variables unless PSM is used [32,33]. For a research survey, a single matching score is generated by statistically regressing exposure on all of the variables that determine exposure and may also be related to the outcome variable [18].
This technique requires a two-stage process. Stage 1 involves the use of a logistic or probit regression model to calculate all respondents' propensity for experiencing the treatment of interest, in this case, receiving PCC. In stage 2, we used the estimated propensity scores obtained in stage 1 to match youth who did and did not receive PCC. To use the full sample, we applied stratification matching, which uses all treatment and control cases. Using STATA 10.0, the full range of sample members' propensity scores is divided into propensity score strata, or blocks, each of which includes treatment and control cases with the same or nearly the same propensities for receiving the treatment. The number of appropriate strata depends on the number necessary to obtain a balanced propensity score. Within each of these strata, the ATT (Average Treatment Effect on the Treated) is calculated, and the ATTs across strata are then averaged to produce a final ATT.
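A minimal sketch of this two-stage stratification procedure (in Python rather than STATA, with synthetic data and assumed coefficients, purely to make the steps concrete) is:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 800
X = rng.normal(size=(n, 4))                     # exogenous covariates
p_true = 1.0 / (1.0 + np.exp(-(X @ np.array([0.8, -0.5, 0.3, 0.2]) - 1.2)))
treat = rng.binomial(1, p_true)                 # exposure/recall indicator
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.7 * treat + X[:, 0] - 0.5))))

# Stage 1: estimate propensity scores with a logistic regression.
design = sm.add_constant(X)
ps = sm.Logit(treat, design).fit(disp=0).predict(design)

# Stage 2: stratify on the score and average within-block ATT estimates.
edges = np.quantile(ps, np.linspace(0.0, 1.0, 7))   # 6 blocks, as in the study
att, weights = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    block = (ps >= lo) & (ps <= hi)
    if treat[block].sum() > 0 and (1 - treat[block]).sum() > 0:
        diff = y[block][treat[block] == 1].mean() - y[block][treat[block] == 0].mean()
        att.append(diff)
        weights.append(treat[block].sum())      # weight blocks by treated count
print("stratified ATT:", round(float(np.average(att, weights=weights)), 3))

In practice each block would also be checked for covariate balance before accepting the stratification, as was done with the study's 36 balance tests.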
Socio-demographic characteristics of the sample
The distribution by gender (male vs. female) and marital status (married vs. unmarried) was relatively equal, with an average age of 24 years. Most respondents were ethnic minority people, Pahco (41.43%) and Van Kieu (56.32%); few were Kinh (2.3%). In terms of SES, all indicators reflected low values: incomplete secondary education (close to grade 6); most respondents were able to speak and understand the Kinh language (>95%), but only 71% were fluent in reading and writing it. The most commonly owned valuable asset was a television (70%), followed by a motorbike and a mobile phone, each at more than 40%. The SES score was around the midpoint (mean = 5.45, range = 0-11). The temporary migration percentage was rather high (almost 50%), whereas the level of social connection was low (mean = 2, range = 0-6). Generally, access to HIV/AIDS information from different sources was low (only 24% had good access); among the communication messages, the three messages "Use condoms correctly when having sex", "Practice safe sex" and "Be faithful with one wife and one husband" were recalled the most (17%, 11% and 9%, respectively). The overall rate of recall of at least one HIV prevention message was more than one-fifth (22.1%). Table 1 shows that the percentages of knowledge and ideation were significantly higher in the recall group than in the non-recall group (P < .001), but the difference in condom use prevalence was not significant.
Logistic regression modeling for predictors of recall, ideation and HIV preventive behavior
The second column in Table 2 presents the descriptive statistics of the variables and factors of each model. The models of recall of HIV preventive communication messages, ideation and behavior for HIV prevention were each identified by six factors (P of most factors < .05). The results of the statistical test to exclude inappropriate variables and factors from each model are presented in the second row from the bottom. In all models, the exclusion of some factors did not change the parameters of the model (P > .05). HIV preventive ideation, after controlling for socio-demographic variables, was the most significant predictor of condom use behavior (OR = 2.38; P < .001). The results suggested that recall of HIV prevention messages has a direct impact (P < .05) on ideation, but an indirect impact on condom use (P > .05). Analysis of the "biprobit" equation shows that the correlation coefficient "rho" between HIV preventive ideation and condom use behavior was very low, and in particular the error correlations were not significant (P > .05). Table 3 presents the results underlying the propensity score matching (PSM) analysis according to the stratified propensity score methodology, which aims to match as many of the observed individuals (n = 800) as possible so that the intervention and control groups are statistically balanced in terms of socio-demographic characteristics. The propensity score is the probability of recalling HIV preventive communication messages of the intervention, with a mean of 0.21 (range = 0.001-0.999; SD = 0.27; data not shown in the interest of space). These propensity scores were stratified into 6 strata, or blocks, within which the external factors or confounders of the intervention and control groups, including age, ethnicity, SES, social network, migration and access to information, were equivalent (6 strata and 6 exogenous factors, meaning 36 separate tests, all statistically non-significant; data not shown in the interest of space). The results, after adjustment by PSM, showed statistically significant differences (P < .05) between the intervention and control groups in the proportions of knowledge, ideation and condom use. Participants who recalled HIV preventive communication messages were more likely to have better knowledge, ideation and behavior for HIV prevention than those who did not recall any messages: knowledge (7.4% higher), ideation (12.7% higher), and safer sexual behavior (5.0% higher) (P of Z-test < 0.05).
Table 1. Unadjusted differences in HIV preventive knowledge, ideation and behavior between exposed and non-exposed youths.
Discussions and conclusions
In this community-based intervention study, we sought to estimate the effects of launching communication campaigns for HIV prevention, with the participation of local communities, on increasing ethnic minority youth's HIV prevention behaviors. We did so using a fairly large sample of such youth and methods that can greatly reduce selection bias. Because we could not randomly assign local youth to receive or not receive communication messages, we used PSM techniques to contrast the HIV preventive behaviors of ethnic minority youth who did and who did not receive these campaigns but who had been matched on a wide range of observed background characteristics. We also conducted benchmark analyses using the trend test, Chi-square tests of proportion differences and logistic regression before further examination using PSM analysis.
Evidence of the effect of participatory community communication
Our analyses indicate that the PCC campaigns provided to Vietnamese ethnic minority youth over almost one year were of sufficient strength to prevent risk behaviors for HIV infection. We found that PCC had positive effects on ethnic minority youth's HIV preventive behaviors. Youth displayed increased levels of knowledge and ideation for HIV prevention as their levels of exposure to HIV preventive messages increased. These results are consistent with data in Kincaid's research [27] indicating that the levels of contraceptive knowledge and ideation among the Philippine people increased according to the increased levels of recall of communication messages, and that contraceptive behaviors such as condom use increased as knowledge and ideation increased. Research in Africa also showed a pattern similar to our study [34]. In contrasting the proportion differences, we found that youth receiving PCC campaigns demonstrated significantly higher knowledge and ideation than closely matched peers not receiving such campaigns. Youth displayed statistically different knowledge and ideation gains between the two groups, with the exposed group displaying higher proportions than the other group. Likewise, the exposed (or recall) group also showed a higher proportion of condom use than non-exposed counterparts; however, the two groups' condom use proportions were statistically equivalent. Interpreting this lack of statistically significant effects on condom use would be problematic if based only on simple statistical devices such as the Chi-square test for contrasting this effect. According to Morgan [35], using univariate analysis to interpret the effects of an intervention may be greatly biased.
Table 3. The net differences in the percentage of HIV preventive ideation and behavior (condom use) between the two groups adjusted by PSM. # indicates the number of balanced blocks, i.e., statistically insignificant differences in characteristics between the non-exposed and exposed groups, meaning that the propensity score was balanced when stratified into 6 blocks, which allows the net impact of the intervention to be calculated by comparison between the two groups. *P < 0.05; **P < 0.01; ***P < 0.001.
Figure 3. Comparison of the unadjusted increase in HIV preventive knowledge, ideation and condom use to the increase adjusted by PSM (N = 800).
To further examine the effects of PCC, we continued the analysis using multivariate regression procedures. After adjusting for socio-demographic variables, we found that youth who were more exposed to HIV preventive information were more likely to recall HIV preventive messages (OR = 1.37; P < .001). Youth who accessed HIV preventive information and recalled communication messages were more likely to display a higher proportion of HIV preventive ideation (OR = 3.24; P < .01). Youth who displayed a higher proportion of HIV preventive ideation were more likely to engage in condom use when having sex with a sexual partner (OR = 2.38; P < .001). However, recall of HIV preventive messages was not statistically associated with condom use. These results indicated that recall indirectly affected condom use through ideation; in other words, ideation played an intervening or mediating role between exposure to the intervention (recall) and behavior. It appears evident that the PCC campaigns first changed HIV preventive knowledge and ideation of the youth, and then influenced their behaviors. This result is in accord with data by Kincaid [27] and Kincaid and Do [18] suggesting that interventions have an indirect impact on an HIV preventive behavior only as mediated by their impact on ideation.
The above results and discussion show significant effects of the PCC on a number of outcomes, including knowledge, ideation and condom use behavior, among the ethnic minority youth. However, in those analyses the youth had not yet been matched on their propensity to receive or not receive the communication campaign; as a result, such estimates may remain biased [18-20,27]. From these benchmark results, we proceeded to estimate the effects of the intervention based on the PSM, as shown in Table 3. The results indicate significant differences in the proportions of youth displaying HIV preventive knowledge, ideation and behavior (condom use) between youth who were exposed to and recalled communication messages and those who were not exposed and did not recall them, but who had been matched on their propensity to be exposed to and recall HIV preventive messages. The combination of the two main analyses, unadjusted estimates (trend, Chi-square) and adjusted estimates (multivariate regression, PSM), makes the justification of the net effects of the intervention possible. The three equations (represented in Table 2) controlled for potentially confounding (socio-demographic) variables that might affect behavior. After controlling for these variables, the recall of HIV preventive messages had a significant effect on knowledge, ideation and behavior. The potential effect of unobserved variables (not in the equations) and the reciprocal effect of behavior on ideation and recall were ruled out by the statistical tests for endogeneity. The only criterion missing for a causal inference was a counterfactual condition, which could have been provided only in a controlled experimental design. In our study, the counterfactual condition was approximated by the PSM technique in order to create a matched control group, so that the comparison of net differences would be possible. However, acceptance of such a causal inference for recall and ideation on condom use does not necessarily mean that other causes were not also operating. Youth may have been exposed to other sources of HIV prevention, such as the internet or other projects. But at least in this study we argue that the effects on knowledge, ideation and condom use are a result of the PCC per se, which was designed and delivered by our interventions, because we measured the recall of the key contents provided by those campaigns. Comparing the treatment group and the matched control group, a percentage increase of 7.4, 12.7 and 5 in HIV preventive knowledge, ideation and condom use behavior, respectively, after the communication messages were delivered through a variety of community-based communication campaigns may sound small. However, because the sample of 800 represents a population of 2,844 ethnic minority youth, the actual net increase in the number of youth displaying knowledge, ideation and condom use for HIV prevention is estimated to be 210, 361 and 142, respectively. These proportions of preventive change are in the mid-range compared with those for contraceptive and condom use among married spouses in the Philippines [18,27], as well as condom use and HIV testing among residents in Africa [34].
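For concreteness, the population-level counts follow directly from scaling the net percentage differences by the estimated population size:

2,844 × 0.074 ≈ 210; 2,844 × 0.127 ≈ 361; 2,844 × 0.050 ≈ 142.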
In this study, we produced these estimates of PCC's effectiveness using a rigorous methodology. We used multiple items to measure the outcomes of interest, namely HIV preventive knowledge, ideation and behavior. Each of these measures displayed comparatively strong psychometric properties. We also combined both unadjusted and adjusted analytical methods, especially PSM, in estimating the campaigns' impact. Our participants were drawn from a fairly large representative sample of Vietnamese ethnic minority youth. Collectively, these methods yielded the same general pattern of findings, which helps ensure that flawed methodology is an unlikely explanation for the study's findings.
Limitations
Our study has several limitations. First, the propensity score model for recall of HIV preventive messages includes variables and factors, many of which were identified in prior research as predictive of the receipt of communication campaigns. However, our model may not have incorporated additional variables that predict a youth's receipt of such campaigns. Therefore, the results may be affected by hidden bias from omitted variables. However, we did not find any evidence of such bias in our analyses. The test for endogeneity, used to assess indirectly whether any variables not included affected the models, showed a small rho coefficient (P > .05), suggesting there was no or little hidden bias.
The research is also affected by the limitations that apply to using self-report measures for sensitive issues such as sexual behavior. Recall and reporting bias, which may give rise to under-estimation, would be inherent. However, as this study was designed as an anonymous and confidential survey, such bias was expected to be partly reduced. We conducted an intervention study using a design that was not truly longitudinal: we launched the interventions first and only thereafter carried out a post-intervention sample survey; therefore, this cross-sectional design may limit inference about the order of causality.
Our study was designed to provide a general, overall estimate of PCC's effects. Our intent-to-treat analyses provide estimates of PCC's "use effectiveness" rather than its "method effectiveness". We currently cannot offer detail on the effectiveness of specific types of behavioral change communication in rural and remote settings. For example, we are unable to say whether participatory community intervention, as implemented in the current study, was more or less effective than a non-participatory approach. Further, our point estimates of the communication campaigns' effects are limited to ethnic minority youth in central Vietnam and therefore may not generalize across the country.
Conclusions and implications of the study results
This study, to our knowledge, is the first to examine the effects of community-based intervention campaigns for HIV prevention applied to ethnic minority youth. The findings carry public health implications both theoretically and practically. Consistent with the literature, our study supports the indirect effect of communication (message recall) on behavior, or the intervening role of ideation between communication and behavior, as shown in the theoretical framework. This suggests that designs of intervention and evaluation should include ideation in addition to communication in order to obtain a holistic theoretical model to support the research. Communication interventions should design campaigns that maximize actual exposure, such as recall of communication messages, and improve knowledge and ideation, so that the campaigns are more likely to increase practices of healthy behavior. This study therefore will hopefully lead to an increased understanding of how to improve perceptions and change behaviors for preventing HIV in ethnic minority young communities.
Using PCC as an intervention approach, this research helps empower local people to address their own problems themselves. We involved local ethnic minority people and youth in identifying their own health problems, developing their local stories for dramas and participating in a number of activities such as HIV preventive dramas, open dialogues, knowledge contests, meetings, and health checks and treatment for STDs, among others. We believe that the knowledge and skills obtained from the current interventions will help ethnic minority youth continue to address their unaddressed health problems, such as risky behaviors for HIV as well as other problems, even after our interventions end. The study has therefore made a significant contribution to achieving sustained results and outcomes. It is recommended that the PCC model be rolled out to other rural areas in Vietnam and possibly to similar contexts in developing countries.
Given the limitations of the current study, it is recommended that future research consider a longitudinal design in other parts of the country, such as southern and northern Vietnam, in order to support temporal inference and generalization. Future research may also seek to compare and contrast the impact of PCC with that of traditional approaches. In doing so, there remain substantial opportunities to prevent the spread of HIV/AIDS among ethnic minority youth and other population groups, depending on the commitment, determination, and effort of researchers, policy makers and practitioners. | 2014-10-01T00:00:00.000Z | 2012-03-08T00:00:00.000 | {
"year": 2012,
"sha1": "8db4436368f47ee3fe638ab43ed230c819cfc554",
"oa_license": "CCBY",
"oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/1471-2458-12-170",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8db4436368f47ee3fe638ab43ed230c819cfc554",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14136686 | pes2o/s2orc | v3-fos-license | Floquet-Bloch Theory and Its Application to the Dispersion Curves of Nonperiodic Layered Systems
Dispersion curves play a relevant role in nondestructive testing. They provide estimations of the elastic and geometrical parameters from experiments and offer a better perspective to explain the wave field behavior inside bodies. They are obtained by different methods. The Floquet-Bloch theory is presented as an alternative to them. The method is explained in an intuitive manner; it is compared to other frequently employed techniques, like searching root based algorithms or the multichannel analysis of surface waves methodology, and finally applied to fit the results of a real experiment. The Floquet-Bloch strategy computes the solution on a unit cell, whose influence is studied here. It is implemented in commercially available finite element software, and increasing the number of layers of the system does not bring additional numerical difficulties. The lateral unboundedness of the layers is implicitly taken care of, without having to resort to artificial extensions of the modelling domain designed to produce damping, as happens with perfectly matched layers or absorbing regions. The study is performed for the single layer case and the results indicate that for unit cell aspect ratios under 0.2 accurate dispersion curves are obtained. The method is finally used to estimate the elastic parameters of a real steel slab.
Introduction
Floquet-Bloch (hereafter F-B) theory provides a strategy to analyze the behavior of systems with a periodic structure. Floquet's seminal paper dealt with the solution of 1D partial differential equations with periodic coefficients [1]. In solid state physics, Bloch generalized Floquet's results to 3D systems and obtained the description of the wave function associated with an electron traveling across a periodic crystal lattice [2]. This wave function is a solution of the Schrödinger equation with a periodic potential, and Bloch showed that it was the product of a simple plane wave multiplied by a periodic function with the same periodicity as the lattice. The mathematical description of these ideas, in the context of quantum mechanics, can be found in [3,4].
In the literature dealing with wave propagation problems in mechanical systems, the theory is referred to as Floquet-Bloch theory or, simply, Floquet theory. In layered systems, due to the heterogeneity of the relevant elastic properties, to particular geometric features, or to both, only certain wave modes can physically propagate inside the structure [5]. Each of these modes can be identified by a determined (generally nonlinear) function relating the time frequency and the spatial frequency (or wave number). These relationships are called dispersion curves and, as they summarize all the oscillatory behavior of the system, their calculation is of paramount importance in NDE applications [6].
Vibrations also occur in objects with periodic structure [7]. These problems usually admit a separation between the time-dependent and space-dependent parts of the solution. For instance, the Helmholtz equation is a well-known example of an equation describing the spatial behavior [8]. There, the physical periodic structure of the studied object translates into spatial periodicity of its coefficients. Therefore, the F-B theory has been applied to obtain the dispersive properties of different mechanical periodic systems [8-12].
Many relevant structures can be assumed to be layered systems of infinite extent, for example, [13,14] in civil engineering constructions, [15,16] in optics, or [17] in electromagnetics. Therefore, theoretical methods and experimental techniques to obtain their dispersion curves have been devised. From the theoretical side, different matrix techniques have been developed to address the calculation. They involve numerical computational methods whose complexity increases with the number of layers in the system [18,19]; see also, more recently, [6].
In laboratory experiments or field work, the dispersion curves can be obtained using, for example, the multichannel analysis of surface waves (MASW) method. The MASW procedure involves collecting equally spaced measurements of vibration along a profile on the system surface using, for example, accelerometers. The resulting 2D space-time discrete image is Fourier-transformed to the frequency-wave number (f, k) domain and then processed to build the dispersion curves [20,21]. The method has some drawbacks inherent to the limitations of the Fourier transform, which will be discussed later. The MASW has been applied successfully in the characterization of pavement systems [13], as a seismic data acquisition technique [20], and for geotechnical characterization [22]. The MASW strategy is here also used to perform a computer numerical simulation of the system, closely mimicking the field setup. The issue of infinite lateral extent is usually tackled by using perfectly matched layers (PML) [23-26], as has been done here, or absorbing regions. Both techniques present drawbacks [26].
In this paper, an alternative way to calculate the dispersion curves of layered systems with infinite lateral extent using the F-B theory is presented. The method has never been applied to the dispersion curve calculation of nonperiodic layered systems. Here it is used to obtain the dispersion curves of a single layer case and to estimate the elastic parameters of a real steel slab, to demonstrate the method. However, the novelty of this work is that it can be applied to an arbitrary number of layers, even if the layers are anisotropic or orthotropic, with the same complexity level. The power of the method is that the equations are solved by the finite element software, because the F-B theory only affects the propagation term, which is the same whatever the nature of the layers. It is not necessary to develop the equations for each specific problem and to generate complex codes to get the dispersion relations.
The F-B theory reduces the problem to calculations performed in the so-called unit cell, subject to certain specific boundary conditions derived from the F-B theory and elastodynamics. The influence of the size of the unit cell is ascertained. The results are first compared with the dispersion curves derived from the Rayleigh equations [5], solved by a searching root numerical method. Comparison is also made with the curves resulting from a FEM computer simulation followed by a 2D Fourier transformation. Finally, a real experiment was performed on a steel slab employing the MASW method, and the empirical dispersion curves were compared with the analytical and numerical ones.
The results show that the F-B method compares favorably with other methods and fits the empirical data accurately, providing a good alternative to obtain dispersion curves in layered systems. The F-B technique can be run on a finite element package like COMSOL Multiphysics, can be applied to an arbitrary number of layers in the system with the same complexity level, and eliminates issues of infinite lateral extent.
Floquet-Bloch Theory: Explanation
The Floquet-Bloch theory provides a strategy to obtain a set of solutions of a linear ordinary differential equation system of the form f'(t) = M(t)f(t), where f(t) is the solution vector and the matrix M is periodic, such that M(t + T) = M(t) for a certain period T. At first sight it might seem that the solution of such a problem would have to be also T-periodic. But Floquet showed that this need not be so. There exists, however, a simple relationship between the solution's behaviors inside one period and outside it. If F(t) is a fundamental matrix of solutions, then another matrix B can be found such that F(t + T) = F(t)B. B can be constructed by setting t = 0 in this relation, such that B = F⁻¹(0)F(T). The simplest case is obtained using F(0) = I, so that B = F(T). As there is not a unique choice for the fundamental matrix F, and how it is exactly chosen depends on the problem, B is also not unique. But its eigenvalues are intrinsic to the problem and, under the right transformation, can be used as a propagator or evolution factor relating the value of the solution at a point inside the period with its value at a point outside of it. Only the solution inside a period is, therefore, needed, verifying that f(t + T) = ρf(t). Following the classical nomenclature, ρ = exp(μT) is known as the Floquet multiplier, μ being the complex Floquet exponent. Moreover, Floquet found that the solution at any point can also be factored into two terms: f(t) = exp(μt)p(t). Here p(t) is a periodic function, playing the role of the eigenvectors if M were a constant matrix and carrying the periodicity of the coefficients of the problem. The complex exponential distorts the strict periodicity of p(t), incorporating damping or ever-growing effects in the amplitudes of the solution depending on the value of |ρ|. This is why solutions are, in general, not periodic and also why the Floquet perspective is usually employed to study their stability. A solution will be stable if the Floquet multipliers verify |ρ| ≤ 1 [27].
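The construction above lends itself to a direct numerical check. The following sketch (not from the paper; the Mathieu-equation example, parameter values, and all names are illustrative assumptions) builds the monodromy matrix B = F(T) by integrating the fundamental matrix over one period and reads the Floquet multipliers off its eigenvalues:

```python
# Minimal sketch: Floquet multipliers of the Mathieu equation
#   x'' + (delta + eps*cos(t)) x = 0, written as f'(t) = M(t) f(t), period T = 2*pi.
import numpy as np
from scipy.integrate import solve_ivp

delta, eps = 1.0, 0.2            # illustrative coefficients (assumed)
T = 2.0 * np.pi                  # period of the coefficients

def rhs(t, f):
    x, v = f
    return [v, -(delta + eps * np.cos(t)) * x]

# Integrate the columns of the fundamental matrix F(t) with F(0) = I,
# so that the monodromy matrix is B = F(T).
cols = []
for f0 in ([1.0, 0.0], [0.0, 1.0]):
    sol = solve_ivp(rhs, (0.0, T), f0, rtol=1e-10, atol=1e-12)
    cols.append(sol.y[:, -1])
B = np.column_stack(cols)

rho = np.linalg.eigvals(B)       # Floquet multipliers rho = exp(mu*T)
print("multipliers:", rho)
print("stable:", bool(np.all(np.abs(rho) <= 1.0 + 1e-9)))
```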
Guided Waves in Layers: Analytical Dispersion Curves
The guided wave propagation problem in a homogeneous, isotropic, and infinite single layer has been widely treated in the literature [5,28]. In this paper, we follow the theory developed in [5]. So consider an infinite (in the x-direction), homogeneous, isotropic, and elastic layer with thickness 2h, as shown in Figure 1.
The conservation of momentum equation plus traction-free boundary conditions leads to a system of equations that produces a solution when its determinant vanishes. In the absence of body forces the equation reads ∇C(x, z)∇ₛ(u(x, z, t)) = ρ ∂²u/∂t², (5), where u is the displacement vector, C(x, z) is the elastic constants tensor, and ∇ₛu(x, z, t) is the strain tensor. By the Helmholtz decomposition, u = ∇φ + ∇ × H, where φ is a scalar field and H is a vector field. For the plane strain case, the P-SV and SH fields are decoupled. Considering the P-SV field, the y-component of the displacement field and the partial derivatives with respect to y vanish. A solution of the form u(x, z, t) = f(z) exp(i(kx − ωt)) appears. It expresses the propagation of a shape f(z) (Figure 1), trapped in the thickness of the layer, along the x-direction with wave number k and frequency ω.
The dispersion relation for the single infinite layer case is [5] tan(qh)/tan(ph) = −[4k²pq/(q² − k²)²]^(±1), (6), with p² = ω²/c₁² − k² and q² = ω²/c₂² − k², where c₁ is the compressional wave (P) velocity and c₂ the shear wave (S) velocity. Expression (6) is known as the Rayleigh-Lamb equation (R-L). The plus sign in the R-L expression corresponds to the symmetric modes and the minus to the antisymmetric ones (Figure 1).
Calculation of the Dispersion Curves from the Analytical Rayleigh-Lamb Equation
The R-L equation is transcendental, so no closed analytical solution is available. It can, though, be cast into a form amenable to the use of iterative, root-finding, local algorithms, but these present various difficulties arising from the nature of the equations. First, due to the tangent functions, the left-hand side is discontinuous at certain points, where local algorithms for smooth functions will find difficulties [29]. Due to this and to the sampling rate characteristics of the searching algorithm, some roots might be missed.
A visual inspection of the pattern followed by the dispersion curves on a phase velocity versus frequency (V_ph − f) plot (Figure 1) clearly shows the presence of nearly vertical and horizontal stretches with physical relevance. For instance, the horizontal line towards which the modes A0 and S0 converge contains the information of the so-called Rayleigh waves [30]. The S1 mode, on the other hand, has a point with vertical tangent, that is, minimum wave number, in an otherwise nearly vertical portion of the curve, with the most important property of having zero group velocity, producing then a useful resonance [31].
For local root searching methods, like the Newton-Raphson method [32], the strategy consists in using one found root as a seed to calculate the next point. Methods performing a 1D search with a fixed value of f (resp., V_ph) will very poorly characterize a nearly vertical (resp., horizontal) portion of the curve, unless the sampling rate is extremely dense. Other methods start from the frequencies at zero wave number, where the roots can be calculated analytically, and try to follow up each curve with some sort of linear prediction [33]. Problems arise at the mode intersections (Figure 1). It has been suggested that the search grid should locally mimic the behavior of the dispersion curves [34]. At any rate, the methods are computationally intensive for useful tolerances, and their extension to the case of more layers, where the curve pattern is much more intricate, is inefficient.
In this paper, the root loci of the R-L equation have been obtained using the bisection method, which always converges but is very slow and fails wherever multiple roots exist in the proposed interval. A pair of time frequency and phase velocity values are input to the R-L equation and its sign is evaluated. Keeping the frequency fixed, the velocity is varied with a step of ΔV = 200 m/s, from 0 to 10⁴ m/s. When the sign changes, the bisection method is iteratively applied to obtain a root with an accuracy of 0.3 m/s. The process is repeated for different frequency values and the roots are plotted in the (V_ph − f) plane (Figure 1).
The elastic parameters input to the R-L equation are thickness 2h = 0.045 m, longitudinal wave (P) velocity c₁ = 5800 m/s, and shear wave (S) velocity c₂ = 3200 m/s, because they will be shown to produce the best fit to the empirical tests performed on a steel slab (Section 6).
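The scan-and-bisect procedure just described can be sketched as follows; this is a minimal rendering under the reconstructed R-L notation, with assumed function names, and it deliberately does not filter the spurious sign changes caused by the tangent poles, which the authors note as a failure mode:

```python
# A minimal rendering of the fixed-frequency scan plus bisection described above.
import numpy as np

h = 0.045 / 2            # half-thickness [m] (total thickness 2h = 0.045 m)
c1, c2 = 5800.0, 3200.0  # P and S wave velocities [m/s]

def rl_residual(f, v, sym=True):
    """Real-valued R-L characteristic function at frequency f, phase velocity v."""
    w = 2.0 * np.pi * f
    k = w / v
    p = np.sqrt(complex(w**2 / c1**2 - k**2))
    q = np.sqrt(complex(w**2 / c2**2 - k**2))
    if sym:   # symmetric modes
        d = np.tan(q * h) / q + 4.0 * k**2 * p * np.tan(p * h) / (q**2 - k**2) ** 2
    else:     # antisymmetric modes
        d = q * np.tan(q * h) + (q**2 - k**2) ** 2 * np.tan(p * h) / (4.0 * k**2 * p)
    return d.real

def scan_roots(f, dv=200.0, vmax=1e4, tol=0.3, sym=True):
    """Scan v in steps of dv; refine each sign change by bisection to tol [m/s].
    NOTE: sign changes across the tangent poles are spurious roots; a practical
    code must filter them, which this sketch does not do."""
    roots, v = [], dv
    while v + dv <= vmax:
        a, b = v, v + dv
        fa, fb = rl_residual(f, a, sym), rl_residual(f, b, sym)
        if np.isfinite(fa) and np.isfinite(fb) and fa * fb < 0:
            while b - a > tol:                 # bisection always converges on a bracket
                m = 0.5 * (a + b)
                if rl_residual(f, m, sym) * rl_residual(f, a, sym) <= 0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
        v += dv
    return roots

print(scan_roots(20e3, sym=False))   # candidate antisymmetric-mode velocities at 20 kHz
```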
Some of the problems discussed above can be clearly seen in the (V_ph − f) representation (Figure 1(c)). The upper part of the modes (S1, S2) was impossible to calculate because, due to their almost infinite slope, the number of roots there may be found to be infinite or zero. Starting, instead, from a fixed velocity shifts the difficulty to the near horizontal segments of the curves. Besides, a regular searching grid in the (f − k) plane becomes an irregular sampling in the (V_ph − f) domain, where some stretches may not be sufficiently well characterized. The proposed F-B based method will allow an arbitrary degree of accuracy in the calculation of the curves, with the same amount of complexity independent of the number of layers, without suffering from these problematic issues. This might also be an advantage for the systematic performance of sensitivity analyses, testing a relevant number of perturbed models around one found solution.
Dispersion Curves Calculation Using FEM with a MASW Type Scheme
The multichannel analysis of surface waves (MASW) methodology is a procedure to numerically or empirically calculate dispersion curves. The strategy is to measure (or obtain numerically), on the surface of the system, the wave field at a number of equally spaced points, taking readings at a certain temporal sampling rate [35]. This sampled 2D time-space field is then Fourier-transformed into the 2D time frequency-wave number domain. A continuous surface of amplitudes is then obtained in this (f − k) domain using interpolation or fitting methods. The dispersion curves can be obtained as the loci of local maxima of that surface. The MASW method has been discussed in the literature [20,36,37].
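A minimal sketch of the transform step just described is given below; the record array, sampling values, and the simple per-frequency maximum picking are illustrative assumptions, and real data would replace the random placeholder:

```python
# MASW transform step: a space-time record u[t, x] is 2D-Fourier-transformed
# to the (f, k) domain and the dominant maxima are picked per frequency.
import numpy as np

dt, dx = 1e-5, 0.05             # time step [s] and receiver spacing [m]
nt, nx = 2048, 40               # time samples and number of receivers
u = np.random.randn(nt, nx)     # placeholder for the measured accelerations

U = np.fft.fft2(u)                              # 2D FFT over (t, x)
f = np.fft.fftfreq(nt, d=dt)                    # temporal frequencies [Hz]
k = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)      # angular wave numbers [rad/m]

A = np.abs(U[: nt // 2, :])     # amplitude surface, positive frequencies only
# The strongest maximum per frequency row approximates the dominant mode;
# tracing all modes requires picking local maxima over k instead.
k_pick = np.abs(k)[np.argmax(A, axis=1)]
v_ph = np.where(k_pick > 0, 2.0 * np.pi * f[: nt // 2] / k_pick, np.nan)
```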
Given the inverse relationship between the length of the profile on the surface and the wave number sampling interval and, correspondingly, between the range of sensed wave numbers and the distance between adjacent sensors [38], field constraints (i.e., of logistic or economic type) on the feasible profile length or on the number of sensors available for use do influence the representation. Reciprocity [21] alleviates the difficulty, allowing one sensor to be kept fixed while moving the impactor, in the so-called multichannel record with one receiver (MROR) technique.
When the scheme is applied to perform a numerical calculation on guided waves, the simulation domain has to be finite. This brings the problem of unwanted reflections and mode conversions at those boundaries, coming back into the relevant domain and corrupting the signal. The naive option of extending the domain implies increasing the number of nodes and the computation times, and assumes that boundary reflection events are separated in time from the studied events, which might not be possible. The use of so-called absorbing regions, where the wave field enters and is computationally absorbed, has been treated in the literature [23-26]. For instance, perfectly matched layers (PML) or absorbing layers using increasing damping (ALID) have been frequently employed. PML are regions attached to the boundaries where the wave enters and decays exponentially [25,26]. In ALID, the domain is enlarged with layers of the same material but with increasing damping parameters [39]. Although both are implemented in commercial finite element codes, the success of both techniques relies on iteratively finding the optimum design parameters. This can be time consuming and varies with the characteristics of the system one is interested in.
In this case, COMSOL Multiphysics has been used for the simulation. The characteristic parameters employed for the PML are a scaling factor of 1 and a curvature parameter of 1 (Figure 2).
Numerical Implementation.
In this section, the results of the numerical simulations using the FEM software COMSOL Multiphysics, following a MASW procedure and employing PML to take lateral unboundedness into account, are presented. The dispersion curves will be discussed and serve as a reference to be compared with those obtained by the searching root algorithm (Section 3), those calculated using the F-B approach (Section 5), and the field curves presented in Section 6. The elastic parameters are the same as in Section 3, and the thickness is now 10 cm. The length of the simulated profile is 2 m and that of the PML is 1 m.
A frequency domain study has been performed with a step of 100 Hz, sweeping frequencies up to 50 kHz with uniform energy distribution. Forty point accelerometers are separated by Δx = 5 cm. The results after the 2D Fourier transform and interpolation are shown in Figure 3. Some observations are in order. First, certain portions of some modes are not excited [40-42]. This affects mainly the lower frequency parts of the zero order symmetric mode (S0). Had a half-cycle sinusoidal function been used as the impact model [43], the frequency energy input at lower frequencies would have been relatively higher, the A0 low frequency part would be visible, and the now visible higher frequency branch would be absent. This argument does not affect the S0 mode.
The empty triangular space in the bottom right part of Figure 3 corresponds to pairs of frequency-wave numbers impossible to reach with a sensor separation of Δx = 5 cm. For this sampling distance, the highest representable wave number is k_max = 62.82 m⁻¹. As V_ph = 2πf/k, the phase velocity always stays above V_min = 2πf/62.82 for any given frequency.
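These sampling limits follow directly from the receiver spacing; a quick check, using the values quoted in the text, is sketched below:

```python
# Sampling limits of the MASW layout: Nyquist wave number and minimum
# representable phase velocity per frequency (values from the text).
import numpy as np

dx = 0.05                        # receiver spacing [m]
k_max = np.pi / dx               # highest representable wave number [rad/m]
print(f"k_max = {k_max:.2f} rad/m")   # ~62.83, matching the quoted 62.82 m^-1

for f in (10e3, 25e3, 50e3):     # example frequencies [Hz]
    v_min = 2.0 * np.pi * f / k_max
    print(f"f = {f/1e3:4.0f} kHz -> v_min = {v_min:7.1f} m/s")
```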
Additional numerical artifacts arise due to the finite length of the profile. As this is mathematically equivalent to multiplication by a boxcar window, it generates, in the wave number domain, a convolution with a sinc function. This effect is zoomed in on in Figure 4. This spatial convolution shows up (Figure 4(b)) as a spurious repetition of some branches. For the more complex patterns present in systems with more than one layer, the real position of the dispersion lines becomes more uncertain.
All the described difficulties with the MASW method are absent in the Floquet-Bloch technique.
Floquet-Bloch Theory and Guided Waves in Layers
The equation of motion for the single layer case was presented in Section 3 (5). To solve it, it is necessary to try plane wave type solutions: u(x, z, t) = f(z) exp(i(kx − ωt)). (8) Equation (8) establishes that the displacement solution of the problem is a certain shape f(z), trapped in the thickness of the system, which propagates in the x-direction with a certain wave number k and frequency ω along the infinite lateral extension (Figure 1).
The layer is considered infinite; however, the solutions can be computed over a finite computational domain (unit cell), subject to certain boundary conditions. Consider an infinite layer as shown in Figure 5.
According to [43], layer behavior will appear experimentally if both in-plane dimensions of the layer are at least ten times the thickness. This condition is met in the experimental study, taking into account the values presented in Figure 7.
Consider now the spatial part of (8): u(x, z) = f(z) exp(ikx). (9) Therefore, the displacement field at the left side of the unit cell (Figure 5(c)) will be related to the displacement at the right side, such that u(x + L, z) = f(z) exp(ikx) exp(ikL) = u(x, z) exp(ikL). (10) Note that the function f(z) does not change (Figure 5(c)). It defines the form of the considered mode, and it is only the propagative term (the exponentials) which defines the propagation of f(z).
Moreover, a laterally infinite layered system is a trivially periodic medium in the propagation x-direction. Because of this, the F-B theory states that the solution of the problem can be written as u(x, z) = u_FB(x, z) exp(ik_FB x), (11) for a certain periodic function u_FB(x, z) and F-B exponent k_FB, where an nth dependency appears due to the relationship between the wave number k and the F-B exponent k_FB.
From (9) and (11), the solution of the problem will be equivalent if the wave vector k and the F-B exponent k_FB are related as k = k_FB + 2πn/L, (12) where L is the length of the unit cell in the propagation direction (Figures 5(b) and 5(c)), such that substituting (12) into (9) leads to u(x, z) = [f(z) exp(i2πnx/L)] exp(ik_FB x), (13) where now the term u_FB = f(z) exp(i2πnx/L) is clearly periodic. However, due to the nth-dependent term, for a fixed value of the F-B exponent k_FB, n values of the wave vector k appear. Now, the relation of the solutions at the left and right sides of the unit cell can be used to define the proposed F-B boundary conditions as u(x + L, z) = u(x, z) exp(ik_FB L). (14) Due to the relationship between k_FB and the nth-dependent periodic term, the dispersion relation ω(k) becomes ω(k_FB), which can be calculated over a unit cell of arbitrary length by solving the following eigenvalue problem (Figure 5): ∇C(x, z)∇ₛ(u_{ω,FB}(x, z)) = −ρω²u_{ω,FB}, (15) where u_{ω,FB} is the displacement field, subject to the F-B conditions (14) on the lateral sides and to vanishing z-direction components of the stress tensor [∇C(x, z)∇ₛ(u_{ω,FB})] on the traction-free surface boundaries of the unit cell. The theory presented in this section, applied to the lateral sides of the unit cell, can be used for any kind of layered system, whatever the nature of the layers (isotropic or anisotropic) and their number. The reason is that the proposed F-B boundary conditions affect only the propagative part of the solution (x-direction), which appears in all terms of the equations when the problem is treated in a classical analytical way and is simplified, disappearing from the equations.
Therefore, the complicated part of the problem, which is to obtain the function f(z) (i.e., the propagation modes), is solved by the finite element software. This kind of software allows solving a huge variety of complicated systems which are intractable analytically. This is the case, for example, of layered systems with a large number of layers or when the layers are anisotropic composites.
However, the perspective for getting analytical solutions is always the same: to obtain a (generally complicated) function in the thickness direction (the mode) which propagates laterally. In this approach, the propagation part is always extracted from the equations when infinite layered systems are considered.
Because of the above reason, the theory applied to define the F-B boundary conditions can be used for any complicated system, as long as it is laterally infinite. The boundary conditions affect only the propagation part and allow converting the infinite analytical problem into a finite problem, which can be solved with commercial finite element software. Therefore, it is not necessary to develop (generally complicated) equations for each specific problem and to implement complex numerical codes based on searching root algorithms [44,45].
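To make the role of the F-B boundary conditions concrete, the following sketch solves a 1D scalar analog of the eigenvalue problem (15); this is an assumption for illustration, not the paper's 2D elastic unit cell. It discretizes the Helmholtz problem on a cell of length L with the Bloch condition u(x + L) = u(x) exp(ik_FB L); for a homogeneous medium the computed eigenfrequencies recover ω_n = c|k_FB + 2πn/L|, which is exactly the nth-dependent folding discussed above:

```python
# 1D analog of the F-B eigenvalue problem: -c^2 u'' = w^2 u on [0, L] with
# Bloch boundary condition, discretized by second-order finite differences.
import numpy as np

c, L, N = 3200.0, 0.5, 200          # wave speed [m/s], cell length [m], grid points
dx = L / N
k_FB = 2.0                           # Floquet-Bloch wave number [rad/m] (assumed)

# Second-difference matrix with Bloch-phased wraparound (Hermitian by construction).
A = np.zeros((N, N), dtype=complex)
for j in range(N):
    A[j, j] = -2.0
    A[j, (j + 1) % N] += 1.0
    A[j, (j - 1) % N] += 1.0
A[0, N - 1] *= np.exp(-1j * k_FB * L)    # u_{-1} = u_{N-1} exp(-i k_FB L)
A[N - 1, 0] *= np.exp(+1j * k_FB * L)    # u_{N}  = u_{0}   exp(+i k_FB L)

w2 = np.linalg.eigvalsh(-(c**2 / dx**2) * A)     # eigenvalues are w^2
w = np.sqrt(np.clip(w2, 0.0, None))

# Compare against the zone-folded analytic dispersion w_n = c|k_FB + 2*pi*n/L|
# (small finite-difference error grows with |n| on a coarse grid).
n = np.arange(-3, 4)
print("FD eigenfrequencies  :", np.sort(w)[:7])
print("analytic c|k+2pi n/L|:", np.sort(c * np.abs(k_FB + 2.0 * np.pi * n / L)))
```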
Aspect Ratio Effects.
Using the COMSOL Multiphysics software, the eigenvalue problem (15) can be solved by sweeping different values of k_FB and obtaining the corresponding values ω(k_FB).
However, transforming the problem of the infinite layered system into a finite problem through the F-B boundary conditions introduces artifacts due to the periodicity of the nth-dependent term. For this reason, given a certain F-B wave number k_FB, from (12), n values of the wave number k (and of the eigenfrequencies ω_n) will be obtained.
An example is presented in Figure 6(a), where the value of the F-B wave vector is fixed as k_FB = k₀. For this value, all eigenfrequencies corresponding to the propagation wave vector values derived from (12) are obtained (in the example, only the fundamental one, ω₀, and the first one, ω₁, corresponding to n = 1 and matching the wave vector k₁ = 2π/L + k₀, are presented).
Results and Discussion
Derived from the Proposed Floquet-Bloch Method. Since the calculation is developed in terms of the F-B wave vector, the solution becomes periodic in the wave number (with period 2π/L), and the dispersion representation obtained has all eigenfrequencies inside the first periodic zone of the solution (Figure 6). Due to this, the branches of the dispersion curves are reflected back at the limits of the zone, making it necessary to take into account the aspect ratio of the computational domain (unit cell) in order to get a good dispersion curve representation.
Different aspect ratios of the unit cell, Ra = L/2h (unit cell length over layer thickness), have been explored (Ra = 1, 0.4, 0.2, 0.1) to obtain the dispersion curves. Results for the cases Ra = 1 and Ra = 0.1 are shown in Figure 6. The value of L chosen for the unit cell establishes the periodicity π/L in the wave number domain. The reflected lines intersect the canonical modes in the unit cell, preventing them from being clearly identified (Figure 6(a)). Aspect ratios smaller than 0.2 are usually enough to obtain a representative number of modes defining clear dispersion curves (Figure 6(b)).
In the case Ra = 1 the limit π/L is situated at low wave number values and the reflected branches render the spectrum unclear. However, a good representation is obtained for Ra = 0.1 (emphasized in the blue frame of Figure 6(b)) because the reflected branches arise at higher wave numbers (Figure 6(b)).
A good criterion is to choose the lateral dimension of the computational domain at least five times smaller than the thickness.
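Under the Ra = L/2h reading used above, the criterion can be checked with a few lines; the thickness and band limit below are taken from the simulated case and should be treated as assumptions:

```python
# Aspect-ratio check: with Ra = L/2h, the folded (reflected) branches first
# appear at the zone edge pi/L, so the unit-cell length L should keep pi/L
# above the wave-number band of interest.
import numpy as np

d = 0.10        # layer thickness 2h [m] (simulated case)
k_band = 63.0   # largest wave number of interest [rad/m]

for Ra in (1.0, 0.4, 0.2, 0.1):
    L = Ra * d                   # unit-cell length from the aspect ratio
    k_edge = np.pi / L
    status = "clear" if k_edge > k_band else "folded branches intrude"
    print(f"Ra={Ra:3.1f}  L={L:5.3f} m  pi/L={k_edge:6.1f} rad/m  -> {status}")
```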
Real Test on a Steel Slab and Results
A real NDT experiment has been conducted on the surface of a steel slab, shown in Figure 7. The profile was set up along the symmetry axis and included 57 equally spaced measurement points separated by a distance of 5 cm. An instrumented hammer and accelerometers have been employed, following the MROR method described in Section 4. The accelerometers were threaded, producing a coupling resonance around 12 kHz. An outline of the experiment configuration, together with the relevant dimensions and photographs of the test setup and the impacts generated, is shown in Figure 7.
The shape of the hammer impacts in the time domain is very consistent (Figure 7). The average impact duration is Δt = 143 ± 7 μs. Beyond f = 15 kHz, the amplitudes of the impact spectra are very weak and fall mostly under the measurement noise level.
Results and Discussion
The resulting empirical dispersion curves can be seen in Figure 8(c).
The estimated parameters using the proposed F-B boundary conditions are presented in Table 1.
The estimated errors are obtained with the propagation law (16), taking into account the temporal and spatial frequency steps Δf = 166.7 Hz and Δk = 0.1754 m⁻¹, based on the Nyquist criteria and on the fact that V_ph = 2πf/k. The computation time in the finite element software is under 2 minutes, computing the curves on an i7 PC processor. A fit has been achieved where only the A0 mode is clearly seen in the measurements. There are two main reasons for that. On the one hand, the tip of the hammer only significantly inputs frequencies up to 15 kHz (Figure 7(d)), so that every mode, including the A0, above this frequency will not be seen. On the other hand, the S0 mode is not there because its excitability is very low [42]. Excitability is a concept related to which parts of the different modes are detectable at the surface. These commercial receivers measure the out-of-plane (perpendicular to the surface) acceleration component, which has a very low excitability value at the low frequency part of the S0 mode. The vertical band of high energy near 1-1.2 kHz is the coupling frequency of the accelerometer.
Comparison of the Three Methodologies
This section presents the comparison of the different methods used to obtain the dispersion curves: the searching root method and the numerical MASW method, together with the proposed F-B method, and their ability to match the empirical dispersion curves.
The results have been calculated for a layer with the same thickness as the slab used for the experiment. The numerical MASW simulation performed in COMSOL Multiphysics used the experimental impacts as input. Figure 8(a) presents the dispersion curves obtained with the searching root algorithm and with the F-B method, together with the numerical MASW simulation and the relevant part of the experiment. From Figure 8, the proposed F-B method matches perfectly the analytical solution of the dispersion relation in the layer and the experimental and simulated dispersion curves obtained with the MASW method. As was emphasized in previous sections, the F-B method provides better results in the vertical zones of the modes than the method based on the searching roots algorithm (Figure 8(a)). Another feature is that the F-B method, as an analytical method, is not affected by the excitability concept, as happens with the MASW method, because the F-B method is not based on measurements of the displacement field. The results of the simulated experiment (Figure 8(b)) are again affected by the excitability, and the S0 mode is not obtained, as happens in the real experiment, which might be taken as proof that the S0 absence is not due to pitfalls in the measuring process. The experimental results are affected by the resonant frequencies of the coupling method used for the accelerometers (Figure 8(c)), highlighting that the measuring process could be improved by testing different coupling systems.
Conclusions
The main conclusion of this study is that the F-B theory can be used to compute the theoretical dispersion curves of layered systems with infinite lateral extension over a finite unit cell. The method is applied directly using commercial finite element software and is free of the drawbacks associated with the other numerical procedures used. It is also a tidy method to calculate curves for systems with more than one layer, avoiding ambiguities at crossing points or at portions with particular slopes. Based on the obtained results, aspect ratios of the computational domain with values Ra ≤ 0.2 are enough to obtain a good number of modes in a clear plot representation. The method allows obtaining the associated eigenvectors in (15), so the excitability curves could be obtained too in the same computing process and compared with those results obtained with the MASW procedure. The infinite lateral extension is implicitly taken into account without the need to use ad hoc domain extensions.
Figure 1 :
Figure 1: An infinite homogeneous, isotropic and elastic layer with thickness 2h as part of an infinite 3D layer under plane strain consideration (a). Typical deformation for the symmetric and antisymmetric modes (b), and the dispersion curves in (V_ph − f) representation, for a given thickness 2h = 0.045 m and P and S velocities c₁ = 5800 m/s and c₂ = 3200 m/s, obtained with the searching root algorithm (c). The parts marked with dashed boxes and the intersection points between modes present difficulties for typical searching root algorithms.
Figure 2 :
Figure 2: Typical scheme for the MASW method implementation in FEM software. The computational domain has been set to 3 m, the PML parameters used are a scaling factor of 1 and a curvature parameter of 1, the distance between receivers is 5 cm, the PML size is 1 m, and the thickness is 10 cm.
Figure 3 :Figure 4 :
Figure 3: Dispersion curves for a single layer in frequency-wave number representation (a) and in phase velocity-frequency representation (b). Parts that are lacking in the phase velocity plot (b) correspond to wave numbers greater than the measured ones.
Figure 5 :Figure 6 :
Figure 5: Infinite layer and the selected unit cell (a). Computational domain (unit cell) properties and dimensions (b). F-B boundary conditions applied to the lateral sides of the computational domain (unit cell) for a certain shape f(z) (c).
Figure 7 :
Figure 7: Outline of the experiment assembly (a), a photograph (b), and the impacts generated in the experiment in the time domain (c) and their spectra (d). The average impact duration has been Δt = 143 ± 7 μs.
Figure 8 :
Figure 8: Superposition of the dispersion curves obtained with the numerical searching root method (6) and with the proposed F-B method, for aspect ratio Ra = 0.1 (a). Overlap of the F-B method and the simulated MASW method dispersion curves (b). Overlap of the F-B method and the experimental dispersion curves (c).
Table 1 :
Estimated elastic parameters of a real steel slab using the experimental signals obtained with the MASW method, fitted with the proposed F-B method. | 2015-03-06T19:42:58.000Z | 2015-01-12T00:00:00.000 | {
"year": 2015,
"sha1": "c266b4e2fa42e6ebb250e1b8c6b9359606959f2d",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/mpe/2015/475364.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c266b4e2fa42e6ebb250e1b8c6b9359606959f2d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
268576234 | pes2o/s2orc | v3-fos-license | Iatrogenic Venous Compression Syndrome Following Bilateral Hip Arthroplasty: A Unique Case of Bilateral Femoral Vein Compression in a Patient With May-Thurner Syndrome
Iatrogenic venous compression syndrome is defined by extrinsic vein compression due to medical hardware, and is particularly relevant after joint replacement surgeries. Inserting medical hardware can lead to immediate risks such as deep vein thrombosis and pulmonary embolism due to local tissue inflammation. The long-term issues include venous insufficiency due to chronic vessel irritation, subsequently causing intimal proliferation and thickening. Despite the existing knowledge of venous compression syndromes, iatrogenic cases are severely underreported. Here, we present a unique case of bilateral common femoral vein compression in a patient with May-Thurner syndrome and prior bilateral hip arthroplasty. An 85-year-old man with a history of venous insufficiency and bilateral hip arthroplasty for osteoarthritis presented with bilateral leg edema. Unsuccessful sclerotherapy and radiofrequency ablation led to a referral to a vascular specialist for venous duplex scans, venograms, and intravascular ultrasound. May-Thurner syndrome was revealed in the left common iliac vein, prompting the deployment of an 18 mm × 16 mm stent. Subsequently, during a venogram, what initially appeared to be a vasospasm in the left common femoral vein was diagnosed as extrinsic iatrogenic venous compression due to acetabular hip screws. This was found after two IV injections of 400 mg nitrogen and one balloon angioplasty could not resolve the compression. After advancement over a 0.035" microwire and accurate positioning over the center of the left common femoral vein lesion, a 16 mm × 90 mm stent was deployed. The venogram and intravascular ultrasound also showed a similar compression in the right common femoral vein. Another 400 mg IV nitrogen did not expand the lesion, so it was concluded that there was similarly an iatrogenic venous compression of the right common femoral vein, also due to acetabular hip screws in the right hip. A follow-up was scheduled a couple of weeks later to address the issue in the right common femoral vein. The underreported issue of iatrogenic venous compression following joint replacements highlights the need for better recognition and management of vascular complications due to inflammation and intimal proliferation. This is especially the case in high-risk patients, such as those with May-Thurner syndrome.
Introduction
Iatrogenic venous compression syndrome, defined by extrinsic vein compression, is a complex clinical disorder with diverse causes. It can be caused inadvertently by any medical operation, but it is most often associated with orthopedic and vascular surgery [1-3]. Iatrogenic venous compression is linked to an increased incidence of deep venous thrombosis (DVT), as well as venous insufficiency [4]. The proximity of medical hardware, such as trans-acetabular screws, can cause local bleeding. This results in tissue inflammation and iatrogenic compression of neighboring veins. The link between this compression and an elevated risk of DVT is described by Virchow's triad, which includes endothelial damage, hypercoagulability, and venous stasis. Veins are easily compressed and are prone to stasis and thrombosis due to their thin walls and low pressure [5]. Venous insufficiency is a less common complication of iatrogenic venous compression syndromes. Its presumed pathophysiology is the prolonged presence of medical hardware and its pressure on the vein causing chronic endothelial irritation, intimal proliferation, and progressive narrowing of the vessel, ultimately resulting in significant stenosis [1]. This gradual process explains why most patients experience venous insufficiency symptoms only a few years after surgery [1].
Iatrogenic venous compression syndromes are a subset of iliofemoral venous compression syndromes. These refer to a group of disorders defined by venous structural compression, which is often caused by anatomical, disease-related, or iatrogenic factors. May-Thurner syndrome represents a noteworthy anatomical variant, involving the compression of the left common iliac vein (LCIV) by the right common iliac artery [6]. Similarly, conditions such as Nutcracker syndrome and Paget-Schroetter syndrome involve venous compression owing to distinct anatomical anomalies [6]. Disease-induced variants are evident in instances where enlarged cysts, tumors, and abdominal aneurysms impose compression upon venous structures [7].
In addition to anatomical and disease-related factors, iatrogenic venous compression syndromes have been recognized as a consequence of surgical interventions. However, the precise prevalence of these hardware-related occurrences remains indeterminate, and their documentation is often limited. While iatrogenic venous compression syndromes stemming from surgical interventions are likely infrequent [2,3], they contribute to the broader spectrum of iliofemoral venous compression syndromes.
In this article, we present a unique case in which a patient, in addition to being diagnosed with May-Thurner syndrome, was also diagnosed with a bilateral iatrogenic venous compression, affecting both the right and left common femoral veins, due to prior total hip arthroplasty.
This article was previously presented as a podium presentation at the 2023 VEITHsymposium on November 17, 2023.
Case Presentation
An 85-year-old Caucasian male patient with a history of venous insufficiency and prior bilateral total hip arthroplasty due to osteoarthritis presented with bilaterally swollen, heavy, and fatigued legs. He complained of being slowed down and limited in his mobility, despite having recently undergone unsuccessful sclerotherapy and endovenous radiofrequency ablation treatment of the left and right greater saphenous veins. These interventions provided no significant improvement, as he still experienced symptoms daily.
In June 2023, due to the persistence and exacerbation of symptoms despite prior interventions, the patient was referred for a secondary consultation with a vascular specialist. Upon thorough physical examination, notable findings included evident swelling in the extremities, the presence of varicose veins in the left lower extremity, bilateral telangiectasias, evident hemosiderin staining, and 2+ edema observed on both the right and left sides. Bilateral pitting edema was also documented, while signs of erythema, ecchymosis, and open ulcers were conspicuously absent. Following a venous duplex ultrasound, the patient received a diagnosis of class III varicose veins, classified according to the Clinical, Etiology, Anatomic, Pathophysiology (CEAP) classification system for venous disorders. Despite the application of conservative treatments and the implementation of endovenous radiofrequency ablation on both the left and right greater saphenous veins in the lower extremities, the patient experienced no alleviation of symptoms. In light of this, and with the patient's and family's concurrence, a decision was made to pursue further comprehensive medical assessments to elucidate the underlying causative factors. A follow-up plan was established, with a venogram scheduled to provide continued insight into the condition's progression and inform subsequent steps in the management process.
In August 2023, the patient underwent a comprehensive medical procedure involving a venogram and an intravascular ultrasound (IVUS) assessment, conducted with the aim of excluding the presence of May-Thurner syndrome. The procedure involved gaining venous access through the right femoral vein, followed by the careful advancement of a 0.035" microwire in a retrograde manner to reach the LCIV. Contrast material was subsequently introduced into the system to achieve visualization of the LCIV. The imaging revealed a mild constriction within the left common iliac vein at lumbar level L5 (Figure 1).
FIGURE 1: Visualization of the compression of the left common iliac vein (LCIV).
Following this, an IVUS catheter was meticulously guided to the targeted area, enabling detailed visualization of the iliac vein. The IVUS examination confirmed the compression of the LCIV, definitively corroborating the diagnosis of May-Thurner syndrome.
Upon ensuring precise placement, an appropriately sized stent measuring 18 mm × 16 mm was introduced over the previously positioned microwire. The subsequent phase of the procedure involved the continued advancement of the microwire to achieve deeper penetration, and contrast was given to facilitate visualization of the left common femoral vein. The venogram images revealed a conspicuously irregular venous compression, anatomically situated at the level of the femoral head. Initially, the working diagnosis leaned toward vasospasm, and an intervention was initiated by administering a 400 mg intravenous dose of nitrogen with the intention of mitigating the presumed vasospasm. Following this, contrast was reintroduced into the system, yet the site of compression displayed no improvement. Consequently, a second 400 mg intravenous dose of nitrogen was administered. Despite this effort, the contrast-enhanced visualization exhibited persistently unaltered findings. Subsequently, balloon angioplasty was undertaken as an attempt to alleviate the venous compression. Regrettably, this intervention also yielded no improvement in the observed condition. After a comprehensive assessment, a definitive conclusion was drawn: the venous compression that had initially been speculated to be a vasospasm of the left common femoral vein due to the presence of the catheter was, in fact, a genuine stenosis induced by external venous compression.
An IVUS examination revealed no calcifications within the lesion, suggesting iatrogenic venous compression syndrome due to chronic irritation by an adjacent acetabular screw, leading to significant stenosis. This inference was supported by the patient's history of total hip replacement surgery.
With precise positioning achieved within the left common femoral vein, a stent measuring 16 mm × 90 mm (Venous WALLSTENT, Boston Scientific, Natick, Massachusetts, USA) was meticulously introduced to alleviate the extrinsic venous compression (Figure 2). The stent placed in the left common iliac vein (LCIV) for the May-Thurner syndrome is also visible (black).
Subsequently, contrast injection was administered to facilitate the visualization of the right common femoral vein. Both the venogram and IVUS examinations unveiled a significant, irregularly shaped compression within the right common femoral vein, situated anatomically at the level of the right femoral head. This compression was notably accentuated by the immediate presence of hardware. Initial evaluation led to the presumption that this condition, akin to the prior instance in the left common femoral vein, was attributable to vasospasm. In an endeavor to counteract this, a dose of 400 mg of intravenous nitrogen was administered. However, as in the previous case, this intervention yielded no improvement in the observed condition (Figure 3). Based on the previous experience, a diagnosis of iatrogenic venous compression syndrome in the right common femoral vein was made, also due to acetabular screws at the level of the femoral head causing significant stenosis. A follow-up appointment was scheduled for a venogram procedure aimed at stent placement to alleviate the observed compression.
Discussion
Chronic venous insufficiency (CVI) occurs when the veins in the lower extremities fail to guide blood back to the heart effectively. This can result in persistently high pressure in the veins and lead to symptoms such as tightness, heaviness, fatigue, leg cramps, restless legs, and skin changes such as thickening or discoloration. The most common causes of CVI include a congenital lack of functional valves in the lower limb veins, biochemical alterations in the venous valves, and DVT. Risk factors include obesity, smoking, pregnancy, a sedentary lifestyle, hypertension, and a history of DVT. Complications of CVI include venous ulcers, thrombophlebitis, DVT, pulmonary embolism, bleeding, secondary lymphedema, and chronic pain [8-10]. In most cases, complaints of CVI can be approached with compression stockings, endovenous radiofrequency ablation, or sclerotherapy [9]. If the patient does not experience alleviation of symptoms, the problem is more likely situated above the level of the groin, where rarer syndromes such as May-Thurner syndrome, Nutcracker syndrome, or similar pathologies should be considered [11]. In this case, perforating acetabular hip screws caused an inflammatory reaction with fibrosis, leading to chronic endothelial irritation and subsequent stenosis.
Perforations associated with the use of periacetabular screws in total hip arthroplasty have been reported at rates ranging from 0.9% to 7.0%, as documented by Eberl et al. [12]. These occurrences, while not rare, often lack clinical significance. Consequently, most experts do not advocate for repositioning the screws in such cases. While the literature on iatrogenic venous compression due to acetabular screws remains relatively scarce, there have been reports detailing severe adverse events, including acute right leg DVT and massive iliofemoral thrombosis [13]. Additionally, cases of venous insufficiency surfacing several years after total hip arthroplasty have been documented [14]. However, it is worth noting that most documented cases involving acetabular screw incidents predominantly revolve around DVT and thrombosis [5]. CVI has garnered more extensive attention as a potential consequence of iatrogenic venous compression. It has been more widely reported as an outcome of anterior pedicle screw perforation in spinal fusion surgeries and scoliosis correction procedures [1]. Furthermore, case reports have detailed venous compression as a complication in vascular repairs [2,3] and even in penile prosthesis surgeries [5].
However, what sets our case apart from most reports is its unique presentation of bilateral pathology. To the best of our knowledge, this is the first documented case reporting bilateral iatrogenic compression resulting from bilateral total hip arthroplasty. In this particular patient, an underlying May-Thurner syndrome likely played a catalytic role. This pre-existing condition predisposed him to developing such complications. It is noteworthy that both May-Thurner syndrome and iatrogenic venous compression are encompassed within the broader spectrum of iliofemoral venous compression syndromes, sharing a common pathogenesis.
The continuous pressure exerted by the acetabular screws led to prolonged endothelial irritation, fostering intimal thickening and stenosis [1]. Consequently, this pathophysiological process contributed to the aggravation of venous insufficiency symptoms several years post-surgery. In this particular case, the placement of stents was deemed the most effective intervention for rectifying the stenosis and restoring adequate vascular flow.
As discussed, iatrogenic venous compression syndromes constitute a subset within a broader spectrum of venous compression syndromes. This overarching category encompasses anatomical variations such as May-Thurner, Paget-Schroetter, and Nutcracker syndromes [6], as well as disease-related variants like those associated with enlarged cysts and abdominal aneurysms [7]. While anatomical and disease-related variants have garnered substantial attention in the medical literature, iatrogenic venous compression remains a significantly underdiagnosed and underreported contributor to venous insufficiency.
A comprehensive understanding of vascular anatomy, coupled with meticulous pre- and intraoperative imaging assessments of vessels vulnerable to compression during hip surgeries, can play a pivotal role in mitigating the incidence of perforations. This proactive approach not only aids in averting complications such as DVT and venous insufficiency but also facilitates the early recognition and management of iatrogenic venous compression syndromes, fostering improved patient outcomes.
Conclusions
This case highlights the importance of recognizing iatrogenic venous compression syndromes after joint replacement surgeries, such as total hip arthroplasty. Significant occlusion in both common femoral veins, occurring years after the procedure, underscores the need for long-term monitoring and proactive management. The problem likely stems from hardware proximity, leading to intimal thickening and stenosis. Further research is needed to understand risk factors and preventive measures. Vigilant long-term monitoring is crucial to mitigate potential vascular complications.
FIGURE 2 :
FIGURE 2: Visualization of the left common femoral vein (LCFV) (blue) before and after stent placement with close proximity of the acetabular hip screw.
FIGURE 3 :
FIGURE 3: Visualization of the compression of the right common femoral vein (RCFV) in close proximity to the right acetabular hip screw. | 2024-03-22T16:20:57.749Z | 2024-03-01T00:00:00.000 | {
"year": 2024,
"sha1": "de50db7532d7e79cd1ce4527597aadb9a162fa1f",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/case_report/pdf/233949/20240318-31983-19m38f7.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "640d0c561b5297d1f20506f42f2a5975dace8556",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233356247 | pes2o/s2orc | v3-fos-license | ASSESSMENT OF USAGE CONDITIONS OF SEPTIC TANKS IN HA NOI AND HAI PHONG
The study was conducted in 2017-2018 to characterize the usage conditions of septic tanks in Ha Noi and Hai Phong through: (1) collecting information on the usage conditions of septic tanks by an interview survey; and (2) obtaining analytical results of wastewater flowing into and out of septic tanks selected based on the interview survey results. The number of targeted interview surveys was 200, covering both types of septic tanks, treating (i) black water only and (ii) black and grey water. Among the septic tanks surveyed, 20 septic tanks were selected for analysing effluent characteristics. The analyzed parameters included water temperature, pH, BOD5, COD, TSS, Total Phosphorus (TP), Total Nitrogen (TN), NH4-N, and total coliform. The poor quality of wastewater discharged from the septic tanks (BOD5 = 80-1250 mg/l and COD = 170-2110 mg/l) proved the inefficiency of septic tanks in treating black water from the households. The value of BOD5 showed strong correlation with other wastewater quality parameters, especially with CODCr (R² = 0.962), T-SS (R² = 0.669) and NH4-N (R² = 0.905), and also with desludging frequencies (R² = 0.727). Therefore, BOD5 can be used as an indicator of septic tank performance, and desludging frequencies for septic tanks were recommended based on the relationship between desludging frequency and effluent quality.
INTRODUCTION
In September 2015, the United Nations General Assembly adopted the Sustainable Development Goals (SDGs), which comprise 17 goals and 169 targets, to comprehensively address wide-ranging issues in the economic, social and environmental fields. Among the SDGs, indicator 6.3.1 is related to the safe level of wastewater treatment. The World Health Organization (WHO) has recently formulated the draft proposal on the Protocol for Step-by-Step Monitoring Methodology for Indicator SDG 6.3.1: proportion of wastewater safely treated [1]. Viet Nam is facing the challenge of trying to keep pace with increasing wastewater pollution associated with rapid urbanization, especially in the large cities. While over 90 percent of households dispose of wastewater to septic tanks, only 4 percent of septage is treated. Fecal sludge management is generally poor in most cities [2].
The septic tank is an on-site domestic wastewater treatment facility which is very popular in Viet Nam and many countries around the world. The septic tank is responsible for preliminary or complete treatment of the black water before it is discharged to the external drainage network or receiving bodies (soil, rivers, lakes) [3]. The principle of the septic tank is to perform sedimentation and anaerobic fermentation processes. Septic tanks in Viet Nam usually have 2 to 3 compartments. The septic tank has low treatment efficiency, allowing the separation of part of the suspended solids and only an insignificant part of the dissolved substances in the wastewater, and it does not meet the requirements for discharge into the environment. However, septic tanks are still very popular in Viet Nam. Studies show that the performance of septic tanks in urban areas is poor due to improper design, construction, management and use of the tanks [2]. Effluent from the septic tank remains highly polluted by organic matter, nutrients and microorganisms.
The main objective of this study is to characterize the usage conditions of septic tanks in the urban and rural areas of Ha Noi and Hai Phong (Viet Nam), which included (1) collecting information on the usage conditions of septic tanks by an interview survey; and (2) obtaining analytical results of wastewater flowing into and out of the selected septic tanks. The data provide better insight into the relationships among different criteria of septic tank operation. Based on the data analysis, an indicator for septic tank performance was selected. This information is useful for improving the design and operation of septic tanks as preliminary treatment facilities.
Interview survey
The survey was conducted by door-to-door interviews with the 200 target households using questionnaires. In total, 120 households were selected in Ha Noi (60 in urban and 60 in rural areas) and 80 households in Hai Phong (40 in urban and 40 in rural areas). Criteria for selecting the households included: -Typical urban districts located in the former city area, -Population, -Districts where the progress of urbanization is remarkable.
In each district, 2 communes were selected randomly based on household lists. From each commune, 3 households were also randomly selected.
The questionnaire is composed of five sections to obtain information on: (1) general information of the household; (2) source, amount of use, and way of use of domestic water; (3) septic tank structure, use (black water only or combined black and grey water), and operation (desludging frequency); and (4) type of water body (drainage system) receiving the treated wastewater. The interviews were conducted by direct questioning together with the interviewers' observations.
Wastewater sampling and analysis
The targeted wastewater is domestic wastewater treated by a septic tank only before discharge to the public drainage. Wastewater from industrial and commercial facilities, hospitals and other public services was not included in this survey.
In-field measurements included water temperature and pH (OAKTON 35632). Wastewater samples were collected and analyzed at the R&D Lab in Environmental Technology (School of Environmental Science and Technology - HUST) by standard methods: COD (TCVN 6491:1999, ISO 6060:1989), BOD5, and the other parameters listed above. Among the septic tanks surveyed, 20 septic tanks were selected for collecting discharged wastewater samples and analyzing discharged wastewater quality. Both types of tanks, treating (i) black water only and (ii) black and grey water, were targeted.
The wastewater samples were collected as composite samples during a day at the outlet of the pipe from the septic tank: time-composite samples within 24 hours, combining four grab samples taken at four sampling times. The implementation process, including quality management, was conducted on the basis of an evolving "working document", which ultimately became the final report. All samples were analyzed at the R&D laboratory of INEST, which has been certified by VILAS 406 (ISO 17025).
Interview survey
The surveys in Ha Noi and Hai Phong showed that 98 percent of households disposed of wastewater to 2-3 compartment septic tanks. In urban areas, 76 % of households currently use water-saving types of toilet facilities, while this rate was only 59 % in rural areas. The volume of the majority of septic tanks in Ha Noi was from 4-10 m³ and in Hai Phong from 3-5 m³ (Fig. 1). The ratios of households with/without desludging experience are shown in Figure 2. It was found that the septage in most septic tanks in rural areas (85 % of surveyed households in the rural Ha Noi area and 76 % in the rural Hai Phong area) had never been desludged, although regular desludging is essential to maintain septic tanks. The survey showed that, in urban areas, the fecal sludge from septic tanks in 53 %-59 % of the households had never been emptied. The survey also revealed that 94.6 % of the households equipped with septic tanks did not have a practice of regular desludging; desludging was mainly performed when problems arose. The situation, however, had already improved compared to the situation surveyed 10 years ago in urban Ha Noi, when over 80 % of households had never desludged [5]. The frequency of desludging was investigated for the desludged septic tanks and is presented in Fig. 3. The survey of septic tank use showed that only 20 % of septic tanks were used for combined black and grey water. In independent houses, septic tanks were used for black water only.
Discharged wastewater sampling and analysis
The sampling and analysis survey was performed in 50 households. Among these 50 facilities, 20 septic tanks were screened for collecting discharged wastewater samples and analyzing effluent quality.
Criteria considered in the screening process included: (1) type of house: both household types, (i) independent houses and (ii) new high-rise condominiums/multiple-dwelling buildings, were targeted, and one public facility was included; (2) location: both Ha Noi and Hai Phong, in both urban and rural zones; (3) type of tank: tanks treating (i) black water only and (ii) combined black and grey water; (4) desludging: both cases, no desludging and desludging, were targeted, and the desludging frequency was also an important factor. The relationships among effluent quality parameters across households (Figure 4) and the relationship between desludging interval and effluent quality (Figure 5) were examined. Figure 4 showed strong Pearson correlations of BOD5 with CODCr (R2 = 0.962), T-SS (R2 = 0.669), NH4-N (R2 = 0.905), and T-Coliform (R2 = 0.627). Weaker correlations with T-N (R2 = 0.451) and T-P (R2 = -0.140) were observed. However, the weak correlations of these parameters were caused by outlier data points, which corresponded to old multiple-dwelling houses whose septic tanks had never been desludged. The Spearman correlation, computed after excluding the outlier data points, showed significantly strong correlations for T-N and T-P.
The Pearson correlations support the selection of BOD5 as an indicator of septic tank performance. In future work, septic tank monitoring can use BOD as a single parameter to assess performance.
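As an illustration of this kind of correlation screening, the sketch below computes Pearson coefficients over all samples and Spearman coefficients with outliers excluded, mirroring the analysis above; the measurement arrays and the outlier mask are hypothetical placeholders, not values from the survey.

```python
import numpy as np
from scipy import stats

# Hypothetical effluent measurements for eight sampled households (mg/L).
bod5 = np.array([120.0, 95.0, 210.0, 60.0, 180.0, 300.0, 75.0, 140.0])
cod = np.array([260.0, 200.0, 450.0, 130.0, 390.0, 640.0, 160.0, 300.0])
tn = np.array([45.0, 40.0, 60.0, 30.0, 55.0, 150.0, 35.0, 50.0])

# Flag samples from never-desludged multiple-dwelling houses (the outliers
# discussed in the text); the flagged position here is purely illustrative.
outlier = np.array([False, False, False, False, False, True, False, False])

for name, y in [("COD", cod), ("T-N", tn)]:
    r, _ = stats.pearsonr(bod5, y)                         # all samples
    rho, _ = stats.spearmanr(bod5[~outlier], y[~outlier])  # outliers excluded
    print(f"BOD5 vs {name}: Pearson r = {r:.3f}, Spearman rho = {rho:.3f}")
```

Spearman's coefficient depends only on ranks, which is why it is a natural robust check once the outlying, never-desludged households are set aside.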
The data also show that nitrogen in the septic tanks was mainly in the form of NH4-N. Ammonium is mainly soluble and cannot be treated by a septic tank under anaerobic conditions. Nitrogen in the solids was digested in the septic tank and released extra ammonia together with the effluent. This behavior follows the general characteristics of black water septic tanks [6,7]. Figure 5 shows the relationship between desludging interval and effluent water quality (BOD5, CODCr) from the septic tank. In this statistical analysis, data from the multiple-dwelling houses and from the houses whose septic tanks received both black and grey water were excluded. The correlations of CODCr (R2 = 0.640) and BOD5 (R2 = 0.727) with the desludging interval thus increased significantly.
This phenomenon shows that a longer desludging interval leads to a higher concentration of pollutants in the septage and effluent. This trend can be understood mainly as the combination of organic matter decomposition by anaerobic digestion and solids accumulation by settling. As a result, pollutant concentrations in the septage increased with increasing desludging interval. The higher septage concentration associated with longer desludging intervals should be taken into account in septage treatment design. For the future direction of septic tank and septage management, septage treatment needs to be designed based on the desludging strategy. Moreover, proper design, construction, and operation can make the septic tank a promising facility for on-site wastewater treatment in Vietnamese residential areas [8].
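To make the interval-quality relationship concrete, here is a minimal regression sketch in the spirit of Figure 5; the (interval, BOD5) pairs are invented for illustration, and a simple straight-line fit is assumed since the exact functional form is not stated.

```python
import numpy as np
from scipy import stats

# Hypothetical (desludging interval in years, effluent BOD5 in mg/L) pairs,
# after excluding multiple-dwelling houses and combined black/grey tanks.
interval = np.array([1.0, 2.0, 3.0, 5.0, 7.0, 10.0])
bod5 = np.array([80.0, 110.0, 130.0, 170.0, 220.0, 280.0])

fit = stats.linregress(interval, bod5)
print(f"BOD5 = {fit.slope:.1f} * interval + {fit.intercept:.1f}, "
      f"R^2 = {fit.rvalue**2:.3f}")
```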
CONCLUSIONS
This study characterizes the usage conditions of septic tanks in the urban and rural areas of Ha Noi and Hai Phong. About 95% of black water was flushed into septic tanks, including public ones, and 97% of septic tank effluent was discharged into sewer pipes. It is clear that septic tanks play a major role in sanitation management in urban areas. In addition, 94.6% of households equipped with septic tanks had no custom of regular desludging, and 68-70% of households had never desludged their tanks. Disseminating the necessity of regular desludging is considered a crucial issue for improving sanitation in urban Ha Noi and Hai Phong.
Based on these results, a desludging frequency for the septic tanks is recommended through the relationship between desludging interval and effluent water quality. The recommended volume of water-saving toilets is determined from the distributions of "water volume per flush in toilet", "water consumption", "estimated volume of septic tanks", and "desludging frequency" in the two cities. The correlation analysis also supports the selection of BOD5 as an indicator of septic tank performance; in future work, septic tank monitoring can use BOD as a single parameter to assess performance.
Lifetime measurement of the 5d$^2$D$_{5/2}$ state in Ba$^+$
The lifetime of the metastable 5d$^2$D$_{5/2}$ state has been measured for a single trapped Ba$^+$ ion in a Paul trap in Ultra High Vacuum (UHV) in the 10$^{-10}$ mbar pressure range. A total of 5046 individual periods when the ion was shelved in this state have been recorded. A preliminary value $\tau_{D_{5/2}} = 26.4(1.7)$~s is obtained through extrapolation to zero residual gas pressure.
Introduction
The accurate determination of transition probabilities in heavy alkaline earth systems is an important step in the research program to measure Atomic Parity Violation (APV) in such systems [1,2,3,4,5,6,7,8,9]. In the research reported here, a single trapped Ba+ ion has been investigated and the lifetime of its 5d 2D5/2 state has been measured. This provides essential input for testing atomic structure and, in particular, the atomic wavefunctions of the involved states at percent-level accuracy. Such measurements are highly sensitive to variations of the parameters that determine the experiment's performance over long periods (i.e., several hours) and which may cause systematic uncertainties. In particular, such effects may arise from interactions of the ion with the background gas.
There are two main reasons for choosing a single trapped Ba+ ion in UHV to perform precise lifetime measurements. Firstly, barium (Ba) is a heavy alkaline earth metal, and the Ba+ ion has a rather simple electronic configuration.
Precise measurements provide for accurate tests of the atomic wavefunctions. Secondly, systematic errors due to collisions with other particles (such as different species) are highly suppressed.
The lifetime of the metastable 5d 2D5/2 state in Ba+ has been measured earlier in different experiments [10,11,12,13,14,15,16]. Calculations are presently performed by several independent theory groups [5,6,17,18,19,20,21]. All measurements to date, as well as the calculated values for the lifetime of the 5d 2D5/2 state in Ba+, are compiled in Table 1.
Experimental setup
The trap for Ba+ in this experiment is a hyperbolic Paul trap [22]. It consists of a ring electrode and two end caps made of copper. The electrodes are mounted on a Macor holder. The chosen geometry results in a harmonic pseudopotential at the center of the trap when AC voltages are applied between the ring and the two endcaps; the latter are grounded. The operating RF frequency of the trap is Ω_RF = 5.44 MHz. The trap with its Macor holder is mounted on an Oxygen Free High Conductivity (OFHC) copper base plate. To load ions, a Ba oven (a 0.9 mm diameter × 40 mm long, resistively heated stainless steel tube) containing a mixture of BaCO3 and Zr produces a flux of order 10^6 thermal Ba atoms/s. A laser at 413 nm is used to produce Ba+ ions in the trap by two-photon photoionisation. We use laser light at λ1 = 493 nm (frequency doubled from a Coherent MBR-110 Ti:Sa laser) to drive the 6s 2S1/2 - 6p 2P1/2 cooling transition and laser light at λ2 = 649 nm (produced by a Coherent CR-699 ring dye laser) for the 6p 2P1/2 - 5d 2D3/2 repump transition (see Fig. 2). In the experiments reported here, the power of λ1 is between 6 µW and 50 µW and that of λ2 between 6 µW and 45 µW. The Gaussian radius of the laser beams is about 60 µm at the position of the ion for all measurements. Fluorescence from the 6s 2S1/2 - 6p 2P1/2 transition in the Ba+ ion is detected with a photomultiplier tube (PMT) and an EMCCD camera. Fig. 1 shows our hyperbolic Paul trap together with an image of trapped ions localized at the potential minimum of the trap.
Electron shelving technique
Ba+ ions have a closed three-level system. One of the excited states, the 5d 2D5/2 state, is long-lived (see Fig. 2). Simultaneous laser radiation at λ1 and λ2 is therefore needed to cool the ion at the center of the trap. When the ion is exposed to the light of the two laser beams at wavelengths λ1 and λ2 (see Fig. 2), there is a closed cycle of 6s 2S1/2 - 6p 2P1/2 - 5d 2D3/2 transitions. Observing the fluorescence from the 6p 2P1/2 - 6s 2S1/2 transition implies that the ion is "not shelved" in the 5d 2D5/2 state. The electron shelving technique is employed in our experiment to determine the lifetime of the 5d 2D5/2 state. With an additional fiber-coupled high-power LED (M455F1) at λ3 = 455 nm, the ion can be "shelved" in the 5d 2D5/2 state via excitation to the 6p 2P3/2 state and this state's subsequent decay. The direct observation of "quantum jumps" in a single Ba+ ion between the 5d 2D5/2 and 6s 2S1/2 states was first demonstrated by Nagourney et al. [10]. The decay of the 6p 2P3/2 state starts a shelving period, which ends with a quantum jump from the 5d 2D5/2 state to the 6s 2S1/2 state. Fig. 3 displays the highest PMT count rate (2200 counts/s) when the ion is not shelved and the lowest count rate (600 counts/s), the background, when it is shelved in the metastable 5d 2D5/2 state. The "on/off" and "off/on" transitions in the fluorescence signal correspond to the start and end of a single interval during which the ion was in the 5d 2D5/2 state.
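Operationally, this analysis amounts to thresholding the fluorescence record and timing the dark periods. The sketch below shows one simple way to extract dark-period durations from binned PMT counts; the bin time and the threshold (placed between the ~2200 counts/s bright level and the ~600 counts/s background quoted above) are assumptions, not the experiment's actual analysis code.

```python
import numpy as np

def dark_periods(rates, bin_time=0.1, threshold=1400.0):
    """Return durations (s) of contiguous below-threshold (shelved) intervals.

    rates: PMT count rates (counts/s) in consecutive time bins.
    threshold: rate separating bright (not shelved) from dark (shelved).
    """
    durations, run = [], 0
    for dark in rates < threshold:
        if dark:
            run += 1
        elif run:
            durations.append(run * bin_time)
            run = 0
    if run:                          # close a trailing dark period
        durations.append(run * bin_time)
    return np.array(durations)
```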
Measurements
In order to measure the lifetime τ_D5/2, a total of 5046 individual shelved periods were recorded in 71 data samples and analysed. They were taken under partly quite different conditions to enable the observation of, and correction for, systematic errors [23]. Fig. 4 shows one example of the analysed samples; it exhibits an exponential decay. Such a decay function is fitted to each data set using a binned log-likelihood method. The lifetime τ_D5/2 is obtained for each data sample from the corresponding fit parameters. We note that experimental situations can be created where ion heating results in longer measured durations of individual dark periods than the actual dwell time of the ion in the D5/2 state; this can be seen in the slow recovery of the fluorescence light. Collisions with background gas can reduce the lifetime of the metastable state. In order to extrapolate the absolute value of the lifetime to zero pressure, the lifetime τ_D5/2 was measured at different background pressures. Fig. 5 displays the results for a selection of 1600 out of the 5046 shelved periods. The uncertainty of each lifetime value corresponds to the statistical error from fitting an exponential decay to the data. A range of pressures between 2.5 × 10^-10 and 8.7 × 10^-10 mbar was explored by changing the temperature of the vacuum chamber in the range from 289 K to 296 K and by adjusting the pumping speed of the ion pump. For the small temperature change needed here, changes in the collision cross-sections between the ion and the residual gas atoms can be neglected. A linear function is fitted to the data. The lifetime of the 5d 2D5/2 state is found to be τ_D5/2 = 26.4(1.7) s. 3446 shelved periods were used to check for systematics, such as those potentially arising from laser intensities, laser frequency detunings, the trap RF voltages, and the operating conditions of the ion pump. No significant effects have been observed.
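As a sketch of this analysis chain, the code below fits an exponential to a set of dark-period durations with a binned Poisson log-likelihood and then extrapolates to zero pressure. Fitting the de-shelving rate 1/τ linearly in pressure is an assumption here (collisional quenching adds to the decay rate), and all numerical values are invented placeholders.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_tau(durations, bin_width=2.0):
    """Binned Poisson log-likelihood fit of an exponential lifetime (s)."""
    edges = np.arange(0.0, durations.max() + bin_width, bin_width)
    counts, _ = np.histogram(durations, bins=edges)
    lo, hi = edges[:-1], edges[1:]
    n = durations.size

    def nll(tau):
        mu = n * (np.exp(-lo / tau) - np.exp(-hi / tau))  # expected counts
        return -(counts * np.log(np.clip(mu, 1e-12, None)) - mu).sum()

    return minimize_scalar(nll, bounds=(1.0, 100.0), method="bounded").x

# Hypothetical lifetimes fitted at several residual gas pressures (mbar).
p = np.array([2.5e-10, 4.0e-10, 6.0e-10, 8.7e-10])
tau = np.array([24.8, 23.9, 22.7, 21.4])

# Linear fit of the rate versus pressure; the intercept gives the p -> 0 rate.
slope, rate0 = np.polyfit(p, 1.0 / tau, 1)
print(f"extrapolated tau(p=0) ~ {1.0 / rate0:.1f} s")
```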
Conclusions
In summary, the lifetime of the metastable 5d 2D5/2 state has been measured for a single Ba+ ion. The measured value is preliminary because cross-checks for systematics are still ongoing. Our result agrees within 2σ with the most recent theoretical value τ_D5/2 = 29.8(3) s [6] and with the latest independent experimental value of τ_D5/2 = 31.2(9) s [15]. Fig. 6 displays the time evolution of the measured and theoretical values for the lifetime of the 5d 2D5/2 state in a Ba+ ion.
Optimal Scheduling of the Peirce-Smith Converter in the Copper Smelting Process
Copper losses during Peirce-Smith converter (PSC) operation are of great concern in the copper smelting process. Two primary objectives of the PSC are to produce blister copper with a short batch time and to keep copper losses at a minimum level. Due to the nature of the process, these two objectives contradict each other. Moreover, actions inside the PSC are subject to several operational constraints, which makes it difficult to develop a scheduling framework for its optimal operation. In this work, a basic but efficient linear multi-period scheduling framework for the PSC is presented that finds the optimal timings of the PSC operations to keep the copper losses and the batch time at a minimum level. An industrial case study is used to illustrate the effectiveness of the proposed framework. This novel solution can be implemented in other smelting processes and used for the design of an inter-PSC scheduling framework.
Introduction
The copper industry is one of the backbones of the European processing industry, representing more than 12% of the worldwide production of refined copper [1]. This industry can utilize both high and low quality concentrates in its operation. The global trend is that high quality copper deposits will soon be depleted and the utilization of low quality concentrates will become common practice [2,3]. At present, more than 60% of concentrates come from low-grade deposits that have a high fraction of impurities (e.g., arsenic and bismuth) [4,5]. This utilization of low quality concentrates leads to higher operational costs in terms of copper losses, which remain in the slag as waste.
In the copper industry, the smelting process is used to extract copper from both high and low quality concentrates. Copper smelting operators that have access to high quality concentrates have a cost advantage since the lost copper possesses no significant value to the process operator; therefore, no extra technological investment is needed for its recovery. However, with the prospect of high quality deposits running out, this advantage can only be described as short-term. In fact, smelting operators have long-term advantages and turn a profit by utilizing low quality concentrates. This low quality concentrate utilization increases the relative importance of the copper losses and sets a need for minimizing the copper losses.
The copper smelting process is a large-scale complex industrial process that typically consists of a flash smelting furnace (FSF), Peirce-Smith converter (PSC), anode furnace, casting unit, electrolysis, slag treatment, and an acid plant, as shown in Figure 1. In this process, a major challenge is the scheduling of the PSC. Parallel functioning of the PSC units, high gas emissions, and the amount of copper that is lost during PSC operation are some of the factors that contribute to this challenge [6][7][8]. Among these factors, copper losses affect the commercial viability of the process and depend heavily on the scheduling of the intra-PSC operations.
In PSC, copper losses depend on numerous factors, such as the ratio of iron to silica in the slag, the operating temperature, the limitation of the mechanical structure, and other physical-chemical factors [10][11][12]. Furthermore, the amount of copper that is lost to the PSC slag is much higher than the amount of copper that is lost in other smelting units. For example, these losses are about 4-8 % in the PSC, while these same losses can vary from 1 to 2% in other units [7,8]. Copper losses that exceed 2-3% strongly affect the economics of the process [13]. Hence, neglecting the slag as waste is not an attractive solution since the slag contains a considerable amount of dissolved copper [7]. One way to recover the dissolved copper from the slag is to use a slag treatment unit. However, the slag treatment unit demands sufficient processing time, fundamental modifications, and capital investment for its effective operation [14,15].
Copper losses in the smelting process can be reduced either by improving the chemical nature of the process or by using innovative scheduling techniques. Davenport et al. [7] proposed chemically based strategies, including minimizing slag generation, minimizing copper entrainment in the slag, and pyrometallurgical slag reduction. These strategies depend on the chemical nature of the process, and sometimes they cannot be adopted because of limitations in the quality of the concentrate, quality requirements, and other operational constraints. Lennartsson et al. [16] presented an interesting model for the PSC based on thermodynamics. In this formulation, the PSC is divided into various operating zones and the content of the various elements present in the matte is calculated; the results are then compared with industrial data. Another thermodynamics-based model for the PSC was presented by Tan [17]. This model predicts the behavior of slag and matte in the PSC, their amounts and compositions, the PSC temperature, and the slag blow endpoint. The model is validated with industrial data, and some industrial applications have also been presented.
In the literature, multiple solutions for the scheduling of the PSC can be found. Harjunkoski et al. [18] presented a continuous-time inter-PSC scheduling scheme for the copper production process with the objective of maximizing the production in the process. The problem was presented as a mixed-integer linear programming (MILP), which only captures the essential aspects of the process and optimizes the overall production processing time. Suominen et al. [19] presented an inter-PSC scheduling approach for the copper smelting process that optimizes the process time and the deviation from the target copper matte grade. That approach uses a continuous-time formulation, and the scheduling problem is formulated as a mixed-integer non-linear programming (MINLP). Navarra [20] introduced another MINLP formulation for the scheduling of the PSC that maximizes the production, while respecting the chemical, volumetric, and other operational constraints. All of the above studies ignore minimization of the copper losses; therefore, the effectiveness of these solutions is limited in many real copper smelter applications.
Motivated by the minimization of copper losses in the slag, one way to address this problem is to use practical scheduling techniques. In this study, a novel multi-period scheduling framework has been developed for the PSC using a discrete-time formulation (see, e.g., [21]). The proposed framework is developed using MILP techniques and keeps the copper losses at a minimum level by finding the optimal timings of the process operations.
The remainder of the paper is organized as follows: Section 2 briefly describes a general description of the PSC. The problem formulation is presented in Section 3. The mathematical formulation of the problem is discussed in Section 4, which is followed by describing a case study in which the framework is tested, and the simulation results are discussed. Section 5 presents the concluding remarks and provides an outlook for the future work.
Peirce-Smith Converter
A process diagram of the PSC is shown in Figure 2. The PSC involves a sequence of actions and phases, which are executed in a sequential order, and it always operates in batches. This unit is in service in more than 70% of copper smelters [22]. The purpose of the PSC is to oxidize part of the iron, sulfur, and other undesirable elements present in the matte to form slag, gases, and heat. Additional materials that are added to the PSC are silica, air, and industrial oxygen. Silica is added to accelerate metal oxide dissolution in the slag, which provides easy material discharge during the PSC operation and adequate matte/slag separation [23]. Industrial oxygen and air are used to control the PSC temperature for the purpose of keeping the matte and slag in the molten state. During the operation, the oxygen blast reacts with the iron and sulfur to produce a metal oxide that leaves the PSC in the form of slag (dissolved FeO and CuO) and off-gas (SO 2 ), which is used by the acid plant for the manufacture of sulfuric acid.
The process of converting FSF matte into blister copper is completed in two stages, called slag-making and copper-making. During the slag-making stage, iron and part of the sulfur are oxidized to produce slag in repetitive slag blow operations, while the remaining sulfur is oxidized in a single long copper blow operation during the copper-making stage. The slag is removed periodically during the slag skimming operation. After the final slag skimming operation, the residual material is often referred to as white metal (high content of Cu2S). Thereafter, the copper-making stage begins. During this stage, the chemical nature of the reaction remains the same: the remaining sulfur is oxidized by passing the white metal through the oxygen blast until the maximum threshold level of sulfur in the white metal is achieved. This final product is often referred to as blister copper (≈99%).
Temperature is one of the important parameters that has a direct influence on the PSC performance. During the slag-making stage, the temperature rises because of the iron and sulfur oxidation [24]. Increase in the PSC temperature results in high copper losses. On the other hand, decreasing the temperature leads to state change of the slag from molten to solid. Details about the temperature constraints and its effects can be found in the literature [7,20]. The slag-making stage is often split into three slag blows: the first, second, and third slag blow. To keep the temperature within the operational limits, the process operator defines the upper limit for the first two slag blows. Such action allows the process operator to maintain the PSC temperature within its operational limits. During the third slag blow, the temperature is kept in a feasible range by selecting an appropriate oxygen enrichment, while coolants (e.g., scrap with a high matte grade or slag from the anode furnace produced during the previous batch) are added during the copper blow to prevent excess of temperature. At the end of the copper-making stage, the PSC is emptied, and it is ready to produce the next batch of the blister copper.
Copper Losses
In the PSC, a decrease in the iron (Fe) content in the matte increases the copper (Cu) losses in the slag in a non-linear fashion [17]. The rate of these copper losses is smaller during the first slag blow than during the second, which in turn is smaller than during the third, as shown in Figure 3. The objective of the slag blows is to oxidize unwanted elements from the matte as quickly as possible. Because PSC batches with shorter batch times are preferred, a typical PSC operation will favor longer durations of the first and second slag blows. However, such actions would result in too low an iron content in the matte during the slag blows, which leads to higher copper losses in the slag.
Problem Formulation
In this section, we define the problem statement and mathematical formulation of the scheduling problem.
Problem Statement
In this study, we consider a generic PSC installed in a copper smelting process. Although the real process is more complex in nature than the process framework considered in this work, it still reflects the main aspects of the real PSC operation.
Given: a PSC that follows a predefined sequence of actions that are carried out in a sequential order to produce a single batch of blister copper. During the PSC operation, it receives matte as a raw material from the FSF.
Determine: production schedule for a single batch of blister copper that delivers a product with a pre-defined quality and respects the process operational constraints.
Objective: in PSC, shortening the batch time magnifies the copper losses in a nonlinear fashion. Hence, the objective is to minimize the copper losses and shorten the batch times simultaneously.
Note that in this formulation, the required product quality can be attained by oxidizing the mandatory amount of iron and sulfur from the matte, whereas the batch period can be optimized by avoiding unnecessary idle times.
Limitations: this framework assumes that the FSF operates at full capacity and continuously produces matte with a perfectly known matte grade. This matte grade can change during operation. There is always an initial inventory available in the FSF; thus, PSC operation can begin without any delay. Consequently, the internal process dynamics of the FSF have no effect on PSC operation. In this work, it is assumed that a constant oxygen supply is available when required; hence, the oxidation rates of the elements in the PSC remain constant. During the slag skimming stage, all of the produced slag is removed from the PSC and moved to the slag treatment plant without any delay. The slag treatment plant, slag container, and SO2 capturing unit have unlimited capacity; therefore, they are not considered in the present framework. The slag type does not change during the PSC operation.
The temperature rises inside the PSC during its operation, as discussed in Section 2. In this framework, it is assumed that the temperature is kept in a feasible range by preselecting an appropriate constant oxygen enrichment during the slag blow stages; therefore, temperature is not a decision variable in this framework. Moreover, by defining suitable maximum durations for the first and second slag blows, the temperature is prevented from reaching its maximum threshold. It is also assumed that, in the copper-making stage, coolants are added when needed to keep the temperature in a feasible range. During each slag blow, a minimum amount of slag is produced; this is achieved by setting a minimum duration constraint for all of the slag blows.
Mathematical Formulation
In the literature, most of the scheduling problems with similar objectives are formulated using the big-M type of constraints. Such formulations are subject to weak linear relaxations and poor computational performances [25]. To maintain the model complexity at a low level, another way to formulate similar problems is to use a combination of two binary variables for scheduling every single process operation.
The main characteristics of this framework are:
• The processing times of PSC operations are deterministic and vary depending on the type of operation.
• The process schedule follows a pre-defined recipe specified by the process operator.
• The schedule contains indefinite idle times if any operation from the pre-defined recipe is not performed in the prescribed way.
Here, PSC operations refer to the pre-defined tasks that must be carried out in a specific order to produce a single batch of blister copper. The sets, variables, and parameters used in this framework are described next.
In this formulation, there are three fundamental types of constraints: the precedence sequence, material balances, and the copper ratio scheme. The formulation of these constraints is discussed next.
Precedence Sequence
In this scheduling problem, only one operation z_i can be processed at a given time t, as shown in Equation (1). Every operation z_i is processed for a given duration, as shown in Equations (2) and (3). If the processing duration is not fixed, the framework estimates its optimal value.
For the precedence sequence of the PSC operations z_i, this framework uses a combination of two sets of binary variables. One set, represented by b^t_{z_i}, corresponds to the occurrence of operation z_i at time t. The other set, s^t_{z_i}, also called start-up variables, is used to ensure that the PSC operations are processed in the order specified by the process operator. Whenever the variable b^t_{z_i} is set to one at time t, the associated variable s^t_{z_i} is also set to one at the same time and remains active until the final batch time, as shown in Equation (4). A graphical illustration of the precedence sequence is provided in Figure 4. After the completion of operation z_i, the next operation can be scheduled as per the constraint given in (5). In this expression, the summation of the variables s^t_{z_i} ensures that all of the preceding operations b^t_{z_i} are scheduled prior to time t {1, 2, ..., t − 1, t}.
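A minimal sketch of these precedence constraints in PuLP is shown below (the paper itself uses GAMS with CPLEX; see Section 4). The operation names, durations, and horizon are hypothetical, and Equation (5) is rendered in a simplified completion-based form that only lets an operation run once its predecessor has accumulated its full duration.

```python
import pulp

T = range(30)                                    # hypothetical time grid
ops = ["load1", "slag_blow1", "skim1"]           # illustrative operations
dur = {"load1": 1, "slag_blow1": 5, "skim1": 1}  # hypothetical durations

prob = pulp.LpProblem("psc_precedence", pulp.LpMinimize)
b = pulp.LpVariable.dicts("b", (ops, T), cat="Binary")  # op z active at t
s = pulp.LpVariable.dicts("s", (ops, T), cat="Binary")  # start-up latch

for t in T:
    # Eq. (1): at most one operation is processed at any time t
    prob += pulp.lpSum(b[z][t] for z in ops) <= 1

for z in ops:
    # Eqs. (2)-(3): each operation occupies exactly its (fixed) duration
    prob += pulp.lpSum(b[z][t] for t in T) == dur[z]
    for t in T:
        # Eq. (4): the latch turns on with the operation and stays on
        prob += s[z][t] >= b[z][t]
        if t > 0:
            prob += s[z][t] >= s[z][t - 1]

for prev, nxt in zip(ops, ops[1:]):
    for t in T:
        # Eq. (5), simplified: nxt may run at t only after prev has
        # completed all dur[prev] of its slots strictly before t
        prob += dur[prev] * b[nxt][t] <= pulp.lpSum(b[prev][u] for u in range(t))
```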
Material Balances
Material balances can be divided into three types of constraints, as shown in Equations (6)-(8). All of them compute cumulative amounts of material from the beginning of the batch until time t {1, 2, ..., t}. When the PSC batch begins, the previous quantity represents the corresponding initial inventory value. Equation (6) tracks the matte quantity in the PSC. During the operation, a known amount of material c_{z_i} is transferred from the FSF to the PSC. The input matte is composed of numerous unwanted elements whose content depends on the matte grade mg. For example, matte with a 60% matte grade contains 15% iron and 25% sulfur [17]. The content of those unwanted elements in the PSC (e.g., Mass^t_Fe, Mass^t_S) is estimated by Equation (7). Equation (8) represents the accumulated mass of copper lost to the slag, where copper is lost at a rate r_{z_i}.
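Continuing the sketch above, the running balance of Equation (6) can be written as a cumulative inventory constraint. Only the matte inflow c_{z_i} is modeled here; the oxidation and skimming removal terms of Equations (6)-(8) are omitted for brevity, and the feed amounts and initial inventory are hypothetical.

```python
# Continuation of the earlier PuLP sketch: matte inflow side of Eq. (6).
c = {"load1": 50.0, "slag_blow1": 0.0, "skim1": 0.0}  # hypothetical feed (t)

matte = pulp.LpVariable.dicts("matte", T, lowBound=0)  # matte held at time t
for t in T:
    inflow = pulp.lpSum(c[z] * b[z][t] for z in ops)
    if t == 0:
        prob += matte[t] == 20.0 + inflow   # hypothetical initial inventory
    else:
        prob += matte[t] == matte[t - 1] + inflow
```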
Copper Ratio Scheme
During the slag-making stage, iron is oxidized from the matte and added to the slag. This flow of iron decreases the iron content in the matte and thus increases the matte grade, and this increase in the matte grade increases the copper losses in the slag [13]. As the iron content in the matte approaches zero, the matte grade approaches its peak value, which results in exponentially increasing copper loss to the slag [13].
In the copper smelting process, where the oxygen flow and the iron oxidation rate (r_Fe) remain unchanged during the slag-making stage, the trajectory of the matte grade over the complete batch horizon is calculated using Equation (9). Since the copper losses depend on the matte grade value, the copper loss trajectory in the slag can also be estimated as a function of time t using the results presented in [13]. This copper loss trajectory can be piecewise linearized over the batch horizon, as shown in Figure 5.
The matte grade can be defined by the amount of non-oxidized iron present in the matte. Since the copper losses depend on the matte grade, as presented in Figure 5, and the matte grade is defined by the non-oxidized iron in the matte, the copper losses in the slag can be calculated from the amount of iron present in the matte, as shown in Figure 3. Therefore, this study uses the iron in the matte to limit the copper level in the slag. This concept is deduced from the results presented in [13] and is referred to as the copper ratio scheme. In this framework, the iron in the matte is calculated using Equation (10). Using M^t_Fe, Equations (10)-(12) are used to estimate the optimal durations of the slag blows for which the total copper losses in the slag are at a minimum level. The trajectory A^t_j is an approximation of the copper losses in the slag that depends on the non-oxidized iron in the matte and on the parameters B_j and C_j, which are positive real numbers. Here, j defines the number of times Equations (10)-(12) are instantiated in the framework, with different B_j and C_j values. As the amount of iron in the matte decreases during the slag blow operations, the trajectory A^t_j starts increasing, as illustrated in Figure 6 [17]. For each slag blow operation, Equations (11) and (12) determine its optimal termination point from the set of feasible points, shown as the red points in Figure 6. In this study, each optimal termination point is referred to as the copper ratio point (Cu^t_ratio), which represents the ratio between the non-oxidized iron in the matte and the copper present in the slag. This simple but beneficial trade-off between the iron in the matte and the copper in the slag keeps the overall copper losses at a minimum level, subject to the blister copper grade requirement.
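To illustrate the idea, the sketch below propagates the non-oxidized iron under a constant oxidation rate (Equation (10)) and evaluates an affine loss trajectory A_t = B - C·Fe_t. The affine form, the stopping heuristic, and all parameter values are assumptions for illustration, since the paper leaves the exact expressions for Equations (11)-(12) and the B_j, C_j values to the operator.

```python
import numpy as np

fe0, r_fe = 15.0, 0.5  # hypothetical initial iron (t) and oxidation rate (t/step)
B, C = 10.0, 1.0       # hypothetical copper ratio parameters B_j, C_j

t = np.arange(0, 31)                     # discrete time steps
fe = np.clip(fe0 - r_fe * t, 0.0, None)  # non-oxidized iron in matte, Eq. (10)
A = B - C * fe                           # assumed affine loss trajectory A_t

# One illustrative stopping rule (not the paper's Eqs. (11)-(12)): end the
# slag blow once the assumed loss trajectory turns positive, i.e. once
# further blowing would start to cost disproportionate copper.
cut = int(np.argmax(A > 0))
print(f"stop slag blow at t = {cut}, non-oxidized Fe left = {fe[cut]:.1f} t")
```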
Batch Idle Time
In the copper smelting process, PSC batches with shorter durations are preferred over batches with unnecessary idle times. The prime reason for this practice is that idle time decreases the temperature inside the PSC, affecting its overall performance. Therefore, this framework uses a penalty term to minimize unnecessary idle time, as shown in Equation (13). The penalty term idle^t takes the value one whenever any PSC operation b^t_{z_i} is active at time t over the batch horizon; otherwise, it remains zero.
Objective Function
The objective function in Equation (14) minimizes the total copper losses in the slag, the unwanted element content in the matte, and the copper ratio point. The last term of the objective function penalizes unnecessary idle time.
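Continuing the PuLP sketch, one hedged stand-in for Equations (13)-(14) is shown below: a hypothetical linear copper-loss term plus a time-weighted activity term, which is a common device for compressing a schedule toward the batch start. The loss rates and weights are invented, and this is not the paper's exact objective.

```python
# Continuation of the earlier sketch: a stand-in for Eqs. (13)-(14).
cu_rate = {"load1": 0.0, "slag_blow1": 0.02, "skim1": 0.0}  # hypothetical t Cu/step

cu_loss = pulp.lpSum(cu_rate[z] * b[z][t] for z in ops for t in T)
# Time-weighted activity: pushing every active slot toward the batch start
# penalizes late scheduling and hence unnecessary idle time.
lateness = pulp.lpSum(t * b[z][t] for z in ops for t in T)

w_cu, w_idle = 100.0, 1.0        # hypothetical objective weights
prob += w_cu * cu_loss + w_idle * lateness
prob.solve(pulp.PULP_CBC_CMD(msg=False))
```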
During the development of the mathematical model, a few assumptions were made that limit the scope of this work. Copper losses in the slag are represented by a piecewise linear approximation whose values are taken from the literature; the selected values therefore reflect the characteristics of this process. Another parameter that limits the performance of this framework is the choice of values in the copper ratio scheme: selecting unrealistic (e.g., negative) values may result in unrealistic slag blow durations and thus higher copper losses in the slag. The maximum slag blow durations defined by the process operator also affect performance; selecting small values may lead to larger copper losses, which is undesirable for this process.
Case Study
In this section, we describe a benchmark case study that was used to demonstrate the effectiveness of the proposed scheduling framework. In this case study, the PSC has the same sequence of operations as described in Figure 2. There are three loading operations, three slag blows, three slag skimmings, and a single long copper blow; therefore, i = {1, 2, 3}. For this case study, Equations (10)-(12) are defined only once; therefore, j = {1}. The batch begins with two consecutive matte loadings, which are executed during the first loading operation. The matte in the PSC contains copper, iron, and sulfur. The proportions of those elements in the matte and the process parameters were taken from the literature and are listed in Table 1 [17,23]. It is assumed that all loading and slag skimming operations take a single unit of time and that the PSC produces blister copper with at least a 98.75% matte grade. This scheduling problem is modeled in GAMS and solved with the solver CPLEX 12.7 [26,27]. The computations were performed using a 2.60 GHz Intel Core i7-6700HQ processor with 32 GB of RAM, running Windows 10 Enterprise, 64-bit. To evaluate the performance of the proposed framework, five different scenarios are presented here. In scenarios 1-4, the framework was simulated with the objective presented in Equation (14), which uses the copper ratio scheme, whereas in scenario 5, a PSC schedule is produced with a modified objective function, shown in Equation (15), that omits the copper ratio scheme; the modified objective minimizes over Mass_cu, Mass_ele, and idle. Scenarios 1-4 provide a sensitivity analysis of the choice of copper ratio parameter values, whereas scenario 5 demonstrates the novelty and importance of the copper ratio scheme in terms of copper losses. For each scenario, we present the corresponding schedule that produces one batch of blister copper, the amount of copper lost, and the computational requirements. In the smelting process, computational costs are not a pressing issue; however, we present them here to provide a better sensitivity analysis of the proposed framework. Figure 7 shows that the framework is able to produce schedules for all given scenarios. From Table 2, it can be observed that the copper losses in scenario 5 are higher than those in scenarios 1-4. Without the copper ratio scheme, the first and second slag blows terminate early and the third slag blow has a longer duration; hence, the copper losses are high. Consequently, the iron content in the matte for scenario 5 remains high, as shown in Figure 8. On the other hand, when the copper ratio scheme is part of the overall objective function, as in scenarios 1-4, the amount of copper lost to the slag is reduced, simply because the framework finds the optimal slag blow durations. Although the framework provides a feasible schedule for scenario 5, the process operator might consider it unreasonable given the amount of copper lost. Furthermore, shorter slag blows do not generate the required amount of heat in the PSC; thus, external resources are needed to maintain the required temperature. Therefore, the schedule produced in scenario 5 cannot be used practically in many smelting processes. Table 2 provides a useful comparison between scenarios 1-4 and scenario 5 in terms of the copper that can be saved using the copper ratio scheme.
Despite the fact that the framework with the copper ratio scheme requires more computational resources, it is still an optimal choice to utilize the scheme given the importance of the copper losses.
Another important factor that affects the performance of this scheduling framework is the choice of the copper ratio parameter values. In Table 2, it is visible that, for any two given scenarios, selecting a higher value of C_j relative to B_j reduces the copper losses but increases the computational requirements. On the other hand, when a lower C_j value is used relative to B_j, the computational requirements are reduced, but the copper losses and the batch time increase. Hence, the process operator should select the copper ratio values carefully, as they affect the overall PSC scheduling and operation.
This case study shows that the proposed framework can produce schedules for the given matte; moreover, the framework is capable of generating schedules for any given matte grade. It can be observed that utilization of the copper ratio scheme introduces idle times into the schedules. Process operators who are concerned about idle times should therefore apply the copper ratio scheme carefully, while operators who are sensitive to copper losses would prefer the schedules produced with it.
Conclusions
This paper presents a linear dynamic MILP-based scheduling model for the PSC in the smelting process. The key features adopted in this formulation are the provision of a simple and linear approach for the scheduling of PSC operations, the minimization of copper losses in the slag, and the fulfillment of essential operational constraints that are often found in a real smelting plant.
The objective was to design a linear discrete-time model of the PSC operation and apply linear techniques to keep the copper losses at a minimum level. The results of this work inform process operators about the effect of copper losses on the overall schedule. Furthermore, using the proposed scheduling formulation, operators gain more command over and understanding of the overall process functionality; thus, they can readily observe the benefits of their operational decisions. In this framework, operations such as PSC maintenance breaks, start-up, and shutdown are ignored; therefore, it can be extended to more complex scenarios and has the potential to be used for inter-PSC scheduling.
An Expression-independent Catalog of Genes from Human Chromosome 22
To accomplish large-scale identification of genes from a single human chromosome, exon amplification was applied to large pools of clones from a flow-sorted human chromosome 22 cosmid library. Sequence analysis of more than one-third of the 6400 cloned products identified 35% of the known genes previously localized to this chromosome, as well as several unmapped genes and randomly sequenced cDNAs. Among the more interesting sequence similarities are those that represent novel human genes related to others with known or putative functions, such as one exon from a gene that may represent the human homolog of Drosophila Polycomb. It is anticipated that sequences from at least half of the genes residing on chromosome 22 are contained within this exon library. This approach is expected to facilitate fine-structure physical and transcription mapping of human chromosomes and accelerate the process of disease gene identification.
A primary goal of the human genome initiative is the construction of fine-structure physical maps of the chromosomes in anticipation of full DNA sequence analysis. However, probably the most important purpose of this mapping, the identification and placement of human genes, can be carried out effectively before determining the complete sequence of the human genome, and can aid in increasing the resolution of the physical map. Low-resolution physical maps of human chromosomes have been described recently (Cohen et al. 1993), but considerably greater detail is needed to maximize their utility and proceed with large-scale sequencing. Identification of a representative set of genes or gene fragments corresponding to a specific genomic region would satisfy many of the requirements for finer mapping and would add a level of functional significance to these evolving maps. The resulting gene, or transcription, maps would provide a new framework for the study of structural, functional, and organizational aspects of chromosomes, and would lead to more efficient identification of genes involved in human disease. Consequently, the development of methods for rapid gene identification has recently received greater attention, and numerous strategies, including approaches based on hybridization and biological selection (Auch and Reth 1990; Duyk et al. 1990; Buckler et al. 1991; Lovett et al. 1991; Parimoo et al. 1991), have been proposed. Exon amplification, an example of the latter category, relies on selection for functional splice sites flanking exons and thereby avoids problematic issues, such as tissue specificity or relative mRNA abundance, that are inherent to other gene identification approaches. Recently, we have modified the exon amplification technique to make it applicable to the isolation of gene sequences from very complex sources of genomic DNA (Church et al. 1994). As an initial test, we have applied this method to the isolation of large numbers of gene sequences from a single human chromosome.
Construction of a Chromosome-specific Exon Library
Human chromosome 22 was chosen as a model for construction of exon libraries because of intensive mapping and disease gene identification efforts in this region of the genome. Plasmid DNA was prepared from pooled clones of each of the 130 microtiter plates in an arrayed cosmid library constructed from flow-sorted human chromosome 22 (LL22NC03), which represents approximately five equivalents of chromosome 22. An additional four pools were generated from a human-specific subset of cosmids derived from a human-hamster hybrid cell line, GM10888, containing chromosome 22 as its sole human component (Lichter et al. 1990). A total of 134 plate-pool cosmid DNAs were prepared, and each sample was digested and shotgun-cloned into the in vivo splicing vector, pSPL3 (Church et al. 1994). Plasmid DNA from pSPL3 subclones (pools of 500-2000 insert-containing clones) was transfected transiently into COS7 cells, which facilitated SV40 large T-mediated plasmid amplification and transcription. Cytoplasmic RNA derived from these transfectants was used in RNA-PCR amplifications, and the resulting products were cloned directionally as described in Methods. Exon clones (48 from each pool, or ~6400 clones) were arrayed, grown, and stored in 96-well microtiter plates. Approximately 24 clones from each of the first 100 pools were sequenced (a total of 2304 sequences generated) in a single pass, yielding 709 unique sequences, or an average of 7.1 unique sequences per pool. To determine the number of unique sequences in the rest of the exon library, the remaining 24 exon clones from each of 10 pools were sequenced and compared to all of the 709 initial sequences. An additional 40 unique sequences were produced, yielding an average of four remaining unique clones per pool. Therefore, we estimate that an average of 11.1 unique clones exist in each pool, and that sequence has been produced for ~64% (7.1/11.1) of the unique clones in the first 100 pools. Because there are 134 exon pools, our upper estimate of the total number of unique sequences in the entire exon library is 1487 (134 × 11.1). Thus, the sequences that we have generated to date represent approximately half (47%) of those present in the library.
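The extrapolation in this paragraph can be reproduced with a few lines of arithmetic; the sketch below simply restates the numbers given in the text (small rounding differences aside).

```python
unique_sequenced = 709     # unique sequences from the first sequencing pass
per_pool_first = 7.1       # average unique sequences found per pool
per_pool_remaining = 4.0   # additional unique clones per resequenced pool
pools = 134

per_pool_total = per_pool_first + per_pool_remaining   # 11.1 unique per pool
library_unique = pools * per_pool_total                # ~1487 in the library
print(f"estimated unique clones in library: {library_unique:.0f}")
print(f"fraction sequenced to date: {unique_sequenced / library_unique:.1%}")
```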
Complete sequence was produced for 91% of the clones analyzed, yielding a minimum average length of ~125 bp per clone. This is close to the value of 135 bp that we have reported previously for completely sequenced sets of exons (Church et al. 1994) and is likely to be similar when all the sequences are complete. We have estimated the accuracy of the sequences produced to be ~99.5%, based on alignments to previously known, well-characterized gene sequences (Table 1).
An average threefold redundancy of clones corresponding to each unique sequence was observed, and this value is likely to be higher when sequencing is complete; this may be attributable to biases introduced during certain steps of the procedure. Although cultured in separate wells, individual cosmid clones within each pool may have shown growth differences that introduced bias during the shotgun subcloning step. However, we have designed our approach to minimize the probability that more than one exon will be isolated from the same cosmid (to maximize the uniformity of exon distribution across the chromosome). By maintaining a high complexity of target DNA (i.e., large numbers of cosmids), the growth bias is likely to be distributed across several cosmids in each pool, thus increasing the likelihood that an exon will be derived preferentially from several of the more prevalent genomic clones. Bias in the library may also be attributable to differential splicing efficiency of particular exons, as well as to preferential RNA-PCR amplification or exon cloning. These aspects may be more difficult to control and are likely to depend on the sequence composition of the exons or splice site sequences. As a result, specific exons may have a reduced probability of being trapped, but it is likely that other exons from the same gene will be identified.
Sequence Data Base Comparisons
The sequences were compared to those in public data bases (Altschul et al. 1990; BLAST comparison with GenBank and EMBL versions and updates available 7/23/95), and a summary of these results is presented in Tables 2 and 3. One hundred ninety-nine of the 709 sequences (28%) analyzed are highly similar to known genes from a number of species. Included in these are 101 sequences that are identical (~97% nucleotide identity) to segments of previously identified human genes. These can be subdivided into 48 sequences from 24 different genes that were mapped previously to chromosome 22 (in some cases, multiple exons were isolated from the same gene), and 53 other sequences corresponding to heretofore unmapped genes and expressed sequence tags (ESTs). Included in the latter group were sequences from RanGTPase-activating protein 1 (Bischoff et al. 1995), phosphatidylinositol 4-kinase (Wong and Cantley 1994), small nuclear ribonucleoprotein Sm D3 (Lehmeier et al. 1994), glutathione S-transferase T1 (Pemble et al. 1994), and cadherin-13 (Tanihara et al. 1994), which are now localized provisionally to chromosome 22. In a few cases, the sequence identity was found with the complementary strand of known genes. The most notable of these were the matches of sequences 285 and 760, which were identical to the complementary strands of Ewing sarcoma and cytochrome P450 IID6 gene sequences, respectively. In both cases, the match was found near the 3' end of the mRNA sequences of these genes, and the sequences flanking the aligned regions closely match consensus splice sites. Whether these sequences represent artifacts or genes encoded on the DNA strand opposite to the known genes remains unclear.
The remaining 98 sequences represent human homologs of genes from other species, members of gene families, or genes sharing strong similarities with known genes. Among the more interesting sequences are those that represent novel human genes related to others with known or putative functions. For example, the predicted amino acid sequence of exon 637 is highly similar to part of a common domain, termed the chromodomain, found in genes whose products associate with heterochromatin (James and Elgin 1986; Paro and Hogness 1991; Singh et al. 1991; Delmas et al. 1993). The best studied of these genes are the Drosophila heterochromatin-associated protein, HP1, and Polycomb. Both of these genes have been shown to control, by repression, developmental regulators such as homeotic genes. The chromodomain appears to be essential for assembly of these proteins into chromatin as part of a multiple-protein complex, as mutations or deletions in this domain in the Polycomb protein abolish its ability to associate with heterochromatin (Messmer et al. 1992). Thus, the sequence represented by exon 637 may represent a novel regulator of homeotic function in human development. Figure 1 is an amino acid alignment of exon 637 with the chromodomains of other proteins.
Table 2. Sequences were placed into each category based on the results of BLASTN and BLASTX comparisons (Altschul et al. 1990); the criteria for the similarity categories are described in the legend to Table 1.
Table 3. Previously localized genes identified by data base comparisons, with the number of different sequences matching each. One of the sequences matching EWSR1 and the sequence matching CYP2D6 were identical to the mRNA complementary strands of these genes.
Exon 637, however, does not contain the complete chromodomain; it begins several amino acids downstream and continues beyond the carboxyl end of the motif. Interestingly, the genomic structure of the Polycomb locus of Drosophila has been determined, and the 5' end of exon 637 falls at the precise location of an intron-exon boundary within this gene (Paro and Hogness 1991). This suggests that some phylogenetic conservation of genomic structure exists for the 637 gene or that the 637 gene may represent the human homolog of Polycomb.
In addition to exon 637, several of the sequences appear to be closely related to genes involved in growth-regulatory, developmental, or cell type-specific processes. Isolation of these types of genes, many of which are likely to be representative of low-abundance or tissue-specific mRNA species, exemplifies an advantage of the exon amplification approach: expression-independent gene identification. The overwhelming majority of human genes are expressed at low levels, producing low-abundance mRNA (Hastie and Bishop 1976). Many other approaches that require significant levels of gene expression, or knowledge of tissue specificity, may fail to identify such genes with any efficiency. This includes most large-scale random cDNA sequencing strategies (Adams et al. 1991, 1992; Khan et al. 1992; Okubo et al. 1992), which are biased toward identification of mRNAs that are highly expressed in the tissue (or cells) from which the library was generated, as well as approaches using RNA derived from monochromosomal- or region-specific human-rodent hybrid cell lines (Liu et al. 1989; Corbo et al. 1990). Direct or cDNA selection procedures (Parimoo et al. 1991; Lovett et al. 1991) have been designed to minimize this problem of representation by enriching for rare mRNAs; they complement exon amplification in that they can enrich for these low-abundance species, if present. These approaches are also adaptable for en masse, region-specific gene identification, as Del Mastro et al. (1995) demonstrate using human chromosome 5 as a target. One hundred six of the 709 sequences (~15%) were artifacts or similar to repetitive elements. The artifacts were derived from a number of sources but originate primarily from pSPL3 and the pLawrist16 cosmid vector. The higher prevalence of these clones as artifacts in these assays, as compared to assays of single cosmid clones, may be attributable to the vast molar excess of these sequences relative to the genomic sequences targeted for exon isolation. It should be noted that sequences from pSPL3 or pLawrist comprise ~20% (10% each) of all clones in the library, based on sequencing of ~2300 clones. These can be eliminated readily by hybridization detection, thereby significantly reducing the effort required for sequencing of similar libraries.
Localization of Exons to Chromosome 22
The starting genomic DNA for these experiments was derived from flow-sorted chromosomes from a human-hamster hybrid that preferentially contains human chromosome 22 but also retains chromosomes 9 and Y at a low frequency. Thus, the possibility exists that some of the exons originate from non-chromosome 22 genomic DNA, including hamster. This estimate has been confirmed by mapping these sequences to Southern blots of human, hamster, and GM10888 (monochromosomal 22 human-hamster hybrid) DNAs (data not shown). Of 21 randomly chosen exons that hybridized to the blots, 17 (81%) mapped to chromosome 22, 3 were of hamster origin, and 1 was human but apparently did not originate from chromosome 22. These numbers are consistent with the estimated non-chromosome 22 content of the starting cosmid library. The percentage of chromosome 22-specific sequences is likely to be higher, as we excluded exons from previously mapped genes from our analysis.
DISCUSSION
The collection of clones described above represents one of the first large-scale, chromosome-specific isolations of human gene sequences and will serve as an inroad to the development of an integrated physical and transcription map of human chromosome 22. It is likely to be an invaluable tool for creating the high-resolution maps that are needed for sequencing of the human genome. Extrapolation of our results leads to a prediction that ~1500 sequences will be generated after identification of all unique sequences in this collection of clones (of which >1200 will be nonrepetitive and nonartifact), and an estimation that nearly half of these sequences have been produced to date. In this study, 24 of 59 (41%) of the fully sequenced genes (non-pseudogenes) currently known to map to chromosome 22 were identified, suggesting that a similar percentage of all genes on this chromosome are represented by the sequences generated thus far. Therefore, it is anticipated that exons from as many as 80% of the genes on this chromosome will be represented in this library after completion of library sequencing. To further increase the representation of genes in this library and eliminate much of the bias in gene identification, subsequent isolation of exons will be performed on cosmids that stochastically failed to produce exons in the initial library construction.
Several aspects of this approach make it ideal for construction of detailed physical/transcription maps. First, the use of genomic DNA from a specific human chromosome allows positional information to be associated provisionally with each exon, circumventing the need to localize cloned gene fragments that have been isolated and sequenced as random cDNA, and it provides a more direct approach to saturating specific regions of the genome with such sequences. It should be noted, however, that sources such as flow-sorted chromosomes, or libraries constructed from them, frequently originate from human-rodent somatic cell hybrids; a small fraction of the genomic DNA, and hence of the exons, may originate from the rodent parent or from other human chromosomes present within the hybrid. The chromosome 22 cosmid library used in this study was derived from a hybrid cell line that also retains human chromosomes 9 and Y at a low frequency. We have estimated that 10%-20% of the exon library contains sequences from genomic DNA not originating from human chromosome 22, the majority of which is from hamster. Thus, although the exon library is highly enriched for chromosome 22 gene sequences, it is not pure, and the sequences are annotated as such. It should be noted, however, that no exons from known genes that map to any other human chromosome have been identified to date in these studies, whereas 48 exons from chromosome 22-specific genes were isolated. In addition, the recent availability of high-quality monochromosomal hybrids for most human chromosomes, coupled with improved flow-sorting, will dramatically reduce this problem for future exon library constructions.
Second, the exons produced by this procedure are consummate multi-purpose mapping reagents. Because the vast majority of exons are single-copy sequences, they can be used as hybridization probes in filter-based mapping procedures. Moreover, we have found that exon sequences are easily converted to sequence-tagged sites (STSs) for use in PCR-based mapping schemes (Green and Olson 1990). With an average spacing of 60-70 kb, the chromosome 22 exons identified thus far could be used to complete the Human Genome Initiative's goal of one STS per 100 kb for each chromosome. The conversion of exons to cDNAs would then provide both ordering and orientation across groups of yeast artificial chromosomes (YACs) or cosmids, as well as confirm any existing contig information. Also, the cross-species conservation of many exons allows for effective comparative mapping of genes and for direct comparison of emerging physical maps in humans, mice, and other model genomes.
Third, because exon amplification is not dependent on the level or pattern of expression of the gene that is isolated, representational biases inherent in the tissues or cells from which cDNA libraries are constructed are eliminated. Exons can be used as a DNA sequence source for determining the specific expression pattern of the gene from which they originated, by using quantitative assays of RNA expression such as Northern blotting, S1 nuclease or RNase protection, and in situ hybridization, or by using PCR to detect the presence of exon sequences in a cDNA library (Church et al. 1993; Munroe et al. 1995). The resulting information allows for effective screening of appropriate cDNA libraries to quickly saturate a given genomic region with genes. This property of the technique has already been applied successfully to positional cloning efforts in Huntington's disease and neurofibromatosis 2 (Huntington's Disease Collaborative Research Group 1993; Trofatter et al. 1993), resulting in identification of the disease genes by isolation of 28 and 8 of their exons, respectively, as well as to successful efforts to identify several other human and mouse disease genes (Vidal et al. 1993; Vulpe et al. 1993; Walker et al. 1993; Cachon-Gonzalez et al. 1994; Hästbacka et al. 1994).
The large-scale exon isolation approach that we have applied here is currently being transferred to several other human chromosomes. Our data suggest that this method could identify segments from the majority of human genes before the generation of the human genome's sequence. Moreover, the strategy would help to achieve this goal by facilitating the necessary construction and comparison of fine-structure physical maps while simultaneously integrating them into transcription maps of greater utility to a wide range of researchers in genetics and biology. Continued application of this strategy represents a most effective and cost-efficient means of substantiating the most prominent rationale for pursuing the Human Genome Initiative: creation of the infrastructure needed to support the rapid and efficient discovery of genes causing human disease.
Exon Library Construction
Exon amplification was performed as described (Church et al. 1994), with some modification. Cosmid-containing clones were propagated in 96-well microtiter plates and pooled, and cosmid DNA was purified using the alkaline lysis method. Before propagation, cosmids containing ribosomal gene DNA (rDNA) were identified by hybridization and removed from each plate. This was done to insure against overrepresentation of chromosome 22 rDNA sequences in the amplified and cloned products. Shotgun cloning into pSPL3, transfections, and RNA isolations were performed as described (Church et al. 1994). Initially, RNA-based PCR amplification (RT-PCR) and cloning of the resulting products were performed as described (Church et al. 1994), but this was replaced by ligation-independent cloning using uracil DNA glycosylase (UDG; Rashtchian et al. 1991). This entailed the replacement of oligodeoxynucleotide primers SD2 and SA4 in the second PCR amplification with SDDU and SADU. The sequences of these primers are as follows: SDDU, 5'-AUAAGCUUGAUCUCACAAGCTGCACGCTCTAG-3'; SADU, 5'-UUCGAGUAGUACUTTCTATTCCTTCGGGCCTGT-3'.
Ten nanograms of EcoRV-digested pBluescript II KS+ was amplified with BSDU and BSAU. Fifty nanograms of the amplified, linearized plasmid was mixed with 50-100 ng of RT-PCR product, and the mixture was digested with 1 unit of UDG (GIBCO-BRL) at 37°C in a 10-µl volume of 1× PCR buffer. The digested and annealed products were immediately transformed into an Escherichia coli DH5α host. UDG cloning streamlined the procedure and completely eliminated a significant frequency of clone chimerism. Clones from each pool were picked, propagated, frozen, and stored in 96-well microtiter plates. Sequencing was performed using the method of Sanger et al. (1977). Sequences were read automatically using a Millipore BioImage DNA sequence film reader operating on a Sun SPARCstation. Sequence data base comparisons were performed using the BLAST network service of the National Center for Biotechnology Information (Altschul et al. 1990). The sequences have been deposited in GenBank under accession numbers H55062-H55737.
Table 1.
Summary of sequence data base search results | 2017-12-22T06:34:50.398Z | 1995-10-01T00:00:00.000 | {
"year": 1995,
"sha1": "3bb5d0ffc865bba579ad2d4b53d8c39424d39cd2",
"oa_license": "CCBYNC",
"oa_url": "https://genome.cshlp.org/content/5/3/214.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "3bb5d0ffc865bba579ad2d4b53d8c39424d39cd2",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
256757470 | pes2o/s2orc | v3-fos-license | The Effect of Carbon Doping on the Crystal Structure and Electrical Properties of Sb2Te3
As a new generation of non-volatile memory, phase change random access memory (PCRAM) has the potential to fill the hierarchical gap between DRAM and NAND FLASH in computer storage. Sb2Te3, one of the candidate materials for high-speed PCRAM, has a high crystallization speed but poor thermal stability. In this work, we investigated the effect of carbon doping on Sb2Te3. It was found that the FCC phase of C-doped Sb2Te3 appeared at 200 °C and began to transform into the HEX phase at 250 °C, which differs from previous reports in which no FCC phase was observed in C-Sb2Te3. Based on the experimental observations and first-principles density functional theory calculations, it is found that the formation energy of the FCC-Sb2Te3 structure decreases gradually with increasing C doping concentration. Moreover, the doped C atoms tend to form C molecular clusters in sp2 hybridization at the grain boundaries of Sb2Te3, similar to the layered structure of graphite, and the thermal stability of Sb2Te3 is improved after C doping. We have fabricated a PCRAM device cell array of the C-Sb2Te3 alloy, which has an operating speed of 5 ns, high thermal stability (10-year data retention temperature of 138.1 °C), low device power consumption (0.57 pJ), a continuously adjustable resistance value, and a very low resistance drift coefficient.
Introduction
With the continuous development of the information society, the demand for information storage and computation continues to grow. Phase change random access memory (PCRAM) is welcomed by researchers in industrial electronics, artificial intelligence, and other fields because of its simple process, high integration density, non-volatility, multi-level storage capability, and other characteristics [1]. 3D-XPoint, the non-volatile memory technology developed by Intel and Micron and announced for the first time in August 2015, can significantly reduce latency so that more data can be stored near the central processing unit [2]; its essence is PCRAM. Intel claims that its speed and lifespan are 1000 times those of NAND Flash, its integration density is 10 times that of traditional memory, and its cost is half that of Dynamic Random Access Memory (DRAM) [3].
The key technology of PCRAM lies in the phase change material. PCRAM realizes data storage by using the resistance difference of the phase change material between the amorphous state and the crystalline state. Ge2Sb2Te5 (GST) is the most widely used phase change material at present, but it has the disadvantages of slow speed (~50 ns) and poor thermal stability (~82 °C) [4,5], which cannot meet the requirements of high-speed, high-thermal-stability PCRAM, thus limiting its application in electronic devices. In order to improve the performance of PCRAM, it is key to find better-performing phase change materials.
Density Functional Theory (DFT) Methods
In the calculation work, we constructed a 3 × 3 × 3 supercell model of FCC-Sb2Te3, which contains 180 atoms (72 Sb atoms and 108 Te atoms), and randomly generated 36 Sb cation vacancies in the system. We also constructed an FCC-Sb2Te3 model of 160 atoms containing a Σ3 twin grain boundary, including 64 Sb atoms, 96 Te atoms, and 32 random Sb cation vacancies. The thickness of the vacuum layer between supercells is 10 Å.
We used the Vienna ab initio simulation package (VASP) for the density functional theory calculations. We adopted projector augmented wave (PAW) potentials to describe the ion-electron interaction and the generalized gradient approximation (GGA) of Perdew-Burke-Ernzerhof (PBE) for the exchange-correlation interactions between electrons [15,19-23]. The valence electrons included in the calculations are 2s²2p² for C, 5s²5p³ for Sb, and 5s²5p⁴ for Te, and the plane-wave cutoff energy is set to 550 eV. With the Γ point as the origin, the Monkhorst-Pack method is used to generate 1 × 1 × 1 and 3 × 3 × 1 k-point grids, respectively, and the Gaussian smearing method is used to adjust the orbital occupations. The cutoff energy and k-point grids have been tested, and the atomic structures of both models are fully optimized. The energy convergence criterion is 10⁻⁵ eV, and the atomic forces are converged to less than 0.05 eV/Å. We performed the Crystal Orbital Hamilton Population (COHP) bonding analyses using the LOBSTER package [24].
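For readers who want to set up a comparable calculation, the sketch below passes the parameters stated above (PBE, 550 eV cutoff, Gaussian smearing, 10⁻⁵ eV energy and 0.05 eV/Å force criteria, a 3 × 3 × 1 Monkhorst-Pack grid) to VASP through the ASE calculator interface; the use of ASE, the structure file name, the smearing width, and the relaxation settings are illustrative assumptions and not part of the original workflow:

# Minimal sketch of the stated DFT settings (assumes ASE with a configured VASP installation).
from ase.io import read
from ase.calculators.vasp import Vasp

supercell = read("fcc_sb2te3_supercell.vasp")   # hypothetical POSCAR-format structure file

calc = Vasp(
    xc="pbe",        # PBE exchange-correlation (GGA)
    encut=550,       # plane-wave cutoff, eV
    ediff=1e-5,      # electronic convergence criterion, eV
    ediffg=-0.05,    # stop ionic relaxation when forces fall below 0.05 eV/Angstrom
    ismear=0,        # Gaussian smearing
    sigma=0.05,      # smearing width in eV (assumed value, not stated in the paper)
    kpts=(3, 3, 1),  # Monkhorst-Pack grid; (1, 1, 1) was used for the larger model
    ibrion=2,        # conjugate-gradient structural relaxation (assumed)
    nsw=200,         # maximum number of ionic steps (assumed)
)
supercell.calc = calc
energy = supercell.get_potential_energy()  # runs the relaxation and returns the total energy
print("Total energy (eV):", energy)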
Experimental Methods
By magnetron sputtering, C and Sb2Te3 were co-sputtered onto a SiO2/Si (100) substrate, and the thickness of the film can be controlled by adjusting the sputtering time and sputtering power. The magnetron sputtering power for Sb2Te3 is 20 W RF, and the power for C is 40 W DC. The deposition proceeds with Ar at a flow rate of 20 SCCM, with a background pressure of 3 × 10⁻⁴ Pa. The film was heated in situ at a heating rate of 60 °C/min in a self-made vacuum heating station, and the changes of resistance with time at various temperatures were recorded. Data retention for 10 years (and 100 years) was estimated by the Arrhenius equation.
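The 10-year retention temperature is typically obtained by fitting the measured failure times to the Arrhenius law t = τ exp(Ea/(kB T)) (given later as Equation (2)) and extrapolating to t = 10 years. The sketch below illustrates that procedure on placeholder failure-time data, so the fitted activation energy and retention temperature it prints are illustrative and are not the values reported in this work:

import numpy as np

K_B = 8.617e-5                      # Boltzmann constant, eV/K
TEN_YEARS_S = 10 * 365 * 24 * 3600  # ten years in seconds

# Placeholder isothermal failure data: temperature (K) and time to half of the initial resistance (s).
temps_K = np.array([453.0, 463.0, 473.0, 483.0])
fail_s = np.array([2.1e4, 6.5e3, 2.2e3, 8.0e2])

# Arrhenius law t = tau * exp(Ea / (kB * T))  =>  ln t = ln tau + Ea * [1 / (kB * T)]
x = 1.0 / (K_B * temps_K)
slope, intercept = np.polyfit(x, np.log(fail_s), 1)
activation_energy = slope  # Ea in eV

# Temperature at which the extrapolated failure time equals 10 years:
t10_K = activation_energy / (K_B * (np.log(TEN_YEARS_S) - intercept))
print(f"Ea = {activation_energy:.2f} eV, 10-year retention temperature = {t10_K - 273.15:.1f} °C")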
To explore the electrical behavior of C-Sb2Te3 in memory cells, PCRAM cells were fabricated with the traditional T-shaped (mushroom-type) structure. The bottom W heating electrode, with a diameter of 190 nm, was fabricated by 0.13 µm complementary metal oxide semiconductor technology. The bottom W heating electrode is covered with a C-Sb2Te3 film with a thickness of about 135 nm, and 40 nm of TiN is deposited as the top electrode. The PCRAM cells were patterned using an etching process. The prototype PCRAM cells were annealed at 300 °C for 10 min in a N2 atmosphere, and then the electrical properties, such as the current-voltage (I-V), resistance-voltage (R-V), and resistance-time (R-t) characteristics, were tested by a self-made test system. The test system consists of an arbitrary waveform generator (Tektronix AWG5002B, Beaverton, OR, USA) and a digital source meter (Keithley 2400, Beaverton, OR, USA). The thin films were continuously annealed at 200-300 °C for 5 min in a N2 atmosphere, and the lattice information of the thin films was explored by X-ray diffraction (XRD, Rigaku, Tokyo, Japan).
Atomic Configuration for C-Doped Sb2Te3
In order to determine the position of the C atom in Sb2Te3, we first consider four possible doping modes of the C atom in FCC-Sb2Te3: replacing an Sb atom (CSb), replacing a Te atom (CTe), occupying an Sb cation vacancy (CV), and interstitial doping (CI). On this basis, the formation energy of C doping in FCC-Sb2Te3 was calculated and compared. The equation for calculating the formation energy of C doping is as follows [15,20,21,25]:

E_f[X] = E_tot[X] − E_tot[bulk] − Σ_i n_i μ_i,  (1)

where E_tot[X] and E_tot[bulk] are the total energies of the supercell with and without C doping, respectively, and n_i represents the number of doped atoms of species i: n_i > 0 means adding atoms to the supercell, and n_i < 0 means removing atoms from the supercell; μ_i is the chemical potential of species i. In this paper, the chemical potentials of C, Sb, and Te are calculated according to the trigonal phases of the corresponding simple substances.
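As a concrete illustration of Equation (1), the small helper below evaluates the formation energy from supercell total energies and chemical potentials; all numerical values in the example are placeholders standing in for DFT outputs, not results quoted from this work:

# Sketch of Equation (1): E_f = E_tot[X] - E_tot[bulk] - sum_i n_i * mu_i
# (n_i > 0 for atoms added to the supercell, n_i < 0 for atoms removed).
def formation_energy(e_doped, e_bulk, delta_atoms, chem_potentials):
    """e_doped, e_bulk in eV; delta_atoms maps element -> n_i; chem_potentials maps element -> mu_i in eV/atom."""
    exchange = sum(n * chem_potentials[el] for el, n in delta_atoms.items())
    return e_doped - e_bulk - exchange

# Example: one C atom placed on an Sb cation vacancy (the C_V mode); numbers are placeholders.
e_f = formation_energy(
    e_doped=-612.3,               # placeholder total energy of the C-doped supercell (eV)
    e_bulk=-605.1,                # placeholder total energy of the undoped supercell (eV)
    delta_atoms={"C": +1},        # one C atom added, nothing removed
    chem_potentials={"C": -9.1},  # placeholder mu_C from the elemental reference phase (eV/atom)
)
print(f"Formation energy: {e_f:.2f} eV")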
The FCC-Sb2Te3 supercell used in the calculation contains 36 cation vacancies, 72 Sb atoms, and 108 Te atoms. The FCC-Sb2Te3 model is shown in Figure 1a. The calculation results show that the formation energy of C atoms in every one of these positions in FCC-Sb2Te3 is very high, as shown in Figure 1b, which indicates that the doping system is not easy to form or is unstable; that is, these substitution/occupation positions of the C atoms are unreasonable.

It is found from the literature [18,25-28] that C atoms are not simply doped in these four ways, but instead form C molecular clusters (such as C chains and/or C rings) on the crystal plane. The Σ3 twin grain boundary in an annealed C-Ge2Sb2Te5 alloy accounts for 7.49% of the total polycrystalline structure [28]. Figure 2a shows the FCC-Sb2Te3 model of 160 atoms with a Σ3 twin grain boundary; the thickness of the vacuum layer in the c-axis direction is 10 Å.

After the structural relaxation, we found that the C atoms have a tendency to gradually converge and form C chains and/or C rings [29], as shown in Figure 2b. The longer the C chain and the more C rings, the lower the formation energy; on the contrary, the more dispersed the C atoms are, the higher the formation energy. Figure 3 shows the formation energies of different C doping contents and forms calculated by Equation (1). As can be seen in Figure 3, comparing the formation energy of the single crystal structure (Figure 1a) and the twin structure (Figure 2a), the latter is lower, indicating that C atoms are more inclined to stay at the grain boundary than to replace Sb/Te atoms or to occupy interstitial sites or Sb cation vacancies in a single crystal. It is further found that with the increase in C doping concentration, or with the growth of C chains and/or C rings, the formation energy of C-Sb2Te3 gradually decreases, indicating that C atoms tend to converge to form C chains and/or C rings, which exist mainly at the twin grain boundary, as shown in Figure S1.

It can also be seen in Figure 3 that although the formation energy of C-Sb2Te3 gradually decreases with the increase in C doping concentration, its value is still greater than 0 (the formation energy of C64Sb64Te96 is 0.92 eV/f.u.), indicating that the structure is not easy to form or is unstable. Perhaps this is the reason why the FCC structure was not found in previous reports on carbon-doped Sb2Te3, whereas the hexagonal (HEX) structure was. Yin et al. reported that there was no FCC structure in their experiments on C-doped Sb2Te3, but the FCC structure was observed for N-doped Sb2Te3 [30] and C-N co-doped Sb2Te3 [31]. Samples of pure Sb2Te3 and of Sb2Te3 films doped with different carbon contents were prepared, and their crystal structures were analyzed by XRD, as shown in Figure 4. It can be seen in Figure 4 that when Sb2Te3 is at 225 °C, the FCC phase and the HEX phase coexist, which indicates that Sb2Te3 starts to change from the FCC phase to the HEX phase at this temperature. In C40W-Sb2Te3, there is no HEX phase at 200 and 225 °C, but the characteristic peak of the FCC phase appears, and the transition from the FCC phase to the HEX phase begins at 250 °C. However, the characteristic peak of the FCC phase was not observed at 225-250 °C in C20W-Sb2Te3, which indicates that with the increase in C doping concentration the formation energy of C-Sb2Te3 decreases, so that the FCC phase can appear in C-doped Sb2Te3, and the FCC phase is likely to be stabilized in the C-Sb2Te3 structure as the C concentration increases. The formation energy of the metastable FCC-Sb2Te3 structure constructed by us is calculated to be 0.04 eV/f.u., which is close to kT = 0.026 eV at room temperature.

However, the formation energy of the FCC-Sb2Te3 structure is greater than 0, which also indicates that its stability is poor. In order to further understand the chemical stability of the Sb2Te3 structure, we performed the COHP analysis for Sb2Te3, as shown in Figure 5. The upper and lower portions of the -COHP curve indicate bonding (stable) and anti-bonding (unstable) interactions, respectively. From the -COHP of Sb2Te3 in Figure 5, the existence of anti-bonding states of Sb-Te atoms below the Fermi level (Ef) also indicates that the stability of the FCC-Sb2Te3 structure is poor [32], where the cutoff distance of an Sb-Te bond is 3.
Electronic Properties and Origin of Change of Crystalline C-doped Sb2Te3
To understand the mechanism of the thermal stability improvement of C-doped Sb2Te3, contour plots of the electron localization function (ELF) projected on the same planes for Sb2Te3 and 64C-Sb2Te3 are shown in Figure 6, and the ELF maxima of various bonds are shown in Table 1. It is shown that the ELF maximum of the C-C bond is much higher than that of the other bonds, indicating that the strength of the C-C bond is very high, which also proves that C atoms mainly exist in the form of C molecular clusters in Sb2Te3. The ELF maximum values of the C-Te and C-Sb bonds are obviously much higher than 0.5, which indicates that the C-Te and C-Sb bonds have high strength; that is, there are some molecular clusters containing C-Te and C-Sb bonds. In addition, it also shows that the doping of the C atoms changes the local environment of each element in Sb2Te3 and increases the strength of the Sb-Te covalent bond, thus obviously improving the stability of Sb2Te3 after C doping [36,37].
In order to characterize the effect of C doping on the amorphous thermal stability of Sb2Te3 materials, we tested the failure time of the thin film materials at different temperatures. The failure time is taken as the time at which the film resistance drops to half of its initial value at the set temperature T. As shown in Figure 7, the 10-year (and 100-year) data retention is estimated according to the Arrhenius equation:

t = τ exp(Ea / (kB T)),  (2)

where t is the failure time of the film at a set temperature T, τ is the pre-exponential factor, Ea is the activation energy, and kB is the Boltzmann constant. It can be seen in Figure 7 that the addition of C atoms obviously improves the 10-year (or 100-year) data retention of Sb2Te3.

Figure 8a shows the pair correlation function (PCF) of the C-C bond and the Sb-Te bond in crystalline Sb2Te3 and C-Sb2Te3 after structural relaxation. For the first peak, we found that the peak value of the C-C bond is far greater than those of the Sb-Te, C-Sb, and C-Te bonds, indicating that C atoms are more inclined to combine with other C atoms in Sb2Te3 to form C molecular clusters. To our surprise, the position of the first peak of the C-C bond is 1.406 Å, which is very close to the 1.42 Å C-C spacing within the layers of the graphite structure [38,39]. In addition, the bond angle distribution of the C-C-C configurations in the C molecular clusters peaks at about 105-125°, and the coordination number of the C atoms is mainly 3, as shown in Figure 8b,c. These results indicate that the doped C atoms in Sb2Te3 do not form C molecular clusters randomly, but tend to form graphite-like layered structures by sp² hybridization [18,28]. The low PCF peaks of the C-Sb and C-Te bonds in Figure S2 indicate that C atoms bond less with Sb and Te atoms.

It is also observed that after doping C atoms into Sb2Te3, the position of the first peak of the Sb-Te bond decreases and the bond length becomes shorter, indicating that the binding between Sb and Te atoms is strengthened, thus making the structure more stable. Meanwhile, the extremely unstable Sb-Sb homobonds are reduced, which also contributes to the enhancement of structural stability, as shown in Figure S2.
Electrical Performance Test of a Prototype PCRAM Device Based on C-Sb2Te3 Material
The electrical programming characteristics of the C-Sb2Te3 prototype PCRAM device are shown in Figure 9a. The inset shows the voltage pulses applied to the PCRAM cell: the pulse width is fixed and the voltage amplitude step is set to 0.1 V. Starting from a first pulse amplitude of 0.1 V, pulses are applied until the PCRAM cell has completed the SET and the RESET operations, as illustrated in Figure 9a, and a reading voltage of 0.1 V is applied between every two pulses to record the resistance value of the PCRAM test cell. The initial-state resistance value and the final-state resistance value are controlled to be as equal as possible. Obviously, the final-state resistance value before adjusting the voltage pulse width is taken as the initial-state resistance value after adjusting the voltage pulse width, and the initial-state resistance value will affect the SET voltage and even the RESET voltage of this electrical programming. It can be seen from Figure 9a that the resistance window of the PCRAM cells exceeds two orders of magnitude, which meets the requirements of PCRAM. Our PCRAM device cells can be programmed at very low SET/RESET voltages with pulse widths from 500 ns down to 5 ns, and the SET voltage and RESET voltage are as low as 1.5 V and 2.2 V when the voltage pulse width is 6 ns, which indicates that our PCRAM device has an operating speed of 5 ns and a low device power consumption (0.57 pJ), as shown in Figure 9b. Further observation of Figure 9b shows that the RESET power consumption of the PCRAM device cell decreases with decreasing programming pulse width. At the same time, it was noted that the PCRAM device cell could not be completely SET as the width of the programming pulse decreased, causing the resistance value of its low resistance state to increase; this explains the behavior of the RESET power consumption of the PCRAM device cell. The incomplete SET may be caused by the FCC phase appearing before the amorphous phase is transformed into the HEX phase, owing to the high C doping concentration. The multilevel storage function can be realized by setting different voltage pulse widths and voltage pulse amplitudes, such as "0" at 10⁷ Ω, "1" at 10⁵ Ω, and "2" at 10⁴ Ω [40]. We also noticed that during the SET/RESET process of our PCRAM device there was a continuous resistance change. In this case, we used a voltage pulse with a pulse width of 500 ns to RESET the PCRAM from a low resistance state, and a continuously adjustable resistance value can be obtained, as shown in Figure 10a. With this behavior, it may be possible to realize several basic synaptic functions at the cell level, including long-term plasticity (LTP) [41,42], short-term plasticity (STP) [41,42], spike timing-dependent plasticity (STDP) [43,44], and spike rate-dependent plasticity (SRDP) [44,45], and perhaps also more complex or higher-order learning behaviors at the network level, such as supervised learning [46] and associative learning [47], as well as non-von Neumann in-memory computing architectures [48,49]. In general, for this phenomenon of continuous resistance change, the resistance drift caused by the widening of the band gap due to the structural relaxation (SR) of amorphous Sb2Te3 is a great obstacle to multilevel storage, neuromorphic learning, and in-memory computing. Li et al.
greatly reduced this resistance drift phenomenon by bipolar pulse operation of the PCRAM cell [50], which provides an effective means to improve the stability of phase-change neuromorphic applications. We adjusted the resistance of the PCRAM cell to the high and low resistance states and to each intermediate resistance state by controlling the electrical signal applied to the cell, and fitted the resistance drift coefficient with Equation (3):

R = R0 (t/t0)^ν,  (3)

where R0 is the resistance value of the PCRAM cell at time t0, that is, the initial resistance, and ν is the resistance drift coefficient, indicating how the resistance value of the PCRAM cell changes with time. The results measured at room temperature are shown in Figure 10b.
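The drift coefficient ν in Equation (3) is typically extracted as the slope of a log-log fit of resistance against time. The sketch below shows such a fit on placeholder resistance readings, with R0 and t0 taken from the first sample; the data and the fitted value are illustrative only and are not measurements from this work:

import numpy as np

# Placeholder resistance-versus-time readings for one programmed resistance state.
t_s = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0, 1000.0])               # seconds after programming
r_ohm = np.array([1.00e5, 1.03e5, 1.06e5, 1.09e5, 1.13e5, 1.16e5, 1.20e5])  # measured resistance, ohms

t0, r0 = t_s[0], r_ohm[0]
# Equation (3): R = R0 * (t / t0) ** nu  =>  log(R / R0) = nu * log(t / t0)
nu = np.polyfit(np.log(t_s / t0), np.log(r_ohm / r0), 1)[0]
print(f"Fitted drift coefficient nu = {nu:.3f}")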
Conclusions
We have obtained an understanding of the doping position and the form in which C atoms exist in FCC-Sb2Te3, and of the improved device performance after C doping, by performing density functional theory calculations for different concentrations and forms of C doping in single-crystal FCC-Sb2Te3 and Σ3 twin boundary FCC-Sb2Te3. The results show that the formation energy of C-Sb2Te3 decreases with increasing C doping concentration, which is consistent with the appearance of the FCC phase in the highly C-doped Sb2Te3 in our experiment. In addition, C atoms prefer to form C molecular clusters by sp2 hybridization at the grain boundaries of Sb2Te3, similar to the layered structure of graphite, which changes the local environment of each element in Sb2Te3 and results in the improved thermal stability of Sb2Te3. We fabricated prototype PCRAM device cells, which had an operating speed of 5 ns, high thermal stability (10-year data retention temperature of 138.1 °C), a low device power consumption of 0.57 pJ, and a resistance drift coefficient as low as 0.025, showing a continuously adjustable resistance. These performances all indicate that C-Sb2Te3-based PCRAM devices have great potential in applications such as multilevel storage and spiking neural networks.
Supplementary Materials: The following supporting information can be downloaded at https:// www.mdpi.com/article/10.3390/nano13040671/s1, Figure S1: After structural relaxation of different C doping contents and C existing forms; Figure S2: The pair correlation function after structure relaxation; Figure S3: XRD diagram of C 40W Sb 2 Te 3 at 225 • C; Figure S4: The optical image of the fabricated PCRAM cells; Figure S5: I-V curve of PCRAM unit set from high resistance state to low resistance state by DC current; Figure S6: SET the PCRAM in a low resistance state with a large/small set pulse, and then RESET it with a 500 ns pulse to obtain a continuously adjustable resistance value. | 2023-02-11T16:11:26.414Z | 2023-02-01T00:00:00.000 | {
"year": 2023,
"sha1": "e170bb5ccf398d0de6356765d1ca94e14141f10c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-4991/13/4/671/pdf?version=1675922649",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "38518d609bb575f4dd6157e6edb869ce302cd954",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |